source | text |
|---|---|
https://en.wikipedia.org/wiki/Heating%20degree%20day | Heating degree day (HDD) is a measurement designed to quantify the demand for energy needed to heat a building. HDD is derived from measurements of outside air temperature. The heating requirements for a given building at a specific location are considered to be directly proportional to the number of HDD at that location.
Related measurements include the cooling degree day (CDD), which quantifies demand for air conditioning.
Definition
Heating degree days are defined relative to a base temperature—the outside temperature above which a building needs no heating. Base temperatures may be defined for a particular building as a function of the temperature that the building is heated to, or they may be defined for a country or region, for example. In the latter case, building standards or conventions may exist for the temperature threshold.
The base temperature does not necessarily correspond to the building mean internal temperature, as standards may consider mean building insulation levels and internal gains to determine an average external temperature at which heating will be required. Base temperatures of 16 °C and 19 °C (61, 66 °F) are also used. The variation in choice of base temperature implies that HDD values cannot always be compared – care must be taken to ensure that only HDDs with equal base temperatures are compared.
There are a number of ways in which HDD can be calculated: the more detailed a record of temperature data, the more accurate the HDD that can be calculated. HDD are often calculated using simple approximation methods that use daily temperature readings instead of more detailed temperature records such as half-hourly readings, the latter of which can be used to estimate an integral. One popular approximation method, that used by the U.S. National Weather Service, is to take the average temperature on any given day (the mean of the high and low temperature) and subtract it from the base temperature. If the value is less than or e |
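A minimal Python sketch of this mean-temperature approximation (the function name and the 18 °C default base are illustrative; days whose mean is at or above the base are assumed to contribute zero degree days, per the usual convention):

```python
def heating_degree_days(daily_highs, daily_lows, base_temp=18.0):
    """Approximate HDD from daily high/low temperatures (degrees Celsius).

    Each day contributes max(base - (high + low) / 2, 0) degree days,
    so days whose mean temperature reaches the base add nothing.
    """
    total = 0.0
    for high, low in zip(daily_highs, daily_lows):
        mean = (high + low) / 2.0
        total += max(base_temp - mean, 0.0)
    return total

# Three cold days against an 18 degree C base: 16 + 19 + 11.5 = 46.5 HDD
print(heating_degree_days([5.0, 2.0, 10.0], [-1.0, -4.0, 3.0]))
```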
https://en.wikipedia.org/wiki/Multispectral%20imaging | Multispectral imaging captures image data within specific wavelength ranges across the electromagnetic spectrum. The wavelengths may be separated by filters or detected with the use of instruments that are sensitive to particular wavelengths, including light from frequencies beyond the visible light range, i.e. infrared and ultra-violet. It can allow extraction of additional information the human eye fails to capture with its visible receptors for red, green and blue. It was originally developed for military target identification and reconnaissance. Early space-based imaging platforms incorporated multispectral imaging technology to map details of the Earth related to coastal boundaries, vegetation, and landforms. Multispectral imaging has also found use in document and painting analysis.
Multispectral imaging measures light in a small number (typically 3 to 15) of spectral bands. Hyperspectral imaging is a special case of spectral imaging where often hundreds of contiguous spectral bands are available.
Applications
Military target tracking
Multispectral imaging measures light emission and is often used in detecting or tracking military targets. In 2003, researchers at the United States Army Research Laboratory and the Federal Laboratory Collaborative Technology Alliance reported a dual band multispectral imaging focal plane array (FPA). This FPA allowed researchers to look at two infrared (IR) planes at the same time. Because mid-wave infrared (MWIR) and long wave infrared (LWIR) technologies measure radiation inherent to the object and require no external light source, they also are referred to as thermal imaging methods.
The brightness of the image produced by a thermal imager depends on the object's emissivity and temperature. Every material has an infrared signature that aids in the identification of the object. These signatures are less pronounced in hyperspectral systems (which image in many more bands than multispectral systems) and when exposed to wi |
https://en.wikipedia.org/wiki/Tetrabenazine | Tetrabenazine is a drug for the symptomatic treatment of hyperkinetic movement disorders. It is sold under the brand names Nitoman and Xenazine among others. On August 15, 2008, the U.S. Food and Drug Administration approved the use of tetrabenazine to treat chorea associated with Huntington's disease. Although other drugs had been used "off label," tetrabenazine was the first approved treatment for Huntington's disease in the U.S. The compound has been known since the 1950s.
Medical uses
Tetrabenazine is used as a treatment, but not as a cure, for hyperkinetic disorders such as:
Huntington's disease – specifically, the chorea associated with it
Tourette syndrome and other tic disorders
Tardive dyskinesia, a serious and sometimes irreversible side effect of long-term use of many antipsychotics, mainly typical antipsychotics
Hemiballismus, spontaneous flinging limb movements due to contralateral subthalamic nucleus damage
Tetrabenazine has been used as an antipsychotic in the treatment of schizophrenia, both in the past and in modern times.
Side effects
The most common adverse reactions, which have occurred in at least 10% of subjects in studies and at least 5% greater than in subjects who received placebo, have been: sedation or somnolence, fatigue, insomnia, depression, suicidal thoughts, akathisia, anxiety and nausea.
Warnings
There is a boxed warning associated with the use of tetrabenazine:
Increases the risk of depression and suicidal thoughts and behavior in patients with Huntington's disease
Balance risks of depression and suicidality with the clinical need for control of chorea when considering the use of tetrabenazine
Monitor patients for emergence or worsening of depression, suicidality or unusual changes in behavior
Inform patients, caregivers and families of the risk of depression and suicidality and instruct them to report behaviours of concern promptly to the treating physician
Exercise caution when treating patients with a history of depres |
https://en.wikipedia.org/wiki/Leading%20zero | A leading zero is any 0 digit that comes before the first nonzero digit in a number string in positional notation. For example, James Bond's famous identifier, 007, has two leading zeros. Any zeroes appearing to the left of the first non-zero digit (of any integer or decimal) do not affect its value, and can be omitted (or replaced with blanks) with no loss of information. Therefore, the usual decimal notation of integers does not use leading zeros except for the zero itself, which would be denoted as an empty string otherwise. However, in decimal fractions strictly between −1 and 1, the leading zeros digits between the decimal point and the first nonzero digit are necessary for conveying the magnitude of a number and cannot be omitted, while trailing zeros – zeros occurring after the decimal point and after the last nonzero digit – can be omitted without changing the meaning.
Occurrence
Often, leading zeros are found on non-electronic digital displays or on such electronic ones as seven-segment displays that contain fixed sets of digits. These devices include manual counters, stopwatches, odometers, and digital clocks. Leading zeros are also generated by many older computer programs when creating values to assign to new records, accounts and other files, and as such are likely to be used by utility billing systems, human resources information systems and government databases. Many digital cameras and other electronic media recording devices use leading zeros when creating and saving new files to make file names of equal length.
Leading zeros are also present whenever the number of digits is fixed by the technical system (such as in a memory register), but the stored value is not large enough to result in a non-zero most significant digit. The count leading zeros operation efficiently determines the number of leading zero bits in a machine word.
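As a toy illustration of the count-leading-zeros operation (a sketch, not any particular hardware instruction; it relies on Python's int.bit_length):

```python
def count_leading_zeros(x, width=32):
    """Count leading zero bits of a non-negative integer in a word of `width` bits."""
    if x == 0:
        return width
    return width - x.bit_length()

print(count_leading_zeros(1))           # 31
print(count_leading_zeros(0x80000000))  # 0
```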
A leading zero appears in roulette in the United States, where "00" is distinct from "0" (a wager on "0" will not win |
https://en.wikipedia.org/wiki/Thomas%20precession | In physics, the Thomas precession, named after Llewellyn Thomas, is a relativistic correction that applies to the spin of an elementary particle or the rotation of a macroscopic gyroscope and relates the angular velocity of the spin of a particle following a curvilinear orbit to the angular velocity of the orbital motion.
For a given inertial frame, if a second frame is Lorentz-boosted relative to it, and a third boosted relative to the second, but non-collinear with the first boost, then the Lorentz transformation between the first and third frames involves a combined boost and rotation, known as the "Wigner rotation" or "Thomas rotation". For accelerated motion, the accelerated frame has an instantaneously comoving inertial frame at every instant. Two boosts separated by a small time interval (as measured in the lab frame) lead to a Wigner rotation after the second boost. In the limit as the time interval tends to zero, the accelerated frame rotates at every instant, so it rotates with a well-defined angular velocity.
The precession can be understood geometrically as a consequence of the fact that the space of velocities in relativity is hyperbolic, and so parallel transport of a vector (the gyroscope's angular velocity) around a circle (its linear velocity) leaves it pointing in a different direction, or understood algebraically as being a result of the non-commutativity of Lorentz transformations. Thomas precession gives a correction to the spin–orbit interaction in quantum mechanics, which takes into account the relativistic time dilation between the electron and the nucleus of an atom.
Thomas precession is a kinematic effect in the flat spacetime of special relativity. In the curved spacetime of general relativity, Thomas precession combines with a geometric effect to produce de Sitter precession. Although Thomas precession (net rotation after a trajectory that returns to its initial velocity) is a purely kinematic effect, it only occurs in curvilinear motion and therefore cannot |
https://en.wikipedia.org/wiki/KTXH | KTXH (channel 20), branded on-air as My 20 Vision, is a television station in Houston, Texas, United States, serving as the local outlet for the MyNetworkTV programming service. It is owned and operated by Fox Television Stations alongside Fox outlet KRIV (channel 26). Both stations share studios on Southwest Freeway (I-69/US 59) in Houston, while KTXH's transmitter is located near Missouri City, Texas.
KTXH began broadcasting in November 1982 as Houston's third independent station. A month after going on air, its broadcast tower collapsed in a construction accident that killed five people. The station recovered and emerged as Houston's sports independent, beginning long associations with the Houston Astros and Houston Rockets that continued uninterrupted through the late 1990s and sporadically until the early 2010s. Not long after starting up, KTXH was sold twice in rapid succession for large amounts. However, when the independent station trade, advertising market, and regional economy cooled, it was sold again for less than half of its previous value. The Paramount Stations Group acquired KTXH and other stations in two parts between 1989 and 1991, bringing much-needed stability.
KTXH was one of several Paramount-owned stations to be charter outlets for the United Paramount Network (UPN) in 1995; in 2001, after UPN was acquired by CBS, Fox took possession of the station in a trade and merged its operations with KRIV. When UPN merged into The CW in 2006, bypassing all of Fox's UPN and independent stations in the process, the station became part of Fox's MyNetworkTV service. In 2021, the station became one of two ATSC 3.0 (NextGen TV) transmitters for the Houston area; its subchannels are now transmitted by other local stations on its behalf.
History
Construction, start-up, and tragedy
Interest in channel 20 in Houston began to emerge in 1976, as three groups filed applications for new television stations in light of the emerging technology of subscription televi |
https://en.wikipedia.org/wiki/Polar%20filament | The term polar filament may refer to either of two analogous structures used for host invasion by different groups of parasites: Myxozoa (Metazoa) and Microsporidia (Fungi), respectively.
In Myxozoa
The polar filament is a structure found in the polar capsule of myxosporean organisms. It is homologous to the "penetrant" structure found in cnidocytes.
The polar filament is coiled along the inner wall of the polar capsule, and is capable of rapid extrusion, during which it everts "inside-out". When everted, it is sticky, and likely serves to hold the spore onto the intestinal wall of the prospective host, and to help separate the valves of the spore.
The polar filament is important in species classification. In some species of Ceratomyxa, the polar filament forms a straight basal section, which the rest of the filament coils around, while in the genus Sphaeromyxa, the filament is folded in a zig-zag arrangement rather than being coiled. |
https://en.wikipedia.org/wiki/Polar%20capsule | Polar capsules are structures found in the valves of Myxosporean parasites, which contain the polar filament. The polar capsule is constructed of a proteinaceous and a polysaccharide layer, both layers of which continue into the polar filament.
The mouth of the capsule is covered with a cap-like structure. This structure may function as a stopper, its digestion in the alimentary tract possibly triggering the discharge of the polar filaments.
Two ideas have been proposed to explain the eversion of the polar filaments. Firstly, that the hydrostatic pressure in the polar capsule pushes the filament out, rather like the cnidocyst of jellyfish. The second is that extrusion is an active process involving contractile proteins and is calcium-dependent (Uspenskaya, 1982). |
https://en.wikipedia.org/wiki/SLOSS%20debate | The SLOSS debate was a debate in ecology and conservation biology during the 1970s and 1980s as to whether a single large or several small (SLOSS) reserves were a superior means of conserving biodiversity in a fragmented habitat. Since its inception, multiple alternative theories have been proposed. There have been applications of the concept outside of the original context of habitat conservation.
History
In 1975, Jared Diamond suggested some "rules" for the design of protected areas, based on Robert MacArthur and E. O. Wilson's book The Theory of Island Biogeography. One of his suggestions was that a single large reserve was preferable to several smaller reserves whose total areas were equal to the larger.
Since species richness increases with habitat area, as established by the species area curve, a larger block of habitat would support more species than any of the smaller blocks. This idea was popularised by many other ecologists, and has been incorporated into most standard textbooks in conservation biology, and was used in real-world conservation planning. This idea was challenged by Wilson's former student Daniel Simberloff, who pointed out that this idea relied on the assumption that smaller reserves had a nested species composition — it assumed that each larger reserve had all the species presented in any smaller reserve. If the smaller reserves had unshared species, then it was possible that two smaller reserves could have more species than a single large reserve.
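To make the arithmetic concrete, here is a toy illustration with hypothetical species inventories (all names invented for the example):

```python
# Hypothetical species inventories; all names are illustrative.
single_large = {"A", "B", "C", "D", "E", "F"}
small_one = {"A", "B", "C", "D"}
small_two = {"C", "D", "G", "H", "I"}   # G, H and I are unshared species

print(len(single_large))            # 6 species in the single large reserve
print(len(small_one | small_two))   # 7 species across the two small reserves
```

Because the two small reserves hold species not found in the large reserve, their combined richness exceeds that of the single large reserve, despite the species–area expectation.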
Simberloff and Abele expanded their argument in a subsequent paper in the journal The American Naturalist, stating that neither ecological theory nor empirical data exist to support the hypothesis that subdividing a nature reserve would increase extinction rates, effectively rebutting Diamond as well as MacArthur and Wilson. Bruce A. Wilcox and Dennis D. Murphy responded with a key paper, "Conservation strategy - effects of fragmentation on extinction", pointing out flaws in their argument while prov |
https://en.wikipedia.org/wiki/Volume-weighted%20average%20price | In finance, volume-weighted average price (VWAP) is the ratio of the value of a security or financial asset traded to the total volume of transactions during a trading session. It is a measure of the average trading price for the period.
Typically, the indicator is computed for one day, but it can be measured between any two points in time.
VWAP is often used as a trading benchmark by investors who aim to be as passive as possible in their execution. Many pension funds, and some mutual funds, fall into this category. The aim of using a VWAP trading target is to ensure that the trader executing the order does so in line with the volume on the market. It is sometimes argued that such execution reduces transaction costs by minimizing market impact costs (the additional cost due to the market impact, i.e. the adverse effect of a trader's activities on the price of a security).
VWAP is often used in algorithmic trading. A broker may guarantee the execution of an order at the VWAP and have a computer program enter the orders into the market to earn the trader's commission and create P&L. This is called a guaranteed VWAP execution. The broker can also trade in a best effort way and answer the client with the realized price. This is called a VWAP target execution; it incurs more dispersion in the answered price compared to the VWAP price for the client but a lower received/paid commission. Trading algorithms that use VWAP as a target belong to a class of algorithms known as volume participation algorithms.
The first execution based on the VWAP was in 1984 for the Ford Motor Company by James Elkins, then head trader at Abel Noser.
Formula
VWAP is calculated using the following formula:
$P_{\mathrm{VWAP}} = \dfrac{\sum_{j} P_j \cdot Q_j}{\sum_{j} Q_j}$
where:
$P_{\mathrm{VWAP}}$ is the Volume Weighted Average Price;
$P_j$ is the price of trade $j$;
$Q_j$ is the quantity of trade $j$;
$j$ is each individual trade that takes place over the defined period of time, excluding cross trades and basket cross trades.
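A minimal sketch of the computation (the (price, quantity) tuple format and the function name are ours):

```python
def vwap(trades):
    """Volume-weighted average price for an iterable of (price, quantity) pairs."""
    total_value = sum(price * qty for price, qty in trades)
    total_volume = sum(qty for _, qty in trades)
    return total_value / total_volume

# Three trades over the period: (10.0*100 + 10.5*300 + 10.2*200) / 600 ≈ 10.32
print(vwap([(10.0, 100), (10.5, 300), (10.2, 200)]))
```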
Using the VWAP
The VWAP can be used similar to moving averages, where prices |
https://en.wikipedia.org/wiki/Unique%20negative%20dimension | Unique negative dimension (UND) is a complexity measure for the model of learning from positive examples.
The unique negative dimension of a class C of concepts is the size of the maximum subclass D ⊆ C such that for every concept c ∈ D, the set ⋂(D ∖ {c}) ∖ c is nonempty; that is, some point lies in every other concept of D but not in c.
This concept was originally proposed by M. Gereb-Graus in "Complexity of learning from one-sided examples", Technical Report TR-20-89, Harvard University Division of Engineering and Applied Science, 1989.
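Assuming the reconstructed definition above is the intended one (it should be checked against the original technical report), a brute-force computation for a small finite class might look like this sketch:

```python
from itertools import combinations

def und(concepts):
    """Brute-force unique negative dimension of a finite concept class.

    `concepts` is a list of frozensets over a finite domain. A subclass D
    qualifies when every concept c in D has a "unique negative" witness:
    a point in every other member of D but not in c.
    """
    best = 1 if concepts else 0   # a lone concept is taken to qualify vacuously
    for k in range(2, len(concepts) + 1):
        for D in combinations(concepts, k):
            for c in D:
                others = [d for d in D if d is not c]
                if not (frozenset.intersection(*others) - c):
                    break
            else:
                best = max(best, k)
    return best

# Three concepts, each missing the single point the other two share
C = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})]
print(und(C))  # 3
```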
See also
Computational learning theory |
https://en.wikipedia.org/wiki/SEMI | SEMI is an industry association comprising companies involved in the electronics design and manufacturing supply chain. They provide equipment, materials and services for the manufacture of semiconductors, photovoltaic panels, LED and flat panel displays, micro-electromechanical systems (MEMS), printed and flexible electronics, and related micro and nano-technologies.
SEMI is headquartered in Milpitas, California, and has offices in Bangalore; Berlin; Brussels; Hsinchu; Seoul; Shanghai; Singapore; Tokyo; and Washington, D.C. Its main activities include conferences and trade shows, development of industry standards, market research reporting, and industry advocacy. The president and chief executive officer of the organization is Ajit Manocha.
Global advocacy
SEMI Global Advocacy represents the interests of the semiconductor industry's design, manufacturing and supply chain businesses worldwide. SEMI promotes its positions on public issues via press releases, position papers, presentations, social media, web content, and media interviews.
SEMI Global Advocacy focuses on five priorities: taxes, trade, technology, talent, and environment, health and safety (EHS).
Workforce development
SEMI Workforce Development attracts and develops talent that can fulfill the requirements of the electronics industry. SEMI programs include:
SEMI Works. Begun in 2019, SEMI Works develops a standardized process that identifies technical competencies and certifies relevant college coursework. The program is designed to improve the job hiring process for both applicants and employers.
Diversity and Inclusion Council. This council communicates best practices and benefits arising from diverse and inclusive cultures, using white papers, services, webinars, workshops, presentations and events.
SEMI standards
The SEMI Standards program was established in 1973 using proceeds from the west coast SEMICON show. Its first initiative, following meetings with silicon suppliers, was a succe |
https://en.wikipedia.org/wiki/Phaser%20%28effect%29 | A phaser is an electronic sound processor used to filter a signal by creating a series of peaks and troughs in the frequency spectrum. The position of the peaks and troughs of the waveform being affected is typically modulated by an internal low-frequency oscillator so that they vary over time, creating a sweeping effect.
Phasers are often used to give a "synthesized" or electronic effect to natural sounds, such as human speech. The voice of C-3PO from Star Wars was created by taking the actor's voice and treating it with a phaser.
Process
The electronic phasing effect is created by splitting an audio signal into two paths. One path treats the signal with an all-pass filter, which preserves the amplitude of the original signal and alters the phase. The amount of change in phase depends on the frequency. When signals from the two paths are mixed, the frequencies that are out of phase will cancel each other out, creating the phaser's characteristic notches. Changing the mix ratio changes the depth of the notches; the deepest notches occur when the mix ratio is 50%.
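A minimal static sketch of this split, filter, and mix structure in Python; a real phaser would also sweep the all-pass corner frequency with a low-frequency oscillator, and the filter form, stage count, and parameter values here are illustrative:

```python
import math

def allpass_stage(x, coeff):
    """First-order all-pass filter: H(z) = (c + z^-1) / (1 + c z^-1)."""
    y, x1, y1 = [], 0.0, 0.0
    for s in x:
        out = coeff * s + x1 - coeff * y1
        y.append(out)
        x1, y1 = s, out
    return y

def phaser(x, fs=44100, center_hz=800.0, stages=4, mix=0.5):
    """Cascade all-pass stages, then mix the wet path back with the dry signal."""
    t = math.tan(math.pi * center_hz / fs)
    c = (t - 1.0) / (t + 1.0)
    wet = x
    for _ in range(stages):
        wet = allpass_stage(wet, c)
    # Notches appear where wet and dry are out of phase; deepest at mix = 0.5.
    return [(1.0 - mix) * d + mix * w for d, w in zip(x, wet)]
```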
The definition of phaser typically excludes such devices where the all-pass section is a delay line; such a device is called a flanger. Using a delay line creates an unlimited series of equally spaced notches and peaks. It is possible to cascade a delay line with another type of all-pass filter. This combines the unlimited number of notches from the flanger with the uneven spacing of the phaser.
Structure
Traditional electronic phasers use a series of variable all-pass phase-shift networks which alter the phases of the different frequency components in the signal. These networks pass all frequencies at equal volume, introducing only phase change to the signal. Human ears are not very responsive to phase differences, but this creates audible interferences when mixed back with the dry (unprocessed) signal, creating notches. The simplified structure of a mono phaser is shown below:
The number of all-pas |
https://en.wikipedia.org/wiki/Jigdo | Jigdo (a portmanteau of "Jigsaw" and "download") is a utility typically used for downloading to piece together a large file, most commonly an optical disk image such as a CD, DVD or Blu-ray Disc (BD) image, from many smaller individual constituent files. The constituent files may be local and/or retrieved from one or more mirror sites. Jigdo's features are similar to BitTorrent, but unlike BitTorrent, Jigdo uses a client-server model, not peer-to-peer.
Jigdo itself is quite portable and is available for many UNIX and Unix-like operating systems, and is also available for Microsoft Windows. Released under the terms of the GPL-2.0-only license, Jigdo is free software.
Uses
A quite common use would be to construct a Linux CD or DVD image for installation or distribution, where a slightly older version or release of same, or a cache or local partial mirror, already contains some or many of the needed constituent files. That would typically proceed as follows: Jigdo would be invoked using the jigdo-lite command, with a command line argument of the URL of a ".jigdo" file. Jigdo would then download that file, and after examining its contents, would also download a ".template" file. After inspecting the ".template" file, Jigdo would prompt for the location of files to scan. The user would then either enter or select from a list the location of files to scan. Jigdo would scan that location for any files that match any of the needed constituent files. Any matching files would be used in constructing the target image. Jigdo prompts again, and if the user gives a location, the process repeats - giving Jigdo the opportunity to scan multiple locations for the needed files. If the user enters no location, Jigdo proceeds to download any unmatched constituent files and to use them to assemble the target image file.
The jigdo-file utility is generally used to create the ".jigdo" and ".template" files needed to create target images using Jigdo.
Presently at least Debian and Ubu |
https://en.wikipedia.org/wiki/TNT%20%28instant%20messenger%29 | TNT is an open source instant messaging client which is designed to use AIM and uses the AOL TOC protocol. The client is run within Emacs or XEmacs and is written in Emacs Lisp.
The client was originally written for AOL, but the project was abandoned around 1999, along with its other TOC clients, TiK and TAC. Since then, independent developers have continued to add features and make new releases.
TNT has been revised to work with the TOC2 protocol.
See also
Comparison of cross-platform instant messaging clients
External links
TNT independent development project site |
https://en.wikipedia.org/wiki/Cobweb%20plot | A cobweb plot, or Verhulst diagram, is a visual tool used in the dynamical systems field of mathematics to investigate the qualitative behaviour of one-dimensional iterated functions, such as the logistic map. Using a cobweb plot, it is possible to infer the long term status of an initial condition under repeated application of a map.
Method
For a given iterated function f, the plot consists of a diagonal (y = x) line and a curve representing y = f(x). To plot the behaviour of a value x₀, apply the following steps.
Find the point on the function curve with an x-coordinate of x₀. This has the coordinates (x₀, f(x₀)).
Plot horizontally across from this point to the diagonal line. This has the coordinates (f(x₀), f(x₀)).
Plot vertically from the point on the diagonal to the function curve. This has the coordinates (f(x₀), f(f(x₀))).
Repeat from step 2 as required.
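These steps translate directly into code; here is a minimal sketch (function and variable names are ours) that returns the cobweb's vertex sequence for plotting:

```python
def cobweb_points(f, x0, steps=10):
    """Vertex sequence of a cobweb plot for iterated function f, starting at x0."""
    pts = [(x0, f(x0))]        # step 1: up to the curve
    x = x0
    for _ in range(steps):
        y = f(x)
        pts.append((y, y))     # step 2: across to the diagonal
        pts.append((y, f(y)))  # step 3: up/down to the curve
        x = y
    return pts

# Logistic map with r = 2.5: orbits spiral in toward the fixed point 0.6
logistic = lambda x: 2.5 * x * (1 - x)
print(cobweb_points(logistic, 0.1, steps=3))
```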
Interpretation
On the cobweb plot, a stable fixed point corresponds to an inward spiral, while an unstable fixed point is an outward one. It follows from the definition of a fixed point that these spirals will center at a point where the diagonal y=x line crosses the function graph. A period 2 orbit is represented by a rectangle, while greater period cycles produce further, more complex closed loops. A chaotic orbit would show a 'filled out' area, indicating an infinite number of non-repeating values.
See also
Jones diagram – similar plotting technique
Fixed-point iteration – iterative algorithm to find fixed points (produces a cobweb plot) |
https://en.wikipedia.org/wiki/Accessory%20pigment | Accessory pigments are light-absorbing compounds, found in photosynthetic organisms, that work in conjunction with chlorophyll a. They include other forms of this pigment, such as chlorophyll b in green algal and vascular ("higher") plant antennae, while other algae may contain chlorophyll c or d. In addition, there are many non-chlorophyll accessory pigments, such as carotenoids or phycobiliproteins, which also absorb light and transfer that light energy to photosystem chlorophyll. Some of these accessory pigments, in particular the carotenoids, also serve to absorb and dissipate excess light energy, or work as antioxidants. The large, physically associated group of chlorophylls and other accessory pigments is sometimes referred to as a pigment bed.
The different chlorophyll and non-chlorophyll pigments associated with the photosystems all have different absorption spectra, either because the spectra of the different chlorophyll pigments are modified by their local protein environment or because the accessory pigments have intrinsic structural differences. The result is that, in vivo, a composite absorption spectrum of all these pigments is broadened and flattened such that a wider range of visible and infrared radiation is absorbed by plants and algae. Most photosynthetic organisms do not absorb green light well, thus most remaining light under leaf canopies in forests or under water with abundant plankton is green, a spectral effect called the "green window". Organisms such as some cyanobacteria and red algae contain accessory phycobiliproteins that absorb green light reaching these habitats.
In aquatic ecosystems, it is likely that the absorption spectrum of water, along with gilvin and tripton (dissolved and particulate organic matter, respectively), determines phototrophic niche differentiation. The six shoulders in the light absorption of water between wavelengths 400 and 1100 nm correspond to troughs in the collective absorption of at least twenty diverse |
https://en.wikipedia.org/wiki/CPN-AMI | CPN-AMI is a computer-aided software engineering environment based on Petri Net specifications. It provides the ability to specify the behavior of a distributed system—and to evaluate properties such as invariants (preservation of resources), absence of deadlocks, liveness, or temporal logic properties (relations between events in the system).
CPN-AMI relies on AMI-Nets, which are well-formed Petri nets with syntactic facilities. Well-formed Petri nets were jointly developed by the University of Paris 6 (Université P. & M. Curie) and the University of Torino in the early 1990s. This Petri net class supports symbolic techniques for model checking, and thus provides a very compressed way to store all states of a system.
Since 2016 CPN-AMI has been listed by the owners as "still available but not maintained any more".
See also
Well-formed Petri net
Petriscript |
https://en.wikipedia.org/wiki/Geotagging | Geotagging, or GeoTagging, is the process of adding geographical identification metadata to various media such as a geotagged photograph or video, websites, SMS messages, QR Codes or RSS feeds and is a form of geospatial metadata. This data usually consists of latitude and longitude coordinates, though they can also include altitude, bearing, distance, accuracy data, and place names, and perhaps a time stamp.
Geotagging can help users find a wide variety of location-specific information from a device. For instance, someone can find images taken near a given location by entering latitude and longitude coordinates into a suitable image search engine. Geotagging-enabled information services can also potentially be used to find location-based news, websites, or other resources. Geotagging can tell users the location of the content of a given picture or other media or the point of view, and conversely on some media platforms show media relevant to a given location.
The geographical location data used in geotagging can, in almost every case, be derived from the global positioning system, and is based on a latitude/longitude-coordinate system that presents each location on the earth from 180° west through 180° east along the Equator and 90° north through 90° south along the prime meridian.
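As a small worked example of this coordinate data, here is a sketch converting a degrees/minutes/seconds geotag (the form used in EXIF GPS fields, for instance) to signed decimal degrees; the sample coordinates are illustrative:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert degrees/minutes/seconds plus a hemisphere reference to decimal degrees.

    ref is 'N'/'E' for positive values, 'S'/'W' for negative.
    """
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

# 37°49'11.4" S, 144°57'47.5" E (illustrative coordinates)
print(dms_to_decimal(37, 49, 11.4, "S"), dms_to_decimal(144, 57, 47.5, "E"))
```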
The related term geocoding refers to the process of taking non-coordinate-based geographical identifiers, such as a street address, and finding associated geographic coordinates (or vice versa for reverse geocoding). Such techniques can be used together with geotagging to provide alternative search techniques.
Applications
In social media
Geotagging is a popular feature on several social media platforms, such as Facebook and Instagram.
Facebook users can geotag photos that can be added to the page of the location they are tagging. Users may also use a feature that allows them to find nearby Facebook friends by generating a list of people according to the location tracker in their |
https://en.wikipedia.org/wiki/Corchorus | Corchorus is a genus of about 40–100 species of flowering plants in the family Malvaceae, native to tropical and subtropical regions throughout the world.
Different common names are used in different contexts, with jute applying to the fiber produced from the plant, and jute mallow leaves for the leaves used as a vegetable.
Description
The plants are tall, usually annual herbs, reaching a height of 2–4 m, unbranched or with only a few side branches. The leaves are alternate, simple, lanceolate, 5–15 cm long, with an acuminate tip and a finely serrated or lobed margin. The flowers are small (2–3 cm diameter) and yellow, with five petals; the fruit is a many-seeded capsule.
Taxonomy
The genus Corchorus is classified under the subfamily Grewioideae of the family Malvaceae. It contains around 40 to 100 species.
The genus Oceanopapaver, previously of uncertain placement, has recently been synonymized under Corchorus. The name was established by André Guillaumin in 1932 for the single species Oceanopapaver neocaledonicum Guillaumin from New Caledonia. The genus has been classified in a number of different families including Capparaceae, Cistaceae, Papaveraceae, and Tiliaceae. The putative family name "Oceanopapaveraceae" has occasionally appeared in print and on the web but is a nomen nudum and has never been validly published nor recognised by any system of plant taxonomy.
The genus Corchorus was first described by Linnaeus in his great work Species Plantarum (1753). The name is derived from the Ancient Greek word κόρχορος (kórkhoros) or κόρκορος (kórkoros), which referred to a wild plant of uncertain identity, possibly jute or wild asparagus.
Species
Species in the genus include:
Corchorus aestuans L.
Corchorus africanus Bari
Corchorus angolensis Exell & Mendonça
Corchorus aquaticus Rusby
Corchorus argillicola Moeaha & P.J.D.Winter
Corchorus asplenifolius Burch.
Corchorus aulacocarpus Halford
Corchorus baldaccii Mattei
Corchorus brevicornutus Vollesen
Corchorus capsularis L.
Corchorus ca |
https://en.wikipedia.org/wiki/Magnaporthe%20grisea | Magnaporthe grisea, also known as rice blast fungus, rice rotten neck, rice seedling blight, blast of rice, oval leaf spot of graminea, pitting disease, ryegrass blast, Johnson spot, neck blast, and wheat blast, is a plant-pathogenic fungus and model organism that causes a serious disease affecting rice. It is now known that M. grisea consists of a cryptic species complex containing at least two biological species that have clear genetic differences and do not interbreed. Complex members isolated from Digitaria have been more narrowly defined as M. grisea. The remaining members of the complex isolated from rice and a variety of other hosts have been renamed Magnaporthe oryzae, within the same M. grisea complex. Confusion on which of these two names to use for the rice blast pathogen remains, as both are now used by different authors.
Members of the M. grisea complex can also infect other agriculturally important cereals including wheat, rye, barley, and pearl millet causing diseases called blast disease or blight disease. Rice blast causes economically significant crop losses annually. Each year it is estimated to destroy enough rice to feed more than 60 million people. The fungus is known to occur in 85 countries worldwide and was the most devastating fungal plant pathogen in the world.
Hosts and symptoms
M. grisea is an ascomycete fungus. It is an extremely effective plant pathogen as it can reproduce both sexually and asexually to produce specialized infectious structures known as appressoria that infect aerial tissues and hyphae that can infect root tissues.
Rice blast has been observed on rice strains M-201, M-202, M-204, M-205, M-103, M-104, S-102, L-204, Calmochi-101, with M-201 being the most vulnerable. Initial symptoms are white to gray-green lesions or spots with darker borders produced on all parts of the shoot, while older lesions are elliptical or spindle-shaped and whitish to gray with necrotic borders. Lesions may enlarge and coalesce to kil |
https://en.wikipedia.org/wiki/VT180 | The VT180 is a personal computer produced by Digital Equipment Corporation (DEC) of Maynard, Massachusetts, USA.
Introduced in early 1982, the CP/M-based VT180 was DEC's entry-level microcomputer. "VT180" is the unofficial name for the combination of the VT100 computer terminal and VT18X option. The VT18X includes a 2 MHz Zilog Z80 microprocessor and 64K RAM on two circuit boards that fit inside the terminal, and two external 5.25-inch floppy disk drives with room for two more in an external enclosure. The VT180 was codenamed "Robin".
Digital later released a full-fledged personal computer known as the Rainbow 100 as the successor to Robin.
When Digital ended the VT100 terminal family in 1983, it also discontinued the VT180. No direct replacement was offered, although the Rainbow 100 eventually provided a superset of Robin's functionality. |
https://en.wikipedia.org/wiki/Sticky%20bit | In computing, the sticky bit is a user ownership access right flag that can be assigned to files and directories on Unix-like systems.
There are two definitions: one for files, one for directories.
For files, particularly executables, the superuser could tag them to be retained in main memory, even when no longer in use, to minimize the swapping that would occur when the file was needed again and had to be reloaded from relatively slow secondary memory. This function has become obsolete due to swapping optimization.
For directories, when a directory's sticky bit is set, the filesystem treats the files in such directories in a special way so only the file's owner, the directory's owner, or root user can rename or delete the file. Without the sticky bit set, any user with write and execute permissions for the directory can rename or delete contained files, regardless of the file's owner. Typically this is set on the /tmp directory to prevent ordinary users from deleting or moving other users' files.
The modern function of the sticky bit refers to directories, and protects directories and their content from being hijacked by non-owners; this is found in most modern Unix-like systems. Files in a shared directory such as /tmp belong to individual owners, and non-owners may not delete, overwrite or rename them.
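A minimal sketch of setting the directory sticky bit from Python on a Unix-like system, using the standard os and stat modules (the temporary directory stands in for a real shared directory):

```python
import os
import stat
import tempfile

# Create a directory and set its sticky bit (the equivalent of `chmod +t`).
shared = tempfile.mkdtemp()
perms = stat.S_IMODE(os.stat(shared).st_mode)
os.chmod(shared, perms | stat.S_ISVTX)

# `ls -ld` on the directory would now show a trailing 't' in the mode string.
print(bool(os.stat(shared).st_mode & stat.S_ISVTX))  # True on Unix-like systems
```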
History
The sticky bit was introduced in the Fifth Edition of Unix (in 1974) for use with pure executable files. When set, it instructed the operating system to retain the text segment of the program in swap space after the process exited. This speeds up subsequent executions by allowing the kernel to make a single operation of moving the program from swap to real memory. Thus, frequently-used programs like editors would load noticeably faster. One notable problem with "stickied" programs was replacing the executable (for instance, during patching); to do so required removing the sticky bit from the executable, executing the program and exiting to flush the |
https://en.wikipedia.org/wiki/Rainbow%20100 | The Rainbow 100 is a microcomputer introduced by Digital Equipment Corporation (DEC) in 1982. This desktop unit had a monitor similar to the VT220 and a dual-CPU box with both Zilog Z80 and Intel 8088 CPUs.
The Rainbow 100 was a triple-use machine: VT100 mode (industry standard terminal for interacting with DEC's own VAX), 8-bit CP/M mode (using the Z80), and CP/M-86 or MS-DOS mode using the 8088.
It ultimately failed in a marketplace that became dominated by the simpler IBM PC and its clones, which established the industry standard as compatibility with CP/M became less important than IBM PC compatibility. Writer David Ahl called it a disastrous foray into the personal computer market.
The Rainbow was launched along with the similarly packaged DEC Professional and DECmate II which were also not successful. The failure of DEC to gain a significant foothold in the high-volume PC market would be the beginning of the end of the computer hardware industry in New England, as nearly all computer companies located there were focused on minicomputers for large organizations, from DEC to Data General, Wang, Prime, Computervision, Honeywell, and Symbolics Inc.
Models
The Rainbow came in three models, the 100A, 100B and 100+. The "A" model was the first released, followed later by the "B" model. The most noticeable differences between the two models were the firmware and slight hardware changes. The systems were referred to with model numbers PC-100A and PC-100B respectively; later units were also designated PC-100B2. The system included a user-changeable ROM chip in a special casing to support the local keyboard layout and the language of the boot screen. On the 100A, the ROMs only supported three languages.
The Rainbow did not have an ISA bus, so the typical RAM limit didn't apply, with both models supporting a maximum RAM of over .
PC-100A
The "A" model was the first produced by Digital. The distinguishing characteristic of the "A" model from an end-user perspective wa |
https://en.wikipedia.org/wiki/Spin%E2%80%93orbit%20interaction | In quantum physics, the spin–orbit interaction (also called spin–orbit effect or spin–orbit coupling) is a relativistic interaction of a particle's spin with its motion inside a potential. A key example of this phenomenon is the spin–orbit interaction leading to shifts in an electron's atomic energy levels, due to electromagnetic interaction between the electron's magnetic dipole, its orbital motion, and the electrostatic field of the positively charged nucleus. This phenomenon is detectable as a splitting of spectral lines, which can be thought of as a Zeeman effect product of two relativistic effects: the apparent magnetic field seen from the electron perspective and the magnetic moment of the electron associated with its intrinsic spin. A similar effect, due to the relationship between angular momentum and the strong nuclear force, occurs for protons and neutrons moving inside the nucleus, leading to a shift in their energy levels in the nucleus shell model. In the field of spintronics, spin–orbit effects for electrons in semiconductors and other materials are explored for technological applications. The spin–orbit interaction is at the origin of magnetocrystalline anisotropy and the spin Hall effect.
For atoms, energy level splitting produced by the spin–orbit interaction is usually of the same order in size as the relativistic corrections to the kinetic energy and the zitterbewegung effect. The addition of these three corrections is known as the fine structure. The interaction between the magnetic field created by the electron and the magnetic moment of the nucleus is a slighter correction to the energy levels known as the hyperfine structure.
In atomic energy levels
This section presents a relatively simple and quantitative description of the spin–orbit interaction for an electron bound to a hydrogen-like atom, up to first order in perturbation theory, using some semiclassical electrodynamics and non-relativistic quantum mechanics. This gives results that |
https://en.wikipedia.org/wiki/Webcentral | Webcentral, formerly known as Melbourne IT Group, is an Australian digital services provider. It is a publicly traded company that was listed on the Australian Securities Exchange in December 1999. It provides internet domain registration, email/office applications, cloud hosting, cloud services, 5G networks, managed services, IT services, DevOps security, and digital marketing. Founded in 1996, it was the first Australian domain name registrar.
History
Beginnings
Webcentral's history dates back to April 1996 when Eugene Falk and Professor Peter Gerrand were appointed as Chairman and CEO, respectively, for the University of Melbourne's new commercial subsidiary Melbourne Information Technology International, which commenced operations from 1 May 1996. Professor Iain Morrison was appointed the third foundation director of the company. The company chose to trade under the business name of Melbourne IT from its earliest days.
Contrary to popular belief, the company was not set up to trade in domain names. The company's charter was to demonstrate the University's strategic leadership in working with industry and government in selected areas of IT. Its first and continuingly profitable business, up until its float on the Australian Securities Exchange in December 1999, was its joint venture ASAC (Advanced Services Applications Centre) with Ericsson Australia. ASAC was set up to develop applications with synergies between the Internet and advanced telecommunications, particularly mobile products. ASAC was recognized by Ericsson as one of its Global Design Centres in 1997 and contributed $0.5 million in profit to Melbourne IT in the year before its float. ASAC was incorporated as an independent joint venture in December 2000, but became a casualty of Ericsson's downsizing of its global R&D following the bursting of the Dot-com bubble in July 2000.
On 21 June 1996 a front-page article in the Australian Financial Review (AFR) by Charles Wright drew attention to the p |
https://en.wikipedia.org/wiki/Information%20continuum | The term information continuum is used to describe the whole set of all information, in connection with information management. The term may be used in reference to the information or the information infrastructure of a people, a species, a scientific subject or an institution.
Other usages
In biological anthropology, the term information continuum is related to the study of social information transfer and the evolution of communication in animals.
The Internet is sometimes called an information continuum. |
https://en.wikipedia.org/wiki/Bluetongue%20virus | Bluetongue virus (BTV) is a double-stranded RNA virus of the genus Orbivirus in the family Sedoreoviridae. It causes bluetongue disease. |
https://en.wikipedia.org/wiki/Foundation%20species | In ecology, the foundation species are species that have a strong role in structuring a community. A foundation species can occupy any trophic level in a food web (i.e., they can be primary producers, herbivores or predators). The term was coined by Paul K. Dayton in 1972, who applied it to certain members of marine invertebrate and algae communities. It was clear from studies in several locations that there were a small handful of species whose activities had a disproportionate effect on the rest of the marine community and they were therefore key to the resilience of the community. Dayton’s view was that focusing on foundation species would allow for a simplified approach to more rapidly understand how a community as a whole would react to disturbances, such as pollution, instead of attempting the extremely difficult task of tracking the responses of all community members simultaneously. The term has since been applied to a range of organisms in ecosystems around the world, in both aquatic and terrestrial environments. Aaron Ellison et al. introduced the term to terrestrial ecology by applying the term foundation species to tree species that define and structure certain forest ecosystems through their influences on associated organisms and modulation of ecosystem processes.
Examples and outcomes of foundation species loss
A study conducted at the McKenzie Flats of the Sevilleta National Wildlife Refuge in New Mexico, a semiarid biome transition zone, observed the effect of losing a variety of dominant and codominant foundation plant species on the growth of other species. This transition zone consists of two Chihuahuan Desert species, black grama (Bouteloua eriopoda) and creosote bush (Larrea tridentata), and a shortgrass steppe species, blue grama (Bouteloua gracilis). Each species dominates an area with a specific soil environment. Black grama dominates sandy soils, while blue grama dominates in soils with high clay content, and creosote bush |
https://en.wikipedia.org/wiki/Ann%20Arbor%20staging | Ann Arbor staging is the staging system for lymphomas, both in Hodgkin's lymphoma (formerly designated Hodgkin's disease) and non-Hodgkin lymphoma (abbreviated NHL). It was initially developed for Hodgkin's, but has some use in NHL. It has roughly the same function as TNM staging in solid tumors.
The stage depends on both the place where the malignant tissue is located (as located with biopsy, CT scanning, gallium scan and increasingly positron emission tomography) and on systemic symptoms due to the lymphoma ("B symptoms": night sweats, weight loss of >10% or fevers).
Principal stages
The principal stage is determined by location of the tumor:
Stage I indicates that the cancer is located in a single region, usually one lymph node and the surrounding area. Stage I often will not have outward symptoms.
Stage II indicates that the cancer is located in two separate regions, an affected lymph node or lymphatic organ and a second affected area, and that both affected areas are confined to one side of the diaphragm—that is, both are above the diaphragm, or both are below the diaphragm.
Stage III indicates that the cancer has spread to both sides of the diaphragm, including one organ or area near the lymph nodes or the spleen.
Stage IV indicates diffuse or disseminated involvement of one or more extralymphatic organs, including any involvement of the liver, bone marrow, or nodular involvement of the lungs.
Modifiers
These letters can be appended to some stages:
A or B: the absence of constitutional (B-type) symptoms is denoted by adding an "A" to the stage; the presence is denoted by adding a "B" to the stage.
S: is used if the disease has spread to the spleen.
E: is used if the disease is "extranodal" (not in the lymph nodes) or has spread from lymph nodes to adjacent tissue.
X: is used if the largest deposit is more than 10 cm across ("bulky disease"), or if the mediastinum is wider than ⅓ of the chest on a chest X-ray.
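As an illustration of how these modifiers compose with the principal stage, here is a hypothetical helper (the function and its argument names are ours, not part of any clinical standard):

```python
def ann_arbor_label(stage, b_symptoms=False, spleen=False, extranodal=False, bulky=False):
    """Compose an Ann Arbor stage label such as 'IIBX' (illustrative helper only)."""
    numerals = {1: "I", 2: "II", 3: "III", 4: "IV"}
    label = numerals[stage]
    label += "B" if b_symptoms else "A"   # constitutional (B-type) symptoms
    if spleen:
        label += "S"
    if extranodal:
        label += "E"
    if bulky:
        label += "X"
    return label

print(ann_arbor_label(2, b_symptoms=True, bulky=True))  # IIBX
```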
Type of staging
The nature of the staging is |
https://en.wikipedia.org/wiki/Rock%20mechanics | Rock mechanics is a theoretical and applied science of the mechanical behavior of rocks and rock masses.
Compared to geology, it is the branch of mechanics concerned with the response of rock and rock masses to the force fields of their physical environment.
Background
Rock mechanics is part of a much broader subject of geomechanics, which is concerned with the mechanical responses of all geological materials, including soils.
Rock mechanics is concerned with the application of the principles of engineering mechanics to the design of structures built in or on rock. The structure could include many objects such as a drilling well, a mine shaft, a tunnel, a reservoir dam, a repository component, or a building. Rock mechanics is used in many engineering disciplines, but is primarily used in mining, civil, geotechnical, transportation, and petroleum engineering.
Rock mechanics answers questions such as, "is reinforcement necessary for a rock, or will it be able to handle whatever load it is faced with?" It also includes the design of reinforcement systems, such as rock bolting patterns.
Assessing the Project Site
Before any work begins, the construction site must be investigated properly to determine the geological conditions of the site. Field observations, deep drilling, and geophysical surveys can all give necessary information to develop a safe construction plan and create a site geological model. The level of investigation conducted at a site depends on factors such as budget, time frame, and expected geological conditions.
The first step of the investigation is the collection of maps and aerial photos to analyze. This can provide information about potential sinkholes, landslides, erosion, etc. Maps can provide information on the rock type of the site, geological structure, and boundaries between bedrock units.
Boreholes
Creating a borehole is a technique that consists of drilling through the ground in various areas at various depths, to get a bett |
https://en.wikipedia.org/wiki/Biochemical%20cascade | A biochemical cascade, also known as a signaling cascade or signaling pathway, is a series of chemical reactions that occur within a biological cell when initiated by a stimulus. This stimulus, known as a first messenger, acts on a receptor that is transduced to the cell interior through second messengers which amplify the signal and transfer it to effector molecules, causing the cell to respond to the initial stimulus. Most biochemical cascades are series of events, in which one event triggers the next, in a linear fashion. At each step of the signaling cascade, various controlling factors are involved to regulate cellular actions, in order to respond effectively to cues about their changing internal and external environments.
An example would be the coagulation cascade of secondary hemostasis which leads to fibrin formation, and thus, the initiation of blood coagulation. Another example, sonic hedgehog signaling pathway, is one of the key regulators of embryonic development and is present in all bilaterians. Signaling proteins give cells information to make the embryo develop properly. When the pathway malfunctions, it can result in diseases like basal cell carcinoma. Recent studies point to the role of hedgehog signaling in regulating adult stem cells involved in maintenance and regeneration of adult tissues. The pathway has also been implicated in the development of some cancers. Drugs that specifically target hedgehog signaling to fight diseases are being actively developed by a number of pharmaceutical companies.
Introduction
Signaling cascades
Cells require a full and functional cellular machinery to live. When they belong to complex multicellular organisms, they need to communicate among themselves and work for symbiosis in order to give life to the organism. These communications between cells triggers intracellular signaling cascades, termed signal transduction pathways, that regulate specific cellular functions. Each signal transduction occurs with a p |
https://en.wikipedia.org/wiki/Trumpet%20Winsock | Trumpet Winsock is a TCP/IP stack for Windows 3.x that implemented the Winsock API, which is an API for network sockets. It was developed by Peter Tattam from Trumpet Software International and distributed as shareware software.
History
The first version, 1.0A, was released in 1994. It rapidly gained a reputation as the best tool for connecting to the internet, and guides for internet connectivity commonly advised using Trumpet Winsock. The author received very little financial compensation for developing the software. In 1996, a 32-bit version was released.
Lawsuit
In the Trumpet Software Pty Ltd. v OzEmail Pty Ltd. case, the defendant had distributed Trumpet Winsock for free with a magazine. It also suppressed notices that the software was developed by Trumpet Software.
Replacement by Microsoft
Windows 95 includes an IPv4 stack but it is not installed by default. An early version of this IPv4 stack, codenamed Wolverine, was released by Microsoft for Windows for Workgroups in 1994. Microsoft also released Internet Explorer 5 for Windows 3.x with an included dialer application for calling the modem pool of a dial-up Internet service provider. The Wolverine stack does not include a dialer, but another computer on the same LAN may make a dialed connection, or a separate dialer may be used on the computer running Wolverine.
Architecture
The binary for Trumpet Winsock is called TCPMAN.EXE. Other files included the main winsock.dll and three UCSC connection .cmd file scripts. |
https://en.wikipedia.org/wiki/Relativistic%20plasma | Relativistic plasmas in physics are plasmas for which relativistic corrections to a particle's mass and velocity are important. Such corrections typically become important when a significant number of electrons reach speeds greater than 0.86c (Lorentz factor γ ≈ 2).
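A quick check of that threshold (a minimal sketch; the function name is ours):

```python
import math

def lorentz_factor(beta):
    """gamma = 1 / sqrt(1 - beta^2), with beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

print(lorentz_factor(0.86))  # ≈ 1.96, i.e. gamma ≈ 2 at v = 0.86c
```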
Such plasmas may be created either by heating a gas to very high temperatures or by the impact of a high-energy particle beam. A relativistic plasma with a thermal distribution function has temperatures greater than around 260 keV, or 3.0 GK (5.5 billion degrees Fahrenheit), where approximately 10% of the electrons have a Lorentz factor above 2. Since these temperatures are so high, most relativistic plasmas are small and brief, and are often the result of a relativistic beam impacting some target. (More mundanely, "relativistic plasma" might denote a normal, cold plasma moving at a significant fraction of the speed of light relative to the observer.)
Relativistic plasmas may result when two particle beams collide at speeds comparable to the speed of light, and in the cores of supernovae. Plasmas hot enough for particles other than electrons to be relativistic are even more rare, since other particles are more massive and thus require more energy to accelerate to a significant fraction of the speed of light. (About 10% of protons would have a Lorentz factor above 2 at a temperature of 481 MeV, or 5.6 TK.) Still higher energies are necessary to achieve a quark–gluon plasma.
The primary changes in a plasma's behavior as it approaches the relativistic regime are slight modifications to the equations which describe a non-relativistic plasma and to collision and interaction cross sections. The equations may also need modifications to account for the production of electron-positron pairs (or of other particles at the highest temperatures).
A plasma double layer with a large potential drop and layer separation may accelerate electrons to relativistic velocities and produce synchrotron radiation.
Applications
Laser Wakefield Acceleration
See also
Li |
https://en.wikipedia.org/wiki/Maternal%20impression | The conception of a maternal impression rests on the belief that a powerful mental (or sometimes physical) influence working on the mother's mind may produce an impression, either general or definite, on the child she is carrying. The child might be said to be "marked" as a result.
Medicine
Maternal impression, according to a long-discredited medical theory, was a phenomenon that explained the existence of birth defects and congenital disorders. The theory stated that an emotional stimulus experienced by a pregnant woman could influence the development of the fetus. For example, it was sometimes supposed that the mother of the Elephant Man was frightened by an elephant during her pregnancy, thus "imprinting" the memory of the elephant onto the gestating fetus. Mental problems, such as schizophrenia and depression, were believed to be a manifestation of similar disordered feelings in the mother. For instance, a pregnant woman who experienced great sadness might imprint depressive tendencies onto the fetus in her womb.
The theory of maternal impression was largely abandoned by the 20th century, with the development of modern genetic theory.
Folklore
In folklore, maternal imprinting, or Versehen (a German noun meaning "inadvertence" or as a verb "to provide") as it is usually called, is the belief that a sudden fear of some object or animal in a pregnant woman can cause her child to bear the mark of it.
Some of the more vivid examples are given in Vance Randolph's Ozark Superstitions: "Children are also said to be marked by some sudden fright or unpleasant experience of the mother, and I have myself seen a pop-eyed, big-mouthed idiot whose condition is ascribed to the fact that his mother stepped on a toad several months before his birth. In another case, a large red mark on a baby's cheek was caused by the mother seeing a man shot down at her side, when the discharge of the gun threw some of the blood and brains into her face." Other explanations claimed that birthma
https://en.wikipedia.org/wiki/Residual%20stress | In materials science and solid mechanics, residual stresses are stresses that remain in a solid material after the original cause of the stresses has been removed. Residual stress may be desirable or undesirable. For example, laser peening imparts deep beneficial compressive residual stresses into metal components such as turbine engine fan blades, and it is used in toughened glass to allow for large, thin, crack- and scratch-resistant glass displays on smartphones. However, unintended residual stress in a designed structure may cause it to fail prematurely.
Residual stresses can result from a variety of mechanisms including inelastic (plastic) deformations, temperature gradients (during thermal cycle) or structural changes (phase transformation). Heat from welding may cause localized expansion, which is taken up during welding by either the molten metal or the placement of parts being welded. When the finished weldment cools, some areas cool and contract more than others, leaving residual stresses. Another example occurs during semiconductor fabrication and microsystem fabrication when thin film materials with different thermal and crystalline properties are deposited sequentially under different process conditions. The stress variation through a stack of thin film materials can be very complex and can vary between compressive and tensile stresses from layer to layer.
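As a worked illustration of the temperature-gradient mechanism (a textbook result, not taken from this article): a fully constrained element heated by ΔT develops a thermal stress

```latex
\sigma_{\mathrm{th}} = E\,\alpha\,\Delta T
```

where E is Young's modulus and α the coefficient of linear thermal expansion. For steel (E ≈ 200 GPa, α ≈ 12×10⁻⁶ K⁻¹), a constrained ΔT of 100 K gives σ ≈ 240 MPa, already comparable to the yield strength of a mild steel.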
Applications
While uncontrolled residual stresses are undesirable, some designs rely on them. In particular, brittle materials can be toughened by including compressive residual stress, as in the case for toughened glass and pre-stressed concrete. The predominant mechanism for failure in brittle materials is brittle fracture, which begins with initial crack formation. When an external tensile stress is applied to the material, the crack tips concentrate stress, increasing the local tensile stresses experienced at the crack tips to a greater extent than the average stress on the bulk material. This |
https://en.wikipedia.org/wiki/Education%20Quality%20and%20Accountability%20Office | The Education Quality and Accountability Office (EQAO) is a Crown agency of the Government of Ontario in Canada. It was legislated into creation in 1996 in response to recommendations made by the Royal Commission on Learning in February 1995.
EQAO is governed by a board of directors appointed by the Lieutenant Governor in Council. Cameron Montgomery has been the chair of the board since February 2019. EQAO has an annual budget of approximately $33 million CDN.
Purpose
The stated purpose of EQAO tests is to ensure that there is accountability between school boards and schools in the publicly funded system in Ontario. Educational accountability is important to three key stakeholders: taxpayers, elected officials, and teachers. By providing yearly standardized tests, the Ministry of Education hopes to increase the quality of education in Ontario, while also using the tests to make plans for future improvement.
EQAO tests are intended to measure the student's ability to:
Make sense of what they read in different kinds of texts
Express their thoughts in writing using appropriate grammar, spelling and punctuation and
Use appropriate math skills to solve problems
EQAO versus classroom tests
EQAO tests have different goals and intentions than normal classroom tests. The two kinds of test are not the same, but EQAO test results considered alongside classroom results can provide a picture of the students' overall learning.
Classroom tests:
measure how well students have learned specific information;
provide quick results teachers can use to modify teaching strategies;
may have subjective components, based on the teacher's knowledge of each student, and
provide results that may not be comparable across the school, board or province
EQAO tests:
measure students' cumulative knowledge and skills in relation to a provincial standard;
are given at key stages of students' education;
are administered, scored and reported on in a consistent and objective m |
https://en.wikipedia.org/wiki/Barbara%20Keeley | Barbara Mary Keeley (born 26 March 1952) is a British Labour Party politician who has served as the Member of Parliament (MP) for Worsley and Eccles South, previously Worsley, since 2005. She has served as Shadow Minister for Music and Tourism since 2023, and previously served as Deputy Leader of the House of Commons from 2009 to 2010 and in Jeremy Corbyn's Shadow Cabinet as Shadow Minister for Mental Health and Social Care from 2016 to 2020.
Early life
Keeley was educated at Mount St Mary's College in Leeds and the University of Salford, gaining a BA in Politics and Contemporary History.
Her early career was with IBM, first as a Systems Engineer and then as a Field Systems Engineering Manager. Later she became an independent consultant, working on community regeneration issues across North West England.
She was elected as a Labour councillor on Trafford Council in 1995 and served as a member for Priory ward until 2004. She was Cabinet member for Children and Young People, Early Years and Childcare, and Health and Wellbeing. From 2002 to 2004, she was Cabinet member for Education, Children's Social Services and all services for children and young people, and Director of a Pathfinder Children's Trust. She is a member of the GMB Union, the Co-operative Party and the Fabian Society.
From 2002 to 2005, she worked as a consultant to the charity, the Princess Royal Trust for Carers, researching carers' issues — particularly those related to primary health care. She is co-author of the reports Carers Speak Out and Primary Carers.
Parliamentary career
In the House of Commons, Keeley served as a member of the Constitutional Affairs Select Committee and, from February 2006, the Finance and Services Committee. On 8 February 2006, she was appointed as Parliamentary Private Secretary (PPS) to the Cabinet Office, working with the Cabinet Office Minister, Jim Murphy MP. In June 2006, she moved to be PPS to Jim Murphy as Minister o
https://en.wikipedia.org/wiki/N%C3%A9ron%E2%80%93Severi%20group | In algebraic geometry, the Néron–Severi group of a variety is
the group of divisors modulo algebraic equivalence; in other words it is the group of components of the Picard scheme of a variety. Its rank is called the Picard number. It is named after Francesco Severi and André Néron.
Definition
In the cases of most importance to classical algebraic geometry, for a complete variety V that is non-singular, the connected component of the Picard scheme is an abelian variety written
Pic0(V).
The quotient
Pic(V)/Pic0(V)
is an abelian group NS(V), called the Néron–Severi group of V. This is a finitely-generated abelian group by the Néron–Severi theorem, which was proved by Severi over the complex numbers and by Néron over more general fields.
In other words, the Picard group fits into an exact sequence

0 → Pic0(V) → Pic(V) → NS(V) → 0.
The fact that the rank is finite is Francesco Severi's theorem of the base; the rank is the Picard number of V, often denoted ρ(V). The elements of finite order are called Severi divisors, and form a finite group which is a birational invariant and whose order is called the Severi number. Geometrically NS(V) describes the algebraic equivalence classes of divisors on V; that is, using a stronger, non-linear equivalence relation in place of linear equivalence of divisors, the classification becomes amenable to discrete invariants. Algebraic equivalence is closely related to numerical equivalence, an essentially topological classification by intersection numbers.
First Chern class and integral valued 2-cocycles
The exponential sheaf sequence

0 → Z → O_V → O_V* → 0

gives rise to a long exact sequence featuring

H1(V, O_V*) → H2(V, Z) → H2(V, O_V).

The first arrow is the first Chern class on the Picard group, Pic(V) = H1(V, O_V*), and the Néron–Severi group can be identified with its image.
Equivalently, by exactness, the Néron–Severi group is the kernel of the second arrow, H2(V, Z) → H2(V, O_V).
In the complex case, the Néron–Severi group is therefore the group of 2-cocycles whose Poincaré dual is represented by a complex hypersurface, that is, a Weil divisor.
For |
https://en.wikipedia.org/wiki/Virotherapy | Virotherapy is a treatment using biotechnology to convert viruses into therapeutic agents by reprogramming viruses to treat diseases. There are three main branches of virotherapy: anti-cancer oncolytic viruses, viral vectors for gene therapy and viral immunotherapy. These branches use three different types of treatment methods: gene overexpression, gene knockout, and suicide gene delivery. Gene overexpression adds genetic sequences that compensate for low to zero levels of needed gene expression. Gene knockout uses RNA methods to silence or reduce expression of disease-causing genes. Suicide gene delivery introduces genetic sequences that induce an apoptotic response in cells, usually to kill cancerous growths. In a slightly different context, virotherapy can also refer more broadly to the use of viruses to treat certain medical conditions by killing pathogens.
History
Chester M. Southam, a researcher at Memorial Sloan Kettering Cancer Center, pioneered the study of viruses as potential agents to treat cancer.
Oncolytic virotherapy
Oncolytic virotherapy is not a new idea – as early as the mid 1950s doctors were noticing that cancer patients who suffered a non-related viral infection, or who had been vaccinated recently, showed signs of improvement; this has been largely attributed to the production of interferon and tumour necrosis factors in response to viral infection, but oncolytic viruses are being designed that selectively target and lyse only cancerous cells.
Studies in animal models evaluating the use of viruses in the treatment of tumours were conducted in the 1940s and 1950s, and some of the earliest human clinical trials with oncolytic viruses were started in the same period.
Mechanism
It is believed that oncolytic viruses achieve their goals by two mechanisms: selective killing of tumor cells as well as recruitment of the host immune system. One of the major challenges in cancer treatment is finding treatments that target tumor cells while ignoring non-canc
https://en.wikipedia.org/wiki/Darkdevil | Darkdevil (Reilly Tyne) is a superhero appearing in American comic books published by Marvel Comics. Created by Tom DeFalco and Pat Olliffe, the character first appeared in Spider-Girl #2 (November 1998). Darkdevil primarily appears in the Marvel Comics 2 future of the Marvel Universe.
Publication history
Darkdevil debuted in Spider-Girl #2 (November 1998), by writer Tom DeFalco and artist Pat Olliffe. He appeared in the 2000 Darkdevil series, his first solo comic book series. He also appeared in the 2005 Last Hero Standing series.
Fictional character biography
Reilly Tyne is the son of Ben Reilly (Spider-Man's clone) and Elizabeth Tyne. Before he reached his teens, his inherited powers began to manifest but brought with them clonal degeneration. Kaine, the degenerated first clone of Peter Parker, found him, and placed him within a regeneration tank to slow the process. Kaine's efforts were for two goals: to resurrect Daredevil, who had previously died saving Kaine, and to heal Tyne. Kaine summoned the demon Zarathos, which attempted to possess Tyne, but he was saved by the soul of Daredevil, who drove out Zarathos, although both Daredevil's soul and a piece of the demon remained within Tyne, and he was left with a demonic appearance and certain demonic abilities. Through meditation and concentration, Tyne eventually learned to project a human appearance, but he now looked to be in his twenties, almost twice his actual age. Following in both of Daredevil's paths, he studied law and became an attorney, while taking on a costume bearing a resemblance to Daredevil's and using his demonic abilities to fight crime as Darkdevil. He apparently has access to at least some of Daredevil's memories, since he knows Spider-Man's secret identity.
Darkdevil has fought alongside Spider-Girl several times, as well as the semi-retired Spider-Man. Neither Spider-Man nor Spider-Girl are aware of his genetic relation to them, but Darkdevil has hinted that he owes his existence to the ori |
https://en.wikipedia.org/wiki/Singleton%20bound | In coding theory, the Singleton bound, named after Richard Collom Singleton, is a relatively crude upper bound on the size of an arbitrary block code with block length n, size M and minimum distance d. It is also known as the Joshi bound, proved by Joshi (1958) and even earlier by Komamiya (1953).
Statement of the bound
The minimum distance of a set C of codewords of length n is defined as

d = min { d(x, y) : x, y ∈ C, x ≠ y }

where d(x, y) is the Hamming distance between x and y. The expression A_q(n, d) represents the maximum number of possible codewords in a q-ary block code of length n and minimum distance d.
Then the Singleton bound states that

A_q(n, d) ≤ q^(n−d+1).
Proof
First observe that the number of q-ary words of length n is q^n, since each letter in such a word may take one of q different values, independently of the remaining letters.
Now let C be an arbitrary q-ary block code of minimum distance d. Clearly, all codewords are distinct. If we puncture the code by deleting the first d − 1 letters of each codeword, then all resulting codewords must still be pairwise different, since all of the original codewords in C have Hamming distance at least d from each other. Thus the size of the altered code is the same as the original code.
The newly obtained codewords each have length

n − (d − 1) = n − d + 1,

and thus, there can be at most q^(n−d+1) of them. Since C was arbitrary, this bound must hold for the largest possible code with these parameters, thus:

A_q(n, d) ≤ q^(n−d+1).
Linear codes
If C is a linear code with block length n, dimension k and minimum distance d over the finite field with q elements, then the maximum number of codewords is q^k and the Singleton bound implies:

q^k ≤ q^(n−d+1),

so that

k ≤ n − d + 1,

which is usually written as

d ≤ n − k + 1.
In the linear code case a different proof of the Singleton bound can be obtained by observing that the rank of the parity check matrix is n − k. Another simple proof follows from observing that the rows of any generator matrix in standard form have weight at most n − k + 1.
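A minimal brute-force check of the linear-code form of the bound, d ≤ n − k + 1 (an illustration; the generator matrix below is one standard choice for the binary [7,4] Hamming code, assumed for the example):

```python
# Brute-force check of the Singleton bound d <= n - k + 1 for the
# binary [7,4] Hamming code.
from itertools import product

# A generator matrix in standard form [I | P] (assumed for illustration).
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]
n, k = 7, 4

# Enumerate all 2^k codewords by encoding every message over GF(2).
codewords = set()
for msg in product([0, 1], repeat=k):
    word = tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))
    codewords.add(word)

# For a linear code, minimum distance = minimum weight of a nonzero codeword.
d = min(sum(w) for w in codewords if any(w))

print(f"n={n}, k={k}, d={d}")   # expect d = 3
assert d <= n - k + 1           # Singleton bound: 3 <= 4
```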
History
The usual citation given for this result is Singleton (1964), but it was proven earlier by Joshi (1958). Joshi notes that the result was obtained even earlier by Komamiya (1953) using a more complex proof. also
https://en.wikipedia.org/wiki/Reconstruction%20filter | In a mixed-signal system (analog and digital), a reconstruction filter, sometimes called an anti-imaging filter, is used to construct a smooth analog signal from a digital input, as in the case of a digital to analog converter (DAC) or other sampled data output device.
Sampled data reconstruction filters
The sampling theorem describes why the input of an ADC requires a low-pass analog electronic filter, called the anti-aliasing filter: the sampled input signal must be bandlimited to prevent aliasing (here meaning waves of higher frequency being recorded as a lower frequency).
For the same reason, the output of a DAC requires a low-pass analog filter, called a reconstruction filter - because the output signal must be bandlimited, to prevent imaging (meaning Fourier coefficients being reconstructed as spurious high-frequency 'mirrors'). This is an implementation of the Whittaker–Shannon interpolation formula.
Ideally, both filters should be brickwall filters, with constant phase delay and constant flat frequency response in the pass-band, and zero response from the Nyquist frequency onward. This can be achieved by a filter with a 'sinc' impulse response.
Implementation
While in theory a DAC outputs a series of discrete Dirac impulses, in practice, a real DAC outputs pulses with finite bandwidth and width. Both idealized Dirac pulses, zero-order held steps and other output pulses, if unfiltered, would contain spurious high-frequency replicas, or "images", of the original bandlimited signal. Thus, the reconstruction filter smooths the waveform to remove image frequencies (copies) above the Nyquist limit. In doing so, it reconstructs the continuous time signal (whether originally sampled, or modelled by digital logic) corresponding to the digital time sequence.
Practical filters have non-flat frequency or phase response in the pass band and incomplete suppression of the signal elsewhere. The ideal sinc waveform has an infinite response to a signal, in both the positive and ne |
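A minimal numerical sketch of the practical situation just described (parameters are arbitrary assumptions): a Hamming-windowed, truncated sinc serves as a realizable stand-in for the ideal brickwall reconstruction filter when upsampling a signal.

```python
# Truncated, windowed-sinc reconstruction filter (illustrative parameters).
import numpy as np

def windowed_sinc_lowpass(cutoff, fs, num_taps=101):
    """FIR approximation of the ideal 'sinc' reconstruction filter."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = (2 * cutoff / fs) * np.sinc(2 * cutoff / fs * n)  # ideal impulse response
    return h * np.hamming(num_taps)                       # truncate with a window

fs, up = 8000, 4                  # original sample rate and upsampling factor
t = np.arange(64) / fs
x = np.sin(2 * np.pi * 440 * t)   # a 440 Hz test tone

# Zero-stuff (up-1 zeros between samples), then remove the spectral images
# that zero-stuffing creates above the original Nyquist frequency.
x_up = np.zeros(len(x) * up)
x_up[::up] = x
h = windowed_sinc_lowpass(cutoff=fs / 2, fs=fs * up)
y = up * np.convolve(x_up, h, mode="same")  # gain of `up` restores amplitude
```

Because the window truncates the infinite sinc, the result shows exactly the non-flat passband and incomplete stopband suppression described above; more taps trade latency for a closer approach to the ideal.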
https://en.wikipedia.org/wiki/Tinidazole | Tinidazole, sold under the brand name Tindamax among others, is a medication used against protozoan infections. It is widely known throughout Europe and the developing world as a treatment for a variety of anaerobic amoebic and bacterial infections. It was developed in 1972 and is a prominent member of the nitroimidazole antibiotic class.
It is on the World Health Organization's List of Essential Medicines.
Medical uses
Tinidazole may be a therapeutic alternative in the setting of metronidazole intolerance. Tinidazole is used to treat Helicobacter pylori, amoebic dysentery, Giardia and Trichomonas vaginalis infections.
Side effects
Drinking alcohol while taking tinidazole causes an unpleasant disulfiram-like reaction, which includes nausea, vomiting, headache, increased blood pressure, flushing, and shortness of breath.
Half-life
Elimination half-life is 13.2 ± 1.4 hours. Plasma half-life is 12 to 14 hours. |
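A worked example of what the quoted elimination half-life implies (simple first-order kinetics assumed; not a dosing recommendation):

```python
# Fraction of a tinidazole dose remaining after t hours, assuming
# first-order elimination with the quoted 13.2 h half-life.
HALF_LIFE_H = 13.2

def fraction_remaining(t_hours, t_half=HALF_LIFE_H):
    return 0.5 ** (t_hours / t_half)

print(f"{fraction_remaining(24):.2f}")  # ~0.28 of the dose remains after 24 h
```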
https://en.wikipedia.org/wiki/Software%20industry | The software industry includes businesses for development, maintenance and publication of software that are using different business models, mainly either "license/maintenance based" (on-premises) or "Cloud based" (such as SaaS, PaaS, IaaS, MBaaS, MSaaS, DCaaS etc.). The industry also includes software services, such as training, documentation, consulting and data recovery. The software and computer services industry spends more than 11% of its net sales on research & development, the second-highest share among industries after pharmaceuticals & biotechnology.
History
The first company founded to provide software products and services was Computer Usage Company in 1955. Before that time, computers were programmed either by customers, or the few commercial computer vendors of the time, such as Sperry Rand and IBM.
The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities. Universities, government, and business customers created a demand for software. Many of these programs were written in-house by full-time staff programmers. Some were distributed freely between users of a particular machine for no charge. Others were done on a commercial basis, and other firms such as Computer Sciences Corporation (founded in 1959) started to grow. Other influential or typical software companies begun in the early 1960s included Advanced Computer Techniques, Automatic Data Processing, Applied Data Research, and Informatics General. The computer/hardware makers started bundling operating systems, systems software and programming environments with their machines.
When Digital Equipment Corporation (DEC) brought a relatively low-priced minicomputer to market, it brought computing within the reach of many more companies and universities worldwide, and it spawned great innovation in terms of new, powerful programming languages and methodologies. New software was built for microcomputer
https://en.wikipedia.org/wiki/Elliptic%20filter | An elliptic filter (also known as a Cauer filter, named after Wilhelm Cauer, or as a Zolotarev filter, after Yegor Zolotarev) is a signal processing filter with equalized ripple (equiripple) behavior in both the passband and the stopband. The amount of ripple in each band is independently adjustable, and no other filter of equal order can have a faster transition in gain between the passband and the stopband, for the given values of ripple (whether the ripple is equalized or not). Alternatively, one may give up the ability to adjust independently the passband and stopband ripple, and instead design a filter which is maximally insensitive to component variations.
As the ripple in the stopband approaches zero, the filter becomes a type I Chebyshev filter. As the ripple in the passband approaches zero, the filter becomes a type II Chebyshev filter and finally, as both ripple values approach zero, the filter becomes a Butterworth filter.
The gain of a lowpass elliptic filter as a function of angular frequency ω is given by:

G_n(ω) = 1 / √(1 + ε² R_n²(ξ, ω/ω₀))

where R_n is the nth-order elliptic rational function (sometimes known as a Chebyshev rational function) and

ω₀ is the cutoff frequency
ε is the ripple factor
ξ is the selectivity factor
The value of the ripple factor specifies the passband ripple, while the combination of the ripple factor and the selectivity factor specify the stopband ripple.
Properties
In the passband, the elliptic rational function varies between zero and unity. The gain of the passband therefore will vary between 1 and 1/√(1 + ε²).
In the stopband, the elliptic rational function varies between infinity and the discrimination factor L_n, which is defined as:

L_n = R_n(ξ, ξ).

The gain of the stopband therefore will vary between 0 and 1/√(1 + ε² L_n²).
In the limit of ξ → ∞ the elliptic rational function becomes a Chebyshev polynomial, and therefore the filter becomes a Chebyshev type I filter, with ripple factor ε
Since the Butterworth filter is a limiting form of the Chebyshev filter, it follows that in the limit of , an |
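A minimal design sketch using SciPy's elliptic-filter routine (the library and all numeric choices are assumptions for illustration, not part of the source):

```python
# Design a 4th-order lowpass elliptic filter: 1 dB passband ripple,
# 40 dB stopband attenuation, cutoff at 0.3 of the Nyquist frequency.
import numpy as np
from scipy import signal

b, a = signal.ellip(4, 1, 40, 0.3, btype="low")

# Inspect the equiripple behaviour in both bands.
w, h = signal.freqz(b, a)
gain_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))
print("passband peak (dB):", gain_db[w < 0.3 * np.pi].max())
```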
https://en.wikipedia.org/wiki/Address%20geocoding | Address geocoding, or simply geocoding, is the process of taking a text-based description of a location, such as an address or the name of a place, and returning geographic coordinates, frequently latitude/longitude pair, to identify a location on the Earth's surface. Reverse geocoding, on the other hand, converts geographic coordinates to a description of a location, usually the name of a place or an addressable location. Geocoding relies on a computer representation of address points, the street / road network, together with postal and administrative boundaries.
Geocode (verb): provide geographical coordinates corresponding to (a location).
Geocode (noun): a code that represents a geographic entity (location or object). In general it is a short, human-readable identifier, such as a nominal geocode like ISO 3166-1 alpha-2, or a grid geocode like a Geohash.
Geocoder (noun): a piece of software or a (web) service that implements a geocoding process, i.e. a set of interrelated components in the form of operations, algorithms, and data sources that work together to produce a spatial representation for descriptive locational references.
The geographic coordinates representing locations often vary greatly in positional accuracy. Examples include building centroids, land parcel centroids, interpolated locations based on thoroughfare ranges, street segment centroids, postal code centroids (e.g. ZIP codes, CEDEX), and administrative division centroids.
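A toy sketch of the "interpolated locations based on thoroughfare ranges" case mentioned above (the algorithm and the segment data are illustrative assumptions, not any real geocoder's internals):

```python
# Range-based address interpolation along one street segment.
def interpolate_address(number, segment):
    """segment: (low_no, high_no, (lat0, lon0), (lat1, lon1)) for a block face."""
    low, high, (lat0, lon0), (lat1, lon1) = segment
    f = (number - low) / (high - low)      # fractional position along the block
    return lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0)

# Hypothetical block: house numbers 100-198 between two intersections.
segment = (100, 198, (40.7410, -73.9896), (40.7421, -73.9874))
print(interpolate_address(150, segment))   # roughly mid-block coordinates
```

This is also why interpolated results are less positionally accurate than building or parcel centroids: house numbers are assumed to be evenly spaced along the segment.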
History
Geocoding – a subset of Geographic Information System (GIS) spatial analysis – has been a subject of interest since the early 1960s.
1960s
In 1960, the first operational GIS – named the Canada Geographic Information System (CGIS) – was invented by Dr. Roger Tomlinson, who has since been acknowledged as the father of GIS. The CGIS was used to store and analyze data collected for the Canada Land Inventory, which mapped information about agriculture, wildlife, and forestry at a scale of 1:50,000, in order |
https://en.wikipedia.org/wiki/Repressor | In molecular genetics, a repressor is a DNA- or RNA-binding protein that inhibits the expression of one or more genes by binding to the operator or associated silencers. A DNA-binding repressor blocks the attachment of RNA polymerase to the promoter, thus preventing transcription of the genes into messenger RNA. An RNA-binding repressor binds to the mRNA and prevents translation of the mRNA into protein. This blocking or reducing of expression is called repression.
Function
If an inducer, a molecule that initiates the gene expression, is present, then it can interact with the repressor protein and detach it from the operator. RNA polymerase then can transcribe the message (expressing the gene). A co-repressor is a molecule that can bind to the repressor and make it bind to the operator tightly, which decreases transcription.
A repressor that binds with a co-repressor is termed an aporepressor or inactive repressor. One type of aporepressor is the trp repressor, an important metabolic protein in bacteria. The above mechanism of repression is a type of a feedback mechanism because it only allows transcription to occur if a certain condition is present: the presence of specific inducer(s). In contrast, an active repressor binds directly to an operator to repress gene expression.
While repressors are more commonly found in prokaryotes, they are rare in eukaryotes. Furthermore, most known eukaryotic repressors are found in simple organisms (e.g. yeast), and act by interacting directly with activators. This contrasts with prokaryotic repressors, which can also alter DNA or RNA structure.
Within the eukaryotic genome are regions of DNA known as silencers. These are DNA sequences that bind to repressors to partially or fully repress a gene. Silencers can be located several bases upstream or downstream from the actual promoter of the gene. Repressors can also have two binding sites: one for the silencer region and one for the promoter. This causes chromosome looping, allowi |
https://en.wikipedia.org/wiki/Cystocele | The cystocele, also known as a prolapsed bladder, is a medical condition in which a woman's bladder bulges into her vagina. Some may have no symptoms. Others may have trouble starting urination, urinary incontinence, or frequent urination. Complications may include recurrent urinary tract infections and urinary retention. Cystocele and a prolapsed urethra often occur together; the combination is called a cystourethrocele. Cystocele can negatively affect quality of life.
Causes include childbirth, constipation, chronic cough, heavy lifting, hysterectomy, genetics, and being overweight. The underlying mechanism involves weakening of muscles and connective tissue between the bladder and vagina. Diagnosis is often based on symptoms and examination.
If the cystocele causes few symptoms, avoiding heavy lifting or straining may be all that is recommended. In those with more significant symptoms a vaginal pessary, pelvic muscle exercises, or surgery may be recommended. The type of surgery typically done is known as a colporrhaphy. The condition becomes more common with age. About a third of women over the age of 50 are affected to some degree.
Signs and symptoms
The symptoms of a cystocele may include:
a vaginal bulge
the feeling that something is falling out of the vagina
the sensation of pelvic heaviness or fullness
difficulty starting a urine stream
a feeling of incomplete urination
frequent or urgent urination
fecal incontinence
frequent urinary tract infections
back and pelvic pain
fatigue
painful sexual intercourse
bleeding
A bladder that has dropped from its normal position and into the vagina can cause some forms of incontinence and incomplete emptying of the bladder.
Complications
Complications may include urinary retention, recurring urinary tract infections and incontinence. The anterior vaginal wall may actually protrude through the vaginal introitus (opening). This can interfere with sexual activity. Recurrent urinary tract infections are common for those wh
https://en.wikipedia.org/wiki/Next-generation%20network | The next-generation network (NGN) is a body of key architectural changes in telecommunication core and access networks. The general idea behind the NGN is that one network transports all information and services (voice, data, and all sorts of media such as video) by encapsulating these into IP packets, similar to those used on the Internet. NGNs are commonly built around the Internet Protocol, and therefore the term all IP is also sometimes used to describe the transformation of formerly telephone-centric networks toward NGN.
NGN is a different concept from Future Internet, which is more focused on the evolution of Internet in terms of the variety and interactions of services offered.
Introduction of NGN
According to ITU-T, the definition is:
A next-generation network (NGN) is a packet-based network which can provide services including Telecommunication Services and is able to make use of multiple broadband, quality of service-enabled transport technologies and in which service-related functions are independent from underlying transport-related technologies. It offers unrestricted access by users to different service providers. It supports generalized mobility which will allow consistent and ubiquitous provision of services to users.
From a practical perspective, NGN involves three main architectural changes that need to be looked at separately:
In the core network, NGN implies a consolidation of several (dedicated or overlay) transport networks each historically built for a different service into one core transport network (often based on IP and Ethernet). It implies amongst others the migration of voice from a circuit-switched architecture (PSTN) to VoIP, and also migration of legacy services such as X.25, Frame Relay (either commercial migration of the customer to a new service like IP VPN, or technical emigration by emulation of the "legacy service" on the NGN).
In the wired access network, NGN implies the migration from the dual system of legacy voice n |
https://en.wikipedia.org/wiki/LOM%20port | The LOM port (Lights Out Management port) is a remote access facility on a Sun Microsystems server. When the main processor is switched off, or when it is impossible to telnet to the server, an operator would use a link to the LOM port to access the server. As long as the server has power, the LOM facility will work, regardless of whether or not the main processor is switched on.
To use the LOM port, a rollover cable is connected to the LOM port, which is located at the back of the Sun server. The other end of the cable is connected to a terminal or a PC running a terminal emulator. The terminal or emulator must be set to a transmission rate of 9600 bits per second, with hardware flow control enabled.
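A hedged sketch of opening that console link programmatically, using the third-party pyserial package (the package and the device path are assumptions; any terminal emulator with the same settings works equally well):

```python
# Open a serial console to a LOM port: 9600 bit/s, hardware (RTS/CTS)
# flow control, as described above. /dev/ttyS0 is a hypothetical device.
import serial

with serial.Serial("/dev/ttyS0", baudrate=9600, rtscts=True, timeout=1) as lom:
    lom.write(b"\r")                                # wake the LOM prompt
    print(lom.read(256).decode(errors="replace"))   # show whatever it answers
```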
Implementations
Specific implementations include:
Advanced Lights Out Management (ALOM), which is Sun Microsystems-specific and comes standard on newer Sun servers (SunFire V125/V210/V215/V240/V245/V250/V440/T1000/T2000, Sun Netra 210/240/440).
Integrated Lights Out Management (ILOM), Sun Microsystems's ALOM replacement on Sun x64 server SunFire X4100(M2)/X4200(M2)/X4600(M2)/X4140/X4240/X4440/X4150/X4250/X4450/X4170/X4270/X2250/X2270, Sun Blade 6000 Chassis Management Module/Blade Module(X6220/X6420/X6240/X6440/X6250/X6450/X6270/X6275), Sun CMT servers/blades (Sun T5120, T5220, T5240, T6340, T6320). Not to be confused with the similar-sounding HP Integrated Lights-Out management technology.
Lomlite and Lomlite2, single-chip implementations on the Netra T1 and possibly others. In the cases of the T1-200 and X1, the OpenBoot firmware implements lom@ and lom! commands allowing access to the registers representing temperature, voltage, etc.
See also
Out-of-band management
Power distribution unit
External links
Netra-T1 AC200 LOM Usage
Computer buses
Out-of-band management
Sun Microsystems hardware |
https://en.wikipedia.org/wiki/Bone%20morphogenetic%20protein | Bone morphogenetic proteins (BMPs) are a group of growth factors also known as cytokines and as metabologens. Originally discovered by their ability to induce the formation of bone and cartilage, BMPs are now considered to constitute a group of pivotal morphogenetic signals, orchestrating tissue architecture throughout the body. The important functioning of BMP signals in physiology is emphasized by the multitude of roles for dysregulated BMP signalling in pathological processes. Cancerous disease often involves misregulation of the BMP signalling system. Absence of BMP signalling is, for instance, an important factor in the progression of colon cancer, and conversely, overactivation of BMP signalling following reflux-induced esophagitis provokes Barrett's esophagus and is thus instrumental in the development of esophageal adenocarcinoma.
Recombinant human BMPs (rhBMPs) are used in orthopedic applications such as spinal fusions, nonunions, and oral surgery. rhBMP-2 and rhBMP-7 are Food and Drug Administration (FDA)-approved for some uses. rhBMP-2 causes more bone overgrowth than any other BMP and is widely used off-label.
Medical uses
BMPs for clinical use are produced using recombinant DNA technology (recombinant human BMPs; rhBMPs). Recombinant BMP-2 and BMP-7 are currently approved for human use.
rhBMPs are used in oral surgeries. BMP-7 has also recently found use in the treatment of chronic kidney disease (CKD). BMP-7 has been shown in murine animal models to reverse the loss of glomeruli due to sclerosis.
A 2022 study by researchers from the Mayo Clinic, Maastricht University, and Ethris GmbH, a biotech company that focuses on RNA therapeutics, found that chemically modified mRNA encoding BMP-2 promoted dosage-dependent healing of femoral osteotomies in male rats. The mRNA molecules were complexed within nonviral lipid particles, loaded onto sponges, and surgically implanted into the bone defects. They remained localized around the site of application. Comp
https://en.wikipedia.org/wiki/Permissible%20stress%20design | Permissible stress design is a design philosophy used by mechanical engineers and civil engineers.
The civil designer ensures that the stresses developed in a structure due to service loads do not exceed the elastic limit. This is usually achieved by keeping the calculated stresses within allowable values that incorporate factors of safety.
In structural engineering, the permissible stress design approach has generally been replaced internationally by limit state design (also known as ultimate stress design, or, in the USA, Load and Resistance Factor Design, LRFD), except for some isolated cases.
In USA structural engineering construction, allowable stress design (ASD) has not yet been completely superseded by limit state design, except in the case of suspension bridges, which changed from allowable stress design to limit state design in the 1960s. Wood, steel, and other materials are still frequently designed using allowable stress design, although LRFD is probably more commonly taught in the USA university system.
In mechanical engineering design such as design of pressure equipment, the method uses the actual loads predicted to be experienced in practice to calculate stress and deflection. Such loads may include pressure thrusts and the weight of materials. The predicted stresses and deflections are compared with allowable values that have a "factor" against various failure mechanisms such as leakage, yield, ultimate load prior to plastic failure, buckling, brittle fracture, fatigue, and vibration/harmonic effects. However, the predicted stresses almost always assumes the material is linear elastic. The "factor" is sometimes called a factor of safety, although this is technically incorrect because the factor includes allowance for matters such as local stresses and manufacturing imperfections that are not specifically calculated; exceeding the allowable values is not considered to be good practice (i.e. is not "safe |
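A minimal sketch of the allowable-stress comparison described above (all numbers are illustrative assumptions):

```python
# Permissible-stress check: working stress must not exceed the allowable
# value, i.e. a material limit divided by a factor of safety.
def passes_permissible_stress(load_n, area_m2, yield_pa, safety_factor=1.5):
    working_stress = load_n / area_m2
    allowable = yield_pa / safety_factor
    return working_stress <= allowable

# 50 kN of tension on a 4 cm^2 bar of 250 MPa-yield steel, factor 1.5:
print(passes_permissible_stress(50e3, 4e-4, 250e6))  # 125 MPa <= ~167 MPa -> True
```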
https://en.wikipedia.org/wiki/Classical%20Heisenberg%20model | The Classical Heisenberg model, developed by Werner Heisenberg, is the n = 3 case of the n-vector model, one of the models used in statistical physics to model ferromagnetism and other phenomena.
Definition
It can be formulated as follows: take a d-dimensional lattice, and a set of spins of unit length

s_i ∈ R³, |s_i| = 1,

each one placed on a lattice node.

The model is defined through the following Hamiltonian:

H = −∑_{i,j} J_{ij} s_i · s_j

with J_{ij} a coupling between spins.
Properties
The general mathematical formalism used to describe and solve the Heisenberg model and certain generalizations is developed in the article on the Potts model.
In the continuum limit the Heisenberg model gives the following equation of motion:

∂S/∂t = S × ∂²S/∂x²

where S(x, t) is the unit spin field. This equation is called the continuous classical Heisenberg ferromagnet equation or, for short, the Heisenberg model, and is integrable in the sense of soliton theory. It admits several integrable and nonintegrable generalizations, like the Landau-Lifshitz equation, the Ishimori equation, and so on.
One dimension
In case of long range interaction, J_{ij} ≈ |i−j|^(−α), the thermodynamic limit is well defined if α > 1; the magnetization remains zero if α ≥ 2; but the magnetization is positive, at low enough temperature, if 1 < α < 2 (infrared bounds).
As in any 'nearest-neighbor' n-vector model with free boundary conditions, if the external field is zero, there exists a simple exact solution.
Two dimensions
In the case of long-range interaction, J_{ij} ≈ |i−j|^(−α), the thermodynamic limit is well defined if α > 2; the magnetization remains zero if α ≥ 4; but the magnetization is positive at low enough temperature if 2 < α < 4 (infrared bounds).
Polyakov has conjectured that, as opposed to the classical XY model, there is no dipole phase for any T > 0; i.e. at non-zero temperature the correlations cluster exponentially fast.
Three and higher dimensions
Independently of the range of the interaction, at low enough temperature the magnetization is positive.
Conjecturally, in each of the low temperature extremal states the truncated correlations decay algebraically.
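A small Metropolis Monte Carlo sketch of the model defined above (lattice size, temperature and sweep count are arbitrary assumptions; J = 1, unit spins, 2D periodic lattice):

```python
# Metropolis sampling of the classical Heisenberg model H = -J sum s_i . s_j.
import numpy as np

rng = np.random.default_rng(0)
L, J, T, sweeps = 8, 1.0, 0.5, 200

def random_unit_vectors(*shape):
    v = rng.normal(size=(*shape, 3))
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

spins = random_unit_vectors(L, L)

def neighbour_sum(s, i, j):
    # Sum of the four nearest-neighbour spins, periodic boundaries.
    return (s[(i + 1) % L, j] + s[(i - 1) % L, j]
            + s[i, (j + 1) % L] + s[i, (j - 1) % L])

for _ in range(sweeps * L * L):
    i, j = rng.integers(L, size=2)
    proposal = random_unit_vectors()
    dE = -J * np.dot(proposal - spins[i, j], neighbour_sum(spins, i, j))
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i, j] = proposal

m = np.linalg.norm(spins.mean(axis=(0, 1)))
print(f"magnetization per spin ~ {m:.2f}")  # sizeable at this low temperature
```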
See al |
https://en.wikipedia.org/wiki/PGLO | The pGLO plasmid is an engineered plasmid used in biotechnology as a vector for creating genetically modified organisms. The plasmid contains several reporter genes, most notably the green fluorescent protein (GFP) and the ampicillin resistance gene. GFP was isolated from the jellyfish Aequorea victoria. Because it shares a bidirectional promoter with a gene for metabolizing arabinose, the GFP gene is expressed in the presence of arabinose, which makes the transgenic organism express its fluorescence under UV light. GFP can be induced in bacteria containing the pGLO plasmid by growing them on +arabinose plates. pGLO is made by Bio-Rad Laboratories.
Structure
pGLO is made up of three genes that are joined together using recombinant DNA technology. They are as follows:
Bla, which codes for the enzyme beta-lactamase giving the transformed bacteria resistance to the beta-lactam family of antibiotics (such as of the penicillin family)
araC, which encodes a regulatory protein that controls the arabinose promoter driving GFP expression (specifically, the GFP gene will be expressed only in the presence of arabinose)
GFP, the green fluorescent protein, which gives a green glow if cells produce this type of protein
Like most other circular plasmids, the pGLO plasmid contains an origin of replication (ori), which is a region of the plasmid where replication will originate.
The pGLO plasmid was made famous by researchers in France who used it to produce a green fluorescent rabbit named Alba.
Other features on pGLO, like most other plasmids, include a selectable marker and an MCS (multiple cloning site) located at the end of the GFP gene. The plasmid is 5371 base pairs long. In supercoiled form, it runs on an agarose gel in the 4200–4500 range.
Discovery of GFP
The GFP gene was first observed by Osamu Shimomura and his team in 1962 while studying the jellyfish Aequorea victoria that have a ring of blue light under their umbrella. Shimomura and his team isolated the protein aequorin from thousands of jellyfis |
https://en.wikipedia.org/wiki/Geographic%20information%20system%20software | A GIS software program is a computer program to support the use of a geographic information system, providing the ability to create, store, manage, query, analyze, and visualize geographic data, that is, data representing phenomena for which location is important. The GIS software industry encompasses a broad range of commercial and open-source products that provide some or all of these capabilities within various information technology architectures.
History
The earliest geographic information systems, such as the Canadian Geographic Information System started in 1963, were bespoke programs developed specifically for a single installation (usually a government agency), based on custom-designed data models. During the 1950s and 1960s, academic researchers during the quantitative revolution of geography began writing computer programs to perform spatial analysis, especially at the University of Washington and the University of Michigan, but these were also custom programs that were rarely available to other potential users.
Perhaps the first general-purpose software that provided a range of GIS functionality was the Synagraphic Mapping Package (SYMAP), developed by Howard T. Fisher and others at the nascent Harvard Laboratory for Computer Graphics and Spatial Analysis starting in 1965. While not a true full-range GIS program, it included some basic mapping and analysis functions, and was freely available to other users. Through the 1970s, the Harvard Lab continued to develop and publish other packages focused on automating specific operations, such as SYMVU (3-D surface visualization), CALFORM (choropleth maps), POLYVRT (topological vector data management), WHIRLPOOL (vector overlay), GRID and IMGRID (raster data management), and others. During the late 1970s, several of these modules were brought together into Odyssey, one of the first commercial complete GIS programs, released in 1980.
During the late 1970s and early 1980s, GIS was emerging in many large gover |
https://en.wikipedia.org/wiki/Nerve%20conduction%20study | A nerve conduction study (NCS) is a medical diagnostic test commonly used to evaluate the function, especially the ability of electrical conduction, of the motor and sensory nerves of the human body. These tests may be performed by medical specialists such as clinical neurophysiologists, physical therapists, physiatrists (physical medicine and rehabilitation physicians), and neurologists who subspecialize in electrodiagnostic medicine. In the United States, neurologists and physiatrists receive training in electrodiagnostic medicine (performing needle electromyography (EMG) and NCSs) as part of residency training and in some cases acquire additional expertise during a fellowship in clinical neurophysiology, electrodiagnostic medicine, or neuromuscular medicine. Outside the US, clinical neurophysiologists learn needle EMG and NCS testing.
Nerve conduction velocity (NCV) is a common measurement made during this test. The term NCV often is used to mean the actual test, but this may be misleading, since velocity is only one measurement in the test suite.
Medical uses
Nerve conduction studies along with needle electromyography measure nerve and muscle function, and may be indicated when there is pain in the limbs, weakness from spinal nerve compression, or concern about some other neurologic injury or disorder. Spinal nerve injury does not cause neck pain, mid back pain or low back pain, and for this reason, evidence has not shown EMG or NCS to be helpful in diagnosing causes of axial lumbar pain, thoracic pain, or cervical spine pain.
Nerve conduction studies are used mainly for evaluation of paresthesias (numbness, tingling, burning) and/or weakness of the arms and legs. The type of study required is dependent in part by the symptoms presented. A physical exam and thorough history also help to direct the investigation. Some of the common disorders that can be diagnosed by nerve conduction studies are:
Carpal tunnel syndrome
Cubital Tunnel Syndrome
Guillain–Barré syn |
https://en.wikipedia.org/wiki/Multiple%20cloning%20site | A multiple cloning site (MCS), also called a polylinker, is a short segment of DNA which contains many (up to ~20) restriction sites - a standard feature of engineered plasmids. Restriction sites within an MCS are typically unique, occurring only once within a given plasmid. The purpose of an MCS in a plasmid is to allow a piece of DNA to be inserted into that region.
An MCS is found in a variety of vectors, including cloning vectors to increase the number of copies of target DNA, and in expression vectors to create a protein product. In expression vectors, the MCS is located downstream of the promoter.
Creating a multiple cloning site
In some instances, a vector may not contain an MCS. Rather, an MCS can be added to a vector. The first step is designing complementary oligonucleotide sequences that contain restriction enzyme sites, along with additional bases at the ends that are complementary to the digested vector. The oligonucleotide sequences can then be annealed and ligated into the digested and purified vector. The digested vector is cut with a restriction enzyme that complements the oligonucleotide insert overhangs. After ligation, the vector is transformed into bacteria and the insert verified by sequencing. This method can also be used to add new restriction sites to a multiple cloning site.
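A toy sketch of the site-uniqueness requirement discussed above (the plasmid string is fabricated; the three enzyme recognition sequences are standard, everything else is an illustration):

```python
# Find restriction sites occurring exactly once in a plasmid -- the
# candidates usable within a multiple cloning site.
ENZYMES = {"EcoRI": "GAATTC", "BamHI": "GGATCC", "HindIII": "AAGCTT"}

def unique_sites(plasmid: str) -> dict:
    """Return enzyme -> position for recognition sites found exactly once."""
    hits = {}
    for name, site in ENZYMES.items():
        positions = [i for i in range(len(plasmid) - len(site) + 1)
                     if plasmid[i:i + len(site)] == site]
        if len(positions) == 1:
            hits[name] = positions[0]
    return hits

# EcoRI appears once, BamHI twice (so it is excluded), HindIII not at all.
print(unique_sites("ATGAATTCCGTAGGATCCTTGGATCCAA"))  # {'EcoRI': 2}
```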
Uses
Multiple cloning sites are a feature that allows the insertion of foreign DNA without disrupting the rest of the plasmid, which makes them extremely useful in biotechnology, bioengineering, and molecular genetics. An MCS can aid in making transgenic organisms, more commonly known as genetically modified organisms (GMOs), using genetic engineering. To take advantage of the MCS in genetic engineering, a gene of interest has to be added to the vector during production when the MCS is cut open. After the MCS is cut and ligated it will include the gene of interest and can be amplified to increase gene copy number in a bacterium host. After the bacterium replicates, the
https://en.wikipedia.org/wiki/Stigmatism | In geometric optics, stigmatism refers to the image-formation property of an optical system which focuses a single point source in object space into a single point in image space. Two such points are called a stigmatic pair of the optical system. Many optical systems, even those exhibiting optical aberrations, including astigmatism, have at least one stigmatic pair. Stigmatism is applicable only in the approximation provided by geometric optics. In reality, image formation is, at best, diffraction-limited, and point-like images are not possible due to the wave nature of light.
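The standard quantification of that diffraction limit (textbook optics, not taken from this article) is the Airy pattern of a circular aperture of diameter D: the first dark ring lies at the angle

```latex
\theta \approx 1.22\,\frac{\lambda}{D}
```

so even an aberration-free system images a point source to a spot of radius roughly 1.22 λN in the focal plane, where N = f/D is the f-number.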
https://en.wikipedia.org/wiki/Operator%20product%20expansion | In quantum field theory, the operator product expansion (OPE) is used as an axiom to define the product of fields as a sum over the same fields. As an axiom, it offers a non-perturbative approach to quantum field theory. One example is the vertex operator algebra, which has been used to construct two-dimensional conformal field theories. Whether this result can be extended to QFT in general, thus resolving many of the difficulties of a perturbative approach, remains an open research question.
In practical calculations, such as those needed for scattering amplitudes in various collider experiments, the operator product expansion is used in QCD sum rules to combine results from both perturbative and non-perturbative (condensate) calculations.
2D Euclidean quantum field theory
In 2D Euclidean field theory, operator product expansion is a Laurent series expansion associated with two operators. A Laurent series is a generalization of the Taylor series in that finitely many powers of the inverse of the expansion variable(s) are added to the Taylor series: pole(s) of finite order(s) are added to the series.
Heuristically, in quantum field theory one is interested in the result of physical observables represented by operators. If one wants to know the result of making two physical observations at two points z and w, one can time order these operators in increasing time.
If one maps coordinates in a conformal manner, one is often interested in radial ordering. This is the analogue of time ordering where increasing time has been mapped to some increasing radius on the complex plane. One is also interested in normal ordering of creation operators.
A radial-ordered OPE can be written as a normal-ordered OPE minus the non-normal-ordered terms. The non-normal-ordered terms can often be written as a commutator, and these have useful simplifying identities. The radial ordering supplies the convergence of the expansion.
The result is a convergent expansion of the product |
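A standard two-dimensional illustration (the free boson, a textbook example not drawn from this source): with the convention ⟨φ(z)φ(w)⟩ = −ln(z − w), the product of two currents expands as

```latex
\partial\phi(z)\,\partial\phi(w) \;\sim\; -\frac{1}{(z-w)^{2}} + \text{(regular terms)}
```

exhibiting precisely the finite-order pole that the Laurent-series description above refers to.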
https://en.wikipedia.org/wiki/Endace | Endace Ltd is a privately owned network monitoring company, based in New Zealand and founded in 2001. It provides network visibility and network recording products to large organizations. The company was listed on the London Stock Exchange in 2005 and then delisted in 2013 when it was acquired by Emulex. In 2016 Endace was spun out of Emulex and is currently a private company.
In October 2016, The Intercept revealed that some Endace clients were intelligence agencies, including the British GCHQ (known for conducting massive surveillance on network communications) and the Moroccan DGST, likewise known for mass surveillance of its citizens.
Background and history
Endace was founded after the DAG project at the School of Computing and Mathematical Sciences at the University of Waikato in New Zealand. The first cards designed at the university were intended to measure latency in ATM networks.
In 2006, Endace transitioned from component manufacturer to appliance manufacturer to managed infrastructure provider. The company now sells network visibility fabrics, based on its range of network recorders, to large corporations and government agencies.
Endace was the first New Zealand company to list on London's Alternative Investment Market when it floated in mid-June 2005, a move which was not without controversy. Poor share price performance in the early years and a seeming failure to attract a broad enough shareholder base lent weight to the criticism that Endace should have focused initially on developing its local profile (via NZX) rather than pushing for overseas investment (via London AIM).
Endace is headquartered in Auckland, New Zealand, and has an R&D centre in Hamilton, New Zealand, and offices in Australia, United States and Great Britain.
Key innovations of the DAG
The DAG project grew from academic research at Waikato University. Having found that software measurements of ATM cells (or packets) were unsatisfactory, both for reasons of accuracy and lack of |
https://en.wikipedia.org/wiki/Kenbak-1 | The Kenbak-1 is considered by the Computer History Museum, the Computer Museum of America and the American Computer Museum to be the world's first "personal computer", invented by John Blankenbaker (born 1929) of Kenbak Corporation in 1970 and first sold in early 1971. Fewer than 50 machines were ever built, using Bud Industries enclosures as a housing. The system first sold for US$750. Today, only 14 machines are known to exist worldwide, in the hands of various collectors and museums. Production of the Kenbak-1 stopped in 1973, as Kenbak failed and was taken over by CTI Education Products, Inc. CTI rebranded the inventory and renamed it the 5050, though sales remained elusive.
Since the Kenbak-1 was invented before the first microprocessor, the machine did not have a one-chip CPU but was instead based purely on small-scale integration TTL chips. The 8-bit machine offered 256 bytes of memory, implemented on Intel's type 1404A silicon gate MOS shift registers. The clock signal period was 1 microsecond (equivalent to a clock speed of 1 MHz), but the program speed averaged below 1,000 instructions per second due to the many clock cycles needed for each operation and slow access to serial memory.
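A back-of-envelope check of those two figures (an illustration, not from the source):

```python
# A 1 MHz clock averaging under 1,000 instructions per second implies on
# the order of a thousand clock cycles per instruction -- most of them
# spent waiting for the serial shift-register memory to come around.
clock_hz = 1_000_000
instructions_per_second = 1_000
print(clock_hz // instructions_per_second, "cycles per instruction (order of magnitude)")
```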
The machine was programmed in pure machine code using an array of buttons and switches. Output consisted of a row of lights.
Internally, the Kenbak-1 has a serial computer architecture, processing one bit at a time.
Technical description
Registers
The Kenbak-1 has a total of nine registers. All are memory mapped. It has three general-purpose registers: A, B and X. Register A is the implicit destination of some operations. Register X is also known as the index register and turns the direct and indirect modes into indexed direct and indexed indirect modes. It also has a program counter, called Register P, three "overflow and carry" registers for A, B and X, respectively, as well as an Input Register and an Output Register.
Addressing modes
Add, Subtract, Load, Store, Load Com |
https://en.wikipedia.org/wiki/Digital%20loop%20carrier | A digital loop carrier (DLC) is a system which uses digital transmission to extend the range of the local loop farther than would be possible using only twisted pair copper wires. A DLC digitizes and multiplexes the individual signals carried by the local loops onto a single datastream on the DLC segment.
Reasons for using DLCs
Subscriber Loop Carrier systems address a number of problems:
Electrical constraints on long loops.
Insufficient available cable pairs.
Cable route congestion (inability to add cable due to lack of space, particularly in urban street, bridge, and building conduit)
Construction challenges (in areas of difficult terrain) when limited cable pairs are already available
Expense due to cable cost and the associated labour-intensive installation work (especially to solve the specific problems listed above)
Long loops, such as those terminating at more than 18,000 feet (5.49 kilometres) from the central office, pose electrical challenges. When the subscriber goes off-hook, a cable pair behaves like a single-loop inductance coil with a -48 V dc potential and an electric current of between 20 and 50 mA dc. Current values vary with cable length and gauge. A minimum current of around 20 mA dc is required to convey terminal signalling information to the network. There is also a minimum power level required to provide adequate volume for the voice signal. A variety of schemes were implemented before DLC technology to offset the impedance long loops offered to signalling and volume levels. They included the following (a numerical sketch of the loop-current constraint appears after the list):
Use heavy-gauge conductors – Up to 19 gauge (approximately the gauge of pencil lead), which is costly and bulky. The heavy-gauge cables yielded far fewer pairs per cable and led to early congestion in cable routes, especially in bridge crossings and other areas of limited space.
Increase battery voltage – This violation of operating standards could pose a safety hazard.
Add amplifiers to power the voice signal on long loops. This requi |
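As a back-of-the-envelope illustration of the loop-current constraint described above (a Python sketch using assumed, representative resistance figures rather than values from this article):

```python
# Representative DC loop resistance in ohms per 1000 ft of cable,
# counting both conductors of the pair (illustrative assumptions).
LOOP_OHMS_PER_KFT = {26: 83.0, 24: 52.0, 22: 32.0, 19: 16.0}

def loop_current_ma(length_kft: float, awg: int,
                    battery_v: float = 48.0,
                    termination_ohms: float = 430.0) -> float:
    """DC loop current in mA for a simple series circuit model:
    battery -> cable resistance -> office/telephone termination."""
    r_cable = LOOP_OHMS_PER_KFT[awg] * length_kft
    return 1000.0 * battery_v / (r_cable + termination_ohms)

# An 18 kft loop of 26-gauge cable:
print(round(loop_current_ma(18.0, 26), 1), "mA")   # ~24.9 mA
```

Under these assumptions an 18,000-foot loop of fine-gauge cable sits only just above the roughly 20 mA signalling minimum, which is why longer loops needed heavier conductors, higher voltage, amplification, or a DLC.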
https://en.wikipedia.org/wiki/Siku%20Toys | Sieper Lüdenscheid GmbH & Co. KG, mostly known by its trade name Siku, is a German manufacturer of scale models headquartered in Lüdenscheid. Some of the products sold by Siku are model cars, figurines, model aircraft, model commercial vehicles, and model agricultural machinery.
History
Founded in 1921, Sieper-Werke (Sieper Works) was originally a manufacturer of metal tools and cutlery in zamak and aluminium; later on, it also made ashtrays, badges, medals, belt buckles, and buttons. Its factory in Ludenscheid was outfitted with new casting moulds in 1949 for grating, sandblasting and painting cast zinc goods. The company was even contracted to make Mercedes-Benz's star-shaped hood ornament.
Sieper-Werke also experimented with early plastics. In 1943, it expanded to a facility in Hilchenbach, about from Lüdenscheid (though the latter remained Sieper-Werke's headquarters), in which products like plastics, furniture, mirrors, and cabinets were developed and manufactured. Its Lüdenscheid operations generally focused on promotional items for major brands, such as the 'elephant shoe' and 'Zeller black cat', which were injection-moulded.
It was not until 1950 that the company started producing toys in Lüdenscheid, registering the "Siku" trademark for the new products. The name originates from abbreviating the name of the founder of the company, Richard Sieper, and the German word for plastic, Kunststoffe (i.e. Sieper Plastics). Originally, there were a broad variety of Siku toys which at first were plastic, including figures and animals. These were often called 'margarine figures' because they came in margarine packages as a food promotion. The success of the plastic figures gave Siku capital to start a line of postwar vehicles.
Between 1951 and 1955, the first vehicles were generic representations of a fire truck, a race car, an amphibious truck, a moving van, and finally, in 1955, a Porsche 356. The scale chosen was approximately 1:60. By 1958, Sieper-Werke had d |
https://en.wikipedia.org/wiki/X%20Font%20Server | The X font server (xfs) provides a standard mechanism for an X server to communicate with a font renderer, frequently one running on a remote machine. It usually runs on TCP port 7100.
Current status
The use of server-side fonts is currently considered deprecated in favour of client-side fonts. Such fonts are rendered by the client, not by the server, with the support of the Xft2 or Cairo libraries and the XRender extension.
For the few cases in which server-side fonts are still needed, the new servers have their own integrated font renderer, so that no external one is needed. Server-side fonts can now be configured in the X server configuration files; for example, a Files section such as the sketch below will set the server-side fonts for Xorg.
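A minimal sketch of such a configuration (the font directories and server host are illustrative assumptions, not values from this article):

```
Section "Files"
    # Local, server-side font directories (illustrative paths)
    FontPath "/usr/share/fonts/X11/misc"
    FontPath "/usr/share/fonts/X11/75dpi"
    # Or delegate rendering to an X font server on the conventional port
    FontPath "tcp/fontserver.example.com:7100"
EndSection
```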
No specification on client-side fonts is given in the core protocol.
Future
As of October 2006, the manpage for xfs on Debian states that:
FUTURE DIRECTIONS
Significant further development of xfs is unlikely. One of the original motivations behind xfs was the single-threaded nature of the X server — a user’s X session could seem to "freeze up" while the X server took a moment to rasterize a font. This problem with the X server (which remains single-threaded in all popular implementations to this day) has been mitigated on two fronts: machines have gotten much faster, and client-side font rendering (particularly via the Xft library) has become the norm in contemporary software.
Deployment issues
The choice between local filesystem font access and xfs-based font access is therefore purely a local deployment choice; running xfs makes little sense in a single-computer scenario.
See also
X Window System core protocol
X logical font description |
https://en.wikipedia.org/wiki/Hamming%20bound | In mathematics and computer science, in the field of coding theory, the Hamming bound is a limit on the parameters of an arbitrary block code: it is also known as the sphere-packing bound or the volume bound from an interpretation in terms of packing balls in the Hamming metric into the space of all possible words. It gives an important limitation on the efficiency with which any error-correcting code can utilize the space in which its code words are embedded. A code that attains the Hamming bound is said to be a perfect code.
Background on error-correcting codes
An original message and an encoded version are both composed in an alphabet of q letters. Each code word contains n letters. The original message (of length m) is shorter than n letters. The message is converted into an n-letter codeword by an encoding algorithm, transmitted over a noisy channel, and finally decoded by the receiver. The decoding process interprets a garbled codeword, referred to as simply a word, as the valid codeword "nearest" the n-letter received string.
Mathematically, there are exactly q^m possible messages of length m, and each message can be regarded as a vector of length m. The encoding scheme converts an m-dimensional vector into an n-dimensional vector. Exactly q^m valid codewords are possible, but any one of q^n words can be received because the noisy channel might distort one or more of the n letters when a codeword is transmitted.
Statement of the bound
Preliminary definitions
An alphabet set A_q is a set of symbols with q elements. The set of strings of length n on the alphabet set A_q is denoted A_q^n. (There are q^n distinct strings in this set of strings.) A q-ary block code of length n is a subset of the strings of A_q^n, where the alphabet set A_q is any alphabet set having q elements.
Defining the bound
Let A_q(n,d) denote the maximum possible size of a q-ary block code of length n and minimum Hamming distance d between elements of the block code (necessarily positive for q^n > 1).
Then, the Hamming bound is

    A_q(n,d) ≤ q^n / ( C(n,0) + C(n,1)(q−1) + C(n,2)(q−1)^2 + ... + C(n,t)(q−1)^t ),

where t = ⌊(d−1)/2⌋ is the number of errors the code can correct and C(n,k) denotes the binomial coefficient. |
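A short Python sketch computing the right-hand side of the bound from the definitions above (a didactic helper, not part of the original article):

```python
from math import comb

def hamming_bound(q: int, n: int, d: int) -> int:
    """Upper bound on A_q(n, d): q**n divided by the volume of a
    Hamming ball of radius t = (d - 1) // 2."""
    t = (d - 1) // 2
    ball_volume = sum(comb(n, k) * (q - 1) ** k for k in range(t + 1))
    return q ** n // ball_volume

# The binary [7,4,3] Hamming code meets this bound exactly (a perfect code):
print(hamming_bound(2, 7, 3))   # 16 = 2**4 codewords
```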
https://en.wikipedia.org/wiki/Oxygen%20bar | An oxygen bar is an establishment, or part of one, that sells oxygen for recreational use. Individual scents may be added to enhance the experience. The flavors in an oxygen bar come from bubbling oxygen through bottles containing aromatic solutions before it reaches the nostrils: most bars use food-grade particles to produce the scent, but some bars use aroma oils.
History
In 1776, Thomas Henry, an apothecary and Fellow of the Royal Society of England speculated tongue in cheek that Joseph Priestley’s newly discovered dephlogisticated air (now called oxygen) might become "as fashionable as French wine at the fashionable taverns". He did not expect, however, that tavern goers would "relish calling for a bottle of Air, instead of Claret."
Another early reference to the recreational use of oxygen is found in Jules Verne's 1870 novel Around the Moon. In this work, Verne states:
Modeled after the "air stations" in polluted downtown Tokyo and Beijing, the first oxygen bar (the O2 Spa Bar) opened in Toronto, Canada, in 1996. The trend continued in North America and by the late 1990s, bars were in use in New York, California, Florida, Las Vegas and the Rocky Mountain region. Customers in these bars breathe oxygen through a plastic nasal cannula inserted into their nostrils. Oxygen bars can now be found in many venues such as nightclubs, salons, spas, health clubs, resorts, tanning salons, restaurants, coffee houses, bars, airports, ski chalets, yoga studios, chiropractors, and casinos. They can also be found at trade shows, conventions and corporate meetings, as well as at private parties and promotional events.
Provision of oxygen
Oxygen bar guests pay about one U.S. dollar per minute to inhale a percentage of oxygen greater than the normal atmospheric content of 20.9%. This oxygen is gathered from the ambient air by an industrial (non-medical) oxygen concentrator and inhaled through a nasal cannula for up to about 20 minutes.
The machines used by oxygen ba |
https://en.wikipedia.org/wiki/Java%20Anon%20Proxy | Java Anon Proxy (JAP), also known as JonDonym, was a proxy system designed to allow browsing the Web with revocable pseudonymity. It was originally developed as part of a project of the Technische Universität Dresden, the Universität Regensburg and the Privacy Commissioner of the state of Schleswig-Holstein. The client software is written in the Java programming language. The service has been closed since August 2021.
Cross-platform and free, it sends requests through a Mix Cascade and mixes the data streams of multiple users in order to further obfuscate the data to outsiders.
JonDonym is available for all platforms that support Java. Furthermore, ANONdroid is a JonDonym proxy client for Android.
Design
The JonDonym client program allows the user to choose among several Mix Cascades (i.e. a group of anonymization proxies) offered by independent organisations. Users may choose for themselves which of these operators they will trust, and which they won't. This is different from peer-to-peer based anonymity networks like Tor and I2P, whose anonymization proxies are anonymous themselves, which means the users have to rely on unknown proxy operators. However, it means that all the relays used for JonDonym-mediated connections are known and identified, and therefore potentially targeted very easily by hackers, governmental agencies or lobbying groups. This has, for example, led to the issues mentioned below, where court orders essentially gave all control over the whole system to the German government. As discussed below, solutions like international distribution of the relays and the additional use of Tor can somewhat mitigate this loss of independence.
The speed and availability of the service depends on the operators of the Mixes in the cascades, and therefore varies. More users on a cascade improve anonymity, but a large number of users might diminish the speed and bandwidth available for a single user.
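Conceptually, a mix cascade wraps each message in one layer of encryption per mix, applied in reverse cascade order so that each operator can remove exactly one layer and learns nothing more. The sketch below illustrates that layering idea with the Python cryptography package; it is a didactic model only, not JonDonym's actual protocol (which uses asymmetric cryptography and batches traffic from many users):

```python
from cryptography.fernet import Fernet

# One symmetric key per mix in the cascade (illustrative three-mix cascade).
cascade_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes) -> bytes:
    """Client side: encrypt for the last mix first, so the first mix
    in the cascade peels the outermost layer."""
    for key in reversed(cascade_keys):
        message = Fernet(key).encrypt(message)
    return message

def relay(message: bytes) -> bytes:
    """The cascade: each mix strips exactly one layer and forwards the rest."""
    for key in cascade_keys:
        message = Fernet(key).decrypt(message)
    return message

assert relay(wrap(b"GET / HTTP/1.1")) == b"GET / HTTP/1.1"
```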
Cost, name change and commercial service
Use of JonDonym has been |
https://en.wikipedia.org/wiki/Peter%20Barham | Peter Barham (born 1950) is emeritus professor of physics at the University of Bristol. He was a visiting professor of Molecular Gastronomy at the University of Copenhagen, Denmark.
Early life
Peter Barham was born in 1950. He received his BSc from the University of Warwick, and his MSc and PhD from the University of Bristol.
Career
Peter Barham's research at the University of Bristol is concerned with polymer physics. He found ways to connect his research with his love of penguins, including the creation of silicone-based flipper bands which can be used for monitoring penguin populations. The silicone bands are designed to minimize the potential impact of carrying an external marking device and are currently in use on African penguins (Spheniscus demersus) at Bristol Zoo, UK and in the wild in South Africa. More recently, together with colleagues in the Computer Science Department at the University of Bristol, he has developed a computer vision system for the automatic recognition of African penguins. As of 2008, this system was undergoing trials in South Africa.
Barham has contributed to the development of the new science of molecular gastronomy and has authored the book The Science of Cooking. He has collaborated with a number of chefs including Heston Blumenthal, the chef/owner of The Fat Duck and also a proponent of molecular gastronomy. He is editor-in-chief of a new journal, Flavour, which covers the science of molecular gastronomy. In 1994 he appeared as the Scientific Cook in a regular feature on Channel 4 food magazine series Food File, in which he explained some of the chemical mysteries that take place during the cooking process.
Peter Barham contributes to the public understanding of science by giving public lectures on molecular gastronomy and penguin conservation biology. He has addressed audiences in both the UK and further afield. Titles of previous public lectures include "Ice cream delights", "Why do we like some foods and hate others?", "Kitchen |
https://en.wikipedia.org/wiki/Graphophone | The Graphophone was the name and trademark of an improved version of the phonograph. It was invented at the Volta Laboratory established by Alexander Graham Bell in Washington, D.C., United States.
Its trademark usage was acquired successively by the Volta Graphophone Company, the American Graphophone Company, the North American Phonograph Company, and finally by the Columbia Phonograph Company (known today as Columbia Records), all of which either produced or sold Graphophones.
Research and development
It took five years of research under the directorship of Benjamin Hulme, Harvey Christmas, Charles Sumner Tainter and Chichester Bell at the Volta Laboratory to develop and distinguish their machine from Thomas Edison's Phonograph.
Among their innovations, the researchers experimented with lateral recording techniques as early as 1881. Contrary to the vertically-cut grooves of Edison Phonographs, the lateral recording method used a cutting stylus that moved from side to side in a "zig zag" pattern across the record. While cylinder phonographs never employed the lateral cutting process commercially, this later became the primary method of phonograph disc recording.
Bell and Tainter also developed wax-coated cardboard cylinders for their record cylinder. Edison's grooved mandrel covered with a removable sheet of tinfoil (the actual recording medium) was prone to damage during installation or removal. Tainter received a separate patent for a tube assembly machine to automatically produce the coiled cardboard tube cores of the wax cylinder records. The shift from tinfoil to wax resulted in increased sound fidelity and record longevity.
Besides being far easier to handle, the wax recording medium also allowed for lengthier recordings and created superior playback quality. Additionally, the Graphophones initially deployed foot treadles to rotate the recordings, then wind-up clockwork drive mechanisms, and finally migrated to electric motors, instead of the manual cr |
https://en.wikipedia.org/wiki/Lagrangian%20foliation | In mathematics, a Lagrangian foliation or polarization is a foliation of a symplectic manifold, whose leaves are Lagrangian submanifolds. It is one of the steps involved in the geometric quantization of square-integrable functions on a symplectic manifold. |
https://en.wikipedia.org/wiki/Bursa%20of%20Fabricius | In birds, the bursa of Fabricius (Latin: bursa cloacalis or bursa fabricii) is the site of hematopoiesis. It is a specialized organ that, as first demonstrated by Bruce Glick and later by Max Dale Cooper and Robert Good, is necessary for B cell (part of the immune system) development in birds. Mammals generally do not have an equivalent organ; the bone marrow is often the site of both hematopoiesis and B cell development. The bursa is present in the cloaca of birds and is named after Hieronymus Fabricius, who described it in 1621.
Description
The bursa is an epithelial and lymphoid organ that is found only in birds. The bursa develops as a dorsal diverticulum of the proctodeal region of the cloaca. The luminal (interior) surface of the bursa is plicated with as many as 15 primary and 7 secondary plicae or folds. These plicae have hundreds of bursal follicles containing follicle-associated epithelial cells, lymphocytes, macrophages, and plasma cells. Lymphoid stem cells migrate from the fetal liver to the bursa during ontogeny. In the bursa, these stem cells acquire the characteristics of mature, immunocompetent B cells. The bursa is active in young birds. It atrophies after about six months.
Research history
In 1956, Bruce Glick showed that removal of the bursa in newly hatched chicks severely impaired the ability of the adult birds to produce antibodies. In contrast, removal of the bursa in adult chickens has little effect on the immune system. This was a serendipitous discovery that came about when a fellow graduate, Timothy S. Chang, who was teaching a course on antibody production obtained chickens from Glick that had been bursectomised (removal of the bursa). When these chickens failed to produce antibody in response to an immunization with Staphylococcus bacteria, the two students realized that the bursa is necessary for antibody production. Their initial attempts to publish their findings were thwarted by an editor who commented that "further el |
https://en.wikipedia.org/wiki/Phantasmagoria | Phantasmagoria, alternatively fantasmagorie or fantasmagoria, was a form of horror theatre that (among other techniques) used one or more magic lanterns to project frightening images, such as skeletons, demons, and ghosts, onto walls, smoke, or semi-transparent screens, typically using rear projection to keep the lantern out of sight. Mobile or portable projectors were used, allowing the projected image to move and change size on the screen, and multiple projecting devices allowed for quick switching of different images. In many shows, the use of spooky decoration, total darkness, (auto-)suggestive verbal presentation, and sound effects were also key elements. Some shows added a variety of sensory stimulation, including smells and electric shocks. Such elements as required fasting, fatigue (late shows), and drugs have been mentioned as methods of making sure spectators would be more convinced of what they saw. The shows started under the guise of actual séances in Germany in the late 18th century and gained popularity through most of Europe (including Britain) throughout the 19th century.
The word "phantasmagoria" has also been commonly used to indicate changing successions or combinations of fantastic, bizarre, or imagined imagery.
Etymology
From French phantasmagorie, from Ancient Greek φάντασμα (phántasma, “ghost”) + possibly either αγορά (agorá, “assembly, gathering”) + the suffix -ia, or ἀγορεύω (agoreúō, “to speak publicly”).
Paul Philidor (also known simply as "Phylidor") announced his show of ghost apparitions and evocation of the shadows of famous people as Phantasmagorie in the Parisian periodical Affiches, annonces et avis divers of December 16, 1792. About two weeks earlier the term had been the title of a letter by a certain "A.L.M.", published in Magazin Encyclopédique. The letter also promoted Phylidor's show. Phylidor had previously advertised his show as Phantasmorasi in Vienna in March 1790.
The English variation Phantasmagoria was introd |
https://en.wikipedia.org/wiki/Opte%20Project | The Opte Project, created in 2003 by Barrett Lyon, seeks to generate an accurate representation of the breadth of the Internet using visual graphics. Lyon believes that his network mapping can help teach students more about the Internet while also acting as a gauge illustrating both overall Internet growth and the specific areas where that growth occurs. It was not the first such project; others predated it, such as the Bell Labs Internet Mapping Project.
Lyon initially generated image maps using traceroute (as sketched below), and later switched to mapping using BGP routes. The generated images were published on the Opte Project website. In 2021, Lyon created different video animations using his mapping technique, shedding light on internet growth between 1997 and 2021, the Iranian internet shutdown of 2019, the United States Department of Defense's place on the internet, as well as the few entry points into the Chinese internet.
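The traceroute approach amounts to merging many hop-by-hop paths into a single graph. A minimal Python sketch of that aggregation step (with made-up hop data; the actual Opte pipeline is not described in this article):

```python
# Each traceroute run yields an ordered list of router IPs (hypothetical data).
traces = [
    ["10.0.0.1", "198.51.100.7", "203.0.113.2", "192.0.2.9"],
    ["10.0.0.1", "198.51.100.7", "198.51.100.99", "192.0.2.9"],
]

edges = set()
for hops in traces:
    edges.update(zip(hops, hops[1:]))   # consecutive hops become graph edges

for a, b in sorted(edges):
    print(a, "->", b)                   # edge list ready for graph layout
```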
The project has gathered notice worldwide having been featured by Time, Cornell University, New Scientist, and Kaspersky Lab. In addition, Opte Project maps have found homes in at least two art galleries and exhibits such as The Museum of Modern Art and the Museum of Science's Mapping the World Around Us permanent exhibit.
Opte images are licensed under a Creative Commons license and while use of The Opte Image is free for all non-commercial applications, a license fee is required for all others. |
https://en.wikipedia.org/wiki/16%20mm%20scale | 16 mm to 1 foot or 1:19.05 is a popular scale of model railway in the UK which represents narrow gauge prototypes. The most common gauge for such railways is , representing gauge prototypes. This scale/gauge combination is sometimes referred to as "SM32" (terminology popularised by Peco, one of the principal manufacturers of appropriate track) and is often used for model railways that run in gardens, being large enough to easily accommodate live steam models. The next most common gauge is , which represents the theoretical non-existent gauge . This gauge is commonly used to portray prototypes between and gauge.
Overview
There are a number of commercial manufacturers of 16 mm scale models as well as many enthusiastic amateurs who build their own rolling stock. Because such railways were most commonly found in the UK, many of the models are of British prototypes. European and North American narrow gauge railways are also modeled in this scale, mainly with scratch-built or kit-built models.
Although models of approximately this scale were being built as early as the 1930s, it was the founding of the Merioneth Railway Society just after the Second World War that marks the popularization of this scale.
This set the light-hearted spirit of the 16 mm fraternity, where a sense of fun and whimsy often override more serious concerns. The use of live steam as the predominant motive power of the models means absolute scale reproduction is often sacrificed to the demands of steam engineering at this scale. However the realistic sound, smell and visual effects of steam-driven locomotives makes up for loss of fidelity elsewhere. Driving a live steam locomotive, even at this small scale is very different from driving an electrically powered model.
For many years there were no commercially available parts, and everything was hand-built or kit-bashed from O scale components. In 1968, Archangel emerged as the first commercial manufacturer on a large scale, followed by Merl |
https://en.wikipedia.org/wiki/Backchannel | Backchannel is the use of networked computers to maintain a real-time online conversation alongside the primary group activity or live spoken remarks. The term was coined from the linguistics term to describe listeners' behaviours during verbal communication.
The term "backchannel" generally refers to online conversation about the conference topic or speaker. Occasionally backchannel provides audience members a chance to fact-check the presentation.
First growing in popularity at technology conferences, backchannel is increasingly a factor in education where WiFi connections and laptop computers allow participants to use ordinary chat like IRC or AIM to actively communicate during presentations. More recent research includes works where the backchannel is made publicly visible, such as the ClassCommons, backchan.nl and Fragmented Social Mirror.
Twitter is also widely used today by audiences to create backchannels during broadcast content (for example, television dramas, other forms of entertainment, and magazine programs) or at conferences. This practice is often also called live tweeting. Many conferences nowadays also have a hashtag that can be used by the participants to share notes and experiences; furthermore, such hashtags can be user-generated.
History
Victor Yngve first used the phrase "back channel" in 1970 in a linguistic meaning, in the following passage: "In fact, both the person who has the turn and his partner are simultaneously engaged in both speaking and listening. This is because of the existence of what I call the back channel, over which the person who has the turn receives short messages such as 'yes' and 'uh-huh' without relinquishing the turn."
Such systems were widely imagined and tested in late 1990s and early 2000s. These cases include researcher's installations on conferences and classroom settings. The first famous instance of backchannel communications influencing a talk occurred on March 26, 2002, at the PC Forum conference, |
https://en.wikipedia.org/wiki/Historical%20ecology | Historical ecology is a research program that focuses on the interactions between humans and their environment over long-term periods of time, typically over the course of centuries. In order to carry out this work, historical ecologists synthesize long-series data collected by practitioners in diverse fields. Rather than concentrating on one specific event, historical ecology aims to study and understand this interaction across both time and space in order to gain a full understanding of its cumulative effects. Through this interplay, humans adapt to and shape the environment, continuously contributing to landscape transformation. Historical ecologists recognize that humans have had world-wide influences, that they impact landscapes in dissimilar ways which increase or decrease species diversity, and that a holistic perspective is critical to understanding such systems.
Piecing together landscapes requires a sometimes difficult union between natural and social sciences, close attention to geographic and temporal scales, a knowledge of the range of human ecological complexity, and the presentation of findings in a way that is useful to researchers in many fields. Those tasks require theory and methods drawn from geography, biology, ecology, history, sociology, anthropology, and other disciplines. Common methods include historical research, climatological reconstructions, plant and animal surveys, archaeological excavations, ethnographic interviews, and landscape reconstructions.
History
The discipline has several sites of origins by researchers who shared a common interest in the problem of ecology and history, but with a diversity of approaches. Edward Smith Deevey, Jr. used the term in the 1960s to describe a methodology that had been in long development. Deevey wished to bring together the practices of "general ecology" which was studied in an experimental laboratory, with a "historical ecology" which relied on evidence collected through fieldwork. For example, D |
https://en.wikipedia.org/wiki/Daniel%20Tammet | Daniel Tammet (born Daniel Paul Corney; 31 January 1979) is an English writer and savant. His memoir, Born on a Blue Day (2006), is about his early life with Asperger syndrome and savant syndrome, and was named a "Best Book for Young Adults" in 2008 by the American Library Association's Young Adult Library Services magazine. His second book, Embracing the Wide Sky, was one of France's best-selling books of 2009. His third book, Thinking in Numbers, was published in 2012 by Hodder & Stoughton in the United Kingdom and in 2013 by Little, Brown and Company in the United States and Canada. His books have been published in over 20 languages.
He was elected in 2012 to serve as a fellow of the Royal Society of Arts.
Personal life
Tammet was born Daniel Paul Corney, the eldest of nine children, and raised in Barking and Dagenham, East London. As a young child, he had epileptic seizures, which remitted following medical treatment.
He participated twice in the World Memory Championships in London under his birth name, placing 11th in 1999 and 4th in 2000.
He changed his birth name by deed poll because "it didn't fit with the way he saw himself". He took the Estonian surname Tammet, which is related to "oak trees".
At age twenty-five, he was diagnosed with Asperger syndrome by Simon Baron-Cohen of the University of Cambridge Autism Research Centre. He is one of fewer than a hundred "prodigious savants" according to Darold Treffert, the world's leading researcher in the study of savant syndrome.
He was the subject of a documentary film titled Extraordinary People: The Boy with the Incredible Brain, first broadcast on Channel 4 on 23 May 2005.
He met software engineer Neil Mitchell in 2000, and they started a relationship. They lived in Kent. He and Mitchell operated the online e-learning company Optimnem, where they created and published language courses.
Tammet now lives in Paris, with his husband Jérôme Tabet, a photographer whom he met while promoting his autobiog |
https://en.wikipedia.org/wiki/Factor%20theorem | In algebra, the factor theorem connects polynomial factors with polynomial roots. Specifically, if f(x) is a polynomial, then (x − a) is a factor of f(x) if and only if f(a) = 0 (that is, a is a root of the polynomial). The theorem is a special case of the polynomial remainder theorem.
The theorem results from basic properties of addition and multiplication. It follows that the theorem holds also when the coefficients and the element a belong to any commutative ring, and not just a field.
In particular, since multivariate polynomials can be viewed as univariate in one of their variables, the following generalization holds: if f(X_1, ..., X_n) and g(X_2, ..., X_n) are multivariate polynomials and g is independent of X_1, then (X_1 − g(X_2, ..., X_n)) is a factor of f if and only if f(g(X_2, ..., X_n), X_2, ..., X_n) is the zero polynomial.
Factorization of polynomials
Two problems where the factor theorem is commonly applied are those of factoring a polynomial and finding the roots of a polynomial equation; it is a direct consequence of the theorem that these problems are essentially equivalent.
The factor theorem is also used to remove known zeros from a polynomial while leaving all unknown zeros intact, thus producing a lower-degree polynomial whose zeros may be easier to find. Abstractly, the method is as follows (a code sketch follows the list):
Deduce a candidate zero a of the polynomial f from its leading coefficient a_n and constant term a_0. (See Rational Root Theorem.)
Use the factor theorem to conclude that (x − a) is a factor of f(x).
Compute the polynomial g(x) = f(x) / (x − a), for example using polynomial long division or synthetic division.
Conclude that any root x ≠ a of f(x) = 0 is a root of g(x) = 0. Since the polynomial degree of g is one less than that of f, it is "simpler" to find the remaining zeros by studying g.
Continue the process until the polynomial is factored completely, at which point all of its factors are irreducible over the reals or the complex numbers.
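A small Python sketch of the procedure for monic integer-coefficient polynomials, using divisors of the constant term as candidates and synthetic division to split off each factor (an illustration, not part of the original article):

```python
def synthetic_division(coeffs, a):
    """Divide a polynomial (coefficients listed from highest degree down)
    by (x - a). Returns (quotient coefficients, remainder); the remainder
    equals f(a) by the polynomial remainder theorem."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + out[-1] * a)
    return out[:-1], out[-1]

def integer_roots(coeffs):
    """Candidate zeros are +/- divisors of the constant term
    (the rational root theorem, assuming a monic polynomial)."""
    c0 = abs(coeffs[-1])
    cands = [s * d for d in range(1, c0 + 1) if c0 % d == 0 for s in (1, -1)]
    return [a for a in cands if synthetic_division(coeffs, a)[1] == 0]

# x^3 + 7x^2 + 8x + 2: the only integer root is -1, and dividing by
# (x + 1) leaves the quadratic x^2 + 6x + 2.
print(integer_roots([1, 7, 8, 2]))            # [-1]
print(synthetic_division([1, 7, 8, 2], -1))   # ([1, 6, 2], 0)
```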
Example
Find the factors of x^3 + 7x^2 + 8x + 2.
Solution: Let f(x) be the above polynomial
Constant term = 2
Coefficient of x^3 = 1
All possible factors of 2 are ±1 and ±2. Substituting x = −1, we get:
f(−1) = (−1)^3 + 7(−1)^2 + 8(−1) + 2 = 0. So, f(−1) = 0, i.e., (x + 1) is a factor of f(x). On dividing f(x) by (x + 1), we g |
https://en.wikipedia.org/wiki/FireHOL | FireHOL is a shell script designed as a wrapper for iptables written to ease the customization of the Linux kernel's firewall netfilter. FireHOL is free software and open-source, distributed under the terms of the GNU General Public License.
FireHOL does not have a graphical user interface, but is configured through an easy-to-understand plain-text configuration file. FireHOL first parses the configuration file and then sets the appropriate iptables rules to achieve the expected firewall behavior. It is a large, complex BASH script file, depending on the iptables console tools rather than communicating with the kernel directly. Any Linux system with iptables, BASH, and the appropriate tools can run it. Its main drawback is slower starting times, particularly on older systems. FireHOL's configuration files are fully functional BASH scripts in and of themselves.
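A minimal sketch of such a configuration file (the interface name, allowed services, and config-language version header are illustrative assumptions):

```
# /etc/firehol/firehol.conf -- drop everything arriving on eth0 except
# incoming SSH, while allowing all outgoing client traffic.
version 6

interface eth0 internet
    policy drop
    protection strong
    server ssh accept
    client all accept
```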
External links
Firewall software
Free security software |
https://en.wikipedia.org/wiki/Urophagia | Urophagia is the consumption of urine. Urine was used in several ancient cultures for various health, healing, and cosmetic purposes; urine drinking is still practiced today. In extreme cases, people may drink urine if no other fluids are available, although numerous credible sources (including the US Army Field Manual) advise against using it. Urine may also be consumed as a sexual activity.
Reasons for urophagia
As an emergency survival technique
Survival guides such as the US Army Field Manual, the SAS Survival Handbook, and others generally advise against drinking urine for survival. These guides state that drinking urine tends to worsen rather than relieve dehydration due to the salts in it, and that urine should not be consumed in a survival situation, even when no other fluid is available.
In one incident, Aron Ralston drank urine when trapped for several days with his arm under a boulder. Survivalist television host Bear Grylls drank urine and encouraged others to do so in several episodes of his TV shows.
Folk medicine
In various cultures, alternative medicine applications exist of urine from humans, or animals such as camels or cattle, for medicinal or cosmetic purposes, including drinking of one's own urine, but no evidence supports their use.
Health warnings
The World Health Organization has found that the pathogens contained in urine rarely pose a health risk. However, it does caution that in areas where Schistosoma haematobium is prevalent, it can be transmitted from person to person. |
https://en.wikipedia.org/wiki/External%20memory%20algorithm | In computing, external memory algorithms or out-of-core algorithms are algorithms that are designed to process data that are too large to fit into a computer's main memory at once. Such algorithms must be optimized to efficiently fetch and access data stored in slow bulk memory (auxiliary memory) such as hard drives or tape drives, or when memory is on a computer network. External memory algorithms are analyzed in the external memory model.
Model
External memory algorithms are analyzed in an idealized model of computation called the external memory model (or I/O model, or disk access model). The external memory model is an abstract machine similar to the RAM machine model, but with a cache in addition to main memory. The model captures the fact that read and write operations are much faster in a cache than in main memory, and that reading long contiguous blocks is faster than reading randomly using a disk read-and-write head. The running time of an algorithm in the external memory model is defined by the number of reads and writes to memory required. The model was introduced by Alok Aggarwal and Jeffrey Vitter in 1988. The external memory model is related to the cache-oblivious model, but algorithms in the external memory model may know both the block size and the cache size. For this reason, the model is sometimes referred to as the cache-aware model.
The model consists of a processor with an internal memory or cache of size M, connected to an unbounded external memory. Both the internal and external memory are divided into blocks of size B. One input/output or memory transfer operation consists of moving a block of B contiguous elements from external to internal memory, and the running time of an algorithm is determined by the number of these input/output operations.
Algorithms
Algorithms in the external memory model take advantage of the fact that retrieving one object from external memory retrieves an entire block of size B. This property is sometimes referred |
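The canonical example is external merge sort, which sorts memory-sized runs internally and then streams a block-at-a-time merge over them. A compact Python sketch of the two phases, using lists in place of disk files:

```python
import heapq

def external_merge_sort(data, m):
    """Sort `data` while holding only about m items 'in memory':
    phase 1 builds sorted runs of length m, phase 2 k-way merges them."""
    runs = [sorted(data[i:i + m]) for i in range(0, len(data), m)]
    return list(heapq.merge(*runs))   # streams the runs, touching items sequentially

print(external_merge_sort([5, 2, 9, 1, 7, 3, 8, 6, 4], m=3))
# [1, 2, 3, 4, 5, 6, 7, 8, 9]
```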
https://en.wikipedia.org/wiki/Grating%20light%20valve | The grating light valve (GLV) is a "micro projection" technology that operates using a dynamically adjustable diffraction grating. It competes with other light valve technologies such as Digital Light Processing (DLP) and liquid crystal on silicon (LCoS) for implementation in video projector devices such as rear-projection televisions. The use of microelectromechanical systems (MEMS) in optical applications, which is known as optical MEMS or micro-opto-electro-mechanical structures (MOEMS), has made it possible to combine mechanical, electrical, and optical components at a tiny scale.
Silicon Light Machines (SLM), in Sunnyvale CA, markets and licenses GLV technology under the capitalised trademarks "Grated Light Valve" and "GLV", previously "Grating Light Valve". The valve diffracts laser light using an array of tiny movable ribbons mounted on a silicon base. The GLV uses six ribbons as each pixel's diffraction gratings. Electronic signals alter the alignment of the gratings, and this displacement controls the intensity of the diffracted light in a very smooth gradation.
Brief history
The light valve was initially developed at Stanford University, in California, by electrical engineering professor David M. Bloom, along with William C. Banyai, Raj Apte, Francisco Sandejas, and Olav Solgaard, professor in the Stanford Department of Electrical Engineering. In 1994, the start-up company Silicon Light Machines was founded by Bloom to develop and commercialize the technology. Cypress Semiconductor acquired Silicon Light Machines in 2000 and sold the company to Dainippon Screen. Before the acquisition by Dainippon Screen, several marketing articles were published in EETimes, EETimes China, EETimes Taiwan, Electronica Olgi, and Fibre Systems Europe, highlighting Cypress Semiconductor's new MEMS manufacturing capabilities. The company is now wholly owned by Dainippon Screen Manufacturing Co., Ltd.
In July 2000, Sony announced the signing of a technology li |
https://en.wikipedia.org/wiki/List%20of%20algebraic%20coding%20theory%20topics | This is a list of algebraic coding theory topics.
Algebraic coding theory |
https://en.wikipedia.org/wiki/Gilbert%E2%80%93Varshamov%20bound | In coding theory, the Gilbert–Varshamov bound (due to Edgar Gilbert and independently Rom Varshamov) is a limit on the parameters of a (not necessarily linear) code. It is occasionally known as the Gilbert–Shannon–Varshamov bound (or the GSV bound), but the name "Gilbert–Varshamov bound" is by far the most popular. Varshamov proved this bound by using the probabilistic method for linear codes. For more about that proof, see Gilbert–Varshamov bound for linear codes.
Statement of the bound
Let A_q(n,d) denote the maximum possible size of a q-ary code with length n and minimum Hamming distance d (a q-ary code is a code over the field of q elements).
Then:

    A_q(n,d) ≥ q^n / ( C(n,0) + C(n,1)(q−1) + ... + C(n,d−1)(q−1)^(d−1) ),

where C(n,j) denotes the binomial coefficient.
Proof
Let C be a code of length n and minimum Hamming distance d having maximal size: |C| = A_q(n,d).
Then for all x in F_q^n, there exists at least one codeword c_x in C such that the Hamming distance d(x, c_x) between x and c_x satisfies d(x, c_x) ≤ d − 1,
since otherwise we could add x to the code whilst maintaining the code's minimum Hamming distance – a contradiction on the maximality of |C|.
Hence the whole of F_q^n is contained in the union of all balls of radius d − 1 having their centre at some c in C.
Now each ball has size C(n,0) + C(n,1)(q−1) + ... + C(n,d−1)(q−1)^(d−1),
since we may allow (or choose) up to d − 1 of the n components of a codeword to deviate (from the value of the corresponding component of the ball's centre) to one of (q − 1) possible other values (recall: the code is q-ary: it takes values in F_q). Hence we deduce

    q^n ≤ |C| · ( C(n,0) + C(n,1)(q−1) + ... + C(n,d−1)(q−1)^(d−1) ).

That is:

    A_q(n,d) ≥ q^n / ( C(n,0) + C(n,1)(q−1) + ... + C(n,d−1)(q−1)^(d−1) ).
An improvement in the prime power case
For q a prime power, one can improve the bound to A_q(n,d) ≥ q^(n−k), where k is the greatest integer for which

    q^k < q^n / ( C(n−1,0) + C(n−1,1)(q−1) + ... + C(n−1,d−2)(q−1)^(d−2) ).
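A short Python sketch computing the bound from the statement above (didactic, not part of the original article):

```python
from math import comb

def gilbert_varshamov(q: int, n: int, d: int) -> int:
    """Lower bound on A_q(n, d): q**n over the volume of a
    Hamming ball of radius d - 1, rounded up."""
    ball_volume = sum(comb(n, j) * (q - 1) ** j for j in range(d))
    return -(-q ** n // ball_volume)        # ceiling division

# A binary code of length 7 and minimum distance 3 must exist with at
# least this many codewords:
print(gilbert_varshamov(2, 7, 3))   # ceil(128 / 29) = 5
```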
See also
Singleton bound
Hamming bound
Johnson bound
Plotkin bound
Griesmer bound
Grey–Rankin bound
Gilbert–Varshamov bound for linear codes
Elias-Bassalygo bound |
https://en.wikipedia.org/wiki/Littlewood%27s%20three%20principles%20of%20real%20analysis | Littlewood's three principles of real analysis are heuristics of J. E. Littlewood to help teach the essentials of measure theory in mathematical analysis.
The principles
Littlewood stated the principles in his 1944 Lectures on the Theory of Functions as: "There are three principles, roughly expressible in the following terms: Every (measurable) set is nearly a finite sum of intervals; every function (of class Lλ) is nearly continuous; every convergent sequence of functions is nearly uniformly convergent."
The first principle is based on the fact that the inner measure and outer measure are equal for measurable sets, the second is based on Lusin's theorem, and the third is based on Egorov's theorem.
Example
Littlewood's three principles are quoted in several real analysis texts, for example Royden, Bressoud, and Stein & Shakarchi.
Royden gives the bounded convergence theorem as an application of the third principle. The theorem states that if a uniformly bounded sequence of functions converges pointwise, then their integrals on a set of finite measure converge to the integral of the limit function. If the convergence were uniform this would be a trivial result, and Littlewood's third principle tells us that the convergence is almost uniform, that is, uniform outside of a set of arbitrarily small measure. Because the sequence is bounded, the contribution to the integrals of the small set can be made arbitrarily small, and the integrals on the remainder converge because the functions are uniformly convergent there.
Notes
Real analysis
Heuristics
Measure theory
Mathematical principles |
https://en.wikipedia.org/wiki/IBRIX%20Fusion | IBRIX Fusion is a parallel file system combined with a logical volume manager, availability features and a management interface. The software was produced, sold, and supported by IBRIX Incorporated of Billerica, Massachusetts. HP announced on July 17, 2009 that it had reached a definitive agreement to acquire IBRIX. Subsequent to the acquisition, the software components of IBRIX have been combined with ProLiant servers to form the X9000 series of storage systems.
The X9000 storage systems are designed to provide network-attached storage over both standard protocols (SMB, NFS, HTTP and NDMP) as well as a proprietary protocol. Architecturally, the file system is limited to 16 petabytes under a single namespace, and is based upon a design described in .
It was used in the HPE StoreOnce (formerly D2D) products.
See also
List of file systems
Distributed file system |
https://en.wikipedia.org/wiki/Algebraic%20geometry%20code | Algebraic geometry codes, often abbreviated AG codes, are a type of linear code that generalize Reed–Solomon codes. The Russian mathematician V. D. Goppa constructed these codes for the first time in 1982.
History
The name of these codes has evolved since the publication of Goppa's paper describing them. Historically these codes have also been referred to as geometric Goppa codes; however, this is no longer the standard term used in coding theory literature. This is due to the fact that Goppa codes are a distinct class of codes which were also constructed by Goppa in the early 1970s.
These codes attracted interest in the coding theory community because they have the ability to surpass the Gilbert–Varshamov bound; at the time this was discovered, the Gilbert–Varshamov bound had not been broken in the 30 years since its discovery. This was demonstrated by Tsfasman, Vladut, and Zink in the same year as the code construction was published, in their paper "Modular curves, Shimura curves, and Goppa codes, better than Varshamov-Gilbert bound". The name of this paper may be one source of confusion affecting references to algebraic geometry codes throughout the 1980s and 1990s coding theory literature.
Construction
In this section the construction of algebraic geometry codes is described. The section starts with the ideas behind Reed–Solomon codes, which are used to motivate the construction of algebraic geometry codes.
Reed–Solomon codes
Algebraic geometry codes are a generalization of Reed–Solomon codes. Constructed by Irving Reed and Gustave Solomon in 1960, Reed–Solomon codes use univariate polynomials to form codewords, by evaluating polynomials of sufficiently small degree at points in a finite field F_q.
Formally, Reed–Solomon codes are defined in the following way. Let q be a prime power. Set positive integers k ≤ n ≤ q, and let α_1, ..., α_n be n distinct elements of F_q. The Reed–Solomon code is the evaluation code consisting of the words (f(α_1), ..., f(α_n)) as f ranges over the polynomials in F_q[x] of degree less than k.
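A toy Python sketch of this evaluation map over the prime field F_7 (a didactic illustration; practical implementations usually work over larger fields such as GF(256)):

```python
q = 7                              # prime, so arithmetic mod q is a field
n, k = 6, 3                        # code length and dimension, k <= n <= q
alphas = list(range(1, n + 1))     # n distinct evaluation points in F_q

def rs_encode(message):
    """Treat `message` (k field elements) as the coefficients of a polynomial
    of degree < k and evaluate it at the n points to get the codeword."""
    assert len(message) == k
    return [sum(m * pow(a, i, q) for i, m in enumerate(message)) % q
            for a in alphas]

print(rs_encode([2, 0, 5]))        # codeword of f(x) = 2 + 5x^2 over F_7
```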
Codes from algebraic curves
Goppa observed that can be considered as an affine line, with corresponding projective lin |
https://en.wikipedia.org/wiki/Singin%27%20in%20the%20Rain%20%28song%29 | "Singin' in the Rain" is a song with lyrics by Arthur Freed and music by Nacio Herb Brown. Doris Eaton Travis introduced the song on Broadway in The Hollywood Music Box Revue in 1929. It was then widely popularized by Cliff Edwards and the Brox Sisters in The Hollywood Revue of 1929. Many contemporary artists have since recorded the song.
The musical film of the same name, Singin' in the Rain (1952), was "suggested by" the song. The performance by Gene Kelly dancing through puddles in a rainstorm garnered the song the third spot on the American Film Institute ranking of 100 Years...100 Songs.
Song form
The song has an unusual form: the 32-bar chorus, rather than being preceded by a verse and containing an internal bridge as was becoming standard at the time, opens the song and then is followed by a 24-bar verse that has the feeling of a bridge before the chorus repeats.
Covers
B.A. Rolfe and his Lucky Strike Orchestra recorded the song possibly as early as 1928 but perhaps 1929. The song was recorded by Annette Hanshaw (reissued on the 1999 CD Annette Hanshaw, Volume 6, 1929). It is performed on film by a nightclub band as dance music and sung in a Chinese dialect in The Ship from Shanghai (1930), by Jimmy Durante in Speak Easily (1932), by Judy Garland in Little Nellie Kelly (1940), and as background music at the beginning of MGM's The Divorcee (1930) starring Norma Shearer.
Singer Nick Lucas recorded Singing in the Rain in 1929 (one week after recording what would become the biggest hit of his career, Tiptoe Through the Tulips).
British duo Bob and Alf Pearson recorded the song in 1929 at their first session.
"Singin' in the Rain" was performed in the 1930 film short Dogville Melody, presumably by Zion Myers and Jules White.
Valaida Snow recorded it in 1935 accompanied by Billy Mason And His Orchestra - London, Apr 26, 1935 (Parlophone (E)F-165 (CE-6953-1))
The song is sung by Dean Martin in a November 1950 episode of the variety show The Colgate Comedy Hour. |
https://en.wikipedia.org/wiki/Cool%27n%27Quiet | AMD Cool'n'Quiet is a CPU dynamic frequency scaling and power saving technology introduced by AMD with its Athlon XP processor line. It works by reducing the processor's clock rate and voltage when the processor is idle. The aim of this technology is to reduce overall power consumption and lower heat generation, allowing for slower (thus quieter) cooling fan operation. The objectives of cooler and quieter result in the name Cool'n'Quiet. The technology is similar to Intel's SpeedStep and AMD's own PowerNow!, which were developed with the aim of increasing laptop battery life by reducing power consumption.
Due to their different usage, Cool'n'Quiet refers to desktop and server chips, while PowerNow! is used for mobile chips; the technologies are similar but not identical. This technology was also introduced on "e-stepping" Opterons, however it is called Optimized Power Management, which is essentially a re-tooled Cool'n'Quiet scheme designed to work with registered memory.
Cool'n'Quiet is fully supported in the Linux kernel from version 2.6.18 onward (using the powernow-k8 driver) and FreeBSD from 6.0-CURRENT onward.
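On a Linux system with the driver loaded, the kernel's cpufreq subsystem exposes the scaling state through sysfs; a small Python sketch for inspecting it (the paths are the standard cpufreq sysfs locations, though not every kernel exposes every attribute):

```python
from pathlib import Path

cpu0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")
for attr in ("scaling_driver", "scaling_governor", "scaling_cur_freq"):
    f = cpu0 / attr
    if f.exists():                          # attribute availability varies
        print(attr, "=", f.read_text().strip())
```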
Implementation
In order to take advantage of Cool'n'Quiet technology in Microsoft's operating systems:
Cool'n'Quiet should be enabled in the system BIOS
In Windows XP and 2000: the operating system's "Minimal Power Management" profile must be active in "Power Schemes". A PPM driver was also released by AMD that facilitates this.
In Windows Vista and 7: "Minimum processor state" found in "Processor Power Management" of "Advanced Power Settings" should be lower than "100%".
Also, in Windows Vista and 7, the "Power Saver" power profile allows a much lower power state (frequency and voltage) than the "High Performance" power profile.
Unlike Windows XP, Windows Vista only supports Cool'n'Quiet on motherboards that support ACPI 2.0 or later.
With earlier versions of Windows, processor drivers along with Cool'n'Quiet software also need to be installed. The latest v |
https://en.wikipedia.org/wiki/Same-origin%20policy | In computing, the same-origin policy (SOP) is an important concept in the web application security model. Under the policy, a web browser permits scripts contained in a first web page to access data in a second web page, but only if both web pages have the same origin. An origin is defined as a combination of URI scheme, host name, and port number. This policy prevents a malicious script on one page from obtaining access to sensitive data on another web page through that page's Document Object Model (DOM).
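A Python sketch of the origin comparison (a model of the scheme/host/port rule just described, not a browser's actual implementation):

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url: str):
    p = urlsplit(url)
    # Fall back to the scheme's default port when none is given explicitly.
    return (p.scheme, p.hostname, p.port or DEFAULT_PORTS.get(p.scheme))

def same_origin(a: str, b: str) -> bool:
    return origin(a) == origin(b)

print(same_origin("http://example.com/a", "http://example.com:80/b"))  # True
print(same_origin("http://example.com/", "https://example.com/"))      # False
print(same_origin("http://example.com/", "http://api.example.com/"))   # False
```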
This mechanism bears a particular significance for modern web applications that extensively depend on HTTP cookies to maintain authenticated user sessions, as servers act based on the HTTP cookie information to reveal sensitive information or take state-changing actions. A strict separation between content provided by unrelated sites must be maintained on the client-side to prevent the loss of data confidentiality or integrity.
Crucially, the same-origin policy applies only to scripts: resources such as images, CSS, and dynamically loaded scripts can be accessed across origins via the corresponding HTML tags (with fonts being a notable exception). Attacks take advantage of the fact that the same-origin policy does not apply to HTML tags.
History
The concept of same-origin policy was introduced by Netscape Navigator 2.02 in 1995, shortly after the introduction of JavaScript in Netscape 2.0. JavaScript enabled scripting on web pages, and in particular programmatic access to the Document Object Model (DOM).
The policy was originally designed to protect access to the DOM, but has since been broadened to protect sensitive parts of the global JavaScript object.
Implementation
All modern browsers implement some form of the same-origin policy as it is an important security cornerstone. The policies are not required to match an exact specification but are often extended to define roughly compatible security boundaries for |
https://en.wikipedia.org/wiki/Adeno-associated%20virus | Adeno-associated viruses (AAV) are small viruses that infect humans and some other primate species. They belong to the genus Dependoparvovirus, which in turn belongs to the family Parvoviridae. They are small (approximately 26 nm in diameter) replication-defective, nonenveloped viruses and have linear single-stranded DNA (ssDNA) genome of approximately 4.8 kilobases (kb).
Several features make AAV an attractive candidate for creating viral vectors for gene therapy, and for the creation of isogenic human disease models. Gene therapy vectors using AAV can infect both dividing and quiescent cells and persist in an extrachromosomal state without integrating into the genome of the host cell. In the native virus, however, integration of virally carried genes into the host genome does occur. Integration can be important for certain applications, but can also have unwanted consequences. Recent human clinical trials using AAV for gene therapy in the retina have shown promise.
In March 2023, a series of Nature papers linked infection of adeno-associated virus 2 (AAV2) to a wave of childhood hepatitis.
History
The adeno-associated virus (AAV), previously thought to be a contaminant in adenovirus preparations, was first identified as a dependoparvovirus in the 1960s in the laboratories of Bob Atchison at Pittsburgh and Wallace Rowe at NIH. Serological studies in humans subsequently indicated that, despite being present in people infected by helper viruses such as adenovirus or herpes virus, AAV itself did not cause any disease.
Use in gene therapy
Advantages and drawbacks
Wild-type AAV has attracted considerable interest from gene therapy researchers due to a number of features. Chief amongst these was the virus's apparent lack of pathogenicity. It can also infect non-dividing cells and has the ability to stably integrate into the host cell genome at a specific site (designated AAVS1) in the human chromosome 19. This feature makes it somewhat more predictable than retrovir |
https://en.wikipedia.org/wiki/Astronomical%20constant | An astronomical constant is any of several physical constants used in astronomy. Formal sets of constants, along with recommended values, have been defined by the International Astronomical Union (IAU) several times: in 1964 and in 1976 (with an update in 1994). In 2009 the IAU adopted a new current set, and recognizing that new observations and techniques continuously provide better values for these constants, they decided to not fix these values, but have the Working Group on Numerical Standards continuously maintain a set of Current Best Estimates. The set of constants is widely reproduced in publications such as the Astronomical Almanac of the United States Naval Observatory and HM Nautical Almanac Office.
Besides the IAU list of units and constants, also the International Earth Rotation and Reference Systems Service defines constants relevant to the orientation and rotation of the Earth, in its technical notes.
The IAU system of constants defines a system of astronomical units for length, mass and time (in fact, several such systems), and also includes constants such as the speed of light and the constant of gravitation which allow transformations between astronomical units and SI units. Slightly different values for the constants are obtained depending on the frame of reference used. Values quoted in barycentric dynamical time (TDB) or equivalent time scales such as the Teph of the Jet Propulsion Laboratory ephemerides represent the mean values that would be measured by an observer on the Earth's surface (strictly, on the surface of the geoid) over a long period of time. The IAU also recommends values in SI units, which are the values which would be measured (in proper length and proper time) by an observer at the barycentre of the Solar System: these are obtained by the following transformations:
Astronomical system of units
The astronomical unit of time is a time interval of one day (D) of 86400 seconds. The astronomical unit of mass is the mass of the |
https://en.wikipedia.org/wiki/Live%20steam | Live steam is steam under pressure, obtained by heating water in a boiler. The steam may be used to operate stationary or moving equipment.
A live steam machine or device is one powered by steam, but the term is usually reserved for those that are replicas, scale models, toys, or otherwise used for heritage, museum, entertainment, or recreational purposes, to distinguish them from similar devices powered by electricity, internal combustion, or some other more convenient method but designed to look as if they are steam-powered. Revenue-earning steam-powered machines such as mainline and narrow gauge steam locomotives, full-sized steamships, and the worldwide electric power-generating industry steam turbines are not normally referred to as "live steam".
Steamrollers and traction engines are popular, in 1:4 or 1:3 scale, as are model stationary steam engines, ranging from pocket-size to 1:2 scale.
Railroads or railways
Ridable, large-scale live steam railroading on a backyard railroad is a popular aspect of the live steam hobby, but it is time-consuming to build a locomotive from scratch and it can be costly to purchase one already built. Garden railways, in smaller scales (that cannot pull a "live" person nor be ridden on), offer the benefits of real steam engines (and at lower cost and in less space), but do not provide the same experience as operating one's own locomotive in the larger scales and riding on (or behind) it, while doing so.
One of the most famous live steam railroads was Walt Disney's Carolwood Pacific Railroad around his California home; it later inspired Walt Disney to surround his planned Disneyland amusement park with a working, narrow gauge railroad.
The live steam hobby is especially popular in the UK, US, New Zealand, Australia, and Japan. All over the world, there are hundreds of clubs and associations as well as many thousands of private backyard railroads. The world's largest live steam layout, with over of trackage is Train Mounta |
https://en.wikipedia.org/wiki/Shulba%20Sutras | The Shulva Sutras or Śulbasūtras (Sanskrit: शुल्बसूत्र; śulba: "string, cord, rope") are sutra texts belonging to the Śrauta ritual and containing geometry related to fire-altar construction.
Purpose and origins
The Shulba Sutras are part of the larger corpus of texts called the Shrauta Sutras, considered to be appendices to the Vedas. They are the only sources of knowledge of Indian mathematics from the Vedic period. Unique fire-altar shapes were associated with unique gifts from the Gods. For instance, "he who desires heaven is to construct a fire-altar in the form of a falcon"; "a fire-altar in the form of a tortoise is to be constructed by one desiring to win the world of Brahman" and "those who wish to destroy existing and future enemies should construct a fire-altar in the form of a rhombus".
The four major Shulba Sutras, which are mathematically the most significant, are those attributed to Baudhayana, Manava, Apastamba and Katyayana. Their language is late Vedic Sanskrit, pointing to a composition roughly during the 1st millennium BCE. The oldest is the sutra attributed to Baudhayana, possibly compiled around 800 BCE to 500 BCE. Pingree says that the Apastamba is likely the next oldest; he places the Katyayana and the Manava third and fourth chronologically, on the basis of apparent borrowings. According to Plofker, the Katyayana was composed after "the great grammatical codification of Sanskrit by Pāṇini in probably the mid-fourth century BCE", but she places the Manava in the same period as the Baudhayana.
With regard to the composition of Vedic texts, Plofker writes, "The Vedic veneration of Sanskrit as a sacred speech, whose divinely revealed texts were meant to be recited, heard, and memorized rather than transmitted in writing, helped shape Sanskrit literature in general. ... Thus texts were composed in formats that could be easily memorized: either condensed prose aphorisms (sūtras, a word later applied to mean a rule or algorithm in general) or verse."
https://en.wikipedia.org/wiki/Brahmagupta%20theorem | In geometry, Brahmagupta's theorem states that if a cyclic quadrilateral is orthodiagonal (that is, has perpendicular diagonals), then the perpendicular to a side from the point of intersection of the diagonals always bisects the opposite side. It is named after the Indian mathematician Brahmagupta (598-668).
More specifically, let A, B, C and D be four points on a circle such that the lines AC and BD are perpendicular. Denote the intersection of AC and BD by M. Drop the perpendicular from M to the line BC, calling the intersection E. Let F be the intersection of the line EM and the edge AD. Then the theorem states that F is the midpoint of AD.
Proof
We need to prove that AF = FD. We will prove that both AF and FD are in fact equal to FM.
To prove that AF = FM, first note that the angles FAM and CBM are equal, because they are inscribed angles that intercept the same arc of the circle. Furthermore, the angles CBM and CME are both complementary to angle BCM (i.e., they add up to 90°), and are therefore equal. Finally, the angles CME and FMA are the same. Hence, AFM is an isosceles triangle, and thus the sides AF and FM are equal.
The proof that FD = FM goes similarly: the angles FDM, BCM, BME and DMF are all equal, so DFM is an isosceles triangle, so FD = FM. It follows that AF = FD, as the theorem claims.
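The angle-chasing argument above is easy to confirm numerically. The following Python sketch (an illustrative check added to this excerpt, not part of the original article) samples random cyclic quadrilaterals with perpendicular diagonals on the unit circle and verifies that the line EM meets AD at its midpoint:

```python
# Numerical sanity check of Brahmagupta's theorem on the unit circle.
import numpy as np

rng = np.random.default_rng(0)

def chord_through(point, direction):
    """Return the two intersections of the unit circle with the line
    point + t * direction (point must lie inside the circle)."""
    d = direction / np.linalg.norm(direction)
    # |point + t d|^2 = 1  =>  t^2 + 2 (point . d) t + (point . point - 1) = 0
    b = point @ d
    disc = np.sqrt(b * b - (point @ point - 1.0))
    return point + (-b + disc) * d, point + (-b - disc) * d

for _ in range(100):
    # Random diagonal AC, and a point M on it strictly inside the circle.
    theta = rng.uniform(0, 2 * np.pi, size=2)
    A, C = np.column_stack([np.cos(theta), np.sin(theta)])
    M = A + rng.uniform(0.2, 0.8) * (C - A)
    # BD is the chord through M perpendicular to AC, so M = AC ∩ BD.
    u = C - A
    B, D = chord_through(M, np.array([-u[1], u[0]]))
    # E is the foot of the perpendicular from M onto line BC.
    v = C - B
    E = B + ((M - B) @ v) / (v @ v) * v
    # F is the intersection of line EM with line AD:
    # solve E + s (M - E) = A + t (D - A) for (s, t).
    s, t = np.linalg.solve(np.column_stack([M - E, A - D]), A - E)
    F = A + t * (D - A)
    assert np.allclose(F, (A + D) / 2, atol=1e-9)

print("F coincided with the midpoint of AD in all sampled configurations.")
```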
See also
Brahmagupta's formula for the area of a cyclic quadrilateral
https://en.wikipedia.org/wiki/Window%20of%20the%20World | The Window of the World (Chinese: 世界之窗) is a theme park located in the western part of the city of Shenzhen in the People's Republic of China. It has about 130 reproductions of some of the most famous tourist attractions in the world squeezed into 48 hectares (118 acres). The 108-metre (354 ft) Eiffel Tower dominates the skyline, and the sight of the Pyramids and the Taj Mahal in close proximity to each other is part of the park's appeal.
Transportation
The Window of the World Station on Line 1 and Line 2 of the Shenzhen Metro is located directly in front of the park. The Happy Line monorail has a stop near Window of the World.
A monorail and open cars run inside the park.
In media
In his autobiographical graphic novel Shenzhen, Guy Delisle visits the park with a Chinese acquaintance. The park was also a destination on The Amazing Race 28.
List of major attractions in the Window of the World
Europe region
The Matterhorn and the Alps between the Valais canton of Switzerland and the Aosta Valley region of Italy
The gloriette in the gardens at Schönbrunn Palace and the Johann Strauss monument at Stadtpark of Vienna, Austria
The Lion's Mound near Waterloo, Belgium
The Little Mermaid statue of Copenhagen, Denmark
The Eiffel Tower, Arc de Triomphe, Louvre Pyramid, Notre Dame cathedral, Grande Arche, Fountain of Warsaw, and Fontaine de l'Observatoire in Paris, Île-de-France
The Palace of Versailles near the town of Versailles, Île-de-France
Mont Saint-Michel in Normandy
The Pont du Gard aqueduct of Vers-Pont-du-Gard, Languedoc-Roussillon
The Cologne cathedral of Cologne, North Rhine-Westphalia
Neuschwanstein Castle of Hohenschwangau, Bavaria
The Acropolis of Athens, Attica
The Lion Gate at Mycenae of Mykines, Argolis
The Colosseum, St. Peter's Basilica, Palazzo Poli, Trajan's Column, and Spanish Steps of Rome, Lazio
Canals and St. Mark's Square of Venice, Veneto
The Leaning Tower and cathedral of Pisa, Tuscany
The Piazza della Signoria of Florence, Tuscany
The windmills and tulips of the Netherlands
https://en.wikipedia.org/wiki/Lenore%20Blum | Lenore Carol Blum (née Epstein, born December 18, 1942) is an American computer scientist and mathematician who has made contributions to the theories of real number computation, cryptography, and pseudorandom number generation. She was a distinguished career professor of computer science at Carnegie Mellon University until 2019 and is currently a professor in residence at the University of California, Berkeley. She is also known for her efforts to increase diversity in mathematics and computer science.
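Among her contributions to pseudorandom number generation is the Blum Blum Shub generator (1986, with Manuel Blum and Michael Shub), which repeatedly squares a residue modulo a product of two primes and emits its low-order bit. A minimal Python sketch follows; the primes and seed here are toy values chosen for illustration and are far too small for any real cryptographic use:

```python
# Toy sketch of the Blum Blum Shub generator (illustrative values only;
# real use needs large secret primes and a securely chosen seed).
def blum_blum_shub(seed, n_bits, p=11, q=23):
    # p and q must be primes with p ≡ q ≡ 3 (mod 4); here M = 253.
    M = p * q
    x = seed % M              # seed must be coprime to M and not 0 or 1
    bits = []
    for _ in range(n_bits):
        x = (x * x) % M       # squaring modulo M advances the internal state
        bits.append(x & 1)    # emit the least-significant bit
    return bits

print(blum_blum_shub(seed=3, n_bits=16))
```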
Early life and education
Blum was born to a Jewish family in New York City, where her mother was a science teacher. They moved to Venezuela when Blum was nine.
After graduating from her Venezuelan high school at age 16, she studied architecture at Carnegie Institute of Technology (now Carnegie Mellon University) beginning in 1959. With the assistance of Alan Perlis, she shifted fields to mathematics in 1960. She married Manuel Blum, then a student at the Massachusetts Institute of Technology, and transferred in 1961 to Simmons College, a private women's liberal arts college in Boston. Simmons did not have a strong mathematics program but she was eventually able to take Isadore Singer's mathematics classes at MIT, graduating from Simmons with a B.S. in mathematics in 1963.
She received her Ph.D. in mathematics from the Massachusetts Institute of Technology in 1968. Her dissertation, Generalized Algebraic Theories: A Model Theoretic Approach, was supervised by Gerald Sacks. She had switched to being advised by Sacks after being unable to follow an earlier advisor in his move to Princeton University because, at the time, Princeton did not accept female graduate students.
Career
After completing her doctorate, Blum went to the University of California at Berkeley to work with Julia Robinson as a postdoctoral fellow and lecturer in mathematics.
However, the department had no permanent positions for women, and after two years, her position as lecturer was not renewed.
https://en.wikipedia.org/wiki/Splendid%20China%20Folk%20Village | Splendid China Folk Village (Chinese: 锦绣中华民俗村, pinyin: Jǐnxiù Zhōnghuá Mínsú Cūn) is a theme park comprising two areas (Splendid China Miniature Park and China Folk Culture Village) located in Shenzhen, Guangdong province, People's Republic of China. The park's theme reflects the history, culture, art, ancient architecture, customs and habits of various nationalities. In the number of scenes reproduced, it is among the largest miniature scenery parks in the world. The park is developed and managed by the major travel and tourist corporation, China Travel Service.
Location
Splendid China is situated by the Shenzhen Bay in a tourist area of Overseas Chinese Town (OCT) in the Shenzhen Special Economic Zone. It is a 35-40 minute train ride from Luohu Station on Line 1 of the Shenzhen Metro or 30 minutes by bus (bus number 101 or mini-bus 23 are two examples).
Hours and tickets
Opening hours: 9:00 am to 9:00 pm
Last entry: 6:00 pm
1-day ticket
Ticket price: RMB 220
Child ticket: RMB 90 (height 1.2 m to 1.5 m)
Small children: free (under 1.2 m in height)
Annual ticket
Solo annual ticket: RMB 360 (valid for one person only)
Parent annual ticket: RMB 460 (one parent with one child under 1.5 m)
Family annual ticket: RMB 660 (two parents with one child under 1.5 m)
Note: annual ticket prices are as of February 2011; the solo annual ticket price is as of November 2015.
About the park
Over 100 major tourist attractions have been miniaturized and laid out according to the map of China. Most attractions have been reproduced at a scale of 1:15. The park is divided into a Scenic Spot Area and a Comprehensive Service Area, and covers 30 hectares in all.
There are cars and trains to transport visitors around the park, making it possible to visit the Great Wall of China, Forbidden City, Temple of Heaven, Summer Palace, Three Gorges Dam, Potala Palace and the Terracotta Army in one day.
The park also hosts several shows depicting various events in Chinese history (e.g., a horse-riding show re-enacting a battle led by Genghis Khan).
https://en.wikipedia.org/wiki/Wigner%20quasiprobability%20distribution | The Wigner quasiprobability distribution (also called the Wigner function or the Wigner–Ville distribution, after Eugene Wigner and Jean-André Ville) is a quasiprobability distribution. It was introduced by Eugene Wigner in 1932 to study quantum corrections to classical statistical mechanics. The goal was to link the wavefunction that appears in Schrödinger's equation to a probability distribution in phase space.
It is a generating function for all spatial autocorrelation functions of a given quantum-mechanical wavefunction ψ(x).
Thus, it corresponds to the quantum density matrix under the map between real phase-space functions and Hermitian operators introduced by Hermann Weyl in 1927, in a context related to representation theory in mathematics (see Weyl quantization). In effect, it is the Wigner–Weyl transform of the density matrix, i.e., the realization of that operator in phase space. It was later rederived by Jean Ville in 1948 as a quadratic (in signal) representation of the local time-frequency energy of a signal, effectively a spectrogram.
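For reference, in one standard convention (the formula itself does not survive in this excerpt) the Wigner function of a pure state with wavefunction ψ is

W(x, p) = \frac{1}{\pi \hbar} \int_{-\infty}^{\infty} \psi^*(x + y)\, \psi(x - y)\, e^{2 i p y / \hbar}\, dy,

a real-valued function whose marginals over p and x recover the position and momentum probability densities, respectively.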
In 1949, José Enrique Moyal, who had derived it independently, recognized it as the quantum moment-generating functional, and thus as the basis of an elegant encoding of all quantum expectation values, and hence quantum mechanics, in phase space (see Phase-space formulation). It has applications in statistical mechanics, quantum chemistry, quantum optics, classical optics and signal analysis in diverse fields, such as electrical engineering, seismology, time–frequency analysis for music signals, spectrograms in biology and speech processing, and engine design.
Relation to classical mechanics
A classical particle has a definite position and momentum, and hence it is represented by a point in phase space. Given a collection (ensemble) of particles, the probability of finding a particle at a certain position in phase space is specified by a probability distribution, the Liouville density. This strict interpretation fails for a quantum particle, due to the uncertainty principle.
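To make the classical statement concrete (a standard formula, supplied here for illustration rather than taken from the article): for a Hamiltonian H(x, p), the Liouville density ρ(x, p, t) is carried along the classical trajectories and therefore satisfies the Liouville equation

\frac{\partial \rho}{\partial t} = \{H, \rho\} = \frac{\partial H}{\partial x} \frac{\partial \rho}{\partial p} - \frac{\partial H}{\partial p} \frac{\partial \rho}{\partial x}.

The quantum evolution of the Wigner function (governed by the Moyal bracket) reduces to this equation in the limit ħ → 0.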