| source | text |
|---|---|
https://en.wikipedia.org/wiki/Composition%20of%20relations | In the mathematics of binary relations, the composition of relations is the forming of a new binary relation from two given binary relations R and S. In the calculus of relations, the composition of relations is called relative multiplication, and its result is called a relative product. Function composition is the special case of composition of relations where all relations involved are functions.
The word uncle indicates a compound relation: for a person to be an uncle, he must be the brother of a parent. In algebraic logic it is said that the relation of Uncle ($xUz$) is the composition of relations "is a brother of" ($xBy$) and "is a parent of" ($yPz$).
Beginning with Augustus De Morgan, the traditional form of reasoning by syllogism has been subsumed by relational logical expressions and their composition.
Definition
If $R \subseteq X \times Y$ and $S \subseteq Y \times Z$ are two binary relations, then
their composition $R \mathbin{;} S$ is the relation
$$R \mathbin{;} S = \{ (x, z) \in X \times Z \mid \text{there exists } y \in Y \text{ such that } (x, y) \in R \text{ and } (y, z) \in S \}.$$
In other words, $R \mathbin{;} S$ is defined by the rule that says $(x, z) \in R \mathbin{;} S$ if and only if there is an element $y \in Y$ such that $x\,R\,y\,S\,z$ (that is, $(x, y) \in R$ and $(y, z) \in S$).
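The definition can be checked mechanically; the following is a minimal Python sketch (the encoding of relations as sets of pairs, and the example names, are our own illustration, not from the source):

```python
def compose(R, S):
    """Relative product R ; S of binary relations given as sets of pairs.

    (x, z) is in R ; S exactly when some y satisfies (x, y) in R and (y, z) in S.
    """
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

# The "uncle" example from the text: Uncle = "is a brother of" ; "is a parent of".
brother_of = {("Bob", "Alice")}        # Bob is a brother of Alice
parent_of = {("Alice", "Carol")}       # Alice is a parent of Carol
print(compose(brother_of, parent_of))  # {('Bob', 'Carol')} -- Bob is Carol's uncle
```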
Notational variations
The semicolon as an infix notation for composition of relations dates back to Ernst Schröder's textbook of 1895. Gunther Schmidt has renewed the use of the semicolon, particularly in Relational Mathematics (2011). The use of semicolon coincides with the notation for function composition used (mostly by computer scientists) in category theory, as well as the notation for dynamic conjunction within linguistic dynamic semantics.
A small circle has been used for the infix notation of composition of relations by John M. Howie in his books considering semigroups of relations. However, the small circle is widely used to represent composition of functions which reverses the text sequence from the operation sequence. The small circle was used in the introductory pages of Graphs and Relations until it was dropped in favor of juxtaposition (no infix notation). Juxtaposition is commonly used in algebra to signify multiplication, so too, it can signify r |
https://en.wikipedia.org/wiki/Chronic%20care | Chronic care refers to medical care which addresses pre-existing or long-term illness, as opposed to acute care which is concerned with short term or severe illness of brief duration. Chronic medical conditions include asthma, diabetes, emphysema, chronic bronchitis, congestive heart disease, cirrhosis of the liver, hypertension and depression. Without effective treatment chronic conditions may lead to disability.
The incidence of chronic disease has increased as mortality rates have decreased. It is estimated that by 2030 half of the population of the USA will have one or more chronic conditions.
According to the CDC, 6 out of 10 adults in the U.S. are managing at least one chronic disease and 42% of adults have two or more chronic conditions.
Conditions, injuries and diseases which were previously fatal can now be treated with chronic care. Chronic care aims to maintain wellness by keeping symptoms in remission while balancing treatment regimes and quality of life. Many of the core functions of primary health care are central to chronic care. Chronic care is complex in nature because it may extend over a prolonged period of time and requires input from a diverse set of health professionals, various medications and possibly monitoring equipment.
Policy making
According to 2008 figures from the Centers for Disease Control and Prevention, chronic medical care accounts for more than 75% of health care spending in the US. In response to the increased government expenditure on chronic care, policy makers are searching for effective interventions and strategies. These strategies can broadly be grouped into four categories: disease prevention and early detection; new providers, settings and qualifications; disease management programs; and integrated care models.
Challenges
One of the major problems from a health care system which is poorly coordinated for people with chronic conditions is the incidence of patients receiving conflicting ad |
https://en.wikipedia.org/wiki/A%20History%20of%20Pi | A History of Pi (also titled A History of π) is a 1970 non-fiction book by Petr Beckmann that presents a layman's introduction to the concept of the mathematical constant pi (π).
Author
Beckmann was a Czechoslovakian who fled the Communist regime to go to the United States. His dislike of authority gives A History of Pi a style that belies its dry title. For example, his chapter on the era following the classical age of ancient Greece is titled "The Roman Pest"; he calls the Catholic Inquisition the act of "insane religious fanatics"; and he says that people who question public spending on scientific research are "intellectual cripples who drivel about 'too much technology' because technology has wounded them with the ultimate insult: 'They can't understand it any more.'"
Beckmann was a prolific scientific author who wrote several electrical engineering textbooks and non-technical works, founded Golem Press, which published most of his books, and published his own monthly newsletter, Access to Energy. In his self-published book Einstein Plus Two and in Internet flame wars, he claimed that the theory of relativity is incorrect.
Bibliography
A History of Pi was originally published as A History of π in 1970 by Golem Press. This edition did not cover any approximations of π calculated after 1946. A second edition, printed in 1971, added material on the calculation of π by electronic computers, but still contained historical and mathematical errors, such as an incorrect proof that there exist infinitely many prime numbers. A third edition was published as A History of Pi in 1976 by St. Martin's Press. It was published as A History of Pi by Hippocrene Books in 1990. The title is given as A History of Pi by both Amazon and WorldCat.
See also
History of Pi |
https://en.wikipedia.org/wiki/Italia%20turrita | Italia turrita ("Turreted Italy") is the national personification or allegory of Italy, in the appearance of a young woman with her head surrounded by a mural crown completed by towers (hence turrita, "with towers", in Italian). It is often accompanied by the Stella d'Italia ("Star of Italy"), from which derives the so-called Italia turrita e stellata ("turreted and starry Italy"), and by other additional attributes, the most common of which is the cornucopia. The allegorical representation with the towers, which draws its origins from ancient Rome, is typical of Italian civic heraldry, so much so that the mural crown is also the symbol of the cities of Italy.
Italia turrita, which is one of the national symbols of Italy, has been widely depicted for centuries in the fields of art, politics and literature. Its most classic aspect, which derives from the primordial myth of the Great Mediterranean Mother and which was definitively codified at the turn of the 16th and 17th centuries by Cesare Ripa, is meant to symbolically convey the royalty and nobility of Italian cities (through the turreted crown), the abundance of the agricultural crops of the Italian peninsula (represented by the cornucopia) and the shining destiny of Italy (symbolized by the Stella d'Italia).
Appearance and representation
The personification of Italy is generally depicted as a woman with a rather luxuriant body and typical Mediterranean attributes, such as a tanned complexion and dark hair. Throughout history, the attributes with which she is characterized have repeatedly changed: a bunch of wheat ears in hand (a symbol of fertility and a reference to the agricultural economy of the Italian peninsula), a sword or a scale (metaphors of justice), or a cornucopia (an allegory of abundance); during the Fascist era she also bore the fasces, one of the symbols of that political movement.
After the birth of the Italian flag in 1797, she is frequently shown in a green, white and red dress. Abov |
https://en.wikipedia.org/wiki/Multipoint%20ground | A multipoint ground is an alternate type of electrical installation that attempts to solve the ground loop and mains hum problem by creating many alternate paths for electrical energy to find its way back to ground. The distinguishing characteristic of a multipoint ground is the use of many interconnected grounding conductors into a loose grid configuration. There will be many paths between any two points in a multipoint grounding system, rather than the single path found in a star topology ground. This type of ground may also be known as a Signal Reference Grid or Ground (SRG) or an Equipotential Ground.
Advantages
If installed correctly, a multipoint ground can maintain a reference ground potential much better than a star topology in a similar application, across a wider range of frequencies and currents.
Disadvantages
A multipoint ground system is more complicated to install and maintain over the long term, and can be more expensive to install.
Star topology systems can be converted to multipoint systems by installing new conductors between old existing ones. However, this should be done with care as it can inadvertently introduce noise onto signal lines during the conversion process. The noise can be diminished over time as noisy and failed components are removed and repaired, but some isolation of high current (e.g. motors and lighting) and sensitive low current (e.g. amplifiers and radios) equipment may always be necessary.
Design considerations
A multipoint grounding system can solve several problems, but they must all be addressed in turn. The size of the conductors must be designed to meet the expected load in operations and in lightning protection. The amount of cross bonding, and the topology of the grids, is determined by the expected frequencies in the signals to be carried and the uses the installation will be put to.
A ground grid is provided primarily for safety, and the size of the conductors is probably governed by local building or electrical code. |
https://en.wikipedia.org/wiki/Levinson%27s%20inequality | In mathematics, Levinson's inequality is the following inequality, due to Norman Levinson, involving positive numbers. Let $a > 0$ and let $f$ be a given function having a third derivative on the range $(0, 2a)$, and such that
$$f'''(x) \ge 0$$
for all $x \in (0, 2a)$. Suppose $0 < x_i \le a$ and $0 < p_i$ for $i = 1, \ldots, n$. Then
$$\frac{\sum_{i=1}^n p_i f(x_i)}{\sum_{i=1}^n p_i} - f\!\left(\frac{\sum_{i=1}^n p_i x_i}{\sum_{i=1}^n p_i}\right) \le \frac{\sum_{i=1}^n p_i f(2a - x_i)}{\sum_{i=1}^n p_i} - f\!\left(\frac{\sum_{i=1}^n p_i (2a - x_i)}{\sum_{i=1}^n p_i}\right).$$
The Ky Fan inequality is the special case of Levinson's inequality where $p_i = 1$, $a = \tfrac{1}{2}$, and $f(x) = \log x$. |
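A quick numerical sanity check of the inequality is easy to run; the sketch below uses f(x) = x³ (so f‴ ≥ 0) with randomly chosen weights and points, all values being purely illustrative:

```python
import random

def levinson_sides(f, a, xs, ps):
    """Left- and right-hand sides of Levinson's inequality."""
    P = sum(ps)
    mean = lambda vs: sum(p * v for p, v in zip(ps, vs)) / P
    lhs = mean([f(x) for x in xs]) - f(mean(xs))
    ys = [2 * a - x for x in xs]        # reflected points in [a, 2a)
    rhs = mean([f(y) for y in ys]) - f(mean(ys))
    return lhs, rhs

random.seed(0)
f = lambda x: x ** 3                    # f'''(x) = 6 >= 0 on (0, 2a)
a = 1.0
xs = [random.uniform(0.01, a) for _ in range(10)]
ps = [random.uniform(0.1, 1.0) for _ in range(10)]
lhs, rhs = levinson_sides(f, a, xs, ps)
print(lhs <= rhs)                       # True, as the inequality asserts
```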
https://en.wikipedia.org/wiki/Hybrid%20speciation | Hybrid speciation is a form of speciation where hybridization between two different species leads to a new species, reproductively isolated from the parent species. Previously, reproductive isolation between two species and their parents was thought to be particularly difficult to achieve, and thus hybrid species were thought to be very rare. With DNA analysis becoming more accessible in the 1990s, hybrid speciation has been shown to be a somewhat common phenomenon, particularly in plants. In botanical nomenclature, a hybrid species is also called a nothospecies. Hybrid species are by their nature polyphyletic.
Ecology
A hybrid may occasionally be better fitted to the local environment than the parental lineage, and as such, natural selection may favor these individuals. If reproductive isolation is subsequently achieved, a separate species may arise. Reproductive isolation may be genetic, ecological, behavioral, spatial, or a combination of these.
If reproductive isolation fails to establish, the hybrid population may merge with either or both parent species. This leads to an influx of foreign genes into the parent population, a situation called introgression. Introgression is a source of genetic variation and can in itself facilitate speciation. There is evidence that introgression is a ubiquitous phenomenon in plants and animals, even in humans, where genetic material from Neanderthals and Denisovans is responsible for many of the immune genes in non-African populations.
Ecological constraints
For a hybrid form to persist, it must be able to exploit the available resources better than either parent species, with which, in most cases, it will have to compete. For example, while grizzly bears and polar bears may be able to mate and produce offspring, a grizzly–polar bear hybrid is apparently less suited to either of the parents' ecological niches than the original parent species themselves. So, although the hybrid is fertile (i.e. capable of reproducti |
https://en.wikipedia.org/wiki/Pandora%20FMS | Pandora FMS (for Pandora Flexible Monitoring System) is software for monitoring computer networks. Pandora FMS allows visually monitoring the status and performance of several parameters from different operating systems, servers, applications and hardware systems such as firewalls, proxies, databases, web servers or routers.
Pandora FMS can be deployed in almost any operating system. It features remote monitoring (WMI, SNMP, TCP, UDP, ICMP, HTTP...) and it can also use agents. An agent is available for each platform. It can also monitor hardware systems with a TCP/IP stack, such as load balancers, routers, network switches, printers or firewalls.
Pandora FMS has several servers that process and collect information from different sources: a WMI server for gathering remote Windows information, a predictive server, a plug-in server which runs complex user-defined network tests, an advanced export server to replicate data between different Pandora FMS sites, a network discovery server, and an SNMP Trap console.
Released under the terms of the GNU General Public License, Pandora FMS is free software. At first the project was hosted on SourceForge.net, from where it has been downloaded over one million times; it was selected as the "Staff Pick" Project of the Month in June 2016 and elected "Community Choice" Project of the Month in November 2017.
Components
Pandora Server
In Pandora FMS architecture, servers are the core of the system because they are the recipients of bundles of information. They also generate monitoring alerts. It is possible to have different modular configurations for the servers: several servers for very big systems, or just a single server. Servers are also responsible for inserting the gathered data into Pandora's database. It is possible to have several Pandora Servers connected to a single Database. Different servers are used for different kind of monitoring: remote monitoring, WMI monitoring, SNMP and other network monitoring, inventory recol |
https://en.wikipedia.org/wiki/MojoPac | MojoPac was an application virtualization product from RingCube Technologies. MojoPac turns any USB 2.0 storage device into a portable computing environment. The term "MojoPac" is used by the company to refer to the software application, the virtualized environment running inside this software, and the USB storage device that contains the software and relevant applications. MojoPac supports popular applications such as Firefox and Microsoft Office, and it also performs well enough to run popular PC games such as World of Warcraft, Minecraft and Half-Life 2.
The RingCube website is currently forwarded to Citrix, which has apparently purchased the company and discontinued MojoPac.
Usage
To initially set up the MojoPac device, the user runs the installer and selects a USB device attached to the system. Once MojoPac is installed, it creates an executable in the root of that device along with an autorun file that gives the user the option of starting the MojoPac environment automatically when the device is plugged in (subject to how the host PC is configured). Once this application is started, a new Windows Desktop (with its own wallpaper, icons, shell, etc.) is started up in the virtualized MojoPac environment. Any application that runs inside this environment runs off the USB device without affecting the filesystem of the host. A user installs most applications (including Microsoft Office, Adobe Photoshop, Firefox) on the portable storage device by simply running the installer inside this environment. The user can switch between the host environment and the MojoPac environment by using the MojoBar at the top of the screen. Once the user is done with the applications, they exit MojoPac and eject the USB device.
To run the applications on a different computer, the user does not need to reinstall the application. The user can plug the portable storage device into any Windows XP computer. All the user's settings, applications, and documents function the same irres |
https://en.wikipedia.org/wiki/Model%20railroad%20layout | In model railroading, a layout is a diorama containing scale track for operating trains. The size of a layout varies, from small shelf-top designs to ones that fill entire rooms, basements, or whole buildings.
Attention to modeling details such as structures and scenery is common. Simple layouts are generally situated on a table, although other methods are used, including doors. More permanent construction methods involve attaching benchwork framing to the walls of the room or building in which the layout is situated.
Track layout
An important aspect of any model railway is the layout of the track itself. Apart from the stations, there are four basic ways of arranging the track, and innumerable variations:
Continuous loop. A circle or oval, with trains going round and round. Used in train sets.
Point to point. A line with a station at each end, with trains going from one station to the other.
Out and back. A pear shaped track, with trains leaving a station, going round a reversing loop, and coming back to the same station.
Shunting (US: Switching). Either a station, a motive power depot or a yard where the primary mode of operation is shunting. This includes layouts which are built as train shunting puzzles, such as Timesaver and Inglenook Sidings.
Common variations:
On a point to point layout, the train can increase the time it takes to get from A to B by going around a continuous loop a few times.
Single or double track or more, so more trains can run at the same time.
Intermediate stations, to distinguish between express trains which go straight through and local trains which stop briefly.
Branch lines, to add an excuse for more stations and different types of trains.
Use of multiple levels.
Arranging the continuous loop as a figure-of-8, possibly with one track going over the other instead of having tracks crossing on the same level.
Folding one loop of a figure-of-8 over the other loop to produce a looped-8, so as to reduce the amount of space |
https://en.wikipedia.org/wiki/Nuclear%20pharmacy | Nuclear pharmacy, also known as radiopharmacy, involves preparation of radioactive materials for patient administration that will be used to diagnose and treat specific diseases in nuclear medicine. It generally involves the practice of combining a radionuclide tracer with a pharmaceutical component that determines the biological localization in the patient. Radiopharmaceuticals are generally not designed to have a therapeutic effect themselves, but there is a risk to staff from radiation exposure and to patients from possible contamination in production. Due to these intersecting risks, nuclear pharmacy is a heavily regulated field. The majority of diagnostic nuclear medicine investigations are performed using technetium-99m.
History
The concept of nuclear pharmacy was first described in 1960 by Captain William H. Briner while at the National Institutes of Health (NIH) in Bethesda, Maryland. Along with Mr. Briner, John E. Christian, who was a professor in the School of Pharmacy at Purdue University, had written articles and contributed in other ways to set the stage for nuclear pharmacy. William Briner started the NIH Radiopharmacy in 1958. John Christian and William Briner were both active on key national committees responsible for the development, regulation and utilization of radiopharmaceuticals. A technetium-99m generator became commercially available, followed by a number of Tc-99m based radiopharmaceuticals.
In the United States, nuclear pharmacy was the first pharmacy specialty; it was established in 1978 by the Board of Pharmacy Specialties.
Various models of production exist internationally. Institutional nuclear pharmacy is typically operated through large medical centers or hospitals while commercial centralized nuclear pharmacies provide their services to subscriber hospitals. They prepare and dispense radiopharmaceuticals as unit doses that are then delivered to the subscriber hospital by nuclear pharmacy personnel.
Operation
A few basic st |
https://en.wikipedia.org/wiki/Nonobtuse%20mesh | In computer graphics, a nonobtuse triangle mesh is a polygon mesh composed of a set of triangles in which no angle is obtuse, i.e. greater than 90°. If each (triangle) face angle is strictly less than 90°, then the triangle mesh is said to be acute. Every polygon with $n$ sides has a nonobtuse triangulation with $O(n)$ triangles (expressed in big O notation), allowing some triangle vertices to be added to the sides and interior of the polygon. These nonobtuse triangulations can be further refined to produce acute triangulations with $O(n)$ triangles.
Nonobtuse meshes avoid certain problems of nonconvergence or of convergence to the wrong numerical solution as demonstrated by the Schwarz lantern. The immediate benefits of a nonobtuse or acute mesh include more efficient and more accurate geodesic computation using fast marching, and guaranteed validity for planar mesh embeddings via discrete harmonic maps. |
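To make the definition concrete, here is a small Python sketch that classifies a 2-D triangle mesh; the dot-product test and the toy mesh are our own illustration:

```python
def has_obtuse_angle(a, b, c):
    """True if the triangle with 2-D vertices a, b, c has an angle > 90 degrees.

    The angle at a vertex is obtuse exactly when the dot product of the two
    edge vectors leaving that vertex is negative.
    """
    def corner_dot(p, q, r):
        return (q[0]-p[0])*(r[0]-p[0]) + (q[1]-p[1])*(r[1]-p[1])
    return any(corner_dot(p, q, r) < 0 for p, q, r in ((a, b, c), (b, c, a), (c, a, b)))

def mesh_is_nonobtuse(vertices, triangles):
    """Check every face; triangles hold indices into the vertex list."""
    return not any(has_obtuse_angle(*(vertices[i] for i in t)) for t in triangles)

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
tris = [(0, 1, 2), (1, 3, 2)]           # unit square split into two right triangles
print(mesh_is_nonobtuse(verts, tris))   # True: right angles are allowed, obtuse are not
```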
https://en.wikipedia.org/wiki/Sock%20and%20buskin | The sock and buskin (sometimes referring to drama masks or theatrical masks) are two ancient symbols of comedy and tragedy. In ancient Greek theatre, actors in tragic roles wore a boot called a buskin (Latin cothurnus). The actors with comedic roles wore only a thin-soled shoe called a sock (Latin soccus).
Melpomene, the muse of tragedy, is often depicted holding the tragic mask and wearing buskins. Thalia, the muse of comedy, is similarly associated with the mask of comedy and comic's socks. Some people refer to the masks themselves as "Sock and Buskin."
In modern times, they have been used as an iconographic representation of theatre as a whole. |
https://en.wikipedia.org/wiki/Implicit%20solvation | Implicit solvation (sometimes termed continuum solvation) is a method to represent solvent as a continuous medium instead of individual “explicit” solvent molecules, most often used in molecular dynamics simulations and in other applications of molecular mechanics. The method is often applied to estimate free energy of solute-solvent interactions in structural and chemical processes, such as folding or conformational transitions of proteins, DNA, RNA, and polysaccharides, association of biological macromolecules with ligands, or transport of drugs across biological membranes.
The implicit solvation model is justified in liquids, where the potential of mean force can be applied to approximate the averaged behavior of many highly dynamic solvent molecules. However, the interfaces and the interiors of biological membranes or proteins can also be considered as media with specific solvation or dielectric properties. These media are not necessarily uniform, since their properties can be described by different analytical functions, such as “polarity profiles” of lipid bilayers.
There are two basic types of implicit solvent methods: models based on accessible surface areas (ASA) that were historically the first, and more recent continuum electrostatics models, although various modifications and combinations of the different methods are possible.
The accessible surface area (ASA) method is based on experimental linear relations between Gibbs free energy of transfer and the surface area of a solute molecule. This method operates directly with free energy of solvation, unlike molecular mechanics or electrostatic methods that include only the enthalpic component of free energy. The continuum representation of solvent also significantly improves the computational speed and reduces errors in statistical averaging that arise from incomplete sampling of solvent conformations, so that the energy landscapes obtained with implicit and explicit solvent are different. Although |
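As an illustration of the linear ASA relation just described, the sketch below computes a solvation free energy as a weighted sum of per-atom-type surface areas; the coefficient values (sigma) and the toy solute are invented for the example, not taken from any published parameter set:

```python
# Hypothetical atomic solvation parameters, kcal/(mol*A^2) -- illustrative only.
SIGMA = {"C": 0.012, "N": -0.060, "O": -0.045}

def asa_solvation_energy(atoms):
    """Linear ASA model: sum of sigma_i * ASA_i over atoms.

    atoms: iterable of (atom_type, accessible_surface_area_in_A2) pairs.
    """
    return sum(SIGMA[atom_type] * area for atom_type, area in atoms)

toy_solute = [("C", 30.0), ("N", 12.0), ("O", 18.0)]
print(asa_solvation_energy(toy_solute))  # model estimate in kcal/mol
```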
https://en.wikipedia.org/wiki/Merchant%27s%20mark | A merchant's mark is an emblem or device adopted by a merchant, and placed on goods or products sold by him in order to keep track of them, or as a sign of authentication. It may also be used as a mark of identity in other contexts.
History
Ancient use
Merchants' marks are as old as the sealings of the third millennium BCE found in Sumer that originated in the Indus Valley. Impressions of cloth, strings and other packing material on the reverse of tags with seal impressions indicate that the Harappan seals were used to control economic administration and trade. Amphorae from the Roman Empire can sometimes be traced to their sources from the inscriptions on their handles. Commercial inscriptions in Latin, known as tituli picti, appear on Roman containers used for trade.
Middle ages and early modern period
Symbolic merchants' marks continued to be used by artisans and townspeople of the medieval and early modern eras to identify themselves and authenticate their goods. These distinctive and easily recognizable marks often appeared in their seals on documents and on products made for sale. They are often found on headstones and in works of stained glass, brass, and stone, serving in place of heraldic imagery, which could not be used by the middle classes. They were the precursors of hallmarks, printer's marks, and trademarks.
Legal requirements and superstitions
To manage the risks of piracy or shipwreck, merchants often consigned a cargo to several vessels or caravans; a mark on a bale established legal ownership and avoided confusion. Early travellers, voyagers and merchants displayed their merchant's marks as well to ward off evil. Adventurous travellers and sailors ascribed the terrors and perils of their life to the wrath of the Devil. To counter these dangers merchants employed all sorts of religious and magical means to place their caravans, ships and merchandise under the protection of God and His Saints.
One such symbol combined the mystical "Sign o |
https://en.wikipedia.org/wiki/Machine%20Age | The Machine Age is an era that includes the early-to-mid 20th century, sometimes also including the late 19th century. An approximate dating would be about 1880 to 1945. Considered to be at its peak in the time between the first and second world wars, the Machine Age overlaps with the late part of the Second Industrial Revolution (which ended around 1914 at the start of World War I) and continues beyond it until 1945, at the end of World War II. The 1940s saw the beginning of the Atomic Age, in which modern physics found new applications such as the atomic bomb, the first computers, and the transistor. The Digital Revolution ended the intellectual model of the Machine Age, which was founded in the mechanical, and heralded a new, more complex model of high technology. The digital era has been called the Second Machine Age, with its increased focus on machines that do mental tasks.
Developments
Artifacts of the Machine Age include:
Reciprocating steam engine replaced by gas turbines, internal combustion engines and electric motors
Electrification based on large hydroelectric and thermal electric power production plants and distribution systems
Mass production of high-volume goods on moving assembly lines, particularly of the automobile
Gigantic production machinery, especially for producing and working metal, such as steel rolling mills, bridge component fabrication, and car body presses
Powerful earthmoving equipment
Steel-framed buildings of great height (skyscrapers)
Radio and phonograph technology
High-speed printing presses, enabling the production of low-cost newspapers and mass-market magazines
Low cost appliances for the mass market that employ fractional power electric motors, such as vacuum cleaners and washing machines
Fast and comfortable long-distance travel by railways, cars, and aircraft
Development and employment of modern war machines such as tanks, aircraft, submarines and the modern battleship
Streamline designs in cars and trains |
https://en.wikipedia.org/wiki/Ixora | Ixora is a genus of flowering plants in the family Rubiaceae. It is the only genus in the tribe Ixoreae. It consists of tropical evergreen trees and shrubs and holds around 544 species. Though native to the tropical and subtropical areas throughout the world, its centre of diversity is in Tropical Asia. Ixora also grows commonly in subtropical climates in the United States, such as Florida where it is commonly known as West Indian jasmine.
Name
George Don writes that Ixora is a name of "a Malabar idol, to which the flowers of some of the species are offered."
Other common names include viruchi, kiskaara, kepale, rangan, kheme, ponna, chann tanea, techi, pan, siantan, jarum-jarum/jejarum, cây trang thái, jungle flame, jungle geranium, and cruz de Malta, among others.
Botany
The plants possess leathery leaves, ranging from 3 to 6 inches in length, and produce large clusters of tiny flowers in the summer. Members of Ixora prefer acidic soil, and are suitable choices for bonsai. It is also a popular choice for hedges in parts of South East Asia. In tropical climates, they flower year round and are commonly used in Hindu worship, as well as in ayurveda and Indian folk medicine.
In Brazil, the fungal species Pseudocercospora ixoricola was found to be causing leaf spots on Ixora coccinea. In 2018, a fungal study in Taiwan found that the plant pathogens Pseudopestalotiopsis ixorae and Pseudopestalotiopsis taiwanensis caused leaf spots on species of Ixora, which is a popular garden plant in Taiwan.
Selected species
Ixora albersii
Ixora backeri
Ixora beckleri
Ixora brevipedunculata
Ixora calycina
Ixora chinensis
Ixora coccinea
Ixora elongata
Ixora euosmia
Ixora finlaysoniana
Ixora foliosa
Ixora johnsonii
Ixora jucunda
Ixora killipii
Ixora lawsonii
Ixora malabarica
Ixora margaretae
Ixora marquesensis
Ixora mooreensis
Ixora nigerica
Ixora nigricans
Ixora ooumuensis
Ixora panurensis
Ixora pavetta
Ixora peruviana
Ixora pudica
I |
https://en.wikipedia.org/wiki/Data%20card | A datacard is an electronic card for data operations (storage, transfer, transformation, input, output).
Datacard types
Datacards can be sorted by their purposes:
Expansion card – printed-circuit board: inserted in a special slot in the device and used to add functions to this device;
Memory card or flash card: a card which is inserted into the corresponding device socket and used for data storage and transmission;
Identification card: a card that works through a contact or contactless interface and contains data used to perform various functions, for example access control in subways or offices. It is also used for prepaid services like banking and telecom;
Datacard or "electronic card": a card dealing with e.g. geographical, climatic, road or topographical data to be displayed on the video screen of some device (computer or GPS navigator), or represented otherwise to be more convenient to use in a certain situation (for example, navigator's vocal instructions).
Expansion cards
The expansion card in the computer is equipped with contacts on one of its edges, and it can be inserted into the motherboard slot socket.
There are various types of expansion cards:
A video card transforms data from the computer memory into the video signal for the monitor. The video card has its own processor, relieving the computer's CPU;
A sound card enables the computer to work with sound;
A network card enables the computer to interact on a local network.
Memory cards
Many modern devices demand non-volatile memory requiring low power. Flash memory is used for these purposes. It is widespread in digital portable devices such as photo and video cameras, dictaphones, MP3 players, handheld computers, mobile phones, and also in smart phones and communicators. It is used for storage of the built-in software in various devices (like routers, mini phone stations, printers, scanners, modems and controllers).
In recent years USB flash-drives have become more popular and have a |
https://en.wikipedia.org/wiki/IWARP | iWARP is a computer networking protocol that implements remote direct memory access (RDMA) for efficient data transfer over Internet Protocol networks. Contrary to some accounts, iWARP is not an acronym.
Because iWARP is layered on Internet Engineering Task Force (IETF)-standard congestion-aware protocols such as Transmission Control Protocol (TCP) and Stream Control Transmission Protocol (SCTP), it makes few requirements on the network, and can be successfully deployed in a broad range of environments.
History
In 2007, the IETF published five Request for Comments (RFCs) that define iWARP:
RFC 5040 A Remote Direct Memory Access Protocol Specification is layered over Direct Data Placement Protocol (DDP). It defines how RDMA Send, Read, and Write operations are encoded using DDP into headers on the network.
RFC 5041 Direct Data Placement over Reliable Transports is layered over MPA/TCP or SCTP. It defines how received data can be directly placed into an upper-layer protocol's receive buffer without intermediate buffers.
RFC 5042 Direct Data Placement Protocol (DDP) / Remote Direct Memory Access Protocol (RDMAP) Security analyzes security issues related to iWARP DDP and RDMAP protocol layers.
RFC 5043 Stream Control Transmission Protocol (SCTP) Direct Data Placement (DDP) Adaptation defines an adaptation layer that enables DDP over SCTP.
RFC 5044 Marker PDU Aligned Framing for TCP Specification defines an adaptation layer that enables preservation of DDP-level protocol record boundaries layered over the TCP reliable connected byte stream.
These RFCs are based on the RDMA Consortium's specifications for RDMA over TCP. The RDMA Consortium's specifications are influenced by earlier RDMA standards, including Virtual Interface Architecture (VIA) and InfiniBand (IB).
Since 2007, the IETF has published three additional RFCs that maintain and extend iWARP:
RFC 6580 IANA Registries for the Remote Direct Data Placement (RDDP) Protocols published in 2012 defines IANA re |
https://en.wikipedia.org/wiki/Completely%20distributive%20lattice | In the mathematical area of order theory, a completely distributive lattice is a complete lattice in which arbitrary joins distribute over arbitrary meets.
Formally, a complete lattice L is said to be completely distributive if, for any doubly indexed family
$\{x_{j,k} \mid j \in J, k \in K_j\}$ of L, we have
$$\bigwedge_{j \in J} \bigvee_{k \in K_j} x_{j,k} = \bigvee_{f \in F} \bigwedge_{j \in J} x_{j,f(j)}$$
where F is the set of choice functions $f$ choosing for each index $j$ of $J$ some index $f(j)$ in $K_j$.
Complete distributivity is a self-dual property, i.e. dualizing the above statement yields the same class of complete lattices.
Without the axiom of choice, no complete lattice with more than one element can ever satisfy the above property, as one can just let xj,k equal the top element of L for all indices j and k with all of the sets Kj being nonempty but having no choice function.
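The identity can be verified by brute force in small finite lattices (every finite distributive lattice is completely distributive); the following Python sketch checks one doubly indexed family in the powerset lattice of {0, 1}, with the family chosen arbitrarily for illustration:

```python
from itertools import product

TOP = frozenset({0, 1})                      # powerset lattice of {0, 1}

def meet(sets):
    out = TOP
    for s in sets:
        out = out & s                        # meet = intersection
    return out

def join(sets):
    out = frozenset()
    for s in sets:
        out = out | s                        # join = union
    return out

# A doubly indexed family {x[j][k] | j in J, k in K_j}.
x = {0: [frozenset({0}), frozenset({1})],
     1: [TOP, frozenset()]}
J = sorted(x)

lhs = meet(join(x[j]) for j in J)
# F: every choice function f picks an index f(j) into K_j for each j.
F = product(*(range(len(x[j])) for j in J))
rhs = join(meet(x[j][f[i]] for i, j in enumerate(J)) for f in F)
print(lhs == rhs)                            # True
```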
Alternative characterizations
Various different characterizations exist. For example, the following is an equivalent law that avoids the use of choice functions. For any set S of sets, we define the set S# to be the set of all subsets X of the complete lattice that have non-empty intersection with all members of S. We then can define complete distributivity via the statement
$$\bigwedge \{ \bigvee Y \mid Y \in S \} = \bigvee \{ \bigwedge Z \mid Z \in S^{\#} \}$$
The operator ( )# might be called the crosscut operator. This version of complete distributivity only implies the original notion when admitting the Axiom of Choice.
Properties
In addition, it is known that the following statements are equivalent for any complete lattice L:
L is completely distributive.
L can be embedded into a direct product of chains [0,1] by an order embedding that preserves arbitrary meets and joins.
Both L and its dual order Lop are continuous posets.
Direct products of [0,1], i.e. sets of all functions from some set X to [0,1] ordered pointwise, are also called cubes.
Free completely distributive lattices
Every poset C can be completed in a completely distributive lattice.
A completely distributive lattice L is called the free completely distributive lattice over a poset C if and only if |
https://en.wikipedia.org/wiki/MySQL%20Enterprise | MySQL Enterprise is a subscription-based service produced by Oracle Corporation and targeted toward the commercial market. Oracle's official support, training and certification focus on MySQL Enterprise.
MySQL Enterprise contains
MySQL Enterprise Server software, a distribution of the MySQL Server
MySQL Enterprise Monitor
MySQL Enterprise Backup
MySQL Enterprise Audit
MySQL Enterprise Firewall
MySQL Workbench Standard Edition
Production Support
New versions of MySQL Enterprise Server are released monthly as Monthly Rapid Updates (MRUs) and quarterly as Quarterly Service Packs (QSPs).
Relationship to free or community versions
MySQL Enterprise Edition was created by MySQL AB as a commercial product, available for purchase as a subscription. It has been continued by Sun and Oracle. |
https://en.wikipedia.org/wiki/Edwin%20Hewitt | Edwin Hewitt (January 20, 1920, Everett, Washington – June 21, 1999) was an American mathematician known for his work in abstract harmonic analysis and for his discovery, in collaboration with Leonard Jimmie Savage, of the Hewitt–Savage zero–one law.
He received his Ph.D. in 1942 from Harvard University, and served on the faculty of mathematics at the University of Washington from 1954.
Hewitt pioneered the construction of the hyperreals by means of an ultrapower construction (Hewitt, 1948).
Hewitt wrote the 1975 English translation of A. A. Kirillov's 1972 Russian monograph Elements of the Theory of Representations (Элементы Теории Представлений), and co-authored Abstract Harmonic Analysis with Kenneth A. Ross (1st edn., 1st vol. in 1963; 1st edn., 2nd vol. in 1970), an extensive work in two volumes.
See also
Cohen–Hewitt factorization theorem
Publications |
https://en.wikipedia.org/wiki/Morphism%20of%20schemes | In algebraic geometry, a morphism of schemes generalizes a morphism of algebraic varieties just as a scheme generalizes an algebraic variety. It is, by definition, a morphism in the category of schemes.
A morphism of algebraic stacks generalizes a morphism of schemes.
Definition
By definition, a morphism of schemes is just a morphism of locally ringed spaces.
A scheme, by definition, has open affine charts and thus a morphism of schemes can also be described in terms of such charts (compare the definition of morphism of varieties). Let ƒ:X→Y be a morphism of schemes. If x is a point of X, since ƒ is continuous, there are open affine subsets U = Spec A of X containing x and V = Spec B of Y such that ƒ(U) ⊆ V. Then ƒ: U → V is a morphism of affine schemes and thus is induced by some ring homomorphism B → A (cf. #Affine case.) In fact, one can use this description to "define" a morphism of schemes; one says that ƒ:X→Y is a morphism of schemes if it is locally induced by ring homomorphisms between coordinate rings of affine charts.
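As a reminder of the affine case referenced above, the induced morphism can be written out explicitly (a standard formula, stated here for convenience): a ring homomorphism $\varphi: B \to A$ induces
$$\operatorname{Spec}(\varphi): \operatorname{Spec} A \to \operatorname{Spec} B, \qquad \mathfrak{p} \mapsto \varphi^{-1}(\mathfrak{p}),$$
which is continuous for the Zariski topology since $\operatorname{Spec}(\varphi)^{-1}(D(g)) = D(\varphi(g))$ for every basic open set $D(g)$ with $g \in B$.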
Note: It would not be desirable to define a morphism of schemes as a morphism of ringed spaces. One trivial reason is that there is an example of a ringed-space morphism between affine schemes that is not induced by a ring homomorphism (for example, taking R to be a discrete valuation ring with fraction field K and closed point s, the morphism of ringed spaces
$$\operatorname{Spec} K \to \operatorname{Spec} R$$
that sends the unique point to s and that comes with the inclusion $R \hookrightarrow K$). More conceptually, the definition of a morphism of schemes needs to capture "Zariski-local nature" or localization of rings; this point of view (i.e., a local-ringed space) is essential for a generalization (topos).
Let $f: X \to Y$ be a morphism of schemes with $\phi: \mathcal{O}_Y \to f_* \mathcal{O}_X$. Then, for each point x of X, the homomorphism on the stalks
$$\phi_x: \mathcal{O}_{Y, f(x)} \to \mathcal{O}_{X, x}$$
is a local ring homomorphism: i.e., $\phi_x(\mathfrak{m}_{f(x)}) \subseteq \mathfrak{m}_x$, and so it induces an injective homomorphism of residue fields
$$k(f(x)) \hookrightarrow k(x).$$
(In fact, $\phi_x$ maps the n-th power of the maximal ideal to the n-th power of the maximal ideal and thus induces the map between the (Zariski) cotangent spaces.)
For each scheme X, there is a natural morphi |
https://en.wikipedia.org/wiki/Good%20Clinical%20Practice%20Directive | The Good Clinical Practice Directive (Directive 2005/28/EC of 8 April 2005 of the European Parliament and of the Council) lays down principles and detailed guidelines for good clinical practice as regards conducting clinical trials of medicinal products for human use, as well as the requirements for authorisation of the manufacturing or importation of such products.
The directive deals with the following items:
Good clinical practice for the design, conduct, recording and reporting of clinical trials:
Good Clinical Practice (GCP)
The Ethics Committee
The sponsors
Investigator's Brochure
Manufacturing or import authorisation
Exemption for Hospital & Health Centres and Reconstitution
Conditions of Holding a Manufacturing Licence
The Trial master file and archiving
Format of Trial Master File
Retention of Essential and Medical Records
Inspectors
Inspection procedures
Final provisions |
https://en.wikipedia.org/wiki/Arthur%20Saint-L%C3%A9on | Arthur Saint-Léon (17 September 1821, in Paris – 2 September 1870) was the Maître de Ballet of St. Petersburg Imperial Ballet from 1859 until 1869 and is famous for creating the choreography of the ballet Coppélia.
Biography
He was born Charles Victor Arthur Michel in Paris, but was raised in Stuttgart, where his father was dance master for the court and the theatre ballet. Saint-Léon was encouraged by his father, who had also been a dancer of the Paris Opéra Ballet, to study music and dance. Saint-Léon studied violin with Joseph Mayseder and Niccolò Paganini. At the same time, he studied ballet so he could perform both as violinist and dancer.
When he was 17 years old, he made his début as first demi-caractère dancer at the Théâtre de la Monnaie in Brussels. He started to tour across Europe, dancing in Germany, Italy and England with great success. In particular, the London audience, which at that time did not like to see men dancing on stage, liked him very much. He was much appreciated for his tours and his jumps. He was able to gain applause in every theatre in which he danced, and this was not common in the Romantic Era, when the only star on stage was the ballerina dancing en pointe.
When in Vienna, Saint-Léon danced for the first time with Fanny Cerrito, and from that very moment the two of them became almost inseparable; they married in 1845. For Cerrito, Saint-Léon choreographed a ballet that was a hit in London, La Vivandière (1843). He also created ballets for the Teatro La Fenice in Venice and for the Paris Opéra.
He became the teacher of the master class at the Opéra and was in charge of choreographing the divertissements of the most important ballet productions. He parted from his wife in 1851, and when she was invited to dance at the Opéra, Saint-Léon retired.
After touring across Europe, (he also worked three years for the Teatro San Carlos in Lisbon), he was invited to succeed Jules Perrot in 1859 as Maître de Ballet to the Imperia |
https://en.wikipedia.org/wiki/WIMATS | WIMATS is a software application that transcribes mathematical and scientific text input into braille script for braille presses. Based on the Nemeth Code, its output can be printed on a variety of braille embossers. This transcription software was jointly developed by Webel Mediatronics Limited (WML) and the International Council for Education of People with Visual Impairment (ICEVI), and was officially launched on 17 July 2006.
WIMATS supports input of arithmetic, algebra, geometry, trigonometry, calculus, vector and set notation, and the Greek alphabet. The graphical user interface is user friendly, and training does not take long. The software fulfills a long-felt need for mathematics and science study materials at the higher secondary and college level for visually impaired persons. With its introduction, visually impaired persons no longer need to worry about the availability of books for mathematics and science courses.
ICEVI has a presence in 185 countries around the world, and the software developed jointly with WML is made available to all of them through ICEVI. |
https://en.wikipedia.org/wiki/Atkin%E2%80%93Lehner%20theory | In mathematics, Atkin–Lehner theory is part of the theory of modular forms describing when they arise at a given integer level N in such a way that the theory of Hecke operators can be extended to higher levels.
Atkin–Lehner theory is based on the concept of a newform, which is a cusp form 'new' at a given level N, where the levels are the nested congruence subgroups
$$\Gamma_0(N) = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{SL}_2(\mathbb{Z}) : c \equiv 0 \pmod{N} \right\}$$
of the modular group, with N ordered by divisibility. That is, if M divides N, Γ0(N) is a subgroup of Γ0(M). The oldforms for Γ0(N) are those modular forms f(τ) of level N of the form g(dτ) for modular forms g of level M with M a proper divisor of N, where d divides N/M. The newforms are defined as a vector subspace of the modular forms of level N, complementary to the space spanned by the oldforms, i.e. the orthogonal space with respect to the Petersson inner product.
The Hecke operators, which act on the space of all cusp forms, preserve the subspace of newforms and are self-adjoint and commuting operators (with respect to the Petersson inner product) when restricted to this subspace. Therefore, the algebra of operators on newforms they generate is a finite-dimensional C*-algebra that is commutative; and by the spectral theory of such operators, there exists a basis for the space of newforms consisting of eigenforms for the full Hecke algebra.
Atkin–Lehner involutions
Consider a Hall divisor e of N, which means that not only does e divide N, but also e and N/e are relatively prime (often denoted e||N). If N has s distinct prime divisors, there are $2^s$ Hall divisors of N; for example, if $N = 360 = 2^3 \cdot 3^2 \cdot 5^1$, the 8 Hall divisors of N are $1$, $2^3$, $3^2$, $5^1$, $2^3 \cdot 3^2$, $2^3 \cdot 5^1$, $3^2 \cdot 5^1$, and $2^3 \cdot 3^2 \cdot 5^1$.
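A short Python sketch (our own illustration) enumerates Hall divisors directly from the definition:

```python
from math import gcd

def hall_divisors(n):
    """Divisors e of n with e and n // e relatively prime."""
    return [e for e in range(1, n + 1) if n % e == 0 and gcd(e, n // e) == 1]

print(hall_divisors(360))   # [1, 5, 8, 9, 40, 45, 72, 360]: 2**3 = 8 of them
```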
For each Hall divisor e of N, choose an integral matrix $W_e$ of the form
$$W_e = \begin{pmatrix} ae & b \\ cN & de \end{pmatrix}$$
with $\det W_e = e$. These matrices have the following properties:
The elements $W_e$ normalize Γ0(N): that is, if A is in Γ0(N), then $W_e A W_e^{-1}$ is in Γ0(N).
The matrix $W_e^2$, which has determinant $e^2$, can be written as $eA$ where A is in Γ0(N). We will be i |
https://en.wikipedia.org/wiki/Forest%20management | Forest management is a branch of forestry concerned with overall administrative, legal, economic, and social aspects, as well as scientific and technical aspects, such as silviculture, protection, and forest regulation. This includes management for timber, aesthetics, recreation, urban values, water, wildlife, inland and nearshore fisheries, wood products, plant genetic resources, and other forest resource values. Management objectives can be for conservation, utilisation, or a mixture of the two. Techniques include timber extraction, planting and replanting of different species, building and maintenance of roads and pathways through forests, and preventing fire.
Definition
The forest is a natural system that can supply different products and services. Forests supply water, mitigate climate change, provide habitats for wildlife including many pollinators which are essential for sustainable food production, provide timber and fuelwood, serve as a source of non-wood forest products including food and medicine, and contribute to rural livelihoods.
The working of this system is influenced by the natural environment (climate, topography, soil, etc.) and also by human activity. The actions of humans in forests constitute forest management. In developed societies, this management tends to be elaborate and planned in order to achieve objectives that are considered desirable.
Some forests have been and are managed to obtain traditional forest products such as firewood, fiber for paper, and timber, with little regard for other products and services. Nevertheless, as a result of growing environmental awareness, management of forests for multiple use is becoming more common.
Public input and awareness
There has been increased public awareness of natural resource policy, including forest management. Public concern regarding forest management may have shifted from the extraction of timber for economic development, to maintaining the flow of the range of ec |
https://en.wikipedia.org/wiki/Constraint%20%28information%20theory%29 | Constraint in information theory is the degree of statistical dependence between or among variables.
Garner provides a thorough discussion of various forms of constraint (internal constraint, external constraint, total constraint) with application to pattern recognition and psychology.
See also
Mutual Information
Total Correlation
Interaction information |
https://en.wikipedia.org/wiki/Associative%20magic%20square | An associative magic square is a magic square for which each pair of numbers symmetrically opposite to the center sum up to the same value. For an n × n square, filled with the numbers from 1 to n2, this common sum must equal n2 + 1. These squares are also called associated magic squares, regular magic squares, regmagic squares, or symmetric magic squares.
Examples
For instance, the Lo Shu Square – the unique 3 × 3 magic square – is associative, because each pair of opposite points forms a line of the square together with the center point, so the sum of the two opposite points equals the sum of a line minus the value of the center point, regardless of which two opposite points are chosen. The 4 × 4 magic square from Albrecht Dürer's 1514 engraving – also found in a 1765 letter of Benjamin Franklin – is also associative, with each pair of opposite numbers summing to 17.
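Both examples are easy to verify programmatically; here is a minimal Python sketch (the array encoding is our own):

```python
def is_associative(square):
    """True if each pair of cells opposite the center sums to n*n + 1
    (for an n x n square filled with the numbers 1..n*n)."""
    n = len(square)
    target = n * n + 1
    return all(square[i][j] + square[n - 1 - i][n - 1 - j] == target
               for i in range(n) for j in range(n))

lo_shu = [[4, 9, 2],
          [3, 5, 7],
          [8, 1, 6]]
durer = [[16,  3,  2, 13],      # Duerer's 1514 square; opposite pairs sum to 17
         [ 5, 10, 11,  8],
         [ 9,  6,  7, 12],
         [ 4, 15, 14,  1]]
print(is_associative(lo_shu), is_associative(durer))   # True True
```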
Existence and enumeration
The numbers of possible associative n × n magic squares for n = 3,4,5,..., counting two squares as the same whenever they differ only by a rotation or reflection, are:
1, 48, 48544, 0, 1125154039419854784, ...
The number zero for n = 6 is an example of a more general phenomenon: associative magic squares do not exist for values of n that are singly even (equal to 2 modulo 4). Every associative magic square of even order forms a singular matrix, but associative magic squares of odd order can be singular or nonsingular. |
https://en.wikipedia.org/wiki/PSXLinux | PSXLinux (also known as Runix) is a Linux kernel and development kit for the PlayStation (MIPS-NOMMU). PSXLinux is based on the μClinux 2.4.x kernel and contains specific support for the Sony PlayStation.
Features
Serial console over RS232 SIO
Virtual console over PlayStation GPU
Multiple memory cards as storage devices (block device driver)
USB host driver capable of keyboard and mouse support using a Cypress Semiconductor SL811
Compiling the kernel
Various attempts to compile the kernel resulted in either errors or kernels that could not be run from the suggested boot methods. A cross compiler is required to build the kernel for the PlayStation's CPU.
Execution methods
Over SIO (Serial)
Loading the compiled RUNIX binary (PS-EXE) into a PlayStation may be done using a serial adapter (such as the Net Yaroze serial cable) or a parallel port device (Xplorer, Caetla). Another method is to install a modchip within the PlayStation, which allows the system to boot burned discs, and burn a CD-ROM containing the executable data. Runix supplied some tools on their website to transfer files for those who had obtained or built their own serial cable. The filename was: psx-serial.0.9.7.tar.gz
From CD
Because the PlayStation does not support the Linux ELF format, conversion is required. A suggested path is to first convert the Linux kernel file to ECOFF, the Net Yaroze native executable standard; this can be done with a tool called elf2ecoff included in the kernel source. The next step is converting the ECOFF file to a PS-EXE file, the format found on PlayStation game discs, before mastering a valid CD-ROM disc image.
Multiple Memory Cards as Storage
From Beta 1 on, the Runix sources support multiple or single memory cards of the PlayStation's default size or larger. Multiple memory cards could be formatted to Ext2 using a tool Runix supplied on their website.
The tools to do so seem to be lost; no sources can be found, only the name: psx-mcard.0 |
https://en.wikipedia.org/wiki/Dry%20Sheep%20Equivalent | Dry Sheep Equivalent (DSE) is a standard unit frequently used in Australia to compare the feed requirements of different classes of stock or to assess the carrying capacity and potential productivity of a given farm or area of grazing land.
The unit represents the amount of feed required by a two-year-old, 45 kg (some sources state 50 kg) Merino sheep (a wether or a non-lactating, non-pregnant ewe) to maintain its weight. One DSE is equivalent to 7.60 megajoules (MJ) per day.
The carrying capacity of a farm is commonly determined in Australia by expressing the number of stock carried during a period of feed shortage in terms of their DSEs.
Benchmarking standards used by Grazing for Profit programmes quote that one labour unit (40 hours per week) is required for 6,000 DSE (other benchmarking standards set the figure at 7,000 DSE).
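The arithmetic involved is simple; the sketch below uses the figures quoted above, with the mob size and per-head energy requirement invented for illustration:

```python
MJ_PER_DSE = 7.6               # one DSE = 7.6 MJ of feed energy per day
DSE_PER_LABOUR_UNIT = 6000     # Grazing for Profit benchmark (one 40 h/week unit)

def dse_rating(mj_per_day):
    """DSE rating of an animal from its daily feed energy requirement."""
    return mj_per_day / MJ_PER_DSE

mob_dse = 500 * dse_rating(15.2)        # 500 head at 15.2 MJ/day = 2 DSE each
print(mob_dse)                          # 1000.0 DSE carried
print(mob_dse / DSE_PER_LABOUR_UNIT)    # fraction of a labour unit required
```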
See also
Livestock grazing comparison
Sheep |
https://en.wikipedia.org/wiki/Shaping%20codes | In digital communications shaping codes are a method of encoding that changes the distribution of signals to improve efficiency.
Description
Typical digital communication systems use M-ary Quadrature Amplitude Modulation (QAM) to communicate through an analog channel (specifically a communication channel with Gaussian noise). For higher bit rates (M), the minimum signal-to-noise ratio (SNR) required by a QAM system with error-correcting codes is about 1.53 dB higher than the minimum SNR required by a Gaussian source (>30% more transmitter power), as given by the Shannon–Hartley theorem
$$C = B \log_2\left(1 + \frac{S}{N}\right)$$
where
C is the channel capacity in bits per second;
B is the bandwidth of the channel in hertz;
S is the total signal power over the bandwidth and
N is the total noise power over the bandwidth.
S/N is the signal-to-noise ratio of the communication signal to the Gaussian noise interference expressed as a straight power ratio (not as decibels).
This 1.53 dB difference is called the shaping gap. Typically, a digital system will encode bits with uniform probability to maximize the entropy. Shaping codes act as a buffer between digital sources and the modulator of a communication system. They receive uniformly distributed data and convert it to a Gaussian-like distribution before presenting it to the modulator. Shaping codes are helpful in reducing transmit power, and thus reduce the cost of the power amplifier and the interference caused to other users in the vicinity.
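The figures above are easy to reproduce; the sketch below evaluates the Shannon–Hartley formula and the ultimate shaping gap, which (a standard result, stated here as background rather than taken from the source) equals πe/6 in power, about 1.53 dB:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# SNR needed for a spectral efficiency of 8 bit/s/Hz on an ideal Gaussian channel:
snr = 2 ** 8 - 1
print(10 * math.log10(snr))                    # ~24.07 dB
print(shannon_capacity(1.0, snr))              # 8.0 bits/s per Hz of bandwidth

# Ultimate shaping gap of uniform (QAM-like) signalling versus a Gaussian input:
print(10 * math.log10(math.pi * math.e / 6))   # ~1.53 dB
```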
Application
Some of the methods used for shaping are described in the trellis shaping paper by Dr. G. D. Forney Jr.
Shell mapping is used in V.34 modems to get a shaping gain of 0.8 dB.
All the shaping schemes in the literature try to reduce the transmitted signal power. In the future, this may find application in wireless networks, where interference from other nodes is becoming the major issue. |
https://en.wikipedia.org/wiki/European%20Cultivated%20Potato%20Database | The European Cultivated Potato Database (ECPD) is an online collaborative database of potato variety descriptions. The information that it contains can be searched by variety name, or by selecting one or more required characteristics. The database holds:
159,848 observations
29 contributors
91 characters
4,119 cultivated varieties
1,354 breeding lines
The data is indexed by variety, character, country of origin, and contributor. There is a facility to select a variety and to find similar varieties based upon botanical characteristics.
ECPD is the result of collaboration between participants in eight European Union countries and five East European countries. It is intended to be a source of information on varieties maintained by them. More than twenty-three scientific organisations are contributing to this information source.
The database is maintained and updated by the Scottish Agricultural Science Agency within the framework of the European Cooperative Programme for Crop Genetic Resources Networks (ECP/GR), which is organised by Bioversity International. The European Cultivated Potato Database was created to advance the conservation and use of genetic diversity for the well-being of present and future generations. |
https://en.wikipedia.org/wiki/Paracoccus%20denitrificans | Paracoccus denitrificans is a coccoid bacterium known for its nitrate-reducing properties, its ability to replicate under conditions of hypergravity, and for being a relative of the eukaryotic mitochondrion (endosymbiotic theory).
Description
Paracoccus denitrificans is a gram-negative, coccoid, non-motile, denitrifying (nitrate-reducing) bacterium. It is typically rod-shaped but assumes a spherical shape during the stationary phase. Like all gram-negative bacteria, it has a double membrane with a cell wall. Formerly known as Micrococcus denitrificans, it was first isolated in 1910 by Martinus Beijerinck, a Dutch microbiologist. The bacterium was reclassified in 1969 as Paracoccus denitrificans by D.H. Davis. The genome of P. denitrificans was sequenced in 2004.
Ecology and ecological applications
Metabolically, Paracoccus denitrificans is very flexible and has been recorded in soil in both aerobic and anaerobic environments. The microbe can also live in many different kinds of media and environments and is known to be an extremophile. The bacterium is able to obtain energy both from organic compounds, such as methanol and methylamine, and from inorganic compounds, such as hydrogen and sulfur. The ability to metabolise hydrogen and sulfur compounds, such as thiosulfate, has led to the microbe being exploited as a model organism for the study of poorly characterized sulfur compound transformations.
The denitrification properties of Paracoccus denitrificans are an important cause of the loss of nitrogen fertilisers in agricultural soil. In the process of denitrification, nitrate is reduced stepwise to dinitrogen, releasing the intermediates nitric oxide and nitrous oxide, which cause damage to the atmosphere. Although the enzymatic mechanisms of this denitrification process are well characterised, the exact molecular controls are yet to be fully described. As such, Paracoccus denitrificans has emerged as an important m |
https://en.wikipedia.org/wiki/Prime%20triplet | In number theory, a prime triplet is a set of three prime numbers in which the smallest and largest of the three differ by 6. In particular, the sets must have the form (p, p + 2, p + 6) or (p, p + 4, p + 6). With the exceptions of (2, 3, 5) and (3, 5, 7), this is the closest possible grouping of three prime numbers, since one of every three sequential odd numbers is a multiple of three, and hence not prime (except for 3 itself).
Examples
The first prime triplets are
(5, 7, 11), (7, 11, 13), (11, 13, 17), (13, 17, 19), (17, 19, 23), (37, 41, 43), (41, 43, 47), (67, 71, 73), (97, 101, 103), (101, 103, 107), (103, 107, 109), (107, 109, 113), (191, 193, 197), (193, 197, 199), (223, 227, 229), (227, 229, 233), (277, 281, 283), (307, 311, 313), (311, 313, 317), (347, 349, 353), (457, 461, 463), (461, 463, 467), (613, 617, 619), (641, 643, 647), (821, 823, 827), (823, 827, 829), (853, 857, 859), (857, 859, 863), (877, 881, 883), (881, 883, 887)
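A short, illustrative Python sketch that regenerates this list by testing the two admissible forms directly (the primality test is naive and only meant for small ranges):

```python
def is_prime(n: int) -> bool:
    """Naive trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# A prime triplet is (p, p+2, p+6) or (p, p+4, p+6) with all three prime.
triplets = [(p, p + a, p + 6)
            for p in range(2, 900)
            for a in (2, 4)
            if is_prime(p) and is_prime(p + a) and is_prime(p + 6)]
print(triplets[:5])  # [(5, 7, 11), (7, 11, 13), (11, 13, 17), ...]
```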
Subpairs of primes
A prime triplet contains a single pair of:
Twin primes: (p, p + 2) or (p + 4, p + 6);
Cousin primes: (p, p + 4) or (p + 2, p + 6); and
Sexy primes: (p, p + 6).
Higher-order versions
A prime can be a member of up to three prime triplets - for example, 103 is a member of (97, 101, 103), (101, 103, 107) and (103, 107, 109). When this happens, the five involved primes form a prime quintuplet.
A prime quadruplet (p, p + 2, p + 6, p + 8) contains two overlapping prime triplets, (p, p + 2, p + 6) and (p + 2, p + 6, p + 8).
Conjecture on prime triplets
Similarly to the twin prime conjecture, it is conjectured that there are infinitely many prime triplets. The first known gigantic prime triplet was found in 2008 by Norman Luhn and François Morain. The primes are with . The largest known proven prime triplet contains primes with 20008 digits, namely the primes with .
The Skewes number for the triplet (p, p + 2, p + 6) is 87613571, and for the triplet (p, p + 4, p + 6) it is 337867. |
https://en.wikipedia.org/wiki/Krypto%20%28game%29 | Krypto is a card game designed by Daniel Yovich in 1963 and published by Parker Brothers and MPH Games Co. It is a mathematical game that promotes proficiency with basic arithmetic operations. More detailed analysis of the game can raise more complex statistical questions.
Rules
The Krypto deck consists of 56 cards: three of each of the numbers 1-6, four each of the numbers 7-10, two each of 11-17, one each of 18-25. Six cards are dealt: a common objective card at the top and five other cards below.
Each player must use all five of the cards' numbers exactly once, using any combination of arithmetic operations (addition, subtraction, multiplication, and division), to form the objective card's number. The first player to come up with a correct formula is the winner.
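A brute-force solver sketch in Python for the basic rules. It is a simplification: it only tries left-to-right groupings (a complete solver would try every parenthesization), and it uses exact fractions for intermediate results, which the basic rules permit but the tournament rules below do not:

```python
from fractions import Fraction
from itertools import permutations, product

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b, '/': lambda a, b: a / b if b else None}

def solve_krypto(cards, objective):
    """Search card orders and operators; left-to-right grouping only."""
    for nums in permutations(cards):
        for ops in product(OPS, repeat=4):
            acc, expr = Fraction(nums[0]), str(nums[0])
            for op, n in zip(ops, nums[1:]):
                acc = OPS[op](acc, Fraction(n))
                if acc is None:      # division by zero: dead end
                    break
                expr = f"({expr} {op} {n})"
            if acc == objective:
                return expr
    return None

# Finds an expression such as ((((10 + 8) + 5) + 3) - 2) = 24.
print(solve_krypto([2, 3, 5, 8, 10], 24))
```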
International tournament rules
The official international rules for Krypto differ slightly from the house rules, and they involve a system of scorekeeping.
Five cards are dealt face up in the center of the game table. (Each player works with the same set of five cards, rather than a set exclusive to them.) Then a sixth card is dealt face up in the center of the table and becomes the Objective Card. Each player then mentally manipulates the numbers of the cards so that the final result equals the Objective Card number. Krypto International Rules specify the use of whole numbers only, using addition, subtraction, division, multiplication and/or any combination thereof; fractions, negative numbers and square roots are not permitted. Each of the five cards must be used once and only once. The first player to solve the problem declares "Krypto" and has 30 seconds to explain the answer. When a player "Kryptos" and cannot relate the proper solution, a new hand is dealt and the hand is replayed. The player who erred receives a minus one point in the score box for that hand and is not eligible to play for a score in the replay of that hand.
Each hand must be solved within |
https://en.wikipedia.org/wiki/Terminal%20mode | A terminal mode is one of a set of possible states of a terminal or pseudo terminal character device in Unix-like systems and determines how characters written to the terminal are interpreted. In cooked mode data is preprocessed before being given to a program, while raw mode passes the data as-is to the program without interpreting any of the special characters.
The system intercepts special characters in cooked mode and interprets special meaning from them. Backspace, delete, and Control-D are typically used to enable line-editing for the input to the running programs, and other control characters such as Control-C and Control-Z are used for job control or associated with other signals. The precise definition of what constitutes a cooked mode is operating system-specific.
For example, if “ABC<Backspace>D” is given as an input to a program through a terminal character device in cooked mode, the program gets “ABD”. But, if the terminal is in raw mode, the program gets the characters “ABC” followed by the Backspace character and followed by “D”. In cooked mode, the terminal line discipline processes the characters “ABC<Backspace>D” and presents only the result (“ABD”) to the program.
Technically, the term “cooked mode” should be associated only with streams that have a terminal line discipline, but generally it is applied to any system that does some amount of preprocessing.
cbreak mode
cbreak mode (sometimes called rare mode) is a mode between raw mode and cooked mode. Unlike cooked mode it works with single characters at a time, rather than forcing a wait for a whole line and then feeding the line in all at once. Unlike raw mode, keystrokes like abort (usually Control-C) are still processed by the terminal and will interrupt the process.
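A minimal Python sketch of switching a terminal between these modes on a Unix-like system, using the standard termios and tty modules (it needs a real terminal to run):

```python
import sys
import termios
import tty

fd = sys.stdin.fileno()
saved = termios.tcgetattr(fd)        # remember the cooked-mode settings
try:
    tty.setcbreak(fd)                # cbreak: single characters, Ctrl-C still works
    ch = sys.stdin.read(1)           # returns after one keystroke, no Enter needed
    print(f"\nread {ch!r}")
    # tty.setraw(fd) would go further and disable signal keys as well.
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, saved)  # back to cooked mode
```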
See also
Terminal emulator
Serial communications
Chapter Serial communications in Linux and Unix of the Serial Data Communications Programming Wikibook
Command and Data modes |
https://en.wikipedia.org/wiki/Overall%20pressure%20ratio | In aeronautical engineering, overall pressure ratio, or overall compression ratio, is the ratio of the stagnation pressure as measured at the front and rear of the compressor of a gas turbine engine. The terms compression ratio and pressure ratio are used interchangeably. Overall compression ratio also means the overall cycle pressure ratio which includes intake ram.
History of overall pressure ratios
Early jet engines had limited pressure ratios due to construction inaccuracies of the compressors and various material limits. For instance, the Junkers Jumo 004 from World War II had an overall pressure ratio of 3.14:1. The immediate post-war Snecma Atar improved this marginally to 5.2:1. Improvements in materials, compressor blades, and especially the introduction of multi-spool engines with several different rotational speeds, led to the much higher pressure ratios common today.
Modern civilian engines generally operate between 40 and 55:1. The highest in-service is the General Electric GEnx-1B/75 with an OPR of 58 at the end of the climb to cruise altitude (Top of Climb) and 47 for takeoff at sea level.
Advantages of high overall pressure ratios
Generally speaking, a higher overall pressure ratio implies higher efficiency, but the engine will usually weigh more, so there is a compromise.
A high overall pressure ratio permits a larger area ratio nozzle to be fitted on the jet engine. This means that more of the heat energy is converted to jet speed, and energetic efficiency improves. This is reflected in improvements in the engine's specific fuel consumption.
The GE Catalyst has a 16:1 OPR and its thermal efficiency is 40%, the 32:1 Pratt & Whitney GTF has a thermal efficiency of 50% and the 58:1 GEnx has a thermal efficiency of 58%.
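The trend can be illustrated with the ideal Brayton-cycle formula η = 1 − r^(−(γ−1)/γ). This gives only the ideal-cycle figure, which real engines such as those quoted above fall short of, but it shows how efficiency rises with the pressure ratio r:

```python
# Ideal Brayton-cycle thermal efficiency versus overall pressure ratio r,
# with gamma ~ 1.4 for air. Real engines achieve less than this ideal.
gamma = 1.4
for r in (16, 32, 58):
    eta = 1 - r ** (-(gamma - 1) / gamma)
    print(f"OPR {r:2d}:1 -> ideal thermal efficiency {eta:.0%}")
```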
Disadvantages of high overall pressure ratios
One of the primary limiting factors on pressure ratio in modern designs is that the air heats up as it is compressed. As the air travels through the compressor stages it can reach tempe |
https://en.wikipedia.org/wiki/Adaptive%20grammar | An adaptive grammar is a formal grammar that explicitly provides mechanisms within the formalism to allow its own production rules to be manipulated.
Overview
John N. Shutt defines adaptive grammar as a grammatical formalism that allows rule sets (also known as sets of production rules) to be explicitly manipulated within a grammar. Types of manipulation include rule addition, deletion, and modification.
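A minimal illustrative sketch (not any published formalism): a context-free rule set held as an ordinary mutable structure, so productions can be added, deleted, or modified while the grammar is in use:

```python
# Rule set as a mutable mapping from a nonterminal to its productions.
grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["noun"]],
    "VP": [["verb"]],
}

def add_rule(g, head, body):
    g.setdefault(head, []).append(body)

def delete_rule(g, head, body):
    g[head].remove(body)

add_rule(grammar, "NP", ["det", "noun"])      # rule addition
delete_rule(grammar, "VP", ["verb"])          # rule deletion
add_rule(grammar, "VP", ["verb", "NP"])       # net effect: rule modification
print(grammar)
```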
Early history
The first description of grammar adaptivity (though not under that name) in the literature is generally taken to be in a paper by Alfonso Caracciolo di Forino published in 1963. The next generally accepted reference to an adaptive formalism (extensible context-free grammars) came from Wegbreit in 1970 in the study of extensible programming languages, followed by the dynamic syntax of Hanford and Jones in 1973.
Collaborative efforts
Until fairly recently, much of the research into the formal properties of adaptive grammars was uncoordinated between researchers; it was first summarized by Henning Christiansen in 1990 in response to a paper in ACM SIGPLAN Notices by Boris Burshteyn. The Department of Engineering at the University of São Paulo has its Adaptive Languages and Techniques Laboratory (LTA), specifically focusing on research and practice in adaptive technologies and theory. The LTA also maintains a page naming researchers in the field.
Terminology and taxonomy
While early efforts made reference to dynamic syntax and extensible, modifiable, dynamic, and adaptable grammars, more recent usage has tended towards the use of the term adaptive (or some variant such as adaptativa, depending on the publication language of the literature). Iwai refers to her formalism as adaptive grammars, but this specific use of simply adaptive grammars is not typically currently used in the literature without name qualification. Moreover, no standardization or categorization efforts have been undertaken between various researchers, although several have made efforts in this d |
https://en.wikipedia.org/wiki/Expand%20Networks | Expand Networks, Ltd. was a Tel Aviv, Israel based provider of WAN optimization technology founded in 1998 and liquidated in 2011.
About
Expand Networks was a privately held company, co-founded by Talmon Marco in 1998; initial financing was provided by Discount Investment Corporation Ltd., The Eurocom Group, Ophir Holdings, and a private group of investors, including Memco Software founder Israel Mezin. Additional investors joined in subsequent rounds of funding. The company raised over $95 million.
Expand Networks was headquartered in Tel Aviv, Israel, with sales offices in the United States (New Jersey), Europe, Australia, China, Singapore, and South Africa.
The company manufactured accelerators in physical, virtual and mobile deployment options.
Liquidation
In mid-October 2011, following the requests of Plenus, one of the company's lenders, an Israeli court appointed a liquidator, Paz Rimer. The liquidator gradually dismissed the company's employees and eventually, on 11 January 2012, sold most of the assets of the company to Riverbed Technology, which immediately discontinued all the company's products and ceased support.
External links
Expand Networks Home Page
Expand Networks reassures partners it's business as usual |
https://en.wikipedia.org/wiki/Libby%20Zion%20Law | New York State Department of Health Code, Section 405, also known as the Libby Zion Law, is a regulation that limits the amount of resident physicians' work in New York State hospitals to roughly 80 hours per week. The law was named after Libby Zion, who died in 1984 at the age of 18 under the care of what her father believed to be overworked resident physicians and intern physicians. In July 2003, the Accreditation Council for Graduate Medical Education adopted similar regulations for all accredited medical training institutions in the United States.
Although regulatory and civil proceedings found conflicting evidence about Zion's death, today her death is widely believed to have been caused by serotonin syndrome from the drug interaction between the phenelzine she was taking prior to her hospital visit and the pethidine administered by a resident physician. The lawsuits and regulatory investigations following her death, and their implications for working conditions and supervision of interns and residents, were highly publicized in both lay media and medical journals.
Death of Libby Zion
Libby Zion (November 1965 – March 5, 1984) was a freshman at Bennington College in Bennington, Vermont. She took a prescribed antidepressant, phenelzine, daily. A hospital autopsy revealed traces of cocaine, but other later tests showed no traces. She was the daughter of Sidney Zion, a lawyer who had been a writer for The New York Times. She had two brothers, Adam and Jed. Her obituary in The New York Times, written the day after her death, stated that she had been ill with a "flu-like ailment" for the past several days. The article stated that after being admitted to New York Hospital, she died of cardiac arrest, the cause of which was not known.
Libby Zion had been admitted to the hospital through the emergency room by the resident physician assigned to the ER on the night of March 4. Raymond Sherman, the Zion family physician, agreed with their plan to |
https://en.wikipedia.org/wiki/Systems%20immunology | Systems immunology is a research field under systems biology that uses mathematical approaches and computational methods to examine the interactions within cellular and molecular networks of the immune system. The immune system has been thoroughly analyzed with regard to its components and function using a "reductionist" approach, but its overall function cannot easily be predicted by studying the characteristics of its isolated components, because these depend strongly on the interactions among the system's numerous constituents. The field focuses on in silico experiments rather than in vivo ones.
Recent studies in experimental and clinical immunology have led to the development of mathematical models that describe the dynamics of both the innate and the adaptive immune system. Most of these mathematical models were used to examine in silico processes that cannot be studied in vivo. These processes include the activation of T cells, cancer-immune interactions, the migration and death of various immune cells (e.g. T cells, B cells and neutrophils), and how the immune system will respond to a certain vaccine or drug without carrying out a clinical trial.
Techniques of modelling in immune cells
The techniques used for modelling in immunology follow either a quantitative or a qualitative approach, each with its own advantages and disadvantages. Quantitative models predict certain kinetic parameters and the behavior of the system at a certain time point or concentration point. The disadvantage is that they can only be applied to a small number of reactions, and prior knowledge of some kinetic parameters is needed. Qualitative models, on the other hand, can take into account more reactions, but in return they provide less detail about the kinetics of the system. What both approaches have in common is that they lose simplicity and become impractical when the number of components increases drastically.
Ordinary Differential Equation model
Ordinary differential equations (ODEs) are used to describe the dynamics of biol |
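A minimal sketch of the ODE approach, with hypothetical parameters: a two-variable model of tumour cells T and effector immune cells E, in the spirit of the cancer-immune interaction models mentioned above:

```python
from scipy.integrate import solve_ivp

def rhs(t, y, a=0.5, b=1e-4, c=0.05, d=0.1):
    """dT/dt: logistic tumour growth minus immune killing;
    dE/dt: baseline recruitment plus stimulation minus decay."""
    T, E = y
    return [a * T * (1 - b * T) - c * T * E,
            0.2 + 0.01 * T - d * E]

sol = solve_ivp(rhs, (0, 100), [100.0, 1.0])
print(sol.y[:, -1])  # tumour and effector cell levels at t = 100
```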
https://en.wikipedia.org/wiki/Sinkov%20statistic | Sinkov statistics, also known as log-weight statistics, is a specialized field of statistics that was developed by Abraham Sinkov, while working for the small Signal Intelligence Service organization, the primary mission of which was to compile codes and ciphers for use by the U.S. Army. The mathematics involved include modular arithmetic, a bit of number theory, some linear algebra of two dimensions with matrices, some combinatorics, and a little statistics.
Sinkov did not explain the theoretical underpinnings of his statistic, characterize its distribution, or give a decision procedure for accepting or rejecting candidate plaintexts on the basis of their S1 scores. The situation becomes more difficult when comparing strings of different lengths, because Sinkov does not explain how the distribution of his statistic changes with length, especially when applied to higher-order grams. As for how to accept or reject a candidate plaintext, Sinkov simply said to try all possibilities and pick the one with the highest S1 value. Although this procedure works for some applications, it is inadequate for applications that require on-line decisions. Furthermore, it is desirable to have a meaningful interpretation of the S1 values. |
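A minimal sketch in the spirit of a log-weight score (the letter frequencies here are approximate assumptions, and this is not Sinkov's exact procedure): sum the log-probabilities of the letters and keep the candidate plaintext with the highest score:

```python
import math

# Approximate English letter frequencies; unlisted letters get a floor value.
FREQ = {'e': .127, 't': .091, 'a': .082, 'o': .075, 'i': .070, 'n': .067,
        's': .063, 'h': .061, 'r': .060, 'd': .043, 'l': .040, 'u': .028}

def s1_score(text, floor=1e-4):
    return sum(math.log(FREQ.get(ch, floor))
               for ch in text.lower() if ch.isalpha())

candidates = ["attackatdawn", "xqzjkvwpyfgm"]
print(max(candidates, key=s1_score))  # the English-like candidate wins
```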
https://en.wikipedia.org/wiki/European%20Federation%20of%20Pharmaceutical%20Industries%20and%20Associations | The European Federation of Pharmaceutical Industries and Associations (EFPIA) is a Brussels-based trade association and lobbying organisation, founded in 1978 and representing the pharmaceutical industry operating in Europe. Through its membership of 36 national associations and 39 leading pharmaceutical companies, the EFPIA represents 1,900 European companies.
EFPIA priorities
EFPIA priorities include the speeding up of regulatory approval and reimbursement processes for new medicines, creating a strong science base in Europe, joining forces with key stakeholders on political issues concerning health, and addressing safety concerns. EFPIA also includes specialised groups such as Vaccines Europe, whose members produce approximately 80% of vaccines used worldwide, and European Biopharmaceutical Enterprises, whose members harness biotechnology to develop approximately one-fifth of new medicines.
Innovative Medicines Initiative
The Innovative Medicines Initiative (IMI) is a public-private partnership designed by the European Commission and EFPIA. It is a pan-European collaboration that brings together large biopharmaceutical companies, small- and medium-sized enterprises (SMEs), patient organisations, academia, hospitals and public authorities. The initiative aims to accelerate the discovery and development of better medicines by removing bottlenecks in the drug development process. It focuses on creating better methods and tools that improve and enhance the drug development process, rather than on developing specific, new medicines.
Controversy
From 1991 to 1998 Emer Cooke worked for the EFPIA. She became executive director of the European Medicines Agency (EMA), an agency of the European Union (EU) in charge of the evaluation and supervision of medicinal products, in November 2020.
In a session of the Austrian Parliament on 1 April 2021, member of parliament Gerald Hauser publicly criticised a potential conflict of interest in her allowing the controversial Oxford–AstraZeneca COVID-19 vaccine |
https://en.wikipedia.org/wiki/Second-order%20cone%20programming | A second-order cone program (SOCP) is a convex optimization problem of the form
minimize f^T x
subject to ‖A_i x + b_i‖₂ ≤ c_i^T x + d_i,  i = 1, …, m
Fx = g
where the problem parameters are f ∈ R^n, A_i ∈ R^{n_i × n}, b_i ∈ R^{n_i}, c_i ∈ R^n, d_i ∈ R, F ∈ R^{p × n} and g ∈ R^p, and x ∈ R^n is the optimization variable.
‖·‖₂ is the Euclidean norm and ^T indicates transpose. The "second-order cone" in SOCP arises from the constraints, which are equivalent to requiring the affine function (A_i x + b_i, c_i^T x + d_i) to lie in the second-order cone in R^{n_i + 1}.
SOCPs can be solved by interior point methods and in general, can be solved more efficiently than semidefinite programming (SDP) problems. Some engineering applications of SOCP include filter design, antenna array weight design, truss design, and grasping force optimization in robotics. Applications in quantitative finance include portfolio optimization; some market impact constraints, because they are not linear, cannot be solved by quadratic programming but can be formulated as SOCP problems.
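One practical way to solve a small SOCP is with the third-party cvxpy modelling library (an assumption here, not part of the theory). A minimal sketch: minimize f^T x subject to a single second-order cone constraint, here ‖x‖₂ ≤ 1, whose optimum is −‖f‖₂ at x = −f/‖f‖₂:

```python
import numpy as np
import cvxpy as cp

f = np.array([1.0, 2.0, 2.0])
x = cp.Variable(3)
# cvxpy canonicalizes the norm constraint into a second-order cone constraint.
prob = cp.Problem(cp.Minimize(f @ x), [cp.norm(x, 2) <= 1])
prob.solve()
print(prob.value)  # about -3.0, i.e. -||f||_2
print(x.value)     # about -f / 3
```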
Second-order cone
The standard or unit second-order cone of dimension n + 1 is defined as
C_{n+1} := { (x, t) : x ∈ R^n, t ∈ R, ‖x‖₂ ≤ t }.
The second-order cone is also known as the quadratic cone, ice-cream cone, or Lorentz cone. The standard second-order cone in R^3 is
{ (x, y, z) ∈ R^3 : √(x² + y²) ≤ z }.
The set of points satisfying a second-order cone constraint is the inverse image of the unit second-order cone under an affine mapping:
‖Ax + b‖₂ ≤ c^T x + d  ⟺  (Ax + b, c^T x + d) ∈ C
and hence is convex.
The second-order cone can be embedded in the cone of the positive semidefinite matrices since
‖x‖ ≤ t  ⟺  [ tI, x ; x^T, t ] ⪰ 0
(the block matrix is written with ";" separating its rows), i.e., a second-order cone constraint is equivalent to a linear matrix inequality (here M ⪰ 0 means M is a positive semidefinite matrix). Similarly, we also have
‖Ax + b‖₂ ≤ c^T x + d  ⟺  [ (c^T x + d)I, Ax + b ; (Ax + b)^T, c^T x + d ] ⪰ 0.
Relation with other optimization problems
When A_i = 0 for i = 1, …, m, the SOCP reduces to a linear program. When c_i = 0 for i = 1, …, m, the SOCP is equivalent to a convex quadratically constrained linear program.
Convex quadratically constrained quadratic programs can also be formulated as SOCPs by reformulating the objective function as a constraint. Semidefinite programming subsumes SOCPs as the SOCP constraints can be written as linear matrix inequalities (LMI) and can be reformulated as an instance of semidefinite program. Th |
https://en.wikipedia.org/wiki/Minkowski%E2%80%93Steiner%20formula | In mathematics, the Minkowski–Steiner formula is a formula relating the surface area and volume of compact subsets of Euclidean space. More precisely, it defines the surface area as the "derivative" of enclosed volume in an appropriate sense.
The Minkowski–Steiner formula is used, together with the Brunn–Minkowski theorem, to prove the isoperimetric inequality. It is named after Hermann Minkowski and Jakob Steiner.
Statement of the Minkowski-Steiner formula
Let n ≥ 2, and let A ⊆ R^n be a compact set. Let μ(A) denote the Lebesgue measure (volume) of A. Define the quantity λ(∂A) by the Minkowski–Steiner formula
λ(∂A) := liminf_{δ → 0⁺} ( μ(A + B̄_δ) − μ(A) ) / δ,
where
B̄_δ := { x ∈ R^n : ‖x‖₂ ≤ δ }
denotes the closed ball of radius δ, and
A + B̄_δ := { a + b ∈ R^n : a ∈ A, b ∈ B̄_δ }
is the Minkowski sum of A and B̄_δ, so that
A + B̄_δ = { x ∈ R^n : ‖x − a‖₂ ≤ δ for some a ∈ A }.
Remarks
Surface measure
For "sufficiently regular" sets , the quantity does indeed correspond with the -dimensional measure of the boundary of . See Federer (1969) for a full treatment of this problem.
Convex sets
When the set A is a convex set, the lim-inf above is a true limit, and one can show that
μ(A + B̄_δ) = μ(A) + λ(∂A) δ + Σ_{k=2}^{n−1} λ_k(A) δ^k + ω_n δ^n,
where the λ_k(A) are certain continuous functions of A (see quermassintegrals) and ω_n denotes the measure (volume) of the unit ball in R^n:
ω_n = 2π^{n/2} / ( n Γ(n/2) ),
where Γ denotes the Gamma function.
Example: volume and surface area of a ball
Taking A = B̄_R gives the following well-known formula for the surface area of the sphere of radius R, S_R := ∂B̄_R:
λ(S_R) = n ω_n R^{n−1},
where ω_n is as above. |
https://en.wikipedia.org/wiki/Brunn%E2%80%93Minkowski%20theorem | In mathematics, the Brunn–Minkowski theorem (or Brunn–Minkowski inequality) is an inequality relating the volumes (or more generally Lebesgue measures) of compact subsets of Euclidean space. The original version of the Brunn–Minkowski theorem (Hermann Brunn 1887; Hermann Minkowski 1896) applied to convex sets; the generalization to compact nonconvex sets stated here is due to Lazar Lyusternik (1935).
Statement
Let n ≥ 1 and let μ denote the Lebesgue measure on R^n. Let A and B be two nonempty compact subsets of R^n. Then the following inequality holds:
[ μ(A + B) ]^{1/n} ≥ [ μ(A) ]^{1/n} + [ μ(B) ]^{1/n},
where A + B denotes the Minkowski sum:
A + B := { a + b ∈ R^n : a ∈ A, b ∈ B }.
The theorem is also true in the setting where A, B and A + B are only assumed to be measurable and non-empty.
Multiplicative version
The multiplicative form of the Brunn–Minkowski inequality states that
μ(λA + (1 − λ)B) ≥ μ(A)^λ μ(B)^{1−λ}
for all λ ∈ [0, 1].
The Brunn–Minkowski inequality is equivalent to the multiplicative version.
In one direction, use the inequality λx + (1 − λ)y ≥ x^λ y^{1−λ} (the exponential function is convex), which holds for all x, y ≥ 0 and λ ∈ [0, 1]. In particular,
μ(λA + (1 − λ)B) ≥ ( λ μ(A)^{1/n} + (1 − λ) μ(B)^{1/n} )^n ≥ μ(A)^λ μ(B)^{1−λ}.
Conversely, using the multiplicative form, we find
μ(A + B) = μ( λ(A/λ) + (1 − λ)(B/(1 − λ)) ) ≥ μ(A)^λ μ(B)^{1−λ} / ( λ^{nλ} (1 − λ)^{n(1−λ)} ).
The right side is maximized at λ = μ(A)^{1/n} / ( μ(A)^{1/n} + μ(B)^{1/n} ), which gives
( μ(A + B) )^{1/n} ≥ μ(A)^{1/n} + μ(B)^{1/n}.
The Prékopa–Leindler inequality is a functional generalization of this version of Brunn–Minkowski.
On the hypothesis
Measurability
It is possible for A and B to be Lebesgue measurable and for A + B not to be; a counterexample can be found in "Measure zero sets with non-measurable sum." On the other hand, if A and B are Borel measurable, then A + B is the continuous image of the Borel set A × B, so analytic and thus measurable. See the discussion in Gardner's survey for more on this, as well as on ways to avoid the measurability hypothesis.
We note that in the case that A and B are compact, so is A + B, being the image of the compact set A × B under the continuous addition map +: R^n × R^n → R^n, so the measurability conditions are easy to verify.
Non-emptiness
The condition that A and B are both non-empty is clearly necessary. This condition is not part of the multiplicative version of BM stated above.
Proofs
We give two well-known proofs of Brunn–Minkowski.
We g |
https://en.wikipedia.org/wiki/Storage%20resource%20management | In computing, storage resource management (SRM) involves optimizing the efficiency and speed with which a storage area network (SAN) utilizes available drive space.
History
Data growth averages around 50% to 100% per year and organizations face rising hardware and storage management costs. Storage professionals who face out-of-control data growth are looking at SRM to help them navigate the storage environment. SRM identifies under-utilized capacity, identifies old or non-critical data that could be moved to less expensive storage, and helps predict future capacity requirements.
SRM evolved beyond quota management to include functions such as storage area network (SAN) management. |
https://en.wikipedia.org/wiki/Late%20congenital%20syphilitic%20oculopathy | Late congenital syphilitic oculopathy is a disease of the eye, a manifestation of late congenital syphilis. It can appear as:
Interstitial keratitis – this commonly appears between the ages of 6 and 12. Symptoms include lacrimation and photophobia. Pathological vascularization of the cornea causes it to turn pink or salmon colored. 90% of cases affect both eyes.
Episcleritis or scleritis – nodules appear in or overlying the sclera (white of eye)
Iritis or iris papules – vascular infiltration of the iris causes rosy color change and yellow/red nodules.
Chorioretinitis, papillitis, retinal vasculitis – retinal changes can resemble retinitis pigmentosa.
Exudative retinal detachment
Congenital syphilis is categorized by the age of the child. Early congenital syphilis occurs in children under 2 years old, and late congenital syphilis in children at or greater than 2 years old. Manifestations of late congenital syphilis are similar to those of secondary syphilis and tertiary syphilis in adults. |
https://en.wikipedia.org/wiki/Calmar%20ratio | Calmar ratio (or Drawdown ratio) is a performance measurement used to evaluate Commodity Trading Advisors and hedge funds. It was created by Terry W. Young and first published in 1991 in the trade journal Futures.
Young owned California Managed Accounts, a firm in Santa Ynez, California, which managed client funds and published the newsletter CMA Reports. The name of his ratio "Calmar" is an acronym of his company's name and its newsletter: CALifornia Managed Accounts Reports. Young defined it thus:
Young believed the Calmar ratio was superior because
It should be mentioned that a competitor newsletter, Managed Account Reports (founded in 1979 by publisher Leon Rose), had previously defined and popularized another performance measurement, the MAR Ratio, equal to the compound annual return from inception, divided by the maximum drawdown from inception.
Although the Calmar ratio and the MAR ratio are sometimes assumed to be identical, they are in fact different: the Calmar ratio uses 36 months of performance data, whereas the MAR ratio uses all performance data from inception onwards. Later versions of the Calmar ratio introduce the risk-free rate into the numerator to create a Sharpe-type ratio.
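A minimal sketch, assuming monthly_returns covers the trailing 36 months that Young's definition uses: the compound annualized return divided by the maximum drawdown over the same window:

```python
import numpy as np

def calmar_ratio(monthly_returns):
    wealth = np.cumprod(1 + np.asarray(monthly_returns))  # growth of $1
    annual_return = wealth[-1] ** (12 / len(monthly_returns)) - 1
    running_peak = np.maximum.accumulate(wealth)
    max_drawdown = np.max(1 - wealth / running_peak)      # worst peak-to-trough
    return annual_return / max_drawdown

rng = np.random.default_rng(0)
print(calmar_ratio(rng.normal(0.01, 0.04, size=36)))
```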
See also
Modigliani risk-adjusted performance
Omega ratio
Risk return ratio
Sharpe ratio
Sterling ratio
Sortino ratio |
https://en.wikipedia.org/wiki/Jung%27s%20theorem | In geometry, Jung's theorem is an inequality between the diameter of a set of points in any Euclidean space and the radius of the minimum enclosing ball of that set. It is named after Heinrich Jung, who first studied this inequality in 1901. Algorithms also exist to solve the smallest-circle problem explicitly.
Statement
Consider a compact set
K ⊂ R^n
and let
d := max_{p, q ∈ K} ‖p − q‖₂
be the diameter of K, that is, the largest Euclidean distance between any two of its points. Jung's theorem states that there exists a closed ball with radius
r ≤ d √( n / (2(n + 1)) )
that contains K. The boundary case of equality is attained by the regular n-simplex.
Jung's theorem in the plane
The most common case of Jung's theorem is in the plane, that is, when n = 2. In this case the theorem states that there exists a circle enclosing all points whose radius satisfies
r ≤ d / √3,
and this bound is as tight as possible, since when K is an equilateral triangle (or its three vertices) one has
r = d / √3.
General metric spaces
For any bounded set S in any metric space, d/2 ≤ r ≤ d. The first inequality is implied by the triangle inequality for the center of the ball and the two diametral points, and the second inequality follows since a ball of radius d centered at any point of S will contain all of S. Both these inequalities are tight:
In a uniform metric space, that is, a space in which all distances are equal, r = d.
At the other end of the spectrum, in an injective metric space such as the Manhattan distance in the plane, r = d/2: any two closed balls of radius d/2 centered at points of S have a non-empty intersection, therefore all such balls have a common intersection, and a ball of radius d/2 centered at a point of this intersection contains all of S.
Versions of Jung's theorem for various non-Euclidean geometries are also known (see e.g. Dekster 1995, 1997). |
https://en.wikipedia.org/wiki/Retrocausality | Retrocausality, or backwards causation, is a concept of cause and effect in which an effect precedes its cause in time and so a later event affects an earlier one. In quantum physics, the distinction between cause and effect is not made at the most fundamental level and so time-symmetric systems can be viewed as causal or retrocausal. Philosophical considerations of time travel often address the same issues as retrocausality, as do treatments of the subject in fiction, but the two phenomena are distinct.
Philosophy
Philosophical efforts to understand causality extend back at least to Aristotle's discussions of the four causes. It was long considered that an effect preceding its cause is an inherent self-contradiction because, as 18th century philosopher David Hume discussed, when examining two related events, the cause is by definition the one that precedes the effect.
In the 1950s, Michael Dummett wrote in opposition to such definitions, stating that there was no philosophical objection to effects preceding their causes. This argument was rebutted by fellow philosopher Antony Flew and, later, by Max Black. Black's "bilking argument" held that retrocausality is impossible because the observer of an effect could act to prevent its future cause from ever occurring. A more complex discussion of how free will relates to the issues Black raised is summarized by Newcomb's paradox. Essentialist philosophers have proposed other theories, such as the existence of "genuine causal powers in nature" or by raising concerns about the role of induction in theories of causality.
Physics
Most physical theories are time symmetric: microscopic models like Newton's laws or electromagnetism have no inherent direction of time. The "arrow of time" that distinguishes cause and effect must have another origin. To reduce confusion, physicists distinguish strong (macroscopic) from weak (microscopic) causality.
Macroscopic causality
The imaginary ability to affect the past is sometimes |
https://en.wikipedia.org/wiki/IBM%20CP-40 | CP-40 was a research precursor to CP-67, which in turn was part of IBM's then-revolutionary CP[-67]/CMS – a virtual machine/virtual memory time-sharing operating system for the IBM System/360 Model 67, and the parent of IBM's VM family. CP-40 ran multiple instances of client operating systems – particularly CMS, the Cambridge Monitor System, built as part of the same effort. Like CP-67, CP-40 and the first version of CMS were developed by IBM's Cambridge Scientific Center (CSC) staff, working closely with MIT researchers at Project MAC and Lincoln Laboratory. CP-40/CMS production use began in January 1967. CP-40 ran on a unique, specially modified IBM System/360 Model 40.
Project goals
CP-40 was a one-off research system. Its declared goals were:
Provide research input to the System/360 Model 67 team working in Poughkeepsie, who were breaking new ground with the as-yet-unproven concept of virtual memory.
Support CSC's time-sharing requirements in Cambridge.
However, there was also an important unofficial mission: To demonstrate IBM's commitment to and capability for supporting time-sharing users like MIT. CP-40 (and its successor) achieved its goals from technical and social standpoints – they helped to prove the viability of virtual machines, to establish a culture of time-sharing users, and to launch a remote computer services industry. The project became embroiled in an internal IBM political war over time-sharing versus batch processing; and it failed to win the hearts and minds of the academic computer science community, which ultimately turned away from IBM to systems like Multics, UNIX, TENEX, and various DEC operating systems. Ultimately the virtualization concepts developed in the CP-40 project bore fruit in diverse areas, and remain important today.
Features
CP-40 was the first operating system that implemented complete virtualization, i.e. it provided a virtual machine environment supporting all aspects of its target computer system (a S/360-40), suc |
https://en.wikipedia.org/wiki/Soundbeam | Soundbeam is an interactive MIDI hardware and software system developed by The Soundbeam Project / EMS in which movement within a series of ultrasonic beams is used to control multimedia hardware and software.
System
Soundbeam uses a combination of ultrasound (sonar) and tangible (foot controller) inputs to generate MIDI messages. The sonar uses 50 kHz signals.
The latest version, Soundbeam 6, incorporates many of the features of Soundbeam 2 through 5. It has a full touchscreen interface, built-in sampling, built-in high-quality sounds and a built-in mini keyboard. It comes with a wide and varied sample and sound library, and with 37 preset soundsets for immediate musical composition and performance. It also includes film embedded with each soundset, and users can add their own film; the film is triggered by the switches.
Version history
Implementation
Originally designed by Edward Williams for the production of avant-garde dance music, Soundbeam has been used primarily in the field of special needs education due to the minimal physical movement required for its operation. David Jackson's Tonewall project has utilized Soundbeam since 1992.
Due to the system's ability for expansion with four sensors and eight switches, installations have included DAW synchronization (such as with Reason or Ableton), as well as live video manipulation. |
https://en.wikipedia.org/wiki/Harish-Chandra%20Research%20Institute | The Harish-Chandra Research Institute (HRI) is an institution dedicated to research in mathematics and theoretical physics, located in Prayagraj, Uttar Pradesh in India. Established in 1975, HRI offers masters and doctoral program in affiliation with the Homi Bhabha National Institute.
HRI has a residential campus in Jhusi town in Prayagraj on the banks of the river Ganga. The institute has over 30 faculty, 50 doctoral students and 25 post-doctoral visiting research fellows and scientists. HRI is funded by the Department of Atomic Energy (DAE) of the Government of India.
History
The institute was founded as the Mehta Research Institute of Mathematics and Mathematical Physics in 1975, with an endowment from the B.S. Mehta Trust, Calcutta. The institute was initially managed by Badri Nath Prasad and, following his death in January 1966, by S.R. Sinha, both of Allahabad University. The first official director of the institute was Prabhu Lal Bhatnagar, appointed in 1975 when it became truly operational. He was followed by S.R. Sinha again.
On 29 November 1975 B. Devadas Acharya joined the Mehta Research Institute (MRI) as its first postdoctoral fellow and on 1 January 1980 was appointed as the first assistant professor of mathematics at MRI. During his research work between 1975 and 1984, he gave many talks on graph theory and its applications in computing. In one of his talks to international audiences, he envisioned a computing engine based on matrices which would be much more powerful.
Sharadchandra Shankar Shrikhande joined the institute as its director in January 1983. The institute was facing financial difficulties, and Shrikhande sought DAE support for the institute. Following the recommendations of the DAE review committee, the Government of Uttar Pradesh committed to provide a campus for HRI, while the DAE committed to provide full funding for all operational expenses.
In January 1990, the institute was granted about in Jhusi town of Prayagraj district and H.S. |
https://en.wikipedia.org/wiki/Lindstr%C3%B6m%27s%20theorem | In mathematical logic, Lindström's theorem (named after Swedish logician Per Lindström, who published it in 1969) states that first-order logic is the strongest logic (satisfying certain conditions, e.g. closure under classical negation) having both the (countable) compactness property and the (downward) Löwenheim–Skolem property.
Lindström's theorem is perhaps the best known result of what later became known as abstract model theory, the basic notion of which is an abstract logic; the more general notion of an institution was later introduced, which advances from a set-theoretical notion of model to a category-theoretical one. Lindström had previously obtained a similar result in studying first-order logics extended with Lindström quantifiers.
Lindström's theorem has been extended to various other systems of logic, in particular modal logics by Johan van Benthem and Sebastian Enqvist.
Notes |
https://en.wikipedia.org/wiki/Game%20Connect | Game Connect: Asia Pacific (GCAP) is Australia’s annual game development conference and networking event for the Asia Pacific Games Industry and is administered by the Game Developers’ Association of Australia.
See also
Australian Game Developers Conference |
https://en.wikipedia.org/wiki/Fecal%20vomiting | Fecal vomiting or copremesis is a kind of vomiting wherein the material vomited is of fecal origin. It is a common symptom of gastrojejunocolic fistula and intestinal obstruction in the ileum. Fecal vomiting is often accompanied by gastrointestinal symptoms, including abdominal pain, abdominal distension, dehydration, and diarrhea. In severe cases of bowel obstruction or constipation (such as those related to clozapine treatment) fecal vomiting has been identified as a cause of death.
Fecal vomiting occurs when the bowel is obstructed for some reason, and intestinal contents cannot move normally. Peristaltic waves occur in an attempt to decompress the intestine, and the strong contractions of the intestinal muscles push the contents backwards through the pyloric sphincter into the stomach, where they are then vomited.
Fecal vomiting can also occur in cats.
Fecal vomiting does not include the vomiting of proximal small intestine contents, which commonly occurs during ordinary vomiting.
Fecal vomiting has been cited in liver cancer, ovarian cancer, and colorectal cancer cases. |
https://en.wikipedia.org/wiki/East%20Malling%20Research%20Station | NIAB EMR is a horticultural and agricultural research institute at East Malling, Kent in England, with a specialism in fruit and clonally propagated crop production. In 2016, the institute became part of the NIAB Group.
History
A research station was established on the East Malling site in 1913 on the impetus of local fruit growers. The original buildings are still in use today. Some of the finest and most important research on perennial crops has been conducted on the site, resulting in East Malling’s worldwide reputation. Some of the more well-known developments have been achieved in the areas of plant raising, fruit plant culture (especially the development of rootstocks), fruit breeding, ornamental breeding, fruit storage and the biology and control of pests and diseases.
From 1990 a division of Horticulture Research International (HRI) was on the site. HRI closed in 2009.
In 2016, East Malling Research became part of the National Institute of Agricultural Botany (NIAB) group.
Apple rootstocks
In 1912, Ronald Hatton initiated the work of classification, testing and standardisation of apple tree rootstocks. With the help of Dr Wellington, Hatton sorted out the incorrect naming and mixtures then widespread in apple rootstocks distributed throughout Europe. These verified and distinct apple rootstocks are called the "Malling series". The most widely used was the M9 rootstock.
Structure
It is situated east of East Malling, and north of the Maidstone East Line. The western half of the site is in East Malling and Larkfield and the eastern half is in Ditton. It is just south of the A20, and between junctions 4 and 5 of the M20 motorway.
Function
Today the Research Centre also acts as a business enterprise centre supported by leading local businesses including QTS Analytical and Network Computing Limited. The conference centre trades as East Malling Ltd, being incorporated on 17 February 2004. |
https://en.wikipedia.org/wiki/Differentiated%20service | Differentiated service is a design pattern for business services and software, in which the service varies automatically according to the identity of the consumer and/or the context in which the service is used. It is sometimes known as smart service or context-aware service.
Concept
Differentiated service is extensively covered in a few narrow technical areas, such as telecoms networks and internet (see Differentiated services). It is also mentioned in some marketing sources, with reference to customer segmentation. But the general principle of service differentiation extends far beyond these domains, and it is one of the mechanisms for implementing flexibility in a service-oriented architecture (SOA).
Various dimensions of the service can be differentiated, including:
Information quality. For example, an information service providing stock prices may offer real-time prices to selected users, and 15-minute-delay prices to everyone else.
Security. For example, a user may have restricted access to sensitive information when using an insecure network connection, and access to the financial accounts may be restricted prior to publication.
Customer Segmentation. For example, each retail customer may get a different set of special offers, and this can be generated dynamically, according to the contents of the shopping basket or the path through the store.
Differentiating factors can include identity (including personalization) and context (including presence).
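A minimal sketch of the pattern (all names hypothetical), echoing the stock-price example above: one service entry point whose response is differentiated by the consumer's identity and the call context:

```python
from dataclasses import dataclass

@dataclass
class Context:
    user_tier: str        # e.g. "premium" or "standard"
    secure_channel: bool  # whether the connection is trusted

def get_quote(real_time_price, delayed_price, ctx):
    if not ctx.secure_channel:
        return None                   # differentiated security: withhold data
    if ctx.user_tier == "premium":
        return real_time_price        # real-time quote
    return delayed_price              # 15-minute-delay quote

print(get_quote(101.5, 100.9, Context("standard", True)))  # 100.9
```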
Examples
Pay-As-You-Drive Insurance (PAYD)
Telematics 2.0
See also
Context-aware pervasive systems
Differentiated security
External links
Differentiated Services
Design Pattern: Differentiated Service (Fewer Interfaces than Components) CBDI Forum December 2000.
Business Flexibility: Implementing Context Driven Services CBDI Forum June 2002.
Smart Services SustainIT
Manners Externalize Semantics for On-demand Composition of Context-aware Services
Software design patterns
Service-oriented (business computing) |
https://en.wikipedia.org/wiki/Turtle%20F2F | Turtle was a free anonymous peer-to-peer network project developed at the Vrije Universiteit in Amsterdam, involving professor Andrew Tanenbaum. It is no longer under development. Like other anonymous P2P software, it allows users to share files and otherwise communicate without fear of legal sanctions or censorship. Turtle's claims of anonymity are backed by two research papers provided in the "external links" below.
Architecture
Technically, Turtle is a friend-to-friend (F2F) network: a special type of peer-to-peer network in which all of a user's communication goes only to that user's friends, and then to their friends, and so on, until it reaches the ultimate destination.
The basic idea behind Turtle is to build a P2P overlay on top of pre-existing trust relationships among Turtle users. Each user acts as node in the overlay by running a copy of the Turtle client software. Unlike existing P2P networks, Turtle does not allow arbitrary nodes to connect and exchange information. Instead, each user establishes secure and authenticated channels with a limited number of other nodes controlled by trusted people (friends).
In the Turtle overlay, both queries and results move hop by hop; the net result is that information is only exchanged between people that trust each other and is always encrypted. Consequently, a snooper or adversary has no way to determine who is requesting / providing information, and what that information is. Given this design, a Turtle network offers a number of useful security properties, such as confined damage in case of node compromise, and resilience against denial of service attacks.
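A minimal sketch (not the actual Turtle protocol): hop-by-hop query flooding over a hypothetical friend graph, where each node forwards a query only to its own trusted friends, bounded by a time-to-live:

```python
friends = {"alice": ["bob"], "bob": ["alice", "carol"],
           "carol": ["bob", "dave"], "dave": ["carol"]}   # trust graph
files = {"dave": {"song.ogg"}}

def search(origin, wanted, ttl=3):
    hits, seen, frontier = [], {origin}, [origin]
    for _ in range(ttl):                 # one hop along trust edges per round
        nxt = []
        for node in frontier:
            for friend in friends[node]:
                if friend in seen:
                    continue
                seen.add(friend)
                if wanted in files.get(friend, ()):
                    hits.append(friend)  # in Turtle, results travel back hop by hop
                nxt.append(friend)
        frontier = nxt
    return hits

print(search("alice", "song.ogg"))       # ['dave'], reached via bob and carol
```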
See also
giFT
Internet privacy
File sharing
F2F
RetroShare (inspired by "Turtle Hopping" feature)
External links
Turtle homepage (student's project, 2004)
Petr Matejka's master thesis on Turtle (2004)
"Safe and Private Data Sharing with Turtle: Friends Team-Up and Beat the System"
"Turtle: Safe and Private Data Sharing" from Usenix 2005 conference
Turtle is also cited by this |
https://en.wikipedia.org/wiki/MAYA-II | MAYA-II (Molecular Array of YES and ANDNOT logic gates) is a DNA computer, based on DNA Stem Loop Controllers, developed by scientists at Columbia University and the University of New Mexico and created in 2006.
Replacing the normally silicon-based circuits, this chip uses DNA strands to form the circuit. It has been suggested that the speed of such DNA-circuited computer chips will rival and eventually surpass that of silicon-based ones, that they will be of use in blood samples and in the body, and that they might take part in single-cell signaling.
It is the successor to MAYA-I, which was composed of only 23 logic gates and could only complete specific games of tic-tac-toe. MAYA-II has more than 100 DNA circuits and can play a full game of tic-tac-toe. It is very slow - one move in a game can take up to 30 minutes - making it more of a demonstration than a practical application.
The arrangement of this device looks like that of a tic-tac-toe grid and consists of nine wells coated with culture cells. The logic gates are made of the E6 Deoxyribozymes (or DNAzyme) which react to specific oligonucleotide input. Upon reaction, the DNAzyme cleaves the substrate producing an increase in red or green fluorescence, depending on whether it is the computer's or the human's turn respectively.
This technology was used to improve the quality of diagnostics given to patients infected with the West Nile virus. Joanne Macdonald, a Columbia University virologist, hopes this device can be implanted in the human body to monitor the presence of cancer cells or the insulin levels of diabetic patients.
One of the suggested uses put forward by MAYA's creators is that technology such as this can be used in situations where fluid is involved, such as in a sample of blood or a body, since it does not use traditional silicon components. |
https://en.wikipedia.org/wiki/Spike%20protein | In virology, a spike protein or peplomer protein is a protein that forms a large structure known as a spike or peplomer projecting from the surface of an enveloped virus. The proteins are usually glycoproteins that form dimers or trimers.
History and etymology
The term "peplomer" refers to an individual spike from the viral surface; collectively the layer of material at the outer surface of the virion has been referred to as the "peplos". The term is derived from the Greek peplos, "a loose outer garment", "robe or cloak", or "woman['s] mantle". Early systems of viral taxonomy, such as the Lwoff-Horne-Tournier system proposed in the 1960s, used the appearance and morphology of the "peplos" and peplomers as important characteristics for classification. More recently, the term "peplos" is considered a synonym for viral envelope.
Properties
Spikes or peplomers are usually rod- or club-shaped projections from the viral surface. Spike proteins are membrane proteins with typically large external ectodomains, a single transmembrane domain that anchors the protein in the viral envelope, and a short tail in the interior of the virion. They may also form protein–protein interactions with other viral proteins, such as those forming the nucleocapsid. They are usually glycoproteins, more commonly via N-linked than O-linked glycosylation.
Functions
Spikes typically have a role in viral entry. They may interact with cell-surface receptors located on the host cell and may have hemagglutinating activity as a result, or in other cases they may be enzymes. For example, influenza virus has two surface proteins with these two functions, hemagglutinin and neuraminidase. The binding site for the cell-surface receptor is usually located at the tip of the spike. Many spike proteins are membrane fusion proteins. Being exposed on the surface of the virion, spike proteins can be antigens.
Examples
Spikes or peplomers can be visible in electron micrograph images of enveloped viruses such a |
https://en.wikipedia.org/wiki/Phytotoxin | Phytotoxins are substances that are poisonous or toxic to the growth of plants. Phytotoxic substances may result from human activity, as with herbicides, or they may be produced by plants, by microorganisms, or by naturally occurring chemical reactions.
The term is also used to describe toxic chemicals produced by plants themselves, which function as defensive agents against their predators. Most examples pertaining to this definition of phytotoxin are members of various classes of specialised or secondary metabolites, including alkaloids, terpenes, and especially phenolics, though not all such compounds are toxic or serve defensive purposes. Phytotoxins may also be toxic to humans.
Toxins produced by plants
Alkaloids
Alkaloids are derived from amino acids and contain nitrogen. They are medically important because they interfere with components of the nervous system, affecting membrane transport, protein synthesis, and enzyme activities. They generally have a bitter taste. Alkaloid names usually end in -ine (caffeine, nicotine, cocaine, morphine, ephedrine).
Terpenes
Terpenes are water-insoluble lipids synthesized from acetyl-CoA or from basic intermediates of glycolysis. Their names often end in -ol (menthol), and they comprise the majority of plant essential oils.
Monoterpenes are found in gymnosperms, where they collect in the resin ducts; they may be released after an insect begins to feed, attracting the insect's natural enemies.
Sesquiterpenes are bitter tasting to humans and are found on glandular hairs or subdermal pigments.
Diterpenes are contained in resin and block and deter insect feeding. Taxol, an important anticancer drug, is found in this group.
Triterpenes mimic the insect molting hormone ecdysone, disrupting molting and development, and are often lethal. They are usually found in citrus fruit, and produce a bitter substance called limonoid that deters insect feeding.
Glycosides are made of one or more sugars combined with a non-sugar component (an aglycone), which usually determines |
https://en.wikipedia.org/wiki/Argentine%20hemorrhagic%20fever | Argentine hemorrhagic fever (AHF) or O'Higgins disease, also known in Argentina as mal de los rastrojos (stubble disease) is a hemorrhagic fever and zoonotic infectious disease occurring in Argentina. It is caused by the Junín virus (an arenavirus, closely related to the Machupo virus, causative agent of Bolivian hemorrhagic fever). Its reservoir of infection is the drylands vesper mouse, a rodent found in Argentina and Paraguay.
Epidemiology
The disease was first reported in the town of O'Higgins in Buenos Aires province, Argentina in 1958, giving it one of the names by which it is known. Theories about its nature included Weil's disease, leptospirosis and chemical pollution. It was associated with fields containing stubble after the harvest, giving it another of its names.
The endemic area of AHF covers approximately 150,000 km², comprising the provinces of Buenos Aires, Córdoba, Santa Fe and La Pampa, with an estimated at-risk population of 5 million.
The natural reservoir of infection, a small rodent known locally as ratón maicero ("maize mouse"; Calomys musculinus), has chronic asymptomatic infection, and spreads the virus through its saliva and urine. Infection is produced through contact of skin or mucous membranes, or through inhalation of infected particles. It is found mostly in people who reside or work in rural areas; 80% of those infected are males between 15 and 60 years of age.
Clinical aspects
AHF is a grave acute disease which may progress to recovery or death in 1 to 2 weeks. The incubation time of the disease is between 10 and 12 days, after which the first symptoms appear: fever, headaches, weakness, and loss of appetite and will. These intensify less than a week later, forcing the infected person to lie down, and producing stronger symptoms such as vascular, renal, hematological and neurological alterations. This stage lasts about 3 weeks.
If untreated, the mortality of AHF reaches 15–30%. The specific treatment includes plasma of recovered patients, which, if |
https://en.wikipedia.org/wiki/Day-year%20principle | The day-year principle or year-for-a-day principle is a method of interpretation of Bible prophecy in which the word day in prophecy is considered to be symbolic of a year of actual time. It was the method used by most of the Reformers, and is used principally by the historicist school of prophetic interpretation. It is held by the Seventh-day Adventist Church, Jehovah's Witnesses, and the Christadelphians. The day-year principle is also used by the Baháʼí Faith, as well as by many astrologers who employ the "secondary progression" theory, also known as the day-for-a-year theory, wherein the planets are moved forward in the table of planetary motion (known as an ephemeris) by one day for each year of life or fraction thereof. These astrologers hold that the four seasons of the year correspond, spiritually and phenomenologically, to the four "seasons" of the day.
Biblical basis
Proponents of the principle, such as the Seventh-day Adventists, claim that it has three primary precedents in Scripture:
Numbers 14:34. The Israelites will wander for 40 years in the wilderness, one year for every day spent by the spies in Canaan.
Ezekiel 4:4–6. The prophet Ezekiel is commanded to lie on his left side for 390 days, followed by his right side for 40 days, to symbolize the equivalent number of years of punishment on Israel and Judah respectively.
Daniel 9:24–27. This is known as the Prophecy of Seventy Weeks. The majority of scholars understand the passage to refer to 70 "sevens" or "septets" of years, that is, a total of 490 years.
While not listed as primary precedent by the proponents, a direct reference to the day-for-a-year concept is made in Genesis.
Genesis 29:27. Laban requires an additional seven years of work for Rachel's hand in marriage, calling the period a week.
Jon Paulien has defended the principle from a systematic theology perspective, not strictly from the Bible.
History
The day-year principle was partially employed by Jews as seen in Daniel 9:24–27, Ezekiel 4:4-7 and in the early church. It was first used |
https://en.wikipedia.org/wiki/Brauer%E2%80%93Siegel%20theorem | In mathematics, the Brauer–Siegel theorem, named after Richard Brauer and Carl Ludwig Siegel, is an asymptotic result on the behaviour of algebraic number fields. It attempts to generalise the results known on the class numbers of imaginary quadratic fields to a more general sequence of number fields K1, K2, ...
In all cases other than the rational field Q and imaginary quadratic fields, the regulator Ri of Ki must be taken into account, because Ki then has units of infinite order by Dirichlet's unit theorem. The quantitative hypothesis of the standard Brauer–Siegel theorem is that if Di is the discriminant of Ki, then [Ki : Q] / log |Di| → 0 as i → ∞.
Assuming that, and the algebraic hypothesis that Ki is a Galois extension of Q, the conclusion is that log(hi Ri) / log √|Di| → 1 as i → ∞,
where hi is the class number of Ki. If one assumes that all the degrees are bounded above by a uniform constant
N, then one may drop the assumption of normality; this is what is actually proved in Brauer's paper.
This result is ineffective, as indeed was the result on quadratic fields on which it built. Effective results in the same direction were initiated in work of Harold Stark from the early 1970s. |
https://en.wikipedia.org/wiki/Phi%20Tau%20Sigma | Phi Tau Sigma (ΦΤΣ) is the Honor Society for food science and technology. The organization was founded at the University of Massachusetts Amherst by Dr. Gideon E. (Guy) Livingston, a food technology professor. It was incorporated under the General Laws of the Commonwealth of Massachusetts as "Phi Tau Sigma Honorary Society, Inc."
Greek letters designation
Why the choice of ΦΤΣ to designate the Honor Society? Some have speculated or assumed that the Greek letters correspond to the initials of "Food Technology Society". However, recent research by Mary K. Schmidl, making use of documents retrieved from the Oregon State University archives by Robert McGorrin, including the 1958 Constitution, has elucidated the real basis of the choice. The 1958 Constitution is headed with three Greek words
"ΦΙΛΕΙΝ ΤΡΟΦΗΣ ΣΠΟΥΔΗΝ" under which are the English words "Devotion to the Study of Foods". With the assistance of Petros Taoukis, the Greek words are translated as follows:
ΦΙΛΕΙΝ: Love or devotion (pronounced Philleen, accent on the last syllable)
ΤΡΟΦΗΣ: of Food (pronounced Trophees, accent on the last syllable)
ΣΠΟΥΔΗΝ: Study (pronounced Spootheen, accent on the last syllable; th as in the word “the” or “this”, not as in the word “thesis”).
The letters ΦΤΣ represent the initials of those three Greek words.
Charter Members
Besides Livingston, the charter members of the Honor Society were M.P. Baldorf, Robert V. Decareau, E. Felicotti, W.D. Powrie, M.A. Steinberg, and D.E. Westcott.
Purposes
To recognize and honor professional achievements of Food Scientists and Technologists,
To encourage the application of fundamental scientific principles to Food Science and Technology in each of its branches,
To stimulate the exchange of scientific knowledge through meetings, lectures, and publications,
To establish and maintain a network of like-minded professionals, and
To promote exclusively charitable, scientific, literary and educational programs.
Members
Phi Tau Sigma has (currentl |
https://en.wikipedia.org/wiki/Immunoproteasome | An immunoproteasome is a type of proteasome that degrades ubiquitin-labeled proteins found in the cytoplasm in cells exposed to oxidative stress and proinflammatory stimuli. In general, proteasomes consist of a regulatory and a catalytic part. Immunoproteasomes are induced by interferon gamma (but also by other proinflammatory cytokines) and oxidative stress, which in the cell triggers the transcription of three catalytic subunits that do not occur in the classical proteasome. Another possible variant of the proteasome is the thymoproteasome, which is located in the thymus and generates peptides for presentation to naive T cells.
Structure
Structurally, the immunoproteasome is a cylindrical protein complex composed of a catalytic 20S subunit and a 19S regulatory subunit. The catalytic subunit consists of two outer alpha rings and two inner beta rings. In the classical proteasome, the beta (β) 1, β2 and β5 subunits have catalytic activity; in the immunoproteasome these are replaced by the subunits LMP2 (alias β1i), MECL-1 (alias β2i), and LMP7 (alias β5i). The LMP2 protein is composed of 20 amino acids, MECL-1 of 39 amino acids, and LMP7 occurs in two isoforms and therefore can have either 72 or 68 amino acids. The regulatory unit consists of 19 proteins, which are structurally divided into a lid of 9 proteins and a base of another 9 proteins. The RPN10 protein is added to this regulatory complex, where it stabilizes the structure and serves as a receptor for ubiquitin.
Function
The function of the immunoproteasome is primarily to specifically cleave proteins into shorter peptides, which can then be displayed on the cell surface together with the MHC I complex. The MHC I complex with bound peptide is then recognized primarily by cytotoxic T cells. In order to expose a peptide on the cell surface, the ubiquitin-labeled protein, specifically cleaved into peptides by immunoproteasome, must first be transferred to the endoplasmic reticulum using TAP1 and TAP2 transporters and cha |
https://en.wikipedia.org/wiki/Hypercycle%20%28geometry%29 | In hyperbolic geometry, a hypercycle, hypercircle or equidistant curve is a curve whose points have the same orthogonal distance from a given straight line (its axis).
Given a straight line L and a point P not on L, one can construct a hypercycle by taking all points on the same side of L as P, with perpendicular distance to L equal to that of P. The line L is called the axis, center, or base line of the hypercycle. The lines perpendicular to L, which are also perpendicular to the hypercycle, are called the normals of the hypercycle. The segments of the normals between L and the hypercycle are called the radii. Their common length is called the distance or radius of the hypercycle.
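A minimal numerical sketch of this equidistance property, assuming for concreteness the Poincaré upper half-plane model, in which the hypercycles of the imaginary axis are Euclidean rays through the origin (the distance formula sinh d = |x|/y used below is the standard one for that model):

import math

def dist_to_imag_axis(x, y):
    """Hyperbolic distance from (x, y), y > 0, to the imaginary axis
    in the Poincare upper half-plane (uses sinh d = |x| / y)."""
    return math.asinh(abs(x) / y)

alpha = 0.6   # angle between the ray and the axis, radians (illustrative)
for r in [0.5, 1.0, 2.0, 10.0]:
    x, y = r * math.sin(alpha), r * math.cos(alpha)
    print(f"r = {r:5.1f}  distance = {dist_to_imag_axis(x, y):.12f}")
# Every distance equals asinh(tan(alpha)): the ray is a hypercycle.
print("expected:", math.asinh(math.tan(alpha)))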
The hypercycles through a given point that share a tangent through that point converge towards a horocycle as their distances go towards infinity.
Properties similar to those of Euclidean lines
Hypercycles in hyperbolic geometry have some properties similar to those of lines in Euclidean geometry:
In a plane, given a line and a point not on it, there is only one hypercycle through the given point with the given line as axis (compare with Playfair's axiom for Euclidean geometry).
No three points of a hypercycle are on a circle.
A hypercycle is symmetrical to each line perpendicular to it. (Reflecting a hypercycle in a line perpendicular to the hypercycle results in the same hypercycle.)
Properties similar to those of Euclidean circles
Hypercycles in hyperbolic geometry have some properties similar to those of circles in Euclidean geometry:
A line perpendicular to a chord of a hypercycle at its midpoint is a radius and it bisects the arc subtended by the chord.
Let AB be the chord and M its midpoint.
By symmetry the line R through M perpendicular to AB must be orthogonal to the axis L.
Therefore R is a radius.
Also by symmetry, R will bisect the arc AB.
The axis and distance of a hypercycle are uniquely determined.
Let us assume that a hypercycle C has two different axes L1 and L2.
Using the previous p |
https://en.wikipedia.org/wiki/GW%20approximation | The GW approximation (GWA) is an approximation made in order to calculate the self-energy of a many-body system of electrons. The approximation is that the expansion of the self-energy Σ in terms of the single particle Green's function G and the screened Coulomb interaction W (in units of ħ = 1), Σ = iGW + (terms of higher order in W),
can be truncated after the first term: Σ ≈ iGW.
In other words, the self-energy is expanded in a formal Taylor series in powers of the screened interaction W and the lowest order term is kept in the expansion in GWA.
Theory
The above formulae are schematic in nature and show the overall idea of the approximation. More precisely, if we label an electron coordinate with its position, spin, and time and bundle all three into a composite index (the numbers 1, 2, etc.), we have Σ(1,2) = iG(1,2)W(1⁺,2) + (terms of higher order in W),
where the "+" superscript means the time index is shifted forward by an infinitesimal amount. The GWA is then the truncation to this first term: Σ(1,2) ≈ iG(1,2)W(1⁺,2).
To put this in context, if one replaces W by the bare Coulomb interaction (i.e. the usual 1/r interaction), one generates the standard perturbative series for the self-energy found in most many-body textbooks. The GWA with W replaced by the bare Coulomb yields nothing other than the Hartree–Fock exchange potential (self-energy). Therefore, loosely speaking, the GWA represents a type of dynamically screened Hartree–Fock self-energy.
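For reference, a compact way to collect the working equations usually quoted alongside the GWA is the following; this is a sketch, and sign and unit conventions vary between references, so it should be read as indicative rather than definitive:
Σ(1,2) = iG(1,2)W(1⁺,2),   W = v + v P₀ W,   P₀(1,2) = −iG(1,2)G(2,1),
where v is the bare Coulomb interaction and P₀ is the random-phase-approximation (RPA) polarizability obtained once vertex corrections are dropped.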
In a solid state system, the series for the self-energy in terms of W should converge much faster than the traditional series in the bare Coulomb interaction. This is because the screening of the medium reduces the effective strength of the Coulomb interaction: for example, if one places an electron at some position in a material and asks what the potential is at some other position in the material, the value is smaller than given by the bare Coulomb interaction (inverse distance between the points) because the other electrons in the medium polarize (move or distort their electronic states) so as to screen the electric field. Therefore, W is a smaller quantity than |
https://en.wikipedia.org/wiki/Rule%20induction | Rule induction is an area of machine learning in which formal rules are extracted from a set of observations. The rules extracted may represent a full scientific model of the data, or merely represent local patterns in the data.
Data mining in general, and rule induction in particular, aim to create algorithms without human programming, by analyzing existing data structures. In the simplest case, a rule is expressed as an "if-then statement", as produced for instance by the ID3 algorithm for decision tree learning. Rule learning algorithms take training data as input and create rules by partitioning the table, for example with cluster analysis. A possible alternative to the ID3 algorithm is genetic programming, which evolves a program until it fits the data.
Different algorithms can be created and tested on input data in the WEKA software. Additional tools are machine learning libraries for Python, such as scikit-learn; a minimal example is sketched below.
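As a minimal illustration (assuming scikit-learn and its bundled iris dataset are available; note that scikit-learn's tree learner is CART rather than ID3, but the extracted rules have the same if-then form):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a small decision tree, then print its learned partitions as if-then rules.
data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)
print(export_text(tree, feature_names=list(data.feature_names)))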
Paradigms
Some major rule induction paradigms are:
Association rule learning algorithms (e.g., Agrawal)
Decision rule algorithms (e.g., Quinlan 1987)
Hypothesis testing algorithms (e.g., RULEX)
Horn clause induction
Version spaces
Rough set rules
Inductive Logic Programming
Boolean decomposition (Feldman)
Algorithms
Some rule induction algorithms are:
Charade
Rulex
Progol
CN2 |
https://en.wikipedia.org/wiki/Rotordynamics | Rotordynamics (or rotor dynamics) is a specialized branch of applied mechanics concerned with the behavior and diagnosis of rotating structures. It is commonly used to analyze the behavior of structures ranging from jet engines and steam turbines to auto engines and computer disk storage. At its most basic level, rotor dynamics is concerned with one or more mechanical structures (rotors) supported by bearings and influenced by internal phenomena that rotate around a single axis. The supporting structure is called a stator. As the speed of rotation increases the amplitude of vibration often passes through a maximum that is called a critical speed. This amplitude is commonly excited by imbalance of the rotating structure; everyday examples include engine balance and tire balance. If the amplitude of vibration at these critical speeds is excessive, then catastrophic failure occurs. In addition to this, turbomachinery often develop instabilities which are related to the internal makeup of turbomachinery, and which must be corrected. This is the chief concern of engineers who design large rotors.
Rotating machinery produces vibrations depending upon the structure of the mechanism involved in the process. Any faults in the machine can increase or excite the vibration signatures. Vibration behavior of the machine due to imbalance is one of the main aspects of rotating machinery which must be studied in detail and considered while designing. All objects, including rotating machinery, exhibit a natural frequency depending on the structure of the object. The critical speed of a rotating machine occurs when the rotational speed matches its natural frequency. The lowest speed at which the natural frequency is first encountered is called the first critical speed, but as the speed increases, additional critical speeds are seen which are multiples of the natural frequency. Hence, minimizing rotational unbalance and unnecessary external forces is very important to reducing the ov |
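A back-of-the-envelope sketch of the first critical speed for a single-mass (Jeffcott) rotor model; the stiffness and mass values below are assumed purely for illustration:

import math

k = 2.0e6   # effective shaft stiffness, N/m (assumed)
m = 40.0    # rotor mass, kg (assumed)

omega_n = math.sqrt(k / m)               # natural frequency, rad/s
n_crit = omega_n * 60 / (2 * math.pi)    # first critical speed, rpm
print(f"first critical speed ~ {n_crit:.0f} rpm")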
https://en.wikipedia.org/wiki/Sonic%20black%20hole | A sonic black hole, sometimes called a dumb hole or acoustic black hole, is a phenomenon in which phonons (sound perturbations) are unable to escape from a region of a fluid that is flowing more quickly than the local speed of sound. They are called sonic, or acoustic, black holes because these trapped phonons are analogous to light in astrophysical (gravitational) black holes. Physicists are interested in them because they have many properties similar to astrophysical black holes and, in particular, emit a phononic version of Hawking radiation. This Hawking radiation can be spontaneously created by quantum vacuum fluctuations, in close analogy with Hawking radiation from a real black hole. On the other hand, the Hawking radiation can be stimulated in a classical process. The boundary of a sonic black hole, at which the flow speed changes from being greater than the speed of sound to less than the speed of sound, is called the event horizon.
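As a toy numerical illustration of that horizon condition (the flow-speed and sound-speed profiles below are invented for the example):

import numpy as np

x = np.linspace(0.0, 1.0, 1001)   # position along the flow (arbitrary units)
v = 1500.0 * x                    # assumed flow speed profile, m/s
c = np.full_like(x, 340.0)        # assumed constant sound speed, m/s

# The acoustic event horizon sits where the flow first becomes supersonic.
idx = np.argmax(v > c)
print(f"acoustic horizon near x = {x[idx]:.3f}")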
A rotating sonic black hole was used in 2010 to give the first laboratory testing of superradiance, a process whereby energy is extracted from a black hole.
Sonic black holes are possible because phonons in perfect fluids exhibit the same properties of motion as fields, such as gravity, in space and time. For this reason, a system in which a sonic black hole can be created is called a gravity analogue. Nearly any fluid can be used to create an acoustic event horizon, but the viscosity of most fluids creates random motion that makes features like Hawking radiation nearly impossible to detect. The complexity of such a system would make it very difficult to gain any knowledge about such features even if they could be detected. Many nearly perfect fluids have been suggested for use in creating sonic black holes, such as superfluid helium, one-dimensional degenerate Fermi gases, and Bose–Einstein condensates. Gravity analogues other than phonons in a fluid, such as slow light and a system of ions, have also been proposed for studyin |
https://en.wikipedia.org/wiki/OTPW | OTPW is a one-time password system developed for authentication in Unix-like operating systems by Markus Kuhn. A user's real password is not directly transmitted across the network. Rather, a series of one-time passwords is created from a short set of characters (constant secret) and a set of one-time tokens. As each single-use password can only be used once, passwords intercepted by a password sniffer or key logger are not useful to an attacker.
OTPW is supported in Unix and Linux (via pluggable authentication modules), OpenBSD, NetBSD, and FreeBSD, and a generic open source implementation can be used to enable its use on other systems.
OTPW, like other one-time password systems, is sensitive to a man-in-the-middle attack if used by itself. This could for example be solved by putting SSL, SPKM or a similar security protocol "under it", which authenticates the server and gives point-to-point security between the client and server.
Design and differences from other implementations
Unlike S/KEY, OTPW is not based on Lamport's scheme, in which every one-time password is the one-way hash value of its successor. Password lists based on Lamport's scheme have the problem that if the attacker can see one of the last passwords on the list, then all previous passwords can be calculated from it. It also does not store the encrypted passwords as suggested by Aviel D. Rubin in Independent One-Time Passwords, in order to keep the host free of files with secrets.
In OTPW, a one-way hash value of every single password is stored in a potentially widely readable file in the user’s home directory. For instance, hash values of 300 passwords (a typical A4 page) require only a four-kilobyte .otpw file, a typically negligible amount of storage space.
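A minimal sketch of the underlying idea, not the actual OTPW file format or algorithm (SHA-256 is used here for brevity, whereas OTPW itself is built on RIPEMD-160): the host stores only one-way hashes, and each password is invalidated on first use.

import hashlib
import secrets

# Generate n one-time passwords; the host stores only their one-way hashes.
n = 5
passwords = [secrets.token_urlsafe(8) for _ in range(n)]
stored_hashes = {hashlib.sha256(p.encode()).hexdigest() for p in passwords}

def verify(candidate: str) -> bool:
    """Accept a one-time password once, then invalidate it."""
    h = hashlib.sha256(candidate.encode()).hexdigest()
    if h in stored_hashes:
        stored_hashes.remove(h)   # single use: discard after acceptance
        return True
    return False

print(verify(passwords[0]))   # True
print(verify(passwords[0]))   # False: already used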
The passwords are carefully generated random numbers. The random number generator is based on the RIPEMD-160 secure hash function, and it is seeded by hashing together the output of various shell commands. These provide unp |
https://en.wikipedia.org/wiki/Stable%20group | In model theory, a stable group is a group that is stable in the sense of stability theory.
An important class of examples is provided by groups of finite Morley rank (see below).
Examples
A group of finite Morley rank is an abstract group G such that the formula x = x has finite Morley rank for the model G. It follows from the definition that the theory of a group of finite Morley rank is ω-stable; therefore groups of finite Morley rank are stable groups. Groups of finite Morley rank behave in certain ways like finite-dimensional objects. The striking similarities between groups of finite Morley rank and finite groups are an object of active research.
All finite groups have finite Morley rank, in fact rank 0.
Algebraic groups over algebraically closed fields have finite Morley rank, equal to their dimension as algebraic sets.
Zlil Sela showed that free groups, and more generally torsion-free hyperbolic groups, are stable. Free groups on more than one generator are not superstable.
The Cherlin–Zilber conjecture
The Cherlin–Zilber conjecture (also called the algebraicity conjecture), due to Gregory Cherlin and Boris Zilber, suggests that infinite (ω-stable) simple groups are simple algebraic groups over algebraically closed fields. The conjecture would have followed from Zilber's trichotomy conjecture. Cherlin posed the question for all ω-stable simple groups, but remarked that even the case of groups of finite Morley rank seemed hard.
Progress towards this conjecture has followed Borovik’s program of transferring methods used in classification of finite simple groups. One possible source of counterexamples is bad groups: nonsoluble connected groups of finite Morley rank all of whose proper connected definable subgroups are nilpotent. (A group is called connected if it has no definable subgroups of finite index other than itself.)
A number of special cases of this conjecture have been proved; for example:
Any connected group of Morley rank 1 is abelian.
Cherlin proved that a connect |
https://en.wikipedia.org/wiki/Optical%20black%20hole | An optical black hole is a phenomenon in which slow light is passed through a Bose–Einstein condensate that is itself spinning faster than the local speed of light within it, creating a vortex capable of trapping the light behind an event horizon just as a gravitational black hole would.
Unlike other black hole analogs such as a sonic black hole in a Bose–Einstein condensate, a slow light black hole analog is not expected to mimic the quantum effects of a black hole, and thus not emit Hawking radiation. It does, however, mimic the classical properties of a gravitational black hole, making it potentially useful in studying other properties of black holes. More recently, some physicists have developed a fiber optic based system which they believe will emit Hawking radiation.
See also
Sonic black hole
Analog models of gravity
Notes
Black holes |
https://en.wikipedia.org/wiki/Biodemography | Biodemography is the science dealing with the integration of biological theory and demography.
Overview
Biodemography is a new branch of human (classical) demography concerned with understanding the complementary biological and demographic determinants of and interactions between the birth and death processes that shape individuals, cohorts and populations. The biological component brings human demography under the unifying theoretical umbrella of evolution, and the demographic component provides an analytical foundation for many of the principles upon which evolutionary theory rests including fitness, selection, structure, and change. Biodemographers are concerned with birth and death processes as they relate to populations in general and to humans in particular, whereas population biologists specializing in life history theory are interested in these processes only insofar as they relate to fitness and evolution.
Traditionally, evolutionary biologists seldom focused on older, post-reproductives because these individuals (it is typically argued) do not contribute to fitness. In contrast, biodemographers embraced research programs expressly designed to study individuals at ages beyond their reproductive years because information on these age classes will shed important light on longevity and aging. The biological and demographic components of biodemography are not hierarchical but reciprocal in that both are primary windows on the world and are thus synergistic, complementary and mutually informing.
However, there has been much more synthesis between the approaches to demographic research in recent years, such that collaboration between evolutionary, ecological and demographic researchers is increasingly common. An example of this is the "Evolutionary Demography Society", formed in 2012/2013 to increase opportunities for inter- and multidisciplinary approaches to understanding how life history and ageing are related and lead to different population demographics.
B |
https://en.wikipedia.org/wiki/Stenohaline | Stenohaline describes an organism, usually a fish, that cannot tolerate a wide fluctuation in the salinity of water. Stenohaline is derived from the words "steno", meaning narrow, and "haline", meaning salt. Many freshwater fish, such as goldfish (Carassius auratus), tend to be stenohaline and die in environments of high salinity such as the ocean. Many marine fish, such as haddock, are also stenohaline and die in water with lower salinity.
Alternatively, fish living in coastal estuaries and tide pools are often euryhaline (tolerant of changes in salinity), as are many species with a life cycle requiring tolerance of both freshwater and seawater environments, such as salmon and herring.
See also
Fish migration
Osmoconformer
Osmoregulation
Euryhaline |
https://en.wikipedia.org/wiki/Elmore%20delay | Elmore delay is a simple approximation to the delay through an RC network in an electronic system. It is often used in applications such as logic synthesis, delay calculation, static timing analysis, placement and routing, since it is simple to compute (especially in tree structured networks, which are the vast majority of signal nets within ICs) and is reasonably accurate. Even where it is not accurate, it is usually faithful, in the sense that reducing the Elmore delay will almost always reduce the true delay, so it is still useful in optimization.
Elmore delay can be thought of in several ways, all mathematically identical.
For tree structured networks, find the delay through each segment as its R (electrical resistance) times its downstream C (electrical capacitance), and sum the delays from the root to the sink (see the sketch after this list).
Assume the output is a simple exponential, and find the exponential that has the same integral as the true response. This is also equivalent to moment matching with one moment, since the first moment is a pure exponential.
Find a one pole approximation to the true frequency response. This is a first-order Padé approximation.
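A minimal sketch of the tree formulation (the node names and the R and C values below are invented for the example):

# RC tree: each node maps to (parent, R of the edge from its parent, node C).
# "root" is the driver; all numeric values are invented for the example.
tree = {
    "a": ("root", 10.0, 1e-3),
    "b": ("a",    20.0, 2e-3),
    "c": ("a",    15.0, 1e-3),
}

def downstream_cap(node):
    """Capacitance of a node plus everything hanging below it."""
    cap = tree[node][2]
    for child, (parent, _, _) in tree.items():
        if parent == node:
            cap += downstream_cap(child)
    return cap

def elmore_delay(sink):
    """Sum R * downstream C over every segment from the root to the sink."""
    delay, node = 0.0, sink
    while node != "root":
        parent, r, _ = tree[node]
        delay += r * downstream_cap(node)
        node = parent
    return delay

print(elmore_delay("b"))   # 10*(4e-3) + 20*(2e-3) = 0.08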
There are many extensions to Elmore delay. It can be extended to upper and lower bounds, to include inductance as well as R and C, to be more accurate (higher order approximations) and so on. See delay calculation for more details and references.
See also
Delay calculation
Static timing analysis
William Cronk Elmore |
https://en.wikipedia.org/wiki/Continuous%20group%20action | In topology, a continuous group action on a topological space X is a group action of a topological group G such that the corresponding map G × X → X, (g, x) ↦ g·x,
is a continuous map. Together with the group action, X is called a G-space.
If f : H → G is a continuous group homomorphism of topological groups and if X is a G-space, then H can act on X by restriction: h·x = f(h)·x, making X an H-space. Often f is either an inclusion or a quotient map. In particular, any topological space may be thought of as a G-space via the trivial action g·x = x.
Two basic operations are that of taking the space of points fixed by a subgroup H and that of forming a quotient by H. We write X^H for the set of all x in X such that h·x = x for all h in H. For example, if we write Map(X, Y) for the set of continuous maps from a G-space X to another G-space Y, then, with the action (g·f)(x) = g·f(g⁻¹·x),
Map(X, Y)^G consists of the f such that f(g·x) = g·f(x); i.e., f is an equivariant map. We write Map_G(X, Y) = Map(X, Y)^G. Note, for example, that for a G-space X and a closed subgroup H, Map_G(G/H, X) = X^H. |
https://en.wikipedia.org/wiki/Shock%20mount | A shock mount or isolation mount is a mechanical fastener that connects two parts elastically. They are used for shock and vibration isolation.
Isolation mounts allow a piece of equipment to be securely mounted to a foundation and/or frame and, at the same time, allow it to float independently from the substrate.
Uses
Shock mounts can be found in a wide variety of applications.
Shock mounts can be used to isolate the foundation or substrate from the dynamics of the mounted equipment. This is vital on submarines, where silence is critical to mission success. Yachts also use shock mounts to damp noise (mainly noise transmitted through the structure) and increase comfort. This is usually done through elastic supports and transmission couplings.
Another common example of this are the motor and transmission mounts that are used in virtually every automobile manufactured today. Without isolation mounts, the interior noise and comfort level in today's vehicles would be significantly different from what we have grown accustomed to. In this case, shock and vibration isolation mounts are often chosen by the nature of the dynamics produced by the equipment and the weight of the equipment.
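A back-of-the-envelope way to screen such a choice is the standard single-degree-of-freedom transmissibility curve; the mount natural frequency and damping ratio below are assumed purely for illustration:

import math

def transmissibility(f_force, f_n, zeta):
    """Force transmissibility of a damped single-DOF isolation mount."""
    r = f_force / f_n
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r * r) ** 2 + (2 * zeta * r) ** 2
    return math.sqrt(num / den)

f_n = 8.0     # mount natural frequency, Hz (assumed)
zeta = 0.05   # damping ratio (assumed)
for f in [4, 8, 16, 40]:   # disturbance frequencies, Hz
    print(f"{f:3d} Hz -> T = {transmissibility(f, f_n, zeta):.2f}")
# Isolation (T < 1) only begins above sqrt(2) times the natural frequency.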
Shock mounts can be used to isolate sensitive equipment from undesirable dynamics of the foundation or substrate. Sensitive laboratory equipment needs to be isolated from handling shocks and ambient vibration. Military equipment and ships need to be able to withstand nearby explosions. Shock mounts are found in some disc drives and compact disc players, in which soft bushings are all that mechanically hold the disk and reading assembly, thereby isolating it from outside vibrations and from other outside loads such as torsion. In this case, isolation mounts are often chosen by the sensitivity of the equipment to shock (fragility) and vibration (natural frequency) and the weight of the equipment. This and nature of the input shock and vibration must be matched. A shock |
https://en.wikipedia.org/wiki/Conjugate%20%28square%20roots%29 | In mathematics, the conjugate of an expression of the form a + b√d is a − b√d, provided that √d does not appear in a and b. One says also that the two expressions are conjugate.
In particular, the two solutions of a quadratic equation are conjugate, as per the ± sign in the quadratic formula x = (−b ± √(b² − 4ac)) / (2a).
Complex conjugation is the special case where the square root is the imaginary unit.
Properties
As (a + b√d) + (a − b√d) = 2a
and (a + b√d)(a − b√d) = a² − b²d,
the sum and the product of conjugate expressions do not involve the square root anymore.
This property is used for removing a square root from a denominator, by multiplying the numerator and the denominator of a fraction by the conjugate of the denominator (see Rationalisation). An example of this usage is: x / (a + b√d) = x(a − b√d) / (a² − b²d).
Hence: a denominator of the form a + b√d can always be replaced by the rational number a² − b²d.
A corollary property is that the subtraction
(a + b√d) − (a − b√d) = 2b√d leaves only a term containing the root.
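An illustrative worked instance (chosen here for concreteness; it is not the example of the original text):
1 / (3 + √2) = (3 − √2) / ((3 + √2)(3 − √2)) = (3 − √2) / (9 − 2) = (3 − √2) / 7.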
See also
Conjugate element (field theory), the generalization to the roots of a polynomial of any degree
Elementary algebra |
https://en.wikipedia.org/wiki/Destrin | Destrin or DSTN (also known as actin depolymerizing factor or ADF) is a protein which in humans is encoded by the DSTN gene. Destrin is a component protein in microfilaments.
The product of this gene belongs to the actin-binding proteins ADF (Actin-Depolymerizing Factor)/cofilin family. This family of proteins is responsible for enhancing the turnover rate of actin in vivo. This gene encodes the actin depolymerizing protein that severs actin filaments (F-actin) and binds to actin monomers (G-actin). Two transcript variants encoding distinct isoforms have been identified for this gene.
Structure
The tertiary structure of destrin was determined by the use of triple-resonance multidimensional nuclear magnetic resonance (NMR). The secondary and tertiary structures of destrin are similar to those of the gelsolin family, another actin-regulating protein family.
Destrin, a globular protein, has three ordered layers. There is a central β-sheet composed of one parallel strand and three antiparallel strands. This β-sheet lies between a long α-helix together with a shorter one on one side, and two shorter helices on the opposite side. The four helices are parallel to the β-strands.
Function
In a variety of eukaryotes, destrin regulates actin in the cytoskeleton. Destrin binds actin and is thought to connect it as gelsolin segment-1 does. Furthermore, the binding of actin by destrin and cofilin is regulated negatively by phosphorylation. Destrin can also sever actin filaments. |
https://en.wikipedia.org/wiki/Nestin%20%28protein%29 | Nestin is a protein that in humans is encoded by the NES gene.
Nestin (acronym for neuroepithelial stem cell protein) is a type VI intermediate filament (IF) protein. These intermediate filament proteins are expressed mostly in nerve cells where they are implicated in the radial growth of the axon. Seven genes encode for the heavy (NF-H), medium (NF-M) and light neurofilament (NF-L) proteins, nestin and α-internexin in nerve cells, synemin α and desmuslin/synemin β (two alternative transcripts of the DMN gene) in muscle cells, and syncoilin (also in muscle cells). Members of this group mostly preferentially coassemble as heteropolymers in tissues. Steinert et al. have shown that nestin forms homodimers and homotetramers but does not form IF by itself in vitro. In mixtures, nestin preferentially co-assembles with purified vimentin or the type IV IF protein α-internexin to form heterodimer coiled-coil molecules.
Gene
Structurally, nestin has the shortest head domain (N-terminus) and the longest tail domain (C-terminus) of all the IF proteins. Nestin has a high molecular weight (240 kDa), with a terminus of more than 500 residues (compared to cytokeratins and lamins, with termini of fewer than 50 residues).
After subcloning the human nestin gene into plasmid vectors, Dahlstrand et al. determined the nucleotide sequence of all coding regions and parts of the introns. In order to establish the boundaries of the introns, they used the polymerase chain reaction (PCR) to amplify a fragment made from human fetal brain cDNA using two primers located in the first and fourth exon, respectively. The resulting 270 base pair (bp) long fragment was then sequenced directly in its entirety, and intron positions precisely located by comparison with the genomic sequence. Putative initiation and stop codons for the human nestin gene were found at the same positions as in the rat gene, in regions where overall similarity was very high. Based on this assumption, the human nestin gene encod |
https://en.wikipedia.org/wiki/Satellite%20television | Satellite television is a service that delivers television programming to viewers by relaying it from a communications satellite orbiting the Earth directly to the viewer's location. The signals are received via an outdoor parabolic antenna commonly referred to as a satellite dish and a low-noise block downconverter.
A satellite receiver then decodes the desired television program for viewing on a television set. Receivers can be external set-top boxes, or a built-in television tuner. Satellite television provides a wide range of channels and services. It is usually the only television available in many remote geographic areas without terrestrial television or cable television service.
In modern systems, signals are relayed from a communications satellite on the X band (8–12 GHz) or Ku band (12–18 GHz) frequencies, requiring only a small dish less than a meter in diameter. The first satellite TV systems were an obsolete type now known as television receive-only. These systems received weaker analog signals transmitted in the C-band (4–8 GHz) from FSS-type satellites, requiring the use of large 2–3-meter dishes. Consequently, these systems were nicknamed "big dish" systems, and were more expensive and less popular.
Early systems used analog signals, but modern ones use digital signals which allow transmission of the modern television standard high-definition television, due to the significantly improved spectral efficiency of digital broadcasting. As of 2022, Star One C2 from Brazil is the only remaining satellite broadcasting in analog signals.
Different receivers are required for the two types. Some transmissions and channels are unencrypted and therefore free-to-air, while many other channels are transmitted with encryption. Free-to-view channels are encrypted but not charged-for, while pay television requires the viewer to subscribe and pay a monthly fee to receive the programming.
Satellite TV has seen a recent decline in consumers due to the cord-cutting trend |
https://en.wikipedia.org/wiki/Binaural%20fusion | Binaural fusion or binaural integration is a cognitive process that involves the combination of different auditory information presented binaurally, or to each ear. In humans, this process is essential in understanding speech as one ear may pick up more information about the speech stimuli than the other.
The process of binaural fusion is important for computing the location of sound sources in the horizontal plane (sound localization), and it is important for sound segregation. Sound segregation refers to the ability to identify acoustic components from one or more sound sources. The binaural auditory system is highly dynamic and capable of rapidly adjusting tuning properties depending on the context in which sounds are heard. Each eardrum moves one-dimensionally; the auditory brain analyzes and compares movements of both eardrums to extract physical cues and synthesize auditory objects.
When stimulation from a sound reaches the ear, the eardrum deflects in a mechanical fashion, and the three middle ear bones (ossicles) transmit the mechanical signal to the cochlea, where hair cells transform the mechanical signal into an electrical signal. The auditory nerve, also called the cochlear nerve, then transmits action potentials to the central auditory nervous system.
In binaural fusion, inputs from both ears integrate and fuse to create a complete auditory picture at the brainstem. Therefore, the signals sent to the central auditory nervous system are representative of this complete picture, integrated information from both ears instead of a single ear.
The binaural squelch effect is a result of nuclei of the brainstem processing timing, amplitude, and spectral differences between the two ears. Sounds are integrated and then separated into auditory objects. For this effect to take place, neural integration from both sides is required.
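As an illustrative sketch of the timing cue alone (the sample rate, delay, and signal below are invented for the example), an interaural time difference (ITD) can be estimated by cross-correlating the two ear signals:

import numpy as np

fs = 44100                 # sample rate, Hz (assumed)
d = 18                     # true ITD in samples, ~0.41 ms (assumed)
rng = np.random.default_rng(0)
sig = rng.standard_normal(2500)
left = sig[d:d + 2000]     # the ear nearer the source hears the sound first
right = sig[:2000]         # the far ear receives a delayed copy

# The lag at the cross-correlation peak estimates the interaural time difference.
corr = np.correlate(right, left, mode="full")
lag = np.argmax(corr) - (len(left) - 1)
print(f"estimated ITD: {lag / fs * 1e3:.2f} ms")   # ~0.41 ms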
Anatomy
As sound travels into the inner ear of mammals, it encounters the hair cells that line the basilar membrane |
https://en.wikipedia.org/wiki/Proofs%20of%20trigonometric%20identities | There are several equivalent ways of defining trigonometric functions, and the proofs of the trigonometric identities between them depend on the chosen definition. The oldest, and in some sense the most elementary, definition is based on the geometry of right triangles. The proofs given in this article use this definition, and thus apply to non-negative angles not greater than a right angle. For greater and negative angles, see Trigonometric functions.
Other definitions, and therefore other proofs, are based on the Taylor series of sine and cosine, or on the differential equation to which they are solutions.
Elementary trigonometric identities
Definitions
The six trigonometric functions are defined for every real number, except, for some of them, for angles that differ from 0 by a multiple of the right angle (90°). Referring to the diagram at the right, the six trigonometric functions of θ are, for angles smaller than the right angle:
sin θ = opposite/hypotenuse, cos θ = adjacent/hypotenuse, tan θ = opposite/adjacent,
csc θ = hypotenuse/opposite, sec θ = hypotenuse/adjacent, cot θ = adjacent/opposite.
Ratio identities
In the case of angles smaller than a right angle, the following identities are direct consequences of the above definitions through the division identity a/b = (a/c)/(b/c):
tan θ = sin θ / cos θ and cot θ = cos θ / sin θ.
They remain valid for angles greater than 90° and for negative angles.
Equivalently, one may use the reciprocal identities cot θ = 1/tan θ, sec θ = 1/cos θ, and csc θ = 1/sin θ.
Complementary angle identities
Two angles whose sum is π/2 radians (90 degrees) are complementary. In the diagram, the angles at vertices A and B are complementary, so we can exchange a and b, and change θ to π/2 − θ, obtaining:
sin(π/2 − θ) = cos θ, cos(π/2 − θ) = sin θ, tan(π/2 − θ) = cot θ, cot(π/2 − θ) = tan θ, sec(π/2 − θ) = csc θ, csc(π/2 − θ) = sec θ.
Pythagorean identities
Identity 1: sin²θ + cos²θ = 1
The following two results follow from this and the ratio identities. To obtain the first, divide both sides of sin²θ + cos²θ = 1 by cos²θ; for the second, divide by sin²θ. This gives tan²θ + 1 = sec²θ.
Similarly, 1 + cot²θ = csc²θ.
Identity 2:
The following accounts for all three reciprocal functions: csc²θ + sec²θ = csc²θ · sec²θ.
Proof 2:
Refer to the triangle diagram above. Note that a² + b² = c² by the Pythagorean theorem.
Substituting with the appropriate functions, using a = c/csc θ and b = c/sec θ, gives c²/csc²θ + c²/sec²θ = c².
Rearranging gives: csc²θ + sec²θ = csc²θ · sec²θ.
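A quick numerical spot-check of these Pythagorean identities (a sketch; the angles and tolerances are chosen arbitrarily):

import math

# Spot-check the Pythagorean identities at a few acute angles (radians).
for theta in [0.1, 0.7, 1.2]:
    s, c = math.sin(theta), math.cos(theta)
    assert abs(s**2 + c**2 - 1) < 1e-12                       # identity 1
    assert abs(math.tan(theta)**2 + 1 - 1 / c**2) < 1e-9      # tan^2 + 1 = sec^2
    assert abs(1 / math.tan(theta)**2 + 1 - 1 / s**2) < 1e-9  # 1 + cot^2 = csc^2
print("all identities hold numerically")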
Angle sum identities
Sine
Draw a horizontal line (the x-axis); mark an origin O. Draw a line from O at an angle above the horizontal line and a second line at an |
https://en.wikipedia.org/wiki/Sheddase | Sheddases are membrane-bound enzymes that cleave extracellular portions of transmembrane proteins, releasing the soluble ectodomains from the cell surface. Many sheddases are members of the ADAM or aspartic protease (BACE) protein families.
These enzymes can activate a transmembrane protein if it is a receptor (e.g., HER2), or cut off the part of the transmembrane protein which has already bound an agonist (e.g., in the case of EGFR), allowing this agonist to go and stimulate a receptor on another cell. Hence, sheddases multiply the yield of agonists. Sheddase inhibitors active on ADAM10 and ADAM17 can potentiate anti-cancer therapy.
Functions
It had been postulated that sheddase activity occurs in relation to the amount of general enzymatic activity. Research indicates that sheddase activity is instead related to phosphatidylserine exposure. When PSA-3 cells' ability to synthesize phosphatidylserine was repressed, sheddase activity decreased, and the sheddase activity returned to normal levels when the cells were again able to synthesize phosphatidylserine. This led researchers to conclude that phosphatidylserine exposure is necessary for cells to exhibit sheddase activity.
Uses
Due to the nature of the mechanisms and functions of sheddase enzymes, they have been studied on the basis of discovering possible uses in medicine. One such use is in the treatment of allergic responses and other processes of the immune system. ADAM10 is responsible for the shedding of the CD23 immunoglobulin receptor, which releases soluble sCD23. sCD23 present in the blood serum contributes to immune responses and, according to some, to the onset of inflammatory diseases such as asthma. Given that the ADAM10 sheddase cleaves CD23 and increases the levels of sCD23, possible treatments for these diseases may center around the inhibition of sheddase function.
Tumor necrosis factor alpha converting enzyme (TACE) is a sheddase protein that has been observed in many types of cancer and could serve as |
https://en.wikipedia.org/wiki/Magnetic%20dip | Magnetic dip, dip angle, or magnetic inclination is the angle made with the horizontal by the Earth's magnetic field lines. This angle varies at different points on the Earth's surface. Positive values of inclination indicate that the magnetic field of the Earth is pointing downward, into the Earth, at the point of measurement, and negative values indicate that it is pointing upward. The dip angle is in principle the angle made by the needle of a vertically held compass, though in practice ordinary compass needles may be weighted against dip or may be unable to move freely in the correct plane. The value can be measured more reliably with a special instrument typically known as a dip circle.
Dip angle was discovered by the German engineer Georg Hartmann in 1544. A method of measuring it with a dip circle was described by Robert Norman in England in 1581.
Explanation
Magnetic dip results from the tendency of a magnet to align itself with lines of magnetic field. As the Earth's magnetic field lines are not parallel to the surface, the north end of a compass needle will point upward in the southern hemisphere (negative dip) or downward in the northern hemisphere (positive dip). The range of dip is from −90 degrees (at the South Magnetic Pole) to +90 degrees (at the North Magnetic Pole). Contour lines along which the dip measured at the Earth's surface is equal are referred to as isoclinic lines. The locus of the points having zero dip is called the magnetic equator or aclinic line.
Calculation for a given latitude
The inclination I is defined locally for the magnetic field due to the Earth's core, and has a positive value if the field points below the horizontal (i.e. into the Earth). Here we show how to determine the value of I at a given latitude, following the treatment given by Fowler.
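For a pure dipole field, the treatment sketched below leads to the classical relation tan I = 2 tan λ, where λ is the geomagnetic latitude; a minimal numerical sketch (the latitudes chosen are illustrative):

import math

def dip_angle(lat_deg):
    """Dipole-field inclination I (degrees) at geomagnetic latitude lat_deg."""
    return math.degrees(math.atan(2 * math.tan(math.radians(lat_deg))))

for lat in [0, 30, 60, 90]:
    print(f"latitude {lat:2d} deg -> dip {dip_angle(lat):5.1f} deg")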
Outside Earth's core we consider Maxwell's equations in a vacuum, ∇ × Bc = 0 and ∇ · Bc = 0, where Bc is the field and the subscript c denotes the core as the origin of these fields. The first means we can introduc |
https://en.wikipedia.org/wiki/Compilation%20error | Compilation error or compile error refers to a state when a compiler fails to compile a piece of computer program source code, either due to errors in the code, or, more unusually, due to errors in the compiler itself. A compilation error message often helps programmers debug the source code. Although the definitions of compilation and interpretation can be vague, generally compilation errors only refer to static compilation and not dynamic compilation. However, dynamic compilation can still technically have compilation errors, although many programmers and sources may identify them as run-time errors. Most just-in-time compilers, such as the JavaScript V8 engine, ambiguously refer to compilation errors as syntax errors since they check for them at run time.
Examples
Common C++ compilation errors
Undeclared identifier, e.g.:
doy.cpp: In function `int main()':
doy.cpp:25: `DayOfYear' undeclared (first use this function)
This means that the variable "DayOfYear" is being used before being declared.
Common function undeclared, e.g.:
xyz.cpp: In function `int main()':
xyz.cpp:6: `cout' undeclared (first use this function)
This means that the programmer most likely forgot to include iostream.
Parse error, e.g.:
somefile.cpp:24: parse error before `something'
This could mean that a semi-colon is missing at the end of the previous statement.
Internal Compiler Errors
An internal compiler error (commonly abbreviated as ICE) is an error that occurs not due to erroneous source code, but rather due to a bug in the compiler itself. They can sometimes be worked around by making small, insignificant changes to the source code around the line indicated by the error (if such a line is indicated at all), but sometimes larger changes must be made, such as refactoring the code, to avoid certain constructs. Using a different compiler or different version of the compiler may solve the issue and be an acceptable solution in some cases. When an internal compiler erro |
https://en.wikipedia.org/wiki/Peptide%20computing | Peptide computing is a form of computing which uses peptides, instead of traditional electronic components. The basis of this computational model is the affinity of antibodies towards peptide sequences. Similar to DNA computing, the parallel interactions of peptide sequences and antibodies have been used by this model to solve a few NP-complete problems. Specifically, the Hamiltonian path problem (HPP) and some versions of the set cover problem are a few NP-complete problems which have been solved using this computational model so far. This model of computation has also been shown to be computationally universal (or Turing complete).
This model of computation has some critical advantages over DNA computing. For instance, while DNA is made of four building blocks, peptides are made of twenty building blocks. The peptide-antibody interactions are also more flexible with respect to recognition and affinity than an interaction between a DNA strand and its reverse complement. However, unlike DNA computing, this model is yet to be practically realized. The main limitation is the availability of specific monoclonal antibodies required by the model.
See also
Biocomputers
Computational gene
Computational complexity theory
DNA computing
Molecular electronics
Parallel computing
Unconventional computing
Molecular logic gate |
https://en.wikipedia.org/wiki/Facebook | Facebook is an online social media and social networking service owned by American technology giant Meta Platforms. Created in 2004 by Mark Zuckerberg with four other Harvard College students and roommates Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes, its name derives from the face book directories often given to American university students. Membership was initially limited to Harvard students, gradually expanding to other North American universities. Since 2006, Facebook has allowed everyone from 13 years old (or older) to register, except in the case of a handful of nations, where the age limit is 14 years. Facebook has claimed 3 billion monthly active users, and ranked third worldwide among the most visited websites. It was the most downloaded mobile app of the 2010s.
Facebook can be accessed from devices with Internet connectivity, such as personal computers, tablets and smartphones. After registering, users can create a profile revealing information about themselves. They can post text, photos and multimedia which are shared with any other users who have agreed to be their "friend" or, with different privacy settings, publicly. Users can also communicate directly with each other with Messenger, join common-interest groups, and receive notifications on the activities of their Facebook friends and the pages they follow.
The subject of numerous controversies, Facebook has often been criticized over issues such as user privacy (as with the Cambridge Analytica data scandal), political manipulation (as with the 2016 U.S. elections) and mass surveillance. Facebook has also been subject to criticism over psychological effects such as addiction and low self-esteem, and various controversies over content such as fake news, conspiracy theories, copyright infringement, and hate speech. Commentators have accused Facebook of willingly facilitating the spread of such content, as well as exaggerating its number of users to appeal to advertisers.
History
2 |
https://en.wikipedia.org/wiki/Scalable%20Cluster%20Environment | OpenSCE (Open Scalable Cluster Environment) is an open-source Beowulf clustering software suite led by Kasetsart University, Thailand. It started as a small system-monitoring tool for clusters, called SCMS (Scalable Cluster Monitoring System), and extended from that base into many sub-projects. Currently OpenSCE has the following components:
Components
SCEBase - Core system library and command line
SCMS (Scalable Cluster Monitoring System) - Cluster monitoring. Including some command line tools for cluster management
SCMSWeb - Grid & Cluster web-base monitoring system
MPITH - A thin layer of MPI that focus on light & robust implementation
MPView - A visual profiler and debugger for parallel programs
OpenSCE Roll - A bundle of components to install in a NPACI Rocks cluster distribution.
Currently, the OpenSCE project has been moved under the umbrella of the Thai National Grid Project, led by the Thai National Grid Center.
Some components of OpenSCE, especially SCMSWeb, have been used to monitor many grid computing projects, such as PRAGMA, APGrid, and the Thai National Grid.
External links
OpenSCE Project
Thai National Grid Project
Internet Protocol based network software
Parallel computing
Grid computing |
https://en.wikipedia.org/wiki/Specific%20granule | Specific granules are secretory vesicles found exclusively in cells of the immune system called granulocytes.
It is sometimes described as applying specifically to neutrophils, and sometimes the term is applied to other types of cells.
These granules store a mixture of cytotoxic molecules, including many enzymes and antimicrobial peptides, that are released by a process called degranulation following activation of the granulocyte by an immune stimulus.
Specific granules are also known as "secondary granules".
Contents
Examples of cytotoxic molecules stored by specific granules in different granulocytes include:
Neutrophil: alkaline phosphatase, lactoferrin, lysozyme, NADPH oxidase
Eosinophil: cathepsin, major basic protein
Basophil: heparin, histamine (not directly cytotoxic)
Clinical significance
A specific granule deficiency can be associated with CEBPE. |
https://en.wikipedia.org/wiki/Nested%20intervals | In mathematics, a sequence of nested intervals can be intuitively understood as an ordered collection of intervals on the real number line with natural numbers as an index. In order for a sequence of intervals to be considered nested intervals, two conditions have to be met:
Every interval in the sequence is contained in the previous one (I_{n+1} is always a subset of I_n).
The length of the intervals gets arbitrarily small (meaning the length falls below every possible threshold ε > 0 after a certain index N).
In other words, writing I_n = [a_n, b_n], the left bound of the interval can only increase (a_n ≤ a_{n+1}), and the right bound can only decrease (b_{n+1} ≤ b_n).
Historically, long before anyone defined nested intervals in a textbook, people implicitly constructed such nestings for concrete calculation purposes. For example, the ancient Babylonians discovered a method for computing square roots of numbers. In contrast, the famed Archimedes constructed sequences of polygons that inscribed and circumscribed a unit circle, in order to get lower and upper bounds for the circle's circumference, which is the circle number pi (π).
The central question to be posed is the nature of the intersection over all the natural numbers, or, put differently, the set of numbers that are found in every interval I_n (thus, for all n). In modern mathematics, nested intervals are used as a construction method for the real numbers (in order to complete the field of rational numbers).
Historic motivation
As stated in the introduction, historic users of mathematics discovered the nesting of intervals and closely related algorithms as methods for specific calculations. Some variations and modern interpretations of these ancient techniques will be introduced here:
Computation of square roots
One intuitive algorithm is so easy to understand that it could well be found by engaged high school students. When trying to find the square root of a number x ≥ 1, one can be certain that 1 ≤ √x ≤ x, which gives the first interval [1, x], in which √x has to be found. If one kn |
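A minimal sketch of that interval-halving idea in code (the tolerance and the example input √2 are chosen purely for illustration):

def sqrt_by_nested_intervals(x, tol=1e-12):
    """Bisection: maintain [a, b] with a*a <= x <= b*b and repeatedly halve it."""
    a, b = (1.0, float(x)) if x >= 1 else (float(x), 1.0)
    while b - a > tol:
        m = (a + b) / 2
        if m * m <= x:
            a = m        # sqrt(x) lies in [m, b]
        else:
            b = m        # sqrt(x) lies in [a, m]
    return (a + b) / 2

print(sqrt_by_nested_intervals(2))   # ~1.414213562373...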
https://en.wikipedia.org/wiki/Cone%20%28category%20theory%29 | In category theory, a branch of mathematics, the cone of a functor is an abstract notion used to define the limit of that functor. Cones make other appearances in category theory as well.
Definition
Let F : J → C be a diagram in C. Formally, a diagram is nothing more than a functor from J to C. The change in terminology reflects the fact that we think of F as indexing a family of objects and morphisms in C. The category J is thought of as an "index category". One should consider this in analogy with the concept of an indexed family of objects in set theory. The primary difference is that here we have morphisms as well. Thus, for example, when J is a discrete category, it corresponds most closely to the idea of an indexed family in set theory. Another common and more interesting example takes J to be a span. J can also be taken to be the empty category, leading to the simplest cones.
Let N be an object of C. A cone from N to F is a family of morphisms ψ_X : N → F(X)
for each object X of J, such that for every morphism f : X → Y in J the following diagram commutes (that is, F(f) ∘ ψ_X = ψ_Y):
The (usually infinite) collection of all these triangles can
be (partially) depicted in the shape of a cone with the apex N. The cone ψ is sometimes said to have vertex N and base F.
One can also define the dual notion of a cone from F to N (also called a co-cone) by reversing all the arrows above. Explicitly, a co-cone from F to N is a family of morphisms ψ_X : F(X) → N
for each object X of J, such that for every morphism f : X → Y in J the following diagram commutes (that is, ψ_Y ∘ F(f) = ψ_X):
Equivalent formulations
At first glance cones seem to be slightly abnormal constructions in category theory. They are maps from an object to a functor (or vice versa). In keeping with the spirit of category theory we would like to define them as morphisms or objects in some suitable category. In fact, we can do both.
Let J be a small category and let CJ be the category of diagrams of type J in C (this is nothing more than a functor category). Define the diagona |
https://en.wikipedia.org/wiki/The%20Oxford%20Murders%20%28novel%29 | The Oxford Murders (Spanish: Crímenes imperceptibles; "Imperceptible Crimes") is a novel by the Argentine author Guillermo Martínez, first published in 2003. It was translated into English in 2005 by Sonia Soto. The story follows a professor of logic who, along with a graduate student, investigates a series of bizarre, mathematically based murders in Oxford, England.
Plot introduction
In this thriller, mathematical symbols are the key to a mysterious sequence of murders. Each new death that occurs is accompanied by a different mathematical shape, starting with a circle. This pure mathematical form heralds the death of Mrs Eagleton, the landlady of a young Argentine mathematician who narrates the story. It appears that the serial killer can be stopped only if somebody can decode the next symbol in the sequence. The mathematics graduate is joined by the leading Oxford logician Arthur Seldom on the quest to solve the cryptic clues.
The book conveys how difficult it can be to solve a mathematical puzzle presented in cryptic form.
Selected editions
Abacus (2005). . Paperback, English.
MacAdam/Cage Publishing (2005). . Hardback, English.
See also
The Oxford Murders, a 2007 film adaptation directed by Álex de la Iglesia, starring Elijah Wood and John Hurt.
The Mathematical Institute
Merton College, Oxford
Cryptography |
https://en.wikipedia.org/wiki/Somatic%20hyphae | Somatic hyphae constitute the life stage at which a fungus lives, grows, and develops, gathering nutrients and energy.
The fungus uses this stage to propagate itself through asexually created mitotic spores.
The life cycle runs from somatic hyphae through zoosporangia, zoospores, and encystation and germination, and back to somatic hyphae. |