https://en.wikipedia.org/wiki/Proizvolov%27s%20identity | In mathematics, Proizvolov's identity is an identity concerning sums of differences of positive integers. The identity was posed by Vyacheslav Proizvolov as a problem in the 1985 All-Union Soviet Student Olympiads.
To state the identity, take the first 2N positive integers,
1, 2, 3, ..., 2N − 1, 2N,
and partition them into two subsets of N numbers each. Arrange one subset in increasing order:
A1 < A2 < ... < AN.
Arrange the other subset in decreasing order:
B1 > B2 > ... > BN.
Then the sum
|A1 − B1| + |A2 − B2| + ... + |AN − BN|
is always equal to N^2.
Example
Take for example N = 3. The set of numbers is then {1, 2, 3, 4, 5, 6}. Select three numbers of this set, say 2, 3 and 5. Then the sequences A and B are:
A1 = 2, A2 = 3, and A3 = 5;
B1 = 6, B2 = 4, and B3 = 1.
The sum is
|2 − 6| + |3 − 4| + |5 − 1| = 4 + 1 + 4 = 9,
which indeed equals 3^2 = 9.
Proof
A slick proof of the identity is as follows. Note that for any pair of numbers x and y, we have |x − y| = max(x, y) − min(x, y). For this reason, it suffices to establish that the sets {max(Ai, Bi) : 1 ≤ i ≤ N} and {N + 1, N + 2, ..., 2N} coincide: the minima are then exactly {1, 2, ..., N}, so the sum equals (N+1 + ... + 2N) − (1 + ... + N) = N^2. Since the numbers are all distinct, it therefore suffices to show that max(Ai, Bi) ≥ N + 1 for every i. Assume to the contrary that this is false for some i, so that max(Ai, Bi) ≤ N, and consider the N + 1 positive integers A1, ..., Ai, Bi, B(i+1), ..., BN. Because the A's increase and the B's decrease, these numbers are all distinct (due to the construction), but they are all at most N: a contradiction is reached, since there are only N positive integers not exceeding N.
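The identity can also be checked by brute force over all partitions. The sketch below (plain Python; the function name is mine, not from the source) verifies it for small N:

```python
import itertools

def proizvolov_sum(subset, n):
    """Sum of |A_i - B_i| for the partition of {1, ..., 2N}
    determined by choosing `subset` as the A side."""
    full = set(range(1, 2 * n + 1))
    a = sorted(subset)                            # increasing order
    b = sorted(full - set(subset), reverse=True)  # decreasing order
    return sum(abs(x - y) for x, y in zip(a, b))

# The worked example from the text: N = 3, subset {2, 3, 5}.
print(proizvolov_sum({2, 3, 5}, 3))  # 9, i.e. 3^2

# Exhaustive check for N = 1..5.
for n in range(1, 6):
    assert all(proizvolov_sum(s, n) == n * n
               for s in itertools.combinations(range(1, 2 * n + 1), n))
```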
Notes |
https://en.wikipedia.org/wiki/NDISwrapper | NDISwrapper is a free software driver wrapper that enables the use of Windows XP network device drivers (for devices such as PCI cards, USB modems, and routers) on Linux operating systems. NDISwrapper works by implementing the Windows kernel and NDIS APIs and dynamically linking Windows network drivers to this implementation. As a result, it only works on systems based on the instruction set architectures supported by Windows, namely IA-32 and x86-64.
Native drivers for some network adapters are not available on Linux as some manufacturers maintain proprietary interfaces and do not write cross-platform drivers. NDISwrapper allows the use of Windows drivers, which are available for virtually all modern PC network adapters.
Use
There are three steps: Creating a Linux driver, installing it, and using it. NDISwrapper is composed of two main parts, a command-line tool used at installation time and a Windows subsystem used when an application calls the Wi-Fi subsystem.
As the outcome of an NDISwrapper installation should be a Linux driver that Linux applications can work with, the first step the user performs is to "compile" two or more Windows driver files, together with NDISwrapper's implementation of the Windows DDK, into a Linux kernel module. This is done with a tool named "ndiswrapper". The resulting Linux driver is then installed (often manually) in the OS. A Linux application can then send requests to this Linux driver, which automatically performs the adaptations needed to call its now-internal Windows driver and DDK.
To achieve this "compilation" NDISwrapper requires at least the ".inf" and the ".sys" files invariably supplied as parts of the Windows driver. For example, if the driver is called "mydriver", with the files mydriver.inf and mydriver.sys and vendorid:productid 0000:0000, then NDISwrapper installs the driver to /etc/ndiswrapper/mydriver/. This directory contains three files:
0000:0000.conf, which contains information extracted from the inf file
mydr |
https://en.wikipedia.org/wiki/Immunocompetence | In immunology, immunocompetence is the ability of the body to produce a normal immune response following exposure to an antigen. Immunocompetence is the opposite of immunodeficiency (also known as immuno-incompetence or being immuno-compromised). Examples include:
a newborn who does not yet have a fully functioning immune system but may have maternally transmitted antibodies – immunodeficient;
a late stage AIDS patient with a failed or failing immune system – immuno-incompetent; or
a transplant recipient taking medication so their body will not reject the donated organ – immunocompromised.
There may be cases of overlap, but these terms all describe an immune system that is not fully functioning.
The US Centers for Disease Control and Prevention (CDC) recommends that household and other close contacts of persons with altered immunocompetence receive the MMR, varicella, and rotavirus vaccines according to the standard schedule of vaccines, as well as receiving an annual flu shot. All other vaccines may be administered to contacts without alteration to the vaccine schedule, with the exception of the smallpox vaccine. Persons with altered immunocompetence should not receive live, attenuated vaccines (viral or bacterial), and may not receive the full benefit of inactivated vaccines.
In reference to lymphocytes, immunocompetence means that a B cell or T cell is mature and can recognize antigens and allow a person to mount an immune response. In order for lymphocytes such as T cells to become immunocompetent, which refers to the ability of lymphocyte cell receptors to recognize MHC molecules, they must undergo positive selection. Adaptive immunocompetence is regulated by growth hormone (GH), prolactin (PRL), and vasopressin (VP) – hormones secreted by the pituitary gland.
See also
Immunopharmacology and Immunotoxicology (medical journal)
Parasite-stress theory |
https://en.wikipedia.org/wiki/Death%20effector%20domain | The death-effector domain (DED) is a protein interaction domain found only in eukaryotes that regulates a variety of cellular signalling pathways. The DED domain is found in inactive procaspases (cysteine proteases) and proteins that regulate caspase activation in the apoptosis cascade such as FAS-associating death domain-containing protein (FADD). FADD recruits procaspase 8 and procaspase 10 into a death induced signaling complex (DISC). This recruitment is mediated by a homotypic interaction between the procaspase DED and a second DED in an adaptor protein that is directly associated with activated TNF receptors. Complex formation allows proteolytic activation of the procaspase into the active caspase form, which results in the initiation of apoptosis (cell death). Structurally, the DED domain is a subclass of the protein motif known as the death fold and contains 6 alpha helices that closely resemble the structure of the death domain (DD).
Structure
DED is a subfamily of the DD superfamily (other recognizable domains in this superfamily are: caspase-recruitment domain (CARD), pyrin domain (PYD) and death domain (DD)). The subfamilies structurally resemble one another: all of them (and DED in particular) are composed of a bundle of 6 alpha-helices, but they diverge in their surface features.
The complete primary structure of this protein domain has not been consensually defined. Some studies describe residues 2–184, but the C-terminus and N-terminus residues have not yet been identified. The presence of amino acids that determine the solubility and aggregation of DED has allowed DEDs to be identified in different proteins, such as caspase-8 and MC159. The secondary structure of the domain, as noted, is built from 6 alpha-helices.
The tertiary structure of the domain has been described from the crystallization of human caspase 8. The structure was determined by X-ray diffraction at a resolution of 2.2 Å. DEDs in this protein
https://en.wikipedia.org/wiki/Glasstron | The Sony Glasstron was a family of portable head-mounted displays, first released in 1996 with the model PLM-50. The products included two LCD screens and two earphones for video and audio respectively. These products are no longer manufactured nor supported by Sony.
The Glasstron was not the first head-mounted display by Sony; the Visortron was a previously exhibited unit. The Sony HMZ-T1 can be considered a successor to the Glasstron. The head-mounted display developed for Sony during the mid-1990s by Virtual i-O is completely unrelated to the Glasstron.
One application of this technology was in the game MechWarrior 2, which permitted users to adopt a visual perspective from inside the cockpit of their craft, seeing the battlefield through the cockpit as if with their own eyes.
Models
Five models were released. Supported video inputs included PC (15 pin, VGA interface), Composite and S-Video. A brief list of the models follows: |
https://en.wikipedia.org/wiki/Trinomial | In elementary algebra, a trinomial is a polynomial consisting of three terms or monomials.
Examples of trinomial expressions
3x + 5y + 8z with x, y, z variables
3t + 9s^2 + 3y^3 with t, s, y variables
3ts + 9t + 5s with t and s variables
ax^2 + bx + c, the quadratic polynomial in standard form, with variable x and constants a, b, c (a ≠ 0)
A x^a y^b z^c with variables x, y, z, nonnegative integer exponents a, b, c, and any constant A
P x^a + Q x^b + R x^c, where x is a variable, the exponents a, b, c are nonnegative integers, and P, Q, R are any constants.
Trinomial equation
A trinomial equation is a polynomial equation involving three terms. An example is the equation x = q + x^m studied by Johann Heinrich Lambert in the 18th century.
Some notable trinomials
The quadratic trinomial in standard form (as from above):
ax^2 + bx + c
sum or difference of two cubes:
a^3 ± b^3 = (a ± b)(a^2 ∓ ab + b^2)
A special type of trinomial can be factored in a manner similar to quadratics since it can be viewed as a quadratic in a new variable (x^n below). This form is factored as:
x^(2n) + s·x^n + p = (x^n + a1)(x^n + a2),
where
a1 + a2 = s and a1·a2 = p.
For instance, the polynomial x^2 + 3x + 2 is an example of this type of trinomial with n = 1. The solution a1 = 1 and a2 = 2 of the above system gives the trinomial factorization:
x^2 + 3x + 2 = (x + 1)(x + 2).
The same result can be provided by Ruffini's rule, but with a more complex and time-consuming process.
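For integer coefficients, the factorization reduces to finding two integers with a given sum and product. A brute-force sketch (plain Python; the function name is mine, not from the source):

```python
def split_sum_product(s, p):
    """Find integers a1 <= a2 with a1 + a2 = s and a1 * a2 = p,
    so that x^(2n) + s*x^n + p = (x^n + a1)(x^n + a2).
    Returns None when no integer factorization exists."""
    bound = abs(p) if p != 0 else abs(s)
    for a1 in range(-bound, bound + 1):
        a2 = s - a1
        if a1 * a2 == p and a1 <= a2:
            return a1, a2
    return None

# x^2 + 3x + 2 = (x + 1)(x + 2): sum 3, product 2.
print(split_sum_product(3, 2))   # (1, 2)
# x^2 - x - 6 = (x - 3)(x + 2): sum -1, product -6.
print(split_sum_product(-1, -6)) # (-3, 2)
```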
See also
Trinomial expansion
Monomial
Binomial
Multinomial
Simple expression
Compound expression
Sparse polynomial
Notes |
https://en.wikipedia.org/wiki/Round-robin%20DNS | Round-robin DNS is a technique of load distribution, load balancing, or fault-tolerance provisioning of multiple, redundant Internet Protocol service hosts, e.g., Web servers, FTP servers, by managing the Domain Name System's (DNS) responses to address requests from client computers according to an appropriate statistical model.
In its simplest implementation, round-robin DNS works by responding to DNS requests not only with a single potential IP address, but with a list of potential IP addresses corresponding to several servers that host identical services. The order in which IP addresses from the list are returned is the basis for the term round robin. With each DNS response, the IP address sequence in the list is permuted. Traditionally, IP clients initially attempt connections with the first address returned from a DNS query, so that on different connection attempts, clients would receive service from different providers, thus distributing the overall load among servers.
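The permutation behaviour described above can be modelled with a toy resolver. This is an illustrative sketch only, not how a real DNS server is written, and all names in it are hypothetical:

```python
from collections import deque

class RoundRobinResolver:
    """Toy zone: each query returns the full address list,
    then rotates it by one so the next client sees a new head."""
    def __init__(self):
        self.zones = {}

    def add_records(self, name, addresses):
        self.zones[name] = deque(addresses)

    def query(self, name):
        addrs = self.zones[name]
        answer = list(addrs)   # the list in its current order
        addrs.rotate(-1)       # move the head to the back for the next query
        return answer

r = RoundRobinResolver()
r.add_records("www.example.com", ["10.0.0.1", "10.0.0.2", "10.0.0.3"])
for _ in range(4):
    print(r.query("www.example.com")[0])
# Clients that take the first address get 10.0.0.1, 10.0.0.2,
# 10.0.0.3, then 10.0.0.1 again, spreading load across servers.
```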
Some resolvers attempt to re-order the list to give priority to numerically "closer" networks. This behaviour was standardized during the definition of IPv6, and has been blamed for defeating round-robin load-balancing. Some desktop clients do try alternate addresses after a connection timeout of up to 30 seconds.
Round-robin DNS is often used to load balance requests among a number of Web servers. For example, a company has one domain name and three identical copies of the same web site residing on three servers with three IP addresses. The DNS server will be set up so that domain name has multiple A records, one for each IP address. When one user accesses the home page it will be sent to the first IP address. The second user who accesses the home page will be sent to the next IP address, and the third user will be sent to the third IP address. In each case, once the IP address is given out, it goes to the end of the list. The fourth user, therefore, will be sent to the first IP address, and |
https://en.wikipedia.org/wiki/Metabolic%20network | A metabolic network is the complete set of metabolic and physical processes that determine the physiological and biochemical properties of a cell. As such, these networks comprise the chemical reactions of metabolism, the metabolic pathways, as well as the regulatory interactions that guide these reactions.
With the sequencing of complete genomes, it is now possible to reconstruct the network of biochemical reactions in many organisms, from bacteria to human. Several of these networks are available online:
Kyoto Encyclopedia of Genes and Genomes (KEGG), EcoCyc, BioCyc and metaTIGER.
Metabolic networks are powerful tools for studying and modelling metabolism.
Uses
Metabolic networks can be used to detect comorbidity patterns in diseased patients. Certain diseases, such as obesity and diabetes, can be present in the same individual concurrently, sometimes one disease being a significant risk factor for the other disease. The disease phenotypes themselves are normally the consequence of the cell's inability to break down or produce an essential substrate. However, an enzyme defect at one reaction may affect the fluxes of other subsequent reactions. These cascading effects couple the metabolic diseases associated with subsequent reactions, resulting in comorbidity effects. Thus, metabolic disease networks can be used to determine if two disorders are connected due to their correlated reactions.
See also
Metabolic network modelling
Metabolic pathway |
https://en.wikipedia.org/wiki/Unbounded%20nondeterminism | In computer science, unbounded nondeterminism or unbounded indeterminacy is a property of concurrency by which the amount of delay in servicing a request can become unbounded as a result of arbitration of contention for shared resources while still guaranteeing that the request will eventually be serviced. Unbounded nondeterminism became an important issue in the development of the denotational semantics of concurrency, and later became part of research into the theoretical concept of hypercomputation.
Fairness
Discussion of unbounded nondeterminism tends to get involved with discussions of fairness. The basic concept is that all computation paths must be "fair" in the sense that if the machine enters a state infinitely often, it must take every possible transition from that state. This amounts to requiring that the machine be guaranteed to service a request if it can, since an infinite sequence of states will only be allowed if there is no transition that leads to the request being serviced. Equivalently, every possible transition must occur eventually in an infinite computation, although it may take an unbounded amount of time for the transition to occur. This concept is to be distinguished from the local fairness of flipping a "fair" coin, by which it is understood that it is possible for the outcome to always be heads for any finite number of steps, although as the number of steps increases, this will almost surely not happen.
An example of the role of fair or unbounded nondeterminism in the merging of strings was given by William D. Clinger in his 1981 thesis. He defined a "fair merge" of two strings to be a third string in which each character of each string must occur eventually. He then considered the set of all fair merges of two strings, merge(S, T), assuming it to be a monotone function. Then he argued that merge(⊥, 1^ω) ⊆ merge(0, 1^ω), where ⊥ is the empty stream. Now merge(⊥, 1^ω) = {1^ω}, so it must be that 1^ω is an element of merge(0, 1^ω), a contradiction, since a fair merge of 0 and 1^ω must contain the character 0. He concluded that:
It appears that a fair merge cannot be written |
https://en.wikipedia.org/wiki/Open%20Rights%20Group | The Open Rights Group (ORG) is a UK-based organisation that works to preserve digital rights and freedoms by campaigning on digital rights issues and by fostering a community of grassroots activists. It campaigns on numerous issues including mass surveillance, internet filtering and censorship, and intellectual property rights.
History
The organisation was started by Danny O'Brien, Cory Doctorow, Ian Brown, Rufus Pollock, James Cronin, Stefan Magdalinski, Louise Ferguson and Suw Charman after a panel discussion at Open Tech 2005. O'Brien created a pledge on PledgeBank, placed on 23 July 2005, with a deadline of 25 December 2005: "I will create a standing order of 5 pounds per month to support an organisation that will campaign for digital rights in the UK but only if 1,000 other people will too." The pledge reached 1000 people on 29 November 2005. The Open Rights Group was launched at a "sell-out" meeting in Soho, London.
Work
The group has made submissions to the All Party Internet Group (APIG) inquiry into digital rights management and the Gowers Review of Intellectual Property.
The group was honoured in the 2008 Privacy International Big Brother Awards alongside No2ID, Liberty, Genewatch UK and others, as a recognition of their efforts to keep state and corporate mass surveillance at bay.
In 2010 the group worked with 38 Degrees to oppose the introduction of the Digital Economy Act, which was passed in April 2010.
The group opposes measures in the draft Online Safety Bill introduced in 2021, that it sees as infringing free speech rights and online anonymity.
The group campaigns against the Department for Digital, Culture, Media and Sport's plan to switch to an opt-out model for cookies. The group spokesperson stated that "[t]he UK government propose to make online spying the default option" in response to the proposed switch.
Goals
To collaborate with other digital rights and related organisations.
To nurture a community of campaigning volunteers, from |
https://en.wikipedia.org/wiki/Water%20maze%20%28neuroscience%29 | A water maze is a device used to test an animal's memory in which the alleys are filled with water, providing a motivation to escape.
Many different mazes exist, such as T- and Y-mazes, Cincinnati water mazes, and radial arm mazes. Water mazes have been used to test discrimination learning and spatial learning abilities. The Morris water navigation task is often called a "water maze task", but this is erroneous as it is not, properly speaking, a maze. The development of these mazes has aided research into, for example, hippocampal synaptic plasticity, NMDA receptor function, and looking into neurodegenerative diseases, such as Alzheimer's disease. |
https://en.wikipedia.org/wiki/Modular%20Ocean%20Model | The Modular Ocean Model (MOM) is a three-dimensional ocean circulation model designed primarily for studying the ocean climate system. The model is developed and supported primarily by researchers at the National Oceanic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory (NOAA/GFDL) in Princeton, NJ, USA.
Overview
MOM has traditionally been a level-coordinate ocean model, in which the ocean is divided into boxes whose bottoms are located at fixed depths. Such a representation makes it easy to solve the momentum equations and to represent the well-mixed, weakly stratified layer known as the ocean mixed layer near the ocean surface. However, level coordinate models have problems when it comes to the representation of thin bottom boundary layers (Winton et al., 1998) and thick sea ice. Additionally, because mixing in the ocean interior is largely along lines of constant potential density rather than along lines of constant depth, mixing must be rotated relative to the coordinate grid, a process that can be computationally expensive. By contrast, in codes which represent the ocean in terms of constant-density layers (which represent the flow in the ocean interior much more faithfully), representation of the ocean mixed layer becomes a challenge.
MOM3, MOM4, and MOM5 are used as a code base for the ocean component of the GFDL coupled models used in the IPCC assessment reports, including the GFDL CM2.X physical climate model series and the ESM2M Earth System Model. Versions of MOM have been used in hundreds of scientific papers by authors around the world. MOM4 is used as the basis for the El Niño prediction system employed by the National Centers for Environmental Prediction.
History
MOM owes its genesis to work at GFDL in the late 1960s by Kirk Bryan and Michael Cox. This code, along with a version generated at GFDL and UCLA/NCAR by Bert Semtner, is the ancestor of many of the level-coordinate ocean model codes run around the world today. In the late |
https://en.wikipedia.org/wiki/DASK | The DASK was the first computer in Denmark. It was commissioned in 1955, designed and constructed by Regnecentralen, and began operation in September 1957. DASK is an acronym for Dansk Aritmetisk Sekvens Kalkulator or Danish Arithmetic Sequence Calculator. Regnecentralen almost did not allow the name, as the word dask means "slap" in Danish. In the end, however, the name was kept, as it fit the pattern of the name BESK, the Swedish computer which provided the initial architecture for DASK.
DASK traces its origins to 1947 and a goal set by Akademiet for de Tekniske Videnskaber (Academy for the Technical Sciences or Academy of Applied Sciences), which was to follow the development of the modern computing devices. Initial funding was obtained through the Ministry of Defence (Denmark) as the Danish Military had been given a grant through the Marshall Plan for cipher machines for which the military saw no immediate need.
Originally conceived to be a copy of BESK, the rapid advancement in the field allowed improvements to be made during development, such that in the end it was not a copy of BESK. The DASK was a one-off design, developed and built in a villa. The machine became so big that the floor had to be reinforced to support its mass of 3.5 metric tons.
DASK is notable for being the subject of one of the earliest ALGOL implementations, referred to as DASK ALGOL, which counted Jørn Jensen and Peter Naur among its contributors.
Architecture
The DASK was a vacuum tube machine based on the Swedish BESK design. As described in 1956, it contained 2500 vacuum tubes, 1500 solid-state elements, and required a three-phase power supply of at least 15 kW.
Fast storage was 1024 40-bit words of magnetic-core memory (cycle time 5µs), directly addressable as 1024 full or 2048 half-words. This was complemented by an additional 8192 words of backing store on magnetic drum (3000 rpm). A full word stored 40-bit numbers in two's-complement form, or two 20-bit instructions.
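The word format can be illustrated in modern terms. The sketch below (plain Python, nothing DASK-specific; the function names are mine) encodes and decodes 40-bit two's-complement words:

```python
WORD_BITS = 40

def encode_word(n):
    """Encode a signed integer as a 40-bit two's-complement word."""
    assert -(1 << (WORD_BITS - 1)) <= n < (1 << (WORD_BITS - 1)), "out of range"
    return n & ((1 << WORD_BITS) - 1)   # mask keeps the low 40 bits

def decode_word(w):
    """Decode a 40-bit two's-complement word back to a signed integer."""
    if w >= (1 << (WORD_BITS - 1)):     # high bit set: negative value
        w -= 1 << WORD_BITS
    return w

# -1 becomes a word with all 40 bits set.
print(hex(encode_word(-1)))  # 0xffffffffff
print(decode_word(encode_word(-12345)))  # -12345
```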
In addition t |
https://en.wikipedia.org/wiki/Why%20the%20lucky%20stiff | Jonathan Gillette, known by the pseudonym why the lucky stiff (often abbreviated as _why), is a writer, cartoonist, artist, and programmer notable for his work with the Ruby programming language. Annie Lowrey described him as "one of the most unusual, and beloved, computer programmers" in the world. Along with Yukihiro Matsumoto and David Heinemeier Hansson, he was seen as one of the key figures in the Ruby community. His pseudonym might allude to the exclamation "Why, the lucky stiff!" from The Fountainhead by Ayn Rand.
_why made a presentation enigmatically titled "A Starry Afternoon, a Sinking Symphony, and the Polo Champ Who Gave It All Up for No Reason Whatsoever" at the 2005 O'Reilly Open Source Convention. It explored how to teach programming and make the subject more appealing to adolescents. _why gave a presentation and performed with his band, the Thirsty Cups, at RailsConf in 2006.
On 19 August 2009, _why's accounts on Twitter and GitHub and his personally maintained websites went offline. Shortly before he disappeared, _why tweeted, "programming is rather thankless. u see your works become replaced by superior ones in a year. unable to run at all in a few more."
_why's colleagues have assembled collections of his writings and projects.
In 2012, his website briefly went back online with a detailed explanation of his plans for the future.
Works
Books
His best known work is Why's (poignant) Guide to Ruby, which "teaches Ruby with stories." Paul Adams of Webmonkey describes its eclectic style as resembling a "collaboration between Stanisław Lem and Edward Lear". Chapter three was published in The Best Software Writing I: Selected and Introduced by Joel Spolsky.
In April 2013, a complete book attributed to Jonathan Gillette was digitally released via the website whytheluckystiff.net (which has since changed ownership) and the GitHub repository cwales. It was presented as individual files of PCL (Printer Command Language) without any instructions on how to asse |
https://en.wikipedia.org/wiki/Lea%20test | The LEA Vision Test System is a series of pediatric vision tests designed specifically for children who do not know how to read the letters of the alphabet that are typically used in eye charts. There are numerous variants of the LEA test which can be used to assess the visual capabilities of near vision and distance vision, as well as several other aspects of occupational health, such as contrast sensitivity, visual field, color vision, visual adaptation, motion perception, and ocular function and accommodation (eye).
History
The first version of the LEA test was developed in 1976 by Finnish pediatric ophthalmologist Lea Hyvärinen, MD, PhD. Dr. Hyvärinen completed her thesis on fluorescein angiography and helped start the first clinical laboratory in that area while serving as a fellow at the Wilmer Eye Institute of Johns Hopkins Hospital in 1967. During her time with the Wilmer Institute, she became interested in vision rehabilitation and assessment and has been working in that field since the 1970s, training rehabilitation teams, designing new visual assessment devices, and teaching. The first test within the LEA Vision Test System that Dr. Hyvärinen created was the classic LEA Symbols Test, followed shortly by the LEA Numbers Test, which was used in comparison studies within the field of occupational medicine.
Accuracy
Among the array of visual assessment picture tests that exist, the LEA symbols tests are the only tests that have been calibrated against the standardized Landolt C vision test symbol. The Landolt C is an optotype that is used throughout most of the world as the standardized symbol for measuring visual acuity. It is identical to the "C" that is used in the traditional Snellen chart.
In addition to this, the LEA symbols test has been experimentally verified to be both a valid and reliable measure of visual acuity. As is desirable of a good vision test, each of the four optotypes used in the symbols test has been proven to measure visual acuity sim |
https://en.wikipedia.org/wiki/Digital%20mockup | Digital mockUp (DMU) is the digital description of a product, usually in 3D, for its entire life cycle. Digital mockup is enriched by all the activities that contribute to describing the product. The product design engineers, the manufacturing engineers, and the support engineers work together to create and manage the DMU. One of the objectives is to have important knowledge of the future or the supported product to replace any physical prototypes with virtual ones, using 3D computer graphics techniques. As an extension it is also frequently referred to as Digital Prototyping or Virtual Prototyping. These two specific definitions refer to the production of a physical prototype, but they are part of the DMU concept. DMU allows engineers to design and configure complex products and validate their designs without ever needing to build a physical model.
Among the techniques and technologies which make this possible are:
the use of lightweight 3D models with multiple levels of detail, stored in lightweight data structures such as JT, XVL, and PDF, which allow engineers to visualize, analyze, and interact with large amounts of product data in real time on standard desktop computers.
direct interfaces between digital mockups and PDM systems.
active digital mockup technology that unites the ability to visualize the assembly mockup with the ability to measure, analyze, simulate, design and redesign.
Methodology and process
Technologies
Product Life Cycle Management
Visualization
Collaboration
Large Model Rendering
Interference Checking (aka Clearance Analysis or Clash Analysis)
Measurement Tools
Cross Sectioning
Path Extraction & Swept Volume Analysis
Constraint Management
Product Structure Modeling
Product Data Management (PDM)
Software
NX from Siemens Digital Industries Software
CATIA from Dassault Systèmes
Orealia from Onesia for Interactive 3D Digital MockUp
Showcase from Autodesk
Creo from PTC
Open Cascade Technology products from Open Cascade
See also
C |
https://en.wikipedia.org/wiki/Superposition%20calculus | The superposition calculus is a calculus for reasoning in equational logic. It was developed in the early 1990s and combines concepts from first-order resolution with ordering-based equality handling as developed in the context of (unfailing) Knuth–Bendix completion. It can be seen as a generalization of either resolution (to equational logic) or unfailing completion (to full clausal logic). Like most first-order calculi, superposition tries to show the unsatisfiability of a set of first-order clauses, i.e. it performs proofs by refutation. Superposition is refutation complete—given unlimited resources and a fair derivation strategy, from any unsatisfiable clause set a contradiction will eventually be derived.
Most state-of-the-art theorem provers for first-order logic are based on superposition (e.g. the E equational theorem prover), although only a few implement the pure calculus.
Implementations
E
SPASS
Vampire
Waldmeister (official web page) |
https://en.wikipedia.org/wiki/Bent%20bond | In organic chemistry, a bent bond, also known as a banana bond, is a type of covalent chemical bond with a geometry somewhat reminiscent of a banana. The term itself is a general representation of electron density or configuration resembling a similar "bent" structure within small ring molecules, such as cyclopropane (C3H6) or as a representation of double or triple bonds within a compound that is an alternative to the sigma and pi bond model.
Small cyclic molecules
Bent bonds are a special type of chemical bonding in which the ordinary hybridization state of two atoms making up a chemical bond are modified with increased or decreased s-orbital character in order to accommodate a particular molecular geometry. Bent bonds are found in strained organic compounds such as cyclopropane, oxirane and aziridine.
In these compounds, it is not possible for the carbon atoms to assume the 109.5° bond angles with standard sp3 hybridization. Increasing the p-character to sp5 (i.e. 1/6 s-density and 5/6 p-density) makes it possible to reduce the bond angles to 60°. At the same time, the carbon-to-hydrogen bonds gain more s-character, which shortens them. In cyclopropane, the maximum electron density between two carbon atoms does not correspond to the internuclear axis, hence the name bent bond. In cyclopropane, the interorbital angle is 104°. This bending can be observed experimentally by X-ray diffraction of certain cyclopropane derivatives: the deformation density is outside the line of centers between the two carbon atoms. The carbon–carbon bond lengths are shorter than in a regular alkane bond: 151 pm versus 153 pm.
Cyclobutane is a larger ring, but still has bent bonds. In this molecule, the carbon bond angles are 90° for the planar conformation and 88° for the puckered one. Unlike in cyclopropane, the C–C bond lengths actually increase rather than decrease; this is mainly due to 1,3-nonbonded steric repulsion. In terms of reactivity, cyclobutane is relatively inert and behave |
https://en.wikipedia.org/wiki/Latent%20and%20observable%20variables | In statistics, latent variables (from Latin: present participle of lateo, “lie hidden”) are variables that can only be inferred indirectly through a mathematical model from other observable variables that can be directly observed or measured. Such latent variable models are used in many disciplines, including political science, demography, engineering, medicine, ecology, physics, machine learning/artificial intelligence, bioinformatics, chemometrics, natural language processing, management, psychology and the social sciences.
Latent variables may correspond to aspects of physical reality. These could in principle be measured, but may not be for practical reasons. In this situation, the term hidden variables is commonly used (reflecting the fact that the variables are meaningful, but not observable). Other latent variables correspond to abstract concepts, like categories, behavioral or mental states, or data structures. The terms hypothetical variables or hypothetical constructs may be used in these situations.
The use of latent variables can serve to reduce the dimensionality of data. Many observable variables can be aggregated in a model to represent an underlying concept, making it easier to understand the data. In this sense, they serve a function similar to that of scientific theories. At the same time, latent variables link observable "sub-symbolic" data in the real world to symbolic data in the modeled world.
Examples
Psychology
Latent variables, as created by factor analytic methods, generally represent "shared" variance, or the degree to which variables "move" together. Variables that have no correlation cannot result in a latent construct based on the common factor model.
The "Big Five personality traits" have been inferred using factor analysis.
extraversion
spatial ability
wisdom “Two of the more predominant means of assessing wisdom include wisdom-related performance and latent variable measures.”
Spearman's g, or the general intelligence fac |
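The common-factor idea described above — observed variables that "move together" only through a shared latent variable — can be sketched numerically. The indicator names and factor loadings below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
g = rng.normal(size=n)  # latent variable, never observed directly

# Two observable indicators, each = loading * g + unique noise
vocab_score = 0.8 * g + 0.6 * rng.normal(size=n)
matrix_score = 0.7 * g + 0.7 * rng.normal(size=n)

# The indicators correlate only because they share the latent g
r = np.corrcoef(vocab_score, matrix_score)[0, 1]
```

With these loadings the expected correlation is 0.8 × 0.7 / sqrt(1.0 × 0.98) ≈ 0.57, illustrating how shared variance between observables points back to a single underlying construct.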
https://en.wikipedia.org/wiki/Chief%20Noc-A-Homa | Chief Noc-A-Homa was a mascot for the American professional baseball team Atlanta Braves from 1966 to 1985. He was primarily played by Levi Walker, Jr. After being a mascot for the franchise for two decades, the Atlanta Braves retired Chief Noc-A-Homa before the 1986 season.
History
Origin
The mascot's tradition started in 1964 while the franchise was in Milwaukee. The first recorded instance of the concept came when a 16-year-old high school student named Tim Rynders set up a tipi in the centerfield bleachers. He danced and ignited smoke bombs when the Braves scored. While the concept started in Milwaukee, there was no name associated with the mascot until the team moved to Atlanta.
During the 1966 season, the Atlanta Braves held a contest to name their mascot. Mary Truesdale, a resident of Greenville, South Carolina, was one of three people who entered "Chief Noc-A-Homa", the winning name, which the Braves chose and announced on July 26, 1966.
The first Chief Noc-A-Homa was portrayed by a Georgia State college student named Larry Hunn. During the 1968 season, after training from Hunn, Tim Minors took over as Noc-A-Homa.
In 1968, Levi Walker approached the Braves about having a real Native American portray the chief. Having grown weary of life as an insurance salesman, warehouse worker and plumber, Walker was hired for the 1969 season.
On May 26, 1969, Walker's tipi caught fire after he lit a smoke bomb celebrating a home run by Clete Boyer. After dancing around the tipi behind the left field fence, Chief Noc-A-Homa went inside but came charging out when flames shot two feet into the air. The fire was quickly put out, and after the game Walker said the smoke bombs had been sabotaged. Walker became synonymous with Noc-A-Homa and kept the job for 17 years, serving as the mascot until it was retired before the 1986 season. Walker, a Michigan native and member of the Odawa tribe, was the most famous version of Noc-A-Homa.
Chief Noc-A-Homa could be found at every home game |
https://en.wikipedia.org/wiki/Respiratory%20rate | The respiratory rate is the rate at which breathing occurs; it is set and controlled by the respiratory center of the brain. A person's respiratory rate is usually measured in breaths per minute.
Measurement
The respiratory rate in humans is measured by counting the number of breaths for one minute through counting how many times the chest rises. A fibre-optic breath rate sensor can be used for monitoring patients during a magnetic resonance imaging scan. Respiration rates may increase with fever, illness, or other medical conditions.
Inaccuracies in respiratory measurement have been reported in the literature. One study compared respiratory rate counted using a 90-second count period to a full minute, and found significant differences in the rates. Another study found that rapid respiratory rates in babies, counted using a stethoscope, were 60–80% higher than those counted from beside the cot without the aid of the stethoscope. Similar results are seen with animals when they are being handled and not being handled—the invasiveness of touch apparently is enough to make significant changes in breathing.
Various other methods to measure respiratory rate are commonly used, including impedance pneumography, and capnography which are commonly implemented in patient monitoring. In addition, novel techniques for automatically monitoring respiratory rate using wearable sensors are in development, such as estimation of respiratory rate from the electrocardiogram, photoplethysmogram, or accelerometry signals.
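As a toy illustration of signal-based estimation, the dominant frequency of an idealized, synthetic chest-movement signal can be converted to breaths per minute (the sampling rate and breath rate below are assumptions for the sketch):

```python
import numpy as np

fs = 10.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)       # one minute of samples
true_rate = 15                     # breaths per minute
signal = np.sin(2 * np.pi * (true_rate / 60) * t)  # idealized chest movement

# Find the dominant frequency and convert Hz -> breaths per minute
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
estimated = 60 * freqs[1 + np.argmax(spectrum[1:])]  # skip the DC bin
```

Real respiratory signals from wearables are far noisier than this sinusoid, which is why the literature treats automatic rate estimation as an open engineering problem.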
The term breathing rate is often used interchangeably with breathing frequency. Strictly speaking, however, it should not be considered a single frequency of breathing, because a realistic breathing signal is composed of many frequencies.
Normal range
For humans, the typical respiratory rate for a healthy adult at rest is 12–15 breaths per minute. The respiratory center sets the quiet respiratory rhythm at around two seconds for an inhalation and three seconds exhalation. This gives the lower |
https://en.wikipedia.org/wiki/Highway%20Addressable%20Remote%20Transducer%20Protocol | The HART Communication Protocol (Highway Addressable Remote Transducer) is a hybrid analog+digital industrial automation open protocol. Its most notable advantage is that it can communicate over legacy 4–20 mA analog instrumentation current loops, sharing the pair of wires used by the analog-only host systems. HART is widely used in process and instrumentation systems ranging from small automation applications up to highly sophisticated industrial applications.
In the OSI model, HART is a Layer 7 (application layer) protocol; layers 3–6 are not used. When sent over 4–20 mA loops it uses the Bell 202 standard for Layer 1, but the signal is often converted to RS-485 or RS-232.
According to Emerson, due to the huge installed base of 4–20 mA systems throughout the world, the HART Protocol is one of the most popular industrial protocols today. HART has been a good transition protocol for users who wished to keep their legacy 4–20 mA signals but wanted to implement a "smart" protocol.
OSI layer
History
The protocol was developed by Rosemount Inc., built on the early Bell 202 communications standard, in the mid-1980s as a proprietary digital communication protocol for their smart field instruments. It soon evolved into HART, and in 1986 it was made an open protocol. Since then, the capabilities of the protocol have been enhanced by successive revisions to the specification.
Modes
There are two main operational modes of HART instruments: point-to-point (analog/digital) mode, and multi-drop mode.
Point to point
In point-to-point mode the digital signals are overlaid on the 4–20 mA loop current. Both the 4–20 mA current and the digital signal are valid signalling protocols between the controller and measuring instrument or final control element.
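The analog half of this scheme is plain linear scaling of the loop current, with 4 mA as a "live zero". A minimal sketch (the 0–10 bar transmitter range is a made-up example, not part of the protocol):

```python
def current_to_pv(i_ma, pv_min, pv_max):
    """Convert a 4-20 mA loop current to a process variable.

    4 mA maps to pv_min (the "live zero"), 20 mA to pv_max.
    """
    if not 4.0 <= i_ma <= 20.0:
        raise ValueError("current outside the 4-20 mA range")
    return pv_min + (i_ma - 4.0) / 16.0 * (pv_max - pv_min)

# Hypothetical pressure transmitter ranged 0-10 bar:
mid_scale = current_to_pv(12.0, 0.0, 10.0)  # 5.0 bar
```

The live zero is what lets a host distinguish a true zero reading (4 mA) from a broken wire (0 mA), one reason the 4–20 mA convention has persisted.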
The polling address of the instrument is set to "0". Only one instrument can be put on each instrument cable signal pair. One variable, generally specified by the user, is assigned to the 4–20 mA signal. Other signals are sent digitally on top of the 4 |
https://en.wikipedia.org/wiki/Edge%20enhancement | Edge enhancement is an image processing filter that enhances the edge contrast of an image or video in an attempt to improve its acutance (apparent sharpness).
The filter works by identifying sharp edge boundaries in the image, such as the edge between a subject and a background of a contrasting color, and increasing the image contrast in the area immediately around the edge. This has the effect of creating subtle bright and dark highlights on either side of any edges in the image, called overshoot and undershoot, leading the edge to look more defined when viewed from a typical viewing distance.
The process is prevalent in the video field, appearing to some degree in the majority of TV broadcasts and DVDs. A modern television set's "sharpness" control is an example of edge enhancement. It is also widely used in computer printers, especially for fonts and/or graphics, to get better print quality. Most digital cameras also perform some edge enhancement, which in some cases cannot be adjusted.
Edge enhancement can be either an analog or a digital process. Analog edge enhancement may be used, for example, in all-analog video equipment such as modern CRT televisions.
Properties
Edge enhancement applied to an image can vary according to a number of properties; the most common algorithm is unsharp masking, which has the following parameters:
Amount. This controls the extent to which contrast in the edge detected area is enhanced.
Radius or aperture. This affects the size of the edges to be detected or enhanced, and the size of the area surrounding the edge that will be altered by the enhancement. A smaller radius will result in enhancement being applied only to sharper, finer edges, and the enhancement being confined to a smaller area around the edge.
Threshold. Where available, this adjusts the sensitivity of the edge detection mechanism. A lower threshold results in more subtle boundaries of colour being identified as edges. A threshold that is too lo |
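A minimal one-dimensional sketch of unsharp masking shows all three parameters and the overshoot/undershoot described above (a box blur stands in for the Gaussian blur usually used, to keep the sketch short):

```python
import numpy as np

def unsharp_mask(signal, amount=1.0, radius=1, threshold=0.0):
    """1-D unsharp masking: add back a scaled copy of the detail
    (signal minus blurred signal)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)  # box blur
    blurred = np.convolve(signal, kernel, mode="same")
    detail = signal - blurred
    detail[np.abs(detail) < threshold] = 0.0   # ignore subtle edges
    return signal + amount * detail

step = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])
out = unsharp_mask(step, amount=0.8, radius=1)
# out dips below 0 just before the edge (undershoot) and rises
# above 1 just after it (overshoot), making the edge look sharper
```

Raising `amount` exaggerates the halos, widening `radius` spreads them over more samples, and a nonzero `threshold` protects low-contrast regions from being enhanced.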
https://en.wikipedia.org/wiki/Osteitis%20pubis | Osteitis pubis is a noninfectious inflammation of the pubis symphysis (also known as the pubic symphysis, symphysis pubis, or symphysis pubica), causing varying degrees of lower abdominal and pelvic pain. Osteitis pubis was first described in patients who had undergone suprapubic surgery, and it remains a well-known complication of invasive procedures about the pelvis. It may also occur as an inflammatory process in athletes. The incidence and cause of osteitis pubis as an inflammatory process versus an infectious process continues to fuel debate among physicians when confronted by a patient who presents complaining of abdominal pain or pelvic pain and overlapping symptoms. It was first described in 1924.
Signs and symptoms
The symptoms of osteitis pubis can include loss of flexibility in the groin region, dull aching pain in the groin, or in more severe cases, a sharp stabbing pain when running, kicking, changing directions, or even during routine activities such as standing up or getting out of a car. Tenderness on palpation is also commonly present in the adductor longus origin.
Causes
Pregnancy/childbirth
Gynecologic surgery
Urologic surgery
Athletic activities (e.g. running, football, American football, ice hockey, tennis)
Major trauma
Repeated minor trauma
Rheumatological disorders
Unknown cause
In the pre-antibiotic era, osteitis pubis was an occasional complication of pelvic surgery, and in particular, of retropubic prostatectomy.
Overload or training errors
Exercising on hard surfaces (like concrete)
Exercising on uneven ground
Beginning an exercise program after a long lay-off period
Increasing exercise intensity or duration too quickly
Exercising in worn out or ill-fitting shoes
Biomechanical inefficiencies
Faulty foot and body mechanics and gait disturbances
Poor running or walking mechanics
Tight, stiff muscles in the hips, groin, and buttocks
Muscular imbalances
Leg length differences
Diagnosis
Osteitis pubis may be diagnosed with an X |
https://en.wikipedia.org/wiki/AVR%20Butterfly | The AVR Butterfly is a battery-powered single-board microcontroller developed by Atmel. It consists of an Atmel ATmega169PV Microcontroller, a liquid crystal display, joystick, speaker, serial port, real-time clock (RTC), internal flash memory, and sensors for temperature and voltage. The board is the size of a name tag and has a clothing pin on back so it can be worn as such after the user enters their name onto the LCD.
Feature set
LCD
The AVR Butterfly demonstrates LCD driving by running a 14-segment, six-character alphanumeric display. However, the LCD interface consumes many of the I/O pins.
CPU & Speed
The Butterfly's ATmega169 CPU is capable of speeds up to 8 MHz; however, it is factory-set by software to 2 MHz to preserve the life of the button-cell battery. Free replacement bootloaders are available that will launch programs at 1, 2, 4 or 8 MHz. Alternatively, this may be accomplished by changing the CPU prescaler in the application code.
Features
ATmega169V AVR 8-bit CPU, including 16 Kbyte of Flash memory for code storage and 512 bytes of EEPROM for data storage
100-segment LCD (without backlight)
4-Mbit (512-Kbyte) AT45 flash memory
4-way Mini-Joystick with center push-button
Light, temperature, and voltage (0-5 V range) sensors (light sensor no longer included due to the RoHS directive)
Piezo speaker
Solder pads for user-supplied connectors: 2 8-bit I/O ports, ISP, USI, JTAG
RS232 level converter & interface (Cable and connector provided by end user)
3 V battery holder (CR2450 battery included)
Software
The Butterfly comes preloaded with software that demonstrates many features of the ATmega169, including reading of the ambient light level and temperature and playback of musical notes. The device has a clothing-pin attached to the back, so it may be worn as a name tag — the "name" may be entered via the joystick or over the RS-232 port, and will scroll across the LCD.
Reprogramming
The Butterfly can be freely reprogrammed using |
https://en.wikipedia.org/wiki/Transient%20recovery%20voltage | A transient recovery voltage (TRV) for high-voltage circuit breakers is the voltage that appears across the terminals after current interruption. It is a critical parameter for fault interruption by a high-voltage circuit breaker, its characteristics (amplitude, rate of rise) can lead either to a successful current interruption or to a failure (called reignition or restrike).
The TRV is dependent on the characteristics of the system connected on both terminals of the circuit breaker, and on the type of fault that this circuit breaker has to interrupt (single-, double- or three-phase faults, grounded or ungrounded faults, etc.).
Characteristics of the system include:
type of neutral (effectively grounded, ungrounded, solidly grounded, etc.)
type of load (capacitive, inductive, resistive)
type of connection: cable-connected, line-connected, etc.
The most severe TRV is applied on the first pole of a circuit-breaker that interrupts current (called the first-pole-to-clear in a three-phase system). The parameters of TRVs are defined in international standards such as IEC and IEEE (or ANSI).
Capacitive load
Typical cases of capacitive loads are unloaded lines and capacitor banks.
Inductive circuit
Terminal fault
A terminal fault is a fault that occurs at the circuit breaker terminals. The circuit breaker interrupts a short-circuit at current zero, at this instant the supply voltage is maximum and the recovery voltage tends to reach the supply voltage with a high frequency transient. The normalized value of the overshoot or amplitude factor is 1.4.
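IEC-style standards express the terminal-fault TRV peak as u_c = sqrt(2/3) · U_r · k_pp · k_af, where k_pp is the first-pole-to-clear factor and k_af the amplitude factor. A sketch of this formula; the 245 kV rating and k_pp = 1.3 (typical for effectively earthed systems) are illustrative assumptions:

```python
import math

def trv_peak_kv(rated_kv, k_pp=1.3, k_af=1.4):
    """Illustrative peak TRV (kV) for the first pole to clear:
    u_c = sqrt(2/3) * U_r * k_pp * k_af.
    Default k_pp and k_af are common standard values, used here
    only as assumptions for the sketch."""
    return math.sqrt(2.0 / 3.0) * rated_kv * k_pp * k_af

peak = trv_peak_kv(245)  # roughly 364 kV for a 245 kV system
```

The sqrt(2/3) factor converts the rated rms line-to-line voltage to the peak phase-to-ground voltage, which is what actually appears across the first clearing pole.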
Short-line-fault
A short-line-fault is a fault that occurs on a line a few hundred meters to several kilometers down the line from the circuit breaker terminal. As shown on Figure 5, the TRV is characterized, in its initial part, by a steep rate-of-rise due to a high-frequency oscillation produced by travelling waves that travel on the line with positive and negative reflections at the circuit breaker terminal and at the fault |
https://en.wikipedia.org/wiki/Tautonym | A tautonym is a scientific name of a species in which both parts of the name have the same spelling, such as Rattus rattus. The first part of the name is the name of the genus and the second part is referred to as the specific epithet in the International Code of Nomenclature for algae, fungi, and plants and the specific name in the International Code of Zoological Nomenclature.
Tautonymy (i.e., the usage of tautonymous names) is permissible in zoological nomenclature (see List of tautonyms for examples). In past editions of the zoological Code, the term tautonym was used, but it has now been replaced by the more inclusive "tautonymous names"; these include trinomial names such as Gorilla gorilla gorilla and Bison bison bison.
For animals, a tautonym implicitly (though not always) indicates that the species is the type species of its genus. This can also be indicated by a species name with the specific epithet typus or typicus, although more commonly the type species is designated another way.
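A tautonymous name is simply one whose two or three parts share a spelling, which makes it trivial to check programmatically (a toy sketch, covering both binomials and trinomials):

```python
def is_tautonymous(name):
    """True if all parts of a binomial or trinomial name
    have the same spelling."""
    parts = name.lower().split()
    return len(parts) >= 2 and len(set(parts)) == 1

print(is_tautonymous("Rattus rattus"))       # True
print(is_tautonymous("Bison bison bison"))   # True (trinomial)
print(is_tautonymous("Larix decidua"))       # False
```

Note that a pleonastic name like Arctostaphylos uva-ursi correctly fails this check: the parts mean the same thing but are not spelled the same.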
Botanical nomenclature
In the current rules for botanical nomenclature (which apply retroactively), tautonyms are explicitly prohibited. One example of a botanical tautonym is 'Larix larix'. The earliest name for the European larch is Pinus larix L. (1753) but Gustav Karl Wilhelm Hermann Karsten did not agree with the placement of the species in Pinus and decided to move it to Larix in 1880. His proposed name created a tautonym. Under rules first established in 1906, which are applied retroactively, Larix larix cannot exist as a formal name. In such a case either the next earliest validly published name must be found, in this case Larix decidua Mill. (1768), or (in its absence) a new epithet must be published.
However, it is allowed for both parts of the name of a species to mean the same (pleonasm), without being identical in spelling. For instance, Arctostaphylos uva-ursi means bearberry twice, in Greek and Latin respectively; Picea omorika uses the Latin and Serbian t |
https://en.wikipedia.org/wiki/Imaging%20genetics | Imaging genetics refers to the use of anatomical or physiological imaging technologies as phenotypic assays to evaluate genetic variation. Scientists that first used the term imaging genetics were interested in how genes influence psychopathology and used functional neuroimaging to investigate genes that are expressed in the brain (neuroimaging genetics).
Imaging genetics uses research approaches in which genetic information and fMRI data in the same subjects are combined to define neuro-mechanisms linked to genetic variation. With the images and genetic information, it can be determined how individual differences in single nucleotide polymorphisms, or SNPs, lead to differences in brain wiring, structure, and intellectual function. Imaging genetics allows the direct observation of the link between genes and brain activity, in which the overall idea is that common variants in SNPs lead to common diseases. A neuroimaging phenotype is attractive because it is closer to the biology of genetic function than illnesses or cognitive phenotypes.
Alzheimer's disease
By combining the outputs of the polygenic and neuro-imaging within a linear model, it has been shown that genetic information provides additive value in the task of predicting Alzheimer's disease (AD). AD traditionally has been considered a disease marked by neuronal cell loss and widespread gray matter atrophy and the apolipoprotein E allele (APOE4) is a widely confirmed genetic risk factor for late-onset AD.
Another gene risk variant is associated with Alzheimer's, which is known as the CLU gene risk variant. The CLU gene risk variant showed a distinct profile of lower white matter integrity that may increase vulnerability to developing AD later in life. Each CLU-C allele was associated with lower FA in frontal, temporal, parietal, occipital, and subcortical white matter. Brain regions with lower FA included corticocortical pathways previously demonstrated to have lower FA in AD patients and APOE4 carriers |
https://en.wikipedia.org/wiki/Irreducible%20component | In algebraic geometry, an irreducible algebraic set or irreducible variety is an algebraic set that cannot be written as the union of two proper algebraic subsets. An irreducible component is an algebraic subset that is irreducible and maximal (for set inclusion) for this property. For example, the set of solutions of the equation xy = 0 is not irreducible, and its irreducible components are the two lines of equations x = 0 and y = 0.
It is a fundamental theorem of classical algebraic geometry that every algebraic set may be written in a unique way as a finite union of irreducible components.
These concepts can be reformulated in purely topological terms, using the Zariski topology, for which the closed sets are the algebraic subsets: A topological space is irreducible if it is not the union of two proper closed subsets, and an irreducible component is a maximal subspace (necessarily closed) that is irreducible for the induced topology. Although these concepts may be considered for every topological space, this is rarely done outside algebraic geometry, since most common topological spaces are Hausdorff spaces, and, in a Hausdorff space, the irreducible components are the singletons.
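For instance, in the affine plane over a field k, the decomposition of the reducible set defined by xy = 0 into irreducible components can be written as:

```latex
V(xy) = \{(x, y) \in k^2 : xy = 0\} = V(x) \cup V(y),
\qquad
V(x) = \{(0, t) : t \in k\}, \quad V(y) = \{(t, 0) : t \in k\}.
```

Each line V(x), V(y) is irreducible because x and y are irreducible polynomials, and neither line contains the other, so these two lines are exactly the irreducible components.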
In topology
A topological space X is reducible if it can be written as a union X = X1 ∪ X2 of two closed proper subsets X1, X2 of X.
A topological space is irreducible (or hyperconnected) if it is not reducible. Equivalently, X is irreducible if all nonempty open subsets of X are dense, or if any two nonempty open sets have nonempty intersection.
A subset F of a topological space X is called irreducible or reducible if F, considered as a topological space via the subspace topology, has the corresponding property in the above sense. That is, F is reducible if it can be written as a union F = (X1 ∩ F) ∪ (X2 ∩ F), where X1, X2 are closed subsets of X, neither of which contains F.
An irreducible component of a topological space is a maximal irreducible subset. If a subset is irreducible, its closure is also irreducible, so irreducible components are |
https://en.wikipedia.org/wiki/Personalized%20medicine | Personalized medicine, also referred to as precision medicine, is a medical model that separates people into different groups—with medical decisions, practices, interventions and/or products being tailored to the individual patient based on their predicted response or risk of disease. The terms personalized medicine, precision medicine, stratified medicine and P4 medicine are used interchangeably to describe this concept though some authors and organisations use these expressions separately to indicate particular nuances.
While the tailoring of treatment to patients dates back at least to the time of Hippocrates, the term has risen in usage in recent years given the growth of new diagnostic and informatics approaches that provide understanding of the molecular basis of disease, particularly genomics. This provides a clear evidence base on which to stratify (group) related patients.
Among the 14 Grand Challenges for Engineering, an initiative sponsored by National Academy of Engineering (NAE), personalized medicine has been identified as a key and prospective approach to "achieve optimal individual health decisions", therefore overcoming the challenge to "Engineer better medicines".
Development of concept
In personalised medicine, diagnostic testing is often employed for selecting appropriate and optimal therapies based on the context of a patient's genetic content or other molecular or cellular analysis. The use of genetic information has played a major role in certain aspects of personalized medicine (e.g. pharmacogenomics), and the term was first coined in the context of genetics, though it has since broadened to encompass all sorts of personalization measures, including the use of proteomics, imaging analysis, nanoparticle-based theranostics, among others.
Relationship to personalized medicine
Precision medicine (PM) is a medical model that proposes the customization of healthcare, with medical decisions, treatments, practices, or products being tailored to |
https://en.wikipedia.org/wiki/Transition%20%28genetics%29 | Transition, in genetics and molecular biology, refers to a point mutation that changes a purine nucleotide to another purine (A ↔ G), or a pyrimidine nucleotide to another pyrimidine (C ↔ T). Approximately two out of three single nucleotide polymorphisms (SNPs) are transitions.
Transitions can be caused by oxidative deamination and tautomerization. Although there are twice as many possible transversions, transitions appear more often in genomes, possibly due to the molecular mechanisms that generate them.
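The purine/pyrimidine partition makes the classification mechanical; enumerating all single-base changes also confirms that 4 of the 12 are transitions and 8 are transversions, matching the "twice as many" remark:

```python
from itertools import permutations

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def classify(ref, alt):
    """Classify a single-base change as a transition or a transversion."""
    if ref == alt:
        raise ValueError("not a mutation")
    within_class = ({ref, alt} <= PURINES) or ({ref, alt} <= PYRIMIDINES)
    return "transition" if within_class else "transversion"

counts = {"transition": 0, "transversion": 0}
for ref, alt in permutations("ACGT", 2):
    counts[classify(ref, alt)] += 1
print(counts)  # {'transition': 4, 'transversion': 8}
```

Despite the 1:2 ratio of possibilities, observed SNPs run roughly 2:1 the other way, which is the bias the article attributes to mechanisms such as deamination.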
5-Methylcytosine is more prone to transition than unmethylated cytosine, due to spontaneous deamination. This mechanism is important because it dictates the rarity of CpG islands.
See also
Transversion |
https://en.wikipedia.org/wiki/UTM%20theorem | In computability theory, the UTM theorem, or universal Turing machine theorem, is a basic result about Gödel numberings of the set of computable functions. It affirms the existence of a computable universal function, which is capable of calculating any other computable function. The universal function is an abstract version of the universal Turing machine, thus the name of the theorem.
Roger's equivalence theorem provides a characterization of the Gödel numbering of the computable functions in terms of the smn theorem and the UTM theorem.
Theorem
The theorem states that a partial computable function u of two variables exists such that, for every computable function f of one variable, an e exists such that u(e, x) ≃ f(x) for all x. This means that, for each x, either f(x) and u(e, x) are both defined and are equal, or are both undefined.
The theorem thus shows that, defining φe(x) as u(e, x), the sequence φ1, φ2, … is an enumeration of the partial computable functions. The function u in the statement of the theorem is called a universal function. |
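The flavor of the theorem can be sketched with an interpreter standing in for u: below, Python's exec plays the universal machine and a program's source text plays its index e. This is a toy analogy, not the formal construction:

```python
def u(e, x):
    """Toy universal function: e is source code defining a
    one-argument function named f; u interprets it on input x."""
    env = {}
    exec(e, env)
    return env["f"](x)

# Two "indices" (program texts) for two different computable functions
square = "def f(x):\n    return x * x"
successor = "def f(x):\n    return x + 1"
print(u(square, 5), u(successor, 5))  # 25 6
```

One fixed procedure u computes every function in the family, given the right index — which is exactly the property the theorem asserts for a Gödel numbering.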
https://en.wikipedia.org/wiki/Gunning%20transceiver%20logic | Gunning transceiver logic (GTL) is a type of logic signaling used to drive electronic backplane buses. It has a voltage swing between 0.4 volts and 1.2 volts—much lower than that used in TTL and CMOS logic—and symmetrical parallel resistive termination. The maximum signaling frequency is specified to be 100 MHz, although some applications use higher frequencies. GTL is defined by JEDEC standard JESD 8-3 (1993) and was invented by William Gunning while working for Xerox at the Palo Alto Research Center.
All Intel front-side buses use GTL. As of 2008, GTL in these FSBs has a maximum frequency of 1.6 GHz. The front-side bus of the Intel Pentium Pro, Pentium II and Pentium III microprocessors uses GTL+ (or GTLP) developed by Fairchild Semiconductor, an upgraded version of GTL which has defined slew rates and higher voltage levels. AGTL+ stands for either assisted Gunning transceiver logic or advanced Gunning transceiver logic. These are GTL signaling derivatives used by Intel microprocessors. |
https://en.wikipedia.org/wiki/Failure%20analysis | Failure analysis is the process of collecting and analyzing data to determine the cause of a failure, often with the goal of determining corrective actions or liability.
According to Bloch and Geitner, ”machinery failures reveal a reaction chain of cause and effect… usually a deficiency commonly referred to as the symptom…”. Failure analysis can save money, lives, and resources if done correctly and acted upon. It is an important discipline in many branches of manufacturing industry, such as the electronics industry, where it is a vital tool used in the development of new products and for the improvement of existing products. The failure analysis process relies on collecting failed components for subsequent examination of the cause or causes of failure using a wide array of methods, especially microscopy and spectroscopy. Nondestructive testing (NDT) methods (such as industrial computed tomography scanning) are valuable because the failed products are unaffected by analysis, so inspection sometimes starts using these methods.
Forensic investigation
Forensic inquiry into the failed process or product is the starting point of failure analysis. Such inquiry is conducted using scientific analytical methods such as electrical and mechanical measurements, or by analyzing failure data such as product reject reports or examples of previous failures of the same kind. The methods of forensic engineering are especially valuable in tracing product defects and flaws. They may include fatigue cracks, brittle cracks produced by stress corrosion cracking or environmental stress cracking for example. Witness statements can be valuable for reconstructing the likely sequence of events and hence the chain of cause and effect. Human factors can also be assessed when the cause of the failure is determined. There are several useful methods to prevent product failures occurring in the first place, including failure mode and effects analysis (FMEA) and fault tree analysis (FTA), methods wh |
https://en.wikipedia.org/wiki/Kuso | Kuso is a term used in East Asia for the internet culture that generally includes all types of camp and parody. In Japanese, kuso (糞) is a word that is commonly translated to English as curse words such as fuck, shit, damn, and bullshit (both kuso and shit refer to feces), and is often said as an interjection. It is also used to describe outrageous matters and objects of poor quality. This usage of kuso was brought into Taiwan around 2000 by young people who frequently visited Japanese websites and quickly became an internet phenomenon, spreading to Taiwan and Hong Kong and subsequently to Mainland China.
From Japanese kusogē to Taiwanese kuso
The root of Taiwanese "kuso" was not the Japanese word kuso itself but kusogē. The word kusogē is a clipped compound of kuso ("crap") and gēmu ("game"), and means, quite literally, "crappy (video) games". This term was eventually brought outside of Japan and its meaning shifted in the West, becoming a term of endearment (and even a category) towards either bad games of nostalgic value and/or poorly developed games that still remain enjoyable as a whole.
This philosophy soon spread to Taiwan, where people would share the games and often satirical comments on BBSes, and the term was further shortened. Games generally branded as kuso in Taiwan include Hong Kong 97 and the Death Crimson series.
Because kusogē were often unintentionally funny, soon the definition of kuso in Taiwan shifted to "anything hilarious", and people started to brand anything outrageous and funny as kuso. Parodies, such as the Chinese robot Xianxingzhe ridiculed by a Japanese website, were marked as kuso. Mo lei tau films by Stephen Chow are often said to be kuso as well. The Cultural Revolution is often a subject of parody too, with songs such as I Love Beijing Tiananmen spread around the internet for laughs.
Some, however, limit the definition of kuso to "humour limited to those about Hong Kong comics or Japanese anime, manga, and games". Kuso by such definitions are primarily doujin or f |
https://en.wikipedia.org/wiki/Millieme | Millieme is a French word meaning one thousandth of something. In English it may refer to:
Millieme (angle), a French unit of plane angle similar to a milliradian.
One thousandth of an Egyptian pound, Tunisian dinar, or Libyan pound.
Numbers |
https://en.wikipedia.org/wiki/Zodiac%20%28film%29 | Zodiac is a 2007 American neo-noir mystery thriller film directed by David Fincher from a screenplay by James Vanderbilt, based on the non-fiction books by Robert Graysmith, Zodiac (1986) and Zodiac Unmasked (2002). The film stars Jake Gyllenhaal, Mark Ruffalo, and Robert Downey Jr. with Anthony Edwards, Brian Cox, Elias Koteas, Donal Logue, John Carroll Lynch, Chloë Sevigny, Philip Baker Hall and Dermot Mulroney in supporting roles.
The film tells the story of the manhunt for the Zodiac Killer, a serial murderer who terrorized the San Francisco Bay Area during the late 1960s and early 1970s, taunting police with letters, bloodstained clothing, and ciphers mailed to newspapers. The case remains one of the United States' most infamous unsolved crimes. Fincher, Vanderbilt, and producer Bradley J. Fischer spent 18 months conducting their own investigation and research into the Zodiac murders. Fincher employed the digital Thomson Viper FilmStream Camera to photograph most of the film, with traditional high-speed film cameras used for slow-motion murder sequences.
Zodiac was released by Paramount Pictures in North America and Warner Bros. Pictures in international markets on March 2, 2007, and received largely positive reviews, with praise for its writing, directing, acting, and historical accuracy. The film was nominated for several awards, including the Saturn Award for Best Action, Adventure or Thriller Film. It grossed over $84.7 million worldwide on a production budget of $65 million. In a 2016 critics' poll conducted by the BBC, Zodiac was voted the 12th greatest film of the 21st century.
Plot
On July 4, 1969, an unknown man attacks Darlene Ferrin and Mike Mageau with a handgun at a lovers' lane in Vallejo, California. Only Mike survives.
One month later, the San Francisco Chronicle receives encrypted letters written by the killer calling himself "Zodiac," who threatens to kill a dozen people unless his coded message containing his identity is published. Polit |
https://en.wikipedia.org/wiki/Pyongyang%20TV%20Tower | Pyongyang TV Tower is a free-standing concrete TV tower with an observation deck and a panorama restaurant in Pyongyang, North Korea. The tower stands in Kaeson Park in Moranbong-guyok, north of Kim Il-sung Stadium. The tower broadcasts signals for Korean Central Television.
History
It was built in 1967 to improve broadcast coverage, which was very poor at the time, and to start colour TV broadcasts.
The Pyongyang TV Tower is chiefly based on the design of the Ostankino Tower in Moscow, which was built at the same time.
Features
There are broadcast antennas and technical equipment located at circular platforms. An observation deck is located above the ground, and the tower is topped by an antenna. It uses high-gain reflector antennas and panel antennas to provide wide coverage for analog and digital TV reception, as well as for radio reception.
See also
List of towers
Television in North Korea |
https://en.wikipedia.org/wiki/Uniform%20boundedness | In mathematics, a uniformly bounded family of functions is a family of bounded functions that can all be bounded by the same constant. This constant is larger than or equal to the absolute value of any value of any of the functions in the family.
Definition
Real line and complex plane
Let
F = { f_i : X → K, i ∈ I }
be a family of functions indexed by I, where X is an arbitrary set and K is the set of real or complex numbers. We call F uniformly bounded if there exists a real number M such that |f_i(x)| ≤ M for all i ∈ I and all x ∈ X.
Metric space
In general, let (Y, d) be a metric space with metric d. Then the set
F = { f_i : X → Y, i ∈ I }
is called uniformly bounded if there exists an element a of Y and a real number M such that d(f_i(x), a) ≤ M for all i ∈ I and all x ∈ X.
Examples
Every uniformly convergent sequence of bounded functions is uniformly bounded.
The family of functions f_n(x) = sin(nx), defined for real x with n traveling through the integers, is uniformly bounded by 1.
The family of derivatives of the above family, f_n'(x) = n cos(nx), is not uniformly bounded. Each f_n' is bounded by |n|, but there is no real number M such that |n| ≤ M for all integers n.
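The two examples can be illustrated numerically (a sketch, not a proof: it samples the families sin(nx) and n·cos(nx) on a finite grid of x values and finite range of n):

```python
import math

# Sample points in [0, 2*pi) and indices n = 1..50.
xs = [2 * math.pi * k / 1000 for k in range(1000)]

# Supremum over the family f_n(x) = sin(nx): never exceeds 1.
sup_f = max(abs(math.sin(n * x)) for n in range(1, 51) for x in xs)

# Supremum over the derivatives f_n'(x) = n*cos(nx): attained at x = 0,
# and it grows with n, so no single bound M works for all n.
sup_df = max(abs(n * math.cos(n * x)) for n in range(1, 51) for x in xs)

print(sup_f <= 1.0, sup_df)  # True 50.0
```

Extending the range of n pushes `sup_df` up without limit, which is exactly why the derivative family fails to be uniformly bounded.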
https://en.wikipedia.org/wiki/Beating%20the%20bounds | Beating the bounds or perambulating the bounds is an ancient custom still observed in parts of England, Wales, and the New England region of the United States, which traditionally involved swatting local landmarks with branches to maintain a shared mental map of parish boundaries, usually every seven years.
These ceremonial events occur on what are sometimes called gangdays; the custom of going a-ganging was kept before the Norman Conquest. During the event, a group of prominent citizens from the community, which can be an English church parish, New England town, or other civil division, will walk the geographic boundaries of their locality for the purpose of maintaining the memory of the precise location of these boundaries. While modern surveying techniques have rendered these ceremonial walks largely irrelevant, the practice remains as an important local civic ceremony or legal requirement for civic leaders.
Ceremony
In former times when maps were rare, it was usual to make a formal perambulation of the parish boundaries on Ascension Day or during Rogation week. Knowledge of the limits of each parish needed to be handed down so that such matters as liability to contribute to the repair of the church or the right to be buried within the churchyard were not disputed. The relevant jurisdiction was that of the ecclesiastical courts. The priest of the parish with the churchwardens and the parochial officials headed a crowd of boys who beat the parish boundary markers with green boughs, usually birch or willow. Sometimes the boys were whipped or violently bumped on the boundary stones to make them remember. The object of taking boys along is supposed to ensure that witnesses to the boundaries should survive as long as possible. Priests would pray for its protection in the forthcoming year, and often Psalms 103 and 104 were recited, and the priest would say such sentences as "Cursed is he who transgresseth the bounds or doles of his neighbour". Hymns would be sung, |
https://en.wikipedia.org/wiki/Biopharmaceutical | A biopharmaceutical, also known as a biological medical product, or biologic, is any pharmaceutical drug product manufactured in, extracted from, or semisynthesized from biological sources. Different from totally synthesized pharmaceuticals, they include vaccines, whole blood, blood components, allergenics, somatic cells, gene therapies, tissues, recombinant therapeutic protein, and living medicines used in cell therapy. Biologics can be composed of sugars, proteins, nucleic acids, or complex combinations of these substances, or may be living cells or tissues. They (or their precursors or components) are isolated from living sources—human, animal, plant, fungal, or microbial. They can be used in both human and animal medicine.
Terminology surrounding biopharmaceuticals varies between groups and entities, with different terms referring to different subsets of therapeutics within the general biopharmaceutical category. Some regulatory agencies use the terms biological medicinal products or therapeutic biological product to refer specifically to engineered macromolecular products like protein- and nucleic acid-based drugs, distinguishing them from products like blood, blood components, or vaccines, which are usually extracted directly from a biological source. Biopharmaceutics is pharmaceutics that works with biopharmaceuticals. Biopharmacology is the branch of pharmacology that studies biopharmaceuticals. Specialty drugs, a recent classification of pharmaceuticals, are high-cost drugs that are often biologics. The European Medicines Agency uses the term advanced therapy medicinal products (ATMPs) for medicines for human use that are "based on genes, cells, or tissue engineering", including gene therapy medicines, somatic-cell therapy medicines, tissue-engineered medicines, and combinations thereof. Within EMA contexts, the term advanced therapies refers specifically to ATMPs, although that term is rather nonspecific outside those contexts.
Gene-based and cellular bi |
https://en.wikipedia.org/wiki/Southwestern%20blot | The southwestern blot is a lab technique for identifying and characterizing DNA-binding proteins by their ability to bind to specific oligonucleotide probes. The technique also makes it possible to determine the molecular weight of proteins that bind DNA. The name combines ideas from the Southern blotting and western blotting techniques, which detect DNA and protein respectively. As in other types of blotting, proteins are separated by SDS-PAGE and are subsequently transferred to nitrocellulose membranes. From that point the southwestern procedure varies: since the first blots, many variants have been proposed with the goal of improving results. Early protocols were hampered by the need for large amounts of protein and by the proteins' susceptibility to degradation during isolation.
Southwestern blotting was first described by Brian Bowen, Jay Steinberg, U.K. Laemmli, and Harold Weintraub in 1979. During the time the technique was originally called "protein blotting". While there were existing techniques for purification of proteins associated with DNA, they often had to be used together to yield desired results. Thus, Bowen and colleagues sought to describe a procedure that could simplify the current methods of their time.
Method
Original Method
To begin, proteins of interest are prepared for the SDS-PAGE technique and subsequently loaded onto the gel for separation on the basis of molecular size. Large proteins will have difficulty navigating through the mesh-like structure of the gel as they can not fit through the pores with the ease that smaller proteins can. As a result, large proteins do not travel very far on the gel in comparison to smaller proteins that travel further. After enough time, this results in distinct bands that can be visualized from a number of post-gel electrophoresis staining procedures. The bands are at different positions on the gel relative |
https://en.wikipedia.org/wiki/Tricalcium%20phosphate | Tricalcium phosphate (sometimes abbreviated TCP), more commonly known as calcium phosphate, is a calcium salt of phosphoric acid with the chemical formula Ca3(PO4)2. It is also known as tribasic calcium phosphate and bone phosphate of lime (BPL). It is a white solid of low solubility. Most commercial samples of "tricalcium phosphate" are in fact hydroxyapatite.
It exists as three crystalline polymorphs α, α′, and β. The α and α′ states are stable at high temperatures.
Nomenclature
Calcium phosphate refers to numerous materials consisting of calcium ions (Ca2+) together with orthophosphates (PO43−), metaphosphates or pyrophosphates (P2O74−) and occasionally oxide and hydroxide ions. In particular, the common mineral apatite has formula Ca5(PO4)3X, where X is F, Cl, OH, or a mixture; it is hydroxyapatite if the extra ion is mainly hydroxide. Much of the "tricalcium phosphate" on the market is actually powdered hydroxyapatite.
Preparation
Tricalcium phosphate is produced commercially by treating hydroxyapatite with phosphoric acid and slaked lime.
It cannot be precipitated directly from aqueous solution. Typically double decomposition reactions are employed, involving a soluble phosphate and a calcium salt, e.g. (NH4)2HPO4 + Ca(NO3)2; the precipitation is performed under carefully controlled pH conditions. The precipitate will either be "amorphous tricalcium phosphate", ATCP, or calcium-deficient hydroxyapatite, CDHA, Ca9(HPO4)(PO4)5(OH) (note that CDHA is sometimes termed apatitic calcium triphosphate). Crystalline tricalcium phosphate can be obtained by calcining the precipitate. β-Ca3(PO4)2 is generally formed; higher temperatures are required to produce α-Ca3(PO4)2.
An alternative to the wet procedure entails heating a mixture of a calcium pyrophosphate and calcium carbonate:
CaCO3 + Ca2P2O7 → Ca3(PO4)2 + CO2
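A quick atom-count check (a sketch with element counts entered by hand) confirms that the calcination equation above is balanced:

```python
from collections import Counter

# Element counts per formula unit, written out by hand.
CaCO3 = Counter({"Ca": 1, "C": 1, "O": 3})
Ca2P2O7 = Counter({"Ca": 2, "P": 2, "O": 7})
Ca3_PO4_2 = Counter({"Ca": 3, "P": 2, "O": 8})
CO2 = Counter({"C": 1, "O": 2})

lhs = CaCO3 + Ca2P2O7   # reactants
rhs = Ca3_PO4_2 + CO2   # products

print(lhs == rhs)  # True: every element balances (Ca 3, C 1, P 2, O 10)
```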
Structure of β-, α- and α′- Ca3(PO4)2 polymorphs
Tricalcium phosphate has three recognised polymorphs, the rhombohedral β form (shown above), and two high temperature forms, monoclini |
https://en.wikipedia.org/wiki/Image%20histogram | An image histogram is a type of histogram that acts as a graphical representation of the tonal distribution in a digital image. It plots the number of pixels for each tonal value. By looking at the histogram for a specific image a viewer will be able to judge the entire tonal distribution at a glance.
Image histograms are present on many modern digital cameras. Photographers can use them as an aid to show the distribution of tones captured, and whether image detail has been lost to blown-out highlights or blacked-out shadows. This is less useful when using a raw image format, as the dynamic range of the displayed image may only be an approximation to that in the raw file.
The horizontal axis of the graph represents the tonal variations, while the vertical axis represents the total number of pixels in that particular tone.
The left side of the horizontal axis represents the dark areas, the middle represents mid-tone values and the right hand side represents light areas. The vertical axis represents the size of the area (total number of pixels) that is captured in each one of these zones.
Thus, the histogram for a very dark image will have most of its data points on the left side and center of the graph.
Conversely, the histogram for a very bright image with few dark areas and/or shadows will have most of its data points on the right side and center of the graph.
Image manipulation and histograms
Image editors typically create a histogram of the image being edited. The histogram plots the number of pixels in the image (vertical axis) with a particular brightness or tonal value (horizontal axis). Algorithms in the digital editor allow the user to visually adjust the brightness value of each pixel and to dynamically display the results as adjustments are made. Histogram equalization is a popular example of these algorithms. Improvements in picture brightness and contrast can thus be obtained.
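Histogram equalization can be sketched in a few lines of Python (an illustrative stdlib-only implementation for an 8-bit image given as a flat list of pixel values, not code from any particular editor): build the histogram, accumulate it into a cumulative distribution, then remap each pixel through the rescaled CDF.

```python
def equalize(pixels):
    # Histogram: count of pixels at each of the 256 tonal values.
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1

    # Cumulative distribution function over the tonal values.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)

    # Remap each pixel so tonal values are spread over the full [0, 255] range.
    n = len(pixels)
    return [round(255 * cdf[p] / n) for p in pixels]

print(equalize([0, 0, 128, 255]))  # [128, 128, 191, 255]
```

The dark pixels are pushed toward mid-gray because half the image's mass sits at value 0, which is the brightness/contrast improvement the text describes.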
In the field of computer vision, image histograms can be useful tools f |
https://en.wikipedia.org/wiki/Image%20noise | Image noise is random variation of brightness or color information in images, and is usually an aspect of electronic noise. It can be produced by the image sensor and circuitry of a scanner or digital camera. Image noise can also originate in film grain and in the unavoidable shot noise of an ideal photon detector. Image noise is an undesirable by-product of image capture that obscures the desired information. Typically the term “image noise” is used to refer to noise in 2D images, not 3D images.
The original meaning of "noise" was "unwanted signal"; unwanted electrical fluctuations in signals received by AM radios caused audible acoustic noise ("static"). By analogy, unwanted electrical fluctuations are also called "noise".
Image noise can range from almost imperceptible specks on a digital photograph taken in good light, to optical and radioastronomical images that are almost entirely noise, from which a small amount of information can be derived by sophisticated processing. Such a noise level would be unacceptable in a photograph since it would be impossible even to determine the subject.
Types
Gaussian noise
Principal sources of Gaussian noise in digital images arise during acquisition. The sensor has inherent noise due to the level of illumination and its own temperature, and the electronic circuits connected to the sensor inject their own share of electronic circuit noise.
A typical model of image noise is Gaussian, additive, independent at each pixel, and independent of the signal intensity, caused primarily by Johnson–Nyquist noise (thermal noise), including that which comes from the reset noise of capacitors ("kTC noise"). Amplifier noise is a major part of the "read noise" of an image sensor, that is, of the constant noise level in dark areas of the image. In color cameras where more amplification is used in the blue color channel than in the green or red channel, there can be more noise in the blue channel. At higher exposures, however, image senso |
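The additive, signal-independent Gaussian model can be sketched directly (a stdlib-only illustration on a flat mid-gray "signal"; the values 100 and sigma = 10 are arbitrary choices, not from the source):

```python
import random

random.seed(1)
sigma = 10.0
clean = [100.0] * 5000                          # flat mid-gray signal
# Each pixel gets an independent N(0, sigma^2) perturbation,
# regardless of the pixel's own value (signal-independent noise).
noisy = [p + random.gauss(0.0, sigma) for p in clean]

mean = sum(noisy) / len(noisy)
var = sum((v - mean) ** 2 for v in noisy) / len(noisy)
print(mean, var ** 0.5)  # mean stays near 100, std near sigma
```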
https://en.wikipedia.org/wiki/Otonality%20and%20utonality | Otonality and utonality are terms introduced by Harry Partch to describe chords whose pitch classes are the harmonics or subharmonics of a given fixed tone (identity), respectively. For example: 1/1, 2/1, 3/1, ... or 1/1, 1/2, 1/3, ....
Definition
An otonality is a collection of pitches which can be expressed in ratios, expressing their relationship to the fixed tone, that have equal denominators and consecutive numerators. For example, 1/1, 5/4, and 3/2 (just major chord) form an otonality because they can be written as 4/4, 5/4, 6/4. This in turn can be written as an extended ratio 4:5:6. Every otonality is therefore composed of members of a harmonic series. Similarly, the ratios of a utonality share the same numerator and have consecutive denominators. 7/4, 7/5, 7/6, and 7/7 (1/1) form a utonality, sometimes written as 1/4:5:6:7, or as 7/7:7/6:7/5:7/4. Every utonality is therefore composed of members of a subharmonic series. This term is used extensively by Harry Partch in Genesis of a Music.
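The rewriting of just-intonation ratios as an extended ratio can be sketched as follows (the helper `extended_ratio` is a hypothetical name for illustration; it puts the ratios over their least common denominator, so the numerators form the extended ratio of the otonality):

```python
from fractions import Fraction
from math import gcd, lcm  # multi-argument forms require Python 3.9+

def extended_ratio(ratios):
    # Put every ratio over the least common denominator; the resulting
    # numerators, reduced by their gcd, form the extended ratio.
    fracs = [Fraction(n, d) for n, d in ratios]
    common = lcm(*(f.denominator for f in fracs))
    nums = [f.numerator * (common // f.denominator) for f in fracs]
    g = gcd(*nums)
    return [n // g for n in nums]

# The just major chord 1/1, 5/4, 3/2 from the text:
print(extended_ratio([(1, 1), (5, 4), (3, 2)]))  # [4, 5, 6]
```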
An otonality corresponds to an arithmetic series of frequencies, or lengths of a vibrating string. Brass instruments naturally produce otonalities, and indeed otonalities are inherent in the harmonics of a single fundamental tone. Tuvan Khoomei singers produce otonalities with their vocal tracts.
Utonality is the opposite, corresponding to a subharmonic series of frequencies, or an arithmetic series of wavelengths (the inverse of frequency). The arithmetical proportion "may be considered as a demonstration of utonality ('minor tonality')."
If otonality and utonality are defined broadly, every just intonation chord is both an otonality and a utonality. For example, the minor triad in root position is made up of the 10th, 12th and 15th harmonics (10:12:15), and so meets the definition of otonal. A better, narrower definition requires that the harmonic (or subharmonic) series members be adjacent. Thus 4:5:6 is an otonality, but 10:12:15 is not. (Alternate voicings of 4:5:6, such as 5:6:8, 3:4:5:6, etc. would presumably also be otonalities.) Under this d
https://en.wikipedia.org/wiki/Heusler%20compound | Heusler compounds are magnetic intermetallics with face-centered cubic crystal structure and a composition of XYZ (half-Heuslers) or X2YZ (full-Heuslers), where X and Y are transition metals and Z is in the p-block. The term derives from the name of German mining engineer and chemist Friedrich Heusler, who studied such a compound (Cu2MnAl) in 1903. Many of these compounds exhibit properties relevant to spintronics, such as magnetoresistance, variations of the Hall effect, ferro-, antiferro-, and ferrimagnetism, half- and semimetallicity, semiconductivity with spin filter ability, superconductivity, and topological band structure, and are actively studied as thermoelectric materials. Their magnetism results from a double-exchange mechanism between neighboring magnetic ions. Manganese, which sits at the body centers of the cubic structure, was the magnetic ion in the first Heusler compound discovered. (See the Bethe–Slater curve for details of why this happens.)
Styles of writing chemical formula
Depending on the field of literature being surveyed, one might encounter the same compound referred to with different chemical formulas. The most common difference is X2YZ versus XY2Z, where the two transition metals X and Y are swapped. The traditional convention X2YZ arises from the interpretation of Heuslers as intermetallics and is used predominantly in literature studying magnetic applications of Heusler compounds. The XY2Z convention, on the other hand, is used mostly in the thermoelectric materials and transparent conducting applications literature, where semiconducting Heuslers (most half-Heuslers are semiconductors) are used. This convention, in which the left-most element on the periodic table comes first, uses the Zintl interpretation of semiconducting compounds, where the chemical formula XY2Z is written in order of increasing electronegativity. In well-known compounds such as Fe2VAl which were historically thought of as metallic
https://en.wikipedia.org/wiki/Domestic%20duck | The domestic duck or domestic mallard (Anas platyrhynchos domesticus) is a subspecies of mallard that has been domesticated by humans and raised for meat, eggs, and down feathers. A few are also kept for show, as pets, or for their ornamental value. Almost all varieties of domesticated ducks, apart from the domestic Muscovy duck (Cairina moschata), are descended from the mallard.
Domestication
Whole-genome sequencing suggests that domestic ducks originate from a single domestication event of mallards during the Neolithic, followed by rapid selection for lineages favouring meat or egg production. They were probably domesticated in Southeast Asia – most probably in Southern China – by the rice paddy-farming ancestors of modern Southeast Asians, and spread outwards from that region. There are few archaeological records, so the date of domestication is unknown; the earliest written records are in Han Chinese writings from central China dating to about 500 BC. Duck farming for both meat and eggs is a widespread and ancient industry in Southeast Asia.
Wild ducks were hunted extensively in Ancient Egypt and other parts of the world in ancient times, but were not domesticated. Ducks are documented in Ancient Rome from the second century BC, but descriptions – notably those of Columella – suggest that ducks in Roman agriculture were tamed, not domesticated; there was no duck breeding in Roman times, so eggs from wild ducks were needed to start duck farms.
Most breeds and varieties of domestic duck derive from the mallard, Anas platyrhynchos; a few derive from Cairina moschata, the Muscovy duck, or are mulards, hybrids of these with A. platyrhynchos stock. Domestication has greatly altered their characteristics. Domestic ducks are mostly promiscuous, whereas wild mallards are monogamous. Domestic ducks have lost the mallard's territorial behaviour, and are less aggressive than mallards. Despite these differences, domestic ducks frequently mate with wild mallards, producing fully fe
https://en.wikipedia.org/wiki/Evershed%20effect | The Evershed effect, named after the British astronomer John Evershed, is the radial flow of gas across the photospheric surface of the penumbra of sunspots from the inner border with the umbra towards the outer edge.
The speed varies from around 1 km/s at the border between the umbra and the penumbra to a maximum of around double this in the middle of the penumbra and falls off to zero at the outer edge of the penumbra.
Evershed first detected this phenomenon in January 1909, whilst working at the Kodaikanal Solar Observatory in India, when he found that the spectral lines of sunspots showed doppler shift.
Afterwards, measurements of the spectral emission lines in the ultraviolet wavelengths showed a systematic redshift. The Evershed effect is common to every spectral line formed at a temperature below 10^5 K; this fact would imply a constant downflow from the transition region towards the chromosphere. The observed velocity is about 5 km/s. This is impossible, however: if it were true, the corona would disappear in a short time instead of being suspended over the Sun at temperatures of millions of degrees over distances much larger than a solar radius.
Many theories have been proposed to explain this redshift in line profiles of the transition region, but the problem is still unsolved, since a coherent theory should take into account all the physical observations: UV line profiles are redshifted on average, but they show back and forth velocity oscillations at the same time.
In summary, the proposed mechanisms are:
siphon flows in coronal loops driven by a pressure difference,
different cross-sections of the coronal loops footpoints,
the return of spicules,
multiple flows,
nanoflares, and
thermal instabilities during chromospheric condensation.
The effect was commemorated in a postage stamp issued in India on 2 December 2008.
See also
Spectroscopy
Plasma physics |
https://en.wikipedia.org/wiki/Security%20association | A security association (SA) is the establishment of shared security attributes between two network entities to support secure communication. An SA may include attributes such as: cryptographic algorithm and mode; traffic encryption key; and parameters for the network data to be passed over the connection. The framework for establishing security associations is provided by the Internet Security Association and Key Management Protocol (ISAKMP). Protocols such as Internet Key Exchange (IKE) and Kerberized Internet Negotiation of Keys (KINK) provide authenticated keying material.
An SA is a simplex (one-way channel) and logical connection which endorses and provides a secure data connection between the network devices. The fundamental requirement for an SA arises when the two entities communicate over more than one channel. Take, for example, a mobile subscriber and a base station. The subscriber may subscribe to more than one service. Therefore, each service may have different service primitives, such as a data encryption algorithm, public key, or initialization vector. To make things easier, all of this security information is grouped logically, and the logical group itself is a security association. Each SA has its own ID called a SAID. Both the base station and the mobile subscriber share the SAID, and from it they derive all the security parameters.
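The idea of an SA as a logical group of security parameters looked up by its SAID can be sketched as a small data structure (a hypothetical illustration; the field names and values are invented for this sketch and are not taken from any RFC or standard):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityAssociation:
    said: int              # Security Association ID shared by both peers
    cipher: str            # agreed cryptographic algorithm and mode
    encryption_key: bytes  # traffic encryption key
    iv: bytes              # initialization vector / nonce seed

# One SA database entry per service; both peers index it by SAID.
sa_db = {}

def register(sa: SecurityAssociation) -> None:
    sa_db[sa.said] = sa

register(SecurityAssociation(7, "AES-256-GCM", b"\x00" * 32, b"\x01" * 12))
print(sa_db[7].cipher)  # AES-256-GCM
```

Because the SA is simplex, a bidirectional connection would hold two such entries, one per direction.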
In other words, an SA is a logical group of security parameters that enable the sharing of information to another entity.
See also
IPsec
Virtual private network (VPN)
Notes |
https://en.wikipedia.org/wiki/User%20story | In software development and product management, a user story is an informal, natural language description of features of a software system. They are written from the perspective of an end user or user of a system, and may be recorded on index cards, Post-it notes, or digitally in project management software. Depending on the project, user stories may be written by different stakeholders, such as clients, users, managers, or the development team.
User stories are a type of boundary object. They facilitate sensemaking and communication; and may help software teams document their understanding of the system and its context.
History
1997: Kent Beck introduces user stories at the Chrysler C3 project in Detroit.
1998: Alistair Cockburn visited the C3 project and coined the phrase "A user story is a promise for a conversation."
1999: Kent Beck published the first edition of the book Extreme Programming Explained, introducing Extreme Programming (XP), and the usage of user stories in the planning game.
2001: Ron Jeffries proposed a "Three Cs" formula for user story creation:
The Card (or often a post-it note) is a tangible physical token to hold the concepts;
The Conversation is between the stakeholders (customers, users, developers, testers, etc.). It is verbal and often supplemented by documentation;
The Confirmation ensures that the objectives of the conversation have been reached.
2001: The XP team at Connextra in London devised the user story format and shared examples with others.
2004: Mike Cohn generalized the principles of user stories beyond the usage of cards in his book User Stories Applied: For Agile Software Development that is now considered the standard reference for the topic according to Martin Fowler. Cohn names Rachel Davies as the inventor of user stories. While Davies was a team member at Connextra she credits the team as a whole with the invention.
2014: After a first article in 2005 and a blog post in 2008, in 2014 Jeff Patton published the user-st |
https://en.wikipedia.org/wiki/After%20Man | After Man: A Zoology of the Future is a 1981 speculative evolution book written by Scottish geologist and palaeontologist Dougal Dixon and illustrated by several illustrators including Diz Wallis, John Butler, Brian McIntyre, Philip Hood, Roy Woodard and Gary Marsh. The book features a foreword by Desmond Morris. After Man explores a hypothetical future set 50 million years after the extinction of humanity, a time period Dixon dubs the "Posthomic", which is inhabited by animals that have evolved from survivors of a mass extinction succeeding our own time.
After Man used a fictional setting and hypothetical animals to explain the natural processes behind evolution and natural selection. In total, over a hundred different invented animal species are featured in the book, described as part of fleshed-out fictional future ecosystems. Reviews for After Man were highly positive and its success spawned two follow-up speculative evolution books which used new fictional settings and creatures to explain other natural processes: The New Dinosaurs (1988) and Man After Man (1990).
After Man and Dixon's following books inspired the speculative evolution artistic movement which focuses on speculative scenarios in the evolution of life, often possible future scenarios (such as After Man) or alternative paths in the past (such as The New Dinosaurs). Dixon is often considered the founder of the modern speculative evolution movement.
Summary
After Man explores an imagined future Earth, set 50 million years from the present, hypothesizing what new animals might evolve in the timespan between its setting and the present day. Ecology and evolutionary theory are applied to create believable creatures, all of which have their own binomial names and text describing their behaviour and interactions with other contemporary animals.
In this new period of the Cenozoic, which Dixon calls the "Posthomic", Europe and Africa have fused, closing the Mediterranean Sea; whereas Asia and North Ameri |
https://en.wikipedia.org/wiki/Resin%20acid | Resin acid refers to mixtures of several related carboxylic acids, primarily abietic acid, found in tree resins. Nearly all resin acids have the same basic skeleton: three fused rings having the empirical formula C19H29COOH. Resin acids are tacky, yellowish gums that are water-insoluble. They are used to produce soaps for diverse applications, but their use is being displaced increasingly by synthetic acids such as 2-ethylhexanoic acid or petroleum-derived naphthenic acids.
Botanical analysis
Resin acids are protectants and wood preservatives that are produced by parenchymatous epithelial cells that surround the resin ducts in trees from temperate coniferous forests. The resin acids are formed when two-carbon and three-carbon molecules couple with isoprene building units to form monoterpenes (volatile), sesquiterpenes (volatile), and diterpenes (nonvolatile) structures.
Pines contain numerous vertical and radial resin ducts scattered throughout the entire wood. The accumulation of resin in the heartwood and resin ducts causes a maximum concentration in the base of the older trees. Resin in the sapwood, however, is less at the base of the tree and increases with height.
In 2005, as an infestation of the Mountain pine beetle (Dendroctonus ponderosae) and blue stain fungus devastated the Lodgepole Pine forests of northern interior British Columbia, Canada, resin acid levels three to four times greater than normal were detected in infected trees, prior to death. These increased levels show that a tree uses the resins as a defense. Resins are both toxic to the beetle and the fungus and also can entomb the beetle in diterpene remains from secretions. Increasing resin production has been proposed as a way to slow the spread of the beetle in the "Red Zone" or the wildlife urban interface.
Chemical components
Abietic-type acids
These represent the majority, 85-90%, of typical tall oil.
abietic acid
abieta-7,13-dien-18-oic acid
13-isopropylpodocarpa-7,13-dien-15-oic acid
N |
https://en.wikipedia.org/wiki/Front-end%20processor | A front-end processor (FEP), or a communications processor, is a small-sized computer which interfaces a number of networks, such as SNA, or a number of peripheral devices, such as terminals, disk units, printers and tape units, to the host computer. Data is transferred between the host computer and the front-end processor using a high-speed parallel interface. The front-end processor communicates with peripheral devices using slower serial interfaces, usually also through communication networks. The purpose is to off-load from the host computer the work of managing the peripheral devices, transmitting and receiving messages, packet assembly and disassembly, error detection, and error correction. Two examples are the IBM 3705 Communications Controller and the Burroughs Data Communications Processor.
Sometimes FEP is synonymous with a communications controller, although the latter is not necessarily as flexible. Early communications controllers such as the IBM 270x series were hard wired, but later units were programmable devices.
Front-end processor is also used in a more general sense in asymmetric multi-processor systems. The FEP is a processing device (usually a computer) which is closer to the input source than is the main processor. It performs some task such as telemetry control, data collection, reduction of raw sensor data, analysis of keyboard input, etc.
The term front-end process also refers to the software interface between the user (client) and the application processes (server) in the client/server architecture. The user enters input (data) into the front-end process, where it is collected and processed in such a way that it conforms to what the receiving application (back end) on the server can accept and process. As an example, the user enters a URL into a GUI (front-end process) such as Microsoft Internet Explorer. The GUI then processes the URL in such a way that the user is able to reach or access the intended web pages on the web server (application serve
https://en.wikipedia.org/wiki/Quasiperiodic%20function | In mathematics, a quasiperiodic function is a function that has a certain similarity to a periodic function. A function f is quasiperiodic with quasiperiod ω if f(z + ω) = g(z, f(z)), where g is a "simpler" function than f. What it means to be "simpler" is vague.
A simple case (sometimes called arithmetic quasiperiodic) is if the function obeys the equation f(z + ω) = f(z) + C, where C is a constant.
Another case (sometimes called geometric quasiperiodic) is if the function obeys the equation f(z + ω) = C f(z).
An example of this is the Jacobi theta function, where
θ(z + τ; τ) = e^(−2πiz − πiτ) θ(z; τ)
shows that for fixed τ it has quasiperiod τ; it also is periodic with period one. Another example is provided by the Weierstrass sigma function, which is quasiperiodic in two independent quasiperiods, the periods of the corresponding Weierstrass ℘ function.
Functions with an additive functional equation
f(z + ω) = f(z) + az + b
are also called quasiperiodic. An example of this is the Weierstrass zeta function, where
ζ(z + ω) = ζ(z) + η
for a z-independent η when ω is a period of the corresponding Weierstrass ℘ function.
In the special case where f(z + ω) = f(z) we say f is periodic with period ω in the period lattice Λ.
Quasiperiodic signals
Quasiperiodic signals in the sense of audio processing are not quasiperiodic functions in the sense defined here; instead they have the nature of almost periodic functions and that article should be consulted. The more vague and general notion of quasiperiodicity has even less to do with quasiperiodic functions in the mathematical sense.
A useful example is the function f(z) = sin(Az) + sin(Bz).
If the ratio A/B is rational, this will have a true period, but if A/B is irrational there is no true period, but a succession of increasingly accurate "almost" periods.
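The "almost period" behaviour for an irrational ratio can be seen numerically. The sketch below picks A = 1 and B = √2 (my own choice of constants for the sum-of-sines form) and uses the continued-fraction convergent 99/70 ≈ √2 to build an almost period T = 140π, at which the function nearly but not exactly repeats:

```python
import math

def f(t, A=1.0, B=math.sqrt(2)):
    # Quasiperiodic when A/B is irrational: sin(A t) + sin(B t)
    return math.sin(A * t) + math.sin(B * t)

# 99/70 is a convergent of sqrt(2), so T = 140*pi is an "almost period":
# A*T = 140*pi is an exact multiple of 2*pi, while B*T is only close to 198*pi.
T = 140 * math.pi
samples = [i * 0.1 for i in range(1000)]
max_drift = max(abs(f(t + T) - f(t)) for t in samples)
print(max_drift)  # small but nonzero: an almost period, not a true one
```

A rational ratio A/B would instead give a true period, with drift exactly zero at that period.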
See also
Quasiperiodic motion |
https://en.wikipedia.org/wiki/Hyperbolic%20set | In dynamical systems theory, a subset Λ of a smooth manifold M is said to have a hyperbolic structure with respect to a smooth map f if its tangent bundle may be split into two invariant subbundles, one of which is contracting and the other is expanding under f, with respect to some Riemannian metric on M. An analogous definition applies to the case of flows.
In the special case when the entire manifold M is hyperbolic, the map f is called an Anosov diffeomorphism. The dynamics of f on a hyperbolic set, or hyperbolic dynamics, exhibits features of local structural stability and has been much studied, cf. Axiom A.
Definition
Let M be a compact smooth manifold, f: M → M a diffeomorphism, and Df: TM → TM the differential of f. An f-invariant subset Λ of M is said to be hyperbolic, or to have a hyperbolic structure, if the restriction to Λ of the tangent bundle of M admits a splitting into a Whitney sum of two Df-invariant subbundles, called the stable bundle and the unstable bundle and denoted Es and Eu. With respect to some Riemannian metric on M, the restriction of Df to Es must be a contraction and the restriction of Df to Eu must be an expansion. Thus, there exist constants 0<λ<1 and c>0 such that
(Df)_x(E^s_x) = E^s_{f(x)} and (Df)_x(E^u_x) = E^u_{f(x)} for all x ∈ Λ,
‖Df^n(v)‖ ≤ c λ^n ‖v‖ for all v ∈ E^s and n > 0,
and
‖Df^{−n}(v)‖ ≤ c λ^n ‖v‖ for all v ∈ E^u and n > 0.
If Λ is hyperbolic then there exists a Riemannian metric for which c = 1 — such a metric is called adapted.
Examples
A hyperbolic equilibrium point p is a fixed point, or equilibrium point, of f such that (Df)_p has no eigenvalue with absolute value 1. In this case, Λ = {p}.
More generally, a periodic orbit of f with period n is hyperbolic if and only if Dfn at any point of the orbit has no eigenvalue with absolute value 1, and it is enough to check this condition at a single point of the orbit. |
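A standard concrete instance (not mentioned above) is Arnold's cat map on the torus, whose differential is the constant matrix [[2, 1], [1, 1]]; its eigenvalues straddle 1, so every point is hyperbolic and the map is Anosov. A minimal numerical check of the expansion/contraction rates:

```python
import math

# Arnold's cat map (x, y) -> (2x + y, x + y) mod 1 has constant differential
# [[2, 1], [1, 1]]; its eigenvalues are (3 +/- sqrt(5)) / 2.
lam_u = (3 + math.sqrt(5)) / 2   # unstable direction: factor > 1 (expansion)
lam_s = (3 - math.sqrt(5)) / 2   # stable direction: factor < 1 (contraction)

print(lam_u, lam_s)      # ~2.618 and ~0.382
print(lam_u * lam_s)     # determinant = 1, so the map is area-preserving
```

The two eigendirections span the unstable and stable subbundles E^u and E^s of the definition above.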
https://en.wikipedia.org/wiki/Creamery | A creamery is a place where milk and cream are processed and where butter and cheese are produced. Cream is separated from whole milk; pasteurization is done to the skimmed milk and cream separately. Whole milk for sale has had some cream returned to the skimmed milk.
The creamery is the source of butter from a dairy. Cream is an emulsion of fat-in-water; the process of churning causes a phase inversion to butter which is an emulsion of water-in-fat. Excess liquid as buttermilk is drained off in the process. Modern creameries are automatically controlled industries, but the traditional creamery needed skilled workers. Traditional tools included the butter churn and Scotch hands.
The term "creamery" is sometimes used in retail trade as a place to buy milk products such as yogurt and ice cream. Under the banner of a creamery one might find a store also stocking pies and cakes or even a coffeehouse with confectionery.
See also
List of cheesemakers
List of dairy products |
https://en.wikipedia.org/wiki/Van%20Stockum%20dust | In general relativity, the van Stockum dust is an exact solution of the Einstein field equations in which the gravitational field is generated by dust rotating about an axis of cylindrical symmetry. Since the density of the dust is increasing with distance from this axis, the solution is rather artificial, but as one of the simplest known solutions in general relativity, it stands as a pedagogically important example.
This solution is named after Willem Jacob van Stockum, who rediscovered it in 1937 independently of a much earlier discovery by Cornelius Lanczos in 1924. It is currently recommended that the solution be referred to as the Lanczos–van Stockum dust.
Derivation
One way of obtaining this solution is to look for a cylindrically symmetric perfect fluid solution in which the fluid exhibits rigid rotation. That is, we demand that the world lines of the fluid particles form a timelike congruence having nonzero vorticity but vanishing expansion and shear. (In fact, since dust particles feel no forces, this will turn out to be a timelike geodesic congruence, but we won't need to assume this in advance.)
A simple Ansatz corresponding to this demand is expressed by the following frame field, which contains two undetermined functions of the radial coordinate r:
To prevent misunderstanding, we should emphasize that taking the dual coframe
gives the metric tensor in terms of the same two undetermined functions:
Multiplying out gives
We compute the Einstein tensor with respect to this frame, in terms of the two undetermined functions,
and demand that the result have the form appropriate for a perfect fluid solution with the timelike unit vector everywhere tangent to the world line of a fluid particle. That is, we demand that
This gives the conditions
Solving for and then for gives the desired frame defining the van Stockum solution:
Note that this frame is only defined on .
Properties
Computing the Einstein tensor with respect to our frame shows that in fact the pressu |
https://en.wikipedia.org/wiki/Kakutani%20fixed-point%20theorem | In mathematical analysis, the Kakutani fixed-point theorem is a fixed-point theorem for set-valued functions. It provides sufficient conditions for a set-valued function defined on a convex, compact subset of a Euclidean space to have a fixed point, i.e. a point which is mapped to a set containing it. The Kakutani fixed point theorem is a generalization of the Brouwer fixed point theorem. The Brouwer fixed point theorem is a fundamental result in topology which proves the existence of fixed points for continuous functions defined on compact, convex subsets of Euclidean spaces. Kakutani's theorem extends this to set-valued functions.
The theorem was developed by Shizuo Kakutani in 1941, and was used by John Nash in his description of Nash equilibria. It has subsequently found widespread application in game theory and economics.
Statement
Kakutani's theorem states:
Let S be a non-empty, compact and convex subset of some Euclidean space Rn.
Let φ: S → 2S be a set-valued function on S with the following properties:
φ has a closed graph;
φ(x) is non-empty and convex for all x ∈ S.
Then φ has a fixed point.
Definitions
Set-valued function A set-valued function φ from the set X to the set Y is some rule that associates one or more points in Y with each point in X. Formally it can be seen just as an ordinary function from X to the power set of Y, written as φ: X → 2Y, such that φ(x) is non-empty for every x ∈ X. Some prefer the term correspondence, which is used to refer to a function that for each input may return many outputs. Thus, each element of the domain corresponds to a subset of one or more elements of the range.
Closed graph A set-valued function φ: X → 2Y is said to have a closed graph if the set {(x,y) | y ∈ φ(x)} is a closed subset of X × Y in the product topology, i.e. for all sequences x_n and y_n such that x_n → x, y_n → y, and y_n ∈ φ(x_n) for all n, we have y ∈ φ(x).
Fixed point Let φ: X → 2X be a set-valued function. Then a ∈ X is a fixed point of φ if a ∈ φ(a).
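As a toy illustration (the interval S and the map φ are my own, not from the article), take S = [0, 1] and the correspondence φ(x) = [x/2, (x+1)/2]. It is non-empty and convex-valued with a closed graph, so Kakutani's theorem applies; the sketch below scans a grid for points with x ∈ φ(x):

```python
def phi(x):
    # Set-valued map on S = [0, 1]: returns the closed interval [x/2, (x+1)/2].
    return (x / 2, (x + 1) / 2)

# For x in [0, 1]: x/2 <= x always, and x <= (x+1)/2 iff x <= 1,
# so every grid point is a fixed point (Kakutani only guarantees one).
grid = [i / 100 for i in range(101)]
fixed = [x for x in grid if phi(x)[0] <= x <= phi(x)[1]]
print(len(fixed))  # 101
```

The theorem asserts existence, not uniqueness; here the fixed-point set happens to be all of S.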
Examples
A function with infinit |
https://en.wikipedia.org/wiki/Norton%20Personal%20Firewall | Norton Personal Firewall, developed by Symantec, is a discontinued personal firewall with ad blocking, program control and privacy protection capabilities.
The Norton Personal Firewall program control module can allow or deny individual applications access to the Internet. It makes this decision automatically, using a blacklist and a whitelist to determine whether a program should be allowed Internet access.
The advertisement-blocking feature of this software rewrites the HTML that one's browser uses to display Web pages. It checks code related to advertisements against a blacklist and prevents matching content from being displayed.
The Privacy Control component blocks browser cookies and active content, and prevents the transmission of sensitive data through standard POP3 e-mail clients, Microsoft Office e-mail attachments and Instant Messaging services such as MSN Messenger, Windows Messenger and AOL Instant Messenger without the user's consent.
See also
Internet Security
Comparison of antivirus software
Comparison of firewalls |
https://en.wikipedia.org/wiki/Albert%20Shiryaev | Albert Nikolayevich Shiryaev (born October 12, 1934) is a Soviet and Russian mathematician. He is known for his work in probability theory, statistics and financial mathematics.
Career
He graduated from Moscow State University in 1957 and has worked at the Steklov Mathematical Institute ever since. He earned his candidate degree in 1961 (Andrey Kolmogorov was his advisor) and a doctoral degree in 1967 for his work "On statistical sequential analysis". Since 1971 he has also been a professor in the department of mechanics and mathematics of Moscow State University. Shiryaev holds a 20% permanent professorial position at the School of Mathematics, University of Manchester. He has supervised more than 50 doctoral dissertations and is the author or coauthor of more than 250 publications.
In 1970 he was an Invited Speaker with talk Sur les equations stochastiques aux dérivées partielles at the International Congress of Mathematicians (ICM) in Nice. In 1978 he was a Plenary Speaker with talk Absolute Continuity and Singularity of Probability Measures in Functional Spaces at the ICM in Helsinki.
He was elected in 1985 an honorary member of the Royal Statistical Society and in 1990 a member of Academia Europaea. From 1989 to 1991 he was the president of the Bernoulli Society for Mathematical Statistics and Probability. From 1994 to 1998 he was the president of the Russian Actuarial Society. In 1996 he was awarded a Humboldt Prize. He was elected a corresponding member of the Russian Academy of Sciences in 1997 and a full member in 2011. From 1998 to 1999 he was a founding member and the first president of the Bachelier Finance Society. He was made in 2000 Doctor Rerum Naturalium Honoris Causa of Albert Ludwigs University of Freiburg and in 2002 Professor Honoris Causa of the University of Amsterdam. In 2017 he was awarded the Chebyschev gold medal of the Russian Academy of Sciences.
Contributions
His scientific work concerns different aspects of probability |
https://en.wikipedia.org/wiki/Power%20MOSFET | A power MOSFET is a specific type of metal–oxide–semiconductor field-effect transistor (MOSFET) designed to handle significant power levels. Compared to the other power semiconductor devices, such as an insulated-gate bipolar transistor (IGBT) or a thyristor, its main advantages are high switching speed and good efficiency at low voltages. It shares with the IGBT an isolated gate that makes it easy to drive. Power MOSFETs can be subject to low gain, sometimes to a degree that the gate voltage needs to be higher than the voltage under control.
The design of power MOSFETs was made possible by the evolution of MOSFET and CMOS technology, used for manufacturing integrated circuits since the 1960s. The power MOSFET shares its operating principle with its low-power counterpart, the lateral MOSFET. The power MOSFET, which is commonly used in power electronics, was adapted from the standard MOSFET and commercially introduced in the 1970s.
The power MOSFET is the most common power semiconductor device in the world, due to its low gate drive power, fast switching speed, easy advanced paralleling capability, wide bandwidth, ruggedness, easy drive, simple biasing, ease of application, and ease of repair. In particular, it is the most widely used low-voltage (less than 200 V) switch. It can be found in a wide range of applications, such as most power supplies, DC-to-DC converters, low-voltage motor controllers, and many other applications.
History
The MOSFET was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. It was a breakthrough in power electronics. Generations of MOSFETs enabled power designers to achieve performance and density levels not possible with bipolar transistors.
In 1969, Hitachi introduced the first vertical power MOSFET, which would later be known as the VMOS (V-groove MOSFET). The same year, the DMOS (double-diffused MOSFET) with self-aligned gate was first reported by Y. Tarui, Y. Hayashi and Toshihiro Sekigawa of the Electrotechnical Laboratory |
https://en.wikipedia.org/wiki/Shortlex%20order | In mathematics, and particularly in the theory of formal languages, shortlex is a total ordering for finite sequences of objects that can themselves be totally ordered. In the shortlex ordering, sequences are primarily sorted by cardinality (length) with the shortest sequences first, and sequences of the same length are sorted into lexicographical order. Shortlex ordering is also called radix, length-lexicographic, military, or genealogical ordering.
In the context of strings on a totally ordered alphabet, the shortlex order is identical to the lexicographical order, except that shorter strings precede longer strings. For example, the shortlex order of the set of strings on the English alphabet (in its usual order) is [ε, a, b, c, ..., z, aa, ab, ac, ..., zz, aaa, aab, aac, ..., zzz, ...], where ε denotes the empty string.
The strings in this ordering over a fixed finite alphabet can be placed into one-to-one order-preserving correspondence with the natural numbers, giving the bijective numeration system for representing numbers. The shortlex ordering is also important in the theory of automatic groups.
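In Python (my choice of language for illustration), the shortlex order falls out of comparing (length, string) pairs, since tuple comparison is itself lexicographic:

```python
words = ["b", "aa", "", "ab", "a", "ba", "z"]

# Sort by length first, then lexicographically within each length.
shortlex = sorted(words, key=lambda s: (len(s), s))
print(shortlex)  # ['', 'a', 'b', 'z', 'aa', 'ab', 'ba']
```

The empty string (ε in the article's notation) comes first, as it has length 0.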
See also
Graded lexicographic order |
https://en.wikipedia.org/wiki/WUPV | WUPV (channel 65) is a television station licensed to Ashland, Virginia, United States, serving the Richmond area as an affiliate of The CW. It is owned by Gray Television alongside Richmond-licensed NBC affiliate WWBT (channel 12) and WRID-LD (channel 48). The stations share studios on Midlothian Turnpike (US 60) in Richmond, while WUPV's transmitter is located northeast of Richmond in King William County, west of Enfield. WRID repeats its main channel from the WWBT transmitter behind the studios in the inner ring of Richmond on its third subchannel, mapped to WUPV-DT6.
Established as a religious TV station in 1990, WZXK joined The WB in 1995 (as WAWB) and switched to UPN in 1997, adopting its present call sign. The result of the switch was to leave The WB without a full-time outlet in Richmond. The station joined The CW on its 2006 launch and today serves as one of two ATSC 3.0 (Next Gen TV) transmitters in central Virginia. The station airs one local newscast from the WWBT newsroom.
History
Christel Inc., run by James E. Campana, had broadcast leased-access religious programming on cable systems in Henrico County since 1978. In 1986, he was granted a construction permit for channel 65 in Ashland after settling with a competing applicant and began a years-long construction process that would involve more than $2 million in funds. The call letters WZXK were chosen at the suggestion of an attorney who knew they would be available, after 10 earlier suggestions had been turned down by the FCC. Meanwhile, a tower was built in King William County in 1989 after Hanover County refused to grant a zoning variance to build the mast. Construction was almost halted on the rest of the project due to a sudden cash crunch; the transmitter was left sitting in a warehouse in Kentucky for a time because Christel needed to pay another $118,000.
Channel 65 finally appeared on March 9, 1990. It aired primarily religious programming with some secular shows. However, it also dealt with financia |
https://en.wikipedia.org/wiki/Lateral%20thoracic%20artery | In human anatomy, the lateral thoracic artery (or external mammary artery) is a blood vessel that supplies oxygenated blood to the lateral structures of the thorax and breast.
It originates from the axillary artery and follows the lower border of the Pectoralis minor muscle to the side of the chest, supplies the Serratus anterior muscle and the Pectoralis major muscle, and sends branches across the axilla to the axillary lymph nodes and Subscapularis muscle.
It anastomoses with the internal thoracic artery, subscapular, and intercostal arteries, and with the pectoral branch of the thoracoacromial artery.
In the female it supplies an external mammary branch which turns round the free edge of the Pectoralis major and supplies the breasts. |
https://en.wikipedia.org/wiki/Sine%20and%20cosine%20transforms | In mathematics, the Fourier sine and cosine transforms are forms of the Fourier transform that do not use complex numbers or require negative frequency. They are the forms originally used by Joseph Fourier and are still preferred in some applications, such as signal processing or statistics.
Definition
The Fourier sine transform of f(t), sometimes denoted by either f̂^s or F_s(f), is
F_s(ξ) = 2 ∫_{−∞}^{∞} f(t) sin(2πξt) dt.
If t means time, then ξ is frequency in cycles per unit time, but in the abstract, they can be any pair of variables which are dual to each other.
This transform is necessarily an odd function of frequency, i.e. for all ξ: F_s(−ξ) = −F_s(ξ).
The numerical factors in the Fourier transforms are defined uniquely only by their product. Here, in order that the Fourier inversion formula not have any numerical factor, the factor of 2 appears because the sine function has L² norm of 1/√2.
The Fourier cosine transform of f(t), sometimes denoted by either f̂^c or F_c(f), is
F_c(ξ) = 2 ∫_{−∞}^{∞} f(t) cos(2πξt) dt.
It is necessarily an even function of frequency, i.e. for all ξ: F_c(−ξ) = F_c(ξ).
Since positive frequencies can fully express the transform, the non-trivial concept of negative frequency needed in the regular Fourier transform can be avoided.
Simplification to avoid negative t
Some authors only define the cosine transform for even functions of t, in which case its sine transform is zero. Since cosine is also even, a simpler formula can be used,
F_c(ξ) = 4 ∫_0^∞ f(t) cos(2πξt) dt.
Similarly, if f is an odd function, then the cosine transform is zero and the sine transform can be simplified to
F_s(ξ) = 4 ∫_0^∞ f(t) sin(2πξt) dt.
Other conventions
Just like the Fourier transform takes the form of different equations with different constant factors, other authors also define the cosine transform as
F_c(ω) = √(2/π) ∫_0^∞ f(t) cos(ωt) dt
and sine as
F_s(ω) = √(2/π) ∫_0^∞ f(t) sin(ωt) dt
or, the cosine transform as and the sine transform as using as the transformation variable. And while is typically used to represent the time domain, is often used alternatively, particularly when representing frequencies in a spatial domain.
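As a numerical sanity check, the sketch below evaluates a cosine transform under the √(2/π) half-line convention, i.e. F_c(ω) = √(2/π) ∫_0^∞ f(t) cos(ωt) dt, using the textbook pair f(t) = e^(−t) ↦ √(2/π)/(1 + ω²) (a standard result, stated here as an assumption since it does not appear in the text):

```python
import math

def cosine_transform(f, omega, upper=40.0, n=40000):
    # Trapezoidal approximation of sqrt(2/pi) * integral_0^upper f(t) cos(omega t) dt.
    # The [0, upper] cutoff is fine when f decays quickly (here exp(-t)).
    h = upper / n
    total = 0.5 * (f(0.0) + f(upper) * math.cos(omega * upper))
    for i in range(1, n):
        t = i * h
        total += f(t) * math.cos(omega * t)
    return math.sqrt(2 / math.pi) * h * total

# Closed form for f(t) = exp(-t): sqrt(2/pi) / (1 + omega**2).
approx = cosine_transform(lambda t: math.exp(-t), 1.0)
exact = math.sqrt(2 / math.pi) / 2
print(approx, exact)
```

The two values agree to several decimal places, limited only by the step size and the finite upper limit.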
Fourier inversion
The original function can be recovered from its transform under the usual hypoth |
https://en.wikipedia.org/wiki/Stromule | A stromule is a microscopic structure found in plant cells. Stromules (stroma-filled tubules) are highly dynamic structures extending from the surface of all plastid types, including proplastids, chloroplasts, etioplasts, leucoplasts, amyloplasts, and chromoplasts. Protrusions from and interconnections between plastids were observed in 1888 (Gottlieb Haberlandt) and 1908 (Gustav Senn) and have been described sporadically in the literature since then. Stromules were recently rediscovered in 1997 and have since been reported to exist in a number of angiosperm species including Arabidopsis thaliana, wheat, rice and tomato, but their role is not yet fully understood.
This highly dynamic nature is caused by the close relationship between plastid stromules and actin microfilaments, which are anchored to the stromule extensions, either in a longitudinal fashion to pull on the stromule and guide the plastid in a given direction or in a hinge fashion allowing the plastid to rest anchored in a given place. The actin microfilaments also define the stromule shape through their interactions. This dynamic, random walk-like movement is probably driven by myosin XI proteins, as recent work has found.
Other organelles are also associated with stromules, such as mitochondria, which have been observed associating with and sliding along stromule tubes. Plastids and mitochondria need to be spatially close, as some metabolic pathways like photorespiration require the association of both organelles to recycle glycolate and detoxify the ammonium produced during photorespiration.
Stromules are usually 0.35–0.85 µm in diameter and of variable length, from short beak-like projections to linear or branched structures up to 220 µm long. They are enclosed by the inner and outer plastid envelope membranes and enable the transfer of molecules as large as RuBisCO (~560 kDa) between interconnected plastids. Stromules occur in all cell types, but stromule morphology and the proportion of plastids with stromules |
https://en.wikipedia.org/wiki/Classification%20of%20discontinuities | Continuous functions are of utmost importance in mathematics, functions and applications. However, not all functions are continuous. If a function is not continuous at a point in its domain, one says that it has a discontinuity there. The set of all points of discontinuity of a function may be a discrete set, a dense set, or even the entire domain of the function.
The oscillation of a function at a point quantifies these discontinuities as follows:
in a removable discontinuity, the distance that the value of the function is off by is the oscillation;
in a jump discontinuity, the size of the jump is the oscillation (assuming that the value at the point lies between these limits of the two sides);
in an essential discontinuity, oscillation measures the failure of a limit to exist; the limit is constant.
A special case is if the function diverges to infinity or minus infinity, in which case the oscillation is not defined (in the extended real numbers, this is a removable discontinuity).
Classification
For each of the following, consider a real-valued function f of a real variable x, defined in a neighborhood of the point x₀ at which f is discontinuous.
Removable discontinuity
Consider the piecewise function
f(x) = x² for x < 1,  f(1) = 0,  f(x) = 2 − x for x > 1.
The point x₀ = 1 is a removable discontinuity. For this kind of discontinuity:
The one-sided limit from the negative direction, L⁻ = lim_{x→x₀⁻} f(x), and the one-sided limit from the positive direction, L⁺ = lim_{x→x₀⁺} f(x), both exist, are finite, and are equal to L = L⁻ = L⁺ = 1. In other words, since the two one-sided limits exist and are equal, the limit L of f(x) as x approaches x₀ exists and is equal to this same value. If the actual value of f(x₀) is not equal to L, then x₀ is called a removable discontinuity. This discontinuity can be removed to make f continuous at x₀, or more precisely, the function
g(x) = f(x) for x ≠ x₀ and g(x₀) = L
is continuous at x = x₀.
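One-sided limits can be probed numerically. The sketch below uses a hypothetical piecewise function with a removable discontinuity at x₀ = 1 (x² on the left, value 0 at the point, 2 − x on the right) and approaches the point from each side:

```python
def f(x):
    # Hypothetical example: both one-sided limits at x = 1 equal 1, but f(1) = 0.
    if x < 1:
        return x * x
    if x == 1:
        return 0.0
    return 2.0 - x

# Approach x0 = 1 from the left and from the right.
left = f(1 - 1e-9)    # close to 1
right = f(1 + 1e-9)   # close to 1
print(left, right, f(1))  # one-sided limits agree; the value differs -> removable
```

Redefining the value at the point to the common limit (here, 1) removes the discontinuity.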
The term removable discontinuity is sometimes broadened to include a removable singularity, in which the limits in both directions exist and are equal, while the function is undefined at the point This use is an abuse of terminology b |
https://en.wikipedia.org/wiki/Molecular%20Structure%20of%20Nucleic%20Acids%3A%20A%20Structure%20for%20Deoxyribose%20Nucleic%20Acid | "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid" was the first article published to describe the discovery of the double helix structure of DNA, using X-ray diffraction and the mathematics of a helix transform. It was published by Francis Crick and James D. Watson in the scientific journal Nature on pages 737–738 of its 171st volume (dated 25 April 1953).
This article is often termed a "pearl" of science because it is brief and contains the answer to a fundamental mystery about living organisms. This mystery was the question of how it is possible that genetic instructions are held inside organisms and how they are passed from generation to generation. The article presents a simple and elegant solution, which surprised many biologists at the time who believed that DNA transmission was going to be more difficult to deduce and understand. The discovery had a major impact on biology, particularly in the field of genetics, enabling later researchers to understand the genetic code.
Evolution of molecular biology
The application of physics and chemistry to biological problems led to the development of molecular biology, which is particularly concerned with the flow and consequences of biological information from DNA to proteins. The discovery of the DNA double helix made clear that genes are functionally defined parts of DNA molecules, and that there must be a way for cells to translate the information in DNA to specific amino acids, which make proteins.
Linus Pauling was a chemist who was very influential in developing an understanding of the structure of biological molecules. In 1951, Pauling published the structure of the alpha helix, a fundamentally important structural component of proteins. In early 1953, Pauling published a triple helix model of DNA, which subsequently turned out to be incorrect. Both Crick, and particularly Watson, thought that they were racing against Pauling to discover the structure of DNA.
Max Delbrück was a |
https://en.wikipedia.org/wiki/Auto-Tune | Auto-Tune (or autotune) is an audio processor introduced in 1997 by the American company Antares Audio Technologies. It uses a proprietary device to measure and alter pitch in vocal and instrumental music recording and performances.
Auto-Tune was originally intended to disguise or correct off-key inaccuracies, allowing vocal tracks to be perfectly tuned. The 1998 Cher song "Believe" popularized the technique of using Auto-Tune to distort vocals. In 2018, the music critic Simon Reynolds observed that Auto-Tune had "revolutionized popular music", calling its use for effects "the fad that just wouldn't fade. Its use is now more entrenched than ever."
In its role distorting vocals, Auto-Tune operates on different principles from the vocoder or talk box and produces different results.
Function
Auto-Tune is available as a plug-in for digital audio workstations used in a studio setting and as a stand-alone, rack-mounted unit for live performance processing. The processor slightly shifts pitches to the nearest true, correct semitone (to the exact pitch of the nearest note in traditional equal temperament). Auto-Tune can also be used as an effect to distort the human voice when pitch is raised or lowered significantly, such that the voice is heard to leap from note to note stepwise, like a synthesizer.
Auto-Tune has become standard equipment in professional recording studios. Instruments such as the Peavey AT-200 guitar seamlessly use Auto-Tune technology for real-time pitch correction.
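The core retuning step described above, snapping a measured frequency to the nearest equal-temperament semitone, can be sketched as follows (A4 = 440 Hz reference; this is a generic illustration of pitch quantization, not Antares' actual algorithm):

```python
import math

def snap_to_semitone(freq_hz, a4=440.0):
    # Number of equal-temperament semitones away from A4 (may be fractional).
    semitones = 12 * math.log2(freq_hz / a4)
    # Round to the nearest whole semitone and convert back to Hz.
    return a4 * 2 ** (round(semitones) / 12)

print(snap_to_semitone(450.0))  # slightly sharp A4 -> 440.0
print(snap_to_semitone(466.0))  # near A#4 -> ~466.16
```

Applying the correction abruptly, rather than smoothly over time, produces the stepwise "Auto-Tune effect" described above.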
Development
Auto-Tune was developed by Andy Hildebrand, a Ph.D. research engineer who specialized in stochastic estimation theory and digital signal processing. Hildebrand conceived the vocal pitch correction technology on the suggestion of a colleague's wife, who had joked that she could benefit from a device to help her sing in tune.
Over several months in early 1996, he implemented the algorithm on a custom Macintosh computer and presented the result at the NAMM Show later that yea |
https://en.wikipedia.org/wiki/Buccan | Buccan or Boucan is the native South American and Caribbean name for a wooden framework or hurdle on which meat was slow-roasted or smoked over a fire. Spaniards called the same process "barbacoa", later "barbecue".
The term "buccaneer" for pirates or privateers is said to be derived from buccan. In the Caribbean, seafarers used the wooden frames for smoking meat, preferably pork. From this derived the French word boucane and hence the name boucanier for French hunters who used such frames to smoke meat from feral cattle and pigs on Hispaniola (now Haiti and the Dominican Republic). English colonists anglicised the word boucanier to buccaneer.
https://en.wikipedia.org/wiki/Fly%20%28pentop%20computer%29 | The Fly Pentop Computer and FLY Fusion Pentop Computer are personal electronics products manufactured by LeapFrog Enterprises Inc. They are called "pentop" computers by their manufacturer because they consist of a pen with a computer inside.
In 2009, LeapFrog discontinued both the manufacture and support of the device and all accessory products, such as notepads and ink refills which are required for continued use. The inventor of the FLY Pentop, Jim Marggraff, left LeapFrog and founded Livescribe in January 2007.
Description
The Fly, released in 2005, is a customizable pen that is intended to assist children with schoolwork. There are several bundled and add-on applications available, including a notepad, calculator, language and writing assistant, and educational games; many of these require the use of a small cartridge that can be inserted into a port built into the rear of the pen. The Fly only works on its own proprietary digital paper, which is lightly printed with a pattern of dots to provide positioning information to the pen via a tiny infrared camera. The ink tip itself can be retracted into the body of the pen when no physical notes are desired.
The pen uses digital paper and pattern decoding technology developed by Anoto to track where the user writes on the page. It uses Vision Objects' MyScript character recognition technology to read what has been written, and can read aloud nearly any word in U.S. English. Notably, the Fly uses only capital letters. To start the main menu of the base pen, the user writes "M" and circles it. After recognizing the circled "M", the pen switches to "menu mode". There are several different circle-letter codes for activating different applications; these codes are officially known as "Fly-Cons."
Once an application is activated, the user interacts with it by drawing on the paper with the pen. In many of the applications, users are told what to draw, rather than having the freedom to d |
https://en.wikipedia.org/wiki/Cray%20T3E | The Cray T3E was Cray Research's second-generation massively parallel supercomputer architecture, launched in late November 1995. The first T3E was installed at the Pittsburgh Supercomputing Center in 1996. Like the previous Cray T3D, it was a fully distributed memory machine using a 3D torus topology interconnection network. The T3E initially used the DEC Alpha 21164 (EV5) microprocessor and was designed to scale from 8 to 2,176 Processing Elements (PEs). Each PE had between 64 MB and 2 GB of DRAM and a 6-way interconnect router with a payload bandwidth of 480 MB/s in each direction. Unlike many other MPP systems, including the T3D, the T3E was fully self-hosted and ran the UNICOS/mk distributed operating system with a GigaRing I/O subsystem integrated into the torus for network, disk and tape I/O.
The original T3E (retrospectively known as the T3E-600) had a 300 MHz processor clock. Later variants, using the faster 21164A (EV56) processor, comprised the T3E-900 (450 MHz), T3E-1200 (600 MHz), T3E-1200E (with improved memory and interconnect performance) and T3E-1350 (675 MHz). The T3E was available in both air-cooled (AC) and liquid-cooled (LC) configurations. AC systems were available with 16 to 128 user PEs, LC systems with 64 to 2048 user PEs.
A 1480-processor T3E-1200 was the first supercomputer to achieve a performance of more than 1 teraflops running a computational science application, in 1998.
After Cray Research was acquired by Silicon Graphics in February 1996, development of new Alpha-based systems was stopped. While providing the -900, -1200 and -1200E upgrades to the T3E, in the long term Silicon Graphics intended Cray T3E users to migrate to the Origin 3000, a MIPS-based distributed shared memory computer, introduced in 2000. However, the T3E continued in production after SGI sold the Cray business the same year.
See also
History of supercomputing |
https://en.wikipedia.org/wiki/Traffic%20announcement%20%28radio%20data%20systems%29 | Traffic announcement (TA) refers to the broadcasting of a specific type of traffic report on the Radio Data System. It is generally used by motorists, to assist with route planning, and for the avoidance of traffic congestion.
The RDS-enabled receiver can be set to pay special attention to this TA flag and, for example, stop the tape, pause the CD, or retune to receive a traffic bulletin. The related TP (Traffic Programme) flag is used to allow the user to find only those stations that regularly broadcast traffic bulletins, whereas the TA flag is used to stop the tape or raise the volume during a traffic bulletin.
On some modern units, traffic reports can also be recorded and stored within the unit, both while the unit is switched on, but also for a pre-set period after the unit is turned off. It may also have in-built timers to seek and record the same for two separate daily occasions, i.e., one setting for before the morning commute, and the second for the evening return journey. These messages may then subsequently be recalled on demand by the driver. This specific function is known as TIM, or traffic information message.
See also
Traffic Message Channel – an automated service operational in Europe.
External links
BBC factsheet – Radio Data System (RDS) in HTML format
RDSList.com
Broadcast engineering
Radio technology |
https://en.wikipedia.org/wiki/Golden%20hour%20%28photography%29 | In photography, the golden hour is the period of daytime shortly after sunrise or before sunset, during which daylight is redder and softer than when the sun is higher in the sky.
The golden hour is also sometimes called the "magic hour," especially by cinematographers and photographers. During these times, the brightness of the sky matches the brightness of streetlights, signs, car headlights and lit windows.
The period of time shortly before the magic hour at sunrise, or after it at sunset, is called the "blue hour". This is when the sun is at a significant depth below the horizon, when residual, indirect sunlight takes on a predominantly blue shade, and there are no sharp shadows because the sun either has not risen, or has already set.
Details
When the sun is low above the horizon, sunlight rays must penetrate the atmosphere for a greater distance, reducing the intensity of the direct light, so that more of the illumination comes from indirect light from the sky, reducing the lighting ratio. This is technically a type of lighting diffusion. More blue light is scattered, so if the sun is present, its light appears more reddish. In addition, the sun's low angle above the horizon produces longer shadows.
The term hour is used figuratively; the effect has no clearly defined duration and varies according to season and latitude. The character of the lighting is determined by the sun's altitude, and the time for the sun to move from the horizon to a specified altitude depends on a location's latitude and the time of year. In Los Angeles, California, at an hour after sunrise or an hour before sunset, the sun has an altitude of about 10–12°. For a location closer to the Equator, the same altitude is reached in less than an hour, and for a location farther from the equator, the altitude is reached in more than one hour. For a location sufficiently far from the equator, the sun may not reach an altitude of 10°, and the golden hour lasts for the entire day in certain s |
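The latitude dependence described above can be made concrete with a small sketch. At the equinox (solar declination 0), the sun's altitude obeys sin(alt) = cos(latitude) · cos(hour angle), and the sun moves 15° of hour angle per hour. The Python below (an illustrative approximation, equinox only, ignoring atmospheric refraction; the 10° target altitude is taken from the text) estimates how long the sun takes to climb from the horizon to a given altitude:

```python
import math

def hours_to_altitude(latitude_deg: float, altitude_deg: float) -> float:
    """Hours after sunrise for the sun to reach altitude_deg, at equinox.

    Uses sin(alt) = cos(lat) * cos(H); H is 90 deg at sunrise and the sun
    sweeps 15 deg of hour angle per hour. Raises ValueError (via math.acos
    domain) if the altitude is never reached at that latitude.
    """
    lat = math.radians(latitude_deg)
    h = math.degrees(math.acos(math.sin(math.radians(altitude_deg)) / math.cos(lat)))
    return (90.0 - h) / 15.0

# Golden-hour-style timings lengthen away from the equator:
equator = hours_to_altitude(0, 10)    # about 40 minutes
los_angeles = hours_to_altitude(34, 10)  # roughly 48 minutes
far_north = hours_to_altitude(60, 10)    # over 80 minutes
```

This matches the text: closer to the equator the sun reaches 10° in well under an hour, while at high latitudes the climb takes much longer.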
https://en.wikipedia.org/wiki/Opalescence | Opalescence or play of color is the optical phenomenon displayed by the mineraloid gemstone opal, a hydrated silicon dioxide. However, each of the three notable types of opal (precious, common, and fire) displays different optical effects; therefore, the intended meaning varies depending on context.
The general definition of opalescence is a milky iridescence displayed by an opal, which describes the visual effect of precious opal very well, and opalescence is commonly used in lay terms as a synonym for iridescence.
In contrast, common opal does not display iridescence, but often exhibits a hazy sheen of light from within the stone, the phenomenon that gemologists strictly term opalescence. This milky sheen displayed by opal is a form of adularescence.
Fire opal is a relatively transparent gemstone with a vivid yellow-orange-red color and rarely displays iridescence.
The optical effects seen in various types of opal are a result of refraction (precious and fire) or reflection (common) due to the layering, spacing, and size of the myriad microscopic silicon dioxide spheres and included water (or air) in its physical structure. When the size and spacing of the silica spheres are relatively small, refracted blue-green colors are prevalent; when relatively larger, refracted yellow-orange-red colors are seen; and when larger yet, reflection yields a milky-hazy sheen.
In a physical sense, some cases of opalescence could be related to a type of dichroism seen in highly dispersed systems with little opacity. Due to Rayleigh scattering, a transparent material appears yellowish-red in transmitted white light and blue in the scattered light perpendicular to the transmitted light. The phenomenon illustrated in the bottom photo is an example of the Tyndall effect.
See also
Aventurescence
Labradorescence
Moonstone (gemstone) |
https://en.wikipedia.org/wiki/Chaguar | Chaguar is the common name of several related species of South American plants of the family Bromeliaceae, among them Bromelia serra, Bromelia hieronymi, Deinacanthon urbanianum and Pseudananas sagenarius, which are non-woody forest plants with sword-shaped evergreen leaves, resembling yucca. The different varieties grow in the semi-arid parts of the Gran Chaco ecoregion.
The term chaguar is of Quechua origin; in areas where the Guaraní have had an influence, it is also known as caraguatá.
This plant is mainly employed by the Wichí tribal groups in the provinces of Salta and Formosa, Argentina, to provide a resistant fiber that can be woven to make bags and purses, ponchos, skirts, fishing nets, string, ropes, hammocks, mats, covers and clothing. Along with those made from hardwoods such as quebracho, chaguar crafts make up an important part of the economy of some Wichí groups, though the profits are scarce.
Chaguar is not cultivated. It grows in the semi-shade of a middle layer of the Chaco forest, and reproduces by stolons. Desertification of the Chaco has decreased its presence, but the plant is neither endangered nor of primary ecological importance. Farmers consider it a pest and, since its spines scare cattle, they sometimes burn the chaguarales (plant colonies) during the dry season.
There are no restrictions on the collection and use of chaguar among the Wichí, but the task is time-consuming and labor-intensive. This fact and the environmental ethics of the tribe discourage overexploitation. |
https://en.wikipedia.org/wiki/Netopia | Farallon, later renamed Netopia, was a computer networking company headquartered in Berkeley, and subsequently Emeryville, California, that produced a wide variety of products including bridges, repeaters and switches, and in their later Netopia incarnation, modems, routers, gateways, and Wi-Fi devices. The company also produced the NBBS (Netopia Broadband Server Software) and, as Farallon, Timbuktu remote administration software, as well as the MacRecorder, the first audio capture and manipulation products for the Macintosh (later sold to Macromedia). The company was founded in 1986 and changed its name to Netopia in 1998. Farallon originated several notable technologies, including:
PhoneNet, an implementation of AppleTalk over plain ("Cat-3") telephone wiring or, more commonly, EIA-TIA 568A/B structured cabling systems. Many versions of the product were produced, but the original product was a commercialized version of a kit developed and produced by BMUG, the Berkeley Macintosh Users Group in 1986.
The StarController, a line of LocalTalk and Ethernet bridges and switches released in 1988 which integrated directly with EIA-TIA 568A/B structured cabling systems.
EtherWave, an ADB-powered serial-to-Ethernet bridge in a dongle form-factor which looked something like a manta ray. The two external ports were 10BASE-T and the serial pigtail spoke an overclocked 690 kbps version of LocalTalk. This served both to allow devices without expansion buses (commonly early Macintosh computers and LaserWriter printers) to connect directly to Ethernet networks, and also to allow the daisy-chaining of multiple devices from a single Ethernet switch or bridge port. Later versions used Apple's "AAUI" version of the Attachment Unit Interface to achieve full 10 Mbps host connections.
AirDock, a Serial-to-IrDA gateway which allowed devices with LocalTalk ports to communicate on IrDA infrared wireless networks.
Netopia acquired multiple companies in the home networking space inclu |
https://en.wikipedia.org/wiki/Center%20of%20percussion | The center of percussion is the point on an extended massive object attached to a pivot where a perpendicular impact will produce no reactive shock at the pivot. Translational and rotational motions cancel at the pivot when an impulsive blow is struck at the center of percussion. The center of percussion is often discussed in the context of a bat, racquet, door, sword or other extended object held at one end.
The same point is called the center of oscillation for the object suspended from the pivot as a pendulum, meaning that a simple pendulum with all its mass concentrated at that point will have the same period of oscillation as the compound pendulum.
In sports, the center of percussion of a bat, racquet, or club is related to the so-called "sweet spot", but the latter is also related to vibrational bending of the object.
Explanation
Imagine a rigid beam suspended from a wire by a fixture that can slide freely along the wire at point P, as shown in the Figure. An impulsive blow is applied from the left. If it is below the center of mass (CM) it will cause the beam to rotate counterclockwise around the CM and also cause the CM to move to the right. The center of percussion (CP) is below the CM. If the blow falls above the CP, the rightward translational motion will be bigger than the leftward rotational motion at P, causing the net initial motion of the fixture to be rightward. If the blow falls below the CP the opposite will occur, rotational motion at P will be larger than translational motion and the fixture will move initially leftward. Only if the blow falls exactly on the CP will the two components of motion cancel out to produce zero net initial movement at point P.
When the sliding fixture is replaced with a pivot that cannot move left or right, an impulsive blow anywhere but at the CP results in an initial reactive force at the pivot.
Calculating the center of percussion
For a free, rigid beam, an impulse applied at right angle at a distance |
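As a concrete check of the idea, the center of percussion coincides with the center of oscillation, located a distance q = I_pivot / (M·d) from the pivot, where I_pivot is the moment of inertia about the pivot and d the pivot-to-CM distance. The sketch below (illustrative values; a uniform beam pivoted at one end) recovers the classic 2L/3 result:

```python
# Center of percussion of a uniform beam pivoted at one end (sketch;
# M and L are arbitrary example values, not from the source text).
M, L = 1.0, 0.9            # mass (kg) and length (m)
d = L / 2                  # pivot at one end, so the CM sits at mid-length
I_cm = M * L**2 / 12       # moment of inertia of a uniform beam about its CM
I_pivot = I_cm + M * d**2  # parallel-axis theorem
q = I_pivot / (M * d)      # distance from pivot to the center of percussion

# For a uniform beam this evaluates to 2L/3, i.e. two-thirds of the way
# down from the pivot -- the familiar "sweet spot" of a swung rod.
```

Striking the beam at q produces pure rotation about the pivot with no reactive shock there, consistent with the explanation above.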
https://en.wikipedia.org/wiki/Galerkin%20method | In mathematics, in the area of numerical analysis, Galerkin methods are named after the Soviet mathematician Boris Galerkin. They convert a continuous operator problem, such as a differential equation, commonly in a weak formulation, to a discrete problem by applying linear constraints determined by finite sets of basis functions.
Often when referring to a Galerkin method, one also gives the name along with typical assumptions and approximation methods used:
Ritz–Galerkin method (after Walther Ritz) typically assumes symmetric and positive definite bilinear form in the weak formulation, where the differential equation for a physical system can be formulated via minimization of a quadratic function representing the system energy and the approximate solution is a linear combination of the given set of the basis functions.
Bubnov–Galerkin method (after Ivan Bubnov) does not require the bilinear form to be symmetric and substitutes the energy minimization with orthogonality constraints determined by the same basis functions that are used to approximate the solution. In an operator formulation of the differential equation, Bubnov–Galerkin method can be viewed as applying an orthogonal projection to the operator.
Petrov–Galerkin method (after Georgii I. Petrov) allows using basis functions for orthogonality constraints (called test basis functions) that are different from the basis functions used to approximate the solution. Petrov–Galerkin method can be viewed as an extension of Bubnov–Galerkin method, applying a projection that is not necessarily orthogonal in the operator formulation of the differential equation.
Examples of Galerkin methods are:
the Galerkin method of weighted residuals, the most common method of calculating the global stiffness matrix in the finite element method,
the boundary element method for solving integral equations,
Krylov subspace methods.
Example: Matrix linear system
We first introduce and illustrate the Galerkin method as being |
https://en.wikipedia.org/wiki/Modified%20triadan%20system | The modified triadan system is a scheme of dental nomenclature that can be used widely across different animal species. It is used worldwide among veterinary surgeons.
Each tooth is given a three digit number.
The first number relates to the quadrant of the mouth in which the tooth lies:
1 – upper right
2 – upper left
3 – lower left
4 – lower right
If it is a deciduous tooth that is being referred to, then a different number is used:
5 – upper right
6 – upper left
7 – lower left
8 – lower right
The second and third numbers refer to the location of the tooth from front to back (or rostral to caudal). This starts at 01 and goes up to 11 for many species, depending on the total number of teeth. |
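A small illustrative helper (hypothetical code, assuming the standard quadrant digits: 1–4 for the permanent quadrants and 5–8 for the corresponding deciduous quadrants, both ordered upper right, upper left, lower left, lower right) that decodes a Triadan number into quadrant, dentition, and position:

```python
# Hypothetical decoder for modified Triadan numbers, e.g. 104 or 309.
QUADRANTS = {
    1: ("upper right", "permanent"), 2: ("upper left", "permanent"),
    3: ("lower left", "permanent"),  4: ("lower right", "permanent"),
    5: ("upper right", "deciduous"), 6: ("upper left", "deciduous"),
    7: ("lower left", "deciduous"),  8: ("lower right", "deciduous"),
}

def decode_triadan(tooth: int) -> tuple[str, str, int]:
    """Split a three-digit Triadan number into (quadrant, dentition, position)."""
    quadrant, position = divmod(tooth, 100)   # first digit vs last two digits
    side, dentition = QUADRANTS[quadrant]
    return side, dentition, position

# e.g. 104 -> permanent upper right, 4th tooth from the front (rostral)
```

The position index runs rostral to caudal starting at 01, as the text describes, with the maximum depending on the species.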
https://en.wikipedia.org/wiki/Chemotype | A chemotype (sometimes chemovar) is a chemically distinct entity in a plant or microorganism, with differences in the composition of the secondary metabolites. Minor genetic and epigenetic changes with little or no effect on morphology or anatomy may produce large changes in the chemical phenotype. Chemotypes are often defined by the most abundant chemical produced by that individual and the concept has been useful in work done by chemical ecologists and natural product chemists. With respect to plant biology, the term "chemotype" was first coined by Rolf Santesson and his son Johan in 1968, defined as, "...chemically characterized parts of a population of morphologically indistinguishable individuals."
In microbiology, the term "chemoform" or "chemovar" is preferred in the 1990 edition of the International Code of Nomenclature of Bacteria (ICNB), the former referring to the chemical constitution of an organism and the latter meaning "production or amount of production of a particular chemical." Terms with the suffix -type are discouraged so as to avoid confusion with type specimens. The terms chemotype and chemovar were originally introduced to the ICNB in a proposed revision to one of the nomenclatural rules dealing with infrasubspecific taxonomic subdivisions at the 1962 meeting of the International Microbiological Congress in Montreal. The proposed change argued that nomenclatural regulation of these ranks, such as serotype and morphotype, is necessary to avoid confusion. In proposed recommendation 8a(7), it was asked that "authorization be given for the use of the terms chemovar and chemotype," defining the terms as being "used to designate an infrasubspecific subdivision to include infrasubspecific forms or strains characterized by the production of some chemical not normally produced by the type strain of the species." The change to the Code was approved in August 1962 by the Judicial Commission of the International Committee of Bacteriological Nomenclature |
https://en.wikipedia.org/wiki/Sysctl | sysctl is a software utility of some Unix-like operating systems that reads and modifies the attributes of the system kernel such as its version number, maximum limits, and security settings. It is available both as a system call for compiled programs, and an administrator command for interactive use and scripting. Linux additionally exposes sysctl as a virtual file system.
BSD
In BSD, these parameters are generally objects in a management information base (MIB) that describe tunable limits such as the size of a shared memory segment, the number of threads the operating system will use as an NFS client, or the maximum number of processes on the system; or describe, enable or disable behaviors such as IP forwarding, security restrictions on the superuser (the "securelevel"), or debugging output.
In OpenBSD and DragonFly BSD, sysctl is also used as the transport layer for the hw.sensors framework for hardware monitoring, whereas NetBSD uses the ioctl system call for its sysmon envsys counterpart. sysctl and ioctl are the two system calls that can be used to add extra functionality to the kernel without adding yet another system call; for example, in 2004 with OpenBSD 3.6, when the tcpdrop utility was introduced, sysctl was used as the underlying system call. In FreeBSD, although there is no sensors framework, individual temperature and other sensors are still commonly exported through the sysctl tree via Newbus, as is the case with the aibs(4) driver, which is available in all four BSD systems, including FreeBSD.
In BSD, a system call or system call wrapper is usually provided for use by programs, as well as an administrative program and a configuration file (for setting the tunable parameters when the system boots).
This feature first appeared in 4.4BSD. It has the advantage over hardcoded constants that changes to the parameters can be made dynamically without recompiling the kernel.
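On Linux, where (as noted above) the sysctl tree is exposed as a virtual file system under /proc/sys, reading and writing tunables can be sketched as plain file I/O. The code below is illustrative only, not part of any real sysctl implementation; the key name kernel.ostype is an example, and BSD instead reaches the same data through the sysctl system call with MIB names such as kern.ostype:

```python
from pathlib import Path

def sysctl_read(name: str) -> str:
    """Read a sysctl value via /proc/sys: 'kernel.ostype' -> /proc/sys/kernel/ostype."""
    return Path("/proc/sys", *name.split(".")).read_text().strip()

def sysctl_write(name: str, value: str) -> None:
    """Set a tunable at runtime (requires root); shown for completeness."""
    Path("/proc/sys", *name.split(".")).write_text(value)

print(sysctl_read("kernel.ostype"))   # typically prints "Linux" on Linux
```

The command-line utility does the same translation: `sysctl kernel.ostype` reads, and `sysctl -w name=value` writes, without recompiling or rebooting the kernel.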
Historically, although kernel variables themselves |
https://en.wikipedia.org/wiki/Rare%20species | A rare species is a group of organisms that are very uncommon, scarce, or infrequently encountered. This designation may be applied to either a plant or animal taxon, and is distinct from the term endangered or threatened. Designation of a rare species may be made by an official body, such as a national government, state, or province. The term more commonly appears without reference to specific criteria. The International Union for Conservation of Nature does not normally make such designations, but may use the term in scientific discussion.
Rarity rests on a specific species being represented by a small number of organisms worldwide, usually fewer than 10,000. However, a species having a very narrow endemic range or fragmented habitat also influences the concept. Almost 75% of known species can be classified as "rare".
Rare species are species with small populations. Many will move into the endangered or vulnerable category if the negative factors affecting them continue to operate. Well-known examples of rare species (well known because they are large terrestrial animals) include the Himalayan brown bear, the fennec fox, the wild Asiatic buffalo, and the hornbill.
They are not endangered yet, but are classified as "at risk", although the frontier between these categories is increasingly difficult to draw given the general paucity of data on rare species. This is especially the case in the world's oceans, where many "rare" species not seen for decades may well have gone extinct unnoticed, if they are not already on the verge of extinction like the Mexican vaquita.
A species may be endangered or vulnerable, but not considered rare if it has a large, dispersed population. IUCN uses the term "rare" as a designation for species found in isolated geographical locations. Rare species are generally considered threatened because a small population size is obviously less likely to recover from ecological disasters.
A rare plant's legal status can be observed through the USDA's Plants Databas |
https://en.wikipedia.org/wiki/Opioid%20peptide | Opioid peptides or opiate peptides are peptides that bind to opioid receptors in the brain; opiates and opioids mimic the effect of these peptides. Such peptides may be produced by the body itself, for example endorphins. The effects of these peptides vary, but they all resemble those of opiates. Brain opioid peptide systems are known to play an important role in motivation, emotion, attachment behaviour, the response to stress and pain, control of food intake, and the rewarding effects of alcohol and nicotine.
Opioid-like peptides may also be absorbed from partially digested food (casomorphins, exorphins, and rubiscolins). Opioid peptides from food typically have lengths of 4–8 amino acids. Endogenous opioids are generally much longer.
Opioid peptides are released by post-translational proteolytic cleavage of precursor proteins. The precursors consist of the following components: a signal sequence that precedes a conserved region of about 50 residues; a variable-length region; and the sequence of the neuropeptides themselves. Sequence analysis reveals that the conserved N-terminal region of the precursors contains 6 cysteines, which are probably involved in disulfide bond formation. It is speculated that this region might be important for neuropeptide processing.
Endogenous
The human genome contains several homologous genes that are known to code for endogenous opioid peptides.
The nucleotide sequence of the human gene for proopiomelanocortin (POMC) was characterized in 1980. The POMC gene codes for endogenous opioids such as β-endorphin and γ-endorphin.
The human gene for the enkephalins was isolated and its sequence described in 1982.
The human gene for dynorphins (originally called the "Enkephalin B" gene because of sequence similarity to the enkephalin gene) was isolated and its sequence described in 1983.
The PNOC gene encoding prepronociceptin, which is cleaved into nociceptin and potentially two additional neuropeptides.
Adrenorphin, amidorphin, and |
https://en.wikipedia.org/wiki/Lesser%20omentum | The lesser omentum (small omentum or gastrohepatic omentum) is the double layer of peritoneum that extends from the liver to the lesser curvature of the stomach, and to the first part of the duodenum. The lesser omentum is usually divided into these two connecting parts: the hepatogastric ligament, and the hepatoduodenal ligament.
Structure
The lesser omentum is extremely thin, and is continuous with the two layers of peritoneum which cover respectively the antero-superior and postero-inferior surfaces of the stomach and first part of the duodenum.
When these two layers reach the lesser curvature of the stomach and the upper border of the duodenum, they join and ascend as a double fold to the porta hepatis.
To the left of the porta, the fold is attached to the bottom of the fossa for the ductus venosus, along which it is carried to the diaphragm, where the two layers separate to embrace the end of the esophagus.
At the right border of the lesser omentum, the two layers are continuous, and form a free margin which constitutes the anterior boundary of the omental foramen.
Divisions
Anatomically, the lesser omentum is divided into ligaments, each starting with the prefix "hepato" to indicate that it connects to the liver at one end.
Most sources divide it into two parts:
hepatogastric ligament: the portion connecting to the lesser curvature of the stomach
hepatoduodenal ligament: the portion connecting to the duodenum
In some cases, the following ligaments are considered part of the lesser omentum:
hepatophrenic ligament: the portion connecting to the thoracic diaphragm
hepatoesophageal ligament: the portion connecting to the esophagus
hepatocolic ligament: the portion connecting to the colon
Contents
Between the two layers of the lesser omentum, close to the right free margin, are the hepatic artery proper, the common bile duct, the portal vein, lymphatics, and the hepatic plexus of nerves—all these structures being enclosed in a fibrous capsule (Glis |
https://en.wikipedia.org/wiki/Nitro%20blue%20tetrazolium%20chloride | Nitro blue tetrazolium is a chemical compound composed of two tetrazole moieties. It is used in immunology for sensitive detection of alkaline phosphatase (with BCIP). NBT serves as the oxidant and BCIP is the AP substrate (and also gives a dark blue dye).
Clinical significance
In immunohistochemistry, alkaline phosphatase is often used as a marker, conjugated to an antibody. The colored product of the NBT/BCIP reaction reveals where the antibody is bound; alternatively, the marker can be detected by immunofluorescence.
The NBT/BCIP reaction is also used for colorimetric/spectrophotometric activity assays of oxidoreductases. One application is in activity stains in gel electrophoresis, such as with the mitochondrial electron transport chain complexes.
Nitro blue tetrazolium is used in a diagnostic test, particularly for chronic granulomatous disease and other diseases of phagocyte function. When there is an NADPH oxidase defect, the phagocyte is unable to make reactive oxygen species or radicals required for bacterial killing. As a result, bacteria may thrive within the phagocyte. The higher the blue score, the better the cell is at producing reactive oxygen species. |
https://en.wikipedia.org/wiki/Felid%20hybrids | A felid hybrid is any of a number of hybrids between various species of the cat family, Felidae. This article deals with hybrids between the species of the subfamily Felinae (feline hybrids).
For hybrids between two species of the genus Panthera (lions, tigers, jaguars, and leopards), see Panthera hybrid. There are no known hybrids between Neofelis (the clouded leopard) and other genera. By contrast, many genera of Felinae are interfertile with each other, though few hybridize under natural conditions, and not all combinations are likely to be viable (e.g. between the tiny rusty-spotted cat and the leopard-sized cougar).
All-wild feline hybridization
Caracal × serval hybrids
A caraval is a cross between a male caracal (Caracal caracal) and a female serval (Leptailurus serval), while a male serval's and female caracal's offspring are called servicals. The first servicals were bred accidentally when the two animals were housed together at the Los Angeles Zoo. The offspring were tawny with pale spots. If a female servical is crossed to a male caracal, the result is a car-servical; if she is crossed to a male serval, the result is a ser-servical.
Bobcat × lynx
The blynx or lynxcat is a hybrid of a bobcat (Lynx rufus) and some other species of genus Lynx. The appearance of the offspring depends on which lynx species is used, as the Eurasian lynx (Lynx lynx) is more heavily spotted than the Canada lynx (Lynx canadensis). These hybrids have been bred in captivity and also occur naturally where a lynx or bobcat cannot find a member of its own species for mating.
At least seven such hybrids have been reported in the United States, outside of captivity. In August 2003, two wild-occurring hybrids between wild Canadian lynx and bobcats were confirmed by DNA analysis in the Moosehead region of Maine. Three hybrids were identified in northeastern Minnesota. These were the first confirmed hybrids outside of captivity. Mitochondrial DNA studies showed them all to be t |
https://en.wikipedia.org/wiki/SIGSOFT | The Association for Computing Machinery's Special Interest Group on Software Engineering (SIGSOFT) provides a forum for computing professionals from industry, government and academia to examine principles, practices, and new research results in software engineering.
SIGSOFT focuses on issues related to all aspects of software development and maintenance, with emphasis on requirements, specification and design, software architecture, validation, verification, debugging, software safety, software processes, software management, measurement, user interfaces, configuration management, software engineering environments, and CASE tools.
SIGSOFT (co-)sponsors conferences and symposia including the International Conference on Software Engineering (ICSE), the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE) and other events.
SIGSOFT publishes the informal bimonthly newsletter Software Engineering Notes (SEN), containing papers, reports and other material related to the cost-effective, timely development and maintenance of high-quality software.
SIGSOFT's mission is to improve the ability to engineer software by stimulating interaction among practitioners, researchers, and educators; by fostering the professional development of software engineers; and by representing software engineers to professional, legal, and political entities. |
https://en.wikipedia.org/wiki/Software%20Engineering%20Notes | The ACM SIGSOFT Software Engineering Notes (SEN) is published by the Association for Computing Machinery (ACM) for the Special Interest Group on Software Engineering (SIGSOFT). It was established in 1976, and the first issue appeared in May 1976. It provides a forum for informal articles and other information on software engineering. The headquarters is in New York City. Since 1990, it has been published five times a year. |
https://en.wikipedia.org/wiki/Fermat%20point | In Euclidean geometry, the Fermat point of a triangle, also called the Torricelli point or Fermat–Torricelli point, is a point such that the sum of the three distances from each of the three vertices of the triangle to the point is the smallest possible or, equivalently, the geometric median of the three vertices. It is so named because this problem was first raised by Fermat in a private letter to Evangelista Torricelli, who solved it.
The Fermat point gives a solution to the geometric median and Steiner tree problems for three points.
Construction
The Fermat point of a triangle with largest angle at most 120° is simply its first isogonic center or X(13), which is constructed as follows:
Construct an equilateral triangle on each of two arbitrarily chosen sides of the given triangle.
Draw a line from each new vertex to the opposite vertex of the original triangle.
The two lines intersect at the Fermat point.
An alternative method is the following:
On each of two arbitrarily chosen sides, construct an isosceles triangle, with base the side in question, 30-degree angles at the base, and the third vertex of each isosceles triangle lying outside the original triangle.
For each isosceles triangle draw a circle, in each case with center on the new vertex of the isosceles triangle and with radius equal to each of the two new sides of that isosceles triangle.
The intersection inside the original triangle between the two circles is the Fermat point.
When a triangle has an angle greater than 120°, the Fermat point is sited at the obtuse-angled vertex.
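The first construction can be checked numerically (a sketch with assumed example coordinates, using NumPy; valid only when every angle of the triangle is below 120°): build an equilateral triangle outward on each of two sides, join each new apex to the opposite vertex, and intersect the two lines.

```python
import numpy as np

A, B, C = (np.array(p) for p in [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)])

def rotate(p, center, theta):
    c, s = np.cos(theta), np.sin(theta)
    d = p - center
    return center + np.array([c * d[0] - s * d[1], s * d[0] + c * d[1]])

def outward_apex(p, q, opposite):
    """Apex of the equilateral triangle on side pq, on the far side from `opposite`."""
    normal = np.array([-(q - p)[1], (q - p)[0]])
    for theta in (np.pi / 3, -np.pi / 3):
        a = rotate(q, p, theta)
        if np.dot(a - p, normal) * np.dot(opposite - p, normal) < 0:
            return a
    raise ValueError("degenerate triangle")

def intersect(p1, p2, p3, p4):
    # Solve p1 + t (p2 - p1) = p3 + s (p4 - p3) for t.
    t, _ = np.linalg.solve(np.column_stack([p2 - p1, p3 - p4]), p3 - p1)
    return p1 + t * (p2 - p1)

fermat = intersect(outward_apex(A, B, C), C, outward_apex(B, C, A), A)

# Cross-check against the geometric median (Weiszfeld's iteration), which
# equals the Fermat point when all angles are below 120 degrees.
P = (A + B + C) / 3
for _ in range(1000):
    w = [1.0 / np.linalg.norm(P - V) for V in (A, B, C)]
    P = sum(wi * V for wi, V in zip(w, (A, B, C))) / sum(w)
```

The two independent computations agree, matching the equivalence between the Fermat point and the geometric median stated in the introduction.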
In what follows "Case 1" means the triangle has an angle exceeding 120°. "Case 2" means no angle of the triangle exceeds 120°.
Location of X(13)
Fig. 2 shows the equilateral triangles attached to the sides of the arbitrary triangle .
Here is a proof using properties of concyclic points to show that the three lines in Fig 2 all intersect at the point and cut one another at angles of 60°.
The triangles are congr |
https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy%20deconvolution | The Richardson–Lucy algorithm, also known as Lucy–Richardson deconvolution, is an iterative procedure for recovering an underlying image that has been blurred by a known point spread function. It was named after William Richardson and Leon B. Lucy, who described it independently.
Description
When an image is produced using an optical system and detected using photographic film or a charge-coupled device, for instance, it is inevitably blurred, with an ideal point source not appearing as a point but being spread out into what is known as the point spread function. Extended sources can be decomposed into the sum of many individual point sources, thus the observed image can be represented in terms of a transition matrix p operating on an underlying image:
d_i = \sum_j p_{ij} u_j,
where u_j is the intensity of the underlying image at pixel j and d_i is the detected intensity at pixel i. In general, a matrix whose elements are p_{ij} describes the portion of light from source pixel j that is detected in pixel i. In most good optical systems (or in general, linear systems that are described as shift invariant) the transfer function p can be expressed simply in terms of the spatial offset between the source pixel j and the observation pixel i:
p_{ij} = P(i - j),
where P(i - j) is called a point spread function. In that case the above equation becomes a convolution. This has been written for one spatial dimension, but of course most imaging systems are two dimensional, with the source, detected image, and point spread function all having two indices. So a two dimensional detected image is a convolution of the underlying image with a two dimensional point spread function plus added detection noise.
In order to estimate u_j given the observed d_i and a known P(i - j), the following iterative procedure is employed in which the estimate of u_j (called \hat{u}_j^{(t)}) for iteration number t is updated as follows:
\hat{u}_j^{(t+1)} = \hat{u}_j^{(t)} \sum_i \frac{d_i}{c_i} p_{ij},
where
c_i = \sum_j p_{ij} \hat{u}_j^{(t)}.
It has been shown empirically that if this iteration converges, it converges to the maximum likelihood solution for u_j.
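The multiplicative update above can be sketched in a few lines of NumPy. This is an illustrative one-dimensional version (function and variable names are my own; a production implementation such as scikit-image's `restoration.richardson_lucy` handles boundaries and noise more carefully):

```python
import numpy as np

def richardson_lucy(d, psf, iterations=50, eps=1e-12):
    """1-D Richardson-Lucy deconvolution sketch.
    d: observed (blurred) signal; psf: point spread function."""
    u = np.full_like(d, d.mean())        # flat initial estimate
    psf_flipped = psf[::-1]              # adjoint of the convolution
    for _ in range(iterations):
        c = np.convolve(u, psf, mode="same")        # predicted blurred image c_i
        ratio = d / np.maximum(c, eps)              # d_i / c_i, guarded
        u = u * np.convolve(ratio, psf_flipped, mode="same")  # multiplicative update
    return u
```

Applied to a noiseless blurred pair of spikes, the iteration progressively sharpens the estimate back toward the underlying point sources.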
Writing this more generally f |
https://en.wikipedia.org/wiki/WRBW | WRBW (channel 65), branded on-air as Fox 35 Plus, is a television station in Orlando, Florida, United States, serving as the local outlet for the MyNetworkTV programming service. It is owned and operated by Fox Television Stations alongside Fox outlet WOFL (channel 35). Both stations share studios on Skyline Drive in Lake Mary, while WRBW's transmitter is located in unincorporated Bithlo, Florida.
History
WRBW began operation as an independent station on June 6, 1994, airing vintage sitcoms, cartoons and older movies. It was owned by Rainbow Media, a subsidiary of Cablevision Systems Corporation. It originally operated from studio facilities located on the backlot of Universal Studios Florida. WRBW became the Orlando area affiliate of the United Paramount Network (UPN) (a network created by BHC and Paramount), when the network debuted on January 16, 1995. Since UPN only provided two hours of network programming two nights a week at launch, WRBW essentially still programmed itself as an independent station. During the late 1990s, especially during the wildfire-plagued summer of 1998, there were occasions on which ABC Sports programming was moved to channel 65 in order for the market's ABC affiliate WFTV (channel 9) to provide wall-to-wall news coverage. Some of ABC's Saturday morning children's programs also aired on WRBW, until WRDQ signed on the air in April 2000.
Chris-Craft Industries, part-owner of UPN (through its United Television unit) bought the station in 1998, making WRBW the first owned-and-operated station of a major network in the Orlando market. Fox Television Stations acquired most of Chris-Craft's television stations, including WRBW, in 2001. Fox did not consider moving its affiliation from WOFL to WRBW, however; not only was WOFL one of Fox's strongest affiliates, but WRBW was located on a very high channel number. The buyout of Chris-Craft's stake in UPN by Viacom (which owned 50% of UPN since 1996) and the subsequent purchase of WRBW by Fox effe |
https://en.wikipedia.org/wiki/Capillary%20fringe | The capillary fringe is the subsurface layer in which groundwater seeps up from a water table by capillary action to fill pores. Pores at the base of the capillary fringe are filled with water due to tension saturation. This saturated portion of the capillary fringe is less than the total capillary rise because of the presence of a mix of pore sizes. If the pore size is small and relatively uniform, it is possible that soils can be completely saturated with water for several feet above the water table. Alternatively, when the pore size is large, the saturated portion will extend only a few inches above the water table. Capillary action supports a vadose zone above the saturated base, within which water content decreases with distance above the water table. In soils with a wide range in pore size, the unsaturated zone can be several times thicker than the saturated zone.
Some workers restrict their definition of the capillary fringe only to the tension-saturated base portion and exclude it wholly from the vadose zone.
This is more common among workers addressing solute transport and water flow. Others define the capillary fringe as including both the tension-saturated and unsaturated portions. This is the preferred definition among workers dealing with the remediation of salt affected soils as well as those dealing with the vapor phase of soil processes and bioremediation. It is not uncommon to see the capillary fringe treated as a boundary condition separating the water table from the unsaturated zone, without defining it as a significant part of either.
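The inverse relationship between pore size and the height of the saturated fringe can be illustrated with capillary rise in an idealized cylindrical pore (Jurin's law, h = 2γ·cosθ / (ρ·g·r)). A rough sketch, assuming water at 20 °C and a perfectly wetting matrix — both simplifying assumptions, since real soils have irregular, mixed-size pores:

```python
import math

def capillary_rise(pore_radius_m,
                   surface_tension=0.0728,   # N/m, water at 20 C (assumption)
                   contact_angle_deg=0.0,    # perfectly wetting (assumption)
                   density=1000.0,           # kg/m^3
                   g=9.81):                  # m/s^2
    """Jurin's-law estimate of capillary rise in an idealized
    cylindrical pore: h = 2*gamma*cos(theta) / (rho*g*r)."""
    return (2 * surface_tension * math.cos(math.radians(contact_angle_deg))
            / (density * g * pore_radius_m))
```

A 1 mm pore yields a rise of only about 1.5 cm, while a 10 µm pore yields roughly 1.5 m, consistent with the text's contrast between coarse and fine soils.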
See also |
https://en.wikipedia.org/wiki/Variations%20of%20the%20ichthys%20symbol | The ichthys symbol (or "Jesus fish") is a sign typically used to proclaim an affiliation with or affinity for Christianity. The fish was originally adopted by early Christians as a secret symbol, but the many variations known today first appeared in the 1980s. Some of these are made by Christians in order to promote a specific doctrine or theological perspective, such as evolutionary creation. Other variations are intended for the purpose of satire by non-Christian groups.
Both the traditional ichthys and its variations are seen adorning the bumpers or trunks of automobiles mainly in the United States, often in the form of adhesive badges made of chrome-colored plastic.
Ichthys symbol
An ancient Hellenistic Christian slogan espoused the Greek acronym/acrostic ΙΧΘΥΣ for "Ἰησοῦς Χριστὸς Θεοῦ Υἱὸς Σωτήρ", which translates into English as 'Jesus Christ, Son of God, Saviour'; the Greek word ἰχθύς translating to 'fish' in English. The first appearances of fish symbols as adopted in Christian art and literature date to the 2nd century CE. Some modern fish symbol variations, called the Jesus fish, contain the English word Jesus in the center, or are empty entirely.
Parodies
Jeroen Temperman states that there are "variations on this Ichthys symbol. Some variations add feet to the fish and inscribe "Darwin" in the body. Others make reference to sushi, sharks, the food chain, fast food, the devil or death. How are we to interpret these variations? These adaptations are themselves susceptible to multiple interpretations, ranging from humour to critique, to mocking derision, to blasphemy." Among such parodies are the Darwin fish and the Gefilte fish, often displayed by atheists and Jews in the United States, and the "fish-hungry shark," displayed by Muslims in Egypt.
Artgemeinschaft
The German Artgemeinschaft group, promoting racist neopaganism, uses a registered symbol showing an eagle catching an ichthys fish. This symbol, known as "eagle catching fish" (German: Adler fängt Fisch) was later used b |
https://en.wikipedia.org/wiki/Hacker%20%28video%20game%29 | Hacker is a 1985 video game by Activision. It was designed by Steve Cartwright and released for the Amiga, Amstrad CPC, Apple II, Atari 8-bit family, Atari ST, Commodore 64, Macintosh, DOS, MSX2, and ZX Spectrum.
Plot
Activision executive Jim Levy introduced Hacker to reporters by pretending that something had gone wrong during his attempt to connect online to company headquarters to demonstrate a new game. After several attempts he logged into a mysterious non-Activision computer, before explaining, "That, ladies and gentlemen, is the game". The player assumes the role of a hacker, a person experienced in breaking into secure computer systems, who accidentally acquires access to a non-public system. The game was shipped with no information on how to play, thus building the concept that the player did hack into a system.
The player must attempt to hack into the Magma Ltd. computer system at the beginning of the game by guessing the logon password. The password becomes obvious only after gaining access, through another means of entry, to the later stage of the game, but typing help or h in the initial command line gives a clue. Since initial attempts consist of guessing (and likely failing), access is eventually granted due to a supposed malfunction in the security system. Once the player is in, the player is asked to identify various parts of a robot unit by pointing the cursor at the relevant parts and pressing the joystick button. Most parts have exotic and technical names, such as "asynchronous data compactor" or "phlamson joint"—this again allows more room for error by initially trying to guess which part each name belongs to. Failure to identify each part correctly forces the player to take a retest until a 100 percent identification is made, at which point the player is then allowed to continue.
The player gains control of the robot which can travel around the globe via secret tunnels, deep within the earth. The game's text states that the robot is |
https://en.wikipedia.org/wiki/Bit%20field | A bit field is a data structure that consists of one or more adjacent bits which have been allocated for specific purposes, so that any single bit or group of bits within the structure can be set or inspected. A bit field is most commonly used to represent integral types of known, fixed bit-width, such as single-bit Booleans.
The meaning of the individual bits within the field is determined by the programmer; for example, the first bit in a bit field (located at the field's base address) is sometimes used to determine the state of a particular attribute associated with the bit field.
Within CPUs and other logic devices, collections of bit fields called flags are commonly used to control or to indicate the outcome of particular operations. Processors have a status register that is composed of flags. For example, if the result of an addition cannot be represented in the destination, an arithmetic overflow flag is set. The flags can be used to decide subsequent operations, such as conditional jump instructions. For example, a JE ... (Jump if Equal) instruction in the x86 assembly language will result in a jump if the Z (zero) flag was set by some previous operation.
A bit field is distinguished from a bit array in that the latter is used to store a large set of bits indexed by integers and is often wider than any integral type supported by the language. Bit fields, on the other hand, typically fit within a machine word, and the denotation of bits is independent of their numerical index.
Implementation
Bit fields can be used to reduce memory consumption when a program requires a number of integer variables which always will have low values. For example, in many systems storing an integer value requires two bytes (16 bits) of memory; sometimes the values to be stored actually need only one or two bits. Having a number of these tiny variables share a bit field allows efficient packing of data in memory.
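C and C++ offer native bit-field syntax for this; languages without it emulate the same idea with shifts and masks. A sketch of manual packing and unpacking (the layout here — a 1-bit flag, a 3-bit counter, and a 4-bit type code in one byte — is invented for illustration):

```python
# Field layout (illustrative): bit 0 = flag, bits 1-3 = counter,
# bits 4-7 = type code.
FLAG_SHIFT, FLAG_MASK = 0, 0b1
COUNT_SHIFT, COUNT_MASK = 1, 0b111
TYPE_SHIFT, TYPE_MASK = 4, 0b1111

def pack(flag, count, type_code):
    """Pack the three fields into a single integer."""
    return ((flag & FLAG_MASK) << FLAG_SHIFT
            | (count & COUNT_MASK) << COUNT_SHIFT
            | (type_code & TYPE_MASK) << TYPE_SHIFT)

def unpack(bits):
    """Recover (flag, count, type_code) from the packed integer."""
    return ((bits >> FLAG_SHIFT) & FLAG_MASK,
            (bits >> COUNT_SHIFT) & COUNT_MASK,
            (bits >> TYPE_SHIFT) & TYPE_MASK)
```

Masking on the way in (`& COUNT_MASK`) mirrors the truncation a C bit field would perform when a value exceeds its declared width.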
In C and C++, native implementation-defined bit fields can be cr |
https://en.wikipedia.org/wiki/Reciprocal%20cross | In genetics, a reciprocal cross is a breeding experiment designed to test the role of parental sex on a given inheritance pattern. All parent organisms must be true breeding to properly carry out such an experiment. In one cross, a male expressing the trait of interest will be crossed with a female not expressing the trait. In the other, a female expressing the trait of interest will be crossed with a male not expressing the trait.
In other words, it is a pair of crosses that can be made in either direction, independent of the sex of the parents.
For example, suppose a biologist wished to identify whether a hypothetical allele Z, a variant of some gene A, is on the male or female sex chromosome. They might first cross a Z-trait female with an A-trait male and observe the offspring. Next, they would cross an A-trait female with a Z-trait male and observe the offspring. Via principles of dominant and recessive alleles, they could then (perhaps after cross-breeding the offspring as well) make an inference as to which sex chromosome, if either, carries the allele Z.
Reciprocal cross in practice
Given that the trait of interest is either autosomal or sex-linked and follows by either complete dominance or incomplete dominance, a reciprocal cross following two generations will determine the mode of inheritance of the trait.
White-eye mutation in Drosophila melanogaster
Sex linkage was first reported by Doncaster and Raynor in 1906 who studied the inheritance of a colour mutation in a moth, Abraxas grossulariata. Thomas Hunt Morgan later showed that a new white-eye mutation in Drosophila melanogaster was also sex-linked. He found that a white-eyed male crossed with a red-eyed female produced only red-eyed offspring. However, when they crossed a red-eyed male with a white-eyed female, the male offspring had white eyes while the female offspring had red eyes. The reason was that the white eye allele is sex-linked (more specifically, on the X chromosome) and recessive.
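Morgan's reciprocal crosses can be reproduced by enumerating gametes. A minimal sketch (the allele notation X+ for the dominant red allele and Xw for the recessive white allele is my own shorthand, not a standard genetics library):

```python
def cross(mother, father):
    """Enumerate offspring genotypes for an X-linked gene.
    Parents are tuples of sex-chromosome alleles, e.g. ('X+', 'X+')
    is a red-eyed female and ('Xw', 'Y') a white-eyed male."""
    return [tuple(sorted((egg, sperm)))
            for egg in mother for sperm in father]

def phenotype(genotype):
    """Red (+) is dominant; males ('Y' carriers) are hemizygous."""
    alleles = [a for a in genotype if a != 'Y']
    return 'red' if 'X+' in alleles else 'white'
```

Running both directions of the cross reproduces Morgan's result: red female × white male gives all red offspring, while the reciprocal white female × red male gives white-eyed sons and red-eyed daughters.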
The analysis ca |
https://en.wikipedia.org/wiki/MailSlot | A Mailslot is a one-way interprocess communication mechanism, available on the Microsoft Windows operating system, that allows communication between processes both locally and over a network. The use of Mailslots is generally simpler than named pipes or sockets when a relatively small number of relatively short messages are expected to be transmitted, such as for example infrequent state-change messages, or as part of a peer-discovery protocol. The Mailslot mechanism allows for short message broadcasts ("datagrams") to all listening computers across a given network domain.
Features
Mailslots function as a server-client interface. A server can create a Mailslot, and a client can write to it by name. Only the server can read the mailslot; as such, mailslots represent a one-way communication mechanism. A server-client interface could consist of two processes communicating locally or across a network. Mailslots operate over the RPC protocol and work across all computers in the same network domain. Mailslots offer no confirmation that a message has been received. Mailslots are generally a good choice when one client process must broadcast a message to multiple server processes.
Uses
The most widely known use of the Mailslot IPC mechanism is the Windows Messenger service that is part of the Windows NT-line of products, including Windows XP. The Messenger Service, not to be confused with the MSN Messenger internet chat service, is essentially a Mailslot server that waits for a message to arrive. When a message arrives it is displayed in a popup onscreen. The NET SEND command is therefore a type of Mailslot client, because it writes to specified mailslots on a network.
A number of programs also use Mailslots to communicate. Generally these are amateur chat clients and other such programs. Commercial programs usually prefer pipes or sockets.
Mailslots are implemented as files in a mailslot file system (MSFS). Examples of Mailslots include:
MAILSLOT\Messngr - Microsoft N |
https://en.wikipedia.org/wiki/Controversy%20over%20Cantor%27s%20theory | In mathematical logic, the theory of infinite sets was first developed by Georg Cantor. Although this work has become a thoroughly standard fixture of classical set theory, it has been criticized in several areas by mathematicians and philosophers.
Cantor's theorem implies that there are sets having cardinality greater than the infinite cardinality of the set of natural numbers. Cantor's argument for this theorem is presented with one small change. This argument can be improved by using a definition he gave later. The resulting argument uses only five axioms of set theory.
Cantor's set theory was controversial at the start, but later became largely accepted. Most modern mathematics textbooks implicitly use Cantor's views on mathematical infinity. For example, a line is generally presented as the infinite set of its points, and it is commonly taught that there are more real numbers than rational numbers (see cardinality of the continuum).
Cantor's argument
Cantor's first proof that infinite sets can have different cardinalities was published in 1874. This proof demonstrates that the set of natural numbers and the set of real numbers have different cardinalities. It uses the theorem that a bounded increasing sequence of real numbers has a limit, which can be proved by using Cantor's or Richard Dedekind's construction of the irrational numbers. Because Leopold Kronecker did not accept these constructions, Cantor was motivated to develop a new proof.
In 1891, he published "a much simpler proof ... which does not depend on considering the irrational numbers." His new proof uses his diagonal argument to prove that there exists an infinite set with a larger number of elements (or greater cardinality) than the set of natural numbers N = {1, 2, 3, ...}. This larger set consists of the elements (x1, x2, x3, ...), where each xn is either m or w. Each of these elements corresponds to a subset of N—namely, the element (x1, x2, x3, ...) corresponds to {n ∈ N: xn = w}. So C |
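The diagonal construction can be illustrated on a finite list of m/w sequences: build a new sequence that differs from the n-th sequence at position n, so it cannot equal any sequence in the list. A minimal sketch (the finite setting is only an illustration of the mechanism; Cantor's argument applies it to an infinite list):

```python
def diagonalize(sequences):
    """Given a list of m/w sequences (each at least as long as the
    list), return a sequence that differs from the n-th input
    at position n, hence appears nowhere in the list."""
    return ''.join('w' if seq[n] == 'm' else 'm'
                   for n, seq in enumerate(sequences))
```

However long the list, the diagonal sequence is guaranteed to be absent from it, which is the heart of the proof that the set of all such sequences cannot be put in one-to-one correspondence with N.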