https://en.wikipedia.org/wiki/Arnold%20conjecture
|
The Arnold conjecture, named after mathematician Vladimir Arnold, is a mathematical conjecture in the field of symplectic geometry, a branch of differential geometry.
Statement
Let $(M, \omega)$ be a compact symplectic manifold. For any smooth function $H : M \to \mathbb{R}$, the symplectic form $\omega$ induces a Hamiltonian vector field $X_H$ on $M$, defined by the identity
\[
\omega(X_H, \cdot\,) = dH .
\]
The function $H$ is called a Hamiltonian function.
Suppose there is a 1-parameter family of Hamiltonian functions $H_t \in C^{\infty}(M)$, $t \in [0,1]$, inducing a 1-parameter family of Hamiltonian vector fields $X_{H_t}$ on $M$. The family of vector fields integrates to a 1-parameter family of diffeomorphisms $\varphi_t : M \to M$. Each individual $\varphi_t$ is a Hamiltonian diffeomorphism of $M$.
The Arnold conjecture says that each Hamiltonian diffeomorphism of $M$ possesses at least as many fixed points as a smooth function on $M$ possesses critical points.
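Written compactly (the notation below is introduced here for readability and is not quoted from the article), the conjecture asserts
\[
\#\,\mathrm{Fix}(\varphi) \;\geq\; \min_{f \in C^{\infty}(M)} \#\,\mathrm{Crit}(f)
\]
for every Hamiltonian diffeomorphism $\varphi$ of $M$, where $\mathrm{Fix}(\varphi)$ is the set of fixed points of $\varphi$ and $\mathrm{Crit}(f)$ the set of critical points of $f$.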
Nondegenerate Hamiltonian and weak Arnold conjecture
A Hamiltonian diffeomorphism $\varphi$ is called nondegenerate if its graph intersects the diagonal of $M \times M$ transversely. For nondegenerate Hamiltonian diffeomorphisms, a variant of the Arnold conjecture says that the number of fixed points is at least equal to the minimal number of critical points of a Morse function on $M$, called the Morse number of $M$.
In view of the Morse inequality, the Morse number is also greater than or equal to a homological invariant of $M$, for example, the sum of Betti numbers over a field $\mathbb{F}$:
\[
\sum_{i} \dim H_i(M; \mathbb{F}) .
\]
The weak Arnold conjecture says that for a nondegenerate Hamiltonian diffeomorphism on $M$, the above integer is a lower bound for its number of fixed points.
See also
Arnold–Givental conjecture
|
https://en.wikipedia.org/wiki/List%20of%20physical%20constants
|
The constants listed here are known values of physical constants expressed in SI units; that is, physical quantities that are generally believed to be universal in nature and thus are independent of the unit system in which they are measured. Many of these are redundant, in the sense that they obey a known relationship with other physical constants and can be determined from them.
Table of physical constants
Uncertainties
While the values of the physical constants are independent of the system of units in use, each uncertainty as stated reflects our lack of knowledge of the corresponding value as expressed in SI units, and is strongly dependent on how those units are defined. For example, the atomic mass constant is exactly known when expressed using the dalton (its value is exactly 1 Da), but the kilogram is not exactly known when using these units; the opposite holds when the same quantities are expressed using the kilogram.
Technical constants
Some of these constants are of a technical nature and do not give any true physical property, but they are included for convenience. Such a constant gives the correspondence ratio of a technical dimension with its corresponding underlying physical dimension. These include the Boltzmann constant $k_\mathrm{B}$, which gives the correspondence of the dimension temperature to the dimension of energy per degree of freedom, and the Avogadro constant $N_\mathrm{A}$, which gives the correspondence of the dimension of amount of substance with the dimension of count of entities (the latter formally regarded in the SI as being dimensionless). By implication, any product of powers of such constants is also such a constant, such as the molar gas constant $R = N_\mathrm{A} k_\mathrm{B}$.
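For example (the arithmetic is supplied here for illustration; the two factors are the exact SI defining values):
\[
R = N_\mathrm{A} k_\mathrm{B} = \left(6.02214076 \times 10^{23}\ \mathrm{mol}^{-1}\right)\left(1.380649 \times 10^{-23}\ \mathrm{J\,K^{-1}}\right) = 8.31446261815324\ \mathrm{J\,mol^{-1}\,K^{-1}} .
\]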
See also
List of mathematical constants
Physical constant
List of particles
Notes
|
https://en.wikipedia.org/wiki/Xylan
|
Xylan (CAS number: 9014-63-5) is a type of hemicellulose, a polysaccharide consisting mainly of xylose residues. It is found in plants, in the secondary cell walls of dicots and all cell walls of grasses. Xylan is the third most abundant biopolymer on Earth, after cellulose and chitin.
Composition
Xylans are polysaccharides made up of β-1,4-linked xylose (a pentose sugar) residues with side branches of α-arabinofuranose and/or α-glucuronic acids. On the basis of substituent groups, xylan can be categorized into three classes: (i) glucuronoxylan (GX), (ii) neutral arabinoxylan (AX), and (iii) glucuronoarabinoxylan (GAX). In some cases, xylans contribute to cross-linking of cellulose microfibrils and lignin through ferulic acid residues.
Occurrence
Plant cell structure
Xylans play an important role in the integrity of the plant cell wall and increase cell wall recalcitrance to enzymatic digestion; thus, they help plants to defend against herbivores and pathogens (biotic stress). Xylan also plays a significant role in plant growth and development. Typically, xylan content in hardwoods is 10-35%, whereas it is 10-15% in softwoods. The main xylan component in hardwoods is O-acetyl-4-O-methylglucuronoxylan, whereas arabino-4-O-methylglucuronoxylans are a major component in softwoods. In general, softwood xylans differ from hardwood xylans by the lack of acetyl groups and the presence of arabinose units linked by α-(1,3)-glycosidic bonds to the xylan backbone.
Algae
Some macrophytic green algae contain xylan (specifically homoxylan) especially those within the Codium and Bryopsis genera where it replaces cellulose in the cell wall matrix. Similarly, it replaces the inner fibrillar cell-wall layer of cellulose in some red algae.
Food science
The quality of cereal flours and the hardness of dough are affected by their xylan content, which thus plays a significant role in the bread industry. The main constituent of xylan can be converted into xylitol (a xylose derivative), which
|
https://en.wikipedia.org/wiki/Almquist%20shell
|
Almquist shell (also known as A Shell, ash and sh) is a lightweight Unix shell originally written by Kenneth Almquist in the late 1980s. Initially a clone of the System V.4 variant of the Bourne shell, it replaced the original Bourne shell in the BSD versions of Unix released in the early 1990s.
History
ash was first released via a posting to a Usenet news group, approved by moderator Rich Salz, on 30 May 1989. It was described as "a reimplementation of the System V shell [with] most features of that shell, plus some additions".
Fast, small, and virtually compatible with the POSIX standard's specification of the Unix shell, ash did not provide line editing or command history mechanisms, because Almquist felt that such functionality should be moved into the terminal driver. However, current variants support it.
The following is extracted from the ash package information from Slackware v14:
Myriad forks have been produced from the original ash release. These derivatives of ash are installed as the default shell (/bin/sh) on FreeBSD, NetBSD, DragonFly BSD, MINIX, and in some Linux distributions. MINIX 3.2 used the original ash version, whose test feature differed from POSIX. That version of the shell was replaced in MINIX 3.3. Android used ash until Android 4.0, at which point it switched to mksh.
Dash
In 1997 Herbert Xu ported ash from NetBSD to Debian Linux. In September 2002, with release 0.4.1, this port was renamed to Dash (Debian Almquist shell). Xu's main priorities are POSIX conformance and slim implementation.
Like its predecessor, Dash implements support for neither internationalization and localization nor multi-byte character encoding (both required by POSIX). Line editing and history support based on GNU Readline is optional.
Adoption in Debian and Ubuntu
Because of its slimness, Ubuntu decided to adopt Dash as the default /bin/sh in 2006. The reason for using Dash is faster shell script execution, especially during startup of the opera
|
https://en.wikipedia.org/wiki/MRNA%20display
|
mRNA display is a display technique used for in vitro protein and/or peptide evolution to create molecules that can bind to a desired target. The process results in translated peptides or proteins that are associated with their mRNA progenitor via a puromycin linkage. The complex then binds to an immobilized target in a selection step (affinity chromatography). The mRNA-protein fusions that bind well are then reverse transcribed to cDNA and their sequence amplified via a polymerase chain reaction. The result is a nucleotide sequence that encodes a peptide with high affinity for the molecule of interest.
Puromycin is an analogue of the 3’ end of a tyrosyl-tRNA: part of its structure mimics a molecule of adenosine, and the other part mimics a molecule of tyrosine. Compared to the cleavable ester bond in a tyrosyl-tRNA, puromycin has a non-hydrolysable amide bond. As a result, puromycin interferes with translation and causes premature release of translation products.
All mRNA templates used for mRNA display technology have puromycin at their 3’ end. As translation proceeds, the ribosome moves along the mRNA template, and once it reaches the 3’ end of the template, the fused puromycin enters the ribosome’s A site and is incorporated into the nascent peptide. The mRNA-polypeptide fusion is then released from the ribosome (Figure 1).
To synthesize an mRNA-polypeptide fusion, the fused puromycin is not the only modification to the mRNA template. Oligonucleotides and other spacers need to be recruited along with the puromycin to provide flexibility and proper length for the puromycin to enter the A site. Ideally, the linker between the 3’ end of an mRNA and the puromycin has to be flexible and long enough to allow the puromycin to enter the A site upon translation of the last codon. This enables the efficient production of high-quality, full-length mRNA-polypeptide fusion. Rihe Liu et al. optimized the 3’-puromycin oligonucleotide spacer. They reported that dA25 in
|
https://en.wikipedia.org/wiki/Russula%20xerampelina
|
Russula xerampelina, also commonly known as the shrimp russula, crab brittlegill, or shrimp mushroom, is a basidiomycete mushroom of the brittlegill genus Russula. Two subspecies are recognised. The fruiting bodies appear in coniferous woodlands in autumn in northern Europe and North America. Their caps are coloured various shades of wine-red, purple to green. Mild tasting and edible, it is one of the most highly regarded brittlegills for the table. It is also notable for smelling of shellfish or crab when fresh.
Taxonomy
Russula xerampelina was originally described in 1770 as Agaricus xerampelina from a collection in Bavaria by the German mycologist Jacob Christian Schaeffer, who noted the colour as fusco-purpureus or "purple-brown". It was later given its present binomial name by Swedish mycologist Elias Magnus Fries. Its specific epithet is taken from the Ancient Greek meaning "colour of dried vine leaves", xeros meaning "dry", and ampělinos or "of the vine".
Two subspecies have been recognised, var. xerampelina and var. tenuicarnosa, with thinner flesh in the cap and the stipe. The name R. erythropoda is now considered a synonym, and former subspecies R. (xerampelina subsp.) amoenipes (originally named by Henri Romagnesi) now a separate species. A former variety with a greenish cap, R. xerampelina var. elaeodes, is now classified as R. clavipes.
As the first defined species, it gives its name to the section Xerampelinae, a group of related species within the genus Russula, occasionally all termed R. xerampelina in the past.
Common names include shrimp mushroom, shrimp Russula, crab brittlegill, and shellfish-scented Russula.
Description
Russula xerampelina has a characteristic odour of boiled crustacean. The cap is wide, domed, flat, or with a slightly depressed centre, and sticky. The colour is variable, most commonly purple to wine-red, or greenish, and darker towards the centre of the cap. There are fine grooves up to a centimetre long running perpe
|
https://en.wikipedia.org/wiki/George%20W.%20Beadle%20Award
|
The George W. Beadle Award is a scientific prize given by the Genetics Society of America to individuals who have made “outstanding contributions” to Genetics. The Award was established in 1999 and named in honor of George Wells Beadle, who won the Nobel Prize in Physiology or Medicine in 1958.
Laureates
Source: Genetics Society of America
1999 Michael Ashburner
2000 John Sulston and Robert Waterston
2001 Gerald Fink
2002 André Goffeau and Robert K. Mortimer
2003 Gerald M. Rubin and Allan C. Spradling
2004 Norbert Perrimon, Harvard Medical School
2005 Thomas C. Kaufman, Indiana University
2006 Fred Sherman, University of Rochester
2007 Robert K. Herman, University of Minnesota
2008 Mark Johnston, Washington University School of Medicine
2009 Jay C. Dunlap, Dartmouth Medical School
2010 William M. Gelbart, Harvard University
2011 Joseph R. Ecker, Salk Institute for Biological Studies
2012 Therese Markow, University of California, San Diego
2013 R. Scott Hawley, Stowers Institute for Medical Research
2014 Hugo J. Bellen, Baylor College of Medicine
2015 John Postlethwait, University of Oregon
2016 Susan Celniker, Lawrence Berkeley National Laboratory
2017 Susan A. Gerbi, Brown University
2018 Philip Hieter, University of British Columbia
2019 Michael P. Snyder, Stanford University
2020 Julie Ahringer, Cambridge University
2021 Chao-ting Wu, Harvard Medical School
2022 Shirley M. Tilghman, Princeton University
See also
List of genetics awards
|
https://en.wikipedia.org/wiki/P%C3%B6schl%E2%80%93Teller%20potential
|
In mathematical physics, a Pöschl–Teller potential, named after the physicists Herta Pöschl (credited as G. Pöschl) and Edward Teller, is a special class of potentials for which the one-dimensional Schrödinger equation can be solved in terms of special functions.
Definition
In its symmetric form it is explicitly given by
\[
V(x) = -\frac{\lambda(\lambda+1)}{2}\,\mathrm{sech}^2(x)
\]
and the solutions of the time-independent Schrödinger equation
\[
-\tfrac{1}{2}\,\psi''(x) + V(x)\,\psi(x) = E\,\psi(x)
\]
with this potential can be found by virtue of the substitution $u = \tanh(x)$, which yields
\[
\frac{d}{du}\!\left[(1-u^2)\frac{d\psi}{du}\right] + \lambda(\lambda+1)\,\psi + \frac{2E}{1-u^2}\,\psi = 0 .
\]
Thus the solutions are just the Legendre functions $P_\lambda^\mu(\tanh(x))$ with $\mu = \sqrt{-2E}$, and $\lambda = 1, 2, 3, \ldots$, $\mu = 1, 2, \ldots, \lambda$. Moreover, eigenvalues and scattering data can be explicitly computed. In the special case of integer $\lambda$, the potential is reflectionless and such potentials also arise as the N-soliton solutions of the Korteweg-de Vries equation.
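With the conventions above ($\hbar = m = 1$), the allowed values of $\mu$ translate directly into the bound-state energies; this is a standard consequence of the Legendre-function solutions rather than a formula quoted from the article:
\[
E_\mu = -\frac{\mu^2}{2}, \qquad \mu = 1, 2, \ldots, \lambda .
\]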
The more general form of the potential is given by
Rosen–Morse potential
A related potential is given by introducing an additional term:
See also
Morse potential
Trigonometric Rosen–Morse potential
|
https://en.wikipedia.org/wiki/Simple%20Network%20Paging%20Protocol
|
Simple Network Paging Protocol (SNPP) is a protocol that defines a method by which a pager can receive a message over the Internet. It is supported by most major paging providers, and serves as an alternative to the paging modems used by many telecommunications services. The protocol was most recently described in . It is a fairly simple protocol that may run over TCP port 444 and sends out a page using only a handful of well-documented commands.
Connecting and using SNPP servers
It is relatively easy to connect to an SNPP server: all that is required is a telnet client and the address of the SNPP server. Port 444 is standard for SNPP servers, and it is free to use from the sender's point of view. Maximum message length can be carrier-dependent. Once connected, a user can simply enter the commands to send a message to a pager connected to that network. For example, you could issue the PAGE command followed by the number of the device to which you wish to send the message, then issue the MESS command followed by the text of the message. You can then issue the SEND command to send out the message to the pager, and then QUIT, or send another message to a different device. The protocol also allows you to issue multiple PAGE commands per message, stacking them one after the other, effectively allowing you to send the same message to several devices on that network with one MESS and SEND command pair.
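A minimal sketch of the same exchange done programmatically rather than by hand over telnet (the server host and pager number below are placeholders, not real carrier values):

```python
import socket

HOST, PORT = "snpp.example.net", 444     # hypothetical server; 444 is the standard SNPP port
PAGER_ID = "5551234"                     # placeholder pager/device number

def send_line(sock, line):
    """Send one SNPP command and return the server's status reply."""
    sock.sendall((line + "\r\n").encode("ascii"))
    return sock.recv(1024).decode("ascii", "replace").strip()

with socket.create_connection((HOST, PORT), timeout=10) as s:
    print(s.recv(1024).decode("ascii", "replace").strip())       # greeting banner
    print(send_line(s, f"PAGE {PAGER_ID}"))                      # select the destination device
    print(send_line(s, "MESS Server room temperature alarm"))    # set the message text
    print(send_line(s, "SEND"))                                  # deliver the page
    print(send_line(s, "QUIT"))                                  # end the session
```

Each command is acknowledged by a numeric status line from the server before the next one is sent.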
|
https://en.wikipedia.org/wiki/Multidimensional%20Multirate%20Systems
|
Multidimensional multirate systems find applications in image compression and coding. Several applications, such as conversion between progressive and interlaced video signals, require the use of multidimensional multirate systems. In multidimensional multirate systems, the basic building blocks are the decimation matrix (M), the expansion matrix (L), and multidimensional digital filters. The decimation and expansion matrices have dimension D x D, where D represents the dimension of the signal. To extend the one-dimensional (1-D) multirate results, there are two different approaches, based on the structure of the decimation and expansion matrices. If these matrices are diagonal, separable approaches can be used, which apply separable operations in each dimension. Although separable approaches may offer lower complexity, non-separable methods, with non-diagonal expansion and decimation matrices, provide much better performance. The difficult part of non-separable methods is to obtain results in the MD case by extending the 1-D case. Polyphase decomposition and maximally decimated reconstruction systems have already been carried out.
MD decimation / interpolation filters derived from 1-D filters and maximally decimated filter banks are widely used and constitute important steps in the design of multidimensional multirate systems.
Basic Building Blocks of MD Multirate Systems
Decimation and interpolation are necessary steps to create multidimensional multirate systems. In the one-dimensional case, decimation and interpolation can be seen in the figure.
The theoretical definitions of decimation and interpolation are as follows:
• Decimation (down-sampling):
The M times decimated version of x(n) is defined as y(n) = x(Mn), where M is a nonsingular integer matrix called the decimation matrix.
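As a small illustration of the definition y(n) = x(Mn) (the signal, decimation matrix, and helper function below are invented for this sketch, not taken from the article):

```python
import numpy as np

M = np.array([[1, 1],
              [-1, 1]])               # a 2-D (quincunx) decimation matrix, |det M| = 2

x = np.arange(64).reshape(8, 8)       # toy 2-D signal x(n1, n2)

def decimate(x, M):
    """Return the samples y(k) = x(Mk) that fall inside the support of x."""
    y = {}
    rows, cols = x.shape
    for k1 in range(-rows, rows):
        for k2 in range(-cols, cols):
            n1, n2 = M @ np.array([k1, k2])      # n = M k
            if 0 <= n1 < rows and 0 <= n2 < cols:
                y[(k1, k2)] = x[n1, n2]
    return y

y = decimate(x, M)
print(len(y), "of", x.size, "samples kept")      # roughly x.size / |det M| = 32
```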
In the frequency domain, the relation becomes
\[
Y(\omega) = \frac{1}{J(M)} \sum_{k \in S} X\!\left(M^{-T}(\omega - 2\pi k)\right),
\]
where
$k$ ranges over $S$, the set of all integer vectors of the form $M^{T} x$ with $x \in [0, 1)^{D}$, and
$J(M)$ denotes $|\det(M)|$, which also equals the number of vectors $k$ in that range.
Above expression changes
|
https://en.wikipedia.org/wiki/Web%20Standards%20Project
|
The Web Standards Project (WaSP) was a group of professional web developers dedicated to disseminating and encouraging the use of the web standards recommended by the World Wide Web Consortium, along with other groups and standards bodies.
Founded in 1998, The Web Standards Project campaigned for standards that reduced the cost and complexity of development while increasing the accessibility and long-term viability of any document published on the Web. WaSP worked with browser companies, authoring tool makers, and peers to encourage them to use these standards, since they "are carefully designed to deliver the greatest benefits to the greatest number of web users". The group disbanded in 2013.
Organization
The Web Standards Project began as a grassroots coalition "fighting for standards in our [web] browsers" founded by George Olsen, Glenn Davis, and Jeffrey Zeldman in August 1998. By 2001, the group had achieved its primary goal of persuading Microsoft, Netscape, Opera, and other browser makers to accurately and completely support HTML 4.01/XHTML 1.0, CSS1, and ECMAScript. Had browser makers not been persuaded to do so, the Web would likely have fractured into pockets of incompatible content, with various websites available only to people who possessed the right browser. In addition to streamlining web development and significantly lowering its cost, support for common web standards enabled the development of the semantic web. By marking up content in semantic (X)HTML, front-end developers make a site's content more available to search engines, more accessible to people with disabilities, and more available to the world beyond the desktop (e.g. mobile).
The project relaunched in June 2002 with new members, a redesigned website, new site features, and a redefined mission focused on developer education and standards compliance in authoring tools as well as browsers.
Project leaders were:
George Olsen (1998–1999)
Jeffrey Zeldman (1999–2002)
Steven Champeon (20
|
https://en.wikipedia.org/wiki/Sublime%20Text
|
Sublime Text is a shareware text and source code editor available for Windows, macOS, and Linux. It natively supports many programming languages and markup languages. Users can customize it with themes and expand its functionality with plugins, typically community-built and maintained under free-software licenses. To facilitate plugins, Sublime Text features a Python API. The editor has a minimal interface and contains features for programmers including configurable syntax highlighting, code folding, search and replace with regular-expression support, a terminal output window, and more. It is proprietary software, but a free evaluation version is available.
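A minimal sketch of what a plugin written against the Python API looks like (the command name and inserted text are illustrative, not taken from the article; a real plugin would be saved as a .py file in the user's Packages directory):

```python
import sublime_plugin

class ExampleCommand(sublime_plugin.TextCommand):
    """Insert a greeting at the top of the active view when invoked."""
    def run(self, edit):
        self.view.insert(edit, 0, "Hello, World!")
```

Once loaded, such a command can be invoked from the editor's console with view.run_command("example").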
Features
The following is a list of features of Sublime Text:
"Goto Anything", quick navigation to project files, symbols, or lines
"Command palette" uses adaptive matching for quick keyboard invocation of arbitrary commands
Simultaneous editing: simultaneously make the same interactive changes to multiple selected areas
Python-based plugin API
Project-specific preferences
Extensive customizability via JSON settings files, including project-specific and platform-specific settings
Cross-platform (Windows, macOS, and Linux), with plugins that support cross-platform use
Compatible with many language grammars from TextMate
Version history
Version 1
Sublime Text 1.0 was released on 18 January 2008 as an application for the Windows operating system. It supports tabs and side-by-side view of files.
Version 2
Sublime Text 2.0.2 was released on 8 July 2013. Changes from the first version of the software, as promoted on the official Sublime blog, include Retina display support and "Quick Skip Next" functionality.
Themes
Sublime Text contains 23 visual themes, with the option to download and configure additional themes via third-party plugins.
The minimap feature shows a reduced overview of the entire file in the top-right corner of the screen. The portion of the file visible in the main editor pane is highlight
|
https://en.wikipedia.org/wiki/Iron%E2%80%93sulfur%20world%20hypothesis
|
The iron–sulfur world hypothesis is a set of proposals for the origin of life and the early evolution of life advanced in a series of articles between 1988 and 1992 by Günter Wächtershäuser, a Munich patent lawyer with a degree in chemistry, who had been encouraged and supported by philosopher Karl R. Popper to publish his ideas. The hypothesis proposes that early life may have formed on the surface of iron sulfide minerals, hence the name. It was developed by retrodiction (making a "prediction" about the past) from extant biochemistry (non-extinct, surviving biochemistry) in conjunction with chemical experiments.
Origin of life
Pioneer organism
Wächtershäuser proposes that the earliest form of life, termed the "pioneer organism", originated in a volcanic hydrothermal flow at high pressure and high (100 °C) temperature. It had a composite structure of a mineral base with catalytic transition metal centers (predominantly iron and nickel, but also perhaps cobalt, manganese, tungsten and zinc). The catalytic centers catalyzed autotrophic carbon fixation pathways generating small molecule (non-polymer) organic compounds from inorganic gases (e.g. carbon monoxide, carbon dioxide, hydrogen cyanide and hydrogen sulfide). These organic compounds were retained on or in the mineral base as organic ligands of the transition metal centers with a flow retention time in correspondence with their mineral bonding strength thereby defining an autocatalytic "surface metabolism". The catalytic transition metal centers became autocatalytic by being accelerated by their organic products turned ligands. The carbon fixation metabolism became autocatalytic by forming a metabolic cycle in the form of a primitive sulfur-dependent version of the reductive citric acid cycle. Accelerated catalysts expanded the metabolism and new metabolic products further accelerated the catalysts. The idea is that once such a primitive autocatalytic metabolism was established, its intrinsically synthetic ch
|
https://en.wikipedia.org/wiki/Microparallelism
|
Microparallelism is the use of software to exploit fine-grained parallelism within standard computer processors, by writing code that allows the full use of existing parallel units within superscalar processors.
|
https://en.wikipedia.org/wiki/Zest%20%28ingredient%29
|
Zest is a food ingredient that is prepared by scraping or cutting from the rind of unwaxed citrus fruits such as lemon, orange, citron, and lime. Zest is used to add flavor to foods.
In terms of fruit anatomy, the zest is obtained from the flavedo (exocarp), which is also referred to as zest. The flavedo and white pith (albedo) of a citrus fruit together make up its peel. The amounts of both flavedo and pith are variable among citrus fruits, and may be adjusted by the manner in which they are prepared. Citrus peel may be used fresh, dried, candied, or pickled in salt.
Preparation
For culinary use, a zester, grater, vegetable peeler, paring knife, or even a surform tool is used to scrape or cut zest from the fruit. Alternatively, the peel is sliced, then excess pith (if any) cut away.
The white portion of the peel under the zest (pith, albedo or mesocarp) may be unpleasantly bitter and is generally avoided by limiting the peeling depth. Some citrus fruits have so little white mesocarp that their peel can be used whole.
Variation between fruit
The zest and mesocarp vary with the genetics of the fruit. Fruit with peels that are almost all flavedo are generally mandarins; relatives of pomelos and citrons tend to have thicker mesocarp. The mesocarp of pomelo relatives (grapefruit, orange, etc.) is generally more bitter; the mesocarp of citron relatives (Mexican and Persian limes, alemows, etc.) is milder. The lemon is a hybrid of pomelo, citron, and mandarin. The mesocarp is also edible, and is used to make succade.
Uses
Zest is often used to add flavor to different pastries and sweets, such as pies (e.g., lemon meringue pie), cakes, cookies, biscuits, puddings, confectionery, candy and chocolate. Zest also is added to certain dishes (including ossobuco alla milanese), marmalades, sauces, sorbets and salads.
Zest is a key ingredient in a variety of sweet and sour condiments, including lemon pickle, lime chutney, and marmalade. Lemon liqueurs and liquors such as
|
https://en.wikipedia.org/wiki/Password%20cracking
|
In cryptanalysis and computer security, password cracking is the process of recovering passwords from data that has been stored in or transmitted by a computer system in scrambled form. A common approach (brute-force attack) is to repeatedly try guesses for the password and to check them against an available cryptographic hash of the password. Another type of approach is password spraying, which is often automated and occurs slowly over time in order to remain undetected, using a list of common passwords.
The purpose of password cracking might be to help a user recover a forgotten password (since installing an entirely new password would involve System Administration privileges), to gain unauthorized access to a system, or to act as a preventive measure whereby system administrators check for easily crackable passwords. On a file-by-file basis, password cracking is utilized to gain access to digital evidence to which a judge has allowed access, when a particular file's permissions are restricted.
Time needed for password searches
The time to crack a password is related to bit strength, which is a measure of the password's entropy, and the details of how the password is stored. Most methods of password cracking require the computer to produce many candidate passwords, each of which is checked. One example is brute-force cracking, in which a computer tries every possible key or password until it succeeds. With multiple processors, this time can be reduced by having one processor search from the last possible group of symbols while another searches from the beginning, with the remaining processors assigned designated ranges of possible passwords. More common methods of password cracking, such as dictionary attacks, pattern checking, word list substitution, etc., attempt to reduce the number of trials required and will usually be attempted before brute force. Higher password bit strength exponentially increases the number of candidate passwords that must
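A rough back-of-the-envelope sketch of that exponential growth (the guess rate below is an assumed figure for illustration, not a measured benchmark):

```python
def brute_force_seconds(bits: float, guesses_per_second: float = 1e10) -> float:
    """Worst-case time to exhaust a keyspace of the given bit strength."""
    candidates = 2 ** bits                 # number of candidate passwords
    return candidates / guesses_per_second

for bits in (40, 64, 80):
    print(f"{bits}-bit strength: about {brute_force_seconds(bits):,.0f} s to exhaust")
# Each additional bit doubles the number of candidates that must be checked.
```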
|
https://en.wikipedia.org/wiki/Google%20Chat
|
Google Chat is a communication service developed by Google. Initially designed for teams and business environments, it has since been made available for general consumers. It provides direct messaging, group conversations, and spaces, which allow users to create and assign tasks and share files in a central place in addition to chatting. It can be accessed through its own website and app or through the Gmail website and app.
It was first launched as Hangouts Chat on March 9, 2017, as one of the two apps to constitute the replacement for Google Hangouts, the other being Google Meet. It was renamed to Google Chat on April 9, 2020. It was initially only available for the Google Workspace software suite, but in February 2021 Google began rolling out Google Chat in "early access" to regular consumer accounts until it became fully available in April 2021. Google deprecated the original Hangouts and replaced it with Chat on November 1, 2022.
History
Google Chat was first launched as Hangouts Chat on March 9, 2017, for Google Workspace (called G Suite until October 2020) customers only as a replacement for Google Hangouts. All G Suite packages had identical features except for a lack of Vault data retention in the Basic package.
On April 9, 2020, Google rebranded Hangouts Chat to Google Chat. Following this rebranding, and along with a similar change for Hangouts Meet, the Hangouts brand is to be removed from Google Workspace.
Migration from Hangouts
Google first announced their plan to begin retiring Google Hangouts in October 2019. In October 2020, Google announced that it would open Google Chat up to consumers in 2021. Google also announced that Hangouts conversations, contacts, and history would be migrated over to Google Chat.
Google Chat began to roll out to consumer accounts in "early access" in February 2021, but at the time Google stated that Hangouts would remain a consumer-level product for people using standard Google accounts. By April 2021, Google Chat bec
|
https://en.wikipedia.org/wiki/Darling%2058
|
The Darling 58 is a genetically engineered American chestnut tree. The tree was created by the American Chestnut Research & Restoration Program at the State University of New York College of Environmental Science and Forestry (SUNY ESF), in collaboration with The American Chestnut Foundation, to restore the American chestnut to the forests of North America. Darling 58 trees are attacked by chestnut blight but survive, reaching maturity, producing chestnuts, and multiplying.
Background
The chestnut blight was introduced in the late 19th century with the Japanese chestnut and decimated the once-widespread American chestnut tree. Native, unmodified trees are killed back to the ground by the blight, and only the root system survives. The roots then continue to send up shoots that are once again attacked by the blight and die back before they reach maturity, repeating the cycle.
Mechanism
Chestnut blight damages trees by producing oxalic acid, which lowers the pH in the cambium and kills plant tissues. Darling 58 adds an oxalate oxidase (OxO) gene from wheat, driven by a CaMV 35S promoter. The promoter allows the OxO protein to be made throughout the plant. The OxO enzyme allows the plant to break down the acid before too much damage is done. The same defense strategy is found not only in wheat, but also in strawberries, bananas, oats, barley, and other cereals. The resistant trait is passed down to progeny. The resistance does not stop the blight from completing its lifecycle.
Extensive testing done with the transgenic Darling 58 variant to assess its effects on other species showed that the survival, pollen use, and reproduction of bumble bees were not affected by oxalate oxidase at the typical concentrations found in the pollen of the American chestnut. Presence of the transgenic oxalate oxidase gene in the genome of the American chestnut has little effect on photosynt
|
https://en.wikipedia.org/wiki/Call%20setup
|
In telecommunication, call setup is the process of establishing a virtual circuit across a telecommunications network. Call setup is typically accomplished using a signaling protocol.
The term call set-up time has the following meanings:
The overall length of time required to establish a circuit-switched call between users.
For data communication, the overall length of time required to establish a circuit-switched call between terminals; i.e., the time from the initiation of a call request to the beginning of the call message.
Note: Call set-up time is the summation of: (a) call request time—the time from initiation of a calling signal to the delivery to the caller of a proceed-to-select signal; (b) selection time—the time from the delivery of the proceed-to-select signal until all the selection signals have been transmitted; and (c) post selection time—the time from the end of the transmission of the selection signals until the delivery of the call-connected signal to the originating terminal.
Success rate
In telecommunications, the call setup success rate (CSSR) is the fraction of the attempts to make a call that result in a connection to the dialled number (due to various reasons not all call attempts end with a connection to the dialled number). This fraction is usually measured as a percentage of all call attempts made.
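As a simple worked example (figures invented for illustration): if 950 of 1,000 call attempts end with a connection to the dialled number, the call setup success rate is 950 / 1000 = 0.95, i.e. 95%.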
In telecommunications a call attempt invokes a call setup procedure, which, if successful, results in a connected call. A call setup procedure may fail due to a number of technical reasons. Such calls are classified as failed call attempts. In many practical cases, this definition needs to be further expanded with a number of detailed specifications describing which calls exactly are counted as successfully set up and which not. This is determined to a great degree by the stage of the call setup procedure at which a call is counted as connected. In modern communications systems, such as cellular (mobile) networks, the call setup procedu
|
https://en.wikipedia.org/wiki/Sampling%20risk
|
Sampling risk is one of the many types of risks an auditor may face when performing the necessary procedure of audit sampling. Audit sampling exists because of the impractical and costly effects of examining all or 100% of a client's records or books. As a result, a "sample" of a client's accounts are examined.
Due to the negative effects produced by sampling risk, an auditor may have to perform additional procedures which in turn can impact the overall efficiency of the audit.
Sampling risk represents the possibility that an auditor's conclusion based on a sample is different from that reached if the entire population were subject to audit procedure. The auditor may conclude that material misstatements exist when in fact they do not, or that material misstatements do not exist when in fact they do. Auditors can lower the sampling risk by increasing the sampling size.
Although there are many types of risks associated with the audit process, each type primarily has an effect on the overall audit engagement. The effects produced by sampling risk generally can increase audit risk, the risk that an entity's financial statements will contain a material misstatement even though they are given an unqualified ('clean') audit report. Sampling risk can also increase detection risk, which is the possibility that an auditor will not find material misstatements relating to the financial statements through substantive tests and analysis.
Types of sampling risk
Typical scenarios
Auditors must often make professional judgments in assessing sampling risk. When testing samples the auditor is primarily concerned with two aspects of sampling risk:
Risk of incorrect acceptance: the risk that the sample supports the conclusion that the recorded account balance is not materially misstated when it is materially misstated.
Risk of incorrect rejection: the risk that the sample supports the conclusion that the recorded amount balance is materially misstated when it is not materially misstated.
In add
|
https://en.wikipedia.org/wiki/Barley%20malt%20syrup
|
Barley malt syrup is an unrefined sweetener, processed by extraction from sprouted, malted barley.
Barley malt syrup contains approximately 65 percent maltose, 30 percent complex carbohydrates, and 3 percent storage protein (prolamin glycoprotein). Malt syrup is dark brown, thick, sticky, and possesses a strong distinctive flavor described as "malty". It is about half as sweet as refined white sugar. Barley malt syrup is sometimes used in combination with other natural sweeteners to lend a malt flavor. Also called "barley malt extract" (or just malt syrup), barley malt syrup is made from malted barley, though there are instances of mislabeling where merchants use other grains or corn syrup in production.
Barley malt syrup is also sold in powdered form. Barley malt extract is used in the bread and baked good industry for browning and flavoring, and in cereal manufacture to add malt flavor. Adding barley malt syrup to yeast dough increases fermentation as a result of the enzymes in the malt, thus quickening the proofing process.
Barley malt syrup has a long history, and was one of the primary sweeteners (along with honey) in use in China in the years 1000 BCE – 1000 CE. Qimin Yaoshu, a classic 6th century Chinese text, contains notes on the extraction of malt syrup and maltose from common household grains. Barley malt syrup continues to be used in traditional Chinese sweets, such as Chinese cotton candy.
Sugar rationing in the US led to the first commercial malt syrup production in the 1920s, to deal with sugar shortages.
Malt loaf is another product that makes use of barley malt syrup.
See also
Brewing
List of syrups
List of unrefined sweeteners
Malted milk
|
https://en.wikipedia.org/wiki/DNADynamo
|
DNADynamo is a commercial DNA sequence analysis software package produced by Blue Tractor Software Ltd that runs on Microsoft Windows, Mac OS X, and Linux.
It is used by molecular biologists to analyze DNA and protein sequences. A free demo is available from the software developer's website.
Features
DNADynamo is a general-purpose DNA and protein sequence analysis package that can carry out most of the functions required by a standard research molecular biology laboratory.
DNA and Protein Sequence viewing, editing and annotating
Contig assembly and chromatogram editing including comparison to a reference sequence to identify mutations
Global Sequence alignment with ClustalW and MUSCLE and editing.
Select-and-drag sequence alignment editing for hand-made DNA vs. protein alignments
Restriction site analysis - for viewing restriction cut sites in tables and on linear and circular maps.
A subcloning tool for the assembly of constructs using restriction sites or Gibson assembly.
Agarose Gel simulation.
Online Database searching - Search public databases at the NCBI such as Genbank and UniProt.
Online BLAST searches.
Protein analysis including estimation of Molecular Weight, Extinction Coefficient and pI.
PCR Primer design, including an interface to Primer3
3D structure viewing via an interface to Jmol
History
DNADynamo has been developed since 2004 by Blue Tractor Software Ltd, a software development company based in North Wales, UK.
|
https://en.wikipedia.org/wiki/Mikroelektronika
|
MikroElektronika (stylized as MikroE) is a Serbian manufacturer and retailer of hardware and software tools for developing embedded systems. The company headquarters is in Belgrade, Serbia.
Its best known software products are mikroC, mikroBasic and mikroPascal compilers for programming microcontrollers. Its flagship hardware product line is Click boards, a range of more than 550 add-on boards for interfacing microcontrollers with peripheral sensors or transceivers. These boards conform to mikroBUS – a standard conceived by MikroElektronika and later endorsed by NXP Semiconductors and Microchip Technology, among others. MikroElektronika is also known for Hexiwear, an Internet of things development kit developed in partnership with NXP Semiconductors.
History
Serbian entrepreneur – and current company owner and CEO – Nebojša Matić started publishing an electronics magazine called "MikroElektronika" in 1997. In 2001, the magazine was shut down and MikroElektronika repositioned itself as a company focused on producing development boards for microcontrollers and publishing books for developing embedded systems.
The company started offering compilers in 2004, with the release of mikroPascal for PIC and mikroBasic for PIC – compilers for programming 8-bit microcontrollers from Microchip Technology. Between 2004 and 2015 the company released C, Basic and Pascal compilers for seven microcontroller architectures: PIC, PIC32, dsPIC/PIC24, FT90x, AVR, 8051, and ARM® (supporting STMicroelectronics, Texas Instruments and Microchip-based ARM® Cortex microcontrollers).
In conjunction with compilers, MikroElektronika kept its focus on producing development boards while gradually ceasing its publishing activities. Its current generation of the "Easy" boards brand was released in 2012. One of the flagship models, EasyPIC Fusion v7, was nominated for best tool at the Embedded World 2013 exhibition in Nuremberg, an important embedded systems industry gathering. Other product lines we
|
https://en.wikipedia.org/wiki/WiCell
|
WiCell Research Institute is a scientific research institute in Madison, Wisconsin that focuses on stem cell research. Independently governed and supported as a 501(c)(3) organization, WiCell operates as an affiliate of the Wisconsin Alumni Research Foundation and works to advance stem cell research at the University of Wisconsin–Madison and beyond.
History
Established in 1998 to develop stem cell technology, WiCell Research Institute is a nonprofit organization that creates and distributes human pluripotent stem cell lines worldwide. WiCell also provides cytogenetic and technical services, establishes scientific protocols and supports basic research on the UW-Madison campus.
WiCell serves as home to the Wisconsin International Stem Cell Bank. This stem cell repository stores, characterizes and provides access to stem cell lines for use in research and clinical development. The cell bank originally stored the first five human embryonic stem cell lines derived by Dr. James Thomson of UW–Madison. It currently houses human embryonic stem cell lines, induced pluripotent stem cell lines, clinical grade cell lines developed in accordance with Good Manufacturing Practices (GMP) and differentiated cell lines including neural progenitor cells.
To support continued progress in the field and help unlock the therapeutic potential of stem cells, in 2005 WiCell began providing cytogenetic services and quality control testing services. These services allow scientists to identify genetic abnormalities in cells or changes in stem cell colonies that might affect research results.
Organization
Chartered with a mission to support scientific investigation and research at UW–Madison, WiCell collaborates with faculty members and provides support with stem cell research projects. The institute established its cytogenetic laboratory to meet the growing needs of academic and commercial researchers to monitor genetic stability in stem cell cultures.
Facilities
WiCell maintains its stem c
|
https://en.wikipedia.org/wiki/NVM%20Express
|
NVM Express (NVMe) or Non-Volatile Memory Host Controller Interface Specification (NVMHCIS) is an open, logical-device interface specification for accessing a computer's non-volatile storage media usually attached via the PCI Express bus. The initialism NVM stands for non-volatile memory, which is often NAND flash memory that comes in several physical form factors, including solid-state drives (SSDs), PCIe add-in cards, and M.2 cards, the successor to mSATA cards. NVM Express, as a logical-device interface, has been designed to capitalize on the low latency and internal parallelism of solid-state storage devices.
Architecturally, the logic for NVMe is physically stored within and executed by the NVMe controller chip that is physically co-located with the storage media, usually an SSD. Version changes for NVMe, e.g., 1.3 to 1.4, are incorporated within the storage media, and do not affect PCIe-compatible components such as motherboards and CPUs.
By its design, NVM Express allows host hardware and software to fully exploit the levels of parallelism possible in modern SSDs. As a result, NVM Express reduces I/O overhead and brings various performance improvements relative to previous logical-device interfaces, including multiple long command queues, and reduced latency. The previous interface protocols like AHCI were developed for use with far slower hard disk drives (HDD) where a very lengthy delay (relative to CPU operations) exists between a request and data transfer, where data speeds are much slower than RAM speeds, and where disk rotation and seek time give rise to further optimization requirements.
NVM Express devices are chiefly available in the form of standard-sized PCI Express expansion cards and as 2.5-inch form-factor devices that provide a four-lane PCI Express interface through the U.2 connector (formerly known as SFF-8639). Storage devices using SATA Express and the M.2 specification which support NVM Express as the logical-device interface are a po
|
https://en.wikipedia.org/wiki/Noise-domain%20reflectometry
|
Noise-domain reflectometry is a type of reflectometry where the reflectometer exploits existing data signals on wiring and does not have to generate any signals itself. Noise-domain reflectometry, like time-domain and spread-spectrum time domain reflectometers, is most often used in identifying the location of wire faults in electrical lines.
Time-domain reflectometers work by generating a signal and then sending that signal down the wireline and examining the reflected signal. Noise-domain reflectometers (NDRs) provide the benefit of locating wire faults without introducing an external signal because the NDR examines the existing signals on the line to identify wire faults. This technique is particularly useful in the testing of live wires where data integrity on the wires is critical. For example, NDRs can be used for monitoring aircraft wiring while in flight.
See also
Spread-spectrum time-domain reflectometry
Time-domain reflectometry
|
https://en.wikipedia.org/wiki/Lynda%20Soderholm
|
Lynda Soderholm is a physical chemist at the U.S. Department of Energy’s (DOE) Argonne National Laboratory with a specialty in f-block elements. She is a senior scientist and the lead of the Actinide, Geochemistry & Separation Sciences Theme within Argonne's Chemical Sciences and Engineering Division. Her specific role is the Separation Science group leader within Heavy Element Chemistry and Separation Science (HESS), directing basic research focused on low-energy methods for isolating lanthanide and actinide elements from complex mixtures. She has made fundamental contributions to understanding f-block chemistry and characterizing f-block elements.
Soderholm became a Fellow of the American Association for the Advancement of Science (AAAS) in 2013, and is also an Argonne Distinguished Fellow.
Early life and education
Soderholm was awarded her PhD in 1982 by McMaster University under the direction of Prof. John Greedan. Her dissertation focused on characterizing the structural and magnetic properties of a series of ternary f-ion oxides. After graduating, she held a NATO postdoctoral fellowship at the Centre national de la recherche scientifique in France from 1982 until 1985. After a short appointment as an Argonne postdoctoral fellow, she was promoted to staff scientist in the same year. Over several years, she moved up the ranks, becoming a senior chemist in 2001. She was also an adjunct professor at the University of Notre Dame from 2003 until 2007. In 2021, Soderholm was appointed interim Division Director for the Chemical Sciences and Engineering Division.
Career and research
Uncovering structure of Yttrium-123 Superconductor
Early in her career, Soderholm focused on characterizing the magnetic and electronic behavior of compounds containing f-ions (lanthanides and actinides), with particular attention to high-Tc materials, compounds that are superconducting at unusually high temperatures. She was part of the research group that first determined the s
|
https://en.wikipedia.org/wiki/Local%20boundedness
|
In mathematics, a function is locally bounded if it is bounded around every point. A family of functions is locally bounded if for any point in their domain all the functions are bounded around that point and by the same number.
Locally bounded function
A real-valued or complex-valued function $f$ defined on some topological space $X$ is called locally bounded if for any $x_0 \in X$ there exists a neighborhood $A$ of $x_0$ such that $f(A)$ is a bounded set. That is, for some number $M > 0$ one has
\[
|f(x)| \leq M \quad \text{for all } x \in A .
\]
In other words, for each $x$ one can find a constant, depending on $x$, which is larger than all the values of the function in the neighborhood of $x$. Compare this with a bounded function, for which the constant does not depend on $x$. Obviously, if a function is bounded then it is locally bounded. The converse is not true in general (see below).
This definition can be extended to the case when $f$ takes values in some metric space $(Y, d)$. Then the inequality above needs to be replaced with
\[
d(f(x), y_0) \leq M \quad \text{for all } x \in A ,
\]
where $y_0 \in Y$ is some point in the metric space. The choice of $y_0$ does not affect the definition; choosing a different $y_0$ will at most increase the constant $M$ for which this inequality is true.
Examples
The function $f : \mathbb{R} \to \mathbb{R}$ defined by $f(x) = \frac{1}{x^2 + 1}$ is bounded, because $0 \leq f(x) \leq 1$ for all $x$. Therefore, it is also locally bounded.
The function $f : \mathbb{R} \to \mathbb{R}$ defined by $f(x) = 2x + 3$ is not bounded, as it becomes arbitrarily large. However, it is locally bounded because for each $a$, $|f(x)| \leq M$ in the neighborhood $(a - 1, a + 1)$, where $M = 2|a| + 5$.
The function $f : \mathbb{R} \to \mathbb{R}$ defined by $f(x) = \frac{1}{x}$ for $x \neq 0$ and $f(0) = 0$ is neither bounded nor locally bounded. In any neighborhood of 0 this function takes values of arbitrarily large magnitude.
Any continuous function is locally bounded. Here is a proof for functions of a real variable. Let $f : \mathbb{R} \to \mathbb{R}$ be continuous; we will show that $f$ is locally bounded at $a$ for all $a \in \mathbb{R}$. Taking $\varepsilon = 1$ in the definition of continuity, there exists $\delta > 0$ such that $|f(x) - f(a)| < 1$ for all $x$ with $|x - a| < \delta$. Now by the triangle inequality, $|f(x)| \leq |f(a)| + 1$, which means that $f$ is locally bounded at $a$ (taking $M = |f(a)| + 1$ and the neighborhood $(a - \delta, a + \delta)$). This argument generalizes easily to when the domain of $f$ is any topological space.
The converse of the above r
|
https://en.wikipedia.org/wiki/Biometal%20%28biology%29
|
Biometals are metals normally present, in small but important and measurable amounts, in biology, biochemistry, and medicine. The metals copper, zinc, iron, and manganese are examples of metals that are essential for the normal functioning of most plants and the bodies of most animals, such as the human body. A few (calcium, potassium, sodium) are present in relatively larger amounts, whereas most others are trace metals, present in smaller but important amounts (the image shows the percentages for humans). Approximately 2/3 of the existing periodic table is composed of metals with varying properties, accounting for the diverse ways in which metals (usually in ionic form) have been utilized in nature and medicine.
History
At first, the study of biometals was referred to as bioinorganic chemistry. Each branch of bioinorganic chemistry studied separate, particular sub-fields of the subject. However, this led to an isolated view of each particular aspect in a biological system. This view was revised into a holistic approach of biometals in metallomics.
Metal ions in biology were studied in various specializations. In nutrition, the aim was to define the essentials for life; in toxicology, to define the adverse effects of certain metal ions on biological systems; and in pharmacology, to define their therapeutic effects. In each field, metal ions were at first studied and separated on the basis of concentration. In low amounts, metal ions in a biological system could perform at their optimal functionality, whereas in higher concentrations, metal ions can prove fatal to biological systems. However, these concentration-based distinctions proved to be arbitrary, as low concentrations of non-essential elements (like lithium or helium) alongside essential metals (like sodium or potassium) can cause adverse effects in biological systems, and vice versa.
Investigations into biometals and their effects date back to the 19th century and even further back to the 18th century with the identification of iron i
|
https://en.wikipedia.org/wiki/Symplectomorphism
|
In mathematics, a symplectomorphism or symplectic map is an isomorphism in the category of symplectic manifolds. In classical mechanics, a symplectomorphism represents a transformation of phase space that is volume-preserving and preserves the symplectic structure of phase space, and is called a canonical transformation.
Formal definition
A diffeomorphism $f : (M, \omega) \to (N, \omega')$ between two symplectic manifolds is called a symplectomorphism if
\[
f^{*}\omega' = \omega ,
\]
where $f^{*}$ is the pullback of $f$. The symplectic diffeomorphisms from $M$ to $M$ are a (pseudo-)group, called the symplectomorphism group (see below).
The infinitesimal version of symplectomorphisms gives the symplectic vector fields. A vector field $X \in \Gamma^{\infty}(TM)$ is called symplectic if
\[
\mathcal{L}_X \omega = 0 .
\]
Also, $X$ is symplectic iff the flow $\varphi_t : M \to M$ of $X$ is a symplectomorphism for every $t$.
These vector fields build a Lie subalgebra of $\Gamma^{\infty}(TM)$.
Here, $\Gamma^{\infty}(TM)$ is the set of smooth vector fields on $M$, and $\mathcal{L}_X$ is the Lie derivative along the vector field $X$.
Examples of symplectomorphisms include the canonical transformations of classical mechanics and theoretical physics, the flow associated to any Hamiltonian function, the map on cotangent bundles induced by any diffeomorphism of manifolds, and the coadjoint action of an element of a Lie group on a coadjoint orbit.
Flows
Any smooth function on a symplectic manifold gives rise, by definition, to a Hamiltonian vector field and the set of all such vector fields form a subalgebra of the Lie algebra of symplectic vector fields. The integration of the flow of a symplectic vector field is a symplectomorphism. Since symplectomorphisms preserve the symplectic 2-form and hence the symplectic volume form, Liouville's theorem in Hamiltonian mechanics follows. Symplectomorphisms that arise from Hamiltonian vector fields are known as Hamiltonian symplectomorphisms.
Since the flow of a Hamiltonian vector field also preserves $H$, in physics this is interpreted as the law of conservation of energy.
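In formulas, this is a one-line computation using the defining relation $dH = \omega(X_H, \cdot\,)$ (the convention used in the Arnold-conjecture entry above) and the antisymmetry of $\omega$:
\[
\frac{d}{dt}\,H\big(\varphi_t(x)\big) = dH\big(X_H\big)\Big|_{\varphi_t(x)} = \omega\big(X_H, X_H\big)\Big|_{\varphi_t(x)} = 0 .
\]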
If the first Betti number of a connected symplectic manifold is zero, symplectic and Ham
|
https://en.wikipedia.org/wiki/Adrenergic%20storm
|
An adrenergic storm is a sudden and dramatic increase in serum levels of the catecholamines adrenaline and noradrenaline (also known as epinephrine and norepinephrine respectively), with a less significant increase in dopamine transmission. It is a life-threatening condition because of extreme tachycardia and hypertension, and is especially dire for those with prior heart problems. If treatment is prompt, prognosis is good; typically large amounts of diazepam or other benzodiazepines are administered alongside beta blockers. Beta blockers are contraindicated in some patients, so other anti-hypertensive medication such as clonidine may be used.
Antipsychotics are also used to treat the most severe psychiatric reactions such as psychosis, paranoia or terror, after their use was formerly discouraged because of their potential to prolong the QT interval; however, more recent research performed since 2019 has revealed that this and other severe side effects are rare and their occurrence does not warrant banning antipsychotics from the treatment of adrenergic crises for which they can be extremely useful.
Adrenergic storms are usually caused by overdoses of stimulants, especially cocaine or methamphetamine, or by eating foods high in tyramine while taking monoamine oxidase inhibitors. A subarachnoid hemorrhage can also cause an adrenergic storm. A catecholamine storm is part of the normal course of rabies infection, and is responsible for the severe feelings of agitation, terror, and dysautonomia present in the pre-coma stage of the disease.
Signs and symptoms
The behavioral symptoms are similar to those of an amphetamine, cocaine or caffeine overdose. Overstimulation of the central nervous system results in a state of hyperkinetic movement and unpredictable mental status including mania, rage and suicidal behavior; hyperthermia is also prominently present. Delirium can also be present but rarely.
Physical symptoms are more serious and include heart arrhythmias as well as
|
https://en.wikipedia.org/wiki/Many-sorted%20logic
|
Many-sorted logic can reflect formally our intention not to handle the universe as a homogeneous collection of objects, but to partition it in a way that is similar to types in typeful programming. Both functional and assertive "parts of speech" in the language of the logic reflect this typeful partitioning of the universe, even on the syntax level: substitution and argument passing can be done only accordingly, respecting the "sorts".
There are various ways to formalize the intention mentioned above; a many-sorted logic is any package of information which fulfils it. In most cases, the following are given:
a set of sorts, S
an appropriate generalization of the notion of signature to be able to handle the additional information that comes with the sorts.
The domain of discourse of any structure of that signature is then fragmented into disjoint subsets, one for every sort.
Example
When reasoning about biological organisms, it is useful to distinguish two sorts: and . While a function makes sense, a similar function usually does not. Many-sorted logic allows one to have terms like , but to discard terms like as syntactically ill-formed.
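The sort names in the article's example are not visible in the text above, so the following sketch uses illustrative names of its own (Plant, Animal, mother_of). It shows the common analogy between sorts and static types: a type checker such as mypy rejects an ill-sorted term in much the same way that many-sorted logic discards it as syntactically ill-formed.

class Plant:
    pass

class Animal:
    pass

def mother_of(x: Animal) -> Animal:
    """A function symbol declared only on the (illustrative) sort Animal."""
    return x

fido = Animal()
oak = Plant()

mother_of(fido)   # well-sorted: the argument has the declared sort
mother_of(oak)    # ill-sorted: flagged by the type checker, just as
                  # many-sorted logic rejects the term as ill-formed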
Algebraization
The algebraization of many-sorted logic is explained in an article by Caleiro and Gonçalves, which generalizes abstract algebraic logic to the many-sorted case, but can also be used as introductory material.
Order-sorted logic
While many-sorted logic requires two distinct sorts to have disjoint universe sets, order-sorted logic allows one sort to be declared a subsort of another sort , usually by writing or similar syntax. In the above biology example, it is desirable to declare
a number of subsort relationships among the sorts involved, and so on.
Wherever a term of some sort is required, a term of any subsort of may be supplied instead (Liskov substitution principle). For example, assuming a function declaration , and a constant declaration , the term is perfectly valid and has the sort . In order to supply the information that
|
https://en.wikipedia.org/wiki/Selectin
|
The selectins (cluster of differentiation 62 or CD62) are a family of cell adhesion molecules (or CAMs). All selectins are single-chain transmembrane glycoproteins that share similar properties to C-type lectins due to a related amino terminus and calcium-dependent binding. Selectins bind to sugar moieties and so are considered to be a type of lectin, cell adhesion proteins that bind sugar polymers.
Structure
All three known members of the selectin family (L-, E-, and P-selectin) share a similar cassette structure: an N-terminal, calcium-dependent lectin domain, an epidermal growth factor (EGF)-like domain, a variable number of consensus repeat units (2, 6, and 9 for L-, E-, and P-selectin, respectively), a transmembrane domain (TM) and an intracellular cytoplasmic tail (cyto). The transmembrane and cytoplasmic parts are not conserved across the selectins being responsible for their targeting to different compartments. Though they share common elements, their tissue distribution and binding kinetics are quite different, reflecting their divergent roles in various pathophysiological processes.
Types
There are three subsets of selectins:
E-selectin (in endothelial cells)
L-selectin (in leukocytes)
P-selectin (in platelets and endothelial cells)
L-selectin, the smallest of the vascular selectins, is expressed on all granulocytes and monocytes and on most lymphocytes, and can therefore be found on most leukocytes.
P-selectin, the largest selectin, is stored in α-granules of platelets and in Weibel–Palade bodies of endothelial cells, and is translocated to the cell surface of activated endothelial cells and platelets.
E-selectin is not expressed under baseline conditions, except in skin microvessels, but is rapidly induced by inflammatory cytokines.
These three types share a significant degree of sequence homology among themselves (except in the transmembrane and cytoplasmic domains) and between species. Analysis of this homology has revealed that the lectin domain, which b
|
https://en.wikipedia.org/wiki/Series%20%28botany%29
|
In botany and plant taxonomy, a series is a subdivision of a genus, a taxonomic rank below that of section (and subsection) but above that of species.
Sections and/or series are typically used to help organize very large genera, which may have hundreds of species.
Cultivar marketing
The term "series" is also used (in seed marketing) for groupings of cultivars, but this term has no formal status with that meaning in the ICNCP.
|
https://en.wikipedia.org/wiki/Steven%20Brams
|
Steven J. Brams (born November 28, 1940 in Concord, New Hampshire) is an American game theorist and political scientist at the New York University Department of Politics. Brams is best known for using the techniques of game theory, public choice theory, and social choice theory to analyze voting systems and fair division. He is one of the independent discoverers of approval voting, as well as extensions of approval voting to multiple-winner elections to give proportional representation of different interests.
Brams was a co-discoverer, with Alan Taylor, of the first envy-free cake-cutting solution for n people.
Previous to the Brams-Taylor procedure, the cake-cutting problem had been one of the most important open problems in contemporary mathematics. He is co-inventor with Taylor of the fair-division procedure, adjusted winner, which was patented by New York University in 1999 (# 5,983,205). Adjusted winner has been licensed to a Boston law firm, which formed a company, Fair Outcomes, Inc., that marketed several fair-division algorithms.
Brams has applied game theory to a wide variety of strategic situations, from the Bible and theology to international relations to sports.
Education
Brams earned his B.S. at Massachusetts Institute of Technology in Politics, Economics, and Science in 1962. In 1966, he earned his Ph.D. in Political Science at Northwestern University.
Career
Brams worked briefly in U.S. federal government positions and for the Institute for Defense Analyses before taking an assistant professor position at Syracuse University in 1967. He moved to New York University in 1969, where he is professor in the Department of Politics. He has been a visiting professor at the University of Rochester, the University of Michigan, the University of California, Irvine, the University of Pennsylvania, and Yale University.
In 1990–1991 he was president of the Peace Science Society (International); in 2004–2006, he was president of the Public Choice Society.
He
|
https://en.wikipedia.org/wiki/Harald%20Nordenson
|
Harald Nordenson (1886–1980) was a Swedish chemist, industrialist and politician best known for his criticisms of the theory of relativity.
Biography
Nordenson was born in Göttingen, Germany on August 10, 1886 as the son of ophthalmologist Erik Nordenson and his wife Bertha Nordenson, a descendant of Lars Johan Hierta and later a prominent women's activist. He matriculated at Uppsala University in 1906, graduated with a bachelor's degree in 1910, a licentiate degree in 1912 and earned his Ph.D. in physical chemistry in 1914, working on colloidal solutions.
Nordenson's choice of subject in Uppsala, physical chemistry, meant that he became part of the circle around Theodor Svedberg, who had become a famous scientist already around 1910 and who won the Nobel Prize in Chemistry in 1926. Nordenson served as assistant professor (docent) in physical chemistry in Uppsala 1914–1919. Simultaneous with his research in chemistry, he cultivated an interest in philosophy through his contacts with professor Adolf Phalén, who regarded the analysis of concepts as the most important area of philosophy. This led Nordenson to subject Albert Einstein's theories to a thorough philosophical scrutiny, which led to his first book about Einstein's theories in Swedish in 1922. This book made him a licentiate in theoretical philosophy in Uppsala. It was later developed into his 1969 English book, Relativity, Time and Reality. He studied the philosophy of Axel Hägerström and Phalén, and describes also their opinions about Einstein's theories in his book, together with reviews of many other contributors.
His book, Relativity, Time and Reality, is a controversial critique of Albert Einstein's concepts of time and simultaneity in special relativity. Nordenson attacked Einstein's theory from rules of logic and concluded that "The Theory of Relativity is not physics but philosophy and in my opinion poor philosophy". His arguments have been generally considered invalid.
The primary objection in
|
https://en.wikipedia.org/wiki/School%20of%20GeoSciences%2C%20University%20of%20Edinburgh
|
The University of Edinburgh School of GeoSciences, is a school within the College of Science and Engineering, which was formed in 2002 by the merger of four departments. It is split between the King's Buildings and the Central Area of the university. The institutes of Ecological Sciences and Earth Science are located at the King's Buildings, whilst the Institute of Geography is located on Drummond Street in the Central Area. In 2013 the department was ranked 8th best place to study geography in the country by The Guardian University Rankings, down from 2nd in 2006.
The school is ranked as one of the best in the UK for Earth Sciences. A 2008 Research Assessment Exercise assessment ranked the "Earth Systems and Environmental Science" department as the best in the UK by number of world leading research and staff. Its Geography department was ranked 15th in the world according to the 2015 QS rankings.
There are over 1100 undergraduate students and 250 postgraduate students in the School of GeoSciences. There are also around 100 research and teaching staff within the school.
The School collaborates with the University of Edinburgh Business School and the School of Economics to offer a Carbon Management MSc degree, the first in the world, which has students from over 20 countries. The school also has exchange programmes through the Erasmus programme, in addition to exchanges with universities in Canada, the United States, Australia and New Zealand.
The head of the School of GeoSciences is currently Professor Bryne Ngwenya. Famous recent alumni of the School include former BP chief executive Tony Hayward. Former Rector of the university Peter McColl matriculated at one of the predecessors, the Department of Geography.
Competition for entry is highly selective: in 2010, the School received 2221 applications but made only 275 offers, representing a 16.9% chance of an applicant receiving an offer. The school currently offers 11 undergraduate courses and a range of postgraduate degrees.
|
https://en.wikipedia.org/wiki/Ferranti%20Pegasus
|
Pegasus was an early British vacuum-tube (valve) computer built by Ferranti, Ltd that pioneered design features to make life easier for both engineers and programmers. Originally it was named the Ferranti Package Computer as its hardware design followed that of the Elliott 401 with modular plug-in packages. Much of the development was the product of three men: W. S. (Bill) Elliott (hardware); Christopher Strachey (software) and Bernard Swann (marketing and customer support). It was Ferranti's most popular valve computer with 38 being sold. The first Pegasus was delivered in 1956 and the last was delivered in 1959. Ferranti received funding for the development from the National Research Development Corporation (NRDC).
At least two Pegasus machines survive, one in The Science Museum, London and one which was displayed in the Science and Industry Museum, Manchester but which has now been removed to storage in the Science Museum archives at Wroughton. The Pegasus in The Science Museum, London ran its first program in December 1959 and was regularly demonstrated until 2009, when it developed a severe electrical fault. In early 2014, the Science Museum decided to retire it permanently, effectively ending the life of one of the world's oldest working computers. The Pegasus officially held the title of the world's oldest working computer until 2012, when the restoration of the Harwell computer was completed at the National Museum of Computing.
Design
At the time, it was often unclear whether a failure was due to the hardware or the program. As a consequence, Christopher Strachey of NRDC, who was himself a brilliant programmer, recommended the following design objectives:
The necessity for optimum programming (favoured by Alan Turing) was to be minimised, "because it tended to become a time-wasting intellectual hobby of the programmers".
The needs of the programmer were to be a governing factor in selecting the instruction set.
It was to be cheap and reliable.
|
https://en.wikipedia.org/wiki/Jtest
|
Jtest is an automated Java software testing and static analysis product developed by Parasoft. The product includes technology for data-flow analysis, unit test-case generation and execution, static analysis, and more. Jtest is used by companies such as Cisco Systems and TransCore. It is also used by Lockheed Martin for the F-35 Joint Strike Fighter program (JSF).
Awards
Jtest received the Dr. Dobb's Journal Jolt Award for Excellence in 2000.
It was granted a Codie award from the Software and Information Industry Association for "Best Software Testing Solution" in 2005 and 2007. It also won "Technology of the Year" award as "Best Application Test Tool" from InfoWorld two years in a row in 2006 and 2007.
See also
Automated testing
List of unit testing frameworks
List of tools for static code analysis
Regression testing
Software testing
System testing
Test case
Test-driven development
xUnit, a family of unit testing frameworks
|
https://en.wikipedia.org/wiki/Starlet%20sea%20anemone
|
The starlet sea anemone (Nematostella vectensis) is a species of small sea anemone in the family Edwardsiidae native to the east coast of the United States, with introduced populations along the coast of southeast England and the west coast of the United States (class Anthozoa, phylum Cnidaria, a sister group of Bilateria). Populations have also been located in Nova Scotia, Canada. This sea anemone is found in the shallow brackish water of coastal lagoons and salt marshes where its slender column is usually buried in the mud and its tentacles exposed. Its genome has been sequenced and it is cultivated in the laboratory as a model organism, but the IUCN has listed it as being a "Vulnerable species" in the wild.
Description
The starlet sea anemone has a bulbous basal end and a contracting column that ranges in length from less than . There is a fairly distinct division between the scapus, the main part of the column, and the capitulum, the part just below the crown of tentacles. The outer surface of the column has a loose covering of mucus to which particles of sediment tend to adhere. At the top of the column is an oral disk containing the mouth surrounded by two rings of long slender tentacles. Typically there are fourteen but sometimes as many as twenty tentacles, the outermost being longer than the inner whorl. The starlet sea anemone is translucent and largely colourless but usually has a pattern of white markings on the column and white banding on the tentacles.
Distribution and habitat
The starlet sea anemone occurs on the eastern and western seaboards of North America. Its range extends from Nova Scotia to Louisiana on the east coast and from Washington to California on the west coast. It is also known from three locations in the United Kingdom—two in East Anglia and one on the Isle of Wight. Its typical habitat is brackish ponds, brackish lagoons and ditches and pools in salt marshes. It is found in positions with little water flow and seldom occurs more th
|
https://en.wikipedia.org/wiki/Image%20destriping
|
Image destriping is the process of removing stripes or streaks from images and videos without disrupting the original image/video. These artifacts plague a range of fields in scientific imaging including atomic force microscopy, light sheet fluorescence microscopy, and planetary satellite imaging.
The most common image processing technique for reducing stripe artifacts is Fourier filtering. Unfortunately, filtering methods risk altering or suppressing useful image data. Methods developed for multiple-sensor imaging systems in planetary satellites use statistical methods to match the signal distribution across multiple sensors. More recently, a new class of approaches leverages compressed sensing to regularize an optimization problem and recover stripe-free images. In many cases, these destriped images have little to no artifacts, even at low signal-to-noise ratios.
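As a rough sketch of the Fourier-filtering idea (the notch width and the assumption of purely vertical stripes are illustrative choices, not a method prescribed by any particular paper): a stripe pattern that varies only horizontally concentrates its energy in a thin band of the 2-D spectrum, which can be masked out before inverting the transform.

import numpy as np

def destripe_fourier(image, notch_half_width=2):
    # Vertical stripes (intensity varying only left-to-right) put their
    # energy on the zero-vertical-frequency row of the shifted spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = spectrum.shape
    cy, cx = rows // 2, cols // 2
    mask = np.ones((rows, cols))
    # Zero a thin horizontal band through the centre of the spectrum...
    mask[cy - notch_half_width : cy + notch_half_width + 1, :] = 0.0
    # ...but keep the low-frequency region so overall brightness survives.
    mask[cy - notch_half_width : cy + notch_half_width + 1,
         cx - notch_half_width : cx + notch_half_width + 1] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))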
|
https://en.wikipedia.org/wiki/Apollonian%20sphere%20packing
|
Apollonian sphere packing is the three-dimensional equivalent of the Apollonian gasket. The principle of construction is very similar: with any four spheres that are cotangent to each other, it is then possible to construct two more spheres that are cotangent to four of them, resulting in an infinite sphere packing.
The fractal dimension is approximately 2.473946 (±1 in the last digit).
Software for generating and visualization of the apollonian sphere packing: ApolFrac.
|
https://en.wikipedia.org/wiki/Redundant%20code
|
In computer programming, redundant code is source code or compiled code in a computer program that is unnecessary, such as:
recomputing a value that has previously been calculated and is still available,
code that is never executed (known as unreachable code),
code which is executed but has no external effect (e.g., does not change the output produced by a program; known as dead code).
A NOP instruction might be considered to be redundant code that has been explicitly inserted to pad out the instruction stream or introduce a time delay, for example to create a timing loop by "wasting time". Identifiers that are declared, but never referenced, are termed redundant declarations.
Examples
The following examples are in C.
int foo(int iX)
{
int iY = iX*2;
return iX*2;
}
The second iX*2 expression is redundant code and can be replaced by a reference to the variable iY. Alternatively, the definition int iY = iX*2 can instead be removed.
Consider:
#define min(A,B) ((A)<(B)?(A):(B))
int shorter_magnitude(int u1, int v1, int u2, int v2)
{
/* Returns the shorter magnitude of (u1,v1) and (u2,v2) */
return sqrt(min(u1*u1 + v1*v1, u2*u2 + v2*v2));
}
As a consequence of using the C preprocessor, the compiler will only see the expanded form:
int shorter_magnitude(int u1, int v1, int u2, int v2)
{
int temp;
if (u1*u1 + v1*v1 < u2*u2 + v2*v2)
temp = u1*u1 + v1*v1; /* Redundant already calculated for comparison */
else
temp = u2*u2 + v2*v2; /* Redundant already calculated for comparison */
return sqrt(temp);
}
Because the use of min/max macros is very common, modern compilers are programmed to recognize and eliminate redundancy caused by their use.
There is no redundancy, however, in the following code:
#define max(A,B) ((A)>(B)?(A):(B))
int random(int cutoff, int range)
{
return max(cutoff, rand()%range);
}
If the initial call to rand(), modulo range, is greater than or equal to cutoff, rand() will be called a seco
|
https://en.wikipedia.org/wiki/Genitourinary%20amoebiasis
|
Genitourinary amoebiasis or renal amoebiasis is a rare complication of amoebic liver abscess, which in turn is a complication of amoebiasis. It is believed to result from liver abscesses breaking open, whereupon the amoebas spread through the blood to the new locale. Genital involvement is thought to result from fistula formation from the liver or through rectocolitis. The involvement causes lesions which exude large amounts of pus.
|
https://en.wikipedia.org/wiki/TRE%20%28computing%29
|
TRE is an open-source library for pattern matching in text, which works like a regular expression engine with the ability to do approximate string matching. It was developed by Ville Laurikari and is distributed under a 2-clause BSD-like license.
The library is written in C and provides functions which allow using regular expressions for searching over input text lines. The main difference from other regular expression engines is that TRE can match text fragments in an approximate way, that is, supposing that text could have some number of typos.
Features
TRE uses extended regular expression syntax with the addition of "directions" for matching a preceding fragment in an approximate way. Each such direction specifies how many typos are allowed for that fragment.
Approximate matching is performed in a way similar to Levenshtein distance, which means that three types of typos are recognized: insertion, deletion, and substitution of a character.
TRE allows the cost of each of the three typo types to be specified independently.
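TRE itself is a C library, and its exact directive syntax is described in its own documentation. Purely as an illustration of the same idea (a per-fragment limit on the number of allowed typos), the following sketch uses the third-party Python regex module, whose fuzzy-matching syntax differs from TRE's:

import regex  # third-party module: pip install regex

text = "the arhcive was unpacked"

# Allow up to two typos of any kind (insertion, deletion or substitution)
# when matching the fragment "archive"; "arhcive" falls within that limit.
match = regex.search(r"(?:archive){e<=2}", text)
print(match.group() if match else "no match")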
The project comes with a command-line utility, a reimplementation of agrep.
Though approximate matching requires some syntax extension, when this feature is not used, TRE works like most other regular expression matching engines. This means that
it implements ordinary regular expressions written for strict matching;
programmers familiar with POSIX-style regular expressions need not do much study to be able to use TRE.
Predictable time and memory consumption
The library's author states that the time spent on matching grows linearly with the length of the input text, while the memory requirement is constant during matching and depends only on the pattern, not on the input.
Other
Other features common to most regular expression engines can be checked in regex engine comparison tables or in the list of TRE features on its web page.
Usage example
Approximate matching directions are specified in curly brackets and should be distinguishable from repetitive quantifiers (possibly with inserting
|
https://en.wikipedia.org/wiki/List%20of%20Japanese%20typographic%20symbols
|
This article lists Japanese typographic symbols that are not included in kana or kanji groupings.
Repetition marks
Brackets and quotation marks
Phonetic marks
Punctuation marks
Other special marks
Organization-specific symbols
See also
Japanese map symbols
Japanese punctuation
Emoji, which originated in Japanese mobile phone culture
|
https://en.wikipedia.org/wiki/Ralcorp
|
Ralcorp Holdings is a manufacturer of various food products, including breakfast cereal, cookies, crackers, chocolate, snack foods, mayonnaise, pasta, and peanut butter. The company is based in St. Louis, Missouri. The majority of the items Ralcorp makes are private-label, store-brand products. It has over 9,000 employees. Ralcorp has its headquarters in the Bank of America Plaza in downtown St. Louis.
History and description
Originally part of Ralston Purina, the Ralston name was more associated with food for humans; soda crackers and a farina cereal, among other products, were marketed under this name. Ralcorp can trace its ancestry to 1898 when William H. Danforth of Purina Mills, which made animal feeds, began making breakfast cereal. He sought and received the endorsement of Webster Edgerly (Dr. Ralston) who founded the Ralstonism social movement. Ralston cereal became so successful that Purina Mills was renamed Ralston Purina in 1902. Ralston Purina also for many years produced the familiar line of Chex and Cookie Crisp cold breakfast cereals. The animal and human food businesses were seemingly only tenuously related. In 1994, the human food business was spun off to Ralcorp Holdings, operating as Ralston Foods, which then sold its branded breakfast cereal lineup to General Mills and its Continental Baking division (Wonder Bread and Twinkies) to Interstate Bakeries. The Purina part of the company is now split. The pet-food company sold to Nestlé is now called Nestlé Purina PetCare. The livestock-feed company is called Purina Mills, LLC, and is a unit of Land O'Lakes. Ralcorp manufactures many store-brand foods that are sold in grocery outlets across the United States under the retailers' private labels. In late 2007, Ralcorp signed an agreement with Kraft Foods to acquire the Post Cereals brands, thus returning to the major-branded cereal business. The acquisition was completed August 4, 2008. Another brand name product Ralcorp makes and markets is Ry-Krisp
|
https://en.wikipedia.org/wiki/The%20World%20%28book%29
|
The World, also called Treatise on the Light (French title: Traité du monde et de la lumière), is a book by René Descartes (1596–1650). Written between 1629 and 1633, it contains a nearly complete version of his philosophy, from method, to metaphysics, to physics and biology.
Descartes espoused mechanical philosophy, a form of natural philosophy popular in the 17th century. He thought everything physical in the universe to be made of tiny "corpuscles" of matter. Corpuscularianism is closely related to atomism. The main difference was that Descartes maintained that there could be no vacuum, and all matter was constantly swirling to prevent a void as corpuscles moved through other matter. The World presents a corpuscularian cosmology in which swirling vortices explain, among other phenomena, the creation of the Solar System and the circular motion of planets around the Sun.
The World rests on the heliocentric view, first explicated in Western Europe by Copernicus. Descartes delayed the book's release upon news of the Roman Inquisition's conviction of Galileo for "suspicion of heresy" and sentencing to house arrest. Descartes discussed his work on the book, and his decision not to release it, in letters with another philosopher, Marin Mersenne.
Some material from The World was revised for publication as Principia philosophiae or Principles of Philosophy (1644), a Latin textbook at first intended by Descartes to replace the Aristotelian textbooks then used in universities. In the Principles the heliocentric tone was softened slightly with a relativist frame of reference. The last chapter of The World was published separately as De Homine (On Man) in 1662. The rest of The World was finally published in 1664, and the entire text in 1677.
The void and particles in nature
Before Descartes begins to describe his theories in physics, he introduces the reader to the idea that there is no relationship between our sensations and what creates these sensations, thereby cast
|
https://en.wikipedia.org/wiki/AMD%20FireStream
|
AMD FireStream was AMD's brand name for their Radeon-based product line targeting stream processing and/or GPGPU in supercomputers. Originally developed by ATI Technologies around the Radeon X1900 XTX in 2006, the product line was previously branded as both ATI FireSTREAM and AMD Stream Processor. The AMD FireStream can also be used as a floating-point co-processor for offloading CPU calculations, which is part of the Torrenza initiative. The FireStream line has been discontinued since 2012, when GPGPU workloads were entirely folded into the AMD FirePro line.
Overview
The FireStream line is a series of add-on expansion cards released from 2006 to 2010, based on standard Radeon GPUs but designed to serve as a general-purpose co-processor, rather than rendering and outputting 3D graphics. Like the FireGL/FirePro line, they were given more memory and memory bandwidth, but the FireStream cards do not necessarily have video output ports. All support 32-bit single-precision floating point, and all but the first release support 64-bit double-precision. The line was partnered with new APIs to provide higher performance than existing OpenGL and Direct3D shader APIs could provide, beginning with Close to Metal, followed by OpenCL and the Stream Computing SDK, and eventually integrated into the APP SDK.
For highly parallel floating point math workloads, the cards can speed up large computations by more than 10 times; Folding@Home, the earliest and one of the most visible users of the GPGPU, obtained 20-40 times the CPU performance. Each pixel and vertex shader, or unified shader in later models, can perform arbitrary floating-point calculations.
History
Following the release of the Radeon R520 and GeForce G70 GPU cores with programmable shaders, their large floating-point throughput drew attention from academic and commercial groups experimenting with using them for non-graphics work. The interest led ATI (and Nvidia) to create GPGPU products — able to calculate general pu
|
https://en.wikipedia.org/wiki/Non-bonding%20electron
|
A non-bonding electron is an electron not involved in chemical bonding. This can refer to:
Lone pair, with the electron localized on one atom.
Non-bonding orbital, with the electron delocalized throughout the molecule.
Chemical bonding
|
https://en.wikipedia.org/wiki/Nephritis
|
Nephritis is inflammation of the kidneys and may involve the glomeruli, tubules, or interstitial tissue surrounding the glomeruli and tubules. It is one of several different types of nephropathy.
Types
Glomerulonephritis is inflammation of the glomeruli. Glomerulonephritis is often implied when using the term "nephritis" without qualification.
Interstitial nephritis (or tubulo-interstitial nephritis) is inflammation of the spaces between renal tubules.
Causes
Nephritis is often caused by infections and toxins, but it is most commonly caused by autoimmune disorders that affect major organs such as the kidneys.
Pyelonephritis is inflammation that results from a urinary tract infection that reaches the renal pelvis of the kidney.
Lupus nephritis is inflammation of the kidney caused by systemic lupus erythematosus (SLE), a disease of the immune system.
Athletic nephritis is nephritis resulting from strenuous exercise. Bloody urine after strenuous exercise may also result from march hemoglobinuria, which is caused by trauma to red blood cells, causing their rupture, which leads to the release of hemoglobin into the urine.
Mechanism
Nephritis can produce glomerular injury, by disturbing the glomerular structure with inflammatory cell proliferation. This can lead to reduced glomerular blood flow, leading to reduced urine output (oliguria) and retention of waste products (uremia). As a result, red blood cells may leak out of damaged glomeruli, causing blood to appear in the urine (hematuria).
Low renal blood flow activates the renin–angiotensin–aldosterone system (RAAS), causing fluid retention and mild hypertension. As the kidneys inflame, they begin to excrete needed protein from the affected individual's body into the urine stream. This condition is called proteinuria.
Loss of necessary protein due to nephritis can result in several life-threatening symptoms. The most serious complication of nephritis can occur if there is significant loss of the proteins tha
|
https://en.wikipedia.org/wiki/Terpenoid
|
The terpenoids, also known as isoprenoids, are a class of naturally occurring organic chemicals derived from the 5-carbon compound isoprene and its derivatives called terpenes, diterpenes, etc. While sometimes used interchangeably with "terpenes", terpenoids contain additional functional groups, usually containing oxygen. When combined with the hydrocarbon terpenes, terpenoids comprise about 80,000 compounds. They are the largest class of plant secondary metabolites, representing about 60% of known natural products. Many terpenoids have substantial pharmacological bioactivity and are therefore of interest to medicinal chemists.
Plant terpenoids are used for their aromatic qualities and play a role in traditional herbal remedies. Terpenoids contribute to the scent of eucalyptus, the flavors of cinnamon, cloves, and ginger, the yellow color in sunflowers, and the red color in tomatoes. Well-known terpenoids include citral, menthol, camphor, salvinorin A in the plant Salvia divinorum, ginkgolide and bilobalide found in Ginkgo biloba and the cannabinoids found in cannabis. The provitamin beta carotene is a terpene derivative called a carotenoid.
The steroids and sterols in animals are biologically produced from terpenoid precursors. Sometimes terpenoids are added to proteins, e.g., to enhance their attachment to the cell membrane; this is known as isoprenylation. Terpenoids play a role in plant defense as prophylaxis against pathogens and attractants for the predators of herbivores.
Structure and classification
Terpenoids are modified terpenes, wherein methyl groups have been moved or removed, or oxygen atoms added. Some authors use the term "terpene" more broadly, to include the terpenoids. Just like terpenes, the terpenoids can be classified according to the number of isoprene units that comprise the parent terpene:
Terpenoids can also be classified according to the type and number of cyclic structures they contain: linear, acyclic, monocyclic, bicyclic, tricycl
|
https://en.wikipedia.org/wiki/Homestake%20experiment
|
The Homestake experiment (sometimes referred to as the Davis experiment or Solar Neutrino Experiment and in original literature called Brookhaven Solar Neutrino Experiment or Brookhaven 37Cl (Chlorine) Experiment ) was an experiment headed by astrophysicists Raymond Davis, Jr. and John N. Bahcall in the late 1960s. Its purpose was to collect and count neutrinos emitted by nuclear fusion taking place in the Sun. Bahcall performed the theoretical calculations and Davis designed the experiment. After Bahcall calculated the rate at which the detector should capture neutrinos, Davis's experiment turned up only one third of this figure. The experiment was the first to successfully detect and count solar neutrinos, and the discrepancy in results created the solar neutrino problem. The experiment operated continuously from 1970 until 1994. The University of Pennsylvania took it over in 1984. The discrepancy between the predicted and measured rates of neutrino detection was later found to be due to neutrino "flavour" oscillations.
Methodology
The experiment took place in the Homestake Gold Mine in Lead, South Dakota. Davis placed a 380 cubic meter (100,000 gallon) tank of perchloroethylene, a common dry-cleaning fluid, 1,478 meters (4,850 feet) underground. A big target deep underground was needed to prevent interference from cosmic rays, taking into account the very small probability of a successful neutrino capture and, therefore, the very low event rate expected even with the huge mass of the target. Perchloroethylene was chosen because it is rich in chlorine. Upon interaction with an electron neutrino, a 37Cl atom transforms into a radioactive atom of argon, 37Ar, which can then be extracted and counted. The neutrino capture reaction is
νe + 37Cl → 37Ar + e−
The reaction threshold is 0.814 MeV, i.e. the neutrino should have at least this energy to be captured by the 37Cl nucleus.
Because 37Ar has a half-life of 35 days, every few weeks, Davis bubbled helium through the tank to collect the argon th
|
https://en.wikipedia.org/wiki/Abdus%20Salam%20Chair%20in%20Physics
|
The Abdus Salam Chair in Physics, also known as Salam Chair in Physics, is an academic physics research institute of the Government College University at Lahore, Punjab province of Pakistan. Named after Pakistan's only Nobel Laureate, Abdus Salam, the institute is partnered with the Pakistan Atomic Energy Commission (PAEC) and International Center for Theoretical Physics (ICTP). While it is a physics research institute, the institute is dedicated to the field of Theoretical and Mathematical physics.
The institute was established in 1999, after it was suggested by Ishfaq Ahmad, by the Government of Pakistan, led by the Prime Minister Nawaz Sharif. Its first director, who is designated as Salam Professor, was Dr. Ghulam Murtaza who was appointed in 1999. It also participated with the projects led by the Khan Research Laboratories (KRL) and the Pakistan Ministry of Science and Technology (MOST).
|
https://en.wikipedia.org/wiki/Tampa%20Bay%20Reforestation%20and%20Environmental%20Effort
|
Tampa Bay Reforestation and Environmental Effort, Inc. more commonly known as "T.R.E.E. Inc.", is a grassroots nonprofit environmental organization based out of the Tampa Bay Area. It promotes the practice of volunteers raising and then planting trees along the interstates, roadways, and parks of the greater Tampa Bay Area to beautify and preserve the environment. To date, T.R.E.E. Inc. has planted over 31,156 trees and palms.
Early years 1983-1987
The group was started to plant more trees in the Tampa Bay Area. On February 8, 1983, T.R.E.E. Inc. was incorporated under Florida law.
T.R.E.E. Inc.'s modus operandi for the first 22 years of their existence was to purchase bare root tree seedlings, grow them in 1-gallon containers for one growing season, step them up into 3-gallon containers for a second growing season, and then donate or out-plant them before their third growing season.
The kind of tree most commonly used during this period was the genetically improved or superior North Florida Slash Pine (Pinus elliottii var. "elliottii"). It was selected due to its adaptability, rapid growth, and relative ease of maintenance after establishment. Hardwood trees during that time were typically purchased as 4" potted seedlings. Varieties typically used were Sweetgum (Liquidambar styraciflua), Pignut Hickory (Carya glabra), and Loblolly Bay (Gordonia lasianthus).
Transition years 1988–1989
In January 1988, William Moriaty stepped down as president so that he could relocate to Gainesville, Florida with his late wife, Karen Cashon, as she would be attending the University of Florida later that year. As a result, an almost entirely new slate of directors served from 1988 to 1989. Vice President Bob Scheible was the only founding member to serve during these two years in the same capacity that he did at the organization's creation.
Major program initiatives, 1990–2004
The genetically improved or superior North Florida Slash Pine began to lose favo
|
https://en.wikipedia.org/wiki/Kelihos%20botnet
|
The Kelihos botnet, also known as Hlux, is a botnet mainly involved in spamming and the theft of bitcoins.
History
The Kelihos botnet was first discovered around . Researchers originally suspected having found a new version of either the Storm or Waledac botnet, due to similarities in the modus operandi and source code of the bot, but analysis of the botnet showed it was instead a new, 45,000-infected-computer-strong, botnet that was capable of sending an estimated spam messages a day. In Microsoft took down the botnet in an operation codenamed "Operation b79". At the same time, Microsoft filed civil charges against Dominique Alexander Piatti, dotFREE Group SRO and 22 John Doe defendants for suspected involvement in the botnet for issuing 3,700 subdomains that were used by the botnet. These charges were later dropped when Microsoft determined that the named defendants did not intentionally aid the botnet controllers.
In January 2012 a new version of the botnet was discovered, one sometimes referred to as Kelihos.b or Version 2, consisting of an estimated 110,000 infected computers. During this same month Microsoft pressed charges against Russian citizen Andrey Sabelnikov, a former IT security professional, for being the alleged creator of the Kelihos botnet source code. The second version of the botnet was itself shut down in by several privately owned firms by sinkholing it – a technique which gave the companies control over the botnet while cutting off the original controllers.
Following the shutdown of the second version of the botnet, a new version surfaced as early as 2 April, though there is some disagreement between research groups whether the botnet is simply the remnants of the disabled Version 2 botnet, or a new version altogether. This version of the botnet currently consists of an estimated 70,000 infected computers. The Kelihos.c version mostly infects computers through Facebook by sending users of the website malicious download links. Once
|
https://en.wikipedia.org/wiki/Stack%20search
|
Stack search (also known as Stack decoding algorithm) is a search algorithm similar to beam search. It can be used to explore tree-structured search spaces and is often employed in Natural language processing applications, such as parsing of natural languages, or for decoding of error correcting codes where the technique goes under the name of sequential decoding.
Stack search keeps a list of the best n candidates seen so far. These candidates are incomplete solutions to the search problems, e.g. partial parse trees. It then iteratively expands the best partial solution, putting all resulting partial solutions onto the stack and then trimming the resulting list of partial solutions to the top n candidates, until a real solution (i.e. complete parse tree) has been found.
Stack search is not guaranteed to find the optimal solution to the search problem. The quality of the result depends on the quality of the search heuristic.
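A minimal sketch of the procedure just described (the scoring convention — higher is better — and the helper names are assumptions of this sketch, not part of any particular implementation):

def stack_search(start, expand, is_complete, n=10):
    # The "stack" holds the best n partial solutions seen so far,
    # each stored as a (score, partial_solution) pair; higher score is better.
    stack = [(0.0, start)]
    while stack:
        # Take the best partial solution off the stack.
        stack.sort(key=lambda item: item[0], reverse=True)
        score, best = stack.pop(0)
        if is_complete(best):
            return best          # first complete solution found
        # Expand it and put the resulting partial solutions back...
        stack.extend((score + step, nxt) for step, nxt in expand(best))
        # ...then trim the list to the top n candidates.
        stack = sorted(stack, key=lambda item: item[0], reverse=True)[:n]
    return None                  # search space exhausted without a solution

As the article notes, the result is only as good as the heuristic scores supplied by the expansion function.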
|
https://en.wikipedia.org/wiki/Saturation%20%28magnetic%29
|
Seen in some magnetic materials, saturation is the state reached when an increase in applied external magnetic field H cannot increase the magnetization of the material further, so the total magnetic flux density B more or less levels off. (Though, magnetization continues to increase very slowly with the field due to paramagnetism.) Saturation is a characteristic of ferromagnetic and ferrimagnetic materials, such as iron, nickel, cobalt and their alloys. Different ferromagnetic materials have different saturation levels.
Description
Saturation is most clearly seen in the magnetization curve (also called BH curve or hysteresis curve) of a substance, as a bending to the right of the curve (see graph at right). As the H field increases, the B field approaches a maximum value asymptotically, the saturation level for the substance. Technically, above saturation, the B field continues increasing, but at the paramagnetic rate, which is several orders of magnitude smaller than the ferromagnetic rate seen below saturation.
The relation between the magnetizing field H and the magnetic field B can also be expressed as the magnetic permeability μ = B/H, or the relative permeability μr = μ/μ0, where μ0 is the vacuum permeability. The permeability of ferromagnetic materials is not constant, but depends on H. In saturable materials the relative permeability increases with H to a maximum, then as the material approaches saturation it inverts and decreases toward one.
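As a quick numerical illustration of these relations (the operating point below is an assumed example, broadly typical of transformer iron well below saturation):

import math

mu_0 = 4 * math.pi * 1e-7     # vacuum permeability, in H/m
H = 500.0                     # assumed magnetizing field, in A/m
B = 1.0                       # assumed flux density at that field, in T

mu = B / H                    # absolute permeability, B/H
mu_r = mu / mu_0              # relative permeability, dimensionless
print(f"mu = {mu:.2e} H/m, mu_r ~ {mu_r:.0f}")   # roughly 1600 here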
Different materials have different saturation levels. For example, high-permeability iron alloys used in transformers reach magnetic saturation at 1.6–2.2 teslas (T), whereas ferrites saturate at 0.2–0.5 T. Some amorphous alloys saturate at 1.2–1.3 T. Mu-metal saturates at around 0.8 T.
Explanation
Ferromagnetic materials (like iron) are composed of microscopic regions called magnetic domains, that act like tiny permanent magnets that can change their direction of magnetization. Before an external magnetic field is applied to the material,
|
https://en.wikipedia.org/wiki/Reverse%20semantic%20traceability
|
Reverse semantic traceability (RST) is a quality control method for verification improvement that helps to ensure high quality of artifacts by backward translation at each stage of the software development process.
Brief introduction
Each stage of the development process can be treated as a series of “translations” from one language to another. At the very beginning, a project team deals with the customer’s requirements and expectations expressed in natural language. These customer requirements can sometimes be incomplete, vague or even contradictory. The first step is the specification and formalization of customer expectations, their transition (“translation”) into a formal requirement document for the future system. Then the requirements are translated into a system architecture, and step by step the project team produces code written in a very formal programming language. There is always a risk of inserting mistakes, misinterpreting or losing something during the translation. Even a small defect in requirement or design specifications can cause a huge number of defects at the late stages of the project. Sometimes such misunderstandings can lead to project failure or complete customer dissatisfaction.
The highest usage scenarios of Reverse Semantic Traceability method can be:
Validating UML models: quality engineers restore a textual description of a domain, original and restored descriptions are compared.
Validating model changes for a new requirement: given an original and changed versions of a model, quality engineers restore the textual description of the requirement, original and restored descriptions are compared.
Validating a bug fix: given an original and modified source code, quality engineers restore a textual description of the bug that was fixed, original and restored descriptions are compared.
Integrating new software engineer into a team: a new team member gets an assignment to do Reverse Semantic Traceability for the key artifacts from the
|
https://en.wikipedia.org/wiki/APA%27s%20Diamond%20NN%20Cannery
|
The Alaska Packer's Association Diamond NN Cannery located at the mouth of the Naknek River (Bristol Bay) in Naknek, Alaska operated between 1890 and 2015. In 2020, the cannery site was formally nominated for inclusion on the National Register of Historic Places and in 2021 the nomination was forwarded by the Alaska Historical Commission for national listing consideration. It was listed in August 2021.
An exhibit based on the history of the cannery, called "Mug Up: The Language of Work", opens at the Alaska State Museum in Juneau, AK in February 2022. For cannery workers, "Mug Up" meant a coffee break.
|
https://en.wikipedia.org/wiki/Prism%20graph
|
In the mathematical field of graph theory, a prism graph is a graph that has one of the prisms as its skeleton.
Examples
The individual graphs may be named after the associated solid:
Triangular prism graph – 6 vertices, 9 edges
Cubical graph – 8 vertices, 12 edges
Pentagonal prism graph – 10 vertices, 15 edges
Hexagonal prism graph – 12 vertices, 18 edges
Heptagonal prism graph – 14 vertices, 21 edges
Octagonal prism graph – 16 vertices, 24 edges
...
Although geometrically the star polygons also form the faces of a different sequence of (self-intersecting and non-convex) prismatic polyhedra, the graphs of these star prisms are isomorphic to the prism graphs, and do not form a separate sequence of graphs.
Construction
Prism graphs are examples of generalized Petersen graphs, with parameters GP(n,1).
They may also be constructed as the Cartesian product of a cycle graph with a single edge.
As with many vertex-transitive graphs, the prism graphs may also be constructed as Cayley graphs. The order-n dihedral group is the group of symmetries of a regular n-gon in the plane; it acts on the n-gon by rotations and reflections. It can be generated by two elements, a rotation by an angle of 2π/n and a single reflection, and its Cayley graph with this generating set is the prism graph. Abstractly, the group has the presentation ⟨r, f | r^n, f^2, (rf)^2⟩ (where r is a rotation and f is a reflection or flip) and the Cayley graph has r and f (or r, r−1, and f) as its generators.
The n-gonal prism graphs with odd values of n may be constructed as circulant graphs .
However, this construction does not work for even values of n.
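A short sketch of the Cartesian-product construction (the vertex numbering is a choice of this sketch): vertices 0..n−1 form one copy of the cycle, n..2n−1 the other, and each vertex is joined to its copy.

def prism_graph_edges(n):
    """Edge list of the n-gonal prism graph: two n-cycles plus n 'rungs'."""
    if n < 3:
        raise ValueError("need n >= 3 for an n-gonal prism")
    edges = []
    for i in range(n):
        j = (i + 1) % n
        edges.append((i, j))            # edge of the first cycle
        edges.append((i + n, j + n))    # corresponding edge of the second cycle
        edges.append((i, i + n))        # rung joining the two cycles
    return edges

# Triangular prism graph: 6 vertices and 9 edges, matching the list above.
assert len(prism_graph_edges(3)) == 9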
Properties
The graph of an n-gonal prism has 2n vertices and 3n edges. They are regular, cubic graphs.
Since the prism has symmetries taking each vertex to each other vertex, the prism graphs are vertex-transitive graphs.
As polyhedral graphs, they are also 3-vertex-connected planar graphs. Every prism graph has a Hamiltonian cycle.
Among all biconnected cubic graphs
|
https://en.wikipedia.org/wiki/Boilermaker%20Special
|
The Boilermaker Special is the official mascot of Purdue University in West Lafayette, Indiana. It resembles a Victorian-era railroad locomotive and is built on a truck chassis. It is operated and maintained by the student members of the Purdue Reamer Club. It is often incorrectly assumed that Purdue Pete is the official mascot of the university.
Inspiration for the Boilermaker Special
Purdue University is a land-grant university (or Agricultural and Mechanical (A&M) university) created through the Morrill Act of 1862. In the 1890s, Purdue became a leader in the research of railway technology. For many years Purdue operated the "Schenectady No. 1", "Schenectady No. 2", and the "Vauclain" on a dynamometer in an engineering laboratory on the West Lafayette campus. The Schenectady was a 4-4-0 type steam locomotive manufactured by the Baldwin Locomotive Works of Philadelphia, Pennsylvania. It was a classic Victorian-era design similar in construction to the Western and Atlantic Railroad No. 3 (see The General (locomotive) on display at the Southern Museum of Civil War and Locomotive History). Purdue even operated its own railroad to connect the campus powerplant to a main rail line. The American Railway Association Building, which stands on the West Lafayette campus to the southwest of the Mechanical Engineering Building, is one of the few remaining vestiges of the railroad testing which occurred on the campus. It was constructed in 1926 to test railroad car draft gears. Locomotive research ended in 1938 when the Vauclain's boiler was declared unsafe and the dynamometer was decommissioned.
Creation of the mascot
For many years Purdue did not have a mascot. In 1939, Purdue pharmacy student Israel Selkowitz suggested the school adopt an official mascot to represent Purdue's engineering heritage. He originally proposed a "mechanical man". After much debate, it was decided to build a locomotive on an automobile chassis. This choice allowed the mascot to build
|
https://en.wikipedia.org/wiki/Ringing%20%28signal%29
|
In electronics, signal processing, and video, ringing is oscillation of a signal, particularly in the step response (the response to a sudden change in input). Often ringing is undesirable, but not always, as in the case of resonant inductive coupling. It is also known as hunting. It is closely related to overshoot, often occurring as a damped oscillation following an overshoot or undershoot, and thus the terms are at times conflated.
It is also known as ripple, particularly in electricity or in frequency domain response.
Electricity
In electrical circuits, ringing is an unwanted oscillation of a voltage or current. It happens when an electrical pulse causes the parasitic capacitances and inductances in the circuit (i.e. those that are not part of the design, but just by-products of the materials used to construct the circuit) to resonate at their characteristic frequency. Ringing artifacts are also present in square waves; see Gibbs phenomenon.
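For example, a parasitic inductance and capacitance resonate at the characteristic frequency f = 1/(2π√(LC)); the component values below are assumed purely for illustration:

import math

L = 20e-9    # assumed parasitic inductance, 20 nH
C = 5e-12    # assumed parasitic capacitance, 5 pF
f = 1.0 / (2 * math.pi * math.sqrt(L * C))
print(f"ringing frequency ~ {f / 1e6:.0f} MHz")   # about 503 MHz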
Ringing is undesirable because it causes extra current to flow, thereby wasting energy and causing extra heating of the components; it can cause unwanted electromagnetic radiation to be emitted; it can delay arrival at a desired final state (increase settling time); and it may cause unwanted triggering of bistable elements in digital circuits. Ringy communications circuits may suffer falsing.
Ringing can be due to signal reflection, in which case it may be minimized by impedance matching.
Video
In video circuits, electrical ringing causes closely spaced repeated ghosts of a vertical or diagonal edge where dark changes to light or vice versa, going from left to right. In a CRT, upon a change from dark to light or vice versa, the electron beam overshoots and undershoots a few times instead of changing quickly to the desired intensity and staying there. This bouncing could occur anywhere in the electronics or cabling and is often caused or accentuated by too high a setting of the sharpness control.
Audio
Ringing can affect
|
https://en.wikipedia.org/wiki/Biometric%20passport
|
A biometric passport (also known as an electronic passport, e-passport or a digital passport) is a traditional passport that has an embedded electronic microprocessor chip, which contains biometric information that can be used to authenticate the identity of the passport holder. It uses contactless smart card technology, including a microprocessor chip (computer chip) and antenna (for both power to the chip and communication) embedded in the front or back cover, or centre page, of the passport. The passport's critical information is printed on the data page of the passport, repeated on the machine readable lines and stored in the chip. Public key infrastructure (PKI) is used to authenticate the data stored electronically in the passport chip, supposedly making it expensive and difficult to forge when all security mechanisms are fully and correctly implemented.
Many countries are moving towards issuing biometric passports to their citizens. Malaysia was the first country to issue biometric passports in 1998. In December 2008, 60 countries were issuing such passports, which increased to over 150 by mid-2019.
The currently standardised biometrics used for this type of identification system are facial recognition, fingerprint recognition, and iris recognition. These were adopted after assessment of several different kinds of biometrics including retinal scan. Document and chip characteristics are documented in the International Civil Aviation Organization's (ICAO) Doc 9303 (ICAO 9303). The ICAO defines the biometric file formats and communication protocols to be used in passports. Only the digital image (usually in JPEG or JPEG 2000 format) of each biometric feature is actually stored in the chip. The comparison of biometric features is performed outside the passport chip by electronic border control systems (e-borders). To store biometric data on the contactless chip, it includes a minimum of 32 kilobytes of EEPROM storage memory, and runs on an interface in accordan
|
https://en.wikipedia.org/wiki/Nature-based%20solutions
|
Nature-based solutions (NBS) is the sustainable management and use of natural features and processes to tackle socio-environmental issues.
These issues include climate change (mitigation and adaptation), water security, water pollution, food security, human health, biodiversity loss, and disaster risk management. The European Commission's definition of NBS states that these solutions are "inspired and supported by nature, which are cost-effective, simultaneously provide environmental, social and economic benefits and help build resilience. Such solutions bring more, and more diverse, nature and natural features and processes into cities, landscapes, and seascapes, through locally adapted, resource-efficient and systemic interventions". In 2020, the EC definition was updated to further emphasise that "Nature-based solutions must benefit biodiversity and support the delivery of a range of ecosystem services." Through the use of NBS healthy, resilient, and diverse ecosystems (whether natural, managed, or newly created) can provide solutions for the benefit of both societies and overall biodiversity.
For instance, the restoration and/or protection of mangroves along coastlines utilizes a Nature-based solution to accomplish several goals. Mangroves moderate the impact of waves and wind on coastal settlements or cities and sequester CO2. They also provide nursery zones for marine life that can be the basis for sustaining fisheries on which local populations may depend. Additionally, mangrove forests can help to control coastal erosion resulting from sea level rise. Similarly, green roofs or walls are Nature-based solutions that can be implemented in cities to moderate the impact of high temperatures, capture storm water, abate pollution, and act as carbon sinks, while simultaneously enhancing biodiversity.
Conservation approaches and environmental management initiatives have been carried out for decades. More recently, progress has been made in better articulating the
|
https://en.wikipedia.org/wiki/Shelly%20Knotts
|
Shelly Knotts is a composer, performer and improvisor of live electronic, live coded and network music based in Newcastle upon Tyne, England. She performs internationally, often using Live coding techniques, and a range of styles including Noise, Drone and Algorave.
She often collaborates on performance, including a PRS for Music commission with Annie Mahtani, an audio/visual collaboration Sisesta Pealkiri with Alo Allik, uiaesk! with Holger Ballweg, Algobabez with Joanne Armitage, and as part of the Birmingham Laptop Ensemble. Her work often has a political dimension, using network music to explore social structures, and live coding to explore failure as an alternative to virtuosity, as well as exploring and encouraging diversity through workshops and hackathons. Knotts has also engaged with computer science in schools, through a Sonic Pi commission and BBC Live lesson.
Knotts is also active in event curation, including organising several Algorave events in Newcastle, three editions of the international Network Music Festival, chairing the Live Coding and Collaboration symposium in 2014, and chairing the artistic programme of the International Conference on Live coding in 2015. She was recognised as part of the Sound and Music New Voices cohort in 2014-2015, which aims to raise the profile for artists who exist outside of the support of commercial publishers or record companies, although she has been published by Leonardo Music Journal, ChordPunch, and Absenceofwax. In 2018 she completed a PhD in Live Computer Music at Durham University, supervised by Nick Collins and Peter Manning, with funding from the Department of Music and Hatfield College. She is currently a postdoctoral associate at the same institution.
|
https://en.wikipedia.org/wiki/Cosmological%20lithium%20problem
|
In astronomy, the lithium problem or lithium discrepancy refers to the discrepancy between the primordial abundance of lithium as inferred from observations of metal-poor (Population II) halo stars in our galaxy and the amount that should theoretically exist due to Big Bang nucleosynthesis+WMAP cosmic baryon density predictions of the CMB. Namely, the most widely accepted models of the Big Bang suggest that three times as much primordial lithium, in particular lithium-7, should exist. This contrasts with the observed abundance of isotopes of hydrogen (1H and 2H) and helium (3He and 4He) that are consistent with predictions. The discrepancy is highlighted in a so-called "Schramm plot", named in honor of astrophysicist David Schramm, which depicts these primordial abundances as a function of cosmic baryon content from standard BBN predictions.
Origin of lithium
Minutes after the Big Bang, the universe was made almost entirely of hydrogen and helium, with trace amounts of lithium and beryllium, and negligibly small abundances of all heavier elements.
Lithium synthesis in the Big Bang
Big Bang nucleosynthesis produced both lithium-7 and beryllium-7, and indeed the latter dominates the primordial synthesis of mass 7 nuclides. On the other hand, the Big Bang produced lithium-6 at levels more than 1000 times smaller.
⁷Be later decayed via electron capture (half-life 53.22 days) into ⁷Li, so that the observable primordial lithium abundance essentially sums the primordial ⁷Li and the radiogenic ⁷Li produced by the decay of ⁷Be.
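Written out explicitly, the decay referred to above is the standard electron-capture reaction of beryllium-7 (textbook nuclear physics rather than anything specific to this article):
⁷Be + e⁻ → ⁷Li + ν_e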
These isotopes are produced by the reactions
³H + ⁴He → ⁷Li + γ
³He + ⁴He → ⁷Be + γ
and destroyed by
⁷Li + p → ⁴He + ⁴He
⁷Be + n → ⁷Li + p
The amount of lithium generated in the Big Bang can be calculated. Hydrogen-1 is the most abundant nuclide, comprising roughly 92% of the ato
|
https://en.wikipedia.org/wiki/VoIP%20vulnerabilities
|
VoIP vulnerabilities are weaknesses in the VoIP protocol or its implementations that expose users to privacy violations and other problems. VoIP is a group of technologies that enable voice calls online. VoIP contains similar vulnerabilities to those of other internet use.
Risks are not usually mentioned to potential customers. VoIP provides no specific protections against fraud and illicit practices.
Vulnerabilities
Eavesdropping
Unencrypted connections are vulnerable to security breaches. Hackers/trackers can eavesdrop on conversations and extract valuable data.
Network attacks
Attacks on the user network or internet provider can disrupt or destroy the connection. Since VoIP requires an internet connection, direct attacks on the internet connection, or provider, can be effective. Such attacks target office telephony. Mobile applications that do not rely on an internet connection to make calls are immune to such attacks.
Default security settings
VoIP phones are smart devices that need to be configured. In some cases, Chinese manufacturers are using default passwords that lead to vulnerabilities.
VOIP over Wi-Fi
While VoIP is relatively secure, it still needs a source of internet, which is often a Wi-Fi network, making VoIP subject to Wi-Fi vulnerabilities.
Exploits
Spam
VoIP is subject to spam called SPIT (Spam over Internet Telephony). Using the extensions provided by VoIP PBX capabilities, the spammer can harass their target from different numbers. The process can be automated and can fill the target's voice mail with notifications. The spammer can make calls often enough to block the target from getting important calls.
Phishing
VoIP users can change their Caller ID (a.k.a. Caller ID spoofing), allowing a caller to pose as a relative or colleague in order to extract information, money or benefits from the target.
See also
Comparison of VoIP software
INVITE of Death
List of VoIP companies
Mobile communications over IP - Mobile VoIP
Voice over WLA
|
https://en.wikipedia.org/wiki/Christ%20Community%20Church
|
Christ Community Church in Zion, Illinois, formerly the Christian Catholic Church or Christian Catholic Apostolic Church, is an evangelical non-denominational church founded in 1896 by John Alexander Dowie. The city of Zion was founded by Dowie as a religious community to establish a society on the principles of the Kingdom of God. Members are sometimes called Zionites (not to be confused with the German Zionites).
Over the years there have been many changes to the church founded by John Alexander Dowie. He was a popular faith healer and started the church and the Zion community with utopian ideals. Under Wilbur Glenn Voliva, Dowie's successor, the church was noted for its adherence to a flat earth cosmology. The succession of pastors after Voliva moved the church towards mainstream Protestant doctrine.
In the early 20th century, the Christian Catholic Church had worldwide appeal. The church's magazine, The Leaves of Healing, was distributed in the U.S., Australia, Europe, and southern Africa. At its height, Dowie's movement had some 20,000 adherents. The Zionist Churches of southern Africa trace their spiritual heritage back to Dowie and the Christian Catholic Church. Because of Dowie's emphasis on faith healing and restorationism the church is considered a forerunner of Pentecostalism.
The name Christian Catholic Church is still used for Christ Community Church's worldwide fellowship of churches and mission work. As of 2008, it has about 3,000 members in the United States and Canada. Missionary work is conducted in Japan, Philippines, Guyana, Palestine, Indonesia, and the Navajo Nation. Missionary work continues among the African Zionists under the banner of Zion Evangelical Ministries of Africa (ZEMA). ZEMA's goal is to convert the African Zionists from syncretism to mainstream Christian theology.
John Alexander Dowie
John Alexander Dowie was born in Edinburgh, Scotland, May 25, 1847, to an evangelical family. The family emigrated to Australia in 1860, with
|
https://en.wikipedia.org/wiki/Biological%20oceanography
|
Biological oceanography is the study of how organisms affect and are affected by the physics, chemistry, and geology of the oceanographic system. Biological oceanography may also be referred to as ocean ecology, the root word of ecology being oikos (οἶκος), meaning ‘house’ or ‘habitat’ in Greek. It is therefore no surprise that the main focus of biological oceanography is on the microorganisms within the ocean: how they are affected by their environment and how that affects larger marine creatures and their ecosystem. Biological oceanography is similar to marine biology, but differs in the perspective used to study the ocean. Biological oceanography takes a bottom-up approach (in terms of the food web), while marine biology studies the ocean from a top-down perspective. Biological oceanography mainly focuses on the ecosystem of the ocean with an emphasis on plankton: their diversity (morphology, nutritional sources, motility, and metabolism); their productivity and how that plays a role in the global carbon cycle; and their distribution (predation and life cycle).
History
In 325 BC, Pytheas of Massalia, a Greek geographer, explored much of the coast of England and Norway and developed the means of determining latitude from the declination of the North Star. His account of tides is also one of the earliest accounts that suggest a relationship between them and the moon. This relationship was later developed by English monk Bede in De Temporum Ratione (The Reckoning of Time) around 700 AD.
Understanding the ocean began with the general exploration and voyaging for trade. Some notable events closer to our time, include Prince Henry the Navigator’s ocean exploration in the 1400s. In 1513, Ponce de Leon described the Florida Current. In 1674, Robert Boyle investigated the relationship between salinity, temperature, and pressure in the depths of the ocean. Captain James Cook’s voyages were responsible for the extensive da
|
https://en.wikipedia.org/wiki/Varignon%27s%20theorem
|
In Euclidean geometry, Varignon's theorem holds that the midpoints of the sides of an arbitrary quadrilateral form a parallelogram, called the Varignon parallelogram. It is named after Pierre Varignon, whose proof was published posthumously in 1731.
Theorem
The midpoints of the sides of an arbitrary quadrilateral form a parallelogram. If the quadrilateral is convex or concave (not complex), then the area of the parallelogram is half the area of the quadrilateral.
If one introduces the concept of oriented areas for n-gons, then this area equality also holds for complex quadrilaterals.
The Varignon parallelogram exists even for a skew quadrilateral, and is planar whether the quadrilateral is planar or not. The theorem can be generalized to the midpoint polygon of an arbitrary polygon.
Proof
Referring to the diagram above, triangles ADC and HDG are similar by the side-angle-side criterion, so angles DAC and DHG are equal, making HG parallel to AC. In the same way EF is parallel to AC, so HG and EF are parallel to each other; the same holds for HE and GF.
Varignon's theorem can also be proved as a theorem of affine geometry organized as linear algebra with the linear combinations restricted to coefficients summing to 1, also called affine or barycentric coordinates. The proof applies even to skew quadrilaterals in spaces of any dimension.
Any three points E, F, G are completed to a parallelogram (lying in the plane containing E, F, and G) by taking its fourth vertex to be E − F + G. In the construction of the Varignon parallelogram this is the point (A + B)/2 − (B + C)/2 + (C + D)/2 = (A + D)/2. But this is the point H in the figure, whence EFGH forms a parallelogram.
In short, the centroid of the four points A, B, C, D is the midpoint of each of the two diagonals EG and FH of EFGH, showing that the midpoints coincide.
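The affine identity above is easy to check numerically. The following short sketch (a minimal illustration with arbitrary, randomly chosen vertices; nothing here is taken from the original article) verifies that E − F + G = H and that the diagonals of EFGH share the centroid of A, B, C, D, even for a skew quadrilateral in three dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four arbitrary vertices of a (generally skew) quadrilateral in 3D.
A, B, C, D = rng.normal(size=(4, 3))

# Midpoints of the sides AB, BC, CD and DA.
E = (A + B) / 2
F = (B + C) / 2
G = (C + D) / 2
H = (D + A) / 2

# Varignon: EFGH is a parallelogram, i.e. E - F + G = H,
# equivalently the side vectors EF and HG coincide.
assert np.allclose(E - F + G, H)
assert np.allclose(F - E, G - H)

# The diagonals EG and FH share a midpoint: the centroid of A, B, C, D.
assert np.allclose((E + G) / 2, (F + H) / 2)
assert np.allclose((E + G) / 2, (A + B + C + D) / 4)
print("Varignon parallelogram verified")
```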
From the first proof, one can see that the sum of the diagonals is equal to the perimeter of the parallelogram formed. Also, we can use vector
|
https://en.wikipedia.org/wiki/ImmTAC
|
ImmTACs (Immune mobilising monoclonal T-cell receptors Against Cancer) are a class of bispecific biological drug being investigated for the treatment of cancer and viral infections which combines engineered cancer-recognizing TCRs with immune activating complexes. ImmTACs target cancerous or virally infected cells through binding human leukocyte antigen (HLA) presented peptide antigens and redirect the host's cytotoxic T cells to recognise and kill them.
ImmTACs are fusion proteins that combine an engineered T Cell Receptor (TCR) based targeting system with a single chain antibody fragment (scFv) effector function. TCRs, like antibodies, constitute an important antigen recognition system within the immune system; but, whereas antibodies are restricted to targeting cell surface or secreted proteins TCRs can recognise peptides derived from intracellular targets presented by human leukocyte antigen (HLA). Naturally occurring TCRs are low affinity (0.18-387 micromolar range) 2-chain membrane receptors expressed on the surface of T cells. To produce stable, soluble, high affinity TCRs capable of being used as diagnostics and therapeutics the two TCR protein chains are stabilised through the introduction of a novel disulphide bond between the 2 constant domains and the affinity increased 1-5 million fold to low picomolar values through phage display affinity maturation. To provide the soluble, affinity enhanced TCR with a biological effector function the beta chain of the TCR is fused to an scFv antibody fragment specific for the CD3 T cell co-receptor, creating an ImmTAC. The molecular weight of an ImmTAC molecule is ~75kDa.
Mechanism of action
ImmTACs exert their activity through T cell redirection, a mechanism of action used by several other bi-specific biologics such as the Bi-specific T-cell engagers (BiTEs). After administration of the drug the picomolar affinity TCR portion of the ImmTAC binds to the cancerous or virally infected cell through specific reco
|
https://en.wikipedia.org/wiki/Bob%20O.%20Evans
|
Bob Overton Evans (August 19, 1927 – September 2, 2004), also known as "Boe" Evans, was an American computer pioneer and corporate executive at IBM (International Business Machines). He led the groundbreaking development of compatible computers that changed the industry.
Early life and education
Evans was born in Grand Island, Nebraska. In 1951, after earning an engineering degree from Iowa State University, he joined IBM as a junior engineer.
Career
Bob O. Evans joined IBM in a low-level engineering position in 1951, as the company was developing a new range of "computers" based on vacuum tubes (earlier IBM computers used mechanical switches). A natural and very capable manager, he moved up the company hierarchy to the position of vice president (development) in the Data Systems division in 1962. This was apparently created as a position in which he had responsibility for the development of "System/360", a merger of IBM's separate scientific and business computing systems.
In the early 1960s, Evans persuaded IBM’s chairman, Thomas J. Watson Jr., to discontinue the company’s development of a hodgepodge of incompatible computers and instead to embark on the development of a single product line of general-purpose, compatible computers. Until then, researchers thought that the fields of scientific computing and commercial data processing each required their own type of special-purpose computer. Compatibility would ensure that the same software could run on any model of the product line, avoiding a re-programming of software.
Evans had overall responsibility for the hardware and software development of what was announced on April 7, 1964, as the IBM System/360 product line, with six models (later gradually expanded to 18 models) and a performance range factor of 50. IBM – in 1964 a company with an annual revenue of $3.2 billion – invested more than $5 billion in engineering, factories and equipment to develop and manufacture System/360, opening five plants and hiring 60,000 e
|
https://en.wikipedia.org/wiki/Traction%20force%20microscopy
|
Traction force microscopy (TFM) is an experimental method for determining the tractions on the surface of a biological cell by obtaining measurements of the surrounding displacement field within an in vitro extracellular matrix (ECM).
Overview
The dynamic mechanical behavior of cell-ECM and cell-cell interactions is known to influence a vast range of cellular functions, including necrosis, differentiation, adhesion, migration, locomotion, and growth. TFM utilizes experimentally observed ECM displacements to calculate the traction, or stress vector, at the surface of a cell.
Before TFM, efforts to observe cellular tractions relied on seeding cells onto silicone rubber substrata that wrinkled around them; however, accurate quantification of the tractions with such a technique is difficult due to the nonlinear and unpredictable behavior of the wrinkling. Several years later, the terminology TFM was introduced to describe a more advanced computational procedure that was created to convert measurements of substrate deformation into estimated traction stresses.
General Methodology
In conventional TFM, cellular cultures are seeded on, or within, an optically transparent 3D ECM embedded with fluorescent microspheres (typically latex beads with diameters ranging from 0.2-1 μm). A wide range of natural and synthetic hydrogels can be used for this purpose, with the prerequisite that mechanical behavior of the material is well characterized, and the hydrogel is capable of maintaining cellular viability. The cells will exert their own forces into this substrate which will consequently displace the beads in the surrounding ECM. In some studies, a detergent, enzyme, or drug is used to disturb the cytoskeleton, thereby altering, or sometimes completely eliminating, the tractions generated by the cell.
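The bead-displacement measurement at the heart of the procedure described below can be illustrated with a minimal, self-contained sketch. The function here estimates the integer-pixel shift of a small image window by locating the peak of an FFT-based cross-correlation; this is only an illustrative PIV-style building block (the window size, synthetic data and function name are invented), not the specific algorithm of any particular TFM implementation:

```python
import numpy as np

def window_displacement(ref, cur):
    """Estimate the integer-pixel shift of `cur` relative to `ref`
    using FFT-based cross-correlation."""
    # Zero-mean the windows so the correlation peak reflects texture, not brightness.
    ref = ref - ref.mean()
    cur = cur - cur.mean()
    # Circular cross-correlation via the Fourier transform.
    corr = np.fft.ifft2(np.fft.fft2(cur) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Convert the peak location to a signed shift (FFT wrap-around convention).
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, ref.shape)]
    return np.array(shift)  # (row, col) displacement in pixels

# Synthetic demo: a random "bead" texture shifted by (3, -2) pixels.
rng = np.random.default_rng(1)
reference = rng.random((64, 64))
deformed = np.roll(reference, shift=(3, -2), axis=(0, 1))
print(window_displacement(reference, deformed))   # -> [ 3 -2]
```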
First, a continuous displacement field is computed from a pair of images: the first image being the reference configuration of microspheres surrounding an isolated cell, and the second image being the same isola
|
https://en.wikipedia.org/wiki/Stopping%20power
|
Stopping power is the ability of a weapon – typically a ranged weapon such as a firearm – to cause a target (human or animal) to be incapacitated or immobilized. Stopping power contrasts with lethality in that it pertains only to a weapon's ability to make the target cease action, regardless of whether or not death ultimately occurs. Which ammunition cartridges have the greatest stopping power is a much-debated topic.
Stopping power is related to the physical properties and terminal behavior of the projectile (bullet, shot, or slug), the biology of the target, and the wound location, but the issue is complicated and not easily studied. Although higher-caliber ammunition usually has greater muzzle energy and momentum, and has thus traditionally been widely associated with higher stopping power, the physics involved are multifactorial, with caliber, muzzle velocity, bullet mass, bullet shape and bullet material all contributing to the ballistics.
Despite much disagreement, the most popular theory of stopping power is that it is usually caused not by the force of the bullet but by the wounding effects of the bullet, which are typically a rapid loss of blood causing a circulatory failure, which leads to impaired motor function and/or unconsciousness. The "Big Hole School" and the principles of penetration and permanent tissue damage are in line with this way of thinking. The other prevailing theories focus more on the energy of the bullet and its effects on the nervous system, including hydrostatic shock and energy transfer, which is similar to kinetic energy deposit.
History
The concept of stopping power appeared in the tail end of the 19th century when colonial troops (including American troops in the Philippines during the Moro Rebellion, and British soldiers during the New Zealand Wars) at close quarters found that their pistols were not able to stop charging native tribesmen. This led to the introduction or reintroduction of larger caliber weapons (such as the
|
https://en.wikipedia.org/wiki/Electromigration
|
Electromigration is the transport of material caused by the gradual movement of the ions in a conductor due to the momentum transfer between conducting electrons and diffusing metal atoms. The effect is important in applications where high direct current densities are used, such as in microelectronics and related structures. As the structure size in electronics such as integrated circuits (ICs) decreases, the practical significance of this effect increases.
History
The phenomenon of electromigration has been known for over 100 years, having been discovered by the French scientist Gerardin. The topic first became of practical interest during the late 1960s when packaged ICs first appeared. The earliest commercially available ICs failed in a mere three weeks of use from runaway electromigration, which led to a major industry effort to correct this problem. The first observation of electromigration in thin films was made by I. Blech. Research in this field was pioneered by a number of investigators throughout the fledgling semiconductor industry. One of the most important engineering studies was performed by Jim Black of Motorola, after whom Black's equation is named. At the time, the metal interconnects in ICs were still about 10 micrometres wide. Currently interconnects are only hundreds to tens of nanometers in width, making research in electromigration increasingly important.
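For context, Black's equation, mentioned above, is usually written (in its common textbook form, not quoted from this article) as

MTTF = A · j^(−n) · exp(E_a / (k T)),

where MTTF is the median time to failure of the interconnect, A is a constant that depends on the interconnect geometry and material, j is the current density, E_a is the activation energy, k is the Boltzmann constant, T is the absolute temperature, and n is a model exponent (commonly taken to be about 2).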
Practical implications of electromigration
Electromigration decreases the reliability of integrated circuits (ICs). It can cause the eventual loss of connections or failure of a circuit. Since reliability is critically important for space travel, military purposes, anti-lock braking systems, medical equipment like Automated External Defibrillators and is even important for personal computers or home entertainment systems, the reliability of chips (ICs) is a major focus of research efforts.
Due to difficulty of testing under real conditions, Black's equation is used to predict the life s
|
https://en.wikipedia.org/wiki/Catch%20reporting
|
Catch reporting is a part of Monitoring control and surveillance of Commercial fishing. Depending on national and local fisheries management practices, catch reports may reveal illegal fishing practices, or simply indicate that a given area is being overfished.
Manual Catch Reporting
The general industry practice is for the crew to write out a catch report on paper and present it to a fisheries management official when the vessel returns to port. If the information does not seem plausible to the official, the report may be verified by physical inspection of the catch. Alternatively, a suspicious vessel may be required to carry an independent observer on future voyages.
Semi-automated Catch Reporting
Some Vessel monitoring systems have features that collect, from keyboard input, the data that constitutes a catch report for the entire voyage. More advanced systems periodically transmit the current catch as electronic mail, so fisheries management centers can determine if a controlled area needs to be closed to further fishing.
While there is no standardization as yet for catch reports, a starting point came from a 1981 Conference of Experts:
Catch on entry to each controlled area
Weekly catch
Transshipment
Port of landing
Catch on exiting a controlled area
Days at sea
Daily time at sea
Seasonal catch limits
Per-trip catch limits
Limits on catch within certain areas
Individual (vessel) transferable quotas
Minimum or maximum fish (or shellfish) sizes
This was extended, in 1993, to include the measurement of:
catch
species composition
fishing effort
Bycatch (i.e., species unintentionally caught, such as dolphins in tuna fishery)
area of operations
A number of programs require tracking of days at sea (DAS) for a given vessel. They may require tracking the total cumulative catch of a given fishery.
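To make the semi-automated case concrete, the data transmitted by such a system could be represented by a simple structured record along the lines of the sketch below. The field names and values are hypothetical, chosen to mirror the 1981 and 1993 lists above rather than any particular national standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CatchReport:
    """Hypothetical per-voyage catch report mirroring the fields listed above."""
    vessel_id: str                       # licence / registration number
    report_date: date
    controlled_area: str                 # controlled area the vessel is operating in
    catch_on_entry_kg: float             # catch on entry to the controlled area
    weekly_catch_kg: float
    species_composition: dict = field(default_factory=dict)  # species -> kg
    bycatch_kg: float = 0.0              # species unintentionally caught
    days_at_sea: int = 0
    transshipment: bool = False
    port_of_landing: str = ""

report = CatchReport(
    vessel_id="EX-1234",                 # hypothetical identifier
    report_date=date(2024, 5, 17),
    controlled_area="Area 27.IVb",
    catch_on_entry_kg=0.0,
    weekly_catch_kg=1850.0,
    species_composition={"cod": 1200.0, "haddock": 650.0},
    bycatch_kg=35.0,
    days_at_sea=6,
    port_of_landing="Peterhead",
)
print(report)
```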
Major Trends
Where the local fishery economy permits, perhaps with international funding, near-real-time catch reporting will become a basic feature of vessel manag
|
https://en.wikipedia.org/wiki/Ulnar%20collateral%20ligament%20of%20elbow%20joint
|
The ulnar collateral ligament (UCL) or internal lateral ligament is a thick triangular ligament at the medial aspect of the elbow uniting the distal aspect of the humerus to the proximal aspect of the ulna.
Structure
It consists of two portions, an anterior and a posterior, united by a thinner intermediate portion. Note that this ligament is also referred to as the medial collateral ligament and should not be confused with the lateral ulnar collateral ligament (LUCL).
The anterior portion, directed obliquely forward, is attached, above, by its apex, to the front part of the medial epicondyle of the humerus; and, below, by its broad base to the medial margin of the coronoid process of the ulna.
The posterior portion, also of triangular form, is attached, above, by its apex, to the lower and back part of the medial epicondyle; below, to the medial margin of the olecranon.
Between these two bands a few intermediate fibers descend from the medial epicondyle to blend with a transverse band which bridges across the notch between the olecranon and the coronoid process.
This ligament is in relation with the triceps brachii and flexor carpi ulnaris and the ulnar nerve, and gives origin to part of the flexor digitorum superficialis.
Injury
During activities such as overhand baseball pitching, this ligament is subjected to extreme tension, which places the overhand-throwing athlete at risk for injury. Acute or chronic disruption and/or attenuation of the ulnar collateral ligament often result in medial elbow pain, valgus instability, and impaired throwing performance. There are both non-surgical and surgical treatment options.
Additional images
See also
Tommy John surgery
|
https://en.wikipedia.org/wiki/HDF%20Explorer
|
HDF Explorer is a data visualization program that reads the HDF, HDF5 and netCDF data file formats. It runs in the Microsoft Windows operating systems. HDF Explorer was developed by Space Research Software, LLC, headquartered in Urbana-Champaign, Illinois.
External links
Space Research Software LLC
The HDF Group home page
Meteorological data and networks
Earth sciences graphics software
Science software for Windows
|
https://en.wikipedia.org/wiki/Seikima%20II%20Akuma%20no%20Gyakush%C5%AB%21
|
is a video game that was released in Japan in 1986.
In 1987, the game was re-released for the MSX2 under the title with more detailed sprites and backgrounds. It also featured actual Seikima-II music.
Summary
The game is based on a then-popular Japanese heavy metal band formed by Damian Hamada called Seikima-II. The band lasted from its creation in 1982 to its dissolution on December 31, 1999. Their history, as it was prophesied, is that they are a group of demons preaching a religion in order to propagate Satan through the use of heavy metal. Each member is a demon of a different hierarchical class, with His Excellency Demon Kogure being the leader of the demons and His Majesty Damian Hamada being the crown prince of hell. In accordance with the prophecy, and after completing their world conquest, the band would disband at the end of the century on December 31, 1999 at 23:59:59 Japan Standard Time (09:59:59 Eastern Standard Time).
Character appearance
Demon Kogure
This is the player character. He is an NES globe-trotter who helps his fellows after they are caught by Zeus. If the player collides with an enemy, he dies on the very first contact. The description of the player's death simply reads "The devil is not dead."
Ace Shimizu
One of the fellows caught in the name of Zeus; he appears in the fourth stage. He uses a Stratocaster guitar as his instrument.
Jail O'Hashi
Another fellow caught in the name of Zeus; he appears in the first stage. He uses a Flying V guitar as his instrument.
Raiden Yuzawa
Yet another fellow caught in the name of Zeus; he appears in the second stage. His instrument is a basic set of drums.
Xenon Ishikawa
The fourth fellow caught in the name of Zeus; he appears in the third stage. His instrument is a simple bass guitar.
Zeus
The final boss of the game; he appears as the enemy of the demon band to block their progress. He is named after the king of the Greek gods.
|
https://en.wikipedia.org/wiki/BioLegend
|
BioLegend is a global developer and manufacturer of antibodies and reagents used in biomedical research, headquartered in San Diego, California. It was incorporated in June 2002 and has since expanded to include BioLegend Japan KK, where it is partnered with Tomy Digital Biology Co., Ltd. in Tokyo, BioLegend Europe in the United Kingdom, BioLegend GmbH in Germany, and BioLegend UK Ltd in the United Kingdom. BioLegend manufactures products in the areas of neuroscience, cell immunophenotyping, cytokines and chemokines, adhesion, cancer research, T regulatory cells, stem cells, innate immunity, cell-cycle analysis, apoptosis, and modification-specific antibodies. Reagents are created for use in flow cytometry, proteogenomics, ELISA, immunoprecipitation, Western blotting, immunofluorescence microscopy, immunohistochemistry, and in vitro or in vivo functional assays.
History
BioLegend was founded by CEO, Gene Lay, D.V.M., who was also the co-founder of PharMingen. In 2011, BioLegend co-developed and introduced Brilliant Violet(TM)-conjugated antibodies, using a novel fluorophore based on Nobel Prize-winning chemistry developed by Sirigen. In 2018, BioLegend introduced TotalSeq™ antibody-oligonucleotide conjugates for use in single cell proteogenomics analysis. BioLegend continued expansion and moved into a new 8 acre campus at BioLegend Way in 2019 with state of the art facilities designed to accommodate up to 1000 employees.
|
https://en.wikipedia.org/wiki/GBR%20code
|
The GBR code (or Guy–Blandford–Roycroft code) is a system of representing the position of chess pieces on a chessboard. Publications such as EG use it to classify endgame types and to index endgame studies.
The code is named after Richard Guy, Hugh Blandford and John Roycroft. The first two devised the original system (the Guy–Blandford code) using different figures to represent the number of pieces. Roycroft suggested counting one for a white piece and three for a black piece in order to make the code easier to memorise.
Definition
In the GBR code, every chess position is represented by six digits, in the following format:
abcd.ef
a = queens
b = rooks
c = bishops
d = knights
e = white pawns
f = black pawns
For the first four digits, each of the first two white pieces counts as 1, and each of the first two black pieces counts as 3. Thus, for example, if White has two knights and Black has one knight, numeral d = 1 + 1 + 3 = 5. If that is all the material other than the kings, the position is classified as 0005. Values 0 through 8 represent all normal permutations of force. 9 is used if either side has three or more pieces of the same non-pawn type; such positions are possible in standard chess due to pawn promotion.
The last two digits of the code represent the number of white and black pawns, respectively.
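As an illustration of the digit rules just described, the short Python sketch below computes a GBR code from piece counts. The function name and the dictionary input format are invented for this example; the saturation value 9 follows the rule above:

```python
def gbr_code(white, black):
    """Compute the six-character GBR code from piece counts.

    `white` and `black` map piece letters to counts, e.g.
    {"Q": 0, "R": 1, "B": 0, "N": 2, "P": 3}; kings are not encoded.
    """
    digits = []
    for piece in "QRBN":
        w = white.get(piece, 0)
        b = black.get(piece, 0)
        if w >= 3 or b >= 3:
            # Three or more identical pieces on one side (possible only
            # through promotion) are written as 9.
            digits.append("9")
        else:
            # Each of the first two white pieces counts 1,
            # each of the first two black pieces counts 3.
            digits.append(str(w + 3 * b))
    return "".join(digits) + "." + str(white.get("P", 0)) + str(black.get("P", 0))

# Two knights versus pawn (the Troitsky endgame) is GBR class 0002.01:
print(gbr_code({"N": 2}, {"P": 1}))   # -> 0002.01
```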
Usage
GBR code can be used to refer to a general class of material. For example, the endgame of two knights against pawn (as famously analysed by A.A. Troitsky, leading to his discovery of the Troitsky line), is GBR class 0002.01.
When indexing or referring to specific positions, rather than generalised material imbalances, the code may be extended in various ways. Two common ones are to prefix "+" to indicate the stipulation "White to play and win" or "=" for "White to play and draw"; and to suffix the position of the white and black kings. With these additions, the position to the right, a draw study by Leonid Kubbel (First Prize, Shakhmaty, 1925), is clas
|
https://en.wikipedia.org/wiki/CNIB%20Chanchlani%20Global%20Vision%20Research%20Award
|
The CNIB Chanchlani Global Vision Research Award is an annual global research award that promotes vital world-class research to explore the causes of blindness and vision loss, as well as potential cures, treatments and preventions. The award of $25,000 is given to vision scientists around the world who have made a major, original contribution for advancement in above said fields.
The award was established in 2011 by Vasu Chanchlani and his wife Jaya in collaboration with CNIB (Canadian National Institute for the Blind), the Toronto Netralya Lions Club and the Toronto Doctors Lions Club. Established with a $500,000 endowment funded largely by Mr. Chanchlani, the award promotes first-class global research in vision science and vision rehabilitation.
Award Recipients
2016 - Dr Robert Molday
Dr Molday is Professor of Biochemistry & Molecular Biology and Ophthalmology & Visual Sciences, University of British Columbia
2014 - Jayakrishna Ambati
Dr Ambati is Professor and Vice-Chair for Research of Ophthalmology and Founding Director of the Center for Advanced Vision Science at the University of Virginia.
2012 - Professor Hugh R. Taylor
Taylor is Melbourne Laureate Professor at the University of Melbourne and Chair of Indigenous Eye Health, where he was formerly Professor of Ophthalmology and department head and is founder of the Centre for Eye Research Australia. He is the Vice President of the International Agency for the Prevention of Blindness and Treasurer of the International Council of Ophthalmology.
See also
List of medicine awards
Notes
Academic awards
Canadian awards
Medicine awards
Lions Clubs International
Awards established in 2011
|
https://en.wikipedia.org/wiki/Chimeric%20gene
|
Chimeric genes (literally, made of parts from different sources) form through the combination of portions of two or more coding sequences to produce new genes. These mutations are distinct from fusion genes which merge whole gene sequences into a single reading frame and often retain their original functions.
Formation
Chimeric genes can form through several different means. Many chimeric genes form through errors in DNA replication or DNA repair so that pieces of two different genes are inadvertently combined. Chimeric genes can also form through retrotransposition where a retrotransposon accidentally copies the transcript of a gene and inserts it into the genome in a new location. Depending on where the new retrogene appears, it can recruit new exons to produce a chimeric gene. Finally, ectopic recombination, when there is an exchange between portions of the genome that are not actually related, can also produce chimeric genes. This process occurs often in human genomes, and abnormal chimeras formed by this process are known to cause color blindness.
Evolutionary Importance of Fusion Proteins
Chimeric genes are important players in the evolution of genetic novelty. Much like gene duplications, they provide a source of new genes, which can allow organisms to develop new phenotypes and adapt to their environment. Unlike duplicate genes, chimeric proteins are immediately distinct from their parental genes, and therefore are more likely to produce entirely new functions.
Chimeric fusion proteins form often in genomes, and many of these are likely to be dysfunctional and eliminated by natural selection. However, in some cases, these new peptides can form fully functional gene products that are selectively favored and spread through populations quickly.
Functions
One of the most well known chimeric genes was identified in Drosophila and has been named Jingwei. This gene is formed from a retrotransposed copy of Alcohol dehydrogenase that united with the
|
https://en.wikipedia.org/wiki/Ace%20Stream
|
Ace Stream is a peer-to-peer multimedia streaming protocol, built using BitTorrent technology. Ace Stream has been recognized by sources as a potential method for broadcasting and viewing bootlegged live video streams. The protocol functions as both a client and a server. When users stream a video feed using Ace Stream, they are simultaneously downloading from peers and uploading the same video to other peers.
History
Ace Stream began under the name TorrentStream as a pilot project to use BitTorrent technology to stream live video. In 2013, TorrentStream was re-released under the name Ace Stream.
|
https://en.wikipedia.org/wiki/Vaginamuseum
|
The virtual Vaginamuseum is an international internet project founded by the Austrian artist Kerstin Rajnar in 2014. It consists of a virtual gallery and a virtual archive containing background information about the female sex and femininity. Various representations of female sexual organs indicate the existence of a female role model in social systems and allow conclusions to be drawn as to the importance of women in different environments. The project aims to promote artistic creation and debate about the female sex. It supports positive meanings of, and appreciation for, words and body parts such as the vagina, vulva, and clitoris. The museum is considered the first to be devoted to the vagina.
About
The Vaginamuseum communicates information and serves as an educational platform. Experts in art history, health care and medical science, as well as artists of all disciplines, are creating this platform. The archive displays conceptual and historical texts and other articles and contributions about the vagina, the vulva and the clitoris. The curated gallery shows selected artworks which stimulate new thinking about the female genitals and lead to new perspectives. Rajnar's vision is that the museum can help improve people's negative attitudes about the vagina, which are shaped through culture and media.
The Vaginamuseum has elaborated concepts on the topics Art and Culture and Life and Limb – the Positive Power of Femininity.
Vaginamuseum is registered at the European Union Intellectual Property Office (EUIPO).
Exhibitions in the gallery
Vagina 2.0
The virtual opening exhibition curated by the media artist and curator Doris Jauk-Hinz broaches the issue of current terms and subjective meanings of the female sexual organs. Reflections in dealing with the term vagina are based on ideas, expectations, attributions, associations and emotions by means of art. The artistic inputs range from earlier depictions of vulva symbols in different civilizations and times to the l
|
https://en.wikipedia.org/wiki/Pierre%20Fauchard%20Academy
|
The Pierre Fauchard Academy is a volunteer-run, non-profit dental organization founded in 1936 by Dr. Elmer S. Best, a dentist from Minnesota. Its objective is independence from commercial interests in dental research and its publications, and Dr. Best endeavored to raise professional standards. The academy is named after Pierre Fauchard (1678–1761), a French dentist who is considered the "Father of modern dentistry". Fauchard wrote a book entitled Le Chirurgien dentiste, ou Traité des dents, the first dental textbook of modern times.
Statutes
The statutes of the Pierre Fauchard Academy are based on the objectives of Elmer Best, and their focus is on the integrity and leadership of dentists. A primary objective at the time of its foundation was to preserve the independence of scientific publications. Its goals are to have as Fellows the most outstanding dentists in every country in the world and to select and induct individuals of the highest ethical, moral and professional standards.
Organization
The Pierre Fauchard Academy currently consists of more than 5,000 Fellows, who are organized in 120 sections; 55 are located in the United States and 65 in countries of South America, Europe, Asia, Africa and Australia. Members are dentists who are among the most outstanding leaders in the various fields of dentistry. Fellowship in the academy is by nomination and is designed to honor past accomplishments in the field of dentistry and to encourage future productivity. Membership must be supported by the section in which the dentist resides. The academy is administered by a board of trustees consisting of five officers and eleven trustees from around the world. Section organization includes a chairperson and such other officers or committee members as the section may elect. The office of the academy is located in Rockville, Maryland.
Historical workup
Part of the work of the academy is the historical workup of dentistry. For this purpose, the CVs of the main leaders and
|
https://en.wikipedia.org/wiki/Cryptographic%20Message%20Syntax
|
The Cryptographic Message Syntax (CMS) is the IETF's standard for cryptographically protected messages. It can be used by cryptographic schemes and protocols to digitally sign, digest, authenticate or encrypt any form of digital data.
CMS is based on the syntax of PKCS #7, which in turn is based on the Privacy-Enhanced Mail standard. The newest version of CMS () is specified in (but see also for updated ASN.1 modules conforming to ASN.1 2002).
The architecture of CMS is built around certificate-based key management, such as the profile defined by the PKIX working group.
CMS is used as the key cryptographic component of many other cryptographic standards, such as S/MIME, PKCS #12 and the digital timestamping protocol.
OpenSSL is open source software that can encrypt, decrypt, sign and verify, compress and uncompress CMS documents, using the openssl-cms command.
See also
CAdES - CMS Advanced Electronic Signatures
S/MIME
PKCS #7
External links
(Update to the Cryptographic Message Syntax (CMS) for Algorithm Identifier Protection)
(Cryptographic Message Syntax (CMS), in use)
(Cryptographic Message Syntax (CMS), obsolete)
(Cryptographic Message Syntax (CMS), obsolete)
(Cryptographic Message Syntax, obsolete)
(New ASN.1 Modules for Cryptographic Message Syntax (CMS) and S/MIME, in use)
(New ASN.1 Modules for Cryptographic Message Syntax (CMS) and S/MIME, updated)
(Using Elliptic Curve Cryptography with CMS, in use)
(Use of Elliptic Curve Cryptography (ECC) Algorithms in Cryptographic Message Syntax (CMS), obsolete)
(Using AES-CCM and AES-GCM Authenticated Encryption in the Cryptographic Message Syntax (CMS), in use)
Cryptographic protocols
Internet Standards
|
https://en.wikipedia.org/wiki/Balto%20II%3A%20Wolf%20Quest
|
Balto II: Wolf Quest is a 2002 American animated adventure film produced and directed by Phil Weinstein. It is the sequel to Universal Pictures/Amblin Entertainment's 1995 Northern animated film Balto.
Plot
One year after his heroic journey, Balto has mated with Jenna, and they now have a new family of six puppies in Alaska. Five of their puppies resemble their husky mother, while one pup named Aleu takes her looks from her wolfdog father. When they all reach eight weeks old, all of the other pups are adopted to new homes, but no one wants Aleu due to her wild animal looks, forcing her to live with her father. A year later when she is grown, Aleu is almost killed by a hunter who mistakes her for a wild wolf. Balto tells Aleu the truth about her wolf heritage, causing her to run away, hoping to find her place in the world. Balto then goes out into the Alaskan wilderness to find her. At the same time, Balto has been struggling with strange dreams of a raven and a pack of wolves, and he cannot understand their meaning. Balto resolves to find the meaning of these dreams as he searches for Aleu. His friends Boris, Muk, and Luk attempt to join him, but after they are halted by some unknown force, they realize that this journey is meant only for the father and daughter themselves.
Taking refuge in a cave, Aleu meets the field mouse Muru, who explains that Aleu should not be ashamed of her lineage, which tells her what she is but not who she is. Muru reveals himself to be Aleu's spirit guide and tells her to go on a journey of self-discovery. Balto and Aleu reunite when he saves her from the grizzly bear and reconcile, and find their way to the ocean, where they are attacked by a group of starving Northwestern wolves led by Niju, an arrogant and vicious wolf. The confrontation is defused by the elderly Nava, the true leader of the pack, who welcomes Balto and Aleu. Nava announces to his pack that the wolf spirit Aniu has contacted him in "dream visions". Aniu has told hi
|
https://en.wikipedia.org/wiki/Monoclonal%20antibody%20therapy
|
Monoclonal antibody therapy is a form of immunotherapy that uses monoclonal antibodies (mAbs) to bind monospecifically to certain cells or proteins. The objective is that this treatment will stimulate the patient's immune system to attack those cells. Alternatively, in radioimmunotherapy a radioactive dose localizes a target cell line, delivering lethal chemical doses. Antibodies are used to bind to molecules involved in T-cell regulation to remove inhibitory pathways that block T-cell responses. This is known as immune checkpoint therapy.
It is possible to create a mAb that is specific to almost any extracellular/cell surface target. Research and development is underway to create antibodies for diseases (such as rheumatoid arthritis, multiple sclerosis, Alzheimer's disease, Ebola and different types of cancers).
Antibody structure and function
Immunoglobulin G (IgG) antibodies are large heterodimeric molecules, approximately 150 kDa and are composed of two kinds of polypeptide chain, called the heavy (~50kDa) and the light chain (~25kDa). The two types of light chains are kappa (κ) and lambda (λ). By cleavage with enzyme papain, the Fab (fragment-antigen binding) part can be separated from the Fc (fragment crystallizable region) part of the molecule. The Fab fragments contain the variable domains, which consist of three antibody hypervariable amino acid domains responsible for the antibody specificity embedded into constant regions. The four known IgG subclasses are involved in antibody-dependent cellular cytotoxicity.
Antibodies are a key component of the adaptive immune response, playing a central role both in the recognition of foreign antigens and in the stimulation of an immune response to them. The advent of monoclonal antibody technology has made it possible to raise antibodies against specific antigens presented on the surfaces of tumors. Monoclonal antibodies can be acquired by the immune system via passive immunity or active immunity. The advantage of
|
https://en.wikipedia.org/wiki/Counternull
|
In statistics, and especially in the statistical analysis of psychological data, the counternull is a statistic used to aid the understanding and presentation of research results. It revolves around the effect size, which is the mean magnitude of some effect divided by the standard deviation.
The counternull value is the effect size that is just as well supported by the data as the null hypothesis. In particular, when results are drawn from a distribution that is symmetrical about its mean, the counternull value is exactly twice the observed effect size.
The null hypothesis is a hypothesis set up to be tested against an alternative. Thus the counternull is an alternative hypothesis that, when used to replace the null hypothesis, generates the same p-value as had the original null hypothesis of “no difference.”
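A minimal numerical sketch of the symmetric case (all numbers are invented for illustration): with an observed standardized effect size of 0.20 and a normal sampling distribution, the counternull is 2 × 0.20 = 0.40, and the observed estimate is exactly as probable under the counternull as under the null:

```python
import numpy as np

def normal_pdf(x, mean, sd):
    # Density of a normal sampling distribution (symmetric about its mean).
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

d_obs = 0.20     # observed standardized effect size (illustrative)
se = 0.12        # its standard error (illustrative)

null = 0.0
counternull = 2 * d_obs - null                 # = 0.40 for a symmetric distribution

print(normal_pdf(d_obs, null, se))             # likelihood under the null ...
print(normal_pdf(d_obs, counternull, se))      # ... equals likelihood under the counternull
```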
Some researchers contend that reporting the counternull, in addition to the p-value, serves to counter two common errors of judgment:
assuming that failure to reject the null hypothesis at the chosen level of statistical significance means that the observed size of the "effect" is zero; and
assuming that rejection of the null hypothesis at a particular p-value means that the measured "effect" is not only statistically significant, but also scientifically important.
These arbitrary statistical thresholds create a discontinuity, causing unnecessary confusion and artificial controversy.
Other researchers prefer confidence intervals as a means of countering these common errors.
See also
File drawer problem
Publication bias
|
https://en.wikipedia.org/wiki/Distributed%20parameter%20system
|
In control theory, a distributed-parameter system (as opposed to a lumped-parameter system) is a system whose state space is infinite-dimensional. Such systems are therefore also known as infinite-dimensional systems. Typical examples are systems described by partial differential equations or by delay differential equations.
Linear time-invariant distributed-parameter systems
Abstract evolution equations
Discrete-time
With U, X and Y Hilbert spaces and A ∈ L(X), B ∈ L(U, X), C ∈ L(X, Y) and D ∈ L(U, Y), the following difference equations determine a discrete-time linear time-invariant system:
x(k + 1) = Ax(k) + Bu(k)
y(k) = Cx(k) + Du(k)
with x (the state) a sequence with values in X, u (the input or control) a sequence with values in U and y (the output) a sequence with values in Y.
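Although a genuine distributed-parameter system has an infinite-dimensional state space X, the recursion has exactly the same shape in finite dimensions, which makes it easy to illustrate. The sketch below (the matrices, input and horizon are arbitrary choices, not taken from this article) iterates the two difference equations above:

```python
import numpy as np

# Finite-dimensional stand-ins for the operators A, B, C, D.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])        # A in L(X)
B = np.array([[0.0],
              [1.0]])             # B in L(U, X)
C = np.array([[1.0, 0.0]])        # C in L(X, Y)
D = np.array([[0.0]])             # D in L(U, Y)

x = np.zeros((2, 1))              # initial state x(0)
outputs = []
for k in range(50):
    u = np.array([[1.0]])         # constant input u(k)
    y = C @ x + D @ u             # y(k) = C x(k) + D u(k)
    x = A @ x + B @ u             # x(k+1) = A x(k) + B u(k)
    outputs.append(y[0, 0])

print(outputs[-1])                # the output settles towards its steady-state value
```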
Continuous-time
The continuous-time case is similar to the discrete-time case but now one considers differential equations instead of difference equations:
ẋ(t) = Ax(t) + Bu(t),
y(t) = Cx(t) + Du(t).
An added complication now however is that to include interesting physical examples such as partial differential equations and delay differential equations into this abstract framework, one is forced to consider unbounded operators. Usually A is assumed to generate a strongly continuous semigroup on the state space X. Assuming B, C and D to be bounded operators then already allows for the inclusion of many interesting physical examples, but the inclusion of many other interesting physical examples forces unboundedness of B and C as well.
Example: a partial differential equation
The partial differential equation with and given by
fits into the abstract evolution equation framework described above as follows. The input space U and the output space Y are both chosen to be the set of complex numbers. The state space X is chosen to be L2(0, 1). The operator A is defined as
It can be shown that A generates a strongly continuous semigroup on X. The bounded operators B, C and D are defined as
Example: a delay differential equation
The delay differential equation
fits into th
|
https://en.wikipedia.org/wiki/Osmos
|
Osmos is a 2009 puzzle video game developed by Canadian developer Hemisphere Games for various systems such as Microsoft Windows, Mac OS X, Linux, OnLive, iPad, iPhone, iPod Touch and Android.
Gameplay
The aim of the game is to propel oneself, a single-celled organism ("Mote"), into other smaller motes to absorb them. Colliding with a mote larger than the player will cause the player to be absorbed, resulting in a game over. Motes smaller than the player are blue, while motes bigger than the player are red. Changing course is done by expelling mass. Due to conservation of momentum, this results in the player's mote moving away from the expelled mass, but also in one's own mote shrinking.
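The propulsion mechanic described above is ordinary conservation of momentum. As a rough sketch (the symbols are illustrative, not taken from the game's documentation), if a mote of mass m moving with velocity v expels a small mass Δm with velocity v_e, then

m·v = (m − Δm)·v′ + Δm·v_e,

so the remaining mote of mass m − Δm acquires a new velocity v′ directed away from the expelled mass, while its mass, and hence its size, decreases.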
There are three different "zones" of levels in Osmos: In the "sentient" levels, the goal is to prevail over active motes of various types that hunt and absorb other motes, including the player. Hunting them typically involves absorbing as many inactive motes as possible before chasing down the active ones with the extra mass one has gained.
In the "ambient" levels, the player's mote typically floats in a large area surrounded by inactive motes, and must become the largest or simply very large. Variations on this theme involve, for instance, starting the game as a very small mote surrounded by many larger, fast moving motes, or the presence of "antimatter" motes which shrink normal motes during collision no matter which one was originally bigger, or starting the game stuck in a huge, densely packed area with a large number of other motes without much space to move about and having to nudge other motes out of the way by ejecting mass at them.
In the "force" levels, special motes ("Attractors") influence other motes with a force similar to gravitation. The player has to take into account orbital physics when planning movement in order to save mass when changing course. In these levels, the game optionally assists the player with a course trajectory tool that plots the mote's cours
|
https://en.wikipedia.org/wiki/Glutamate-rich%20protein%203
|
Glutamate-rich protein 3, also known as Uncharacterized Protein C1orf173, is a protein encoded by the ERICH3 gene. ERICH3 was named “chromosome 1 open reading frame 173 (C1orf173)” based on its map location in the human genome. It was subsequently renamed “E-rich 3” because of the high glutamate (E) content of its encoded amino acid sequence. Single-nucleotide polymorphisms (SNPs) in the ERICH3 gene have been identified as among the "top" signals in a genome-wide association study (GWAS) for plasma serotonin concentrations, which were themselves associated with selective serotonin reuptake inhibitor (SSRI) response in major depressive disorder (MDD) patients. The same ERICH3 SNP was later shown to be significantly associated with SSRI treatment outcomes in three independent MDD trials: STAR*D, ISPC and PReDICT. ERICH3 is most highly expressed in a variety of regions of the human brain, including the nucleus accumbens (basal ganglia) and frontal cortex, based on GTEx RNA-seq data. Single-cell RNA-seq data for human brain samples revealed that ERICH3 is predominantly expressed in neurons rather than in other CNS cell types. ERICH3 was found to interact with proteins that function in vesicle biogenesis and may play a significant role in vesicular function in serotonergic and other neuronal cell types, which might help explain its association with antidepressant treatment response. ERICH3 protein was also found to be abundant in blood platelets and cilia in proteomic studies. Its function in platelets is thought to be related to plasma serotonin storage, because more than 99% of blood serotonin is stored in platelets and ERICH3 SNPs have been associated with plasma serotonin concentration in MDD patients. In primary cilia, ERICH3 might regulate cilium formation and the localization of ciliary transport.
Gene
The ERICH3 gene in humans is 105,628 bases and is encoded on the minus strand at position 31.1 on the short arm of chromosome 1 from
|
https://en.wikipedia.org/wiki/Covalent%20bond%20classification%20method
|
The covalent bond classification (CBC) method is also referred to as the LXZ notation. It was published by M. L. H. Green in 1995 as a solution for the need to describe covalent compounds such as organometallic complexes in a way that is not prone to limitations resulting from the definition of oxidation state. Instead of simply assigning a charge to an atom in the molecule (i.e. the oxidation state), the covalent bond classification method analyzes the nature of the ligands surrounding the atom of interest, which is often a transition metal. According to this method, there are three basic types of interactions that allow for coordination of the ligand. The three types of interaction are classified according to whether the ligating group donates two, one, or zero electrons. These three classes of ligands are respectively given the symbols L, X, and Z.
Types of ligands
X-type ligands are those that donate one electron to the metal and accept one electron from the metal when using the neutral ligand method of electron counting, or donate two electrons to the metal when using the donor pair method of electron counting. Regardless of whether it is considered neutral or anionic, these ligands yield normal covalent bonds.[3] A few examples of this type of ligand are H, halogens (Cl, Br, F, etc.), OH, CN, CH3, and NO (bent).
L-type ligands are neutral ligands that donate two electrons to the metal center regardless of the electron counting method being used. These electrons can come from lone pairs, pi, or sigma donors.[4] The bonds formed between these ligands and the metal are dative covalent bonds, which are also known as coordinate bonds. Examples of this type of ligand include CO, PR3, NH3, H2O, carbenes (=CRR'), and alkenes.
Z-type ligands are those that accept two electrons from the metal center as opposed to the donation occurring with the other two types of ligands. However, these ligands also form dative covalent bonds like the L-type.[3] This typ
|
https://en.wikipedia.org/wiki/Mandibular%20prominence
|
The mandibular prominence is an embryological structure which gives rise to the lower portion of the face.
The mandible and lower lip derive from it. The mesenchymal cells within the mandibular prominence condense to form Meckel's cartilage.
It is innervated by the mandibular nerve.
|
https://en.wikipedia.org/wiki/AP%20Calculus
|
Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / Calc BC or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions).
AP Calculus AB
AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams.
Purpose
According to the College Board:
Topic outline
The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus.
Analysis of graphs (predicting and explaining behavior)
Limits of functions (one and two sided)
Asymptotic and unbounded behavior
Continuity
Derivatives
Concept
At a point
As a function
Applications
Higher order derivatives
Techniques
Integrals
Interpretations
Properties
Applications
Techniques
Numerical approximations
Fundamental theorem of calculus
Antidifferentiation
L'Hôpital's rule
Separable differential equations
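The sketch below (an illustration only, not College Board material) ties together the "Numerical approximations" and "Fundamental theorem of calculus" items above: it approximates the definite integral of x**2 on [0, 1] with a trapezoidal sum and compares the result against the exact value 1/3 obtained from an antiderivative. The function names are chosen here for illustration.

# Minimal sketch (illustrative only, not College Board material): a trapezoidal
# sum for the definite integral of x**2 on [0, 1], compared with the exact
# value 1/3 given by the Fundamental Theorem of Calculus.

def trapezoidal_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] using n equal subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

f = lambda x: x ** 2          # integrand
exact = 1 / 3                 # F(1) - F(0), where F(x) = x**3 / 3

for n in (4, 16, 64):
    approx = trapezoidal_sum(f, 0.0, 1.0, n)
    print(f"n = {n:3d}: trapezoid = {approx:.6f}, error = {abs(approx - exact):.2e}")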
AP Calculus BC
AP Calculus BC is equivalent to a full-year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus).
Purpose
According to the College Board,
Topic outline
AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following:
Convergence tests for series
Taylor series (see the sketch following this outline)
Parametric equations
Polar functions (inclu
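As a companion to the "Taylor series" item above, here is a minimal sketch (illustrative only, not College Board material) showing partial sums of the Maclaurin series for e^x converging to the true value; the helper name is an assumption of this sketch.

# Minimal sketch (illustrative only): partial sums of the Maclaurin series
# for e^x, a concrete instance of the BC Taylor series topic.
import math

def exp_partial_sum(x, terms):
    """Sum of the first `terms` terms of sum_{k >= 0} x**k / k!."""
    return sum(x ** k / math.factorial(k) for k in range(terms))

x = 1.0
for terms in (2, 4, 8, 12):
    approx = exp_partial_sum(x, terms)
    print(f"{terms:2d} terms: {approx:.10f}  (error {abs(approx - math.exp(x)):.2e})")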
|
https://en.wikipedia.org/wiki/Dowling%20geometry
|
In combinatorial mathematics, a Dowling geometry, named after Thomas A. Dowling, is a matroid associated with a group. There is a Dowling geometry of each rank for each group. If the rank is at least 3, the Dowling geometry uniquely determines the group. Dowling geometries have a role in matroid theory as universal objects (Kahn and Kung, 1982); in that respect they are analogous to projective geometries, but based on groups instead of fields.
A Dowling lattice is the geometric lattice of flats associated with a Dowling geometry. The lattice and the geometry are mathematically equivalent: knowing either one determines the other. Dowling lattices, and by implication Dowling geometries, were introduced by Dowling (1973a,b).
A Dowling lattice or geometry of rank n of a group G is often denoted Qn(G).
The original definitions
In his first paper (1973a) Dowling defined the rank-n Dowling lattice of the multiplicative group of a finite field F. It is the set of all those subspaces of the vector space Fn that are generated by subsets of the set E that consists of vectors with at most two nonzero coordinates. The corresponding Dowling geometry is the set of 1-dimensional vector subspaces generated by the elements of E.
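The construction in the preceding paragraph can be restated symbolically as follows; this is a restatement for readability, with notation chosen here rather than taken from Dowling's paper.

% Restatement of the rank-n construction above; F is a finite field,
% G = F^* its multiplicative group, and the notation is chosen here.
\[
  E \;=\; \{\, v \in F^{\,n} : v \text{ has at most two nonzero coordinates} \,\}
\]
\[
  Q_n(G) \;=\; \bigl\{\, \operatorname{span}(S) : S \subseteq E \,\bigr\},
  \qquad \text{ordered by inclusion},
\]
% and the Dowling geometry consists of the one-dimensional subspaces
% spanned by the individual elements of E.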
In his second paper (1973b) Dowling gave an intrinsic definition of the rank-n Dowling lattice of any finite group G. Let S be the set {1,...,n}. A G-labelled set (T, α) is a set T together with a function α: T → G. Two G-labelled sets, (T, α) and (T, β), are equivalent if there is a group element, g, such that β = gα.
An equivalence class is denoted [T, α].
A partial G-partition of S is a set γ = {[B1,α1], ..., [Bk,αk]} of equivalence classes of G-labelled sets such that B1, ..., Bk are nonempty subsets of S that are pairwise disjoint. (k may equal 0.)
A partial G-partition γ is said to be ≤ another one, γ*, if
every block of the second is a union of blocks of the first, and
for each Bi contained in B*j, αi is equivalent to the restriction of α*j t
|
https://en.wikipedia.org/wiki/Horizon%20Worlds
|
Meta Horizon Worlds is a free virtual-reality online video game with an integrated game creation system, developed and published by Meta Platforms. On this multiplayer virtual platform, players move and interact with each other in various worlds that host events, games, and social activities. The game works on Oculus Rift S and Meta Quest 2 headsets.
In February 2022, Meta reported that Horizon Worlds had an estimated 300,000 users; yet by October 2022, The Wall Street Journal reported fewer than 200,000 monthly users. Horizon Worlds has received mixed reviews, with critics citing bugs and an unenjoyable environment that degrades the user experience.
In August 2023, Meta announced a new first-party studio called Ouro Interactive to build Horizon Worlds games. Its first title, Super Rumble, has largely received favorable feedback from users and media outlets. It utilizes new creation features, such as asset imports and TypeScript, that are not yet available to general creators.
Gameplay
The game may be played with an Oculus Rift S or Meta Quest 2 virtual reality headset and uses full 3D motion via the motion-capture system of the headset and two hand-held motion controllers, which are required to interact with objects in the game. In October 2022, Meta announced that it would launch a web version, allowing users to access the game without a headset. Players can explore the space around them within the confines of their physical floor space, and can roam further by using controller buttons to teleport a short distance or to move continuously through the virtual space.
According to Meta, users can create their own avatar, with a custom face and outfit, to represent themselves in the virtual world. All players begin at the hub (also known as the “plaza”), where they can take portals to different worlds created by other users. An integrated game creation system allows users to create new worlds. Users can also create their own personal space, which is a
|
https://en.wikipedia.org/wiki/Edge%20connector
|
An edge connector is the portion of a printed circuit board (PCB) consisting of traces leading to the edge of the board that are intended to plug into a matching socket. The edge connector is a money-saving device because it requires only a single discrete female connector (the male connector is formed by the edge of the PCB itself), and edge connectors also tend to be fairly robust and durable. They are commonly used in computers for expansion slots for peripheral cards, such as PCI, PCI Express, and AGP cards.
Socket design
Edge connector sockets consist of a plastic "box" open on one side, with pins along one or both of the longer edges, sprung to press into the open center. Connectors are often keyed to ensure the correct polarity, and may contain bumps or notches both for polarity and to ensure that the wrong type of device is not inserted. The socket's width is chosen to fit the thickness of the connecting PCB.
The opposite side of the socket is often an insulation-piercing connector which is clamped onto a ribbon cable. Alternatively, the other side may be soldered to a motherboard or daughtercard.
Uses
Edge connectors are commonly used in personal computers for connecting expansion cards and computer memory to the system bus. Example expansion peripheral technologies that use edge connectors include PCI, PCI Express, and AGP. Slot 1 and Slot A also used edge connectors, with the processor mounted on a card bearing an edge connector instead of directly on the motherboard as before and since.
IBM PCs used edge connector sockets attached to ribbon cables to connect 5.25" floppy disk drives. 3.5" drives use a pin connector instead.
Video game cartridges typically take the form of a PCB with an edge connector: the socket is located within the console itself. The Nintendo Entertainment System was unusual in that it was designed to use a zero insertion force edge connector: instead of the user forcing the cartridge into the socket directly, the cartr
|