source | text |
|---|---|
https://en.wikipedia.org/wiki/List%20of%20flags%20of%20Vietnam | The following is a list of flags of Vietnam.
National flag
Current
Official
Diasporic
Historical
Imperial standards
Personal standards of emperors
Presidential standards
Political flags
Religious flags
Military flags
Police flags
Ensigns
Flags of Vietnamese subjects
Provinces of the Nguyễn dynasty
Areas with special status and ethnic minorities
Cities
Other flags
Corporation flags
Though not standardized and rarely seen, state-owned corporations in Vietnam sometimes have their own flags.
Organization flags
Historical flags
Cultural flags
Monarchist flags
National flag proposals
Misattributed flags
This is a list of incorrect, fictitious, or unknown flags that have been reported as factual or historical flags of Vietnam by contemporary or otherwise reputable sources.
Fictitious pre-Nguyễn dynastic flags
"Flag of Cochinchina"
Modern flags
Flag construction sheets
See also
List of flags of French Indochina
Vietnamese five-color flags
Notes |
https://en.wikipedia.org/wiki/Phytotoxicity | Phytotoxicity describes any adverse effects on plant growth, physiology, or metabolism caused by a chemical substance, such as high levels of fertilizers, herbicides, heavy metals, or nanoparticles. General phytotoxic effects include altered plant metabolism, growth inhibition, or plant death. Changes to plant metabolism and growth are the result of disrupted physiological functioning, including inhibition of photosynthesis, water and nutrient uptake, cell division, or seed germination.
Fertilizers
High concentrations of mineral salts in solution within the plant growing medium can result in phytotoxicity, commonly caused by excessive application of fertilizers. For example, urea is used in agriculture as a nitrogenous fertilizer. However, if too much is applied, phytotoxic effects can result directly from urea toxicity or from ammonia produced by hydrolysis of urea. Organic fertilizers, such as compost, also have the potential to be phytotoxic if not sufficiently humified, as intermediate products of this process are harmful to plant growth.
Herbicides
Herbicides are designed and used to control unwanted plants such as agricultural weeds. However, the use of herbicides can cause phytotoxic effects on non-targeted plants through wind-blown spray drift or from the use of herbicide-contaminated material (such as straw or manure) being applied to the soil. Herbicides can also cause phytotoxicity in crops if applied incorrectly, in the wrong stage of crop growth, or in excess. The phytotoxic effects of herbicides are an important subject of study in the field of ecotoxicology.
Heavy metals
Heavy metals are high-density metallic compounds which are poisonous to plants at low concentrations, although toxicity depends on plant species, specific metal and its chemical form, and soil properties. The most relevant heavy metals contributing to phytotoxicity in crops are silver (Ag), arsenic (As), cadmium (Cd), cobalt (Co), chromium (Cr), iron (Fe), nickel (Ni), lead (Pb) |
https://en.wikipedia.org/wiki/QuRiNet | The Quail Ridge Wireless Mesh Network project is an effort to provide a wireless communications infrastructure to the Quail Ridge Reserve, a wildlife reserve in California in the United States. The network is intended to benefit on-site ecological research and provide a wireless mesh network testbed for development and analysis. The project is a collaboration between the University of California Natural Reserve System and the Networks Lab at the Department of Computer Science, UC Davis.
Project
The large-scale wireless mesh network would consist of various sensor networks gathering temperature, visual, and acoustic data at certain locations. This information would then be stored at the field station or relayed further over Ethernet. The backbone nodes would also serve as access points enabling wireless access at their locations.
The Quail Ridge Reserve would also be used for further research into wireless mesh networks.
External links
qurinet.cs.ucdavis.edu
spirit.cs.ucdavis.edu
nrs.ucdavis.edu/quail.html
nrs.ucop.edu
|
https://en.wikipedia.org/wiki/Tatamibari | Tatamibari is a type of logic puzzle designed and published by Nikoli. The puzzle is based on Japanese tatami mats.
Rules
A Tatamibari puzzle is played on a rectangular grid with three different kinds of symbols in it: +, -, and |. The solver must partition the grid into rectangular or square regions according to the following rules (a small checker sketch follows the list):
Every partition must contain exactly one symbol in it.
A + symbol must be contained in a square.
A | symbol must be contained in a rectangle with a greater height than width.
A - symbol must be contained in a rectangle with a greater width than height.
Four pieces may never share the same corner.
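These constraints are mechanical to check. Below is a minimal solution checker, a sketch assuming a plain-Python encoding invented for illustration: a grid of region ids plus a map from cell coordinates to symbols.

```python
# Sketch of a Tatamibari solution checker (encoding invented for illustration).
# `regions` is a 2D list assigning each cell a region id;
# `symbols` maps (row, col) -> one of '+', '-', '|'.
def check_tatamibari(regions, symbols):
    rows, cols = len(regions), len(regions[0])
    cells = {}
    for r in range(rows):
        for c in range(cols):
            cells.setdefault(regions[r][c], []).append((r, c))
    for cs in cells.values():
        rs = [r for r, _ in cs]
        ws = [c for _, c in cs]
        h = max(rs) - min(rs) + 1
        w = max(ws) - min(ws) + 1
        if len(cs) != h * w:              # region must be a full rectangle
            return False
        syms = [symbols[p] for p in cs if p in symbols]
        if len(syms) != 1:                # exactly one symbol per region
            return False
        if syms[0] == '+' and h != w:     # '+' requires a square
            return False
        if syms[0] == '|' and h <= w:     # '|' requires height > width
            return False
        if syms[0] == '-' and w <= h:     # '-' requires width > height
            return False
    # no grid corner may be shared by four different regions
    for r in range(rows - 1):
        for c in range(cols - 1):
            if len({regions[r][c], regions[r][c + 1],
                    regions[r + 1][c], regions[r + 1][c + 1]}) == 4:
                return False
    return True

# Two vertical dominoes, each marked '|': a valid 2x2 solution.
print(check_tatamibari([[0, 1], [0, 1]], {(0, 0): '|', (0, 1): '|'}))  # True
```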
Computational complexity
The problem of finding a solution to a particular Tatamibari configuration is NP-complete.
See also
List of Nikoli puzzle types |
https://en.wikipedia.org/wiki/Simon%20Gosling | Simon "Goose" Gosling (born 9 April 1969) is a British designer and builder of special effects models and props. He is best known for his work on the Millennium Falcon cockpit for Star Wars: The Force Awakens in 2014, and for his work on commercials featuring stop-frame animation for Brisk, Apple Jacks and Chips Ahoy in America, and on the Windy Miller adverts for Quaker Oats in Britain.
Gosling was born in Shrewsbury, Shropshire, a town he lived in until 1994.
Gosling has created props and models for films including The Brothers Grimm (2005), The Hitchhiker's Guide to the Galaxy (2005) and Stormbreaker (2006). In 2006, he supervised the building of Hex during the Sky One production of Hogfather, an adaptation of the Discworld novel by author Terry Pratchett.
On 22 April 2007, Hogfather won the BAFTA Television Craft Award for best special effects.
Gosling is also a musician, appearing on the soundtrack of the PlayStation videogame Croc 2.
Selected filmography
Star Wars: Episode I – The Phantom Menace (1997) Assistant prop maker
Les Visiteurs 2 (1998) Prosthetic technician
Band of Brothers (2000) Miniature model maker
Dinotopia (2002) Miniature Modeller
The Brothers Grimm (2003) Prop Modeller
The Hitch Hikers Guide To The Galaxy (2004) Prop Modeller
Stormbreaker (2005) Electronic prop Modeller
Terry Pratchett's Hogfather (2006) Prop Modeller
I Want Candy (2006) Prop Modeller
Babylon A.D. (2007) Prop Modeller
The Colour of Magic (2007) Supervising Prop Modeller
Dread (2009) Special effects technician
Gulliver's Travels (2010) Concept model maker
Captain America: The First Avenger (2011) Prop Modeller
Prometheus (2012) Prop Modeller
Snow White & the Huntsman (2012) Prop Modeller
Fast & Furious 6 (2013) Prop Modeller
Jupiter Ascending (2014) Prop Modeller
Kingsman: The Secret Service (2014) Prop Modeller
Pan (2015) Prop Modeller
Star Wars: The Force Awakens (2015) Senior Prop Modeller |
https://en.wikipedia.org/wiki/Significant%20Figures%20%28book%29 | Significant Figures: The Lives and Work of Great Mathematicians is a 2017 nonfiction book by British mathematician Ian Stewart, published by Basic Books. In the work, Stewart discusses the lives and contributions of 25 figures who are prominent in the history of mathematics. The 25 mathematicians selected are: Archimedes, Liu Hui, Muḥammad ibn Mūsā al-Khwārizmī, Madhava of Sangamagrama, Gerolamo Cardano, Pierre de Fermat, Isaac Newton, Euler, Fourier, Gauss, Lobachevsky, Galois, Ada Lovelace, Boole, Riemann, Cantor, Sofia Kovalevskaia, Poincaré, Hilbert, Emmy Noether, Ramanujan, Gödel, Turing, Mandelbrot, and Thurston.
Reception
In Kirkus Reviews, it was written that "even a popularizer as skilled and prolific as Stewart cannot expect general readers to fully digest his highly distilled explanations of what these significant figures did to resolve ever more complex conundrums as math advanced." However, the reviewer praised Stewart's sketches of the lives and times of the innovators. The book was described as "a text for teachers, precocious students, and intellectually curious readers unafraid to tread unfamiliar territory".
See also
In Pursuit of the Unknown: 17 Equations That Changed the World |
https://en.wikipedia.org/wiki/Functional%20residual%20capacity | Functional residual capacity (FRC) is the volume of air present in the lungs at the end of passive expiration. At FRC, the opposing elastic recoil forces of the lungs and chest wall are in equilibrium and there is no exertion by the diaphragm or other respiratory muscles.
Measurement
FRC is the sum of expiratory reserve volume (ERV) and residual volume (RV) and measures approximately 3000 mL in a 70 kg, average-sized male. It cannot be estimated through spirometry, since it includes the residual volume. In order to measure RV precisely, one would need to perform a test such as nitrogen washout, helium dilution or body plethysmography.
Positioning plays a significant role in altering FRC. It is highest when in an upright position and decreases as one moves from upright to supine/prone or Trendelenburg position. The greatest decrease in FRC occurs when going from 60° to totally supine at 0°. There is no significant change in FRC as position changes from 0° to Trendelenburg of up to −30°. However, beyond −30°, the drop in FRC is considerable.
Clinical significance
A lowered or elevated FRC is often an indication of some form of respiratory disease. In restrictive diseases, the decreased total lung capacity leads to a lower FRC. In obstructive diseases, in turn, the FRC is increased.
For instance, in emphysema, FRC is increased, because the lungs are more compliant and the equilibrium between the inward recoil of the lungs and outward recoil of the chest wall is disturbed. As such, patients with emphysema often have noticeably broader chests due to the relatively unopposed outward recoil of the chest wall. Total lung capacity also increases, largely as a result of increased functional residual capacity.
Obese and pregnant patients will have a lower FRC in the supine position due to the added tissue weight opposing the outward recoil of the chest wall thus reducing chest wall compliance. In pregnancy, this starts at about the fifth month and reaches 10-20% decrease a |
https://en.wikipedia.org/wiki/Devocalization | Devocalization (also known as ventriculocordectomy or vocal cordectomy; when performed on a dog debarking or bark softening; when performed on a cat demeowing or meow softening) is a surgical procedure where tissue is removed from the vocal cords.
Indications and contraindications
Devocalization is usually performed at the request of an animal owner (where the procedure is legally permitted), and may also be compelled by a court order. Owners or breeders generally request the procedure because of excessive animal vocalizations, complaining neighbors, or as an alternative to court-ordered euthanasia.
Contraindications include negative reaction to anesthesia, infection, bleeding, and pain. There is also the possibility that the removed tissue will grow back, or of scar tissue blocking the throat (both cases requiring further surgeries), though with the incisional technique the risk of fibrosis is virtually eliminated.
Effectiveness
The devocalization procedure does not take away a dog's ability to bark. Dogs will normally bark just as much as before the procedure. After the procedure, the sound will be softer, typically about half as loud as before, or less, and it is not as sharp or piercing.
Most devocalized dogs have a subdued "husky" bark, audible up to 20 metres.
Procedure
The surgery may be performed via the animal's mouth, with a portion of the vocal folds removed using a biopsy punch, cautery tool, scissors, or laser. The procedure may also be performed via an incision in the throat and through the larynx, which is a more invasive technique. All devocalization procedures require general anesthesia.
Reasons for excessive vocalization
Chronic, excessive vocalization may be due to improper socialization or training, stress, boredom, fear, or frustration. Up to 35% of dog owners report problems with barking, which can cause disputes and legal problems. The behavior is more common among some breeds of dog, such as the Shetland She |
https://en.wikipedia.org/wiki/IEC%2062379 | IEC 62379 is a control engineering standard for the common control interface for networked digital audio and video products. IEC 62379 uses Simple Network Management Protocol to communicate control and monitoring information.
It is a family of standards that specifies a control framework for networked audio and video equipment and is published by the International Electrotechnical Commission. It has been designed to provide a means for entering a common set of management commands to control the transmission across the network as well as other functions within the interfaced equipment.
Organization
The parts within this standard include:
Part 1: General,
Part 2: Audio,
Part 3: Video,
Part 4: Data,
Part 5: Transmission over networks,
Part 6: Packet transfer service,
Part 7: Measurement (for EBU ECN-IPM Group)
Part one is common to all equipment that conforms to IEC 62379, and a preview of the published document can be downloaded from the IEC web store, a section of the International Electrotechnical Commission web site. More information is available at the project group web site.
History
2 October 2008
Part 2, Audio, has now been published and a preview can be downloaded from the IEC web store, a section of the International Electrotechnical Commission web site.
31 August 2011
A first edition of Part 3, Video, has been submitted to the International Electrotechnical Commission (IEC) technical committee for the commencement of the standardization process for this part.
It contains the video MIB required by Part 7.
Part 7, Measurement, has been submitted to the International Electrotechnical Commission (IEC) technical committee for the commencement of the standardization process for this part.
This part specifies those aspects that are specific to the measurement requirements of the EBU ECN-IPM Group, a member of the Expert Communities Networks. An associated document, EBU TECH 3345, has recently been published by the European Broadcasting Union (EBU).
16 December |
https://en.wikipedia.org/wiki/IBM%20SAN%20Volume%20Controller | The IBM SAN Volume Controller (SVC) is a block storage virtualization appliance that belongs to the IBM System Storage product family. SVC implements an indirection, or "virtualization", layer in a Fibre Channel storage area network (SAN).
Architecture
The IBM 2145 SAN Volume Controller (SVC) is an inline virtualization or "gateway" device. It logically sits between hosts and storage arrays, presenting itself to hosts as the storage provider (target) and presenting itself to storage arrays as one big host. SVC is physically attached to one or several SAN fabrics.
The virtualization approach allows for non-disruptive replacements of any part in the storage infrastructure, including the SVC devices themselves. It also aims at simplifying compatibility requirements in strongly heterogeneous server and storage landscapes. All advanced functions are therefore implemented in the virtualization layer, which allows switching storage array vendors without impact. Finally, spreading an SVC installation across two or more sites (stretched clustering) enables basic disaster protection paired with continuous availability.
SVC nodes are always clustered, with a minimum of 2 and a maximum of 8 nodes, and linear scalability. Nodes are rack-mounted appliances derived from IBM System x servers, protected by redundant power supplies and integrated batteries. Earlier models featured external battery-backed power supplies. Each node has Fibre Channel ports simultaneously used for incoming, outgoing, and intracluster data traffic. Hosts may also be attached via FCoE and iSCSI Gbit Ethernet ports. Intracluster communication includes maintaining read/write cache integrity, sharing status information, and forwarding reads and writes to any port. These ports must be zoned together.
Write cache is protected by mirroring within a pair of SVC nodes, called I/O group. Virtualized resources (= storage volumes presented to hosts) are distributed across I/O groups to improve performance. Volum |
https://en.wikipedia.org/wiki/Product%20rule | In calculus, the product rule (or Leibniz rule or Leibniz product rule) is a formula used to find the derivatives of products of two or more functions. For two functions, it may be stated in Lagrange's notation as $(u\cdot v)' = u'\cdot v + u\cdot v'$, or in Leibniz's notation as $\frac{d(u\cdot v)}{dx} = \frac{du}{dx}\cdot v + u\cdot\frac{dv}{dx}.$
The rule may be extended or generalized to products of three or more functions, to a rule for higher-order derivatives of a product, and to other contexts.
Discovery
Discovery of this rule is credited to Gottfried Leibniz, who demonstrated it using differentials. (However, J. M. Child, a translator of Leibniz's papers, argues that it is due to Isaac Barrow.) Here is Leibniz's argument: Let u(x) and v(x) be two differentiable functions of x. Then the differential of uv is
$$d(u\cdot v) = (u + du)\cdot(v + dv) - u\cdot v = u\cdot dv + v\cdot du + du\cdot dv.$$
Since the term du·dv is "negligible" (compared to du and dv), Leibniz concluded that
$$d(u\cdot v) = v\cdot du + u\cdot dv,$$
and this is indeed the differential form of the product rule. If we divide through by the differential dx, we obtain
$$\frac{d(u\cdot v)}{dx} = v\cdot\frac{du}{dx} + u\cdot\frac{dv}{dx},$$
which can also be written in Lagrange's notation as
$$(u\cdot v)' = v\cdot u' + u\cdot v'.$$
Examples
Suppose we want to differentiate $h(x) = x^2 \sin(x)$. By using the product rule, one gets the derivative $h'(x) = 2x\sin(x) + x^2\cos(x)$ (since the derivative of $x^2$ is $2x$ and the derivative of the sine function is the cosine function).
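This example can be checked mechanically; a minimal sketch assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
h = x**2 * sp.sin(x)
# Differentiate and compare with the product-rule expansion.
assert sp.simplify(sp.diff(h, x) - (2*x*sp.sin(x) + x**2*sp.cos(x))) == 0
print(sp.diff(h, x))  # x**2*cos(x) + 2*x*sin(x)
```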
One special case of the product rule is the constant multiple rule, which states: if $c$ is a number and $f(x)$ is a differentiable function, then $cf(x)$ is also differentiable, and its derivative is $(cf)'(x) = c\,f'(x)$. This follows from the product rule since the derivative of any constant is zero. This, combined with the sum rule for derivatives, shows that differentiation is linear.
The rule for integration by parts is derived from the product rule, as is (a weak version of) the quotient rule. (It is a "weak" version in that it does not prove that the quotient is differentiable but only says what its derivative is if it is differentiable.)
Proofs
Limit definition of derivative
Let $h(x) = f(x)g(x)$, and suppose that $f$ and $g$ are each differentiable at $x$. We want to prove that $h$ is differentiable at $x$ and that its derivative, $h'(x)$, is given by $f'(x)g(x) + f(x)g'(x)$. To do this, (which is z |
https://en.wikipedia.org/wiki/SIMMON | SIMMON (Simulation Monitor) was a proprietary software testing system developed in the late 1960s in the IBM Product Test Laboratory, then at Poughkeepsie, New York. It was designed for the then-new line of System/360 computers as a vehicle for testing the software that IBM was developing for that architecture. SIMMON was first described at the IBM SimSymp 1968 symposium, held at Rye, New York.
SIMMON was a hypervisor, similar to the IBM CP-40 system that was being independently developed at the Cambridge Scientific Center at about that same time. The chief difference from CP-40 was that SIMMON supported a single virtual machine for testing of a single guest program running there. CP-40 supported many virtual machines for time-sharing production work. CP-40 evolved by many stages into the present VM/CMS operating system. SIMMON was a useful test vehicle for many years.
SIMMON was designed to dynamically include independently developed programs (test tools) for testing the target guest program. The SIMMON kernel maintained control over the hardware (and the guest) and coordinated invocation of the test tools.
Processing modes
Two modes of operation were provided:
Full simulation
Interrupt
Full simulation mode
In this mode, each instruction in the guest program was simulated without ever passing control directly to the guest. As an Instruction Set Simulator, SIMMON was unusual in that it simulated the same architecture as that on which it was running, i.e. that of the IBM System/360/370. While an order of magnitude slower than Interrupt mode (below), it allowed close attention to the operation of the guest. This would be the mode used by various instruction trace test tools.
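SIMMON's internals are not publicly documented in detail, so the following toy interpreter only illustrates the general shape of full simulation: the monitor decodes and executes each guest instruction itself, giving trace tools a hook at every step. The instruction set and all names are invented for the example.

```python
# Toy illustration of "full simulation": the monitor interprets every guest
# instruction and can invoke trace tools on each step (names are invented).
def run_full_simulation(program, trace_tools):
    regs = {"A": 0}
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        for tool in trace_tools:          # e.g. an instruction-trace test tool
            tool(pc, op, arg, dict(regs))
        if op == "LOAD":
            regs["A"] = arg
        elif op == "ADD":
            regs["A"] += arg
        elif op == "HALT":
            break
        pc += 1
    return regs

prog = [("LOAD", 1), ("ADD", 2), ("HALT", 0)]
trace = [lambda pc, op, arg, r: print(f"{pc:02d}: {op} {arg} regs={r}")]
print(run_full_simulation(prog, trace))   # {'A': 3}
```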
Interrupt mode
Interrupt mode (a/k/a Bump mode) constrained the guest program to run in user program state, with the SIMMON kernel handling all hardware interrupts and simulating all privileged instructions the guest attempted to execute. This mode could be used, for example, by a test tool |
https://en.wikipedia.org/wiki/G-VPR%20model | The g-VPR model is a model of human intelligence published in 2005 by psychology professors Wendy Johnson and Thomas J. Bouchard Jr. (Johnson & Bouchard, 2005). They developed the model by analyzing Gf-Gc theory, John Carroll's Three-stratum theory, and Vernon's verbal-perceptual model.
The g-VPR model is a four stratum model:
First stratum: Primary traits.
Second stratum: Broader than stratum I, but still narrow abilities.
Third stratum: Verbal, perceptual and rotation factors.
Fourth stratum: g factor.
Why Johnson and Bouchard claim the g-VPR model is better
Johnson and Bouchard made comparisons between the Gf-Gc, three-stratum, and verbal-perceptual models. They found that Vernon's verbal-perceptual model achieved better model-fit results than the other two, but still did not fit very well. Then, based on the verbal-perceptual model, Johnson and Bouchard "began by adding a memory factor (labeled content memory to distinguish it from the memory factor in the fluid-crystallized model)". They also made some further changes to improve model fit.
Based on results from the Comprehensive Ability Battery (CAB), they added or moved loadings to adjust the new model, which then became a new "visual-perceptual-memory" model.
However, Johnson and Bouchard found that "[i]solation of the tests involved resulted in identification of an additional third-stratum factor for image rotation, which also eliminated the contradictory cross-loadings." Therefore, finally, the new model was named as verbal-perceptual-rotation (VPR) model.
After Johnson and Bouchard arrived at this g-VPR structure of intelligence, they compared the model fit of g-VPR with the Gf-Gc model and the verbal-perceptual model. Results showed that the g-VPR model fitted better than the other models.
In the article, Johnson and Bouchard (2005) also specifically discussed the g-VPR model in relation to Gf-Gc theory. According to Garlick (2002), special neuronal connections are built under environmental stimuli when people's knowled |
https://en.wikipedia.org/wiki/Radu%20Grigorovici | Radu Grigorovici (November 20, 1911 – August 2, 2008) was a Romanian physicist.
Biography
Radu Grigorovici was born on November 20, 1911 in Chernivtsi, being the only son of the Bucovina Social Democrats Gheorghe and Tatiana Grigorovici. After graduating from Aron Pumnul High School (1928) he studied at the Chernivtsi University, and in 1931 he got a degree in chemical sciences and in 1934 a degree in physical sciences. At the same university, he was then a trainer at the Experimental Physics Laboratory of Professor Eugen Bădărău.
In 1936 he transferred to the Faculty of Sciences of the University of Bucharest, where Bădărău had been called as head of the Laboratory of Molecular, Acoustic and Optical Physics. In 1938 he obtained a PhD in physical sciences with a dissertation on the disruptive potential of mercury vapor. He climbed the ranks of the university hierarchy, becoming an associate professor in 1949. Between 1947 and 1957 he worked in parallel in the light source industry (the Lumen factory, then Electrofar) as a consulting engineer. He was forced, for political reasons, to give up his university career. He withdrew into research, becoming head of department (1960) and deputy scientific director (1963) at the Bucharest Institute of Physics of the RPR Academy; in 1970 the institute was subordinated to the State Committee for Nuclear Energy. In 1973 he applied for retirement, continuing his activity as a leading part-time scientific researcher; in 1977, following a reorganization, he was transferred to the Institute of Physics and Materials Technology, and after a year his employment contract was terminated.
Radu Grigorovici made original contributions to the physics of electric gas discharges, flame spectral analysis, light sources, physiological and instrumental optics, size systems and physical-physiological units. At the Bucharest Institute of Physics, he organized and led a group of researchers who studied the phenomena of transport in disordered thin met |
https://en.wikipedia.org/wiki/Spin%20connection | In differential geometry and mathematical physics, a spin connection is a connection on a spinor bundle. It is induced, in a canonical manner, from the affine connection. It can also be regarded as the gauge field generated by local Lorentz transformations. In some canonical formulations of general relativity, a spin connection is defined on spatial slices and can also be regarded as the gauge field generated by local rotations.
The spin connection occurs in two common forms: the Levi-Civita spin connection, when it is derived from the Levi-Civita connection, and the affine spin connection, when it is obtained from the affine connection. The difference between the two of these is that the Levi-Civita connection is by definition the unique torsion-free connection, whereas the affine connection (and so the affine spin connection) may contain torsion.
Definition
Let $e_a^{\ \mu}$ be the local Lorentz frame fields or vierbein (also known as a tetrad), which is a set of orthonormal spacetime vector fields that diagonalize the metric tensor
$$g_{\mu\nu}\,e_a^{\ \mu}\,e_b^{\ \nu} = \eta_{ab},$$
where $g_{\mu\nu}$ is the spacetime metric and $\eta_{ab}$ is the Minkowski metric. Here, Latin letters denote the local Lorentz frame indices; Greek indices denote general coordinate indices. This simply expresses that $g_{\mu\nu}$, when written in terms of the basis $e_a^{\ \mu}$, is locally flat. The Greek vierbein indices can be raised or lowered by the metric, i.e. $g_{\mu\nu}$ or $g^{\mu\nu}$. The Latin or "Lorentzian" vierbein indices can be raised or lowered by $\eta_{ab}$ or $\eta^{ab}$ respectively. For example, $e^{a\mu} = g^{\mu\nu}\,e^a_{\ \nu}$ and $e_{a\mu} = \eta_{ab}\,e^b_{\ \mu}$.
The torsion-free spin connection is given by
$$\omega_\mu^{\ ab} = e_\nu^{\ a}\left(\partial_\mu e^{\nu b} + \Gamma^\nu_{\ \sigma\mu}\,e^{\sigma b}\right),$$
where $\Gamma^\nu_{\ \sigma\mu}$ are the Christoffel symbols. This definition should be taken as defining the torsion-free spin connection, since, by convention, the Christoffel symbols are derived from the Levi-Civita connection, which is the unique metric-compatible, torsion-free connection on a Riemannian manifold. In general, there is no such restriction: the spin connection may also contain torsion.
Note that using the gravitational covariant derivative of the contravariant vector . Th |
https://en.wikipedia.org/wiki/Combinatorics%20of%20Experimental%20Design | Combinatorics of Experimental Design is a textbook on the design of experiments, a subject that connects applications in statistics to the theory of combinatorial mathematics. It was written by mathematician Anne Penfold Street and her daughter, statistician Deborah Street, and published in 1987 by the Oxford University Press under their Clarendon Press imprint.
Topics
The book has 15 chapters. Its introductory chapter covers the history and applications of experimental designs, it has five chapters on balanced incomplete block designs and their existence, and three on Latin squares and mutually orthogonal Latin squares. Other chapters cover resolvable block designs, finite geometry, symmetric and asymmetric factorial designs, and partially balanced incomplete block designs.
After this standard material, the remaining two chapters cover less-standard material. The penultimate chapter covers miscellaneous types of designs including circular block designs, incomplete Latin squares, and serially balanced sequences. The final chapter describes specialized designs for agricultural applications. The coverage of the topics in the book includes examples, clearly written proofs, historical references, and exercises for students.
Audience and reception
Although intended as an advanced undergraduate textbook, this book can also be used as a graduate text, or as a reference for researchers. Its main prerequisites are some knowledge of linear algebra and linear models, but some topics touch on abstract algebra and number theory as well.
Although disappointed by the omission of some topics, reviewer D. V. Chopra writes that the book "succeeds remarkably well" in connecting the separate worlds of combinatorics and statistics.
And Marshall Hall, reviewing the book, called it "very readable" and "very satisfying".
Related books
Other books on the combinatorics of experimental design include Statistical Design and Analysis of Experiments (John, 1971), Constructions and Combinat |
https://en.wikipedia.org/wiki/Dragon%20Knight%20II | Dragon Knight II (ドラゴンナイトII) is a fantasy-themed eroge role-playing video game in the Dragon Knight franchise that was originally developed and published by ELF Corporation in 1990-1991 only in Japan as the first sequel to the original Dragon Knight game from 1989. The game is an erotic dungeon crawler in which a young warrior Takeru fights to lift a witch's curse that has turned girls into monsters.
Following the commercial and critical success of Dragon Knight II, ELF followed up with Dragon Knight III / Knights of Xentar in 1991. A censored remake of Dragon Knight II was published by NEC Avenue in 1992.
Gameplay
Dragon Knight II is available only in Japanese. Its gameplay system has not changed much since the first Dragon Knight game, as it is still a standard dungeon crawler with a first-person perspective and 2D graphics. The player spends most of the time navigating dungeon-like mazes and fighting enemies. As progress is made, the mazes become more complicated, but as in the first game there is an aid for the player in the form of a mini-map with grid coordinates. The player can also visit shops and converse with non-hostile NPCs.
The game starts with just one player character, Takeru, but two other characters join up later on. The game's battle system has also undergone minor changes. It still features turn-based battles that are mostly randomly generated, but the fights are better balanced than in the first game. The player can attack, defend, and use spells and items to deal with various types of female enemies (berserker, banshee, catgirl, centaur, elf, harpy, ninja, mummy, werewolf, and so on), who are being fought only one at a time. These enemies are actually girls who have been transformed into monsters, and whenever the player character fights off one of them, the subdued enemy loses her clothing. Later, when the enemies revert to their normal self, in their gratitude they offer themselves to have sex with the protagonist in a cutscene (ce |
https://en.wikipedia.org/wiki/Interface%20bloat | In software design, interface bloat (also called fat interfaces by Bjarne Stroustrup and Refused Bequests by Martin Fowler) occurs when an interface incorporates too many operations on some data, only to find that most of the objects cannot perform the given operations.
Interface bloat is an example of an anti-pattern. One might consider using the visitor pattern, the adapter pattern, or interface segregation instead.
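A schematic sketch of the contrast (hypothetical class names; Python stands in for any language with interfaces):

```python
from abc import ABC, abstractmethod

# Bloated: every implementer must provide operations most of them can't perform.
class FatDocument(ABC):
    @abstractmethod
    def render(self): ...
    @abstractmethod
    def print_hardcopy(self): ...
    @abstractmethod
    def fax(self): ...            # most implementers cannot do this

# Segregated: small, capability-specific interfaces; classes implement
# only what they actually support.
class Renderable(ABC):
    @abstractmethod
    def render(self): ...

class Printable(ABC):
    @abstractmethod
    def print_hardcopy(self): ...

class WebPage(Renderable):        # no dummy fax()/print_hardcopy() needed
    def render(self):
        return "<html>...</html>"

print(WebPage().render())
```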
|
https://en.wikipedia.org/wiki/Signed%20area | In mathematics, the signed area or oriented area of a region of an affine plane is its area with orientation specified by the positive or negative sign, that is, "plus" or "minus". More generally, the signed area of an arbitrary surface region is its surface area with specified orientation. When the boundary of the region is a simple curve, the signed area also indicates the orientation of the boundary.
Planar area
Polygons
The mathematics of ancient Mesopotamia, Egypt, and Greece had no explicit concept of negative numbers or signed areas, but had notions of shapes contained by some boundary lines or curves, whose areas could be computed or compared by pasting shapes together or cutting portions away, amounting to addition or subtraction of areas. This was formalized in Book I of Euclid's Elements, which leads with several common notions including "if equals are added to equals, then the wholes are equal" and "if equals are subtracted from equals, then the remainders are equal" (among planar shapes, those of the same area were called "equal"). The propositions in Book I concern the properties of triangles and parallelograms, including for example that parallelograms with the same base and in the same parallels are equal and that any triangle with the same base and in the same parallels has half the area of these parallelograms, and a construction for a parallelogram of the same area as any "rectilinear figure" (simple polygon) by splitting it into triangles. Greek geometers often compared planar areas by quadrature (constructing a square of the same area as the shape), and Book II of the Elements shows how to construct a square of the same area as any given polygon.
Just as negative numbers simplify the solution of algebraic equations by eliminating the need to flip signs in separately considered cases when a quantity might be negative, a concept of signed area analogously simplifies geometric computations and proofs. Instead of subtracting one area from anot |
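For a simple polygon given by its vertices, the signed area described above can be computed with the shoelace formula; the sign then encodes the orientation of the boundary. A minimal sketch (function name invented):

```python
def signed_area(vertices):
    # Shoelace formula: positive for counterclockwise orientation,
    # negative for clockwise.
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return s / 2.0

square = [(0, 0), (1, 0), (1, 1), (0, 1)]      # counterclockwise
print(signed_area(square))                     # 1.0
print(signed_area(list(reversed(square))))     # -1.0
```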
https://en.wikipedia.org/wiki/Kyorochan | Kyorochan is a fictional bird that serves as a mascot for a Japanese brand of Morinaga chocolate, known as ChocoBall. He first appeared in 1967 in the anime television series Uchuu-shonen Soran (Space Boy Soran). In the same year, Kyorochan replaced Chappy, a space-themed squirrel who had been the original mascot since 1965.
Kyorochan's popularity began to take off in 1987, when TV commercials starring Kyorochan, as well as commercial songs performed by famous artists were made. In 1991, the name "Kyorochan" was printed on the boxes of ChocoBall candies. However, that same year, the sales of merchandise, such as stuffed animals and related products exceeded the sales of the ChocoBall brand itself.
Anime
An anime adaptation starring Kyorochan, with the same name, was produced by TV Tokyo, NAS, and SPE Visual Works and animated by Group TAC. The series focuses on the adventures of Kyorochan as he lives on Angel Island, a large village home to various other birds. The first theme song was "Halation Summer", performed by Coconuts Musume, while the first ending theme was "Tsuukagu Ro", performed by Whiteberry. These were replaced by original songs from episode 27.
The series was released in very limited amounts in the DVD format, with box-sets being rare. International releases of the anime include Hungary (Kukucska Kalandjai), Romania (with the name intact), Taiwan (大嘴鳥), the Czech Republic (Červánek), and South Korea (왕부리 팅코). The Indian television channel Pogo began broadcasting Kyorochan from May 31, 2010, a decade after the original anime.
An obscure English dub appears to have been made of the series, with Richie Campos the only notable voice actor. Campos's name is curiously mentioned on the series' page in Anime News Network's encyclopedia as voicing Don Girori, Makumou, Dementon, Girosshu, and the narrator. Other information and footage about this lost dub currently remain unknown.
Characters
Kyoro-chan (voiced by Miyako Ito) - The titular character and a cute par |
https://en.wikipedia.org/wiki/Mozilla%20Persona | Mozilla Persona was a decentralized authentication system for the web, based on the open BrowserID protocol prototyped by Mozilla and standardized by IETF. It was launched in July 2011, but after failing to achieve traction, Mozilla announced in January 2016 plans to decommission the service by the end of the year.
History and motivations
Persona was launched in July 2011 and shared some of its goals with similar authentication systems like OpenID or Facebook Connect, but it was different in several ways:
It used email addresses as identifiers
It was more focused on privacy
It was intended to be fully integrated in the browser (relying heavily on JavaScript).
The privacy goal was motivated by the fact that the identity provider does not know which website the user is signing in to. It was first released in July 2011 and fully deployed by Mozilla on its own websites in January 2012.
In March 2014, Mozilla indicated it was dropping full-time developers from Persona and moving the project to community ownership. Mozilla indicated, however, that it had no plans to decommission Persona and would maintain some level of involvement such as in maintenance and reviewing pull requests.
Persona services have been shut down since November 30, 2016.
Principles and implementation
Persona was inspired by the VerifiedEmailProtocol, which is now known as the BrowserID protocol. It uses any email address of the user to identify its owner. The protocol involves the browser, an identity provider, and any compliant website.
The browser, the provider and the website
The browser stores a list of the user's verified email addresses (certificates issued by the identity providers) and demonstrates the user's ownership of the addresses to the website using cryptographic proof.
The certificates must be renewed every 24 hours by logging into the identity provider (which will usually mean entering the email and a password in a Web form on the identity provider's site). Once done, they will be usab |
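A toy sketch of this certificate-and-assertion flow; HMAC stands in for the real public-key signatures, and none of the field names are the actual BrowserID wire format:

```python
import hmac, hashlib, json, time

def sign(key, payload):
    msg = json.dumps(payload, sort_keys=True).encode()
    return msg, hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key, msg, sig):
    return hmac.compare_digest(sig, hmac.new(key, msg, hashlib.sha256).hexdigest())

IDP_KEY = b"identity-provider-secret"    # stands in for the IdP's signing key
BROWSER_KEY = b"browser-session-secret"  # stands in for the user's key pair

# 1. The identity provider issues a certificate binding the email address
#    to the browser's key, valid for 24 hours.
cert = sign(IDP_KEY, {"email": "user@example.org",
                      "expires": time.time() + 24 * 3600})

# 2. The browser signs a short-lived assertion naming the website ("audience"),
#    so the identity provider never learns where the user logs in.
assertion = sign(BROWSER_KEY, {"audience": "https://site.example",
                               "expires": time.time() + 120})

# 3. The website checks both signatures (expiry checks omitted for brevity).
print(verify(IDP_KEY, *cert) and verify(BROWSER_KEY, *assertion))  # True
```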
https://en.wikipedia.org/wiki/2019%20in%20paleobotany | This article records new taxa of fossil plants that are scheduled to be described during the year 2019, as well as other significant discoveries and events related to paleobotany that are scheduled to occur in the year 2019.
Mosses
Liverworts
Ferns and fern allies
Lycophytes
Conifers
Araucariaceae
Cupressaceae
Pinaceae
Podocarpaceae
Other conifers
Other seed plants
Flowering plants
Basal angiosperms
Nymphaeales
Other basal angiosperms
Monocots
Alismatales
Arecales
Dioscoreales
Poales
Magnoliids
Laurales
Magnoliales
Piperales
Unplaced non-eudicots
Chloranthales
Basal eudicots
Proteales
Ranunculales
Superasterids
Aquifoliales
Asterales
Boraginales
Caryophyllales
Cornales
Ericales
Gentianales
Icacinales
Superrosids
Malvids
Malvales
Sapindales
Other malvids
Fabids
Fabales
Fagales
Malpighiales
Oxalidales
Rosales
Unplaced superrosid eudicots
Other angiosperms
Other plants
General research
Description of fossils of filamentous green algae from the Early Devonian Rhynie chert (Scotland) is published by Wellman, Graham & Lewis (2019).
Cretaceous alga Falsolikanella campanensis, originally assigned to the tribe Diploporeae within the green alga order Dasycladales, is transferred to the genus Actinoporella within the tribe Acetabularieae, family Polyphysaceae by Barattolo et al. (2019).
A study on the impact of the Cretaceous–Paleogene extinction event on European charophytes is published by Vicente, Csiki-Sava & Martín-Closas (2019).
The oldest known trilete spore assemblages reported so far are described from the Sandbian successions from Motala (central Sweden) by Rubinstein & Vajda (2019).
A study on the composition and distribution of dispersed spore assemblages from Middle Devonian deposits of northern Spain, and on their implications for inferring the nature of the Kačák Event, is published by Askew & Wellman (2019).
A study on the morphology of the spore taxon Lagenoisporites magnus from the Carboniferous (Tourn |
https://en.wikipedia.org/wiki/MARIACHI | MARIACHI, the Mixed Apparatus for Radar Investigation of Cosmic-rays of High Ionization, is an apparatus for the detection of ultra-high-energy cosmic rays (UHECR) via bi-static radar interferometry using VHF transmitters.
MARIACHI is also the name of the research project created and directed by Brookhaven National Laboratory (BNL) on Long Island, New York, initially intended to verify the concept that VHF signals can be reflected off the ionization patch produced by a cosmic ray shower. Project emphasis subsequently shifted to the attempted detection of radio wave reflections from a high energy ionization beam apparatus located at BNL's NASA Space Radiation Laboratory.
Its inventors hope the MARIACHI apparatus will detect UHECR over much larger areas than previously possible, and that it will also detect ultra-high-energy neutrino flux. The ground array detectors are scintillator arrays that are built and operated by high school students and teachers.
The MARIACHI project, being in essence a public outreach project for high school and undergraduate students more than a full-scale science experiment, has continued in a sporadic fashion since its conception in the late 2000s. For example, a high school in New York continued MARIACHI measurements over an 8-year period between 2008 and 2016; the results of these measurements were published in 2016. Measurements have also been performed by other institutions (high schools, community colleges, ...).
The main researcher behind MARIACHI is Helio Takai (Brookhaven National Laboratory, Stony Brook University, as of 2019 Pratt Institute). |
https://en.wikipedia.org/wiki/Pacifastin | Pacifastin is a family of serine proteinase inhibitors found in arthropods. Pacifastin inhibits the serine peptidases trypsin and chymotrypsin.
All pacifastin members that have been characterized at the molecular level are precursor peptides composed of an N-terminal signal sequence followed by a precursor domain and a variable number of inhibitor domains. Each of these inhibitor domains carries a six-cysteine motif – see below.
The first family members to be identified were isolated from Locusta migratoria migratoria (migratory locust) which were HI, LMCI-1 (PMP-D2) and LMCI-2 (PMP-C). A further five members, SGPI-1 to 5, were then isolated from Schistocerca gregaria (desert locust), and a heterodimeric serine protease inhibitor was isolated from the haemolymph of Pacifastacus leniusculus (Signal crayfish), and named pacifastin.
Function
Peptide proteinase inhibitors are in many cases synthesised as part of a larger precursor protein, referred to as a propeptide or zymogen, which remains inactive until the precursor domain is cleaved off in the lysosome, the precursor domain preventing access of the substrate to the active site until necessary. Proteinase inhibitors destined for secretion have an additional N-terminal signal-peptide domain which will be cleaved by a signal-peptidase. Removal of these one or two N-terminal domains, either by interaction with a second peptidase or by autocatalytic cleavage, will activate the zymogen.
Very little is known about the endogenous function of pacifastin-like inhibitors except that they may play roles in arthropod immunity and in regulation of the physiological processes involved in insect reproduction.
Structure
The inhibitor unit of pacifastin is a conserved pattern of six cysteine residues (Cys1 – Xaa9–12 – Cys2 – Asn – Xaa – Cys3 – Xaa – Cys4 – Xaa2–3 – Gly – Xaa3–6 – Cys5 – Thr – Xaa3 – Cys6). Detailed analysis of the 3-D structure shows that these six residues form three disulfide bridges (Cys1–4, Cys |
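The conserved spacing of this motif translates directly into a sequence pattern; a sketch using a regular expression (the toy sequence is invented to match the spacing):

```python
import re

# Six-cysteine pacifastin motif as a regular expression
# (Xaa rendered as '.', repeat counts taken from the pattern above).
PACIFASTIN_MOTIF = re.compile(r"C.{9,12}CN.C.C.{2,3}G.{3,6}CT.{3}C")

# Invented toy sequence laid out to match the spacing, for illustration only.
seq = "C" + "A"*9 + "CN" + "A" + "C" + "A" + "C" + "AA" + "G" + "AAA" + "CT" + "AAA" + "C"
print(bool(PACIFASTIN_MOTIF.search(seq)))  # True
```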
https://en.wikipedia.org/wiki/Novartis | Novartis AG is a Swiss multinational pharmaceutical corporation based in Basel, Switzerland. Consistently ranked in the global top five, Novartis is one of the largest pharmaceutical companies in the world and was the fourth largest by revenue in 2022.
Novartis manufactures the drugs clozapine (Clozaril), diclofenac (Voltaren; sold to GlaxoSmithKline in 2015 deal), carbamazepine (Tegretol), valsartan (Diovan), imatinib mesylate (Gleevec/Glivec), cyclosporine (Neoral/Sandimmune), letrozole (Femara), methylphenidate (Ritalin; production ceased 2020), terbinafine (Lamisil), deferasirox (Exjade), and others.
In March 1996, the companies Ciba-Geigy and Sandoz merged to form Novartis, considered the largest corporate merger in history at the time. The pharmaceutical and agrochemical divisions of both companies formed Novartis as an independent entity. The name Novartis was derived from the Latin novae artes ("new skills").
After the merger, other Ciba-Geigy and Sandoz businesses were sold, or, like Ciba Specialty Chemicals, spun off as independent companies. The Sandoz brand disappeared for three years, but was revived in 2003 when Novartis consolidated its generic drugs businesses into a single subsidiary and named it Sandoz. Novartis divested its agrochemical and genetically modified crops business in 2000 with the spinout of Syngenta in partnership with AstraZeneca, which also divested its agrochemical business. The new company also made a series of acquisitions in order to strengthen its core businesses.
Novartis is a full member of the European Federation of Pharmaceutical Industries and Associations (EFPIA), the International Federation of Pharmaceutical Manufacturers and Associations (IFPMA), and the Pharmaceutical Research and Manufacturers of America (PhRMA). Novartis is the third most valuable pharmaceutical company in Europe, after Novo Nordisk and Roche.
Corporate structure
Novartis AG is a publicly traded Swiss holding company that ope |
https://en.wikipedia.org/wiki/Rho%20factor | A ρ factor (Rho factor) is a bacterial protein involved in the termination of transcription. Rho factor binds to the transcription terminator pause site, an exposed region of single stranded RNA (a stretch of 72 nucleotides) after the open reading frame at C-rich/G-poor sequences that lack obvious secondary structure.
Rho factor is an essential transcription protein in bacteria. In Escherichia coli, it is a ~274.6 kD hexamer of identical subunits. Each subunit has an RNA-binding domain and an ATP-hydrolysis domain. Rho is a member of the RecA/SF5 family of ATP-dependent hexameric helicases that function by wrapping nucleic acids around a single cleft extending around the entire hexamer. Rho functions as an ancillary factor for RNA polymerase.
There are two types of transcriptional termination in bacteria, rho-dependent termination and intrinsic termination (also called Rho-independent termination). Rho-dependent terminators account for about half of the E. coli factor-dependent terminators. Other termination factors discovered in E. coli include Tau and nusA. Rho-dependent terminators were first discovered in bacteriophage genomes.
Function
A Rho factor acts on an RNA substrate. Rho's key function is its helicase activity, for which energy is provided by an RNA-dependent ATP hydrolysis. The initial binding site for Rho is an extended (~70 nucleotides, sometimes 80–100 nucleotides) single-stranded region, rich in cytosine and poor in guanine, called the rho utilisation site (rut), in the RNA being synthesised, upstream of the actual terminator sequence. Several rho binding sequences have been discovered. No consensus is found among these, but the different sequences each seem specific, as small mutations in the sequence disrupts its function. Rho binds to RNA and then uses its ATPase activity to provide the energy to translocate along the RNA until it reaches the RNA–DNA helical region, where it unwinds the hybrid duplex structure. RNA polymerase pauses at the t |
https://en.wikipedia.org/wiki/Internet%20Theatre%20Database | The Internet Theatre Database (ITDb) is an online database with information about plays, playwrights, actors, legitimate theatre, musical theatre, Broadway shows, and similar theatrical information.
The website is run by several volunteer theatre aficionados, each contributing material as time permits. Somewhat similar to the Internet Broadway Database, the site's creators endeavor to include theatre outside of New York City by indexing London and Off-Broadway productions, national tours, and regional theatre. Modelled on the considerably larger Internet Movie Database, the site indexes by six categories: (1) show/play name; (2) people (actor, writer, or director); (3) theatre facility; (4) song title; (5) character/role; and (6) production role. Each day, the site also shows what well-known productions opened or closed on that date at important theatres in the past several decades.
As of July 2020, it has not been updated in over a decade.
See also
Internet Broadway Database (IBDb)
Internet Movie Database (IMDb) |
https://en.wikipedia.org/wiki/Toshiba%20Pasopia%20IQ | The Toshiba Pasopia IQ is a series of MSX compatible machines released by Toshiba between 1983 and 1985. It is not to be confused with a different computer line (unrelated to MSX) with the similar name Toshiba Pasopia.
HX-10 series
The HX-10 was released in the fall of 1983. There is only one ROM cartridge slot, but an optional expansion slot is available. Several models exist (D, DP, DPN, F, E and S), targeting different markets. For example, the HX-10DPN is equipped with an RGB 21-pin terminal, but other connections (RF, composite video) are absent; the HX-10S only has 16KB of RAM.
HX-20 series
The HX-20, released in the fall of 1984, is equipped with 64KB of RAM. It has a monaural/stereo sound selector switch. As with the HX-10 series, several models exist (HX-21, HX-22, HX-23). The later models have an RGB 21-pin video output. The HX-23 is MSX2 compatible and comes with 64KB of VRAM. The HX-23F is equipped with an RS-232 interface and comes with 128KB of VRAM.
HX-30 series
The HX-30, MSX compatible and released in 1985, came with 16KB of RAM; later models came with 64KB, an RGB 21-pin video output, and stereo output from the programmable sound generator.
The HX-33 model has 128KB of VRAM and was MSX2 compatible, with an integrated keyboard. The next model, the HX-34, added a floppy disk drive.
Model list
The following table presents a condensed model list of the MSX compatible computers released by Toshiba.
See also
Toshiba Pasopia
Toshiba Pasopia 5
Toshiba Pasopia 7
Toshiba Pasopia 16 |
https://en.wikipedia.org/wiki/Microbicide%20Trials%20Network | The Microbicide Trials Network (MTN) is the leading United States government-funded research organization working in the field of microbicides for sexually transmitted diseases. The MTN particularly focuses on research into microbicides which would prevent HIV infection. The MTN is a member of HANC.
Clinical trials
The MTN's current clinical trial is the Vaginal and Oral Interventions to Control the Epidemic (VOICE) study. |
https://en.wikipedia.org/wiki/Norman%20A.%20Ough | Norman Arthur Ough (10 November 1898 – 3 August 1965) was a marine model maker whose models of Royal Navy warships are regarded as among the very finest of warship models.
Family and early life
Ough was born in Leytonstone, London. His father, Arthur Ough (1863–1946), was an architect, surveyor and civil engineer. At the age of two Ough accompanied his parents to Hong Kong, where his father was employed as an architect for the University of Hong Kong and the Kowloon-Canton Railway, remaining there for four years. He was educated at Highfield School, Liphook, Hampshire and Bootham School in York.
Later life
From the mid-1930s Ough lived in a flat at 98 Charing Cross Road, London. He never married and there is much anecdotal evidence that he lived a frugal, even impoverished, lifestyle in which model-making was a totally absorbing pursuit even to the extent of twice being hospitalised for failing to eat adequately due to concentration on his work.
Models
Many of Ough's models are on display or held in store in museums including the Imperial War Museum, the National Maritime Museum and the Royal United Services Museum. One of his earlier models was of the battleship HMS Queen Elizabeth, which he made for Lord Howe, who presented it to Earl Beatty. There followed commissions for his models from many museums. At one time he was employed by Earl Mountbatten to make models of ships on which Mountbatten had served; Mountbatten remarked in a reply dated 20 July 1979 to a letter received from a visitor to his Broadlands estate: "How interesting that the great model maker, Norman Ough, was a cousin of yours... I was told by the maker of the model of HMS Hampshire, also on display, that other model makers considered Norman Ough, the greatest master of his craft of this century."
As at September 2017, these models were located at the collections and research facility at No. 1 Smithery, Chatham Historic Dockyard.
In an article written for an edition of the magazine Model Maker about his model |
https://en.wikipedia.org/wiki/PRKAR1A | cAMP-dependent protein kinase type I-alpha regulatory subunit is an enzyme that in humans is encoded by the PRKAR1A gene.
Function
cAMP is a signaling molecule important for a variety of cellular functions. cAMP exerts its effects by activating the cAMP-dependent protein kinase A (PKA), which transduces the signal through phosphorylation of different target proteins. The inactive holoenzyme of PKA is a tetramer composed of two regulatory and two catalytic subunits. cAMP causes the dissociation of the inactive holoenzyme into a dimer of regulatory subunits bound to four cAMP and two free monomeric catalytic subunits. Four different regulatory subunits and three catalytic subunits of PKA have been identified in humans. The protein encoded by this gene is one of the regulatory subunits. This protein was found to be a tissue-specific extinguisher that down-regulates the expression of seven liver genes in hepatoma x fibroblast hybrids. Three alternatively spliced transcript variants encoding the same protein have been observed.
Clinical significance
Functional null mutations in this gene cause Carney complex (CNC), an autosomal dominant multiple neoplasia syndrome. This gene can fuse to the RET protooncogene by gene rearrangement and form the thyroid tumor-specific chimeric oncogene known as PTC2.
Mutation of PRKAR1A leads to the Carney complex, a syndrome associating multiple endocrine tumors.
Interactions
PRKAR1A has been shown to interact with:
AKAP10,
AKAP1,
AKAP4,
ARFGEF1,
ARFGEF2,
Grb2,
MYO7A,
PRKAR1B, and
UBE2M.
See also
cAMP-dependent protein kinase |
https://en.wikipedia.org/wiki/British%20Hydromechanics%20Research%20Association | The British Hydromechanics Research Association is a former government research association that supplies consulting engineering services in fluid dynamics.
History
It was formed on 20 September 1947 in Essex, under the Companies Act 1929.
It had moved to Bedfordshire by the 1960s. In the 1970s it was known as BHRA Fluid Engineering.
Next door was the National Centre for Materials Handling, set up by the Ministry of Technology (MinTech), later known as the National Materials Handling Centre.
On 16 October 1989 it became a private consultancy.
Fluid engineering
The BHRA conducted most of the research for the aerodynamics of British power station infrastructure in the 1960s, such as cooling towers.
In 1966 it designed an early Thames flood barrier.
Computational fluid dynamics
It developed early CFD software.
Visits
On Tuesday 21 June 1966, the new Bedfordshire laboratories were opened by the Duke of Edinburgh.
Structure
The organisation, Framatome BHR, is now in Cranfield in west Bedfordshire, near the M1.
See also
Bierrum, which has built and designed Britain's power station cooling towers since 1965, is also in Bedfordshire.
https://en.wikipedia.org/wiki/Blain%20%28animal%20disease%29 | Blain was an animal disease of unknown etiology that was well known in the 18th and 19th centuries. It is unclear whether it is still extant, or what modern disease it corresponds to.
According to Ephraim Chambers' 18th-century Cyclopaedia, or an Universal Dictionary of Arts and Sciences, blain was "a distemper" (in the archaic eighteenth-century sense of the word, meaning "disease") occurring in animals, consisting of a "Bladder growing on the Root of the Tongue against the Wind-Pipe", which "at length swelling, stops the Wind". It was thought to occur "by great chafing, and heating of the Stomach".
Blain is also mentioned in Cattle: Their Breeds, Management, and Diseases, published in 1836, where it is also identified as "gloss-anthrax". W. C. Spooner's 1888 book The History, Structure, Economy and Diseases of the Sheep also identifies blain as being the same as gloss-anthrax.
A description of blain is provided in the Horticulture column of the Monday Morning edition of the Belfast News-Letter, September 13, 1852. Headline: The Prevailing Epidemic Disease in Horned Cattle - The Mouth and Food Disease. "There are two diseases of the mouth - one of a very serious character, which is called blain (gloss anthrax) or inflammation of the tongue. This is a very virulent disease, and sometimes of a very rapid action, and which should be at once attended to, and not trifled with; but though it always exhibits itself in inflammation of the membranes of the mouth, beneath or above the tongue, and the sides of the tongue itself, it soon extends through the whole system, and, according to the best veterinarians, involves inflammation and gangrene of the oesophagus and intestines. The symptoms are many, the eyes are inflamed, and constantly weeping; swellings appear round the eyes and some other parts of the body; the pulse quick, heaving of the flanks, and the bowels sometimes constipated. Such are the general symptoms of this formidable disease, more or less aggravated by |
https://en.wikipedia.org/wiki/Frege%27s%20theorem | In metalogic and metamathematics, Frege's theorem is a metatheorem that states that the Peano axioms of arithmetic can be derived in second-order logic from Hume's principle. It was first proven, informally, by Gottlob Frege in his 1884 Die Grundlagen der Arithmetik (The Foundations of Arithmetic) and proven more formally in his 1893 Grundgesetze der Arithmetik I (Basic Laws of Arithmetic I). The theorem was re-discovered by Crispin Wright in the early 1980s and has since been the focus of significant work. It is at the core of the philosophy of mathematics known as neo-logicism (at least of the Scottish School variety).
Overview
In The Foundations of Arithmetic (1884), and later, in Basic Laws of Arithmetic (vol. 1, 1893; vol. 2, 1903), Frege attempted to derive all of the laws of arithmetic from axioms he asserted as logical (see logicism). Most of these axioms were carried over from his Begriffsschrift; the one truly new principle was one he called the Basic Law V (now known as the axiom schema of unrestricted comprehension): the "value-range" of the function f(x) is the same as the "value-range" of the function g(x) if and only if ∀x[f(x) = g(x)]. However, not only did Basic Law V fail to be a logical proposition, but the resulting system proved to be inconsistent, because it was subject to Russell's paradox.
The inconsistency in Frege's Grundgesetze overshadowed Frege's achievement: according to Edward Zalta, the Grundgesetze "contains all the essential steps of a valid proof (in second-order logic) of the fundamental propositions of arithmetic from a single consistent principle." This achievement has become known as Frege's theorem.
Frege's theorem in propositional logic
In propositional logic, Frege's theorem refers to this tautology:
(P → (Q→R)) → ((P→Q) → (P→R))
The theorem already holds in one of the weakest logics imaginable, the constructive implicational calculus. Under the Brouwer–Heyting–Kolmogorov interpretation, its proof is the function f ↦ (g ↦ (x ↦ f(x)(g(x)))).
In words: given a method f that converts a proof of P into a proof of Q → R, and a method g that converts a proof of P into a proof of Q, any proof x of P yields f(x), a proof of Q → R, and g(x), a proof of Q; applying the former to the latter produces a proof of R.
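Under the Curry–Howard correspondence, this tautology is exactly the type of the S combinator. A minimal machine check in Lean 4 (an illustrative sketch, not part of the original article):

```lean
-- Frege's tautology, proved constructively: the proof term is the
-- S combinator, mirroring the BHK reading given above.
theorem frege {P Q R : Prop} :
    (P → (Q → R)) → ((P → Q) → (P → R)) :=
  fun f g x => f x (g x)
```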
https://en.wikipedia.org/wiki/Steiner%20point%20%28computational%20geometry%29 | In computational geometry, a Steiner point is a point that is not part of the input to a geometric optimization problem but is added during the solution of the problem, to create a better solution than would be possible from the original points alone.
The name of these points comes from the Steiner tree problem, named after Jakob Steiner, in which the goal is to connect the input points by a network of minimum total length. If the input points alone are used as endpoints of the network edges, then the shortest network is their minimum spanning tree. However, shorter networks can often be obtained by adding Steiner points,
and using both the new points and the input points as edge endpoints.
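As a concrete illustration (a toy computation, not from the article): for the three corners of a unit equilateral triangle, the minimum spanning tree uses two sides with total length 2, while adding a single Steiner point at the triangle's center shortens the network to √3 ≈ 1.732.

```python
import math

# Vertices of a unit equilateral triangle.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]

# Minimum spanning tree on the inputs alone: two unit-length edges.
mst_length = 2.0

# One Steiner point at the center (the Fermat point, by symmetry),
# connected by three spokes of length 1/sqrt(3) each.
cx = sum(p[0] for p in pts) / 3
cy = sum(p[1] for p in pts) / 3
steiner_length = sum(math.hypot(p[0] - cx, p[1] - cy) for p in pts)

print(mst_length)                # 2.0
print(round(steiner_length, 4))  # 1.7321, i.e. sqrt(3)
```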
Another problem that uses Steiner points is Steiner triangulation. The goal is to partition an input (such as a point set or polygon) into triangles, meeting edge-to-edge. Both input points and Steiner points may be used as triangle vertices.
See also
Delaunay refinement |
https://en.wikipedia.org/wiki/Homoscedasticity%20and%20heteroscedasticity | In statistics, a sequence (or a vector) of random variables is homoscedastic if all its random variables have the same finite variance; this is also known as homogeneity of variance. The complementary notion is called heteroscedasticity, also known as heterogeneity of variance. The spellings homoskedasticity and heteroskedasticity are also frequently used.
Assuming a variable is homoscedastic when in reality it is heteroscedastic results in unbiased but inefficient point estimates and in biased estimates of standard errors, and may result in overestimating the goodness of fit as measured by the Pearson coefficient.
The existence of heteroscedasticity is a major concern in regression analysis and the analysis of variance, as it invalidates statistical tests of significance that assume that the modelling errors all have the same variance. While the ordinary least squares estimator is still unbiased in the presence of heteroscedasticity, it is inefficient and inference based on the assumption of homoskedasticity is misleading. In that case, generalized least squares (GLS) was frequently used in the past. Nowadays, standard practice in econometrics is to use heteroskedasticity-consistent standard errors instead of GLS, as GLS can exhibit strong bias in small samples if the actual skedastic function is unknown.
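A minimal sketch of this practice on simulated data, assuming the statsmodels package ("HC1" is one of its standard heteroskedasticity-consistent covariance estimators):

```python
import numpy as np
import statsmodels.api as sm

# Simulate data whose error variance grows with x (heteroscedastic).
rng = np.random.default_rng(0)
x = rng.uniform(1, 10, size=500)
y = 2.0 + 3.0 * x + rng.normal(scale=x)  # noise scale proportional to x

X = sm.add_constant(x)
naive = sm.OLS(y, X).fit()                 # classical standard errors
robust = sm.OLS(y, X).fit(cov_type="HC1")  # heteroskedasticity-consistent

print(naive.bse)   # classical SEs, misleading under heteroscedasticity
print(robust.bse)  # HC1 standard errors, valid despite the changing variance
```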
Because heteroscedasticity concerns expectations of the second moment of the errors, its presence is referred to as misspecification of the second order.
The econometrician Robert Engle was awarded the 2003 Nobel Memorial Prize for Economics for his studies on regression analysis in the presence of heteroscedasticity, which led to his formulation of the autoregressive conditional heteroscedasticity (ARCH) modeling technique.
Definition
Consider the linear regression equation y_i = x_i β + ε_i, i = 1, …, N, where the dependent random variable y_i equals the deterministic variable x_i times the coefficient β plus a random disturbance term ε_i that has mean zero. The disturbances are homoscedastic if the variance of ε_i is a constant; otherwise, they are heteroscedastic.
https://en.wikipedia.org/wiki/Emotion%20and%20memory | Emotion can have a powerful effect on humans and animals. Numerous studies have shown that the most vivid autobiographical memories tend to be of emotional events, which are likely to be recalled more often and with more clarity and detail than neutral events.
The activity of emotionally enhanced memory retention can be linked to human evolution; during early development, responsive behavior to environmental events would have progressed as a process of trial and error. Survival depended on behavioral patterns that were repeated or reinforced through life and death situations. Through evolution, this process of learning became genetically embedded in humans and all animal species in what is known as flight or fight instinct.
Artificially inducing this instinct through traumatic physical or emotional stimuli essentially creates the same physiological condition that heightens memory retention by exciting neuro-chemical activity affecting areas of the brain responsible for encoding and recalling memory. This memory-enhancing effect of emotion has been demonstrated in many laboratory studies, using stimuli ranging from words to pictures to narrated slide shows, as well as autobiographical memory studies. However, as described below, emotion does not always enhance memory.
Arousal and valence in memory
One of the most common frameworks in the emotions field proposes that affective experiences are best characterized by two main dimensions: arousal and valence. The dimension of valence ranges from highly positive to highly negative, whereas the dimension of arousal ranges from calming or soothing to exciting or agitating.
The majority of studies to date have focused on the arousal dimension of emotion as the critical factor contributing to the emotional enhancement effect on memory. Different explanations have been offered for this effect, according to the different stages of memory formation and reconstruction. Memory has been shown to be better with arousal linked wit |
https://en.wikipedia.org/wiki/Foveated%20imaging | Foveated imaging is a digital image processing technique in which the image resolution, or amount of detail, varies across the image according to one or more "fixation points". A fixation point indicates the highest resolution region of the image and corresponds to the center of the eye's retina, the fovea.
The location of a fixation point may be specified in many ways.
For example, when viewing an image on a computer monitor, one may specify a fixation using a pointing device, like a computer mouse.
Eye trackers which precisely measure the eye's position and movement are also commonly used to determine fixation points in perception experiments.
When the display is manipulated with the use of an eye tracker, this is known as a gaze contingent display.
Fixations may also be determined automatically using computer algorithms.
Some common applications of foveated imaging include imaging sensor hardware and image compression. For descriptions of these and other applications, see the list below.
Foveated imaging is also commonly referred to as space variant imaging or gaze contingent imaging.
Applications
Compression
Contrast sensitivity falls off dramatically as one moves from the center of the retina to the periphery.
In lossy image compression, one may take advantage of this fact in order to compactly encode images.
If one knows the viewer's approximate point of gaze, one may reduce the amount of information contained in the image as the distance from the point of gaze increases. Because the fall-off in the eye's resolution is dramatic, the potential reduction in display information can be substantial. Also, foveation encoding may be applied to the image before other types of image compression are applied and therefore can result in a multiplicative reduction.
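A minimal sketch of the idea, assuming numpy and scipy (the blur levels and fall-off constant are arbitrary illustrative choices, not a retinal model): resolution is kept high at the fixation point and progressively reduced, here by Gaussian blurring, with eccentricity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(img: np.ndarray, fx: int, fy: int, k: float = 0.02) -> np.ndarray:
    """Toy foveation: blur grows with distance from the fixation
    point (fx, fy)."""
    h, w = img.shape
    ys, xs = np.indices((h, w))
    ecc = np.hypot(ys - fy, xs - fx)  # eccentricity in pixels
    # Precompute a few blur levels; pick one per pixel by eccentricity.
    levels = [img] + [gaussian_filter(img, sigma=s) for s in (1, 2, 4, 8)]
    idx = np.clip((k * ecc).astype(int), 0, len(levels) - 1)
    return np.choose(idx, levels)

img = np.random.rand(256, 256)
out = foveate(img, fx=128, fy=128)
print(out.shape)  # (256, 256): sharp at the centre, blurred outward
```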
Foveated sensors
Foveated sensors are multiresolution hardware devices that allow image data to be collected with higher resolution concentrated at a fixation point. An advantage to using foveated sen |
https://en.wikipedia.org/wiki/Animals%20in%20space | Animals in space originally served to test the survivability of spaceflight, before human spaceflights were attempted. Later, other non-human animals were flown to investigate various biological processes and the effects microgravity and space flight might have on them. Bioastronautics is an area of bioengineering research that spans the study and support of life in space. To date, seven national space programs have flown animals into space: the United States, Soviet Union, France, Argentina, China, Japan and Iran.
A wide variety of animals have been launched into space, including monkeys and apes, dogs, cats, tortoises, mice, rats, rabbits, fish, frogs, spiders, quail eggs (which hatched in 1990 on Mir), and insects. The US launched the first Earthlings into space - fruit flies in 1947 - and flights carrying primates primarily between 1949 and 1961, with one flight in 1969 and one in 1985. France launched two monkey-carrying flights in 1967. The Soviet Union and Russia launched monkeys between 1983 and 1996. During the 1950s and 1960s, the Soviet space program used a number of dogs for sub-orbital and orbital space flights.
Two tortoises and several varieties of plants were the first inhabitants of Earth to circle the Moon, on the September 1968 Zond 5 mission. Turtles followed on the November 1968 Zond 6 circumlunar mission, and four turtles flew to the Moon on Zond 7 in August 1969. In 1972 five mice, Fe, Fi, Fo, Fum, and Phooey, orbited the Moon a record 75 times in Apollo 17's Command Module America, the last crewed voyage to the Moon.
Background
Animals had been used in aeronautic exploration since 1783 when the Montgolfier brothers sent a sheep, a duck, and a rooster aloft in a hot air balloon to see if ground-dwelling animals can survive (the duck serving as the experimental control). The limited supply of captured German V-2 rockets led to the U.S. use of high-altitude balloon launches carrying fruit flies, mice, hamsters, guinea pigs, cats, dogs, frogs, |
https://en.wikipedia.org/wiki/Chain%20rule%20for%20Kolmogorov%20complexity | The chain rule for Kolmogorov complexity is an analogue of the chain rule for information entropy, which states: H(X,Y) = H(X) + H(Y|X).
That is, the combined randomness of two sequences X and Y is the sum of the randomness of X plus whatever randomness is left in Y once we know X.
This follows immediately from the definitions of conditional and joint entropy, and the fact from probability theory that the joint probability is the product of the marginal and conditional probability: P(X,Y) = P(X) P(Y|X).
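A quick numerical check of the entropy identity on a small, made-up joint distribution (illustrative only):

```python
import math

# Hypothetical joint distribution p(x, y) over two binary variables.
p = {(0, 0): 0.5, (0, 1): 0.25, (1, 0): 0.125, (1, 1): 0.125}

def H(dist):
    """Shannon entropy in bits."""
    return -sum(q * math.log2(q) for q in dist.values() if q > 0)

# Marginal p(x).
px = {}
for (x, _), q in p.items():
    px[x] = px.get(x, 0.0) + q

# Conditional entropy H(Y|X) = sum over x of p(x) * H(Y | X = x).
h_y_given_x = sum(
    qx * H({y: p[(x, y)] / qx for (xx, y) in p if xx == x})
    for x, qx in px.items()
)

print(H(p))                 # 1.75 = H(X,Y)
print(H(px) + h_y_given_x)  # 1.75 = H(X) + H(Y|X), up to float rounding
```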
The equivalent statement for Kolmogorov complexity does not hold exactly; it is true only up to a logarithmic term: K(x,y) = K(x) + K(y|x) + O(log(K(x,y))).
(An exact version, KP(x, y) = KP(x) + KP(y|x*) + O(1),
holds for the prefix complexity KP, where x* is a shortest program for x.)
It states that the shortest program printing X and Y is obtained by concatenating a shortest program printing X with a program printing Y given X, plus at most a logarithmic factor. This result implies that algorithmic mutual information, an analogue of mutual information for Kolmogorov complexity, is symmetric: I(x:y) = I(y:x) + O(log K(x,y)) for all x,y.
Proof
The ≤ direction is obvious: we can write a program to produce x and y by concatenating a program to produce x, a program to produce y given
access to x, and (whence the log term) the length of one of the programs, so
that we know where to separate the two programs for x and y|x (log(K(x, y)) upper-bounds this length).
For the ≥ direction, it suffices to show that for all k,l such that k+l = K(x,y) we have that either
K(x|k,l) ≤ k + O(1)
or
K(y|x,k,l) ≤ l + O(1).
Consider the list (a1,b1), (a2,b2), ..., (ae,be) of all pairs (a,b) produced by programs of length exactly K(x,y) [hence K(a,b) ≤ K(x,y)]. Note that this list
contains the pair (x,y),
can be enumerated given k and l (by running all programs of length K(x,y) in parallel),
has at most 2^K(x,y) elements (because there are at most 2^n programs of length n).
First, suppose that x appears less than 2^l times as first element. We can specify y
https://en.wikipedia.org/wiki/Sergei%20Ivanov%20%28mathematician%29 | Sergei Vladimirovich Ivanov (Сергей Владимирович Иванов; born 31 May 1972) is a leading Russian mathematician working in differential geometry and mathematical physics.
Education and career
For each of the three years, 1987, 1988, and 1989, Ivanov won a gold medal in the International Mathematical Olympiad. He studied at the Saint Petersburg State University, where he received his Ph.D. (Candidate of Sciences) with advisor Yuri Burago. Ivanov has worked for many years at the Steklov Institute of Mathematics. There in 2009 he habilitated (Doktor nauk).
In 2014, he received, jointly with Yuri Burago and Dmitri Burago, the Leroy P. Steele Prize for their book A course in metric geometry published by the American Mathematical Society in 2001.
In addition to his research on differential geometry, Ivanov also works on informatics.
In 2010, in Hyderabad he was an invited speaker with talk Volume comparison via boundary distances at the International Congress of Mathematicians. In December 2011, he was elected a corresponding member of the Russian Academy of Sciences.
Selected publications |
https://en.wikipedia.org/wiki/Egg%20substitutes | Egg substitutes are food products which can be used to replace eggs in cooking and baking. Common reasons a cook may choose to use an egg substitute instead of egg(s) include having an egg allergy, adhering to a vegan diet or a vegetarian diet of a type that omits eggs, having concerns about the level of animal welfare or environmental burden associated with egg farming, or worries about potential Salmonella contamination when using raw eggs. There is a growing movement to address some of these concerns via third-party certifications, but because many labels in the industry remain confusing or intentionally misleading, some consumers distrust them and may use egg substitutes instead.
Types
Commercial
There are many commercial substitutes on the market today for people who wish to avoid eggs. Most of these products are devoid of all animal products, and thus are vegan and contain no cholesterol.
The EVERY Company, a venture-backed company, produces bioidentical egg whites through a fermentation process.
JUST, Inc., another venture-backed company, produces and markets egg-free products, including cookie dough and a mayonnaise substitute, based on pea protein from the yellow pea.
Egg Replacer is a mixture of "potato starch, tapioca flour, leavening (calcium lactate, calcium carbonate, cream of tartar), cellulose gum, modified cellulose".
The Vegg is a vegan liquid egg yolk replacer, suitable for any recipe in which one would otherwise use egg yolk. It is made of "nutritional yeast flakes, sodium alginate, kala namak, [and] beta-carotene". The Vegg was first sold in 2012, and is available from a variety of online and in-store retailers in the United States, Europe, United Kingdom, Australia, New Zealand, and South Africa.
FUMI Ingredients produces egg white substitutes from micro-algae with the help of micro-organisms such as brewer's yeast and baker's yeast.
The product called Egg Beaters is a substitute for whole/fresh eggs (from the shell) but is not an egg subs |
https://en.wikipedia.org/wiki/Critical%20incident%20technique | The critical incident technique (or CIT) is a set of procedures used for collecting direct observations of human behavior that have critical significance and meet methodically defined criteria. These observations are then kept track of as incidents, which are then used to solve practical problems and develop broad psychological principles. A critical incident can be described as one that makes a contribution—either positively or negatively—to an activity or phenomenon. Critical incidents can be gathered in various ways, but typically respondents are asked to tell a story about an experience they have had.
CIT is a flexible method that usually relies on five major areas. The first is determining and reviewing the incident, then fact-finding, which involves collecting the details of the incident from the participants. When all of the facts are collected, the next step is to identify the issues. Afterwards a decision can be made on how to resolve the issues based on various possible solutions. The final and most important aspect is the evaluation, which will determine if the solution that was selected will solve the root cause of the situation and will cause no further problems.
History
The studies of Sir Francis Galton are said to have laid the foundation for the critical incident technique, but it is the work of Colonel John C. Flanagan, that resulted in the present form of CIT.
Flanagan defined the critical incident technique as a set of procedures for collecting direct observations of human behavior in such a way as to facilitate their potential usefulness in solving practical problems and developing broad psychological principles.
Flanagan's work was carried out as part of the Aviation Psychology Program of the United States Army Air Forces during World War II, where Flanagan conducted a series of studies focused on differentiating effective and ineffective work behaviors. Flanagan went on to found American Institutes for Research continuing to use the critical incident technique in a variety of research. Since then CIT has spread as a method to identify job requirements, develop recommendations for effective practices, and determine competencies for a vast |
https://en.wikipedia.org/wiki/Niels%20Bohr%20International%20Gold%20Medal | The Niels Bohr International Gold Medal is an international engineering award. It has been awarded since 1955 for "outstanding work by an engineer or physicist for the peaceful utilization of atomic energy". The medal is administered by the Danish Society of Engineers (Denmark) in collaboration with the Niels Bohr Institute and the Royal Danish Academy of Sciences. It was awarded 10 times between 1955 and 1982, and has been awarded again since 2013. The first recipient was Niels Bohr himself, who received the medal in connection with his 70th birthday.
2013 laureate
Alain Aspect, regarded as an outstanding figure in optical and atomic physics, was awarded the medal for his experimental tests of Bell's inequalities. It was presented on 7 October 2013 by Queen Margrethe and Prince Henrik at a special event at the Honorary Residence in the Carlsberg Academy.
Recipients
The following scientists have been awarded the Niels Bohr Medal:
Niels Bohr, 1955
John Cockcroft, 1958
George de Hevesy, 1961
Pyotr Kapitsa, 1965
Isidor Isaac Rabi, 1967
Werner Karl Heisenberg, 1970
Richard P. Feynman, 1973
Hans A. Bethe, 1976
Charles H. Townes, 1979
John Archibald Wheeler, 1982
Alain Aspect, 2013
Jens Nørskov, 2018
Ewine van Dishoeck, 2022
See also
UNESCO Niels Bohr Medal
List of engineering awards
List of physics awards |
https://en.wikipedia.org/wiki/Racket%20features | Racket has been under active development as a vehicle for programming language research since the mid-1990s, and has accumulated many features over the years. This article describes and demonstrates some of these features. Note that one of Racket's main design goals is to accommodate creating new languages, both domain-specific languages and completely new languages.
Therefore, some of the following examples are in different languages, but they are all implemented in Racket. Please refer to the main article for more information.
The core Racket implementation is highly flexible. Even without using dialects, it can function as a full-featured scripting language, capable of running both with and without a native GUI, and capable of tasks from web server creation to graphics.
Runtime support
Garbage collection, tail calls, and space safety
Racket can use three different garbage collectors:
Originally, the conservative Boehm garbage collector was used. However, conservative collection is impractical for long-running processes such as a web server—such processes tend to slowly leak memory. In addition, there are pathological cases where a conservative collector leaks memory fast enough to make certain programs impossible to run. For example, when traversing an infinite list, a single conservative mistake of retaining a pointer leads to keeping the complete list in memory, quickly overflowing available memory. This collector is often referred to as "CGC" in the Racket community.
SenoraGC is an alternative conservative garbage collector that is intended mainly for debugging and memory tracing.
The moving memory manager (aka "3m") is a precise garbage collector, and it has been Racket's default collector since 2007. This collector is a generational one, and it supports memory accounting via custodians (see below). The collector is implemented as a C source transformer that is itself written in Racket. Therefore, the build process uses the conservati |
https://en.wikipedia.org/wiki/Super%20Crunchers | Super Crunchers: Why Thinking-by-Numbers Is the New Way to be Smart is a book written by Ian Ayres, a law professor at Yale Law School, about how quantitative analysis of social behaviour and natural experiment can be creatively deployed to reveal insights in all areas of life, often in unexpected ways.
With examples such as predicting gestation period more precisely than Naegele's rule, predicting the box office success of films, Orley Ashenfelter's work predicting the price of Bordeaux wine based on weather data, collecting data on the effectiveness of teaching methods such as DISTAR, choosing baseball players based on statistics (Sabermetrics), and A/B testing to determine the most effective advertisements, Ayres explains how statistical evidence can be used as a supplement or substitute for human intuition.
The main mathematical approach used in these studies is multiple regression analysis.
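To make the method concrete, here is a minimal multiple-regression fit on fabricated data (an illustrative sketch, not one of the book's studies):

```python
import numpy as np

# Made-up data: y depends on two predictors plus noise.
rng = np.random.default_rng(42)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(scale=0.1, size=n)

# Ordinary least squares via the design matrix [1, x1, x2].
X = np.column_stack([np.ones(n), x1, x2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(beta_hat, 2))  # approximately [ 1.   2.  -0.5]
```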
Awards
The Economist – Books of the Year 2007
See also
Freakonomics
Notes
External links
Ian Ayres's home page
https://en.wikipedia.org/wiki/Hyper-IgM%20syndrome%20type%203 | Hyper-IgM syndrome type 3 is a form of hyper IgM syndrome characterized by mutations of the CD40 gene. In this type, immature B cells cannot receive signal 2 from helper T cells, which is necessary to mature into mature B cells.
Hyper IgM syndromes
Hyper IgM syndromes are a group of primary immune deficiency disorders characterized by defective CD40 signaling in B cells, affecting class switch recombination (CSR) and somatic hypermutation. Immunoglobulin (Ig) class switch recombination deficiencies are characterized by elevated serum IgM levels and a considerable deficiency in immunoglobulins G (IgG), A (IgA) and E (IgE). As a consequence, people with HIGM have an increased susceptibility to infections.
Signs and symptoms
Hyper IgM syndrome can present with the following:
Infection/Pneumocystis pneumonia (PCP), which is common in infants with hyper IgM syndrome, is a serious illness. PCP is one of the most frequent and severe opportunistic infections in people with weakened immune systems.
Hepatitis (Hepatitis C)
Chronic diarrhea
Hypothyroidism
Neutropenia
Arthritis
Encephalopathy (degenerative)
Cause
Different genetic defects cause HIgM syndrome, the vast majority are inherited as an X-linked recessive genetic trait and most sufferers are male.
IgM is the form of antibody that all B cells produce initially before they undergo class switching. Healthy B cells efficiently switch to other types of antibodies as needed to attack invading bacteria, viruses, and other pathogens. In people with hyper IgM syndromes, the B cells keep making IgM antibodies because they cannot switch to a different antibody. This results in an overproduction of IgM antibodies and an underproduction of IgA, IgG, and IgE.
Pathophysiology
CD40 is a costimulatory receptor on B cells that, when bound to CD40 ligand (CD40L), sends a signal to the B-cell receptor. When there is a defect in CD40, this leads to defective T-cell interaction with B cells. Consequently, humoral immune respons |
https://en.wikipedia.org/wiki/Information%20hiding | In computer science, information hiding is the principle of segregation of the design decisions in a computer program that are most likely to change, thus protecting other parts of the program from extensive modification if the design decision is changed. The protection involves providing a stable interface which protects the remainder of the program from the implementation (whose details are likely to change). Written in another way, information hiding is the ability to prevent certain aspects of a class or software component from being accessible to its clients, using either programming language features (like private variables) or an explicit exporting policy.
Overview
The term encapsulation is often used interchangeably with information hiding. Not all agree on the distinctions between the two, though; one may think of information hiding as being the principle and encapsulation being the technique. A software module hides information by encapsulating the information into a module or other construct which presents an interface.
A common use of information hiding is to hide the physical storage layout for data so that if it is changed, the change is restricted to a small subset of the total program. For example, if a three-dimensional point (, , ) is represented in a program with three floating-point scalar variables and later, the representation is changed to a single array variable of size three, a module designed with information hiding in mind would protect the remainder of the program from such a change.
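A minimal sketch of the point example above (a hypothetical class, not from any specific library): clients read coordinates only through the interface, so the storage layout can change from three scalars to one array without touching client code.

```python
class Point3D:
    """Hides its storage layout behind read-only coordinate accessors."""

    def __init__(self, x: float, y: float, z: float):
        # Internal representation: originally three scalar attributes,
        # now a single list of size three. Clients never notice.
        self._coords = [x, y, z]

    @property
    def x(self) -> float:
        return self._coords[0]

    @property
    def y(self) -> float:
        return self._coords[1]

    @property
    def z(self) -> float:
        return self._coords[2]


p = Point3D(1.0, 2.0, 3.0)
print(p.x, p.y, p.z)  # 1.0 2.0 3.0 -- the interface is unchanged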
In object-oriented programming, information hiding (by way of nesting of types) reduces software development risk by shifting the code's dependency on an uncertain implementation (design decision) onto a well-defined interface. Clients of the interface perform operations purely through the interface, so, if the implementation changes, the clients do not have to change.
Encapsulation
In his book on object-oriented design, Grady Booch defined encapsulati |
https://en.wikipedia.org/wiki/Small%20subgroup%20confinement%20attack | In cryptography, a subgroup confinement attack, or small subgroup confinement attack, on a cryptographic method that operates in a large finite group is where an attacker attempts to compromise the method by forcing a key to be confined to an unexpectedly small subgroup of the desired group.
Several methods have been found to be vulnerable to subgroup confinement attack, including some forms or applications of Diffie–Hellman key exchange and DH-EKE. |
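A standard countermeasure (not detailed in the text above) is to validate received key-exchange values so they cannot lie in a small subgroup. The toy sketch below uses illustrative numbers only; real deployments use groups of 2048 bits or more. It checks a Diffie–Hellman public value against a safe prime p = 2q + 1, accepting only elements of the large order-q subgroup:

```python
# Safe prime p = 2q + 1, so the multiplicative group mod p has
# subgroups of order 1, 2, q, and 2q only.
p = 23
q = 11

def valid_dh_share(y: int) -> bool:
    # Reject degenerate values, then confirm y lies in the order-q
    # subgroup: y^q mod p == 1 holds exactly for those elements.
    return 1 < y < p - 1 and pow(y, q, p) == 1

print(valid_dh_share(4))      # True: 4 has order 11 mod 23
print(valid_dh_share(p - 1))  # False: p-1 has order 2, the classic trap
```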
https://en.wikipedia.org/wiki/Eoxin%20A4 | Eoxin A4, also known as 14,15-leukotriene A4, is an eoxin. Cells make eoxins by metabolizing arachidonic acid with a 15-lipoxygenase enzyme to form 15(S)-hydroperoxyeicosatetraenoic acid (i.e. 15(S)-HpETE). This product is then converted serially to eoxin A4 (i.e. EXA4), EXC4, EXD4, and EXE4 by LTC4 synthase, an unidentified gamma-glutamyltransferase, and an unidentified dipeptidase, respectively, in a pathway which appears similar if not identical to the pathway which forms leukotrienes, i.e. LTA4, LTC4, LTD4, and LTE4. This pathway is schematically shown as follows: arachidonic acid → 15(S)-HpETE → EXA4 → EXC4 → EXD4 → EXE4.
EXA4 is viewed as an intracellular-bound, short-lived intermediate which is rapidly metabolized to the downstream eoxins. The eoxins downstream of EXA4 are secreted from their parent cells and, it is proposed but not yet proven, serve to regulate allergic responses and the development of certain cancers (see Eoxins).
https://en.wikipedia.org/wiki/PerfKitBenchmarker | PerfKit Benchmarker is an open source benchmarking tool used to measure and compare cloud offerings. PerfKit Benchmarker is licensed under the Apache 2 license terms. PerfKit Benchmarker is a community effort involving over 500 participants including researchers, academic institutions and companies together with the originator, Google.
General
PerfKit Benchmarker (PKB) is a community effort to deliver a repeatable, consistent, and open way of measuring cloud performance. It supports a growing list of cloud providers, including Alibaba Cloud, Amazon Web Services, CloudStack, DigitalOcean, Google Cloud Platform, Kubernetes, Microsoft Azure, OpenStack, Rackspace, and IBM Bluemix (SoftLayer). In addition to cloud providers, it supports container orchestration platforms, including Kubernetes and Mesos, as well as local "static" workstations and clusters of computers.
The goal is to create an open source living benchmark framework that represents how cloud developers are building applications, evaluating cloud alternatives, and learning how to architect applications for each cloud. It is "living" because it will change and morph quickly as developers change.
PerfKit Benchmarker measures the end-to-end time to provision resources in the cloud, in addition to reporting the most standard metrics of peak performance, e.g. latency, throughput, time-to-complete, and IOPS. PerfKit Benchmarker reduces the complexity of running benchmarks on supported cloud providers through unified and simple commands. It is designed to operate via vendor-provided command-line tools.
PerfKit Benchmarker contains a canonical set of public benchmarks. All benchmarks run with their default/initial state and configuration (not tuned in favor of any provider). This provides a way to benchmark across cloud platforms, while getting a transparent view of application throughput, latency, variance, and overhead.
History
PerfKit Benchmarker (PKB) was started by Anthony F. Voellm, Alain Hamel, and Eric Hankland at Google in 2014. |
https://en.wikipedia.org/wiki/Varadhan%27s%20lemma | In mathematics, Varadhan's lemma is a result from the large deviations theory named after S. R. Srinivasa Varadhan. The result gives information on the asymptotic distribution of a statistic φ(Zε) of a family of random variables Zε as ε becomes small in terms of a rate function for the variables.
Statement of the lemma
Let X be a regular topological space; let (Zε)ε>0 be a family of random variables taking values in X; let με be the law (probability measure) of Zε. Suppose that (με)ε>0 satisfies the large deviation principle with good rate function I : X → [0, +∞]. Let ϕ : X → R be any continuous function. Suppose that at least one of the following two conditions holds true: either the tail condition
lim_{M→∞} limsup_{ε→0} ε log E[ exp(ϕ(Zε)/ε) 1(ϕ(Zε) ≥ M) ] = −∞,
where 1(E) denotes the indicator function of the event E; or, for some γ > 1, the moment condition
limsup_{ε→0} ε log E[ exp(γ ϕ(Zε)/ε) ] < +∞.
Then
lim_{ε→0} ε log E[ exp(ϕ(Zε)/ε) ] = sup_{x ∈ X} ( ϕ(x) − I(x) ).
See also
Laplace principle (large deviations theory) |
https://en.wikipedia.org/wiki/Tribometer | A tribometer is an instrument that measures tribological quantities, such as coefficient of friction, friction force, and wear volume, between two surfaces in contact. It was invented by the 18th-century Dutch scientist Musschenbroek.
A tribotester is the general name given to a machine or device used to perform tests and simulations of wear, friction and lubrication which are the subject of the study of tribology. Often tribotesters are extremely specific in their function and are fabricated by manufacturers who desire to test and analyze the long-term performance of their products. An example is that of orthopedic implant manufacturers who have spent considerable sums of money to develop tribotesters that accurately reproduce the motions and forces that occur in human hip joints so that they can perform accelerated wear tests of their products.
Theory
A simple tribometer is described by a hanging mass and a mass resting on a horizontal surface, connected to each other via a string and pulley. The coefficient of friction, µ, when the system is stationary, is determined by increasing the hanging mass until the moment that the resting mass begins to slide. Sliding begins when the loading force reaches the maximum static friction force, given by the general equation for friction force: F = µN,
where N, the normal force, is equal to the weight (mass × gravity) of the resting mass (mT), and F, the loading force, is equal to the weight (mass × gravity) of the hanging mass (mH).
To determine the kinetic coefficient of friction the hanging mass is increased or decreased until the mass system moves at a constant speed.
In both cases, the coefficient of friction simplifies to the ratio of the two masses: µ = mH / mT.
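A toy calculation for this setup (made-up masses, for illustration):

```python
m_resting = 2.0  # mass on the horizontal surface, kg (made-up value)
m_hanging = 0.9  # hanging mass at the onset of sliding, kg (made-up value)

# Gravity cancels in the ratio F/N = (m_hanging * g) / (m_resting * g).
mu_static = m_hanging / m_resting
print(mu_static)  # 0.45
```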
In most test applications using tribometers, wear is measured by comparing the mass or surfaces of test specimens before and after testing. Equipment and methods used to examine the worn surfaces include optical microscopes, scanning electron microscopes, optical interferometry and mechanical roughness testers.
Types
Tribometers are often referred to |
https://en.wikipedia.org/wiki/Joe%20Blade%202 | Joe Blade 2 is the second game in the Joe Blade series.
Gameplay
Joe Blade 2 took a rather different approach to the first game. Instead of being a soldier, Blade was this time a vigilante taking to the city to rid the streets of criminals, rescuing old-age pensioners along the way. Blade was no longer armed with a gun, and had to jump over villains, just touching them with his feet, to dispatch them. In order for the civilians to be successfully rescued, the protagonist was given a simple puzzle (called a sub-game level) of organizing the pattern of symbols. There were four types of these sub-games and all of them needed to be completed within 60 seconds. This almost surreal take on the game was in stark contrast to the comparatively more gritty realism of the first installment. The game was also known for being considerably easier than the first title, almost to the point where many players managed to complete the game in one hour-long sitting.
The Spectrum version of the game included a version of Invade-a-Load featuring Pac-Man.
Reception
Paul Rixon for Page 6 said: "I did find the original Joe Blade more instantly playable, but this is probably due to the lack of instructions supplied with my preview copy of the sequel [...] I'd advise all arcade adventuring types, especially fans of the original game, to grab a copy without hesitation!"
Crash said: "short term Joe Blade II is playable, which wins much of my vote."
Reviewing the Atari ST version, Computer and Video Games said: "If you enjoyed the Joe Blade and you are looking for more of the same, get your hands on this toute-de-suite."
Richard Henderson for Computer Games Week said: "Admittedly the graphics are good, but there is no real variety between screens and the sprites move far too slowly to make it fun."
Sinclair User said: "Joe Blade II is a bit like watching a ballet: it's all very pretty and artistic, but you soon end up wishing someone would cut loose with a machine-gun."
Reviews
Aktuelle |
https://en.wikipedia.org/wiki/Gnits%20standards | The Gnits standards are a collection of standards and recommendations for programming, maintaining, and distributing software. They are published by a group of GNU project maintainers who call themselves "Gnits", which is short for "GNU nit-pickers". As such, they represent advice, not Free Software Foundation or GNU policy, but parts of the Gnits' standards have seen widespread adoption among free software programmers in general.
The Gnits standards are extensions to, refinements of, and annotations for the GNU Standards. However, they are in no way normative in GNU; GNU maintainers are not required to follow them. Nevertheless, maintainers and programmers often find in Gnits standards good ideas on the way to follow GNU Standards themselves, as well as tentative, non-official explanations about why some GNU standards were decided the way they are. There are very few discrepancies between Gnits and GNU standards, and they are always well noted as such.
The standards address aspects of software architecture, program behaviour, human–computer interaction, C programming, documentation, and software releases.
As of 2008, the Gnits standards carry a notice that they are moribund and no longer actively maintained, and point readers to the manuals of Gnulib, Autoconf, and Automake, which are said to cover many of the same topics.
See also
GNU Autotools
GNU coding standards
External links
Gnits Standards
Gnits Standards (mirror)
Effect of Gnits on automake options
https://en.wikipedia.org/wiki/Data%20validation | In computer science, data validation is the process of ensuring that data have undergone data cleansing and have data quality, that is, that they are both correct and useful. It uses routines, often called "validation rules", "validation constraints", or "check routines", that check for correctness, meaningfulness, and security of data that are input to the system. The rules may be implemented through the automated facilities of a data dictionary, or by the inclusion of explicit application program validation logic of the computer and its application.
This is distinct from formal verification, which attempts to prove or disprove the correctness of algorithms for implementing a specification or property.
Overview
Data validation is intended to provide certain well-defined guarantees for fitness and consistency of data in an application or automated system. Data validation rules can be defined and designed using various methodologies, and be deployed in various contexts. Their implementation can use declarative data integrity rules, or procedure-based business rules.
The guarantees of data validation do not necessarily include accuracy, and it is possible for data entry errors such as misspellings to be accepted as valid. Other clerical and/or computer controls may be applied to reduce inaccuracy within a system.
Different kinds
In evaluating the basics of data validation, generalizations can be made regarding the different kinds of validation according to their scope, complexity, and purpose.
For example (a short sketch of these checks follows the list):
Data type validation;
Range and constraint validation;
Code and cross-reference validation;
Structured validation; and
Consistency validation
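A minimal sketch of a few of these kinds of checks (hypothetical rules, not from any particular system):

```python
from datetime import date

def check_type(value, expected_type) -> bool:
    # Data type validation: the value must be of the expected type.
    return isinstance(value, expected_type)

def check_range(age: int) -> bool:
    # Range and constraint validation: ages outside 0..130 are rejected.
    return 0 <= age <= 130

def check_code(country: str) -> bool:
    # Code and cross-reference validation: look up a reference set.
    return country in {"US", "GB", "DK", "VN"}

def check_consistency(start: date, end: date) -> bool:
    # Consistency validation: a start date cannot follow the end date.
    return start <= end

print(check_type(42, int), check_range(42), check_code("VN"),
      check_consistency(date(2020, 1, 1), date(2021, 1, 1)))
# True True True True
```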
Data-type check
Data type validation is customarily carried out on one or more simple data fields.
The simplest kind of data type validation verifies that the individual characters provided through user input are consistent with the expected characters of one or more known primitive data types as defined in |
https://en.wikipedia.org/wiki/Lateral%20condyle%20of%20tibia | The lateral condyle is the lateral portion of the upper extremity of tibia.
It serves as the insertion for the biceps femoris muscle (small slip). Most of the tendon of the biceps femoris inserts on the fibula.
See also
Gerdy's tubercle
Medial condyle of tibia
Additional images |
https://en.wikipedia.org/wiki/Kohn%20anomaly | In the field of physics concerning condensed matter, a Kohn anomaly (also called the Kohn effect) is an anomaly in the dispersion relation of a phonon branch in a metal, first proposed by Walter Kohn in 1959 and named for him. For a specific wavevector, the frequency (and thus the energy) of the associated phonon is considerably lowered, and there is a discontinuity in its derivative. In extreme cases (which can occur in low-dimensional materials), the energy of this phonon is zero, meaning that a static distortion of the lattice appears. This is one explanation for charge density waves in solids. The wavevectors at which a Kohn anomaly is possible are the nesting vectors of the Fermi surface, that is, vectors that connect many points of the Fermi surface (for a one-dimensional chain of atoms this vector would be q = 2kF). The electron–phonon interaction causes a rigid shift of the Fermi sphere and a failure of the Born–Oppenheimer approximation, since the electrons no longer follow the ionic motion adiabatically.
In the phononic spectrum of a metal, a Kohn anomaly is a discontinuity in the derivative of the dispersion relation that occurs at certain high symmetry points of the first Brillouin zone, produced by the abrupt change in the screening of lattice vibrations by conduction electrons.
Kohn anomalies arise together with Friedel oscillations when one considers the Lindhard theory instead of the Thomas–Fermi approximation in order to find an expression for the dielectric function of a homogeneous electron gas. The expression for the real part of the reciprocal space dielectric function obtained following the Lindhard theory includes a logarithmic term that is singular at q = 2kF, where kF is the Fermi wavevector. Although this singularity is quite small in reciprocal space, if one takes the Fourier transform and passes into real space, the Gibbs phenomenon causes a strong oscillation of the dielectric function in the proximity of the singularity mentioned above. I
https://en.wikipedia.org/wiki/Medical%20transcription | Medical transcription, also known as MT, is an allied health profession dealing with the process of transcribing voice-recorded medical reports that are dictated by physicians, nurses and other healthcare practitioners. Medical reports can be voice files, notes taken during a lecture, or other spoken material. These are dictated over the phone or uploaded digitally via the Internet or through smart phone apps.
History
Medical transcription as it is currently known has existed since the beginning of the 20th century when standardization of medical records and data became critical to research. At that time, medical stenographers recorded medical information, taking doctors' dictation in shorthand. With the creation of audio recording devices, it became possible for physicians and their transcribers to work asynchronously.
Over the years, transcription equipment has changed from manual typewriters, to electric typewriters, to word processors, and finally to computers. Storage methods have also changed: from plastic disks and magnetic belts to cassettes, endless loops, and digital recordings. Today, speech recognition (SR), also known as continuous speech recognition (CSR), is increasingly used, with medical transcriptionists and, in some cases, "editors" providing supplemental editorial services. Natural-language processing takes "automatic" transcription a step further, providing an interpretive function that speech recognition alone does not provide.
In the past, these medical reports consisted of very abbreviated handwritten notes that were added in the patient's file for interpretation by the primary physician responsible for the treatment. Ultimately, these handwritten notes and typed reports were consolidated into a single patient file and physically stored along with thousands of other patient records in the medical records department. Whenever the need arose to review the records of a specific patient, the patient's file would be retrieved from the filing c |
https://en.wikipedia.org/wiki/Anagyrine | Anagyrine is a teratogenic alkaloid first isolated from (and named for) Anagyris foetida in 1885 by the French biologists Hardy and Gallois. A. foetida (family Fabaceae), the Stinking Bean Trefoil, is a highly toxic shrub native to the Mediterranean region, with a long history of use in folk medicine. In 1939, anagyrine was found by James Fitton Couch to be identical to an alkaloid present in many species belonging to the plant genus Lupinus (lupins). The toxin can cause crooked calf disease if a cow ingests the plant during certain periods of pregnancy.
Background
The toxicity of certain species of Lupinus plants has been known for many years. The plant is very common in western North America and is sometimes used in feed for cattle if the toxicity of the given lupine is low enough. The toxicity of the plant comes from a variety of toxins; however, of these chemicals, anagyrine is the best known for causing crooked calf disease when ingested by cows. The discovery of anagyrine was made in 1885 by the French biologists Ernest Hardy (born Paris 1826) and N. Gallois, who isolated it from the highly toxic legume Anagyris foetida, while the earliest isolation of anagyrine from a Lupinus plant was recorded in 1939. The toxin can be found in growing leaf material in a young Lupinus plant and in the flower and seed of a mature plant, though varying concentrations of the alkaloid are present throughout lupines that contain anagyrine. The first correlation between anagyrine and crooked calf disease was made by Richard Keeler in 1973. Recently there have been a few successful syntheses of anagyrine recorded, most notably one completed by Diane Gray and Timothy Gallagher.
Toxicity
Anagyrine causes crooked calf disease if 1.44 g/kg of the substance is ingested by the mother cow between days 40 and 70 of pregnancy. Out of the hundreds of varieties of lupinus plants, 23 (listed below) are known to contain high enough concentrations of anagyrine to be dangerou |
https://en.wikipedia.org/wiki/Artificial%20digestion | Artificial digestion is a laboratory technique that reduces food to protein, fat, carbohydrates, fiber, minerals, vitamins, and non-nutrient compounds for analytical or research purposes. Digestive agents such as pepsin and hydrochloric acid are typically used to accomplish artificial digestion.
Meat inspection
Artificial digestion is used to detect the presence of encysted trichinella larvae in suspected muscle tissue. Prior to this method, a sample of muscle tissue was compressed to visually express the encysted parasite. Using artificial digestion, meat samples are dissolved by a digestive solution and the remains are examined for the presence of larvae.
Digestion research
Artificial stomach and small intestine models are used instead of laboratory animals or human test subjects. Various models, from static one-compartment to dynamic multicompartment, exist. These models are used to study food digestion and subsequent bioavailability. |
https://en.wikipedia.org/wiki/Disability%20studies | Disability studies is an academic discipline that examines the meaning, nature, and consequences of disability. Initially, the field focused on the division between "impairment" and "disability", where impairment was an impairment of an individual's mind or body, while disability was considered a social construct. This premise gave rise to two distinct models of disability: the social and medical models of disability. In 1999 the social model was universally accepted as the model preferred by the field. However, in recent years, the division between the social and medical models has been challenged. Additionally, there has been an increased focus on interdisciplinary research. For example, recent investigations suggest using "cross-sectional markers of stratification" may help provide new insights on the non-random distribution of risk factors capable of exacerbating disablement processes.
Disability studies courses include work in disability history, theory, legislation, policy, ethics, and the arts. However, students are taught to focus on the lived experiences of individuals with disabilities in practical terms. The field is focused on increasing individuals with disabilities access to civil rights and improving their quality of life.
Disability studies emerged in the 1980s primarily in the US, the UK, and Canada. In 1986, the Section for the Study of Chronic Illness, Impairment, and Disability of the Social Science Association (United States) was renamed the Society for Disability Studies. The first US disabilities studies program emerged in 1994 at Syracuse University. The first edition of the Disabilities Studies Reader (one of the first collections of academic papers related to disability studies) was published in 1997. The field grew rapidly over the next ten years. In 2005, the Modern Language Association established disability studies as a "division of study".
While disability studies primarily emerged in the US, the UK, and Canada, disability studies wer |
https://en.wikipedia.org/wiki/Libreswan | Libreswan is a fork of the Openswan IPsec VPN implementation.
Libreswan was created by almost all of the Openswan developers after a lawsuit about the ownership of the Openswan name was filed against Paul Wouters, the release manager of Openswan, in December 2012. The lawsuit was later settled out of court.
Libreswan supports most of the common types of IPsec configurations, such as host-to-host and subnet-to-subnet VPNs.
See also
StrongSwan |
https://en.wikipedia.org/wiki/Global%20Ocean%20Ecosystem%20Dynamics | Global Ocean Ecosystem Dynamics (GLOBEC) is the International Geosphere-Biosphere Programme (IGBP) core project responsible for understanding how global change will affect the abundance, diversity and productivity of marine populations, which comprise a major component of oceanic ecosystems. The programme was initiated by SCOR and the IOC of UNESCO in 1991.
The aim of GLOBEC is to advance our understanding of the structure and functioning of the global ocean ecosystem, its major subsystems, and its response to physical forcing so that a capability can be developed to forecast the responses of the marine ecosystem to global change.
Structure
GLOBEC encompasses an integrated suite of research activities consisting of Regional Programmes, National Activities and cross-cutting research focal activities. The GLOBEC programme has been developed by the Scientific Steering Committee (SSC) and is co-ordinated through the GLOBEC International Project Office (IPO).
Regional Programmes:
Ecosystem Structure of Subarctic Seas (ESSAS)
CLimate Impacts on Oceanic TOp Predators (CLIOTOP)
ICES Cod and Climate Change (CCC)
PICES Climate Change and Carrying Capacity (CCCC)
Southern Ocean GLOBEC (SO GLOBEC)
Small Pelagic Fish and Climate Change (SPACC)
National Programmes:
GLOBEC has several active national programmes and scientists from nearly 30 countries participate in GLOBEC activities on a national or regional level.
Focus Working Groups:
There are four GLOBEC cross-cutting research focal activities:
Focus 1. Retrospective analysis
Focus 2. Process studies
Focus 3. Prediction and modelling
Focus 4. Feedback from ecosystem changes
Publications
GLOBEC produces a report series, special contributions series and a biannual newsletter, all of which can be downloaded from the GLOBEC website. GLOBEC science has contributed to over 2000 refereed scientific publications which can |
https://en.wikipedia.org/wiki/Weighted%20Micro%20Function%20Points | Weighted Micro Function Points (WMFP) is a modern software sizing algorithm that succeeds established scientific methods such as COCOMO, COSYSMO, maintainability index, cyclomatic complexity, function points, and Halstead complexity. It produces more accurate results than traditional software sizing methodologies, while requiring less configuration and knowledge from the end user, as most of the estimation is based on automatic measurements of an existing source code.
Whereas many earlier measurement methods use source lines of code (SLOC) to measure software size, WMFP uses a parser to understand the source code, breaking it down into micro functions and deriving several code complexity and volume metrics, which are then dynamically interpolated into a final effort score. In addition to compatibility with the waterfall software development life cycle methodology, WMFP is also compatible with newer methodologies, such as Six Sigma, Boehm spiral, and Agile (AUP/Lean/XP/DSDM) methodologies, due to its differential analysis capability made possible by its higher-precision measurement elements.
Measured elements
The WMFP measured elements are several different software metrics deduced from the source code by the WMFP algorithm analysis. They are represented as percentage of the whole unit (project or file) effort, and are translated into time.
Flow complexity (FC) – Measures the complexity of a programs' flow control path in a similar way to the traditional cyclomatic complexity, with higher accuracy by using weights and relations calculation.
Object vocabulary (OV) – Measures the quantity of unique information contained by the programs' source code, similar to the traditional Halstead vocabulary with dynamic language compensation.
Object conjuration (OC) – Measures the quantity of usage done by information contained by the programs' source code.
Arithmetic intricacy (AI) – Measures the complexity of arithmetic calculations across the program
Data transfer (DT) – Me |
https://en.wikipedia.org/wiki/Somatic%20mutation%20and%20recombination%20tests | The somatic mutation and recombination tests (SMARTs) are in vivo genotoxicity tests performed in Drosophila melanogaster (fruit fly). These fruit fly tests are short-term, non-mammalian approaches for in vivo testing of putative genotoxins found in the environment. D. melanogaster has a short lifespan, which allows for fast reproductive cycles and high-throughput genotoxicity testing. D. melanogaster also has around 75% functional orthologs of human disease-related genes, making it an attractive in vivo model for human research. The tests identify loss of heterozygosity for the specified genetic markers in heterozygous or trans-heterozygous adults using phenotypically observable genetic markers in adult tissues. Although diverse events like point mutations/deletions, nondisjunction, and homologous mitotic recombination might theoretically cause this loss of heterozygosity, nondisjunction processes are generally not relevant for most of the examined chemicals. SMARTs are two different tests that use the same genetic foundation, but target different adult tissues and are named accordingly: the wing-spot test and the eye-spot test.
Background
In the developmental phase, larval structures and imaginal discs - clusters of diploid cells of undifferentiated epithelium- are formed in the embryo. The pupa emerges following the completion of the larval stages, and metamorphosis occurs as a result of systemic hormonal regulation, with histolysis of the larval organs and differentiation of the imaginal discs into adult components. When these imaginal discs are exposed to genotoxic substances genetic mutations occur due to possible DNA damage that can be inherited by the progeny cells during mitosis. The phenotypic forms of these genetic mutations can be observed in adult body forms, like the wings and the eyes, and thus can be examined using the wing-spot test and the eye-spot test, respectively. The loss of heterozygosity (LOH) for specific genetic markers in hete |
https://en.wikipedia.org/wiki/Lipid%20raft | The plasma membranes of cells contain combinations of glycosphingolipids, cholesterol and protein receptors organised in glycolipoprotein lipid microdomains termed lipid rafts. Their existence in cellular membranes remains somewhat controversial. It has been proposed that they are specialized membrane microdomains which compartmentalize cellular processes by serving as organising centers for the assembly of signaling molecules, allowing a closer interaction of protein receptors and their effectors to promote kinetically favorable interactions necessary for the signal transduction. Lipid rafts influence membrane fluidity and membrane protein trafficking, thereby regulating neurotransmission and receptor trafficking. Lipid rafts are more ordered and tightly packed than the surrounding bilayer, but float freely within the membrane bilayer. Although more common in the cell membrane, lipid rafts have also been reported in other parts of the cell, such as the Golgi apparatus and lysosomes.
Properties
One key difference between lipid rafts and the plasma membranes from which they are derived is lipid composition. Research has shown that lipid rafts contain 3 to 5-fold the amount of cholesterol found in the surrounding bilayer. Also, lipid rafts are enriched in sphingolipids such as sphingomyelin, which is typically elevated by 50% compared to the plasma membrane. To offset the elevated sphingolipid levels, phosphatidylcholine levels are decreased which results in similar choline-containing lipid levels between the rafts and the surrounding plasma membrane. Cholesterol interacts preferentially, although not exclusively, with sphingolipids due to their structure and the saturation of the hydrocarbon chains. Although not all of the phospholipids within the raft are fully saturated, the hydrophobic chains of the lipids contained in the rafts are more saturated and tightly packed than the surrounding bilayer. Cholesterol is the dynamic "glue" that holds the raft together. D |
https://en.wikipedia.org/wiki/Doubling%20time | The doubling time is the time it takes for a population to double in size/value. It is applied to population growth, inflation, resource extraction, consumption of goods, compound interest, the volume of malignant tumours, and many other things that tend to grow over time. When the relative growth rate (not the absolute growth rate) is constant, the quantity undergoes exponential growth and has a constant doubling time or period, which can be calculated directly from the growth rate.
This time can be calculated by dividing the natural logarithm of 2 by the exponent of growth, or approximated by dividing 70 by the percentage growth rate (more roughly but roundly, dividing 72; see the rule of 72 for details and derivations of this formula).
The doubling time is a characteristic unit (a natural unit of scale) for the exponential growth equation, and its converse for exponential decay is the half-life.
As an example, Canada's net population growth was 2.7 percent in the year 2022; dividing 72 by 2.7 gives an approximate doubling time of 27 years. Thus if that growth rate were to remain constant, Canada's population would double from its 2023 figure of about 39 million to about 78 million by 2050.
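The arithmetic is easy to check directly; the following minimal Python sketch (illustrative, not from the article) compares the exact continuous-growth formula with the rule-of-72 approximation:

```python
import math

def doubling_time_exact(growth_rate):
    # Continuous exponential growth: T = ln(2) / r, with r as a fraction (0.027 for 2.7%).
    return math.log(2) / growth_rate

def doubling_time_rule(rate_percent, constant=72):
    # Rule-of-70/72 approximation, using the growth rate in percent.
    return constant / rate_percent

print(round(doubling_time_exact(0.027), 1))  # 25.7 years (exact, continuous compounding)
print(round(doubling_time_rule(2.7), 1))     # 26.7 years (rule of 72)
```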
History
The notion of doubling time dates to interest on loans in Babylonian mathematics. Clay tablets from circa 2000 BCE include the exercise "Given an interest rate of 1/60 per month (no compounding), compute the doubling time." This yields an annual interest rate of 12/60 = 20%, and hence a doubling time of 100% growth/20% growth per year = 5 years. Further, repaying double the initial amount of a loan, after a fixed time, was common commercial practice of the period: a common Assyrian loan of 1900 BCE consisted of loaning 2 minas of gold, getting back 4 in five years, and an Egyptian proverb of the time was "If wealth is placed where it bears interest, it comes back to you redoubled."
Examination
Examining the doubling time can give a more intuitive sense of the |
https://en.wikipedia.org/wiki/Semantic%20service-oriented%20architecture | A Semantic Service Oriented Architecture (SSOA) is an architecture that allows for scalable and controlled Enterprise Application Integration solutions. SSOA describes an approach to enterprise-scale IT infrastructure. It leverages rich, machine-interpretable descriptions of data, services, and processes to enable software agents to autonomously interact to perform critical mission functions. SSOA is technically founded on three notions:
The principles of Service-oriented architecture (SOA);
Standard Based Design (SBD); and
Semantics-based computing.
SSOA combines and implements these computer science concepts into a robust, extensible architecture capable of enabling complex, powerful functions.
Applications
In the health care industry, the SSOA of HL7 has long been implemented. Other protocols include LOINC, PHIN, and HIPAA-related standards. A series of SSOA-related ISO standards for financial services has been published and can be found on the ISO's website. Some financial sectors also adopt EMV standards to serve European consumers. The parts of SSOA relating to transport and trade are in ISO sections 03.220.20 and 35.240.60. General guidelines on the technology and the standards in other fields are partially located in sections 25.040.40 and 35.240.99.
See also
Cyber security standards
ISO/IEC 7816
ISO 8583
ISO/IEC 8859
ISO 9241
ISO 9660
ISO/IEC 11179
ISO/IEC 15408
ISO/IEC 17799
ISO/IEC 27000-series
Service component architecture
Semantic web
EMML
Business Intelligence 2.0 (BI 2.0) |
https://en.wikipedia.org/wiki/Orientation%20of%20a%20vector%20bundle | In mathematics, an orientation of a real vector bundle is a generalization of an orientation of a vector space; thus, given a real vector bundle π: E →B, an orientation of E means: for each fiber Ex, there is an orientation of the vector space Ex and one demands that each trivialization map (which is a bundle map)
π−1(U) → U × Rn is fiberwise orientation-preserving, where Rn is given the standard orientation. In more concise terms, this says that the structure group of the frame bundle of E, which is the real general linear group GLn(R), can be reduced to the subgroup consisting of those matrices with positive determinant.
If E is a real vector bundle of rank n, then a choice of metric on E amounts to a reduction of the structure group to the orthogonal group O(n). In that situation, an orientation of E amounts to a reduction from O(n) to the special orthogonal group SO(n).
A vector bundle together with an orientation is called an oriented bundle. A vector bundle that can be given an orientation is called an orientable vector bundle.
The basic invariant of an oriented bundle is the Euler class. The multiplication (that is, cup product) by the Euler class of an oriented bundle gives rise to a Gysin sequence.
Examples
A complex vector bundle is oriented in a canonical way.
The notion of an orientation of a vector bundle generalizes an orientation of a differentiable manifold: an orientation of a differentiable manifold is an orientation of its tangent bundle. In particular, a differentiable manifold is orientable if and only if its tangent bundle is orientable as a vector bundle. (note: as a manifold, a tangent bundle is always orientable.)
Operations
To give an orientation to a real vector bundle E of rank n is to give an orientation to the (real) determinant bundle of E. Similarly, to give an orientation to E is to give an orientation to the unit sphere bundle of E.
Just as a real vector bundle is classified by the real infinite Grassmannian, oriented bundles are classified by th |
https://en.wikipedia.org/wiki/Ordered%20topological%20vector%20space | In mathematics, specifically in functional analysis and order theory, an ordered topological vector space, also called an ordered TVS, is a topological vector space (TVS) X that has a partial order ≤ making it into an ordered vector space whose positive cone is a closed subset of X.
Ordered TVS have important applications in spectral theory.
Normal cone
If C is a cone in a TVS X, then C is normal if 𝒩 = [𝒩]C, where 𝒩 is the neighborhood filter at the origin, [𝒩]C = {[U]C : U ∈ 𝒩}, and [U]C = (U + C) ∩ (U − C) is the C-saturated hull of a subset U of X.
If C is a cone in a TVS X (over the real or complex numbers), then the following are equivalent:
C is a normal cone.
For every filter in X, if then .
There exists a neighborhood base in X such that implies .
and if X is a vector space over the reals then also:
There exists a neighborhood base at the origin consisting of convex, balanced, C-saturated sets.
There exists a generating family P of semi-norms on X such that p(x) ≤ p(x + y) for all x, y ∈ C and p ∈ P.
If the topology on X is locally convex then the closure of a normal cone is a normal cone.
Properties
If C is a normal cone in X and B is a bounded subset of X, then [B]C is bounded; in particular, every interval [a, b] is bounded.
If X is Hausdorff then every normal cone in X is a proper cone.
Properties
Let X be an ordered vector space over the reals that is finite-dimensional. Then the order of X is Archimedean if and only if the positive cone of X is closed for the unique topology under which X is a Hausdorff TVS.
Let X be an ordered vector space over the reals with positive cone C. Then the following are equivalent:
the order of X is regular.
C is sequentially closed for some Hausdorff locally convex TVS topology on X and distinguishes points in X
the order of X is Archimedean and C is normal for some Hausdorff locally convex TVS topology on X.
See also |
https://en.wikipedia.org/wiki/Symbols%20of%20Nova%20Scotia | Nova Scotia is one of Canada's provinces, and has established several provincial symbols.
Symbols |
https://en.wikipedia.org/wiki/Rootkit | A rootkit is a collection of computer software, typically malicious, designed to enable access to a computer or an area of its software that is not otherwise allowed (for example, to an unauthorized user) and often masks its existence or the existence of other software. The term rootkit is a compound of "root" (the traditional name of the privileged account on Unix-like operating systems) and the word "kit" (which refers to the software components that implement the tool). The term "rootkit" has negative connotations through its association with malware.
Rootkit installation can be automated, or an attacker can install it after having obtained root or administrator access. Obtaining this access is a result of a direct attack on a system, i.e. exploiting a vulnerability (such as privilege escalation) or a password (obtained by cracking or social engineering tactics like "phishing"). Once installed, it becomes possible to hide the intrusion as well as to maintain privileged access. Full control over a system means that existing software can be modified, including software that might otherwise be used to detect or circumvent it.
Rootkit detection is difficult because a rootkit may be able to subvert the software that is intended to find it. Detection methods include using an alternative and trusted operating system, behavioral-based methods, signature scanning, difference scanning, and memory dump analysis. Removal can be complicated or practically impossible, especially in cases where the rootkit resides in the kernel; reinstallation of the operating system may be the only available solution to the problem. When dealing with firmware rootkits, removal may require hardware replacement, or specialized equipment.
History
The term rootkit or root kit originally referred to a maliciously modified set of administrative tools for a Unix-like operating system that granted "root" access. If an intruder could replace the standard administrative tools on a system with a rootki |
https://en.wikipedia.org/wiki/Layer%20four%20traceroute | Layer Four Traceroute (LFT) is a fast, multi-protocol traceroute engine, that also implements numerous other features including AS number lookups through regional Internet registries and other reliable sources, Loose Source Routing, firewall and load balancer detection, etc. LFT is best known for its use by network security practitioners to trace a route to a destination host through many configurations of packet-filters / firewalls, and to detect network connectivity, performance or latency problems.
How it works
LFT sends various TCP SYN and FIN probes (differing from Van Jacobson's UDP-based method) or UDP probes utilizing the IP protocol time to live field and attempts to elicit an ICMP TIME_EXCEEDED response from each gateway along the path to some host. LFT also listens for various TCP, UDP, and ICMP messages along the way to assist network managers in ascertaining per-protocol heuristic routing information, and can optionally retrieve various information about the networks it traverses. The operation of layer four traceroute is described in detail in several prominent security books.
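As a rough sketch of the TTL-stepping idea (LFT itself is a standalone C program; this Python fragment uses the third-party scapy library, a placeholder target address, and needs root privileges for raw sockets):

```python
from scapy.all import IP, TCP, ICMP, sr1  # third-party: pip install scapy

def tcp_syn_trace(dst, dport=80, max_ttl=20):
    # Send TCP SYN probes with increasing TTL and report who answered.
    for ttl in range(1, max_ttl + 1):
        reply = sr1(IP(dst=dst, ttl=ttl) / TCP(dport=dport, flags="S"),
                    timeout=2, verbose=0)
        if reply is None:
            print(ttl, "*")  # filtered, rate-limited, or silently dropped
        elif reply.haslayer(ICMP) and reply[ICMP].type == 11:
            print(ttl, reply.src, "ICMP TIME_EXCEEDED")  # intermediate gateway
        elif reply.haslayer(TCP):
            print(ttl, reply.src, "target reached (SYN/ACK or RST)")
            break

tcp_syn_trace("192.0.2.1")  # placeholder address (TEST-NET-1)
```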
Origins
The lft command first appeared in 1998 as fft. Renamed as a result of confusion with fast Fourier transforms, lft stands for layer four traceroute. Results are often referred to as a layer four trace.
See also
Prefix WhoIs
Sources
External links
Layer Four Traceroute Project
Network analyzers
Free network management software |
https://en.wikipedia.org/wiki/Motorola%20Single%20Board%20Computers | Motorola Single Board Computers is Motorola's production line of computer boards for embedded systems. There are three different lines: mvme68k, mvmeppc and mvme88k. The first version of the board appeared in 1988. Motorola still makes these boards; the most recent is the MVME3100.
NetBSD supports the MVME147, MVME162, MVME167, MVME172 and MVME177 boards from the mvme68k family, as well as the MVME160x line of mvmeppc boards.
OpenBSD supports the MVME141, MVME165, MVME188 and MVME197 boards. |
https://en.wikipedia.org/wiki/Chartreusin | Chartreusin is an antibiotic originally isolated from the bacterium Streptomyces chartreusis. The crystalline compound itself has a yellow-green colour, as per its name, and is stable at room temperature for several hours. Chartreusin is chemically related to elsamitrucin, as the two share an aglycone chartarin structure, though they differ in their sugar moieties. Both chartreusin and elsamitrucin were found to have anticancer activity.
Biological activity
Chartreusin was shown to be effective as an antibiotic against some Gram-positive species, as well as mycobacteria. This compound has also displayed anti-cancer activity, particularly against certain melanomas and leukemia in mice. However, this effect could only be observed in vivo when the antibiotic was administered via intraperitoneal injection. Chartreusin administered by intravenous therapy was ineffective, as the compound would be excreted through the bile.
This compound is believed to function by binding directly to DNA, preventing its replication. It binds cooperatively and has a high affinity for alternating AT or GC sequences. Upon binding, chartreusin may inhibit the relaxation of negatively supercoiled DNA, or induce strand scission. Consequently, this compound has been shown to interfere with mammalian cells' progression through the cell cycle. In the presence of chartreusin, cells in the G1 stage move more slowly into S, while cells in the G2 stage are entirely prevented from moving on to mitosis. Those cells already in the S phase are likely to experience lethal effects, though chartreusin's lethality is also a function of both dosage and duration of exposure.
Pharmaceutical potential
Chartreusin is not currently considered to have significant potential as an anti-cancer drug. The concentration required for the drug to inhibit cell growth is typically also cytotoxic. Among surviving cells, prolonged exposure to chartreusin leads to irreversible inhibition of growth and damage to DNA. Fortunat |
https://en.wikipedia.org/wiki/Superstition%20in%20India | Superstition refers to any belief or practice that is explained by supernatural causality and contradicts modern science. Superstitious beliefs and practices often vary from one person to another or from one culture to another.
Common examples of superstitious beliefs in India include:
a black cat crossing the road symbolizes bad luck
a crow cawing indicates that guests are arriving
drinking milk after eating fish causes skin diseases
seeing a mongoose is considered very lucky
breaking a mirror brings bad luck
itchy palms mean that money is coming your way.
Overview
Superstitions are usually attributed to a lack of education; however, this has not always been the case in India, as there are many educated people with beliefs the public considers superstitious. Superstitious beliefs and practices vary from one region to another, ranging from harmless practices such as lemon-and-chili totems to ward off the evil eye, to harmful acts like witch-burning.
Being part of tradition and religion, these beliefs and practices have been passed down from one generation to another for centuries. The Indian government has tried to put new laws prohibiting such practices into effect. Due to the rich history of superstition, these laws often face a lot of opposition from the general public. In 2013, Narendra Dabholkar, an anti-superstition activist and founder of the Committee for the Eradication of Blind Faith, was fatally shot by two assailants on motorbikes for campaigning for the enactment of a law prohibiting black magic. Critics argued that the Indian constitution does not prohibit such acts.
Past
Sati
Sati is the act or custom of a Hindu widow burning herself or being burned to death on the funeral pyre of her husband. After watching the Sati of his own sister-in-law, Ram Mohan Roy began campaigning for abolition of the practice in 1811. The practice of Sati was abolished by Governor General Lord William Bentinck in British India in 1829. |
https://en.wikipedia.org/wiki/Beam%20propagation%20method | The beam propagation method (BPM) is an approximation technique for simulating the propagation of light in slowly varying optical waveguides. It is essentially the same as the so-called parabolic equation (PE) method in underwater acoustics. Both BPM and the PE were first introduced in the 1970s. When a wave propagates along a waveguide for a large distance (large compared with the wavelength), rigorous numerical simulation is difficult. The BPM relies on approximate differential equations which are also called the one-way models. These one-way models involve only a first-order derivative in the variable z (for the waveguide axis) and they can be solved as an "initial" value problem. The "initial" value problem does not involve time; rather, it is posed in the spatial variable z.
The original BPM and PE were derived from the slowly varying envelope approximation and they are the so-called paraxial one-way models. Since then, a number of improved one-way models have been introduced. They come from a one-way model involving a square root operator, and they are obtained by applying rational approximations to the square root operator. After a one-way model is obtained, one still has to solve it by discretizing the variable z. However, it is possible to merge the two steps (rational approximation to the square root operator and discretization of z) into one step. Namely, one can find rational approximations to the so-called one-way propagator (the exponential of the square root operator) directly. The rational approximations are not trivial. Standard diagonal Padé approximants have trouble with the so-called evanescent modes. These evanescent modes should decay rapidly in z, but the diagonal Padé approximants will incorrectly propagate them as propagating modes along the waveguide. Modified rational approximants that can suppress the evanescent modes are now available. The accuracy of the BPM can be further improved if one uses the energy-conserving one-way model or the single-scatter on |
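To make the split-step idea concrete, here is a minimal paraxial split-step Fourier sketch for a one-dimensional transverse field in a homogeneous medium (an illustrative toy, not the article's full method; the grid, wavelength, and Gaussian input are arbitrary choices, and a real waveguide would apply an index phase screen each step):

```python
import numpy as np

wavelength = 1.0e-6                       # 1 um, arbitrary
k0 = 2 * np.pi / wavelength
N, width = 512, 400e-6                    # transverse grid: 512 points over 400 um
x = np.linspace(-width / 2, width / 2, N, endpoint=False)
kx = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])

field = np.exp(-(x / 20e-6) ** 2)         # Gaussian input beam, 20 um waist
dz, steps = 50e-6, 100                    # z step and step count (5 mm total)

# One-way paraxial propagator in the spatial-frequency domain.
propagator = np.exp(-1j * kx ** 2 * dz / (2 * k0))
for _ in range(steps):
    field = np.fft.ifft(propagator * np.fft.fft(field))
    # For a waveguide, multiply here by exp(1j * k0 * (n(x) - n0) * dz).

intensity = np.abs(field) ** 2
rms = np.sqrt(np.sum(intensity * x ** 2) / np.sum(intensity))
print(f"RMS half-width after 5 mm: {rms * 1e6:.0f} um")  # diffractive spreading
```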
https://en.wikipedia.org/wiki/Movim | Movim (My Open Virtual Identity Manager) is a distributed social network built on top of XMPP, a popular open standards communication protocol. Movim is a free and open source software licensed under the AGPL-3.0-or-later license. It can be accessed using existing XMPP clients and Jabber accounts.
The project was founded by Timothée Jaussoin in 2010. It is maintained by Timothée Jaussoin and Christine Ho.
Concept
Movim is a distributed social networking platform. It builds an abstraction layer for communication and data management while leveraging the strength of the underlying XMPP protocol.
XMPP is a widely used open standards communication platform. Using XMPP allows the service to interface with existing XMPP clients like Conversations, Pidgin, Xabber and Jappix. Users can directly login to Movim using their existing Jabber account.
Movim addresses the privacy concerns related to centralized social networks by allowing users to set up their own server (or "pod") to host content; pods can then interact to share status updates, photographs, and other social data. Users can export their data to other pods or offline, allowing for greater flexibility.
It allows its users to host their data with a traditional web host, a cloud-based host, an ISP, or a friend. The framework, which is built on PHP, is free software and can be experimented with by external developers.
Technology
Movim is developed using PHP, CSS and HTML5. The software initially used the Symfony framework. Due to the complexity of the application and the XMPP connection management, developers rewrote Movim as a standalone application. It now has its own libraries and APIs.
Movim was earlier based on the JAXL library for implementing XMPP. JAXL has been replaced by Moxl (Movim XMPP Library), licensed under the AGPL-3.0-only license, to manage connecting to the server through the XMPP WebSocket protocol. This is claimed to have reduced the code complexity and performance load while providing |
https://en.wikipedia.org/wiki/Histatin | Histatins are histidine-rich (cationic) antimicrobial proteins found in saliva. Histatin's involvement in antimicrobial activities makes histatin part of the innate immune system.
Histatin was first isolated in 1988. Its functions include maintaining homeostasis within the oral cavity, helping in the formation of pellicles, and assisting in the binding of metal ions.
Structure
The structure of histatin differs depending on whether the protein of interest is histatin 1, 3 or 5. Nonetheless, histatins mainly possess a cationic (positive) charge because their primary structure consists mostly of basic amino acids. An amino acid that is crucial to histatin's function is histidine: studies show that the removal of histidine (especially from histatin 5) results in reduced antifungal activity.
Function
Histatins are antimicrobial and antifungal proteins, and have been found to play a role in wound closure. A significant source of histatins is the serous fluid secreted by Ebner's glands (salivary glands at the back of the tongue), where they are produced by acinar cells. Here they offer some early defense against incoming microbes.
The three major histatins are 1, 3, and 5, which contain 38, 32, and 24 amino acids, respectively. Histatin 2 is a degradation product of histatin 1, and all other histatins are degradation products of histatin 3 through the process of post-translational proteolysis of the HTN3 gene product. Therefore, there are only two genes, HTN1 and HTN3.
The N-terminus of Histatin 5 allows it to bind with metals, and this can result in the production of reactive oxygen species.
Histatins disrupt the fungal plasma membrane, resulting in release of the intracellular content of the fungal cell. They also inhibit the growth of yeast by binding to the potassium transporter and facilitating the loss of azole-resistant species.
The antifungal properties of histatins have been seen with fungi such as Candida glabrata, Candida krusei, |
https://en.wikipedia.org/wiki/Transmembrane%20protein%20150b | Transmembrane protein 150B is a protein that in humans is encoded by the TMEM150B gene.
Function
The protein belongs to the DRAM (damage-regulated autophagy modulator) family of membrane-spanning proteins. Alternate splicing results in multiple transcript variants. [provided by RefSeq, Aug 2013]. |
https://en.wikipedia.org/wiki/AmpliChip%20CYP450%20Test | AmpliChip CYP450 Test is a clinical test from Roche. The test aims to determine the specific gene variants (a genotype) that govern how a patient metabolizes certain medicines, thereby guiding doctors to prescribe for the best effectiveness and fewest side effects.
The AmpliChip CYP450 Test uses microarray technology from Affymetrix (GeneChip) to determine the genotype of the patient in terms of two cytochrome P450 enzymes: 2D6 and 2C19.
2D6 and 2C19 variability
CYP2D6 and CYP2C19 belong to the Cytochrome P450 oxidase family. CYP2D6 has over 90 known variants, while CYP2C19 has mainly three. Together, they are responsible for the majority of the inter-individual variability in the ability to metabolize drugs.
There are four phenotypes of CYP2D6: Poor Metabolizer (PM), Intermediate Metabolizer (IM), Extensive (normal) Metabolizer (EM) and Ultrarapid Metabolizer (UM). For CYP2C19, there are only two phenotypes: PM and EM. If a substrate of the enzyme is given to the patient as a medication, and if the patient has reduced CYP2D6 or CYP2C19 activity, the patient will have elevated drug concentration in their body, and therefore severe side effects may occur. On the other hand, for the UM patient, the drug concentration might be too low to have a therapeutic effect. So testing the phenotype of the patient is important to help determine the optimum dosage of the drug.
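Conceptually, the genotype-to-phenotype step is a classification; the toy sketch below is purely illustrative (the four CYP2D6 categories come from the article, but the activity scores and thresholds are invented placeholders, not Roche's actual decision rules):

```python
# Hypothetical mapping from a CYP2D6 activity score to a metabolizer phenotype.
def cyp2d6_phenotype(activity_score: float) -> str:
    if activity_score == 0:
        return "PM"   # Poor Metabolizer
    if activity_score < 1.0:
        return "IM"   # Intermediate Metabolizer
    if activity_score <= 2.0:
        return "EM"   # Extensive (normal) Metabolizer
    return "UM"       # Ultrarapid Metabolizer

print(cyp2d6_phenotype(1.5))  # EM
```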
How it works
The test analyzes the DNA of a patient to determine the genotype, from which a prediction of the phenotype can be made. The DNA sample comes from blood (as Roche suggests) or, alternatively, from a mouth brush called a buccal swab. The analysis has five steps after DNA is extracted from patient samples:
PCR amplification of the gene.
Fragmentation and labeling of the PCR product
Hybridization and staining on the AmpliChip DNA microarray.
Scanning the chip.
Data analysis.
FDA approval
FDA approved the test on December 24, 2004. The AmpliChip CYP450 test is the first FDA approved p |
https://en.wikipedia.org/wiki/Slow%20manifold | In mathematics, the slow manifold of an equilibrium point of a dynamical system occurs as the most common example of a center manifold. One of the main methods of simplifying dynamical systems is to reduce the dimension of the system to that of the slow manifold—center manifold theory rigorously justifies the modelling. For example, some global and regional models of the atmosphere or oceans resolve the so-called quasi-geostrophic flow dynamics on the slow manifold of the atmosphere/ocean dynamics, which is crucial to forecasting with a climate model.
In some cases, a slow manifold is defined to be the invariant manifold on which the dynamics are slow compared to the dynamics off the manifold. The slow manifold in a particular problem would be a sub-manifold of either the stable, unstable, or center manifold, exclusively, that has the same dimension of, and is tangent to, the eigenspace with an associated eigenvalue (or eigenvalue pair) that has the smallest real part in magnitude. This generalizes the definition described in the first paragraph. Furthermore, one might define the slow manifold to be tangent to more than one eigenspace by choosing a cut-off point in an ordering of the real part eigenvalues in magnitude from least to greatest. In practice, one should be careful to see what definition the literature is suggesting.
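For a concrete linear system, the eigenvalue ordering described above is easy to compute; the following sketch (with an arbitrary illustrative matrix, not from the article) sorts eigenvalues by the magnitude of their real part and spans the slow subspace with the slowest eigenvectors:

```python
import numpy as np

# dx/dt = A x with a slow spiral (Re = -0.01) and a fast decay (Re = -10).
A = np.array([[-0.01,  1.0,  0.0],
              [-1.0,  -0.01, 0.0],
              [ 0.0,   0.0, -10.0]])

eigvals, eigvecs = np.linalg.eig(A)
order = np.argsort(np.abs(eigvals.real))   # slowest (smallest |Re|) first
print("eigenvalues, slowest first:", eigvals[order])

n_slow = 2                                 # cut-off chosen at the spectral gap
slow_basis = eigvecs[:, order[:n_slow]]    # basis of the slow subspace
print("slow subspace basis:\n", slow_basis)
```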
Definition
Consider the dynamical system
dx/dt = f(x)
for an evolving state vector x(t) and with equilibrium point x*. Then the linearization of the system at the equilibrium point is
dx/dt = Ax, where A = df/dx(x*).
The matrix A defines four invariant subspaces characterized by the eigenvalues of the matrix: as described in the entry for the center manifold, three of the subspaces are the stable, unstable and center subspaces corresponding to the span of the eigenvectors with eigenvalues that have real part negative, positive, and zero, respectively; the fourth subspace is the slow subspace given by the span of the eigenvectors, and generalized eigenvectors, corresponding to the e |
https://en.wikipedia.org/wiki/Huntington%27s%20disease-like%20syndrome | Huntington's disease-like syndromes (HD-like syndromes, or HDL syndromes) are a family of inherited neurodegenerative diseases that closely resemble Huntington's disease (HD) in that they typically produce a combination of chorea, cognitive decline or dementia and behavioural or psychiatric problems.
Types
HDL1
HDL1 is an unusual, autosomal dominant familial prion disease. Only described in one family, it is caused by an eight-octapeptide repeat insertion in the PRNP gene. More broadly, inherited prion diseases in general can mimic HD.
HDL2
HDL2 is the most common HD-like syndrome and is caused by CTG/CAG triplet expansions in the JPH3 gene encoding junctophilin-3. It is almost exclusively restricted to populations of African descent and is actually more common than Huntington's disease in Black South Africans.
HDL3
HDL3 is a rare, autosomal recessive disorder linked to chromosome 4p15.3. It has only been reported in two families, and the causative gene is unidentified.
Other
Other neurogenetic disorders can cause an HD-like or HD phenocopy syndrome but are not solely defined as HDL syndromes. The commonest is spinocerebellar ataxia type 17 (SCA-17), occasionally called HDL-4. Others include mutations in C9orf72, spinocerebellar ataxias type 1 and 3, neuroacanthocytosis, dentatorubral-pallidoluysian atrophy (DRPLA), brain iron accumulation disorders, Wilson's disease, benign hereditary chorea, Friedreich's ataxia and mitochondrial diseases.
A Huntington's disease-like presentation may also be caused by acquired causes. |
https://en.wikipedia.org/wiki/Spray%20drying | Spray drying is a method of forming a dry powder from a liquid or slurry by rapidly drying with a hot gas. This is the preferred method of drying of many thermally-sensitive materials such as foods and pharmaceuticals, or materials which may require extremely consistent, fine particle size. Air is the heated drying medium; however, if the liquid is a flammable solvent such as ethanol or the product is oxygen-sensitive then nitrogen is used.
All spray dryers use some type of atomizer or spray nozzle to disperse the liquid or slurry into a controlled drop size spray. The most common of these are rotary disk and single-fluid high pressure swirl nozzles. Atomizer wheels are known to provide broader particle size distribution, but both methods allow for consistent distribution of particle size. Alternatively, for some applications two-fluid or ultrasonic nozzles are used. Depending on the process requirements, drop sizes from 10 to 500 μm can be achieved with the appropriate choices. The most common applications are in the 100 to 200 μm diameter range. The dry powder is often free-flowing.
The most common type of spray dryer is the single-effect spray dryer, which has a single source of drying air at the top of the chamber. In most cases the air is blown in the same direction as the sprayed liquid (co-current). A fine powder is produced, but it can have poor flow and produce much dust. To overcome the dust and poor flow of the powder, a new generation of spray dryers called multiple-effect spray dryers has been developed. Instead of drying the liquid in one stage, drying is done through two steps: the first at the top (as per the single effect) and the second with an integrated static bed at the bottom of the chamber. The bed provides a humid environment which causes smaller particles to clump, producing more uniform particle sizes, usually within the range of 100 to 300 μm. These powders are free-flowing due to the larger particle size.
The fine powders |
https://en.wikipedia.org/wiki/Dog%20behaviourist | A dog behaviourist is a person who works in modifying or changing behaviour in dogs. They can be experienced dog handlers, who have developed their expertise over many years of hands-on experience, or may have formal training up to degree level. Some have backgrounds in veterinary science, animal science, zoology, sociology, biology, or animal behaviour, and have applied their experience and knowledge to the interaction between humans and dogs. Professional certification may be offered through either industry associations or local educational institutions. There is, however, no requirement for behaviourists to be members of a professional body or to have formal training.
Overview
While any person who works to modify a dog's behaviour might be considered a dog behaviourist in the broadest sense of the term, animal behaviourist is a title given only to individuals who have obtained relevant professional qualifications. The professional fields and course of study for dog behaviourists include, but are not limited to, animal science, zoology, sociology, biology, psychology, ethology, and veterinary science. People with these credentials usually refer to themselves as Clinical Animal Behaviourists, Applied Animal Behaviourists (PhD) or Veterinary Behaviourists (veterinary degree). If they limit their practice to a particular species, they might refer to themselves as a dog/cat/bird behaviourist.
While there are many dog trainers who work with behavioural issues, there are relatively few qualified dog behaviourists. For the majority of the general public, the cost of the services of a dog behaviourist usually reflects both the supply/demand inequity, as well as the level of training they have obtained.
Some behaviourists can be identified in the U.S. by the post-nominals "CAAB", indicating that they are a Certified Applied Animal behaviourist (which requires a Ph.D. or veterinary degree), or, "DACVB", indicating that they are a diplomate of the American College of Vet |
https://en.wikipedia.org/wiki/Multi-homogeneous%20B%C3%A9zout%20theorem | In algebra and algebraic geometry, the multi-homogeneous Bézout theorem is a generalization to multi-homogeneous polynomials of Bézout's theorem, which counts the number of isolated common zeros of a set of homogeneous polynomials. This generalization is due to Igor Shafarevich.
Motivation
Given a polynomial equation or a system of polynomial equations it is often useful to compute or to bound the number of solutions without computing explicitly the solutions.
In the case of a single equation, this problem is solved by the fundamental theorem of algebra, which asserts that the number of complex solutions is bounded by the degree of the polynomial, with equality, if the solutions are counted with their multiplicities.
In the case of a system of n polynomial equations in n unknowns, the problem is solved by Bézout's theorem, which asserts that, if the number of complex solutions is finite, their number is bounded by the product of the degrees of the polynomials. Moreover, if the number of solutions at infinity is also finite, then the product of the degrees equals the number of solutions counted with multiplicities and including the solutions at infinity.
However, it is rather common that the number of solutions at infinity is infinite. In this case, the product of the degrees of the polynomials may be much larger than the number of roots, and better bounds are useful.
The multi-homogeneous Bézout theorem provides such a better bound when the unknowns may be split into several subsets such that the degree of each polynomial in each subset is lower than the total degree of the polynomial. For example, let f1, ..., f2n be polynomials of degree two which are of degree one in the n indeterminates x1, ..., xn and also of degree one in y1, ..., yn (that is, the polynomials are bilinear). In this case, Bézout's theorem bounds the number of solutions by
2^(2n) = 4^n,
while the multi-homogeneous Bézout theorem gives the bound (using Stirling's approximation)
binom(2n, n) ~ 4^n / sqrt(πn).
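The gap between the two bounds is easy to tabulate; a quick check in Python (assuming the bilinear example above):

```python
from math import comb, pi, sqrt

# Bézout bound 4^n versus the multi-homogeneous bound C(2n, n)
# for 2n bilinear polynomials in n + n variables.
for n in (1, 2, 5, 10):
    print(n, 4 ** n, comb(2 * n, n), round(4 ** n / sqrt(pi * n)))
# At n = 10: 1048576 versus 184756; the Stirling estimate is about 187079.
```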
Statement
A multi-homogeneous polynomial is a polynomial that is homoge |
https://en.wikipedia.org/wiki/PSeven | pSeven is a DSE (Design Space Exploration) software platform that was developed by DATADVANCE that features design, simulation and analysis capabilities and assists in design decisions. It provides integration with third party CAD and CAE software tools, multi-objective and robust optimization algorithms, data analysis, and uncertainty quantification tools.
pSeven comes under the notion of PIDO (Process Integration and Design Optimization) software. Design Space Exploration functionality is based on the mathematical algorithms of pSeven Core Python library, also developed by DATADVANCE.
pSeven workflow automation capabilities and algorithms from pSeven Core laid the foundation for the development of pSeven Enterprise, a cloud-native low-code platform used for engineering automation at enterprise level.
History
The foundation for the pSeven Core library, which underlies pSeven, was laid in 2003, when researchers from the Institute for Information Transmission Problems started collaborating with Airbus on R&D in the domains of simulation and data analysis. The first version of the pSeven Core library was created in association with EADS Innovation Works in 2009. Since 2012, the pSeven software platform for simulation automation, data analysis and optimization has been developed and marketed by DATADVANCE, incorporating pSeven Core.
Functionality
pSeven's functionality can be divided into following blocks: Data & Model Analysis, Predictive Modeling, Design Optimization and Process Integration.
Data & Model Analysis
pSeven provides a variety of tools for data and model analysis:
Design of Experiments
Design of Experiments allows controlling the process of surrogate modeling via adaptive sampling plan, which benefits the quality of approximation. As a result, it ensures time and resource saving on experiments and smarter decision-making based on the detailed knowledge of the design space.
Sensitivity and Dependency Analysis
Sensitivity and Dependence analysis are |
https://en.wikipedia.org/wiki/K-convex%20function | K-convex functions, first introduced by Scarf, are a special weakening of the concept of convex function which is crucial in the proof of the optimality of the (s, S) policy in inventory control theory. The policy is characterized by two numbers s and S, with S ≥ s, such that when the inventory level falls below level s, an order is issued for a quantity that brings the inventory up to level S, and nothing is ordered otherwise. Gallego and Sethi have generalized the concept of K-convexity to higher-dimensional Euclidean spaces.
Definition
Two equivalent definitions are as follows:
Definition 1 (The original definition)
Let K be a non-negative real number. A function g: R → R is K-convex if
K + g(x + a) ≥ g(x) + a * (g(x) − g(x − b)) / b
for any x, a ≥ 0, and b > 0.
Definition 2 (Definition with geometric interpretation)
A function g: R → R is K-convex if
g(λx + (1 − λ)y) ≤ λg(x) + (1 − λ)[g(y) + K]
for all x ≤ y and λ ∈ [0, 1].
This definition admits a simple geometric interpretation related to the concept of visibility. Let K ≥ 0. A point (x, g(x)) on the graph of g is said to be visible from (y, g(y) + K), with y ≥ x, if all intermediate points (z, g(z)) with x ≤ z ≤ y lie below the line segment joining these two points. Then the geometric characterization of K-convexity can be obtained as:
A function g is K-convex if and only if (x, g(x)) is visible from (y, g(y) + K) for all y ≥ x.
Proof of Equivalence
It is sufficient to prove that the above definitions can be transformed into each other. This can be seen by using the transformation λ = a/(a + b), taking x − b and x + a as the two points x and y in Definition 2.
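Definition 1 can also be checked numerically; the sketch below (illustrative, using the inequality as stated above with grid-sampled a and b) tests a convex function and a concave one:

```python
# Brute-force test of Definition 1: K + g(x + a) >= g(x) + a * (g(x) - g(x - b)) / b.
def is_K_convex(g, K, xs, step=0.5):
    for x in xs:
        a = b = step                      # one sampled choice of a >= 0, b > 0
        lhs = K + g(x + a)
        rhs = g(x) + a * (g(x) - g(x - b)) / b
        if lhs < rhs - 1e-9:
            return False
    return True

xs = [i * 0.5 for i in range(-20, 21)]
print(is_K_convex(lambda x: (x - 1) ** 2, K=0.0, xs=xs))  # True: convex, so 0-convex
print(is_K_convex(lambda x: -abs(x), K=0.1, xs=xs))       # False: concave kink at 0
```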
Properties
Property 1
If g is K-convex, then it is L-convex for any L ≥ K. In particular, if g is convex, then it is also K-convex for any K ≥ 0.
Property 2
If g1 is K-convex and g2 is L-convex, then for α, β ≥ 0, the function αg1 + βg2 is (αK + βL)-convex.
Property 3
If g is K-convex and ξ is a random variable such that E|g(x − ξ)| < ∞ for all x, then E[g(x − ξ)] is also K-convex.
Property 4
If g is K-convex, the restriction of g to any convex set is K-convex.
Property 5
If g is a continuous K-convex function and g(y) → ∞ as |y| → ∞, then there exist scalars s and S with s ≤ S such that
g(S) ≤ g(y), for all y;
g(S) + K = g(s) < g(y), for all y < s;
g is a decreasing function on (−∞, s);
g(y) ≤ g(z) + K for all y and z with s ≤ y ≤ z. |
https://en.wikipedia.org/wiki/List%20of%20U.S.%20state%20mushrooms | Five U.S. states, California, Minnesota, Oregon, Texas, and Utah, have officially declared a state mushroom. Minnesota was the first to declare a species; Morchella esculenta was chosen as its state mushroom in 1984, and codified into Statute in 2010. Four other states, Missouri, Washington, Massachusetts, and New York, have had state mushrooms proposed.
Current state mushrooms
Proposed state mushrooms
Notes |
https://en.wikipedia.org/wiki/Dissociated%20press | Dissociated press is a parody generator (a computer program that generates nonsensical text). The generated text is based on another text using the Markov chain technique. The name is a play on "Associated Press" and the psychological term dissociation (although word salad is more typical of conditions like aphasia and schizophrenia – which is, however, frequently confused with dissociative identity disorder by laypeople).
An implementation of the algorithm is available in Emacs. Another implementation is available as a Perl module in CPAN, Games::Dissociate.
The algorithm
The algorithm starts by printing a number of consecutive words (or letters) from the source text. Then it searches the source text for an occurrence of the few last words or letters printed out so far. If multiple occurrences are found, it picks a random one, and proceeds with printing the text following the chosen occurrence. After a predetermined length of text is printed out, the search procedure is repeated for the newly printed ending.
Considering that words and phrases tend to appear in specific grammatical contexts, the resulting text usually seems correct grammatically, and if the source text is uniform in style, the result appears to be of similar style and subject, and takes some effort on the reader's side to recognize as not genuine. Still, the randomness of the assembly process deprives it of any logical flow - the loosely related parts are connected in a nonsensical way, creating a humorously abstract, random result.
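A compact word-based implementation of the algorithm just described (a sketch, unrelated to the Emacs or CPAN code):

```python
import random

def dissociated_press(source: str, n_out: int = 15, k: int = 2) -> str:
    # Repeatedly search the source for the last k words emitted,
    # jump to a random occurrence, and copy the word that follows it.
    words = source.split()
    start = random.randrange(len(words) - k)
    out = words[start:start + k]
    while len(out) < n_out:
        tail = out[-k:]
        # Occurrences of the tail that are followed by at least one more word.
        hits = [i for i in range(len(words) - k) if words[i:i + k] == tail]
        if not hits:
            break                          # tail only occurs at the very end
        out.append(words[random.choice(hits) + k])
    return " ".join(out)

sample = ("the quick brown fox jumps over the lazy dog and the quick "
          "red fox runs past the lazy brown dog near the quick river")
print(dissociated_press(sample))
```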
Examples
Here is a short example of word-based Dissociated Press applied to the Jargon File:
wart: n. A small, crocky feature that sticks out of an array (C has no checks for this). This is relatively benign and easy to spot if the phrase is bent so as to be not worth paying attention to the medium in question.
Here is a short example of letter-based Dissociated Press applied to the same source:
window sysIWYG: n. A bit was named aften /bee´t@/ prefer to use the oth |
https://en.wikipedia.org/wiki/Russula%20atropurpurea | Russula atropurpurea is an edible member of the genus Russula. It is dark vinaceous (red wine-coloured) or purple, and grows with deciduous, or occasionally coniferous trees. It is commonly called the blackish purple Russula, or the purple brittlegill.
Taxonomy
Initially described as Agaricus atropurpureus by German naturalist Julius von Krombholz in 1845, and placed in Russula by his countryman Max Britzelmayr in 1893, the binomial name of this mushroom R. atropurpurea (Krombh.) Britzelm is accepted as being incorrect, and mycologists cannot agree on a suitable replacement.
Distribution and habitat
Russula atropurpurea appears in late summer and autumn. It is common in the northern temperate zones, Europe, Asia, and Eastern North America, and is mycorrhizal with oak (Quercus), with which it prefers to live. Favouring acid soil, it is occasionally found with beech (Fagus), or pine (Pinus).
Description
The cap is in diameter. It is dark reddish purple, with a dark, sometimes almost black, centre. At first it is convex, but later flattens, and often has a shallow depression. It can also be lighter in colour, or mottled yellowish. The stem is firm, white, and turns grey with age. It measures 3–6 cm in length and 1–2 cm in diameter. The closely set and fairly broad gills are adnexed to almost free, and pale cream, giving a spore print of the same colour. The flesh is white, with a fruity smell, similar to apples. It tastes moderately hot.
The species R. brunneoviolacea and R. romellii are similar, though both have darker spore prints.
As the fruitbodies mature, the caps become concave to collect water during wet weather, and much of the color washes off the rim.
Spores
The spore print is whitish, and the subglobose to globose spores, ornamented with warts and ridges, measure 7–9 × 6–7 μm.
Edibility
This mushroom is said to be the mildest of the hot-tasting Russula species. It is edible if cooked, although not recommended.
See also
List of Russula species |
https://en.wikipedia.org/wiki/Long%20code%20%28mathematics%29 | In theoretical computer science and coding theory, the long code is an error-correcting code that is locally decodable. Long codes have an extremely poor rate, but play a fundamental role in the theory of hardness of approximation.
Definition
Let f1, ..., fN, for N = 2^(2^k), be the list of all functions from {0,1}^k to {0,1}.
Then the long code encoding of a message x ∈ {0,1}^k is the string f1(x) ∘ f2(x) ∘ ... ∘ fN(x), where ∘ denotes concatenation of strings.
This string has length N = 2^(2^k).
The Walsh-Hadamard code is a subcode of the long code, and can be obtained by only using functions fi that are linear functions when interpreted as functions on the finite field with two elements. Since there are only 2^k such functions, the block length of the Walsh-Hadamard code is 2^k.
An equivalent definition of the long code is as follows:
The Long code encoding of a message j ∈ [n] is defined to be the truth table of the Boolean dictatorship function on the jth coordinate, i.e., the truth table of f: {0,1}^n → {0,1} with f(x) = x_j.
Thus, the Long code encodes a (log n)-bit string as a 2^n-bit string.
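In code, the encoding is just the truth table of a coordinate projection; a small illustrative sketch:

```python
from itertools import product

def long_code(j: int, n: int) -> list[int]:
    # Truth table of the dictatorship f(x) = x_j over all x in {0,1}^n,
    # in lexicographic order: a 2^n-bit codeword for the message j.
    return [x[j] for x in product((0, 1), repeat=n)]

print(long_code(1, 3))  # [0, 0, 1, 1, 0, 0, 1, 1]
```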
Properties
The long code does not contain repetitions, in the sense that the function computing the ith bit of the output is different from any function computing the jth bit of the output for j ≠ i.
Among all codes that do not contain repetitions, the long code has the longest possible output.
Moreover, it contains all non-repeating codes as a subcode. |
https://en.wikipedia.org/wiki/Wallace%20rule%20of%20nines | The Wallace rule of nines is a tool used in pre-hospital and emergency medicine to estimate the total body surface area (BSA) affected by a burn. In addition to determining burn severity, the measurement of burn surface area is important for estimating patients' fluid requirements and determining hospital admission criteria.
The rule of nines was devised by Pulaski and Tennison in 1947, and published by Alexander Burns Wallace in 1951.
To estimate the body surface area of a burn, the rule of nines assigns BSA values to each major body part:
9% for the head and neck
9% for each arm
18% for each leg
18% for the anterior trunk
18% for the posterior trunk
1% for the perineum
This allows the emergency medical provider to obtain a quick estimate of how much body surface area is burned. For example, if a patient's entire back (18%) and entire left leg (18%) are burned, about 36% of the patient's BSA is affected. The BSAs assigned to each body part refer to the entire body part. So, for example, if half of a patient's left leg were burned, it would be assigned a BSA value of 9% (half the total surface area of the leg). Thus, if a patient's entire back (18%), but only half of their left leg (9%) was burned, the amount of BSA affected would be 27%.
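The bookkeeping in these examples is simple enough to automate; the sketch below (illustrative) totals the adult values listed above, allowing fractional involvement of each part:

```python
# Adult rule-of-nines values, as percentages of total body surface area.
BSA_PERCENT = {
    "head_and_neck": 9, "left_arm": 9, "right_arm": 9,
    "left_leg": 18, "right_leg": 18,
    "anterior_trunk": 18, "posterior_trunk": 18, "perineum": 1,
}

def burned_bsa(burns: dict[str, float]) -> float:
    # burns maps a body part to the fraction of that part burned (0.0-1.0).
    return sum(BSA_PERCENT[part] * frac for part, frac in burns.items())

# Entire back (18%) plus half of the left leg (9%), as in the text above:
print(burned_bsa({"posterior_trunk": 1.0, "left_leg": 0.5}))  # 27.0
```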
Accuracy
Some studies have raised concerns about the rule of nines' accuracy with obese patients, noting that "the proportional contribution of various major body segments to the total body surface area changes with obesity." One study found the rule's accuracy to be "reasonable" for patients weighing up to 80 kg, but proposed a new "rule of fives" for patients over that weight:
5% body surface area for each arm
20% BSA for each leg
50% for the trunk, and
2% for the head.
Other studies have found that the rule of nines tends to over-estimate total burn area, and that ratings can be subjective, but that it can be performed quickly and easily, and provide reasonable estimates for initial management of burn patients.
The rule of nines was designed for adult patients. It is less accurate in young children due to their proportionally bigger heads and s |
https://en.wikipedia.org/wiki/Pixel%20Qi | Pixel Qi Corporation (pronounced Pixel "Chi") was an American company involved in the research of low-power computer display technology, based in San Bruno, California. It was founded by Mary Lou Jepsen, who was previously the chief technical officer of the One Laptop per Child project.
The company designed liquid crystal displays (LCDs) that can be largely manufactured using the existing manufacturing infrastructure for conventional LCDs. The advantage of Pixel Qi displays over conventional LCDs is mainly that they can be set to operate under transflective mode and reflective mode, improving eye-comfort, power usage, and visibility under bright ambient light.
By 2015, Pixel Qi's team and offices were unreachable, and the company is presumed defunct. The intellectual property is now owned by the original investor of Pixel Qi, while the right to manufacture Pixel Qi technology contractually rests with Tripuso Display Solutions.
Devices
The first commercial device to use a Pixel Qi display, the ARM-based Adam tablet by Notion Ink, was released in mid-January 2011.
Another tablet with a Pixel Qi display, named Lattice, has been announced by Innoversal.
Clover Systems has launched SunBook, a netbook with a Pixel Qi display.
The first ruggedized, MIL-SPEC tablet utilizing Pixel Qi, the Hydra-T3, was created by InHand Electronics, Inc. and launched Q1 of 2012. |
https://en.wikipedia.org/wiki/List%20of%20types%20of%20seafood | The following is a list of types of seafood. Seafood is any form of sea life regarded as food by humans. It prominently includes fish, shellfish, and roe. Shellfish include various species of molluscs, crustaceans, and echinoderms. In most parts of the world, sea mammals are generally not considered seafood even though they live in the sea. In the US, the term "seafood" is extended to fresh water organisms eaten by humans, so any edible aquatic life may be broadly referred to as seafood in the US. Historically, sea mammals such as whales and dolphins have been consumed as food, though that happens to a lesser extent in modern times. Edible sea plants, such as some seaweeds and microalgae, are widely eaten as seafood around the world, especially in Asia (see the category of edible seaweeds).
Fish
Anchovies
Anglerfish
Barracuda
Basa
Bass (see also striped bass)
Black cod
Bluefish
Bombay duck
Bonito
Bream
Brill
Burbot
Catfish
Cod (see also Pacific cod and Atlantic cod)
Dogfish
Dorade
Eel
Flounder
Grouper
Haddock
Hake
Halibut
Herring
Ilish
John Dory
Lamprey
Lingcod (see also Common ling)
Mackerel (see also Horse mackerel)
Mahi Mahi
Monkfish
Mullet
Orange roughy
Pacific rudderfish (Japanese butterfish)
Pacific saury
Parrotfish
Patagonian toothfish (also called Chilean sea bass)
Perch
Pike
Pilchard
Pollock
Pomfret
Pompano
Pufferfish (see also Fugu)
Sablefish
Sanddab, particularly Pacific sanddab
Sardine
Sea bass
Sea bream
Shad (see also alewife and American shad)
Shark
Skate
Smelt
Snakehead
Snapper (see also rockfish, rock cod and Pacific snapper)
Sole
Sprat
Stromateidae (butterfish)
Sturgeon
Surimi
Swordfish
Tilapia
Tilefish
Trout (see also rainbow trout)
Tuna (see also albacore tuna, yellowfin tuna, bigeye tuna, bluefin tuna and dogtooth tuna)
Turbot
Wahoo
Whitefish (see also stockfish)
Whiting
Witch (righteye flounder)
Yellowtail (also called Japanese amberjack)
Roe
Caviar (sturgeon roe)
Ikura (salm |
https://en.wikipedia.org/wiki/Drosophila%20melanogaster | Drosophila melanogaster is a species of fly (the taxonomic order Diptera) in the family Drosophilidae. The species is often referred to as the fruit fly or lesser fruit fly, or less commonly the "vinegar fly", "pomace fly", or "banana fly". Starting with Charles W. Woodworth's 1901 proposal of the use of this species as a model organism, D. melanogaster continues to be widely used for biological research in genetics, physiology, microbial pathogenesis, and life history evolution. As of 2017, six Nobel Prizes have been awarded to drosophilists for their work using the insect.
D. melanogaster is typically used in research owing to its rapid life cycle, relatively simple genetics with only four pairs of chromosomes, and large number of offspring per generation. It was originally an African species, with all non-African lineages having a common origin. Its geographic range includes all continents, including islands. D. melanogaster is a common pest in homes, restaurants, and other places where food is served.
Flies belonging to the family Tephritidae are also called "fruit flies". This can cause confusion, especially in the Mediterranean, Australia, and South Africa, where the Mediterranean fruit fly Ceratitis capitata is an economic pest.
Physical appearance
Wild type fruit flies are yellow-brown, with brick-red eyes and transverse black rings across the abdomen. The black portions of the abdomen are the inspiration for the species name (melanogaster = "black-bellied"). The brick-red color of the eyes of the wild type fly is due to two pigments: xanthommatin, which is brown and is derived from tryptophan, and drosopterins, which are red and are derived from guanosine triphosphate. They exhibit sexual dimorphism; females are about long; males are slightly smaller with darker backs. Males are easily distinguished from females based on colour differences, with a distinct black patch at the abdomen, less noticeable in recently emerged flies, and the sex combs (a ro |
https://en.wikipedia.org/wiki/Barry%20Mazur | Barry Charles Mazur (; born December 19, 1937) is an American mathematician and the Gerhard Gade University Professor at Harvard University. His contributions to mathematics include his contributions to Wiles's proof of Fermat's Last Theorem in number theory, Mazur's torsion theorem in arithmetic geometry, the Mazur swindle in geometric topology, and the Mazur manifold in differential topology.
Life
Born in New York City, Mazur attended the Bronx High School of Science and MIT, although he did not graduate from the latter because he did not complete the ROTC requirement then in place. He was nonetheless accepted for graduate studies at Princeton University, where he received his PhD in mathematics in 1959 after completing a doctoral dissertation titled "On embeddings of spheres." He was then a Junior Fellow at Harvard University from 1961 to 1964. He is the Gerhard Gade University Professor and a Senior Fellow at Harvard. He is the brother of Joseph Mazur and the father of Alexander J. Mazur.
Work
His early work was in geometric topology. In an elementary fashion, he proved the generalized Schoenflies conjecture (his complete proof required an additional result by Marston Morse), around the same time as Morton Brown. Both Brown and Mazur received the Veblen Prize for this achievement. He also discovered the Mazur manifold and the Mazur swindle.
His observations in the 1960s on analogies between primes and knots were taken up by others in the 1990s giving rise to the field of arithmetic topology.
Coming under the influence of Alexander Grothendieck's approach to algebraic geometry, he moved into areas of diophantine geometry. Mazur's torsion theorem, which gives a complete list of the possible torsion subgroups of elliptic curves over the rational numbers, is a deep and important result in the arithmetic of elliptic curves. Mazur's first proof of this theorem depended upon a complete analysis of the rational points on certain modular curves. This proof was c |
https://en.wikipedia.org/wiki/Institute%20for%20Condensed%20Matter%20Theory | The Institute for Condensed Matter Theory (ICMT) is an institute for the research of condensed matter theory hosted by and located at the University of Illinois at Urbana-Champaign.
ICMT was founded in 2007. The first director of the institute was Paul Goldbart who was followed by Eduardo Fradkin. The chief scientist is Nobel laureate Anthony Leggett. |
https://en.wikipedia.org/wiki/Traffic%20flow | In mathematics and transportation engineering, traffic flow is the study of interactions between travellers (including pedestrians, cyclists, drivers, and their vehicles) and infrastructure (including highways, signage, and traffic control devices), with the aim of understanding and developing an optimal transport network with efficient movement of traffic and minimal traffic congestion problems.
History
Attempts to produce a mathematical theory of traffic flow date back to the 1920s, when the American economist Frank Knight first produced an analysis of traffic equilibrium, which was refined into Wardrop's first and second principles of equilibrium in 1952.
Nonetheless, even with the advent of significant computer processing power, to date there has been no satisfactory general theory that can be consistently applied to real flow conditions. Current traffic models use a mixture of empirical and theoretical techniques. These models are then developed into traffic forecasts, and take account of proposed local or major changes, such as increased vehicle use, changes in land use or changes in mode of transport (with people moving from bus to train or car, for example), and to identify areas of congestion where the network needs to be adjusted.
Overview
Traffic behaves in a complex and nonlinear way, depending on the interactions of a large number of vehicles. Due to the individual reactions of human drivers, vehicles do not interact simply following the laws of mechanics, but rather display cluster formation and shock wave propagation, both forward and backward, depending on vehicle density. Some mathematical models of traffic flow use a vertical queue assumption, in which the vehicles along a congested link do not spill back along the length of the link.
In a free-flowing network, traffic flow theory refers to the traffic stream variables of speed, flow, and concentration. These relationships are mainly concerned with uninterrupted traffic flow, primarily found on fr |
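As a minimal numeric illustration of how speed, flow, and concentration relate (using Greenshields' classic linear speed-density model, a standard textbook example rather than anything specific to this article):

```python
# Greenshields model: speed v(k) = v_f * (1 - k / k_j); flow q = k * v(k).
v_f, k_j = 100.0, 120.0   # free-flow speed (km/h) and jam density (veh/km), illustrative

def flow(k: float) -> float:
    return k * v_f * (1.0 - k / k_j)

# Flow is zero at k = 0 and k = k_j, and peaks at the critical density k_j / 2.
print(flow(k_j / 2))      # 3000.0 veh/h, the model's maximum flow
```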