| source | text |
|---|---|
https://en.wikipedia.org/wiki/Adjusted%20winner%20procedure | Adjusted Winner (AW) is a procedure for envy-free item allocation. Given two agents and some goods, it returns a partition of the goods between the two agents with the following properties:
Envy-freeness: Each agent believes that his share of the goods is at least as good as the other share;
Equitability: The "relative happiness levels" of both agents from their shares are equal;
Pareto-optimality: no other allocation is better for one agent and at least as good for the other agent;
At most one good has to be shared between the agents.
For two agents, Adjusted Winner is the only Pareto optimal and equitable procedure that divides at most a single good.
The procedure can be used in divorce settlements and partnership dissolutions, as well as international conflicts.
The procedure was designed by Steven Brams and Alan D. Taylor. It was first published in their book on fair division and later in a stand-alone book.
The algorithm has been commercialized through the FairOutcomes website. AW was patented in the United States but that patent has expired.
Method
Each partner is given the list of goods and an equal number of points (e.g. 100 points) to distribute among them. He or she assigns a value to each good and submits it sealed to an arbiter.
The arbiter, or a computer program, assigns each item to the high bidder. If both partners have the same number of points, then we are done. Otherwise, call the partner who has more points "winner" and the other partner "loser".
Order the goods in increasing order of the ratio value-for-winner / value-for-loser. Start moving goods in this order from the winner to the loser, until the point totals become "almost" equal, i.e., moving one more good from the winner to the loser would make the winner have fewer points than the loser.
At this point, divide the next good between the winner and the loser such that their totals are the same.
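A minimal sketch of the procedure just described (the function name and data format are hypothetical; it assumes the two agents bid positive integers on the same set of goods, with each agent's bids summing to the same total, e.g. 100 points):

```python
from fractions import Fraction

def adjusted_winner(alice, bob):
    """Sketch of the Adjusted Winner procedure for two agents.

    alice, bob: dicts mapping each good to that agent's bid (positive integers
    summing to the same total). Returns good -> (alice_share, bob_share).
    """
    goods = list(alice)
    # Step 1: give each good to the higher bidder (ties go to alice here).
    shares = {g: (Fraction(1), Fraction(0)) if alice[g] >= bob[g] else (Fraction(0), Fraction(1))
              for g in goods}

    def totals():
        return (sum(alice[g] * shares[g][0] for g in goods),
                sum(bob[g] * shares[g][1] for g in goods))

    ta, tb = totals()
    if ta == tb:
        return shares
    winner_is_alice = ta > tb
    win, lose = (alice, bob) if winner_is_alice else (bob, alice)

    # Step 2: move the winner's goods, in increasing order of the ratio
    # value-for-winner / value-for-loser, until the point totals meet.
    owned = [g for g in goods if shares[g][0 if winner_is_alice else 1] == 1]
    owned.sort(key=lambda g: Fraction(win[g], lose[g]))
    for g in owned:
        ta, tb = totals()
        tw, tl = (ta, tb) if winner_is_alice else (tb, ta)
        if tw <= tl:
            break
        # Transferring a fraction x of good g equalizes the totals when
        # tw - x*win[g] == tl + x*lose[g]; cap x at 1 (transfer the whole good).
        x = min(Fraction(1), (tw - tl) / (win[g] + lose[g]))
        shares[g] = (1 - x, x) if winner_is_alice else (x, 1 - x)
    return shares

# Hypothetical example: Alice bids {car: 60, house: 40}, Bob bids {car: 30, house: 70}.
# The house ends up split so that both agents receive 700/11 of their own points.
print(adjusted_winner({"car": 60, "house": 40}, {"car": 30, "house": 70}))
```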
Use cases
While there is no account of AW actually being used to resolve disputes, |
https://en.wikipedia.org/wiki/Lesser%20Triangle | It is a triangle contained within the submandibular triangle. Its boundaries are the hypoglossal nerve and the anterior and posterior bellies of the digastric muscle. This triangle was named after a German surgeon, Ladislaus Leon Lesser, who lived from 1846 to 1925. |
https://en.wikipedia.org/wiki/Pelvic%20thrust | The pelvic thrust is the thrusting motion of the pelvic region, which is used for a variety of activities, such as dance, exercise, or sexual activity.
Sexual activity
The pelvic thrust is used during copulation by many species of mammals, including humans, or for other sexual activities (such as non-penetrative sex). In 2007, German scientists noted that female monkeys could increase the vigor and number of pelvic thrusts made by the male by shouting during intercourse. In whitetail deer, copulation consists of a single pelvic thrust.
Dance
One of the first to perform this move on stage was Elvis Presley. It was quite controversial due to its obvious sexual connotations. Due to this controversy, he was sometimes shown (as seen on his third appearance on The Ed Sullivan Show) from the waist up on TV. Later, the pelvic thrust also became one of the signature moves of Michael Jackson. It is also mentioned in "Time Warp", a song from The Rocky Horror Show, as a part of the choreography associated with the warp itself. Twerking, a reversed and sometimes passive form of the pelvic thrust, is also a very popular hip-hop dance move. The sideways pelvic thrust is a famous female dance move in India and Bangladesh, where it is known as the thumka; it appears in the lyrics of various Bollywood songs.
Exercise
Hip thrusts can be used as an exercise to train the gluteus maximus muscle. The athlete will get into a reclined position and thrust their hips upward to lift weights balanced on their lap.
Infants
Pelvic thrusting is observed in infant monkeys, apes, and humans. These observations led ethologist John Bowlby (1969) to suggest that infantile sexual behavior may be the rule in mammals, not the exception. Thrusting has been observed in humans at eight to 10 months of age and may be an expression of affection. Typically, the infant clings to the parent, then nuzzles, thrusts, and rotates the pelvis for several seconds.
See also
Lordosis behavior
Twerking |
https://en.wikipedia.org/wiki/Brocard%20triangle | In geometry, the Brocard triangle of a triangle is the triangle formed by intersecting the line from one vertex to its corresponding Brocard point with the line from another vertex to its corresponding Brocard point, the remaining two vertices being constructed from the other combinations of vertices and Brocard points. It is also called the first Brocard triangle, since further triangles can be formed by taking the Brocard triangle of the Brocard triangle and continuing this pattern. The Brocard triangle is inscribed in the Brocard circle. It is named for Henri Brocard.
See also
Henri Brocard
Brocard points
Notes
Triangles |
https://en.wikipedia.org/wiki/Upland%20pasture | Upland pasture (rough grazing and/or semi-natural rough grazing) is a type of semi-natural grassland located in uplands of rolling foothills or upon higher slopes, greater than 350 meters (about 1,150 feet) and less than 600 meters (about 1,970 feet) from ground level, that is used primarily for grazing. Upland pastures occur in most grassland systems where topographic slope prevents feasible crop production; they are a primary component of rangelands, but are not necessarily water limited. Upland pastures include highlands, moorland, and other grasslands in regions of upland soils (said to have the potential for hydric inclusions, rather than definitive hydric inclusion; meaning there is potential for "saturation, flooding, or ponding long enough during the growing season to develop anaerobic conditions").
Locations
The term originates in the British Isles, where upland pastures constitute approximately 9 million hectares, 48% of agricultural land use in the UK. Upland pastures are widely managed in the United States in New England and Appalachia, and in semi-arid mountain regions in the inter-mountain west, where their management is an important aspect of historic farming, wildlife preservation, and range livestock production. Upland pastures are also of primary importance to livestock production in western Australia, the Mongolian-Manchurian grassland ecoregion, in the Andes of Argentina, Uruguay, Paraguay, and western Brazil, in the Eurasian Steppes, in South Africa's Highveld, in Switzerland and the Alps, Sweden, Iceland, India, the juniper grasslands around Jabal Sawda in Saudi Arabia and other juniper-encroached grasslands in the Middle East including in Jordan, Israel, and Turkey, in Kazakhstan, as well as in eastern European nations, Tibet, New Zealand, Ethiopia, Canada, Kenya, Tanzania, Eritrea, Yemen, Ghana, Nigeria, Papua New Guinea, Syria, and Cantabria.
See also
Upland (mountain range)
Hill farming
Upland game bird
Pasture
Fell
Alpine pasture
Lowland semi-natural |
https://en.wikipedia.org/wiki/Yuri%20Burago | Yuri Dmitrievich Burago (born 1936) is a Russian mathematician. He works in differential and convex geometry.
Education and career
Burago studied at Leningrad University, where he obtained his Ph.D. and Habilitation degrees. His advisors were Victor Zalgaller and Aleksandr Aleksandrov.
Burago is a creator (with his students Perelman and Petrunin, and with M. Gromov) of what is now known as Alexandrov geometry. He also brought the theory of geometric inequalities to the state of the art.
Burago is the head of the Laboratory of Geometry and Topology that is part of the St. Petersburg Department of Steklov Institute of Mathematics. He took part in a report for the United States Civilian Research and Development Foundation for the Independent States of the former Soviet Union.
Works
His other books and papers include:
Geometry III: Theory of Surfaces (1992)
Potential Theory and Function Theory for Irregular Regions (1969)
Isoperimetric inequalities in the theory of surfaces of bounded external curvature (1970)
Students
He has advised Grigori Perelman, who solved the Poincaré conjecture, one of the seven Millennium Prize Problems. Burago was an advisor to Perelman during the latter's post-graduate research at St. Petersburg Department of Steklov Institute of Mathematics.
Footnotes
External links
Burago's page on the site of Steklov Mathematical Institute
Yuri Dmitrievich Burago in the Oberwolfach Photo Collection
Soviet mathematicians
Geometers
Differential geometers
1936 births
Living people
Saint Petersburg State University alumni
20th-century Russian mathematicians
21st-century Russian mathematicians |
https://en.wikipedia.org/wiki/Ptolemy%27s%20theorem | In Euclidean geometry, Ptolemy's theorem is a relation between the four sides and two diagonals of a cyclic quadrilateral (a quadrilateral whose vertices lie on a common circle). The theorem is named after the Greek astronomer and mathematician Ptolemy (Claudius Ptolemaeus). Ptolemy used the theorem as an aid to creating his table of chords, a trigonometric table that he applied to astronomy.
If the vertices of the cyclic quadrilateral are A, B, C, and D in order, then the theorem states that AC · BD = AB · CD + BC · AD, where AC and BD are the diagonals.
This relation may be verbally expressed as follows:
If a quadrilateral is cyclic then the product of the lengths of its diagonals is equal to the sum of the products of the lengths of the pairs of opposite sides.
Moreover, the converse of Ptolemy's theorem is also true:
In a quadrilateral, if the sum of the products of the lengths of its two pairs of opposite sides is equal to the product of the lengths of its diagonals, then the quadrilateral can be inscribed in a circle i.e. it is a cyclic quadrilateral.
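A quick numerical sanity check of the relation (a Python sketch; the four angles are arbitrary values chosen only so that the vertices lie in order on the unit circle):

```python
import math

# Place four points in order on the unit circle and verify Ptolemy's relation
# AC * BD = AB * CD + BC * AD for the cyclic quadrilateral ABCD.
angles = [0.3, 1.1, 2.5, 4.0]                       # strictly increasing -> vertices in order
A, B, C, D = [(math.cos(t), math.sin(t)) for t in angles]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

product_of_diagonals = dist(A, C) * dist(B, D)
sum_of_opposite_sides = dist(A, B) * dist(C, D) + dist(B, C) * dist(A, D)
assert math.isclose(product_of_diagonals, sum_of_opposite_sides)  # equal up to rounding
```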
Corollaries on Inscribed Polygons
Equilateral triangle
Ptolemy's Theorem yields as a corollary a pretty theorem regarding an equilateral triangle inscribed in a circle.
Given an equilateral triangle inscribed in a circle and a point on the circle:
The distance from the point to the most distant vertex of the triangle is the sum of the distances from the point to the two nearer vertices.
Proof: This follows immediately from Ptolemy's theorem: if the triangle is ABC and the point P lies on the arc BC not containing A, then PA · BC = PB · CA + PC · AB, and since AB = BC = CA this reduces to PA = PB + PC.
Square
Any square can be inscribed in a circle whose center is the center of the square. If the common length of its four sides is equal to a, then the length of each diagonal is equal to a√2 according to the Pythagorean theorem, and Ptolemy's relation (a√2)(a√2) = a·a + a·a obviously holds.
Rectangle
More generally, if the quadrilateral is a rectangle with sides a and b and diagonal d then Ptolemy's theorem reduces to the Pythagorean theorem. In this case the center of the circle coincides with the point of intersection of the diagonals. The product of |
https://en.wikipedia.org/wiki/List%20of%20minor%20planets%20discovered%20using%20the%20WISE%20spacecraft | The following is a list of numbered minor planets discovered, co-discovered and re-discovered by the Wide-field Infrared Survey Explorer (WISE), a NASA infrared spaceborne observatory. As of July 2018, the list contains 4093 entries, accredited by the Minor Planet Center as discovered by "WISE". Notable discoveries include , , and . See also the category Discoveries by WISE.
See also
List of minor planets
Minor Planet Center
Discoveries |
https://en.wikipedia.org/wiki/Obelisk%20posture | The obelisk posture is a handstand-like position that some dragonflies and damselflies assume to prevent overheating on sunny days. The abdomen is raised until its tip points at the sun, minimizing the surface area exposed to solar radiation. When the sun is close to directly overhead, the vertical alignment of the insect's body suggests an obelisk.
Function and occurrence
Dragonflies may also raise their abdomens for other reasons. For instance, male blue dashers (Pachydiplax longipennis) assume an obelisk-like posture while guarding their territories or during conflicts with other males, displaying the blue pruinescence on their abdomens to best advantage. However, both females and males will raise their abdomens at high temperature and lower them again if shaded. This behavior can be demonstrated in the laboratory by heating captive blue dashers with a 250 watt lamp, and has been shown to be effective in stopping or slowing the rise in their body temperature.
The obelisk posture has been observed in about 30 species in the demoiselle, clubtail, and skimmer families. All are "perchers"—sit-and-wait predators that fly up from a perch to take prey and perch again to eat it. Since they spend most of their time stationary, perchers have the most opportunity to thermoregulate by adjusting their position.
Other forms of postural thermoregulation
Some species, including the dragonhunter (Hagenius brevistylus), reduce exposure to the sun by perching with the abdomen pointed downward, rather than upward. The tropical skimmer dragonfly Diastatops intensa, whose wings are mostly black, points its wings rather than its abdomen at the sun, apparently to reduce the heat they absorb.
While flying, some saddlebags gliders (genus Tramea) lower their abdomens into the shade provided by dark patches at the bases of their hindwings. The same behavior has been observed in Pseudothemis zonata, which has a similar hindwing patch.
Dragonflies also use postural thermoregul |
https://en.wikipedia.org/wiki/All%20models%20are%20wrong | All models are wrong is a common aphorism and anapodoton in statistics; it is often expanded as "All models are wrong, but some are useful". The aphorism acknowledges that statistical models always fall short of the complexities of reality but can still be useful nonetheless. The aphorism originally referred just to statistical models, but it is now sometimes used for scientific models in general.
The aphorism is generally attributed to the statistician George Box. The underlying concept, though, predates Box's writings.
Quotations of George Box
The first record of Box saying "all models are wrong" is in a 1976 paper published in the Journal of the American Statistical Association. The 1976 paper contains the aphorism twice. The two sections of the paper that contain the aphorism state:
Box repeated the aphorism in a paper that was published in the proceedings of a 1978 statistics workshop. The paper contains a section entitled "All models are wrong but some are useful". The section states (pp. 202–203):
Box repeated the aphorism twice more in his 1987 book, Empirical Model-Building and Response Surfaces (which was co-authored with Norman Draper). The first repetition is on p. 74: "Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful." The second repetition is on p. 424, which is excerpted below.
A second edition of the book was published in 2007, under the title Response Surfaces, Mixtures, and Ridge Analyses. The second edition also repeats the aphorism twice, in contexts identical with those of the first edition (on p. 63 and p. 414).
Box repeated the aphorism two more times in his 1997 book, Statistical Control: By Monitoring and Feedback Adjustment (which was co-authored with Alberto Luceño). The first repetition is on p. 6, which is excerpted below.
The second repetition is on p. 9: "So since all models are wrong, it is very important to know what to worry about;
or, to put it in another way, what |
https://en.wikipedia.org/wiki/Odell%20Lake%20%28video%20game%29 | Odell Lake is a 1986 educational life simulation game produced by MECC for the Apple II and Commodore 64. The player is a fish living in Odell Lake, a real-world lake in Oregon. It is based on a 1980 BASIC program of the same name. It was followed up by Odell Down Under.
Gameplay
As a fish, the player could "go exploring" or "play for points". The object was to decide which fish to eat, while trying to survive and avoid other enemies such as otters, ospreys, and bait from fishermen. When simply exploring, the player could select from six different species of fish, such as Mackinaw Trout, Whitefish, or Rainbow Trout; however, when playing for points, the computer randomly assigned the type of fish that the player would play as. In addition, the titles for each of the types of fish and other creatures were removed when playing for points, forcing the player to rely on memory; the game was also timed. After every five moves, the player played as a different type of fish.
When playing for points, the best decision netted the player the most points, with less intelligent decisions earning the player fewer or no points, or in the case of the fish eating something disagreeable, actually taking them away. If no decision was made when time ran out, it counted as "Ignore". If at any time the player's fish was attacked by an enemy, or the player got caught by an angler, the game ended immediately.
In Israel the game was published in Hebrew in 1987 for Apple II.
Main fish
The species of fish found in Odell Lake included the following:
Rainbow trout
Dolly Varden trout
Mackinaw trout, the largest fish in the game
Blueback salmon
Whitefish
Chub, the smallest fish in the game
The game is heavily random; the same situation played in the same way can have different outcomes. For the most points, players must play the game safely, choosing the action that has the greatest chance of leading to a positive outcome. It's helpful to remember the typical locations of food and predator |
https://en.wikipedia.org/wiki/Hot%20air%20solder%20leveling | HASL or HAL (for hot air (solder) leveling) is a type of finish used on printed circuit boards (PCBs).
The PCB is typically dipped into a bath of molten solder so that all exposed copper surfaces are covered by solder. Excess solder is removed by passing the PCB between hot air knives.
HASL can be applied with or without lead (Pb), but only lead-free HASL is RoHS compliant.
Advantages of HASL
Excellent wetting during component soldering
Avoids copper corrosion.
Disadvantages of HASL
Low planarity on vertical levelers may make this surface finish unsuitable for use with fine pitch components. Improved planarity can be achieved using a horizontal leveler.
High thermal stress during the process may introduce defects into the PCB
See also
Electroless Nickel Immersion Gold (ENIG)
Immersion Silver (IAg)
Organic Solderability Preservative (OSP)
Reflow soldering
Wave soldering
Printed circuit board manufacturing
Soldering |
https://en.wikipedia.org/wiki/Indo-1 | Indo-1 is a popular calcium indicator similar to Fura-2. In contrast to Fura-2, Indo-1 has a dual emissions peak. The main emission peak in calcium-free solution is 475 nm while in the presence of calcium the emission is shifted to 400 nm. It is widely used in flow cytometry.
The pentapotassium salt is commercially available and preferred to the free acid because of its higher solubility in water. While Indo-1 is not cell permeable, the pentaacetoxymethyl ester Indo-1 AM enters the cell, where it is cleaved by intracellular esterases to Indo-1.
The synthesis and properties of Indo-1 were presented in 1985 by the group of Roger Y. Tsien. |
https://en.wikipedia.org/wiki/Resin%20canal | Resin canals or resin ducts are elongated, tube-shaped intercellular spaces surrounded by epithelial cells which secrete resin into the canal. These canals are orientated longitudinally and radially in between fusiform rays. They are usually found in late wood: denser wood grown later in the season. Resin is antiseptic and aromatic and prevents the development of fungi and deters insects.
Types
Normal resin canals exist naturally in the wood of the genera Picea, Larix, Pinus, Pseudotsuga and Shorea.
Traumatic resin canals may be formed in wounded trees that don't have normal resin canals. Wounding occurs from either fire, freezing or mechanical damage. These canals are irregularly shaped compared to normal resin canals.
Characteristics
Resin canal characteristics (such as number, size and density) in pine species can determine their resistance to pests. In one study, biologists were able to categorize 84% of lodgepole pines, and 92% of limber pines, as being either susceptible or resistant to bark beetles based only on their resin canals and growth rate over 20 years. In another study, scientists found ponderosa pine trees that survived drought and bark beetle attacks had resin ducts that were >10% larger in diameter, >25% denser (resin canals per mm²), and composed >50% more area per ring. |
https://en.wikipedia.org/wiki/Harmony%20Compiler | Harmony Compiler was written by Peter Samson at the Massachusetts Institute of Technology (MIT). The compiler was designed to encode music for the PDP-1 and built on an earlier program Samson wrote for the TX-0 computer.
Jack Dennis had noticed, and mentioned to Samson, that switching the TX-0's speaker on and off could be enough to play music. They succeeded in building a WYSIWYG program for one voice by 1960.
For the PDP-1 which arrived at MIT in September 1961, Samson designed the Harmony Compiler which synthesizes four voices from input in a text-based notation. Although it created music in many genres, it was optimized for baroque music. PDP-1 music is merged from four channels and played back in stereo. Notes are on pitch and each has an undertone. The music does not stop for errors. Mistakes are greeted with a message from the typewriter's red ribbon, "To err is human, to forgive divine."
Samson joined the PDP-1 restoration project at the Computer History Museum in 2004 to recreate the music player. |
https://en.wikipedia.org/wiki/Sim%C3%A9on%20Denis%20Poisson | Baron Siméon Denis Poisson FRS FRSE (; 21 June 1781 – 25 April 1840) was a French mathematician and physicist who worked on statistics, complex analysis, partial differential equations, the calculus of variations, analytical mechanics, electricity and magnetism, thermodynamics, elasticity, and fluid mechanics. Moreover, he predicted the Poisson spot in his attempt to disprove the wave theory of Augustin-Jean Fresnel, which was later confirmed.
Biography
Poisson was born in Pithiviers, Loiret district in France, the son of Siméon Poisson, an officer in the French army.
In 1798, he entered the École Polytechnique in Paris as first in his year, and immediately began to attract the notice of the professors of the school, who left him free to make his own decisions as to what he would study. In his final year of study, less than two years after his entry, he published two memoirs, one on Étienne Bézout's method of elimination, the other on the number of integrals of a finite difference equation; this was so impressive that he was allowed to graduate in 1800 without taking the final examination. The latter of the memoirs was examined by Sylvestre-François Lacroix and Adrien-Marie Legendre, who recommended that it should be published in the Recueil des savants étrangers, an unprecedented honor for a youth of eighteen. This success at once procured entry for Poisson into scientific circles. Joseph Louis Lagrange, whose lectures on the theory of functions he attended at the École Polytechnique, recognized his talent early on, and became his friend. Meanwhile, Pierre-Simon Laplace, in whose footsteps Poisson followed, regarded him almost as his son. The rest of his career, until his death in Sceaux near Paris, was occupied by the composition and publication of his many works and in fulfilling the duties of the numerous educational positions to which he was successively appointed.
Immediately after finishing his studies at the École Polytechnique, he was appointed répét |
https://en.wikipedia.org/wiki/Academy%20of%20Interactive%20Arts%20%26%20Sciences | The Academy of Interactive Arts & Sciences (AIAS) is a non-profit organization of video game industry professionals. It organizes the annual Design Innovate Communicate Entertain summit, better known as D.I.C.E., which includes the presentations of the D.I.C.E. Awards.
History
Andrew S. Zucker, an attorney in the entertainment industry, founded the Academy of Interactive Arts & Sciences in 1991 and served as its first president. AIAS co-promoted numerous events with organizations such as the Academy of Television Arts and Sciences, the Directors Guild of America and Women in Film. Their first awards show program, Cybermania '94, which was hosted by Leslie Nielsen and Jonathan Taylor Thomas, was broadcast on TBS in 1994. While a second show was run in 1995, and was the first awards program to be streamed over the Web, it drew a far smaller audience than the first.
Video game industry leaders decided that they wanted to reform AIAS as a non-profit organization for the video game industry. The effort was backed by Peter Main of Nintendo, Tom Kalinske of Sega, and Doug Lowenstein, founder of the Entertainment Software Association (ESA), and with funding support from ESA. The AIAS was formally reestablished on November 19, 1996, with Marc Teren as president, soon replaced by game developer Glenn Entis. Initially, in 1998, AIAS' role was to handle the awards, originally known as the Interactive Achievement Awards. These awards were nominated and selected by game developers that are members of the organization themselves, mimicking the means by which the Academy Awards are voted by its members.
Around 2000, the ESA pulled out of funding AIAS, leading AIAS members Richard Hilleman and Lorne Lanning to suggest that AIAS create the D.I.C.E. Summit (short for "Design Innovate Communicate Entertain"), a convention centered around the presentation of the awards as a means to providing funding for the organization. The Summit was aimed at industry executives and lead developers as |
https://en.wikipedia.org/wiki/Internet%20Archive | The Internet Archive is an American digital library founded on May 10, 1996, and chaired by free information advocate Brewster Kahle. It provides free access to collections of digitized materials including websites, software applications, music, audiovisual and print materials. The Archive also advocates for a free and open Internet. The Internet Archive holds more than 38 million print materials, 11.6 million pieces of audiovisual content, 2.6 million software programs, 15 million audio files, 4.7 million images, 251,000 concerts, and over 832 billion web pages in its Wayback Machine. Their mission is to provide "universal access to all knowledge."
The Internet Archive allows the public to upload and download digital material to its data cluster, but the bulk of its data is collected automatically by its web crawlers, which work to preserve as much of the public web as possible. Its web archive, the Wayback Machine, contains hundreds of billions of web captures. The Archive also oversees numerous book digitization projects, collectively one of the world's largest book digitization efforts.
History
Brewster Kahle founded the Archive in May 1996 around the same time that he began the for-profit web crawling company Alexa Internet. In October of that year, the Internet Archive had begun to archive and preserve the World Wide Web in large amounts, though it saved the earliest known page on May 10, 1996, at 2:42 PM. The archived content first became available to the general public in 2001, when it developed the Wayback Machine.
In late 1999, the Archive expanded its collections beyond the web archive, beginning with the Prelinger Archives. Now, the Internet Archive includes texts, audio, moving images, and software. It hosts a number of other projects: the NASA Images Archive, the contract crawling service Archive-It, and the wiki-editable library catalog and book information site Open Library. Soon after that, the Archive began working to provide specialized serv |
https://en.wikipedia.org/wiki/Lamellar%20phase | Lamellar phase refers generally to packing of polar-headed long chain nonpolar-tail molecules in an environment of bulk polar liquid, as sheets of bilayers separated by bulk liquid. In biophysics, polar lipids (mostly, phospholipids, and rarely, glycolipids) pack as a liquid crystalline bilayer, with hydrophobic fatty acyl long chains directed inwardly and polar headgroups of lipids aligned on the outside in contact with water, as a 2-dimensional flat sheet surface. Under transmission electron microscope (TEM), after staining with polar headgroup reactive chemical osmium tetroxide, lamellar lipid phase appears as two thin parallel dark staining lines/sheets, constituted by aligned polar headgroups of lipids. 'Sandwiched' between these two parallel lines, there exists one thicker line/sheet of non-staining closely packed layer of long lipid fatty acyl chains. This TEM-appearance became famous as Robertson's unit membrane - the basis of all biological membranes, and structure of lipid bilayer in unilamellar liposomes. In multilamellar liposomes, many such lipid bilayer sheets are layered concentrically with water layers in between.
In lamellar lipid bilayers, polar headgroups of lipids align together at the interface of water and hydrophobic fatty-acid acyl chains align parallel to one another 'hiding away' from water. The lipid head groups are somewhat more 'tightly' packed than the relatively 'fluid' hydrocarbon fatty acyl long chains. The lamellar lipid bilayer organization thus reveals a 'flexibility gradient' of increasing freedom of motions from near the head-groups towards the terminal fatty-acyl chain methyl groups. Existence of such a dynamic organization of lamellar phase in liposomes as well as biological membranes can be confirmed by spin label electron paramagnetic resonance and high resolution nuclear magnetic resonance spectroscopy studies of biological membranes and liposomes.
In 'soft matter science', where physics and chemistry meet biological scie |
https://en.wikipedia.org/wiki/Hebeloma%20insigne | Hebeloma insigne is a species of mushroom in the family Hymenogastraceae. Along with other species of its genus, it is poisonous and can result in severe gastrointestinal upset. |
https://en.wikipedia.org/wiki/Glue%20logic | In electronics, glue logic is the custom logic circuitry used to interface a number of off-the-shelf integrated circuits. This is often achieved using common, inexpensive 7400- or 4000-series components. In more complex cases, a programmable logic device like a CPLD or FPGA might be used. The falling price of programmable logic devices, combined with their reduced size and power consumption compared to discrete components, is making them common even for simple systems. In addition, programmable logic can be used to hide the exact function of a circuit, in order to prevent a product from being cloned or counterfeited.
The software equivalent of glue logic is called glue code.
Usage
Typical functions of glue logic may include:
Simple logic functions.
Address decoding circuitry used with older processors like the MOS Technology 6502 or Zilog Z80 to divide up the processor's address space into RAM, ROM and I/O; a small software sketch of such decoding appears after this list. Newer versions of these processors, such as the WDC 65816 or the Zilog eZ80, may add features that enable glueless interfacing to external devices.
Buffers to protect outputs from overload, or protect sensitive inputs from electrostatic discharge damage.
Voltage level conversion, e.g., when interfacing one logic family (CMOS) to another (TTL).
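As a rough illustration of the address-decoding role mentioned above, here is a small software model (the memory map is hypothetical, not taken from any particular board); it mimics the combinational logic that would drive the chip-select lines of a 64 KiB 8-bit system:

```python
def chip_select(address: int) -> str:
    """Return which device a 16-bit address would select in a hypothetical memory map."""
    if not 0 <= address <= 0xFFFF:
        raise ValueError("address is outside the 16-bit bus")
    if address < 0x8000:      # A15 = 0             -> 32 KiB of RAM
        return "RAM"
    if address < 0xA000:      # A15 = 1, A14 = A13 = 0 -> memory-mapped I/O
        return "IO"
    return "ROM"              # remaining addresses  -> ROM

assert chip_select(0x1234) == "RAM"
assert chip_select(0x9000) == "IO"
assert chip_select(0xC000) == "ROM"
```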
See also
Glue code
Reverse engineering |
https://en.wikipedia.org/wiki/Galilean%20transformation | In physics, a Galilean transformation is used to transform between the coordinates of two reference frames which differ only by constant relative motion within the constructs of Newtonian physics. These transformations together with spatial rotations and translations in space and time form the inhomogeneous Galilean group (assumed throughout below). Without the translations in space and time the group is the homogeneous Galilean group. The Galilean group is the group of motions of Galilean relativity acting on the four dimensions of space and time, forming the Galilean geometry. This is the passive transformation point of view. In special relativity the homogeneous and inhomogeneous Galilean transformations are, respectively, replaced by the Lorentz transformations and Poincaré transformations; conversely, the group contraction in the classical limit of Poincaré transformations yields Galilean transformations.
The equations below are only physically valid in a Newtonian framework, and not applicable to coordinate systems moving relative to each other at speeds approaching the speed of light.
Galileo formulated these concepts in his description of uniform motion.
The topic was motivated by his description of the motion of a ball rolling down a ramp, by which he measured the numerical value for the acceleration of gravity near the surface of the Earth.
Translation
Although the transformations are named for Galileo, it is the absolute time and space as conceived by Isaac Newton that provides their domain of definition. In essence, the Galilean transformations embody the intuitive notion of addition and subtraction of velocities as vectors.
The notation below describes the relationship under the Galilean transformation between the coordinates (x, y, z, t) and (x′, y′, z′, t′) of a single arbitrary event, as measured in two coordinate systems S and S′, in uniform relative motion (velocity v) in their common x and x′ directions, with their spatial origins coinciding at time t = t′ = 0:
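In the conventional form, with the relative motion along the shared x-axis as set up above, the transformation reads:

```latex
x' = x - v t, \qquad y' = y, \qquad z' = z, \qquad t' = t .
```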
Note that the last equati |
https://en.wikipedia.org/wiki/Landau%20theory | Landau theory in physics is a theory that Lev Landau introduced in an attempt to formulate a general theory of continuous (i.e., second-order) phase transitions. It can also be adapted to systems under externally-applied fields, and used as a quantitative model for discontinuous (i.e., first-order) transitions. Although the theory has now been superseded by the
renormalization group and scaling theory formulations, it remains an exceptionally broad and powerful framework for phase transitions, and the associated concept of the order parameter as a descriptor of the essential character of the transition has proven transformative.
Mean-field formulation (no long-range correlation)
Landau was motivated to suggest that the free energy of any system should obey two conditions:
Be analytic in the order parameter and its gradients.
Obey the symmetry of the Hamiltonian.
Given these two conditions, one can write down (in the vicinity of the critical temperature, Tc) a phenomenological expression for the free energy as a Taylor expansion in the order parameter.
Second-order transitions
Consider a system that breaks some symmetry below a phase transition, which is characterized by an order parameter . This order parameter is a measure of the order before and after a phase transition; the order parameter is often zero above some critical temperature and non-zero below the critical temperature. In a simple ferromagnetic system like the Ising model, the order parameter is characterized by the net magnetization , which becomes spontaneously non-zero below a critical temperature . In Landau theory, one considers a free energy functional that is an analytic function of the order parameter. In many systems with certain symmetries, the free energy will only be a function of even powers of the order parameter, for which it can be expressed as the series expansion
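One common way of writing this expansion (coefficient names and sign conventions vary between texts; this sketch assumes a scalar order parameter η and keeps terms up to fourth order) is:

```latex
F(T,\eta) = F_0(T) + a(T)\,\eta^{2} + \frac{b(T)}{2}\,\eta^{4},
\qquad a(T) = a_0\,(T - T_c),\quad a_0 > 0,\quad b(T_c) > 0,
```

so that the equilibrium order parameter vanishes above Tc and grows continuously from zero below it.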
In general, there are higher order terms present in the free energy, but it is a reasonable approximation to consider |
https://en.wikipedia.org/wiki/Order%20complete | In mathematics, specifically in order theory and functional analysis, a subset A of an ordered vector space X is said to be order complete in X if for every non-empty subset S of A that is order bounded in A (meaning contained in an interval, which is a set of the form [a, b] for some a ≤ b), the supremum sup S and the infimum inf S both exist and are elements of A.
An ordered vector space is called order complete, Dedekind complete, a complete vector lattice, or a complete Riesz space, if it is order complete as a subset of itself, in which case it is necessarily a vector lattice.
An ordered vector space is said to be countably order complete if each countable subset that is bounded above has a supremum.
Being an order complete vector space is an important property that is used frequently in the theory of topological vector lattices.
Examples
The order dual of a vector lattice is an order complete vector lattice under its canonical ordering.
If is a locally convex topological vector lattice then the strong dual is an order complete locally convex topological vector lattice under its canonical order.
Every reflexive locally convex topological vector lattice is order complete and a complete TVS.
Properties
If is an order complete vector lattice then for any subset is the ordered direct sum of the band generated by and of the band of all elements that are disjoint from For any subset of the band generated by is If and are lattice disjoint then the band generated by contains and is lattice disjoint from the band generated by which contains
See also |
https://en.wikipedia.org/wiki/Deflect.ca | Deflect is a DDoS mitigation and website security service by eQualitie, a Canadian social enterprise developing open and reusable systems with a focus on privacy, resilience and self-determination, to protect and promote human rights and press freedom online.
History
Deflect was founded by digital security expert and trainer Dmitri Vitaliev and Canadian internet entrepreneur David Mason in 2011. The Deflect project predates similar initiatives by Google's Project Shield and Cloudflare's Project Galileo. The initiative was created in response to an influential report by the Berkman Center for Internet & Society which highlighted the prevalence of DDoS as a means of political repression and censorship against independent media and human rights groups around the world, and recommended practical methods to protect websites from future incidents. The company claims to reach approximately 2% of the population connected to the Internet on an annual basis.
Deflect offers free services to many civil society organizations and commercial plans for small business and enterprise.
In 2016, the Deflect team released its first investigative report into attacks against a Ukrainian independent media website: "On the 2nd of February, the Kotsubynske website published an article from a meeting of the regional administrative council where it stated that members of the political party 'New Faces' were interfering with and trying to sabotage the council's work on stopping deforestation. Attacks against the website begin thereafter."
Also in 2016, CBC noted that Deflect thwarted DDoS attacks for Black Lives Matter. Investigations led by the Deflect team into the methods and provenance of over a hundred separate incidents against the Black Lives Matter website were noted in The Verge, Ars Technica and BoingBoing.
In 2019, the Deflect team discovered a persistent cyber offensive campaign against Uzbek human rights activists, leading to a more detailed study by Amnesty Intern |
https://en.wikipedia.org/wiki/COVID-19%20lab%20leak%20theory | The COVID-19 lab leak theory, or lab leak hypothesis, is the idea that SARS-CoV-2, the virus that caused the COVID-19 pandemic, came from a laboratory. The claim is highly controversial; most scientists believe the virus spilled into human populations through natural zoonosis (transfer directly from an infected non-human animal), similar to the SARS-CoV-1 and MERS-CoV outbreaks, and consistent with other pandemics in human history. Available evidence suggests that the SARS-CoV-2 virus was originally harbored by bats, and spread to humans from infected wild animals, functioning as an intermediate host, at the Huanan Seafood Market in Wuhan, Hubei, China, in December 2019. Several candidate animal species have been identified as potential intermediate hosts. There is no evidence SARS-CoV-2 existed in any laboratory prior to the pandemic, or that any suspicious biosecurity incidents happened in any laboratory.
Many scenarios proposed for a lab leak are characteristic of conspiracy theories. Central to many is a misplaced suspicion about the proximity of the outbreak to a virology institute that studies coronaviruses, the Wuhan Institute of Virology (WIV). Most large Chinese cities have laboratories that study coronaviruses, and virus outbreaks typically begin in rural areas but are first noticed in large cities. If a coronavirus outbreak occurs in China, there is a high likelihood it will occur near a large city, and therefore near a laboratory studying coronaviruses. The idea of a leak at the WIV also gained support due to secrecy during the Chinese government's response. The lab leak theory is informed by racist undercurrents, and has resulted in anti-Chinese sentiment. Scientists from WIV had previously collected SARS-related coronaviruses from bats in the wild, and allegations that they also performed undisclosed risky work on such viruses are central to some versions of the idea. Some versions, particularly those alleging genome engineering, are based on misinfo |
https://en.wikipedia.org/wiki/Russell%20bodies | Russell bodies are inclusion bodies usually found in atypical plasma cells that become known as Mott cells. Russell bodies are eosinophilic, homogeneous immunoglobulin (Ig)-containing inclusions usually found in cells undergoing excessive synthesis of Ig; the Russell body is characteristic of the distended endoplasmic reticulum. Russell bodies are large and globular, of varying size; they become packed into the cell's cytoplasm, pushing the nucleus to the edge of the cell, and are found in the peripheral areas of tumors. Russell bodies are thought to originate as abnormal proteins that have not been secreted. The excess immunoglobulin builds up and forms intracytoplasmic globules, which is thought to be a result of insufficient protein transport within the cell. This causes the proteins to be neither degraded nor secreted, so they remain stored in dilated cisternae. In 1949, Pearse discovered that Russell bodies also contain mucoproteins that are secreted by plasma cells. Russell bodies are not tissue specific; during research they were induced in rat glioma cells. Russell bodies were found to have positive reactions to PAS stain and to CD38 and CD138 stains. Plasma cells that contain Russell bodies and are stained with H&E stain are found to be autofluorescent, while those without Russell bodies are not. Russell bodies tend to be found in places with chronic inflammation.
This is one cell variation found in multiple myeloma.
Similar inclusion bodies that tend to overlie the nucleus or invaginate into it are known as Dutcher bodies.
They are named for William Russell (1852–1940), a Scottish physician. |
https://en.wikipedia.org/wiki/Chess%20prodigy | A chess prodigy is a young child who possesses an aptitude for the game of chess that far exceeds what might be expected at their age. Their prodigious talent will often enable them to defeat experienced adult players and even titled chess masters. Some chess prodigies have progressed to become World Chess Champions.
Early chess prodigies
Early chess prodigies included Paul Morphy (1837–1884) and José Raúl Capablanca (1888–1942), both of whom won matches against strong adult opponents at the age of 12, and Samuel Reshevsky (1911–1992), who was giving simultaneous exhibitions at the age of six. Morphy went on to become the world's leading player before the formal title of World Champion existed. Capablanca became the third World Champion, and Reshevsky—while never attaining the title—was amongst the world's elite players for many decades.
Arturo Pomar (1931–2016) was another to be labelled a prodigy by chess writers. He played his first international tournament (Madrid 1943) at the age of 11 and went on to become Spain's first grandmaster.
Youngest to defeat a grandmaster
There is often widespread attention when a young player defeats a Grandmaster, whether in a standard tournament game or under less formal conditions.
Formal conditions
The youngest player to defeat a grandmaster under standard time controls is Awonder Liang, who in 2012 defeated Larry Kaufman at the Washington International at the age of 9 years and 111 days.
The previous record was set in 2009, when Hetul Shah defeated GM Nurlan Ibrayev at the age of nine years and six months at the Parsvnath Open.
Informal conditions
In 1999, David Howell defeated John Nunn in a blitz game at the age of eight.
In 1976, a ten-year-old Nigel Short beat Viktor Korchnoi as a participant in a simultaneous exhibition, the only game Korchnoi lost in the event.
In March 2021, 10-year-old Frederick Waldhausen Gordon, from Scotland, won against GM Bogdan Lalic in an online rapid 10+5 game in the ECF Grand Prix Rapid Eve |
https://en.wikipedia.org/wiki/Omega-categorical%20theory | In mathematical logic, an omega-categorical theory is a theory that has exactly one countably infinite model up to isomorphism. Omega-categoricity is the special case κ = ℵ0 = ω of κ-categoricity, and omega-categorical theories are also referred to as ω-categorical. The notion is most important for countable first-order theories.
Equivalent conditions for omega-categoricity
Many conditions on a theory are equivalent to the property of omega-categoricity. In 1959, Erwin Engeler, Czesław Ryll-Nardzewski and Lars Svenonius each proved several of them independently. Despite this, the literature still widely refers to the Ryll-Nardzewski theorem as a name for these conditions. The conditions included with the theorem vary between authors.
Given a countable complete first-order theory T with infinite models, the following are equivalent:
The theory T is omega-categorical.
Every countable model of T has an oligomorphic automorphism group (that is, there are finitely many orbits on M^n for every n).
Some countable model of T has an oligomorphic automorphism group.
The theory T has a model which, for every natural number n, realizes only finitely many n-types, that is, the Stone space Sn(T) is finite.
For every natural number n, T has only finitely many n-types.
For every natural number n, every n-type is isolated.
For every natural number n, up to equivalence modulo T there are only finitely many formulas with n free variables, in other words, for every n, the nth Lindenbaum–Tarski algebra of T is finite.
Every model of T is atomic.
Every countable model of T is atomic.
The theory T has a countable atomic and saturated model.
The theory T has a saturated prime model.
Examples
The theory of any countably infinite structure which is homogeneous over a finite relational language is omega-categorical. Hence, the following theories are omega-categorical:
The theory of dense linear orders without endpoints (Cantor's isomorphism theorem)
The theory of the Rado graph
The theory o |
https://en.wikipedia.org/wiki/6th%20meridian%20west | The meridian 6° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, the Atlantic Ocean, Europe, Africa, the Southern Ocean, and Antarctica to the South Pole.
The 6th meridian west forms a great circle with the 174th meridian east.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 6th meridian west passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="125" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-valign="top"
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" | Passing just east of the island of Fugloy, (at ) Passing just east of the island of Svínoy, (at ) Passing just west of the island of North Rona, Scotland, (at ) Passing just east of the island of Sula Sgeir, Scotland, (at )
|-valign="top"
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | The Minch
| style="background:#b0e0e6;" | Passing just east of the isle of Lewis, Scotland, (at )
|-
|
! scope="row" |
| Scotland — islands of South Rona, Raasay, Scalpay, Skye
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" | Sea of the Hebrides
|-valign="top"
|
! scope="row" |
| Scotland — peninsulas of Ardnamurchan and Morvern, and the Isle of Mull
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" | Firth of Lorn
|-
|
! scope="row" |
| Scotland — island of Jura
|-valign="top"
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" | Sound of Jura — passing just east of the island of Islay, Scotland, (at ) North Channel — passing just ea |
https://en.wikipedia.org/wiki/Ergo%20decedo | , Latin for "therefore I leave" or "then I go off", a truncation of argumentum ergo decedo, and colloquially denominated the traitorous critic fallacy, denotes responding to the criticism of a critic by implying that the critic is motivated by undisclosed favorability or affiliation to an out-group, rather than responding to the criticism itself. The fallacy implicitly alleges that the critic does not appreciate the values and customs of the criticized group or is traitorous, and thus suggests that the critic should avoid the question or topic entirely, typically by leaving the criticized group.
Argumentum ergo decedo is generally categorized as a type of informal fallacy and more specifically as a species of the subclass of ad hominem informal fallacies.
In politics
Argumentum ergo decedo is directly related to the tu quoque fallacy when responding to political criticism. As whataboutism is used against external criticism, ergo decedo is used against internal criticism.
Examples
Critic: "I think we need to work on improving Nauru's taxation system. The current system suffers from multiple issues that have been resolved in other places such as Tuvalu and the Marshall Islands."
Respondent "Well, if you don't like it, why don't you just leave and go somewhere you think is better?"
Critic: "Our office's atmosphere is unsuitable for starting constructive conversations about reforms for the future of the company. A number of improvements are needed."
Respondent "Well, if you don't like the corporate system, then why are you here? You should just leave!"
See also
List of logical fallacies
Ad hominem
No True Scotsman
Tu quoque
Whataboutism |
https://en.wikipedia.org/wiki/Ascending%20limb%20of%20loop%20of%20Henle | Within the nephron of the kidney, the ascending limb of the loop of Henle is a segment of the heterogenous loop of Henle downstream of the descending limb, after the sharp bend of the loop. This part of the renal tubule is divided into a thin and thick ascending limb; the thick portion is also known as the distal straight tubule, in contrast with the distal convoluted tubule downstream.
Structure
The ascending limb of the loop of Henle is a direct continuation from the descending limb of loop of Henle, and one of the structures in the nephron of the kidney. The ascending limb has a thin and a thick segment. The ascending limb drains urine into the distal convoluted tubule.
The thick ascending limb begins in the medulla of the kidney and can be divided into a part that is in the renal medulla and a part that is in the renal cortex. The ascending limb is much thicker than the descending limb.
At the junction of the thick ascending limb and the distal convoluted tubule are a subset of 15–25 cells known as the macula densa that are part of renal autoregulation through the mechanism of tubuloglomerular feedback.
Histology
As in the descending limb, the epithelium is simple squamous epithelium.
Function
Thin ascending limb
The thin ascending limb is impermeable to water but is permeable to ions, allowing for some sodium reabsorption. Na/K-ATPase is expressed at very low levels in this segment, and thus this reabsorption is likely through passive diffusion. Salt moves out of the tubule and into the interstitium due to osmotic pressure created by the countercurrent system.
Thick ascending limb
Functionally, the parts of the ascending limb in the medulla and cortex are very similar.
The medullary ascending limb is largely impermeable to water. Sodium (Na+), potassium (K+) and chloride (Cl−) ions are reabsorbed by active transport. The predominant mechanism of active transport in this segment is through the Na+/K+/Cl− co-transporter NKCC2 |
https://en.wikipedia.org/wiki/Bornhuetter%E2%80%93Ferguson%20method | The Bornhuetter–Ferguson method is a loss reserving technique in insurance.
Background
The Bornhuetter–Ferguson method was introduced in the 1972 paper "The Actuary and IBNR", co-authored by Ron Bornhuetter and Ron Ferguson.
Like other loss reserving techniques, the Bornhuetter–Ferguson method aims to estimate incurred but not reported insurance claim amounts. It is primarily used in the property and casualty and health insurance fields.
Generally considered a blend of the chain-ladder and expected claims loss reserving methods, the Bornhuetter–Ferguson method uses both reported (or paid) losses and an a priori expected loss ratio to arrive at an ultimate loss estimate. Simply, reported (or paid) losses are added to a priori expected losses multiplied by an estimated percent unreported. The estimated percent unreported (or unpaid) is established by observing historical claims experience.
The Bornhuetter–Ferguson method can be used with either reported or paid losses.
Methodology
There are two algebraically equivalent approaches to calculating the Bornhuetter–Ferguson ultimate loss.
In the first approach, undeveloped reported (or paid) losses are added directly to expected losses (based on an a priori loss ratio) multiplied by an estimated percent unreported.
In the second approach, reported (or paid) losses are first developed to ultimate using a chain-ladder approach and applying a loss development factor (LDF). Next, the chain-ladder ultimate is multiplied by an estimated percent reported. Finally, expected losses multiplied by an estimated percent unreported are added (as in the first approach).
The estimated percent reported is the reciprocal of the loss development factor.
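A minimal numerical sketch of the two equivalent calculations (the function names and figures are hypothetical, chosen only to illustrate the formulas just described, including the IBNR subtraction covered next):

```python
def bf_ultimate(reported: float, expected: float, ldf: float) -> float:
    """Bornhuetter-Ferguson ultimate loss, first formulation.

    reported: reported (or paid) losses to date
    expected: a priori expected losses (e.g. premium times expected loss ratio)
    ldf:      loss development factor to ultimate
    """
    pct_reported = 1.0 / ldf              # estimated percent reported
    return reported + expected * (1.0 - pct_reported)

def bf_ultimate_chain_ladder(reported: float, expected: float, ldf: float) -> float:
    """Second formulation: develop reported losses to ultimate, weight by
    percent reported, then add the unreported share of expected losses."""
    pct_reported = 1.0 / ldf
    return (reported * ldf) * pct_reported + expected * (1.0 - pct_reported)

reported, expected, ldf = 1_000_000.0, 2_000_000.0, 2.0
ultimate = bf_ultimate(reported, expected, ldf)          # 1,000,000 + 2,000,000 * 0.5 = 2,000,000
ibnr = ultimate - reported                               # incurred but not reported = 1,000,000
assert abs(ultimate - bf_ultimate_chain_ladder(reported, expected, ldf)) < 1e-9
```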
Incurred but not reported claims can then be determined by subtracting reported losses from the Bornhuetter–Ferguson ultimate loss estimate. |
https://en.wikipedia.org/wiki/Gustatory%20nucleus | The gustatory nucleus is the rostral part of the solitary nucleus located in the medulla. The gustatory nucleus is associated with the sense of taste and has two sections, the rostral and lateral regions. A close association between the gustatory nucleus and visceral information exists for this function in the gustatory system, assisting in homeostasis via the identification of food that might be poisonous or harmful for the body. There are many gustatory nuclei in the brain stem. Each of these nuclei corresponds to three cranial nerves, the facial nerve (VII), the glossopharyngeal nerve (IX), and the vagus nerve (X), and GABA is the primary inhibitory neurotransmitter involved in its functionality. All visceral afferents in the vagus and glossopharyngeal nerves first arrive in the nucleus of the solitary tract, and information from the gustatory system can then be relayed to the thalamus and cortex.
The central axons of primary sensory neurons in the taste system in the cranial nerve ganglia connect to lateral and rostral regions of the nucleus of the solitary tract, which is located in the medulla and is also known as the gustatory nucleus. The most pronounced gustatory nucleus is the rostral cap of the nucleus solitarius, which is located at the ponto-medullary junction. Afferent taste fibers from the facial and glossopharyngeal nerves are sent to the nucleus solitarius. The gustatory system then sends information to the thalamus, which ultimately sends information to the cerebral cortex.
Each nucleus from the gustatory system can contain networks of interconnected neurons that can help regulate the firing rates of one another. Fishes (specifically channel catfish) have been used to study the structure, the mechanism of activation, and the integration with the solitary nucleus. The secondary gustatory nucleus contains three subnucleic structures: a medial, central and dorsal subnucleus (with the central and dorsal positioned in the rostra |
https://en.wikipedia.org/wiki/Leber%27s%20hereditary%20optic%20neuropathy | Leber's hereditary optic neuropathy (LHON) is a mitochondrially inherited (transmitted from mother to offspring) degeneration of retinal ganglion cells (RGCs) and their axons that leads to an acute or subacute loss of central vision; it predominantly affects young adult males. LHON is transmitted only through the mother, as it is primarily due to mutations in the mitochondrial (not nuclear) genome, and only the egg contributes mitochondria to the embryo. Men cannot pass on the disease to their offspring. LHON is usually due to one of three pathogenic mitochondrial DNA (mtDNA) point mutations. These mutations are at nucleotide positions 11778 G to A, 3460 G to A and 14484 T to C, respectively in the ND4, ND1 and ND6 subunit genes of complex I of the oxidative phosphorylation chain in mitochondria.
Signs and symptoms
Clinically, there is an acute onset of visual loss, first in one eye, and then a few weeks to months later in the other. Onset is usually in young adulthood, but onset at ages from 7 to 75 has been reported. The age of onset is slightly higher in females (range 19–55 years: mean 31.3 years) than in males (range 15–53 years: mean 24.3 years). The male-to-female ratio varies between mutations: 3:1 for 3460 G>A, 6:1 for 11778 G>A and 8:1 for 14484 T>C.
This typically evolves to very severe optic atrophy and a permanent decrease of visual acuity. Both eyes become affected either simultaneously (25% of cases) or sequentially (75% of cases) with a median inter-eye delay of 8 weeks. Rarely, only one eye is affected. In the acute stage, lasting a few weeks, the affected eye demonstrates an oedematous appearance of the nerve fiber layer, especially in the arcuate bundles and enlarged or telangiectatic and tortuous peripapillary vessels (microangiopathy). The main features are seen on fundus examination, just before or after the onset of visual loss. A pupillary defect may be visible in the acute stage as well. Examination reveals decreased visual acuity, loss of color vision a |
https://en.wikipedia.org/wiki/Ion | An ion () is an atom or molecule with a net electrical charge. The charge of an electron is considered to be negative by convention and this charge is equal and opposite to the charge of a proton, which is considered to be positive by convention. The net charge of an ion is not zero because its total number of electrons is unequal to its total number of protons.
A cation is a positively charged ion with fewer electrons than protons while an anion is a negatively charged ion with more electrons than protons. Opposite electric charges are pulled towards one another by electrostatic force, so cations and anions attract each other and readily form ionic compounds.
Ions consisting of only a single atom are termed atomic or monatomic ions, while two or more atoms form molecular ions or polyatomic ions. In the case of physical ionization in a fluid (gas or liquid), "ion pairs" are created by spontaneous molecule collisions, where each generated pair consists of a free electron and a positive ion. Ions are also created by chemical interactions, such as the dissolution of a salt in liquids, or by other means, such as passing a direct current through a conducting solution, dissolving an anode via ionization.
History of discovery
The word ion was coined from the Greek neuter present participle of ienai, meaning "to go". A cation is something that moves down (from Greek kato, meaning "down") and an anion is something that moves up (from Greek ano, meaning "up"). They are so called because ions move toward the electrode of opposite charge. This term was introduced (after a suggestion by the English polymath William Whewell) by English physicist and chemist Michael Faraday in 1834 for the then-unknown species that goes from one electrode to the other through an aqueous medium. Faraday did not know the nature of these species, but he knew that, since metals dissolved into and entered a solution at one electrode and new metal came forth from a solution at the other electrode, that some kind of |
https://en.wikipedia.org/wiki/List%20of%20sovereign%20states%20by%20number%20of%20Internet%20hosts | This is the list of countries by number of Internet hosts, based on 2012 figures from the CIA World Factbook. Several dependent territories, not fully recognized states, and non-state territories are also listed. The European Union host (.eu) is mostly composed of French, Polish and German hosts.
List
(*) The U.S. figure includes hosts in the .us, .mil, .gov, .edu, .com, .org, and .net domains.
See also
Internet Census of 2012 |
https://en.wikipedia.org/wiki/Thiamine%20triphosphate | Thiamine triphosphate (ThTP) is a biomolecule found in most organisms including bacteria, fungi, plants and animals. Chemically, it is the triphosphate derivative of the vitamin thiamine.
Function
It has been proposed that ThTP has a specific role in nerve excitability, but this has never been confirmed and recent results suggest that ThTP probably plays a role in cell energy metabolism. Low or absent levels of thiamine triphosphate have been found in Leigh disease.
In E. coli, ThTP is accumulated in the presence of glucose during amino acid starvation. On the other hand, suppression of the carbon source leads to the accumulation of adenosine thiamine triphosphate (AThTP).
Metabolism
It has been shown that in brain ThTP is synthesized in mitochondria by a chemiosmotic mechanism, perhaps similar to ATP synthase. In mammals, ThTP is hydrolyzed to thiamine pyrophosphate (ThDP) by a specific thiamine-triphosphatase. It can also be converted into ThDP by thiamine-diphosphate kinase.
History
Thiamine triphosphate (ThTP) was chemically synthesized in 1948 at a time when the only organic triphosphate known was ATP. The first claim of the existence of ThTP in living organisms was made in rat liver, followed by baker’s yeast. Its presence was later confirmed in rat tissues and in plant germ, but not in seeds, where thiamine was essentially unphosphorylated. In all those studies, ThTP was separated from other thiamine derivatives using a paper chromatographic method, followed by oxidation to fluorescent thiochrome compounds with ferricyanide in alkaline solution. This method is at best semi-quantitative, and the development of liquid chromatographic methods suggested that ThTP represents far less than 10% of total thiamine in animal tissues. |
https://en.wikipedia.org/wiki/Jeewanu | Jeewanu (Sanskrit for "particles of life") are synthetic chemical particles that possess cell-like structure and seem to have some functional properties; that is, they are a model of primitive cells, or protocells. It was first synthesised by Krishna Bahadur (20 January 1926 — 5 August 1994), an Indian chemist and his team in 1963. Using photochemical reaction, they produced coacervates, microscopic cell-like spheres from a mixture of simple organic and inorganic compounds. Bahadur named these particles 'Jeewanu' because they exhibit some of the basic properties of a cell, such as the presence of semipermeable membrane, amino acids, phospholipids and carbohydrates. Further, like living cells, they had several catalytic activities. Jeewanu are cited as models of protocells for the origin of life, and as artificial cells.
Etymology
Jeewanu is derived from Sanskrit jeewa, meaning "life", and anu, meaning the "smallest part of something", or the "indivisible". In contemporary Hindi, jeewanu also means unicellular organisms such as bacteria. Bahadur specifically used the term to represent the Indian philosophical tradition not only through the use of Sanskrit but also by inferring ideas on the origin of life from the Vedas. Bahadur, while employing the traditional Hindu philosophy, attempted to incorporate the advances in cell biology to the concept of abiogenesis.
Synthesis
In 1954 and 1958 Krishna Bahadur and co-workers published the successful synthesis of amino acids from a mixture of paraformaldehyde, colloidal molybdenum oxide or potassium nitrate and ferric chloride under sunlight. It appears that this experimental approach was seminal for the assays to produce Jeewanu, which he first reported in 1963 in an obscure Indian journal, Vijnana Parishad Anusandhan Patrika. His detailed syntheses were published in Germany in 1964 in a series of articles.
Their initial experiment consisted of a sterilised apparatus in which inorganic nitrogenous compounds (such as a |
https://en.wikipedia.org/wiki/Information%20diagram | An information diagram is a type of Venn diagram used in information theory to illustrate relationships among Shannon's basic measures of information: entropy, joint entropy, conditional entropy and mutual information. Information diagrams are a useful pedagogical tool for teaching and learning about these basic measures of information. Information diagrams have also been applied to specific problems such as for displaying the information theoretic similarity between sets of ontological terms. |
https://en.wikipedia.org/wiki/Coronavirus%20breathalyzer | A coronavirus breathalyzer is a diagnostic medical device enabling the user to test with 90% or greater accuracy the presence of severe acute respiratory syndrome coronavirus 2 in an exhaled breath.
As of the first half of 2020, the idea of a practical coronavirus breathalyzer was concomitantly developed by unrelated research groups in Australia, Canada, Finland, Germany, Indonesia, Israel, Netherlands, Poland, Singapore, United Kingdom and USA.
People with COVID-19 have higher levels of aldehydes, compounds produced when cells or tissues are damaged by inflammation, and ketones, which fits with research suggesting that the virus may damage the pancreas and cause ketosis. Diagnostics researchers hope to find the components in exhaled air that are truly characteristic of a disease and develop more specific sensors for them. This is done by studying breath samples using sensors in parallel with mass spectrometry analyses.
Different diseases may cause similar breath changes.
Diet can affect the chemicals someone exhales, as can smoking, alcohol consumption and medicines.
Australia
In Australia, GreyScan CEO Samantha Ollerton and Prof. Michael Breadmore of the University of Tasmania are basing a coronavirus breathalyzer on existing technology that is used around the world to detect explosives.
Canada
Canary Health Technologies, headquartered in Toronto with offices in Cleveland, Ohio, is developing a breathalyzer with disposable nanosensors using AI-powered cloud-based analysis. According to a press release, clinical trials began in India during November 2020. The stated goal is to develop an accurate, reasonably priced screening tool that can be used anywhere and deliver a result in less than a minute. The company postulates that analyzing volatile organic compounds in human breath could potentially detect diseases before the on-set of symptoms, earlier than currently available methods. Moreover, the cloud-based technology is designed to be used as a disease surv |
https://en.wikipedia.org/wiki/Programmer%27s%20key | The programmer's key, or interrupt button, is a button or switch on Classic Mac OS-era Macintosh systems, which jumps to a machine code monitor. The symbol on the button is ⎉. On most 68000 family based Macintosh computers, an interrupt request can also be sent by holding down the command key and pressing the power key on the keyboard. This effect is also simulated by the 68000 environment of the Mac OS nanokernel on PowerPC machines and the Classic environment.
A plastic insert came with Macintosh 128K, Macintosh 512K, Macintosh Plus, and Macintosh SE computers that could be attached to the exterior of the case and was used to press an interrupt button located on the motherboard.
Modern Mac hardware no longer includes the interrupt button, as the Mac OS X operating system has integrated debugging options. In addition, Mac OS X's protected memory blocks direct patching of system memory (in order to better secure the system).
See also
Interrupt
Context switch
MacsBug |
https://en.wikipedia.org/wiki/Gingivitis | Gingivitis is a non-destructive disease that causes inflammation of the gums. The most common form of gingivitis, and the most common form of periodontal disease overall, is in response to bacterial biofilms (also called plaque) that are attached to tooth surfaces, termed plaque-induced gingivitis. Most forms of gingivitis are plaque-induced.
While some cases of gingivitis never progress to periodontitis, periodontitis is always preceded by gingivitis.
Gingivitis is reversible with good oral hygiene; however, without treatment, gingivitis can progress to periodontitis, in which the inflammation of the gums results in tissue destruction and bone resorption around the teeth. Periodontitis can ultimately lead to tooth loss.
Signs and symptoms
The symptoms of gingivitis are somewhat non-specific and manifest in the gum tissue as the classic signs of inflammation:
Swollen gums
Bright red gums
Gums that are tender or painful to the touch
Bleeding gums or bleeding after brushing and/or flossing
Bad breath (halitosis)
Additionally, the stippling that normally exists in the gum tissue of some individuals will often disappear and the gums may appear shiny when the gum tissue becomes swollen and stretched over the inflamed underlying connective tissue. The accumulated plaque may also emit an unpleasant odor. When the gingiva are swollen, the epithelial lining of the gingival crevice becomes ulcerated and the gums will bleed more easily with even gentle brushing, and especially when flossing.
Complications
Recurrence of gingivitis
Periodontitis
Infection or abscess of the gingiva or the jaw bones
Trench mouth (bacterial infection and ulceration of the gums)
Swollen lymph nodes
Associated with premature birth and low birth weight
Alzheimer's and dementia
A study from 2018 found evidence that gingivitis bacteria may be linked to Alzheimer's disease. Scientists agree that more research is needed to prove a cause and effect link. "Studies have also found that the bacteria P. |
https://en.wikipedia.org/wiki/REPROM | Reprogrammable memory (abbreviated as REPROM or RePROM) is a type of ROM, more precisely, a type of PROM electronic memory. The prefix "Re" indicates that the ROM memory is reprogrammable.
There are two types of RePROM electronic memories:
EPROM
E²PROM or EEPROM
See also
Read-mostly memory (RMM)
Non-volatile memory
Computer memory |
https://en.wikipedia.org/wiki/Talent%20scheduling | Talent scheduling is an optimization problem in computer science and operations research, and it is also a problem in combinatorial optimization. Suppose we need to make a film consisting of several scenes, each of which needs to be shot by one or more actors, and only one scene can be shot per day. The salaries of these actors are calculated by the day. In this problem, each actor can only be hired for a consecutive block of days; for example, we cannot hire an actor for the first and third days but not the second day. During the hiring period, the producers still need to pay the actors even if they are not involved in the filming assignment. The purpose of talent scheduling is to minimize the actors' total salary by adjusting the sequence of scenes.
Mathematical formulation
Consider a film shoot composed of n shooting days and involving a total of m actors. We use the day out of days matrix (DODM) to represent the requirements for the various shooting days: the matrix has a 1 in the entry for actor j and day i if actor j is required on shooting day i, and a 0 otherwise.
Then we define the pay vector, whose j-th element is the rate of pay per day of the j-th actor. Let σ denote any permutation of the n columns of the DODM, drawn from the set of all permutations of the n shooting days, and let M(σ) be the matrix with its columns permuted according to σ.
For a given σ, consider the earliest and latest days in the schedule that require actor j. The actor is hired continuously from the earliest of these days to the latest, but only some of the days in between are actually required; the rest are unnecessary days on which the actor must still be paid.
The total cost of the unnecessary days, summed over all actors and weighted by their daily pay rates, is the objective function we should minimize.
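A minimal Python sketch (not from the source) of the objective just described: the cost of the unnecessary hold days for a given scene order, plus a brute-force search that is only viable for tiny instances. The matrix layout and names follow the formulation above.

```python
import itertools

def schedule_cost(dodm, pay, order):
    """Total pay for unnecessary (hold) days under a given scene order.

    dodm[j][i] : 1 if actor j is required on scene/day i, else 0
    pay[j]     : daily rate of actor j
    order      : a permutation of the column (scene) indices
    """
    total = 0
    for j, rate in enumerate(pay):
        days = [t for t, scene in enumerate(order) if dodm[j][scene]]
        if days:
            hired = days[-1] - days[0] + 1    # hired continuously from first to last required day
            needed = len(days)                # days actually required
            total += rate * (hired - needed)  # unnecessary days are paid but unused
    return total

def best_schedule(dodm, pay):
    """Exhaustive minimisation over all scene orders (n! orders, so tiny n only)."""
    n_scenes = len(dodm[0])
    return min(itertools.permutations(range(n_scenes)),
               key=lambda order: schedule_cost(dodm, pay, order))

# Tiny illustrative instance: two actors, three scenes.
dodm = [[1, 0, 1],   # actor 0 needed on scenes 0 and 2
        [0, 1, 1]]   # actor 1 needed on scenes 1 and 2
print(best_schedule(dodm, pay=[10, 1]))  # e.g. (0, 2, 1), with zero hold-day cost
```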
Proof of strong NP-hardness
In the talent scheduling problem, we can prove that the problem is NP-hard by a reduction from the optimal linear arrangement (OLA) problem. Even if we restrict the problem so that each actor is needed for just two days and all actors' salaries are 1, it is still polynomially reducible to the OLA pr |
https://en.wikipedia.org/wiki/Pseudomedian | In statistics, the pseudomedian is a measure of centrality for data-sets and populations. It agrees with the median for symmetric data-sets or populations. In mathematical statistics, the pseudomedian is also a location parameter for probability distributions.
Description
The pseudomedian of a distribution F is defined to be a median of the distribution of (Z1 + Z2)/2, where Z1 and Z2 are independent random variables, each with the same distribution F.
When F is a symmetric distribution, the pseudomedian coincides with the median; otherwise this is not generally the case.
The Hodges–Lehmann statistic, defined as the median of all of the midpoints of pairs of observations, is a consistent estimator of the pseudomedian.
Like the set of medians, the pseudomedian is well defined for all probability distributions, even for the many distributions that lack modes or means.
Pseudomedian filter in signal processing
In signal processing there is another definition, the pseudomedian filter, for discrete signals.
For a time series of length 2N + 1, the pseudomedian is defined as follows. Construct N + 1 sliding windows each of length N + 1. For each window, compute the minimum and maximum. Across all N + 1 windows, find the maximum minimum and the minimum maximum. The pseudomedian is the average of these two quantities.
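A short Python sketch of the filter definition just given; the function name is illustrative.

```python
def pseudomedian(window):
    """Pseudomedian of a sequence of odd length 2N + 1, per the definition above."""
    assert len(window) % 2 == 1, "window length must be odd (2N + 1)"
    n = (len(window) - 1) // 2
    subwindows = [window[i:i + n + 1] for i in range(n + 1)]  # N + 1 windows of length N + 1
    maximin = max(min(w) for w in subwindows)  # maximum of the per-window minima
    minimax = min(max(w) for w in subwindows)  # minimum of the per-window maxima
    return (maximin + minimax) / 2

print(pseudomedian([3, 1, 4, 1, 5]))  # N = 2: subwindows [3,1,4], [1,4,1], [4,1,5] -> 2.5
```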
See also
Hodges–Lehmann estimator
Median filter
Lulu smoothing |
https://en.wikipedia.org/wiki/Neil%20Sloane |
Neil James Alexander Sloane FLSW (born October 10, 1939) is a British-American mathematician. His major contributions are in the fields of combinatorics, error-correcting codes, and sphere packing. Sloane is best known for being the creator and maintainer of the On-Line Encyclopedia of Integer Sequences (OEIS).
Biography
Sloane was born in Beaumaris, Anglesey, Wales, in 1939, moving to Cowes, Isle of Wight, England in 1946. The family emigrated to Australia, arriving at the start of 1949. Sloane then moved from Melbourne to the United States in 1961.
He studied at Cornell University under Nick DeClaris, Frank Rosenblatt, Frederick Jelinek and Wolfgang Heinrich Johannes Fuchs, receiving his Ph.D. in 1967. His doctoral dissertation was titled Lengths of Cycle Times in Random Neural Networks. Sloane joined AT&T Bell Labs in 1968 and retired from AT&T Labs in 2012. He became an AT&T Fellow in 1998. He is also a Fellow of the Learned Society of Wales, an IEEE Fellow, a Fellow of the American Mathematical Society, and a member of the National Academy of Engineering.
He is a winner of a Lester R. Ford Award in 1978 and the Chauvenet Prize in 1979. In 1998 he was an Invited Speaker of the International Congress of Mathematicians in Berlin. In 2005 Sloane received the IEEE Richard W. Hamming Medal.
In 2008 he received the Mathematical Association of America David P. Robbins Prize, and in 2013 the George Pólya Award.
In 2014, to celebrate his 75th birthday, Sloane shared some of his favorite integer sequences. Besides mathematics, he loves rock climbing and has authored two rock-climbing guides to New Jersey.
He regularly appears in videos for Brady Haran's YouTube channel Numberphile.
Selected publications
Neil James Alexander Sloane, A Handbook of Integer Sequences, Academic Press, NY, 1973.
Florence Jessie MacWilliams and Neil James Alexander Sloane, The Theory of Error-Correcting Codes, Elsevier/North-Holland, Amsterdam, 1977.
M. Harwit and Neil James Alexander |
https://en.wikipedia.org/wiki/Decision-making%20models | Decision-making as a team is a structured, scientific process when the decision will affect a policy that governs an entity. Decision-making models are used as a method and process to fulfill the following objectives:
Every team member is clear about how a decision will be made
The roles and responsibilities for the decision making
Who will own the process to make the final decision
These models help the team to plan the process and the agenda for each decision-making meeting, and the understanding of the process and collaborative approach helps in achieving the support of the team members for the final decision to ensure commitment for the same.
Types
There are several models of decision-making:
Economic rationality model
When using this model, the following conditions are assumed.
The decision will be completely rational in a means-ends sense.
There is a complete and consistent system of preferences that allows a choice among alternatives.
There is a complete awareness of all the possible alternatives
Probability calculations are neither frightening nor mysterious
There are no limits to the complexity of computations that can be performed to determine the best alternatives
According to Kuwashima (2014, p. 1), in an organizational decision-making context, the decision-maker approaches the problem in a solely objective way and avoids all subjectivity. Moreover, the rational choice theory revolves around the idea that every individual attempts to maximize their own personal happiness or satisfaction gained from a good or service. This basic idea leads to the “rational” decision model, which is often used in the decision-making process. (Bergmiller, McCright and Weisenborn 2011, p. 2)
Simon's bounded rationality model
To present a more realistic alternative to the economic rationality model, Herbert Simon proposed an alternative model. He felt that management decision-making behavior could be described as follows:
In choosing between alternatives, the manager attempts |
https://en.wikipedia.org/wiki/Common%20data%20model | A common data model (CDM) can refer to any standardised data model which allows for data and information exchange between different applications and data sources. Common data models aim to standardise logical infrastructure so that related applications can "operate on and share the same data", and can be seen as a way to "organize data from many sources that are in different formats into a standard structure".
A common data model has been described as one of the components of a "strong information system". A standardised common data model has also been described as a typical component of a well designed agile application besides a common communication protocol. Providing a single common data model within an organisation is one of the typical tasks of a data warehouse.
Examples of common data models
Border crossings
X-trans.eu was a cross-border pilot project between the Free State of Bavaria (Germany) and Upper Austria with the aim of developing a faster procedure for the application and approval of cross-border large-capacity transports. The portal was based on a common data model that contained all the information required for approval.
Climate data
The Climate Data Store Common Data Model is a common data model set up by the Copernicus Climate Change Service for harmonising essential climate variables from different sources and data providers.
General information technology
Within service-oriented architecture, S-RAMP is a specification released by HP, IBM, Software AG, TIBCO, and Red Hat which defines a common data model for SOA repositories as well as an interaction protocol to facilitate the use of common tooling and sharing of data.
Content Management Interoperability Services (CMIS) is an open standard for inter-operation of different content management systems over the internet, and provides a common data model for typed files and folders used with version control.
The NetCDF software libraries for array-oriented scientific data implements a commo |
https://en.wikipedia.org/wiki/Signal%20velocity | The signal velocity is the speed at which a wave carries information. It describes how quickly a message can be communicated (using any particular method) between two separated parties. No signal velocity can exceed the speed of a light pulse in a vacuum (by Special Relativity).
Signal velocity is usually equal to group velocity (the speed of a short "pulse" or of a wave-packet's middle or "envelope"). However, in a few special cases (e.g., media designed to amplify the front-most parts of a pulse and then attenuate the back section of the pulse), group velocity can exceed the speed of light in vacuum, while the signal velocity will still be less than or equal to the speed of light in vacuum.
In electronic circuits, signal velocity is one member of a group of five closely related parameters. In these circuits, signals are usually treated as operating in TEM (Transverse ElectroMagnetic) mode. That is, the fields are perpendicular to the direction of transmission and perpendicular to each other. Given this presumption, the quantities: signal velocity, the product of dielectric constant and magnetic permeability, characteristic impedance, inductance of a structure, and capacitance of that structure, are all related such that if you know any two, you can calculate the rest. In a uniform medium if the permeability is constant, then variation of the signal velocity will be dependent only on variation of the dielectric constant.
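A small Python illustration of how two of the line constants determine the rest, as described above; the per-unit-length values are illustrative (roughly a 50 ohm line on an FR-4-like board), not taken from the text.

```python
import math

# Illustrative per-unit-length transmission-line constants.
L = 3.5e-7    # series inductance, henries per metre
C = 1.4e-10   # shunt capacitance, farads per metre

v = 1.0 / math.sqrt(L * C)        # signal velocity, metres per second
z0 = math.sqrt(L / C)             # characteristic impedance, ohms
delay_ps_per_mm = 1e12 / v / 1e3  # propagation delay, picoseconds per millimetre

print(f"v = {v:.3e} m/s, Z0 = {z0:.0f} ohm, delay = {delay_ps_per_mm:.2f} ps/mm")
```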
In a transmission line, signal velocity is the reciprocal of the square root of the capacitance-inductance product, where inductance and capacitance are typically expressed as per-unit length. In circuit boards made of FR-4 material, the signal velocity is typically about six inches (15 cm) per nanosecond, or 6.562 ps/mm. In circuit boards made of Polyimide material, the signal velocity is typically about 16.3 cm per nanosecond or 6.146 ps/mm. In these boards, permeability is usually constant and dielectric constant often varies from locati |
https://en.wikipedia.org/wiki/Walter%20Thirring | Walter Eduard Thirring (29 April 1927 – 19 August 2014) was an Austrian physicist after whom the Thirring model in quantum field theory is named. He was the son of the physicist Hans Thirring.
Life and career
Walter Thirring was born in Vienna, Austria, where he earned his Doctor of Physics degree in 1949 at the age of 22. In 1959 he became a professor of theoretical physics at the University of Vienna, and from 1968 to 1971 he was head of the Theory Division and director at CERN.
Besides pioneering work in quantum field theory, Walter Thirring devoted his scientific life to mathematical physics. He is the author of one of the first textbooks on quantum electrodynamics as well as of a four-volume course in mathematical physics.
In 2000, he received the Henri Poincaré Prize of the International Association of Mathematical Physics.
Walter Thirring authored Cosmic Impressions, Templeton Press, Philadelphia and London, in 2007, and in that book he sums up his feelings about the scientific discoveries made by modern cosmology: In the last decades, new worlds have been unveiled that our great teachers wouldn’t have even dreamed of. The panorama of cosmic evolution now enables deep insights into the blueprint of creation… Human beings recognize the blueprints, and understand the language of the Creator… These realizations do not make science the enemy of religion, but glorify the book of Genesis in the Bible.
His memoirs were published in 2010 as The Joy of Discovery: Great Encounters Along the Way by World Scientific Publishing Company. He recollects encounters with scientists like Albert Einstein, Erwin Schrödinger, Werner Heisenberg, Wolfgang Pauli and others as well as his collaborations with Murray Gell-Mann and Elliott Lieb.
Honours and awards
Eötvös Medal (1967)
Erwin Schrödinger Prize (1969)
Max Planck Medal of the German Physical Society (1978)
Prize of the city of Vienna (1978)
Austrian Decoration for Science and Art (1993)
Honorary Medal of the Aus |
https://en.wikipedia.org/wiki/Multi-model%20database | In the field of database design, a multi-model database is a database management system designed to support multiple data models against a single, integrated backend. In contrast, most database management systems are organized around a single data model that determines how data can be organized, stored, and manipulated. Document, graph, relational, and key–value models are examples of data models that may be supported by a multi-model database.
Background
The relational data model became popular after its publication by Edgar F. Codd in 1970. Due to increasing requirements for horizontal scalability and fault tolerance, NoSQL databases became prominent after 2009. NoSQL databases use a variety of data models, with document, graph, and key–value models being popular.
A multi-model database is a database that can store, index and query data in more than one model. For some time, databases have primarily supported only one model, such as: relational database, document-oriented database, graph database or triplestore. A database that combines many of these is multi-model.
For some time, it was all but forgotten (or considered irrelevant) that there were any other database models besides relational. The relational model and notion of third normal form were the default standard for all data storage. However, prior to the dominance of relational data modeling (from about 1980 to 2005), the hierarchical database model was commonly used. Since 2000 or 2010, many NoSQL models that are non-relational, including documents, triples, key–value stores and graphs, are popular. Arguably, geospatial data, temporal data, and text data are also separate models, though indexed, queryable text data is generally termed a "search engine" rather than a database.
The first time the word "multi-model" was associated with databases was on May 30, 2012, in Cologne, Germany, during Luca Garulli's keynote "NoSQL Adoption – What’s the Next Step?". Luca Garulli envisioned the evolutio |
https://en.wikipedia.org/wiki/Potato%20paradox | The potato paradox is a mathematical calculation that has a counter-intuitive result. The Universal Book of Mathematics states the problem in essentially these terms: 100 kg of potatoes, which are 99 percent water by weight, are left to dehydrate until they are 98 percent water; how much do they now weigh?
The answer is 50 kg.
In Quine's classification of paradoxes, the potato paradox is a veridical paradox.
If the potatoes are 99% water, the dry mass is 1%. This means that the 100 kg of potatoes contains 1 kg of dry mass, which does not change, as only the water evaporates.
In order to make the potatoes be 98% water, the dry mass must become 2% of the total weight—double what it was before. The amount of dry mass, 1 kg, remains unchanged, so this can only be achieved by reducing the total mass of the potatoes. Since the proportion that is dry mass must be doubled, the total mass of the potatoes must be halved, giving the answer 50 kg.
Mathematical proofs
Let x be the new total mass of the potatoes (dry + water).
Let d be the dry mass of the potatoes and w, the mass of water within the potatoes.
Recall w is 98% of the total mass, that is, w = 0.98x.
Therefore, x = d + w = d + 0.98x, i.e., x = d / 0.02 = 50 kg.
In our case, d = 1 kg so the new mass of the potatoes will indeed be 50 kg.
Let X be the mass lost. Since the solid (non-water) mass remains constant, then
X = initial water content – final water content
X = 99% × 100 kg – 98% × (100 kg – X)
X = 99 kg – 98 kg + 0.98X
1 kg = 0.02X
X = (1 kg)/0.02 = 50 kg
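The arithmetic above can be checked with a few lines of Python (the variable names are illustrative):

```python
initial_mass = 100.0    # kg of potatoes
water_before = 0.99     # 99% water initially
water_after = 0.98      # 98% water after drying

dry_mass = initial_mass * (1 - water_before)   # 1 kg of dry matter, unchanged by drying
final_mass = dry_mass / (1 - water_after)      # dry matter must now be 2% of the total
print(final_mass)                              # 50.0
```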
In popular culture
The potato paradox has made its way into popular culture. In one instance, it was the "Puzzler" on the Car Talk radio show. It was subsequently featured on Neatorama. It was also named one of the "Five Famous Paradoxes". |
https://en.wikipedia.org/wiki/Categories%3A%20On%20the%20Beauty%20of%20Physics | Categories: On the Beauty of Physics is a non-fiction science and art book edited, co-written, and published by American author Hilary Thayer Hamann in 2006. The book was conceived as a multidisciplinary educational tool that uses art and literature to broaden the reader's understanding of challenging material. Alan Lightman, author of Einstein's Dreams, called Categories "A beautiful synthesis of science and art, pleasing to the mind and to the eye," and Dr. Helen Caldicott, founder and president of the Nuclear Policy Research Institute, said, "This wonderful book will provoke thought in lovers of science and art alike, and with knowledge comes the inspiration to preserve the beauty of life on Earth."
Author
Hamann is co-writer, creative and editorial director of Categories—On the Beauty of Physics (2006), a multidisciplinary, interdisciplinary educational text that uses imagery to facilitate the reader's encounter with challenging material. She worked with physicist Emiliano Seffusati, Ph.D., who wrote the science text, and collage artist John Morse, who created the original artwork.
Overview
Categories is a book about physics that uses literature and art to stimulate the wonder and interest of the reader. It is intended to promote scientific literacy, foster an appreciation of the humanities, and encourage readers to make informed and imaginative connections between the sciences and the arts.
Hamann intended the physics book to be the first in a series, with subsequent titles to focus on biology and chemistry, and for the three titles to form the cornerstone of a television series for adolescents and their parents.
Criticism
Library Journal gave the book a starred review, calling Categories "a gorgeous book," "a comprehensive overview of physics," and "highly recommended."
The book received high praise from critics and scientists.
Cognitive scientist, Harvard professor, and author of The Language Instinct (1994), and How the Mind Works (1997) Steven Pinker |
https://en.wikipedia.org/wiki/Sink%20test | Sink test is a form of medical laboratory diagnostics healthcare fraud whereby clinical specimens are discarded, via a sink drain, and fabricated results are reported, without the clinical specimen actually being tested.
In the United States, the prevalence of sink test laboratories in the 1980s led in part to regulation following the passage of Clinical Laboratory Improvement Amendments in 1988.
While this illegal practice still occurs, it is rare within the highly regulated US lab market. |
https://en.wikipedia.org/wiki/Zinc%20finger%20protein%20585b | Zinc finger protein 585B is a protein that in humans is encoded by the ZNF585B gene. |
https://en.wikipedia.org/wiki/Legendre%E2%80%93Clebsch%20condition |
In the calculus of variations the Legendre–Clebsch condition is a second-order condition which a solution of the Euler–Lagrange equation must satisfy in order to be a minimum.
For the problem of minimizing the integral of L(t, x, x′) dt along an arc x(t),
the condition is that the second partial derivative of L with respect to x′, evaluated along the minimizing arc, be non-negative: ∂²L/∂x′² ≥ 0.
Generalized Legendre–Clebsch
In optimal control, the situation is more complicated because of the possibility of a singular solution. The generalized Legendre–Clebsch condition, also known as convexity, is a sufficient condition for local optimality such that when the linear sensitivity of the Hamiltonian to changes in u is zero, i.e.,
The Hessian of the Hamiltonian is positive definite along the trajectory of the solution:
In words, the generalized LC condition guarantees that over a singular arc, the Hamiltonian is minimized.
See also
Bang–bang control |
https://en.wikipedia.org/wiki/Principle%20of%20locality | In physics, the principle of locality states that an object is influenced directly only by its immediate surroundings. A theory that includes the principle of locality is said to be a "local theory". This is an alternative to the concept of instantaneous, or "non-local" action at a distance. Locality evolved out of the field theories of classical physics. The idea is that for a cause at one point to have an effect at another point, something in the space between those points must mediate the action. To exert an influence, something, such as a wave or particle, must travel through the space between the two points, carrying the influence.
The special theory of relativity limits the maximum speed at which causal influence can travel to the speed of light, c. Therefore, the principle of locality implies that an event at one point cannot cause a simultaneous result at another point. An event at one point cannot cause a result at a second point in a time less than T = D/c, where D is the distance between the points and c is the speed of light in vacuum.
Bell test experiments show that quantum mechanics broadly violates the inequalities established in Bell's theorem. According to some interpretations of quantum mechanics, this result implies that some quantum effects violate the principle of locality.
Pre-quantum mechanics
During the 17th century Newton's principle of universal gravitation was formulated in terms of "action at a distance", thereby violating the principle of locality. Newton himself considered this violation to be absurd:
Coulomb's law of electric forces was initially also formulated as instantaneous action at a distance, but in 1880 James Clerk Maxwell showed that field equations – which obey locality – predict all of the phenomena of electromagnetism. These equations show that electromagnetic forces propagate at the speed of light.
In 1905, Albert Einstein's special theory of relativity postulated that no matter or energy can travel faster than the speed of light, and |
https://en.wikipedia.org/wiki/Vibroejaculation | Vibroejaculation (or penile vibratory stimulation) is a means of inducing ejaculation through vibration. It is used for semen collection, and in humans, the management of anejaculation.
One method of penile vibratory stimulation is the use of specialised devices that are placed around the glans penis to stimulate it by vibration. Alternatively, a powerful wand vibrator of the type used as sex toys can be used.
See also
Electroejaculation |
https://en.wikipedia.org/wiki/Antibody%20microarray | An antibody microarray (also known as antibody array) is a specific form of protein microarray. In this technology, a collection of captured antibodies are spotted and fixed on a solid surface such as glass, plastic, membrane, or silicon chip, and the interaction between the antibody and its target antigen is detected. Antibody microarrays are often used for detecting protein expression from various biofluids including serum, plasma and cell or tissue lysates. Antibody arrays may be used for both basic research and medical and diagnostic applications.
Background
The concept and methodology of antibody microarrays were first introduced by Tse Wen Chang in 1983 in a scientific publication and a series of patents, when he was working at Centocor in Malvern, Pennsylvania. Chang coined the term “antibody matrix” and discussed “array” arrangement of minute antibody spots on small glass or plastic surfaces. He demonstrated that a 10×10 (100 in total) and 20×20 (400 in total) grid of antibody spots could be placed on a 1×1 cm surface. He also estimated that if an antibody is coated at a 10 μg/mL concentration, which is optimal for most antibodies, 1 mg of antibody can make 2,000,000 dots of 0.25 mm diameter. Chang's invention focused on the employment of antibody microarrays for the detection and quantification of cells bearing certain surface antigens, such as CD antigens and HLA allotypic antigens, particulate antigens, such as viruses and bacteria, and soluble antigens. The principle of "one sample application, multiple determinations", assay configuration, and mechanics for placing absorbent dots described in the paper and patents should be generally applicable to different kinds of microarrays. When Tse Wen Chang and Nancy T. Chang were setting up Tanox, Inc. in Houston, Texas in 1986, they purchased the rights on the antibody matrix patents from Centocor as part of the technology base to build their new startup. Their first product in development was an assay, te |
https://en.wikipedia.org/wiki/Survey%20of%20Health%2C%20Ageing%20and%20Retirement%20in%20Europe | The Survey of Health, Ageing and Retirement in Europe (SHARE) is a multidisciplinary and cross-national panel database of micro data on health, socio-economic status and social and family networks. In seven survey waves to date, SHARE has conducted approximately 380,000 interviews with about 140,000 individuals aged 50 and over. The survey covers 28 European countries and Israel.
SHARE was founded as a response to the European Commission's call to "examine the possibility of establishing, in co-operation with Member States, a European Longitudinal Ageing Survey". It has since become a major pillar of the European Research Area, selected as one of the projects to be implemented in the European Strategy Forum on Research Infrastructures (ESFRI) in 2006 and was given a new legal status as the first ever European Research Infrastructure Consortium (SHARE-ERIC) in March 2011.
About SHARE
Founded in 2002, SHARE is coordinated centrally at the Munich Center for the Economics of Aging (MEA), Max-Planck-Institute for Social Law and Social Policy and led by Managing Director Axel Börsch-Supan. It is a collaborative effort of more than 150 researchers worldwide who are organized in multidisciplinary national teams and cross-national working groups. A Scientific Monitoring Board composed of eminent international researchers and a network of advisors help to maintain and improve the project’s high scientific standards.
SHARE is harmonized with its role models and sister studies the U.S. Health and Retirement Study (HRS) and the English Longitudinal Study of Ageing (ELSA), and has the advantage of encompassing cross-national variation in public policy, culture and history across a variety of European countries. Its scientific power is based on its panel design that grasps the dynamic character of the ageing process. SHARE’s multi-disciplinary approach delivers a full picture of the ageing process. Procedural guidelines and programs ensure an ex-ante harmonized cross-national |
https://en.wikipedia.org/wiki/Carputer | A carputer, or car-puter, is a computer with specializations to run in a car, such as compact size, low power requirement, and some customized components. The computing hardware is typically based on standard PCs or mobile devices. They normally have standard interfaces such as Bluetooth, USB, and WiFi. The first carputer was introduced by Clarion on December 4, 1998, although on-board diagnostics had been employed since the 1980s to precisely measure the amount of fuel entering the engine as carburetors became too complex.
A challenge to installing a computer in a car is the power supply. Energy is supplied as a nominal 12 VDC in cars or 24 VDC in some trucks. The voltage varies according to whether the engine is on or off since the battery generally delivers 12V, while the generator supplies more. There can be peaks, and at ignition time the supply current drops. External DC/DC converters can help to regulate voltages.
Police cars often have Mobile data terminals in the form of a laptop swivel mounted where the driver's armrest would be. This can be used to log data and to query networked databases.
Microsoft developed Windows Embedded Automotive and used it with the AutoPC, a brand of carputer jointly developed with Clarion. The system was released in 1998, and referred to the operating system itself as "Auto PC". It was based on Windows CE 2.0. It evolved into "Windows CE for Automotive". The platform was used for the first two generations of MyFord Touch while the third generation runs QNX from BlackBerry Limited.
Tablet computers such as the Nexus 7 can be installed either permanently (in-dash) or removably (a dock). It can be used for watching movies or listening to music, as well as for GPS navigation. It also has Bluetooth for hands-free calls.
Computers can be used to decode on-board diagnostics (OBD) data to a visual display. Many interfaces are based on the ELM327 OBD Interpreter ICs. STN1110 is also known to be used.
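As a hedged sketch of the kind of OBD decoding described above, the snippet below polls engine RPM from an ELM327-style serial adapter using pyserial. The device path and baud rate are assumptions that depend on the adapter; the AT commands and the mode 01 / PID 0C request with RPM = ((A*256)+B)/4 are standard ELM327 and OBD-II conventions.

```python
import serial  # pyserial

# Assumed device path and baud rate; real values depend on the adapter.
with serial.Serial("/dev/ttyUSB0", baudrate=38400, timeout=1) as elm:
    def send(cmd):
        elm.write((cmd + "\r").encode("ascii"))
        return elm.read_until(b">").decode("ascii", errors="ignore")

    send("ATZ")            # reset the ELM327
    send("ATE0")           # turn command echo off
    reply = send("010C")   # OBD-II mode 01, PID 0C: engine RPM

    # A typical reply contains "41 0C A B"; RPM = ((A * 256) + B) / 4
    data = reply.replace(">", "").split()
    if "41" in data and len(data) >= data.index("41") + 4:
        i = data.index("41")
        a, b = int(data[i + 2], 16), int(data[i + 3], 16)
        print("Engine RPM:", (a * 256 + b) / 4)
```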
See also
Vehicular commun |
https://en.wikipedia.org/wiki/YGL%20motif | The YGL motif (or the amino acids in sequence of Tyrosine-Glycine-Leucine) is an integrin-binding motif present in several viral glycoproteins including Equine Herpes Virus (EHV) 1, EHV-4, and in rotavirus VP4. |
https://en.wikipedia.org/wiki/Galactic%20algorithm | A galactic algorithm is one that outperforms other algorithms for problems that are sufficiently large, but where "sufficiently large" is so big that the algorithm is never used in practice. Galactic algorithms were so named by Richard Lipton and Ken Regan, because they will never be used on any data sets on Earth.
Possible use cases
Even if they are never used in practice, galactic algorithms may still contribute to computer science:
An algorithm, even if impractical, may show new techniques that may eventually be used to create practical algorithms.
Available computational power may catch up to the crossover point, so that a previously impractical algorithm becomes practical.
An impractical algorithm can still demonstrate that conjectured bounds can be achieved, or that proposed bounds are wrong, and hence advance the theory of algorithms. As Lipton states: Similarly, a hypothetical large but polynomial algorithm for the Boolean satisfiability problem, although unusable in practice, would settle the P versus NP problem, considered the most important open problem in computer science and one of the Millennium Prize Problems.
Examples
Integer multiplication
An example of a galactic algorithm is the fastest known way to multiply two numbers, which is based on a 1729-dimensional Fourier transform. It needs O(n log n) bit operations, but as the constants hidden by the big O notation are large, it is never used in practice. However, it also shows why galactic algorithms may still be useful. The authors state: "we are hopeful that with further refinements, the algorithm might become practical for numbers with merely billions or trillions of digits."
Matrix multiplication
The first improvement over brute-force matrix multiplication (which needs O(n³) multiplications) was the Strassen algorithm: a recursive algorithm that needs O(n^2.807) multiplications. This algorithm is not galactic and is used in practice. Further extensions of this, using sophisticated group theory, are the Coppers |
https://en.wikipedia.org/wiki/Algebraic%20enumeration | Algebraic enumeration is a subfield of enumeration that deals with finding exact formulas for the number of combinatorial objects of a given type, rather than estimating this number asymptotically. Methods of finding these formulas include generating functions and the solution of recurrence relations. |
https://en.wikipedia.org/wiki/Faustovirus | Faustovirus is a genus of giant virus which infects amoebae associated with humans. The virus was first isolated in 2015 and shown to be around 0.2 micrometers in diameter with a double stranded DNA genome of 466 kilobases predicted to encode 451 proteins. Although classified as a nucleocytoplasmic large DNA virus (NCLDV), faustoviruses share less than a quarter of their genes with other NCLDVs; however, ~46% are homologous to bacterial genes and the remainder are orphan genes (ORFans). Specifically, the gene encoding the major capsid protein (MCP) of faustovirus is different than that of its most closely related giant virus, asfivirus, as well as other NCLDVs. In asfivirus, the gene encoding MCP is a single genomic fragment of ~2000 base pairs (bp), however, in faustovirus the MCP is encoded by 13 exons separated by 12 large introns. The exons have a mean length of 149 bp and the introns have a mean length of 1,273 bp. The presence of introns in faustovirus genes is highly unusual for viruses.
Replication
The replication strategy of faustovirus in amoeba is similar to that of mimivirus. Lasting 18 to 20 hours, the replication cycle begins with the amoeba ingesting individual viral particles through a process known as phagocytosis. After about 2 to 4 hours post infection, virus particles are internalized via phagocytic vacuoles and are detected by the host. While the particles appear near the host's nucleus, there is no evidence that the virus is within the nucleus or has an interaction with the nuclear membrane. Similar to the mimivirus, in which a channel is created for particle proteins and DNA to travel through, the faustovirus particles empty their internal compartments into the amoeba's cytoplasm. In both viruses, the fusion leads to an eclipse phase in which the contents of particles become invisible inside the cytoplasm of the host. However, the eclipse phase of the faustovirus is longer than the mimivirus, taking place from 4 to 6 hours post infection. Ch |
https://en.wikipedia.org/wiki/Footprinting | Footprinting (also known as reconnaissance) is the technique used for gathering information about computer systems and the entities they belong to. To get this information, a hacker might use various tools and technologies. This information is very useful to a hacker who is trying to crack a whole system.
When used in the computer security lexicon, "Footprinting" generally refers to one of the pre-attack phases; tasks performed before doing the actual attack. Some of the tools used for Footprinting are Sam Spade, nslookup, traceroute, Nmap and neotrace.
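As an illustration of the simplest of these lookups, the sketch below performs the forward and reverse DNS queries that tools such as nslookup automate; the target hostname is purely illustrative.

```python
import socket

target = "example.com"  # illustrative target

# Forward lookup: hostname -> IP addresses
name, aliases, addresses = socket.gethostbyname_ex(target)
print(target, "resolves to", addresses)

# Reverse lookup on the first address (fails if no PTR record exists)
try:
    hostname, _, _ = socket.gethostbyaddr(addresses[0])
    print(addresses[0], "reverse-resolves to", hostname)
except socket.herror:
    print(addresses[0], "has no reverse DNS entry")
```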
Techniques used
DNS queries
Network enumeration
Network queries
Operating system identification
Software used
Wireshark
Uses
It allows a hacker to gain information about the target system or network. This information can be used to carry out attacks on the system. That is why it may be called a pre-attack phase, since all the information is reviewed in order to plan a complete and successful attack. Footprinting is also used by ethical hackers and penetration testers to find security flaws and vulnerabilities within their own company's network before a malicious hacker does.
Types
There are two types of Footprinting that can be used: active Footprinting and passive Footprinting. Active Footprinting is the process of using tools and techniques, such as performing a ping sweep or using the traceroute command, to gather information on a target. Active Footprinting can trigger a target's Intrusion Detection System (IDS) and may be logged, and thus requires a degree of stealth to perform successfully. Passive Footprinting is the process of gathering information on a target by innocuous, or passive, means. Browsing the target's website, visiting social media profiles of employees, searching for the website on WHOIS, and performing a Google search of the target are all ways of passive Footprinting. Passive Footprinting is the stealthier method since it will not trigger a target's IDS or otherwise |
https://en.wikipedia.org/wiki/Philip%20Emeagwali | Philip Emeagwali (born 23 August 1954) is a computer scientist originally from Nigeria. He won the 1989 Gordon Bell Prize for price-performance in high-performance computing applications, in an oil reservoir modeling calculation using a novel mathematical formulation and implementation. He is known for making controversial claims about his achievements that are disputed by the scientific community.
Biography
Philip Emeagwali was born in Akure, Nigeria on 23 August 1954. He was raised in Onitsha in the South Eastern part of Nigeria. His early schooling was suspended in 1967 as a result of the Nigerian Civil War. At age 13, he served in the Biafran army. After the war he completed high-school equivalence through self-study.
Later on he married Dale Brown Emeagwali, an African-American microbiologist.
Education
He traveled to the United States to study under a scholarship following completion of a correspondence course at the University of London. He received a bachelor's degree in mathematics from Oregon State University in 1977. He later moved to Washington D.C., receiving in 1986 a master's degree from George Washington University in ocean and marine engineering, and a second master's in applied mathematics from the University of Maryland. Next magazine suggested that Emeagwali claimed to have further degrees. During this time, he worked as a civil engineer at the Bureau of Land Reclamation in Wyoming.
Court case and the denial of degree
Emeagwali studied for a Ph.D. degree from the University of Michigan from 1987 through 1991. His thesis was not accepted by a committee of internal and external examiners and thus he was not awarded the degree. Emeagwali filed a court challenge, stating that the decision was a violation of his civil rights and that the university had discriminated against him in several ways because of his race. The court challenge was dismissed, as was an appeal to the Michigan state Court of Appeals.
Supercomputing
Emeagwali received the 1989 |
https://en.wikipedia.org/wiki/ACSL1 | Long-chain-fatty-acid—CoA ligase 1 is an enzyme that in humans is encoded by the ACSL1 gene.
Structure
Gene
The ACSL1 gene is located on the 4th chromosome, with its specific location being 4q35.1. The gene contains 28 exons.
The protein encoded by this gene is an isozyme of the long-chain fatty-acid-coenzyme A ligase family. Although differing in substrate specificity, subcellular localization, and tissue distribution, all isozymes of this family convert free long-chain fatty acids into fatty acyl-CoA esters, and thereby play a key role in lipid biosynthesis and fatty acid degradation.
In melanocytic cells ACSL1 gene expression may be regulated by MITF.
Function
The protein encoded by this gene is an isozyme of the long-chain fatty-acid-coenzyme A ligase family. Although differing in substrate specificity, subcellular localization, and tissue distribution, all isozymes of this family convert free long-chain fatty acids into fatty acyl-CoA esters, and thereby play a key role in lipid biosynthesis and fatty acid degradation. Several transcript variants encoding different isoforms have been found for this gene. This specific protein is most commonly found in mitochondria and peroxisomes.
Clinical significance
ACSL1 is known to be involved in fatty-acid metabolism critical for heart function and nonspecific mental retardation. Since the ACSL4 gene is highly expressed in brain, where it encodes a brain-specific isoform, an ACSL4 mutation may be an efficient diagnostic tool in mentally retarded males.
Interactions
ACSL1 expression is regulated by SHP2 activity. Additionally, ACSL4 interacts with ACSL3, APP, DSE, ELAVL1, HECW2, MINOS1, PARK2, SPG20, SUMO2, TP53, TUBGCP3, UBC, UBD, and YWHAQ. |
https://en.wikipedia.org/wiki/Privilege%20%28computing%29 | In computing, privilege is defined as the delegation of authority to perform security-relevant functions on a computer system. A privilege allows a user to perform an action with security consequences. Examples of various privileges include the ability to create a new user, install software, or change kernel functions.
Users who have been delegated extra levels of control are called privileged. Users who lack most privileges are defined as unprivileged, regular, or normal users.
Theory
Privileges can either be automatic, granted, or applied for.
An automatic privilege exists when there is no requirement to have permission to perform an action. For example, on systems where people are required to log into a system to use it, logging out will not require a privilege. Systems that do not implement file protection - such as MS-DOS - essentially give unlimited privilege to perform any action on a file.
A granted privilege exists as a result of presenting some credential to the privilege granting authority. This is usually accomplished by logging on to a system with a username and password, and if the username and password supplied are correct, the user is granted additional privileges.
A privilege is applied for by either an executed program issuing a request for advanced privileges, or by running some program to apply for the additional privileges. An example of a user applying for additional privileges is provided by the sudo command to run a command as superuser (root) user, or by the Kerberos authentication system.
Modern processor architectures have multiple CPU modes that allow the OS to run at different privilege levels. Some processors have two levels (such as user and supervisor); i386+ processors have four levels (#0 with the most, #3 with the least privileges). Tasks are tagged with a privilege level. Resources (segments, pages, ports, etc.) and the privileged instructions are tagged with a demanded privilege level. When a task tries to use a re |
https://en.wikipedia.org/wiki/%CE%95-quadratic%20form | In mathematics, specifically the theory of quadratic forms, an ε-quadratic form is a generalization of quadratic forms to skew-symmetric settings and to *-rings; here ε = ±1, accordingly for symmetric or skew-symmetric. They are also called (−1)^n-quadratic forms, particularly in the context of surgery theory.
There is the related notion of ε-symmetric forms, which generalizes symmetric forms, skew-symmetric forms (= symplectic forms), Hermitian forms, and skew-Hermitian forms. More briefly, one may refer to quadratic, skew-quadratic, symmetric, and skew-symmetric forms, where "skew" means ε = −1 and the * (involution) is implied.
The theory is 2-local: away from 2, ε-quadratic forms are equivalent to ε-symmetric forms: half the symmetrization map (below) gives an explicit isomorphism.
Definition
ε-symmetric forms and ε-quadratic forms are defined as follows.
Given a module M over a *-ring R, let B(M) be the space of bilinear forms on M, and let T be the "conjugate transpose" involution on B(M), sending a form b to the form (u, v) ↦ b(v, u)*. Since multiplication by −1 is also an involution and commutes with linear maps, −T is also an involution. Thus we can write ε = ±1, and εT is an involution, either T or −T (ε can be more general than ±1; see below). Define the ε-symmetric forms as the invariants of εT, and the ε-quadratic forms as the coinvariants.
As an exact sequence, 0 → Q^ε(M) → B(M) → B(M) → Q_ε(M) → 0, where the middle map is 1 − εT.
As kernel and cokernel, Q^ε(M) = ker(1 − εT) and Q_ε(M) = coker(1 − εT).
The notation Q^ε(M), Q_ε(M) follows the standard notation M^G, M_G for the invariants and coinvariants of a group action, here of the order 2 group (an involution).
Composition of the inclusion and quotient maps, as Q^ε(M) ⊂ B(M) → Q_ε(M), yields a map Q^ε(M) → Q_ε(M): every ε-symmetric form determines an ε-quadratic form.
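To make the definitions above concrete, here is a short sketch in LaTeX; the notation and the restriction to ε = ±1 are assumptions made for illustration and may differ from the conventions of a given textbook:

```latex
% Sketch of the definitions described above, for a module M over a *-ring R
% and \varepsilon = \pm 1 (sign and notation conventions vary by source).
\[
  T \colon B(M) \to B(M), \qquad (TB)(x,y) = \overline{B(y,x)},
\]
\[
  Q^{\varepsilon}(M) = \ker\bigl(1 - \varepsilon T\bigr)
  \quad\text{(the $\varepsilon$-symmetric forms, the invariants of $\varepsilon T$),}
\]
\[
  Q_{\varepsilon}(M) = \operatorname{coker}\bigl(1 - \varepsilon T\bigr)
  = B(M)\big/\{\,B - \varepsilon TB\,\}
  \quad\text{(the $\varepsilon$-quadratic forms, the coinvariants).}
\]
```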
Symmetrization
Conversely, one can define a reverse homomorphism, called the symmetrization map (since it yields a symmetric form), by taking any lift of a quadratic form and multiplying it by (1 + εT). The result is a symmetric form because (1 − εT)(1 + εT) = 1 − (εT)² = 0, so it lies in the kernel of 1 − εT. The map is well-defined by the same equation: choosing a different lift |
https://en.wikipedia.org/wiki/Miller%20Puckette | Miller Smith Puckette (born 1959) is the associate director of the Center for Research in Computing and the Arts as well as a professor of music at the University of California, San Diego, where he has been since 1994.
Puckette is known for authoring Max, a graphical development environment for music and multimedia synthesis, which he developed while working at IRCAM in the late 1980s. He is also the author of Pure Data (Pd), a real-time performing platform for audio, video and graphical programming language for the creation of interactive computer music and multimedia works, written in the 1990s with input from many others in the computer music and free software communities.
Biography
An alumnus of St. Andrew's-Sewanee School in Tennessee, Miller Puckette got involved in computer music in 1979 at MIT with Barry Vercoe. In 1979 he became a Putnam Fellow.
He earned a Ph.D. in mathematics from Harvard University in 1986 after completing an undergraduate degree at MIT in 1980. He was a member of the MIT Media Lab from its opening in 1985 until 1987 before continuing his research at IRCAM, and since 1997 has been a part of the Global Visual Music project.
Max was first used to realize Pluton, the second work in Philippe Manoury's cycle Sonus ex Machina.
He is the 2008 SEAMUS Award Recipient.
On May 11, 2011, he received the title of Doctor Honoris Causa from the University of Mons.
On July 21, 2012, he received an Honorary Degree from Bath Spa University in recognition of his extraordinary contribution to computer music research.
He was the recipient of the Gold Medal at the 1975 Math Olympiads and the Silver Medal at the 1976 Math Olympiads.
Selected publications
For a full list, see: http://msp.ucsd.edu/publications.html
Puckette, Miller (2004) “Who Owns our Software?: A first-person case study” Proceedings, ISEA, pp. 200–202, republished in September 2009 issue of Montréal: Communauté électroacoustique canadienne / Canadian Electro |
https://en.wikipedia.org/wiki/Coulomb%20barrier | The Coulomb barrier, named after Coulomb's law, which is in turn named after physicist Charles-Augustin de Coulomb, is the energy barrier due to electrostatic interaction that two nuclei need to overcome so they can get close enough to undergo a nuclear reaction.
Potential energy barrier
This energy barrier is given by the electric potential energy:
$U_{\mathrm{coul}} = \dfrac{q_1 q_2}{4 \pi \varepsilon_0 r}$
where
ε0 is the permittivity of free space;
q1, q2 are the charges of the interacting particles;
r is the interaction radius.
A positive value of U is due to a repulsive force, so interacting particles are at higher energy levels as they get closer. A negative potential energy indicates a bound state (due to an attractive force).
The Coulomb barrier increases with the atomic numbers (i.e. the number of protons) of the colliding nuclei:
$U_{\mathrm{coul}} = \dfrac{Z_1 Z_2 e^2}{4 \pi \varepsilon_0 r}$
where e is the elementary charge, and Zi the corresponding atomic numbers.
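As a rough worked example, the barrier height for two protons (Z1 = Z2 = 1) at a separation of about 1 femtometre, a value chosen here only for illustration, can be evaluated numerically:

```python
# Rough numerical illustration: height of the Coulomb barrier for two
# protons (Z1 = Z2 = 1) at a separation of about 1 femtometre, roughly
# the range of the strong interaction. The separation chosen here is an
# illustrative assumption, not a value taken from the article.
import math

E_CHARGE = 1.602176634e-19      # elementary charge, C
EPSILON_0 = 8.8541878128e-12    # vacuum permittivity, F/m

def coulomb_barrier_joules(z1: int, z2: int, r_metres: float) -> float:
    return z1 * z2 * E_CHARGE**2 / (4 * math.pi * EPSILON_0 * r_metres)

u = coulomb_barrier_joules(1, 1, 1e-15)               # two protons, r = 1 fm
print(f"{u:.3e} J  =  {u / E_CHARGE / 1e6:.2f} MeV")  # roughly 1.4 MeV
```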
To overcome this barrier, nuclei have to collide at high velocities, so their kinetic energies drive them close enough for the strong interaction to take place and bind them together.
According to the kinetic theory of gases, the temperature of a gas is just a measure of the average kinetic energy of the particles in that gas. For classical ideal gases the velocity distribution of the gas particles is given by Maxwell–Boltzmann. From this distribution, the fraction of particles with a velocity high enough to overcome the Coulomb barrier can be determined.
In practice, temperatures needed to overcome the Coulomb barrier turned out to be smaller than expected due to quantum mechanical tunnelling, as established by Gamow. The consideration of barrier-penetration through tunnelling and the speed distribution gives rise to a limited range of conditions where fusion can take place, known as the Gamow window.
The absence of the Coulomb barrier enabled the discovery of the neutron by James Chadwick in 1932. |
https://en.wikipedia.org/wiki/Balanced%20audio | Balanced audio is a method of interconnecting audio equipment using balanced interfaces. This type of connection is very important in sound recording and production because it allows the use of long cables while reducing susceptibility to external noise caused by electromagnetic interference. The balanced interface guarantees that induced noise appears as common-mode voltages at the receiver which can be rejected by a differential device.
Balanced connections typically use shielded twisted-pair cable and three-conductor connectors. The connectors are usually three-pin XLR or TRS phone connectors. When used in this manner, each cable carries one channel, therefore stereo audio (for example) would require two of them.
A common misconception is that balanced audio requires the signal source to deliver equal waveforms of opposite polarity to the two signal conductors of the balanced line. However, many balanced devices actively drive only one side of the line, but do so at an impedance that is equal to the impedance of the non-driven side of the line. This impedance balance permits the balanced line receiver (input stage of the next device) to reject common-mode signals introduced to the two conductors by electromagnetic coupling.
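A toy numerical illustration of this rejection, assuming an ideal differential receiver and arbitrary made-up signal and noise levels:

```python
# Toy illustration of common-mode rejection on a balanced line.
# The signal and noise values are arbitrary, and an ideal differential
# receiver is assumed (real receivers have finite common-mode rejection).
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 5 * t)          # the wanted audio signal
noise = 0.3 * np.sin(2 * np.pi * 50 * t)    # hum induced equally on both wires

hot = signal + noise         # conductor actively driven with the signal
cold = 0.0 * signal + noise  # impedance-balanced but not actively driven

received = hot - cold        # differential receiver: common-mode noise cancels
print(np.allclose(received, signal))   # True: only the signal remains
```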
Applications
Many microphones operate at low voltage levels and some with high output impedance (hi-Z), which makes long microphone cables especially susceptible to electromagnetic interference. Microphone interconnections are therefore a common application for a balanced interconnection, which allows the receiver to reject most of this induced noise. If the power amplifiers of a public address system are located at any distance from the mixing console, it is also normal to use balanced lines for the signal paths from the mixer to these amplifiers. Many other components, such as graphic equalizers and effects units, have balanced inputs and outputs to allow this. In recording and for short cable runs in general, a compromise is necessar |
https://en.wikipedia.org/wiki/POPLmark%20challenge | In programming language theory, the POPLmark challenge (from "Principles of Programming Languages benchmark", formerly Mechanized Metatheory for the Masses!) (Aydemir, 2005) is a set of benchmarks designed to evaluate the state of automated reasoning (or mechanization) in the metatheory of programming languages, and to stimulate discussion and collaboration among a diverse cross section of the formal methods community. Very loosely speaking, the challenge is about measurement of how well programs may be proven to match a specification of how they are intended to behave (and the many complex issues that this involves). The challenge was initially proposed by the members of the PL club at the University of Pennsylvania, in association with collaborators around the world. The Workshop on Mechanized Metatheory is the main meeting of researchers participating in the challenge.
The design of the POPLmark benchmark is guided by features common to reasoning about programming languages. The challenge problems do not require the formalisation of large programming languages, but they do require sophistication in reasoning about:
Binding Most programming languages have some form of binding, ranging in complexity from the simple binders of simply typed lambda calculus to complex, potentially infinite binders needed in the treatment of record patterns.
Induction Properties such as subject reduction and strong normalisation often require complex induction arguments.
Reuse Furthering collaboration being a key aim of the challenge, the solutions are expected to contain reusable components that would allow researchers to share language features and designs without requiring them to start from scratch every time.
The problems
The POPLmark challenge is composed of three parts. Part 1 concerns solely the types of System F<: (System F with subtyping), and has problems such as:
Checking that the type system admits transitivity of subtyping.
Checking the transitivity of subt |
https://en.wikipedia.org/wiki/Wafer%20bonding | Wafer bonding is a packaging technology on wafer-level for the fabrication of microelectromechanical systems (MEMS), nanoelectromechanical systems (NEMS), microelectronics and optoelectronics, ensuring a mechanically stable and hermetically sealed encapsulation. The wafers' diameter range from 100 mm to 200 mm (4 inch to 8 inch) for MEMS/NEMS and up to 300 mm (12 inch) for the production of microelectronic devices. Smaller wafers were used in the early days of the microelectronics industry, with wafers being just 1 inch in diameter in the 1950s.
Overview
In microelectromechanical systems (MEMS) and nanoelectromechanical systems (NEMS), the package protects the sensitive internal structures from environmental influences such as temperature, moisture, high pressure and oxidizing species. The long-term stability and reliability of the functional elements depend on the encapsulation process, as does the overall device cost. The package has to fulfill the following requirements:
protection against environmental influences
heat dissipation
integration of elements with different technologies
compatibility with the surrounding periphery
maintenance of energy and information flow
Techniques
The commonly used and developed bonding methods are as follows:
Direct bonding
Surface activated bonding
Plasma activated bonding
Anodic bonding
Eutectic bonding
Glass frit bonding
Adhesive bonding
Thermocompression bonding
Reactive bonding
Transient liquid phase diffusion bonding
Atomic diffusion bonding
Requirements
The bonding of wafers requires specific environmental conditions which can generally be defined as follows:
substrate surface
flatness
smoothness
cleanliness
bonding environment
bond temperature
ambient pressure
applied force
materials
substrate materials
intermediate layer materials
The actual bond is an interaction of all those conditions and requirements. Hence, the applied technology needs to be chosen in respect to the present substrat |
https://en.wikipedia.org/wiki/Test-driven%20development | Test-driven development (TDD) is a software development process relying on software requirements being converted to test cases before software is fully developed, and tracking all software development by repeatedly testing the software against all test cases. This is as opposed to software being developed first and test cases created later.
Software engineer Kent Beck, who is credited with having developed or "rediscovered" the technique, stated in 2003 that TDD encourages simple designs and inspires confidence.
Test-driven development is related to the test-first programming concepts of extreme programming, begun in 1999, but more recently has created more general interest in its own right.
Programmers also apply the concept to improving and debugging legacy code developed with older techniques.
Test-driven development cycle
The following sequence is based on the book Test-Driven Development by Example:
1. Add a test
The adding of a new feature begins by writing a test that passes iff the feature's specifications are met. The developer can discover these specifications by asking about use cases and user stories. A key benefit of test-driven development is that it makes the developer focus on requirements before writing code. This is in contrast with the usual practice, where unit tests are only written after code.
2. Run all tests. The new test should fail for expected reasons
This shows that new code is actually needed for the desired feature. It validates that the test harness is working correctly. It rules out the possibility that the new test is flawed and will always pass.
3. Write the simplest code that passes the new test
Inelegant or hard code is acceptable, as long as it passes the test. The code will be honed anyway in Step 5. No code should be added beyond the tested functionality.
4. All tests should now pass
If any fail, the new code must be revised until they pass. This ensures the new code meets the test requirements and does not break exis |
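A minimal sketch of steps 1–4 of the cycle using Python's built-in unittest module; the leap-year function is an invented example, not one taken from the book:

```python
# Minimal sketch of the red/green part of the cycle described above,
# using Python's built-in unittest. The leap-year example is purely
# illustrative and not taken from Test-Driven Development by Example.
import unittest

def is_leap_year(year: int) -> bool:
    # Step 3: the simplest code that makes the tests below pass.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestLeapYear(unittest.TestCase):
    # Step 1: the tests are written first; running them before the
    # function exists (or with a stub) must fail for the expected reason.
    def test_century_years(self):
        self.assertTrue(is_leap_year(2000))
        self.assertFalse(is_leap_year(1900))

    def test_ordinary_years(self):
        self.assertTrue(is_leap_year(2024))
        self.assertFalse(is_leap_year(2023))

if __name__ == "__main__":
    unittest.main()   # Step 4: all tests should now pass.
```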
https://en.wikipedia.org/wiki/Autonetics%20Recomp%20II | The Autonetics RECOMP II was a computer first introduced in 1958. It was made by the Autonetics division of North American Aviation.
It was attached to a desk that housed the input/output devices. Its desk integration made it a hands-on small system intended for the scientific and engineering computing market. The computer weighed about , including input-output.
Architecture
It had a 40-bit word size, 20-bit instruction size. Memory and registers were on a fixed head disk that operated like a drum memory—4080 words on standard tracks, 16 words on fast loop tracks, registers A, B, R, X each on their own high-speed loop track, and one prerecorded read only clock track.
It had a complete set of built-in floating point operations, including square root. Floating-point values used two words, one for the exponent and one for the fraction for a total of 80 bits.
Whereas the full 40-bit word was used for data, instructions were only 20 bits long and were stored two per word. Since indexing was commonly done by modifying the address part of an instruction (say, by adding one to access the next data item in a list), such instructions always had to be in the second half-word, and the first half-word was padded with a NOP instruction. Programmers also used these NOP instructions to provide space for future inserted instructions, since the assembler did not allow for use of symbolic addresses, and the insertion of a single instruction could otherwise require rewriting a lot of code.
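A small sketch of the half-word packing described above; the particular bit layout (first instruction in the high-order half) is an assumption for illustration, not the documented RECOMP II encoding:

```python
# Illustrative sketch of packing two 20-bit instructions into one 40-bit
# word, as described above. Placing the first instruction in the
# high-order half is an assumption made for this example.
WORD_BITS = 40
HALF_BITS = 20
HALF_MASK = (1 << HALF_BITS) - 1

def pack(first: int, second: int) -> int:
    assert 0 <= first <= HALF_MASK and 0 <= second <= HALF_MASK
    return (first << HALF_BITS) | second

def unpack(word: int) -> tuple[int, int]:
    return (word >> HALF_BITS) & HALF_MASK, word & HALF_MASK

NOP = 0  # a padding half-word standing in for the machine's NOP
word = pack(NOP, 0xABCDE)   # index-modified instruction kept in the second half
print(unpack(word))         # (0, 703710)
```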
The machine had a bit-serial architecture.
Punched paper tape was the external storage medium. The desk also had an electronic typewriter for printed output and a keyboard integrated with the system console to allow typed input and system control. Programs written in machine code could be input to the system from the console. |
https://en.wikipedia.org/wiki/Stealth%20wallpaper | For computer network security, stealth wallpaper is a material designed to prevent an indoor Wi-Fi network from extending or "leaking" to the outside of a building, where malicious persons may attempt to eavesdrop or attack a network. While it is simple to prevent all electronic signals from passing through a building by covering the interior with metal, stealth wallpaper accomplishes the more difficult task of blocking Wi-Fi signals while still allowing cellphone signals to pass through.
The first stealth wallpaper was originally designed by UK defense contractor BAE Systems
In 2012, The Register reported that a commercial wallpaper had been developed by Institut Polytechnique Grenoble and the Centre Technique du Papier with planned sale in 2013. This wallpaper blocks three selected Wi-Fi frequencies while still allowing GSM and 4G signals to pass through, so cell phone use remains unaffected.
See also
Electromagnetic shielding
Faraday cage
TEMPEST
Wallpaper
Wireless security |
https://en.wikipedia.org/wiki/Eiichi%20Goto | was a Japanese computer scientist, the builder of one of the first general-purpose computers in Japan.
Biography
Goto was born on January 26, 1931, in Shibuya, Tokyo. After attending Seikei High School he went to Tokyo University, where he graduated in 1953. He continued his graduate studies at Tokyo in physics under the supervision of Hidetoshi Takahashi, earning his doctorate in 1962. He became a faculty member at Tokyo in 1959. In 1968, he became the chief scientist of the Information Science Laboratory at RIKEN, a position he held until 1991. However, he continued to hold a position at Tokyo University as well, becoming a full professor there in 1970. He retired from the University of Tokyo in 1990, and in 1991 he moved to Kanagawa University.
Goto was a visiting professor at the Massachusetts Institute of Technology in 1961. He was vice president of the International Federation for Information Processing from 1971 to 1974, and also served several times on the steering committee of the Information Processing Society of Japan.
Goto died on June 12, 2005, of complications of diabetes.
Research
In 1954 while he was still a graduate student, Goto invented the parametron, a circuit element that combined a ferrite core with a capacitor to generate electrical oscillations whose timing could be controlled. This provided an alternative to the vacuum tube technology then in use for building computing devices. He completed the construction of the PC-1, one of the first general-purpose computers built in Japan, in 1958, using parametron-based logic.
Soon afterwards, he proposed the Goto pair, a device related to the parametron. Parametrons continued to be used for computing in Japan until the 1960s when they gave way to transistors. The quantum flux parametron is a later improvement of the parametron, also by Goto, that uses superconducting Josephson junctions to improve both the speed and the energy consumption of these devices.
During his visit to MIT in 1961, Goto d |
https://en.wikipedia.org/wiki/W3af | w3af (Web Application Attack and Audit Framework) is an open-source web application security scanner. The project provides a vulnerability scanner and exploitation tool for Web applications. It provides information about security vulnerabilities for use in penetration testing engagements. The scanner offers a graphical user interface and a command-line interface.
Architecture
w3af is divided into two main parts, the core and the plug-ins. The core coordinates the process and provides features that are consumed by the plug-ins, which find the vulnerabilities and exploit them. The plug-ins are connected and share information with each other using a knowledge base.
Plug-ins can be categorized as Discovery, Audit, Grep, Attack, Output, Mangle, Evasion or Bruteforce.
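A generic sketch of this core/plug-in/knowledge-base arrangement; it deliberately does not use w3af's real plug-in API, and all class and topic names are invented for illustration:

```python
# Generic sketch of the architecture described above: a core that runs
# plug-ins which exchange findings through a shared knowledge base.
# This is an illustration only and does not use w3af's real plug-in API.
from collections import defaultdict

class KnowledgeBase:
    def __init__(self):
        self._facts = defaultdict(list)

    def record(self, topic: str, fact: str) -> None:
        self._facts[topic].append(fact)

    def query(self, topic: str) -> list[str]:
        return list(self._facts[topic])

class DiscoveryPlugin:
    def run(self, kb: KnowledgeBase) -> None:
        kb.record("urls", "http://example.com/login")   # pretend crawl result

class AuditPlugin:
    def run(self, kb: KnowledgeBase) -> None:
        for url in kb.query("urls"):                    # consume shared findings
            kb.record("vulnerabilities", f"possible SQL injection at {url}")

kb = KnowledgeBase()
for plugin in (DiscoveryPlugin(), AuditPlugin()):       # the "core" loop
    plugin.run(kb)
print(kb.query("vulnerabilities"))
```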
History
w3af was started by Andres Riancho in March 2007 and has since been developed by the community. In July 2010, w3af announced its sponsorship and partnership with Rapid7. With Rapid7's sponsorship, the project was expected to increase its development speed and keep growing in terms of users and contributors.
See also
Metasploit Project
Low Orbit Ion Cannon (LOIC)
Web application security
OWASP Open Web Application Security Project |
https://en.wikipedia.org/wiki/Table%20of%20Clebsch%E2%80%93Gordan%20coefficients | This is a table of Clebsch–Gordan coefficients used for adding angular momentum values in quantum mechanics. The overall sign of the coefficients for each set of constant , , is arbitrary to some degree and has been fixed according to the Condon–Shortley and Wigner sign convention as discussed by Baird and Biedenharn. Tables with the same sign convention may be found in the Particle Data Group's Review of Particle Properties and in online tables.
Formulation
The Clebsch–Gordan coefficients are the solutions to
Explicitly:
The summation is extended over all integers k for which the argument of every factorial is nonnegative.
For brevity, solutions with $m < 0$ and $j_1 < j_2$ are omitted. They may be calculated using the simple relations
$\langle j_1\, m_1\, j_2\, m_2 \mid J\, M\rangle = (-1)^{j_1+j_2-J}\,\langle j_1\,{-m_1}\, j_2\,{-m_2} \mid J\,{-M}\rangle$
and
$\langle j_1\, m_1\, j_2\, m_2 \mid J\, M\rangle = (-1)^{j_1+j_2-J}\,\langle j_2\, m_2\, j_1\, m_1 \mid J\, M\rangle$
Specific values
The Clebsch–Gordan coefficients for j values less than or equal to 5/2 are given below.
When $j_2 = 0$, the Clebsch–Gordan coefficients are given by $\delta_{j,j_1}\,\delta_{m,m_1}$.
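These coefficients can also be computed symbolically, for example with SymPy's CG class (assumed here to follow the usual Condon–Shortley convention); the spin-1/2 example below is an arbitrary illustration:

```python
# Example of computing a single coefficient symbolically with SymPy.
from sympy import S
from sympy.physics.quantum.cg import CG

# <j1 m1, j2 m2 | j3 m3> for two spin-1/2 particles coupled to j = 1, m = 0
coeff = CG(S(1)/2, S(1)/2, S(1)/2, -S(1)/2, 1, 0).doit()
print(coeff)                    # sqrt(2)/2, i.e. 1/sqrt(2)
assert coeff**2 == S(1)/2       # sanity check on the value
```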
SU(N) Clebsch–Gordan coefficients
Algorithms to produce Clebsch–Gordan coefficients for higher values of $j_1$ and $j_2$, or for the su(N) algebra instead of su(2), are known.
A web interface for tabulating SU(N) Clebsch–Gordan coefficients is readily available. |
https://en.wikipedia.org/wiki/Jet%20%28mathematics%29 | In mathematics, the jet is an operation that takes a differentiable function f and produces a polynomial, the truncated Taylor polynomial of f, at each point of its domain. Although this is the definition of a jet, the theory of jets regards these polynomials as being abstract polynomials rather than polynomial functions.
This article first explores the notion of a jet of a real valued function in one real variable, followed by a discussion of generalizations to several real variables. It then gives a rigorous construction of jets and jet spaces between Euclidean spaces. It concludes with a description of jets between manifolds, and how these jets can be constructed intrinsically. In this more general context, it summarizes some of the applications of jets to differential geometry and the theory of differential equations.
Jets of functions between Euclidean spaces
Before giving a rigorous definition of a jet, it is useful to examine some special cases.
One-dimensional case
Suppose that $f\colon \mathbf{R}\to\mathbf{R}$ is a real-valued function having at least k + 1 derivatives in a neighborhood U of the point $x_0$. Then by Taylor's theorem,
$f(x) = f(x_0) + f'(x_0)(x-x_0) + \cdots + \frac{f^{(k)}(x_0)}{k!}(x-x_0)^k + \frac{R_{k+1}(x)}{(k+1)!}(x-x_0)^{k+1}$
where the remainder term $R_{k+1}(x)$ is bounded on U.
Then the k-jet of f at the point $x_0$ is defined to be the polynomial
$(J^k_{x_0}f)(z) = f(x_0) + f'(x_0)\,z + \cdots + \frac{f^{(k)}(x_0)}{k!}\,z^k$
Jets are normally regarded as abstract polynomials in the variable z, not as actual polynomial functions in that variable. In other words, z is an indeterminate variable allowing one to perform various algebraic operations among the jets. It is in fact the base-point from which jets derive their functional dependency. Thus, by varying the base-point, a jet yields a polynomial of order at most k at every point. This marks an important conceptual distinction between jets and truncated Taylor series: ordinarily a Taylor series is regarded as depending functionally on its variable, rather than its base-point. Jets, on the other hand, separate the algebraic properties of Taylor series from their functional properties. We shall deal with the reasons and applications of this separation later |
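As a quick illustration, the truncated Taylor polynomial that represents a k-jet can be computed with SymPy; the function and base point below are arbitrary choices:

```python
# Sketch: computing the 3-jet of f(x) = exp(sin(x)) at x0 = 0 as a
# truncated Taylor polynomial, using SymPy. The particular function and
# base point are arbitrary choices for illustration.
from sympy import symbols, exp, sin, series

x = symbols("x")
f = exp(sin(x))

k = 3
jet = series(f, x, 0, k + 1).removeO()   # terms up to (x - 0)**k
print(jet)   # the 3-jet: 1 + x + x**2/2 (the x**3 term happens to vanish)
```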
https://en.wikipedia.org/wiki/Character%20group | In mathematics, a character group is the group of representations of a group by complex-valued functions. These functions can be thought of as one-dimensional matrix representations and so are special cases of the group characters that arise in the related context of character theory. Whenever a group is represented by matrices, the function defined by the trace of the matrices is called a character; however, these traces do not in general form a group. Some important properties of these one-dimensional characters apply to characters in general:
Characters are invariant on conjugacy classes.
The characters of irreducible representations are orthogonal.
The primary importance of the character group for finite abelian groups is in number theory, where it is used to construct Dirichlet characters. The character group of the cyclic group also appears in the theory of the discrete Fourier transform. For locally compact abelian groups, the character group (with an assumption of continuity) is central to Fourier analysis.
Preliminaries
Let G be an abelian group. A function $f\colon G \to \mathbb{C}\setminus\{0\}$ mapping the group to the non-zero complex numbers is called a character of G if it is a group homomorphism from $G$ to $\mathbb{C}^{\times}$—that is, if $f(g_1 g_2) = f(g_1)\,f(g_2)$ for all $g_1, g_2 \in G$.
If $f$ is a character of a finite group $G$, then each function value $f(g)$ is a root of unity, since for each $g \in G$ there exists $k \in \mathbb{N}$ such that $g^{k} = e$, and hence $f(g)^{k} = f(g^{k}) = f(e) = 1$.
Each character f is a constant on conjugacy classes of G, that is, f(hgh−1) = f(g). For this reason, a character is sometimes called a class function.
A finite abelian group of order n has exactly n distinct characters. These are denoted by f1, ..., fn. The function f1 is the trivial representation, which is given by $f_1(g) = 1$ for all $g \in G$. It is called the principal character of G; the others are called the non-principal characters.
Definition
If G is an abelian group, then the set of characters fk forms an abelian group under pointwise multiplication. That is, the product of characters $f_j$ and $f_k$ is defined by $(f_j f_k)(g) = f_j(g)\,f_k(g)$ for all $g \in G$. This grou |
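A small numerical sketch for the cyclic group Z/nZ, whose n characters are f_k(g) = exp(2πik·g/n); the choice n = 6 is arbitrary:

```python
# Numeric sketch: the n characters of the cyclic group Z/nZ are
# f_k(g) = exp(2*pi*i*k*g/n); f_0 is the principal character (written f1
# in the text above). The values below only illustrate the group law and
# the standard orthogonality relation.
import cmath

n = 6
def character(k):
    return lambda g: cmath.exp(2j * cmath.pi * k * g / n)

f2, f3 = character(2), character(3)

# Pointwise product of two characters is again a character: f2 * f3 = f5
product_vals = [f2(g) * f3(g) for g in range(n)]
f5_vals = [character(5)(g) for g in range(n)]
assert all(abs(a - b) < 1e-9 for a, b in zip(product_vals, f5_vals))

# Orthogonality: sum over the group of f_j(g) * conj(f_k(g)) is n if j == k, else 0
inner = sum(f2(g) * f3(g).conjugate() for g in range(n))
print(abs(inner) < 1e-9)   # True
```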
https://en.wikipedia.org/wiki/Vicsek%20model | The Vicsek model is a mathematical model used to describe active matter. One motivation of the study of active matter by physicists is the rich phenomenology associated to this field. Collective motion and swarming are among the most studied phenomena. Within the huge number of models that have been developed to catch such behavior from a microscopic description, the most famous is the model introduced by Tamás Vicsek et al. in 1995.
Physicists have a great interest in this model as it is minimal and describes a kind of universality. It consists in point-like self-propelled particles that evolve at constant speed and align their velocity with their neighbours' one in presence of noise. Such a model shows collective motion at high density of particles or low noise on the alignment.
Model (mathematical description)
As this model aims at being minimal, it assumes that flocking is due to the combination of any kind of self propulsion and of effective alignment.
An individual $i$ is described by its position $\mathbf{x}_i(t)$ and the angle $\theta_i(t)$ defining the direction of its velocity at time $t$. The discrete time evolution of one particle is set by two equations:
At each time step $\Delta t$, each agent aligns with its neighbours within a given distance $r$ with an uncertainty due to a noise $\eta_i(t)$:
$\theta_i(t+\Delta t) = \langle\theta_j\rangle_{|\mathbf{x}_i-\mathbf{x}_j|<r} + \eta_i(t)$
The particle then moves at constant speed $v$ in the new direction:
$\mathbf{x}_i(t+\Delta t) = \mathbf{x}_i(t) + v\,\Delta t\,\bigl(\cos\theta_i(t+\Delta t),\ \sin\theta_i(t+\Delta t)\bigr)$
In these equations, $\langle\theta_j\rangle$ denotes the average direction of the velocities of particles (including particle $i$) within a circle of radius $r$ surrounding particle $i$.
The whole model is controlled by three parameters: the density of particles, the amplitude of the noise on the alignment and the ratio of the travel distance $v\,\Delta t$ to the interaction range $r$. From these two simple iteration rules, various continuous theories have been elaborated, such as the Toner–Tu theory, which describes the system at the hydrodynamic level.
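A minimal simulation sketch of these two update rules; all parameter values (density, noise amplitude, speed) are arbitrary choices for illustration:

```python
# Minimal sketch of the Vicsek update rule for N particles in a periodic
# box of side L. Parameter values (density, noise, speed) are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
N, L, r, v, dt, eta = 300, 10.0, 1.0, 0.3, 1.0, 0.4

pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)

def step(pos, theta):
    # pairwise displacements with periodic boundary conditions
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neighbours = (d ** 2).sum(-1) < r ** 2          # includes the particle itself
    # alignment: average direction of neighbours, plus uniform angular noise
    mean_sin = (neighbours * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neighbours * np.cos(theta)[None, :]).sum(1)
    noise = rng.uniform(-eta / 2, eta / 2, size=N)
    new_theta = np.arctan2(mean_sin, mean_cos) + noise
    # streaming at constant speed in the new direction
    new_pos = (pos + v * dt * np.column_stack((np.cos(new_theta),
                                               np.sin(new_theta)))) % L
    return new_pos, new_theta

for _ in range(100):
    pos, theta = step(pos, theta)
# polar order parameter: close to 1 when motion is collective
print(abs(np.exp(1j * theta).mean()))
```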
An Enskog-like kinetic theory, which is valid at arbitrary particle density, has been developed. This theory quantitatively describes th |
https://en.wikipedia.org/wiki/Tonnetz | In musical tuning and harmony, the (German for 'tone network') is a conceptual lattice diagram representing tonal space first described by Leonhard Euler in 1739. Various visual representations of the Tonnetz can be used to show traditional harmonic relationships in European classical music.
History through 1900
The Tonnetz originally appeared in Leonhard Euler's 1739 . Euler's Tonnetz, pictured at left, shows the triadic relationships of the perfect fifth and the major third: at the top of the image is the note F, and to the left underneath is C (a perfect fifth above F), and to the right is A (a major third above F). The Tonnetz was rediscovered in 1858 by Ernst Naumann, and was disseminated in an 1866 treatise of Arthur von Oettingen. Oettingen and the influential musicologist Hugo Riemann (not to be confused with the mathematician Bernhard Riemann) explored the capacity of the space to chart harmonic motion between chords and modulation between keys. Similar understandings of the Tonnetz appeared in the work of many late-19th century German music theorists.
Oettingen and Riemann both conceived of the relationships in the chart being defined through just intonation, which uses pure intervals. One can extend out one of the horizontal rows of the Tonnetz indefinitely, to form a never-ending sequence of perfect fifths: F-C-G-D-A-E-B-F♯-C♯-G♯-D♯-A♯-E♯-B♯-F𝄪-C𝄪-G𝄪- (etc.) Starting with F, after 12 perfect fifths, one reaches E♯. Perfect fifths in just intonation are slightly larger than the compromised fifths used in equal temperament tuning systems more common in the present. This means that when one stacks 12 fifths starting from F, the E♯ we arrive at will not be seven octaves above the F we started with. Oettingen and Riemann's Tonnetz thus extended on infinitely in every direction without actually repeating any pitches. In the twentieth century, composer-theorists such as Ben Johnston and James Tenney continued to developed theories and applications involving |
https://en.wikipedia.org/wiki/Local%20nonsatiation | In microeconomics, the property of local nonsatiation (LNS) of consumer preferences states that for any bundle of goods there is always another bundle of goods arbitrarily close that is strictly preferred to it.
Formally, if X is the consumption set, then for any $x \in X$ and every $\varepsilon > 0$, there exists a $y \in X$ such that $\|y - x\| \le \varepsilon$ and $y$ is strictly preferred to $x$.
Several things to note are:
Local nonsatiation is implied by monotonicity of preferences. However, as the converse is not true, local nonsatiation is a weaker condition.
There is no requirement that the preferred bundle y contain more of any good – hence, some goods can be "bads" and preferences can be non-monotone.
It rules out the extreme case where all goods are "bads", since the point x = 0 would then be a bliss point.
Local nonsatiation can only occur either if the consumption set is unbounded or open (in other words, it is not compact) or if x is on a section of a bounded consumption set sufficiently far away from the ends. Near the ends of a bounded set, there would necessarily be a bliss point where local nonsatiation does not hold.
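A tiny numerical illustration of the formal definition above, assuming the (locally nonsatiated) utility function u(x) = x1 + x2, which is chosen purely for illustration:

```python
# For the monotone utility u(x) = x1 + x2, any bundle x has a strictly
# preferred bundle y within any distance eps: move a little along the
# first good. The utility function is an assumption for illustration.
import numpy as np

def u(x):
    return x[0] + x[1]

def strictly_preferred_nearby(x, eps):
    y = np.array(x, dtype=float)
    y[0] += eps / 2           # stay within distance eps of x
    return y

x = np.array([3.0, 1.0])
for eps in (1.0, 0.1, 0.001):
    y = strictly_preferred_nearby(x, eps)
    assert np.linalg.norm(y - x) <= eps and u(y) > u(x)
print("local nonsatiation holds at x for this utility")
```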
Applications of local nonsatiation
Local nonsatiation (LNS) is often applied in consumer theory, a branch of microeconomics, as an important property often assumed in theorems and propositions. Consumer theory is a study of how individuals make decisions and spend their money based on their preferences and budget. Local nonsatiation is also a key assumption for the First welfare theorem.
Indifference curve
An indifference curve is a set of all commodity bundles providing consumers with the same level of utility. The indifference curve is named so because the consumer would be indifferent between choosing any of these bundles. The indifference curves are not thick.
Walras’s law
Local nonsatiation is a key assumption in the Walras’ law theorem. Walras's law says that if consumers have locally nonsatiated preferences, they will consume their entire budget over their lifetime.
The indirect |
https://en.wikipedia.org/wiki/Skybirds | Skybirds was a brand name for a series of 1:72 scale wood and metal aircraft model kits produced during the 1930s and 1940s, manufactured by the A. J. Holladay & Co.
Some of the Skybird-branded products were die-cast scale model cars, aircraft, military vehicles, figurines, among others.
History
These kits were designed by pilot and aviation journalist James Hay Stevens and comprised shaped wooden blanks with cast metal detail parts. The kits were intended to educate their assemblers about the aircraft they depicted. They were designed to be built in a manner resembling real aircraft construction. The kits were supposedly approved by "educational and air-minded organisations". These were the first model aircraft kits in the world made to a constant scale of 1:72. This scale was later adopted by many other model manufacturers, such as Frog and Airfix.
Around 80 different Skybirds kits were released from 1932 onwards, marketed towards those aged 12 and over. Subjects ranged from First World War to Second World War military aircraft, plus a number of inter-war period civilian types.
The company endorsed the foundation of clubs, specifically for model-making. Together, these formed the Skybird League which had its own magazine of which new issues were published four times a year. Photographs of aircraft models built could be submitted into competitions, in order to be displayed within the windows of the Hamleys toyshop in London.
Customers marvelled that a photograph of a properly finished model looked identical to the original article. This was somewhat easier to achieve in the 1930s because all photography was monochromatic. The kits encouraged photographers to experiment with scale and trickery to make the model seem more like an actual vehicle. Magazine readers sent their photographs to Skybirds, hoping to see them published in an upcoming issue. Modellers were also encouraged to produce a diorama of their co |
https://en.wikipedia.org/wiki/NLTSS | The Network Livermore Timesharing System (NLTSS, also sometimes the New Livermore Time Sharing System) is an operating system that was actively developed at Lawrence Livermore Laboratory (now Lawrence Livermore National Laboratory) from 1979 until about 1988, though it continued to run production applications until 1995. An earlier system, the Livermore Time Sharing System had been developed over a decade earlier.
NLTSS ran initially on a CDC 7600 computer, but only ran production from about 1985 until 1994 on Cray computers including the Cray-1, Cray X-MP, and Cray Y-MP models.
Characteristics
The NLTSS operating system was unusual in many respects and unique in some.
Low-level architecture
NLTSS was a microkernel message passing system. It was unique in that only one system call was supported by the kernel of the system. That system call, which might be called "communicate" (it didn't have a name because it didn't need to be distinguished from other system calls) accepted a list of "buffer tables" (e.g., see The NLTSS Message System Interface) that contained control information for message communication – either sends or receives. Such communication, both locally within the system and across a network was all the kernel of the system supported directly for user processes. The "message system" (supporting the one call and the network protocols) and drivers for the disks and processor composed the entire kernel of the system.
Mid-level architecture
NLTSS is a capability-based security client–server system. The two primary servers are the file server and the process server. The file server was a process privileged to be trusted by the drivers for local storage (disk storage,) and the process server was a process privileged to be trusted by the processor driver (software that switched time sharing control between processes in the "alternator", handled interrupts for processes besides the "communicate" call, provided access to memory and process state for the proce |
https://en.wikipedia.org/wiki/MiNT | MiNT (MiNT is Now TOS) is a free software alternative operating system kernel for the Atari ST series. It is a multi-tasking alternative to TOS and MagiC. Together with the free system components fVDI device drivers, XaAES graphical user interface widgets, and TeraDesk file manager, MiNT provides a free TOS compatible replacement OS that can multitask.
History
Work on MiNT began in 1989, as the developer Eric Smith was trying to port the GNU library and related utilities on the Atari ST TOS. It soon became much easier to add a Unix-like layer to the TOS, than to patch all of the GNU software, and MiNT began as a TOS extension to help in porting.
MiNT was originally released by Eric Smith as "MiNT is Not TOS" (a recursive acronym in the style of "GNU's Not Unix") in May 1990. The new kernel gained traction, with people contributing a port of the MINIX file system and a port to the Atari TT.
At the same time, Atari was looking to enhance the TOS with multitasking abilities. MiNT could fulfill the job, and Atari hired Eric Smith. MiNT was adopted as an official alternative kernel with the release of the Atari Falcon, slightly altering the MiNT acronym into "MiNT is Now TOS". Atari bundled MiNT with a multitasking version of the Graphics Environment Manager (GEM) under the name MultiTOS as a floppy disk based installer.
After Atari left the computer market, MiNT development continued as FreeMiNT, and became maintained by a team of volunteers. FreeMiNT development follows a classic open-source approach, with the source code hosted on a publicly browsable FreeMiNT Git repository on GitHub and development discussed in a public mailing list, which is maintained on SourceForge after an earlier (2014) move from AtariForge, where it had been hosted for almost 20 years.
MiNT software ecosystem
FreeMiNT provides only a kernel, so several distributions support MiNT, like VanillaMint, EasyMint, STMint, and BeeKey/BeePi.
Although FreeMiNT can use the graphical user interface |
https://en.wikipedia.org/wiki/Resprouter | Resprouters are plant species that are able to survive fire by the activation of dormant vegetative buds to produce regrowth.
Plants may resprout from a bud bank that can be located in different places, including in the trunk or major branches (epicormic shoots) or in belowground structures like lignotubers, bulbs, and other structures.
Resprouters characterize chaparral, fynbos, kwongan, savanna and other landscapes that experience periodic fires.
See also
Adventitiousness
Coppicing
Crown sprouting
Cutting (plant)
Geoxyle
Water sprout |
https://en.wikipedia.org/wiki/Otway%E2%80%93Rees%20protocol | The Otway–Rees protocol is a computer network authentication protocol designed for use on insecure networks (e.g. the Internet). It allows individuals communicating over such a network to prove their identity to each other while also preventing eavesdropping or replay attacks and allowing for the detection of modification.
The protocol can be specified as follows in security protocol notation, where Alice is authenticating herself to Bob using a server S (M is a session-identifier, NA and NB are nonces, KAS and KBS are the long-term keys A and B share with S, and KAB is the freshly generated session key):
A → B : M, A, B, {NA, M, A, B}KAS
B → S : M, A, B, {NA, M, A, B}KAS, {NB, M, A, B}KBS
S → B : M, {NA, KAB}KAS, {NB, KAB}KBS
B → A : M, {NA, KAB}KAS
Note: The above steps do not authenticate B to A.
This is one of the protocols analysed by Burrows, Abadi and Needham in the paper that introduced an early version of Burrows–Abadi–Needham logic.
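A toy walk-through of this message flow, using the Fernet construction from the `cryptography` package as a stand-in for the shared-key encryption; this is an illustration of the data flow only, not a faithful or secure implementation of the protocol:

```python
# Toy walk-through of the Otway-Rees message flow using Fernet as a
# stand-in for the shared-key encryption. Illustrative only; not a
# secure or faithful implementation.
import json, os
from cryptography.fernet import Fernet

k_as, k_bs = Fernet.generate_key(), Fernet.generate_key()   # long-term keys with S
M = os.urandom(8).hex()                                     # session identifier
n_a, n_b = os.urandom(8).hex(), os.urandom(8).hex()         # nonces

# 1. A -> B: M, A, B, {N_A, M, A, B}Kas
blob_a = Fernet(k_as).encrypt(json.dumps([n_a, M, "A", "B"]).encode())

# 2. B -> S: M, A, B, {N_A, M, A, B}Kas, {N_B, M, A, B}Kbs
blob_b = Fernet(k_bs).encrypt(json.dumps([n_b, M, "A", "B"]).encode())

# 3. S checks both ciphertexts refer to the same M, A, B, then issues K_ab
for blob, key in ((blob_a, k_as), (blob_b, k_bs)):
    nonce, m, a, b = json.loads(Fernet(key).decrypt(blob))
    assert (m, a, b) == (M, "A", "B")
k_ab = Fernet.generate_key()
ticket_a = Fernet(k_as).encrypt(json.dumps([n_a, k_ab.decode()]).encode())
ticket_b = Fernet(k_bs).encrypt(json.dumps([n_b, k_ab.decode()]).encode())

# 4. B -> A: M, {N_A, K_ab}Kas ; both sides now share K_ab
nonce_a, key_for_a = json.loads(Fernet(k_as).decrypt(ticket_a))
assert nonce_a == n_a
print("A and B share the session key:", key_for_a == k_ab.decode())
```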
Attacks on the protocol
There are a variety of attacks on this protocol currently published.
Interception attacks
These attacks leave the intruder with the session key and may exclude one of the parties from the conversation.
Boyd and Mao observe that the original description does not require that S check the plaintext A and B to be the same as the A and B in the two ciphertexts. This allows an intruder masquerading as B to intercept the first message, then send the second message to S constructing the second ciphertext using its own key and naming itself in the plaintext. The protocol ends with A sharing a session key with the intruder rather than B.
Gürgens and Peralta describe another attack which they name an arity attack. In this attack the intruder intercepts the second message and replies to B using the two ciphertexts from message 2 in message 3. In the absence of any check to prevent it, M (or perhaps M,A,B) becomes the session key between A and B and is known to the intruder.
Cole describes both the Gürgens and Peralta arity attack and another attack in his book Hackers Beware. In this the intruder intercepts the first message, removes the plaintext A,B and uses that as message 4 omitting messages 2 and 3. This leaves A communicating with the in |
https://en.wikipedia.org/wiki/Padstool%20%28signage%29 | Bicycle mushroom (Dutch: paddenstoel) is a form of rural wayfinding signage for cyclists, in use in the Netherlands. The signs are named for their toadstool-like shape; "paddenstoel" first came into use as a nickname around 1921.
Use
Padstools are considered complementary to conventional signs on tall poles. In built-up areas, pole signs are preferred, but in natural areas such as moors, dunes, and woods, padstool signs are preferred. In natural areas, padstools are sufficiently visible to be spotted and read by the passing cyclist, without being so visible from far off that they spoil the views of the landscape.
Cyclists can look down on padstool signs, rather than having to look up away from the path. The signs are designed to be read quickly; there is a principle in the Netherlands that cyclists should not be slowed or stopped, even to read signage. A constant speed is more comfortable and efficient, and makes for shorter travel times.
History
The early twentieth century saw a dramatic increase in the number of cars; in 1920 there were about three thousand of them in the Netherlands. Cyclists' objections led to the development of separate bike paths (paths deliberately made too narrow for cars), and these paths needed their own signage.
The Algemene Nederlandsche Wielrijdersbond (ANWB), disliking the cluttering of natural landscapes with pole-mounted bike signage, ran a prize competition in 1918 for a better design, intended to be locally-produced. Three prototypes were set up on the heath of Laren in early 1919. The winning design was by Johannes Hendrik Willem Leliman, a house architect from Baarn. The first twelve padstools were installed between Laren and Baarn in 1919, by the local Cycle Path Society and the ANWB, two closely-entwined organizations. In 1920, 13 more were installed, and by 1975 there were 3,200 padstools. In 2019, there were about six thousand padstools in the Netherlands, clustered in certain areas.
From 2000, the ANWB gradually ceased being res |
https://en.wikipedia.org/wiki/Electronic%20message%20journaling | Electronic message journaling is the process of retaining information relating to electronic messages. In this context, electronic messages are defined as any type of electronic communication data structure. Historically this was electronic mail, but it may also include instant messages, audio messages (such as those in VoIP), text messages, facsimile messages, or other user collaboration protocol data structures. Beginning around 2005, electronic messages also came to include social media with user-generated content such as blogs, discussion forums, posts, chats, tweets, podcasts, pins, digital images, and video and audio files. Several implementation variations exist, altering when, what, and how information is retained.
Background
Archival of electronic messages has become a concern in modern society as regulations and compliance requirements for businesses have become more prevalent with notable Congressional acts, such as Sarbanes Oxley. Other compliance areas of concern are those dealing with U.S. Securities and Exchange Commission (SEC) 17a-4, NASD 3010, HIPAA, the Data Protection Act, and the Patriot Act. Several large corporations lost significant amounts of money because of their failure to meet these compliance requirements. Morgan Stanley had a $1.45 billion judgment against it and Merrill Lynch was issued a $2.5 million fine because of its inability to reproduce e-mail transmissions. Because of growing concerns of similar repercussions, major corporations are implementing electronic message journaling to meet compliance requirements.
Overview
A communication system recognizes and identifies any new outgoing or incoming message. It then creates a journal message containing information extracted from the new outgoing or incoming message. The journal message is then processed for storage while the new outgoing or incoming message is processed normally. Then, at a time of audit, reviewers may search and analyze stored journal messages. E-mail |
https://en.wikipedia.org/wiki/Consensus%20clustering | Consensus clustering is a method of aggregating (potentially conflicting) results from multiple clustering algorithms. Also called cluster ensembles or aggregation of clustering (or partitions), it refers to the situation in which a number of different (input) clusterings have been obtained for a particular dataset and it is desired to find a single (consensus) clustering which is a better fit in some sense than the existing clusterings. Consensus clustering is thus the problem of reconciling clustering information about the same data set coming from different sources or from different runs of the same algorithm. When cast as an optimization problem, consensus clustering is known as median partition, and has been shown to be NP-complete, even when the number of input clusterings is three. Consensus clustering for unsupervised learning is analogous to ensemble learning in supervised learning.
Issues with existing clustering techniques
Current clustering techniques do not address all the requirements adequately.
Dealing with a large number of dimensions and a large number of data items can be problematic because of time complexity;
Effectiveness of the method depends on the definition of "distance" (for distance-based clustering)
If an obvious distance measure doesn't exist, we must "define" it, which is not always easy, especially in multidimensional spaces.
The result of the clustering algorithm (that, in many cases, can be arbitrary itself) can be interpreted in different ways.
Justification for using consensus clustering
There are potential shortcomings for all existing clustering techniques. This may cause interpretation of results to become difficult, especially when there is no knowledge about the number of clusters. Clustering methods are also very sensitive to the initial clustering settings, which can cause non-significant data to be amplified in non-reiterative methods. An extremely important issue in cluster analysis is the val |
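One common consensus approach, sketched below, accumulates a co-association matrix over several k-means runs and then clusters that matrix hierarchically; the dataset and parameters are arbitrary illustrations, not a prescribed method:

```python
# Sketch of one common consensus-clustering approach: build a
# co-association ("evidence accumulation") matrix from several k-means
# runs, then cluster that matrix hierarchically. Dataset and parameter
# choices are arbitrary illustrations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)
n = len(X)

# Evidence accumulation: how often each pair lands in the same cluster.
co_assoc = np.zeros((n, n))
runs = 20
for seed in range(runs):
    labels = KMeans(n_clusters=3, n_init=5, random_state=seed).fit_predict(X)
    co_assoc += (labels[:, None] == labels[None, :]).astype(float)
co_assoc /= runs

# Consensus partition: hierarchical clustering on 1 - co_assoc as a distance.
dist = squareform(1.0 - co_assoc, checks=False)
consensus = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")
print(np.bincount(consensus))   # sizes of the consensus clusters
```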
https://en.wikipedia.org/wiki/Complementary%20currency | A complementary currency is a currency or medium of exchange that is not necessarily a national currency, but that is thought of as supplementing or complementing national currencies. Complementary currencies are usually not legal tender and their use is based on agreement between the parties exchanging the currency. According to Jérôme Blanc of Laboratoire d'Économie de la Firme et des Institutions, complementary currencies aim to protect, stimulate or orientate the economy. They may also be used to advance particular social, environmental, or political goals.
When speaking about complementary currencies, a number of overlapping and often interchangeable terms are in use: local or community currencies are complementary currencies used within a locality or other form of community (such as business-based or online communities); regional currencies are similar to local currencies, but are used within a larger geographical region; and sectoral currencies are complementary currencies used within a single economic sector, such as education or health care. Many private currencies are complementary currencies issued by private businesses or organizations. Other terms include alternative currency, auxiliary currency, and microcurrency. Mutual credit is a form of alternative currency, and thus any form of lending that does not go through the banking system can be considered a form of alternative currency. Barters are another type of alternative currency. These are actually exchange systems, which trade only items, without the use of any currency whatsoever. Finally, LETS is a special form of barter that trades points for items. One point stands for one worker-hour of work, and is thus a time-based currency.
Purposes
Current complementary currencies have often been designed intentionally to address specific issues, for example to increase financial stability. Most complementary currencies have multiple purposes and/or are intended to address multiple issues. They can be u |
https://en.wikipedia.org/wiki/List%20of%20national%20mottos | This article lists state and national mottos for the world's nations. The mottos for some states lacking general international recognition, extinct states, non-sovereign nations, regions, and territories are listed, but their names are not bolded.
A state motto is used to describe the intent or motivation of the state in a short phrase. For example, it can be included on a country's flag, coat of arms, or currency. Some countries do not have a national motto.
Current sovereign countries
: There is no god but God; Muhammad is the messenger of God. (; )
: You, Albania, give me honour, give me the name Albanian ()
: By the people and for the people (; ).
: Strength united is stronger ().
: Virtue is stronger when united ()
: Each endeavouring, all achieving
: No official motto. Unofficial motto: In Union and Liberty ().
: One Nation, One Culture (; ).
: No official motto. Formerly Advance Australia.
: No official motto.
: No official motto. Unofficial: The Land of Fire ()
: Forward, Upward, Onward Together
: No official motto.
: No official motto. Recognized official national slogan and war cry: Victory to Bengal (; জয় বাংলা).
: Pride and Industry
: No official motto. Unofficial motto: Long Live Belarus! (, )
: Unity makes strength (, , ).
: Under the shade I flourish ().
: Fellowship, Justice, Labour ().
: No official motto.
: Unity makes strength ().
: No official motto.
: Rain ().
: Order and progress ()
: Always in service with God's guidance ( ).
: Unity makes strength ( ).
: Unity, Progress, Justice ().
: Unity, Work, Progress ()
: Nation, Religion, King (; )
: Peace, Work, Fatherland ()
: From sea to sea ()
: Unity, Work, Progress ().
: Unity, Dignity, Work ().
: Unity, Work, Progress ()
: Through Reason Or By Force ()
: No official motto.
Serve The People! () is the motto of the Chinese Communist Party.
Long live the People's Republic of China, Long live the Great People's Unity of the World! () is the motto inscribed onto the Tiananmen, the symbol of |
https://en.wikipedia.org/wiki/Holdridge%20life%20zones%20in%20Guatemala | There are 14 Holdridge life zones in Guatemala: |
https://en.wikipedia.org/wiki/Georges%20Sagnac | Georges Sagnac (; 14 October 1869 – 26 February 1928) was a French physicist who lent his name to the Sagnac effect, a phenomenon which is at the basis of interferometers and ring laser gyroscopes developed since the 1970s.
Life and work
Sagnac was born at Périgueux and entered the École Normale Supérieure in 1889. While a lab assistant at the Sorbonne, he was one of the first in France to study X-rays, following Wilhelm Conrad Röntgen. He belonged to a group of friends and scientists that notably included Pierre and Marie Curie, Paul Langevin, Jean Perrin, and the mathematician Émile Borel. Marie Curie says that she and her husband had traded ideas with Sagnac around the time of the discovery of radioactivity. Sagnac died at Meudon-Bellevue.
Sagnac effect
In 1913, Georges Sagnac showed that if a beam of light is split and sent in two opposite directions around a closed path on a revolving platform with mirrors on its perimeter, and then the beams are recombined, they will exhibit interference effects. From this result Sagnac concluded that light propagates at a speed independent of the speed of the source. The motion of the earth through space had no apparent effect on the speed of the light beam, no matter how the platform was turned. The effect had been observed earlier (by Harress in 1911), but Sagnac was the first to correctly identify the cause.
This Sagnac effect (in vacuum) had been theoretically predicted by Max von Laue in 1911. He showed that such an effect is consistent with stationary ether theories (such as the Lorentz ether theory) as well as with Einstein's theory of relativity. It is generally taken to be inconsistent with a complete ether drag; and also inconsistent with emission theories of light, according to which the speed of light depends on the speed of the source.
Sagnac was a staunch opponent of the theory of relativity, despite the Sagnac effect being consistent with it.
See also
Sagnac effect
History of special relativity#Experime |
https://en.wikipedia.org/wiki/NordLocker | NordLocker is a file encryption software integrated with end-to-end encrypted cloud storage. It is available on Windows and macOS. NordLocker is developed by Nord Security, a company behind the NordVPN virtual private network, and is based in the UK and the Netherlands.
NordLocker uses a freemium business model, where users are offered a free account with unlimited local file encryption and a set amount of cloud storage with sync and backup features. More cloud storage is available via a paid subscription.
History
In May 2019, NordVPN announced the upcoming launch of NordLocker, “an app with a zero-knowledge encryption process”. Although the initial estimated time of arrival was summer 2019, the actual launch took place in November. The app was launched as a local file encryption tool with secure sharing. Users were able to encrypt up to 5 GB of data for free or pay for unlimited encryption.
In March 2020, NordLocker announced newly implemented cloud sharing integrations with Dropbox and Google Drive.
In August 2020, NordLocker launched a cloud storage add-on, a feature allowing users to back up their data and synchronize it across multiple devices.
Features
NordLocker is encryption software with cloud integration. The software uses so-called "lockers", encrypted folders used to encrypt and store user files. Users can create an unlimited number of lockers, drop files in to encrypt them, and transfer lockers separately.
The app uses client-side encryption to secure files on the user's device first. It's a zero-knowledge encryption system, where the developers have no data about users' files. After the encryption process, the user can decide whether to store data locally or sync it via NordLocker’s cloud. NordLocker syncs files via a private cloud, so they can be accessed from any computer with the NordLocker app installed.
The program uses AES-256 and 4096-bit RSA encryption algorithms as well as Argon2 and ECC (with XChaCha20, EdDSA, and Poly1305). NordLocke |
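As a generic sketch of client-side ("zero-knowledge") encryption, and explicitly not NordLocker's actual implementation, a file can be encrypted locally with a password-derived key before anything reaches the cloud; this example uses Scrypt and AES-GCM from the `cryptography` package rather than the algorithms listed above:

```python
# Generic sketch of client-side encryption: the key is derived from the
# user's password and the file is encrypted locally before upload.
# This is NOT NordLocker's actual implementation (the article mentions
# Argon2, AES-256 and XChaCha20, among others); Scrypt + AES-GCM are
# used here purely for illustration.
import os
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_locally(plaintext: bytes, password: bytes) -> dict:
    salt, nonce = os.urandom(16), os.urandom(12)
    key = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(password)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # Only salt, nonce and ciphertext would ever be synced to the cloud.
    return {"salt": salt, "nonce": nonce, "ciphertext": ciphertext}

def decrypt_locally(blob: dict, password: bytes) -> bytes:
    key = Scrypt(salt=blob["salt"], length=32, n=2**14, r=8, p=1).derive(password)
    return AESGCM(key).decrypt(blob["nonce"], blob["ciphertext"], None)

blob = encrypt_locally(b"my secret notes", b"correct horse battery staple")
assert decrypt_locally(blob, b"correct horse battery staple") == b"my secret notes"
```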