id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
14,592,469 | https://en.wikipedia.org/wiki/Cat%20play%20and%20toys | Cat play, including play with toys, incorporates predatory games of "play aggression". Cats' behaviors when playing are similar to hunting behaviors. These activities allow kittens and younger cats to grow and acquire cognitive and motor skills, and to socialize with other cats. Cat play behavior can be either solitary (with toys or other objects) or social (with animals and people). Cats can play with a multitude of toys, ranging from strings to small furry toys resembling prey (e.g. mice) to plastic bags.
Defining cat play
Object play for cats is the use of inanimate objects by the animal to express play behaviour. In the case of pet domestic cats, humans normally provide them with purchased, human-made toys such as toy mice, bird or feather toys, or toy insects. These may be suspended from a string attached to a wooden or fishing-style rod designed to simulate lifelike activity in the toy, triggering the cat's predatory instincts – this game is known as catfishing. Cat play can be enriched with the addition of obstacles behind which the prey can hide and items that make sound when the toy moves through them such as dried leaves, grass, or even a paper bag. When it comes to non-domestic, wild cats, they may use several objects in the wilderness as their toys including sticks, leaves, rocks, feathers, etc. Play behaviour includes throwing, chasing, biting, and capturing the toy object, mimicking behaviors used during an interaction with a real source of prey. Engaging in object play helps young cats practice these adult skills.
There are several different motor patterns associated with the play behaviour of cats and they have different roles in the social context. Pouncing is used to initiate play through physical contact. Kittens exhibit their preference for physical contact play by rolling and exposing the abdomen and rearing up on the hind legs. Chasing and horizontal leaping are examples of motor patterns that may be used to end play. The varying speed and directional movement of a cat's tail can be a useful indicator of its level of playfulness.
Development of play in kittens
Play in cats is a behaviour that first emerges in kittens. Some important developmental aspects of play behaviour include motor development, social behaviour and cognitive development. There are different types of play that develop at different stages during the development and growth of a kitten. The first play behaviours observed in kittens include approaching, pawing and holding onto each other. Following this stage in their development, kittens begin to show an interest in inanimate objects and prey behaviour. This is the development of their nonsocial behaviour in which they become more independent and begin to practice predatory/hunting behaviour. Play behaviour in kittens is also important in providing physical exercise as they are growing, as well as providing a means of interacting with other members of their litter to foster strong social bonds. Social play between litter-mates is important since it is the main source of play for kittens in early life, when they have little opportunity to explore other forms of play. This social play remains important until they have access to other play objects, such as toys.
Relation to predation
Since cats are meat-eating predators, nearly all cat games are predatory games.
Playing with live prey caught while hunting may be distinguished as a separate concept from playing with other cats or with humans, although the two look much the same to the human eye. It is suggested that ‘playing’ with prey is a behaviour evolved to avoid injury to the hunting cat by wearing down the caught prey before closing in to eat it. Predatory play would then be a part of hunting behaviour.
Predators often encounter prey that attempt to escape predation. Cats often play more with toys that behave like prey trying to flee than with toys that mimic confrontational prey by moving towards the cat with an aggressive or defensive posture.
Success rate
Success rate is important in play. A cat that catches its "prey" every time soon gets bored, and a cat that is never successful at capture can lose interest. The ideal hunting success rate is one successful capture for every three to six attempts. Capturing prey at this rate generally maximizes a cat's interest in the game.
Food
Catching prey and eating it are two closely related but separate activities. Domestic cats often store caught food for later consumption. In the wild, eating occurs at the end of the chase once the prey is caught; therefore, incorporating food into hunting games tends to end the interest in play. Hidden treats, however, help engage the cat's senses, such as sense of smell, and can be a form of play that enables them to utilize their searching skills.
Influence of hunger on cat behaviour
Hunger has been shown to increase the intensity of play behaviour in cats and to decrease the fear elicited by larger-sized toys. This effect of hunger on play behaviour may be attributed to the cat's level of hunting experience. A hungry cat with a high level of hunting experience is more likely to engage in predatory behaviours than a less experienced cat, which will instead exhibit play behaviours since it is not able to engage in actual predatory activities.
Precautions
If playing with a human's bare hands, a cat will generally resist using its claws or biting too hard (known as bite inhibition). However, play is about predatory behaviour, and a highly excited cat can unintentionally inflict minor injuries on its playmates in the form of light scratches or small puncture wounds from biting too hard. With most cats, it is wise to keep playthings at least 20 cm (8 in) away from fingers or eyes. Cats' claws and mouths harbour bacteria that can lead to infection, so it is wise to clean and treat any wounds with an antiseptic solution and to seek professional medical care if the wound becomes infected.
See also
Cat exercise wheel
Catnip
Scratching post
Cat tree
References
Further reading
External links
Cat behavior
Cat equipment
Play (activity)
Toys
| Cat play and toys | [
"Biology"
] | 1,211 | [
"Play (activity)",
"Behavior",
"Human behavior"
] |
14,593,084 | https://en.wikipedia.org/wiki/Nested%20set%20collection | A nested set collection or nested set family is a collection of sets that consists of chains of subsets forming a hierarchical structure, like Russian dolls.
It is used as reference concept in scientific hierarchy definitions, and many technical approaches, like the tree in computational data structures or nested set model of relational databases.
Sometimes the concept is confused with a collection of sets with a hereditary property (like finiteness in a hereditarily finite set).
Formal definition
Some authors regard a nested set collection as a family of sets; others prefer to classify the underlying relation as an inclusion order.
Let B be a non-empty set and C a collection of subsets of B. Then C is a nested set collection if:
B ∈ C (and, for some authors, ∅ ∉ C)
For every H, K ∈ C: if H ∩ K ≠ ∅ then H ⊆ K or K ⊆ H.
The first condition states that the whole set B, which contains all the elements of every subset, must belong to the nested set collection. Some authors do not assume that B is nonempty.
The second condition states that the intersection of every couple of sets in the nested set collection is not the empty set only if one set is a subset of the other.
In particular, when checking all pairs of subsets against the second condition, any pair that includes B satisfies it trivially, since every member of the collection is a subset of B.
Example
Using a set of atomic elements, as the set of the playing card suits:
B = {♠, ♥, ♦, ♣}; B1 = {♠, ♥}; B2 = {♦, ♣}; B3 = {♣}; C = {B, B1, B2, B3}.
The second condition of the formal definition can be checked by combining all pairs:
B1 ∩ B2 = ∅; B1 ∩ B3 = ∅; B2 ∩ B3 = B3 ≠ ∅ with B3 ⊂ B2; and every pair involving B satisfies the condition trivially, since each of B1, B2, B3 is a subset of B.
There is a hierarchy that can be expressed by two branches and its nested order: B3 ⊂ B2 ⊂ B and B1 ⊂ B.
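The defining conditions can also be checked mechanically. The following Python sketch (an illustration added here, with the hypothetical helper name is_nested_set_collection; it is not part of the article) verifies the playing-card example above:

```python
from itertools import combinations

def is_nested_set_collection(B, C):
    """Check the two defining conditions for a nested set collection."""
    # First condition: the whole set B must belong to the collection C.
    if frozenset(B) not in C:
        return False
    # Second condition: any two members with a non-empty intersection
    # must be comparable by inclusion (one contained in the other).
    for X, Y in combinations(C, 2):
        if X & Y and not (X <= Y or Y <= X):
            return False
    return True

B  = frozenset("SHDC")   # the four suits, written as letters
B1 = frozenset("SH")
B2 = frozenset("DC")
B3 = frozenset("C")

print(is_nested_set_collection(B, {B, B1, B2, B3}))           # True
print(is_nested_set_collection(B, {B, B1, frozenset("HD")}))  # False: {H, D} overlaps B1 without inclusion
```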
Derived concepts
As sets are a general abstraction and the foundation for many concepts, the nested set collection is the foundation for "nested hierarchy", "containment hierarchy" and others.
Nested hierarchy
A nested hierarchy or inclusion hierarchy is a hierarchical ordering of nested sets. The concept of nesting is exemplified in Russian matryoshka dolls. Each doll is encompassed by another doll, all the way to the outer doll. The outer doll holds all of the inner dolls, the next outer doll holds all the remaining inner dolls, and so on. Matryoshkas represent a nested hierarchy where each level contains only one object, i.e., there is only one of each size of doll; a generalized nested hierarchy allows for multiple objects within levels but with each object having only one parent at each level. Illustrating the general concept:
square ⊂ quadrilateral ⊂ polygon ⊂ shape. A square can always also be referred to as a quadrilateral, polygon or shape. In this way, it is a hierarchy. However, consider the set of polygons using this classification. A square can only be a quadrilateral; it can never be a triangle, hexagon, etc.
Nested hierarchies are the organizational schemes behind taxonomies and systematic classifications. For example, using the original Linnaean taxonomy (the version he laid out in the 10th edition of Systema Naturae), a human can be formulated as: H. sapiens ⊂ Homo ⊂ Primates ⊂ Mammalia ⊂ Animalia.
Taxonomies may change frequently (as seen in biological taxonomy), but the underlying concept of nested hierarchies is always the same.
Containment hierarchy
A containment hierarchy is a direct extrapolation of the nested hierarchy concept. All of the ordered sets are still nested, but every set must be "strict" — no two sets can be identical. The shapes example above can be modified to demonstrate this: square ⊊ quadrilateral ⊊ polygon ⊊ shape.
The notation x ⊊ y means x is a subset of y but is not equal to y.
Containment hierarchy is used in class inheritance of object-oriented programming.
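As a loose illustration (an assumption added here, not from the article), the shapes example can be mirrored by a chain of Python classes, where each subclass denotes a strictly smaller set of instances than its parent:

```python
class Shape: ...
class Polygon(Shape): ...
class Quadrilateral(Polygon): ...
class Square(Quadrilateral): ...

# The subclass relation mirrors the strict containments
# square ⊊ quadrilateral ⊊ polygon ⊊ shape.
print(issubclass(Square, Quadrilateral))   # True
print(issubclass(Square, Shape))           # True (containment is transitive)
print(issubclass(Polygon, Square))         # False (the inclusions are strict)
```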
See also
Hereditarily countable set
Hereditary property
Hierarchy (mathematics)
Nested set model for storing hierarchical information in relational databases
References
Set theory | Nested set collection | [
"Mathematics"
] | 787 | [
"Mathematical logic",
"Set theory"
] |
14,593,201 | https://en.wikipedia.org/wiki/Principal%20root%20of%20unity | In mathematics, a principal n-th root of unity (where n is a positive integer) of a ring is an element α satisfying the equations
α^n = 1
1 + α^k + α^(2k) + ... + α^((n−1)k) = 0 for every k with 1 ≤ k < n.
In an integral domain, every primitive n-th root of unity is also a principal n-th root of unity. In any ring, if n is a power of 2, then any n/2-th root of −1 is a principal n-th root of unity.
A non-example is 3 in the ring of integers modulo 26; while 3^3 = 27 ≡ 1 (mod 26) and thus 3 is a cube root of unity, 1 + 3 + 3^2 = 13 ≢ 0 (mod 26), meaning that it is not a principal cube root of unity.
The significance of a root of unity being principal is that it is a necessary condition for the theory of the discrete Fourier transform to work out correctly.
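The two defining conditions are easy to test numerically in a ring of integers modulo m. The following Python sketch (an illustration added here; the function name is_principal_root is made up) confirms the non-example above and shows a root that does satisfy both conditions:

```python
def is_principal_root(alpha, n, m):
    """Check whether alpha is a principal n-th root of unity in the integers modulo m."""
    # First condition: alpha^n = 1.
    if pow(alpha, n, m) != 1:
        return False
    # Second condition: the sums 1 + alpha^k + alpha^(2k) + ... + alpha^((n-1)k)
    # must vanish modulo m for every k = 1, ..., n-1.
    for k in range(1, n):
        if sum(pow(alpha, j * k, m) for j in range(n)) % m != 0:
            return False
    return True

print(is_principal_root(3, 3, 26))  # False: 3^3 = 27 = 1 (mod 26) but 1 + 3 + 9 = 13 != 0 (mod 26)
print(is_principal_root(2, 3, 7))   # True:  2^3 = 8 = 1 (mod 7) and 1 + 2 + 4 = 7 = 0 (mod 7)
```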
References
Algebraic numbers
Cyclotomic fields
Polynomials
1 (number)
Complex numbers | Principal root of unity | [
"Mathematics"
] | 165 | [
"Polynomials",
"Mathematical objects",
"Algebraic numbers",
"Complex numbers",
"Numbers",
"Algebra"
] |
14,593,396 | https://en.wikipedia.org/wiki/Allwork%20tractors | Allwork tractors were manufactured by the Electric Wheel Company of Quincy, Illinois.
Electric Wheel Co. was acquired by the Firestone Tire and Rubber Company. The All Work II Model F was a lightweight tractor with a big surplus of power for general farming and orchard work. This tractor was fueled by kerosene.
Allwork 14-28
Manufacturer: Electric Wheel Co., Quincy, Illinois
Nebraska test number: 53
Test date: August 16-September 14, 1920
Test tractor serial number: 5043
Years produced: 1918-1923
Engine: Electric Wheel Co. vertical L-head
Cylinders: 4
Bore and stroke (inches): 5.00 x 6.00
Rated rpm: 900
Displacement (cubic inches): 471.3 (see the consistency check below)
Fuel: kerosene/gasoline
Fuel tank capacity (gallons): 25
Auxiliary tank capacity (gallons): 5
Carburetor: Kingston E
Air cleaner: Bennett
Ignition: Kingston L magneto
Cooling capacity (gallons): 13
Maximum brake horsepower tests
PTO/belt horsepower: 28.86
Crankshaft rpm: 915
Fuel use (gallons per hour): 4.95
Maximum drawbar horsepower tests
Gear: low
Drawbar horsepower: 19.69
Pull weight (pounds): 3,950
Speed: 1.87
Percent slippage: 15.10
SAE drawbar horsepower: 14
SAE belt/PTO horsepower: 28
Type: 4
Front wheel (inches): steel, 32x6
Rear wheel (inches): steel, 48x12
Length (inches): 125
Height (inches): 69
Rear width (inches): 79
Weight (pounds): 5,000
Gear/speed (miles per hour): forward: 1/1.75, 2/2.50; reverse: 1/1.75
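As a quick consistency check (added here, not part of the original listing), the quoted displacement follows from the bore, stroke, and cylinder count:

$$ \frac{\pi}{4} \times (5.00\ \mathrm{in})^2 \times 6.00\ \mathrm{in} \times 4 \approx 471.2\ \mathrm{cubic\ inches}, $$

which agrees with the listed 471.3 c.i. to within rounding.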
References
Lorry Dunning, Ultimate American Farm Tractor Data Book. MBI Publishing Company, 729 Prospect Avenue, PO Box 1, Osceola, WI 54020-0001, USA.
http://www.flywheelers.com/pages/craig/mypics/Quincy%20All%20Work.jpg
http://www.farmcollector.com/equipment/electric-wheel-co.aspx
Tractors | Allwork tractors | [
"Engineering"
] | 1,365 | [
"Engineering vehicles",
"Tractors"
] |
14,593,776 | https://en.wikipedia.org/wiki/De%20Longchamps%20point | In geometry, the de Longchamps point of a triangle is a triangle center named after French mathematician Gaston Albert Gohierre de Longchamps. It is the reflection of the orthocenter of the triangle about the circumcenter.
Definition
Let the given triangle have vertices A, B, and C, opposite the respective sides a, b, and c, as is the standard notation in triangle geometry. In the 1886 paper in which he introduced this point, de Longchamps initially defined it as the center of a circle orthogonal to the three circles centered at A, B, and C with radii a, b, and c respectively. De Longchamps then also showed that the same point, now known as the de Longchamps point, may be equivalently defined as the orthocenter of the anticomplementary triangle of ABC, and that it is the reflection of the orthocenter of ABC around the circumcenter.
The Steiner circle of a triangle is concentric with the nine-point circle and has radius 3/2 the circumradius of the triangle; the de Longchamps point is the homothetic center of the Steiner circle and the circumcircle.
Additional properties
As the reflection of the orthocenter around the circumcenter, the de Longchamps point belongs to the line through both of these points, which is the Euler line of the given triangle. Thus, it is collinear with all the other triangle centers on the Euler line, which along with the orthocenter and circumcenter include the centroid and the center of the nine-point circle.
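As a numerical illustration (added here; the coordinate formula for the circumcenter is standard, and the function names are made up), the de Longchamps point L can be computed as the reflection L = 2O − H of the orthocenter H about the circumcenter O, and checked to lie on the Euler line:

```python
def circumcenter(A, B, C):
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def de_longchamps_point(A, B, C):
    ox, oy = circumcenter(A, B, C)
    # With circumcenter O, the orthocenter satisfies H = A + B + C - 2O;
    # the de Longchamps point is the reflection L = 2O - H.
    hx, hy = A[0] + B[0] + C[0] - 2 * ox, A[1] + B[1] + C[1] - 2 * oy
    return (2 * ox - hx, 2 * oy - hy)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
L = de_longchamps_point(A, B, C)
O = circumcenter(A, B, C)
G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)  # centroid, also on the Euler line
# L, O and G should be collinear, so the cross product of (O - L) and (G - L) is ~0.
cross = (O[0] - L[0]) * (G[1] - L[1]) - (O[1] - L[1]) * (G[0] - L[0])
print(L, abs(cross) < 1e-9)   # (3.0, 1.0) True for this triangle
```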
The de Longchamps point is also collinear, along a different line, with the incenter and the Gergonne point of its triangle. The three circles centered at A, B, and C, with radii s − a, s − b, and s − c respectively (where s is the semiperimeter), are mutually tangent, and there are two more circles tangent to all three of them, the inner and outer Soddy circles; the centers of these two circles also lie on the same line with the de Longchamps point and the incenter. The de Longchamps point is the point of concurrence of this line with the Euler line, and with three other lines defined in a similar way as the line through the incenter but using instead the three excenters of the triangle.
The Darboux cubic may be defined from the de Longchamps point, as the locus of points X such that X, the isogonal conjugate of X, and the de Longchamps point are collinear. It is the only cubic curve invariant of a triangle that is both isogonally self-conjugate and centrally symmetric; its center of symmetry is the circumcenter of the triangle. The de Longchamps point itself lies on this curve, as does its reflection, the orthocenter.
References
External links
Triangle centers | De Longchamps point | [
"Physics",
"Mathematics"
] | 604 | [
"Point (geometry)",
"Triangle centers",
"Points defined for a triangle",
"Geometric centers",
"Symmetry"
] |
2,206,712 | https://en.wikipedia.org/wiki/History%20of%20fluid%20mechanics | The history of fluid mechanics is a fundamental strand of the history of physics and engineering. The study of the movement of fluids (liquids and gases) and the forces that act upon them dates back to pre-history. The field has undergone a continuous evolution, driven by human dependence on water, meteorological conditions, and internal biological processes.
The success of early civilizations can be attributed to developments in the understanding of water dynamics, allowing for the construction of canals and aqueducts for water distribution and farm irrigation, as well as maritime transport. Due to its conceptual complexity, most discoveries in this field relied almost entirely on experiments, at least until the development of an advanced understanding of differential equations and computational methods. Significant theoretical contributions were made by notable figures like Archimedes, Johann Bernoulli and his son Daniel Bernoulli, Leonhard Euler, Claude-Louis Navier and George Gabriel Stokes, who developed the fundamental equations used to describe fluid mechanics. Advancements in experimentation and computational methods have further propelled the field, leading to practical applications in industries ranging from aerospace to environmental engineering. Fluid mechanics has also been important for the study of astronomical bodies and the dynamics of galaxies.
Antiquity
Pre-history
A pragmatic, if not scientific, knowledge of fluid flow was exhibited by ancient civilizations, such as in the design of arrows, spears, boats, and particularly hydraulic engineering projects for flood protection, irrigation, drainage, and water supply. The earliest human civilizations began near the shores of rivers, and consequently coincided with the dawn of hydrology, hydraulics, and hydraulic engineering.
Ancient China
Observations of specific gravity and buoyancy were recorded by ancient Chinese philosophers. In the 4th century BCE, Mencius used the comparison of gold and feathers to argue that weight depends on the quantity compared, not just the material, an early appeal to what is now called specific gravity. In the 3rd century CE, the story of Cao Chong weighing an elephant was recorded, in which the elephant's weight is found by observing the displacement of a boat loaded first with the elephant and then with known weights.
Archimedes
The fundamental principles of hydrostatics and dynamics were given by Archimedes in his work On Floating Bodies (), around 250 BC. In it, Archimedes develops the law of buoyancy, also known as Archimedes' principle. This principle states that a body immersed in a fluid experiences a buoyant force equal to the weight of the fluid it displaces. Archimedes maintained that each particle of a fluid mass, when in equilibrium, is equally pressed in every direction; and he inquired into the conditions according to which a solid body floating in a fluid should assume and preserve a position of equilibrium.
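In modern notation (a formulation added here for reference rather than Archimedes' own), the principle can be written as

$$ F_b = \rho_{\mathrm{fluid}}\, g\, V_{\mathrm{displaced}}, $$

where F_b is the buoyant force, ρ_fluid the density of the fluid, g the gravitational acceleration, and V_displaced the volume of fluid displaced by the body.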
The Alexandrian school
In the Greek school at Alexandria, which flourished under the auspices of the Ptolemies, attempts were made at the construction of hydraulic machinery, and about 120 BC the fountain of compression, the siphon, and the forcing-pump were invented by Ctesibius and Hero. The siphon is a simple instrument; but the forcing-pump is a complicated invention, which could scarcely have been expected in the infancy of hydraulics. It was probably suggested to Ctesibius by the Egyptian wheel or Noria, which was common at that time, and which was a kind of chain pump, consisting of a number of earthen pots carried round by a wheel. In some of these machines the pots have a valve in the bottom which enables them to descend without much resistance, and diminishes greatly the load upon the wheel; and, if we suppose that this valve was introduced so early as the time of Ctesibius, it is not difficult to perceive how such a machine might have led to the invention of the forcing-pump.
Sextus Julius Frontinus
Notwithstanding these inventions of the Alexandrian school, its attention does not seem to have been directed to the motion of fluids; and the first attempt to investigate this subject was made by Sextus Julius Frontinus, inspector of the public fountains at Rome in the reigns of Nerva and Trajan. In his work De aquaeductibus urbis Romae commentarius, he considers the methods which were at that time employed for ascertaining the quantity of water discharged from ajutages (tubes), and the mode of distributing the waters of an aqueduct or a fountain. He remarked that the flow of water from an orifice depends not only on the magnitude of the orifice itself, but also on the height of the water in the reservoir; and that a pipe employed to carry off a portion of water from an aqueduct should, as circumstances required, have a position more or less inclined to the original direction of the current. But as he was unacquainted with the law of the velocities of running water as depending upon the depth of the orifice, the want of precision which appears in his results is not surprising.
Middle Ages
Islamicate physicists
Islamicate scientists, particularly Abu Rayhan Biruni (973–1048) and later Al-Khazini (fl. 1115–1130), were the first to apply experimental scientific methods to fluid mechanics, especially in the field of fluid statics, such as for determining specific weights. They applied the mathematical theories of ratios and infinitesimal techniques, and introduced algebraic and fine calculation techniques into the field of fluid statics.
Biruni introduced the method of checking tests during experiments and measured the weights of various liquids. He also recorded the differences in weight between freshwater and saline water, and between hot water and cold water. During his experiments on fluid mechanics, Biruni invented the conical measure, in order to find the ratio between the weight of a substance in air and the weight of water displaced.
Al-Khazini, in The Book of the Balance of Wisdom (1121), invented a hydrostatic balance.
Islamicate engineers
In the 9th century, the Banū Mūsā brothers' Book of Ingenious Devices described a number of early automatic controls in fluid mechanics. Two-step level controls for fluids, an early form of discontinuous variable structure controls, were developed by the Banu Musa brothers. They also described an early feedback controller for fluids. According to Donald Routledge Hill, the Banu Musa brothers were "masters in the exploitation of small variations" in hydrostatic pressures and in using conical valves as "in-line" components in flow systems, "the first known use of conical valves as automatic controllers." They also described the use of other valves, including a plug valve, float valve and tap. The Banu Musa also developed an early fail-safe system where "one can withdraw small quantities of liquid repeatedly, but if one withdraws a large quantity, no further extractions are possible." The double-concentric siphon and the funnel with bent end for pouring in different liquids, neither of which appear in any earlier Greek works, were also original inventions by the Banu Musa brothers. Some of the other mechanisms they described include a float chamber and an early differential-pressure mechanism.
In 1206, Al-Jazari's Book of Knowledge of Ingenious Mechanical Devices described many hydraulic machines. Of particular importance were his water-raising pumps. The first known use of a crankshaft in a chain pump was in one of al-Jazari's saqiya machines. The concept of minimizing intermittent working is also first implied in one of al-Jazari's saqiya chain pumps, which was for the purpose of maximising the efficiency of the saqiya chain pump. Al-Jazari also invented a twin-cylinder reciprocating piston suction pump, which included the first suction pipes, suction pumping, double-action pumping, and made early uses of valves and a crankshaft-connecting rod mechanism. This pump is remarkable for three reasons: the first known use of a true suction pipe (which sucks fluids into a partial vacuum) in a pump, the first application of the double-acting principle, and the conversion of rotary to reciprocating motion, via the crankshaft-connecting rod mechanism.
Sixteenth and seventeenth century
Leonardo da Vinci
During the Renaissance, Leonardo da Vinci was well known for his experimental skills. His notes provide precise depictions of various phenomena, including vessels, jets, hydraulic jumps, eddy formation, tides, as well as designs for both low drag (streamlined) and high drag (parachute) configurations. Da Vinci is also credited for formulating the conservation of mass in one-dimensional steady flow.
Simon Stevin
In 1586, the Flemish engineer and mathematician Simon Stevin published De Beghinselen des Waterwichts (Principles on the Weight of Water), a study of hydrostatics that, among other things, extensively discussed the hydrostatic paradox.
Castelli and Torricelli
Benedetto Castelli, and Evangelista Torricelli, two of the disciples of Galileo, applied the discoveries of their master to the science of hydrodynamics. In 1628 Castelli published a small work, Della misura dell' acque correnti, in which he satisfactorily explained several phenomena in the motion of fluids in rivers and canals; but he committed a great paralogism in supposing the velocity of the water proportional to the depth of the orifice below the surface of the vessel. Torricelli, observing that in a jet where the water rushed through a small ajutage it rose to nearly the same height with the reservoir from which it was supplied, imagined that it ought to move with the same velocity as if it had fallen through that height by the force of gravity, and hence he deduced the proposition that the velocities of liquids are as the square root of the head, apart from the resistance of the air and the friction of the orifice. This theorem was published in 1643, at the end of his treatise De motu gravium projectorum, and it was confirmed by the experiments of Raffaello Magiotti on the quantities of water discharged from different ajutages under different pressures (1648).
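In modern notation (added here for clarity; Torricelli stated the result geometrically), the theorem reads

$$ v = \sqrt{2 g h}, $$

where v is the efflux velocity and h the height of the water surface above the orifice, i.e. the speed a body would acquire in free fall through the head h, neglecting air resistance and orifice friction.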
Blaise Pascal
In the hands of Blaise Pascal hydrostatics assumed the dignity of a science, and in a treatise on the equilibrium of liquids (Sur l’équilibre des liqueurs), found among his manuscripts after his death and published in 1663, the laws of the equilibrium of liquids were demonstrated in the most simple manner, and amply confirmed by experiments.
Mariotte and Guglielmini
The theorem of Torricelli was employed by many succeeding writers, but particularly by Edme Mariotte (1620–1684), whose Traité du mouvement des eaux, published after his death in the year 1686, is founded on a great variety of well-conducted experiments on the motion of fluids, performed at Versailles and Chantilly. In the discussion of some points he committed considerable mistakes. Others he treated very superficially, and in none of his experiments apparently did he attend to the diminution of efflux arising from the contraction of the liquid vein, when the orifice is merely a perforation in a thin plate; but he appears to have been the first who attempted to ascribe the discrepancy between theory and experiment to the retardation of the water's velocity through friction. His contemporary Domenico Guglielmini (1655–1710), who was inspector of the rivers and canals at Bologna, had ascribed this diminution of velocity in rivers to transverse motions arising from inequalities in their bottom. But as Mariotte observed similar obstructions even in glass pipes where no transverse currents could exist, the cause assigned by Guglielmini seemed destitute of foundation. The French philosopher, therefore, regarded these obstructions as the effects of friction. He supposed that the filaments of water which graze along the sides of the pipe lose a portion of their velocity; that the contiguous filaments, having on this account a greater velocity, rub upon the former, and suffer a diminution of their celerity; and that the other filaments are affected with similar retardations proportional to their distance from the axis of the pipe. In this way the medium velocity of the current may be diminished, and consequently the quantity of water discharged in a given time must, from the effects of friction, be considerably less than that which is computed from theory.
Eighteenth century
Studies by Isaac Newton
Friction and viscosity
The effects of friction and viscosity in diminishing the velocity of running water were noticed in the Principia of Sir Isaac Newton, who threw much light upon several branches of hydromechanics. At a time when the Cartesian system of vortices universally prevailed, he found it necessary to investigate that hypothesis, and in the course of his investigations he showed that the velocity of any stratum of the vortex is an arithmetical mean between the velocities of the strata which enclose it; and from this it evidently follows that the velocity of a filament of water moving in a pipe is an arithmetical mean between the velocities of the filaments which surround it. Taking advantage of these results, French engineer Henri Pitot afterwards showed that the retardations arising from friction are inversely as the diameters of the pipes in which the fluid moves.
Orifices
The attention of Newton was also directed to the discharge of water from orifices in the bottom of vessels. He supposed a cylindrical vessel full of water to be perforated in its bottom with a small hole by which the water escaped, and the vessel to be supplied with water in such a manner that it always remained full at the same height. He then supposed this cylindrical column of water to be divided into two parts – the first, which he called the "cataract," being an hyperboloid generated by the revolution of an hyperbola of the fifth degree around the axis of the cylinder which should pass through the orifice, and the second the remainder of the water in the cylindrical vessel. He considered the horizontal strata of this hyperboloid as always in motion, while the remainder of the water was in a state of rest, and imagined that there was a kind of cataract in the middle of the fluid.
When the results of this theory were compared with the quantity of water actually discharged, Newton concluded that the velocity with which the water issued from the orifice was equal to that which a falling body would receive by descending through half the height of water in the reservoir. This conclusion, however, is absolutely irreconcilable with the known fact that jets of water rise nearly to the same height as their reservoirs, and Newton seems to have been aware of this objection. Accordingly, in the second edition of his Principia, which appeared in 1713, he reconsidered his theory. He had discovered a contraction in the vein of fluid (vena contracta) which issued from the orifice, and found that, at the distance of about a diameter of the aperture, the section of the vein was contracted in the subduplicate ratio of two to one. He regarded, therefore, the section of the contracted vein as the true orifice from which the discharge of water ought to be deduced, and the velocity of the effluent water as due to the whole height of water in the reservoir; and by this means his theory became more conformable to the results of experience, though still open to serious objections.
Waves
Newton was also the first to investigate the difficult subject of the motion of waves.
Daniel Bernoulli
In 1738 Daniel Bernoulli published his Hydrodynamica seu de viribus et motibus fluidorum commentarii. His theory of the motion of fluids, the germ of which was first published in his memoir entitled Theoria nova de motu aquarum per canales quocunque fluentes, communicated to the academy of St Petersburg as early as 1726, was founded on two suppositions, which appeared to him conformable to experience. He supposed that the surface of the fluid, contained in a vessel which is emptying itself by an orifice, remains always horizontal; and, if the fluid mass is conceived to be divided into an infinite number of horizontal strata of the same bulk, that these strata remain contiguous to each other, and that all their points descend vertically, with velocities inversely proportional to their breadth, or to the horizontal sections of the reservoir. In order to determine the motion of each stratum, he employed the principle of the conservatio virium vivarum, and obtained very elegant solutions. But in the absence of a general demonstration of that principle, his results did not command the confidence which they would otherwise have deserved, and it became desirable to have a theory more certain, and depending solely on the fundamental laws of mechanics. Colin Maclaurin and John Bernoulli, who were of this opinion, resolved the problem by more direct methods, the one in his Fluxions, published in 1742, and the other in his Hydraulica nunc primum detecta, et demonstrata directe ex fundamentis pure mechanicis, which forms the fourth volume of his works. The method employed by Maclaurin has been thought not sufficiently rigorous; and that of John Bernoulli is, in the opinion of Lagrange, defective in clearness and precision.
Jean le Rond d'Alembert
The theory of Daniel Bernoulli was opposed also by Jean le Rond d'Alembert. When generalizing the theory of pendulums of Jacob Bernoulli he discovered a principle of dynamics so simple and general that it reduced the laws of the motions of bodies to that of their equilibrium. He applied this principle to the motion of fluids, and gave a specimen of its application at the end of his Dynamics in 1743. It was more fully developed in his Traité des fluides, published in 1744, in which he gave simple and elegant solutions of problems relating to the equilibrium and motion of fluids. He made use of the same suppositions as Daniel Bernoulli, though his calculus was established in a very different manner. He considered, at every instant, the actual motion of a stratum as composed of a motion which it had in the preceding instant and of a motion which it had lost; and the laws of equilibrium between the motions lost furnished him with equations representing the motion of the fluid. It remained a desideratum to express by equations the motion of a particle of the fluid in any assigned direction. These equations were found by d'Alembert from two principles – that a rectangular canal, taken in a mass of fluid in equilibrium, is itself in equilibrium, and that a portion of the fluid, in passing from one place to another, preserves the same volume when the fluid is incompressible, or dilates itself according to a given law when the fluid is elastic. His ingenious method, published in 1752, in his Essai sur la résistance des fluides, was brought to perfection in his Opuscules mathématiques, and was adopted by Leonhard Euler.
Leonhard Euler
The resolution of the questions concerning the motion of fluids was effected by means of Leonhard Euler's partial differential coefficients. This calculus was first applied to the motion of water by d'Alembert, and enabled both him and Euler to represent the theory of fluids in formulae restricted by no particular hypothesis.
Pierre Louis Georges Dubuat
One of the most successful labourers in the science of hydrodynamics at this period was Pierre-Louis-Georges du Buat. Following in the steps of the Abbé Charles Bossut (Nouvelles Experiences sur la résistance des fluides, 1777), he published, in 1786, a revised edition of his Principes d'hydraulique, which contains a satisfactory theory of the motion of fluids, founded solely upon experiments. Dubuat considered that if water were a perfect fluid, and the channels in which it flowed infinitely smooth, its motion would be continually accelerated, like that of bodies descending in an inclined plane. But as the motion of rivers is not continually accelerated, and soon arrives at a state of uniformity, it is evident that the viscosity of the water, and the friction of the channel in which it descends, must equal the accelerating force. Dubuat, therefore, assumed it as a proposition of fundamental importance that, when water flows in any channel or bed, the accelerating force which obliges it to move is equal to the sum of all the resistances which it meets with, whether they arise from its own viscosity or from the friction of its bed. This principle was employed by him in the first edition of his work, which appeared in 1779. The theory contained in that edition was founded on the experiments of others, but he soon saw that a theory so new, and leading to results so different from the ordinary theory, should be founded on new experiments more direct than the former, and he was employed in the performance of these from 1780 to 1783. The experiments of Bossut were made only on pipes of a moderate declivity, but Dubuat used declivities of every kind, and made his experiments upon channels of various sizes.
Nineteenth century
Claude-Louis Navier and George Gabriel Stokes
Building on Euler's equations for an ideal fluid, Claude-Louis Navier (in 1822) and George Gabriel Stokes (in 1845) incorporated the effects of viscous friction into the equations of fluid motion, producing what are now known as the Navier–Stokes equations, the fundamental description of viscous flow.
Hermann von Helmholtz
In 1858 Hermann von Helmholtz published his seminal paper "Über Integrale der hydrodynamischen Gleichungen, welche den Wirbelbewegungen entsprechen," in Journal für die reine und angewandte Mathematik, vol. 55, pp. 25–55. So important was the paper that a few years later P. G. Tait published an English translation, "On integrals of the hydrodynamical equations which express vortex motion", in Philosophical Magazine, vol. 33, pp. 485–512 (1867). In his paper Helmholtz established his three "laws of vortex motion" in much the same way one finds them in any advanced textbook of fluid mechanics today. This work established the significance of vorticity to fluid mechanics and science in general.
For the next century or so vortex dynamics matured as a subfield of fluid mechanics, always commanding at least a major chapter in treatises on the subject. Thus, H. Lamb's well known Hydrodynamics (6th ed., 1932) devotes a full chapter to vorticity and vortex dynamics as does G. K. Batchelor's Introduction to Fluid Dynamics (1967). In due course entire treatises were devoted to vortex motion. H. Poincaré's Théorie des Tourbillons (1893), H. Villat's Leçons sur la Théorie des Tourbillons (1930), C. Truesdell's The Kinematics of Vorticity (1954), and P. G. Saffman's Vortex Dynamics (1992) may be mentioned. Early on individual sessions at scientific conferences were devoted to vortices, vortex motion, vortex dynamics and vortex flows. Later, entire meetings were devoted to the subject.
The range of applicability of Helmholtz's work grew to encompass atmospheric and oceanographic flows, to all branches of engineering and applied science and, ultimately, to superfluids (today including Bose–Einstein condensates). In modern fluid mechanics the role of vortex dynamics in explaining flow phenomena is firmly established. Well known vortices have acquired names and are regularly depicted in the popular media: hurricanes, tornadoes, waterspouts, aircraft trailing vortices (e.g., wingtip vortices), drainhole vortices (including the bathtub vortex), smoke rings, underwater bubble air rings, cavitation vortices behind ship propellers, and so on. In the technical literature a number of vortices that arise under special conditions also have names: the Kármán vortex street wake behind a bluff body, Taylor vortices between rotating cylinders, Görtler vortices in flow along a curved wall, etc.
Gaspard Riche de Prony
The theory of running water was greatly advanced by the researches of Gaspard Riche de Prony (1755–1839). From a collection of the best experiments by previous workers he selected eighty-two (fifty-one on the velocity of water in conduit pipes, and thirty-one on its velocity in open canals); and, discussing these on physical and mechanical principles, he succeeded in drawing up general formulae, which afforded a simple expression for the velocity of running water.
Johann Albert Eytelwein
J. A. Eytelwein of Berlin, who published in 1801 a valuable compendium of hydraulics entitled Handbuch der Mechanik und der Hydraulik, investigated the subject of the discharge of water by compound pipes, the motions of jets and their impulses against plane and oblique surfaces; and he showed theoretically that a water-wheel will have its maximum effect when its circumference moves with half the velocity of the stream.
Jean Nicolas Pierre Hachette and others
JNP Hachette in 1816–1817 published memoirs containing the results of experiments on the spouting of fluids and the discharge of vessels. His object was to measure the contracted part of a fluid vein, to examine the phenomena attendant on additional tubes, and to investigate the form of the fluid vein and the results obtained when different forms of orifices are employed. Extensive experiments on the discharge of water from orifices (Expériences hydrauliques, Paris, 1832) were conducted under the direction of the French government by J. V. Poncelet (1788–1867) and J. A. Lesbros (1790–1860).
P. P. Boileau (1811–1891) discussed their results and added experiments of his own (Traité de la mesure des eaux courantes, Paris, 1854). K. R. Bornemann re-examined all these results with great care, and gave formulae expressing the variation of the coefficients of discharge in different conditions (Civil Ingénieur, 1880). Julius Weisbach (1806–1871) also made many experimental investigations on the discharge of fluids.
The experiments of J. B. Francis (Lowell Hydraulic Experiments, Boston, Mass., 1855) led him to propose variations in the accepted formulae for the discharge over weirs, and a generation later a very complete investigation of this subject was carried out by Henri-Émile Bazin. An elaborate inquiry on the flow of water in pipes and channels was conducted by Henry G. P. Darcy (1803–1858) and continued by Bazin, at the expense of the French government (Recherches hydrauliques, Paris, 1866).
Andreas Rudolf Harlacher and others
German engineers have also devoted special attention to the measurement of the flow in rivers; the Beiträge zur Hydrographie des Königreiches Böhmen (Prague, 1872–1875) of Andreas Rudolf Harlacher contained valuable measurements of this kind, together with a comparison of the experimental results with the formulae of flow that had been proposed up to the date of its publication, and important data were yielded by the gaugings of the Mississippi made for the United States government by Andrew Atkinson Humphreys and Henry Larcom Abbot, by Robert Gordon's gaugings of the Irrawaddy River, and by Allen J. C. Cunningham's experiments on the Ganges canal. The friction of water, investigated for slow speeds by Coulomb, was measured for higher speeds by William Froude (1810–1879), whose work is of great value in the theory of ship resistance (Brit. Assoc. Report., 1869), and stream line motion was studied by Professor Osborne Reynolds and by Professor Henry S. Hele-Shaw.
Twentieth century
Ludwig Prandtl
In 1904, German scientist Ludwig Prandtl pioneered boundary layer theory. He pointed out that fluids with small viscosity can be divided into a thin viscous layer (boundary layer) near solid surfaces and interfaces, and an outer layer where Bernoulli's principle and Euler equations apply.
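For reference (a standard modern statement rather than Prandtl's wording), Bernoulli's principle for the outer, effectively inviscid region can be written, along a streamline of a steady incompressible flow, as

$$ p + \tfrac{1}{2}\rho v^2 + \rho g h = \mathrm{constant}, $$

where p is the pressure, ρ the density, v the flow speed, and h the elevation.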
Developments in vortex dynamics
Vortex dynamics is a vibrant subfield of fluid dynamics, commanding attention at major scientific conferences and precipitating workshops and symposia that focus fully on the subject.
A curious diversion in the history of vortex dynamics was the vortex theory of the atom of William Thomson, later Lord Kelvin. His basic idea was that atoms were to be represented as vortex motions in the ether. This theory predated the quantum theory by several decades and, because of the scientific standing of its originator, received considerable attention. Many profound insights into vortex dynamics were generated during the pursuit of this theory. Other interesting corollaries were the first counting of simple knots by P. G. Tait, today considered a pioneering effort in graph theory, topology and knot theory. Ultimately, Kelvin's vortex atom was seen to be wrong-headed, but the many results in vortex dynamics that it precipitated have stood the test of time. Kelvin himself originated the notion of circulation and proved that in an inviscid fluid circulation around a material contour would be conserved. This result, singled out by Einstein in "Zum hundertjährigen Gedenktag von Lord Kelvins Geburt" (Naturwissenschaften, 12 (1924), 601–602; title translation: "On the 100th Anniversary of Lord Kelvin's Birth") as one of the most significant results of Kelvin's work, provided an early link between fluid dynamics and topology.
The history of vortex dynamics seems particularly rich in discoveries and re-discoveries of important results, because results obtained were entirely forgotten after their discovery and then were re-discovered decades later. Thus, the integrability of the problem of three point vortices on the plane was solved in the 1877 thesis of a young Swiss applied mathematician named Walter Gröbli. In spite of having been written in Göttingen in the general circle of scientists surrounding Helmholtz and Kirchhoff, and in spite of having been mentioned in Kirchhoff's well known lectures on theoretical physics and in other major texts such as Lamb's Hydrodynamics, this solution was largely forgotten. A 1949 paper by the noted applied mathematician J. L. Synge created a brief revival, but Synge's paper was in turn forgotten. A quarter century later a 1975 paper by E. A. Novikov and a 1979 paper by H. Aref on chaotic advection finally brought this important earlier work to light. The subsequent elucidation of chaos in the four-vortex problem, and in the advection of a passive particle by three vortices, made Gröbli's work part of "modern science".
Another example of this kind is the so-called "localized induction approximation" (LIA) for three-dimensional vortex filament motion, which gained favor in the mid-1960s through the work of Arms, Hama, Betchov and others, but turns out to date from the early years of the 20th century in the work of Da Rios, a gifted student of the noted Italian mathematician T. Levi-Civita. Da Rios published his results in several forms but they were never assimilated into the fluid mechanics literature of his time. In 1972 H. Hasimoto used Da Rios' "intrinsic equations" (later re-discovered independently by R. Betchov) to show how the motion of a vortex filament under LIA could be related to the non-linear Schrödinger equation. This immediately made the problem part of "modern science" since it was then realized that vortex filaments can support solitary twist waves of large amplitude.
See also
Timeline of fluid and continuum mechanics
Further reading
J. D. Anderson Jr. (1997). A History of Aerodynamics (Cambridge University Press).
J. D. Anderson Jr. (1998). Some Reflections on the History of Fluid Dynamics, in The Handbook of Fluid Dynamics (ed. by R.W. Johnson, CRC Press) Ch. 2.
J. S. Calero (2008). The Genesis of Fluid Mechanics, 1640–1780 (Springer).
O. Darrigol (2005). Worlds of Flow: A History of Hydrodynamics from the Bernoullis to Prandtl (Oxford University Press).
P. A. Davidson, Y. Kaneda, K. Moffatt, and K. R. Sreenivasan (eds, 2011). A Voyage Through Turbulence (Cambridge University Press).
M. Eckert (2006). The Dawn of Fluid Dynamics: A Discipline Between Science and Technology (Wiley-VCH).
G. Garbrecht (ed., 1987). Hydraulics and Hydraulic Research: A Historical Review (A.A. Balkema).
M. J. Lighthill (1995). Fluid mechanics, in Twentieth Century Physics ed. by L.M. Brown, A. Pais, and B. Pippard (IOP/AIP), Vol. 2, pp. 795–912.
H. Rouse and S. Ince (1957). History of Hydraulics (Iowa Institute of Hydraulic Research, State University of Iowa).
G. A. Tokaty (1994). A History and Philosophy of Fluid Mechanics (Dover).
References
Fluid dynamics
Fluid mechanics
History of physics | History of fluid mechanics | [
"Chemistry",
"Engineering"
] | 6,857 | [
"Chemical engineering",
"Civil engineering",
"Piping",
"Fluid mechanics",
"Fluid dynamics"
] |
2,206,783 | https://en.wikipedia.org/wiki/Direct%20fluorescent%20antibody | A direct fluorescent antibody (DFA or dFA), also known as "direct immunofluorescence", is an antibody that has been tagged in a direct fluorescent antibody test. Its name derives from the fact that it directly tests the presence of an antigen with the tagged antibody, unlike western blotting, which uses an indirect method of detection, where the primary antibody binds the target antigen, with a secondary antibody directed against the primary, and a tag attached to the secondary antibody.
Commercial DFA testing kits are available, which contain fluorescently labelled antibodies, designed to specifically target unique antigens present in the bacteria or virus, but not present in mammals (Eukaryotes). This technique can be used to quickly determine if a subject has a specific viral or bacterial infection.
In the case of respiratory viruses, many of which cause similar broad symptoms, detection can be carried out using nasal wash samples from the subject with the suspected infection. Although shed cells from the respiratory tract can be obtained, they are often present in low numbers, so an alternative method can be adopted in which a compatible cell culture is exposed to the infected nasal wash sample; if the virus is present, it can be grown up to a larger quantity, which then gives a clearer positive or negative reading.
As with all types of fluorescence microscopy, the correct absorption wavelength needs to be determined in order to excite the fluorophore tag attached to the antibody, and detect the fluorescence given off, which indicates which cells are positive for the presence of the virus or bacteria being detected.
Direct immunofluorescence can be used to detect deposits of immunoglobulins and complement proteins in biopsies of skin, kidney and other organs. Their presence is indicative of an autoimmune disease. When skin not exposed to the sun is tested, a positive direct IF (the so-called lupus band test) is evidence of systemic lupus erythematosus. Direct fluorescent antibody testing can also be used to detect parasitic infections, as was pioneered by Sadun et al. (1960).
See also
Immunofluorescence
References
External links
Laboratory techniques
Clinical pathology
Immunologic tests
Reagents for biochemistry | Direct fluorescent antibody | [
"Chemistry",
"Biology"
] | 451 | [
"Biochemistry methods",
"Immunologic tests",
"nan",
"Biochemistry",
"Reagents for biochemistry"
] |
2,206,793 | https://en.wikipedia.org/wiki/Immunomagnetic%20separation | Immunomagnetic separation (IMS) is a laboratory tool that can efficiently isolate cells out of body fluid or cultured cells. It can also be used as a method of quantifying pathogens in food, blood or feces. DNA analysis has supported the combined use of this technique and the polymerase chain reaction (PCR). Another laboratory separation tool is affinity magnetic separation (AMS), which is more suitable for the isolation of prokaryotic cells.
IMS isolates cells, proteins, and nucleic acids by specifically capturing biomolecules with small magnetized particles (beads) that carry antibodies or lectins. The beads are coated so that they bind the targeted biomolecules; after gentle separation and multiple washing cycles, the target molecules remain bound to the superparamagnetic beads, which can be differentiated based on the strength of the magnetic field and the targeted molecules. The bound material is then eluted, the supernatant is collected, and the concentration of the specifically targeted biomolecules can be determined. In this way IMS recovers defined concentrations of specific molecules from targeted bacteria.
A mixed cell population is placed in a magnetic field after the cells have been attached to superparamagnetic beads (a specific example is Dynabeads, 4.5 μm); once the excess substrate is removed, only the cells bound to the targeted antigen remain. Dynabeads consist of iron-containing cores covered by a thin polymer shell that allows the adsorption of biomolecules. The beads are coated with primary antibodies, species-specific antibodies, lectins, enzymes, or streptavidin; the linkage between the coated magnetized beads and the cells may include a cleavable DNA linker, which allows the cells to be released from the beads when further culturing of the cells is desired.
Many of these beads follow the same principles of separation; however, different magnetic field strengths require certain bead sizes, depending on the requirements of the separation of the cell population. Larger beads (>2 μm) are the most commonly used range, produced by Dynal (Dynal [UK] Ltd., Wirral, Merseyside, UK; Dynal, Inc., Lake Success, NY), whereas smaller beads (<100 nm) are mostly used in the MACS system produced by Miltenyi Biotech (Miltenyi Biotech Ltd., Bisley, Surrey, UK; Miltenyi Biotech Inc., Auburn, CA).
Immunomagnetic separation is used in a variety of scientific fields including molecular biology, microbiology, and immunology. (3) The technique is not limited to separating cells from blood; it is also used to isolate cells from primary tumors and in metastasis research, by dissociating tissue into its component parts to create a single-cell suspension and then allowing a suitable antibody to label the cells. In metastasis research this separation technique may be necessary to isolate tumor cells from a mixed cell population in tumors, peripheral blood, and bone marrow.
Technique
Antibodies coating paramagnetic beads bind to antigens present on the surface of cells, thereby capturing the cells and facilitating their concentration. The concentration step is carried out by placing a magnet against the side of the test tube, which draws the beads towards it.
MACS (magnetic-activated cell sorting) systems:
The MACS approach uses smaller superparamagnetic beads (<100 nm), which require a stronger magnetic field to separate cells. Cells are labeled with primary antibodies, and the MACS beads are coated with species-specific secondary antibodies. The labeled cell suspension is then passed through a separation column in a strong magnetic field: the labeled cells are retained, magnetized, while in the magnetic field, and the unlabeled, un-magnetized cells flow through and are collected. Once the column is removed from the magnetic field, the positive cells are eluted. The MACS beads may remain on or be taken up by the cells, since they do not interfere with attachment of the cells to a culture surface or with cell-cell interactions. A bead removal reagent can then be applied to enzymatically release the MACS beads, allowing the cells to be relabeled with another marker and sorted.
References
Laboratory techniques
Molecular biology | Immunomagnetic separation | [
"Chemistry",
"Biology"
] | 909 | [
"Biochemistry",
"nan",
"Molecular biology"
] |
2,206,957 | https://en.wikipedia.org/wiki/Rim%20%28crater%29 | The rim or edge of an impact crater is the part that extends above the height of the local surface, usually in a circular or elliptical pattern. In a more specific sense, the rim may refer to the circular or elliptical edge that represents the uppermost tip of this raised portion. If there is no raised portion, the rim simply refers to the inside edge of the curve where the flat surface meets the curve of the crater bottom.
Simple craters
Smaller, simple craters retain rim geometries similar to the features of many craters found on the Moon and the planet Mercury.
Complex craters
Large craters are those with a diameter greater than 2.3 km, and are distinguished by central uplifts within the impact zone. These larger (also called “complex”) craters can form rims up to several hundred meters in height.
A process to consider when determining the exact height of a crater rim is that melt may have been pushed over the crest of the initial rim from the initial impact, thereby increasing its overall height. When combined with potential weathering due to atmospheric erosion over time, determining the average height of a crater rim can be somewhat difficult. It has also been observed that the slope along the excavated interior of many craters can facilitate a spur-and-gully morphology, including mass wasting events occurring due to slope instability and nearby seismic activity.
Complex crater rims observed on Earth have a height-to-diameter ratio between five and eight times greater than those observed on the Moon, which can likely be attributed to the difference in gravitational acceleration between the two planetary bodies. Additionally, crater depth and the volume of melt produced in the impact are directly related to the gravitational acceleration of the impacted body. It has been proposed that "reverse faulting and thrusting at the final crater rim [is] one of the main contributing factors [to] forming the elevated crater rim". When an impact crater is formed on a sloped surface, the rim will form an asymmetric profile. As the impacted surface's angle of repose increases, the crater's profile becomes more elongate.
Classification
The rim type classifications are full-rim craters, broken-rim craters, and depressions.
References
Impact geology
Impact craters | Rim (crater) | [
"Astronomy"
] | 450 | [
"Astronomical objects",
"Impact craters"
] |
2,206,977 | https://en.wikipedia.org/wiki/Jean%20Baptiste%20Julien%20d%27Omalius%20d%27Halloy | Jean Baptiste Julien d'Omalius d'Halloy (17 February 1783 in Liège – 15 January 1875 in Brussels) was a Belgian statesman and geologist. He was the first to define the Cretaceous as a distinct geological period, in 1822. He produced the first geological map of France, the Benelux, the Rhineland and Switzerland, completed in 1813 and published in 1822. Halloysite, a clay mineral, was named in his honour. He also wrote on races.
He was a member of the Royal Academy of Belgium (elected on July 3, 1816 and president in 1850, 1858 and 1872), president of the Geological Society of France (1852) and corresponding member of the French Academy of Sciences (1842). He was made a foreign member of the Royal Society in 1873.
Halloy was governor of the province of Namur during the period of the United Kingdom of the Netherlands (1815-1830). He was elected to the Belgian Senate in 1848 and became its vice-president three years later (1851), a position he held until 1870, making him the longest-serving vice-president of the Senate in Belgian history.
He had two daughters. On 27 February 1838 his daughter Sophie married Baron Edmond de Selys Longchamps, vice-president of the Senate of Belgium, renowned entomologist, and president of the Royal Society of Sciences of Liège.
Early life and education
Born in Liège, he was the only son of an ancient and noble family, and his education was carefully directed. After completing his classical studies in his home town he was sent to Paris in 1801 by his parents to avail himself of the social and literary advantages of the metropolis. A lively interest, however, in geology awakened by the works of Buffon, directed his steps to the museums and the Jardin des Plantes.
He visited Paris again in 1803 and 1805, and during these periods attended the lectures of Fourcroy, Lacépède, and Georges Cuvier. His homeward journeys were usually made the occasion of a geological expedition through northern France. As early as 1808 he communicated to the Journal des Mines a paper entitled Essai sur la géologie du Nord de la France. He thus conceived the project of making a series of surveys throughout the whole country. This was furthered by a commission to execute a geological map of the empire which brought with it exemption from military duty.
Work
Halloy was one of the pioneers of modern geology, and in particular laid the foundation of geological knowledge over wide areas. He made important studies in the Carboniferous districts of Belgium and the Rhine provinces and in the Tertiary deposits of the Paris basin.
He devoted himself energetically to the work and by 1813 had traversed over 15,500 miles across France, Belgium, the Netherlands and portions of Germany, Switzerland and Italy. His family had, however, but little sympathy with his geological activity, and persuaded him to give up his expeditions. The map which he had made of France and the neighbouring territories was not published until 1822 and served as a basis for the more detailed surveys of Armand Dufrénoy and Elie de Beaumont.
In 1830, he sided with Étienne Geoffroy Saint-Hilaire against Georges Cuvier. Until 1841, there were no other geological maps than those drawn by Omalius for France, and it was only at this time that Ami Boué published a geological map including the western part of Europe.
Halloy was a practicing Catholic during his long and active life, and was characterized by his loyalty and devotion to the Church. He insisted on the harmony between faith and science, making this the subject of his oration on the occasion of the golden jubilee of the Belgian Academy in 1866.
Descent with modification
In the third edition of On the Origin of Species published in 1861, Charles Darwin added a Historical Sketch giving due credit to naturalists who had preceded him in publishing the opinion that species undergo modification, and that the existing forms of life have descended by true generation from pre-existing forms. This included d'Halloy –
Belgian Academy of Sciences
He was an active member of the Belgian Academy of Sciences from 1816, and served three times as president. He was likewise president of the Geological Society of France in 1852. He studied also in detail the Tertiary deposits of the Paris Basin, and ascertained the extent of the Cretaceous and some of the older strata, which he for the first time clearly depicted on a map (1817). He was distinguished as an ethnologist, and when nearly ninety years of age he was chosen president of the Congress of Pre-historic Archaeology (Brussels, 1872).
In 1816 he was elected first class corresponding member living abroad of the Royal Institute of the Netherlands. When the Institute became the Royal Netherlands Academy of Arts and Sciences he joined as foreign member in 1851.
Scientific publications
1808 - Essai sur la géologie du nord de la France
1823 - Geological map of France drawn up on order from the government of Napoléon I. Ready in 1813, it was not published until 1822.
1828 - Description géologique des Pays-Bas
1831 - Eléments de Géologie
1833 - Introduction à la Géologie
1842 - Coup d'œil sur la géologie de la Belgique
1843 - Précis élémentaire de Géologie
1845 - Des Races humaines ou Eléments d'Ethnographie : un Manuel pratique d'ethnographie ou description des races humaines. Les différents peuples, leurs caractères sociaux, divisions et subdivisions des différentes races humaines.
1853 - Abrégé de Géologie
1860 - Minéralogie, A. Jamar (Bruxelles). Online text available on IRIS
1874 - Le transformisme, La Revue scientifique, 31 janvier 1874
As well as numerous memoirs and notes in: the Journal de physique, de chimie et d'histoire naturelle, the Annales des mines de France, the bulletins of the Société d'anthropologie de Paris, the bulletins of the Société géologique de France and those of the Royal Academy of Science, Letters and Fine Arts of Belgium.
In his book Des Races humaines ou Eléments d'Ethnographie, Halloy established a racial classification according to skin colour.
Statesman
After having served as sous-intendant of the arrondissement of Dinant (1814) and general secretary of the province of Liège (1815), he became in 1815 governor of Namur. He held this office until after the Revolution of 1830. He was elected a member of the Belgian Senate in 1848, became its vice-president in 1851, was made a member of the Academy of Brussels in 1816, and was elected its president in 1850.
As a statesman Halloy had at heart the well-being of the people and, though his duties allowed him little opportunity for extended geological research, he retained a lively interest in his favourite science and engaged occasionally in field work. In his later years he gave much attention to questions of ethnology and philosophy. His death was hastened by the exertions of a scientific expedition undertaken alone in his ninety-first year. He died in Brussels on 15 January 1875.
References
1783 births
1875 deaths
19th-century Roman Catholics
Scientists from Liège
Proto-evolutionary biologists
Belgian geologists
Belgian Roman Catholic writers
Proponents of scientific racism
Foreign members of the Royal Society
Members of the French Academy of Sciences
Members of the Royal Netherlands Academy of Arts and Sciences | Jean Baptiste Julien d'Omalius d'Halloy | [
"Biology"
] | 1,517 | [
"Non-Darwinian evolution",
"Biology theories",
"Proto-evolutionary biologists"
] |
2,207,479 | https://en.wikipedia.org/wiki/Rainbow%20Series | The Rainbow Series (sometimes known as the Rainbow Books) is a series of computer security standards and guidelines published by the United States government in the 1980s and 1990s. They were originally published by the U.S. Department of Defense Computer Security Center, and then by the National Computer Security Center.
Objective
These standards describe a process of evaluation for trusted systems. In some cases, U.S. government entities (as well as private firms) would require formal validation of computer technology using this process as part of their procurement criteria. Many of these standards have influenced, and have been superseded by, the Common Criteria.
The books have nicknames based on the color of their covers. For example, the Trusted Computer System Evaluation Criteria was referred to as "The Orange Book." In the book entitled Applied Cryptography, security expert Bruce Schneier states of NCSC-TG-021 that he "can't even begin to describe the color of [the] cover" and that some of the books in this series have "hideously colored covers." He then goes on to describe how to receive a copy of them, saying "Don't tell them I sent you."
Most significant Rainbow Series books
References
External links
Rainbow Series from Federation of American Scientists, with more explanation
Rainbow Series from Archive of Information Assurance
Computer security standards | Rainbow Series | [
"Technology",
"Engineering"
] | 270 | [
"Computer security standards",
"Computer standards",
"Cybersecurity engineering"
] |
2,207,781 | https://en.wikipedia.org/wiki/Cross-tolerance | Cross-tolerance is a phenomenon that occurs when tolerance to the effects of a certain drug produces tolerance to another drug. It often happens between two drugs with similar functions or effects—for example, acting on the same cell receptor or affecting the transmission of certain neurotransmitters. Cross-tolerance has been observed with pharmaceutical drugs such as anti-anxiety agents and illicit substances, and sometimes the two of them together. Often, a person who uses one drug can be tolerant to a drug that has a completely different function. This phenomenon allows one to become tolerant to a drug that they have never used before.
Drug classifications and cross-tolerance
Anxiolytics and sedatives
Excitation of the GABA receptor produces an influx of negatively charged chloride ions, which hyperpolarizes the neuron and makes it less likely to give rise to an action potential. In addition to gamma-Aminobutyric acid (GABA) itself, the GABAA receptor can also bind barbiturates and benzodiazepines. Benzodiazepine binding increases the binding of GABA and barbiturates maximize the time the pore is open. Both of these mechanisms allow for influx of chloride ions. When these drugs are taken together, especially with ethanol (drinking alcohol), there is a disproportionate increase in toxicity because the effects of both occur simultaneously and add up since they act on the same receptor at different sites. Convergence upon the GABAA receptor is why tolerance for one drug in the group will most likely cause cross-tolerance for the other drugs in the group. However, the barbiturates are also AMPA receptor blockers, and in addition interact with the nAChR and voltage-gated calcium channels. As a result, somebody who is tolerant to benzodiazepines is more sensitive to barbiturates than vice versa.
Antipsychotics
These drugs block dopamine receptors and some also block serotonin receptors (such as chlorpromazine, the first antipsychotic used clinically). Having been on one or more antipsychotics for any appreciable amount of time results in dramatically reduced sensitivity to others with similar mechanisms of action. However, an antipsychotic with a substantial disparity in pharmacology (e.g. haloperidol and quetiapine) may retain significant efficacy.
Antidepressants and mood stabilizers
MAO inhibitor drugs block an enzyme system, resulting in increased stores of monoamine neurotransmitters. More common antidepressants such as tricyclic antidepressants and SSRIs block reuptake transporters, causing increased levels of norepinephrine or serotonin in synapses. Mood stabilizers, which include lithium and many anticonvulsants such as carbamazepine and lamotrigine, are also used for mood disorders; these demonstrate little to no cross-tolerance with serotonergic or lithium treatment.
Opioid analgesics
These drugs mimic three classes of endorphins: endomorphins, enkephalins, and dynorphins. Each of these classes has its own receptor: mu, kappa, and delta. Opioids will bind to the receptor for the endorphin they are most chemically similar to. Tolerance to some effects occurs with regular use, a result of the downregulation of the stimulated opioid receptors. Cross-tolerance to analgesia may develop incompletely and less rapidly, allowing rotation between opioid medications to be used to compensate somewhat for tolerance. This phenomenon is called incomplete cross-tolerance.
Stimulants
Cocaine, amphetamines, methylphenidate and ephedrine block the reuptake of dopamine and norepinephrine. With increasing doses, amphetamines also cause the direct release of these neurotransmitters.
Psychedelics
Serotonergic psychedelics act through modulation of serotonin receptors. Most of these drugs share a high affinity for the 5-HT2A receptor subtype, known to result in their common perceptual and psychological effects.
Cross-tolerance between drugs of different classifications
Sometimes cross-tolerance occurs between two drugs that do not share mechanisms of action or classification. For example, in rats some amphetamine-like stimulants have been shown to exhibit cross-tolerance with caffeine, though this effect was not observed with amphetamine itself. It is likely that this mechanism of cross-tolerance involves the dopamine receptor D1. Amphetamines also have cross-tolerance with pseudoephedrine, as pseudoephedrine can block dopamine uptake in the same manner that amphetamines do, but less potently.
Alcohol is another substance that often cross-tolerates with other drugs. Findings of cross-tolerance with nicotine in animal models suggest that it is also possible in humans, and may explain why the two drugs are often used together. Numerous studies have also suggested the possibility of cross-tolerance between alcohol and cannabis.
Cigarette smoking produces increased metabolic tolerance to caffeine due to upregulation of the CYP1A enzyme family (see aryl hydrocarbon receptor).
References
Pharmacodynamics
fr:Accoutumance#Tolérance croisée | Cross-tolerance | [
"Chemistry"
] | 1,109 | [
"Pharmacology",
"Pharmacodynamics"
] |
2,207,789 | https://en.wikipedia.org/wiki/Surface%20power%20density | In physics and engineering, surface power density is power per unit area.
Applications
The intensity of electromagnetic radiation can be expressed in W/m2. An example of such a quantity is the solar constant.
Wind turbines are often compared using a specific power measuring watts per square meter of turbine disk area, which is P/(πr2), where P is the rated power and r is the length of a blade. This measure is also commonly used for solar panels, at least for typical applications.
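As a quick illustration of this measure, the Python sketch below computes specific power from rated power and blade length; the turbine figures used are hypothetical examples, not data for any real machine.

```python
import math

def specific_power(rated_power_w: float, blade_length_m: float) -> float:
    """Rated power divided by rotor swept area, in W/m^2."""
    swept_area = math.pi * blade_length_m ** 2  # disk area = pi * r^2
    return rated_power_w / swept_area

# Hypothetical turbine: 2 MW rated power, 40 m blades.
print(round(specific_power(2e6, 40.0)))  # ~398 W/m^2
```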
Radiance is surface power density per unit of solid angle (steradians) in a specific direction. Spectral radiance is radiance per unit of frequency (Hertz) at a specific (or as a function of) frequency, or per unit of wavelength (e.g. nm) at a specific (or as a function of) wavelength.
Surface power densities of energy sources
Surface power density is an important factor in comparison of industrial energy sources. The concept was popularised by geographer Vaclav Smil. The term is usually shortened to "power density" in the relevant literature, which can lead to confusion with homonymous or related terms.
Measured in W/m2, it describes the amount of power obtained per unit of Earth surface area used by a specific energy system, including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning. Fossil fuels and nuclear power are characterized by high power density, which means large power can be drawn from power plants occupying a relatively small area. Renewable energy sources have power density at least three orders of magnitude smaller, and for the same energy output they need to occupy a correspondingly larger area; this has already been highlighted as a limiting factor of renewable energy in the German Energiewende.
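A short parametric calculation makes the land-use consequence concrete. The Python sketch below computes the land area needed to supply a given average power at a given surface power density; the power densities in the example loop are arbitrary placeholder values chosen only to show the effect of a large spread, not figures from the literature.

```python
def land_area_km2(average_power_mw: float, power_density_w_per_m2: float) -> float:
    """Land area in km^2 needed to supply a given average power (MW)
    at a given surface power density (W/m^2)."""
    area_m2 = (average_power_mw * 1e6) / power_density_w_per_m2
    return area_m2 / 1e6

# Hypothetical example: 1000 MW of average output at three assumed power densities.
for density in (1000.0, 10.0, 1.0):  # W/m^2, placeholder values for comparison only
    print(f"{density:>7.1f} W/m^2 -> {land_area_km2(1000.0, density):,.0f} km^2")
```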
The following table shows median surface power density of renewable and non-renewable energy sources.
Background
As an electromagnetic wave travels through space, energy is transferred from the source to other objects (receivers). The rate of this energy transfer depends on the strength of the EM field components. Simply put, the rate of energy transfer per unit area (power density) is the product of the electric field strength (E) times the magnetic field strength (H).
Pd (Watts/meter2) = E × H (Volts/meter × Amperes/meter)
where
Pd = the power density,
E = the RMS electric field strength in volts per meter,
H = the RMS magnetic field strength in amperes per meter.
The above equation yields units of W/m2. In the USA, units of mW/cm2 are more often used when making surveys. One mW/cm2 is the same power density as 10 W/m2. The following equation can be used to obtain these units directly:
Pd = 0.1 × E × H mW/cm2
The simplified relationships stated above apply at distances of about two or more wavelengths from the radiating source. This distance can be a far distance at low frequencies, and is called the far field. Here the ratio between E and H becomes a fixed constant (377 Ohms) and is called the characteristic impedance of free space. Under these conditions we can determine the power density by measuring only the E field component (or H field component, if you prefer) and calculating the power density from it.
This fixed relationship is useful for measuring radio frequency or microwave (electromagnetic) fields. Since power is the rate of energy transfer, and the squares of E and H are proportional to power, E2 and H2 are proportional to the rate at which the wave transfers energy and hence to the energy available for absorption by a material placed in the field.
Far field
The region extending farther than about 2 wavelengths away from the source is called the far field. As the source emits electromagnetic radiation of a given wavelength, the far-field electric component of the wave E, the far-field magnetic component H, and power density are related by the equations: E = H × 377 and Pd = E × H.
Pd = H2 × 377 and Pd = E2 ÷ 377
where Pd is the power density in watts per square meter (one W/m2 is equal to 0.1 mW/cm2),
H2 = the square of the value of the magnetic field in amperes RMS squared per meter squared,
E2 = the square of the value of the electric field in volts RMS squared per meter squared.
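As a worked example of these relationships, the Python sketch below derives the far-field power density from an RMS electric field using the 377-ohm characteristic impedance of free space, and converts the result to mW/cm2; the field value used is an arbitrary illustration.

```python
FREE_SPACE_IMPEDANCE = 377.0  # ohms, characteristic impedance of free space

def power_density_from_e(e_rms_v_per_m: float) -> float:
    """Far-field power density in W/m^2 from the RMS electric field in V/m."""
    return e_rms_v_per_m ** 2 / FREE_SPACE_IMPEDANCE

def w_per_m2_to_mw_per_cm2(pd_w_per_m2: float) -> float:
    """Convert W/m^2 to mW/cm^2 (10 W/m^2 equals 1 mW/cm^2)."""
    return pd_w_per_m2 / 10.0

# Arbitrary example: E = 61.4 V/m in the far field.
pd = power_density_from_e(61.4)
print(f"{pd:.2f} W/m^2 = {w_per_m2_to_mw_per_cm2(pd):.2f} mW/cm^2")  # ~10 W/m^2 = ~1 mW/cm^2
```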
References
Physical quantities
Area-specific quantities | Surface power density | [
"Physics",
"Mathematics"
] | 925 | [
"Physical phenomena",
"Physical quantities",
"Area-specific quantities",
"Quantity",
"Physical properties"
] |
2,207,850 | https://en.wikipedia.org/wiki/Faint%20blue%20galaxy | A faint blue galaxy (FBG) is an inconspicuous, often small galaxy with low surface luminosity. In addition to being dim, they show a remarkable preponderance of sparsely scattered blue stars, but comparatively few red stars, which in most galaxies are by far the most common. They appear as dim, bluish smudges on old photographic plates, with no clear structure or shape, and do not register well on modern electronic cameras, which are more sensitive to red light. They are currently interpreted as small dwarf-irregular satellite-galaxies undergoing a burst of star formation.
Previously overlooked
Although some had been previously photographed as faint smudges in sky surveys, they were first noticed in the 1970s, posing a problem for then-current theories of galaxy formation. FBGs tend to be found in the peripheries of galaxy clusters and as remote satellites of large galaxies, and appear to be a now-finished stage of galactic growth. Any galaxy might appear faint because it is small or because it is far away. Neither explanation, nor any combination, matched the initial FBG observations.
The first faint blue galaxy problem
The faint blue galaxy (FBG) problem in astrophysics first arose with observations starting in 1978 that there were more faint galaxies than then-current theory predicted.
The distribution of these galaxies has since been found to be consistent with models of cosmic inflation, measurements of the cosmic microwave background, and a nonzero cosmological constant; that is, with the existence of the now-accepted dark energy. It thus serves as a confirmation of supernova observations requiring dark energy.
The second faint blue galaxy problem
A second problem arose in 1988, with even deeper observations showing a much greater excess of faint galaxies.
These are now interpreted as dwarf galaxies experiencing large bursts of stellar formation, resulting in blue light from young, massive stars. Thus FBGs are extremely bright for their size and distance.
Most FBGs appear between red-shift . It is inferred that they merged with other galaxies and consequently disappeared as separate objects some time in the "recent" cosmological past.
References
Galaxies | Faint blue galaxy | [
"Astronomy"
] | 436 | [
"Galaxies",
"Astronomical objects"
] |
2,207,861 | https://en.wikipedia.org/wiki/Power%20density | Power density, defined as the amount of power (the time rate of energy transfer) per unit volume, is a critical parameter used across a spectrum of scientific and engineering disciplines. This metric, typically denoted in watts per cubic meter (W/m3), serves as a fundamental measure for evaluating the efficacy and capability of various devices, systems, and materials based on their spatial energy distribution.
The concept of power density finds extensive application in physics, engineering, electronics, and energy technologies. It plays a pivotal role in assessing the efficiency and performance of components and systems, particularly in relation to the power they can handle or generate relative to their physical dimensions or volume.
In the domain of energy storage and conversion technologies, such as batteries, fuel cells, motors, and power supply units, power density is a crucial consideration. Here, power density often refers to the volume power density, quantifying how much power can be accommodated or delivered within a specific volume (W/m3).
For instance, when examining reciprocating internal combustion engines, power density assumes a distinct importance. In this context, power density is commonly defined as power per swept volume or brake horsepower per cubic centimeter. This measure is derived from the internal capacity of the engine, providing insight into its power output relative to its internal volume rather than its external size. This extends to advances in materials science, where new materials that can withstand higher power densities can reduce the size or weight of devices, or simply increase their performance.
The significance of power density extends beyond these examples, impacting the design and optimization of a myriad of systems and devices. Notably, advancements in power density often drive innovations in areas ranging from renewable energy technologies to aerospace propulsion systems.
Understanding and enhancing power density can lead to substantial improvements in the performance and efficiency of various applications. Researchers and engineers continually explore ways to push the limits of power density, leveraging advancements in materials science, manufacturing techniques, and computational modeling.
Engaging with educational resources and specialized coursework can help students and professionals deepen their understanding of power density and its implications across diverse industries. The pursuit of higher power densities continues to drive innovation and shape the future of energy systems and technological development.
Examples
See also
Surface power density, energy per unit of area
Energy density, energy per unit volume
Specific energy, energy per unit mass
Power-to-weight ratio/specific power, power per unit mass
Specific absorption rate (SAR)
References
Power (physics) | Power density | [
"Physics",
"Mathematics"
] | 495 | [
"Force",
"Physical quantities",
"Quantity",
"Power (physics)",
"Energy (physics)",
"Wikipedia categories named after physical quantities"
] |
2,207,911 | https://en.wikipedia.org/wiki/Kelvin%20probe%20force%20microscope | Kelvin probe force microscopy (KPFM), also known as surface potential microscopy, is a noncontact variant of atomic force microscopy (AFM). By raster scanning in the x,y plane the work function of the sample can be locally mapped for correlation with sample features. When there is little or no magnification, this approach can be described as using a scanning Kelvin probe (SKP). These techniques are predominantly used to measure corrosion and coatings.
With KPFM, the work function of surfaces can be observed at atomic or molecular scales. The work function relates to many surface phenomena, including catalytic activity, reconstruction of surfaces, doping and band-bending of semiconductors, charge trapping in dielectrics and corrosion. The map of the work function produced by KPFM gives information about the composition and electronic state of the local structures on the surface of a solid.
History
The SKP technique is based on parallel plate capacitor experiments performed by Lord Kelvin in 1898. In the 1930s William Zisman built upon Lord Kelvin's experiments to develop a technique to measure contact potential differences of dissimilar metals.
Working principle
In SKP the probe and sample are held parallel to each other and electrically connected to form a parallel plate capacitor. The probe is selected to be of a different material to the sample, therefore each component initially has a distinct Fermi level. When electrical connection is made between the probe and the sample electron flow can occur between the probe and the sample in the direction of the higher to the lower Fermi level. This electron flow causes the equilibration of the probe and sample Fermi levels. Furthermore, a surface charge develops on the probe and the sample, with a related potential difference known as the contact potential (Vc). In SKP the probe is vibrated along a perpendicular to the plane of the sample. This vibration causes a change in probe to sample distance, which in turn results in the flow of current, taking the form of an ac sine wave. The resulting ac sine wave is demodulated to a dc signal through the use of a lock-in amplifier. Typically the user must select the correct reference phase value used by the lock-in amplifier. Once the dc potential has been determined, an external potential, known as the backing potential (Vb) can be applied to null the charge between the probe and the sample. When the charge is nullified, the Fermi level of the sample returns to its original position. This means that Vb is equal to -Vc, which is the work function difference between the SKP probe and the sample measured.
The cantilever in the AFM is a reference electrode that forms a capacitor with the surface, over which it is scanned laterally at a constant separation. The cantilever is not piezoelectrically driven at its mechanical resonance frequency ω0 as in normal AFM although an alternating current (AC) voltage is applied at this frequency.
When there is a direct-current (DC) potential difference between the tip and the surface, the AC+DC voltage offset will cause the cantilever to vibrate. The origin of the force can be understood by considering that the energy of the capacitor formed by the cantilever and the surface is
E = ½C(VDC + VAC sin(ω0t))2 = ½C(VDC2 + 2VDC·VAC sin(ω0t) + VAC2 sin2(ω0t)),
which contains terms at ω0 and 2ω0 plus terms at DC. Only the cross-term proportional to the VDC·VAC product is at the resonance frequency ω0. The resulting vibration of the cantilever is detected using usual scanned-probe microscopy methods (typically involving a diode laser and a four-quadrant detector). A null circuit is used to drive the DC potential of the tip to a value which minimizes the vibration. A map of this nulling DC potential versus the lateral position coordinate therefore produces an image of the work function of the surface.
A related technique, electrostatic force microscopy (EFM), directly measures the force produced on a charged tip by the electric field emanating from the surface. EFM operates much like magnetic force microscopy in that the frequency shift or amplitude change of the cantilever oscillation is used to detect the electric field. However, EFM is much more sensitive to topographic artifacts than KPFM. Both EFM and KPFM require the use of conductive cantilevers, typically metal-coated silicon or silicon nitride. Another AFM-based technique for the imaging of electrostatic surface potentials, scanning quantum dot microscopy, quantifies surface potentials based on their ability to gate a tip-attached quantum dot.
Factors affecting SKP measurements
The quality of an SKP measurement is affected by a number of factors. These include the diameter of the SKP probe, the probe to sample distance, and the material of the SKP probe. The probe diameter is important in the SKP measurement because it affects the overall resolution of the measurement, with smaller probes leading to improved resolution. On the other hand, reducing the size of the probe causes an increase in fringing effects, which reduces the sensitivity of the measurement by increasing the measurement of stray capacitances. The material used in the construction of the SKP probe is important to the quality of the SKP measurement for a number of reasons. Different materials have different work function values, which will affect the contact potential measured. Different materials have different sensitivity to humidity changes. The material can also affect the resulting lateral resolution of the SKP measurement. In commercial probes tungsten is used, though probes of platinum, copper, gold, and NiCr have been used. The probe to sample distance affects the final SKP measurement, with smaller probe to sample distances improving the lateral resolution and the signal-to-noise ratio of the measurement. Furthermore, reducing the SKP probe to sample distance increases the intensity of the measurement, which is proportional to 1/d2, where d is the probe to sample distance. The effects of changing probe to sample distance on the measurement can be counteracted by using SKP in constant distance mode.
Work function
The Kelvin probe force microscope or Kelvin force microscope (KFM) is based on an AFM set-up and the determination of the work function is based on the measurement of the electrostatic forces between the small AFM tip and the sample. The conducting tip and the sample are characterized by (in general) different work functions, which represent the difference between the Fermi level and the vacuum level for each material. If both elements were brought in contact, a net electric current would flow between them until the Fermi levels were aligned. The difference between the work functions is called the contact potential difference and is denoted generally with VCPD. An electrostatic force exists between tip and sample, because of the electric field between them. For the measurement a voltage is applied between tip and sample, consisting of a DC-bias VDC and an AC-voltage VAC sin(ωt) of frequency ω.
Tuning the AC-frequency to the resonant frequency of the AFM cantilever results in an improved sensitivity. The electrostatic force in a capacitor may be found by differentiating the energy function with respect to the separation of the elements and can be written as
F = −½ (dC/dz) V2,
where C is the capacitance, z is the separation, and V is the voltage, each between tip and surface. Substituting the previous formula for voltage (V) shows that the electrostatic force can be split up into three contributions, as the total electrostatic force F acting on the tip then has spectral components at the frequencies ω and 2ω.
The DC component, FDC, contributes to the topographical signal, the term Fω at the characteristic frequency ω is used to measure the contact potential and the contribution F2ω can be used for capacitance microscopy.
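For reference, a common way of writing these three contributions is sketched below in LaTeX, assuming the force expression F = −½ (dC/dz) V2 above and an effective tip-sample voltage V = (VDC − VCPD) + VAC sin(ωt); the overall signs depend on the convention chosen and vary between treatments.

```latex
% Minimal sketch: force components for V = (V_DC - V_CPD) + V_AC sin(omega t),
% taking F = -(1/2)(dC/dz)V^2; the signs follow this convention only.
\begin{align}
  F_{\mathrm{DC}} &= -\frac{\partial C}{\partial z}
      \left[\tfrac{1}{2}\bigl(V_{\mathrm{DC}}-V_{\mathrm{CPD}}\bigr)^{2}
            + \tfrac{1}{4}V_{\mathrm{AC}}^{2}\right] \\
  F_{\omega}      &= -\frac{\partial C}{\partial z}
      \bigl(V_{\mathrm{DC}}-V_{\mathrm{CPD}}\bigr)\,V_{\mathrm{AC}}\sin(\omega t) \\
  F_{2\omega}     &= \frac{\partial C}{\partial z}\,
      \tfrac{1}{4}V_{\mathrm{AC}}^{2}\cos(2\omega t)
\end{align}
```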
Contact potential measurements
For contact potential measurements a lock-in amplifier is used to detect the cantilever oscillation at ω. During the scan VDC will be adjusted so that the electrostatic forces between the tip and the sample become zero and thus the response at the frequency ω becomes zero. Since the electrostatic force at ω depends on VDC − VCPD, the value of VDC that minimizes the ω-term corresponds to the contact potential. Absolute values of the sample work function can be obtained if the tip is first calibrated against a reference sample of known work function. Apart from this, one can use the normal topographic scan methods at the resonance frequency ω independently of the above. Thus, in one scan, the topography and the contact potential of the sample are determined simultaneously.
This can be done in (at least) two different ways: 1) The topography is captured in AC mode which means that the cantilever is driven by a piezo at its resonant frequency. Simultaneously the AC voltage for the KPFM measurement is applied at a frequency slightly lower than the resonant frequency of the cantilever. In this measurement mode the topography and the contact potential difference are captured at the same time and this mode is often called single-pass. 2) One line of the topography is captured either in contact or AC mode and is stored internally. Then, this line is scanned again, while the cantilever remains on a defined distance to the sample without a mechanically driven oscillation but the AC voltage of the KPFM measurement is applied and the contact potential is captured as explained above. It is important to note that the cantilever tip must not be too close to the sample in order to allow good oscillation with applied AC voltage. Therefore, KPFM can be performed simultaneously during AC topography measurements but not during contact topography measurements.
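As a minimal numerical illustration of the calibration step above, the Python sketch below converts a nulling DC voltage into a sample work function given a tip of known work function. The numbers and the sign convention used (VCPD taken as the tip work function minus the sample work function, divided by e) are assumptions for the example only.

```python
def sample_work_function_ev(tip_work_function_ev: float, v_cpd_volts: float) -> float:
    """Sample work function in eV, assuming V_CPD = (phi_tip - phi_sample) / e.

    With work functions expressed in eV and e equal to one elementary charge,
    phi_sample = phi_tip - e * V_CPD reduces to a simple subtraction.
    """
    return tip_work_function_ev - v_cpd_volts

# Hypothetical example: tip calibrated at 4.50 eV, measured nulling voltage +0.25 V.
print(sample_work_function_ev(4.50, 0.25))  # 4.25 eV
```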
Applications
The Volta potential measured by SKP is directly proportional to the corrosion potential of a material, as such SKP has found widespread use in the study of the fields of corrosion and coatings. In the field of coatings for example, a scratched region of a self-healing shape memory polymer coating containing a heat generating agent on aluminium alloys was measured by SKP. Initially after the scratch was made the Volta potential was noticeably higher and wider over the scratch than over the rest of the sample, implying this region is more likely to corrode. The Volta potential decreased over subsequent measurements, and eventually the peak over the scratch completely disappeared implying the coating has healed. Because SKP can be used to investigate coatings in a non-destructive way it has also been used to determine coating failure. In a study of polyurethane coatings, it was seen that the work function increases with increasing exposure to high temperature and humidity. This increase in work function is related to decomposition of the coating likely from hydrolysis of bonds within the coating.
Using SKP the corrosion of industrially important alloys has been measured. In particular with SKP it is possible to investigate the effects of environmental stimulus on corrosion. For example, the microbially induced corrosion of stainless steel and titanium has been examined. SKP is useful to study this sort of corrosion because it usually occurs locally, therefore global techniques are poorly suited. Surface potential changes related to increased localized corrosion were shown by SKP measurements. Furthermore, it was possible to compare the resulting corrosion from different microbial species. In another example SKP was used to investigate biomedical alloy materials, which can be corroded within the human body. In studies on Ti-15Mo under inflammatory conditions, SKP measurements showed a lower corrosion resistance at the bottom of a corrosion pit than at the oxide protected surface of the alloy. SKP has also been used to investigate the effects of atmospheric corrosion, for example to investigate copper alloys in marine environment. In this study Kelvin potentials became more positive, indicating a more positive corrosion potential, with increased exposure time, due to an increase in thickness of corrosion products. As a final example SKP was used to investigate stainless steel under simulated conditions of gas pipeline. These measurements showed an increase in difference in corrosion potential of cathodic and anodic regions with increased corrosion time, indicating a higher likelihood of corrosion. Furthermore, these SKP measurements provided information about local corrosion, not possible with other techniques.
SKP has been used to investigate the surface potential of materials used in solar cells, with the advantage that it is a non-contact, and therefore a non-destructive technique. It can be used to determine the electron affinity of different materials in turn allowing the energy level overlap of conduction bands of differing materials to be determined. The energy level overlap of these bands is related to the surface photovoltage response of a system.
As a non-contact, non-destructive technique SKP has been used to investigate latent fingerprints on materials of interest for forensic studies. When fingerprints are left on a metallic surface they leave behind salts which can cause the localized corrosion of the material of interest. This leads to a change in Volta potential of the sample, which is detectable by SKP. SKP is particularly useful for these analyses because it can detect this change in Volta potential even after heating, or coating by, for example, oils.
SKP has been used to analyze the corrosion mechanisms of schreibersite-containing meteorites. The aim of these studies has been to investigate the role in such meteorites in releasing species utilized in prebiotic chemistry.
In the field of biology SKP has been used to investigate the electric fields associated with wounding, and acupuncture points.
In the field of electronics, KPFM is used to investigate the charge trapping in High-k gate oxides/interfaces of electronic devices.
See also
Scanning probe microscopy
Surface photovoltage
References
External links
– Full description of the principles with good illustrations to aid comprehension
Transport measurements by Scanning Probe Microscopy
Introduction to Kelvin Probe Force Microscopy (KPFM)
Dynamic Kelvin Probe Force Microscopy
Kelvin Probe Force Microscopy of Lateral Devices
Kelvin Probe Force Microscopy in Liquids
Current-voltage Measurements in Scanning Probe Microscopy
Dynamic IV measurements in SPM
Scanning probe microscopy
Condensed matter physics
Surface science
Electric and magnetic fields in matter
Probe force microscope | Kelvin probe force microscope | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,845 | [
"Phases of matter",
"Electric and magnetic fields in matter",
"Surface science",
"Materials science",
"Condensed matter physics",
"Scanning probe microscopy",
"Microscopy",
"Nanotechnology",
"Matter"
] |
2,208,118 | https://en.wikipedia.org/wiki/Structural%20differential | The structural differential is a physical chart or three-dimensional model illustrating the abstracting processes of the human nervous system. In one form, it appears as a pegboard with tags. Created by Alfred Korzybski, and awarded a U.S. patent on May 26, 1925, it is used as a training device in general semantics. The device is intended to show that human "knowledge" of, or acquaintance with, anything is partial—not total.
The model
The structural differential consists of three basic objects. The parabola represents a domain beyond our direct observation, the sub-microscopic, dynamic world of molecules, atoms, electrons, protons, quarks, and so on; a world known to us only inferentially from science. Korzybski described it as an 'event' in the sense of "an instantaneous cross-section of a process." Thus the 'event' or parabola represents the sub-microscopic 'stuff' that, at any given moment, constitutes an apple. In other words, the parabola represents the "external" cause of what we experience.
The disc represents the non-verbal result of our nervous systems reacting to submicroscopic "stuff", e.g., the apple that we see, hold, bite into, all on the non-verbal levels of experience. The disc represents what we experience of our surroundings versus what our surroundings actually are.
The labels [usually seven or eight are linked together in a chain, with the last one attached back to the parabola, but here we see just one] are shaped like suitcase labels, and represent the static world of words, e.g., "apple", giving imperfect accounts of dynamic reality. An object called an "apple" left in a jar for months becomes a putrid liquid (because of its underlying, dynamic, sub-microscopic structure), but the label "apple" does not change. The word "steak", at a lower verbal order, may imply "something to eat" at a higher verbal order, but in the sub-microscopic domain, a particular steak may be contaminated with poisons created by harmful bacteria that we could see only on microscopic levels. Thus the differential sets up a hierarchy of order, with the submicroscopic domain of dynamic change coming first, the relatively stable universe conveyed non-verbally by our senses coming next, and then the verbal levels. A label is what we attach to a non-verbal experience in order to identify this experience in verbal terms; when we identify an "apple", we attribute to this identification various non-verbal experiences.
The holes in the figures represent the characteristics that exist at each level. The characteristics that are abstracted to the next level are indicated by the attached strings. The strings that don't make it to the next level represent characteristics left out of our abstractions, as do the holes without strings at all. More is left out of our abstractions at each level than was there at the previous level.
The structural differential was used by Korzybski to demonstrate that human beings abstract from their environments, that these abstractions leave out many characteristics, and that verbal abstractions build on themselves indefinitely, through many orders or levels, represented by seven or eight labels (or less, or more, it is totally arbitrary how many we want to symbolize the higher levels), chained in order. The highest, most reliable abstractions at a date are made by science, he claimed (e.g., science has conveyed the nature and danger of bacteria to us), and that is why he attached the last label back to the parabola. It is science that has told us that the sub-microscopic domain exists, and in general semantics the parabola represents that domain. In general semantics, the natural order of evaluation proceeds from lower orders of abstraction to higher orders of abstraction, and back again in an endless cycle. In these cycles, we return periodically or eventually to "silence on the objective levels" (our ground) before moving on to the higher orders, i.e., before bursting into speech or theory.
General semantics
The general semantics discipline was founded by Korzybski, who gained recognition first with the publication of Manhood of Humanity (1921) and then Science and Sanity (1933). Some of his ideas were popularized by Stuart Chase in The Tyranny of Words in 1938, and by Samuel Ichiye Hayakawa, in Language in Action in 1941 (which later became Language in Thought and Action). Also influential was the magazine ETC: A Review of General Semantics, founded in 1943. The name of the magazine, ETC, was a play on a fundamental notion of Korzybski's that names or descriptions do not exhaustively convey all of an object's properties (the word "steak" does not convey the possibility of harmful bacteria, for instance). We can hardly refrain from describing things altogether, but we can bear in mind that we could append to any name or description the word "etc.", to indicate that the label is only a subset of the total set of possibilities. There is always more that can be said about anything. ETC magazine was founded by Hayakawa, who was a professor at San Francisco State College and member of the U.S. Senate during the Carter administration. His Language in Thought and Action, went through several editions and is concerned in part with the confusion of words with reality. Hayakawa's work coincided with the advent of television broadcasting and contained early warnings against the dangers of mediated reality that television embodied.
See also
Semantic differential
References
Korzybski, A. (1933) Science and Sanity: An Introduction to Non-aristotelian Systems and General Semantics. Institute of General Semantics.
Hayakawa, S. I. (1978) Language in Thought and Action. Harcourt; 4th Ed.
External links
A picture of a complete and original SD by Korzybski, including a disc representing the non-verbal abstractions of animals
A variation of the differential by Steven Lewis
Brief Explanation of Korzybski's Structural Differential
Human communication
General semantics | Structural differential | [
"Biology"
] | 1,251 | [
"Human communication",
"Behavior",
"Human behavior"
] |
2,208,293 | https://en.wikipedia.org/wiki/RAF%20Denge | Royal Air Force Denge or more simply RAF Denge is a former Royal Air Force site near Dungeness, in Kent, England. It is best known for the early experimental acoustic mirrors which remain there.
The RAF had begun research into acoustic mirrors during World War I.
The Denge acoustic mirrors, known colloquially as 'listening ears', are located between Greatstone-on-Sea and Lydd Airport, on the banks of a now disused gravel pit. The mirrors were built in the late 1920s and early 1930s as an experimental early warning system for incoming aircraft, developed by William Sansome Tucker. Several were built along the south and east coasts, but the complex at Denge is the best preserved, and are protected as scheduled monuments.
Denge complex
There are three acoustic mirrors in the complex, each consisting of a single concrete hemispherical reflector.
The 200 foot mirror is a near vertical, curved wall, 200 feet (60m) long. It is one of only two similar acoustic mirrors in the world, the other being in Magħtab, Malta.
The 30 foot mirror is a circular dish, similar to a deeply curved satellite dish, 9 m (30 ft) across, supported on concrete buttresses. This mirror still retains the metal microphone pole at its centre.
The 20 foot mirror is similar to the 30 foot mirror, with a smaller, shallower dish 6 m (20 ft) across. The design is close to that of an acoustic mirror in Kilnsea, East Riding of Yorkshire.
Acoustic mirrors did work and could effectively be used to detect slow moving enemy aircraft before they came into sight. They worked by concentrating sound waves towards a central point, where a microphone would have been located. However, their use was limited as aircraft became faster. Operators also found it difficult to distinguish between aircraft and seagoing vessels. In any case, they quickly became obsolete due to the invention of radar in 1932. The experiment was abandoned, and the mirrors left to decay. The gravel extraction works caused some undermining of at least one of the structures.
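A back-of-envelope calculation gives a feel for the warning such a system might provide. The Python sketch below estimates the time an aircraft first heard at a given range would take to arrive; the detection range and aircraft speed are illustrative assumptions, not historical performance figures for the Denge mirrors.

```python
def warning_time_minutes(detection_range_m: float, aircraft_speed_kmh: float) -> float:
    """Minutes for an aircraft, first heard at detection_range_m, to reach the listener.
    Sound propagation delay is ignored for simplicity."""
    speed_m_s = aircraft_speed_kmh / 3.6
    return detection_range_m / speed_m_s / 60.0

# Hypothetical figures: detection at 25 km, aircraft flying at 300 km/h.
print(round(warning_time_minutes(25_000, 300.0), 1))  # ~5.0 minutes of warning
```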
The striking forms of the sound mirrors have attracted artists and photographers. British artist Tacita Dean created a film inspired by the complex. The band Turin Brakes featured the mirrors on some of their album covers. The object appeared in the music video for Blank & Jones' "A Forest". The mirrors have also been featured in the music videos for "Last Time Forever" by Squeeze, Invaders Must Die by The Prodigy, "Something Real" by Young Kato and "A Kiss For The Whole World x" by Enter Shikari.
Restoration
In 2003, English Heritage secured £500,000 from the Aggregates Levy Sustainability Fund and from the EU's Interreg programme under the Historic Fortifications Network, as administered by Kent County Council. This money was spent to repair the damage caused by the gravel works, as well as to install a swing bridge which is now the only means of access, reducing the monument's exposure to vandalism. The mirrors are situated on an island within an RSPB nature reserve, and can only be accessed on open days as the designated site (which has both Site of Special Scientific Interest and Special Protection Area status) is sensitive to disturbance.
References
External links
Greatstone Sound Mirrors
Guided Tours by the Romney Marsh Countryside Project
Acoustics
Warning systems
History of Kent
Denge
Lydd | RAF Denge | [
"Physics",
"Technology",
"Engineering"
] | 683 | [
"Safety engineering",
"Classical mechanics",
"Acoustics",
"Measuring instruments",
"Warning systems"
] |
2,208,744 | https://en.wikipedia.org/wiki/Macro-engineering | In engineering, macro-engineering (alternatively known as mega engineering) is the implementation of large-scale design projects. It can be seen as a branch of civil engineering or structural engineering applied on a large landmass. In particular, macro-engineering is the process of marshaling and managing of resources and technology on a large scale to carry out complex tasks that last over a long period. In contrast to conventional engineering projects, macro-engineering projects (called macro-projects or mega-projects) are multidisciplinary, involving collaboration from all fields of study. Because of the size of macro-projects they are usually international.
Macro-engineering is an evolving field that has only recently started to receive attention. Because we routinely deal with challenges that are multinational in scope, such as global warming and pollution, macro-engineering is emerging as a transcendent solution to worldwide problems.
Macro-engineering is distinct from megascale engineering due to the scales at which they are applied. Where macro-engineering is currently practical, megascale engineering is still within the domain of speculative fiction because it deals with projects on a planetary or stellar scale.
Projects
Macro engineering examples include the construction of the Panama Canal and the Suez Canal.
Planned projects
Examples of projects include the Channel Tunnel and the planned Gibraltar Tunnel.
Two intellectual centers focused on macro-engineering theory and practice are the Candida Oancea Institute in Bucharest, and The Center for Macro Projects and Diplomacy at Roger Williams University in Bristol, Rhode Island.
See also
Afforestation
Agroforestry
Atlantropa (Gibraltar Dam)
Analog forestry
Bering Strait bridge
Buffer strip
Biomass
Biomass (ecology)
Climate engineering (Geoengineering)
Collaborative innovation network
Deforestation
Deforestation during the Roman period
Ecological engineering
Ecological engineering methods
Ecotechnology
Energy-efficient landscaping
Forest gardening
Forest farming
Great Plains Shelterbelt
Green Wall of China
IBTS Greenhouse
Home gardens
Human ecology
Megascale engineering
Permaculture
Permaforestry
Sahara Forest Project
Qattara Depression Project
Red Sea dam
Sand fence
Seawater Greenhouse
Sustainable agriculture
Terraforming
Windbreak
Wildcrafting
References
Frank P. Davidson and Kathleen Lusk Brooke, BUILDING THE WORLD: AN ENCYCLOPEDIA OF THE GREAT ENGINEERING PROJECTS IN HISTORY, two volumes (Greenwood Publishing Group, Oxford UK, 2006)
V. Badescu, R.B. Cathcart and R.D. Schuiling, MACRO-ENGINEERING: A CHALLENGE FOR THE FUTURE (Springer, The Netherlands, 2006)
R.B. Cathcart, V. Badescu with Ramesh Radhakrishnan, (2006): Macro-Engineers' Dreams PDF, 175pp. Accessed 24 May 2013
Alexander Bolonkin and Richard B. Cathcart, Macro-Projects (NOVA Publishing, 2009)
Viorel Badescu and R.B. Cathcart, Macro-engineering Seawater (Springer, 2010), 880 pages.
R.B. Cathcart, MACRO-IMAGINEERING OUR DOSMOZOICUM. (Lambert Academic Publishing, 2018) 154 pages.
External links
Engineering and the Future of Technology
Megaengineering at Popular Mechanics | Macro-engineering | [
"Engineering"
] | 626 | [
"Macro-engineering"
] |
2,208,748 | https://en.wikipedia.org/wiki/Super%20black | Super black is a surface treatment developed at the National Physical Laboratory (NPL) in the United Kingdom. It absorbs approximately 99.6% of visible light at normal incidence, while conventional black paint absorbs about 97.5%. At other angles of incidence, super black is even more effective: at an angle of 45°, it absorbs 99.9% of light.
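The difference between those absorption figures is easier to appreciate in terms of the light that is not absorbed. The Python sketch below compares the reflected fractions implied by the percentages quoted above.

```python
def reflected_fraction(absorption_percent: float) -> float:
    """Fraction of incident light reflected, given the absorbed percentage."""
    return 1.0 - absorption_percent / 100.0

super_black = reflected_fraction(99.6)   # normal incidence, per the text
conventional = reflected_fraction(97.5)  # conventional black paint, per the text
print(f"super black reflects {super_black:.3%} of incident light")
print(f"conventional paint reflects {conventional:.3%}")
print(f"ratio: {conventional / super_black:.1f}x more light reflected by conventional paint")
```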
Technology
The technology to create super black involves chemically etching a nickel-phosphorus alloy.
Applications of super black are in specialist optical instruments for reducing unwanted reflections. The disadvantage of this material is its low optical thickness, as it is a surface treatment. As a result, infrared light of a wavelength longer than a few micrometers penetrates through the dark layer and has much higher reflectivity. The reported spectral dependence increases from about 1% at 3 μm to 50% at 20 μm.
In 2009, a competitor to the super black material, Vantablack, was developed based on carbon nanotubes. It has a relatively flat reflectance in a wide spectral range.
In 2011, NASA and the US Army began funding research in the use of nanotube-based super black coatings in sensitive optics.
Nanotube-based superblack arrays and coatings have recently become commercially available.
See also
Vantablack
Emissivity
Black hole
Black body
References
External links
Materials science
Optical materials
Shades of black | Super black | [
"Physics",
"Materials_science",
"Engineering"
] | 287 | [
"Applied and interdisciplinary physics",
"Materials science",
"Materials",
"Optical materials",
"nan",
"Matter"
] |
2,208,839 | https://en.wikipedia.org/wiki/Shrubland | Shrubland, scrubland, scrub, brush, or bush is a plant community characterized by vegetation dominated by shrubs, often also including grasses, herbs, and geophytes. Shrubland may either occur naturally or be the result of human activity. It may be the mature vegetation type in a particular region and remain stable over time, or it may be a transitional community that occurs temporarily as the result of a disturbance, such as fire. A stable state may be maintained by regular natural disturbance such as fire or browsing.
Shrubland may be unsuitable for human habitation because of the danger of fire. The term was coined in 1903.
Shrubland species generally show a wide range of adaptations to fire, such as heavy seed production, lignotubers, and fire-induced germination.
Botanical structural form
In botany and ecology a shrub is defined as a much-branched woody plant less than 8 m high, usually with many stems. Tall shrubs are mostly 2–8 m high, small shrubs 1–2 m high and subshrubs less than 1 m high.
A descriptive system widely adopted in Australia describes different types of vegetation by structural characteristics based on plant life-form, together with the height and foliage cover of the tallest stratum or dominant species.
For shrubs over 2 m high, the following structural forms are categorized:
dense foliage cover (70–100%) — closed-shrubs
mid-dense foliage cover (30–70%) — open-shrubs
sparse foliage cover (10–30%) — tall shrubland
very sparse foliage cover (<10%) — tall open shrubland
For shrubs less than 2 m high, the following structural forms are categorized:
dense foliage cover (70–100%) — closed-heath or closed low shrubland—(North America)
mid-dense foliage cover (30–70%) — open-heath or mid-dense low shrubland—(North America)
sparse foliage cover (10–30%) — low shrubland
very sparse foliage cover (<10%) — low open shrubland
Biome plant group
Similarly, shrubland is a category that is used to describe a type of biome plant group. In this context, shrublands are dense thickets of evergreen sclerophyll shrubs and small trees, called:
Chaparral in California
Matorral in Chile, Mexico, and Spain
Maquis in France and elsewhere around the Mediterranean
Macchia in Italy
Fynbos in South Africa
Eastern Suburbs Banksia Scrub in Sydney
Kwongan in Southwest Australia
Cedar scrub in Texas Hill Country
Caatinga in northeastern Brazil
In some places, shrubland is the mature vegetation type. In other places, it is the result of degradation of former forest or woodland by logging or overgrazing, or disturbance by major fires.
A number of World Wildlife Fund biomes are characterized as shrublands, including the following:
Desert scrublands
Xeric or desert scrublands occur in the world's deserts and xeric shrublands ecoregions or in fast-draining sandy soils in more humid regions. These scrublands are characterized by plants with adaptations to the dry climate, which include small leaves to limit water loss, thorns to protect them from grazing animals, succulent leaves or stems, storage organs to store water, and long taproots to reach groundwater.
Mediterranean scrublands
Mediterranean scrublands occur naturally in the Mediterranean scrub biome, located in the five Mediterranean climate regions of the world. Scrublands are most common near the seacoast and have often adapted to the wind and salt air of the ocean. Low, soft-leaved scrublands around the Mediterranean Basin are known as garrigue in France, phrygana in Greece, tomillares in Spain, and batha in Israel. Northern coastal scrub and coastal sage scrub occur along the California coast, strandveld in the Western Cape of South Africa, coastal matorral in central Chile, and sand-heath and kwongan in Southwest Australia.
Interior scrublands
Interior scrublands occur naturally in semi-arid areas with nutrient-poor soils, such as on the matas of Portugal, which are underlain by Cambrian and Silurian schists. Florida scrub is another example of interior scrublands.
Dwarf shrubs
Some vegetation types are formed of dwarf-shrubs, low-growing or creeping shrubs. They include the maquis and the garrigues of Mediterranean climates and the acid-loving dwarf shrubs of heathland and moorland.
See also
Fynbos
Maquis
Prostrate shrub
Semi-desert
Shrub-steppe
Shrub swamp
Moorland
Notes and references
External links | Shrubland | [
"Biology"
] | 936 | [
"Ecosystems",
"Shrublands"
] |
2,208,859 | https://en.wikipedia.org/wiki/Pentazole | Pentazole is an aromatic molecule consisting of a five-membered ring with all nitrogen atoms, one of which is bonded to a hydrogen atom. It has the molecular formula HN5. Although strictly speaking a homocyclic, inorganic compound, pentazole has historically been classed as the last in a series of heterocyclic azole compounds containing one to five nitrogen atoms. This set contains pyrrole, imidazole, pyrazole, triazoles, tetrazole, and pentazole.
Derivatives
Substituted analogs of pentazole are collectively known as pentazoles. As a class, they are unstable and often highly explosive compounds. The first pentazole synthesized was phenylpentazole, where the pentazole ring is highly stabilized by conjugation with the phenyl ring. The derivative 4-dimethylaminophenylpentazole is among the most stable pentazole compounds known, although it still decomposes at temperatures over 50 °C. It is known that electron-donating groups stabilize aryl pentazole compounds.
Ions
The cyclic pentazolium cation is not known due to its probable antiaromatic character, whereas the open-chained pentazenium cation (N5+) is known. Butler et al. first demonstrated the presence of the cyclic pentazolide anion (N5−) in solution through the decomposition of substituted aryl pentazoles at low temperature. The presence of these species (held in solution through the interaction with zinc ions) was proven primarily using 15N NMR techniques on the decomposition products. These results were initially challenged by some authors, but subsequent experiments involving the detailed analysis of the decomposition products, complemented by computational studies, bore out the initial conclusion. The pentazolide anion is not expected to last longer than a few seconds in aqueous solution without the aid of complexing agents. The discovery of pentazoles spurred attempts to create all-nitrogen salts such as N5+N5−, which should be highly potent propellants for space travel.
In 2002, the pentazolate anion was first detected with electrospray ionization mass spectrometry. In 2016, the N5− ion was also detected in solution. In 2017, white cubic crystals of the pentazolate salt (N5)6(H3O)3(NH4)4Cl were announced. In this salt, the N5− rings are planar. The bond lengths in the ring are 1.309 Å, 1.310 Å, 1.310 Å, 1.324 Å, and 1.324 Å. When heated, the salt is stable up to 117 °C; above this temperature it decomposes to ammonium azide. Under extreme pressure conditions, the pentazolate ion has also been synthesized. It was first obtained in 2016 in the form of the CsN5 salt by compressing and laser-heating a mixture of CsN3 embedded in molecular N2 at 60 GPa. Following the pressure release, it was found to be metastable down to 18 GPa. In 2018, another team reported the high-pressure synthesis of LiN5 above 45 GPa from pure lithium surrounded by molecular nitrogen. This compound could be retained down to ambient conditions after the complete release of pressure.
References
Nitrogen hydrides
Explosive chemicals
Simple aromatic rings | Pentazole | [
"Chemistry"
] | 689 | [
"Explosive chemicals"
] |
2,208,893 | https://en.wikipedia.org/wiki/Carbon%20suboxide | Carbon suboxide, or tricarbon dioxide, is an organic, oxygen-containing chemical compound with formula C3O2 and structure O=C=C=C=O. Its four cumulative double bonds make it a cumulene. It is one of the stable members of the series of linear oxocarbons CnO2, which also includes carbon dioxide (CO2) and pentacarbon dioxide (C5O2). Although if carefully purified it can exist at room temperature in the dark without decomposing, it will polymerize under certain conditions.
The substance was discovered in 1873 by Benjamin Brodie by subjecting carbon monoxide to an electric current. He claimed that the product was part of a series of "oxycarbons" with formulas , namely , , , , …, and to have identified the last two; however, only is known. In 1891 Marcellin Berthelot observed that heating pure carbon monoxide at about 550 °C created small amounts of carbon dioxide but no trace of carbon, and assumed that a carbon-rich oxide was created instead, which he named "sub-oxide". He assumed it was the same product obtained by electric discharge and proposed the formula . Otto Diels later stated that the more organic names dicarbonylmethane and dioxallene were also correct.
It is commonly described as an oily liquid or gas at room temperature with an extremely noxious odor.
Synthesis
It is synthesized by warming a dry mixture of phosphorus pentoxide (P2O5) and malonic acid or its esters.
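Schematically, each malonic acid molecule loses two equivalents of water, with the phosphorus pentoxide acting as the dehydrating agent (overall stoichiometry only):
CH2(COOH)2 → C3O2 + 2 H2O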
Therefore, it can also be considered the anhydride of malonic anhydride, i.e. the "second anhydride" of malonic acid.
Several other routes for the synthesis of carbon suboxide, as well as its reactions, can be found in a 1930 review by Reyerson.
Polymerization
Carbon suboxide polymerizes spontaneously to a red, yellow, or black solid. The structure is postulated to be poly(α-pyronic), similar to the structure in 2-pyrone (α-pyrone). The number of monomers in the polymers is variable (see Oxocarbon#Polymeric carbon oxides).
In 1969, it was hypothesized that the color of the Martian surface was caused by this compound; this was disproved by the Viking Mars probes (the red color is instead due to iron oxide).
Uses
Carbon suboxide is used in the preparation of malonates; and as an auxiliary to improve the dye affinity of furs.
In chemical synthesis, carbon suboxide is a 1,3-dipole, reacting with alkenes to make 1,3-cyclopentadiones. Because it is so unstable, it is a reagent of last resort.
Biological role
Carbon suboxide, C3O2, can be produced in small amounts in any biochemical process that normally produces carbon monoxide, CO, for example, during heme oxidation by heme oxygenase-1. It can also be formed from malonic acid. It has been shown that carbon suboxide in an organism can quickly polymerize into macrocyclic polycarbon structures with the common formula (C3O2)n, and that those macrocyclic compounds are potent inhibitors of Na+/K+-ATP-ase and Ca-dependent ATP-ase, and have digoxin-like physiological properties and natriuretic and antihypertensive actions. Those macrocyclic carbon suboxide polymer compounds are thought to be endogenous digoxin-like regulators of Na+/K+-ATP-ases and Ca-dependent ATP-ases, and endogenous natriuretics and antihypertensives. In addition, some authors think that those macrocyclic compounds of carbon suboxide can possibly diminish free radical formation and oxidative stress and play a role in endogenous anticancer protective mechanisms, for example in the retina.
Structure and bonding
The structure of carbon suboxide has been the subject of experiments and computations since the 1970s. The central issue is the question of whether the molecule is linear or bent (i.e., whether the central angle θC2 = ∠C1–C2–C3 equals 180°). Studies generally agree that the molecule is highly non-rigid, with a very shallow barrier to bending. According to one study, the molecular geometry is described by a double-well potential with a minimum at θC2 ~ 160°, an inversion barrier of 20 cm−1 (0.057 kcal/mol), and a total energy change of 80 cm−1 (0.23 kcal/mol) for 140° ≤ θC2 ≤ 180°. The small energetic barrier to bending is around the same order of magnitude as the vibrational zero-point energy. Therefore, the molecule is best described as quasilinear. While infrared and electron diffraction studies have indicated that C3O2 has a bent structure in the gas phase, the compound was found to possess at least an average linear geometry in the solid phase by X-ray crystallography, although the large thermal ellipsoids of the oxygen atoms and C2 have been interpreted to be consistent with rapid bending (minimum θC2 ~ 170°), even in the solid state.
A heterocumulene resonance form of carbon suboxide based on minimization of formal charges does not readily explain the molecule's non-rigidity and deviation from linearity. To account for the quasilinear structure of carbon suboxide, Frenking has proposed that carbon suboxide be regarded as a "coordination complex" of carbon(0) bearing two carbonyl ligands and two lone pairs, OC→C←CO, with both lone pairs residing on the central carbon atom. However, the contribution of dative bonding in C3O2 and similar species has been criticized as chemically unreasonable by others.
References
External links
WebElements page on compound's properties
Oxocarbons
Gaseous signaling molecules
Heterocumulenes
Enones
Ketenes
Diketones
Foul-smelling chemicals | Carbon suboxide | [
"Chemistry"
] | 1,270 | [
"Gaseous signaling molecules",
"Ketenes",
"Functional groups",
"Signal transduction"
] |
2,208,941 | https://en.wikipedia.org/wiki/Hydrogen%20selenide | Hydrogen selenide is an inorganic compound with the formula H2Se. This hydrogen chalcogenide is the simplest and most commonly encountered hydride of selenium. H2Se is a colorless, flammable gas under standard conditions. It is the most toxic selenium compound with an exposure limit of 0.05 ppm over an 8-hour period. Even at extremely low concentrations, this compound has a very irritating smell resembling that of decayed horseradish or "leaking gas", but smells of rotten eggs at higher concentrations.
Structure and properties
H2Se adopts a bent structure with a H−Se−H bond angle of 91°. Consistent with this structure, three IR-active vibrational bands are observed: 2358, 2345, and 1034 cm−1.
The properties of H2S and H2Se are similar, although the selenide is more acidic with pKa = 3.89 and the second pKa = 11, or 15.05 ± 0.02 at 25 °C.
Preparation
Industrially, it is produced by treating elemental selenium at T > 300 °C with hydrogen gas. A number of routes to H2Se have been reported, which are suitable for both large and small scale preparations. In the laboratory, H2Se is usually prepared by the action of water on Al2Se3, concomitant with formation of hydrated alumina. A related reaction involves the acid hydrolysis of FeSe.
Al2Se3 + 6 H2O ⇌ 2 Al(OH)3 + 3 H2Se
H2Se can also be prepared by means of different methods based on the in situ generation in aqueous solution using boron hydride, Marsh test and Devarda's alloy. According to the Sonoda method, H2Se is generated from the reaction of H2O and CO on Se in the presence of Et3N. H2Se can be purchased in cylinders.
Reactions
Elemental selenium can be recovered from H2Se through a reaction with aqueous sulfur dioxide (SO2).
2 H2Se + SO2 ⇌ 2 H2O + 2 Se + S
Its decomposition is used to prepare the highly pure element.
Applications
H2Se is commonly used in the synthesis of Se-containing compounds. It adds across alkenes. Illustrative is the synthesis of selenoureas from cyanamides.
H2Se gas is used to dope semiconductors with selenium.
Safety
Hydrogen selenide is hazardous, being the most toxic selenium compound and far more toxic than its congener hydrogen sulfide. The threshold limit value is 0.05 ppm. The gas acts as an irritant at concentrations higher than 0.3 ppm, which is the main warning sign of exposure; below 1 ppm, this is "insufficient to prevent exposure", while at 1.5 ppm the irritation is "intolerable". Exposure at high concentrations, even for less than a minute, causes the gas to attack the eyes and mucous membranes; this causes cold-like symptoms for at least a few days afterwards. In Germany, the limit in drinking water is 0.008 mg/L, and the US EPA recommends a maximum contamination of 0.01 mg/L.
Despite being extremely toxic, no human fatalities have yet been reported. It is suspected that this is due to the gas' tendency to oxidise to form red selenium in mucous membranes; elemental selenium is less toxic than selenides are.
See also
Hydrogen diselenide
References
External links
WebElements page on compound's properties
CDC - NIOSH Pocket Guide to Chemical Hazards
Hydrogen compounds
Triatomic molecules
Selenides | Hydrogen selenide | [
"Physics",
"Chemistry"
] | 775 | [
"Highly-toxic chemical substances",
"Molecules",
"Harmful chemical substances",
"Triatomic molecules",
"Matter"
] |
2,208,970 | https://en.wikipedia.org/wiki/Intensity%20interferometer | An intensity interferometer is the name given to devices that use the Hanbury Brown and Twiss effect. In astronomy, the most common use of such an astronomical interferometer is to determine the apparent angular diameter of a radio source or star. If the distance to the object can then be determined by parallax or some other method, the physical diameter of the star can then be inferred. An example of an optical intensity interferometer is the Narrabri Stellar Intensity Interferometer. In quantum optics, some devices which take advantage of correlation and anti-correlation effects in beams of photons might be said to be intensity interferometers, although the term is usually reserved for observatories.
An intensity interferometer is built from two light detectors, typically either radio antennas or optical telescopes with photomultiplier tubes (PMTs), separated by some distance, called the baseline. Both detectors are pointed at the same astronomical source, and intensity measurements are then transmitted to a central correlator facility. A major advantage of intensity interferometers is that only the measured intensity observed by each detector must be sent to the central correlator facility, rather than the amplitude and phase of the signal. The intensity interferometer measures interferometric visibilities like all other astronomical interferometers. These measurements can be used to calculate the diameter and limb-darkening coefficients of stars, but aperture-synthesis images cannot be produced because the visibility phase information is not preserved by an intensity interferometer.
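As a rough illustration (not from the source), the central quantity such a correlator estimates is the normalized intensity cross-correlation of the two detector streams; the sketch below uses made-up signal values and a hypothetical helper name:

import numpy as np

def normalized_intensity_correlation(i1, i2):
    # <I1*I2> / (<I1><I2>) for two intensity time series; values above 1
    # indicate correlated intensity fluctuations (the Hanbury Brown and Twiss signal).
    i1 = np.asarray(i1, dtype=float)
    i2 = np.asarray(i2, dtype=float)
    return np.mean(i1 * i2) / (np.mean(i1) * np.mean(i2))

# Toy data: two detectors seeing a shared fluctuation on top of independent noise.
rng = np.random.default_rng(0)
shared = rng.normal(0.0, 0.1, 100_000)
det1 = 1.0 + shared + rng.normal(0.0, 0.05, 100_000)
det2 = 1.0 + shared + rng.normal(0.0, 0.05, 100_000)
print(normalized_intensity_correlation(det1, det2))  # slightly above 1

In a real instrument, it is the fall-off of this correlation with increasing baseline that constrains the source's angular diameter.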
References
Telescopes
Interferometric telescopes
Quantum optics | Intensity interferometer | [
"Physics",
"Astronomy"
] | 329 | [
"Quantum optics",
"Telescopes",
"Quantum mechanics",
"Astronomical instruments"
] |
2,209,162 | https://en.wikipedia.org/wiki/Nier%20Prize | The Nier Prize is named after Alfred O. C. Nier. It is awarded annually by the Meteoritical Society and recognizes outstanding research in meteoritics and closely allied fields by young scientists.
Recipients must be under 35 years old at the end of the calendar year in which they are selected. The Leonard Medal Committee recommends to the Council candidates for the Nier Prize.
Nier Prize Winners
See also
List of astronomy awards
Glossary of meteoritics
References
Astronomy prizes
Meteorite prizes
American science and technology awards
Awards established in 1996 | Nier Prize | [
"Astronomy",
"Technology"
] | 108 | [
"Science and technology awards",
"Meteorite prizes",
"Astronomy prizes"
] |
2,209,246 | https://en.wikipedia.org/wiki/The%20Meteoritical%20Society | The Meteoritical Society is a non-profit scholarly organization founded in 1933 to promote research and education in planetary science with emphasis on studies of meteorites and other extraterrestrial materials that further our understanding of the origin and history of the Solar System.
Members
The membership of the society comprises over 1,000 scientists and amateur enthusiasts from over 52 countries who are interested in a wide range of planetary science topics. Members' interests include meteorites, cosmic dust, asteroids and comets, natural satellites, planets, impact events, and the origins of the Solar System.
Activities
The Meteoritical Society is the organization that records all known meteorites in its Meteoritical Bulletin. The Society also publishes one of the world's leading planetary science journals, Meteoritics & Planetary Science, and is a cosponsor with the Geochemical Society of the renowned journal Geochimica et Cosmochimica Acta.
The Society presents or cosponsors seven awards each year:
The Leonard Medal, awarded since 1966 in honor of the first President of the Society, Frederick C. Leonard, is given for outstanding contributions to the science of meteoritics and closely allied fields.
The Barringer Medal, awarded since 1984 and cosponsored by the Barringer Crater Company, recognizes outstanding work in the field of impact cratering and/or work that has led to a better understanding of impact phenomena. The Prize is given in memory of D. Moreau Barringer Sr. and his son D. Moreau Barringer Jr.
The Nier Prize recognizes outstanding research in meteoritics and allied fields by young (under age 35) scientists. It has been awarded since 1996 in honor of the late physicist and geochemist, Alfred O. C. Nier.
The Paul Pellas-Graham Ryder Award, cosponsored by the Planetary Geology Division of the Geological Society of America, is given for undergraduate and graduate students who are first author of a planetary science paper published in a peer-reviewed scientific journal. It has been given since 2000, and honors the memories of the incomparable meteoriticist Paul Pellas and lunar scientist Graham Ryder.
The Meteoritical Society's Service Award is for members who have advanced the goals of the Society to promote research and education in meteoritics and planetary science in ways other than by conducting scientific research. The first award was presented in 2006.
The Gordon A. McKay Award is for the best oral presentation by a student at the annual meeting of the society. It honors the memory of planetary scientist Gordon A. McKay. The first award was presented in 2009.
The Jessberger Award is awarded to a mid-career female scientist in the field of isotope cosmochemistry. The award was endowed by the family of geochemist Elmar Jessberger. The award is given every other year.
The Meteoritical Society hosts an annual meeting during the summer, which generally alternates between North America and Europe. It has also held meetings in South Africa, Australia, Brazil, and Japan. The next meeting will be August 13-18, 2023 at UCLA.
See also
Meteoritics
External links
Meteoritical Society website
Meteoritics & Planetary Science website
Geochimica et Cosmochimica Acta
Meteoritical Bulletin
Award Winners of the Meteoritical Society
History of the Meteoritical Society: 1933 to 1993, by Ursula B. Marvin
The British and Irish Meteorite Society
Meteorite organizations
Fellows of the Meteoritical Society
Planetary defense organizations
Scientific organizations established in 1933 | The Meteoritical Society | [
"Astronomy"
] | 701 | [
"Planetary defense organizations",
"Astronomy organizations"
] |
2,209,432 | https://en.wikipedia.org/wiki/Inelastic%20mean%20free%20path | The inelastic mean free path (IMFP) is an index of how far an electron on average travels through a solid before losing energy.
If a monochromatic, primary beam of electrons is incident on a solid surface, the majority of incident electrons lose their energy because they interact strongly with matter, leading to plasmon excitation, electron-hole pair formation, and vibrational excitation. The intensity of the primary electrons, I0, is damped as a function of the distance, d, into the solid. The intensity decay can be expressed as follows:
I(d) = I0 exp(−d/λ)
where I(d) is the intensity after the primary electron beam has traveled through the solid to a distance d. The parameter λ, termed the inelastic mean free path (IMFP), is defined as the distance an electron beam can travel before its intensity decays to 1/e of its initial value. (Note that this equation is closely related to the Beer–Lambert law.)
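As a minimal illustration (not part of the source), the exponential law above directly gives the fraction of signal surviving a given travel depth; the IMFP value used here is hypothetical:

import numpy as np

def remaining_fraction(depth_nm, imfp_nm):
    # I(d)/I0 = exp(-d / lambda): fraction of beam intensity left after
    # traveling depth_nm through a solid with inelastic mean free path imfp_nm.
    return np.exp(-np.asarray(depth_nm, dtype=float) / imfp_nm)

imfp = 2.0  # nm, an assumed order-of-magnitude value, not a measured one
for d in (1.0, 2.0, 6.0):  # nm
    print(f"d = {d} nm -> I/I0 = {remaining_fraction(d, imfp):.3f}")
# At d = imfp the intensity has fallen to 1/e of its initial value, and roughly
# 95% of a detected signal originates within about 3*imfp of the surface.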
The inelastic mean free path of electrons can roughly be described by a universal curve that is the same for all materials.
The knowledge of the IMFP is indispensable for several electron spectroscopy and microscopy measurements.
Applications of the IMFP in XPS
For example, the IMFP is employed to calculate the effective attenuation length (EAL), the mean escape depth (MED) and the information depth (ID). In addition, the IMFP can be used to make matrix corrections for the relative sensitivity factor in quantitative surface analysis. Moreover, the IMFP is an important parameter in Monte Carlo simulations of photoelectron transport in matter.
Calculations of the IMFP
Calculations of the IMFP are mostly based on the full Penn algorithm (FPA) developed by Penn, together with experimental optical constants or calculated optical data (for compounds). The FPA considers an inelastic scattering event and the dependence of the energy-loss function (ELF) on momentum transfer, which describes the probability for inelastic scattering as a function of momentum transfer.
Experimental measurements of the IMFP
To measure the IMFP, one well-known method is elastic-peak electron spectroscopy (EPES). This method measures the intensity of elastically backscattered electrons with a certain energy from a sample material in a certain direction. Applying a similar technique to materials whose IMFP is known, the measurements are compared with the results from Monte Carlo simulations under the same conditions. Thus, one obtains the IMFP of a certain material in a certain energy range. EPES measurements show a root-mean-square (RMS) difference of between 12% and 17% from the theoretically expected values. Calculated and experimental results show higher agreement at higher energies.
For electron energies in the range 30 keV – 1 MeV, IMFP can be directly measured by electron energy loss spectroscopy inside a transmission electron microscope, provided the sample thickness is known. Such measurements reveal that IMFP in elemental solids is not a smooth, but an oscillatory function of the atomic number.
For energies below 100 eV, the IMFP can be evaluated in high-energy secondary electron yield (SEY) experiments. To do so, the SEY for arbitrary incident energies between 0.1 keV and 10 keV is analyzed. Based on these experiments, a Monte Carlo model can be used to simulate the SEYs and determine the IMFP below 100 eV.
Predictive formulas
Using the dielectric formalism, the IMFP can be calculated by solving the following integral:
with the minimum (maximum) energy loss (), the dielectric function , the energy loss function (ELF) and the smallest and largest momentum transfer . In general, solving this integral is quite challenging and only applies for energies above 100 eV. Thus, (semi)empirical formulas were introduced to determine the IMFP.
A first approach is to calculate the IMFP by an approximate form of the relativistic Bethe equation for inelastic scattering of electrons in matter. Equation holds for energies between 50 eV and 200 keV:
with
and
and the electron energy in eV above the Fermi level (conductors) or above the bottom of the conduction band (non-conductors). is the electron mass, the vacuum velocity of light, is the number of valence electrons per atom or molecule, describes the density (in ), is the atomic or molecular weight and , , and are parameters determined in the following. Equation calculates the IMFP and its dependence on the electron energy in condensed matter.
Equation was further developed to find the relations for the parameters , , and for energies between 50 eV and 2 keV:
Here, the bandgap energy is given in eV. These equations are also known as the TPP-2M equations and are in general applicable for energies between 50 eV and 200 keV. Neglecting a few materials (diamond, graphite, Cs, cubic-BN and hexagonal BN) that do not follow these equations (due to deviations in ), the TPP-2M equations show precise agreement with the measurements.
Another approach based on Equation to determine the IMFP is the S1 formula. This formula can be applied for energies between 100 eV and 10 keV:
with the atomic number (average atomic number for a compound), or ( is the heat of formation of a compound in eV per atom) and the average atomic spacing :
with the Avogadro constant and the stoichiometric coefficients and describing binary compounds . In this case, the atomic number becomes
with the atomic numbers and of the two constituents. This S1 formula shows higher agreement with measurements compared to Equation .
Calculating the IMFP with either the TPP-2M formula or the S1 formula requires knowledge of different parameters. Applying the TPP-2M formula one needs to know , and for conducting materials (and also for non-conductors). Employing the S1 formula, knowledge of the atomic number (average atomic number for compounds), and is required for conductors. If non-conducting materials are considered, one also needs to know either or .
An analytical formula for calculating the IMFP down to 50 eV was proposed in 2021. Therefore, an exponential term was added to an analytical formula already derived from that was applicable for energies down to 500 eV:
For relativistic electrons it holds:
with the electron velocity , and . denotes the velocity of light. and are given in nanometers. The constants in and are defined as follows:
IMFP data
IMFP data can be collected from the National Institute of Standards and Technology (NIST) Electron Inelastic-Mean-Free-Path Database or the NIST Database for the Simulation of Electron Spectra for Surface Analysis (SESSA). The data contains IMFPs determined by EPES for energies below 2 keV. Otherwise, IMFPs can be determined from the TPP-2M or the S1 formula.
See also
Beer–Lambert law
Scattering theory
References
Atomic, molecular, and optical physics | Inelastic mean free path | [
"Physics",
"Chemistry"
] | 1,411 | [
"Atomic",
" molecular",
" and optical physics"
] |
2,209,683 | https://en.wikipedia.org/wiki/Calcium%20phosphide | Calcium phosphide (CP) is the inorganic compound with the formula Ca3P2. It is one of several phosphides of calcium, being described as the salt-like material composed of Ca2+ and P3−. Other, more exotic calcium phosphides have the formula CaP / Ca2P2, CaP3, and Ca5P8.
Ca3P2 has the appearance of a red-brown crystalline powder or grey lumps. Its trade names are Photophor for incendiary use and Polytanol for use as a rodenticide.
Preparation, history and structure
It may be formed by reaction of the elements, but it is more commonly prepared by carbothermal reduction of calcium phosphate:
Ca3(PO4)2 + 8 C → Ca3P2 + 8 CO
This is also how it was accidentally discovered by Smithson Tennant in 1791, when he reduced calcium carbonate with phosphorus while verifying the composition of carbon dioxide proposed by Antoine Lavoisier.
The structure of the room temperature form of Ca3P2 has not been confirmed by X-ray crystallography. A high temperature phase has been characterized by Rietveld refinement. Ca2+ centers are octahedral.
Uses
Metal phosphides are used as rodenticides. A mixture of food and calcium phosphide is left where the rodents can eat it. The acid in the digestive system of the rodent reacts with the phosphide to generate the toxic gas phosphine. This method of vermin control has possible use in places where rodents have become resistant to many of the common warfarin-type (anticoagulant) poisons. Other pesticides similar to calcium phosphide are zinc phosphide and aluminium phosphide.
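The phosphine-releasing step can be sketched as a simple hydrolysis (the acid-mediated reaction in the digestive tract is analogous):
Ca3P2 + 6 H2O → 3 Ca(OH)2 + 2 PH3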
Calcium phosphide is also used in fireworks, torpedoes, self-igniting naval pyrotechnic flares, and various water-activated ammunition. During the 1920s and 1930s, Charles Kingsford Smith used separate buoyant canisters of calcium carbide and calcium phosphide as naval flares lasting up to ten minutes. It is speculated that calcium phosphide—made by boiling bones in urine, within a closed vessel—was an ingredient of some ancient Greek fire formulas.
Calcium phosphide is a common impurity in calcium carbide, which may cause the resulting phosphine-contaminated acetylene to ignite spontaneously.
See also
Phosphorus
References
Phosphides
Calcium compounds
Rodenticides
Fumigants | Calcium phosphide | [
"Biology"
] | 536 | [
"Biocides",
"Rodenticides"
] |
2,209,688 | https://en.wikipedia.org/wiki/Einstein%20coefficients | In atomic, molecular, and optical physics, the Einstein coefficients are quantities describing the probability of absorption or emission of a photon by an atom or molecule. The Einstein A coefficients are related to the rate of spontaneous emission of light, and the Einstein B coefficients are related to the absorption and stimulated emission of light. Throughout this article, "light" refers to any electromagnetic radiation, not necessarily in the visible spectrum.
These coefficients are named after Albert Einstein, who proposed them in 1916.
Spectral lines
In physics, one thinks of a spectral line from two viewpoints.
An emission line is formed when an atom or molecule makes a transition from a particular discrete energy level E2 of an atom to a lower energy level E1, emitting a photon of a particular energy and wavelength. A spectrum of many such photons will show an emission spike at the wavelength associated with these photons.
An absorption line is formed when an atom or molecule makes a transition from a lower discrete energy state, E1, to a higher one, E2, with a photon being absorbed in the process. These absorbed photons generally come from background continuum radiation (the full spectrum of electromagnetic radiation) and a spectrum will show a drop in the continuum radiation at the wavelength associated with the absorbed photons.
The two states must be bound states in which the electron is bound to the atom or molecule, so the transition is sometimes referred to as a "bound–bound" transition, as opposed to a transition in which the electron is ejected out of the atom completely ("bound–free" transition) into a continuum state, leaving an ionized atom, and generating continuum radiation.
A photon with an energy equal to the difference E2 − E1 between the energy levels is released or absorbed in the process. The frequency ν at which the spectral line occurs is related to the photon energy by Bohr's frequency condition E2 − E1 = hν, where h denotes the Planck constant.
Emission and absorption coefficients
An atomic spectral line refers to emission and absorption events in a gas in which n2 is the density of atoms in the upper-energy state for the line, and n1 is the density of atoms in the lower-energy state for the line.
The emission of atomic line radiation at frequency ν may be described by an emission coefficient ε with units of energy/(time × volume × solid angle). ε dt dV dΩ is then the energy emitted by a volume element dV in time dt into solid angle dΩ. For atomic line radiation,
ε = (hν/4π) n2 A21
where A21 is the Einstein coefficient for spontaneous emission, which is fixed by the intrinsic properties of the relevant atom for the two relevant energy levels.
The absorption of atomic line radiation may be described by an absorption coefficient κ' with units of 1/length. The expression κ' dx gives the fraction of intensity absorbed for a light beam at frequency ν while traveling distance dx. The absorption coefficient is proportional to the net rate factor n1 B12 − n2 B21, where B12 and B21 are the Einstein coefficients for photon absorption and induced emission respectively. Like the coefficient A21, these are also fixed by the intrinsic properties of the relevant atom for the two relevant energy levels. For thermodynamics and for the application of Kirchhoff's law, it is necessary that the total absorption be expressed as the algebraic sum of two components, described respectively by B12 and B21, which may be regarded as positive and negative absorption, which are, respectively, the direct photon absorption, and what is commonly called stimulated or induced emission.
The above equations have ignored the influence of the spectroscopic line shape. To be accurate, the above equations need to be multiplied by the (normalized) spectral line shape, in which case the units will change to include a 1/Hz term.
Under conditions of thermodynamic equilibrium, the number densities n2 and n1, the Einstein coefficients, and the spectral energy density ρ(ν) provide sufficient information to determine the absorption and emission rates.
Equilibrium conditions
The number densities n2 and n1 are set by the physical state of the gas in which the spectral line occurs, including the local spectral radiance (or, in some presentations, the local spectral radiant energy density). When that state is either one of strict thermodynamic equilibrium, or one of so-called "local thermodynamic equilibrium", then the distribution of atomic states of excitation (which includes n2 and n1) determines the rates of atomic emissions and absorptions to be such that Kirchhoff's law of equality of radiative absorptivity and emissivity holds. In strict thermodynamic equilibrium, the radiation field is said to be black-body radiation and is described by Planck's law. For local thermodynamic equilibrium, the radiation field does not have to be a black-body field, but the rate of interatomic collisions must vastly exceed the rates of absorption and emission of quanta of light, so that the interatomic collisions entirely dominate the distribution of states of atomic excitation. Circumstances occur in which local thermodynamic equilibrium does not prevail, because the strong radiative effects overwhelm the tendency to the Maxwell–Boltzmann distribution of molecular velocities. For example, in the atmosphere of the Sun, the great strength of the radiation dominates. In the upper atmosphere of the Earth, at altitudes over 100 km, the rarity of intermolecular collisions is decisive.
In the cases of thermodynamic equilibrium and of local thermodynamic equilibrium, the number densities of the atoms, both excited and unexcited, may be calculated from the Maxwell–Boltzmann distribution, but for other cases, (e.g. lasers) the calculation is more complicated.
Einstein coefficients
In 1916, Albert Einstein proposed that there are three processes occurring in the formation of an atomic spectral line. The three processes are referred to as spontaneous emission, stimulated emission, and absorption. With each is associated an Einstein coefficient, which is a measure of the probability of that particular process occurring. Einstein considered the case of isotropic radiation of frequency ν and spectral energy density ρ(ν). Paul Dirac derived the coefficients in a 1927 paper titled "The Quantum Theory of the Emission and Absorption of Radiation".
Various formulations
Hilborn has compared various formulations for derivations for the Einstein coefficients, by various authors. For example, Herzberg works with irradiance and wavenumber; Yariv works with energy per unit volume per unit frequency interval, as is the case in the more recent (2008) formulation. Mihalas & Weibel-Mihalas work with radiance and frequency, as does Chandrasekhar, and Goody & Yung; Loudon uses angular frequency and radiance.
Spontaneous emission
Spontaneous emission is the process by which an electron "spontaneously" (i.e. without any outside influence) decays from a higher energy level to a lower one. The process is described by the Einstein coefficient A21 (s−1), which gives the probability per unit time that an electron in state 2 with energy E2 will decay spontaneously to state 1 with energy E1, emitting a photon with an energy E2 − E1 = hν. Due to the energy-time uncertainty principle, the transition actually produces photons within a narrow range of frequencies called the spectral linewidth. If ni is the number density of atoms in state i, then the change in the number density of atoms in state 2 per unit time due to spontaneous emission will be
dn2/dt = −A21 n2
The same process results in an increase in the population of state 1:
dn1/dt = A21 n2
Stimulated emission
Stimulated emission (also known as induced emission) is the process by which an electron is induced to jump from a higher energy level to a lower one by the presence of electromagnetic radiation at (or near) the frequency of the transition. From the thermodynamic viewpoint, this process must be regarded as negative absorption. The process is described by the Einstein coefficient B21 (m3 J−1 s−2), which gives the probability per unit time per unit energy density of the radiation field per unit frequency that an electron in state 2 with energy E2 will decay to state 1 with energy E1, emitting a photon with an energy E2 − E1 = hν. The change in the number density of atoms in state 1 per unit time due to induced emission will be
dn1/dt = B21 n2 ρ(ν)
where ρ(ν) denotes the spectral energy density of the isotropic radiation field at the frequency of the transition (see Planck's law).
Stimulated emission is one of the fundamental processes that led to the development of the laser. Laser radiation is, however, very far from the present case of isotropic radiation.
Photon absorption
Absorption is the process by which a photon is absorbed by the atom, causing an electron to jump from a lower energy level to a higher one. The process is described by the Einstein coefficient B12 (m3 J−1 s−2), which gives the probability per unit time per unit energy density of the radiation field per unit frequency that an electron in state 1 with energy E1 will absorb a photon with an energy E2 − E1 = hν and jump to state 2 with energy E2. The change in the number density of atoms in state 1 per unit time due to absorption will be
dn1/dt = −B12 n1 ρ(ν)
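As an illustrative sketch (not from the source), the three processes above can be combined into two-level rate equations and integrated numerically; all numerical values below are hypothetical and chosen only to make the steady state easy to check:

# Hypothetical coefficients and radiation density (illustrative only).
A21 = 1.0e6    # spontaneous emission rate, 1/s
B21 = 2.0e20   # stimulated emission coefficient, m^3 J^-1 s^-2
B12 = 2.0e20   # absorption coefficient (g1 = g2 assumed, so B12 = B21)
rho = 1.0e-15  # spectral energy density at the transition frequency, J s m^-3

n1, n2 = 1.0, 0.0  # all population initially in the lower state
dt = 1.0e-9        # time step in s, much shorter than 1/(A21 + B21*rho)
for _ in range(200_000):
    up = B12 * rho * n1             # absorption: 1 -> 2
    down = (A21 + B21 * rho) * n2   # spontaneous + stimulated emission: 2 -> 1
    n1 += (down - up) * dt
    n2 += (up - down) * dt

print(n2 / n1)  # approaches B12*rho / (A21 + B21*rho), here about 0.167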
Detailed balancing
The Einstein coefficients are fixed probabilities per time associated with each atom, and do not depend on the state of the gas of which the atoms are a part. Therefore, any relationship that we can derive between the coefficients at, say, thermodynamic equilibrium will be valid universally.
At thermodynamic equilibrium, we will have a simple balancing, in which the net change in the number of any excited atoms is zero, being balanced by loss and gain due to all processes. With respect to bound-bound transitions, we will have detailed balancing as well, which states that the net exchange between any two levels will be balanced. This is because the probabilities of transition cannot be affected by the presence or absence of other excited atoms. Detailed balance (valid only at equilibrium) requires that the change in time of the number of atoms in level 1 due to the above three processes be zero:
0 = A21 n2 + B21 n2 ρ(ν) − B12 n1 ρ(ν)
Along with detailed balancing, at temperature we may use our knowledge of the equilibrium energy distribution of the atoms, as stated in the Maxwell–Boltzmann distribution, and the equilibrium distribution of the photons, as stated in Planck's law of black body radiation to derive universal relationships between the Einstein coefficients.
From the Boltzmann distribution we have for the number of excited atomic species i:
ni = (n gi / Z) exp(−Ei / kT)
where n is the total number density of the atomic species, excited and unexcited, k is the Boltzmann constant, T is the temperature, gi is the degeneracy (also called the multiplicity) of state i, and Z is the partition function. From Planck's law of black-body radiation at temperature T we have for the spectral radiance (radiance is energy per unit time per unit solid angle per unit projected area, when integrated over an appropriate spectral interval) at frequency ν
Bν(T) = (2hν3/c2) · 1/(exp(hν/kT) − 1)
where c is the speed of light and h is the Planck constant.
Substituting these expressions into the equation of detailed balancing and remembering that E2 − E1 = hν yields
or
The above equation must hold at any temperature, so from one gets
and from
Therefore, the three Einstein coefficients are interrelated by
g1 B12 = g2 B21
and
A21 = (8πhν3/c3) B21
where the B coefficients are defined with respect to the spectral energy density ρ(ν).
When this relation is inserted into the original equation, one can also find a relation between and , involving Planck's law.
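For reference, a compact restatement of the detailed-balance argument (a sketch using the energy-density convention for the B coefficients, consistent with the definitions above):

\begin{aligned}
n_2 A_{21} + n_2 B_{21}\,\rho(\nu) &= n_1 B_{12}\,\rho(\nu), \qquad
\frac{n_2}{n_1} = \frac{g_2}{g_1}\,e^{-h\nu/kT} \\
\Rightarrow\quad \rho(\nu) &= \frac{A_{21}/B_{21}}{\dfrac{g_1 B_{12}}{g_2 B_{21}}\,e^{h\nu/kT}-1}
\end{aligned}

Comparing this with Planck's law, ρ(ν) = (8πhν3/c3)/(exp(hν/kT) − 1), forces g1 B12 = g2 B21 and A21 = (8πhν3/c3) B21.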
Oscillator strengths
The oscillator strength is defined by the following relation to the cross section for absorption:
where is the electron charge, is the electron mass, and and are normalized distribution functions in frequency and angular frequency respectively.
This allows all three Einstein coefficients to be expressed in terms of the single oscillator strength associated with the particular atomic spectral line:
Dipole approximation
The values of the A and B coefficients can be calculated using quantum mechanics, where the dipole approximation in time-dependent perturbation theory is used. While the calculation of the B coefficient can be done easily, that of the A coefficient requires using results of second quantization. This is because the theory developed by the dipole approximation and time-dependent perturbation theory gives a semiclassical description of electronic transitions which goes to zero as the perturbing fields go to zero. The A coefficient, which governs spontaneous emission, should not go to zero as the perturbing fields go to zero. The result for the transition rates between different electronic levels as a result of spontaneous emission is given as (in SI units):
For B coefficient, straightforward application of dipole approximation in time dependent perturbation theory yields (in SI units):
Note that the rate of transition formula depends on dipole moment operator. For higher order approximations, it involves quadrupole moment and other similar terms.
Here, the B coefficients are chosen to correspond to energy distribution function. Often these different definitions of B coefficients are distinguished by superscript, for example, where term corresponds to frequency distribution and term corresponds to distribution. The formulas for B coefficients varies inversely to that of the energy distribution chosen, so that the transition rate is same regardless of convention.
Hence, the A and B coefficients are calculated using the dipole approximation as:
where and B coefficients correspond to energy distribution function.
Hence the following ratios are also derived:
and
Derivation of Planck's law
It follows from theory that:
where and are number of occupied energy levels of and respectively, where . Note that from time dependent perturbation theory application, the fact that only radiation whose is close to value of can produce respective stimulated emission or absorption, is used.
Where Maxwell distribution involving and ensures
Solving for for equilibrium condition using the above equations and ratios while generalizing to , we get:
which is the angular frequency energy distribution from Planck's law.
See also
Transition dipole moment
Oscillator strength
Breit–Wigner distribution
Electronic configuration
Fano resonance
Siegbahn notation
Atomic spectroscopy
Molecular radiation, continuous spectra emitted by molecules
References
Cited bibliography
Chandrasekhar, S. (1950). Radiative Transfer, Oxford University Press, Oxford.
Garrison, J. C., Chiao, R. Y. (2008). Quantum Optics, Oxford University Press, Oxford UK, .
Goody, R. M., Yung, Y. L. (1989). Atmospheric Radiation: Theoretical Basis, 2nd edition, Oxford University Press, Oxford, New York, 1989, .
Translated as "Quantum-theoretical Re-interpretation of kinematic and mechanical relations" in
Herzberg, G. (1950). Molecular Spectroscopy and Molecular Structure, vol. 1, Diatomic Molecules, second edition, Van Nostrand, New York.
Loudon, R. (1973/2000). The Quantum Theory of Light, (first edition 1973), third edition 2000, Oxford University Press, Oxford UK, .
Mihalas, D., Weibel-Mihalas, B. (1984). Foundations of Radiation Hydrodynamics, Oxford University Press, New York .
Yariv, A. (1967/1989). Quantum Electronics, third edition, John Wiley & sons, New York, .
Other reading
External links
Emission Spectra from various light sources
Emission spectroscopy | Einstein coefficients | [
"Physics",
"Chemistry"
] | 3,060 | [
"Emission spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
2,209,960 | https://en.wikipedia.org/wiki/Saccharomyces%20uvarum | Saccharomyces uvarum is a species of yeast that is commonly found in fermented beverages, particularly those fermented at colder temperatures. It was originally described by Martinus Willem Beijerinck in 1898, but was long considered identical to S. bayanus. In 2000 and 2005, genetic investigations of various Saccharomyces species indicated that S. uvarum is genetically distinct from S. bayanus and should be considered a unique species.
It is a bottom-fermenting yeast, so-called because it does not form the foam on top of the wort that top-fermenting yeast does.
References
uvarum
Yeasts
Yeasts used in brewing
Fungus species | Saccharomyces uvarum | [
"Biology"
] | 146 | [
"Yeasts",
"Fungi",
"Fungus species"
] |
2,210,064 | https://en.wikipedia.org/wiki/Cognitive%20revolution | The cognitive revolution was an intellectual movement that began in the 1950s as an interdisciplinary study of the mind and its processes, from which emerged a new field known as cognitive science. The preexisting relevant fields were psychology, linguistics, computer science, anthropology, neuroscience, and philosophy. The approaches used were developed within the then-nascent fields of artificial intelligence, computer science, and neuroscience. In the 1960s, the Harvard Center for Cognitive Studies and the Center for Human Information Processing at the University of California, San Diego were influential in developing the academic study of cognitive science. By the early 1970s, the cognitive movement had surpassed behaviorism as a psychological paradigm. Furthermore, by the early 1980s the cognitive approach had become the dominant line of research inquiry across most branches in the field of psychology.
A key goal of early cognitive psychology was to apply the scientific method to the study of human cognition. Some of the main ideas and developments from the cognitive revolution were the use of the scientific method in cognitive science research, the necessity of mental systems to process sensory input, the innateness of these systems, and the modularity of the mind. Important publications in triggering the cognitive revolution include psychologist George Miller's 1956 article "The Magical Number Seven, Plus or Minus Two" (one of the most frequently cited papers in psychology), linguist Noam Chomsky's Syntactic Structures (1957) and "Review of B. F. Skinner's Verbal Behavior" (1959), and foundational works in the field of artificial intelligence by John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon, such as the 1958 article "Elements of a Theory of Human Problem Solving". Ulric Neisser's 1967 book Cognitive Psychology was also a landmark contribution.
Historical background
Prior to the cognitive revolution, behaviorism was the dominant trend in psychology in the United States. Behaviorists were interested in "learning", which was seen as "the novel association of stimuli with responses." Animal experiments played a significant role in behaviorist research, and prominent behaviorist J. B. Watson, interested in describing the responses of humans and animals as one group, stated that there was no need to distinguish between the two. Watson hoped to learn to predict and control behavior through his research. The popular Hull-Spence stimulus-response approach was, according to George Mandler, impossible to use to research topics that held the interest of cognitive scientists, like memory and thought, because both the stimulus and the response were thought of as completely physical events. Behaviorists typically did not research these subjects. B. F. Skinner, a functionalist behaviorist, criticized certain mental concepts like instinct as "explanatory fiction(s)", ideas that assume more than humans actually know about a mental concept. Various types of behaviorists had different views on the exact role (if any) that consciousness and cognition played in behavior. Although behaviorism was popular in the United States, Europe was not particularly influenced by it, and research on cognition could easily be found in Europe during this time.
Noam Chomsky has framed the cognitive and behaviorist positions as rationalist and empiricist, respectively, which are philosophical positions that arose long before behaviorism became popular and the cognitive revolution occurred. Empiricists believe that humans acquire knowledge only through sensory input, while rationalists believe that there is something beyond sensory experience that contributes to human knowledge. However, whether Chomsky's position on language fits into the traditional rationalist approach has been questioned by philosopher John Cottingham.
George Miller, one of the scientists involved in the cognitive revolution, sets the date of its beginning as September 11, 1956, when several researchers from fields like experimental psychology, computer science, and theoretical linguistics presented their work on cognitive science-related topics at a meeting of the 'Special Interest Group in Information Theory' at the Massachusetts Institute of Technology. This interdisciplinary cooperation went by several names like cognitive studies and information-processing psychology but eventually came to be known as cognitive science. Grants from the Alfred P. Sloan Foundation in the 1970s advanced interdisciplinary understanding in the relevant fields and supported the research that led to the field of cognitive neuroscience.
Main ideas
George Miller states that six fields participated in the development of cognitive science: psychology, linguistics, computer science, anthropology, neuroscience, and philosophy, with the first three playing the main roles.
Scientific method
A key goal of early cognitive psychology was to apply the scientific method to the study of human cognition. This was done by designing experiments that used computational models of artificial intelligence to systematically test theories about human mental processes in a controlled laboratory setting.
Mediation and information processing
When defining the "Cognitive Approach," Ulric Neisser says that humans can only interact with the "real world" through intermediary systems that process information like sensory input. As understood by a cognitive scientist, the study of cognition is the study of these systems and the ways they process information from the input. The processing includes not just the initial structuring and interpretation of the input but also the storage and later use.
Steven Pinker claims that the cognitive revolution bridged the gap between the physical world and the world of ideas, concepts, meanings and intentions. It unified the two worlds with a theory that mental life can be explained in terms of information, computation and feedback.
Innateness
In his 1975 book Reflections on Language, Noam Chomsky questions how humans can know so much, despite relatively limited input. He argues that they must have some kind of innate, domain-specific learning mechanism that processes input. Chomsky observes that physical organs do not develop based on their experience, but based on some inherent genetic coding, and wrote that the mind should be treated the same way. He says that there is no question that there is some kind of innate structure in the mind, but it is less agreed upon whether the same structure is used by all organisms for different types of learning. He compares humans to rats in the task of maze running to show that the same learning theory cannot be used for different species because they would be equally good at what they are learning, which is not the case. He also says that even within humans, using the same learning theory for multiple types of learning could be possible, but there is no solid evidence to suggest it. He proposes a hypothesis that claims that there is a biologically based language faculty that organizes the linguistic information in the input and constrains human language to a set of particular types of grammars. He introduces universal grammar, a set of inherent rules and principles that all humans have to govern language, and says that the components of universal grammar are biological. To support this, he points out that children seem to know that language has a hierarchical structure, and they never make mistakes that one would expect from a hypothesis that language is linear.
Steven Pinker has also written on this subject from the perspective of modern-day cognitive science. He says that modern cognitive scientists, like figures in the past such as Gottfried Wilhelm Leibniz (1646-1716), don't believe in the idea of the mind starting as a "blank slate." Though they have disputes on the nature-nurture diffusion, they all believe that learning is based on something innate to humans. Without this innateness, there will be no learning process. He points out that humans' acts are non-exhaustive, even though basic biological functions are finite. An example of this from linguistics is the fact that humans can produce infinite sentences, most of which are brand new to the speaker themselves, even though the words and phrases they have heard are not infinite.
Pinker, who agrees with Chomsky's idea of innate universal grammar, claims that although humans speak around six thousand mutually unintelligible languages, the grammatical programs in their minds differ far less than the actual speech. Many different languages can be used to convey the same concepts or ideas, which suggests there may be a common ground for all the languages.
Modularity of the mind
Pinker claims another important idea from the cognitive revolution was that the mind is modular, with many parts cooperating to generate a train of thought or an organized action. It has different distinct systems for different specific missions. Behaviors can vary across cultures, but the mental programs that generate the behaviors don't need to be varied.
Criticism
There have been criticisms of the typical characterization of the shift from behaviorism to cognitivism.
Henry L. Roediger III argues that the common narrative most people believe about the cognitive revolution is inaccurate. The narrative he describes states that psychology started out well but lost its way and fell into behaviorism, but this was corrected by the Cognitive Revolution, which essentially put an end to behaviorism. He claims that behavior analysis is actually still an active area of research that produces successful results in psychology and points to the Association for Behavior Analysis International as evidence. He claims that behaviorist research is responsible for successful treatments of autism, stuttering, and aphasia, and that most psychologists actually study observable behavior, even if they interpret their results cognitively. He believes that the change from behaviorism to cognitivism was gradual, slowly evolving by building on behaviorism.
Lachman and Butterfield were among the first to imply that cognitive psychology has a revolutionary origin. Thomas H. Leahey has criticized the idea that the introduction of behaviorism and the cognitive revolution were actually revolutions and proposed an alternative history of American psychology as "a narrative of research traditions."
Other authors criticize behaviorism, but they also criticize the cognitive revolution for having adopted new forms of anti-mentalism.
Cognitive psychologist Jerome Bruner criticized the adoption of the computational theory of mind and the exclusion of meaning from cognitive science, and he characterized one of the primary objects of the cognitive revolution as changing the study of psychology so that meaning was its core.
His understanding of the cognitive revolution revolves entirely around "meaning-making" and the hermeneutic description of how people go about this. He believes that the cognitive revolution steered psychology away from behaviorism and this was good, but then another form of anti-mentalism took its place: computationalism. Bruner states that the cognitive revolution should replace behaviorism rather than only modify it.
Neuroscientist Gerald Edelman argues in his book Bright Air, Brilliant Fire (1991) that a positive result of the emergence of "cognitive science" was the departure from "simplistic behaviorism". However, he adds, a negative result was the growing popularity of a total misconception of the nature of thought: the computational theory of mind or cognitivism, which asserts that the brain is a computer that processes symbols whose meanings are entities of the objective world. In this view, the symbols of the mind correspond exactly to entities or categories in the world defined by criteria of necessary and sufficient conditions, that is, classical categories. The representations would be manipulated according to certain rules that constitute a syntax.
Edelman rejects the idea that objects of the world come in classical categories, and also rejects the idea that the brain/mind is a computer. The author rejects behaviorism (a point he also makes in his 2006 book Second Nature: Brain Science and Human Knowledge), but also cognitivism (the computational-representational theory of the mind), since the latter conceptualizes the mind as a computer and meaning as objective correspondence. Furthermore, Edelman criticizes "functionalism", the idea that formal and abstract functional properties of the mind can be analyzed without making direct reference to the brain and its processes.
Edelman asserts that most of those who work in the field of cognitive psychology and cognitive science seem to adhere to this computational view, but he mentions some important exceptions. Exceptions include John Searle, Jerome Bruner, George Lakoff, Ronald Langacker, Alan Gauld, Benny Shanon, Claes von Hofsten, and others. Edelman argues that he agrees with the critical and dissenting approaches of these authors that are exceptions to the majority view of cognitivism.
Perceptual symbols, imagery and the cognitive neuroscience revolution
In their paper "The cognitive neuroscience revolution", Gualtiero Piccinini and Worth Boone argue that cognitive neuroscience emerged as a discipline in the late 1980s. Prior to that time, cognitive science and neuroscience had largely developed in isolation. Cognitive science developed between the 1950s and 1970s as an interdisciplinary field composed primarily of aspects of psychology, linguistics, and computer science. However, both classical symbolic computational theories and connectionist models developed largely independently of biological considerations. The authors argue that connectionist models were closer to symbolic models than to neurobiology.
Piccinini and Boone state that a revolutionary change is currently taking place: the move from cognitive science (autonomous from neuroscience) to cognitive neuroscience. The authors point out that many researchers who previously carried out psychological and behavioral studies now give properly cognitive neuroscientific explanations. They mention the example of Stephen Kosslyn, who postulated his theory of the pictorial format of mental images in the 1980s based on behavioral studies. Later, with the advent of magnetic resonance imaging technology, Kosslyn was able to show that when people imagine, the visual cortex is activated. This lent strong neuroscientific evidence to his theory of the pictorial format, refuting speculations about a supposed non-pictorial format of mental images.
Neuroscientist Joseph LeDoux in his book The Emotional Brain argues that cognitive science emerged around the middle of the 20th century, and is often described as 'the new science of the mind.' In fact, however, cognitive science is a science of only one part of the mind, the part that has to do with thinking, reasoning, and intellect. It leaves emotions out. "And minds without emotions are not really minds at all…"
Psychologist Lawrence Barsalou argues that human cognitive processing involves the simulation of perceptual, motor, and emotional states. The classical, 'intellectualist' view of cognition holds that it is essentially the processing of propositional information of a verbal or numerical type. Barsalou's theory, by contrast, explains human conceptual processing by the activation of regions of the sensory cortices of different modalities, as well as of the motor cortex, and by the simulation of embodied experiences (visual, auditory, emotional, motor) that ground meaning in experience situated in the world.
Modal symbols are those analogical mental representations linked to a specific sensory channel: for example, the representation of 'dog' through a visual image similar to a dog or through an auditory image of the barking of dogs, based on the memory of the experiences of seeing a dog or hearing its barking. Lawrence Barsalou's 'perceptual symbols' theory asserts that mental processes operate with modal symbols that maintain the sensory properties of perceptual experiences.
According to Barsalou (2020), the "grounded cognition" perspective in which his theory is framed asserts that cognition emerges from the interaction between amodal symbols, modal symbols, the body and the world. Therefore, this perspective does not rule out 'classical' symbols (amodal ones, such as those typical of verbal language or numerical reasoning) but rather considers that these interact with imagination, perception and action situated in the world.
See also
Digital infinity
Embodied cognition
Enactivism (psychology)
Human factors
Postcognitivism
Notes
References
Bruner, J. S. (1990). Acts of meaning. Cambridge, MA: Harvard University Press.
Mandler, G. (2007) A history of modern experimental psychology: From James and Wundt to cognitive science. Cambridge, MA: MIT Press.
Further reading
Books
Baars, Bernard J. (1986) The cognitive revolution in psychology. Guilford Press, New York.
Gardner, Howard (1986) The mind's new science: a history of the cognitive revolution. Basic Books, New York; reissued in 1998 with an epilogue by the author: "Cognitive science after 1984"
Johnson, David Martel and Emeling, Christina E. (1997) The future of the cognitive revolution. Oxford University Press, New York.
LePan, Don (1989) The cognitive revolution in Western culture. Macmillan, Basingstoke, England.
Murray, David J. (1995) Gestalt psychology and the cognitive revolution. Harvester Wheatsheaf, New York.
Olson, David R. (2007) Jerome Bruner: the cognitive revolution in educational theory. Continuum, London.
Richardson, Alan and Steen, Francis F. (editors) (2002) Literature and the cognitive revolution. Duke University Press, Durham, North Carolina, being Poetics today 23(1).
Royer, James M. (2005) The cognitive revolution in educational psychology. Information Age Publishing, Greenwich, Connecticut.
Simon, Herbert A. et al. (1992) Economics, bounded rationality and the cognitive revolution. E. Elgar, Aldershot, England.
Todd, James T. and Morris, Edward K. (editors) (1995) Modern perspectives on B. F. Skinner and contemporary behaviorism (Series: Contributions in psychology, no. 28). Greenwood Press, Westport, Connecticut.
Articles
Pinker, Steven (2011) "The Cognitive Revolution" Harvard Gazette
Cognitive psychology
Cognitive science
History of psychology
Philosophical schools and traditions
Revolutions by type
Western culture | Cognitive revolution | [
"Biology"
] | 3,535 | [
"Behavioural sciences",
"Behavior",
"Cognitive psychology"
] |
2,210,469 | https://en.wikipedia.org/wiki/Net2Phone | net2phone is a Cloud Communications provider offering cloud based telephony services to businesses worldwide. The company is a subsidiary of IDT Corporation.
History
net2phone was founded in 1990 by telecom entrepreneur Howard Jonas, the chairman and chief executive officer of net2phone’s parent company, IDT Corporation. The company was an early pioneer in the commercialization of voice-over-Internet protocol (VoIP) technologies leveraging the global carrier business and infrastructure of IDT and focusing on transitioning businesses and consumers from PSTN, traditional telecom interconnects, to Voice over IP.
On July 30, 1999, during the dot-com bubble, the company became a public company via an initial public offering, raising $81 million. Shares rose 77% on the first day of trading to $26 per share. After completion of the IPO, IDT owned 57% of the company. Within a few weeks, the shares increased another 100% in value, to $53 per share.
In March 2000, in a transaction facilitated by IDT CEO Howard Jonas, a consortium of telecommunications companies led by AT&T announced a $1.4 billion investment for a 32% stake in the company, buying shares for $75 each. The transaction was completed in August 2000. AOL had expressed an interest in buying all or part of the company but was not agreeable to the price.
In August 2000, Jonathan Fram, president of the company, left the company to join eVoice.
In September 2000, the company formed Adir Technologies, a joint venture with Cisco Systems. In March 2002, the company sued Cisco for breach of contract.
In February 2002, the company announced 110 layoffs, or 28% of its workforce.
In October 2004, Liore Alroy became chief executive officer of the company.
On March 13, 2006, IDT Corporation acquired the shares of the company that it did not already own for $2.05 per share.
In 2015, net2phone began providing Unified Communications as a Service (UCaaS) targeted to the SMB market. net2phone’s UCaaS initiative was developed by the Company’s management team led by its President, Jonah Fink.
Over the next 3 years, net2phone continued to expand its UCaaS offering into Argentina, Brazil, Colombia, Mexico, and Peru leveraging its local infrastructure, communication licenses and local staff all while selling in the respective market’s local currency and language sets.
Acquisitions
In 2001, the company acquired iPing.
In 2000, the company acquired Aplio, an internet appliance maker located in San Bruno, California.
Because unified communications demands more than just voice over IP, such as messaging, net2phone acquired Live Ninja in January 2017, a Miami-based provider of a customer-facing messaging and live chat management service.
In 2018, net2phone launched an updated version of its communications platform, incorporating the technology and capabilities from the Live Ninja acquisition.
Further expansion came in 2019 with the acquisition of Versature, a SaaS-based business communications and hosted VoIP provider serving the Canadian market.
Expansion continued in 2020 with the acquisition of RingSouth Europa, a business communications provider headquartered in Murcia, Spain.
In 2020, with the rise of the COVID-19 pandemic causing a shift in the workplace environment, net2phone launched a native integration into Microsoft Teams, as well as its own video conferencing platform, net2phone Huddle, followed by further integrations into CRM tools such as Salesforce and Zoho and collaboration tools such as Slack.
In 2022, net2phone acquired Integra CCS, a Contact Center as a Service (CCaaS) provider operating out of Uruguay.
Products
UNITE
UNITE is net2phone's Unified Communications as a Service (UCaaS) product, which provides businesses with voice, video, chat, text, and integrations. The product offers advanced call features, reporting, analytics, and integrations with popular SaaS tools that can be managed through a web-based interface.
uContact
net2phone's uContact is a Contact Center as a Service (CCaaS) platform introduced through their acquisition of Integra in 2022. uContact features a suite of contact center features, including omnichannel support, social media, chatbots, workflow management, and development tools.
Huddle
Huddle is net2phone's high-definition video conferencing platform, released in April 2020. Huddle conferences are passcode-protected and encrypted. Huddle includes several features including screen sharing, YouTube casting, chat messaging, and a raise hand option. The application is accessible from a desktop or mobile device.
net2phone AI
net2phone AI was released in July 2023 as an add-on service designed to optimize agent and client interactions. Key functionalities include sentiment analysis, automatic call transcription, auto-generated follow-up emails, auto-generated call summaries, AI-generated coaching notes, call analytics, and CRM integrations. net2phone AI is available in multiple languages and integrates with communication or voice platforms that support API webhooks.
SIP Trunking
net2phone offers SIP Trunking services, allowing businesses to merge voice and data into a unified communications platform without the need for equipment replacement. The SIP trunking solution includes features such as high-quality voice interactions, international calling, hybrid SIP and hosted support, increased security, codec support, and a stable, fully redundant network.
References
Communications in New Jersey
Telecommunications companies established in 1990
Companies based in Newark, New Jersey
Dot-com bubble
Instant messaging
VoIP software
Windows instant messaging clients
2000 mergers and acquisitions
2001 mergers and acquisitions
2017 mergers and acquisitions | Net2Phone | [
"Technology"
] | 1,177 | [
"Instant messaging"
] |
2,210,527 | https://en.wikipedia.org/wiki/Poster%20paint | Poster paint (also known as tempera paint in the US, poster color in Asia) is a distemper paint that usually uses starch, cornstarch, cellulose, gum-water or another glue size as its binder. It either comes in large bottles or jars or in a powdered form. It is normally a cheap paint used in school art classes.
Asian poster paints are similar to gouache, albeit with a thinner viscosity; they use gum arabic and/or dextrin as a binder and inexpensive, less lightfast pigments that are more coarsely ground, with added brighteners to keep the paints affordable. Poster colors are used in art classes, in animation production, and in scanning and printing. Notable brands that produce poster colors include Kokuyo Camlin, Monami, Pentel, Sakura, and Nicker.
See also
Gouache
Tempera, the common name for Poster paint in the US and also a fine art painting material using egg yolk as a binder
References
Ralph Mayer, The Artist's Handbook of Materials and Techniques, page 231
Paints
Early childhood education in the United States
Children's art | Poster paint | [
"Chemistry"
] | 240 | [
"Paints",
"Coatings"
] |
2,210,572 | https://en.wikipedia.org/wiki/Procarbazine | Procarbazine is a chemotherapy medication used for the treatment of Hodgkin's lymphoma and brain cancers. For Hodgkin's it is often used together with chlormethine, vincristine, and prednisone while for brain cancers such as glioblastoma multiforme it is used with lomustine and vincristine. It is typically taken by mouth.
Common side effects include low blood cell counts and vomiting. Other side effects include tiredness and depression. It is not recommended in people with severe liver or kidney problems. Use in pregnancy is known to harm the baby. Procarbazine is in the alkylating agent family of medications. How it works is not clearly known.
Procarbazine was approved for medical use in the United States in 1969. It is on the World Health Organization's List of Essential Medicines. In the United Kingdom a month of treatment cost the National Health Service 450 to 750 pounds.
Medical uses
When used to treat Hodgkin's lymphoma, it is often delivered as part of the BEACOPP regimen that includes bleomycin, etoposide, adriamycin, cyclophosphamide, vincristine (tradename Oncovin), prednisone, and procarbazine. The first combination chemotherapy developed for Hodgkin's lymphoma (HL), MOPP also included procarbazine (ABVD has supplanted MOPP as standard first line treatment for HL, with BEACOPP as an alternative for advanced/unfavorable HL). Alternatively, when used to treat certain brain tumors (malignant gliomas), it is often dosed as PCV when combined with lomustine (often called CCNU) and vincristine.
Dose should be adjusted for kidney disease or liver disease.
Side effects
Very common (greater than 10% of people experience them) adverse effects include loss of appetite, nausea and vomiting. Other side effects of unknown frequency include reduction in leukocytes, reduction in platelets, reduction in neutrophils, which can lead to increased infections including lung infections; severe allergy-like reactions that can lead to angioedema and skin reactions; lethargy; liver complications including jaundice and abnormal liver function tests; reproductive effects including reduction in sperm count and ovarian failure.
When combined with ethanol, procarbazine may cause a disulfiram-like reaction in some people.
It weakly inhibits MAO in the gastrointestinal system, so it can cause hypertensive crises if associated with the ingestion of tyramine-rich foods such as aged cheeses; this appears to be rare.
Procarbazine rarely causes chemotherapy-induced peripheral neuropathy, a progressive, enduring, often irreversible tingling numbness, intense pain, and hypersensitivity to cold, beginning in the hands and feet and sometimes involving the arms and legs.
Pharmacology
Procarbazine works, in part, as an alkylating agent and methylates guanine at the O-6 position (much like dacarbazine also does). Guanine is one of the four nucleotides that makes up DNA. The methylated DNA is prone to breakage, and RNA and protein synthesis is inhibited. Proliferating cancer cells need to replicate their DNA and undergo programmed cell death (apoptosis) in response to DNA strand breaks. Normal or non-proliferating cells are more apt to repair the DNA damage, but still some of the healthy cells will be damaged. Procarbazine is metabolized in the liver to an azo-derivative and then further metabolized by the cytochrome P-450 system to an active azoxy-derivative.
References
External links
MOPP Treatment Regimen
PCV Information
Procarbazine Drug Information Provided by Lexi-Comp – Merck Manual
RX Listing for Matulane
Benzamides
Cancer treatments
Disulfiram-like drugs
DNA replication inhibitors
Hydrazines
IARC Group 2A carcinogens
Isopropylamino compounds
Monoamine oxidase inhibitors
Mutagens
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Procarbazine | [
"Chemistry"
] | 892 | [
"Functional groups",
"Hydrazines"
] |
2,210,620 | https://en.wikipedia.org/wiki/Attic%20%28architecture%29 | In classical architecture, the term attic refers to a storey (or low wall) above the cornice of a classical façade. The decoration of the topmost part of a building was particularly important in ancient Greek architecture and this came to be seen as typifying the Attica style, the earliest example known being that of the monument of Thrasyllus in Athens.
It was largely employed in Ancient Rome, where their triumphal arches utilized it for inscriptions or for bas-relief sculpture. It was used also to increase the height of enclosure walls such as those of the Forum of Nerva. By the Italian revivalists it was utilized as a complete storey, pierced with windows, as found in Andrea Palladio's work in Vicenza and in Greenwich Hospital, London. One well-known large attic surmounts the entablature of St. Peter's Basilica, which measures in height.
Decorated attics with pinnacles are often associated with the Late Renaissance (Mannerist architecture) period in Poland and are viewed as a distinct feature of Polish historical architecture (attyka polska). Many examples can be found throughout the country, notably at Wawel Castle in Kraków, Gdańsk, Poznań, Lublin, Tarnów, Zamość, Sandomierz and Kazimierz Dolny. Possibly the best example of a rich Italianate attic is at Krasiczyn Castle.
This usage became current in the 17th century from the use of Attica style pilasters as adornments on the top story's façade. By the 18th century this meaning had been transferred to the space behind the wall of the highest story (i.e., directly under the roof), producing the modern meaning of the word "attic".
References
American Journal of Archaeology, Vol. 44, No. 1 (Jan. - Mar., 1940), pp. 159-161
Architectural elements
Rooms | Attic (architecture) | [
"Technology",
"Engineering"
] | 386 | [
"Building engineering",
"Rooms",
"Architectural elements",
"Components",
"Architecture"
] |
2,210,759 | https://en.wikipedia.org/wiki/Finite%20strain%20theory | In continuum mechanics, the finite strain theory—also called large strain theory, or large deformation theory—deals with deformations in which strains and/or rotations are large enough to invalidate assumptions inherent in infinitesimal strain theory. In this case, the undeformed and deformed configurations of the continuum are significantly different, requiring a clear distinction between them. This is commonly the case with elastomers, plastically deforming materials and other fluids and biological soft tissue.
Displacement field
Deformation gradient tensor
The deformation gradient tensor is related to both the reference and current configuration, as seen by the unit vectors and , therefore it is a two-point tensor.
Two types of deformation gradient tensor may be defined.
Due to the assumption of continuity of , has the inverse , where is the spatial deformation gradient tensor. Then, by the implicit function theorem, the Jacobian determinant must be nonsingular, i.e.
The material deformation gradient tensor is a second-order tensor that represents the gradient of the mapping function or functional relation , which describes the motion of a continuum. The material deformation gradient tensor characterizes the local deformation at a material point with position vector , i.e., deformation at neighbouring points, by transforming (linear transformation) a material line element emanating from that point from the reference configuration to the current or deformed configuration, assuming continuity in the mapping function , i.e. differentiable function of and time , which implies that cracks and voids do not open or close during the deformation. Thus we have,
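The role of the deformation gradient as the local linear map from material line elements to spatial ones can be sketched numerically. In the sketch below the motion chi, the evaluation point X, and the step size h are illustrative assumptions, not taken from the article; the code approximates the components of F by central finite differences of the mapping and then applies F to a small material line element.

```python
import numpy as np

def chi(X):
    """Illustrative smooth motion x = chi(X): a simple shear combined with small stretches."""
    X1, X2, X3 = X
    return np.array([1.1 * X1 + 0.3 * X2,
                     0.95 * X2,
                     X3])

def deformation_gradient(motion, X, h=1e-6):
    """Approximate F_iJ = d x_i / d X_J by central finite differences."""
    F = np.zeros((3, 3))
    for J in range(3):
        dX = np.zeros(3)
        dX[J] = h
        F[:, J] = (motion(X + dX) - motion(X - dX)) / (2.0 * h)
    return F

X = np.array([1.0, 2.0, 0.5])        # illustrative material point
F = deformation_gradient(chi, X)
J = np.linalg.det(F)                 # Jacobian determinant, must remain positive
dX = np.array([1e-3, 0.0, 0.0])      # small material line element
dx = F @ dX                          # its image in the current configuration
```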
Relative displacement vector
Consider a particle or material point with position vector in the undeformed configuration (Figure 2). After a displacement of the body, the new position of the particle indicated by in the new configuration is given by the vector position . The coordinate systems for the undeformed and deformed configuration can be superimposed for convenience.
Consider now a material point neighboring , with position vector . In the deformed configuration this particle has a new position given by the position vector . Assuming that the line segments and joining the particles and in both the undeformed and deformed configuration, respectively, to be very small, then we can express them as and . Thus from Figure 2 we have
where is the relative displacement vector, which represents the relative displacement of with respect to in the deformed configuration.
Taylor approximation
For an infinitesimal element , and assuming continuity on the displacement field, it is possible to use a Taylor series expansion around point , neglecting higher-order terms, to approximate the components of the relative displacement vector for the neighboring particle as
Thus, the previous equation can be written as
Time-derivative of the deformation gradient
Calculations that involve the time-dependent deformation of a body often require a time derivative of the deformation gradient to be calculated. A geometrically consistent definition of such a derivative requires an excursion into differential geometry but we avoid those issues in this article.
The time derivative of is
where is the (material) velocity. The derivative on the right hand side represents a material velocity gradient. It is common to convert that into a spatial gradient by applying the chain rule for derivatives, i.e.,
where is the spatial velocity gradient and where is the spatial (Eulerian) velocity at . If the spatial velocity gradient is constant in time, the above equation can be solved exactly to give
assuming at . There are several methods of computing the exponential above.
Related quantities often used in continuum mechanics are the rate of deformation tensor and the spin tensor defined, respectively, as:
The rate of deformation tensor gives the rate of stretching of line elements while the spin tensor indicates the rate of rotation or vorticity of the motion.
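As a rough illustration of this subsection, the sketch below assumes a constant spatial velocity gradient (a simple shear at an illustrative rate). It uses the standard results that, for constant l, the evolution equation for the deformation gradient integrates to a matrix exponential, and that the rate of deformation and spin tensors are the symmetric and antisymmetric parts of l. All numeric values are placeholders.

```python
import numpy as np
from scipy.linalg import expm

gamma_dot = 0.5                                 # illustrative shear rate
l = np.array([[0.0, gamma_dot, 0.0],            # constant spatial velocity gradient
              [0.0, 0.0,       0.0],
              [0.0, 0.0,       0.0]])

F0 = np.eye(3)                                  # deformation gradient at t = 0
t = 2.0
F = expm(l * t) @ F0                            # solution of dF/dt = l F for constant l

d = 0.5 * (l + l.T)                             # rate of deformation tensor (symmetric part)
w = 0.5 * (l - l.T)                             # spin tensor (antisymmetric part)
```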
The material time derivative of the inverse of the deformation gradient (keeping the reference configuration fixed) is often required in analyses that involve finite strains. This derivative is
The above relation can be verified by taking the material time derivative of and noting that .
Polar decomposition of the deformation gradient tensor
The deformation gradient , like any invertible second-order tensor, can be decomposed, using the polar decomposition theorem, into a product of two second-order tensors (Truesdell and Noll, 1965): an orthogonal tensor and a positive definite symmetric tensor, i.e., where the tensor is a proper orthogonal tensor, i.e., and , representing a rotation; the tensor is the right stretch tensor; and the left stretch tensor. The terms right and left mean that they are to the right and left of the rotation tensor , respectively. and are both positive definite, i.e. and for all non-zero , and symmetric tensors, i.e. and , of second order.
This decomposition implies that the deformation of a line element in the undeformed configuration onto in the deformed configuration, i.e., , may be obtained either by first stretching the element by , i.e. , followed by a rotation , i.e., ; or equivalently, by applying a rigid rotation first, i.e., , followed later by a stretching , i.e., (See Figure 3).
Due to the orthogonality of
so that and have the same eigenvalues or principal stretches, but different eigenvectors or principal directions and , respectively. The principal directions are related by
This polar decomposition, which is unique as is invertible with a positive determinant, is a corollary of the singular-value decomposition.
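A minimal numpy sketch of the polar decomposition F = RU = VR, computed via the singular-value decomposition; the entries of the deformation gradient below are illustrative, not taken from the article.

```python
import numpy as np

F = np.array([[1.2, 0.5, 0.0],                  # illustrative deformation gradient, det F > 0
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.9]])

Wsvd, s, Vh = np.linalg.svd(F)                  # F = Wsvd diag(s) Vh
R = Wsvd @ Vh                                   # rotation tensor (proper orthogonal here)
U = Vh.T @ np.diag(s) @ Vh                      # right (material) stretch tensor
V = Wsvd @ np.diag(s) @ Wsvd.T                  # left (spatial) stretch tensor

assert np.allclose(F, R @ U)                    # F = R U
assert np.allclose(F, V @ R)                    # F = V R
assert np.allclose(R.T @ R, np.eye(3))          # R is orthogonal
```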
Transformation of a surface and volume element
To transform quantities that are defined with respect to areas in a deformed configuration to those relative to areas in a reference configuration, and vice versa, we use Nanson's relation, expressed as where is an area of a region in the deformed configuration, is the same area in the reference configuration, and is the outward normal to the area element in the current configuration while is the outward normal in the reference configuration, is the deformation gradient, and .
The corresponding formula for the transformation of the volume element is
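Both transformation rules can be checked numerically. The sketch below builds a small reference parallelepiped, maps its edge vectors with an illustrative deformation gradient, and verifies the volume relation dv = J dV and Nanson's relation n da = J F^{-T} N dA; the numeric values are placeholders.

```python
import numpy as np

F = np.array([[1.2, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.9]])                 # illustrative deformation gradient
J = np.linalg.det(F)

# Reference parallelepiped spanned by three small material edge vectors
dX1, dX2, dX3 = np.array([1e-3, 0, 0]), np.array([0, 1e-3, 0]), np.array([0, 0, 1e-3])
dx1, dx2, dx3 = F @ dX1, F @ dX2, F @ dX3       # deformed edge vectors

dV = np.dot(dX1, np.cross(dX2, dX3))            # reference volume element
dv = np.dot(dx1, np.cross(dx2, dx3))            # deformed volume element
assert np.isclose(dv, J * dV)                   # dv = J dV

N_dA = np.cross(dX1, dX2)                       # reference area vector N dA
n_da = np.cross(dx1, dx2)                       # deformed area vector n da
assert np.allclose(n_da, J * np.linalg.inv(F).T @ N_dA)   # Nanson's relation
```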
Fundamental strain tensors
A strain tensor is defined by the IUPAC as:
"A symmetric tensor that results when a deformation gradient tensor is factorized into a rotation tensor followed or preceded by a symmetric tensor".
Since a pure rotation should not induce any strains in a deformable body, it is often convenient to use rotation-independent measures of deformation in continuum mechanics. As a rotation followed by its inverse rotation leads to no change () we can exclude the rotation by multiplying the deformation gradient tensor by its transpose.
Several rotation-independent deformation gradient tensors (or "deformation tensors", for short) are used in mechanics. In solid mechanics, the most popular of these are the right and left Cauchy–Green deformation tensors.
Cauchy strain tensor (right Cauchy–Green deformation tensor)
In 1839, George Green introduced a deformation tensor known as the right Cauchy–Green deformation tensor or Green's deformation tensor (the IUPAC recommends that this tensor be called the Cauchy strain tensor), defined as:
Physically, the Cauchy–Green tensor gives us the square of local change in distances due to deformation, i.e.
Invariants of are often used in the expressions for strain energy density functions. The most commonly used invariants are
where is the determinant of the deformation gradient and are stretch ratios for the unit fibers that are initially oriented along the eigenvector directions of the right (reference) stretch tensor (these are not generally aligned with the three axis of the coordinate systems).
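A short numpy sketch of the right Cauchy–Green tensor C = F^T F and its commonly used invariants; the deformation gradient shown is an arbitrary illustrative example.

```python
import numpy as np

F = np.array([[1.2, 0.5, 0.0],                  # illustrative deformation gradient
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.9]])

C = F.T @ F                                     # right Cauchy-Green deformation tensor
J = np.linalg.det(F)

I1 = np.trace(C)                                # first invariant
I2 = 0.5 * (np.trace(C)**2 - np.trace(C @ C))   # second invariant
I3 = np.linalg.det(C)                           # third invariant, equal to J**2

stretches = np.sqrt(np.linalg.eigvalsh(C))      # principal stretches of the right stretch tensor
assert np.isclose(I3, J**2)
```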
Finger strain tensor
The IUPAC recommends that the inverse of the right Cauchy–Green deformation tensor (called the Cauchy strain tensor in that document), i. e., , be called the Finger strain tensor. However, that nomenclature is not universally accepted in applied mechanics.
Green strain tensor (left Cauchy–Green deformation tensor)
Reversing the order of multiplication in the formula for the right Cauchy-Green deformation tensor leads to the left Cauchy–Green deformation tensor which is defined as:
The left Cauchy–Green deformation tensor is often called the Finger deformation tensor, named after Josef Finger (1894).
The IUPAC recommends that this tensor be called the Green strain tensor.
Invariants of are also used in the expressions for strain energy density functions. The conventional invariants are defined as
where is the determinant of the deformation gradient.
For compressible materials, a slightly different set of invariants is used:
Piola strain tensor (Cauchy deformation tensor)
Earlier in 1828, Augustin-Louis Cauchy introduced a deformation tensor defined as the inverse of the left Cauchy–Green deformation tensor, . This tensor has also been called the Piola strain tensor by the IUPAC and the Finger tensor in the rheology and fluid dynamics literature.
Spectral representation
If there are three distinct principal stretches , the spectral decompositions of and is given by
Furthermore,
Observe that
Therefore, the uniqueness of the spectral decomposition also implies that . The left stretch () is also called the spatial stretch tensor while the right stretch () is called the material stretch tensor.
The effect of acting on is to stretch the vector by and to rotate it to the new orientation , i.e.,
In a similar vein,
Examples
Uniaxial extension of an incompressible material
This is the case where a specimen is stretched in 1-direction with a stretch ratio of . If the volume remains constant, the contraction in the other two directions is such that or . Then:
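A minimal sketch of this example: for an incompressible uniaxial stretch lambda in the 1-direction, the transverse stretches are lambda^(-1/2) so that det F = 1. The value of lambda is illustrative.

```python
import numpy as np

lam = 1.5                                       # illustrative stretch ratio in the 1-direction
F = np.diag([lam, lam**-0.5, lam**-0.5])        # incompressible uniaxial extension

assert np.isclose(np.linalg.det(F), 1.0)        # volume is preserved

C = F.T @ F                                     # diag(lam**2, 1/lam, 1/lam)
E = 0.5 * (C - np.eye(3))                       # Green-Lagrangian strain for this motion
```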
Simple shear
Rigid body rotation
Derivatives of stretch
Derivatives of the stretch with respect to the right Cauchy–Green deformation tensor are used to derive the stress-strain relations of many solids, particularly hyperelastic materials. These derivatives are
and follow from the observations that
Physical interpretation of deformation tensors
Let be a Cartesian coordinate system defined on the undeformed body and let be another system defined on the deformed body. Let a curve in the undeformed body be parametrized using . Its image in the deformed body is .
The undeformed length of the curve is given by
After deformation, the length becomes
Note that the right Cauchy–Green deformation tensor is defined as
Hence,
which indicates that changes in length are characterized by .
Finite strain tensors
The concept of strain is used to evaluate how much a given displacement differs locally from a rigid body displacement. One of such strains for large deformations is the Lagrangian finite strain tensor, also called the Green-Lagrangian strain tensor or Green–St-Venant strain tensor, defined as
or as a function of the displacement gradient tensor
or
The Green-Lagrangian strain tensor is a measure of how much differs from .
The Eulerian finite strain tensor, or Eulerian-Almansi finite strain tensor, referenced to the deformed configuration (i.e. Eulerian description) is defined as
or as a function of the displacement gradients we have
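Both finite strain tensors can be computed directly from the deformation gradient, as in the sketch below (the entries of F are illustrative). The sketch also checks the standard push-forward relation e = F^{-T} E F^{-1} between the Lagrangian and Eulerian measures.

```python
import numpy as np

F = np.array([[1.2, 0.5, 0.0],                  # illustrative deformation gradient
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.9]])
I = np.eye(3)

C = F.T @ F                                     # right Cauchy-Green tensor
B = F @ F.T                                     # left Cauchy-Green (Finger) tensor

E = 0.5 * (C - I)                               # Green-Lagrangian strain (reference configuration)
e = 0.5 * (I - np.linalg.inv(B))                # Eulerian-Almansi strain (deformed configuration)

Finv = np.linalg.inv(F)
assert np.allclose(e, Finv.T @ E @ Finv)        # push-forward relation between the two measures
```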
Seth–Hill family of generalized strain tensors
B. R. Seth from the Indian Institute of Technology Kharagpur was the first to show that the Green and Almansi strain tensors are special cases of a more general strain measure. The idea was further expanded upon by Rodney Hill in 1968. The Seth–Hill family of strain measures (also called Doyle-Ericksen tensors) can be expressed as
For different values of we have:
Green-Lagrangian strain tensor
Biot strain tensor
Logarithmic strain, Natural strain, True strain, or Hencky strain
Almansi strain
The second-order approximation of these tensors is
where is the infinitesimal strain tensor.
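A small numpy sketch of the Seth–Hill family, evaluated on the eigenbasis of C = F^T F. It uses the common parametrization E(m) = (U^(2m) - I)/(2m), with the logarithm of U in the limit m = 0; under this convention m = 1, 1/2, 0, -1 give the Green-Lagrangian, Biot, Hencky and Almansi-type measures named above. The diagonal deformation gradient is illustrative.

```python
import numpy as np

def seth_hill_strain(F, m):
    """Generalized strain E(m) = (U**(2m) - I) / (2m), with ln(U) in the limit m -> 0."""
    C = F.T @ F
    lam_sq, N = np.linalg.eigh(C)               # eigenvalues lambda_i**2 and principal directions
    lam = np.sqrt(lam_sq)
    if m == 0:
        f = np.log(lam)                         # Hencky (logarithmic) strain
    else:
        f = (lam**(2.0 * m) - 1.0) / (2.0 * m)
    return N @ np.diag(f) @ N.T

F = np.diag([1.5, 1.0, 0.8])                    # illustrative diagonal deformation gradient
E_green   = seth_hill_strain(F, 1.0)            # m = 1:   Green-Lagrangian strain
E_biot    = seth_hill_strain(F, 0.5)            # m = 1/2: Biot strain
E_hencky  = seth_hill_strain(F, 0.0)            # m = 0:   Hencky strain
E_almansi = seth_hill_strain(F, -1.0)           # m = -1:  Almansi-type strain measure
```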
Many other different definitions of tensors are admissible, provided that they all satisfy the conditions that:
vanishes for all rigid-body motions
the dependence of on the displacement gradient tensor is continuous, continuously differentiable and monotonic
it is also desired that reduces to the infinitesimal strain tensor as the norm
An example is the set of tensors
which do not belong to the Seth–Hill class, but have the same 2nd-order approximation as the Seth–Hill measures at for any value of .
Physical interpretation of the finite strain tensor
The diagonal components of the Lagrangian finite strain tensor are related to the normal strain, e.g.
where is the normal strain or engineering strain in the direction .
The off-diagonal components of the Lagrangian finite strain tensor are related to shear strain, e.g.
where is the change in the angle between two line elements that were originally perpendicular with directions and , respectively.
Under certain circumstances, i.e. small displacements and small displacement rates, the components of the Lagrangian finite strain tensor may be approximated by the components of the infinitesimal strain tensor
Compatibility conditions
The problem of compatibility in continuum mechanics involves the determination of allowable single-valued continuous fields on bodies. These allowable conditions leave the body without unphysical gaps or overlaps after a deformation. Most such conditions apply to simply-connected bodies. Additional conditions are required for the internal boundaries of multiply connected bodies.
Compatibility of the deformation gradient
The necessary and sufficient conditions for the existence of a compatible field over a simply connected body are
Compatibility of the right Cauchy–Green deformation tensor
The necessary and sufficient conditions for the existence of a compatible field over a simply connected body are
We can show these are the mixed components of the Riemann–Christoffel curvature tensor. Therefore, the necessary conditions for -compatibility are that the Riemann–Christoffel curvature of the deformation is zero.
Compatibility of the left Cauchy–Green deformation tensor
General sufficiency conditions for the left Cauchy–Green deformation tensor in three-dimensions were derived by Amit Acharya. Compatibility conditions for two-dimensional fields were found by Janet Blume.
See also
Infinitesimal strain
Compatibility (mechanics)
Curvilinear coordinates
Piola–Kirchhoff stress tensor, the stress tensor for finite deformations.
Stress measures
Strain partitioning
References
Further reading
External links
Prof. Amit Acharya's notes on compatibility on iMechanica
Tensors
Continuum mechanics
Elasticity (physics)
Non-Newtonian fluids
Solid mechanics | Finite strain theory | [
"Physics",
"Materials_science",
"Engineering"
] | 2,919 | [
"Solid mechanics",
"Physical phenomena",
"Tensors",
"Elasticity (physics)",
"Continuum mechanics",
"Deformation (mechanics)",
"Classical mechanics",
"Mechanics",
"Physical properties"
] |
2,211,036 | https://en.wikipedia.org/wiki/Suzaku%20%28satellite%29 | Suzaku (formerly ASTRO-EII) was an X-ray astronomy satellite developed jointly by the Institute of Space and Aeronautical Science at JAXA and NASA's Goddard Space Flight Center to probe high-energy X-ray sources, such as supernova explosions, black holes and galactic clusters. It was launched on 10 July 2005 aboard the M-V launch vehicle on the M-V-6 mission. After its successful launch, the satellite was renamed Suzaku after the mythical Vermilion bird of the South.
Just weeks after launch, on 29 July 2005, the first of a series of cooling system malfunctions occurred. These ultimately caused the entire reservoir of liquid helium to boil off into space by 8 August 2005. This effectively shut down the X-ray Spectrometer-2 (XRS-2), which was the spacecraft's primary instrument. The two other instruments, the X-ray Imaging Spectrometer (XIS) and the Hard X-ray Detector (HXD), were unaffected by the malfunction. As a result, another XRS was integrated into the Hitomi X-ray satellite, launched in 2016, which also was lost weeks after launch. A Hitomi successor, XRISM, launched on 7 September 2023, with an X-ray Spectrometer (Resolve) onboard as the primary instrument.
On 26 August 2015, JAXA announced that communications with Suzaku had been intermittent since 1 June 2015 and that the resumption of scientific operations would take a lot of work to accomplish, given the spacecraft's condition. Mission operators decided to complete the mission imminently, as Suzaku had exceeded its design lifespan by eight years at this point. The mission came to an end on 2 September 2015, when JAXA commanded the radio transmitters on Suzaku to switch themselves off.
Spacecraft instruments
Suzaku carried high spectroscopic resolution, very wide energy band instruments for detecting signals ranging from soft X-rays up to gamma-rays (0.3–600 keV). High-resolution spectroscopy and wide-band coverage are essential for physically investigating high-energy astronomical phenomena, such as black holes and supernovas. One such spectral feature, the X-ray K-line, may be key to more direct imaging of black holes.
X-ray Telescope (XRT)
X-ray Spectrometer-2 (XRS-2)
X-ray Imaging Spectrometer (XIS)
Hard X-ray Detector (HXD)
Uses Gadolinium Silicate crystal (GSO), Gd2SiO5(Ce)
Uses Bismuth Germanate crystal (BGO), Bi4Ge3O12
Results
Suzaku discovered "fossil" light from a supernova remnant.
ASTRO-E
Suzaku was a replacement for ASTRO-E, which was lost in a launch failure. The M-V launch vehicle on the M-V-4 mission launched on 10 February 2000 at 01:30:00 UTC. The first-stage engine nozzle failed 42 seconds into the launch, causing a breakdown of the control system and underperformance. The later stages could not compensate for the shortfall, leaving the payload short of a stable orbit; it subsequently reentered and crashed into the Indian Ocean.
References
Further reading
Special Issue: First Results from Suzaku Publications of the Astronomical Society of Japan. Vol. 59, No. SP1 30 January 2007. Retrieved 4 October 2010.
External links
X-ray Astronomy Satellite "Suzaku" (ASTRO-EII) (JAXA)
JAXA/ISAS Suzaku (ASTRO-EII) mission overview
JAXA/ISAS Suzaku Information for Researchers
JAXA report presentation of failure analysis of XRS (in Japanese)
NASA ASTRO-EII mission description
NASA/GSFC Suzaku Learning Center
NASA/GSFC XRS-2 project page
X-ray telescopes
Space telescopes
Satellites of Japan
Spacecraft launched in 2005 | Suzaku (satellite) | [
"Astronomy"
] | 818 | [
"Space telescopes"
] |
2,211,075 | https://en.wikipedia.org/wiki/First%20Things%20First%20%28book%29 | First Things First, sub-titled To Live, to Love, to Learn, to Leave a Legacy, (1994) is a self-help book written by Stephen Covey, A. Roger Merrill, and Rebecca R. Merrill. It offers a time management approach that, if established as a habit, is intended to help readers achieve "effectiveness" by aligning themselves to "First Things". The approach is a further development of the approach popularized in Covey's The Seven Habits of Highly Effective People and other titles.
Summary
The book asserts that there are three generations of time management: first-generation task lists, second-generation personal organizers with deadlines, and third-generation values clarification as incorporated in the Franklin Planner. Using the analogy of "the clock and the compass", the authors assert that identifying primary roles and principles provides a "true north" and reference when deciding what activities are most important, so that decisions are guided not merely by the "clock" of scheduling but by the "compass" of purpose and values. Asserting that people have a need "to live, to love, to learn, and to leave a legacy" they propose moving beyond "urgency".
In the book, Covey describes a framework for prioritizing work that is aimed at long-term goals, at the expense of tasks that appear to be urgent, but are in fact less important. He uses a time management formulation attributed to Dwight D. Eisenhower (see: The Eisenhower Method), categorizing tasks into whether they are urgent and whether they are important, recognizing that important tasks may not be urgent, and urgent tasks are not necessarily important. This is his 2x2 matrix: classifying tasks as urgent and non-urgent on one axis, and important or non-important on the other axis. His quadrant 2 (not the same as the quadrant II in a Cartesian coordinate system) has the items that are non-urgent but important. These are the ones he believes people are likely to neglect, but should focus on to achieve effectiveness.
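A minimal Python sketch of the urgent/important classification described above; the task names and quadrant numbering below are illustrative placeholders rather than examples from the book.

```python
def quadrant(urgent: bool, important: bool) -> int:
    """Covey/Eisenhower classification: 1 = urgent+important, 2 = important only,
    3 = urgent only, 4 = neither."""
    if important:
        return 1 if urgent else 2
    return 3 if urgent else 4

tasks = {
    "production outage":      (True,  True),
    "exercise and planning":  (False, True),
    "most interruptions":     (True,  False),
    "idle browsing":          (False, False),
}

focus = [name for name, (urgent, important) in tasks.items()
         if quadrant(urgent, important) == 2]
print(focus)    # the non-urgent but important work to schedule before everything else
```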
Important items are identified by focusing on a few key priorities and roles which will vary from person to person, then identifying small goals for each role each week, in order to maintain a holistic life balance. One tool for this is a worksheet that lists up to seven key roles, with three weekly goals per role, to be evaluated and scheduled into each week before other appointments occupy all available time with things that seem urgent but are not important. This concept is illustrated with a story that encourages people to "place the big rocks first".
Delegation is presented as an important part of time management. Successful delegation, according to Covey, focuses on results and benchmarks that are to be agreed upon in advance, rather than on prescribing detailed work plans.
References
Management books
Personal development
First Things First
First Things First | First Things First (book) | [
"Biology"
] | 583 | [
"Personal development",
"Behavior",
"Human behavior"
] |
2,211,117 | https://en.wikipedia.org/wiki/Hot%20atom | In physical chemistry, a hot atom is an atom that has a high kinetic or internal energy.
When molecule AB adsorbs on a surface dissociatively,
1. both A and B adsorb on the surface, or
2. only A adsorbs on the surface, and B desorbs from the surface.
In case 2, B gains a high translational energy from the adsorption energy of A, and hot atom B is generated. For example, the hydrogen molecule, because of its light mass, gets a high translational energy. Such a hot atom does not fly into vacuum but is trapped on the surface, where it diffuses with high energy.
Hot atoms are expected to play important roles in catalytic reactions. For example, a reaction of a hydrogen atom with hydrogen atoms on a silicon surface and a reaction of an oxygen atom with oxygen molecules on Pt(111) have been reported. Hot atoms can also be generated by dissociating molecules on a metal surface with UV light. It has been reported that the reactivity of an oxygen atom generated in such a way on a platinum surface is different from that of chemisorbed oxygen atoms. Elucidating the role of hot atoms on surfaces will lead to a deeper understanding of the mechanism of reactions.
References
See also
Nonthermal surface reaction
Physical chemistry | Hot atom | [
"Physics",
"Chemistry"
] | 268 | [
"Physical chemistry",
"Applied and interdisciplinary physics",
"Physical chemistry stubs",
"nan"
] |
2,211,120 | https://en.wikipedia.org/wiki/Beryllium%20oxide | Beryllium oxide (BeO), also known as beryllia, is an inorganic compound with the formula BeO. This colourless solid is an electrical insulator with a higher thermal conductivity than any other non-metal except diamond, and exceeds that of most metals. As an amorphous solid, beryllium oxide is white. Its high melting point leads to its use as a refractory material. It occurs in nature as the mineral bromellite. Historically and in materials science, beryllium oxide was called glucina or glucinium oxide, owing to its sweet taste.
Preparation and chemical properties
Beryllium oxide can be prepared by calcining (roasting) beryllium carbonate, dehydrating beryllium hydroxide, or igniting metallic beryllium:
BeCO3 → BeO + CO2
Be(OH)2 → BeO + H2O
2 Be + O2 → 2 BeO
Igniting beryllium in air gives a mixture of BeO and the nitride Be3N2. Unlike the oxides formed by the other Group 2 elements (alkaline earth metals), beryllium oxide is amphoteric rather than basic.
Beryllium oxide formed at high temperatures (>800 °C) is inert, but dissolves easily in hot aqueous ammonium bifluoride (NH4HF2) or a solution of hot concentrated sulfuric acid (H2SO4) and ammonium sulfate ((NH4)2SO4).
Structure
BeO crystallizes in the hexagonal wurtzite structure, featuring tetrahedral Be2+ and O2− centres, like lonsdaleite and w-BN (with both of which it is isoelectronic). In contrast, the oxides of the larger group-2 metals, i.e., MgO, CaO, SrO, BaO, crystallize in the cubic rock salt motif with octahedral geometry about the dications and dianions. At high temperature the structure transforms to a tetragonal form.
In the vapour phase, beryllium oxide is present as discrete diatomic molecules. In the language of valence bond theory, these molecules can be described as adopting sp orbital hybridisation on both atoms, featuring one σ bond (between one sp orbital on each atom) and one π bond (between aligned p orbitals on each atom oriented perpendicular to the molecular axis). Molecular orbital theory provides a slightly different picture with no net σ bonding (because the 2s orbitals of the two atoms combine to form a filled sigma bonding orbital and a filled sigma* anti-bonding orbital) and two π bonds formed between both pairs of p orbitals oriented perpendicular to the molecular axis. The sigma orbital formed by the p orbitals aligned along the molecular axis is unfilled. The corresponding ground state is ...(2sσ)2(2sσ*)2(2pπ)4 (as in the isoelectronic C2 molecule), where both bonds can be considered as dative bonds from oxygen towards beryllium.
Applications
High-quality crystals may be grown hydrothermally, or otherwise by the Verneuil method. For the most part, beryllium oxide is produced as a white amorphous powder, sintered into larger shapes. Impurities, like carbon, can give rise to a variety of colours to the otherwise colourless host crystals.
Sintered beryllium oxide is a very stable ceramic. Beryllium oxide is used in rocket engines and as a transparent protective over-coating on aluminised telescope mirrors. Metal-coated beryllium oxide (BeO) plates are used in the control systems of aircraft drive devices.
Beryllium oxide is used in many high-performance semiconductor parts for applications such as radio equipment because it has good thermal conductivity while also being a good electrical insulator. It is used as a filler in some thermal interface materials such as thermal grease. It is also employed in heat sinks and spreaders that cool electronic devices, such as CPUs, lasers, and power amplifiers. Some power semiconductor devices have used beryllium oxide ceramic between the silicon chip and the metal mounting base of the package to achieve a lower value of thermal resistance than a similar construction of aluminium oxide. It is also used as a structural ceramic for high-performance microwave devices, vacuum tubes, cavity magnetrons, and gas lasers. BeO has been proposed as a neutron moderator for naval marine high-temperature gas-cooled reactors (MGCR), as well as NASA's Kilopower nuclear reactor for space applications.
Safety
BeO is carcinogenic in powdered form and may cause a chronic allergic-type lung disease berylliosis. Once fired into solid form, it is safe to handle if not subjected to machining that generates dust. Clean breakage releases little dust, but crushing or grinding actions can pose a risk.
References
Cited sources
External links
Beryllium Oxide MSDS from American Beryllia
IARC Monograph "Beryllium and Beryllium Compounds"
International Chemical Safety Card 1325
National Pollutant Inventory – Beryllium and compounds
NIOSH Pocket guide to Chemical Hazards
Beryllium compounds
Oxides
IARC Group 1 carcinogens
Ceramic materials
Nuclear technology
II-VI semiconductors
Wurtzite structure type | Beryllium oxide | [
"Physics",
"Chemistry",
"Engineering"
] | 1,110 | [
"Inorganic compounds",
"Semiconductor materials",
"Oxides",
"Salts",
"Nuclear technology",
"II-VI semiconductors",
"Ceramic materials",
"Nuclear physics",
"Ceramic engineering"
] |
2,211,347 | https://en.wikipedia.org/wiki/Synchronous%20programming%20language | A synchronous programming language is a computer programming language optimized for programming reactive systems.
Computer systems can be sorted in three main classes:
Transformational systems take some inputs, process them, deliver their outputs, and terminate their execution. A typical example is a compiler.
Interactive systems interact continuously with their environment, at their own speed. A typical example is the web.
Reactive systems interact continuously with their environment, at a speed imposed by the environment. A typical example is the automatic flight control system of modern airplanes. Reactive systems must therefore react to stimuli from the environment within strict time bounds. For this reason they are often also called real-time systems, and are found often in embedded systems.
Synchronous programming, also called synchronous reactive programming (SRP), is a computer programming paradigm supported by synchronous programming languages. The principle of SRP is to make the same abstraction for programming languages as the synchronous abstraction in digital circuits. Synchronous circuits are indeed designed at a high level of abstraction where the timing characteristics of the electronic transistors are neglected. Each gate of the circuit (or, and, ...) is therefore assumed to compute its result instantaneously, each wire is assumed to transmit its signal instantaneously. A synchronous circuit is clocked and at each tick of its clock, it computes instantaneously its output values and the new values of its memory cells (latches) from its input values and the current values of its memory cells. In other words, the circuit behaves as if the electrons were flowing infinitely fast. The first synchronous programming languages were invented in France in the 1980s: Esterel, Lustre, and SIGNAL. Since then, many other synchronous languages have emerged.
The synchronous abstraction makes reasoning about time in a synchronous program a lot easier, thanks to the notion of logical ticks: a synchronous program reacts to its environment in a sequence of ticks, and computations within a tick are assumed to be instantaneous, i.e., as if the processor executing them were infinitely fast. The statement "a||b" is therefore abstracted as the package "ab" where "a" and "b" are simultaneous. To take a concrete example, the Esterel statement "every 60 second emit minute" specifies that the signal "minute" is exactly synchronous with the 60th occurrence of the signal "second". At a more fundamental level, the synchronous abstraction eliminates the non-determinism resulting from the interleaving of concurrent behaviors. This allows deterministic semantics, therefore making synchronous programs amenable to formal analysis, verification and certified code generation, and usable as formal specification formalisms.
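A minimal Python sketch of this logical-tick model: the environment drives the program one tick at a time, and a "minute" signal is emitted in the same tick as every 60th "second". The generator-based structure and the signal names are illustrative; this is a simulation of the idea, not the semantics or runtime of any particular synchronous language.

```python
def clock_program():
    """React once per logical tick; the outputs of a tick are conceptually
    simultaneous with its inputs."""
    seconds = 0
    outputs = set()
    while True:
        inputs = yield outputs                  # suspend until the environment supplies a tick
        outputs = set()
        if "second" in inputs:
            seconds += 1
            if seconds % 60 == 0:
                outputs.add("minute")           # emitted in the same tick as the 60th "second"

prog = clock_program()
next(prog)                                      # prime the generator
for tick in range(1, 121):
    out = prog.send({"second"})
    if "minute" in out:
        print(f"tick {tick}: minute emitted")   # fires at ticks 60 and 120
```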
In contrast, in the asynchronous model of computation, on a sequential processor, the statement "a||b" can be either implemented as "a;b" or as "b;a". This is known as the interleaving-based non determinism. The drawback with an asynchronous model is that it intrinsically forbids deterministic semantics (e.g., race conditions), which makes formal reasoning such as analysis and verification more complex. Nonetheless, asynchronous formalisms are very useful to model, design and verify distributed systems, because they are intrinsically asynchronous.
Also in contrast are systems with processes that basically interact synchronously. An example would be systems based on the Communicating sequential processes (CSP) model, which allows deterministic (external) and nondeterministic (internal) choice.
Synchronous languages
Argos
Atom (a domain-specific language in Haskell for hard realtime embedded programming)
Averest
Blech
ChucK (a synchronous reactive programming language for audio)
Esterel
LabVIEW
LEA
Lustre
PLEXIL
SIGNAL (a dataflow-oriented synchronous language enabling multi-clock specifications)
SOL
SyncCharts
See also
Asynchronous programming
Concurrency (computer science)
References
Nicolas Halbwachs. "Synchronous programming of reactive systems". Kluwer Academic Publishers, 1993. http://www-verimag.imag.fr/~halbwach/newbook.pdf
External links
The Synchronous group at Verimag lab.
The SIGNAL programming language.
Unification of Synchronous and Asynchronous Models for Parallel Programming Languages —Proposes parallel languages based on C, lets programmers specify and manage parallelism on a broad range of computer architectures.
Programming language classification | Synchronous programming language | [
"Technology"
] | 973 | [
"Real-time computing",
"Synchronous programming languages"
] |
2,211,427 | https://en.wikipedia.org/wiki/Lithiophilite | Lithiophilite is a mineral containing the element lithium. It is lithium manganese(II) phosphate with chemical formula . It occurs in pegmatites often associated with triphylite, the iron end member in a solid solution series. The mineral with intermediate composition is known as sicklerite and has the chemical formula ). The name lithiophilite is derived from the Greek philos () "friend", as lithiophilite is usually found with lithium.
Lithiophilite is a resinous reddish to yellowish brown mineral crystallizing in the orthorhombic system, often as slender prisms. It is usually associated with lepidolite, beryl, quartz, albite, amblygonite, and spodumene of pegmatitic origin. It rather readily weathers to a variety of secondary manganese phosphates and oxides. It is a late-stage mineral in some complex granite pegmatites. Members of the triphylite-lithiophilite series readily alter to secondary minerals.
The type locality is the Branchville Quarry, Branchville, Fairfield County, Connecticut where it was first reported in 1878. The largest documented single crystal of lithiophilite was found in New Hampshire, US, measured 2.44×1.83×1.22 m3 and weighed about 20 tonnes.
The synthetic form of triphylite, lithium iron phosphate, is a promising material for the production of lithium-ion batteries.
References
Bibliography
Palache, P.; Berman H.; Frondel, C. (1960). "Dana's System of Mineralogy, Volume II: Halides, Nitrates, Borates, Carbonates, Sulfates, Phosphates, Arsenates, Tungstates, Molybdates, Etc. (Seventh Edition)" John Wiley and Sons, Inc., New York, pp. 665–669.
External links
Mineral galleries
Lithium minerals
Manganese(II) minerals
Phosphate minerals
Orthorhombic minerals
Minerals in space group 62
Gemstones | Lithiophilite | [
"Physics"
] | 422 | [
"Materials",
"Gemstones",
"Matter"
] |
2,211,475 | https://en.wikipedia.org/wiki/Anaerobic%20lagoon | An anaerobic lagoon or manure lagoon is a man-made outdoor earthen basin filled with animal waste that undergoes anaerobic respiration as part of a system designed to manage and treat refuse created by concentrated animal feeding operations (CAFOs). Anaerobic lagoons are created from a manure slurry, which is washed out from underneath the animal pens and then piped into the lagoon. Sometimes the slurry is placed in an intermediate holding tank under or next to the barns before it is deposited in a lagoon. Once in the lagoon, the manure settles into two layers: a solid or sludge layer and a liquid layer. The manure then undergoes the process of anaerobic respiration, whereby the volatile organic compounds are converted into carbon dioxide and methane. Anaerobic lagoons are usually used to pretreat high strength industrial wastewaters and municipal wastewaters. This allows for preliminary sedimentation of suspended solids as a pretreatment process.
Anaerobic lagoons have been shown to harbor and emit substances which can cause adverse environmental and health effects. These substances are emitted through two main pathways: gas emissions and lagoon overflow. Gas emissions are continuous (though the amount may vary based on the season) and are a product of the manure slurry. The most prevalent gasses emitted by the lagoon are: ammonia, hydrogen sulfide, methane, and carbon dioxide. Lagoon overflow is caused by faulty lagoons, such as breaches or improper construction, or adverse weather conditions, such as increased rainfall or strong winds. These overflows release harmful substances into the surrounding land and water such as: antibiotics, estrogens, bacteria, pesticides, heavy metals, and protozoa.
In the U.S., the Environmental Protection Agency (EPA) has responded to environmental and health concerns by strengthening regulation of CAFOs under the Clean Water Act. Some states have imposed their own regulations as well. Because of repeated overflows and resultant health concerns, North Carolina banned the construction of new anaerobic lagoons in 1999. There has also been a significant push for the research, development and implementation of environmentally sound technologies which would allow for safer containment and recycling of CAFO waste.
Background
Beginning in the 1950s with poultry production, and then later in the 1970s and 1980s with cattle and swine, meat producers in the United States have turned to CAFOs as a way to more efficiently produce large quantities of meat. This switch has decreased the price of meat. However, the increase in livestock has generated an increase in manure. In 2006, for example, livestock operations in the United States produced of manure. Unlike manure produced on a conventional farm, CAFO manure cannot all be used as direct fertilizer on agricultural land because of the poor quality of the manure. Moreover, CAFOs produce a high volume of manure. A feeding operation with 800,000 pigs could produce over of waste per year. The high quantity of manure produced by a CAFO must be dealt with in some way, as improper manure management can result in water, air and soil damage. As a result, manure collection and disposal has become an increasing problem.
In order to manage their waste, CAFOs have developed agricultural wastewater treatment plans. To save on manual labor, many CAFOs handle manure waste as a liquid. In this system, the animals are kept in pens with grated floors so the waste and spray water can be drained from underfloor gutters and piped to storage tanks or anaerobic lagoons. Once the waste reaches a lagoon, the goal is to treat it and make it suitable for spreading on agricultural fields. There are three main types of lagoon: anaerobic, which is inhibited by oxygen; aerobic, which requires oxygen; and facultative, which is maintained with or without oxygen. Aerobic lagoons provide a higher degree of treatment with less odor production, though they require a significant amount of space and maintenance. Because of this demand, almost all livestock lagoons are anaerobic lagoons.
Design
Description
Anaerobic lagoons are earthen basins with a usual depth of , though greater depths are more beneficial to digestion as they minimize oxygen diffusion from the surface. To minimize leakage of animal waste into the ground water, newer lagoons are generally lined with clay. Studies have shown that in fact the lagoons typically leak at a rate of approximately per day, with or without a clay liner, because it is the sludge deposited at the base of the lagoon that limits the leakage rate, not the clay liner or underlying native soil.
Anaerobic lagoons are not heated, aerated or mixed. They are most effective in warmer temperatures; anaerobic bacteria become ineffective below a certain temperature. Lagoons must be separated from other structures by a certain distance to prevent contamination. States regulate this separation distance. The overall size of the lagoon is determined by adding four components: the minimum design volume, the volume of manure storage between periods of disposal, the dilution volume, and the volume of sludge accumulation between periods of sludge removal.
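As an illustration of this additive sizing rule, here is a minimal Python sketch; all volumes are hypothetical placeholder figures, not design values from any standard.

# Hypothetical lagoon volume budget, in cubic metres; the four components
# follow the breakdown described above.
minimum_design_volume = 12_000      # volume needed for anaerobic treatment
manure_storage_volume = 4_500       # storage between periods of disposal
dilution_volume = 2_000             # water added to dilute the slurry
sludge_accumulation_volume = 1_500  # sludge built up between removals

total_lagoon_volume = (minimum_design_volume
                       + manure_storage_volume
                       + dilution_volume
                       + sludge_accumulation_volume)
print(f"Required lagoon volume: {total_lagoon_volume} m^3")  # 20000 m^3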
Process
The lagoon is divided into two distinct layers: sludge and liquid. The sludge layer is a more solid layer formed by the stratification of sediments from the manure. After a while, this solid layer accumulates and eventually needs to be cleaned out. The liquid layer is composed of grease, scum and other particulates. CAFO wastewater enters at the bottom of the lagoon so that it can mix with the active microbial mass in the sludge layer. Anaerobic conditions are uniform throughout the lagoon, except in a thin layer at the surface.
Sometimes aeration is applied to this surface layer to dampen the odors emitted by the lagoons. If surface aeration is not applied, a crust will form that will trap heat and odors. Anaerobic lagoons should retain and treat wastewater from 20 to 150 days. Lagoons should be followed by aerobic or facultative lagoons to provide further required treatment. The liquid layer is periodically drained and used for fertilizer. In some instances, a cover can be provided to trap methane, which is used for energy.
Anaerobic lagoons work through a process called anaerobic digestion. Decomposition of the organic matter begins shortly after the animals void. Lagoons become anaerobic because of the high biological oxygen demand (BOD) of the feces: their high level of soluble solids consumes dissolved oxygen faster than it can be replenished. Anaerobic microorganisms convert organic compounds into carbon dioxide and methane through acid formation and methane production.
Advantages of construction
Manure can be easily manipulated with water using flushing systems, sewer lines, pumps and irrigation systems
Stabilization of the waste through digestion minimizes odor when manure is finally used as fertilizer
Manure is able to be stored long-term at a low cost
Manure is kept in one area, instead of being spread across a large area of land
Disadvantages of construction
Requires relatively large area of land
Produces strong undesirable odors especially during spring and fall
Takes a fairly long time for organic stabilization because of the slow rate of sludge digestion and slow growth rate of methane formers
Manure used as fertilizer is of lower quality because of low nutrient availability
Wastewater seepage may occur if the tanks break or are improperly constructed
Weather and other environmental elements can strongly affect the safety and efficacy of anaerobic lagoons
Environmental and health impacts
Gas emissions
Rates of asthma in children living near a CAFO are consistently elevated. The process of anaerobic digestion has been shown to release over 400 volatile compounds from lagoons. The most prevalent of these are: ammonia, hydrogen sulfide, methane, and carbon dioxide.
Ammonia
In the United States, 80 percent of ammonia emissions come from livestock production. A lagoon can volatilize up to 80 percent of its nitrogen through the reaction NH4+ -> NH3 + H+, by which ammonium-nitrogen (NH4+-N) is lost as ammonia gas. As pH or temperature increases, so does the amount of volatilized ammonia. Once ammonia has been volatilized, it can travel as far as 300 miles, and at closer ranges it is a respiratory irritant. Prolonged exposure to volatilized ammonia can cause acidification and eutrophication of the ecosystem surrounding the lagoons. Volatilized ammonia has been implicated in widespread ecological damage in Europe and is of growing concern for the United States.
Hydrogen sulfide
With average concentrations greater than 30 ppb, lagoons have high levels of hydrogen sulfide, which is highly toxic. A study by the Minnesota Pollution Control Agency found that concentrations of hydrogen sulfide near lagoons have exceeded the state standard, even as far away as 4.9 miles. Hydrogen sulfide is recognizable for its unpleasant rotten-egg odor. Because hydrogen sulfide is heavier than air, it tends to linger around lagoons even after ventilation. Levels of hydrogen sulfide are at their highest after agitation and during manure removal.
Methane
Methane is an odorless, tasteless, and colorless gas. Lagoons produce about 2,300,000 tonnes per year, with around 40 percent of this mass coming from hog farm lagoons. Methane is combustible at high temperatures, and explosions and fires are a real threat at or near lagoons. Additionally, methane is a greenhouse gas. The U.S. EPA estimated that 13 percent of all the methane emissions came from livestock manure in 1998, and this number has grown in recent years. Recently there has been interest in technology which would capture methane produced from lagoons and sell it as energy.
Water-soluble contaminants
Contaminants that are water-soluble can escape from anaerobic lagoons and enter the environment through leakage from badly constructed or poorly maintained manure lagoons, as well as during excess rain or high winds, which can cause lagoons to overflow. These leaks and overflows can contaminate surrounding surface and ground water with hazardous materials contained in the lagoon. The most serious of these contaminants are pathogens, antibiotics, heavy metals and hormones. For example, runoff from farms in Maryland and North Carolina is a leading suspect in outbreaks of Pfiesteria piscicida. This organism can kill fish, and it can also cause skin irritation and short-term memory loss in humans.
Pathogens
More than 150 pathogens in manure lagoons have been found to impact human health. Healthy individuals who come into contact with pathogens usually recover promptly. However, those who have a weakened immune system, such as cancer patients and young children, have an increased risk for a more severe illness or even death. About 20 percent of the U.S. population are categorized in this risk group. Some of the more notable pathogens are:
E. coli
E. coli is found in the intestines and feces of both animals and humans. One particularly virulent strain, Escherichia coli O157:H7, is found specifically in the rumen of cattle raised in CAFOs. Because cattle in CAFOs are fed corn instead of grass, the pH of the rumen changes so that it is more hospitable to E. coli. Grain-fed cattle have 80 percent more of this strain of E. coli than grass-fed cattle. However, the amount of E. coli found in the rumen of grain-fed cattle can be significantly reduced by switching an animal to grass only a few days prior to slaughter. This reduction would decrease the pathogen's presence in both meat and waste of the cattle, and decrease the E. coli population found in anaerobic lagoons.
Cryptosporidium
Cryptosporidium is a parasite that causes diarrhea, vomiting, stomach cramps and fever. It is particularly problematic because it is resistant to most lagoon treatment regimens. In a study performed in Canada, 37 percent of swine liquid-manure samples contained Cryptosporidium.
Other common pathogens
Other common pathogens (and their symptoms) include:
Bacillus anthracis, otherwise known as Anthrax (skin sores, headache, fever, chills, nausea, vomiting)
Leptospira pomona (abdominal pain, muscle pain, vomiting, fever)
Listeria monocytogenes (fever, fatigue, nausea, vomiting, diarrhea)
Salmonella (abdominal pain, diarrhea, nausea, chills, fever, headache)
Clostridium tetani (violent muscle spasms, lockjaw, difficulty breathing)
Histoplasma capsulatum (fever, chills, muscle ache, cough rash, joint pain and stiffness)
Microsporum and Trichophyton Ringworm (itching, rash)
Giardia lamblia (abdominal pain, abdominal gas, nausea, vomiting, fever)
Cryptosporidium (diarrhea, dehydration, weakness, abdominal cramping)
Pfiesteria piscicida (neurological damage)
Antibiotics
Antibiotics are fed to livestock to prevent disease and to increase weight and development, so that there is a shortened time from birth to slaughter. However, because these antibiotics are administered at sub-therapeutic levels, bacterial colonies can build up resistance to the drugs through the natural selection of bacteria resistant to these antibiotics. These antibiotic-resistant bacteria are then excreted and transferred to the lagoons, where they can infect humans and other animals.
Each year, 24.6 million pounds of antimicrobials are administered to livestock for non-therapeutic purposes. Seventy percent of all antibiotics and related drugs are given to animals as feed additives. Nearly half of the antibiotics used are nearly identical to ones given to humans. There is strong evidence that the use of antibiotics in animal feed is contributing to an increase in antibiotic-resistant microbes and causing antibiotics to be less effective for humans. Due to concerns over antibiotic-resistant bacteria, the American Medical Association passed a resolution stating its opposition to the use of sub-therapeutic levels of antimicrobials in livestock.
Hormones
Growth hormones such as rBST, estrogen, and testosterone are administered to increase development rate and muscle mass for the livestock. Yet, only a fraction of these hormones are actually absorbed by the animal. The rest are excreted and wind up in lagoons. Studies have shown that these hormones, if they escape the lagoon and are emitted into the surrounding surface water, can alter fertility and reproductive habits of aquatic animals.
One study found that several lagoons and monitoring wells from two facilities (a nursery and a farrowing sow operation) contained high levels of all three types of estrogen: for the nursery, lagoon effluent concentrations ranged from 390 to 620 ng/L for estrone, 180 to 220 ng/L for estriol, and 40 to 50 ng/L for estradiol. For the farrowing sow operation, digester and primary lagoon effluent concentrations ranged from 9,600 to 24,900 ng/L for estrone, 5,000 to 10,400 ng/L for estriol, and 2,200 to 3,000 ng/L for estradiol. Ethinylestradiol was not detected in any of the lagoon or ground water samples. Natural estrogen concentrations in ground water samples were generally less than 0.4 ng/L, although a few wells at the nursery operation showed quantifiable but low levels.
Heavy metals
Manure contains trace elements of many heavy metals such as arsenic, cadmium, copper, iron, lead, manganese, molybdenum, nickel, and zinc. Sometimes these metals are given to animals as growth stimulants, some are introduced through pesticides used to rid livestock of insects, and some might pass through the animals as undigested food. Trace elements of these metals and salts from animal manure present risks to human health and ecosystems.
New River Spill
In 1999, Hurricane Floyd hit North Carolina, flooding hog waste lagoons, releasing 25 million gallons of manure into the New River and contaminating the water supply. Ronnie Kennedy, county director for environmental health, said that of 310 private wells he had tested for contamination since the storm, 9 percent, or three times the average across eastern North Carolina, had fecal coliform bacteria. Normally, tests showing any hint of feces in drinking water, an indication that it can be carrying disease-causing pathogens, are cause for immediate action.
Regulation
Anaerobic lagoons are built as part of a wastewater operation system. As such, compliance and permitting are handled as an extension of that operation. Therefore, manure lagoons are regulated on the state and national level through the CAFO which operates them. In recent years, because of the environmental and health effects associated with anaerobic lagoons, the EPA has increased regulation of CAFOs with a specific eye towards lagoons. North Carolina banned the construction of new anaerobic lagoons in 1999 and upheld that ban in 2007.
Further research
Some research has been done to develop and assess the economic feasibility of more environmentally superior technologies. Five main alternatives which have been implemented in North Carolina are: a solids separation/nitrification–denitrification/soluble phosphorus removal system; a thermophilic anaerobic digester system; a centralized composting system; a gasification system; and a fluidized-bed combustion system. These systems were judged based on their ability to: reduce impacts of CAFO waste in the surface and groundwater, decrease ammonia emissions, decrease the escape of disease-transmitting pathogens, and lower the concentration of heavy metal contamination.
The U.S. Department of Agriculture (USDA) has also evaluated the prospect of creating a cap and trade program for CAFOs' carbon dioxide and nitrous oxide emissions. This program has yet to be implemented; however, the USDA speculates that such a program would encourage corporations to adopt EST practices.
A comprehensive study of anaerobic swine lagoons nationwide has been launched by the U.S. Agricultural Research Service. This study aims to explore the composition of lagoons and anaerobic lagoon influence on environmental factors and agronomic practices.
See also
Agricultural wastewater treatment
Anaerobic digestion
Aerated lagoon
Factory farming
List of waste water treatment technologies
Sewage treatment
References
External links
Waste management
Sewerage | Anaerobic lagoon | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,779 | [
"Water pollution",
"Sewerage",
"Anaerobic digestion",
"Environmental engineering",
"Water technology"
] |
2,211,626 | https://en.wikipedia.org/wiki/Friant%20Dam | Friant Dam is a concrete gravity dam on the San Joaquin River in central California in the United States, on the boundary of Fresno and Madera Counties. It was built between 1937 and 1942 as part of a U.S. Bureau of Reclamation (USBR) water project to provide irrigation water to the southern San Joaquin Valley. The dam impounds Millerton Lake, a reservoir about north of Fresno.
Background
The valley in which Friant Dam and Millerton Lake now lie was once the location of the historic town of Millerton. Millerton was the first county seat of Fresno County. In 1880, the first dam on the San Joaquin River was constructed by the Upper San Joaquin Irrigation Company roughly on the present site of Friant Dam. Built of local rock, the dam was an long, tall structure designed to divert water for the irrigation of . The project was abandoned in the wake of floods that destroyed the dam two years later.
Friant Dam was originally proposed in the 1930s as a main feature of the Central Valley Project (CVP), a federal water project that would involve building an expansive system of dams and canals on the rivers of the Central Valley to provide water for agriculture, with secondary purposes of flood control, municipal supply, and hydroelectric power generation. The CVP was authorized by the 1935 Rivers and Harbors Act, while $20 million of initial funding for Friant Dam was provided by the Emergency Relief Appropriation Act of 1935.
Initial surveys of the Friant Dam site were carried out in November 1935 and continued through early 1936. In January 1938, a worker's camp was established near the town of Friant to house the laborers that would ultimately work on the dam. In the middle of the Great Depression, the Friant Dam site saw a huge influx of job seekers, many of whom had to live further away in surrounding cities. More than 50,000 people attended the groundbreaking of the dam on November 5, 1939 in a celebration that is now known as "one of the greatest in San Joaquin Valley history".
Construction
Construction of Friant Dam began with blasting and excavation of the dam site to remove more than of loose material above the bedrock. Before any concrete was laid on the dam's main wall, the underlying rock was extensively grouted to fill in 725 holes and seams that might otherwise cause instability in the foundation. The concrete used in the dam's construction was made from sand and gravel excavated from the San Joaquin River floodplain about below the dam to form Lost Lake. Notably, more than of placer gold – worth $176,000 at the time – were uncovered in the excavation site. A branch line of the Southern Pacific Railroad delivered this material to a concrete mixing plant, which could produce up to of concrete per hour, directly adjacent to the construction site. In July 1940, the San Joaquin River was diverted through a wooden flume so that work on the foundations could begin.
On July 29, the first concrete was poured into the main body of Friant Dam. In order to keep the structure in line, the dam was built in a series of blocks or forms, each measuring square. Concrete was placed via a massive steel trestle system high and long, along which ran small powered railcars that delivered buckets of concrete from the mixing plant. Two gantry cranes lifted the buckets from the cars and poured them onto the forms. In summer 1941, the labor force reached a peak of 1,500, and the monthly record for concrete placement, at , was set in August.
During the dam's construction several Native American burial sites had their graves removed and re-interred.
The workforce scrambled to complete the main wall of the dam after an act of the War Production Board (WPB) suspended resources in order to assist U.S. military efforts in World War II. The dam was topped out on June 16, 1942, just under two years after the first concrete was poured. However, the spillway gates, the water release valves and the two irrigation canals Friant was intended to support remained unfinished in the wake of the WPB's order.
The war, however, did not completely halt construction. Less than a year later, the WPB "[determined] the completion of the Madera Canal and the installation of valves at the Friant Dam, necessary for war-time food and fiber production" – allowing construction to resume on a limited scale. A pair of control valves were borrowed from Hoover Dam, allowing the closure of the river outlets and Millerton Lake began to fill on February 21, 1944. Work on the Madera Canal, the smaller of the two irrigation canals serviced by Friant Dam (the other, the Friant-Kern Canal, would not be completed for another four years), was completed in 1945 and water ran for its entire length for the first time on June 10, with irrigation deliveries commencing one month later.
The dam was formally dedicated on July 9, 1949 by California governor Earl Warren, who declared that the water furnished by Friant Dam and its canals would help the San Joaquin Valley to "become a modern Eden" as water was released into the partially completed Friant-Kern Canal for the first time. More than three thousand people, mostly residents of the San Joaquin Valley, attended the ceremonies.
Operations and usage
Friant Dam's primary purpose is to capture the fluctuating flows of the San Joaquin River and divert the water for irrigation through the Friant-Kern and Madera Canals. The Friant-Kern Canal is long, extending south from the dam to the Kern River near Bakersfield, and has an initial diversion capacity of ; the Madera Canal, which has a capacity of up to , travels north from the dam to the Chowchilla River. Together, these canals provide irrigation water to some of the San Joaquin Valley. In 1990, farmers who received their water from Friant Dam produced more than $1.9 billion worth of 90 different kinds of crops.
Millerton Lake has a capacity of at normal maximum pool, with a surcharge (above spillway gates, but below the dam crest) capacity of approximately for a total capacity of . About , or 32.7% of the reservoir's regular capacity, is reserved for flood control between October and January to protect against rain floods, while between February and July, this is increased to – 75.0% – to provide space for snowmelt floods. The dam is operated to maintain a flow of or less on the San Joaquin River at Mendota, downriver. However, large snowmelt floods often exceed the capacity of the dam and reservoir and force larger releases downstream, potentially causing damage to riverside property and infrastructure.
The dam is also used to generate up to 25 megawatts (MW) of hydroelectric power. The penstock releasing water into the Friant-Kern Canal is fitted with a Kaplan turbine with a capacity of 15 MW, and the Madera Canal penstock is equipped with a smaller 8 MW turbine. The smallest hydroelectric generator, with a capacity of 2 MW, is located at the outlet works on the base of the dam and produces power from water releases that serve local farms along the San Joaquin River directly downstream from Friant Dam, as well as releases to a fish hatchery below the dam and for wildlife management purposes.
Expansion
Because its storage capacity is small relative to the average annual discharge of the San Joaquin River – versus – Friant Dam often has to release excessive amounts of water that could otherwise be used for irrigation or power generation, also causing downstream damage. From 1981 to 2011, an average of was spilled each year because the reservoir was unable to contain it. The USBR has proposed increasing the height of Friant Dam by up to , nearly tripling the reservoir's storage capacity to . A smaller raise would increase storage capacity to , while a raise would increase storage capacity to . The increase in height would also allow for the generation of between 4.7 and 30.4 MW of additional power.
Another proposal to increase storage in the upper San Joaquin River basin is Temperance Flat Dam, which would be located in the San Joaquin River canyon upstream of Friant Dam and impound between of water. The proposed dam would stand high above the river, and it would capture most of the floodwater that would otherwise be spilled from Friant Dam. However, Temperance Flat has come under heavy controversy because it would flood a large scenic section of the San Joaquin River gorge, negatively affect wildlife in the river and inundate two upstream hydroelectric power plants, causing a net loss in power generation. The water supplied from such a dam would be very expensive, ranging from $1000–1500 per acre foot (area farmers currently pay about $60 per acre foot). Raising Friant Dam would likely produce similar increases in the cost of irrigation water.
Environmental impacts
By diverting most of the San Joaquin River for irrigation, the Friant Dam has caused about of the river to run dry except in high water years when floodwaters are spilled from the dam. The desiccation of the river has caused the degradation of large stretches of riverside habitat and marshes, and has nearly eliminated the historic chinook salmon run that once numbered "possibly in the range of 200,000 to 500,000 spawners annually". Reduction in flows has also increased the concentration of pesticide and fertilizer runoff in the river contributing to pollution that has further impacted aquatic species.
On September 13, 2006, after eighteen years of litigation, environmental groups, fisherman and the USBR reached an agreement on releasing part of the water currently diverted into the irrigation canals into the San Joaquin River in order to help restore the river and its native fish and wildlife. The first water was released on October 2, 2009 at a rate of . By 2014, these "restoration flows" will be increased to per year, or , on top of the that is currently released for agricultural purposes. However, the river restoration project will cause a 12–20% reduction in irrigation water delivered from Friant Dam.
See also
Water in California
California Water Wars
List of dams and reservoirs in California
List of largest reservoirs of California
List of the tallest dams in the United States
References
Works cited
External links
Friant Water Authority
Live hydrologic data for Friant Dam
Buildings and structures in Fresno County, California
Buildings and structures in Madera County, California
Dams in California
Hydroelectric power plants in California
San Joaquin Valley
United States Bureau of Reclamation dams
Dams completed in 1942
Energy infrastructure completed in 1942
Dams on the San Joaquin River
Central Valley Project
1942 establishments in California | Friant Dam | [
"Engineering"
] | 2,167 | [
"Irrigation projects",
"Central Valley Project"
] |
2,211,700 | https://en.wikipedia.org/wiki/Closed-loop%20authentication | Closed-loop authentication, as applied to computer network communication, refers to a mechanism whereby one party verifies the purported identity of another party by requiring them to supply a copy of a token transmitted to the canonical or trusted point of contact for that identity. It is also sometimes used to refer to a system of mutual authentication whereby two parties authenticate one another by signing and passing back and forth a cryptographically signed nonce, each party demonstrating to the other that they control the secret key used to certify their identity.
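The mutual form described above, in which each party signs a nonce supplied by the other, can be sketched with public-key signatures. The following Python sketch uses the third-party cryptography package; the key names and the three-step framing are illustrative assumptions rather than a specification of any particular protocol.

import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each party holds a long-term signing key whose public half the peer already trusts.
alice_key, bob_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()

# 1. Each side issues a fresh random nonce (challenge) to the other.
nonce_for_bob, nonce_for_alice = os.urandom(16), os.urandom(16)

# 2. Each side signs the nonce it received, demonstrating control of its secret key.
bob_signature = bob_key.sign(nonce_for_bob)
alice_signature = alice_key.sign(nonce_for_alice)

# 3. Each side verifies the peer's signature with the peer's public key;
#    verify() raises InvalidSignature if the proof fails.
bob_key.public_key().verify(bob_signature, nonce_for_bob)
alice_key.public_key().verify(alice_signature, nonce_for_alice)
print("both parties demonstrated control of their signing keys")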
E-mail Authentication
Closed-loop email authentication is useful for simple verification that one party controls the email address it has given to another, as a weak form of identity verification. It is not a strong form of authentication in the face of host- or network-based attacks (where an imposter, Chuck, is able to intercept Bob's email, capture the nonce, and thus masquerade as Bob).
Closed-loop email authentication is also used by parties with a shared secret relationship (for example, a website and someone with a password to an account on that website), where one party has lost or forgotten the secret and needs to be reminded. The party still holding the secret sends it to the other party at a trusted point of contact. The most common instance of this usage is the "lost password" feature of many websites, where an untrusted party may request that a copy of an account's password be sent by email, but only to the email address already associated with that account. A problem associated with this variation is the tendency of a naïve or inexperienced user to click on a URL if an email encourages them to do so. Most website authentication systems mitigate this by permitting unauthenticated password reminders or resets only by email to the account holder, but never allowing a user who does not possess a password to log in or specify a new one.
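A minimal sketch of this email-token flow in Python follows; send_email and set_password are hypothetical stand-ins (defined here only so the example runs), and the token handling is one plausible arrangement rather than the method of any particular website.

import hashlib, hmac, secrets

def send_email(to, body):               # hypothetical stand-in for a real mailer
    print(f"(email to {to}) {body}")

def set_password(email, new_password):  # hypothetical stand-in for account storage
    print(f"password updated for {email}")

pending_tokens = {}  # email address -> SHA-256 hash of the outstanding token

def start_reset(email):
    # Generate an unguessable token and send it only to the address on file.
    token = secrets.token_urlsafe(32)
    pending_tokens[email] = hashlib.sha256(token.encode()).hexdigest()
    send_email(to=email, body=f"Your reset token: {token}")
    return token  # returned here only so the example below can close the loop

def finish_reset(email, token, new_password):
    # The requester proves control of the mailbox by returning the emailed token.
    expected = pending_tokens.pop(email, None)
    supplied = hashlib.sha256(token.encode()).hexdigest()
    if expected is None or not hmac.compare_digest(expected, supplied):
        raise PermissionError("token does not match; identity not verified")
    set_password(email, new_password)

token = start_reset("alice@example.com")
finish_reset("alice@example.com", token, "new-password")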
In some instances in web authentication, closed-loop authentication is employed before any access is granted to an identified user that would not be granted to an anonymous user. This may be because the nature of the relationship between the user and the website is one that holds some long-term value for one or both parties (enough to justify the increased effort and decreased reliability of the registration process.) It is also used in some cases by websites attempting to impede programmatic registration as a prelude to spamming or other abusive activities.
Closed-loop authentication (like other types) is an attempt to establish identity. It is not, however, incompatible with anonymity, if combined with a pseudonymity system in which the authenticated party has adequate confidence.
See also
See Category:Computer security for a list of all computing and information-security related articles.
Information Security
Authentication
Cryptography
References
Computer access control | Closed-loop authentication | [
"Engineering"
] | 562 | [
"Cybersecurity engineering",
"Computer access control"
] |
2,211,723 | https://en.wikipedia.org/wiki/Sampled%20data%20system | In systems science, a sampled-data system is a control system in which a continuous-time plant is controlled with a digital device. Under periodic sampling, the sampled-data system is time-varying but also periodic; thus, it may be modeled by a simplified discrete-time system obtained by discretizing the plant. However, this discrete model does not capture the inter-sample behavior of the real system, which may be critical in a number of applications.
The analysis of sampled-data systems incorporating full-time information leads to challenging control problems with a rich mathematical structure. Many of these problems have only been solved recently.
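As an illustration of the periodic-sampling model described above, the following Python/SciPy sketch discretizes a continuous-time plant with a zero-order hold; the plant (a first-order lag) and the sampling period are arbitrary choices for the example, not values from any reference.

import numpy as np
from scipy.signal import cont2discrete

# Continuous-time plant dx/dt = A x + B u, y = C x + D u (a first-order lag).
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])
T = 0.1  # sampling period of the digital controller, in seconds

# Zero-order-hold discretization: the simplified discrete-time model of the plant.
Ad, Bd, Cd, Dd, dt = cont2discrete((A, B, C, D), T, method="zoh")
print(Ad, Bd)  # Ad = exp(A*T) ~ 0.905, Bd ~ 0.095

# Note: this model only predicts the state at the sample instants k*T;
# the inter-sample behaviour of the real plant is not captured, as noted above.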
References
External links
Digital control
Sampling (signal processing)
Discretization
Control theory
Control engineering
Systems engineering
Systems theory | Sampled data system | [
"Mathematics",
"Engineering"
] | 149 | [
"Systems engineering",
"Applied mathematics",
"Control theory",
"Control engineering",
"Dynamical systems"
] |
2,211,763 | https://en.wikipedia.org/wiki/Lov%C3%A1sz%20local%20lemma | In probability theory, if a large number of events are all independent of one another and each has probability less than 1, then there is a positive (possibly small) probability that none of the events will occur. The Lovász local lemma allows a slight relaxation of the independence condition: As long as the events are "mostly" independent from one another and aren't individually too likely, then there will still be a positive probability that none of them occurs. This lemma is most commonly used in the probabilistic method, in particular to give existence proofs.
There are several different versions of the lemma. The simplest and most frequently used is the symmetric version given below. A weaker version was proved in 1975 by László Lovász and Paul Erdős in the article Problems and results on 3-chromatic hypergraphs and some related questions. For other versions, see . In 2020, Robin Moser and Gábor Tardos received the Gödel Prize for their algorithmic version of the Lovász Local Lemma, which uses entropy compression to provide an efficient randomized algorithm for finding an outcome in which none of the events occurs.
Statements of the lemma (symmetric version)
Let be a sequence of events such that each event occurs with probability at most p and such that each event is independent of all the other events except for at most d of them.
Lemma I (Lovász and Erdős 1973; published 1975) If
then there is a nonzero probability that none of the events occurs.
Lemma II (Lovász 1977; published by Joel Spencer) If
where e = 2.718... is the base of natural logarithms, then there is a nonzero probability that none of the events occurs.
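The condition referred to in Lemma II is commonly stated in the literature as the inequality

e \, p \, (d+1) \le 1 ,

with p and d as above; this standard formulation is given here as an assumption rather than as a quotation of the original statement.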
Lemma II today is usually referred to as "Lovász local lemma".
Lemma III (Shearer 1985) If
then there is a nonzero probability that none of the events occurs.
The threshold in Lemma III is optimal and it implies that the bound
is also sufficient.
Asymmetric Lovász local lemma
A statement of the asymmetric version (which allows for events with different probability bounds) is as follows:
Lemma (asymmetric version). Let be a finite set of events in the probability space Ω. For let denote the neighbours of in the dependency graph (In the dependency graph, event is not adjacent to events which are mutually independent). If there exists an assignment of reals to the events such that
then the probability of avoiding all events in is positive, in particular
The symmetric version follows immediately from the asymmetric version by setting
to get the sufficient condition
since
Constructive versus non-constructive
As is often the case with probabilistic arguments, this theorem is nonconstructive and gives no method of determining an explicit element of the probability space in which no event occurs. However, algorithmic versions of the local lemma with stronger preconditions are also known (Beck 1991; Czumaj and Scheideler 2000). More recently, a constructive version of the local lemma was given by Robin Moser and Gábor Tardos requiring no stronger preconditions.
Non-constructive proof
We prove the asymmetric version of the lemma, from which the symmetric version can be derived. By using the principle of mathematical induction we prove that for all in and all subsets of that do not include , . The induction here is applied on the size (cardinality) of the set . For base case the statement obviously holds since . We need to show that the inequality holds for any subset of of a certain cardinality given that it holds for all subsets of a lower cardinality.
Let . We have from Bayes' theorem
We bound the numerator and denominator of the above expression separately. For this, let . First, exploiting the fact that does not depend upon any event in .
Expanding the denominator by using Bayes' theorem and then using the inductive assumption, we get
The inductive assumption can be applied here since each event is conditioned on a smaller number of other events, i.e. on a subset of cardinality less than . From (1) and (2), we get
Since the value of x is always in . Note that we have essentially proved . To get the desired probability, we write it in terms of conditional probabilities applying Bayes' theorem repeatedly. Hence,
which is what we had intended to prove.
Example
Suppose 11n points are placed around a circle and colored with n different colors in such a way that each color is applied to exactly 11 points. In any such coloring, there must be a set of n points containing one point of each color but not containing any pair of adjacent points.
To see this, imagine picking a point of each color randomly, with all points equally likely (i.e., having probability 1/11) to be chosen. The 11n different events we want to avoid correspond to the 11n pairs of adjacent points on the circle. For each pair our chance of picking both points in that pair is at most 1/121 (exactly 1/121 if the two points are of different colors, otherwise 0), so we will take p = 1/121.
Whether a given pair (a, b) of points is chosen depends only on what happens in the colors of a and b, and not at all on whether any other collection of points in the other n − 2 colors are chosen. This implies the event "a and b are both chosen" is dependent only on those pairs of adjacent points which share a color either with a or with b.
There are 11 points on the circle sharing a color with a (including a itself), each of which is involved with 2 pairs. This means there are 21 pairs other than (a, b) which include the same color as a, and the same holds true for b. The worst that can happen is that these two sets are disjoint, so we can take d = 42 in the lemma. This gives
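With these values the symmetric condition of Lemma II, commonly stated as e p (d+1) ≤ 1 and assumed here to be the intended formula, can be checked directly:

e \, p \, (d+1) = e \cdot \tfrac{1}{121} \cdot 43 \approx 0.966 \le 1 .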
By the local lemma, there is a positive probability that none of the bad events occur, meaning that our set contains no pair of adjacent points. This implies that a set satisfying our conditions must exist.
See also
Shearer's inequality
Notes
References
Probability theorems
Combinatorics
Lemmas | Lovász local lemma | [
"Mathematics"
] | 1,306 | [
"Discrete mathematics",
"Mathematical theorems",
"Combinatorics",
"Theorems in probability theory",
"Mathematical problems",
"Lemmas"
] |
2,211,835 | https://en.wikipedia.org/wiki/Lustre%20%28programming%20language%29 | Lustre is a formally defined, declarative, and synchronous dataflow programming language for programming reactive systems. It began as a research project in the early 1980s. A formal presentation of the language can be found in the 1991 Proceedings of the IEEE. In 1993 it progressed to practical, industrial use in a commercial product as the core language of the industrial environment SCADE, developed by Esterel Technologies. It is now used for critical control software in aircraft, helicopters, and nuclear power plants.
Structure of Lustre programs
A Lustre program is a series of node definitions, written as:
node foo(a : bool) returns (b : bool);
let
b = not a;
tel
Where foo is the name of the node, a is the name of the single input of this node and b is the name of the single output.
In this example the node foo returns the negation of its input a, which is the expected result.
Inner variables
Additional internal variables can be declared as follows:
node Nand(X,Y: bool) returns (Z: bool);
var U: bool;
let
U = X and Y;
Z = not U;
tel
Note: the order of the equations does not matter; swapping the lines U = X and Y; and Z = not U; does not change the result.
Special operators
Examples
Edge detection
node Edge (X : bool) returns (E : bool);
let
E = false -> X and not pre X;
tel
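In this example, assuming the usual Lustre semantics for the operators shown, pre X denotes the previous value of the stream X (undefined at the first instant) and the arrow operator -> supplies the value for the first instant: E is therefore false initially and afterwards true exactly when X rises from false to true, i.e. on each rising edge of X.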
See also
Esterel
SIGNAL (another dataflow-oriented synchronous language)
References
External links
Synchrone Lab Official website
SCADE product page
Declarative programming languages
Synchronous programming languages
Hardware description languages
Formal methods
Software modeling language | Lustre (programming language) | [
"Technology",
"Engineering"
] | 361 | [
"Real-time computing",
"Hardware description languages",
"Software engineering",
"Electronic engineering",
"Formal methods",
"Synchronous programming languages"
] |
2,211,885 | https://en.wikipedia.org/wiki/Lactide | Lactide is a cyclic ester (lactone) derived by multiple esterification between two (usually) or more molecules of lactic acid (2-hydroxypropionic acid) or another hydroxy carboxylic acid. They are designated as dilactides, trilactides, etc., according to the number of hydroxy acid residues. The dilactide derived from lactic acid has the formula . All lactides are colorless or white solids. This lactide has attracted interest because it is derived from abundant renewable resources and is the precursor to a biodegradable plastic.
Stereoisomers
The dilactide derived from lactic acid can exist in three different stereoisomeric forms. This complexity arises because lactic acid is chiral. These enantiomers do not racemize readily.
All three stereoisomers undergo epimerisation in the presence of organic and inorganic bases in solution.
Polymerization
Lactide can be polymerized to polylactic acid (polylactide). Depending on the catalyst, syndiotactic or heterotactic polymers can result. The resulting material, polylactic acid, has many attractive properties.
References
Lactones
Dioxanes
Monomers | Lactide | [
"Chemistry",
"Materials_science"
] | 256 | [
"Monomers",
"Polymer chemistry"
] |
2,211,949 | https://en.wikipedia.org/wiki/Averest | Averest is a synchronous programming language and set of tools to specify, verify, and implement reactive systems. It includes a compiler for synchronous programs, a symbolic model checker, and a tool for hardware/software synthesis.
It can be used to model and verify finite and infinite state systems, at varied abstraction levels. It is useful for hardware design, modeling communication protocols, concurrent programs, software in embedded systems, and more.
Its components include a compiler that translates synchronous programs into transition systems, a symbolic model checker, and a tool for hardware/software synthesis. These cover large parts of the design flow of reactive systems, from specification to implementation. Though the tools are part of a common framework, they are mostly independent of each other, and can be used with third-party tools.
See also
Synchronous programming language
Esterel
External links
Averest Toolbox Official home site
Embedded Systems Group Research group that develops the Averest Toolbox
Synchronous programming languages
Hardware description languages | Averest | [
"Technology",
"Engineering"
] | 205 | [
"Electronic engineering",
"Real-time computing",
"Hardware description languages",
"Synchronous programming languages"
] |
2,212,081 | https://en.wikipedia.org/wiki/Inspissation | Inspissation is the process of increasing the viscosity of a fluid, or even of causing it to solidify, typically by dehydration or otherwise reducing its content of solvents. The term also has been applied to coagulation by heating of some substances such as albumens, or cooling some such as solutions of gelatin or agar. Some forms of inspissation may be reversed by re-introducing solvent, such as by adding water to molasses or gum arabic; in other forms, its resistance to flow may include cross-linking or mutual adhesion of its component particles or molecules, in ways that prevent their dissolving again, such as in the irreversible setting or gelling of some kinds of rubber latex, egg-white, adhesives, or coagulation of blood.
Intentional use
Inspissation is the process used when heating high-protein media, for example to enable recovery of bacteria for testing. Once inspissation has occurred, any stained bacteria, such as Mycobacteria, can then be isolated.
A serum inspissation or fractional sterilization is a process of heating an article on 3 successive days as follows:
Pathologic inspissation
In cystic fibrosis, inspissation of secretions in the respiratory and gastrointestinal tracts is a major mechanism causing the disease.
References
Further reading
Textbook of Microbiology by Prof. C P Baveja,
Textbook of Microbiology by Ananthanarayan and Panikar,
Microbiology
Zoology | Inspissation | [
"Chemistry",
"Biology"
] | 318 | [
"Microbiology",
"Zoology",
"Microscopy"
] |
2,212,142 | https://en.wikipedia.org/wiki/R.%20J.%20Berry | Robert James "Sam" Berry (26 October 1934 – 29 March 2018) was a British geneticist, naturalist and Christian theorist. He was professor of genetics at University College London between 1974 and 2000. Before that he was a lecturer in genetics at The Royal Free Hospital School of Medicine in London. He was president from 1983 to 1986 of the Linnean Society, the British Ecological Society and the European Ecological Federation. He was one of the founding trustees on the creation of National Museums Liverpool in 1986. As a Christian, Berry spoke out in favour of theistic evolution, served as a lay member of the Church of England's General Synod and as president of Christians in Science. He was a member of the Board of Governors of Monkton Combe School from 1979 to 1991. He gave the 1997–98 Glasgow Gifford Lectures entitled Gods, Genes, Greens and Everything. His father, A. J. Berry, died in 1947.
Early life and education
He was educated at Kirkham Grammar School and Shrewsbury School. One of his first published works in 1961 was in the "Teach yourself books" series Genetics. The paperback version was released in 1972.
Bibliography
Biological works
Teach yourself books Genetics. (1965)
Inheritance and Natural History. New Naturalist series no. 61 (1977)
The Natural History of Shetland. New Naturalist series no. 64 (1980)
The Natural History of Orkney. New Naturalist series no. 70 (1985)
Genes in Ecology (ed. R. J. Berry, T. J. Crawford, G. M. Hewitt, N. R. Webb) (1992)
Islands. New Naturalist series no. 109 (2009)
Religious works
Adam and the Ape: A Christian Approach to the Theory of Evolution (Church Pastoral-Aid Society 1975)
God and the Biologist: Personal Exploration of Science and Faith (Apollos 1996)
Science, Life and Christian Belief: A Survey of Contemporary Issues (IVP 1998) (preface by Berry)
The Care of Creation: Focusing Concern and Action (IVP 2000) (edited by Berry)
God's Book of Works:The Nature and Theology of Nature (T & T Clark International 2003) (Gifford Lectures 1997–98)
"Did Darwin Kill God?" in God for the 21st Century, Russell Stannard ed., Templeton Foundation Press, 2000,
God and Evolution: Creation, Evolution, and the Bible (Regent College Publishing 2001)
Creation and Evolution, Not Creation or Evolution (2007, Faraday Institute Paper no. 12)
References
External links
Gifford Lecture Book summary
1934 births
2018 deaths
English biologists
English Anglicans
Fellows of the Linnean Society of London
English geneticists
Members of the International Society for Science and Religion
Presidents of the Linnean Society of London
Fellows of the Royal Society of Edinburgh
Theistic evolutionists
Academics of University College London
New Naturalist writers
Governors of Monkton Combe School
Writers about religion and science
Presidents of the British Ecological Society | R. J. Berry | [
"Biology"
] | 602 | [
"Non-Darwinian evolution",
"Theistic evolutionists",
"Biology theories"
] |
2,212,173 | https://en.wikipedia.org/wiki/Test%20harness | In software testing, a test harness is a collection of stubs and drivers configured to assist with the testing of an application or component. It acts as imitation infrastructure for test environments or containers where the full infrastructure is either not available or not desired.
Test harnesses allow for the automation of tests. They can call functions with supplied parameters and print out and compare the results to the desired value. The test harness provides a hook for the developed code, which can be tested using an automation framework.
A test harness is used to facilitate testing where all or some of an application's production infrastructure is unavailable. This may be due to licensing costs, security concerns (test environments may be air-gapped), or resource limitations, or simply to increase the execution speed of tests by providing pre-defined test data and smaller software components instead of calculated data from full applications.
These individual objectives may be fulfilled by unit test framework tools, stubs or drivers.
Example
When attempting to build an application that needs to interface with an application on a mainframe computer, but no mainframe is available during development, a test harness may be built to use as a substitute. This can mean that normally complex operations are handled with a small amount of resources, because pre-defined data and responses stand in for the calculations the mainframe would otherwise perform.
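A minimal Python sketch of this idea follows; the mainframe interface, its method, and the expected outputs are illustrative assumptions, not taken from any real system. The stub stands in for the unavailable mainframe, and the driver calls the code under test with supplied parameters and compares the results to desired values.

class MainframeStub:
    """Stub: returns pre-defined responses instead of calling a real mainframe."""
    def account_balance(self, account_id):
        canned = {"42": 1250.00, "43": 0.00}  # pre-defined test data
        return canned[account_id]

def format_statement(mainframe, account_id):
    """Code under test: would normally talk to the real mainframe interface."""
    return f"Balance for {account_id}: {mainframe.account_balance(account_id):.2f}"

def run_tests():
    """Driver: calls the code under test and compares results to desired values."""
    harness = MainframeStub()
    cases = [("42", "Balance for 42: 1250.00"), ("43", "Balance for 43: 0.00")]
    for account_id, expected in cases:
        actual = format_statement(harness, account_id)
        print("PASS" if actual == expected else f"FAIL: {actual!r} != {expected!r}")

run_tests()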
A test harness may be part of a project deliverable. It may be kept separate from the application source code and may be reused on multiple projects. A test harness simulates application functionality; it has no knowledge of test suites, test cases or test reports. Those things are provided by a testing framework and associated automated testing tools.
A part of its job is to set up suitable test fixtures.
The test harness will generally be specific to a development environment such as Java. However, interoperability test harnesses have been developed for use in more complex systems.
References
Further reading
Pekka Abrahamsson, Michele Marchesi, Frank Maurer, Agile Processes in Software Engineering and Extreme Programming, Springer, 1 January 2009
Harness | Test harness | [
"Engineering"
] | 407 | [
"Software engineering",
"Software testing"
] |
16,108,212 | https://en.wikipedia.org/wiki/Bessel%20ellipsoid | The Bessel ellipsoid (or Bessel 1841) is an important reference ellipsoid of geodesy. It is currently used by several countries for their national geodetic surveys, but will be replaced in the next decades by modern ellipsoids of satellite geodesy.
The Bessel ellipsoid was derived in 1841 by Friedrich Wilhelm Bessel, based on several arc measurements and other data of continental geodetic networks of Europe, Russia and the British Survey of India. It is based on 10 meridian arcs and 38 precise measurements of the astronomic latitude and longitude (see also astro geodesy). The dimensions of the Earth ellipsoid axes were defined by logarithms in keeping with former calculation methods.
The Bessel and GPS ellipsoids
The Bessel ellipsoid fits especially well to the geoid curvature of Europe and Eurasia. Therefore, it is optimal for National survey networks in these regions, although its axes are about 700 m shorter than that of the mean Earth ellipsoid derived by satellites.
Below are the two axes and the flattening . For comparison, the corresponding data of the modern World Geodetic System WGS84 are shown; WGS84 is mainly used for modern surveys and the GPS system.
Bessel ellipsoid 1841 (defined by log and ):
=
= 1 /
= .
Earth ellipsoid WGS84 (defined directly by and ):
=
= 1 /
= .
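The parameter values can be cross-checked with a small Python sketch. The defining constants below are the commonly published figures for the two ellipsoids, taken from general geodetic references rather than from this article, so they should be treated as assumptions here.

# Commonly cited defining constants (assumed from standard references).
a_bessel, inv_f_bessel = 6_377_397.155, 299.1528128   # Bessel 1841
a_wgs84,  inv_f_wgs84  = 6_378_137.0,  298.257223563  # WGS84

def semi_minor(a, inv_f):
    # b = a * (1 - f), with flattening f = (a - b) / a
    return a * (1.0 - 1.0 / inv_f)

b_bessel, b_wgs84 = semi_minor(a_bessel, inv_f_bessel), semi_minor(a_wgs84, inv_f_wgs84)
print(f"semi-minor axes: Bessel {b_bessel:.3f} m, WGS84 {b_wgs84:.3f} m")
print(f"axis differences: a {a_wgs84 - a_bessel:.1f} m, b {b_wgs84 - b_bessel:.1f} m")
# Both differences are on the order of 700 m, consistent with the statement above.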
Usage
The ellipsoid data published by Bessel (1841) were then the best and most modern data mapping the Earth's figure. They were used by almost all national surveys. Some surveys in Asia switched to the Clarke ellipsoid of 1880. After the arrival of the geophysical reduction techniques many projects used other examples such as the Hayford ellipsoid of 1910 which was adopted in 1924 by the International Association of Geodesy (IAG) as the International ellipsoid 1924. All of them are influenced by geophysical effects like vertical deflection, mean continental density, rock density and the distribution of network data. Every reference ellipsoid deviates from the worldwide data (e.g. of satellite geodesy) in the same way as the pioneering work of Bessel.
In 1950 about 50% of the European triangulation networks and about 20% of other continents networks were based on the Bessel ellipsoid. In the following decades the American states switched mainly to the Hayford ellipsoid 1908 ("internat. Ell. 1924") which was also used for the European unification project ED50 sponsored by the United States after World War II. The Soviet Union forced its satellite states in Eastern Europe to use the Krasovsky ellipsoid of about 1940.
As of 2010 the Bessel ellipsoid is the geodetic system for Germany, for Austria and the Czech Republic. It is also used partly in the successor states of Yugoslavia and some Asian countries: Sumatra and Borneo, Belitung. In Africa it is the geodetic system for Eritrea and Namibia.
See also
References
External links
Conversion of Longitude and Latitude degrees into Universal Transverse Mercator coordinate system coordinates
Geodesy | Bessel ellipsoid | [
"Mathematics"
] | 667 | [
"Applied mathematics",
"Geodesy"
] |
16,108,496 | https://en.wikipedia.org/wiki/Electroless%20nickel%20immersion%20gold | Electroless nickel immersion gold (ENIG or ENi/IAu), also known as immersion gold (Au), chemical Ni/Au or soft gold, is a metal plating process used in the manufacture of printed circuit boards (PCBs), to avoid oxidation and improve the solderability of copper contacts and plated through-holes. It consists of an electroless nickel plating, covered with a thin layer of gold, which protects the nickel from oxidation. The gold is typically applied by quick immersion in a solution containing gold salts. Some of the nickel is oxidized to while the gold is reduced to metallic state. A variant of this process adds a thin layer of electroless palladium over the nickel, a process known by the acronym ENEPIG.
ENIG can be applied before or after the solder mask, also known as "overall" or "selective chemical Ni/Au," respectively. The latter type is more common and significantly cheaper as less gold is needed to cover only the solder pads.
Advantages and disadvantages
ENIG and ENEPIG are meant to replace the more conventional coatings of solder, such as hot air solder leveling (HASL/HAL). While more expensive and requiring more processing steps, they have several advantages, including excellent surface planarity (important for ball grid array component mounting), good oxidation resistance, prevention of 'copper migration', and suitability for movable contacts such as membrane switches and plug-in connectors.
Early ENIG processes had poor adhesion to copper and lower solderability than HASL. In addition, a non-conductive layer containing nickel and phosphorus, known as "black pad", could form over the coating due to sulfur-containing compounds from the solder mask leaching into the plating bath.
Standards
The quality and other aspects of ENIG coatings for PCBs are covered by IPC Standard 4552A, while IPC standard 7095D, about ball array connectors, covers some ENIG problems and their remediation.
See also
Immersion silver plating (IAg)
Immersion tin plating (ISn)
Organic solderability preservative (OSP)
Reflow soldering
Wave soldering
References
Printed circuit board manufacturing
Metal plating | Electroless nickel immersion gold | [
"Chemistry",
"Engineering"
] | 466 | [
"Metallurgical processes",
"Coatings",
"Electronic engineering",
"Electrical engineering",
"Metal plating",
"Printed circuit board manufacturing"
] |
16,108,852 | https://en.wikipedia.org/wiki/Immersion%20silver%20plating | Immersion silver plating (or IAg plating) is a surface plating process that creates a thin layer of silver over copper objects. It consists in dipping the object briefly into a solution containing silver ions.
Immersion silver plating is used by the electronics industry in manufacture of printed circuit boards (PCBs), to protect copper conductors from oxidation and improve solderability.
Advantages and disadvantages
Immersion silver coatings have excellent surface planarity, compared to more traditional coating processes such as hot air solder leveling (HASL). They also have low losses in high-frequency applications due to the skin effect.
On the other hand, silver coatings will degrade over time due to oxidation or air contaminants such as sulfur compounds and chlorine. A problem peculiar to silver coatings is the formation of silver whiskers under electric fields, which may short out components.
Specifications
IPC Standard: IPC-4553
See also
Electroless nickel immersion gold (ENIG)
Hot air solder leveling (HASL)
Organic solderability preservative (OSP)
Reflow soldering
Wave soldering
References
External links
PCB Assembly & PCBA Manufacturing
Bluetooth PCBA Manufacturing
Printed circuit board manufacturing | Immersion silver plating | [
"Engineering"
] | 252 | [
"Electrical engineering",
"Electronic engineering",
"Printed circuit board manufacturing"
] |
16,108,872 | https://en.wikipedia.org/wiki/Bonanno%20catheter | A Bonanno catheter is a medical device. It was originally designed for suprapubic cystostomy (drainage of urine from the bladder through the skin, bypassing the urethra). Described by Dr J. P. Bonanno in 1970 and patented in 1987, it is produced by the medical supplies company Becton Dickinson. Apart from bladder drainage, it also has various other uses for which it has not actually been designed, such as thoracostomy and paracentesis.
The drain consists of a straight metal trocar, which serves as a core and guide for a plastic tube with a curved end that is kept straight while the trocar is inside. At the end of the plastic tube, a small flat plate is present that can be taped or sutured to the skin. The drain then ends in a connector that can be connected with a drainage bag.
References
Medical devices | Bonanno catheter | [
"Biology"
] | 190 | [
"Medical devices",
"Medical technology"
] |
16,109,307 | https://en.wikipedia.org/wiki/Universal%20Avionics | Universal Avionics Systems Corporation, also known as Universal Avionics, is an international company headquartered in Tucson, Arizona in the United States. It primarily focuses on flight management systems (FMS) and cockpit instrument displays for private, business, and commercial aircraft. The company has domestic offices in Arizona, Kansas, Washington, and Georgia, and overseas offices in Switzerland.
History
Universal Avionics was founded in 1981 by Hubert L. Naimer. Its first FMS was introduced in 1982. In 1999, Universal Avionics started its Instrument Division with the purchase of a line of flat panel integrated displays from Avionic Displays Corporation of Norcross, Georgia. On September 12, 2004, Hubert L. Naimer died and his son Joachim L. Naimer assumed the position of President and CEO. On September 25, 2007, the Federal Aviation Administration (FAA) gave TSO approval to Universal's WAAS/SBAS enabled Flight Management Systems. It was the first FMS to be certified for WAAS LPV. In March 2018 it was announced that the Naimer family was selling the company to Israel's Elbit Systems; the sale was completed the following month.
Following the acquisition, Universal Avionics has continued to operate, with the same management and workforce and under the same name, as a wholly owned U.S. subsidiary of Elbit Systems of America.
Products
Flight Management Systems
Universal has been offering the UNS-1 line of Flight Management Systems since 1982.
Synthetic Vision
Universal offers the Vision-1 Synthetic Vision (SVS) System. The Vision-1 was the first SVS product certified for Part 25 aircraft.
Terrain Awareness and Warning System
Universal offers a Terrain Awareness and Warning System (TAWS) with a 3D perspective mode.
Flat Panel Integrated Displays
Universal offers Flat Panel Integrated Displays.
Communications Management Units
Universal offers the 1 MCU UniLink CMU (Communication Management Unit) with or without a built-in VDR (VHF Data Radio). The UniLink CMU is capable of operating in 25 kHz and 8.333 kHz channel spacing environments and operating as part of the ACARS data network.
References
Companies based in Tucson, Arizona
Electronics companies of the United States
Companies established in 1981
Electronic design
Avionics companies | Universal Avionics | [
"Engineering"
] | 462 | [
"Electronic design",
"Electronic engineering",
"Design"
] |
16,109,653 | https://en.wikipedia.org/wiki/Three-click%20rule | The three-click rule or three click rule is an unofficial web design rule concerning the design of website navigation. It suggests that a user of a website should be able to find any information with no more than three mouse clicks. It is based on the belief that users of a site will become frustrated and often leave if they cannot find the information within the three clicks.
One of the earliest mentions of the three click rule comes from Jeffrey Zeldman, who wrote in Taking Your Talent to the Web (2001), that the Three-Click Rule is "based on the way people use the Web" and "the rule can help you create sites with intuitive, logical hierarchical structures". Although there is little analytical evidence that this is the case, it is a commonly held belief amongst designers that the rule is part of a good system of navigation. Critics of the rule suggest that the number of clicks is not as important as the success of the clicks or information sent.
The principle of the “three-click rule” is often used to test the user-friendliness of a program or application. The rule's influence is evident in the design of modern-day operating systems and applications, where users can get from starting the computer or app to completing most tasks in three clicks or fewer.
In 2024, the Federal Trade Commission announced a "click-to-cancel" rule that would require online sellers to simplify the process for users to cancel services. This involved both transparent communication around cancellation and simplifying the user experience of canceling an online service.
Criticism
The three click rule has been challenged by usability test results, which have shown that the number of clicks needed to access the desired information affects neither user satisfaction, nor success rate.
In eCommerce websites, the rule can often be detrimental as in order to adhere to the rule, products on offer to customers must be grouped into categories that are far too large to be easily browsed.
See also
Six degrees of separation
References
Further reading
Web design
Rules of thumb | Three-click rule | [
"Engineering"
] | 411 | [
"Design",
"Web design"
] |
16,110,423 | https://en.wikipedia.org/wiki/Sodium%20monothiophosphate | Sodium monothiophosphate, or sodium phosphorothioate, is an inorganic compound with the chemical formula Na₃PO₃S. It is a sodium salt of monothiophosphoric acid (H₃PO₃S). Sodium monothiophosphate forms hydrates Na₃PO₃S·xH₂O. The anhydrous form and all hydrates are white solids. The anhydrous salt (x = 0) decomposes without melting at 120-125 °C. More common is the dodecahydrate (x = 12). A nonahydrate (x = 9) is also known.
Related salts are the sodium dithiophosphate undecahydrate Na₃PO₂S₂·11H₂O, sodium trithiophosphate undecahydrate Na₃POS₃·11H₂O, and sodium tetrathiophosphate octahydrate Na₃PS₄·8H₂O.
Preparation
Sodium monothiophosphate is prepared by the base hydrolysis of thiophosphoryl chloride using aqueous sodium hydroxide:
PSCl₃ + 6 NaOH → Na₃PO₃S + 3 NaCl + 3 H₂O
This reaction affords the dodecahydrate, which is easily dehydrated.
Partial dehydration over 6.5 M gives the nonahydrate. Under flowing , the anhydrous salt is formed.
Sodium monothiophosphate decomposes at neutral pH. Silicone grease catalyses the hydrolysis of the monothiophosphate ion (PO₃S³⁻), so its use on glass joints is not recommended.
In the anhydrous salt, the P–S bond is 211 pm long and the three equivalent P–O bonds are short at 151 pm. These disparate values suggest that the P–S bond is a single bond.
References
Sodium compounds
Phosphorothioates | Sodium monothiophosphate | [
"Chemistry"
] | 335 | [
"Inorganic compounds",
"Phosphorothioates",
"Functional groups",
"Inorganic compound stubs"
] |
16,110,499 | https://en.wikipedia.org/wiki/Small%20Veblen%20ordinal | In mathematics, the small Veblen ordinal is a certain large countable ordinal, named after Oswald Veblen. It is occasionally called the Ackermann ordinal, though the ordinal originally described by Ackermann is somewhat smaller than the small Veblen ordinal.
There is no standard notation for ordinals beyond the Feferman–Schütte ordinal Γ0. Most systems of notation use symbols such as ψ(α), θ(α), ψα(β), some of which are modifications of the Veblen functions to produce countable ordinals even for uncountable arguments, and some of which are "collapsing functions".
The small Veblen ordinal is the limit of ordinals that can be described using a version of the Veblen functions with finitely many arguments. It is the ordinal that measures the strength of Kruskal's theorem. It is also the ordinal type of a certain ordering of rooted trees.
References
Ordinal numbers | Small Veblen ordinal | [
"Mathematics"
] | 207 | [
"Ordinal numbers",
"Mathematical objects",
"Number stubs",
"Order theory",
"Numbers"
] |
16,110,588 | https://en.wikipedia.org/wiki/Minimum%20audibility%20curve | Minimum audibility curve is a standardized graph of the threshold of hearing as a function of frequency for an average human, and is used as the reference level when measuring hearing loss with an audiometer as shown on an audiogram.
Audiograms are produced using a piece of test equipment called an audiometer, and this allows different frequencies to be presented to the subject, usually over calibrated headphones, at any specified level. The levels are, however, not absolute, but weighted with frequency relative to a standard graph known as the minimum audibility curve which is intended to represent 'normal' hearing. This is not the best threshold found for all subjects, under ideal test conditions, which is represented by around 0 phon or the threshold of hearing on the equal-loudness contours, but is standardised in an ANSI standard to a somewhat higher level at 1 kHz. There are several definitions of the minimal audibility curve, defined in different international standards, and they differ significantly, giving rise to differences in audiograms according to the audiometer used. The ASA-1951 standard for example used a level of 16.5 dB SPL at 1 kHz whereas the later ANSI-1969/ISO-1963 standard uses 6.5 dB SPL, and it is common to allow a 10 dB correction for the older standard.
See also
Articulation index
Audiogram
Audiology
Audiometry
A-weighting
Equal-loudness contour
Hearing range
Hearing (sense)
Psychoacoustics
Pure tone audiometry
External links
Hearing Loss by Robert Thayer Sataloff
Otology
Acoustics | Minimum audibility curve | [
"Physics"
] | 318 | [
"Classical mechanics",
"Acoustics"
] |
16,110,796 | https://en.wikipedia.org/wiki/Large%20Veblen%20ordinal | In mathematics, the large Veblen ordinal is a certain large countable ordinal, named after Oswald Veblen.
There is no standard notation for ordinals beyond the Feferman–Schütte ordinal Γ0. Most systems of notation use symbols such as ψ(α), θ(α), ψα(β), some of which are modifications of the Veblen functions to produce countable ordinals even for uncountable arguments, and some of which are ordinal collapsing functions.
The large Veblen ordinal is sometimes denoted by or or . It was constructed by Veblen using an extension of Veblen functions allowing infinitely many arguments.
References
Ordinal numbers | Large Veblen ordinal | [
"Mathematics"
] | 155 | [
"Ordinal numbers",
"Mathematical objects",
"Number stubs",
"Order theory",
"Numbers"
] |
16,110,878 | https://en.wikipedia.org/wiki/List%20of%20bacteria%20genera | This article lists the genera of the bacteria. The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). However, many taxonomic names are taken from the GTDB release 08-RS214 (28 April 2023).
Phyla
List
Notes:
List of clades needed to be added:
Actinomycetota > Actinomycetia > Actinobacteridae
Bacillota A > Clostridiia > "Lachnospirales" > Oscillospiraceae, Ruminococcaceae
Bacteroidota > Bacteroidia
Cyanobacteriota > Cyanobacteria
Pseudomonadota (Proteobacteria s.s.) > "Caulobacteria", "Pseudomonadia"
See also
Branching order of bacterial phyla (Woese, 1987)
Branching order of bacterial phyla (Gupta, 2001)
Branching order of bacterial phyla (Cavalier-Smith, 2002)
Branching order of bacterial phyla (Rappe and Giovanoni, 2003)
Branching order of bacterial phyla (Battistuzzi et al., 2004)
Branching order of bacterial phyla (Ciccarelli et al., 2006)
Branching order of bacterial phyla after ARB Silva Living Tree
Branching order of bacterial phyla (Genome Taxonomy Database, 2018)
Bacterial phyla
List of Archaea genera
List of bacterial orders
LPSN, list of accepted bacterial and archaeal names
Human microbiome project
Microorganism
Phyla
References
External links
List of Bacteria genera
Lists of bacteria | List of bacteria genera | [
"Biology"
] | 343 | [
"Lists of bacteria",
"Bacteria"
] |
16,110,989 | https://en.wikipedia.org/wiki/Affix%20grammar | An affix grammar is a two-level grammar formalism used to describe the syntax of languages, mainly computer languages, using an approach based on how natural language is typically described.
The formalism was invented in 1962 by Lambert Meertens while developing a grammar for generating English sentences. Meertens also applied affix grammars to the description and composition of music, and obtained a special prize from the jury at the 1968 International Federation for Information Processing (IFIP) Congress in Edinburgh for his computer-generated string quartet, Quartet No. 1 in C major for 2 violins, viola and violoncello, based on the first non-context-free affix grammar. The string quartet was published in 1968, as Mathematical Centre Report MR 96.
The grammatical rules of an affix grammar are those of a context-free grammar, except that certain parts in the nonterminals (the affixes) are used as arguments. If the same affix occurs multiple times in a rule, its value must agree, i.e. it must be the same everywhere. In some types of affix grammar, more complex relationships between affix values are possible.
Example
We can describe an extremely simple fragment of English in the following manner:
Sentence → Subject Predicate
Subject → Noun
Predicate → Verb Object
Object → Noun
Noun → John
Noun → Mary
Noun → children
Noun → parents
Verb → like
Verb → likes
Verb → help
Verb → helps
This context-free grammar describes simple sentences such as
John likes children
Mary helps John
children help parents
parents like John
With more nouns and verbs, and more rules to introduce other parts of speech, a large range of English sentences can be described; so this is a promising approach for describing the syntax of English.
However, the given grammar also describes sentences such as
John like children
children helps parents
These sentences are wrong: in English, subject and verb have a grammatical number, which must agree.
An affix grammar can express this directly:
Sentence → Subject + number Predicate + number
Subject + number → Noun + number
Predicate + number → Verb + number Object
Object → Noun + number
Noun + singular → John
Noun + singular → Mary
Noun + plural → children
Noun + plural → parents
Verb + singular → likes
Verb + plural → like
Verb + singular → helps
Verb + plural → help
This grammar only describes correct English sentences, although it could be argued that
John likes John
is still incorrect and should instead read
John likes himself
This, too, can be incorporated using affixes, if the means of describing the relationships between different affix values are powerful enough. As remarked above, these means depend on the type of affix grammar chosen.
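The agreement mechanism above is easy to mimic in ordinary code. The following Python sketch (not part of the article, with made-up helper names) treats grammatical number as an affix value attached to each word and only derives or accepts sentences in which the subject and verb carry the same value, mirroring the affix grammar given above.

```python
import itertools

# A minimal sketch (not from the article) of the number-agreement grammar above:
# every lexical entry carries a "number" affix, and a rule only applies when the
# affix takes the same value everywhere it occurs.

NOUNS = {"John": "singular", "Mary": "singular",
         "children": "plural", "parents": "plural"}
VERBS = {"likes": "singular", "helps": "singular",
         "like": "plural", "help": "plural"}

def derive_sentences():
    """Yield exactly the sentences the affix grammar derives: the subject noun and
    the verb must share the number affix, while the object's number is free."""
    for subject, verb, obj in itertools.product(NOUNS, VERBS, NOUNS):
        if NOUNS[subject] == VERBS[verb]:   # shared affix must agree
            yield f"{subject} {verb} {obj}"

def accepts(sentence):
    """Recognise a three-word sentence, enforcing the same agreement constraint."""
    words = sentence.split()
    if len(words) != 3:
        return False
    subject, verb, obj = words
    return (subject in NOUNS and obj in NOUNS and verb in VERBS
            and NOUNS[subject] == VERBS[verb])

print(accepts("John likes children"))    # True
print(accepts("John like children"))     # False: affix values disagree
print(len(list(derive_sentences())))     # 32 agreeing sentences
```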
Types
In the simplest type of affix grammar, affixes can only take values from a finite domain, and affix values can only be related through agreement, as in the example.
Applied in this way, affixes increase compactness of grammars, but do not add expressive power.
Another approach is to allow affixes to take arbitrary strings as values and allow concatenations of affixes to be used in rules. The ranges of allowable values for affixes can be described with context-free grammar rules. This produces the formalism of two-level grammars, also known as Van Wijngaarden grammars or 2VW grammars. These have been successfully used to describe complicated languages, in particular, the syntax of the Algol 68 programming language. However, it turns out that, even though affix values can only be manipulated with string concatenation, this formalism is Turing complete; hence, even the most basic questions about the language described by an arbitrary 2VW grammar are undecidable in general.
Extended Affix Grammars, developed in the 1980s, are a more restricted version of the same idea. They were mainly applied to describe the grammar of natural language, e.g. English.
Another possibility is to allow the values of affixes to be computed by code written in some programming language. Two basic approaches have been used:
In attribute grammars, the affixes (called attributes) can take values from arbitrary domains (e.g. integer or real numbers, complex data structures) and arbitrary functions can be specified, written in a language of choice, to describe how affix values in rules are derived from each other.
In CDL (the Compiler Description Language) and its successor CDL2, developed in the 1970s, fragments of source code (usually in assembly language) can be used in rules instead of normal right-hand sides, allowing primitives for input scanning and affix value computations to be expressed directly. Designed as a basis for practical compiler construction, this approach was used to write compilers, and other software, e.g. a text editor.
References
Formal languages
Compiler construction
Syntax
Grammar frameworks | Affix grammar | [
"Mathematics"
] | 1,001 | [
"Formal languages",
"Mathematical logic"
] |
16,111,341 | https://en.wikipedia.org/wiki/Edinburgh%20Phrenological%20Society | The Edinburgh Phrenological Society was founded in 1820 by George Combe, an Edinburgh lawyer, with his physician brother Andrew Combe. The Edinburgh Society was the first and foremost phrenology grouping in Great Britain; more than forty phrenological societies followed in other parts of the British Isles. The Society's influence was greatest over its first two decades and declined in the 1840s; the final meeting was recorded in 1870.
The central concept of phrenology is that the brain is the organ of the mind and that human behaviour can be usefully understood in broadly neuropsychological rather than philosophical or religious terms. Phrenologists discounted supernatural explanations and stressed the modularity of mind. The Edinburgh phrenologists also acted as midwives to evolutionary theory and inspired a renewed interest in psychiatric disorder and its moral treatment. Phrenology claimed to be scientific but is now regarded as a pseudoscience as its formal procedures did not conform to the usual standards of scientific method.
Edinburgh phrenologists included George and Andrew Combe; asylum doctor and reformer William A.F. Browne, father of James Crichton-Browne; Robert Chambers, author of the 1844 proto-Darwinian book Vestiges of the Natural History of Creation; William Ballantyne Hodgson, economist and pioneer of women's education; astronomer John Pringle Nichol; and botanist and evolutionary thinker Hewett Cottrell Watson. Charles Darwin, a medical student in Edinburgh in 1825–7, took part in phrenological discussions at the Plinian Society and returned to Edinburgh in 1838 when formulating his concepts concerning natural selection.
Background
Phrenology emerged from the views of the medical doctor and scientific researcher Franz Joseph Gall in 18th-century Vienna. Gall suggested that facets of the mind corresponded to regions of the brain, and that it was possible to determine character traits by examining the shape of a person's skull. This "craniological" aspect was greatly extended by his one-time disciple, Johann Spurzheim, who coined the term phrenology and saw it as a means of advancing society by social reform (improving the material conditions of human life).
In 1815, the Edinburgh Review published a hostile article by anatomist John Gordon, who called phrenology a "mixture of gross errors" and "extravagant absurdities". In response, Spurzheim went to Edinburgh to take part in public debates and to perform brain dissections in public. Although he was received politely by the scientific and medical community there, many were troubled by the philosophical materialism implicit in phrenology. George Combe, a lawyer who had previously been skeptical, became a convert to phrenology after listening to Spurzheim's commentary as he dissected a human brain.
Founding and function
The Edinburgh Phrenological Society was founded on 22 February 1820, by the Combe brothers with the support of the Evangelical minister David Welsh. The Society grew rapidly; in 1826, it had 120 members, an estimated one third of whom had a medical background.
The Society acquired large numbers of phrenological artefacts, such as marked porcelain heads indicating the location of cerebral organs, and endocranial casts of individuals with unusual personalities. Their museum was located on Chambers Street.
Members published articles, gave lectures, and defended phrenology. Critics included philosopher Sir William Hamilton and the editor of the Edinburgh Review, Francis Jeffrey, Lord Jeffrey. The hostility of other critics, including Alexander Monro tertius, anatomy professor at the University of Edinburgh Medical School, actually added to the glamour of phrenological concepts. Some anti-religionists, including the anatomist Robert Knox and the evolutionist Robert Edmond Grant, while sympathetic to its materialist implications, rejected the unscientific nature of phrenology and did not embrace its speculative and reformist aspects.
In 1823, Andrew Combe addressed the Royal Medical Society in a debate, arguing that phrenology explained the intellectual and moral abilities of mankind. Both sides claimed victory after the lengthy debate, but the Medical Society refused to publish an account. This prompted the Edinburgh Phrenological Society to establish its own journal in 1824: The Phrenological Journal and Miscellany, later renamed Phrenological Journal and Magazine of Moral Science.
In the mid-1820s, a split emerged between the Christian phrenologists and Combe's closer associates. Matters came to a head when Combe and his supporters passed a motion banning the discussion of theology in the Society, effectively silencing their critics. In response, David Welsh and other evangelical members left the Society.
In December 1826, the atheistic phrenologist William A.F. Browne caused a sensation at the university's Plinian Society with an attack on the recently republished theories of Charles Bell concerning the expression of the human emotions. Bell believed that human anatomy uniquely allowed the expression of the human moral self while Browne argued that there were no absolute distinctions between human and animal anatomy. Charles Darwin, then a 17-year-old student at the university, was there to listen. On 27 March 1827, Browne advanced phrenological theories concerning the human mind in terms of the Lamarckist evolution of the brain. This attracted the opposition of almost all members of the Plinian Society and, again, Darwin observed the ensuing outrage. In his private notebooks, including the M Notebook written ten years later, Darwin commented sympathetically on the views of the phrenologists.
George Combe published The Constitution of Man in 1828. After a slow start, it became an international bestseller in the 19th century, with around 350,000 copies sold. Almost a century later, psychiatrist Sir James Crichton-Browne said of the book: "The Constitution of Man on its first appearance was received in Edinburgh with an odium theologicum, analogous to that afterwards stirred up by the Vestiges of Creation and On The Origin of Species. It was denounced as an attack on faith and morals.... read today, it must be regarded as really rather more orthodox in its teaching than some of the lucubrations of the Dean of St Paul's and the Bishop of Durham".
Phrenologists from the Society applied their methods to the Burke and Hare murders in Edinburgh. Over the course of ten months in 1828, Burke and Hare murdered sixteen people and sold the bodies for dissection in the private anatomy schools. Burke was executed on 28 January 1829, while Hare turned King's evidence; Burke was publicly dissected by Professor Monro the next day, and the phrenologists were permitted to examine his skull. Face masks of both men – a death-mask for Burke and a life-mask for Hare – form part of the Edinburgh phrenology collection.
Scotswoman Agnes Sillars Hamilton made a living as a "practical phrenologist", travelling throughout Britain and Ireland. Her son, Archibald Sillars Hamilton left for Australia in 1854, developed a successful phrenology practice there, and published an account of Ned Kelly's skull.
Society co-founder and president Andrew Combe had two successful publications in the early 1830s: Observations on Mental Derangement in 1831 and Physiology applied to Health and Education in 1834. The latter, especially, sold well in Great Britain and the United States, with numerous editions and reprintings.
The Edinburgh Phrenological Society received a financial boost by the death of a wealthy supporter in 1832. William Ramsay Henderson left a large bequest to the Edinburgh Society to promote phrenology as it saw fit. The Henderson Trust enabled the society to publish an inexpensive edition of The Constitution of Man, which went on to become one of the best-selling books of the 19th century. However, despite the widespread interest in phrenology in the 1820s and 1830s, the Phrenological Journal always struggled to make a profit.
Influences from the society
W.A.F. Browne: In 1832–1834, Browne published a paper in The Phrenological Journal in three serialised episodes On Morbid Manifestations of the Organ of Language, as connected with Insanity, relating mental disorder to a disturbance in the neurological organization of language. Browne went on to a distinguished career as an asylum doctor and his internationally influential 1837 publication What Asylums Were, Are and Ought To Be was dedicated to Andrew Combe. In 1866, after his twenty years of leadership at The Crichton asylum in Dumfries, Browne was elected President of the Medico-Psychological Association. In his later years, Browne returned to relationships of psychosis, brain injury and language in his 1872 paper Impairment of Language, The Result of Cerebral Disease, published in the West Riding Lunatic Asylum Medical Reports, edited by his son James Crichton-Browne.
Robert Chambers: Although not formally admitted to the Society, Chambers occasionally acted as George Combe's publisher and became an enthusiast for phrenological thinking. In 1844, Chambers anonymously published Vestiges of the Natural History of Creation, written as he recovered from depression at his holiday home in St Andrews. Chambers' wife, Anne Kirkwood, transcribed the manuscript for the publishers (dictated by her husband) so that they would not recognise its origins. In a strange parallel, Prince Albert read it aloud to Queen Victoria in the Summer of 1845. It became an international bestseller and a powerful public influence, situated midway between Combe's The Constitution of Man (1828) and Darwin's On the Origin of Species in 1859.
Charles Darwin: Darwin attended the University of Edinburgh Medical School and, as an active member of Plinian Society, observed the 1826-1827 controversies with phrenologist William A.F. Browne. In 1838, some eleven years after his hurried departure, Darwin revisited Edinburgh and his undergraduate haunts, recording his psychological speculations in the M Notebook and teasing out the details of his theory of natural selection. At this time, Darwin was preparing for marriage with his religiously minded cousin Emma Wedgwood, and was in some emotional turmoil: on 21 September, after his return to England, he recorded a vivid and disturbing dream in which he seemed to be involved in an execution at which the corpse came to life and joked about having died as a hero. Darwin committed his "gigantic blunder" concerning the parallel roads of Glen Roy while on this Scottish trip, suggesting an element of mental distraction. He published On the Origin of Species some twenty years later, in 1859; the book was translated into many languages, and became a staple scientific text and a key fixture of modern scientific culture.
William Ballantyne Hodgson: Hodgson joined the phrenology movement as a student at Edinburgh University and later supported himself as a professional lecturer on literature, education, and phrenology. He became an educational reformer, a pioneering proponent of women's education and – in 1871 – the first Professor of Political Economy (and Mercantile Law) at Edinburgh University. In later life, Hodgson lived at Bonaly Tower outside Edinburgh, and was elected President of the Educational Institute of Scotland.
Thomas Laycock: Laycock was one of George Combe's "influential disciples". He was a pioneering neurophysiologist. In 1855, Laycock was appointed to the Chair of Medicine in Edinburgh University. In 1860, Laycock published his Mind and Brain, an extended essay on the neurological foundations of psychological life. Laycock was friendly with asylum reformer William A.F. Browne and was an important influence on Browne's son, Sir James Crichton-Browne.
John Pringle Nichol: Nichol was originally educated and licensed as a preacher, but the impact of phrenological thinking pushed him into education. He became a celebrated lecturer and Regius Professor of Astronomy in Glasgow University, and his 1837 book The Architecture of the Heavens was a classic of popular science. In the 1840s, Nichol became addicted to prescription opiates, and he recorded his successful hydropathic rehabilitation in his autobiographical correspondence Memorials from Ben Rhydding.
Hewett Cottrell Watson: In 1836, Watson published a paper in The Phrenological Journal entitled What Is The Use of the Double Brain? in which he speculated on the differential development of the two human cerebral hemispheres. This theme of cerebral asymmetry was picked up rather casually by the London society physician Sir Henry Holland in 1840, and then much more extensively by the eccentric Brighton medical practitioner Arthur Ladbroke Wigan in his 1844 treatise A New View of Insanity: On the Duality of Mind. It did not achieve scientific status until Paul Broca, encouraged by the French phrenologist/physician Jean-Baptiste Bouillaud, published his research into the speech centres of the brain in 1861. In 1868, Broca presented his findings at the Norwich meeting of the British Association for the Advancement of Science. In 1889, Henry Maudsley published a searching review of this topic entitled The Double Brain in the philosophical journal Mind. Like Robert Chambers, Watson later turned his energies to the question of the transmutation of species, and, having bought the Phrenological Journal with the proceeds of a large inheritance, appointed himself as its editor in 1837. In the 1850s, Watson conducted an extensive correspondence with Charles Darwin concerning the geographical distribution of British plant species, and Darwin made generous acknowledgement of Watson's scientific assistance in On The Origin of Species (second edition). Watson was unusual amongst phrenologists in explicitly disavowing phrenological ideas in later life.
Decline
Interest in phrenology declined in Edinburgh in the 1840s. Some of the phrenologists' concerns drifted into the related fields of anthropometry, psychiatry and criminology, and also into degeneration theory as set out by Bénédict Morel, Arthur de Gobineau and Cesare Lombroso. In the 1870s, the eminent social psychologist Gustave Le Bon (1841–1931) invented a cephalometer which facilitated the measurement of cranial capacity and variation. In 1885, the German medical scientist Rudolf Virchow launched a large scale craniometric investigation of the supposed racial stereotypes with decisively negative results for the proponents of racial science. Worldwide, interest in phrenology remained high throughout the nineteenth century, with George Combe's The Constitution of Man being much in demand. Combe devoted his later years to international travel, lecturing on phrenology. He was preparing the ninth edition of The Constitution of Man when he died while receiving hydrotherapy treatment at Moor Park, Farnham.
The last recorded meeting of the Society took place in 1870. The Society's museum closed in 1886.
Legacy of the Society
Together with mesmerism, phrenology exerted an extraordinary influence on the Victorian literary imagination in the later 19th century, especially in the fin-de-siècle aesthetic, and comparable to the later cultural influences of spiritualism and psychoanalysis. Examples of phrenology's literary legacy feature in the works of Sir Arthur Conan Doyle, George du Maurier, Bram Stoker, Robert Louis Stevenson and H. G. Wells.
On 29 February 1924, Sir James Crichton-Browne (the son of William A.F. Browne) delivered the Ramsay Henderson Bequest Lecture entitled The Story of the Brain in which he recorded a generous appreciation of the role of the Edinburgh phrenologists in the later development of neurology and neuropsychiatry. Crichton-Browne did not remark, however, on his father's having joined the Society a century earlier, almost to the day.
The Henderson Trust was wound up in 2012. Many of the society's phrenological artefacts survive today, having passed to the University of Edinburgh's Anatomical Museum under the direction of Professor Matthew Kaufman, and some are now on display at the Scottish National Portrait Gallery.
The activities of the Edinburgh phrenologists have enjoyed an unusual afterlife in the history and sociology of scientific knowledge (science studies), as an example of a discarded cultural production.
References
External links
Anatomical Museum at the University of Edinburgh
Organizations established in 1820
1870 disestablishments in Scotland
Phrenology
Organisations based in Edinburgh
History of Edinburgh
History of psychology
History of neuroscience
Clubs and societies in Edinburgh
History of mental health in the United Kingdom
Former mental health organisations in the United Kingdom
Charles Darwin
1820 establishments in Scotland
Organizations disestablished in 1870 | Edinburgh Phrenological Society | [
"Biology"
] | 3,301 | [
"Phrenology",
"Biology theories",
"Obsolete biology theories"
] |
16,111,428 | https://en.wikipedia.org/wiki/Korg%20PadKontrol | The Korg PadKontrol was a USB MIDI controller manufactured by Korg. The PadKontrol was released in 2005 as a competitor to the Akai MPD and the M-Audio Triggerfinger. The PadKontrol has sixteen assignable, velocity sensitive pads, with sixteen "scenes" which allow the user to toggle between various pad configurations, and an assignable X-Y pad for drum rolls, flams, or controller input inside a VSTi or a MIDI sequencer.
Use
The PadKontrol is commonly used for controlling virtual drum instruments in a MIDI sequencer (such as ezdrummer or BFD). Additionally, the PadKontrol can be used to control a software sampler (Kontakt, for example) or to control values within a MIDI sequencer.
Native Mode
When the PadKontrol is placed in native mode, the user has control over every button and light on the unit (including the LED display) and can use software on the computer to send MIDI data to the PadKontrol, giving programmers a means by which to write their own software for the PadKontrol.
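As a rough illustration of what such software looks like, the hedged Python sketch below uses the mido MIDI library to send a SysEx message to the device and then listen for the SysEx replies it produces in native mode. The port name and the SysEx payload bytes are placeholders, not values taken from Korg's documentation, which would be needed for a working program.

```python
import mido

# Hedged sketch: 'mido' is a real Python MIDI library, but the SysEx payload and
# the port name below are placeholders. A working program would use the byte
# sequences documented by Korg for the padKONTROL's native mode.

PORT_NAME = "padKONTROL MIDI 2"                                   # assumption: port name on this system
ENTER_NATIVE_MODE = [0x42, 0x40, 0x6E, 0x08, 0x00, 0x00, 0x01]    # placeholder bytes

def main():
    with mido.open_output(PORT_NAME) as out, mido.open_input(PORT_NAME) as inp:
        # mido adds the leading 0xF0 and trailing 0xF7 around the data bytes.
        out.send(mido.Message('sysex', data=ENTER_NATIVE_MODE))

        # In native mode the unit reports pad and X-Y events via SysEx, so a
        # custom program parses those messages itself instead of note events.
        for msg in inp:
            if msg.type == 'sysex':
                print("event bytes:", list(msg.data))

if __name__ == "__main__":
    main()
```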
See also
Korg
MIDI controller
External links
PadKontrol Native Mode
Manufacturer Information
Video Overview of the PadKontrol
Computer peripherals
Electronic musical instruments
MIDI
Korg | Korg PadKontrol | [
"Technology"
] | 272 | [
"Computer peripherals",
"Components"
] |
16,112,279 | https://en.wikipedia.org/wiki/TVLM%20513-46546 | TVLM 513-46546 is an M9 ultracool dwarf at the red dwarf/brown dwarf mass boundary in the constellation Boötes. It exhibits flare star activity, which is most pronounced at radio wavelengths. The star has a mass approximately 80 times the mass of Jupiter (or 8 percent of the Sun's mass). The radio emission is broadband and highly circularly polarized, similar to planetary auroral radio emissions. The radio emission is periodic, with bursts emitted every 7054 s, with nearly one hundredth of a second precision. Subtle variations in the radio pulses could suggest that the ultracool dwarf rotates faster at the equator than the poles (differential rotation) in a manner similar to the Sun.
Planetary system
On 4 August 2020 astronomers announced the discovery of a Saturn-like planet TVLM 513b around this star with a period of days, a mass of between 0.35 and 0.42 Jupiter masses, a circular orbit (e≃0), a semi-major axis of between 0.28 and 0.31 AU, and an inclination angle of 71−88°. The companion was detected by the radio astrometry method.
References
Boötes
M-type main-sequence stars
Planetary systems with one confirmed planet
J15010818+2250020 | TVLM 513-46546 | [
"Astronomy"
] | 262 | [
"Boötes",
"Constellations"
] |
16,112,368 | https://en.wikipedia.org/wiki/Earth%20ellipsoid | An Earth ellipsoid or Earth spheroid is a mathematical figure approximating the Earth's form, used as a reference frame for computations in geodesy, astronomy, and the geosciences. Various different ellipsoids have been used as approximations.
It is a spheroid (an ellipsoid of revolution) whose minor axis (shorter diameter), which connects the geographical North Pole and South Pole, is approximately aligned with the Earth's axis of rotation. The ellipsoid is defined by the equatorial axis (a) and the polar axis (b); their radial difference is slightly more than 21 km, or 0.335% of a (which is not quite 6,400 km).
Many methods exist for determination of the axes of an Earth ellipsoid, ranging from meridian arcs up to modern satellite geodesy or the analysis and interconnection of continental geodetic networks. Amongst the different set of data used in national surveys are several of special importance: the Bessel ellipsoid of 1841, the international Hayford ellipsoid of 1924, and (for GPS positioning) the WGS84 ellipsoid.
Types
There are two types of ellipsoid: mean and reference.
A data set which describes the global average of the Earth's surface curvature is called the mean Earth Ellipsoid. It refers to a theoretical coherence between the geographic latitude and the meridional curvature of the geoid. The latter is close to the mean sea level, and therefore an ideal Earth ellipsoid has the same volume as the geoid.
While the mean Earth ellipsoid is the ideal basis of global geodesy, for regional networks a so-called reference ellipsoid may be the better choice. When geodetic measurements have to be computed on a mathematical reference surface, this surface should have a similar curvature as the regional geoid; otherwise, reduction of the measurements will get small distortions.
This is the reason for the "long life" of former reference ellipsoids like the Hayford or the Bessel ellipsoid, despite the fact that their main axes deviate by several hundred meters from the modern values. Another reason is a judicial one: the coordinates of millions of boundary stones should remain fixed for a long period. If their reference surface changes, the coordinates themselves also change.
However, for international networks, GPS positioning, or astronautics, these regional reasons are less relevant. As knowledge of the Earth's figure is increasingly accurate, the International Geoscientific Union IUGG usually adapts the axes of the Earth ellipsoid to the best available data.
Reference ellipsoid
In geodesy, a reference ellipsoid is a mathematically defined surface that approximates the geoid, which is the truer, imperfect figure of the Earth, or other planetary body, as opposed to a perfect, smooth, and unaltered sphere, which factors in the undulations of the bodies' gravity due to variations in the composition and density of the interior, as well as the subsequent flattening caused by the centrifugal force from the rotation of these massive objects (for planetary bodies that do rotate).
Because of their relative simplicity, reference ellipsoids are used as a preferred surface on which geodetic network computations are performed and point coordinates such as latitude, longitude, and elevation are defined.
In the context of standardization and geographic applications, a geodesic reference ellipsoid is the mathematical model used as foundation by spatial reference system or geodetic datum definitions.
Ellipsoid parameters
In 1687 Isaac Newton published the Principia in which he included a proof that a rotating self-gravitating fluid body in equilibrium takes the form of a flattened ("oblate") ellipsoid of revolution, generated by an ellipse rotated around its minor diameter; a shape which he termed an oblate spheroid.
In geophysics, geodesy, and related areas, the word 'ellipsoid' is understood to mean 'oblate ellipsoid of revolution', and the older term 'oblate spheroid' is hardly used. For bodies that cannot be well approximated by an ellipsoid of revolution a triaxial (or scalene) ellipsoid is used.
The shape of an ellipsoid of revolution is determined by the shape parameters of that ellipse. The semi-major axis of the ellipse, a, becomes the equatorial radius of the ellipsoid; the semi-minor axis of the ellipse, b, becomes the distance from the centre to either pole. These two lengths completely specify the shape of the ellipsoid.
In geodesy publications, however, it is common to specify the semi-major axis (equatorial radius) a and the flattening f, defined as:
f = (a − b) / a.
That is, f is the amount of flattening at each pole, relative to the radius at the equator. This is often expressed as a fraction 1/m, where m = 1/f is the "inverse flattening". A great many other ellipse parameters are used in geodesy but they can all be related to one or two of the set a, b and f.
A great many ellipsoids have been used to model the Earth in the past, with different assumed values of a and f as well as different assumed positions of the center and different axis orientations relative to the solid Earth. Starting in the late twentieth century, improved measurements of satellite orbits and star positions have provided extremely accurate determinations of the Earth's center of mass and of its axis of revolution; and those parameters have been adopted also for all modern reference ellipsoids.
The ellipsoid WGS-84, widely used for mapping and satellite navigation, has f close to 1/300 (more precisely, 1/298.257223563, by definition), corresponding to a difference of the major and minor semi-axes of approximately 21.4 km (more precisely, 21.3846857548205 km). For comparison, Earth's Moon is even less elliptical, with a flattening of less than 1/825, while Jupiter is visibly oblate at about 1/15 and one of Saturn's triaxial moons, Telesto, is highly flattened, with f between 1/3 and 1/2 (meaning that the polar diameter is between 50% and 67% of the equatorial diameter).
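As a quick check of the figures quoted above, the following Python snippet (an illustration, not part of the article) derives the WGS-84 semi-minor axis and the semi-axis difference from the two defining parameters a and 1/f.

```python
# Illustrative check of the WGS-84 figures quoted above, starting from the two
# defining shape parameters: the semi-major axis a and the inverse flattening 1/f.

a = 6_378_137.0              # equatorial radius in metres
inv_f = 298.257_223_563      # defining inverse flattening 1/f

f = 1.0 / inv_f              # flattening f = (a - b) / a
b = a * (1.0 - f)            # semi-minor (polar) axis b = a (1 - f)

print(f"flattening f = {f:.12f}")
print(f"polar axis b = {b:.4f} m")
print(f"a - b        = {a - b:.4f} m")   # about 21384.7 m, i.e. roughly 21.4 km
```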
Determination
Arc measurement is the historical method of determining the ellipsoid.
Two meridian arc measurements will allow the derivation of two parameters required to specify a reference ellipsoid.
For example, if the measurements were hypothetically performed exactly over the equator plane and either geographical pole, the radii of curvature so obtained would be related to the equatorial radius and the polar radius, respectively a and b (see: Earth polar and equatorial radius of curvature). Then, the flattening would readily follow from its definition:
f = (a − b) / a.
For two arc measurements each at arbitrary average latitudes φ₁ and φ₂, the solution starts from an initial approximation a₀ for the equatorial radius and f₀ for the flattening. The theoretical Earth's meridional radius of curvature M(φᵢ) can be calculated at the latitude φᵢ of each arc measurement as:
M(φᵢ) = a₀ (1 − e₀²) / (1 − e₀² sin²φᵢ)^(3/2),
where e₀² = 2f₀ − f₀².
Then discrepancies between empirical and theoretical values of the radius of curvature can be formed as δMᵢ = Mᵢ − M(φᵢ). Finally, corrections δa and δf for the initial equatorial radius and the flattening can be solved by means of a system of linear equations formulated via linearization of M(φᵢ):
δMᵢ = (∂M/∂a) δa + (∂M/∂f) δf,
where the partial derivatives are evaluated at the initial approximation a₀, f₀.
Longer arcs with multiple intermediate-latitude determinations can completely determine the ellipsoid that best fits the surveyed region. In practice, multiple arc measurements are used to determine the ellipsoid parameters by the method of least squares adjustment. The parameters determined are usually the semi-major axis, a, and any of the semi-minor axis, b, flattening, or eccentricity.
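The adjustment described above can be sketched in a few lines of code. The Python example below (illustrative only, using synthetic "observations" rather than real arc data) evaluates the standard meridional radius-of-curvature formula, forms the discrepancies against an initial a₀ and f₀, and solves the linearised system by least squares.

```python
import numpy as np

# Illustrative sketch with synthetic data: fit corrections to an initial
# equatorial radius a0 and flattening f0 from radius-of-curvature "observations"
# at several latitudes, using the linearised least-squares step described above.

def meridional_radius(a, f, phi):
    """M(phi) = a (1 - e^2) / (1 - e^2 sin^2 phi)^(3/2), with e^2 = 2f - f^2."""
    e2 = 2.0 * f - f * f
    return a * (1.0 - e2) / (1.0 - e2 * np.sin(phi) ** 2) ** 1.5

# Synthetic observations generated from a known ellipsoid.
a_true, f_true = 6_378_137.0, 1.0 / 298.257
phis = np.radians([10.0, 30.0, 45.0, 60.0, 80.0])
M_obs = meridional_radius(a_true, f_true, phis)

# Deliberately offset initial approximation.
a0, f0 = 6_378_000.0, 1.0 / 300.0

# Numerical partial derivatives dM/da and dM/df at (a0, f0).
eps_a, eps_f = 1.0, 1e-9
dM_da = (meridional_radius(a0 + eps_a, f0, phis) - meridional_radius(a0, f0, phis)) / eps_a
dM_df = (meridional_radius(a0, f0 + eps_f, phis) - meridional_radius(a0, f0, phis)) / eps_f

# Discrepancies and the linear system for the corrections (da, df).
delta_M = M_obs - meridional_radius(a0, f0, phis)
A = np.column_stack([dM_da, dM_df])
(da, df), *_ = np.linalg.lstsq(A, delta_M, rcond=None)

print(f"recovered a   = {a0 + da:.1f} m   (true {a_true:.1f} m)")
print(f"recovered 1/f = {1.0 / (f0 + df):.2f}   (true {1.0 / f_true:.2f})")
```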
Regional-scale systematic effects observed in the radius of curvature measurements reflect the geoid undulation and the deflection of the vertical, as explored in astrogeodetic leveling.
Gravimetry is another technique for determining Earth's flattening, as per Clairaut's theorem.
Modern geodesy no longer uses simple meridian arcs or ground triangulation networks, but the methods of satellite geodesy, especially satellite gravimetry.
Geodetic coordinates
Historical Earth ellipsoids
The reference ellipsoid models listed below have had utility in geodetic work and many are still in use. The older ellipsoids are named for the individual who derived them and the year of development is given. In 1887 the English surveyor Colonel Alexander Ross Clarke CB FRS RE was awarded the Gold Medal of the Royal Society for his work in determining the figure of the Earth. The international ellipsoid was developed by John Fillmore Hayford in 1910 and adopted by the International Union of Geodesy and Geophysics (IUGG) in 1924, which recommended it for international use.
At the 1967 meeting of the IUGG held in Lucerne, Switzerland, the ellipsoid called GRS-67 (Geodetic Reference System 1967) in the listing was recommended for adoption. The new ellipsoid was not recommended to replace the International Ellipsoid (1924), but was advocated for use where a greater degree of accuracy is required. It became a part of the GRS-67 which was approved and adopted at the 1971 meeting of the IUGG held in Moscow. It is used in Australia for the Australian Geodetic Datum and in the South American Datum 1969.
The GRS-80 (Geodetic Reference System 1980) as approved and adopted by the IUGG at its Canberra, Australia meeting of 1979 is based on the equatorial radius (semi-major axis of Earth ellipsoid) a = 6 378 137 m, total mass expressed through the geocentric gravitational constant GM = 3 986 005 × 10⁸ m³/s², dynamic form factor J₂ = 108 263 × 10⁻⁸ and angular velocity of rotation ω = 7 292 115 × 10⁻¹¹ rad/s, making the inverse flattening 1/f a derived quantity. The minute difference in f seen between GRS-80 and WGS-84 results from an unintentional truncation in the latter's defining constants: while the WGS-84 was designed to adhere closely to the GRS-80, incidentally the WGS-84 derived flattening turned out to differ slightly from the GRS-80 flattening because the normalized second degree zonal harmonic gravitational coefficient, that was derived from the GRS-80 value for J₂, was truncated to eight significant digits in the normalization process.
An ellipsoidal model describes only the ellipsoid's geometry and a normal gravity field formula to go with it. Commonly an ellipsoidal model is part of a more encompassing geodetic datum. For example, the older ED-50 (European Datum 1950) is based on the Hayford or International Ellipsoid. WGS-84 is peculiar in that the same name is used for both the complete geodetic reference system and its component ellipsoidal model. Nevertheless, the two concepts—ellipsoidal model and geodetic reference system—remain distinct.
Note that the same ellipsoid may be known by different names. It is best to mention the defining constants for unambiguous identification.
See also
Equatorial bulge
Earth radius of curvature
Geodetic datum
Geoid
Great ellipse
Meridian arc
Normal gravity
Planetary coordinate system
History of geodesy
Planetary ellipsoid
References
Bibliography
P. K. Seidelmann (Chair), et al. (2005), “Report Of The IAU/IAG Working Group On Cartographic Coordinates And Rotational Elements: 2003,” Celestial Mechanics and Dynamical Astronomy, 91, pp. 203–215.
Web address: https://astrogeology.usgs.gov/Projects/WGCCRE
OpenGIS Implementation Specification for Geographic information - Simple feature access - Part 1: Common architecture, Annex B.4. 2005-11-30
Web address: http://www.opengeospatial.org
External links
Geographic coordinate system
Coordinate systems and transformations (SPENVIS help page)
Coordinate Systems, Frames and Datums
Geodesy
Earth sciences | Earth ellipsoid | [
"Mathematics"
] | 2,516 | [
"Applied mathematics",
"Geodesy"
] |
16,113,773 | https://en.wikipedia.org/wiki/Large%20extra%20dimensions | In particle physics and string theory (M-theory), the ADD model, also known as the model with large extra dimensions (LED), is a model framework that attempts to solve the hierarchy problem (Why is the force of gravity so weak compared to the electromagnetic force and the other fundamental forces?). The model tries to explain this problem by postulating that our universe, with its four dimensions (three spatial ones plus time), exists on a membrane in a higher dimensional space. It is then suggested that the other forces of nature (the electromagnetic force, strong interaction, and weak interaction) operate within this membrane and its four dimensions, while the hypothetical gravity-bearing particle, the graviton, can propagate across the extra dimensions. This would explain why gravity is very weak compared to the other fundamental forces. The size of the dimensions in ADD is around the order of the TeV scale, which results in it being experimentally probeable by current colliders, unlike many exotic extra dimensional hypotheses that have the relevant size around the Planck scale.
The model was proposed by Nima Arkani-Hamed, Savas Dimopoulos, and Gia Dvali in 1998.
One way to test the theory is performed by colliding together two protons in the Large Hadron Collider so that they interact and produce particles. If a graviton were to be formed in the collision, it could propagate into the extra dimensions, resulting in an imbalance of transverse momentum. No experiments from the Large Hadron Collider have been decisive thus far. However, the operation range of the LHC (13 TeV collision energy) covers only a small part of the predicted range in which evidence for LED would be recorded (a few TeV to 10¹⁶ TeV). This suggests that the theory might be more thoroughly tested with more advanced technology.
Proponents' views
Traditionally, in theoretical physics, the Planck scale is the highest energy scale and all dimensionful parameters are measured in terms of the Planck scale. There is a great hierarchy between the weak scale and the Planck scale, and explaining the ratio of strength of weak force and gravity is the focus of much of beyond-Standard-Model physics. In models of large extra dimensions, the fundamental scale is much lower than the Planck scale. This occurs because the power law of gravity changes. For example, when there are two extra dimensions of size d, the power law of gravity is 1/r⁴ for objects with r ≪ d and 1/r² for objects with r ≫ d. If we want the Planck scale to be equal to the next accelerator energy (1 TeV), we should take d to be approximately 1 mm. For larger numbers of dimensions, fixing the Planck scale at 1 TeV, the size of the extra dimensions becomes smaller, as small as 1 femtometer for six extra dimensions.
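The quoted sizes follow from the usual ADD relation between the four-dimensional Planck mass, the fundamental scale and the size R of the n extra dimensions, M_Pl² ~ M*^(n+2) R^n. The Python sketch below is an order-of-magnitude illustration only; it ignores the convention-dependent factors of 2π, so the exact numbers differ somewhat from the values quoted in the text.

```python
# Order-of-magnitude sketch of R ~ (1/M*) (M_Pl / M*)^(2/n), ignoring the
# convention-dependent factors of 2*pi; the numbers are rough illustrations only.

HBARC_GEV_M = 1.973e-16      # hbar*c in GeV*m, converts 1/GeV to metres
M_PLANCK_GEV = 1.22e19       # four-dimensional Planck mass in GeV
M_STAR_GEV = 1.0e3           # assumed fundamental scale of about 1 TeV

for n in range(2, 7):
    R_inv_gev = (M_PLANCK_GEV / M_STAR_GEV) ** (2.0 / n) / M_STAR_GEV
    print(f"n = {n}: R ~ {R_inv_gev * HBARC_GEV_M:.1e} m")
# n = 2 lands near a millimetre and n = 6 in the femtometre range; the precise
# values depend on the O(1) factors dropped here.
```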
By reducing the fundamental scale to the weak scale, the fundamental theory of quantum gravity, such as string theory, might be accessible at colliders such as the Tevatron or the LHC. There has been recent progress in generating large volumes in the context of string theory. Having the fundamental scale accessible allows the production of black holes at the LHC, though there are constraints on the viability of this possibility at the energies at the LHC. There are other signatures of large extra dimensions at high energy colliders.
Many of the mechanisms that were used to explain the problems in the Standard Model used very high energies. In the years after the publication of ADD, much of the work of the beyond the Standard Model physics community went to explore how these problems could be solved with a low scale of quantum gravity. Almost immediately, there was an alternative explanation to the see-saw mechanism for the neutrino mass. Using extra dimensions as a new source of small numbers allowed for new mechanisms for understanding the masses and mixings of the neutrinos.
Another problem with having a low scale of quantum gravity was the existence of possibly TeV-suppressed proton decay, flavor violating, and CP violating operators. These would be disastrous phenomenologically. Physicists quickly realized that there were novel mechanisms for getting small numbers necessary for explaining these very rare processes.
Opponents' views
In the traditional view, the enormous gap in energy between the mass scales of ordinary particles and the Planck mass is reflected in the fact that virtual processes involving black holes or gravity are strongly suppressed. The suppression of these terms is the principle of renormalizability: in order to see an interaction at low energy, it must have the property that its coupling only changes logarithmically as a function of the Planck scale. Nonrenormalizable interactions are weak only to the extent that the Planck scale is large.
Virtual gravitational processes do not conserve anything except gauge charges, because black holes decay into anything with the same charge. Therefore, it is difficult to suppress interactions at the gravitational scale. One way to do it is by postulating new gauge symmetries. A different way to suppress these interactions in the context of extra-dimensional models is the "split fermion scenario" proposed by Arkani-Hamed and Schmaltz in their paper "Hierarchies without Symmetries from Extra Dimensions". In this scenario, the wavefunctions of particles that are bound to the brane have a finite width significantly smaller than the extra-dimension, but the center (e.g. of a Gaussian wave packet) can be dislocated along the direction of the extra dimension in what is known as a "fat brane". Integrating out the additional dimension(s) to obtain the effective coupling of higher-dimensional operators on the brane, the result is suppressed with the exponential of the square of the distance between the centers of the wave functions, a factor that generates a suppression by many orders of magnitude already by a dislocation of only a few times the typical width of the wave function.
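The size of the suppression from displaced wave functions can be illustrated with a small numerical example. The Python snippet below is a generic calculation, not taken from the cited paper: it computes the overlap of two equal-width Gaussian profiles whose centres are separated by a distance d along the extra dimension, which falls off as exp(−d²/4σ²), so a separation of ten widths already gives roughly eleven orders of magnitude.

```python
import numpy as np

# Generic illustration (not taken from the cited paper) of the suppression from
# displaced wave-function centres: the overlap of two equal-width Gaussian
# profiles separated by d falls off as exp(-d^2 / (4 sigma^2)).

sigma = 1.0
y = np.linspace(-40.0, 40.0, 200_001)
dy = y[1] - y[0]

def profile(center):
    """Normalised Gaussian profile along the extra dimension."""
    return (np.pi * sigma**2) ** -0.25 * np.exp(-(y - center) ** 2 / (2 * sigma**2))

for d in (2.0, 5.0, 10.0):
    overlap = np.sum(profile(0.0) * profile(d)) * dy
    analytic = np.exp(-d**2 / (4 * sigma**2))
    print(f"d = {d:4.1f} sigma: overlap ~ {overlap:.1e} (analytic {analytic:.1e})")
# Ten widths of separation already suppresses the effective coupling to ~1e-11.
```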
In electromagnetism, the electron magnetic moment is described by perturbative processes derived in the QED Lagrangian:
L = ψ̄(iγ^μ D_μ − m)ψ − (1/4) F_{μν}F^{μν},
which is calculated and measured to one part in a trillion. But it is also possible to include a Pauli term in the Lagrangian:
(1/Λ) ψ̄ σ^{μν} ψ F_{μν},
and the magnetic moment would then change by an amount set by the scale Λ. The reason the magnetic moment is correctly calculated without this term is because the coefficient 1/Λ has the dimension of inverse mass. The mass scale Λ is at most the Planck mass, so the effect would only be seen at the 20th decimal place with the usual Planck scale.
Since the electron magnetic moment is measured so accurately, and since the scale where it is measured is at the electron mass, a term of this kind would be visible even if the Planck scale were only about 10⁹ electron masses, which is . This is much higher than the proposed Planck scale in the ADD model.
QED is not the full theory, and the Standard Model does not have many possible Pauli terms. A good rule of thumb is that a Pauli term is like a mass term: in order to generate it, the Higgs must enter. But in the ADD model, the Higgs vacuum expectation value is comparable to the Planck scale, so the Higgs field can contribute to any power without any suppression. One coupling which generates a Pauli term is the same as the electron mass term, except with an extra factor of σ^{μν}B_{μν}, where B is the U(1) gauge field. This is dimension-six, and it contains one power of the Higgs expectation value, and is suppressed by two powers of the Planck mass. This should start contributing to the electron magnetic moment at the sixth decimal place. A similar term should contribute to the muon magnetic moment at the third or fourth decimal place.
The neutrinos are only massless because the dimension-five operator (LH)(LH) does not appear. But neutrinos have a mass scale of approximately 10⁻² eV, which is 14 orders of magnitude smaller than the scale of the Higgs expectation value of 1 TeV. This means that the term is suppressed by a mass scale M such that m_ν ≈ v²/M, where v is the Higgs expectation value.
Substituting v ≈ 1 TeV and m_ν ≈ 10⁻² eV gives M ≈ 10¹⁷ GeV. So this is where the neutrino masses suggest new physics; at close to the traditional Grand Unification Theory (GUT) scale, a few orders of magnitude less than the traditional Planck scale. The same term in a large extra dimension model would give a mass to the neutrino in the MeV-GeV range, comparable to the mass of the other particles.
In this view, models with large extra dimensions miscalculate the neutrino masses by inappropriately assuming that the mass is due to interactions with a hypothetical right-handed partner. The only reason to introduce a right-handed partner is to produce neutrino masses in a renormalizable GUT. If the Planck scale is small so that renormalizability is no longer an issue, there are many neutrino mass terms which do not require extra particles.
For example, at dimension six there is a Higgs-free term which couples the lepton doublets to the quark doublets, which is a coupling to the strong-interaction quark condensate. Even with a relatively low-energy pion scale, this type of interaction could conceivably give the neutrino a mass of order ⟨q̄q⟩/(1 TeV)^2, which is only a factor of 10^7 less than the pion condensate itself at roughly 200 MeV. This would be some 10 eV of mass, about a thousand times bigger than what is measured.
This term also allows for lepton number violating pion decays, and for proton decay. In fact, in all operators with dimension greater than four, there are CP, baryon, and lepton-number violations. The only way to suppress them is to deal with them term by term, which nobody has done.
The popularity, or at least prominence, of these models may have been enhanced because they allow the possibility of black hole production at the LHC, which has attracted significant attention.
Empirical tests
Analyses of results from the Large Hadron Collider severely constrain theories with large extra dimensions.
In 2012, the Fermi/LAT collaboration published limits on the ADD model of Large Extra Dimensions from astrophysical observations of neutron stars. If the unification scale is at a TeV, then for n < 4, the results presented here imply that the compactification topology is more complicated than a torus, i.e., all large extra dimensions (LED) having the same size. For flat LED of the same size, the lower limits on the unification scale are consistent with n ≥ 4. The details of the analysis are as follows: a sample of 6 gamma-ray-faint neutron star (NS) sources not reported in the first Fermi gamma-ray source catalog, selected as good candidates on the basis of age, surface magnetic field, distance, and galactic latitude, is used for this analysis. Based on 11 months of data from Fermi-LAT, 95% CL upper limits on the size of extra dimensions are obtained from each source, as well as 95% CL lower limits on the (n+4)-dimensional Planck scale. In addition, the limits from all of the analyzed NSs have been combined statistically using two likelihood-based methods. The results indicate more stringent limits on LED than quoted previously from individual neutron star sources in gamma rays, and they are also more stringent than current collider limits from the LHC for small numbers of extra dimensions.
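For orientation, the limits on the size of the extra dimensions and on the higher-dimensional Planck scale quoted in such analyses are tied together by the standard ADD volume relation (written here schematically; factors of 2π and the exact definition of M_D vary between conventions):

$$M_{\mathrm{Pl}}^{2} \;\sim\; M_{D}^{\,n+2}\, R^{\,n},$$

where M_Pl is the ordinary four-dimensional Planck mass, M_D the (n+4)-dimensional Planck scale, and R the common size of the n extra dimensions, so a lower limit on M_D corresponds to an upper limit on R and vice versa.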
See also
Universal extra dimensions
Kaluza–Klein theory
Randall–Sundrum model
DGP model
References
Further reading
S. Hossenfelder, Extra Dimensions, (2006).
Kaustubh Agashe and Alex Pomarol
Physics beyond the Standard Model
Theories of gravity
String theory
Dimension | Large extra dimensions | [
"Physics",
"Astronomy"
] | 2,364 | [
"Geometric measurement",
"Astronomical hypotheses",
"Physical quantities",
"Theoretical physics",
"Unsolved problems in physics",
"Theories of gravity",
"Particle physics",
"Theory of relativity",
"String theory",
"Dimension",
"Physics beyond the Standard Model"
] |
16,115,156 | https://en.wikipedia.org/wiki/Gyromitra%20caroliniana | Gyromitra caroliniana, known commonly as the Carolina false morel or big red, is an ascomycete fungus of the genus Gyromitra, within the Pezizales group of fungi. It is found in hardwood forests of the southeastern United States, where it fruits in early spring soon after snowmelt.
The fruit body, or ascocarp, appears on the ground in woodland, and can grow to massive sizes. The heavily wrinkled cap is red-brown in color, nearly spherical to roughly elliptical in shape, and typically measures tall and wide. The stipe is massive, up to thick, with a white felt-like surface. The brittle flesh is densely packed into the cap in convoluted folds that form internal locules.
Taxonomy
The species was originally named Morchella caroliniana by French botanist Louis Augustin Guillaume Bosc in 1811, and later sanctioned under this name by Elias Fries in 1822. It was transferred to Gyromitra by Fries in 1871.
Gyromitra caroliniana is the type species of subgenus Caroliniana of genus Gyromitra. This grouping comprises species that have, in maturity, coarsely reticulate ascospores (i.e., with a network of ridges on the surface) with multiple blunt spines that originate from the reticulum on the spores. Other species in this subgenus include G. fastigiata and the central European species G. parma. In 1969, Erich Heinz Benedix believed that the spore reticulation was sufficiently unique to be worthy of designation as a separate genus, and he described Fastigiella to contain G. caroliniana. Harri Harmaja disagreed, later placing Fastigiella in synonymy with Gyromitra.
In a 2009 review of the genus Gyromitra, authors van Vooren and Moreau say that Bosc's original species description is ambiguous, leaving much room for interpretation, and they suggest that several reports of the species occurring in Europe should be referred to Gyromitra fastigiata. They point out that in 1970, Estonian mycologist Ain Raitviir considered Bosc's Morchella caroliniana a nomen dubium, and Fries's description as nomen confusum, and advocated the abandonment of the specific epithet caroliniana. In the early 1970s, Kent McKnight redefined the taxon and selected a neotype, based on five specimens collected from Lorton, Virginia in 1942.
The specific epithet refers to the Carolinas, where it was first collected scientifically. Common names include the "brown false morel", "Carolina false morel", "big red" (particularly in Missouri and Arkansas), or "river red".
Description
The cap is roughly spherical to elliptical, and features a folded, crumpled, or corrugated surface that somewhat resembles the surface of a brain. It has areas of more or less symmetrical pits, or ribs arranged vertically. The cap margin is close to the stipe and sometimes adheres to it. The color is reddish to reddish-brown, but becomes darker in weathered specimens; the reverse side is whitish. Fruit bodies are typically across but can grow to be much larger. Fred J. Seaver reported one specimen to have grown to a height of , but a more usual height range is . The underside is whitish, but not readily visible. The stipe is short and stout, furrowed, typically long by wide but sometimes much larger, and usually thickest at the base. Pure white when young and with a felt-like surface, it discolors in age or with handling. The upper portion of the stipe is usually branched, but the branches are hidden by the cap. The whitish flesh forms locules (chambers) and is densely packed in the stipe and cap, forming branches to the points of attachment.
The spores are narrowly elliptical, hyaline (translucent), and apiculate (with a sharply pointed tip), measuring 30–33 by 11.5–14 μm. Spores usually have one large oil droplet and one or two smaller ones. Initially smooth, the spore surface becomes reticulate and coarse, developing small warts. The use of scanning electron microscopy has revealed up to 6 short apiculi (the parts of a spore that attach to the sterigmata) that originate from extensions of the reticulation. Asci (spore-bearing cells) are 320–420 by 18.5–23 μm, and the paraphyses are 5.9–6.5 μm wide.
Although some guides indicate the species is edible with suitable preparation (such as boiling), it is generally not recommended for consumption because of the risk of confusion with other toxic Gyromitra species that contain the compound gyromitrin. When boiled in water, or digested in the body, this compound is readily hydrolyzed to the toxic compound monomethylhydrazine—used as a propellant in some rocket fuels.
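In outline, the toxicology literature generally describes this hydrolysis as a two-step process (a simplified scheme; stoichiometry and by-products such as acetaldehyde are omitted here):

$$\text{gyromitrin} \;\xrightarrow{\ \mathrm{H_2O}\ }\; \text{N-methyl-N-formylhydrazine} \;\xrightarrow{\ \mathrm{H_2O}\ }\; \text{monomethylhydrazine}\ (\mathrm{CH_3NHNH_2}).$$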
Similar species
Gyromitra brunnea is similar in appearance to G. caroliniana, and has an overlapping geographical range. G. brunnea is distinctly lobed, and lacks ribs and cross-ribs. Consequently, "seams" can usually be found where the undersurface is exposed. In contrast, G. caroliniana is almost never lobed and thus lacks seams. Its tightly wrinkled and attached cap mostly hides the undersurface. G. korfii has a more block-like or square appearance, and its yellowish-brown to reddish-brown cap surface has fewer wrinkles, folds, and convolutions. G. fastigiata is a European species that resembles the North American G. brunnea. The common and widespread G. esculenta has a loosely lobed, irregularly shaped, brainlike cap. It has shorter spores measuring 21–25 by 12–13 μm.
Habitat and distribution
The fungus fruits singly or in loose groups on the ground under hardwood trees, in rich humus. Common habitats include near stumps and other dead wood, particularly oak, and along river bottoms. In the southern states, it can appear as early as March, but elsewhere it typically fruits in April and May. The species has been used as an indicator signalling the start of "morel season". Gyromitra species are "officially" considered saprobic, but exhibit some mycorrhizal tendencies, and may integrate both ecological lifestyles in their life cycle.
The range of G. caroliniana includes Oklahoma to the Carolinas and north to the Great Lakes. Erich Benedix reported the fungus in Thuringia and Austria, where he claimed it had previously often been misidentified with young forms of Gyromitra infula. A more recent revision disputes those claims, saying "Reports from Europe are unsubstantiated and are due to confusion with G. fastigiata and G. gigas". The fruit bodies develop slowly, and specimens left until late in the season can grow up to five pounds or more.
See also
Morchella, the true morels
References
Cited literature
Discinaceae
Fungi of North America
Fungi described in 1811
Taxa named by Louis Augustin Guillaume Bosc
Fungus species | Gyromitra caroliniana | [
"Biology"
] | 1,509 | [
"Fungi",
"Fungus species"
] |
16,115,172 | https://en.wikipedia.org/wiki/List%20of%20Facebook%20features | Facebook is a social-network service website launched on February 4, 2004, by Mark Zuckerberg. The following is a list of software and technology features that can be found on the Facebook website and mobile app and are available to users of the social media site.
Facebook structure
News Feed
The news feed is the primary system through which users are exposed to content posted on the network. Using a secret method (initially known as EdgeRank), Facebook selects a handful of updates to actually show users every time they visit their feed, out of an average of 1500 updates they can potentially receive.
On September 6, 2006, Ruchi Sanghvi announced a new home page feature called News Feed. Originally, when users logged into Facebook, they were presented with a customizable version of their own profile. The new layout, by contrast, created an alternative home page in which users saw a constantly updated list of their friends' Facebook activity. News Feed highlights information that includes profile changes, upcoming events, and birthdays, among other updates. This has enabled spammers and other users to manipulate these features by creating illegitimate events or posting fake birthdays to attract attention to their profile or cause. News Feed also shows conversations taking place between the walls of a user's friends. An integral part of the News Feed interface is the Mini Feed, a news stream on the user's profile page that shows updates about that user. Unlike in the News Feed, the user can delete events from the Mini Feed after they appear so that they are no longer visible to profile visitors. In 2011, Facebook updated the News Feed to show top stories and most recent stories in one feed, and the option to highlight stories to make them top stories, as well as to un-highlight stories. In response to users' criticism, Facebook later updated the News Feed to allow users to view recent stories first.
Initially, the addition of the News Feed caused some discontent among Facebook users. Many users complained that the News Feed was too cluttered with excess information. Others were concerned that the News Feed made it too easy for other people to track activities like changes in relationship status, events, and conversations with other users. This tracking is often casually referred to as "Facebook-Stalking". In response to this dissatisfaction, creator Mark Zuckerberg issued an apology for the site's failure to include appropriate customizable privacy features. Thereafter, users were able to control what types of information were shared automatically with friends. Currently, users may prevent friends from seeing updates about several types of especially private activities, although other events are not customizable in this way.
With the introduction of the "New Facebook" in early February 2010 came a complete redesign of the pages, several new features and changes to News Feeds. On their personal Feeds (now integrated with Walls), users were given the option of removing updates from any application as well as choosing the size they show up on the page. Furthermore, the community feed (containing recent actions by the user's friends) contained options to instantly select whether to hear more or less about certain friends or applications.
On March 7, 2013, Facebook announced a redesigned newsfeed. In 2022, Facebook's parent company, Meta Platforms, announced it is renaming the "News Feed" to simply be named "Feed".
Friends
"Friending" someone on the platform is the act of sending another user a "friend request" on Facebook. The two people are Facebook friends once the receiving party accepts the friend request. In addition to accepting the request, the user has the option of declining the friend request or hiding it using the "Not Now" feature. Deleting a friend request removes the request, but does allow the sender to resend it in the future. The "Not Now" feature hides the request but does not delete it, allowing the receiver to revisit the request at a later date.
It is also possible to remove a user from one's friends, which is referred to as "unfriending" by Facebook. Many Facebook users also refer to the process as "de-friending". "Unfriend" was New Oxford American Dictionary's word of the year in 2009. Facebook does not notify a user if they have been unfriended, but there are scripts that provide this functionality. There has also been a study on why Facebook users unfriend, which found that differences, especially between ages, and few mutual friendships were the dominant factors correlated with unfriending, all of which mirrors the decline of physical-world relationships.
Facebook profiles also have advanced privacy features to restrict content to certain users, such as non-friends or persons on a specific list.
Wall
The wall is the original profile space where Facebook users' content was displayed, until December 2011. It allowed the posting of messages, often short or temporal notes, for the user to see while displaying the time and date the message was written. A user's wall is visible to anyone with the ability to see their full profile, and friends' wall posts appear in the user's News Feed.
In July 2007, Facebook allowed users to post attachments to the wall, whereas previously the wall was limited to text only. In May 2008, the Wall-to-Wall for each profile was limited to only 40 posts. Facebook later allowed users to insert HTML code in boxes attached to the wall via apps like Static FBML which has allowed marketers to track use of their fan pages with Google Analytics.
The concept of tagging in status updates, an attempt to imitate Twitter, began September 14, 2009. This meant putting the name of a user, a brand, an event or a group in a post in such a way that it linked to the wall of the Facebook page being tagged, and made the post appear in news feeds for that page, as well as those of selected friends. This was first done using the "@" symbol followed by the person's name. Later, a numerical ID for the person could be used. Visually, this was displayed with bold text. Early in 2011, tagging in comments was added.
In addition to postings by other users, the wall also displayed other events that happened to the user's profile. This included when information was changed, when they changed their profile picture, and when they connected with new people, among other things.
The wall has been replaced by the Timeline profile layout, which was introduced in December 2011.
Timeline
In September 2011, Facebook introduced "Timeline" at its developer conference, intended to revamp users' profiles in order to show content based on year, month and date. "Cover" photos were introduced, taking up a significant portion of the top of pages, and a redesigned display of personal information such as friends, likes and photos appeared on the left-hand side, while story posts appeared on the right. The new design introduced flexible sizing for story posts in the feed, along with more prominent location and photo placement. The Timeline also encouraged scrolling, with constantly loading story posts of users' pasts. Timeline began gradually rolling out to users in New Zealand starting December 7, 2011, and was made officially available to all users worldwide on December 15. By January, the switch to Timeline became required for all users. In February 2012, Timeline became available for Facebook Pages.
Likes and Reactions
The like button, first enabled on February 9, 2009, enables users to easily interact with status updates, comments, photos, links shared by friends, videos and advertisements. Once clicked by a user, the designated content appears in the News Feeds of that user's friends, and the button also displays the number of other users who have liked the content, including a full or partial list of those users. The like button was extended to comments in June 2010. After extensive testing and years of questions from the public about whether it had an intention to incorporate a "Dislike" button, Facebook officially rolled out "Reactions" to users worldwide on February 24, 2016, letting users long-press on the like button for an option to use one of six pre-defined emotions: "Like", "Love", "Haha", "Wow", "Sad", or "Angry". For limited periods, additional reactions such as "Care", "Pride Flag", and "Thankful" have also been offered. Reactions were also extended to comments in May 2017.
In June 2017, in celebration of Pride month, Facebook introduced a rainbow flag as part of its Reactions options. The design of the reactions was updated in April 2019, with more frames comprising the icons' animations as well as a general graphical overhaul. The reactions were first shown off by reverse engineering expert Jane Manchun Wong on Twitter, with mixed reactions both as replies and on Facebook itself. In September 2019 it was revealed that Facebook is conducting a trial in Australia to hide the like count on posts. In 2020 during the COVID-19 outbreak, a "Care" reaction was added to Facebook.
Comments
To mark the 30th anniversary of the GIF, Facebook has introduced a new feature enabling users to add GIFs to comments. The eagerly awaited feature can be accessed using the GIF button located beside the emoji picker. Users can choose from the available GIFs sourced from Facebook's GIF partners, but cannot upload other GIFs.
GIFs aside, the comments feature also allows users to attach stickers. Facebook has a standard sticker set, whereby sticker options are categorised according to popular moods and activities such as "Happy", "Eating", and "Confused". In 2020, Facebook introduced "Make Your Avatar", which enables users to customize a virtual look-alike of themselves to use as stickers in comments as well as Messenger chats. Essentially Facebook's version of Snap's Bitmoji, Avatars have since been made available in Australia, New Zealand, Europe and Canada.
In December 2015, an indicator was added to the comment area to show when a friend is typing a new comment.
Messages and inbox
Facebook Messenger is an instant messaging service and software application. Originally developed as Facebook Chat in 2008, the company revamped its messaging service in 2010, and subsequently released standalone iOS and Android apps in August 2011. Over the years, Facebook has released new apps on a variety of different operating systems, launched a dedicated website interface, and separated the messaging functionality from the main Facebook app, requiring users to download the standalone apps.
Facebook Messenger lets Facebook users send messages to each other. Complementing regular conversations, Messenger lets users make voice calls and video calls both in one-to-one interactions and in group conversations. Its Android app has integrated support for SMS and "Chat Heads", which are round profile photo icons appearing on-screen regardless of what app is open, while both apps support multiple accounts, conversations with optional end-to-end encryption, and playing "Instant Games", which are select games built into Messenger. Some features, including sending money and requesting transportation, are limited to the United States. In 2017, Facebook added "Messenger Day", a feature that lets users share photos and videos in a story format with all their friends with the content disappearing after 24 hours; Reactions, which lets users tap and hold a message to add a reaction through an emoji; and Mentions, which lets users in group conversations type @ to give a particular user a notification.
In March 2015, Facebook announced that it would start letting businesses and users interact through Messenger with features such as tracking purchases and receiving notifications, and interacting with customer service representatives. It also announced that third-party developers could integrate their apps into Messenger, letting users enter an app while inside Messenger and optionally share details from the app into a chat. In April 2016, it introduced an API for developers to build chatbots into Messenger, for uses such as news publishers building bots to give users news through the service, and in April 2017, it enabled the M virtual assistant for users in the U.S., which scans chats for keywords and suggests relevant actions, such as its payments system for users mentioning money. Additionally, Facebook expanded the use of bots, incorporating group chatbots into Messenger as "Chat Extensions", adding a "Discovery" tab for finding bots, and enabling special, branded QR codes that, when scanned, take the user to a specific bot.
In August 2018, Facebook discontinued users' ability to post to their Timeline using SMS.
In September 2022, Facebook added the "Community Chats" function, allowing people in a Facebook group to chat between each other on Messenger and on the Messenger app.
Notifications
Notifications tell the user that something has been added to their profile page. Examples include: a message being shared on the user's wall or a comment on a picture of the user or on a picture that the user has previously commented on. Initially, notifications for events were limited to one per event; these were eventually grouped category-wise. For instance, 10 users having liked a user's picture now count for one notification, whereas in the earlier stages, these would have accounted for ten separate notifications. The number of notifications can be changed in the settings section, to a maximum of 99. There is a red notification counter at the top of the page, which if clicked displays the most recent ones.
Groups
Facebook groups can be created by individual users. Groups allow members to post content such as links, media, questions, events, editable documents, and comments on these items.
Groups are used for collaboration and allow discussions, events, and numerous other activities. They are a way of enabling a number of people to come together online to share information and discuss specific subjects. They are increasingly used by clubs, companies and public sector organizations to engage with stakeholders, be they members of the public, employees, members, service users, shareholders or customers. Groups can have two different levels of privacy settings:
"Open" means both the group, its members and their comments are visible to the public (which includes non-members) but they cannot interact without joining.
"Secret" means that nothing can be viewed by the public unless a member specifically invites another user to join the group.
Previously, in October 2010, there were version 0 (legacy) and version 1 (current) groups. Version 1 or "new" groups can contain the name of the group in their URL if the email address of the group is set. Groups do not have a RSS feed to export the wall or the member list, such as Pages or Events have, but third parties provide such service if the group is set to an "open" privacy setting. All groups have since been migrated to a single design.
Applications
Events
Facebook events are a way for members to let friends know about upcoming events in their community and to organize social gatherings. Events require an event name, network, host name, event type, start time, location, and a guest list of friends invited. Events can be public or private. Private events cannot be found in searches and are by invitation only. People who have not been invited cannot view a private event's description, Wall, or photos. They also will not see any Feed stories about the event. When setting up an event the user can choose to allow friends to upload photos or videos. Note that, unlike real-world events, all Facebook events are treated as separate entities, even though in reality some events sit inside other events, attending one event can preclude attending another, and so on.
In February 2011, Facebook began to use the hCalendar microformat to mark up events, and the hCard microformat for the events' venues, enabling the extraction of details to users' own calendar or mapping applications. Third parties facilitate events to be exported from Facebook pages to the iCalendar-format.
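As a sketch of how such markup can be consumed (the HTML sample below is illustrative only and is not Facebook's actual markup; the hCalendar class names vevent, summary, dtstart, and location are part of the published microformat):

```python
from bs4 import BeautifulSoup  # third-party package: beautifulsoup4

# Illustrative hCalendar snippet; real pages embed these class names in
# whatever surrounding HTML they already use.
SAMPLE_HTML = """
<div class="vevent">
  <span class="summary">Picnic in the park</span>
  <abbr class="dtstart" title="2011-02-19T14:00:00">Feb 19, 2 pm</abbr>
  <span class="location">Golden Gate Park</span>
</div>
"""

soup = BeautifulSoup(SAMPLE_HTML, "html.parser")
for event in soup.select(".vevent"):
    summary = event.select_one(".summary")
    start = event.select_one(".dtstart")
    location = event.select_one(".location")
    print(
        summary.get_text(strip=True) if summary else None,
        start.get("title") if start else None,  # machine-readable ISO datetime
        location.get_text(strip=True) if location else None,
    )
```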
Marketplace
In 2007, Facebook introduced the Facebook Marketplace, allowing users to post classified ads within sale, housing, and jobs categories. However, the feature never gained traction, and in 2009, control was transferred to Oodle, the platform powering the functionality. The feature was then eventually shut down in 2014. In October 2016, Facebook announced a new Marketplace, citing the growth of organized "buy and sell" Facebook Groups, and gave the new version a higher prominence in the main Facebook app, taking the navigation position previously held by Facebook Messenger.
According to Facebook's internal data from 2019, the Marketplace used to only be a C2C platform but now there is a major B2C opportunity for US retailers.
In June 2021, the European Commission and Competition and Markets Authority launched antitrust probes over concerns that Facebook's Marketplace took advantage of data from competing services that advertise on the platform and used it to gain "an undue competitive advantage".
Notes
Facebook Notes was introduced on August 22, 2006, as a blogging platform offering users the ability to write notes, attach photos, and optionally import blog entries from external sources.
The best-known use of the Notes feature was the Internet meme "25 Random Things About Me", which involves writing 25 things about the user that their friends do not already know about them and using the tag function to ask 25 friends to do the same. The trend became popular in February 2009, with The New York Times discussing its sudden surge, noting that nearly five million notes were created for the purpose, a doubling of the feature's use in the previous week and larger than any other week in Facebook's history.
In September 2015, the Notes feature received an update, bringing additional features, such as adding a cover photo and caption, the ability to resize photos, and text formatting options.
Places
Facebook announced Places on August 18, 2010. It is a feature that lets users check into Facebook using a mobile device to let a user's friends know where they are at the moment.
In November 2010, Facebook announced "Deals", a subset of the Places offering, which allows for users to check in from restaurants, supermarkets, bars, and coffee shops using an app on a mobile device and then be rewarded discounts, coupons, and free merchandise. This feature is marketed as a digital version of a loyalty card or coupon where a customer gets rewarded for loyal buying behavior.
On October 10, 2010, Places became available on BlackBerry, iPhone, and Android. Other users, including Windows Mobile users, must use an HTML5 browser to use Places via Facebook Touch Site.
Facebook Places was reported discontinued on August 24, 2011, but was relaunched in November 2014, now including cover images, discovery sections, city/category landing pages, a deeper integration with the Location API, Graph Search queries and user generated content.
Platform
The Facebook Platform provides a set of APIs and tools which enable third-party developers to integrate with the "open graph", whether through applications on Facebook.com or external websites and devices. Launched on May 24, 2007, Facebook Platform has evolved from enabling development just on Facebook.com to one also supporting integration across the web and devices.
Facebook Platform Statistics as of May 2010:
More than one million developers and entrepreneurs from more than 180 countries
More than 550,000 active applications currently on Facebook Platform
Every month, more than 70% of Facebook users engage with Platform applications
More than 250,000 websites have integrated with Facebook Platform
More than 100 million Facebook users engage with Facebook on external websites every month
On August 29, 2007, Facebook changed the way in which the popularity of applications is measured, to give attention to the more engaging applications, following criticism that ranking applications only by the number of people who had installed the application was giving an advantage to the highly viral, yet useless applications. Tech blog Valleywag has criticized Facebook Applications, labeling them a "cornucopia of uselessness". Others have called for limiting third-party applications so the Facebook "user experience" is not degraded.
Deliberately attempting to create viral applications is a method that has certainly been employed by numerous Facebook application developers. Stanford University even offered a class in the fall of 2007 entitled Computer Science (CS) 377W: "Create Engaging Web Applications Using Metrics and Learning on Facebook". Numerous applications created by the class were highly successful, and ranked amongst the top Facebook applications, with some achieving over 3.5 million users in a month.
Facebook Questions
In May 2010, Facebook began testing Questions, which is expected to compete with services such as Yahoo! Answers.
On March 24, 2011, Facebook announced that its new product, Facebook Questions, facilitates short, poll-like answers in addition to long-form responses, and also links directly to relevant items in Facebook's directory of "fan pages".
Photos
Facebook allows users to upload photos, and to add them to albums. In December 2010, the company enabled facial recognition technology, helping users identify people to tag in uploaded photos. In May 2011, Facebook launched a feature to tag specific Facebook pages in photos, including brands, products, and companies. On mobile, Facebook introduced photo filters in August 2011.
In May 2016, Facebook started allowing users to upload and view 360-degree photos. Mobile users will move their device around to navigate the environment, while website users will have to click and drag.
According to Facebook in 2010, there were over 50 billion photos stored on the service.
Videos
In May 2007, Facebook officially launched its video platform, allowing users to upload recorded videos or livestream videos from their webcams. The service supports the ability to "tag" friends in similar ways to photos. In December 2014, Facebook began rolling out functionality for business Pages to pin ("Feature") a video to the top of their Videos tab.
In January 2015, Facebook published a report detailing a significant growth in video viewing on the platform, specifically highlighting the fact that Facebook has seen an average of one billion video views every day since June 2014.
In September 2015, Facebook announced that it would begin showing view counts for publicly posted videos. A few weeks later, the company announced that users will be able to view 360-degree videos. On the website, users can click around to change the perspective, whereas mobile users can physically move their device to interact with the virtual space. The result is the work of a collaboration between Facebook and its Oculus division.
Live streaming
In August 2015, Facebook began to allow users to live stream video. Streams appear on the News Feed, and users can comment on them in real-time. Live broadcasts are automatically saved as a video post to the streamer's page. The feature was positioned as a competitor to services such as Meerkat and Periscope.
The feature was initially available only to verified public figures through the Facebook Mentions app (which is also exclusive to these users). Live streaming began to roll out for public use in January 2016, beginning with the Facebook iOS app in the United States.
In April 2016, Facebook unveiled a live-streaming API, aimed to allow developers to use any device, including professional video cameras and drones, to integrate with the live-video streaming platform. Facebook also updated its mobile app to provide a dedicated section for showcasing current and recent live broadcasts. To drive its adoption, Facebook provided incentives to publishers and celebrities to perform live broadcasts, including monetary rewards.
In March 2017, Facebook extended live-streaming support to PCs. In May, Facebook Live was updated on iOS to let two users livestream together, and the following month, Facebook added support for closed captioning to live video. This is limited to the CEA-608 standard, a notable difference from the automatic closed captioning available for Page videos that are recorded and then uploaded, due to difficulties in adapting the same standard at scale on the low-latency real-time nature for live content.
At the end of 2017, Facebook Live was updated to offer support for livestreaming Facebook Messenger games.
Controversial use
Facebook Live was used by the perpetrators of an incident in which four black young adults kidnapped and tortured a mentally disabled white male. All four were charged and convicted of hate crimes. Facebook Live was also used by Brenton Tarrant, perpetrator of the Christchurch mosque shootings to broadcast the attack on Al Noor Mosque. A total of 51 people were killed and another 40 were injured at Al Noor and in a subsequent attack at Linwood Islamic Centre. This video was viewed over 4,000 times and had 200 watching it live. Because of this, Facebook announced it would be considering restrictions on the service. The service was also used to broadcast the hostage taking during the Nakhon Ratchasima shootings, which ultimately left 31 people dead including the perpetrator and 57 others injured. A shooting spree in Memphis in September 2022 was livestreamed by the suspect, a 19-year-old male; witnesses who viewed the stream saw him entering a store and shooting at customers inside. Additionally, Ronnie McNutt, an army veteran, committed suicide on a Facebook Live stream, leading to the footage spreading outside of Facebook Live to other social media platforms, including TikTok and Instagram, also owned by Meta.
Facebook Paper
During the same week as its tenth anniversary (in 2014), Facebook launched the Paper iPhone app. The app consists of two major features: firstly, Facebook's News Feed is more graphic, as the app uses technology such as full-screen photos and video footage, with content organized under headings such as "Creators" and "Planet"; secondly, Paper allows users to post statuses, photos, and "stories" to Facebook in a layout that has been described as a different, more presentation-focused design.
Facebook Mentions
Facebook Mentions, initially an iOS-only app, was released by the company in 2014. It allows public figures with a verified account to engage with their respective fanbases in a more concentrated experience. The app had been in testing with select celebrities for nearly a year before its launch. In September 2015, Facebook expanded the availability of the Mentions app to journalists and other verified pages, and also gave users of the app the ability to post exclusively to their Facebook followers rather than both followers and friends. The update also enabled the first livestreaming functionality through Facebook Live. Facebook Mentions became available on Android in January 2016. In December 2016, Facebook Live on Mentions received several updates, including comment moderation tools, broadcasting appearance customization, and editing features to remove unnecessary footage at the beginning or end of a broadcast.
Facebook Moments
Facebook Moments was a private photo sharing app launched by Facebook in 2015 but discontinued on February 25, 2019. The app was powered by Facebook's facial recognition technology to group photos and let users easily share them.
Facebook Gaming
Facebook Podcasts
Facebook Podcasts was unveiled in April 2021 and launched on June 22, 2021. The integration allows listeners to find, subscribe to and listen to shows within the Facebook platform.
In addition to the podcast product, Facebook is also working on other audio-focused offerings like a virtual chatroom feature akin to Clubhouse and short-form audio posts dubbed "Soundbites".
General features
Facebook dynamic text/type
In November 2015, Facebook made changes to their text-only status update on Timeline to allow for adjustable text sizes (dynamic text) on mobile apps.
Credits
Facebook Credits are a virtual currency users can use to buy gifts, and virtual goods in many games and applications on the Facebook platform. As of July 2010, users of Facebook can purchase Facebook credits in Australian Dollars, British Pounds, Canadian Dollars, Chilean peso, Colombian peso, Danish krone, Euro, Hong Kong dollar, Japanese yen, Norwegian krone, Swedish krona, Swiss franc, Turkish lira, US Dollars, and Venezuelan Bolivar. Facebook credits can be used on many popular games such as Happy Aquarium, Happy Island, Zoo Paradise, Happy Pets, Hello City, It Girl, FarmVille, and Mafia Wars.
Facebook Credits went into its alpha stage in May 2009 and progressed into the beta stage in February 2010, which ended in January 2011. At that time, Facebook announced all Facebook game developers would be required to process payments only through Facebook Credits from July 1, 2011. In March 2011, Facebook created an official subsidiary to handle payments: Facebook Payments Inc. In June 2012, Facebook announced it would no longer use its own money system, Facebook Credits. Users with credits will see them converted into their own currencies. Facebook Credits was officially removed from Facebook in September 2013.
Feature phones
In addition to its smartphone apps, Facebook is also available on feature phones. According to the company, feature phones dominated the American cell phone market, so an app was made exclusively for them as well.
Graph Search
Released in July 2013, Graph Search allows users to search within their network of friends for answers to natural language questions such as, "Movies my friends who like The Hobbit liked" and receive direct answers, rather than the list of websites that search engines usually provide.
IPv6
According to a June 2010 report by Network World, Facebook said that it was offering "experimental, non-production" support for IPv6, the long-anticipated upgrade to the Internet's main communications protocol. The news about Facebook's IPv6 support was expected; Facebook told Network World in February 2010, that it planned to support native IPv6 user requests "by the midpoint of this year".
In a presentation at the Google IPv6 Implementors Conference, Facebook's network engineers said it was "easy to make [the] site available on v6". Facebook said it deployed dual-stack IPv4 and IPv6 support on its routers, and that it made no changes to its hosts in order to support IPv6. Facebook also said it was supporting an emerging encapsulation mechanism known as Locator/Identifier Separation Protocol (LISP), which separates Internet addresses from endpoint identifiers to improve the scalability of IPv6 deployments. "Facebook was the first major Web site on LISP (v4 and v6)", Facebook engineers said during their presentation. Facebook said that using LISP allowed them to deploy IPv6 services quickly with no extra cost. In addition, Facebook enabled IPv6 on its main domain names during World IPv6 Launch.
Listen with Friends
Listen with Friends allows Facebook users to listen to music and discuss the tunes using Facebook Chat with friends at the same time. Users can also listen in as a group while one friend acts as a DJ. Up to 50 friends can listen to the same song at the same time, and chat about it. Every time a user begins listening to music with a friend, a story will be posted to his or her friends' ticker and/or news feed. Users will have control over who will be able to see when they are listening with a friend through their App Settings page after installing the compatible music app. This feature was initially supported through Audizer.com, but as of August 2012, services were discontinued and the Facebook / Audizer splash page was redirected to Facebook.com.
Mood faces
Facebook chat supports numerous emoticons, like (^^^) for a shark. It has also become possible to post larger, animated images through Facebook's built-in emoticon system.
At one time, entering the Konami Code followed by Enter at the home page caused a lensflare-style series of circles to display when clicking, typing, or scrolling.
Asking "how is babby formed?" with the Questions feature released September 23, 2010, will Rickroll the user.
A user can change his/her language to upside down English.
Entering @[x:y] resolves a user's name, where x is a positive integer and y is 0 or 1. For example, @[4:0] resolves to "Mark Zuckerberg".
Phone
At an event in April 2013, Mark Zuckerberg announced a new Android-based "Home" feature, which would show content from users' Facebook pages on the home page of their mobile phones, without having to open an app.
Poke and Greetings
Since Facebook's inception, users have had the ability to "poke" other users. The feature, its actual purpose never officially explained by the company, served as a quick way to attract the attention of another user. In a 2007 opinion article in The Guardian, Facebook explained to a question about the "poke" that "When we created the poke, we thought it would be cool to have a feature without any specific purpose. People interpret the poke in many different ways, and we encourage you to come up with your own meanings." The feature was never removed from Facebook; in December 2017, the company gave the button a significantly more prominent placement on users' profiles, along with new forms of quick interactions, including "hug", "wink" and "high-five", collectively all referred to as "Greetings".
Smartphone integration
Many smartphones offer access to the Facebook services either through their respective web browsers or through mobile apps.
The iPhone-compatible website was launched in August 2007, followed by a dedicated iOS app in July 2008. The early mobile website was severely limited in its feature set, only gaining the ability to post comments in late 2008, a year after launch. By 2009, other companies had developed Facebook mobile apps for Nokia, HTC, LG, Motorola, Samsung, Sony Ericsson, and Windows Mobile devices, though a significant portion of Facebook's userbase was still using the original mobile website. During the early success of app stores, Facebook gambled on the idea of a universal webpage rather than specific operating systems, choosing to maintain its primary focus on its mobile site. CEO Mark Zuckerberg told Fortune that such a decision was "probably one of the biggest mistakes we've ever made". While the app was experiencing significant criticism for software bugs and crashes, Facebook began its "Facebook for Every Phone" initiative in January 2011, designing an app for a large number of feature phones. As Android and iOS rose in popularity, Facebook shifted its focus, creating dedicated apps for each platform. However, Facebook was still not entirely convinced, using a "hybrid" solution of native computing code as a sort of "picture frame" for its mobile website. Mashable described it as a "one-size-fits-all nightmare". In October 2011, Facebook updated its iOS app with support for iPad, adding larger photos and enabling more functionality, including the ability to post status updates and photos. Finally, in 2012, the company relaunched its Android and iOS apps, going mobile-first and putting all of its resources into making an optimized experience for smartphones, including significant speed improvements. In the years since, the company has increasingly expanded the feature set of its apps, dedicating more resources and seeing its userbase shifting from the mobile web to its apps.
Third-party companies also created Facebook apps for their platforms. Microsoft developed a Facebook app for their Windows Phone 7 platform in February 2012, Nokia offered a Facebook app on its Ovi Store for Nokia S60 devices in June 2009, while BlackBerry also offered a Facebook application for its software platform in September 2012.
Fundraising
In December 2013, Facebook enabled a "Donate" button for charities and non-profit organizations to raise money. Approximately two years later, the company released a new fundraiser feature, exclusively allowing non-profits to set up campaign pages and collect payments. This was expanded in June 2016, when anyone could set up fundraisers on behalf of non-profit organizations, and again expanded in March 2017 to offer personal users in the United States the ability to raise money, as well as for Facebook Pages to add a "Donate" button to their Facebook Live video streams. In May, fundraisers were expanded with support for communities and sports teams, and subsequently, in September, expanded internationally for charities in Europe.
Status updates
"Status updates" (also called a "status") allows users to post messages for their friends to read. In turn, friends can respond with their own comments, as well as clicking the "Like" button. A user's most recent updates appear at the top of their Timeline/Wall and are also noted in the "Recently Updated" section of a user's friend list. Originally, the purpose of the feature was to allow users to inform their friends of their current "status", including feelings, whereabouts, or actions, where Facebook prompted the status update with "Username is"... and users filled in the rest. This feature first became available in September 2006, though on December 13, 2007, the requirement to start a status update with is was removed.
The "is" updates were followed by the "What are you doing right now?" status update question; in March 2009, the question was changed to "What's on your mind?" In 2009, Facebook added the feature to tag certain friends (or groups, etc.) within one's status update by adding an @ character before their name, turning the friend's name into a link to their profile and including the message on the friend's wall. Tagging has since been updated to recognize friends' names by typing them into a status while a list of friends whose names match the inputted letters appears. A large percentage of the updates that are posted are humorous and as a result, many apps, websites and books have sprung up to help users to update their own.
Subscribe
In September 2011, Facebook launched a "Subscribe" button, allowing users to follow public updates from people without requiring a Facebook friendship connection. The feature was expanded to Pages in July 2012, and to stories in the News Feed in August 2012.
Ticker
In September 2011, Facebook launched the "Ticker", a continually-updated feed on the right side of the screen showing friends' activities, including "likes", status updates, and comments. The feed was criticized by users for offering a quiet way to stalk users' every move, prompting the company to consider removing it in a March 2013 redesign, though never did. In December 2017, the company officially ended the "Ticker" feature, though quietly and without an announcement or explanation.
URL shortener
Starting June 13, 2009, Facebook lets users choose a username specifically for their profile, enabling them to share links bearing their own www.facebook.com/username URL address. There are limitations, however, on what usernames can be used: only alphanumerical characters (A–Z, 0–9) are permitted, the username must be longer than five characters, only one username is allowed and it must be unique to the profile, and it must adhere to Facebook's Statement of Rights and Responsibilities agreement. The following December, Facebook launched its own URL shortener based on the FB.me domain name.
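A minimal sketch of the constraints just described (illustrative only; the pattern below implements just the rules stated in this article, and the validation Facebook actually performs includes further checks, such as uniqueness against existing accounts, that cannot be expressed locally):

```python
import re

# Only alphanumeric characters and a length of more than five characters,
# per the constraints described above. (Hypothetical helper, not Facebook's
# actual validation code.)
USERNAME_PATTERN = re.compile(r"[A-Za-z0-9]{6,}")

def looks_like_valid_username(name: str) -> bool:
    """Return True if `name` satisfies the basic rules described above."""
    return USERNAME_PATTERN.fullmatch(name) is not None

if __name__ == "__main__":
    for candidate in ("zuck", "markzuckerberg", "mark zuckerberg"):
        print(candidate, "->", looks_like_valid_username(candidate))
```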
Verified accounts
TechCrunch reported in February 2012 that Facebook would introduce a "Verified Account" concept, denoting official pages for public figures. Such pages gain more prominence in the "People To Subscribe To" suggestions lists. Persons with established stage names, such as Stefani Germanotta known as Lady Gaga, can also choose to use their specific stage name for their profile, with the real name in the profile's "About" page. However, at the time, the feature did not show any visual signs of distinction from other pages. In May 2013, the concept was updated to include a blue checkmark badge to highlight the account's Verified status. In October 2015, Facebook introduced a "gray badge" verification system for local businesses with physical addresses, with the gray color intended to differentiate from its typical blue checkmarks assigned to celebrities, public figures, sports teams and media organizations.
Hashtagging support
On June 12, 2013, Facebook introduced its support for clickable hashtags to help users search for topics being actively discussed on the social network.
Impressum
In March 2014, some page administrators in Italy started being prompted to add an impressum to their Facebook page, described as "a legally mandated statement of the ownership and authorship of a document".
Tor hidden service
In October 2014, Facebook announced that users could connect to the website through a Tor hidden service using the privacy-protecting Tor browser and encrypted using SSL. Announcing the feature, Facebook engineer Alec Muffett said that "Facebook's onion address provides a way to access Facebook through Tor without losing the cryptographic protections provided by the Tor cloud. [...] It provides end-to-end communication, from your browser directly into a Facebook datacenter."
"Say Thanks"
In November 2014, Facebook introduced "Say Thanks", an experience that lets user create personalized video greeting cards for friends on Facebook.
Call-to-Action button
In December 2014, Facebook announced that Pages run by businesses can display a so-called "call-to-action button" next to the page's like button. "Call to action" is a customizable button that lets page administrators add external links for easy visitor access to the business' primary objective, with options ranging from "Book Now", "Contact Us", "Use App", "Play Game", "Shop Now", "Sign Up", and "Watch Video". Initially only rolled out in the United States, the feature was expanded internationally in February 2015.
Snooze
In September 2017, Facebook began testing a "Snooze" button, letting users temporarily unfollow friends for 24 hours, 7 days or 30 days. The following December, the feature was enabled for all users, though the period of temporary unfollowing is specifically for 30 days.
"Did You Know?" social questionnaires
In response to decreased use of status updates on Facebook, the company began enabling "Did You Know?" social questionnaires in December 2017. The feature, which asks users to answer questions that are then shared as a status update, includes such questions as "The superpower I want most is...", "The first thing I'd do after winning the lottery is...", and "A guilty pleasure that I'm willing to admit to is..."
Sound Collection music archive
In December 2017, Facebook announced "Sound Collection"; an archive of copyright- and payment-free soundtracks and audio effects its users can use in their videos.
Off-Facebook Activity
In an August 20, 2019, blogpost, Facebook's Chief Privacy Officer Erin Egan and Director of Product Management David Baser announced "Off-Facebook Activity", to be released in Ireland, South Korea, and Spain before being rolled out globally. Egan and Baser outline that with the feature, "you can:
See a summary of the information other apps and websites have sent Facebook through our online business tools, like Facebook Pixel or Facebook Login
Disconnect this information from your account if you want to; and
Choose to disconnect future off-Facebook activity from your account. You can do this for all of your off-Facebook activity, or just for specific apps and websites."
A second blogpost on Facebook's Engineering website says that, while users will be able to "choose to disconnect future off-Facebook activity" from their accounts, there will be a 48-hour window in which data from other websites remains linked to the account; during that window, the data "may be used for measurement purposes and to make improvements to our ads systems".
Memories
The Memories feature, introduced in late 2010, allows browsing one's timeline by year. A feature under the same name was introduced in June 2018, showing events from the same day of earlier years.
Security
On May 12, 2011, Facebook announced that it is launching several new security features designed to protect users from malware and from getting their accounts hijacked.
Facebook will display warnings when users are about to be duped by clickjacking and cross-site scripting attacks in which they think they are following a link to an interesting news story or taking action to see a video and instead end up spamming their friends.
Facebook also offers two-factor authentication called "login approvals", which, if turned on, will require users to enter a code whenever they log into the site from a new or unrecognized device. The code is sent via text message to the user's mobile phone.
Facebook is partnering with the free Web of Trust safe surfing service to give Facebook users more information about the sites they are linking to from the social network. When a user clicks on a potentially malicious link, a warning box will appear that gives more information about why the site might be dangerous. The user can either ignore the warning or go back to the previous page.
Removed features
Email
In February 2010, TechCrunch reported that Facebook was working to rewrite its messaging service to turn it into a "fully featured webmail product", dubbed "Project Titan". The feature, unofficially dubbed a "Gmail killer" internally, was launched on November 15, 2010, and allowed users to directly communicate with each other via Facebook using several different methods. Users could create their own "username@facebook.com" email address to communicate, use text messaging, or through the Facebook website or mobile app's instant messaging chat. All messages were contained within single threads in a unified inbox. The email service was terminated in February 2014 because of low uptake.
FBML
Facebook Markup Language (FBML) was considered to be Facebook's own version of HTML. While many of the tags of HTML can be used in FBML, there were also important tags that could not be used, such as HTML, HEAD, and BODY. Also, JavaScript could not be used with FBML.
According to the Facebook Markup Language (FBML) Developer's page, FBML is now deprecated. No new features will be added to FBML and developers are recommended to develop new applications utilizing HTML, JavaScript and CSS. FBML support ended January 1, 2012, and FBML was no longer functioning as of June 1, 2012.
Lite
In August 2009, Facebook announced the rollout of a "lite" version of the site, optimized for users on slower or intermittent Internet connections. Facebook Lite offered fewer services, excluded most third-party applications and required less bandwidth. A beta version of the slimmed-down interface was released first to invited testers before a broader rollout across users in the United States, Canada, and India. It was announced on April 20, 2010, that support for the "lite" service had ended and that users would be redirected back to the normal, full content, Facebook website. The service was operational for only eight months.
In June 2015, this feature was reintroduced as an app with a total size of less than 1 MB, primarily focusing on markets where internet access is slow or limited.
Deals
Facebook announced a pilot program called Deals, which offered online coupons and discounts from local businesses, at an event at its Palo Alto office on 3 November 2010.
Deals launched on April 25, 2011, in five cities—Atlanta, Austin, Dallas, San Diego, and San Francisco—with the hope of expanding. This new offering was a direct competitor to other social commerce sites such as LivingSocial and Groupon for online coupons and deals-of-the-day. Facebook users were able to use Facebook Credits to purchase vouchers that could be redeemed for real goods and services.
Deals expanded to Charlotte, St. Louis and Minneapolis in late June 2011.
Facebook closed the Deals program on 26 August 2011, describing the product as a "test."
Jobs
References
Features
Software features | List of Facebook features | [
"Technology"
] | 9,794 | [
"Software features"
] |
16,115,433 | https://en.wikipedia.org/wiki/LY-334370 | LY-334370 is a selective 5-HT1F receptor agonist which was under development by Eli Lilly and Company for the treatment of migraine headaches. The drug showed efficacy in a phase II clinical trial but further development was halted due to toxicity detected in animals.
See also
CP-135807
Lasmiditan
SN-22
References
External links
5-HT1F agonists
Drugs developed by Eli Lilly and Company
Indoles
Piperidines
Benzamides
4-Fluorophenyl compounds
Abandoned drugs | LY-334370 | [
"Chemistry"
] | 112 | [
"Drug safety",
"Abandoned drugs"
] |
16,115,756 | https://en.wikipedia.org/wiki/Primary%20care%20psychologist | A Primary care psychologist (PCP) is a psychologist with specialist training in psychological knowledge and principles of common physical diseases and mental disorders experienced by patients and families throughout the lifespan, and which tend to present in primary care clinics.
Scotland
Clinical associates in applied psychology are a related "New Ways of Working" initiative in Scotland.
United Kingdom
Most recently, the UK Improving Access to Psychological Therapies (IAPT) initiative, which focuses on primary care psychological therapies provision, has benchmarked professionals at all career levels: from closely supervised psychological wellbeing practitioners (many of whom have a psychology undergraduate degree and a one-year postgraduate certificate or diploma, although several have master's degrees too) to high-intensity psychological therapists, who, in terms of pay, are benchmarked against doctorate-level training (many of these are counselling psychologists and clinical psychologists). Moreover, clinical lead posts are pivotal in leading related primary care psychological services, and these are benchmarked against consultant psychologist level (post-doctoral expertise).
See also
Doctor of Clinical Psychology
References
External links
Clinical Psychology
Clinical psychology | Primary care psychologist | [
"Biology"
] | 223 | [
"Behavioural sciences",
"Behavior",
"Clinical psychology"
] |
16,116,128 | https://en.wikipedia.org/wiki/Beryllium-8 | Beryllium-8 (8Be, Be-8) is a radionuclide with 4 neutrons and 4 protons. It is an unbound resonance and nominally an isotope of beryllium. It decays into two alpha particles with a half-life on the order of 8.19×10−17 seconds. This has important ramifications in stellar nucleosynthesis as it creates a bottleneck in the creation of heavier chemical elements. The properties of 8Be have also led to speculation on the fine tuning of the universe, and theoretical investigations on cosmological evolution had 8Be been stable.
Discovery
The discovery of beryllium-8 occurred shortly after the construction of the first particle accelerator in 1932. Physicists John Douglas Cockcroft and Ernest Walton performed their first experiment with their accelerator at the Cavendish Laboratory in Cambridge, in which they irradiated lithium-7 with protons. They reported that this populated a nucleus with A = 8 that near-instantaneously decays into two alpha particles. This activity was observed again several months later, and was inferred to originate from 8Be.
Properties
Beryllium-8 is unbound with respect to alpha emission by 92 keV; it is a resonance having a width of 6 eV. The nucleus of helium-4 is particularly stable, having a doubly magic configuration and larger binding energy per nucleon than 8Be. As the total energy of 8Be is greater than that of two alpha particles, the decay into two alpha particles is energetically favorable, and the synthesis of 8Be from two 4He nuclei is endothermic. The decay of 8Be is facilitated by the structure of the 8Be nucleus; it is highly deformed, and is believed to be a molecule-like cluster of two alpha particles that are very easily separated. Furthermore, while other alpha nuclides have similar short-lived resonances, 8Be is exceptional in already being in the ground state. The unbound system of two α-particles has a low energy relative to the Coulomb barrier, which enables its existence for a significant length of time on nuclear timescales. Namely, 8Be decays with a half-life of 8.19×10−17 seconds.
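In symbols, the energetics stated above amount to the following; this is merely a restatement of the 92 keV figure quoted in this article, not an additional result:

\left[\, m({}^{8}\mathrm{Be}) - 2\, m({}^{4}\mathrm{He}) \,\right] c^{2} \approx 92\ \mathrm{keV} > 0, \qquad {}^{8}\mathrm{Be} \;\to\; {}^{4}\mathrm{He} + {}^{4}\mathrm{He} + 92\ \mathrm{keV},

so the breakup of 8Be releases about 92 keV, while the fusion of two 4He nuclei into 8Be absorbs the same amount.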
Beryllium-8 is the only unstable nuclide with the same even number ≤ 20 of protons and neutrons. It is also one of the only two unstable nuclides (the other is helium-5) with mass number ≤ 143 which are stable to both beta decay and double beta decay.
There are also several excited states of 8Be, all short-lived resonances – having widths up to several MeV and varying isospins – that quickly decay to the ground state or into two alpha particles.
Decay anomaly and possible fifth force
A 2015 experiment by Attila Krasznahorkay et al. at the Hungarian Academy of Sciences's Institute for Nuclear Research found anomalous decays in the 17.64 and 18.15 MeV excited states of 8Be, populated by proton irradiation of 7Li. An excess of decays creating electron-positron pairs at a 140° angle with a combined energy of 17 MeV was observed. Jonathan Feng et al. attribute this 6.8-σ anomaly to a 17 MeV protophobic X-boson dubbed the X17 particle. This boson would mediate a fifth fundamental force acting over a short range (12 fm) and perhaps explain the decay of these 8Be excited states. A 2018 rerun of this experiment found the same anomalous particle scattering and set a narrower mass range of the proposed fifth boson, MeV/c2. While further experiments are needed to corroborate these observations, the influence of a fifth boson has been proposed as "the most straightforward possibility".
Role in stellar nucleosynthesis
In stellar nucleosynthesis, two helium-4 nuclei may collide and fuse into a single beryllium-8 nucleus. Beryllium-8 has an extremely short half-life (8.19×10−17 seconds), and decays back into two helium-4 nuclei. This, along with the unbound nature of 5He and 5Li, creates a bottleneck in Big Bang nucleosynthesis and stellar nucleosynthesis, for it necessitates a very fast reaction rate. This impedes formation of heavier elements in the former, and limits the yield in the latter process. If the beryllium-8 collides with a helium-4 nucleus before decaying, they can fuse into a carbon-12 nucleus. This reaction was first theorized independently by Öpik and Salpeter in the early 1950s.
Owing to the instability of 8Be, the triple-alpha process is the only reaction in which 12C and heavier elements may be produced in observed quantities. The triple-alpha process, despite being a three-body reaction, is facilitated when 8Be production increases such that its concentration is approximately 10−8 relative to 4He; this occurs when 8Be is produced faster than it decays. However, this alone is insufficient, as the collision between 8Be and 4He is more likely to break apart the system rather than enable fusion; the reaction rate would still not be fast enough to explain the observed abundance of 12C. In 1954, Fred Hoyle thus postulated the existence of a resonance in carbon-12 within the stellar energy region of the triple-alpha process, enhancing the creation of carbon-12 despite the extremely short half-life of beryllium-8. The existence of this resonance (the Hoyle state) was confirmed experimentally shortly thereafter; its discovery has been cited in formulations of the anthropic principle and the fine-tuned Universe hypothesis.
Hypothetical universes with stable 8Be
As beryllium-8 is unbound by only 92 keV, it is theorized that very small changes in nuclear potential and the fine tuning of certain constants (such as α, the fine structure constant), could sufficiently increase the binding energy of 8Be to prevent its alpha decay, thus making it stable. This has led to investigations of hypothetical scenarios in which 8Be is stable and speculation about other universes with different fundamental constants. These studies suggest that the disappearance of the bottleneck created by 8Be would result in a very different reaction mechanism in Big Bang nucleosynthesis and the triple-alpha process, as well as alter the abundances of heavier chemical elements. As Big Bang nucleosynthesis only occurred within a short period having the necessary conditions, it is thought that there would be no significant difference in carbon production even if 8Be were stable. However, stable 8Be would enable alternative reaction pathways in helium burning (such as 8Be + 4He and 8Be + 8Be; constituting a "beryllium burning" phase) and possibly affect the abundance of the resultant 12C, 16O, and heavier nuclei, though 1H and 4He would remain the most abundant nuclides. This would also affect stellar evolution through an earlier onset and faster rate of helium burning (and beryllium burning), and result in a different main sequence than our Universe.
Notes
References
Isotopes of beryllium
Nucleosynthesis | Beryllium-8 | [
"Physics",
"Chemistry"
] | 1,480 | [
"Nuclear fission",
"Astrophysics",
"Isotopes",
"Nucleosynthesis",
"Isotopes of beryllium",
"Nuclear physics",
"Nuclear fusion"
] |
16,117,330 | https://en.wikipedia.org/wiki/.htpasswd | .htpasswd is a flat file used to store usernames and passwords for basic authentication on an Apache HTTP Server. The name of the file is given in the .htaccess configuration, and can be anything, although ".htpasswd" is the canonical name. The file name starts with a dot because most Unix-like operating systems consider any file that begins with a dot to be hidden. The htpasswd command is used to manage .htpasswd file entries.
History
htpasswd was first added in the NCSA HTTPd server, which is the predecessor to Apache. The hash historically used "UNIX crypt" style with MD5 or SHA1 as common alternatives. In Apache 2.4, the bcrypt algorithm was added.
Usage
The file consists of lines, with each line containing a username, followed by a colon, followed by a string containing the hashed password optionally prepended by an algorithm specifier ("$2y$", "$apr1$" or "{SHA}") and/or salt.
Athelstan:RLjXiyxx56D9s
Mama:RLMzFazUFPVRE
Papa:RL8wKTlBoVLKk
Resources available from the Apache HTTP Server can be restricted to just the users listed in the files created by htpasswd. The .htpasswd file can be used to protect the entire directory it is placed in, as well as particular files.
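The entry format described above lends itself to simple tooling. The following Python sketch is illustrative only (it is not part of Apache, and the file path is an assumption); it parses a .htpasswd file and reports which hashing scheme each entry appears to use, based solely on the algorithm prefixes mentioned in this article.

# Minimal, illustrative parser for .htpasswd entries (not part of Apache).
from pathlib import Path

def parse_htpasswd(path):
    entries = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        user, _, hashed = line.partition(":")
        if hashed.startswith("$2y$"):
            scheme = "bcrypt"
        elif hashed.startswith("$apr1$"):
            scheme = "Apache-specific iterated MD5 (apr1)"
        elif hashed.startswith("{SHA}"):
            scheme = "SHA-1"
        else:
            scheme = "crypt() or plain text"
        entries[user] = (scheme, hashed)
    return entries

if __name__ == "__main__":
    # Assumes a .htpasswd file in the current working directory.
    for user, (scheme, _) in parse_htpasswd(".htpasswd").items():
        print(f"{user}: {scheme}")

Run against the three example entries above, each user would be reported under "crypt() or plain text", since traditional crypt() hashes carry no algorithm prefix.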
Security issues
The only algorithm accepted by htpasswd that is still considered secure by today's standards is bcrypt, and many of the formats do not use salting, making them vulnerable to dictionary attacks. The crypt() algorithm only uses the first 8 characters of any given password, discarding any beyond that.
See also
Apache HTTP Server
Configuration file
References
External links
Apache: htpasswd - Manage user files for basic authentication
Configuration files
Web technology | .htpasswd | [
"Technology"
] | 403 | [
"Computing stubs",
"World Wide Web stubs"
] |
16,117,606 | https://en.wikipedia.org/wiki/Cystine%20knot | A cystine knot is a protein structural motif containing three disulfide bridges (formed from pairs of cysteine residues). The sections of polypeptide that occur between two of them form a loop through which a third disulfide bond passes, forming a rotaxane substructure. The cystine knot motif stabilizes protein structure and is conserved in proteins across various species. There are three types of cystine knot, which differ in the topology of the disulfide bonds:
The growth factor cystine knot (GFCK)
The inhibitor cystine knot (ICK), common in spider and snail toxins
The cyclic cystine knot (CCK), or cyclotide
The growth factor cystine knot was first observed in the structure of nerve growth factor (NGF), solved by X-ray crystallography and published in 1991 by Tom Blundell in Nature. The GFCK is present in four superfamilies. These include nerve growth factor, transforming growth factor beta (TGF-β), platelet-derived growth factor, and glycoprotein hormones including human chorionic gonadotropin. These are structurally related due to the presence of the cystine knot motif but differ in sequence. All GFCK structures that have been determined are dimeric, but their dimerization modes in different classes are different. The vascular endothelial growth factor subfamily, categorized as part of the platelet-derived growth factor superfamily, includes proteins that are angiogenic factors.
The presence of the cyclic cystine knot (CCK) motif was discovered when cyclotides were isolated from various plant families. The CCK motif has a cyclic backbone, triple stranded beta sheet, and cystine knot conformation.
Novel proteins are being added to the cystine knot motif family, also known as the C-terminal cystine knot (CTCK) proteins. They share approximately 90 amino acid residues in their cysteine-rich C-terminal regions.
Inhibitor cystine knot (ICK) is a structural motif with a triple stranded antiparallel beta sheet linked by three disulfide bonds, forming a knotted core. The ICK motif can be found under the category of phylum, such as animals and plants. It is often found in many venom peptides such as those of snails, spiders, and scorpions. Peptide K-PVIIA, which contains an ICK, can undergo a successful enzymatic backbone cyclization. The disulfide connectivity and the common sequence pattern of the ICK motif provides the stability of the peptides that support cyclization.
Drug implications
The stability and structure of the cystine knot motif suggests possible applications in drug design. The hydrogen bonding between the disulfide bonds of the motif and beta-sheet structures gives rise to highly efficient structure stabilization. In addition, the size of the motif is approximately 30 amino acid residues. These two characteristics make it an attractive biomolecule to be used for drug delivery as it exhibits thermal stability, chemical stability, and proteolytic resistance. The biological activities of these molecules are partially due to the unique interlocking arrangement and cyclized peptide backbone which contains a conserved sequence shared among circulins. Circulins have previously been identified in a screen for anti-HIV activity. Studies have shown that cystine knot proteins can be incubated at temperatures of 65 °C or placed in 1N HCl/1N NaOH without loss of structural and functional integrity. Their resistance to oral and some intestinal proteases suggests possible use for oral delivery. Possible future applications include pain relief as well as antiviral and antibacterial functions.
References
Protein structure | Cystine knot | [
"Chemistry"
] | 772 | [
"Protein structure",
"Structural biology"
] |
16,118,019 | https://en.wikipedia.org/wiki/Rural%20cluster%20development | A rural cluster development (RCD) is a form of residential subdivision. In an RCD, houses are clustered together in areas zoned for larger properties. The remainder of the land is often designated open space. The effect is tract-home density in the middle of rural communities.
Notes and references
Urban planning | Rural cluster development | [
"Engineering"
] | 64 | [
"Urban planning",
"Architecture"
] |
16,119,760 | https://en.wikipedia.org/wiki/Autonomously%20replicating%20sequence | An autonomously replicating sequence (ARS) contains the origin of replication in the yeast genome. It contains four regions (A, B1, B2, and B3), named in order of their effect on plasmid stability. The A domain is highly conserved; any mutation abolishes origin function. Mutations in B1, B2, or B3 will diminish, but not prevent, functioning of the origin.
Element A is highly conserved, consisting of the consensus sequence:
(where Y is either pyrimidine and R is either purine). When this element is mutated, the ARS loses all activity.
As seen above, ARS elements are considerably A–T rich, which makes it easy for replicative proteins to disrupt the H-bonding in that area. The ORC (origin recognition complex) protein complex is bound at the ARS throughout the cell cycle, allowing replicative proteins access to the ARS.
Mutational analysis for the yeast ARS elements have shown that any mutation in the B1, B2 and B3 regions result in a reduction of function of the ARS element. A mutation in the A region results in a complete loss of function.
Melting of DNA occurs within domain B2, induced by attachment of ARS binding factor 1 to B3.
A1 and B1 domain binds with origin recognition complex.
To identify these sequences, yeast mutants unable to synthesize histidine were transformed with plasmids containing the His gene and random fragments of the yeast genome. If the genome fragment contained an origin of replication, cells were able to grow in a medium lacking histidine. These sequences were termed autonomously replicating sequences, because they were replicated and inherited by progeny without integrating into the host chromosome.
References
Genomics techniques | Autonomously replicating sequence | [
"Chemistry",
"Biology"
] | 356 | [
"Genetics techniques",
"Genomics techniques",
"Molecular biology techniques"
] |
16,120,282 | https://en.wikipedia.org/wiki/British%20Organic%20Geochemical%20Society | British Organic Geochemical Society (BOGS) is an organization that aims to promote, exchange and discuss all aspects of organic geochemistry. It also aims to facilitate academic and social networking between British organic geochemists.
History
BOGS was formed in 1987. The founding members were Prof G.A. Wolff (University of Liverpool),
Dr G.D. Abbott (Newcastle University), Dr J. McEvoy (then at University of Bangor) and Prof S.J. Rowland (University of Plymouth).
Meetings
The first meeting of BOGS was held in Bangor (Wales) on 13–15 July 1988. The society meets annually, usually at (or near) a university department with links to research in organic geochemistry.
BOGS meetings are usually held over two days, and involve oral presentations (lasting 15 minutes), poster presentations and social events (i.e. evening meal).
Annual meetings have been held at Liverpool (1989), Bideford (1990), Newcastle-upon-Tyne (1992), Plymouth (1993), Aberdeen (1994), Bristol (1995), Liverpool (1996), Newcastle-upon-Tyne (1997), Plymouth (1998), York (1999), Bristol (2000), Gregynog, Wales (2001), Newcastle-upon-Tyne (2002), Plymouth (2003), Nottingham (2004), Liverpool (2005), Milton Keynes (2006). BOGS did not meet in 2007, as this would have clashed with the 23rd International Meeting of Organic Geochemistry (IMOG) event, which occurred a few months later in Torquay. Since 2007 BOGS has met at Newcastle (2008), Bristol (2009), Manchester (2010), Swansea (2011), Leeds (2012), Plymouth (2013), Liverpool (2014), Glasgow (2015), Imperial College London (2016) and Open University, Milton Keynes (2017).
Membership
It is free to become a member of BOGS. To join the mailing list for BOGS, an email is sent to the BOGS webmaster at calewis@plymouth.ac.uk.
See also
List of geoscience organizations
References
External links
https://web.archive.org/web/20180422183917/http://www.research.plymouth.ac.uk/bogs/
Geochemistry organizations
Geology of the United Kingdom
Geology societies
1987 establishments in the United Kingdom
Scientific organisations based in the United Kingdom
Scientific organizations established in 1987 | British Organic Geochemical Society | [
"Chemistry"
] | 518 | [
"Geochemistry organizations"
] |
16,121,238 | https://en.wikipedia.org/wiki/Peter%20the%20Great%20%28miniseries%29 | Peter the Great is a 1986 American biographical historical drama television miniseries directed by Marvin J. Chomsky and Lawrence Schiller, based on Robert K. Massie's 1980 non-fiction book Peter the Great: His Life and World. It stars an ensemble cast consisting of Maximilian Schell, Vanessa Redgrave, Omar Sharif, Trevor Howard, Laurence Olivier, Helmut Griem, Jan Niklas, Elke Sommer, Renée Soutendijk, Ursula Andress, and Mel Ferrer.
The miniseries received generally positive reviews from critics and won three Primetime Emmy Awards, including Outstanding Miniseries. It was also nominated for three Golden Globe Awards, including Best Miniseries or Television Film.
Cast
Maximilian Schell as Peter the Great
Jan Niklas as Peter the Great in early adulthood
Vanessa Redgrave as Tsarevna Sophia
Omar Sharif as Prince Feodor Romodanovsky
Laurence Olivier as William III and II, King of England, Scotland and Ireland
Trevor Howard as Sir Isaac Newton
Ursula Andress as Athalie
Olegar Fedoro as Boyar Lopukhin
Natalya Andrejchenko as Tsaritsa Eudoxia Lopukhina
Helmut Griem as Captain Alexander Menshikov
Renée Soutendijk as Anna Mons (Peter's Dutch mistress)
Hanna Schygulla as Catherine Skavronskaya (Peter's 2nd mistress and later on wife)
Christoph Eichhorn as Charles XII, King of Sweden
Lilli Palmer as Tsarina Natalya, mother of Peter the Great
Mel Ferrer as Frederick I, King in Prussia
Elke Sommer as Charlotte, Queen in Prussia
Jan Malmsjö as The Patriarch
Boris Plotnikov as Tsarevich Alexis
Jeremy Kemp as General Patrick Gordon
Geoffrey Whitehead as Prince Vasily Golitsyn
Graham McGrath as young adult Peter the Great
Günther Maria Halmer as Tolstoi
Dennis DeMarne as the figure of Peter the Great at the narrating scenes of the later years
Ann Zacharias as Daria Lund, the mistress of Captain Alexander Menshikov
Algis Arlauskas as Father Theodosius
The series was released as a three-tape VHS box set in 1992, then, in 1994, as a single, lengthy VHS tape.
Awards and nominations
References
The Complete Films of Laurence Olivier by Jerry Vermilye, Citadel Press, 1992.
External links
1986 American television series debuts
1986 American television series endings
1980s American drama television series
1980s American television miniseries
American biographical series
American historical television series
Cultural depictions of Peter the Great
Cultural depictions of Isaac Newton
Cultural depictions of William III of England
Primetime Emmy Award for Outstanding Miniseries winners
Television shows based on biographies
Television shows set in Russia
Television series set in the 17th century
Television series set in the 18th century
Films directed by Marvin J. Chomsky
NBC television dramas | Peter the Great (miniseries) | [
"Astronomy"
] | 557 | [
"Cultural depictions of Isaac Newton",
"Cultural depictions of astronomers"
] |
16,122,488 | https://en.wikipedia.org/wiki/Wave%20Mate%20Bullet | The Wave Mate Bullet was a Z80 single-board computer from 1982 which used the CP/M operating system. It was sold in Australia, the United States and Europe and was apparently popular in academic settings.
Notability
The Wave Mate Bullet is notable because it represents CP/M machines at their apex: small yet affordable machines which were quite powerful at the time, with plentiful applications. Wave Mate, Inc. is a historically relevant company because it was one of the original microcomputer companies, having released its first computer kit, the Wave Mate Jupiter II, in 1975. The Wave Mate Bullet also represents the end of the CP/M era, as the IBM PC and its clones ascended to marketplace domination.
Configurations
The Wave Mate Bullet runs CP/M 3.0, and CP/M 2.2 is also available. It is available in many configurations but is typically found in a small chassis with two 96-tracks-per-inch 5.25" floppy disk drives. The 5.25" disks were formatted on both sides with five 1024-byte sectors per track and 80 tracks per side, for a total of 800K per disk.
The standard configuration includes two serial ports, a parallel port, a general-purpose external DMA bus (GPED), separate connectors for 5.25" and 8" floppy disk drives, and a hard disk interface. The hard disk interface is either an IMI hard disk controller (model #7710) or SCSI, depending on the motherboard version.
References
Notes
Wave Mate Bullet manual
External links
Google Group for people interested in the Wave Mate Bullet
Home computers
Z80-based home computers
Computer-related introductions in 1982 | Wave Mate Bullet | [
"Technology"
] | 325 | [
"Computing stubs",
"Computer hardware stubs"
] |
16,123,532 | https://en.wikipedia.org/wiki/Sleeping%20while%20on%20duty | Sleeping while on duty or sleeping on the job – falling asleep while one is not supposed to – is considered gross misconduct and grounds for disciplinary action, including termination of employment, in some occupations. Recently however, there has been a movement in support of sleeping, or napping at work, with scientific studies highlighting health and productivity benefits, and over 6% of employers in some countries providing facilities to do so. In some types of work, such as firefighting or live-in caregiving, sleeping at least part of the shift may be an expected part of paid work time. While some employees who sleep while on duty in violation do so intentionally and hope not to get caught, others intend in good faith to stay awake, and accidentally doze.
Sleeping while on duty is such an important issue that it is addressed in the employee handbook in some workplaces. Concerns that employers have may include the lack of productivity, the unprofessional appearance, and danger that may occur when the employee's duties involve watching to prevent a hazardous situation. In some occupations, such as pilots, truck and bus drivers, or those operating heavy machinery, falling asleep while on duty puts lives in danger. However, in many countries, these workers are supposed to take a break and rest every few hours.
Frequency
The frequency of sleeping while on duty that occurs varies depending on the time of day. Daytime employees are more likely to take short naps, while graveyard shift workers have a higher likelihood of sleeping for a large portion of their shift, sometimes intentionally.
A survey by the National Sleep Foundation has found that 30% of participants have admitted to sleeping while on duty. More than 90% of Americans have experienced a problem at work because of a poor night's sleep. One in four admit to shirking duties on the job for the same reason, either calling in sick or napping during work hours.
Views
Employers have varying views of sleeping while on duty. Some companies have instituted policies to allow employees to take napping breaks during the workday in order to improve productivity while others are strict when dealing with employees who sleep while on duty and use high-tech means, such as video surveillance, to catch their employees who may be sleeping on the job. Those who are caught in violation may face disciplinary action such as suspension or firing.
Some employees sleep, nap, or take a power-nap only during their allotted break time at work. This may or may not be permitted, depending on the employer's policies. Some employers may prohibit sleeping, even during unpaid break time, for various reasons, such as the unprofessional appearance of a sleeping employee, the need for an employee to be available during an emergency, or legal regulations. Employees who may endanger others by sleeping on the job may face more serious consequences, such as legal sanctions. For example, airline pilots risk loss of their licenses.
In some industries and work cultures sleeping at work is permitted and even encouraged. Such work cultures typically have flexible schedules, and variant work loads with extremely demanding periods where employees feel unable to spend time commuting. In such environments it is common for employers to provide makeshift sleeping materials for employees, such as a couch and/or inflatable mattress and blankets. This practice is particularly common in start-ups and during political campaigns. In those work cultures sleeping in the office is seen as evidence of dedication.
In 1968, New York police officers admitted that sleeping while on duty was customary.
In Japan, the practice of napping in public, called inemuri, may occur in work meetings or classes. Brigitte Steger, a scholar who focuses on Japanese culture, writes that sleeping at work is considered a sign of dedication to the job, such that one has stayed up late doing work or worked to the point of complete exhaustion, and may therefore be excusable.
Notable incidents
Airline pilots
February 2008 – the pilots on a Go! airline flight were suspended during an investigation when it was suspected they fell asleep mid-flight from Honolulu, Hawaii to Hilo, Hawaii, resulting in their overshooting Hilo Airport by about 24 kilometers (15 miles) before turning around to land safely.
January 2024 – the pilots on a Batik Air flight were suspended during an investigation when it was suspected they fell asleep mid-flight from Haluoleo International Airport to Soekarno-Hatta International Airport, resulting in the aircraft overshooting by 210 nautical miles from the last record of second-in-command (SIC) activity. The co-pilot had month-old twin babies at home and was busy moving to a new house with his family during the rest period.
Air traffic controllers
October 1984 – Aeroflot Flight 3352 hit maintenance vehicles on the runway while attempting to land in Omsk, Russia. The ground controller, who had been up at nights due to recently becoming a father of two, allowed the workers to dry the runway during heavy rain and fell asleep on the job. 178 people were killed in the crash; the controller later killed himself in prison.
October 2007 – four Italian air traffic controllers were suspended after they were caught asleep while on duty.
March 2011 – the lone night shift air traffic controller at Ronald Reagan Washington National Airport fell asleep on duty. During the period he was asleep two airliners landed uneventfully. In the weeks that followed, there were other similar incidents and it was revealed that other lone air traffic controllers on duty fell asleep in the towers. This led to the resignation of United States air traffic chief Hank Krakowski and a new policy being set requiring two controllers to be on duty at all times.
Bus drivers
March 2011 – a tour bus driver crashed while returning from a casino in Connecticut to New York City. Fifteen people were killed and many others injured. Although the driver, who was found to be sober, denied sleeping, a survivor who witnessed the crash reported that he was speeding and sleeping.
Police officers/security officers
December 1947 – a Washington, D.C. police officer was fined $75 for sleeping while on duty.
October 2007 – a CBS news story revealed nearly a dozen security guards at a nuclear power plant who were videotaped sleeping while on duty.
December 2009 – The New York Post published a photo of a prison guard sleeping next to an inmate at the Rikers Island penitentiary. The photo was allegedly captured on the cell phone camera of another guard. Both guards were disciplined for this action, the sleeping officer for sleeping and the officer who took the photo for violating a prison policy forbidding cell phones while on duty. The inmate was not identified.
August 2019 – The prison guards in charge of guarding Jeffrey Epstein were reported to have been sleeping on duty and shopping online while he was on suicide watch. Epstein was found dead in his cell, leading to an investigation of the guards. U.S. prosecutors eventually dropped the criminal case against them.
Other
March 1987 – The Peach Bottom Nuclear Generating Station was ordered shut down by the Nuclear Regulatory Commission after four operators were found sleeping while on duty.
See also
Nap
Power nap
References
Duty
Grounds for termination of employment
Occupational hazards | Sleeping while on duty | [
"Biology"
] | 1,417 | [
"Behavior",
"Sleep"
] |
16,124,413 | https://en.wikipedia.org/wiki/Therminol | Therminol is a synthetic heat transfer fluid produced by Eastman Chemical Company.
Therminol fluids are used in a variety of applications, including:
Hydrocarbon processing (oil and gas, refining, asphalt, gas-to-liquid, etc.)
Alternative energy and technologies (concentrated solar power, biofuel, organic Rankine cycle, desalination, etc.)
Plastics processing
Chemical processing (pharmaceutical, environmental test chambers, etc.)
Food and beverage processing
Heat transfer system maintenance
Prior to 1997, Therminol fluids were sold in Europe under the trade names SantoTherm and GiloTherm. Since 1997, all forms of Therminol fluid have been sold with the Therminol name and extension to define its uses.
Therminol Products From Eastman Chemical Company
Therminol 55 (heat transfer fluid)
Therminol 59 (heat transfer fluid)
Therminol FF (flush fluid)
Therminol 66 (heat transfer fluid)
Therminol D-12 (thermal fluid)
Therminol LT (heat transfer fluid)
History
Therminol heat transfer fluids were developed in 1963 by Monsanto. In 1997, the chemical businesses of Monsanto were spun off to form a new company called Solutia Inc. In 2012, Solutia was acquired by Eastman Chemical Company.
Polychlorinated biphenyl in Therminol
Prior to 1971, Monsanto marketed a series of polychlorinated biphenyl (PCB)-containing heat transfer fluids designated as the Therminol FR series in the United States and the Santotherm FR series in Europe. FR series Therminol heat transfer fluids contained PCBs, which imparted fire resistance. Monsanto voluntarily ceased sales of these fluids in 1971. No form of Therminol heat transfer fluid has contained PCBs since that time. Polychlorinated biphenyls were banned by the United States Congress in 1979 and by the Stockholm Convention on Persistent Organic Pollutants in 2001.
References
Transport phenomena
Heat transfer | Therminol | [
"Physics",
"Chemistry",
"Engineering"
] | 401 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Chemical engineering",
"Thermodynamics"
] |
16,124,925 | https://en.wikipedia.org/wiki/Veblen%20function | In mathematics, the Veblen functions are a hierarchy of normal functions (continuous strictly increasing functions from ordinals to ordinals), introduced by Oswald Veblen in 1908. If φ0 is any normal function, then for any non-zero ordinal α, φα is the function enumerating the common fixed points of φβ for β<α. These functions are all normal.
Veblen hierarchy
In the special case when φ_0(α) = ω^α, this family of functions is known as the Veblen hierarchy.
The function φ_1 is the same as the ε function: φ_1(α) = ε_α. If β < γ, then φ_β(φ_γ(α)) = φ_γ(α). From this and the fact that φ_β is strictly increasing we get the ordering: φ_α(β) < φ_γ(δ) if and only if either (α = γ and β < δ) or (α < γ and β < φ_γ(δ)) or (α > γ and φ_α(β) < δ).
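Concretely, the first levels of the hierarchy are familiar ordinal functions; the identities below are standard consequences of the definition and are shown here for orientation rather than quoted from the article.

\varphi_0(\alpha) = \omega^{\alpha}, \qquad \varphi_1(\alpha) = \varepsilon_{\alpha}, \qquad \varphi_2(\alpha) = \zeta_{\alpha}

For example, φ_1(0) = ε_0 is the least α with ω^α = α, and φ_2(0) = ζ_0 is the least α with ε_α = α.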
Fundamental sequences for the Veblen hierarchy
The fundamental sequence for an ordinal with cofinality ω is a distinguished strictly increasing ω-sequence that has the ordinal as its limit. If one has fundamental sequences for α and all smaller limit ordinals, then one can create an explicit constructive bijection between ω and α, (i.e. one not using the axiom of choice). Here we will describe fundamental sequences for the Veblen hierarchy of ordinals. The image of n under the fundamental sequence for α will be indicated by α[n].
A variation of Cantor normal form used in connection with the Veblen hierarchy is: every nonzero ordinal number α can be uniquely written as α = φ_{β_1}(γ_1) + φ_{β_2}(γ_2) + ⋯ + φ_{β_k}(γ_k), where k > 0 is a natural number, each term after the first is less than or equal to the previous term, and each γ_i < φ_{β_i}(γ_i). If a fundamental sequence can be provided for the last term, then that term can be replaced by such a sequence to get α[n] = φ_{β_1}(γ_1) + ⋯ + φ_{β_{k−1}}(γ_{k−1}) + (φ_{β_k}(γ_k)[n]).
For any β, if γ is a limit with γ < φ_β(γ), then let φ_β(γ)[n] = φ_β(γ[n]).
No such sequence can be provided for φ_0(0) = ω^0 = 1 because it does not have cofinality ω.
For φ_0(γ+1) = ω^{γ+1} = ω^γ · ω, we choose φ_0(γ+1)[n] = φ_0(γ) · n = ω^γ · n.
For φ_{β+1}(0), we use φ_{β+1}(0)[0] = 0 and φ_{β+1}(0)[n+1] = φ_β(φ_{β+1}(0)[n]), i.e. 0, φ_β(0), φ_β(φ_β(0)), etc.
For φ_{β+1}(γ+1), we use φ_{β+1}(γ+1)[0] = φ_{β+1}(γ) + 1 and φ_{β+1}(γ+1)[n+1] = φ_β(φ_{β+1}(γ+1)[n]).
Now suppose that β is a limit:
If β < φ_β(0), then let φ_β(0)[n] = φ_{β[n]}(0).
For φ_β(γ+1), use φ_β(γ+1)[n] = φ_{β[n]}(φ_β(γ) + 1).
Otherwise, the ordinal cannot be described in terms of smaller ordinals using φ, and this scheme does not apply to it.
The Γ function
The function Γ enumerates the ordinals α such that φα(0) = α.
Γ0 is the Feferman–Schütte ordinal, i.e. it is the smallest α such that φα(0) = α.
For Γ_0, a fundamental sequence could be chosen to be Γ_0[0] = 0 and Γ_0[n+1] = φ_{Γ_0[n]}(0).
For Γ_{β+1}, let Γ_{β+1}[0] = Γ_β + 1 and Γ_{β+1}[n+1] = φ_{Γ_{β+1}[n]}(0).
For Γ_β where β is a limit ordinal, let Γ_β[n] = Γ_{β[n]}.
Generalizations
Finitely many variables
To build the Veblen function of a finite number of arguments (finitary Veblen function), let the binary function φ(α, γ) be φ_α(γ) as defined above.
Let z be an empty string or a string consisting of one or more comma-separated zeros 0, 0, ..., 0, and let s be an empty string or a string consisting of one or more comma-separated ordinals α_1, α_2, ..., α_n with α_1 > 0. The binary function φ(β, γ) can be written as φ(s, β, z, γ) where both s and z are empty strings.
The finitary Veblen functions are defined as follows:
if β > 0, then φ(s, β, z, γ) denotes the (1 + γ)-th common fixed point of the functions ξ ↦ φ(s, δ, ξ, z) for each δ < β
For example, φ(1, 0, γ) is the γ-th fixed point of the functions ξ ↦ φ(ξ, 0), namely Γ_γ; then φ(1, 1, γ) enumerates the fixed points of that function, i.e., of the ξ ↦ Γ_ξ function; and φ(2, 0, γ) enumerates the fixed points of all the ξ ↦ φ(1, ξ, 0). Each instance of the generalized Veblen functions is continuous in the last nonzero variable (i.e., if one variable is made to vary and all later variables are kept constantly equal to zero).
The ordinal φ(1, 0, 0, 0) is sometimes known as the Ackermann ordinal. The limit of the φ(1, 0, ..., 0), where the number of zeroes ranges over ω, is sometimes known as the "small" Veblen ordinal.
Every non-zero ordinal α less than the small Veblen ordinal (SVO) can be uniquely written in normal form for the finitary Veblen function:
α = φ(s_1) + φ(s_2) + ⋯ + φ(s_k), with φ(s_1) ≥ φ(s_2) ≥ ⋯ ≥ φ(s_k), where
k is a positive integer
each s_i is a string consisting of one or more comma-separated ordinals α_{i,1}, α_{i,2}, ..., α_{i,n_i} where α_{i,1} > 0 and each α_{i,j} < φ(s_i)
Fundamental sequences for limit ordinals of finitary Veblen function
For limit ordinals , written in normal form for the finitary Veblen function:
,
,
and if and is a successor ordinal,
and if and are successor ordinals,
if is a limit ordinal,
if and is a limit ordinal,
if is a successor ordinal and is a limit ordinal.
Transfinitely many variables
More generally, Veblen showed that φ can be defined even for a transfinite sequence of ordinals α_β, provided that all but a finite number of them are zero. Notice that if such a sequence of ordinals is chosen from those less than an uncountable regular cardinal κ, then the sequence may be encoded as a single ordinal less than κ^κ (ordinal exponentiation). So one is defining a function φ from κ^κ into κ.
The definition can be given as follows: let α be a transfinite sequence of ordinals (i.e., an ordinal function with finite support) that ends in zero (i.e., such that α0=0), and let α[γ@0] denote the same function where the final 0 has been replaced by γ. Then γ↦φ(α[γ@0]) is defined as the function enumerating the common fixed points of all functions ξ↦φ(β) where β ranges over all sequences that are obtained by decreasing the smallest-indexed nonzero value of α and replacing some smaller-indexed value with the indeterminate ξ (i.e., β=α[ζ@ι0,ξ@ι] meaning that for the smallest index ι0 such that αι0 is nonzero the latter has been replaced by some value ζ<αι0 and that for some smaller index ι<ι0, the value αι=0 has been replaced with ξ).
For example, if α=(1@ω) denotes the transfinite sequence with value 1 at ω and 0 everywhere else, then φ(1@ω) is the smallest fixed point of all the functions ξ↦φ(ξ,0,...,0) with finitely many final zeroes (it is also the limit of the φ(1,0,...,0) with finitely many zeroes, the small Veblen ordinal).
The smallest ordinal α such that α is greater than φ applied to any function with support in α (i.e., that cannot be reached "from below" using the Veblen function of transfinitely many variables) is sometimes known as the "large" Veblen ordinal, or "great" Veblen number.
Further extensions
The Veblen function was later extended further to a somewhat technical system known as dimensional Veblen. In this, one may take fixed points or row numbers, meaning expressions such as φ(1@(1,0)) are valid (representing the large Veblen ordinal), visualised as multi-dimensional arrays. It was proven that all ordinals below the Bachmann–Howard ordinal could be represented in this system, and that the representations for all ordinals below the large Veblen ordinal were aesthetically the same as in the original system.
Values
The function takes on several prominent values (a comparison of their sizes is given after the list):
φ(1, 0) = ε_0 is the proof-theoretic ordinal of Peano arithmetic and the limit of what ordinals can be represented in terms of Cantor normal form and smaller ordinals.
, a bound on the order types of the recursive path orderings with finitely many function symbols, and the smallest ordinal closed under primitive recursive ordinal functions.
The Feferman–Schütte ordinal Γ_0 is equal to φ(1, 0, 0).
The small Veblen ordinal is equal to φ(1@ω).
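For scale, these landmarks sit in a single increasing chain; the ordering below is standard and is stated here for orientation rather than quoted from the article.

\varepsilon_0 = \varphi(1,0) \;<\; \Gamma_0 = \varphi(1,0,0) \;<\; \varphi(1,0,0,0) \;<\; \varphi(1@\omega) \;<\; \text{large Veblen ordinal}

Here φ(1, 0, 0, 0) is the Ackermann ordinal and φ(1@ω) is the small Veblen ordinal mentioned above.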
References
Hilbert Levitz, Transfinite Ordinals and Their Notations: For The Uninitiated, expository article (8 pages, in PostScript)
contains an informal description of the Veblen hierarchy.
Citations
Ordinal numbers
Proof theory
Hierarchy of functions | Veblen function | [
"Mathematics"
] | 1,752 | [
"Ordinal numbers",
"Proof theory",
"Mathematical logic",
"Mathematical objects",
"Order theory",
"Numbers"
] |
16,127,475 | https://en.wikipedia.org/wiki/Hybrid%20testing | Hybrid testing is what most test automation frameworks evolve into over time and across multiple projects. The most successful automation frameworks generally accommodate both keyword-driven testing and data-driven scripts: reusable keywords and function libraries supply the test logic, while externally maintained test data drives it. This separation lets new test cases be added without modifying the underlying libraries, and so increases the reusability and maintainability of the framework. No framework is perfect, and a hybrid approach may not suit every application equally well, but it improves with increasing use and refinement.
Pattern
The Hybrid-Driven Testing pattern is made up of a number of reusable modules / function libraries that are developed with the following characteristics in mind (a minimal illustrative sketch follows the list):
Maintainability – significantly reduces the test maintenance effort
Reusability – due to modularity of test cases and library functions
Manageability – effective test design, execution, and traceability
Accessibility – to design, develop & modify tests whilst executing
Availability – scheduled execution can run unattended on a 24/7 basis
Reliability – due to advanced error handling and scenario recovery
Flexibility – framework independent of system or environment under test
Measurability – customisable reporting of test results ensure quality
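As a concrete illustration of the pattern, the Python sketch below is a deliberately minimal, hypothetical hybrid harness; all keyword names, test data, and the reporting format are invented for illustration. A small keyword library provides the reusable actions, while test cases are held as plain data so they can be maintained separately from the code.

# Minimal sketch of a hybrid framework: a reusable keyword library (modular,
# keyword-driven) executed against externally maintained test data (data-driven).
# Keyword names and test rows are illustrative only.

def open_application(ctx, name):
    ctx["app"] = name
    print(f"opened {name}")

def enter_text(ctx, field, value):
    ctx.setdefault("fields", {})[field] = value

def verify_field(ctx, field, expected):
    actual = ctx.get("fields", {}).get(field)
    assert actual == expected, f"{field}: expected {expected!r}, got {actual!r}"

KEYWORDS = {
    "open_application": open_application,
    "enter_text": enter_text,
    "verify_field": verify_field,
}

# Test cases are plain data: a list of (keyword, arguments) steps per case,
# so they can be maintained in spreadsheets or CSV files without touching code.
TEST_CASES = {
    "login_smoke_test": [
        ("open_application", ["demo_app"]),
        ("enter_text", ["username", "alice"]),
        ("verify_field", ["username", "alice"]),
    ],
}

def run(test_cases, keywords):
    for name, steps in test_cases.items():
        ctx = {}
        try:
            for keyword, args in steps:
                keywords[keyword](ctx, *args)
            print(f"PASS {name}")
        except AssertionError as exc:  # basic error handling and result reporting
            print(f"FAIL {name}: {exc}")

if __name__ == "__main__":
    run(TEST_CASES, KEYWORDS)

In a fuller framework, the test-case table would typically be loaded from spreadsheets, CSV files, or a database, and the keyword functions would wrap a UI- or API-automation tool rather than an in-memory dictionary.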
See also
Keyword-driven testing
Test automation framework
Test-driven development
Modularity-driven testing
References
Software testing | Hybrid testing | [
"Engineering"
] | 271 | [
"Software engineering",
"Software testing"
] |
16,128,216 | https://en.wikipedia.org/wiki/Double-flowered | "Double-flowered" describes varieties of flowers with extra petals, often containing flowers within flowers. The double-flowered trait is often noted alongside the scientific name with the abbreviation fl. pl. (flore pleno, a Latin ablative form meaning "with full flower"). The first abnormality to be documented in flowers, double flowers are popular varieties of many commercial flower types, including roses, camellias and carnations. In some double-flowered varieties all of the reproductive organs are converted to petals. As a result, they are sexually sterile and must be propagated through cuttings. Many double-flowered plants have little wildlife value as access to the nectaries is typically blocked by the mutation.
History
Double flowers are the earliest documented form of floral abnormality, first recognized more than two thousand years ago. Theophrastus mentioned double roses in his Enquiry into Plants, written before 286 BC. Pliny also described double roses in the 1st century AD. In China, double peonies were known and selected by around 750 AD, and around 1000 AD double varieties of roses were cultivated to form the China rose (one of the ancestors of modern hybrid tea roses). Today, most cultivated rose varieties bear this double-flower trait.
Herbalists of the Renaissance recognized double flowers and began to cultivate them in their gardens—Rembert Dodoens published a description of double flowers in 1568, and John Gerard created illustrations of many double flowers beside their wild-type counterparts in 1597. A double-flowered variety of Marsh Marigold was discovered and cultivated in Austria in the late 16th century, becoming a valued garden plant.
The first documented double-flowered mutant of Arabidopsis, a model organism for plant development and genetics, was recorded in 1873. The mutated gene likely responsible for the phenotype, AGAMOUS, was cloned and characterized in 1990 in Elliot Meyerowitz's lab as part of his study of molecular mechanisms of pattern formation in flowers.
Genetics of double-flower mutations
Double-flower forms often arise when some or all of the stamens in a flower are replaced by petals. These types of mutations, where one organ in a developing organism is replaced with another, are known as homeotic mutations. They are usually recessive, although the double flower mutation in carnations exhibits incomplete dominance.
In Arabidopsis, which has been used as a model for understanding flower development, the double-flower gene AGAMOUS encodes a protein responsible for tissue specification of stamen and carpel flower segments. When both copies of the gene are deleted or otherwise damaged, developing flowers lack the signals to form stamen and carpel segments. Regions which would have formed stamens instead default to petals and the carpel region develops into a new flower, resulting in a recursive sepal-petal-petal pattern. Because no stamens and carpels form, the plants have no reproductive organs and are sexually sterile.
Mutations affecting flower morphology in Arabidopsis can be described by the ABC model of flower development. In this model, genes involved in flower formation belong to one of three classes of genes: A class genes which affect sepal and petal formation, B class genes which affect petal and stamen formation, and C class genes which affect stamen and carpel formation. These genes are expressed in certain regions of the developing flower and are responsible for development of organs in those regions. Agamous is a C class gene, a transcription factor responsible for activating genes involved in stamen and carpel development.
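The combinatorial logic of the ABC model can be sketched in a few lines of code. The example below is a deliberately simplified illustration of the scheme described above, not an implementation of any published model; in particular, it reduces the agamous (C-class) knockout to "A-class activity expands inward", and in the real mutant the innermost whorl goes on to reiterate an entire new flower rather than simply producing a sepal.

# Sketch of ABC-model logic: which floral organ forms in each whorl, given the
# classes of genes active there. The agamous (C class) knockout illustrates,
# in simplified form, why double flowers gain petals at the expense of stamens.

ORGAN_BY_CLASSES = {
    frozenset("A"): "sepal",
    frozenset("AB"): "petal",
    frozenset("BC"): "stamen",
    frozenset("C"): "carpel",
}

# Wild-type expression pattern, whorl 1 (outermost) to whorl 4 (innermost).
WILD_TYPE = [{"A"}, {"A", "B"}, {"B", "C"}, {"C"}]

def organs(whorls, knockout=None):
    result = []
    for active in whorls:
        active = set(active) - ({knockout} if knockout else set())
        if not active:          # C lost in whorl 4: A-class activity takes over
            active = {"A"}
        if active == {"B"}:     # C lost in whorl 3: A expands, B remains -> petal identity
            active = {"A", "B"}
        result.append(ORGAN_BY_CLASSES[frozenset(active)])
    return result

print(organs(WILD_TYPE))                 # ['sepal', 'petal', 'stamen', 'carpel']
print(organs(WILD_TYPE, knockout="C"))   # ['sepal', 'petal', 'petal', 'sepal']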
Gallery
References
Flowers
Plant morphology | Double-flowered | [
"Biology"
] | 740 | [
"Plant morphology",
"Plants"
] |
16,129,058 | https://en.wikipedia.org/wiki/Proton-coupled%20electron%20transfer | A Proton-coupled electron transfer (PCET) is a chemical reaction that involves the transfer of electrons and protons from one atom to another. The term was originally coined for single proton, single electron processes that are concerted, but the definition has relaxed to include many related processes. Reactions that involve the concerted shift of a single electron and a single proton are often called Concerted Proton-Electron Transfer or CPET.
In PCET, the proton and the electron (i) start from different orbitals and (ii) are transferred to different atomic orbitals. They transfer in a concerted elementary step. CPET contrasts with step-wise mechanisms in which the electron and proton are transferred sequentially.
ET: [HX] + [M] → [HX]+ + [M]−
PT: [HX] + [M] → [X]− + [HM]+
CPET: [HX] + [M] → [X] + [HM]
Examples
PCET is thought to be pervasive. Important examples include water oxidation in photosynthesis, nitrogen fixation, oxygen reduction reaction, and the function of hydrogenases. These processes are relevant to respiration.
Simple models
Reactions of relatively simple coordination complexes have been examined as tests of PCET.
The comproportionation of a Ru(II) aquo and a Ru(IV) oxo complex (bipy = 2,2'-bipyridine, py = pyridine):
[(bipy)2(py)RuIV(O)]2+ + [(bipy)2(py)RuII(OH2)]2+ → 2 [(bipy)2(py)RuIII(OH)]2+
Electrochemical reactions where reduction is coupled to protonation or where oxidation is coupled to deprotonation.
The square scheme
Although it is relatively simple to demonstrate that the electron and proton begin and end in different orbitals, it is more difficult to prove that they do not move sequentially. The main evidence that PCET exists is that a number of reactions occur faster than expected for the sequential pathways. In the initial electron transfer (ET) mechanism, the initial redox event has a minimum thermodynamic barrier associated with the first step. Similarly, the initial proton transfer (PT) mechanism has a minimum barrier associated with the proton's initial pKa. Variations on these minimum barriers are also considered. The important finding is that there are a number of reactions with rates greater than these minimum barriers would permit. This suggests a third mechanism lower in energy; the concerted PCET has been offered as this third mechanism. This assertion has also been supported by the observation of unusually large kinetic isotope effects (KIE).
A typical method for establishing PCET pathway is to show that the individual ET and PT pathways operate at higher activation energy than the concerted pathway.
In proteins
SOD2 uses cyclic proton-coupled electron transfer reactions to convert superoxide (O2•-) into either oxygen (O2) or hydrogen peroxide (H2O2), depending on the oxidation state of the manganese metal and the protonation status of the active site.
Mn3+ + O2•- ↔ Mn2+ + O2
Mn2+ + O2•- + 2H+ ↔ Mn3+ + H2O2
The protons of the active site have been directly visualized and revealed that SOD2 utilizes proton transfers between a glutamine residue and a Mn-bound solvent molecule in concert with its electron transfers. During the Mn3+ to Mn2+ redox reaction, Gln143 donates an amide proton to hydroxide bound to the Mn and forms an amide anion. The amide anion is stabilized by short-strong hydrogen bonds (SSHBs) with the Mn-bound solvent and the nearby Trp123 residue. For the Mn2+ to Mn3+ redox reaction, the proton is donated back to the glutamine to reform the neutral amide state. The fast and efficient PCET catalysis of SOD2 is explained by the use of a proton that is always present and never lost to bulk solvent.
Related processes
Hydrogen atom transfer (HAT) is distinct from PCET. In HAT, the proton and electron start in the same orbitals and move together to the final orbital. HAT is recognized as a radical pathway, although the stoichiometry is similar to that for PCET.
References
Electrochemistry
Proton
Reaction mechanisms | Proton-coupled electron transfer | [
"Chemistry"
] | 941 | [
"Reaction mechanisms",
"Electrochemistry",
"Chemical kinetics",
"Physical organic chemistry"
] |
9,424,378 | https://en.wikipedia.org/wiki/Titanium%20yellow | Titanium yellow, also nickel antimony titanium yellow, nickel antimony titanium yellow rutile, CI Pigment Yellow 53, or C.I. 77788, is a yellow pigment with the chemical composition of NiO·Sb2O3·20TiO2. It is a complex inorganic compound. Its melting point lies above 1000 °C, and has extremely low solubility in water. While it contains antimony and nickel, their bioavailability is very low, so the pigment is relatively safe.
The pigment has crystal lattice of rutile, with 2–5% of titanium ions replaced with nickel(II) and 9–12% of them replaced with antimony(III).
Titanium yellow is manufactured by reacting fine powders of metal oxides, hydroxides, or carbonates in solid state in temperatures between 1000 and 1200 °C, either in batches or continuously in a pass-through furnace.
Titanium yellow is used primarily as a pigment for plastics and ceramic glazes, and in art painting.
See also
List of colors
List of inorganic pigments
External links
Database of Painting Pigments
chemicalbook.com
Inorganic pigments
Nickel compounds
Antimony(III) compounds
Titanium(IV) compounds
Shades of yellow | Titanium yellow | [
"Chemistry"
] | 250 | [
"Inorganic pigments",
"Inorganic compounds"
] |
9,424,530 | https://en.wikipedia.org/wiki/Zenit-2 | The Zenit-2 was a Ukrainian, previously Soviet, expendable carrier rocket. First flown in 1985, it has been launched 37 times, with 6 failures. It is a member of the Zenit family of rockets and was designed by the Yuzhmash.
History
With a payload of 13–15 tonnes to LEO, it was intended as an upper-middle-class launcher, larger than the medium-lift Soyuz with its 7-tonne payload and smaller than the heavy-lift Proton with its 20-tonne payload. Zenit-2 was to be certified for crewed launches from a specially built launch pad at the Baikonur spaceport, carrying the new crewed, partially reusable Zarya spacecraft, which was developed at the end of the 1980s but was cancelled. Also in the 1980s, Vladimir Chelomey's firm proposed the never-realised 15-ton Uragan spaceplane, which would have been launched by Zenit-2.
A modified version, the Zenit-2S, is used as the first two stages of the Sea Launch Zenit-3SL rocket. Launches of Zenit-2 rockets are conducted from Baikonur Cosmodrome Site 45/1. A second pad, 45/2, was also constructed, but was only used for two launches before being destroyed in an explosion. A third pad, Site 35 at the Plesetsk Cosmodrome was never completed, and work was abandoned after the dissolution of the Soviet Union.
The Zenit-2 had its last flight in 2004; it was superseded by the Zenit-2M, which incorporates enhancements made during the development of the Zenit-3SL. The Zenit-2 had a fairly low flight rate, as the Russian government usually avoided flying national-security payloads on Ukrainian rockets. Zenit-2M itself flew only twice: in 2007 and 2011.
During the late 1990s, the Zenit-2 was marketed for commercial launches. Only one such launch was conducted, with a group of Globalstar satellites, which ended in failure after a computer error resulted in the premature cutoff of the second stage.
The second stage, called the SL-16 by western governments, along with the second stages of the Vostok and Kosmos launch vehicles, makes up about 20% of the total mass of launch debris in Low Earth Orbit (LEO). An analysis that determined the 50 “statistically most concerning” debris objects in low Earth orbit determined that the top 20 were all SL-16 upper stages.
Launch history
References
Vehicles introduced in 1985
Zenit (rocket family)
Spacecraft that broke apart in space | Zenit-2 | [
"Astronomy",
"Technology"
] | 519 | [
"Outer space",
"Rocketry stubs",
"Astronomy stubs",
"Space debris",
"Spacecraft that broke apart in space",
"Outer space stubs"
] |
9,424,921 | https://en.wikipedia.org/wiki/Beater%20%28weaving%29 | A beater or batten, is a weaving tool designed to push the weft yarn securely into place. In small hand weaving such as Inkle weaving and tablet weaving the beater may be combined with the shuttle into a single tool. In rigid heddle looms the beater is combined with the heddles. Beaters appear both in a hand-held form, and as an integral part of a loom.
Hand beaters must have enough mass to force the weaving into place, so they come in a variety of weights and sizes. Some may have lead inserts to provide additional heft for a smaller beater, and some are made entirely from metal.
Loom beaters typically take the form of a bar mounted across the loom. The actual beating is done by a metal insert known as a reed, which contains a number of slots, known as dents, which the warp threads pass through. This is the more common form, as floor looms and mechanized looms both use a beater with a reed.
See also
Loom
References
Weaving equipment | Beater (weaving) | [
"Engineering"
] | 219 | [
"Weaving equipment"
] |
9,425,248 | https://en.wikipedia.org/wiki/Virtual%20engineering | Virtual engineering (VE) is defined as integrating geometric models and related engineering tools such as analysis, simulation, optimization, and decision making tools, etc., within a computer-generated environment that facilitates multidisciplinary collaborative product development. Virtual engineering shares many characteristics with software engineering, such as the ability to obtain many different results through different implementations.
Description
The concept
A virtual engineering environment provides a user-centered, first-person perspective that enables users to interact with an engineered system naturally and provides users with a wide range of accessible tools. This requires an engineering model that includes the geometry, physics, and any quantitative or qualitative data from the real system. The user should be able to walk through the operating system and observe how it works and how it responds to changes in design, operation, or any other engineering modification. Interaction within the virtual environment should provide an easily understood interface, appropriate to the user's technical background and expertise, that enables the user to explore and discover unexpected but critical details about the system's behavior. Similarly, engineering tools and software should fit naturally into the environment and allow the user to maintain her or his focus on the engineering problem at hand. A key aim of virtual engineering is to engage the human capacity for complex evaluation.
The key components of such an environment include:
User-centered virtual reality visualization techniques. When presented in a familiar and natural interface, complex three-dimensional data becomes more understandable and usable, enhancing the user's understanding. Coupled with an appropriate expert (e.g., a design engineer, a plant engineer, or a construction manager), virtual reality can reduce design time for better solutions.
Interactive analysis and engineering. Today nearly all aspects of power plant simulation require extensive off-line setup, calculation, and iteration. The time required for each iteration can range from one day to several weeks. Tools for interactive collaborative engineering, in which the engineer can establish a dynamic thinking process, are needed to permit real-time exploration of the “what-if” questions that are essential to the engineering process. In nearly all circumstances, an engineering answer now has much greater value than an answer tomorrow, next week, or next month. Although many excellent engineering analysis techniques have been developed, they are not routinely used as a fundamental part of engineering design, operations, control, and maintenance. The time required to set up, compute, and understand the result, then repeat the process until an adequate answer is obtained, significantly exceeds the time available. This includes techniques such as computational fluid dynamics (CFD), finite element analysis (FEA), and optimization of complex systems. Instead, these engineering tools are used to provide limited insight into the problem, to sharpen an answer, or to understand what went wrong after a bad design and how to improve the results next time. This is particularly true of CFD analysis. (An illustrative surrogate-model sketch of rapid what-if exploration appears at the end of this section.)
Computer-aided engineering (CAE): Integration of real processes into the virtual environment. Engineering is more than analysis and design. A methodology for storage and rapid access to engineering analyses, plant data, geometry, and all other qualitative and quantitative engineering data related to plant operation still needs to be developed.
Engineering decision support tools. Optimization, cost analysis, scheduling, and knowledge-based tools need to be integrated into the engineering processes.
Virtual engineering allows engineers to work with objects in a virtual space without having to think about the objects' underlying technical information. When an engineer takes hold of a virtual component and moves or alters it, he or she should only have to think about the consequences of such a move in the component's real world counterpart. Engineers must also be able to create a picture of the system, the various parts of the system, and how the parts will interact with each other. When engineers can focus on making decisions for particular engineering issues rather than the underlying technical information, design cycles and costs are reduced.
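To make the interactive "what-if" exploration described above concrete, here is a minimal, hypothetical Python sketch: the simulation function, design variable, and all numbers are invented for illustration only. An expensive analysis tool is sampled offline at a few design points, and a cheap surrogate model then answers design questions at interactive speed inside the virtual environment.

```python
import numpy as np

# Hypothetical stand-in for an expensive simulation (e.g., a CFD run that takes hours):
# a made-up pressure-drop model as a function of duct diameter.
def expensive_simulation(diameter_m: float) -> float:
    return 0.5 / diameter_m ** 4 + 2.0 * diameter_m

# Run the slow tool offline at a handful of design points...
design_points = np.linspace(0.1, 0.5, 6)
observed = np.array([expensive_simulation(d) for d in design_points])

# ...then fit a cheap polynomial surrogate that answers "what-if" questions instantly.
surrogate = np.polynomial.Polynomial.fit(design_points, observed, deg=4)

# A designer can now sweep candidate diameters interactively instead of waiting for new runs.
for diameter in (0.15, 0.25, 0.35):
    print(f"diameter {diameter:.2f} m -> predicted response {surrogate(diameter):.2f}")
```

In practice the surrogate would be validated against further runs of the full analysis tool before being trusted for design decisions.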
Software
UC-win/Road and VR Studio by FORUM8
IC.IDO by ESI-Group
Usual denomination
Usually, the modules of virtual engineering are named as follows:
Computer-aided design (CAD): Designates the capability to model a geometry using geometric operations that mirror real-life industrial machining processes such as revolution, dressing, and extrusion. The CAD module is intended to ease the generation of a geometrical shape, and usually comes with other modules, such as an engineering drawing tool.
Computer-aided manufacturing (CAM): Even though CAD provides an accurate virtual shape of the objects or parts, the manufactured result can differ considerably, because the CAD tool deals only with ideal mathematical entities (perfect points, lines, planes, and volumes). To model the succession of manufacturing operations more realistically, and to certify that the end product will be close to the virtual model, engineers use a manufacturing module that represents the tool machining the parts.
Computer-aided engineering (CAE): Another aspect integrated into a virtual engineering tool is engineering analysis (finite element analysis of strain, stress, temperature distribution, flow, etc.). Such a tool can be integrated into the main software or kept separate. CAE modules are usually dedicated software with fewer CAD features; the tools can often import and export models so that users can make the most of each.
Other modules can exist performing various other tasks, such as prototype manufacturing and product life cycle management.
See also
V-business
References
McCorkle, D. S., Bryden, K. M., "Using the Semantic Web to Enable Integration with Virtual Engineering Tools", Proceedings of the 1st International Virtual Manufacturing Workshop (27), Washington, DC, March 2006.
Huang, G., Bryden, K. M., McCorkle, D. S., “Interactive Design using CFD and Virtual Engineering”, Proceedings of the 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, AIAA-2004-4364, Albany, September 2004.
McCorkle, D. S., Bryden, K. M., and Swensen, D. A., “Using Virtual Engineering Tools to Reduce NOx Emissions”, Proceedings of ASME Power 2004, POWER2004-52021, 441–446, March 2004.
McCorkle, D. S., Bryden, K. M., and Kirstukas, S. J., “Building a Foundation for Power Plant Virtual Engineering”, 28th International Technical Conference on Coal Utilization & Fuel Systems, 63–71, Clearwater, FL, April 2003.
Davis, Michael Andrew. "Improving electrical power grid resiliency and optimizing post-storm recovery using LiDAR and machine learning." MS thesis, 2020.
External links
EnergyBill Networks
Trianz
Virtual Engineering Inc.,
Virtual Engineering.se
Virtual reality
Engineering concepts | Virtual engineering | [
"Engineering"
] | 1,430 | [
"nan"
] |
9,426,238 | https://en.wikipedia.org/wiki/Averted%20vision | Averted vision is a technique for viewing faint objects which uses peripheral vision. It involves not looking directly at the object, but looking a little off to the side, while continuing to concentrate on the object. This subject is discussed in the popular astronomy literature but only a few rigorous studies have quantified the effect.
There is some evidence that the technique has been known since ancient times, as it seems to have been reported by Aristotle while observing the star cluster now known as M41. This technique of being able to see very dim lights over a long distance has also been passed down over hundreds of generations of sailors whose duties included standing lookout watches, making one better able to spot dim lights from other ships or shore locations at night. The technique has also been used in military training.
The same technique can be employed with or without a telescope (looking to the side with the naked eye or looking towards the edge of the telescope's field of view). An additional technique called scope rocking may also be used, which is done by simply moving the telescope back and forth slightly to move the object around in the field of view. This technique is based on the fact that the visual system is more sensitive to motion than to static objects.
Physiology
Averted vision works because there are virtually no rods (cells which detect dim light in black and white) in the fovea, a small area at the center of the retina responsible for central vision. The fovea contains primarily cone cells, which serve as bright light and color detectors and are not as useful during the night. This situation results in a decrease in visual sensitivity in central vision at night. Based on the early work of Osterberg (1935), and later confirmed by modern adaptive optics, the density of the rod cells usually reaches a maximum around 20 degrees off the center of vision. Some researchers have contested the claim that averted vision is due solely to rod cell density, because the peak sensitivity to stars is not at 20 degrees.
See also
References
External links
"Just What is Averted Vision, Anyway?"
"Avertedvision.net | For Deep Sky Astronomers"
Eye
Vision
Observational astronomy | Averted vision | [
"Astronomy"
] | 430 | [
"Observational astronomy",
"Astronomical sub-disciplines"
] |
9,427,109 | https://en.wikipedia.org/wiki/Mozambique%20tilapia | The Mozambique tilapia (Oreochromis mossambicus) is an oreochromine cichlid fish native to southeastern Africa. Dull colored, the Mozambique tilapia often lives up to a decade in its native habitats. It is a popular fish for aquaculture. Due to human introductions, it is now found in many tropical and subtropical habitats around the globe, where it can become an invasive species because of its robust nature. These same features make it a good species for aquaculture because it readily adapts to new situations. It is known as black tilapia in Colombia and as blue kurper in South Africa.
Description
The native Mozambique tilapia is laterally compressed, and has a deep body with long dorsal fins, the front part of which have spines. Native coloration is a dull greenish or yellowish, and weak banding may be seen. Adults reach up to in standard length and up to . Size and coloration may vary in captive and naturalized populations due to environmental and breeding pressures. It lives up to 11 years.
Distribution and habitat
The Mozambique tilapia is native to inland and coastal waters in southeastern Africa, from the Zambezi basin in Mozambique, Malawi, Zambia and Zimbabwe to Bushman River in South Africa's Eastern Cape province. It is threatened in its home range by the introduced Nile tilapia. In addition to competing for the same resources, the two readily hybridize. This has already been documented from the Zambezi and Limpopo Rivers, and it is expected that pure Mozambique tilapia eventually will disappear from both.
Otherwise it is a remarkably robust and fecund fish, readily adapting to available food sources and breeding under suboptimal conditions. Among others, it occurs in rivers, streams, canals, ponds, lakes, swamps and estuaries, although it typically avoids fast-flowing waters, waters at high altitudes and the open sea. It inhabits waters that range from .
Invasiveness
The Mozambique tilapia or hybrids involving this species and other tilapia are invasive in many parts of the world outside their native range, having escaped from aquaculture or been deliberately introduced to control mosquitoes. The Mozambique tilapia has been nominated by the Invasive Species Specialist Group as one of the 100 worst invasive species in the world. It can harm native fish populations through competition for food and nesting space, as well as by directly consuming small fish. In Hawaii, striped mullet Mugil cephalus are threatened because of the introduction of this species. The population of hybrid Mozambique tilapia x Wami tilapia in California's Salton Sea may also be responsible for the decline of the desert pupfish, Cyprinodon macularius.
Hybridization
As with most species of tilapia, Mozambique tilapia have a high potential for hybridization. They are often crossbred with other tilapia species in aquaculture because purebred Mozambique tilapia grow slowly and have a body shape poorly suited to cutting large fillets. However, Mozambique tilapia have the desirable trait of being especially tolerant of salty water. Also, hybrids between certain parent combinations (such as between Mozambique and Wami tilapia) result in offspring that are all or predominantly male. Male tilapia are preferred in aquaculture as they grow faster and have a more uniform adult size than females. The "Florida Red" tilapia is a popular commercial hybrid of Mozambique and blue tilapia.
Behavior
Feeding
Mozambique tilapia are omnivorous. They can consume detritus, diatoms, phytoplankton, invertebrates, small fry and vegetation ranging from macroalgae to rooted plants. This broad diet helps the species thrive in diverse locations.
Due to their robust nature, Mozambique tilapias often over-colonize the habitat around them, eventually becoming the most abundant species in a particular area. When over-crowding happens and resources get scarce, adults will sometimes cannibalize the young for more nutrients. Mozambique tilapia, like other fish such as Nile tilapia and trout, are opportunistic omnivores and will feed on algae, plant matter, organic particles, small invertebrates and other fish. Feeding patterns vary depending on which food source is the most abundant and the most accessible at the time. In captivity, Mozambique tilapias have been known to learn how to feed themselves using demand feeders. During commercial feeding, the fish may energetically jump out of the water for food.
Social structure
Mozambique tilapias often travel in groups where a strict dominance hierarchy is maintained. Positions within the hierarchy correlate with territoriality, courtship rate, nest size, aggression, and hormone production. In terms of social structure, Mozambique tilapias engage in a system known as lek-breeding, where males establish territories with dominance hierarchies while females travel between them. Social hierarchies typically develop because of competition for limited resources including food, territories, or mates. During the breeding season, males cluster around certain territory, forming a dense aggregation in shallow water. This aggregation forms the basis of the lek through which the females preferentially choose their mates. Reproductive success by males within the lek is highly correlated to social status and dominance.
In experiments with captive tilapias, evidence demonstrates the formation of linear hierarchies where the alpha male participates in significantly more agonistic interactions. Thus, males that are higher ranked initiate much more aggressive acts than subordinate males. However, contrary to popular belief, Mozambique tilapias display more agonistic interactions towards fish that are farther apart in the hierarchy scale than they do towards individuals closer in rank. One hypothesis behind this action rests with the fact that aggressive actions are costly. In this context, members of this social system tend to avoid confrontations with neighboring ranks in order to conserve resources rather than engage in an unclear and risky fight. Instead, dominant individuals seek to bully subordinate tilapias both for an easy fight and to keep their rank.
Communication and aggression
Urine in Mozambique tilapias, like many freshwater fish species, acts as a vector for communication amongst individuals. Hormones and pheromones released with urine by the fish often affect the behavior and physiology of the opposite sex. Dominant males signal females through the use of a urinary odorant. Further studies have suggested that females respond to the ratio of chemicals within the urine, as opposed to the odor itself. Nevertheless, females are known to be able to distinguish between hierarchical rank and dominant vs. subordinate males through chemicals in urine.
Urinary pheromones also play a part in male-male interaction for Mozambique tilapias. Studies have shown that male aggression is highly correlated with increased urination. Symmetrical aggression between males results in an increase in urination frequency. Dominant males both store and release more potent urine during agonistic interactions. Thus, both the initial stage of lek formation and the maintenance of the social hierarchy may depend strongly on the males' varying urinary output.
Aggression amongst males usually involves a typical sequence of visual, acoustic, and tactile signals that escalates to physical confrontation if no resolution is reached. Usually, conflict ends before physical aggression, as fights are both costly and risky; bodily damage may impede an individual's ability to find a mate in the future. Cheating, in which an individual may fake its own fitness, is discouraged because these aggressive rituals incur significant energetic costs: the costs of initiating a ritual often outweigh the benefits of cheating. In this regard, differences between individuals in endurance play a critical role in deciding the winner and the loser.
Reproduction
In the first step in the reproductive cycle for Mozambique tilapia, males excavate a nest into which a female can lay her eggs. After the eggs are laid, the male fertilizes them. Then the female stores the eggs in her mouth until the fry hatch; this act is called mouthbrooding. One of the main reasons behind the aggressive actions of Mozambique tilapias is access to reproductive mates. The designation of Mozambique tilapias as an invasive species rests on their life-history traits: Tilapias exhibit high levels of parental care as well as the capacity to spawn multiple broods through an extended reproductive season, both contributing to their success in varying environments. In the lek system, males congregate and display themselves to attract females for matings. Thus, mating success is highly skewed towards dominant males, who tend to be larger, more aggressive, and more effective at defending territories. Dominant males also build larger nests for the spawn. During courtship rituals, acoustic communication is widely used by the males to attract females. Studies have shown that females are attracted to dominant males who produce lower peak frequencies as well as higher pulse rates. At the end of mating, males guard the nest while females take both the eggs and the sperm into their mouth. Due to this, Mozambique tilapias can occupy many niches during spawning since the young can be transported in the mouth. These proficient reproductive strategies may be the cause behind their invasive tendencies.
Male Mozambique tilapias synchronize breeding behavior in terms of courtship activity and territoriality in order to take advantage of female spawning synchrony. One of the costs associated with this synchronization is the increase in competition among males, which are already high on the dominance hierarchy. As a result, different mating tactics have evolved in these species. Males may mimic females and sneak reproduction attempts when the dominant male is occupied. Likewise, another strategy for males is to exist as a floater, travelling between territories in an attempt to find a mate. Nevertheless, it is the dominant males who have the greatest reproductive advantage.
Parental care
Typically, Mozambique tilapias, like all species belonging to the genus Oreochromis and species like Astatotilapia burtoni, are maternal mouthbrooders, meaning that the spawn is incubated and raised in the mouth of the mother. Parental care is therefore almost exclusively provided by the female. Males contribute by providing nests for the spawn before incubation, but the energy costs associated with nest production are low relative to mouthbrooding. Unlike nonmouthbrooders, these fish cannot afford to mouthbrood and grow a new clutch of eggs at the same time, so Mozambique tilapias arrest oocyte growth during mouthbrooding to conserve energy. Even with oocyte arrest, females that mouthbrood incur significant costs in body weight, energy, and fitness. Hence, parent-offspring conflict is visible through the costs and benefits to the parents and the young: a mother caring for her offspring carries the cost of reducing her own individual fitness. Unlike most fish, Mozambique tilapias exhibit an extended maternal care period believed to allow social bonds to be formed.
Use in aquaculture
Mozambique tilapia are hardy individuals that are easy to raise and harvest, making them a good aquacultural species. They have a mild, white flesh that is appealing to consumers. This species constitutes about 4% of the total tilapia aquaculture production worldwide, but is more commonly hybridized with other tilapia species. Tilapia are very susceptible to diseases such as whirling disease and ich. Mozambique tilapia are resistant to wide varieties of water quality issues and pollution levels. Because of these abilities they have been used as bioassay organisms to generate metal toxicity data for risk assessments of local freshwater species in South African rivers.
Mozambique tilapia were one of the species flown on the Bion-M No.1 spacecraft in 2013, but they all died due to equipment failure.
Other names
The species is known by a number of other names including:
Mujair in Indonesia, the name derived from a Javanese breeder Moedjair.
Daya in Pakistan
Jesus fish in the Elim area of Jamaica, named for their habit of multiplying
References
Courtenay W.R. Jr. 1989. Exotic fishes in the National Park System. Pages 237–252 in: Thomas L.K. (ed.). Proceedings of the 1986 conference on science in the national parks, volume 5. Management of exotic species in natural communities. U.S. National Park Service and George Wright Society, Washington, D.C.
Courtenay W.R. Jr., and C.R. Robins. 1989. Fish introductions: Good management, mismanagement, or no management? CRC Critical Reviews in Aquatic Sciences 1:159–172.
Gupta M.V. and B.O. Acosta. 2004. A review of global tilapia farming practices. WorldFish Center P.O. Box 500 GPO, 10670, Penang, Malaysia.
Moyle P.B. 1976. Inland fishes of California. University of California Press, Berkeley, CA. 330 p.
Popma, T. Tilapia Life History and Biology 1999 Southern Region Aquaculture Center
Trewevas E. 1983. Tilapiine Fishes Of The Genera Sarotherodon, Oreochromis And Danakilia. British Museum Of Natural History, Publication Number 878.Comstock Publishing Associates. Ithaca, New York. 583 p.
Waal, Ben van der, 2002. Another fish on its way to extinction?. Science in Africa.
External links
Photo of "Florida Red" hybrid. Retrieved 12 July 2007.
Mozambique tilapia
Freshwater fish of Africa
Freshwater fish of South Africa
Fish of Mozambique
Fish of the Dominican Republic
Taxa named by Wilhelm Peters
Fish described in 1852
Near threatened animals
Vulnerable biota of Africa
Space-flown life | Mozambique tilapia | [
"Biology"
] | 2,780 | [
"Space-flown life"
] |
9,427,239 | https://en.wikipedia.org/wiki/Public%20computer | A public computer (or public access computer) is any of various computers available in public areas. Some places where public computers may be available are libraries, schools, or dedicated facilities run by government.
Public computers share similar hardware and software components with personal computers; however, the role and function of a public access computer are entirely different. A public access computer is used by many different untrusted individuals throughout the course of the day. The computer must be locked down and secured against both intentional and unintentional abuse. Users typically do not have authority to install software or change settings. A personal computer, in contrast, is typically used by a single responsible user, who can customize the machine's behavior to their preferences.
Public access computers are often provided with tools such as a PC reservation system to regulate access.
The world's first public access computer center was the Marin Computer Center in California, co-founded by David and Annie Fox in 1977.
Kiosks
A kiosk is a special type of public computer that uses software and hardware modifications to provide services specific to the place where it is installed. For example, a movie ticket kiosk can be found at a movie theater. These kiosks usually run a secure browser with no access to the desktop. Many such kiosks run Linux; however, ATMs, kiosks designed for depositing money, often run Windows XP.
Public computers in the United States
Library computers
In the United States and Canada, almost all public libraries have computers available for the use of patrons, though some libraries will impose a time limit on users to ensure others will get a turn and keep the library less busy. Users are often allowed to print documents that they have created using these computers, though sometimes for a small fee.
Privacy
Privacy is an important part of the public library institution, since libraries are committed to the public's intellectual freedom. Use of any computer or network may create records of users' activities that can jeopardize their privacy, and a patron can compromise their own privacy by failing to clear the cache, cookies, or documents left on the public computer. The American Library Association (ALA) publishes guidelines that help members of the public use public library computers while remaining private. In their provision of services to library users, librarians have an ethical responsibility, expressed in the ALA Code of Ethics, to preserve users' right to privacy, and they are also responsible for helping users understand private patron use and access. Libraries should ensure the following protections for users browsing on public computers: the computer automatically clears a user's history; privacy screens are installed so users cannot see another patron's screen; software is kept up to date as an effective safety measure; data-restoration software clears documents users may have left behind and combats possible malware; sound security practices are followed; and users are made aware of any possible monitoring of their browsing activities. Users can also consult the Library Privacy Checklist for Public Access Computers and Networks to better understand what libraries strive for when protecting privacy.
School computers
The U.S. government has given money to many school boards to purchase computers for educational applications. Schools may have multiple computer labs, which contain these computers for students to use. There is usually Internet access on these machines, but some schools install a blocking service that limits the websites students can access to educational resources, such as Google. In addition to controlling the content students view, such blocks can help keep the computers safe by preventing students from downloading malware and other threats. However, the effectiveness of such content filtering systems is questionable, since they can easily be circumvented using proxy websites or virtual private networks; for some weak security systems, merely knowing the IP address of the intended website is enough to bypass the filter.
School computers often have advanced operating-system security to prevent tech-savvy students from inflicting damage; for example, the Windows Registry Editor and Task Manager are typically disabled on Microsoft Windows machines. Schools with very advanced tech services may also install locked-down BIOS/firmware or make kernel-level changes to the operating system, precluding the possibility of unauthorized activity.
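As a rough illustration of the kind of Windows lockdown described above, the sketch below (a hypothetical example, not a recommendation or any particular school's configuration) sets the standard per-user policy values that disable Task Manager and the Registry Editor. It assumes it is run with appropriate privileges on a Windows machine.

```python
import winreg

# Per-user policy key that Windows checks for Task Manager / Registry Editor restrictions.
POLICY_KEY = r"Software\Microsoft\Windows\CurrentVersion\Policies\System"

def lock_down_current_user() -> None:
    """Disable Task Manager and the Registry Editor for the current user."""
    key = winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, POLICY_KEY, 0, winreg.KEY_SET_VALUE)
    try:
        # 1 = disabled; an administrator can later set these values back to 0 to restore access.
        winreg.SetValueEx(key, "DisableTaskMgr", 0, winreg.REG_DWORD, 1)
        winreg.SetValueEx(key, "DisableRegistryTools", 0, winreg.REG_DWORD, 1)
    finally:
        winreg.CloseKey(key)

if __name__ == "__main__":
    lock_down_current_user()
```

Real deployments usually apply the same policies centrally through Group Policy rather than per-machine scripts.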
See also
Personal computer
Telecenter
Internet cafe
References
Personal computing
Computer security
Computer | Public computer | [
"Technology"
] | 905 | [
"Computing and society",
"Personal computing"
] |
9,427,669 | https://en.wikipedia.org/wiki/Dilution%20assay | The term dilution assay is generally used to designate a special type of bioassay in which one or more preparations (e.g. a drug) are administered to experimental units at different dose levels inducing a measurable biological response. The dose levels are prepared by dilution in a diluent that is inert in respect of the response. The experimental units can for example be cell-cultures, tissues, organs or living animals. The biological response may be quantal (e.g. positive/negative) or quantitative (e.g. growth). The goal is to relate the response to the dose, usually by interpolation techniques, and in many cases to express the potency/activity of the test preparation(s) relative to a standard of known potency/activity.
Dilution assays can be direct or indirect. In a direct dilution assay the amount of dose needed to produce a specific (fixed) response is measured, so that the dose is a stochastic variable defining the tolerance distribution. Conversely, in an indirect dilution assay the doses are administered at fixed levels, so that the response is a stochastic variable.
In some assays, there may be strong reasons for believing that all the constituents of the test preparation except one, are without any effect on the studied response of the subjects. An assay of the preparation against a standard preparation of the effective constituent, is then equivalent to an analysis for determining the content of the constituent. This may be described as analytical dilution assay.
Statistical models
For a mathematical definition of a dilution assay an observation space U is defined, along with a function f: U → R so that the responses u in U are mapped to the set of real numbers. It is now assumed that a function F exists which relates the dose z ≥ 0 to the response:
f(u) = F(z) + e,
in which e is an error term with expectation 0. F is usually assumed to be continuous and monotone. In situations where a standard preparation S is included it is furthermore assumed that the test preparation T behaves like a dilution (or concentration) of the standard:
F_T(z) = F_S(ρz), for all z > 0,
where ρ > 0 is the relative potency of T. This is the fundamental assumption of similarity of dose-response curves which is necessary for a meaningful and unambiguous definition of the relative potency. In many cases it is convenient to apply a power transformation x = z^λ with λ > 0, or a logarithmic transformation x = log(z). The latter can be shown to be a limit case of λ ↓ 0, so, writing μ = log(ρ) for the log relative potency, the above equation can be redefined on the log-dose scale as
F_T(x) = F_S(x + μ), for all x.
Estimates of F are usually restricted to be members of a well-defined parametric family of functions, for example the family of linear functions characterized by an intercept and a slope. Statistical techniques such as optimization by maximum likelihood can be used to calculate estimates of the parameters. Of notable importance in this respect is the theory of generalized linear models, with which a wide range of dilution assays can be modelled. Estimates of F may describe F satisfactorily over the range of doses tested, but they do not necessarily have to describe F beyond that range. However, this does not mean that dissimilar curves can be restricted to an interval where they happen to be similar.
In practice, F itself is rarely of interest. More of interest is an estimate of ρ (or μ), or an estimate of the dose that induces a specific response. These estimates involve taking ratios of statistically dependent parameter estimates. Fieller's theorem can be used to compute confidence intervals of these ratios.
Some special cases deserve particular mention because of their widespread use: if F is linear and x = z^λ, this is known as a slope-ratio model; if F is linear and x = log(z), this is known as a parallel line model. Another commonly applied model is the probit model, where F is the cumulative normal distribution function and the response follows a binomial distribution.
Example: Microbiological assay of antibiotics
An antibiotic standard and a test preparation are applied at three dose levels to sensitive microorganisms on a layer of agar in petri dishes. The stronger the dose, the larger the zone of inhibition of growth of the microorganisms. The biological response is in this case the zone of inhibition, and the diameter of this zone can be used as the measurable response. The doses are transformed to logarithms and the method of least squares is used to fit two parallel lines to the data. The horizontal distance between the two lines serves as an estimate of the log relative potency of the test preparation with respect to the standard.
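A minimal sketch of this calculation, using invented zone diameters rather than real measurements: both preparations are fitted with a common slope on the log-dose scale, and the horizontal offset between the parallel lines estimates the log relative potency.

```python
import numpy as np

# Hypothetical inhibition-zone diameters (mm) at three doses; not real measurements.
log_dose = np.log10([1.0, 2.0, 4.0])           # the same three dose levels for both preparations
zone_standard = np.array([15.1, 17.9, 21.2])   # responses for the standard
zone_test = np.array([14.0, 16.9, 20.1])       # responses for the test preparation

# Parallel-line model: response = intercept_S (or intercept_T) + slope * log(dose),
# with a single slope shared by both preparations.
X_standard = np.column_stack([np.ones(3), np.zeros(3), log_dose])
X_test = np.column_stack([np.zeros(3), np.ones(3), log_dose])
X = np.vstack([X_standard, X_test])
y = np.concatenate([zone_standard, zone_test])

intercept_S, intercept_T, slope = np.linalg.lstsq(X, y, rcond=None)[0]

# The horizontal distance between the two parallel lines is the log10 relative potency.
log_relative_potency = (intercept_T - intercept_S) / slope
print(f"relative potency of test vs. standard: {10 ** log_relative_potency:.2f}")
```

A full analysis would also check the parallelism assumption and attach a Fieller-type confidence interval to the potency estimate.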
Software
The major statistical software packages do not cover dilution assays, although a statistician should have no difficulty writing suitable scripts or macros for that purpose. Several special-purpose software packages for dilution assays exist.
References
Finney, D.J. (1971). Probit Analysis, 3rd Ed. Cambridge University Press, Cambridge.
Finney, D.J. (1978). Statistical Method in Biological Assay, 3rd Ed. Griffin, London.
Govindarajulu, Z. (2001). Statistical Techniques in Bioassay, 2nd revised and enlarged edition, Karger, New York.
External links
Software for dilution assays:
PLA
CombiStats
Unistat
BioAssay
Drug manufacturing
Drug discovery
Biostatistics | Dilution assay | [
"Chemistry",
"Biology"
] | 1,061 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
9,427,744 | https://en.wikipedia.org/wiki/Isogonal | Isogonal, a mathematical term meaning "having similar angles", may refer to:
Isogonal figure or polygon, polyhedron, polytope or tiling
Isogonal trajectory, in curve theory
Isogonal conjugate, in triangle geometry
See also
Isogonic line, in the study of Earth's magnetic field, a line of constant magnetic declination
Geometry | Isogonal | [
"Mathematics"
] | 77 | [
"Geometry"
] |
9,428,917 | https://en.wikipedia.org/wiki/Snub%20%28geometry%29 | In geometry, a snub is an operation applied to a polyhedron. The term originates from Kepler's names of two Archimedean solids, the snub cube (cubus simus) and the snub dodecahedron (dodecaedron simum).
In general, snubs have chiral symmetry with two forms: with clockwise or counterclockwise orientation. By Kepler's names, a snub can be seen as an expansion of a regular polyhedron: moving the faces apart, twisting them about their centers, adding new polygons centered on the original vertices, and adding pairs of triangles fitting between the original edges.
The terminology was generalized by Coxeter, with a slightly different definition, for a wider set of uniform polytopes.
Conway snubs
John Conway explored generalized polyhedron operators, defining what is now called Conway polyhedron notation, which can be applied to polyhedra and tilings. Conway calls Coxeter's operation a semi-snub.
In this notation, snub is defined by the dual and gyro operators, as s = dg, and it is equivalent to an alternation of a truncation of an ambo operator. Conway's notation itself avoids Coxeter's alternation (half) operation since it only applies for polyhedra with only even-sided faces.
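As an aside on how the Conway snub operator transforms element counts, the sketch below applies the standard count rules for s (vertices 2e, edges 5e, faces f + v + 2e, for a seed with v vertices, e edges, and f faces); the chosen seeds are simply familiar examples.

```python
from typing import NamedTuple

class Counts(NamedTuple):
    v: int  # vertices
    e: int  # edges
    f: int  # faces

def snub(seed: Counts) -> Counts:
    """Element counts after Conway's snub operator (s = dg)."""
    return Counts(v=2 * seed.e, e=5 * seed.e, f=seed.f + seed.v + 2 * seed.e)

seeds = {
    "tetrahedron": Counts(4, 6, 4),     # snub tetrahedron = icosahedron (12, 30, 20)
    "cube": Counts(8, 12, 6),           # snub cube (24, 60, 38)
    "dodecahedron": Counts(20, 30, 12), # snub dodecahedron (60, 150, 92)
}

for name, c in seeds.items():
    s = snub(c)
    assert s.v - s.e + s.f == 2  # Euler's formula still holds
    print(f"snub {name}: V={s.v}, E={s.e}, F={s.f}")
```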
In 4 dimensions, Conway suggests the snub 24-cell should be called a semi-snub 24-cell because, unlike 3-dimensional snub polyhedra, which are alternated omnitruncated forms, it is not an alternated omnitruncated 24-cell; it is instead an alternated truncated 24-cell.
Coxeter's snubs, regular and quasiregular
Coxeter's snub terminology is slightly different, meaning an alternated truncation, deriving the snub cube as a snub cuboctahedron, and the snub dodecahedron as a snub icosidodecahedron. This definition is used in the naming of two Johnson solids: the snub disphenoid and the snub square antiprism, and of higher dimensional polytopes, such as the 4-dimensional snub 24-cell, with extended Schläfli symbol s{3,4,3}, and Coxeter diagram .
A regular polyhedron (or tiling), with Schläfli symbol , and Coxeter diagram , has truncation defined as , and , and has snub defined as an alternated truncation , and . This alternated construction requires q to be even.
A quasiregular polyhedron, with Schläfli symbol or r{p,q}, and Coxeter diagram or , has quasiregular truncation defined as or tr{p,q}, and or , and has quasiregular snub defined as an alternated truncated rectification or htr{p,q} = sr{p,q}, and or .
For example, Kepler's snub cube is derived from the quasiregular cuboctahedron, with a vertical Schläfli symbol , and Coxeter diagram , and so is more explicitly called a snub cuboctahedron, expressed by a vertical Schläfli symbol , and Coxeter diagram . The snub cuboctahedron is the alternation of the truncated cuboctahedron, , and .
Regular polyhedra with even-order vertices can also be snubbed as alternated truncations, like the snub octahedron, as , , is the alternation of the truncated octahedron, , and . The snub octahedron represents the pseudoicosahedron, a regular icosahedron with pyritohedral symmetry.
The snub tetratetrahedron, as , and , is the alternation of the truncated tetrahedral symmetry form, , and .
Coxeter's snub operation also allows n-antiprisms to be defined as or , based on n-prisms or , while is a regular n-hosohedron, a degenerate polyhedron, but a valid tiling on the sphere with digon or lune-shaped faces.
The same process applies for snub tilings:
Examples
Nonuniform snub polyhedra
Nonuniform polyhedra with all even-valence vertices can be snubbed, including some infinite sets; for example:
Coxeter's uniform snub star-polyhedra
Snub star-polyhedra are constructed by their Schwarz triangle (p q r), with rational ordered mirror-angles, and all mirrors active and alternated.
Coxeter's higher-dimensional snubbed polytopes and honeycombs
In general, a regular polychoron with Schläfli symbol , and Coxeter diagram , has a snub with extended Schläfli symbol , and .
A rectified polychoron = r{p,q,r}, and has snub symbol = sr{p,q,r}, and .
Examples
There is only one uniform convex snub in 4-dimensions, the snub 24-cell. The regular 24-cell has Schläfli symbol, , and Coxeter diagram , and the snub 24-cell is represented by , Coxeter diagram . It also has an index 6 lower symmetry constructions as or s{31,1,1} and , and an index 3 subsymmetry as or sr{3,3,4}, and or .
The related snub 24-cell honeycomb can be seen as a or s{3,4,3,3}, and , and lower symmetry or sr{3,3,4,3} and or , and lowest symmetry form as or s{31,1,1,1} and .
A Euclidean honeycomb is an alternated hexagonal slab honeycomb, s{2,6,3}, and or sr{2,3,6}, and or sr{2,3[3]}, and .
Another Euclidean (scaliform) honeycomb is an alternated square slab honeycomb, s{2,4,4}, and or sr{2,41,1} and :
The only snub hyperbolic uniform honeycomb is the snub hexagonal tiling honeycomb, as s{3,6,3} and , which can also be constructed as an alternated hexagonal tiling honeycomb, h{6,3,3}, . It is also constructed as s{3[3,3]} and .
Another hyperbolic (scaliform) honeycomb is a snub order-4 octahedral honeycomb, s{3,4,4}, and .
See also
Snub polyhedron
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, (pp. 154–156 8.6 Partial truncation, or alternation)
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, , Googlebooks
(Paper 17) Coxeter, The Evolution of Coxeter–Dynkin diagrams, [Nieuw Archief voor Wiskunde 9 (1991) 233–248]
(Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10]
(Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559–591]
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3–45]
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999, (Chapter 3: Wythoff's Construction for Uniform Polytopes)
Norman Johnson Uniform Polytopes, Manuscript (1991)
N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008,
Richard Klitzing, Snubs, alternated facetings, and Stott–Coxeter–Dynkin diagrams, Symmetry: Culture and Science, Vol. 21, No.4, 329–344, (2010)
Geometry
Snub tilings | Snub (geometry) | [
"Physics",
"Mathematics"
] | 1,857 | [
"Tessellation",
"Snub tilings",
"Geometry",
"Symmetry"
] |
9,429,153 | https://en.wikipedia.org/wiki/Upjohn%20dihydroxylation | The Upjohn dihydroxylation is an organic reaction which converts an alkene to a cis vicinal diol. It was developed by V. VanRheenen, R. C. Kelly and D. Y. Cha of the Upjohn Company in 1976. It is a catalytic system using N-methylmorpholine N-oxide (NMO) as stoichiometric re-oxidant for the osmium tetroxide. It is superior to previous catalytic methods.
Prior to this method, use of stoichiometric amounts of the toxic and expensive reagent osmium tetroxide was often necessary. The Upjohn dihydroxylation is still often used for the formation of cis-vicinal diols; however, it can be slow and is prone to ketone byproduct formation. One of the peculiarities of the dihydroxylation of olefins is that the standard "racemic" method (the Upjohn dihydroxylation) is slower and often lower yielding than the asymmetric method (the Sharpless asymmetric dihydroxylation).
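Purely as an illustration of the net transformation (alkene to vicinal diol), the following RDKit sketch encodes the overall outcome as a generic reaction SMARTS; it is a hypothetical toy model that ignores the osmate-ester mechanism, the NMO re-oxidation cycle, and the cis stereochemistry of the real reaction.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Net transformation only: C=C -> C(OH)-C(OH); stereochemistry and mechanism are not modeled.
dihydroxylation = AllChem.ReactionFromSmarts("[C:1]=[C:2]>>[C:1](O)[C:2](O)")

alkene = Chem.MolFromSmiles("C/C=C/C")  # (E)-2-butene as an example substrate

products = set()
for product_set in dihydroxylation.RunReactants((alkene,)):
    for mol in product_set:
        Chem.SanitizeMol(mol)
        products.add(Chem.MolToSmiles(mol))

# Expected product is butane-2,3-diol; the exact SMILES string depends on RDKit canonicalization.
print(products)
```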
Improvements to Upjohn dihydroxylation
In response to these problems, Stuart Warren and co-workers employed similar reaction conditions to the Sharpless asymmetric dihydroxylation, but replacing the chiral ligands with the achiral quinuclidine to give a racemic reaction product (assuming an achiral starting material is employed). This approach takes advantage of the fact that when using the Sharpless alkaloid ligands, the dihydroxylation of alkenes is faster and higher yielding than in their absence. This phenomenon became known as "ligand accelerated catalysis", a term coined by Barry Sharpless during the development of his asymmetric protocol.
See also
Milas hydroxylation
Sharpless asymmetric dihydroxylation
References
Organic oxidation reactions
Name reactions | Upjohn dihydroxylation | [
"Chemistry"
] | 406 | [
"Name reactions",
"Organic oxidation reactions",
"Organic reactions"
] |