Dataset columns: id: int64 (39 to 79M); url: string (length 31 to 227); text: string (length 6 to 334k); source: string (length 1 to 150); categories: list (length 1 to 6); token_count: int64 (3 to 71.8k); subcategories: list (length 0 to 30)
979,213
https://en.wikipedia.org/wiki/Derbyshire%20lead%20mining%20history
This article details some of the history of lead mining in Derbyshire, England. Background It has been claimed that Odin Mine, near Castleton, one of the oldest lead mines in England, may have been worked in the tenth century or even as early as Roman Britain, but it was certainly productive in the 1200s. Derbyshire lead mines are mentioned in the Pipe Rolls. Recent analysis of a Swiss ice-core extracted in 2013 indicates that levels of lead in atmospheric pollution between 1170 and 1216 were as high as those during the Industrial Revolution and correlate accurately with lead production from Peak District mines, the main European source at the time. On one of the walls in Wirksworth Church is a crude stone carving, found nearby at Bonsall and placed in the church in the 1870s. Probably executed in Anglo-Saxon times, it shows a man carrying a kibble or basket in one hand and a pick in the other. He is a lead miner. The north choir aisle of Wirksworth church is dominated by a far more ostentatious monument, a large ornate alabaster chest tomb, a memorial to Ralph Gell of Hopton, who died in 1563. The simple figure of the miner bears witness to the fact that for centuries the people of Wirksworth and their neighbours relied on lead mining. Ralph Gell's imposing tomb is evidence that a few people became rich and powerful from the trade. While Derbyshire lead made Gell and others rich, for poor families it was both a living and an adventure, with the possibility of a better life from a lucky find. The industry was organised in a way that gave a measure of independence to many of them. Mining was hard and dangerous work: death, illness and injury came from poisonous lead dust, underground floods, falling rock, methane gas in shale workings and lack of oxygen in badly ventilated galleries. From the later years of the 17th century gunpowder introduced a further hazard. Nonetheless the thousands of shafts, hillocks and ruined buildings in the limestone landscape of the old lead mining areas, and the miles of galleries underground, make it plain that the veins of lead were intensively exploited. In the words of a petition to King Charles I "many thousand people are dailie imployed in the lead mynes, to the greatt proffitt of your Majestie ... and to the whole Comonwealth ... in getting great quantities of lead for the use of the Kingdome in generall, and in transporting the rest to forraigne Nations...". By the 17th century lead was second in importance in the national economy only to wool. It was essential for the roofs of public buildings and the new houses being built in every part of the country by the nobility and gentry. All houses, including farmhouses and cottages by then, had glazed windows, with lead glazing bars. It was the only material for water storage and piping. Every army used it as ammunition. There was a thriving export trade as well as the home market and the Wirksworth area was the main source of the ore. Wirksworth was the administrative centre of one of the hundreds, local government units, of Derbyshire. Uniquely, the Wirksworth Hundred was still known by the archaic term Wapentake. Lead ore was Crown property in most places and the mining area of Derbyshire under royal control was known as the King's Field, with two separately administered divisions, the High and Low Peaks, each further divided into liberties, based on parishes. Wirksworth Wapentake was the Low Peak area of the King's Field. 
At different times there were liberties based on Wirksworth, Middleton-by-Wirksworth, Cromford, Brassington, Matlock, Elton, Middleton-by-Youlgreave, Bonsall, Hopton and Carsington, and from 1638 until 1654 there was a separate liberty for the Dovegang, on Cromford Moor which had become extremely productive after being drained by the first of the Derbyshire drainage schemes, or soughs. There had always been lead mining in and around Wirksworth. This is limestone country and the fissures characteristic of limestone contained rich deposits of minerals, and especially of galena: lead ore. The Romans mined there and left inscribed "pigs", or ingots, of smelted lead as evidence. In the 9th century Repton Abbey owned mines at Wirksworth and when the abbey was destroyed by Danish troops in 874 they were taken by their Mercian puppet king Ceolwulf. They remained in royal hands after the Norman conquest of England and paid royalties to the Crown for centuries afterwards. Lead mining and smelting was an established industry in 1086, when the mines at Wirksworth and Bakewell were recorded in the Domesday Book. Mining methods Lead had traditionally been found by following veins from surface outcroppings, particularly in "rakes" or vertical fissures. By the 17th century, however, most surface lead had been mined and prospecting was achieved by less direct methods. Miners searched for surface signs that were similar to known lead-rich areas, they checked ploughed and other disturbed land for traces of ore, and they checked for signs in plants and trees and poorly performing crops, since lead is poisonous to most living things. They used probes to check for signs of ore in soil a few feet under the surface and dug exploratory holes or trenches in promising places. This was usually done to choose the best places to sink shafts ahead of existing working and the rules defined when and where these activities could be carried out. The miners sank their shafts in turns of up to , each turn being a few yards away from the bottom of the preceding one, along a gallery which may have been the working level reached by the earlier shaft. They climbed up and down their shafts using either footholes in the shaft walls or stemples, wooden steps built into the sides, an exhausting and dangerous way to start and finish a day's work. These climbing shafts were usually within the miners' coe, the limestone-walled cabin in which they stored tools, a change of clothes and food. Where the mine was on a hillside the vein could often be reached via an adit or tunnel driven into the slope. Ore was brought to the surface up a winding shaft outside the coe. The miners' equipment included picks, hammers and wedges to split the rock, wiskets or baskets to contain it, corves or sledges to drag it to the shaft bottom, and windlasses or stows, to lift it to the surface. In later years underground transport was improved by replacing corves by wagons, often running on wooden or metal rails. A good example of an 18th-century wooden railway can be found in the Merry Tom mine, near Via Gellia. The miners avoided the need to excavate hard rock whenever they could and where it was unavoidable sometimes resorted to fire-setting. A fire was built against the rock face after mining had finished for the day and allowed to burn through the night. Fragmentation of the heated rock was increased by throwing water on to it. 
The rule about fire-setting only after the end of the day's work was important because in the confined mines the smoke was deadly. Fire-setting was a skilled technique and was used sparingly for that reason as well as because of the disruption caused by the smoke and the danger from splintering rock. 16th-century technical change After a mid-16th-century slump the industry recovered, new mines were opened on Middleton Moor, and production increased, a recovery mainly due to technical developments. While traditional extraction methods had persisted there were vital changes in the ways in which ore was prepared for smelting and in the smelting process itself. Bole smelting The traditional smelter was a bole, a large fire built on a hill and relying on wind power. It functioned best with large pieces of rich ore known as bing and could not deal with anything small enough to pass through a half-inch mesh riddle. The bole smelter therefore resulted in large amounts of ore accumulating on waste heaps. It required two days of strong wind and could only function when the conditions were favourable. Smelting mills In the late 16th century wind power was abandoned and the smelting blast was provided by a bellows driven first by foot, to an ore hearth, and later by water-power in a smelting mill. The mills were fuelled by "white coal", which was in fact kiln-dried branch wood. Wood was preferred to charcoal for the main furnace, which smelted ore from the mines, as charcoal generated more heat than this furnace required. Drying the wood eliminated smoke, which would have made it difficult for the smelters to keep the necessary close observation of the process. Charcoal was used in a second furnace, which resmelted the slag from the first, and required greater heat. Draught for the furnaces came from two large bellows driven by the water wheels. Lead ore of all grades was first broken or ground again into finer particles and rewashed to produce very pure ore for the furnace. These smelters could deal with much finer particles of ore and new techniques were introduced to provide them. Dressing Before a miner could sell his ore he had to dress it. Dressing was the process of extracting the ore from the rock in which it was embedded and washing it, a further refining process. In the days of bole smelting the ore was roughly washed clean of waste minerals and dirt before being riddled for bing ore. The ore for the new smelters was smashed, or crushed, into pieces about the size of peas. This was done by hand, using a hammer called a bucker or, in larger mines, on a crushing circle, where a horse dragged a roller round a paved circle on which the ore was placed. Crushed ore was washed either by running water over it in a sloping trough called a buddle or by placing it in a sieve fine enough to prevent any ore particles passing through. The sieve was then plunged several times into a trough. In each case the object was to allow the heavier, lead-rich, particles to sink, enabling those containing lighter, unwanted minerals to be skimmed off the top and removed. These processes were then repeated at the smelter. By the 17th century new mines were being opened, shafts driven deeper, and old waste heaps were yielding new supplies for the smelters. Mining customs Everything about the old lead industry, from the mining of ore to its sale, stemmed from the ancient claim of the monarch to all mineral rights. 
The whole structure was designed to enable the Duchy of Lancaster, a royal possession, to collect the king's royalties and, since these were farmed out, the miners paid them to the king's farmer. By the 17th century the local holder of the mineral rights was also the barmaster, who ran the industry, helped by deputies responsible for the liberties, and by the miners' juries of the Barmote Court. The lead industry is long gone, but its traditions are still maintained and the barmaster and the jury still meet in the Moot Hall in Wirksworth. It was the royal possession of the mineral rights and the royal wish to encourage lead mining, that dictated the two characteristic features of so-called "free mining". Any man who could demonstrate to the barmaster that he had discovered a significant amount of ore was allowed to open a mine and retain the title to it as long as he continued to work it, and, secondly, mining took precedence over land ownership. No land owner or farmer could interfere with lead mining, though there were many attempts to limit its damage. In 1620 the Duchy of Lancaster's tenants at Brassington complained that lead mining was poisoning their cattle. In 1663 the Brassington manor court forbade miners from taking water from the village well to wash ore, on pain of a fine of 1 shilling, and in 1670 imposed fines of 3/4d on miners who left shafts uncovered or raised heaps of soil and waste minerals against fences, allowing cattle to climb over them. But the customs raised the possibility of ordinary families making a living independently of farmers or other employers and in the regular conflict between miners and landowners in the Wirksworth area the miners usually managed to hang on to them, though they did lose some of their fights. King's farmers and chief barmasters The coveted and valuable farm of the Duchy of Lancaster's right to the lead mine duties, coupled as it was with the office of chief barmaster, endowed its owner with both a considerable income and authority over the running of the industry. It was always resold at a much higher price than that charged by the Duchy, which was £110 plus annual payments of £72 for the duties and £1-6-8d for the barmastership. Chief barmasters and the 24 At dinner in Wirksworth after meetings of the 17th-century Barmote Court, the landlord of the inn had three tables for those attending the Court. There was the "24 table", where the members of the 24-man jury sat, and where he charged 8d per head, "the barmasters' table", at 10d a head, and a table where "gentlemen's dinners" cost 1 shilling each. The gentlemen drank sack or claret with their dinner; the men were served with beer. The bill was paid by the king's farmer and chief barmaster. There were usually about a dozen gentlemen, some of whom were members of the jury, while others were there to present a case to the Court. Also among the gentlemen were the steward of the court, who was a lawyer and who conducted the sessions. When the chief barmaster for the Wapentake, always a man of wealth and rank, was a local gentleman such as Sir John Gell of Hopton or his son John, the 2nd baronet, he often attended the Court himself. If the current chief barmaster was an absentee member of the gentry or nobility he relied on his deputy barmasters. In addition to helping the barmasters to carry out their duties the 24 jurors brought practical experience to bear when the Barmote Court was adjudicating in disputes and trials. 
The main requirement of the jurymen was that they should be knowledgeable in mining matters and they included both working miners and, when it was thought necessary, local gentry. Deputy barmasters The deputy barmasters whom the chief barmaster appointed were experienced local men. Some of them were yeoman farmer/miners and others local gentlemen. The deputy barmasters actually ran the system. It was they who initiated much of the business of the Court. It was they, in administering the rules, who determined whether a miner should have a particular mine or whether another should lose one. Their duties required them to be able to read, write and keep account of granting and removing title to mines and of ore production and the duties levied on it. As ore was brought from a mine, it was measured by the dish and the barmaster collected each 13th dish, a royalty or duty known as lot. This was the barmaster's reckoning. A further duty of sixpence a load (9 dishes) was paid by the merchants who bought the ore from the miners. This second duty was called cope. Giving a mine The barmaster or his deputy granted title in a mine, the usual name for which was grove or groove, on receipt of proof that it was viable. The proof was a standard container, a dish, filled with about of ore from the mine in question. Every dish was calibrated by the barmaster twice a year against a brass standard dish. The miner thus granted title to the mine was said to have freed it, either for old if a development in an existing mine, or for new in the case of a new discovery. He was given permission to work 2 meers of ground, known as founder meers, with no restriction on width or depth. A third meer was the king's, and other miners were each allowed to open a further meer, taker meers, along the vein. The miner marked each meer with his possessions or stows (a miniature version of the stows or windlass used to wind the ore from the shaft). A meer was , in the Wirksworth Wapentake. Since the course of a vein of lead was unpredictable, there were many disputes caused by one group of miners following a vein into another mine. There were occasions when possession was disputed by physical means. Title-holding and record keeping The deputy barmasters were responsible for settling disputes over ownership or of arresting or suspending operation of mines pending decisions of the Barmote Court. They could withdraw title whenever a mine was left unworked. They checked the mines regularly and used their knives to nick the stows at any neglected mine. After three nicks at weekly intervals title could be transferred to another miner. The mining rules required working shareholders in a mine to pull their weight. Any who did not were dispossessed, after a warning at the Barmote Court. Typical was this injunction from the court on 2 April 1630: "Wee saie that Thomas Taylor Henry Lowe and John Wooley shall come within tenn daies of warning given them by the Barrmaster and shall keepe Thomas Redforde companie at their groves in Home Rake or else to loose theire parte." The deputy barmasters kept records of all changes of title and of the amounts of ore measured and the amounts of lot ore and cope collected at their regular reckonings at the mines. The lot and cope accounts involved quite complicated arithmetic. 
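As a rough illustration of the reckoning arithmetic described above, the sketch below applies the duties as the article states them: lot taken as every thirteenth dish of ore measured, and cope of sixpence per nine-dish load paid by the buyer. The output figures are invented, and charging cope only on the ore remaining after the lot was deducted is a simplification of mine, not something the article specifies.

```python
# Worked sketch of a barmaster's reckoning, using the duties described above:
# lot = every 13th dish of ore measured, cope = 6 pence per load of 9 dishes,
# paid by the buyer. The example mine's output is invented.

DISHES_PER_LOAD = 9
LOT_EVERY_NTH_DISH = 13
COPE_PENCE_PER_LOAD = 6

def reckoning(dishes_measured):
    lot_dishes = dishes_measured // LOT_EVERY_NTH_DISH      # duty taken in kind
    loads_sold = (dishes_measured - lot_dishes) / DISHES_PER_LOAD
    cope_pence = loads_sold * COPE_PENCE_PER_LOAD           # duty paid in cash by the buyer
    return lot_dishes, loads_sold, cope_pence

if __name__ == "__main__":
    dishes = 351                            # hypothetical output since the last reckoning
    lot, loads, cope = reckoning(dishes)
    pounds, rem = divmod(int(cope), 240)    # 240 pence to the pound, 12 pence to the shilling
    shillings, pence = divmod(rem, 12)
    print(f"{dishes} dishes measured: {lot} dishes of lot ore, "
          f"{loads:.1f} loads sold, cope = {pounds}l {shillings}s {pence}d")
```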
The information given included the period covered, the name of the miner or mine (occasionally both were given), the amount of ore mined, the number of dishes of lot ore received, the amount of ore sold to each buyer and the sum of money chargeable to each buyer for cope. Traditional methods were used at the reckonings; barmasters carried knives "to worke uppon a sticke the nomber of dishes of oare as they were measured which is usuall to be done at a reckoning". Many of their records have survived. Accidents In conjunction with the jury of 24 sitting at the Barmote Courts, the deputy barmasters adjudicated in disputes and enforced compliance with the customs of the mines. Their duties extended to acting as the coroner in the case of fatal accidents, where a specially summoned jury of twelve or thirteen local miners decided the cause of death. In an 18th-century example the Brassington barmaster, Edward Ashton, followed the rules after a death in Throstle Nest mine. Wirksworth Wapentake March 26th 1761. We, whose names are under written, being this day summoned by Mr. Edward Ashton, Bar-Master for the Liberty of Brassington, to a groove called by the name of Throstle Nest on Brassington Pasture; to enquire into the cause of the death of T.W. now lying before us; accordingly we have been down the shaft to the Foot thereof, and down one sump or turn to the foot thereof, and on a gate North-wardly about sixteen yards to the Forefield, where the deceased had been at work; and by the information of William Briddon who was working near him; it appears that a large stone fell upon him out of the roof, and it is our opinions the said stone was the cause of his death. Structure of industry The free mining arrangement under the rules of the Duchy of Lancaster was the normal state of affairs in the Duchy manor of Wirksworth. The Duchy's lessee of the mineral rights at the end of the 16th century, Gilbert, Earl of Shrewsbury, had established his right to the dues of lot and cope against attempts by local landowners to assert right to mines on their land. Shrewsbury's success embedded the old rules and facilitated free mining. There was one independent area within the Wapentake, Griffe Grange, near Brassington, held by the Gell family of Hopton since Ralph Gell had leased it from Dale Abbey and bought it from the Crown Commissioners at the dissolution of the religious houses by Henry VIII. The Gells, however, ran their mines under similar rules to those in the Duchy, the only difference being that the miners paid their dues to them. Attempts by other landowners to establish the same rights as the Gells were largely unsuccessful. An example occurred at Elton, where the landowner, Francis Foljambe, prevented the application of Duchy rules, employing wage labourers in the Elton mines and seeking sanction for his action in the courts. The Duchy Court, however, issued an injunction against him in 1627, instructing him "to conveane or execute noe other suite or suites hereafter att the Comon Lawe ... concerning the lott and copp and lead mynes in Elton". This ruling was applied in all the Duchy liberties, though there was a renewal of opposition from landowners after the Restoration of 1660. For historical reasons the structure of the industry was different in the High Peak where, mainly because of very long leases, there had been a blurring of the Duchy's authority, and the two largest landowners, the Manners and Cavendish families, maintained claims to mining rights and dues. 
The miners fought hard, physically and in the courts, to obtain the free mining rights enjoyed in the Wirksworth Wapentake. The Manners family met them head-on, refused all attempts to establish free mining, and employed miners as day labourers in their mines. Their Cavendish neighbours at Chatsworth, after a period of conflict, adopted the same pattern as the Gells at Griffe Grange, collecting the dues from mines run by Duchy of Lancaster rules. The operation of the old rules and customs in the Wirksworth Wapentake did not prevent the development of a complicated structure there. The barmasters' ore accounts, identifying mines and/or proprietors, reveal a mixture of free mining, ownership of large mines by rich entrepreneurs and the rewashing of old spoil heaps. Where there was a rich source of ore and especially where access required dewatering, the mines were owned by venture capitalists employing miners either as contracted groups, who considered themselves as free miners negotiating a price for their work or, more rarely, as wage labourers. The barmasters' accounts for 1653 show that the ore from the Brassington, Middleton and Wirksworth liberties, all low producers at the time, consisted of small amounts mined by a large number of names. Clearly, in these liberties, at this time, it was the small-time miners, most of whom would have had other sources of income, usually farming, who were paying their dues and selling to the lead merchants and smelters. In a fourth liberty, Cromford, the picture was different. With the Dovegang dewatered by Vermuyden's Sough (see below), the output there dwarfed the combined output of the other three liberties, and 51% of it came from mines owned by the rich lead merchant Lionel Tynley. 88% came from four sources, while the rest was mined by 45 independent miners. Finally, 6,108 loads (about 1,527 tons), or 23% of the total ore sold in the four liberties, was won from old hillocks by so-called "cavers". Pollution Lead is poisonous to both plants and animals. For people the smelting processes are the most dangerous – the restored smelter at Spitewinter, near Chesterfield, stands a few yards from Belland Lane, belland being lead poisoning. The current owners of the smelter on the site of the former Mill Close mine at Darley Bridge have bought much of the adjoining land and turned arable and pasture into woodland, to avoid the danger to crops and animals. The danger to plants and animals, particularly from washing or "buddling", has been known for centuries and the nuisance to the people of Brassington described above was typical of the conflict between farmers and miners. In the 1680s Sir John Gell II gave his opinion during such a dispute. "For buddling ... I have heard that miners have been indicted for it, and the freeholders and occupiers of land are much prejudiced by it. It sets the cattle upon the belland, which is destructive to the cattle and horses and often kills them." In 1794 several groups of Wensley miners who had buddled mine waste in the river Derwent above and below Darley Bridge were brought to court, accused of polluting the river. Witnesses described the river as being muddied as far down-river as a mile below Cromford Bridge, and a Matlock publican claimed to have been prevented from his usual practice of using river water for his brewing by the state of the Derwent. He had had to sink a well to stay in business. Mining law was cited by both sides. 
The miners quoted the custom which allowed them to wash their ore, while plaintiffs replied with the law which stipulated that sludge from washing should be emptied into "some convenient place within their quarter cord (which is a space of seven yards and a quarter, or the fourth part of a meer, on each side of their vein)" to prevent pollution of the adjacent land. The miners, who had carted their ore to the river, as being easier than carting water to their mine – their mine was dry and there is a steep hill from Wensley to the Derwent – lost their case. In addition to the mining law to prevent water pollution quoted in the Wensley case, often ignored by the miners, attempts to prevent pollution of farmland included tree planting to deter cattle from grazing near mining operations – there are many examples of trees planted on the lines of veins in the old mining areas as well as the recent afforestation at Darley Bridge. Smelting mills had chimneys to dissipate the fumes from the ore hearths. These were only partly successful as the mills were often sited in or near to settlements, which suffered from lead deposits from the chimneys. The mills also polluted the streams which powered their bellows. Cupolas, described below, conveyed their emissions via tunnels to chimneys, which were often a considerable distance from the smelter. The limited success achieved by these efforts is exemplified by the naming of Belland Lane. Mine drainage Until the 17th century mining had usually been abandoned when the work reached the water table. Efforts at draining lead mines by horse-powered pumps, or "engines", had little success. In the later years of the industry mines were successfully drained by hydraulic, steam, internal combustion and electric power, but the first successes were achieved by soughs, drainage tunnels driven into flooded veins to allow the water to run off. Dr Rieuwerts has provided a comprehensive gazetteer of the Derbyshire lead mining soughs. By lowering the water table and opening up large new deposits of lead ore, they transformed the industry. The first sough, designed by Sir Cornelius Vermuyden, knighted for his work in draining the East Anglian fens, was driven over a twenty-year period from a point on Cromford Hill, between Cromford and Wirksworth, into an area called the Dovegang. When it was completed in 1652 there was an immediate jump in ore production in the area. Vermuyden's was followed by a succession of soughs which by the end of the century had drained enough of the mines in the Wirksworth Wapentake to cause a dramatic rise in production in the whole area. The most important were the Cromford Sough, which was over thirty years in driving, between 1662 and 1696, and was continued in the 18th century, and Hannage Sough, begun in 1693 and also continued into the next century. The Cromford Sough provided the power for Richard Arkwright's mills at Cromford, the first of which was built in 1771. Also among the important 17th century soughs were the Raventor, begun in 1655, Bates (1657–84), Lees (1664), and Baileycroft (1667–73). The Baileycroft Sough drained mines in Wirksworth. Those in the area just to the north of Wirksworth called the Gulf were drained by the Raventor and Lees Soughs. The Bates and Cromford Soughs drained mines on Cromford Moor – Bates Sough had reached the Dovegang by 1684. Hannage Sough drained the area to the east of Yokecliffe Rake, on the south of Wirksworth. 
Drainage of the mines in the whole of the Wirksworth area was eventually accomplished by the Meerbrook Sough, begun at the level of the river Derwent in 1772, at a time when lead-mining ventures had become only intermittently profitable. The entrance to this sough is wide and high and has a keystone inscribed "FH 1772". FH was Francis Hurt of Alderwasley, smelter, lead mine shareholder, iron-master and the main shareholder in the sough. It still discharges to a day, and by the 1830s had so reduced the flow from the Cromford Sough that in 1846 Richard Arkwright's successor had to end production at the Cromford mills. In other areas the Mill Close mines between Winster and Wensley, and the mines of Youlgreave were soughed. Cupola smelting The mills which had superseded the ancient bolehills in the late 16th century, a development described above, were themselves superseded in the 18th century by the gradual introduction of a new type of furnace known as the cupola. The old mills had a number of disadvantages. Their characteristic overheating and dissemination of polluting fumes made it necessary to close the smelter down at the end of each day's work. The hearth burned out quickly and regular weekly repairs or rebuilding were necessary – between 24 June and 29 September 1657, for instance, thirteen new hearths were required at the Upper Mill in Wirksworth. Water-powered smelting mills were restricted to riverside sites and "white coal" fuel required a good supply of timber. By the 18th century timber supplies were running out and, where coke or coal was used because of timber shortages, impurities, particularly sulphur, were introduced into the lead. It was, finally, less efficient than the cupola. The cupola was a reverberatory furnace. The fuel was burned in a combustion chamber at the side of the furnace, separate from the "charge" of ore, thus avoiding any contamination. This removed the disadvantage in using coal, which was far more plentiful than timber. The ore was loaded from a hopper into a concave furnace with a low, arched roof and a tall chimney or a flue at the opposite end from the combustion chamber. The flames and heated gases from the fuel were drawn across the charge by the draught from the chimney and beaten down by reverberation from the low roof. Slag on the surface of the molten lead was raked off and the lead itself poured into an iron pot at the side, before being ladled into moulds. Several factors contributed to the cupola's greater efficiency than the smelting mill. Unlike the smelting mill, the cupola could be operated continuously. Since the air flow over the ore was less powerful than that from the bellows of the blast furnace fewer lead particles were blown away. Further lead was saved by the fact that since the fuel and the charge were separate none of the lead was lost into the ash. Since no water power was needed the cupola had a fourth theoretical advantage of being freed from the riverside location of the blast furnace, and able to be placed in the most convenient site for supply of ore and coal. However, the higher temperatures needed to melt the slag recovered from the primary melt required a water-powered furnace and, since slag mills tended to be placed next to the cupolas, most cupolas remained in riverside sites. Many cupolas had long horizontal flues, which were introduced to trap pollutants before they could be discharged into the air. Since the pollutants included metal vapour, the sweepings of the flue could also be recovered for resmelting. 
Closure The Derbyshire lead industry declined after the late 18th century because of worked-out veins, increased production costs and the discovery of much cheaper foreign sources. The industry was protected from this foreign ore by import duty in the late 18th and early 19th centuries. A reduction in the duty in 1820 and its abolition in 1845 brought a steep rise in the volume of lead imported into England and accelerated the local industry's decline. There were still bursts of high production, and indeed the output of certain mines during the 18th and 19th centuries exceeded anything achieved in the 17th century; over 2658 loads (about 641 tons or 651 metric tonnes) were mined at Brassington, traditionally an area of low output, in 1862. At a meeting of the Barmote Court in Wirksworth in 1862 one mine owner announced "that by perseverance for upwards of twenty years, they had at last found the long sought for treasure, which he hoped would be prosperous, and they should be able to continue employing, as they are at the present time, upwards of 100 men at one mine in Brassington". However, by 1901 the number of men employed in all the Derbyshire lead mines had fallen to 285 most of whom worked at the Mill Close Mine at Darley Bridge. Mill Close, the biggest lead mine in the country, took the Derbyshire lead industry into the 20th century, and just before its enforced closure in 1939, caused by flooding, employed about 600 men. The smelter at Mill Close, established in 1934, was bought in 1941 by H J Enthoven and Sons, a London-based lead producer, and still operates. See also Beans and Bacon mine Magpie Mine Odin Mine Lathkill Dale (lead mining) Peak District Mining Museum Killhope Wheel, Lead Mining Museum, Co. Durham. Mining in Roman Britain Metal mining in Wales References Footnotes Bibliography Barnatt, J. "Prehistoric and Roman mining in the Peak District". Mining History 14, 2, Winter 1999, pp. 19–30 Burt, R. The British lead mining industry. Dyllansow Truran, 1984. Cooper, B. Transformation of a valley. Heinemann, 1983. Crossley, D. & Kiernan, D. "The lead-smelting mills of Derbyshire". Derbyshire Archaeological Society Journal 112, 1992, 6–47. Ford, T.D. and Rieuwerts, J.H. Lead mining in the Peak District. 4th ed. Landmark, 2000. Hardy, W. Miner's guide. 2nd ed. 1762 Henstock, A. "T'owd mon war frum Bonser". Mining History 14, 2, Winter 1999, pp. 68–69 Kiernan, D. The Derbyshire lead industry in the sixteenth century. Derbyshire Record Society, 1989. Kiernan, D.& R. Van de Noort, 'Bole Smelting in Derbyshire'. in L. Willies and D. Cranstone (eds.) Boles and Smeltmills: report of a seminar on the History and Archaeology of Lead Smelting held at Reeth, Yorkshire, 15–17 May 1992. Historical Metallurgy Society: Special Publications, 1992, pp. 19–21. Nixon, F. The industrial archaeology of Derbyshire, 1969 Rieuwerts, J.H. "Early gunpoder work in Longe or Cromford Sough, Derbyshire, 1662–1663 and 1676–1680". Mining History 13, 6, Winter 1998, pp. 1–5 Rieuwerts, J.H. History and gazetteer of the lead mine soughs of Derbyshire. Sheffield, 1987. Slack, R. "Gentlemen barmasters: a seventeenth century mining dynasty". Peak District Mines Historical Society Bulletin 12, 4, Winter 1991, pp. 203–205 Slack, R. Lead miner's heyday: the great days of mining in Wirksworth and the Low Peak of Derbyshire. Chesterfield, 2000. Slack, R. "A survey of lead mining in Wirksworth Wapentake, 1650". Peak District Mines Historical Society Bulletin, 10, 4, 1988, pp. 
213–216 (Transcript of The National Archives E 317/Derb/29A. ) Willies, L. "Derbyshire lead smelting in the 18th and 19th centuries". Peak District Mines Historical Society Bulletin 11,1,1990, pp. 1–19 Willies, L. "Firesetting technology". Peak District Mines Historical Society Bulletin 12,3, 1994, pp. 1–8 Willies, L. Lead and lead mining, Shire, 1982 Willies, L., Gregory, K., Parker, H. Millclose: the mine that drowned. Scarthin Books and Peak District Mines Historical Society, 1989. Wood, A. The politics of social conflict: the Peak Country, 1520–1770. Cambridge University Press, 1999. History of Derbyshire Mining in Derbyshire Peak District History of mining in the United Kingdom Archaeological sites in Derbyshire Lead mining in the United Kingdom Smelting
Derbyshire lead mining history
[ "Chemistry" ]
7,744
[ "Metallurgical processes", "Smelting" ]
979,229
https://en.wikipedia.org/wiki/Working%20range
Each instrument used in analytical chemistry has a useful working range. This is the range of concentration (or mass) that can be adequately determined by the instrument, i.e. the range over which the instrument provides a useful signal that can be related to the concentration of the analyte. All instruments have an upper and a lower working limit. Concentrations below the lower working limit do not provide enough signal to be useful, and concentrations above the upper working limit provide too much signal to be useful. When calibrating an instrument for use, the experimenter must be familiar with both the lower and upper limits of the working range of the chosen instrument; results obtained from a sample whose concentration lies outside the working range are often statistically uncertain. References Analytical chemistry
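As a simple illustration of the idea, the sketch below flags results that fall outside a calibrated working range. It assumes a linear calibration; the slope, intercept and range limits are invented values, not tied to any particular instrument.

```python
# Minimal sketch: flag results that fall outside an instrument's calibrated
# working range. The calibration and limits below are illustrative only.

def concentration_from_signal(signal, slope, intercept):
    """Invert a linear calibration: signal = slope * concentration + intercept."""
    return (signal - intercept) / slope

def check_working_range(concentration, lower_limit, upper_limit):
    """Return a label describing whether a result can be trusted."""
    if concentration < lower_limit:
        return "below lower working limit: signal too weak to be useful"
    if concentration > upper_limit:
        return "above upper working limit: signal saturated, dilute and re-run"
    return "within working range"

if __name__ == "__main__":
    slope, intercept = 0.052, 0.003     # hypothetical absorbance vs mg/L calibration
    lower, upper = 0.5, 50.0            # mg/L, illustrative working range

    for signal in (0.010, 0.60, 3.50):
        c = concentration_from_signal(signal, slope, intercept)
        print(f"signal={signal:.3f} -> {c:.2f} mg/L, {check_working_range(c, lower, upper)}")
```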
Working range
[ "Chemistry" ]
139
[ "nan", "Analytical chemistry stubs" ]
979,251
https://en.wikipedia.org/wiki/Orbital%20maneuver
In spaceflight, an orbital maneuver (otherwise known as a burn) is the use of propulsion systems to change the orbit of a spacecraft. For spacecraft far from Earth, an orbital maneuver is called a deep-space maneuver (DSM). When a spacecraft is not conducting a maneuver, especially in a transfer orbit, it is said to be coasting. General Rocket equation The Tsiolkovsky rocket equation, or ideal rocket equation, can be useful for analysis of maneuvers by vehicles using rocket propulsion. A rocket applies acceleration to itself (a thrust) by expelling part of its mass at high speed. The rocket itself moves due to the conservation of momentum. Delta-v The applied change in velocity of each maneuver is referred to as delta-v (Δv). The delta-v for all the expected maneuvers of a mission is estimated and summarized in a delta-v budget. With a good approximation of the delta-v budget, designers can estimate the propellant required for planned maneuvers. Propulsion Impulsive maneuvers An impulsive maneuver is the mathematical model of a maneuver as an instantaneous change in the spacecraft's velocity (magnitude and/or direction), as illustrated in figure 1. It is the limit case of a burn to generate a particular amount of delta-v, as the burn time tends to zero. In the physical world no truly instantaneous change in velocity is possible, as this would require an "infinite force" applied during an "infinitely short time", but as a mathematical model it in most cases describes the effect of a maneuver on the orbit very well. The offset of the velocity vector after the end of the real burn from the velocity vector at the same time resulting from the theoretical impulsive maneuver is caused only by the difference in gravitational force along the two paths (red and black in figure 1), which in general is small. In the planning phase of space missions designers will first approximate their intended orbital changes using impulsive maneuvers, which greatly reduces the complexity of finding the correct orbital transitions. Low thrust propulsion Applying a low thrust over a longer period of time is referred to as a non-impulsive maneuver. 'Non-impulsive' refers to the momentum changing slowly over a long time, as in electrically powered spacecraft propulsion, rather than by a short impulse. Another term is finite burn, where the word "finite" is used to mean "non-zero", or, practically, again: over a longer period. For a few space missions, such as those including a space rendezvous, high fidelity models of the trajectories are required to meet the mission goals. Calculating a "finite" burn requires a detailed model of the spacecraft and its thrusters. The most important details include: mass, center of mass, moment of inertia, thruster positions, thrust vectors, thrust curves, specific impulse, thrust centroid offsets, and fuel consumption. Assists Oberth effect In astronautics, the Oberth effect is where the use of a rocket engine when travelling at high speed generates much more useful energy than one at low speed. The Oberth effect occurs because the propellant has more usable energy (due to its kinetic energy on top of its chemical potential energy) and it turns out that the vehicle is able to employ this kinetic energy to generate more mechanical power. It is named after Hermann Oberth, the Austro-Hungarian-born German physicist and a founder of modern rocketry, who apparently first described the effect. 
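The rocket equation and delta-v budget just described can be illustrated with a short numerical sketch. The specific impulse, dry mass and the budget entries below are assumed for illustration and are not taken from the article.

```python
import math

# Sketch of the Tsiolkovsky rocket equation and a delta-v budget. All numbers
# (Isp, masses, budget entries) are illustrative assumptions.

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, m_initial, m_final):
    """Ideal rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    return isp_s * G0 * math.log(m_initial / m_final)

def propellant_for_budget(isp_s, dry_mass, dv_budget):
    """Propellant mass needed so the spacecraft can deliver dv_budget."""
    return dry_mass * (math.exp(dv_budget / (isp_s * G0)) - 1.0)

if __name__ == "__main__":
    # Hypothetical spacecraft: 1200 kg dry, bipropellant engine with Isp = 320 s.
    budget = [("orbit insertion", 1500.0), ("plane change", 150.0),
              ("station-keeping, 10 yr", 500.0)]          # m/s, illustrative
    total = sum(dv for _, dv in budget)
    prop = propellant_for_budget(320.0, 1200.0, total)
    print(f"total delta-v budget: {total:.0f} m/s")
    print(f"propellant required: {prop:.0f} kg")
    print(f"check via rocket equation: {delta_v(320.0, 1200.0 + prop, 1200.0):.0f} m/s")
```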
The Oberth effect is used in a powered flyby or Oberth maneuver where the application of an impulse, typically from the use of a rocket engine, close to a gravitational body (where the gravity potential is low, and the speed is high) can give much more change in kinetic energy and final speed (i.e. higher specific energy) than the same impulse applied further from the body for the same initial orbit. Since the Oberth maneuver happens in a very limited time (while still at low altitude), to generate a high impulse the engine necessarily needs to achieve high thrust (impulse is by definition the time multiplied by thrust). Thus the Oberth effect is far less useful for low-thrust engines, such as ion thrusters. Historically, a lack of understanding of this effect led investigators to conclude that interplanetary travel would require completely impractical amounts of propellant, as without it, enormous amounts of energy are needed. Gravity assist In astrodynamics a gravity assist maneuver, gravitational slingshot or swing-by is the use of the relative movement and gravity of a planet or other celestial body to alter the trajectory of a spacecraft, typically in order to save propellant, time, and expense. Gravity assistance can be used to accelerate, decelerate and/or re-direct the path of a spacecraft. The "assist" is provided by the motion (orbital angular momentum) of the gravitating body as it pulls on the spacecraft. The technique was first proposed as a mid-course maneuver in 1961, and used by interplanetary probes from Mariner 10 onwards, including the two Voyager probes' notable fly-bys of Jupiter and Saturn. Transfer orbits Orbit insertion maneuvers leave a spacecraft in a destination orbit. In contrast, orbit injection maneuvers occur when a spacecraft enters a transfer orbit, e.g. trans-lunar injection (TLI), trans-Mars injection (TMI) and trans-Earth injection (TEI). These are generally larger than small trajectory correction maneuvers. Insertion, injection and sometimes initiation are used to describe entry into a descent orbit, e.g. the Powered Descent Initiation maneuver used for Apollo lunar landings. Hohmann transfer In orbital mechanics, the Hohmann transfer orbit is an elliptical orbit used to transfer between two circular orbits of different altitudes, in the same plane. The orbital maneuver to perform the Hohmann transfer uses two engine impulses which move a spacecraft onto and off the transfer orbit. This maneuver was named after Walter Hohmann, the German scientist who published a description of it in his 1925 book Die Erreichbarkeit der Himmelskörper (The Accessibility of Celestial Bodies). Hohmann was influenced in part by the German science fiction author Kurd Laßwitz and his 1897 book Two Planets. Bi-elliptic transfer In astronautics and aerospace engineering, the bi-elliptic transfer is an orbital maneuver that moves a spacecraft from one orbit to another and may, in certain situations, require less delta-v than a Hohmann transfer maneuver. The bi-elliptic transfer consists of two half elliptic orbits. From the initial orbit, a delta-v is applied boosting the spacecraft into the first transfer orbit with an apoapsis at some point away from the central body. At this point, a second delta-v is applied sending the spacecraft into the second elliptical orbit with periapsis at the radius of the final desired orbit, where a third delta-v is performed, injecting the spacecraft into the desired orbit. 
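The two impulses of the Hohmann transfer described above can be sketched numerically from the vis-viva equation. The LEO and GEO radii used below are illustrative values and are not given in the article.

```python
import math

# Sketch of the two impulses of a Hohmann transfer between coplanar circular
# orbits, derived from the vis-viva equation. The LEO-to-GEO example is
# illustrative only.

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def hohmann_delta_vs(r1, r2, mu=MU_EARTH):
    """Return (dv1, dv2): burn leaving the inner circular orbit, burn circularising at r2."""
    v1 = math.sqrt(mu / r1)                              # circular speed at r1
    v2 = math.sqrt(mu / r2)                              # circular speed at r2
    a_t = (r1 + r2) / 2.0                                # semi-major axis of the transfer ellipse
    v_peri = math.sqrt(mu * (2.0 / r1 - 1.0 / a_t))      # transfer-orbit speed at periapsis
    v_apo = math.sqrt(mu * (2.0 / r2 - 1.0 / a_t))       # transfer-orbit speed at apoapsis
    return v_peri - v1, v2 - v_apo

if __name__ == "__main__":
    r_leo = 6378e3 + 300e3        # 300 km parking orbit (assumed)
    r_geo = 42164e3               # geostationary radius
    dv1, dv2 = hohmann_delta_vs(r_leo, r_geo)
    print(f"dv1 = {dv1:.0f} m/s, dv2 = {dv2:.0f} m/s, total = {dv1 + dv2:.0f} m/s")
```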
While a bi-elliptic transfer requires one more engine burn than a Hohmann transfer and generally requires a greater travel time, some bi-elliptic transfers require a lower total delta-v than a Hohmann transfer when the ratio of final to initial semi-major axis is 11.94 or greater, depending on the intermediate semi-major axis chosen. The idea of the bi-elliptical transfer trajectory was first published by Ary Sternfeld in 1934. Low energy transfer A low energy transfer, or low energy trajectory, is a route in space which allows spacecraft to change orbits using very little fuel. These routes work in the Earth-Moon system and also in other systems, such as traveling between the satellites of Jupiter. The drawback of such trajectories is that they take much longer to complete than higher energy (more fuel) transfers such as Hohmann transfer orbits. Low energy transfers are also known as weak stability boundary trajectories, or ballistic capture trajectories. Low energy transfers follow special pathways in space, sometimes referred to as the Interplanetary Transport Network. Following these pathways allows for long distances to be traversed for little expenditure of delta-v. Orbital inclination change Orbital inclination change is an orbital maneuver aimed at changing the inclination of an orbiting body's orbit. This maneuver is also known as an orbital plane change as the plane of the orbit is tipped. This maneuver requires a change in the orbital velocity vector (delta-v) at the orbital nodes (i.e. the point where the initial and desired orbits intersect; the line of orbital nodes is defined by the intersection of the two orbital planes). In general, inclination changes can require a great deal of delta-v to perform, and most mission planners try to avoid them whenever possible to conserve fuel. This is typically achieved by launching a spacecraft directly into the desired inclination, or as close to it as possible so as to minimize any inclination change required over the duration of the spacecraft life. Maximum efficiency of inclination change is achieved at apoapsis (or apogee), where orbital velocity is the lowest. In some cases, it may require less total delta-v to raise the spacecraft into a higher orbit, change the orbit plane at the higher apogee, and then lower the spacecraft to its original altitude. Constant-thrust trajectory Constant-thrust and constant-acceleration trajectories involve the spacecraft firing its engine in a prolonged constant burn. In the limiting case where the vehicle acceleration is high compared to the local gravitational acceleration, the spacecraft points straight toward the target (accounting for target motion), and remains accelerating constantly under high thrust until it reaches its target. In this high-thrust case, the trajectory approaches a straight line. If it is required that the spacecraft rendezvous with the target, rather than performing a flyby, then the spacecraft must flip its orientation halfway through the journey, and decelerate the rest of the way. In the constant-thrust trajectory, the vehicle's acceleration increases during the thrusting period, since the fuel use means the vehicle mass decreases. If, instead of constant thrust, the vehicle has constant acceleration, the engine thrust must decrease during the trajectory. This trajectory requires that the spacecraft maintain a high acceleration for long durations. For interplanetary transfers, days, weeks or months of constant thrusting may be required. 
As a result, there are no currently available spacecraft propulsion systems capable of using this trajectory. It has been suggested that some forms of nuclear (fission or fusion based) or antimatter powered rockets would be capable of this trajectory. More practically, this type of maneuver is used in low thrust maneuvers, for example with ion engines, Hall-effect thrusters, and others. These types of engines have very high specific impulse (fuel efficiency) but currently are only available with fairly low absolute thrust. Rendezvous and docking Orbit phasing In astrodynamics orbit phasing is the adjustment of the time-position of spacecraft along its orbit, usually described as adjusting the orbiting spacecraft's true anomaly. Space rendezvous and docking A space rendezvous is a sequence of orbital maneuvers during which two spacecraft, one of which is often a space station, arrive at the same orbit and approach to a very close distance (e.g. within visual contact). Rendezvous requires a precise match of the orbital velocities of the two spacecraft, allowing them to remain at a constant distance through orbital station-keeping. Rendezvous is commonly followed by docking or berthing, procedures which bring the spacecraft into physical contact and create a link between them. See also Clohessy-Wiltshire equations for co-orbit analysis Collision avoidance (spacecraft) Flyby (spaceflight) Spacecraft propulsion Orbital spaceflight References External links Handbook Automated Rendezvous and Docking of Spacecraft by Wigbert Fehse Astrodynamics
Orbital maneuver
[ "Engineering" ]
2,379
[ "Astrodynamics", "Aerospace engineering" ]
979,306
https://en.wikipedia.org/wiki/Orbital%20station-keeping
In astrodynamics, orbital station-keeping is keeping a spacecraft at a fixed distance from another spacecraft or celestial body. It requires a series of orbital maneuvers made with thruster burns to keep the active craft in the same orbit as its target. For many low Earth orbit satellites, the effects of non-Keplerian forces, i.e. the deviations of the gravitational force of the Earth from that of a homogeneous sphere, gravitational forces from Sun/Moon, solar radiation pressure and air drag, must be counteracted. For spacecraft in a halo orbit around a Lagrange point, station-keeping is even more fundamental, as such an orbit is unstable; without an active control with thruster burns, the smallest deviation in position or velocity would result in the spacecraft leaving orbit completely. Perturbations The deviation of Earth's gravity field from that of a homogeneous sphere and gravitational forces from the Sun and Moon will in general perturb the orbital plane. For a Sun-synchronous orbit, the precession of the orbital plane caused by the oblateness of the Earth is a desirable feature that is part of mission design but the inclination change caused by the gravitational forces of the Sun and Moon is undesirable. For geostationary spacecraft, the inclination change caused by the gravitational forces of the Sun and Moon must be counteracted by a rather large expense of fuel, as the inclination should be kept sufficiently small for the spacecraft to be tracked by non-steerable antennae. For spacecraft in a low orbit, the effects of atmospheric drag must often be compensated for, often to avoid re-entry; for missions requiring the orbit to be accurately synchronized with the Earth’s rotation, this is necessary to prevent a shortening of the orbital period. Solar radiation pressure will in general perturb the eccentricity (i.e. the eccentricity vector); see Orbital perturbation analysis (spacecraft). For some missions, this must be actively counter-acted with maneuvers. For geostationary spacecraft, the eccentricity must be kept sufficiently small for a spacecraft to be tracked with a non-steerable antenna. Also for Earth observation spacecraft for which a very repetitive orbit with a fixed ground track is desirable, the eccentricity vector should be kept as fixed as possible. A large part of this compensation can be done by using a frozen orbit design, but often thrusters are needed for fine control maneuvers. Low Earth orbit For spacecraft in a very low orbit, the atmospheric drag is sufficiently strong to cause a re-entry before the intended end of mission if orbit raising maneuvers are not executed from time to time. An example of this is the International Space Station (ISS), which has an operational altitude above Earth's surface of between 400 and 430 km (250-270 mi). Due to atmospheric drag the space station is constantly losing orbital energy. In order to compensate for this loss, which would eventually lead to a re-entry of the station, it has to be reboosted to a higher orbit from time to time. The chosen orbital altitude is a trade-off between the average thrust needed to counter-act the air drag and the impulse needed to send payloads and people to the station. GOCE which orbited at 255 km (later reduced to 235 km) used ion thrusters to provide up to 20 mN of thrust to compensate for the drag on its frontal area of about 1 m2. 
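The drag make-up figures quoted above for GOCE (up to 20 mN of ion thrust) can be turned into an equivalent annual delta-v with a very simple sketch. The spacecraft mass used below is an assumption for illustration; the article does not give it.

```python
# Sketch: equivalent annual delta-v for continuous drag make-up, using the
# 20 mN thrust figure quoted above for GOCE. The spacecraft mass is an
# assumed round number, not a value from the article.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def drag_makeup_delta_v(drag_force_n, spacecraft_mass_kg, seconds=SECONDS_PER_YEAR):
    """Delta-v that must be supplied to cancel a steady drag force for `seconds`."""
    acceleration = drag_force_n / spacecraft_mass_kg   # m/s^2
    return acceleration * seconds                      # m/s

if __name__ == "__main__":
    mass = 1000.0                      # kg, assumed for the example
    for force in (0.005, 0.020):       # 5 mN average drag vs 20 mN peak thrust
        dv = drag_makeup_delta_v(force, mass)
        print(f"{force * 1000:.0f} mN on {mass:.0f} kg -> about {dv:.0f} m/s per year")
```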
Earth observation spacecraft For Earth observation spacecraft typically operated in an altitude above the Earth surface of about 700 – 800 km the air-drag is very faint and a re-entry due to air-drag is not a concern. But if the orbital period should be synchronous with the Earth's rotation to maintain a fixed ground track, the faint air-drag at this high altitude must also be counter-acted by orbit raising maneuvers in the form of thruster burns tangential to the orbit. These maneuvers will be very small, typically in the order of a few mm/s of delta-v. If a frozen orbit design is used these very small orbit raising maneuvers are sufficient to also control the eccentricity vector. To maintain a fixed ground track it is also necessary to make out-of-plane maneuvers to compensate for the inclination change caused by Sun/Moon gravitation. These are executed as thruster burns orthogonal to the orbital plane. For Sun-synchronous spacecraft having a constant geometry relative to the Sun, the inclination change due to the solar gravitation is particularly large; a delta-v in the order of 1–2 m/s per year can be needed to keep the inclination constant. Geostationary orbit For geostationary spacecraft, thruster burns orthogonal to the orbital plane must be executed to compensate for the effect of the lunar/solar gravitation that perturbs the orbit pole with typically 0.85 degrees per year. The delta-v needed to compensate for this perturbation keeping the inclination to the equatorial plane amounts to in the order 45 m/s per year. This part of the GEO station-keeping is called North-South control. The East-West control is the control of the orbital period and the eccentricity vector performed by making thruster burns tangential to the orbit. These burns are then designed to keep the orbital period perfectly synchronous with the Earth rotation and to keep the eccentricity sufficiently small. Perturbation of the orbital period results from the imperfect rotational symmetry of the Earth relative the North/South axis, sometimes called the ellipticity of the Earth equator. The eccentricity (i.e. the eccentricity vector) is perturbed by the solar radiation pressure. The fuel needed for this East-West control is much less than what is needed for the North-South control. To extend the life-time of geostationary spacecraft with little fuel left one sometimes discontinues the North-South control only continuing with the East-West control. As seen from an observer on the rotating Earth the spacecraft will then move North-South with a period of 24 hours. When this North-South movement gets too large a steerable antenna is needed to track the spacecraft. An example of this is Artemis. To save weight, it is crucial for GEO satellites to have the most fuel-efficient propulsion system. Almost all modern satellites are therefore employing a high specific impulse system like plasma or ion thrusters. Lagrange points Orbits of spacecraft are also possible around Lagrange points—also referred to as libration points—five equilibrium points that exist in relation to two larger solar system bodies. For example, there are five of these points in the Sun-Earth system, five in the Earth-Moon system, and so on. Spacecraft may orbit around these points with a minimum of propellant required for station-keeping purposes. Two orbits that have been used for such purposes include halo and Lissajous orbits. One important Lagrange point is Earth-Sun , and three heliophysics missions have been orbiting L1 since approximately 2000. 
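As a rough illustration of why high specific impulse matters so much for GEO station-keeping, the sketch below converts the roughly 45 m/s per year North-South budget quoted above into propellant mass with the ideal rocket equation. The satellite dry mass, mission length and specific impulse values are assumptions chosen only to show the contrast between chemical and ion propulsion.

```python
import math

# Rough illustration: propellant needed for GEO North-South station-keeping
# (~45 m/s per year, as quoted above) over an assumed 15-year life and an
# assumed 2000 kg dry mass, for typical chemical and ion specific impulses.

G0 = 9.80665  # m/s^2

def propellant_mass(dry_mass_kg, delta_v_total, isp_s):
    """Ideal rocket equation solved for propellant mass."""
    return dry_mass_kg * (math.exp(delta_v_total / (isp_s * G0)) - 1.0)

if __name__ == "__main__":
    dry_mass = 2000.0                   # kg, assumed
    dv_total = 45.0 * 15                # about 45 m/s per year for 15 years
    for name, isp in (("chemical bipropellant", 300.0), ("ion thruster", 2000.0)):
        m = propellant_mass(dry_mass, dv_total, isp)
        print(f"{name:>22s} (Isp {isp:.0f} s): about {m:.0f} kg of propellant")
```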
Station-keeping propellant use can be quite low, facilitating missions that can potentially last decades should other spacecraft systems remain operational. The three spacecraft—Advanced Composition Explorer (ACE), Solar Heliospheric Observatory (SOHO), and the Global Geoscience WIND satellite—each have annual station-keeping propellant requirements of approximately 1 m/s or less. Earth-Sun —approximately 1.5 million kilometers from Earth in the anti-sun direction—is another important Lagrange point, and the ESA Herschel space observatory operated there in a Lissajous orbit during 2009–2013, at which time it ran out of coolant for the space telescope. Small station-keeping orbital maneuvers were executed approximately monthly to maintain the spacecraft in the station-keeping orbit. The James Webb Space Telescope will use propellant to maintain its halo orbit around the Earth-Sun L2, which provides an upper limit to its designed lifetime: it is being designed to carry enough for ten years. However, the precision of trajectory following launch by an Ariane 5 is credited with potentially doubling the lifetime of the telescope by leaving more hydrazine propellant on-board than expected. The CAPSTONE orbiter and the planned Lunar Gateway is stationed along a 9:2 synodically resonant Near Rectilinear Halo Orbit (NRHO) around the Earth-Moon L2 Lagrange point. See also Delta-v budget Orbital perturbation analysis Reboost Teleoperator Retrieval System (robotic device for attaching to another spacecraft and boosting or changing its orbit) References External links Station-keeping at the Encyclopedia of Astrobiology, Astronomy, and Spaceflight XIPS Xenon Ion Propulsion Systems Jules Verne boosts ISS orbit Jules Verne boosts ISS orbit (report from the European Space Agency) Orbital maneuvers Astrodynamics Earth orbits
Orbital station-keeping
[ "Engineering" ]
1,805
[ "Astrodynamics", "Aerospace engineering" ]
979,374
https://en.wikipedia.org/wiki/Orbital%20inclination%20change
Orbital inclination change is an orbital maneuver aimed at changing the inclination of an orbiting body's orbit. This maneuver is also known as an orbital plane change, as the plane of the orbit is tipped. This maneuver requires a change in the orbital velocity vector (delta-v) at the orbital nodes (i.e. the points where the initial and desired orbits intersect; the line of orbital nodes is defined by the intersection of the two orbital planes). In general, inclination changes can take a very large amount of delta-v to perform, and most mission planners try to avoid them whenever possible to conserve fuel. This is typically achieved by launching a spacecraft directly into the desired inclination, or as close to it as possible, so as to minimize any inclination change required over the duration of the spacecraft's life. Planetary flybys are the most efficient way to achieve large inclination changes, but they are only effective for interplanetary missions. Efficiency The simplest way to perform a plane change is to perform a burn around one of the two crossing points of the initial and final planes. The delta-v required is the vector change in velocity between the two planes at that point. However, maximum efficiency of inclination changes is achieved at apoapsis (or apogee), where orbital velocity is lowest. In some cases, it can require less total delta-v to raise the satellite into a higher orbit, change the orbit plane at the higher apogee, and then lower the satellite to its original altitude. For the most efficient example mentioned above, targeting an inclination at apoapsis also changes the argument of periapsis. However, targeting in this manner limits the mission designer to changing the plane only along the line of apsides. For Hohmann transfer orbits, the initial orbit and the final orbit are 180 degrees apart. Because the transfer orbital plane has to include the central body, such as the Sun, and the initial and final nodes, this can require two 90 degree plane changes to reach and leave the transfer plane. In such cases it is often more efficient to use a broken plane maneuver, where an additional burn is done so that the plane change only occurs at the intersection of the initial and final orbital planes, rather than at the ends. Inclination entangled with other orbital elements An important subtlety of performing an inclination change is that Keplerian orbital inclination is defined by the angle between ecliptic North and the vector normal to the orbit plane (i.e. the angular momentum vector). This means that inclination is always positive and is entangled with other orbital elements, primarily the argument of periapsis, which is in turn connected to the longitude of the ascending node. This can result in two very different orbits with precisely the same inclination. Calculation In a pure inclination change, only the inclination of the orbit is changed while all other orbital characteristics (radius, shape, etc.) remain the same as before. The delta-v ($\Delta v_i$) required for an inclination change ($\Delta i$) can be calculated as follows: $\Delta v_i = \frac{2\sin(\Delta i/2)\,\sqrt{1-e^2}\,\cos(\omega+f)\,na}{1+e\cos f}$, where $e$ is the orbital eccentricity, $\omega$ is the argument of periapsis, $f$ is the true anomaly, $n$ is the mean motion and $a$ is the semi-major axis. For more complicated maneuvers which may involve a combination of change in inclination and orbital radius, the delta-v is the vector difference between the velocity vectors of the initial orbit and the desired orbit at the transfer point. 
These types of combined maneuvers are commonplace, as it is more efficient to perform multiple orbital maneuvers at the same time if these maneuvers have to be done at the same location. According to the law of cosines, the minimum delta-v ($\Delta v$) required for any such combined maneuver can be calculated with the following equation: $\Delta v = \sqrt{v_1^2 + v_2^2 - 2 v_1 v_2 \cos(\Delta i)}$. Here $v_1$ and $v_2$ are the initial and target velocities. Circular orbit inclination change Where both orbits are circular (i.e. $e = 0$) and have the same radius, the delta-v ($\Delta v_i$) required for an inclination change ($\Delta i$) can be calculated using: $\Delta v_i = 2 v \sin(\Delta i/2)$, where $v$ is the orbital velocity and has the same units as $\Delta v_i$. Other ways to change inclination Some other ways to change inclination that do not require burning propellant (or that help reduce the amount of propellant required) include aerodynamic lift (for bodies within an atmosphere, such as the Earth) and solar sails. Transits of other bodies such as the Moon can also be used. None of these methods changes the delta-v required; they are simply alternate means of achieving the same end result and, ideally, will reduce propellant usage. See also Orbital inclination Orbital maneuver References Astrodynamics Orbital maneuvers
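As an illustration of the two relations above, the following sketch evaluates a pure plane change and a combined plane-plus-speed change. The 400 km circular orbit, the 30 degree inclination change and the 7800 m/s target speed are assumed example numbers, and the Earth constants are standard values rather than figures from the article.

```python
import math

MU_EARTH = 3.986004418e14   # m^3/s^2, standard gravitational parameter of Earth
R_EARTH = 6_371_000.0       # m, mean Earth radius (standard value)

def pure_inclination_dv(v: float, delta_i_rad: float) -> float:
    """Delta-v for a pure plane change on a circular orbit: 2 * v * sin(di / 2)."""
    return 2.0 * v * math.sin(delta_i_rad / 2.0)

def combined_dv(v1: float, v2: float, delta_i_rad: float) -> float:
    """Law-of-cosines delta-v for changing speed and inclination in a single burn."""
    return math.sqrt(v1**2 + v2**2 - 2.0 * v1 * v2 * math.cos(delta_i_rad))

v_leo = math.sqrt(MU_EARTH / (R_EARTH + 400_000.0))   # ~7670 m/s in a 400 km orbit
di = math.radians(30.0)
print(f"Pure 30 deg plane change in LEO: {pure_inclination_dv(v_leo, di):.0f} m/s")
# Folding a modest speed increase into the same burn costs less than performing
# the plane change and the speed change as two separate maneuvers:
print(f"Combined with a raise to 7800 m/s: {combined_dv(v_leo, 7800.0, di):.0f} m/s")
```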
Orbital inclination change
[ "Engineering" ]
907
[ "Astrodynamics", "Aerospace engineering" ]
979,452
https://en.wikipedia.org/wiki/Sculptor%20Galaxy
The Sculptor Galaxy (also known as the Silver Coin Galaxy, Silver Dollar Galaxy, NGC 253, or Caldwell 65) is an intermediate spiral galaxy in the constellation Sculptor. The Sculptor Galaxy is a starburst galaxy, which means that it is currently undergoing a period of intense star formation. Observation Observational history The galaxy was discovered by Caroline Herschel in 1783 during one of her systematic comet searches. About half a century later, John Herschel observed it using his 18-inch metallic mirror reflector at the Cape of Good Hope. He wrote: "very bright and large (24′ in length); a superb object.... Its light is somewhat streaky, but I see no stars in it except 4 large and one very small one, and these seem not to belong to it, there being many near..." In 1961, Allan Sandage wrote in the Hubble Atlas of Galaxies that the Sculptor Galaxy is "the prototype example of a special subgroup of Sc systems....photographic images of galaxies of the group are dominated by the dust pattern. Dust lanes and patches of great complexity are scattered throughout the surface. Spiral arms are often difficult to trace.... The arms are defined as much by the dust as by the spiral pattern." Bernard Y. Mills, working out of Sydney, discovered that the Sculptor Galaxy is also a fairly strong radio source. In 1998, the Hubble Space Telescope took a detailed image of NGC 253. Amateur As one of the brightest galaxies in the sky, the Sculptor Galaxy can be seen through binoculars and is near the star Beta Ceti. It is considered one of the most easily viewed galaxies in the sky after the Andromeda Galaxy. The Sculptor Galaxy is a good target for observation with a telescope with a 300 mm diameter or larger. In such telescopes, it appears as a galaxy with a long, oval bulge and a mottled galactic disc. Although the bulge appears only slightly brighter than the rest of the galaxy, it is fairly extended compared to the disk. In 400 mm scopes and larger, a dark dust lane northwest of the nucleus is visible, and over a dozen faint stars can be seen superimposed on the bulge. Some people claim to have observed the galaxy with the unaided eye under exceptional viewing conditions. Features The Sculptor Galaxy is located at the center of the Sculptor Group, one of the nearest groups of galaxies to the Milky Way. The Sculptor Galaxy (the brightest galaxy in the group and one of the intrinsically brightest galaxies in the vicinity of ours, only surpassed by the Andromeda Galaxy and the Sombrero Galaxy) and the companion galaxies NGC 247, PGC 2881, PGC 2933, Sculptor-dE1, and UGCA 15 form a gravitationally-bound core near the center of the group. Most other galaxies associated with the Sculptor Group are only weakly gravitationally bound to this core. Starburst NGC 253's starburst has created several super star clusters on NGC 253's center (discovered with the aid of the Hubble Space Telescope): one with a mass of solar masses, and absolute magnitude of at least −15, and two others with solar masses and absolute magnitudes around −11; later studies have discovered an even more massive cluster heavily obscured by NGC 253's interstellar dust with a mass of solar masses, an age of around years, and rich in Wolf-Rayet stars. The super star clusters are arranged in an ellipse around the center of NGC 253, which from the Earth's perspective appears as a flat line. 
Star formation is also high in the northeast of NGC 253's disk, where a number of red supergiant stars can be found, and in its halo there are young stars as well as some amounts of neutral hydrogen. This, along with other peculiarities found in NGC 253, suggest that a gas-rich dwarf galaxy collided with it 200 million years ago, disturbing its disk and starting the present starburst. As happens in other galaxies suffering strong star formation such as Messier 82, NGC 4631, or NGC 4666, the stellar winds of the massive stars produced in the starburst as well as their deaths as supernovae have blown out material to NGC 253's halo in the form of a superwind that seems to be inhibiting star formation in the galaxy. Novae and Supernovae Although supernovae are generally associated with starburst galaxies, only one has been detected within the Sculptor Galaxy. SN 1940E (type unknown, mag. 14) was discovered by Fritz Zwicky on 22 November 1940, located approximately 54″ southwest of the galaxy's nucleus. NGC 253 is close enough that classical novae can also be detected. The first confirmed nova in this galaxy was discovered by BlackGEM at magnitude 19.6 on 12 July 2024, and designated AT 2024pid. Central black hole Research suggests the presence of a supermassive black hole in the center of this galaxy with a mass estimated to be 5 million times that of the Sun, which is slightly heavier than Sagittarius A*. Distance estimates At least two techniques have been used to measure distances to Sculptor in the past ten years. Using the planetary nebula luminosity function method, an estimate of 10.89 million light years (or Mly; 3.34 Megaparsecs, or Mpc) was achieved in 2005. The Sculptor Galaxy is close enough that the tip of the red-giant branch (TRGB) method may also be used to estimate its distance. The estimated distance to Sculptor using this technique in 2004 yielded (). A weighted average of the most reliable distance estimates gives a distance of (). Satellite An international team of researchers has used the Subaru Telescope to identify a faint dwarf galaxy disrupted by NGC 253. The satellite galaxy is called NGC 253-dw2 and may not survive its next passage by its much larger host. The host galaxy may suffer some damage too if the dwarf is massive enough. The interplay between the two galaxies is responsible for the disturbance in NGC 253's structure. See also Globular cluster NGC 288, located 1.8° south-southeast of the Sculptor Galaxy. 2MASX J00482185-2507365 occulting pair, discovered while photographing NGC 253 References External links STScI news release: Hubble Probes the Violent Birth of Stars in Galaxy NGC 253 STScI news release: Behind a Dusty Veil Lies a Cradle of Star Birth SEDS – NGC 253 Starburst galaxies Intermediate spiral galaxies Sculptor (constellation) Sculptor Group NGC objects 02789 065b 17830923 013 Discoveries by Caroline Herschel
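The planetary-nebula distance estimate quoted above is given both in millions of light years and in megaparsecs; a one-line unit conversion, using the standard parsec-to-light-year factor (not a figure from the article), confirms that the two numbers describe the same distance.

```python
# 1 parsec = 3.2616 light-years, so 3.34 Mpc corresponds to the quoted 10.89 Mly.
LY_PER_PC = 3.2616

def mpc_to_mly(d_mpc: float) -> float:
    """Convert a distance in megaparsecs to millions of light-years."""
    return d_mpc * LY_PER_PC

print(f"{mpc_to_mly(3.34):.2f} Mly")   # ~10.89
```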
Sculptor Galaxy
[ "Astronomy" ]
1,361
[ "Constellations", "Sculptor (constellation)" ]
979,488
https://en.wikipedia.org/wiki/Ice%20rink
An ice rink (or ice skating rink) is a frozen body of water or an artificial sheet of ice where people can ice skate or play winter sports. Ice rinks are also used for exhibitions, contests and ice shows. The growth and increasing popularity of ice skating during the 1800s marked a rise in the deliberate construction of ice rinks in numerous areas of the world. The word "rink" is a word of Scottish origin meaning "course", used to describe the ice surface used in the sport of curling, but was kept in use once the winter team sport of ice hockey became established. There are two types of ice rinks in prevalent use today: natural ice rinks, where freezing occurs from cold ambient temperatures, and artificial ice rinks (or mechanically frozen), where a coolant produces cold temperatures underneath the water body (on which the game is played), causing the water body to freeze and then stay frozen. There are also synthetic ice rinks where skating surfaces are made out of plastics. Besides recreational ice skating, some of its uses include: ice hockey, sledge hockey ( "Para ice hockey", or "sled hockey"), spongee ( sponge hockey), bandy, rink bandy, rinkball, ringette, broomball (both indoor and outdoor versions), Moscow broomball, speed skating, figure skating, ice stock sport, curling, and crokicurl. However, Moscow broomball is typically played on a tarmac tennis court that has been flooded with water and allowed to freeze. The sports of broomball, curling, ice stock sport, spongee, Moscow broomball, and the game of crokicurl, do not use ice skates of any kind. While technically not an ice rink, ice tracks and trails, such as those used in the sport of speed skating and recreational or pleasure skating are sometimes referred to as "ice rinks". Etymology Rink, a Scottish word meaning 'course', was used as the name of a place where curling was played. As curling is played on ice, the name has been retained for the construction of ice areas for other sports and uses. History Great Britain London, England Early attempts in the construction of artificial ice rinks were first made in the 'rink mania' of 1841–44. The technology for the maintenance of natural ice did not exist, therefore these early rinks used a substitute consisting of a mixture of hog's lard and various salts. An item in the May 8, 1844 issue of Eliakim Littell's Living Age headed "The Glaciarium" reported that "This establishment, which has been removed to Grafton street East' Tottenham Court Road, was opened on Monday afternoon. The area of artificial ice is extremely convenient for such as may be desirous of engaging in the graceful and manly pastime of skating". By 1844, these venues fell out of fashion as customers grew tired of the 'smelly' ice substitute. It wasn't until thirty years later that refrigeration technology developed to the point where natural ice could finally be feasibly used in the rink. The world's first mechanically frozen ice rink was the Glaciarium, opened by John Gamgee, a British veterinarian and inventor, in a tent in a small building just off the Kings Road in Chelsea, London, on 7 January 1876. Gamgee had become fascinated by the refrigeration technology he encountered during a study trip to America to look at Texas fever in cattle. In March of that same year it moved to a permanent venue at 379 Kings Road, where a rink measuring was established. The rink was based on a concrete surface, with layers of earth, cow hair and timber planks. 
Atop these were laid oval copper pipes carrying a solution of glycerine with ether, nitrogen peroxide and water. The pipes were covered by water and the solution was pumped through, freezing the water into ice. Gamgee discovered the process while attempting to develop a method to freeze meat for import from Australia and New Zealand, and patented it as early as 1870. Gamgee operated the rink on a membership-only basis and attempted to attract a wealthy clientele, experienced in open-air ice skating during winters in the Alps. He installed an orchestra gallery, which could also be used by spectators, and decorated the walls with views of the Swiss Alps. The rink initially proved a success, and Gamgee opened two further rinks later in the year: at Rusholme in Manchester and the "Floating Glaciarium" at Charing Cross in London, this last significantly larger at . The Southport Glaciarium opened in 1879, using Gamgee's method. The Fens, England In the marshlands of The Fens, skating was developed early as a pastime during winter where there were plenty of natural ice surfaces. This is the origin of the Fen skating and is said to be the birthplace of bandy. The Great Britain Bandy Association has its home in the area. Hungary In Austria-Hungary, the first artificial ice skating rink opened in 1870 in The City Park of Budapest, which is still in operation to this day and is considered one of the largest in Europe. Germany In Germany, the first ice skating rink opened in 1882 in Frankfurt during a patent exhibition. It covered and operated for two months; the refrigeration system was designed by Jahre Linde, and was probably the first skating rink where ammonia was used as a refrigerant. Ten years later, a larger rink was permanently installed on the same site. United States Early indoor ice rinks Ice skating quickly became a favorite pastime and craze in several American cities around the mid 1800s spawning a construction period of several ice rinks. Two early indoor ice rinks made of mechanically frozen ice in the United States opened in 1894, the North Avenue Ice Palace in Baltimore, Maryland, and the Ice Palace in New York City. The St. Nicholas Rink, ( "St. Nicholas Arena"), was an indoor ice rink in New York City which existed from 1896 until its demolition in the 1980s. It was one of the earliest American indoor ice rinks made of mechanically frozen ice in North America and gave ice skaters the opportunity to enjoy an extended skating season. The rink was used for pleasure skating, ice hockey, and ice skating, and was an important rink involved in the development of the sports of ice hockey and boxing in the United States. Oldest indoor artificial ice rink in use The oldest indoor artificial ice rink still in use in the United States is Boston, Massachusetts's, Matthews Arena (formerly Boston Arena) which was built between 1909 and 1910. The rink is located on the campus of Northeastern University. This American rink is the original home of the National Hockey League (NHL) Boston Bruins. The Bruins are the only remaining NHL team who are members of the NHL's Original Six with their original home arena still in existence. Contemporary The Guidant John Rose Minnesota Oval is an outdoor ice rink in Roseville, Minnesota, that is large enough to allow ice skaters to play the sport of bandy. Its perimeter is used as an oval speed skating track. The facility was constructed between June and December 1993. 
It is the only regulation-sized bandy field in North America and serves as the home of USA Bandy and its national bandy teams. The $3.9 million renovation project planned for the Guidant John Rose Minnesota Oval was set to be completed before the opening of the rink's 29th season on November 18, 2022. The oval measures at 400 meters long and 200 meters wide, which makes it the largest artificial outdoor refrigerated sheet of ice in North America. It is a world-class facility that is primarily used for ice sports such as ice skating, ice hockey, speed skating, and bandy. The oval hosts several national and international competitions throughout the year, including the USA Cup in bandy. Canada The first building in Canada to be electrified was the Victoria Skating Rink which opened in 1862 in Montreal, Quebec, Canada. The rink was created using natural ice. At the start of the twentieth century it had been described as "one of the finest covered rinks in the world" and was used during winter for pleasure skating, ice hockey, and skating sports. In summer months, the building was used for various other events. Types Natural ice Many ice rinks consist of, or are found on, open bodies of water such as lakes, ponds, canals, and sometimes rivers; these can be used only in the winter in climates where the surface freezes thickly enough to support human weight. Rinks can also be made in cold climates by enclosing a level area of ground, filling it with water, and letting it freeze. Snow may be packed to use as a containment material. An example of this type of "rink", which is a body of water converted into a skating trail during winter, is the Rideau Canal Skateway in Ottawa, Ontario. Artificial ice In any climate, an arena ice surface can be installed in a properly built space. This consists of a bed of sand or occasionally a slab of concrete, through (or on top of) which pipes run. The pipes carry a chilled fluid (usually either a salt brine or water with antifreeze, or in the case of smaller rinks, refrigerant) which can lower the temperature of the slab so that water placed atop will freeze. This method is known as 'artificial ice' to differentiate from ice rinks made by simply freezing water in a cold climate, indoors or outdoors, although both types are of frozen water. A more proper technical term is 'mechanically frozen' ice. An example of this type of rink is the outdoor rink at Rockefeller Center in New York. Construction Modern rinks have a specific procedure for preparing the surface. With the pipes cold, a thin layer of water is sprayed on the sand or concrete to seal and level it (or in the case of concrete, to keep it from being marked). This thin layer is painted white or pale blue for better contrast; markings necessary for hockey or curling are also placed, along with logos or other decorations. Another thin layer of water is sprayed on top of this. The ice is built up to a thickness of . Synthetic Synthetic rinks are constructed from a solid polymer material designed for skating using normal metal-bladed ice skates. High density polyethelene (HDPE) and ultra-high molecular weight polyethylene (UHMW) are the only materials that offer reasonable skating characteristics, with UHMW synthetic rinks offering the most ice-like skating but also being the most expensive. A typical synthetic rink will consist of many panels of thin surface material assembled on top of a sturdy, level and smooth sub-floor (anything from concrete to wood or even dirt or grass) to create a large skating area. 
Operation Periodically after the ice has been used, it is resurfaced using a machine called an ice resurfacer (sometimes colloquially referred to as a Zamboni – referring to a major manufacturer of such machinery). For curling, the surface is 'pebbled' by allowing loose drops of cold water to fall onto the ice and freeze into rounded peaks. Between events, especially if the arena is being used without need for the ice surface, it is either covered with a heavily insulated floor or melted by allowing the fluid in the pipes below the ice to warm. A highly specialized form of rink is used for speed skating; this is a large oval (or ring) much like an athletic track. Because of their limited use, speed skating ovals are far less common than hockey or curling rinks. Those skilled at preparing arena ice are often in demand for major events where ice quality is critical. The popularity of the sport of hockey in Canada has led its icemakers to be particularly sought after. One such team of professionals was responsible for placing a loonie coin under center ice at the 2002 Winter Olympics in Salt Lake City, Utah; as both Canadian teams (men's and women's) won their respective hockey gold medals, the coin was christened "lucky" and is now in the possession of the Hockey Hall of Fame after having been retrieved from beneath the ice. Standard rink sizes Bandy In bandy, the size of the playing field is x . For internationals, the size must not be smaller than . The variety rink bandy is played on ice hockey rinks. Figure skating The size of figure skating rinks can be quite variable, but the International Skating Union prefers Olympic-sized rinks for figure skating competitions, particularly for major events. These are . The ISU specifies that competition rinks must not be larger than this and not smaller than . Ice hockey Although there is a great deal of variation in the dimensions of actual ice rinks, there are basically two rink sizes in use at the highest levels of ice hockey. Historically, earlier ice rinks were smaller than today. Official National Hockey League rinks are . The dimensions originate from the size of the Victoria Skating Rink in Montreal, Quebec, Canada. Official Olympic and International ice hockey rinks have dimensions of . Para ice hockey Sledge hockey ( "Para ice hockey", or "sled hockey"), uses the same rink dimensions used by ice hockey rinks. Ringette Ringette utilizes most of the standard ice hockey markings used by Hockey Canada, but the ringette rink uses additional free-pass dots in each of the attacking zones and centre zone areas as well as a larger goal crease area. Two additional free-play lines (one in each attacking zone) are also required. A ringette rink is an ice rink designed for ice hockey which has been modified to enable ringette to be played. Though some ice surfaces are designed strictly for ringette, these ice rinks with exclusive lines and markings for ringette are usually created only at venues hosting major ringette competitions and events. Most ringette rinks are found in Canada and Finland. Playing area, size, lines and markings for the standard Canadian ringette rink are similar to the average ice hockey rink in Canada with certain modifications. Early in its history, ringette was played mostly on rinks constructed for ice hockey, broomball, figure skating, and recreational skating, and was mostly played on outdoor rinks since few indoor ice rinks were available at the time. 
Broomball The organized format of broomball uses the rink dimensions defined by a standard Canadian ice hockey rink. Spongee The sport of spongee, "sponge hockey", does not use ice skates. A skateless outdoor winter variant of ice hockey, spongee has its own rules codes and is played strictly within the Canadian city of Winnipeg as a cult sport. The sport generally uses the rink dimensions defined by a standard Canadian ice hockey rink. Rinkball Rinkball rinks today typically use the measurements of an ice hockey rink, though may be slightly larger due to the sport having originated in Europe where the bandy field influenced the size and development of smaller ice rinks. Tracks and trails Tracks and trails are occasionally referred to as ice rinks in spite of their differences. Ice skating tracks and ice skating trails are used for recreational exercise and sporting activities during the winter season including distance ice skating. Ice trails are created by natural bodies of water such as rivers, which freeze during winter, though some trails are created by removing snow to create skating lanes on large frozen lakes for ice skaters. Ice trails are usually used for pleasure skating, though the sport and recreational activity of Tour skating can involve ice skaters passing over ice trails and open areas created by frozen lakes. To date, speed skating and ice cross downhill are the only winter activities or sports whereby ice skaters use tracks and lanes designed to include bends rather than using a simple straightway. Some ice rinks are constructed in a manner allowing for a speed skating rink to be created around its outside perimeter. Tracks Speed skating track Speed skating tracks or "rinks" can either be created naturally or artificially and are made either outdoors or inside indoor facilities. Tracks may be created by having the lanes surround the exterior of an ice rink. The sport requires the use of a special type of racing skate, the speed skating ice skate. In speed skating, for short track, the official Olympic rink size is , with an oval ice track of in circumference. In long track speed skating the oval ice track is usually in circumference. Ice skating marathon tracks An ice skating marathon is a long distance speed skating race which may be held on natural ice on canals and bodies of water such as lakes and rivers. Marathon is a discipline of speed skating, which is founded in the Netherlands. The races concern speed skating by at least five skaters who start all together on an ice rink with a minimum length of 333.33 meters or on a track: Minimum distance longer than 6.4 kilometers and up to 200 kilometers for skaters who have reached the age of 17 prior to the skating season on July 1. Minimum distance longer than 4 kilometers and up to 20 kilometers for skaters who have reached the age of or the age of 13, but have not yet reached the age of 17 before July 1 preceding the skating season. Minimum distance of 2 kilometers and up to 10 kilometers for skaters who have not yet reached the age of 13 before July 1 preceding the skating season. Dutch skating tracks The Netherlands is home of Elfstedentocht, a 200 km distance skating race of which the tracks leads through the 11 different cities in Friesland which is a northern province of the Netherlands. Skate tracks on natural ice are maintained by the towns and communities, who take care of the safety of the tracks. 
Ice cross downhill tracks Ice cross downhill, (formerly known as "Red Bull Crashed Ice" or "Crashed Ice"), is a winter extreme sporting event involving direct competitive downhill skating. Skaters race down a walled track which features sharp turns and high vertical drops. Trails Rideau Canal Skateway An example of an ice skating trail, or "rink", is the Rideau Canal Skateway in Ottawa, Ontario, Canada, estimated at and long, which is equivalent to 90 Olympic-size skating rinks. The rink is prepared by lowering the canal's water level and letting the canal water freeze. The rink is then resurfaced nightly by cleaning the ice of snow and flooding it with water from below the ice. The rink is recognized as the "world's largest naturally frozen ice rink" by the Guinness Book of World Records because "its entire length receives daily maintenance such as sweeping, ice thickness checks and there are toilet and recreational facilities along its entire length". Longest trail The longest ice skating trail is in Invermere, British Columbia, Canada, on Lake Windermere Whiteway. The naturally frozen trail measures . Combined Outdoor ice skating activities and competitions involving a goal of distance travel for recreation, exercise, competition and adventure, can involve frozen lakes, rivers, and canals. Tour skating The sport and recreational activity, Tour skating ( "Nordic skating" in North America), is strictly an outdoor activity for ice skaters. Nordic skating originated during the 1900s in Sweden. Ice skaters traverse naturally frozen bodies of water, which sometimes, but not always, includes interconnected ice trails as well as frozen ponds, lakes, and even marsh areas. Tour skaters use a special ice skate with long blades. Elfstedentocht (Eleven cities tour) The Elfstedentocht (Eleven Cities Tour) is a long-distance tour skating event on natural ice, almost long, which is held both as a speed skating competition (with 300 contestants) and a leisure tour (with 16,000 skaters). It is the biggest ice-skating tour in the world and held in the province of Friesland in the north of the Netherlands. The event leads past all eleven historical cities of the province and is held at most once a year, only when the natural ice along the entire course is at least thick. It is sometimes held on consecutive years, while at other times, gaps between the touring years have exceeded 20 years. When the ice is suitable, the tour is announced and starts within 48 hours. The last Elfstedentocht was held in 1997. Laneways The sports of curling and Ice stock sport are played on either ice rinks or simple ice surfaces with lanes marked out for play. Curling The sport of curling uses an ice rink known as a "curling rink" or curling sheet. Curling does not involve ice skating. Curling uses lanes. The curling sheet is a carefully prepared rectangular area of ice created to be as flat and level as possible. The ice surface dimensions are in length by in width. A curling sheet includes areas marked off in a manner specific to the sport, including the house, the button, hog lines, hacks, and shorter borders along the ends of the sheet called the backboards. The dimensions of an official curling sheet is defined by the World Curling Federation Rules of Curling. At major events, ice preparation and maintenance is extremely important. Curling clubs usually have an ice maker whose main job is to care for the ice. 
Ice stock sport Ice stock sport (sometimes spelt "Icestocksport" or "Bavarian curling") is a winter sport comparable to curling. It's called Eisstockschießen in German. Although the sport is typically played on ice, summer competitions are performed on asphalt. Other Crokicurl Crokicurl is a Canadian winter sport and is a large scale hybrid of curling and the board game Crokinole. It is played outdoors by teams consisting of two players who take turns trying to score points on a quadrant shaped area with the playing area marked off on a sheet of ice. The quadrant includes posts, starting line, wooden edge side-rail, and a 20-point "button". Depending on the area involved, players can score 5, 10, or 15 points. Outdoor ice Outdoor ice rinks and frozen ponds, rivers, and canals, serve several purposes, allowing for physical activities during the winter season such as recreational ice skating and figure skating, and also function as an affordable place for players to engage in team winter sports such as ice hockey, bandy, rinkball, ringette, broomball, and spongee, as a pastime. These areas and facilities also help individuals, youth sporting organizations, and families, offset the expensive cost of indoor ice-time. They are also used as a part of outdoor winter festivals and to host pond hockey tournaments and the like. Decline Rinks The length of outdoor ice skating season began to experience a noticeable decline in North America in the early part of the 21st century. One of the correlated factors involved has been attributed to climate change. One of the consequences involved includes reducing access to outdoor facilities needed by youth who require opportunities to participate in ice-based sports at length and with low-cost, a problematic development considering winter sports become increasingly expensive over time resulting in economic exclusion. RinkWatch RinkWatch is a citizen science program in Canada run by researchers at Wilfrid Laurier University in Waterloo, Ontario. Beginning in 2013 the program started collecting data on outdoor rinks and frozen ponds across North America. The objective is to better understand how climate change may be impacting the outdoor skating season. Tracks and trails Elfstedentocht, the world's biggest ice-skating tour involving tour skating and speed skating, has been declared to be in danger of "extinction" due to climate change. The last Elfstedentocht was held in 1997. See also Bandy field Figure skating rink Ice hockey rink Speed skating rink Curling sheet Synthetic ice List of ice hockey arenas by capacity References External links The Ice Rink – A Brief History RinkWatch is a citizen science research initiative that asks people to help environmental scientists monitor winter weather conditions and study the long-term impacts of climate change. Comprehensive list of ice skating rinks in the U.S. and Canada Backyard Ice Rink Builder Community Playing field surfaces Sports venues Figure skating Bandy Ice hockey Sledge hockey Ringette Speed skating Broomball Sports rules and regulations
Ice rink
[ "Engineering" ]
4,903
[ "Structural engineering", "Ice rinks" ]
979,500
https://en.wikipedia.org/wiki/Gravity%20loss
In astrodynamics and rocketry, gravity loss is a measure of the loss in the net performance of a rocket while it is thrusting in a gravitational field. In other words, it is the cost of having to hold the rocket up in a gravity field. Gravity losses depend on the time over which thrust is applied as well as the direction in which the thrust is applied. Gravity losses as a proportion of delta-v are minimised if maximum thrust is applied for a short time, and by avoiding thrusting directly away from the local gravitational field. During the launch and ascent phase, however, thrust must be applied over a long period with a major component of thrust in the opposite direction to gravity, so gravity losses become significant. For example, reaching a speed of 7.8 km/s in low Earth orbit requires a delta-v of between 9 and 10 km/s. The additional 1.5 to 2 km/s of delta-v is due to gravity losses, steering losses and atmospheric drag. Example Consider the simplified case of a vehicle with constant mass accelerating vertically with a constant thrust per unit mass a in a gravitational field of strength g. The actual acceleration of the craft is a − g, and it is using delta-v at a rate of a per unit time. Over a time t the change in speed of the spacecraft is (a − g)t, whereas the delta-v expended is at. The gravity loss is the difference between these figures, which is gt. As a proportion of delta-v, the gravity loss is g/a. A very large thrust over a very short time will achieve a desired speed increase with little gravity loss. On the other hand, if a is only slightly greater than g, the gravity loss is a large proportion of delta-v. Gravity loss can be described as the extra delta-v needed because of not being able to spend all the needed delta-v instantaneously. This effect can be explained in two equivalent ways: the specific energy gained per unit delta-v is equal to the speed, so efficiency is maximized when the delta-v is spent while the craft already has a high speed, due to the Oberth effect; and efficiency drops drastically with increasing time spent thrusting against gravity. Therefore, it is advisable to minimize the burn time. These effects apply whenever climbing to an orbit with higher specific orbital energy, such as during launch to low Earth orbit (LEO) or from LEO to an escape orbit. This is a worst-case calculation; in practice, gravity loss during launch and ascent is less than the maximum value of gt because the launch trajectory does not remain vertical and the vehicle's mass is not constant, due to consumption of propellant and staging. Vector considerations Thrust is a vector quantity, and the direction of the thrust has a large impact on the size of gravity losses. For instance, gravity loss on a rocket of mass m would reduce a 3mg thrust directed upward to an acceleration of 2g. However, the same 3mg thrust could be directed at such an angle that it had a 1mg upward component, completely canceled by gravity, and a horizontal component of mg × 2√2 ≈ 2.8mg (by Pythagoras' theorem), achieving a 2.8g horizontal acceleration. As orbital speeds are approached, vertical thrust can be reduced as centrifugal force (in the rotating frame of reference around the center of the Earth) counteracts a large proportion of the gravitational force on the rocket, and more of the thrust can be used to accelerate. Gravity losses can therefore also be described as the integral of gravity (irrespective of the rocket's thrust vector) minus the centrifugal force. 
Using this perspective, when a spacecraft reaches orbit, the gravity losses continue but are counteracted perfectly by the centrifugal force. Since a rocket has very little centrifugal force at launch, the net gravity losses per unit time are large at liftoff. It is important to note that minimising gravity losses is not the only objective of a launching spacecraft. Rather, the objective is to achieve the position/velocity combination for the desired orbit. For instance, the way to maximize acceleration is to thrust straight downward; however, thrusting downward is clearly not a viable course of action for a rocket intending to reach orbit. See also Delta-v budget Oberth effect References . External links General Theory of Optimal Trajectory for Rocket Flight in a Resisting Medium Astrodynamics
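The constant-mass example and the vector example above can be made concrete with a few lines of code. The thrust-to-weight ratio of 3 matches the 3mg illustration in the text, while the 100-second burn time is an assumed value added here only for concreteness.

```python
import math

# Constant-thrust example: a vehicle accelerating vertically with thrust per unit
# mass a in a field of strength g spends delta-v at rate a but gains speed at (a - g).
g = 9.81           # m/s^2, surface gravity
a = 3 * g          # thrust acceleration for a thrust-to-weight ratio of 3 (assumed)
t = 100.0          # s of burn time (assumed illustration value)

delta_v_spent = a * t
speed_gained = (a - g) * t
gravity_loss = delta_v_spent - speed_gained     # equals g * t
print(f"spent {delta_v_spent:.0f} m/s, gained {speed_gained:.0f} m/s, "
      f"loss {gravity_loss:.0f} m/s ({g / a:.0%} of delta-v)")

# Vector example: directing the same 3mg thrust so that only 1mg points upward
# leaves a horizontal component of sqrt(3^2 - 1^2) = 2*sqrt(2) ~ 2.8 (in units of g).
horizontal = math.sqrt(3**2 - 1**2)
print(f"horizontal acceleration: {horizontal:.2f} g")
```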
Gravity loss
[ "Engineering" ]
912
[ "Astrodynamics", "Aerospace engineering" ]
4,095,147
https://en.wikipedia.org/wiki/Value%20%28mathematics%29
In mathematics, value may refer to several strongly related notions. In general, a mathematical value may be any definite mathematical object. In elementary mathematics, this is most often a number – for example, a real number or an integer such as 42. The value of a variable or a constant is any number or other mathematical object assigned to it. Physical quantities have numerical values attached to units of measurement. The value of a mathematical expression is the object assigned to this expression when the variables and constants in it are assigned values. The value of a function, given the value(s) assigned to its argument(s), is the quantity assumed by the function for these argument values. For example, if the function f is defined by $f(x) = 2x^2 - 3x + 1$, then assigning the value 3 to its argument yields the function value 10, since $f(3) = 2\cdot 3^2 - 3\cdot 3 + 1 = 10$. If the variable, expression or function only assumes real values, it is called real-valued. Likewise, a complex-valued variable, expression or function only assumes complex values. See also Value function Value (computer science) Absolute value Truth value References Elementary mathematics
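The function-value example above can be mirrored in a couple of lines of code; the specific polynomial is simply an illustration consistent with a function value of 10 at the argument 3, not a definition fixed by the article.

```python
# Evaluating a function at an argument gives the function's value there.
def f(x):
    return 2 * x**2 - 3 * x + 1

print(f(3))   # 10: the value of f at the argument 3
```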
Value (mathematics)
[ "Mathematics" ]
231
[ "Elementary mathematics" ]
4,095,359
https://en.wikipedia.org/wiki/Logic%20of%20information
The logic of information, or the logical theory of information, considers the information content of logical signs and expressions along the lines initially developed by Charles Sanders Peirce. In this line of work, the concept of information serves to integrate the aspects of signs and expressions that are separately covered, on the one hand, by the concepts of denotation and extension, and on the other hand, by the concepts of connotation and comprehension. Peirce began to develop these ideas in his lectures "On the Logic of Science" at Harvard University (1865) and the Lowell Institute (1866). See also Charles Sanders Peirce bibliography Information theory Inquiry Philosophy of information Pragmatic maxim Pragmatic theory of information Pragmatic theory of truth Pragmaticism Pragmatism Scientific method Semeiotic Semiosis Semiotics Semiotic information theory Sign relation Sign relational complex Triadic relation References Luciano Floridi, The Logic of Information, presentation, discussion, Télé-université (Université du Québec), 11 May 2005, Montréal, Canada. Luciano Floridi, The logic of being informed, Logique et Analyse. 2006, 49.196, 433–460. External links Peirce, C.S. (1867), "Upon Logical Comprehension and Extension", Eprint Information theory Semiotics Logic Charles Sanders Peirce
Logic of information
[ "Mathematics", "Technology", "Engineering" ]
274
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
4,095,710
https://en.wikipedia.org/wiki/Zeta%20Leporis
Zeta Leporis, Latinized from ζ Leporis, is a star approximately away in the southern constellation of Lepus. It has an apparent visual magnitude of 3.5, which is bright enough to be seen with the naked eye. In 2001, an asteroid belt was confirmed to orbit the star. Stellar components Zeta Leporis has a stellar classification of A2 IV-V(n), suggesting that it is in a transitional stage between an A-type main-sequence star and a subgiant. The (n) suffix indicates that the absorption lines in the star's spectrum appear nebulous because it is spinning rapidly, causing the lines to broaden because of the Doppler effect. The projected rotational velocity is 245 km/s, giving a lower limit on the star's actual equatorial azimuthal velocity. The star has about 1.46 times the mass of the Sun, along with 1.5 times the radius, and 14 times the luminosity. The abundance of elements other than hydrogen and helium, what astronomers term the star's metallicity, is only 17% of the abundance in the Sun. The star appears to be very young, probably around 231 million years in age, but the margin of error spans 50–347 million years old. Asteroid belt In 1983, based on radiation in the infrared portion of the electromagnetic spectrum, the InfraRed Astronomical Satellite was used to identify dust orbiting this star. This debris disk is constrained to a diameter of 12.2 AU. By 2001, the Long Wavelength Spectrometer at the Keck Observatory on Mauna Kea, Hawaii, was used more accurately to constrain the radius of the dust. It was found to lie within a 5.4 AU radius. The temperature of the dust was estimated as about 340 K. Based on heating from the star, this could place the grains as close as 2.5 AU from Zeta Leporis. It is now believed that the dust is coming from a massive asteroid belt in orbit around Zeta Leporis, making it the first extra-solar asteroid belt to be discovered. The estimated mass of the belt is about 200 times the total mass in the Solar System's asteroid belt, or . For comparison, this is more than half the total mass of the Moon. Astronomers Christine Chen and professor Michael Jura found that the dust contained within this belt should have fallen into the star within 20,000 years, a time period much shorter than Zeta Leporis's estimated age, suggesting that some mechanism must be replenishing the belt. The belt's age is estimated to be years. Solar encounter Bobylev's calculations from 2010 suggest that this star passed as close as 1.28 parsecs (4.17 light-years) from the Sun about 861,000 years ago. García-Sánchez 2001 suggested that the star passed 1.64 parsecs (5.34 light-years) from the Sun about 1 million years ago. It was the brightest star in the night sky over 1 million years ago, peaking with an apparent magnitude of -2.05. See also Delta Trianguli HD 69830 Vega References Further reading External links UCLA astronomers identify evidence of asteroid belt around nearby star: Findings indicate potential for planet or asteroid formation, 2001. Wikisky image of Zeta Leporis Leporis, Zeta Circumstellar disks Leporis, 14 038678 027288 Lepus (constellation) A-type main-sequence stars 1998 Durchmusterung objects Gliese and GJ objects
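The quoted grain distance of about 2.5 AU can be recovered from a simple radiative-equilibrium estimate. The sketch below assumes ideal blackbody grains (an assumption not stated in the article) and uses the 340 K dust temperature and the 14 solar luminosities given above; the 278 K normalisation is the standard blackbody equilibrium temperature at 1 AU from the Sun.

```python
import math

L_STAR = 14.0        # luminosity of Zeta Leporis in solar units (from the article)
T_DUST = 340.0       # K, dust temperature quoted in the article

def blackbody_distance_au(t_dust: float, lum: float) -> float:
    """Distance (AU) at which a blackbody grain reaches equilibrium temperature t_dust,
    using T ~ 278 K * (L / Lsun)**0.25 / sqrt(d_AU)."""
    return (278.3 * lum**0.25 / t_dust) ** 2

print(f"Implied grain distance: {blackbody_distance_au(T_DUST, L_STAR):.1f} AU")
# ~2.5 AU, matching the figure given in the article.
```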
Zeta Leporis
[ "Astronomy" ]
733
[ "Lepus (constellation)", "Constellations" ]
4,095,924
https://en.wikipedia.org/wiki/Value%20%28ethics%29
In ethics and social sciences, value denotes the degree of importance of some thing or action, with the aim of determining which actions are best to do or what way is best to live (normative ethics), or to describe the significance of different actions. Value systems are proscriptive and prescriptive beliefs; they affect the ethical behavior of a person or are the basis of their intentional activities. Often primary values are strong and secondary values are suitable for changes. What makes an action valuable may in turn depend on the ethical values of the objects it increases, decreases, or alters. An object with "ethic value" may be termed an "ethic or philosophic good" (noun sense). Values can be defined as broad preferences concerning appropriate courses of actions or outcomes. As such, values reflect a person's sense of right and wrong or what "ought" to be. "Equal rights for all", "Excellence deserves admiration", and "People should be treated with respect and dignity" are representatives of values. Values tend to influence attitudes and behavior and these types include moral values, doctrinal or ideological values, social values, and aesthetic values. It is debated whether some values that are not clearly physiologically determined, such as altruism, are intrinsic, and whether some, such as acquisitiveness, should be classified as vices or virtues. Fields of study Ethical issues that value may be regarded as a study under ethics, which, in turn, may be grouped as philosophy. Similarly, ethical value may be regarded as a subgroup of a broader field of philosophic value sometimes referred to as axiology. Ethical value denotes something's degree of importance, with the aim of determining what action or life is best to do, or at least attempt to describe the value of different actions. The study of ethical value is also included in value theory. In addition, values have been studied in various disciplines: anthropology, behavioral economics, business ethics, corporate governance, moral philosophy, political sciences, social psychology, sociology and theology. Similar concepts Ethical value is sometimes used synonymously with goodness. However, "goodness" has many other meanings and may be regarded as more ambiguous. Social value is a concept used in the public sector and in philanthropic contexts to cover the net social, environmental and economic benefits of individual and collective actions for which the concepts of economic value or profit are inadequate. For example, UK public procurement legislation refers to "social value" in its requirement that public bodies commissioning public services take account of the social, environmental and economic well-being of the area where a contract will be delivered: see Public Services (Social Value) Act 2012. Local authorities have adopted "social value" policies which facilitate a fair and consistent approach to implementing this duty: for example, Cornwall Council's Social Value policy notes that "bringing consistency in performance and our outcomes will mean that the process for defining social value will be standardised". In regard to central government procurement, the "Social Value Model" aims to ensure that comparable benefits are secured through procurement, and the Welsh government refers to "social value requirements" and "social value clauses" in its own public procurement guidance. The Bill & Melinda Gates Foundation refers to "social value creation" as a quantifiable objective in public policy and philanthropic decision-making. 
Types of value Personal versus cultural Personal values exist in relation to cultural values, either in agreement with or divergence from prevailing norms. A culture is a social system that shares a set of common values, in which such values permit social expectations and collective understandings of the good, beautiful and constructive. Without normative personal values, there would be no cultural reference against which to measure the virtue of individual values and so cultural identity would disintegrate. Relative or absolute Relative values differ between people, and on a larger scale, between people of different cultures. On the other hand, there are theories of the existence of absolute values, which can also be termed noumenal values (and not to be confused with mathematical absolute value). An absolute value can be described as philosophically absolute and independent of individual and cultural views, as well as independent of whether it is known or apprehended or not. Ludwig Wittgenstein was pessimistic about the idea that an elucidation would ever happen regarding the absolute values of actions or objects; "we can speak as much as we want about "life" and "its meaning", and believe that what we say is important. But these are no more than expressions and can never be facts, resulting from a tendency of the mind and not the heart or the will". Intrinsic or extrinsic Philosophic value may be split into instrumental value and intrinsic values. An instrumental value is worth having as a means towards getting something else that is good (e.g., a radio is instrumentally good in order to hear music). An intrinsically valuable thing is worth for itself, not as a means to something else. It is giving value intrinsic and extrinsic properties. An ethic good with instrumental value may be termed an ethic mean, and an ethic good with intrinsic value may be termed an end-in-itself. An object may be both a mean and end-in-itself. Summation Intrinsic and instrumental goods are not mutually exclusive categories. Some objects are both good in themselves, and also good for getting other objects that are good. "Understanding science" may be such a good, being both worthwhile in and of itself, and as a means of achieving other goods. In these cases, the sum of instrumental (specifically the all instrumental value) and intrinsic value of an object may be used when putting that object in value systems, which is a set of consistent values and measures. Universal values S. H. Schwartz, along with a number of psychology colleagues, has carried out empirical research investigating whether there are universal values, and what those values are. Schwartz defined 'values' as "conceptions of the desirable that influence the way people select action and evaluate events". He hypothesised that universal values would relate to three different types of human need: biological needs, social co-ordination needs, and needs related to the welfare and survival of groups Intensity The intensity of philosophic value is the degree it is generated or carried out, and may be regarded as the prevalence of the good, the object having the value. It should not be confused with the amount of value per object, although the latter may vary too, e.g. because of instrumental value conditionality. For example, taking a fictional life-stance of accepting waffle-eating as being the end-in-itself, the intensity may be the speed that waffles are eaten, and is zero when no waffles are eaten, e.g. if no waffles are present. 
Still, each waffle that had been present would still have value, no matter if it was being eaten or not, independent on intensity. Instrumental value conditionality in this case could be exampled by every waffle not present, making them less valued by being far away rather than easily accessible. In many life stances it is the product of value and intensity that is ultimately desirable, i.e. not only to generate value, but to generate it in large degree. Maximizing life-stances have the highest possible intensity as an imperative. Positive and negative value There may be a distinction between positive and negative philosophic or ethic value. While positive ethic value generally correlates with something that is pursued or maximized, negative ethic value correlates with something that is avoided or minimized. Protected value A protected value (also sacred value) is one that an individual is unwilling to trade off no matter what the benefits of doing so may be. For example, some people may be unwilling to kill another person, even if it means saving many other individuals. Protected values tend to be "intrinsically good", and most people can in fact imagine a scenario when trading off their most precious values would be necessary. If such trade-offs happen between two competing protected values such as killing a person and defending your family they are called tragic trade-offs. Protected values have been found to be play a role in protracted conflicts (e.g., the Israeli-Palestinian conflict) because they can hinder businesslike (''utilitarian'') negotiations. A series of experimental studies directed by Scott Atran and Ángel Gómez among combatants on the ISIS front line in Iraq and with ordinary citizens in Western Europe suggest that commitment to sacred values motivate the most "devoted actors" to make the costliest sacrifices, including willingness to fight and die, as well as a readiness to forsake close kin and comrades for those values if necessary. From the perspective of utilitarianism, protected values are biases when they prevent utility from being maximized across individuals. According to Jonathan Baron and Mark Spranca, protected values arise from norms as described in theories of deontological ethics (the latter often being referred to in context with Immanuel Kant). The protectedness implies that people are concerned with their participation in transactions rather than just the consequences of it. Economic versus philosophic value Philosophical value is distinguished from economic value, since it is independent from some other desired condition or commodity. The economic value of an object may rise when the exchangeable desired condition or commodity, e.g. money, become high in supply, and vice versa when supply of money becomes low. Nevertheless, economic value may be regarded as a result of philosophical value. In the subjective theory of value, the personal philosophic value a person puts in possessing something is reflected in what economic value this person puts on it. The limit where a person considers to purchase something may be regarded as the point where the personal philosophic value of possessing something exceeds the personal philosophic value of what is given up in exchange for it, e.g. money. In this light, everything can be said to have a "personal economic value" in contrast to its "societal economic value." Personal values Personal values provide an internal reference for what is good, beneficial, important, useful, beautiful, desirable and constructive. 
Values are one of the factors that generate behavior (besides needs, interests and habits) and influence the choices made by an individual. Values may help common human problems for survival by comparative rankings of value, the results of which provide answers to questions of why people do what they do and in what order they choose to do them. Moral, religious, and personal values, when held rigidly, may also give rise to conflicts that result from a clash between differing world views. Over time the public expression of personal values that groups of people find important in their day-to-day lives, lay the foundations of law, custom and tradition. Recent research has thereby stressed the implicit nature of value communication. Consumer behavior research proposes there are six internal values and three external values. They are known as List of Values (LOV) in management studies. They are self respect, warm relationships, sense of accomplishment, self-fulfillment, fun and enjoyment, excitement, sense of belonging, being well respected, and security. From a functional aspect these values are categorized into three and they are interpersonal relationship area, personal factors, and non-personal factors. From an ethnocentric perspective, it could be assumed that a same set of values will not reflect equally between two groups of people from two countries. Though the core values are related, the processing of values can differ based on the cultural identity of an individual. Individual differences Schwartz proposed a theory of individual values based on surveys data. His model groups values in terms of growth versus protection, and personal versus social focus. Values are then associated with openness to change (which Schwartz views as related to personal growth), self-enhancement (which Schwartz views as mostly to do with self-protection), conservation (which Schwartz views as mostly related to social-protection), and self-transcendence (which Schwartz views as a form of social growth). Within this Schwartz places 10 universal values: self-direction, stimulation and hedonism (related to openness growth), achievement and power (related to self enhancement), security, conformity and tradition (related to conservation), and humility, benevolence and universalism (relate to self-transcendence). Personality traits using the big 5 measure correlate with Schwartz's value construct. Openness and extraversion correlates with the values related to openness-to-change (openness especially with self-direction, extraversion especially with stimulation); agreeableness correlates with self-transcendence values (especially benevolence); extraversion is correlated with self-enhancement and negatively with traditional values. Conscientiousness correlates with achievement, conformity and security. Men are found to value achievement, self-direction, hedonism, and stimulation more than women, while women value benevolence, universality and tradition higher. The order of Schwartz's traits are substantially stability amongst adults over time. Migrants values change when they move to a new country, but the order of preferences is still quite stable. Motherhood causes women to shift their values towards stability and away from openness-to-change but not fathers. Moral foundations theory Moral foundation theory identifies five forms of moral foundation: harm/care, fairness/reciprocity, in-group/loyalty, authority/respect, and purity/sanctity. 
The first two are often termed individualizing foundations, with the remaining three being binding foundations. The moral foundations were found to be correlated with the theory of basic human values. The strongest correlations are between conservative values and the binding foundations. Cultural values Individual cultures emphasize values which their members broadly share. Values of a society can often be identified by examining the level of honor and respect received by various groups and ideas. Values clarification differs from cognitive moral education: value clarification consists of "helping people clarify what their lives are for and what is worth working for. It encourages students to define their own values and to understand others' values." Cognitive moral education builds on the belief that students should learn to value things like democracy and justice as their moral reasoning develops. Values relate to the norms of a culture, but they are more global and intellectual than norms. Norms provide rules for behavior in specific situations, while values identify what should be judged as good or evil. While norms are standards, patterns, rules and guides of expected behavior, values are abstract concepts of what is important and worthwhile. Flying the national flag on a holiday is a norm, but it reflects the value of patriotism. Wearing dark clothing and appearing solemn are normative behaviors to manifest respect at a funeral. Different cultures represent values differently and with different levels of emphasis. "Over the last three decades, traditional-age college students have shown an increased interest in personal well-being and a decreased interest in the welfare of others." Values seem to have changed, affecting the beliefs and attitudes of the students. Members take part in a culture even if each member's personal values do not entirely agree with some of the normative values sanctioned in that culture. This reflects an individual's ability to synthesize and extract aspects valuable to them from the multiple subcultures they belong to. If a group member expresses a value that seriously conflicts with the group's norms, the group's authority may respond in various ways to encourage conformity or to stigmatize the non-conforming behavior of that member. For example, imprisonment can result from conflict with social norms that the state has established as law. Furthermore, cultural values can be expressed at a global level through institutions participating in the global economy. For example, values important to global governance can include leadership, legitimacy, and efficiency. Within our current global governance architecture, leadership is expressed through the G20, legitimacy through the United Nations, and efficiency through member-driven international organizations. The expertise provided by international organizations and civil society depends on the incorporation of flexibility in the rules, to preserve the expression of identity in a globalized world. Nonetheless, in warlike economic competition, differing views may contradict each other, particularly in the field of culture. Thus audiences in Europe may regard a movie as an artistic creation and grant it the benefit of special treatment, while audiences in the United States may see it as mere entertainment, whatever its artistic merits. EU policies based on the notion of "cultural exception" can become juxtaposed with the liberal policy of "cultural specificity" in English-speaking countries. 
Indeed, international law traditionally treats films as property and the content of television programs as a service. Consequently, cultural interventionist policies can find themselves opposed to the Anglo-Saxon liberal position, causing failures in international negotiations. Development and transmission Values are generally received through cultural means, especially diffusion and transmission or socialization from parents to children. Parents in different cultures have different values. For example, parents in a hunter–gatherer society or surviving through subsistence agriculture value practical survival skills from a young age. Many such cultures begin teaching babies to use sharp tools, including knives, before their first birthdays. Italian parents value social and emotional abilities and having an even temperament. Spanish parents want their children to be sociable. Swedish parents value security and happiness. Dutch parents value independence, long attention spans, and predictable schedules. American parents are unusual for strongly valuing intellectual ability, especially in a narrow "book learning" sense. The Kipsigis people of Kenya value children who are not only smart, but who employ that intelligence in a responsible and helpful way, which they call ng'om. Luos of Kenya value education and pride which they call "nyadhi". Factors that influence the development of cultural values are summarized below. The Inglehart–Welzel cultural map of the world is a two-dimensional cultural map showing the cultural values of the countries of the world along two dimensions: The traditional versus secular-rational values reflect the transition from a religious understanding of the world to a dominance of science and bureaucracy. The second dimension named survival values versus self-expression values represents the transition from industrial society to post-industrial society. Cultures can be distinguished as tight and loose in relation to how much they adhere to social norms and tolerates deviance. Tight cultures are more restrictive, with stricter disciplinary measures for norm violations while loose cultures have weaker social norms and a higher tolerance for deviant behavior. A history of threats, such as natural disasters, high population density, or vulnerability to infectious diseases, is associated with greater tightness. It has been suggested that tightness allows cultures to coordinate more effectively to survive threats. Studies in evolutionary psychology have led to similar findings. The so-called regality theory finds that war and other perceived collective dangers have a profound influence on both the psychology of individuals and on the social structure and cultural values. A dangerous environment leads to a hierarchical, authoritarian, and warlike culture, while a safe and peaceful environment fosters an egalitarian and tolerant culture. Value system A value system is a set of consistent values used for the purpose of ethical or ideological integrity. Consistency As a member of a society, group or community, an individual can hold both a personal value system and a communal value system at the same time. In this case, the two value systems (one personal and one communal) are externally consistent provided they bear no contradictions or situational exceptions between them. A value system in its own right is internally consistent when its values do not contradict each other and its exceptions are or could be abstract enough to be used in all situations and consistently applied. 
Conversely, a value system by itself is internally inconsistent if: its values contradict each other and its exceptions are highly situational and inconsistently applied. Value exceptions Abstract exceptions serve to reinforce the ranking of values. Their definitions are generalized enough to be relevant to any and all situations. Situational exceptions, on the other hand, are ad hoc and pertain only to specific situations. The presence of a type of exception determines one of two more kinds of value systems: An idealized value system is a listing of values that lacks exceptions. It is, therefore, absolute and can be codified as a strict set of proscriptions on behavior. Those who hold to their idealized value system and claim no exceptions (other than the default) are called absolutists. A realized value system contains exceptions to resolve contradictions between values in practical circumstances. This type is what people tend to use in daily life. The difference between these two types of systems can be seen when people state that they hold one value system yet in practice deviate from it, thus holding a different value system. For example, a religion lists an absolute set of values while the practice of that religion may include exceptions. Implicit exceptions bring about a third type of value system called a formal value system. Whether idealized or realized, this type contains an implicit exception associated with each value: "as long as no higher-priority value is violated". For instance, a person might feel that lying is wrong. Since preserving a life is probably more highly valued than adhering to the principle that lying is wrong, lying to save someone's life is acceptable. Perhaps too simplistic in practice, such a hierarchical structure may warrant explicit exceptions. Conflict Although sharing a set of common values, like hockey is better than baseball or ice cream is better than fruit, two different parties might not rank those values equally. Also, two parties might disagree as to certain actions are right or wrong, both in theory and in practice, and find themselves in an ideological or physical conflict. Ethonomics, the discipline of rigorously examining and comparing value systems, enables us to understand politics and motivations more fully in order to resolve conflicts. An example conflict would be a value system based on individualism pitted against a value system based on collectivism. A rational value system organized to resolve the conflict between two such value systems might take the form below. Added exceptions can become recursive and often convoluted. Individuals may act freely unless their actions harm others or interfere with others' freedom or with functions of society that individuals need, provided those functions do not themselves interfere with these proscribed individual rights and were agreed to by a majority of the individuals. A society (or more specifically the system of order that enables the workings of a society) exists for the purpose of benefiting the lives of the individuals who are members of that society. The functions of a society in providing such benefits would be those agreed to by the majority of individuals in the society. A society may require contributions from its members in order for them to benefit from the services provided by the society. 
The failure of individuals to make such required contributions could be considered a reason to deny those benefits to them, although a society could elect to consider hardship situations in determining how much should be contributed. A society may restrict behavior of individuals who are members of the society only for the purpose of performing its designated functions agreed to by the majority of individuals in the society, only insofar as they violate the aforementioned values. This means that a society may abrogate the rights of any of its members who fails to uphold the aforementioned values. See also Attitude (psychology) Axiological ethics Axiology Clyde Kluckhohn and his value orientation theory Hofstede's Framework for Assessing Culture Instrumental and intrinsic value Intercultural communication Meaning of life Paideia Rokeach Value Survey Spiral Dynamics The Right and the Good Value judgment World Values Survey Western values References Further reading see https://www.researchgate.net/publication/290349218_The_political_algebra_of_global_value_change_General_models_and_implications_for_the_Muslim_world External links Concepts in ethics Concepts in metaphysics Codes of conduct Moral psychology Motivation Social philosophy Social psychology Social systems
Value (ethics)
[ "Biology" ]
4,900
[ "Ethology", "Behavior", "Motivation", "Human behavior" ]
4,095,925
https://en.wikipedia.org/wiki/Mass-to-light%20ratio
In astrophysics and physical cosmology the mass-to-light ratio, normally designated with the Greek letter upsilon, Υ, is the quotient between the total mass of a spatial volume (typically on the scales of a galaxy or a cluster) and its luminosity. These ratios are calculated relative to the Sun as a baseline ratio, which is a constant Υ☉ = 5133 kg/W, equal to the solar mass M☉ divided by the solar luminosity L☉: Υ☉ = M☉/L☉. The mass-to-light ratios of galaxies and clusters are all much greater than Υ☉, due in part to the fact that most of the matter in these objects does not reside within stars, and observations suggest that a large fraction is present in the form of dark matter. Luminosities are obtained from photometric observations, correcting the observed brightness of the object for the distance dimming and extinction effects. In general, unless a complete spectrum of the radiation emitted by the object is obtained, a model must be extrapolated through either power law or blackbody fits. The luminosity thus obtained is known as the bolometric luminosity. Masses are often calculated from the dynamics of the virialized system or from gravitational lensing. Typical mass-to-light ratios for galaxies range from 2 to 10 Υ☉, while on the largest scales, the mass-to-light ratio of the observable universe is approximately 100 Υ☉, in concordance with the current best fit cosmological model. References External links Concepts in astrophysics Physical cosmology Ratios
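A short worked example may make the normalization concrete. The galaxy mass and luminosity used below (10^11 solar masses emitting 10^10 solar luminosities) are illustrative assumptions, not figures from this article.

```latex
\Upsilon \equiv \frac{M}{L}, \qquad
\Upsilon_\odot = \frac{M_\odot}{L_\odot} \approx 5.1\times 10^{3}\ \mathrm{kg\,W^{-1}}, \qquad
\frac{\Upsilon}{\Upsilon_\odot} = \frac{M/M_\odot}{L/L_\odot} = \frac{10^{11}}{10^{10}} = 10
```

Such a galaxy would sit at the upper end of the 2 to 10 range quoted above, corresponding to roughly 5 × 10^4 kg/W in SI units.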
Mass-to-light ratio
[ "Physics", "Astronomy", "Mathematics" ]
306
[ "Astronomical sub-disciplines", "Concepts in astrophysics", "Theoretical physics", "Astrophysics", "Ratios", "Arithmetic", "Physical cosmology" ]
4,096,160
https://en.wikipedia.org/wiki/ObjectARX
ObjectARX (AutoCAD Runtime eXtension) is an API for customizing and extending AutoCAD. The ObjectARX SDK is published by Autodesk and freely available under license from Autodesk. The ObjectARX SDK consists primarily of C++ headers and libraries that can be used to build Windows DLLs that can be loaded into the AutoCAD process and interact directly with the AutoCAD application. ObjectARX modules use the file extensions .arx and .dbx instead of the more common .dll. ObjectARX is the most powerful of the various AutoCAD APIs, and the most difficult to master. The typical audience for the ObjectARX SDK includes professional programmers working either as commercial application developers or as in-house developers at companies using AutoCAD. New versions of the ObjectARX SDK are released with each new AutoCAD release, and ObjectARX modules built with a specific SDK version are typically limited to running inside the corresponding version of AutoCAD. Recent versions of the ObjectARX SDK include support for the .NET platform by providing managed wrapper classes for native objects and functions. The native classes and libraries that are made available via the ObjectARX API are also used internally by the AutoCAD code. As a result of this tight linkage with AutoCAD itself, the libraries are very compiler specific, and work only with the same compiler that Autodesk uses to build AutoCAD. Historically, this has required ObjectARX developers to use various versions of Microsoft Visual Studio, with different versions of the SDK requiring different versions of Visual Studio. Although ObjectARX is specific to AutoCAD, Open Design Alliance announced in 2008 a new API called DRX (included in their DWGdirect library) that attempts to emulate the ObjectARX API in products like IntelliCAD that use the DWGdirect libraries. References See also Autodesk Developer Network Autodesk AutoCAD Application programming interfaces
ObjectARX
[ "Technology" ]
418
[ "Computing stubs" ]
4,096,592
https://en.wikipedia.org/wiki/Cray%20CS6400
The Cray Superserver 6400, or CS6400, is a discontinued multiprocessor server computer system produced by Cray Research Superservers, Inc., a subsidiary of Cray Research, and launched in 1993. The CS6400 was also sold as the Amdahl SPARCsummit 6400E. The CS6400 (codenamed SuperDragon during development) superseded the earlier SPARC-based Cray S-MP system, which was designed by Floating Point Systems. However, the CS6400 adopted the XDBus packet-switched inter-processor bus also used in Sun Microsystems' SPARCcenter 2000 (Dragon) and SPARCserver 1000 (Baby Dragon or Scorpion) Sun4d systems. This bus originated in the Xerox Dragon multiprocessor workstation designed at Xerox PARC. The CS6400 was available with either 60 MHz SuperSPARC-I or 85 MHz SuperSPARC-II processors, maximum RAM capacity was 16 GB. Other features shared with the Sun servers included use of the same SuperSPARC microprocessor and Solaris operating system. However, the CS6400 could be configured with four to 64 processors on quad XDBusses at 55 MHz, compared with the SPARCcenter 2000's maximum of 20 on dual XDBusses at 40 or 50 MHz and the SPARCserver 1000's maximum of 8 on a single XDBus. Unlike the Sun SPARCcenter 2000 and SPARCserver 1000, each CS6400 is equipped with an external System Service Processor (SSP), a SPARCstation fitted with a JTAG interface to communicate with the CS6400 to configure its internal bus control card. The other systems have a JTAG interface, but it is not used for this purpose. While the CS6400 only requires the SSP to be used for configuration changes (e.g. a CPU card is pulled for maintenance), some derivative designs, in particular the Sun Enterprise 10000, are useless without their SSP. Upon Silicon Graphics' acquisition of Cray Research in 1996, the Superserver business (by now the Cray Business Systems Division) was sold to Sun. This included Starfire, the CS6400's successor then under development, which became the Sun Enterprise 10000. References External links "Cray: Faster Than A Bottleneck Bullet", Byte, January 1996 Enthusiast photographs Running system (circa 2004) More board-level photographs Alternative location for some of above Cs6400 Sun servers Supercomputers 32-bit computers
Cray CS6400
[ "Technology" ]
536
[ "Supercomputers", "Supercomputing" ]
4,096,680
https://en.wikipedia.org/wiki/Tunnel%20washer
A tunnel washer, also called a continuous batch washer, is an industrial washing machine designed specifically to handle heavy loads of laundry. The machine consists of a long, horizontal rotating drum whose interior is divided into successive compartments by a helical (Archimedean) screw. The screw is made of perforated metal, so items can progress through the washer in one direction, while water and washing chemicals move through in the opposite direction. Thus, the linen moves through pockets of progressively cleaner water and fresher chemicals. Soiled linen can be continuously fed into one end of the tunnel while clean linen emerges from the other. Originally, one of the machine's major drawbacks was the necessity of using one wash formula for all items. Modern computerized tunnel washers can monitor and adjust the chemical levels in individual pockets, effectively overcoming this problem. See also Washing machine References Laundry washing equipment Machines
Tunnel washer
[ "Physics", "Technology", "Engineering" ]
154
[ "Physical systems", "Machines", "Mechanical engineering" ]
4,097,228
https://en.wikipedia.org/wiki/Shimizu%20Mega-City%20Pyramid
The Shimizu TRY 2004 Mega-City Pyramid is a proposed Shimizu Corporation project for the construction of a massive self-sustaining arcology-pyramid over Tokyo Bay in Japan that would have businesses, parks, and other services contained within the building. The structure would house 1,000,000 people. The structure would be 2,004 meters (6,575 feet) high and would comprise five stacked trusses, each with dimensions similar to those of the Great Pyramid of Giza. This pyramid would help address Tokyo's increasing lack of space, although the project would only handle a small fraction of the population of the Greater Tokyo Area. The proposed structure is so large that it could not be built with current conventional materials, due to their weight. The design relies on the future availability of super-strong lightweight materials based on carbon nanotubes and graphene presently being researched. The plan was to start construction in 2030, but no further action has been taken. Shimizu is still determined to complete the project by 2110; if built, it would be the largest man-made structure in history. History Tokyo has long faced overpopulation, driven by internal migration among other causes. The Shimizu Corporation, which has been located in Tokyo since the Edo period, witnessed this firsthand and had an idea for a solution that differed from other proposals. The idea originated in 1982, when one of the company's employees went to see Blade Runner and saw the opening scene showing the two pyramids from which the Tyrell Corporation operated. The architectural marvel amazed the engineer, who told his colleagues about it the following day, leading to the idea for the Mega-City Pyramid. Ten years later, in October 1992, representatives traveled to patent offices around the world to patent the idea internationally. Materials and construction process The pyramid's foundation would be formed by 36 piers made of special concrete. Because the seismically active Pacific Ring of Fire cuts right through Japan, the external structure of the pyramid would be an open network of megatrusses, supporting struts made from carbon nanotubes to allow the pyramid to stand against and let through high winds, and survive earthquakes and tsunamis. The trusses would be coated with photovoltaic film to convert sunlight into electricity and help power the city. The city would also be powered by pond scum or algae. Robotic systems are planned to play a major part in both construction and building maintenance. Interior traffic and buildings Transportation within the city would be provided by accelerating walkways, inclined elevators, and a personal rapid transit system where automated pods would travel within the trusses. Housing and office space would be provided by twenty-four or more 30-story skyscrapers suspended from above and below and attached to the pyramid's supporting structure with nanotube cables. See also Arcology Megacity Sky City 1000 X-Seed 4000 Proposed tall buildings and structures References External links Discovery Channel's Extreme Engineering: City in a Pyramid Home Page for Bini Systems' proposed pneumatic construction method TRY 2004-Shimizu's Dream - Shimizu Corporation (Project site) Unbuilt buildings and structures in Japan Proposed buildings and structures in Japan Science and technology in Japan Robotics in Japan Proposed populated places Artificial islands of Tokyo Pyramids in Japan Shimizu Corporation Proposed arcologies
Shimizu Mega-City Pyramid
[ "Technology" ]
685
[ "Exploratory engineering", "Proposed arcologies" ]
4,097,704
https://en.wikipedia.org/wiki/Furoxan
Furoxan or 1,2,5-oxadiazole 2-oxide is a heterocycle of the isoxazole family and an amine oxide derivative of furazan. It is a nitric oxide donor. As such, furoxan and its derivatives are actively researched as potential new drugs (Ipramidil) and insensitive high density explosives (4,4’-Dinitro-3,3’-diazenofuroxan). Furoxanes can be formed by dimerization of nitrile oxides. References Amine oxides Oxadiazoles
Furoxan
[ "Chemistry" ]
127
[ "Amine oxides", "Functional groups" ]
4,097,837
https://en.wikipedia.org/wiki/Lat%C3%ADn%20dos%20canteiros
("Latin of the stonecutters") or is an argot employed by stonecutters in Galicia, Spain, particularly in the area of Pontevedra, based on the Galician language. They handed down their knowledge in the art of how to split and cut stone by means of this secret language to the next generation. Description The argot contains a number of Basque loanwords. Sample of text See also Barallete Bron Cant Gacería References Cant languages Occupational cryptolects Galician language Cants with Basque influence Culture of Galicia Stonemasonry
Latín dos canteiros
[ "Engineering" ]
119
[ "Construction", "Stonemasonry" ]
4,098,234
https://en.wikipedia.org/wiki/Sort%20%28Unix%29
In computing, sort is a standard command line program of Unix and Unix-like operating systems that prints the lines of its input, or the concatenation of all files listed in its argument list, in sorted order. Sorting is done based on one or more sort keys extracted from each line of input. By default, the entire input is taken as the sort key. Blank space is the default field separator. The command supports a number of command-line options that can vary by implementation. For instance, the "-r" flag will reverse the sort order. History A command that invokes a general sort facility was first implemented within Multics. Later, it appeared in Version 1 Unix. This version was originally written by Ken Thompson at AT&T Bell Laboratories. By Version 4 Thompson had modified it to use pipes, but sort retained an option to name the output file because it was used to sort a file in place. In Version 5, Thompson invented "-" to represent standard input. The version of sort bundled in GNU coreutils was written by Mike Haertel and Paul Eggert. This implementation employs the merge sort algorithm. Similar commands are available on many other operating systems; for example, a sort command is part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2. The command has also been ported to the IBM i operating system. Syntax sort [OPTION]... [FILE]... With no FILE, or when FILE is -, the command reads from standard input. Parameters Examples Sort a file in alphabetical order $ cat phonebook Smith, Brett 555-4321 Doe, John 555-1234 Doe, Jane 555-3214 Avery, Cory 555-4132 Fogarty, Suzie 555-2314 $ sort phonebook Avery, Cory 555-4132 Doe, Jane 555-3214 Doe, John 555-1234 Fogarty, Suzie 555-2314 Smith, Brett 555-4321 Sort by number The -n option makes the program sort according to numerical value. The du command produces output that starts with a number, the file size, so its output can be piped to sort to produce a list of files sorted by (ascending) file size: $ du /bin/* | sort -n 4 /bin/domainname 24 /bin/ls 102 /bin/sh 304 /bin/csh The find command with the -ls option prints file sizes in the 7th field, so a list of the files sorted by file size is produced by: $ find . -name "*.tex" -ls | sort -k 7n Columns or fields Use the -k option to sort on a certain column. For example, use "-k 2" to sort on the second column. In old versions of sort, the +1 option made the program sort on the second column of data (+2 for the third, etc.). This usage is deprecated. $ cat zipcode Adam 12345 Bob 34567 Joe 56789 Sam 45678 Wendy 23456 $ sort -k 2n zipcode Adam 12345 Wendy 23456 Bob 34567 Sam 45678 Joe 56789 Sort on multiple fields The -k m,n option lets you sort on a key that is potentially composed of multiple fields (start at column m, end at column n): $ cat quota fred 2000 bob 1000 an 1000 chad 1000 don 1500 eric 500 $ sort -k2,2n -k1,1 quota eric 500 an 1000 bob 1000 chad 1000 don 1500 fred 2000 Here the first sort is done using column 2. -k2,2n specifies sorting on the key starting and ending with column 2, and sorting numerically. If -k2 is used instead, the sort key would begin at column 2 and extend to the end of the line, spanning all the fields in between. -k1,1 dictates breaking ties using the value in column 1, sorting alphabetically by default. Note that bob and chad have the same quota and are sorted alphabetically in the final output. 
Sorting a pipe delimited file $ sort -k2,2 -k1,1 -t'|' zipcode Adam|12345 Wendy|23456 Bob|34567 Sam|45678 Joe|56789 Sorting a tab delimited file Sorting a file with tab separated values requires a tab character to be specified as the column delimiter. This illustration uses the shell's dollar-quote notation to specify the tab as a C escape sequence. $ sort -k2,2 -t $'\t' phonebook Doe, John 555-1234 Fogarty, Suzie 555-2314 Doe, Jane 555-3214 Avery, Cory 555-4132 Smith, Brett 555-4321 Sort in reverse The -r option just reverses the order of the sort: $ sort -rk 2n zipcode Joe 56789 Sam 45678 Bob 34567 Wendy 23456 Adam 12345 Sort in random The GNU implementation has a -R --random-sort option based on hashing; this is not a full random shuffle because it will sort identical lines together. A true random sort is provided by the Unix utility shuf. Sort by version The GNU implementation has a -V --version-sort option which is a natural sort of (version) numbers within text. Two text strings that are to be compared are split into blocks of letters and blocks of digits. Blocks of letters are compared alpha-numerically, and blocks of digits are compared numerically (i.e., skipping leading zeros, more digits means larger, otherwise the leftmost digits that differ determine the result). Blocks are compared left-to-right and the first non-equal block in that loop decides which text is larger. This happens to work for IP addresses, Debian package version strings and similar tasks where numbers of variable length are embedded in strings. See also Collation List of Unix commands uniq shuf References Further reading External links Original Sort manpage The original BSD Unix program's manpage Further details about sort at Softpanorama Computing commands Sorting algorithms Unix text processing utilities Unix SUS2008 utilities Plan 9 commands Inferno (operating system) commands IBM i Qshell commands
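The block-splitting comparison that -V performs can be approximated in a few lines of Python. This is only an illustrative sketch of the idea described above: it ignores GNU sort's extra rules (locale handling and the special treatment of suffixes such as "~"), and the function name and sample file names are invented for the example.

```python
import re

def version_key(s):
    # Split into alternating non-digit / digit blocks, e.g. "file-1.10" -> ['file-', '1', '.', '10', '']
    blocks = re.split(r"(\d+)", s)
    # Digit blocks compare numerically (leading zeros are dropped by int()); other blocks compare as text
    return [int(b) if b.isdigit() else b for b in blocks]

names = ["file-1.10.txt", "file-1.2.txt", "file-1.9.txt"]
print(sorted(names, key=version_key))
# ['file-1.2.txt', 'file-1.9.txt', 'file-1.10.txt']  -- the order `sort -V` would give
```

A plain lexicographic sort of the same names would place file-1.10.txt before file-1.2.txt, because it compares character by character and "1" sorts before "2"; that is exactly the behaviour version sort is meant to avoid.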
Sort (Unix)
[ "Mathematics", "Technology" ]
1,299
[ "IBM i Qshell commands", "Plan 9 commands", "Sorting algorithms", "Computing commands", "Order theory", "Inferno (operating system) commands" ]
4,098,326
https://en.wikipedia.org/wiki/Pulsed%20DC
Pulsed DC (PDC) or pulsating direct current is a periodic current which changes in value but never changes direction. Some authors use the term pulsed DC to describe a signal consisting of one or more rectangular ("flat-topped"), rather than sinusoidal, pulses. Pulsed DC is commonly produced from AC (alternating current) by a half-wave rectifier or a full-wave rectifier. Full wave rectified ac is more commonly known as Rectified AC. PDC has some characteristics of both alternating current (AC) and direct current (DC) waveforms. The voltage of a DC wave is roughly constant, whereas the voltage of an AC waveform continually varies between positive and negative values. Like an AC wave, the voltage of a PDC wave continually varies, but like a DC wave, the sign of the voltage is constant. Pulsating direct current is used on PWM controllers. Smoothing Most modern electronic items function using a DC voltage, so the PDC waveform must usually be smoothed before use. A reservoir capacitor converts the PDC wave into a DC waveform with some superimposed ripple. When the PDC voltage is initially applied, it charges the capacitor, which acts as a short term storage device to keep the output at an acceptable level while the PDC waveform is at a low voltage. Voltage regulation is often also applied using either linear or switching regulation. Difference from AC Pulsating direct current has an average value equal to a constant (DC) along with a time-dependent pulsating component added to it, while the average value of alternating current is zero in steady state (or a constant if it has a DC offset, value of which will then be equal to that offset). Devices and circuits may respond differently to pulsating DC than they would to non-pulsating DC, such as a battery or regulated power supply and should be evaluated. Uses Pulsed DC may also be generated for purposes other than rectification. It is often used to reduce electric arcs when generating thin carbon films, and for increasing yield in semiconductor fabrication by reducing electrostatic build-up. It is also generated by the voltage regulators in some automobiles, e.g., the classic air-cooled Volkswagen Beetle. Pulsed DC is also commonly used in driving light-emitting diodes (LEDs) to lower the intensity. Since light-emitting diodes cannot be reliably dimmed through the simple reduction of voltage as in an incandescent bulb, pulsed DC is used to produce many rapid flashes of light that to the human eye are indiscernible as individual flashes, but are seen as a lower brightness. Since an LED has no traditional filament to stress, this also has the added effect of prolonging the lifespan of the LED by reducing its on-time. This can sometimes be seen on videos where an LED or lamp assembly composed of LEDs is filmed at a frame rate very close to - but not exactly - the same as the pulsed DC frequency, which causes the lamp to slowly and occasionally fade in and out as the LEDs fall out of synchronization with the video frame. Dangers While smoothed DC is less dangerous than the same current level of AC, pulsed DC can be more dangerous than AC. Short DC pulses below 100 μs are more efficient at evoking action potentials and disturbing the conduction systems in the body than AC which usually has a much smoother and longer wave form. Therefore lower levels of current will be enough to cause the same effects equal to those of AC. 
On the other hand, at comparably short waveforms AC will also have a higher frequency, which is less effective at stimulating nerves and muscles: its biphasic nature does not allow enough charge to build up in the short time between phases to depolarize cell membranes and evoke an action potential. References Bibliography Electric power Electric current
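As a rough illustration of the smoothing discussion above, the calculation below estimates the peak-to-peak ripple on a reservoir capacitor fed by half-wave rectified mains. All component values are assumptions chosen for the example, not values from the article, and the formula is the usual back-of-the-envelope approximation that the capacitor supplies a constant load current for about one full mains period.

```python
# Rough reservoir-capacitor ripple estimate for half-wave rectified AC (dV = I * dt / C)
f_mains = 50.0     # supply frequency in hertz (assumed)
C = 1000e-6        # reservoir capacitance in farads (assumed)
i_load = 0.1       # load current in amperes (assumed)

hold_up_time = 1.0 / f_mains          # time the capacitor must carry the load between peaks
ripple = i_load * hold_up_time / C    # peak-to-peak ripple voltage
print(f"estimated peak-to-peak ripple: {ripple:.1f} V")   # ~2.0 V
```

With full-wave rectification the hold-up time is roughly halved, so the same load and capacitance give about half the ripple, which is one reason full-wave rectifiers are usually preferred ahead of a regulator.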
Pulsed DC
[ "Physics", "Engineering" ]
811
[ "Physical quantities", "Electrical engineering", "Power (physics)", "Electric power", "Electric current", "Wikipedia categories named after physical quantities" ]
4,098,466
https://en.wikipedia.org/wiki/Sister
A sister is a woman or a girl who shares parents or a parent with another individual; a female sibling. The male counterpart is a brother. Although the term typically refers to a familial relationship, it is sometimes used endearingly to refer to non-familial relationships. A full sister is a first-degree relative. Overview The English word sister comes from Old Norse which itself derives from Proto-Germanic *swestēr, both of which have the same meaning, i.e. sister. Some studies have found that sisters display more traits indicating jealousy around their siblings than their male counterparts, brothers. In some cultures, sisters are afforded a role of being under the protection by male siblings, especially older brothers, from issues ranging from bullies or sexual advances by womanizers. In some quarters, the term sister has gradually broadened its colloquial meaning to include individuals stipulating kinship. In response, in order to avoid equivocation, some publishers prefer the usage of female sibling over sister. Males with a twin sister sometimes view her as their female alter ego, or what they would have been like if they had two X chromosomes. A study in Perth, Australia found that girls having only youngers brothers resulted in a chastity effect: losing their virginity on average more than a year later than average. This has been hypothesized as being attributed to the pheromones in their brothers' sweat and household-related errands. Sororal relationships Various studies have shown that older sisters are likely to give a varied gender role to their younger siblings, as well as being more likely to develop a close bond with their younger siblings. Older sisters are more likely to play with their younger siblings. Younger siblings display more needy behavior when near their older sister and are more likely to be tolerant of an older sister's bad behavior. Boys with only one older sister are more likely to display stereotypically male behavior, and such masculine boys increased their masculine behavior with the more sisters they have. The reverse is true for young boys with several sisters, as they tend to be feminine, however, they outgrow this by the time they approach pubescence. Boys with older sisters were less likely to be delinquent or have emotional and behavioral disorders. A younger sister is less likely to be scolded by older siblings than a younger brother. The most common recreational activity between older brother/younger sister pairs is art drawing. Some studies also found a correlation between having an older sister and constructive discussions about safe sexual practices. Some studies have shown that men without sisters are more likely to be ineffectual at courtship and romantic relationships. 
Famous sisters LaVerne, Maxene, and Patricia Andrews, singing group Anna, Louisa, Elizabeth, and Abigail Alcott, daughters of Amos Bronson Alcott and Abby May Saffron, Lily, and Ruby Aldridge, models Natalie, Emily, and Alyvia Alyn Lind, actresses and daughters of Barbara Alyn Woods Maude Apatow and Iris Apatow, actresses and daughters of Judd Apatow and Leslie Mann Rosanna, Patricia, and Alexis Arquette, actresses Cassandra Austen, watercolourist and Jane Austen, novelist Chloe Bailey and Halle Bailey, singers, actresses, and members of Chloe x Halle Nikki Bella and Brie Bella, professional wrestlers and television personalities Estelle Bennett and Ronnie Spector, members of The Ronettes, which included their cousin, Nedra Talley Charlotte, Emily, and Anne Brontë, novelists and poets Barbara and Jenna Bush, daughters of George W. Bush and Laura Bush Liz Cheney and Mary Cheney, daughters of Dick Cheney and Lynne Cheney Joan Collins and Jackie Collins, actresses and authors Penélope Cruz and Mónica Cruz, actresses Brandi Cyrus, Miley Cyrus, and Noah Cyrus, singers, actresses, and daughters of Billy Ray Cyrus Kaley Cuoco and Briana Cuoco, actresses Dixie D'Amelio and Charli D'Amelio, social media personalities Poppy Delevingne and Cara Delevingne, models and actresses Nicola and Gabriella DeMartino, YouTubers, internet personalities, and singers Emily Deschanel and Zooey Deschanel, actresses Emilie, Annette, Marie, Cecile and Yvonne Dionne, the first quintuplets to survive infancy Haylie Duff and Hilary Duff, actresses and singers Elizabeth II and Princess Margaret, daughters of George VI and Queen Elizabeth The Queen Mother Abby Elliott and Bridey Elliott, actresses, comedians, daughters of Chris Elliott, and granddaughters of Bob Elliott Dakota Fanning and Elle Fanning, actresses Mamie, Grace, and Louisa Gummer, actresses and daughters of Meryl Streep Gigi Hadid and Bella Hadid, models Este, Danielle, and Alana Haim, musicians and members of Haim Kamala Harris, politician and Maya Harris, lawyer Paris Hilton and Nicky Hilton, socialites, models, daughters of Kathy Hilton, and nieces of Kim and Kyle Richards Rebbie, La Toya, and Janet Jackson, singers and sisters of The Jackson 5 Lynda and Luci Baines Johnson, daughters of Lyndon B. 
Johnson and Lady Bird Johnson Kidada Jones, Rashida Jones, and Kenya Kinski-Jones, daughters of Quincy Jones Kourtney Kardashian, Kim Kardashian, Khloé Kardashian, Kendall Jenner, and Kylie Jenner, media personalities, socialites, and daughters of Kris Jenner Nicole Kidman and Antonia Kidman, daughters of Antony Kidman Beyoncé and Solange Knowles, singers and actresses Lisa Ling and Laura Ling, journalists Lori, Robyn, and Blake Lively, actresses Mary I of England and Elizabeth I, daughters of Henry VIII and Anne Boleyn Lisa and Lena Mantler, social media personalities Kate Mara and Rooney Mara, actresses Brooklyn and Bailey McKnight, YouTubers and social media personalities Veronica and Vanessa Merrell, YouTubers, actresses, producers, musicians, singers, and songwriters Aly Michalka and AJ Michalka, singers, actresses, and members of Aly & AJ Kate and Pippa Middleton, socialites Savannah Miller, fashion designer and Sienna Miller, actress Kylie Minogue and Dannii Minogue, singers and actresses Tia Mowry and Tamera Mowry, actresses Tricia and Julie Nixon, daughters of Richard Nixon and Pat Nixon Malia and Sasha Obama, daughters of Barack Obama and Michelle Obama Mary-Kate, Ashley, and Elizabeth Olsen, actresses and known as "the Olsen twins" Vanessa Paradis and Alysson Paradis, actresses Anna Pierangeli and Maria Pierangeli, actresses Rain Phoenix, Liberty Phoenix, and Summer Phoenix, actresses Tegan and Sara Quin, music duo Kim Richards, Kyle Richards, and Kathy Hilton, actresses, socialites, and television personalities Nicole Richie and Sofia Richie, daughters of Lionel Richie Jessica Simpson and Ashlee Simpson, singers and actresses Britney Spears and Jamie Lynn Spears, singers and actresses Liv Tyler and Mia Tyler, actresses and daughters of Steven Tyler Lana and Lilly Wachowski, trans women filmmakers Venus Williams and Serena Williams, professional tennis players Maddie Ziegler and Mackenzie Ziegler, dancers and actresses Fictional works about sisters Films What Ever Happened to Baby Jane? (1962) Hannah and Her Sisters (1986) The Parent Trap (1998) The Virgin Suicides (1999) Hanging Up (2000) Frozen (2013) Little Women (2019) Trolls Band Together (2023) Literature Little Women by Louisa May Alcott Laura Lee Hope's Bobbsey Twins novels, which included two sets of fraternal twins: 12-year-old Nan and Bert, and six-year-old Flossie and Freddie In Her Shoes (2002), by Jennifer Weiner The Virgin Suicides by Jeffrey Eugenides Teen Titans by Bob Haney and Bruno Premiani, DC Comics superhero team which includes alien princess superhero Starfire and one of the supervillains was her older sister Blackfire Television Breaking Bad (Skyler White and Marie Schrader) Hope & Faith Sisters What I Like About You Charmed Sister, Sister Little Women The Powerpuff Girls Teen Titans (Blackfire and Starfire) Games Mileena & Kitana, Mortal Kombat Kat and Ana, WarioWare See also Brother Religious sister Sisterhood (disambiguation) Stepsibling References External links Kinship and descent Terms for women
Sister
[ "Biology" ]
1,706
[ "Behavior", "Human behavior", "Kinship and descent" ]
4,098,482
https://en.wikipedia.org/wiki/Bergman%20cyclization
The Masamune-Bergman cyclization or Masamune-Bergman reaction or Masamune-Bergman cycloaromatization is an organic reaction and more specifically a rearrangement reaction taking place when an enediyne is heated in presence of a suitable hydrogen donor (Scheme 1). It is the most famous and well-studied member of the general class of cycloaromatization reactions. It is named for Japanese-American chemist Satoru Masamune (b. 1928) and American chemist Robert G. Bergman (b. 1942). The reaction product is a derivative of benzene. The reaction proceeds by a thermal reaction or pyrolysis (above 200 °C) forming a short-lived and very reactive para-benzyne biradical species. It will react with any hydrogen donor such as 1,4-cyclohexadiene which converts to benzene. When quenched by tetrachloromethane the reaction product is a 1,4-dichlorobenzene and with methanol the reaction product is benzyl alcohol. When the enyne moiety is incorporated into a 10-membered hydrocarbon ring (e.g. cyclodeca-3-ene-1,5-diyne in scheme 2) the reaction, taking advantage of increased ring strain in the reactant, is possible at the much lower temperature of 37 °C. Naturally occurring compounds such as calicheamicin contain the same 10-membered ring and are found to be cytotoxic. These compounds generate the diradical intermediate described above which can cause single and double stranded DNA cuts. There are novel drugs which attempt to make use of this property, including monoclonal antibodies such as mylotarg. A biradical mechanism is also proposed for the formation of certain biomolecules found in marine sporolides that have a chlorobenzene unit as part of their structure. In this mechanism a halide salt provides the halogen. A model reaction with the enediyene cyclodeca-1,5-diyn-3-ene, lithium bromide as halogen source and acetic acid as hydrogen source in DMSO at 37 °C supports the theory: The reaction is found to be first-order in enediyne with the formation of p-benzyne A as the rate-limiting step. The halide ion then donates its two electrons in the formation of a new Br-C bond and radical electron involved is believed to shuttle over a transient C1-C4 bond forming the anion intermediate B. The anion is a powerful base, stripping protons even from DMSO to final product. The dibromide or dihydrogen product (tetralin) never form. In 2015 IBM scientists demonstrated that a reversible Masamune-Bergman cyclisation of diyne can be induced by a tip of an atomic force microscope (AFM). They also recorded images of individual diyne molecules during this process. When learning about this direct experimental demonstration Bergman commented, "When we first reported this reaction I had no idea that it would be biologically relevant, or that the reaction could someday be visualized at the molecular level. References External links Bergman Cycloaromatization Powerpoint Whitney M. Erwin 2002 Rearrangement reactions Carbon-carbon bond forming reactions Name reactions Enediynes
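For readers less familiar with the kinetics shorthand, the statement that the reaction is "first-order in enediyne" corresponds to the standard rate law and its integrated form below; k and the initial concentration are generic symbols, not values reported in the cited work.

```latex
-\frac{d[\text{enediyne}]}{dt} = k\,[\text{enediyne}]
\quad\Longrightarrow\quad
[\text{enediyne}](t) = [\text{enediyne}]_0\, e^{-kt}
```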
Bergman cyclization
[ "Chemistry" ]
706
[ "Carbon-carbon bond forming reactions", "Rearrangement reactions", "Organic reactions", "Name reactions", "Ring forming reactions" ]
4,098,495
https://en.wikipedia.org/wiki/Brother
A brother (: brothers or brethren) is a man or boy who shares one or more parents with another; a male sibling. The female counterpart is a sister. Although the term typically refers to a familial relationship, it is sometimes used endearingly to refer to non-familial relationships. A full brother is a first degree relative. Overview The term brother comes from the Proto-Indo-European *bʰréh₂tēr, which becomes Latin frater, of the same meaning. Sibling warmth or affection between male siblings has been correlated to some more negative effects. In pairs of brothers, higher sibling warmth is related to more risk taking behaviour, although risk taking behaviour is not related to sibling warmth in any other type of sibling pair. The cause of this phenomenon in which sibling warmth is only correlated with risk taking behaviours in brother pairs still is unclear. This finding does, however, suggest that although sibling conflict is a risk factor for risk taking behaviour, sibling warmth does not serve as a protective factor. Some studies suggest that girls having an older brother delays the onset of menarche by roughly one year. Research also suggests that the likelihood of being gay increases with the more older brothers a man has. Some analyzers have suggested that a man's attractiveness to a heterosexual woman may increase with the more he resembles her brother, while his unattractiveness may increase the more his likeness diverges from her brother. Females with a twin or very close-in-age brother, sometimes view him as their male alter ego, or what they would have been like, if they had a Y chromosomes. Fraternal relationship The book Nicomachean Ethics, Book VIII written by Aristotle in 350 B.C.E., offers a way in which people should view the relationships between biological brothers. The relationship of brothers is laid out with the following quote: "The friendship of brothers has the characteristics found in that of comrades and in general between people who are like each other, is as much as they belong more to each other and start with a love for each other from their very birth, and in as much as those born to the same parents and brought up together and similarly educated are more akin in character; and the test of time has been applied most fully and convincingly in their case". For these reasons, it is the job of the older brother to influence the ethics of the younger brother by being a person of good action. Aristotle says "by imitating and reenacting the acts of good people, a child becomes habituated to good action". Over time the younger brother will develop the good actions of the older brother as well and be like him. Aristotle also adds this on the matter of retaining the action of doing good once imitated: "Once the habits of ethics or immorality become entrenched, they are difficult to break." The good habits that are created by the influence of the older brother become habit in the life of the younger brother and turn out to be seemingly permanent. It is the role of the older brother to be a positive influence on the development of the younger brother's upbringing when it comes to the education of ethics and good actions. When positive characteristics are properly displayed to the younger brother by the older brother, these habits and characteristics are imitated and foster an influential understanding of good ethics and positive actions. 
Famous brothers Gracchi, Ancient Roman reformers George Washington Adams, John Adams II, and Charles Francis Adams Sr., politicians Ben Affleck and Casey Affleck, actors The Alexander Brothers; musicians Mel, Wes, and Eric Chase Anderson Alec Baldwin, William Baldwin, Stephen Baldwin, Daniel Baldwin, also known as the Baldwin brothers; actors John and Lionel Barrymore, actors Beau Biden and Hunter Biden, sons of Joe Biden Beau Bridges and Jeff Bridges, actors Chang and Eng Bunker, the original Siamese twins George W. Bush, Jeb Bush, Neil Bush and Marvin Bush, sons of George H. W. Bush David Carradine, Keith Carradine, and Robert Carradine, American actors Bill Clinton, 42nd President of the United States, and Roger Clinton Jr., his younger half-brother Joel and Ethan Coen; filmmakers Carmine Coppola and Anton Coppola, composers August Coppola and Francis Ford Coppola, sons of Carmine Coppola Marc Coppola, Christopher Coppola, and Nicolas Cage, actors and sons of August Coppola Gian-Carlo Coppola and Roman Coppola, sons of Francis Ford Coppola Stephen Curry and Seth Curry; current NBA point guards in the Western Conference Dizzy and Daffy Dean, Major League Baseball pitchers Mark DeBarge, Randy DeBarge, El DeBarge, James DeBarge, and Bobby DeBarge, the male members of the singing group DeBarge Ethan and Grayson Dolan, YouTubers Matt and Ross Duffer, directors and writers Emilio Estevez and Charlie Sheen, actors Chris Evans and Scott Evans, actors Isaac Everly and Phil Everly, The Everly Brothers, singers James Franco, Tom Franco, and Dave Franco, actors Liam Gallagher and Noel Gallagher, members of Oasis (band) Barry Gibb, Robin Gibb, and Maurice Gibb, members of the Brothers Gibb or "Bee Gees" singing group John Gotti, Eugene "Gene" Gotti, Peter Gotti and Richard V. Gotti, New York "made men" with the Gambino crime family Frederick Dent Grant, Ulysses S. Grant Jr., and Jesse Root Grant Jacob Grimm and Wilhelm Grimm, known as the Brothers Grimm, German academics and folk tale collectors Matt Hardy and Jeff Hardy, professional wrestlers Luke Hemsworth, Chris Hemsworth, and Liam Hemsworth, actors Herbert Hoover Jr. and Allan Hoover Pau and Marc Gasol, professional basketball players O'Kelly Isley Jr., Rudolph Isley, and Ronald Isley, Ernie Isley, Marvin Isley, and Vernon Isley, members of The Isley Brothers singer-songwriting group and band, which also included their brother-in-law, Chris Jasper Jackie Jackson, Tito Jackson, Jermaine Jackson, Marlon Jackson, Michael Jackson and Randy Jackson, members of The Jackson 5 and later The Jacksons Jesse and Frank James, Old West outlaws Kevin, Joe and Nick Jonas, musicians, collectively known as the Jonas Brothers John, Robert and Ted Kennedy, politicians Edward M. Kennedy Jr. and Patrick J. Kennedy, politicians Terry Labonte and Bobby Labonte, race car drivers Robert Todd Lincoln, Edward Baker Lincoln, William Wallace Lincoln and Tad Lincoln, sons of Abraham Lincoln Loud Brothers, piano designers and manufacturers Eli and Peyton Manning, National Football League quarterbacks Mario and Luigi, video game characters Zack and Cody Martin, Disney Channel characters John McCain, U.S. 
Senator and two-time presidential candidate, and Joe McCain, American stage actor, newspaper reporter Justin, Travis, and Griffin McElroy, podcasters Billy Leon McCrary and Benny Loyd McCrary, wrestlers known as The McGuire Twins Lyle and Erik Menendez, murderers Harold Nixon, Richard Nixon, Donald Nixon, Arthur Nixon, and Edward Nixon Matthew, Christopher, and Jonathan Nolan, directors J. Robert Oppenheimer and Frank Oppenheimer, physicists Alan Osmond, Wayne Osmond, Merrill Osmond, Jay Osmond and Donny Osmond, members of The Osmonds Logan Paul and Jake Paul, YouTubers, internet personalities, and actors River Phoenix and Joaquin Phoenix, actors Neil and Ronald Reagan Ringling brothers, circus performers, owners, and show runners John D. Rockefeller and William Rockefeller, co-founders of Standard Oil and members of the Rockefeller family Cornelius Roosevelt and James I. Roosevelt Theodore Roosevelt Jr., Kermit Roosevelt, Archibald Bulloch Roosevelt, and Quentin Roosevelt James Roosevelt, Elliot Roosevelt, Franklin Delano Roosevelt Jr., and John Aspinwall Roosevelt Russo brothers, filmmakers, producers, and directors Matthew Shire, Jason Schwartzman, and Robert Schwartzman, actors, musicians, and sons of Talia Shire Daniel Sedin and Henrik Sedin, professional hockey players Alexander Skarsgård, Gustaf Skarsgård, Bill Skarsgård, and Valter Skarsgård, actors and sons of Stellan Skarsgård Wallace Shawn and Allen Shawn, writer and composer of The Fever Bobby Shriver, Timothy Shriver, Mark Shriver, and Anthony Shriver Thomas "Tommy" Smothers and Richard "Dick" Smothers, performing artists known as the Smothers Brothers Dylan Sprouse and Cole Sprouse, actors and known as the Dylan and Cole Sprouse Prabowo Subianto and Hashim Djojohadikusumo, politicians Thor and Loki, Marvel Comics characters Fred Trump Jr., Donald Trump, and Robert Trump Vincent van Gogh, painter, and Theo van Gogh, art dealer J. J. Watt, T. J. Watt, Derek Watt, National Football League Players Damon Wayans, Dwayne Wayans, Keenan Ivory Wayans, Marlon Wayans, Shawn Wayans, performing artists, directors and producers Bob Weinstein and Harvey Weinstein, film producers Andrew, Owen, and Luke Wilson, actors and sons of Laura Wilson Brian Wilson, Dennis Wilson, and Carl Wilson, members of The Beach Boys Marvin Winans, Carvin Winans, Michael Winans, and Ronald Winans, members of The Winans, singers and musicians Orville Wright and Wilbur Wright, known as the Wright brothers, pioneer aviators Jack Jr., Kevin, David, Kerry, Mike, and Chris Von Erich, wrestlers and sons of Fritz Von Erich Marshall and Ross Von Erich, wrestlers and sons of Kevin Von Erich Agus Harimurti Yudhoyono and Edhie Baskoro Yudhoyono, politicians Other works about brothers In the Bible: Cain and Abel, the sons of Adam and Eve Jacob and Esau, the sons of Isaac and Rebecca Moses and Aaron, prophets Sts. Peter and Andrew, apostles Sts. James and John, apostles Sts. 
Thomas and his unnamed twin brother Monsters: The Lyle and Erik Menendez Story (2024), television series My Brother, My Brother, and Me, podcast Saving Private Ryan (1998), film Simon & Simon, television series Step Brothers (2008), film Supernatural, American television series The Brothers Karamazov, novel The Darjeeling Limited (2007), film The Iron Claw (2023), film The Suite Life of Zack & Cody (2005-2008), sitcom The Suite Life on Deck (2008-2011), sitcom The Suite Life Movie (2011), television film The Wayans Bros., television series Bonanza (1959–1973), television series In the Ramayana: Rama, Lakshmana, Bharata, and Shatrughna In the Mahabharata: The Pandavas – Yudhishthira, Arjuna, Bhima, Sahadeva and Nakula The Kauravas – One hundred brothers including Duryodhana, Dushasana and Vikarna, among others See also Brotherhood (disambiguation) Religious brother Sister Stepsibling References External links Terms for men Kinship and descent Sibling
Brother
[ "Biology" ]
2,289
[ "Behavior", "Human behavior", "Kinship and descent" ]
4,098,636
https://en.wikipedia.org/wiki/Diradical
In chemistry, a diradical is a molecular species with two electrons occupying molecular orbitals (MOs) which are degenerate. The term "diradical" is mainly used to describe organic compounds, where most diradicals are extremely reactive and non-Kekulé molecules that are rarely isolated. Diradicals are even-electron molecules but have one fewer bond than the number permitted by the octet rule. Examples of diradical species can also be found in coordination chemistry, for example among bis(1,2-dithiolene) metal complexes. Spin states Diradicals are usually triplets. The phrases singlet and triplet are derived from the multiplicity of states of diradicals in electron spin resonance: a singlet diradical has one state (S=0, Ms=2*0+1=1, ms=0) and exhibits no signal in EPR and a triplet diradical has 3 states (S=1, Ms=2*1+1=3, ms=-1; 0; 1) and shows in EPR 2 peaks (if no hyperfine splitting). The triplet state has total spin quantum number S=1 and is paramagnetic. Therefore, diradical species display a triplet state when the two electrons are unpaired and display the same spin. When the unpaired electrons with opposite spin are antiferromagnetically coupled, diradical species can display a singlet state (S=0) and be diamagnetic. Examples Stable, isolable, diradicals include singlet oxygen and triplet oxygen. Other important diradicals are certain carbenes, nitrenes, and their main-group elemental analogues. Lesser-known diradicals are nitrenium ions, carbon chains, and organic so-called non-Kekulé molecules in which the electrons reside on different carbon atoms. Main-group cyclic structures can also exhibit diradicals, such as disulfur dinitride, or diradical character, such as diphosphadiboretanes. In inorganic chemistry, both homoleptic and heteroleptic 1,2-dithiolene complexes of d8 transition metal ions show a large degree of diradical character in the ground state. References Further reading Inorganic chemistry Magnetism Organic chemistry
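The spin-state bookkeeping in the paragraph above is just the standard spin-multiplicity formula, restated compactly below; nothing here is specific to diradicals beyond the two values of S.

```latex
\text{multiplicity} = 2S+1:\qquad
S=0 \;\Rightarrow\; 2(0)+1=1 \;(\text{singlet},\ m_s=0),\qquad
S=1 \;\Rightarrow\; 2(1)+1=3 \;(\text{triplet},\ m_s\in\{-1,0,+1\})
```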
Diradical
[ "Chemistry" ]
495
[ "nan" ]
4,099,038
https://en.wikipedia.org/wiki/Brian%20Conrad
Brian Conrad (born November 20, 1970) is an American mathematician and number theorist, working at Stanford University. Previously, he taught at the University of Michigan and at Columbia University. Conrad and others proved the modularity theorem, also known as the Taniyama-Shimura Conjecture. He proved this in 1999 with Christophe Breuil, Fred Diamond and Richard Taylor, while holding a joint postdoctoral position at Harvard University and the Institute for Advanced Study in Princeton, New Jersey. Conrad received his bachelor's degree from Harvard in 1992, where he won a prize for his undergraduate thesis. He did his doctoral work under Andrew Wiles and went on to receive his Ph.D. from Princeton University in 1996 with a dissertation titled Finite Honda Systems And Supersingular Elliptic Curves. He was also featured as an extra in Nova's The Proof. His identical twin brother Keith Conrad, also a number theorist, is a professor at the University of Connecticut. He was awarded the prestigious Barry Prize for Distinguished Intellectual Achievement by the American Academy of Sciences and Letters in 2024. References External links Homepage at Stanford University On the modularity of elliptic curves over Q - Proof of Taniyama-Shimura coauthored by Conrad. Brian Conrad, Fred Diamond, Richard Taylor: Modularity of certain potentially Barsotti-Tate Galois representations, Journal of the American Mathematical Society 12 (1999), pp. 521–567. Also contains the proof C. Breuil, B. Conrad, F. Diamond, R. Taylor : On the modularity of elliptic curves over Q: wild 3-adic exercises, Journal of the American Mathematical Society 14 (2001), 843–939. 20th-century American mathematicians 21st-century American mathematicians American number theorists Harvard University staff Princeton University alumni University of Michigan faculty Scientists from New York City 1970 births Living people Harvard College alumni Fermat's Last Theorem Mathematicians from New York (state) American identical twins Recipients of the Presidential Early Career Award for Scientists and Engineers
Brian Conrad
[ "Mathematics" ]
406
[ "Theorems in number theory", "Fermat's Last Theorem" ]
4,099,290
https://en.wikipedia.org/wiki/Transgene
A transgene is a gene that has been transferred naturally, or by any of a number of genetic engineering techniques, from one organism to another. The introduction of a transgene, in a process known as transgenesis, has the potential to change the phenotype of an organism. Transgene describes a segment of DNA containing a gene sequence that has been isolated from one organism and is introduced into a different organism. This non-native segment of DNA may either retain the ability to produce RNA or protein in the transgenic organism or alter the normal function of the transgenic organism's genetic code. In general, the DNA is incorporated into the organism's germ line. For example, in higher vertebrates this can be accomplished by injecting the foreign DNA into the nucleus of a fertilized ovum. This technique is routinely used to introduce human disease genes or other genes of interest into strains of laboratory mice to study the function or pathology involved with that particular gene. The construction of a transgene requires the assembly of a few main parts. The transgene must contain a promoter, which is a regulatory sequence that will determine where and when the transgene is active, an exon, a protein coding sequence (usually derived from the cDNA for the protein of interest), and a stop sequence. These are typically combined in a bacterial plasmid and the coding sequences are typically chosen from transgenes with previously known functions. Transgenic or genetically modified organisms, be they bacteria, viruses or fungi, serve many research purposes. Transgenic plants, insects, fish and mammals (including humans) have been bred. Transgenic plants such as corn and soybean have replaced wild strains in agriculture in some countries (e.g. the United States). Transgene escape has been documented for GMO crops since 2001 with persistence and invasiveness. Transgenetic organisms pose ethical questions and may cause biosafety problems. History The idea of shaping an organism to fit a specific need is not a new science. However, until the late 1900s farmers and scientists could breed new strains of a plant or organism only from closely related species because the DNA had to be compatible for offspring to be able to reproduce. In the 1970 and 1980s, scientists passed this hurdle by inventing procedures for combining the DNA of two vastly different species with genetic engineering. The organisms produced by these procedures were termed transgenic. Transgenesis is the same as gene therapy in the sense that they both transform cells for a specific purpose. However, they are completely different in their purposes, as gene therapy aims to cure a defect in cells, and transgenesis seeks to produce a genetically modified organism by incorporating the specific transgene into every cell and changing the genome. Transgenesis will therefore change the germ cells, not only the somatic cells, in order to ensure that the transgenes are passed down to the offspring when the organisms reproduce. Transgenes alter the genome by blocking the function of a host gene; they can either replace the host gene with one that codes for a different protein, or introduce an additional gene. The first transgenic organism was created in 1974 when Annie Chang and Stanley Cohen expressed Staphylococcus aureus genes in Escherichia coli. In 1978, yeast cells were the first eukaryotic organisms to undergo gene transfer. Mouse cells were first transformed in 1979, followed by mouse embryos in 1980. 
Most of the very first transformations were performed by microinjection of DNA directly into cells. Scientists were able to develop other methods to perform the transformations, such as incorporating transgenes into retroviruses and then infecting cells; using electroporation, which takes advantage of an electric pulse to pass foreign DNA through the cell membrane; biolistics, the procedure of shooting DNA-coated particles into cells; and also delivering DNA into the newly fertilized egg. The first transgenic animals were only intended for genetic research to study the specific function of a gene, and by 2003, thousands of genes had been studied. Use in plants A variety of transgenic plants have been designed for agriculture to produce genetically modified crops, such as corn, soybean, rapeseed, cotton, rice and more. These GMO crops were planted on 170 million hectares globally. Golden rice One example of a transgenic plant species is golden rice. In 1997, five million children developed xerophthalmia, a medical condition caused by vitamin A deficiency, in Southeast Asia alone. Of those children, a quarter million went blind. To combat this, scientists used biolistics to insert the daffodil phytoene synthase gene into rice cultivars indigenous to Asia. The daffodil insertion increased the production of β-carotene. The product was a transgenic rice species rich in vitamin A, called golden rice. Little is known about the impact of golden rice on xerophthalmia because anti-GMO campaigns have prevented the full commercial release of golden rice into agricultural systems in need. Transgene escape The escape of genetically-engineered plant genes via hybridization with wild relatives was first discussed and examined in Mexico and Europe in the mid-1990s. There is agreement that escape of transgenes is inevitable, even "some proof that it is happening". Up until 2008 there were few documented cases. Corn Corn sampled in 2000 from the Sierra Juarez, Oaxaca, Mexico contained a transgenic 35S promoter, while a large sample taken by a different method from the same region in 2003 and 2004 did not. A sample from another region from 2002 also did not, but directed samples taken in 2004 did, suggesting transgene persistence or re-introduction. A 2009 study found recombinant proteins in 3.1% and 1.8% of samples, most commonly in southeast Mexico. Seed and grain import from the United States could explain the frequency and distribution of transgenes in west-central Mexico, but not in the southeast. Also, 5.0% of corn seed lots in Mexican corn stocks expressed recombinant proteins despite the moratorium on GM crops. Cotton In 2011, transgenic cotton was found in Mexico among wild cotton, after 15 years of GMO cotton cultivation. Rapeseed (canola) Transgenic rapeseed Brassica napus – hybridized with a native Japanese species, Brassica rapa – was found in Japan in 2011 after having been identified in 2006 in Québec, Canada. They were persistent over a six-year study period, without herbicide selection pressure and despite hybridization with the wild form. This was the first report of the introgression—the stable incorporation of genes from one gene pool into another—of an herbicide-resistance transgene from Brassica napus into the wild form gene pool. 
Creeping bentgrass Transgenic creeping bentgrass, engineered to be glyphosate-tolerant as "one of the first wind-pollinated, perennial, and highly outcrossing transgenic crops", was planted in 2003 as part of a large (about 160 ha) field trial in central Oregon near Madras, Oregon. In 2004, its pollen was found to have reached wild growing bentgrass populations up to 14 kilometres away. Cross-pollinating Agrostis gigantea was even found at a distance of 21 kilometres. The grower, the Scotts Company, could not remove all genetically engineered plants, and in 2007, the U.S. Department of Agriculture fined Scotts $500,000 for noncompliance with regulations. Risk assessment The long-term monitoring and controlling of a particular transgene has been shown not to be feasible. The European Food Safety Authority published a guidance for risk assessment in 2010. Use in mice Genetically modified mice are the most common animal model for transgenic research. Transgenic mice are currently being used to study a variety of diseases including cancer, obesity, heart disease, arthritis, anxiety, and Parkinson's disease. The two most common types of genetically modified mice are knockout mice and oncomice. Knockout mice are a type of mouse model that uses transgenic insertion to disrupt an existing gene's expression. In order to create knockout mice, a transgene with the desired sequence is inserted into an isolated mouse blastocyst using electroporation. Then, homologous recombination occurs naturally within some cells, replacing the gene of interest with the designed transgene. Through this process, researchers were able to demonstrate that a transgene can be integrated into the genome of an animal, serve a specific function within the cell, and be passed down to future generations. Oncomice are another genetically modified mouse species created by inserting transgenes that increase the animal's vulnerability to cancer. Cancer researchers utilize oncomice to study the profiles of different cancers in order to apply this knowledge to human studies. Use in Drosophila Multiple studies have been conducted concerning transgenesis in Drosophila melanogaster, the fruit fly. This organism has been a helpful genetic model for over 100 years, due to its well-understood developmental pattern. The transfer of transgenes into the Drosophila genome has been performed using various techniques, including P element, Cre-loxP, and ΦC31 insertion. The most practiced method used thus far to insert transgenes into the Drosophila genome utilizes P elements. The transposable P elements, also known as transposons, are segments of DNA that can be translocated into the genome without the presence of a complementary sequence in the host's genome. P elements are administered in pairs, which flank the DNA insertion region of interest. Additionally, P elements often consist of two plasmid components, one known as the P element transposase and the other, the P transposon backbone. The transposase plasmid portion drives the transposition of the P transposon backbone, containing the transgene of interest and often a marker, between the two terminal sites of the transposon. Success of this insertion results in the nonreversible addition of the transgene of interest into the genome. While this method has been proven effective, the insertion sites of the P elements are often uncontrollable, resulting in an unfavorable, random insertion of the transgene into the Drosophila genome. 
To improve the location and precision of the transgenic process, an enzyme known as Cre has been introduced. Cre has proven to be a key element in a process known as recombinase-mediated cassette exchange (RMCE). While it has been shown to have a lower efficiency of transgenic transformation than the P element transposase, Cre greatly lessens the labor-intensive work of balancing random P insertions. Cre aids in the targeted transgenesis of the DNA gene segment of interest, as it supports the mapping of the transgene insertion sites, known as loxP sites. These sites, unlike P elements, can be specifically inserted to flank a chromosomal segment of interest, aiding in targeted transgenesis. The Cre recombinase is important in the catalytic cleavage of the base pairs present at the carefully positioned loxP sites, permitting more specific insertions of the transgenic donor plasmid of interest. To overcome the limitations and low yields that transposon-mediated and Cre-loxP transformation methods produce, the bacteriophage ΦC31 has recently been utilized. Recent breakthrough studies involve the microinjection of the bacteriophage ΦC31 integrase, which shows improved transgene insertion of large DNA fragments that are unable to be transposed by P elements alone. This method involves recombination between an attachment (attP) site in the phage and an attachment site in the bacterial host genome (attB). Compared to usual P element transgene insertion methods, ΦC31 integrates the entire transgene vector, including bacterial sequences and antibiotic resistance genes. Unfortunately, the presence of these additional insertions has been found to affect the level and reproducibility of transgene expression. Use in livestock and aquaculture One agricultural application is to selectively breed animals for particular traits: Transgenic cattle with an increased muscle phenotype have been produced by overexpressing a short hairpin RNA with homology to the myostatin mRNA using RNA interference. Transgenes are being used to produce milk with high levels of proteins, or silk proteins in the milk of goats. Another agricultural application is to selectively breed animals that are resistant to diseases, or animals for biopharmaceutical production. Future potential The application of transgenes is a rapidly growing area of molecular biology. As of 2005 it was predicted that in the next two decades, 300,000 lines of transgenic mice would be generated. Researchers have identified many applications for transgenes, particularly in the medical field. Scientists are focusing on the use of transgenes to study the function of the human genome in order to better understand disease, adapting animal organs for transplantation into humans, and the production of pharmaceutical products such as insulin, growth hormone, and blood anti-clotting factors from the milk of transgenic cows. As of 2004 there were five thousand known genetic diseases, and the potential to treat these diseases using transgenic animals is, perhaps, one of the most promising applications of transgenes. There is a potential to use human gene therapy to replace a mutated gene with an unmutated copy of a transgene in order to treat the genetic disorder. This can be done through the use of Cre-Lox or knockout. Moreover, genetic disorders are being studied through the use of transgenic mice, pigs, rabbits, and rats. 
Transgenic rabbits have been created to study inherited cardiac arrhythmias, as the rabbit heart markedly better resembles the human heart as compared to the mouse. More recently, scientists have also begun using transgenic goats to study genetic disorders related to fertility. Transgenes may be used for xenotransplantation from pig organs. Through the study of xeno-organ rejection, it was found that an acute rejection of the transplanted organ occurs upon the organ's contact with blood from the recipient due to the recognition of foreign antibodies on endothelial cells of the transplanted organ. Scientists have identified the antigen in pigs that causes this reaction, and therefore are able to transplant the organ without immediate rejection by removal of the antigen. However, the antigen begins to be expressed later on, and rejection occurs. Therefore, further research is being conducted. Transgenic microorganisms capable of producing catalytic proteins or enzymes which increase the rate of industrial reactions. Ethical controversy Transgene use in humans is currently fraught with issues. Transformation of genes into human cells has not been perfected yet. The most famous example of this involved certain patients developing T-cell leukemia after being treated for X-linked severe combined immunodeficiency (X-SCID). This was attributed to the close proximity of the inserted gene to the LMO2 promoter, which controls the transcription of the LMO2 proto-oncogene. See also Hybrid Fusion protein Gene pool Gene flow Introgression Nucleic acid hybridization Mouse models of breast cancer metastasis References Further reading Genetic engineering Gene delivery
Transgene
[ "Chemistry", "Engineering", "Biology" ]
3,154
[ "Genetics techniques", "Biological engineering", "Genetic engineering", "Molecular biology techniques", "Molecular biology", "Gene delivery" ]
4,099,405
https://en.wikipedia.org/wiki/Prism%20%28chipset%29
The Prism brand is used for wireless networking integrated circuits (commonly called "chips") from Conexant for wireless LANs. They were formerly produced by Intersil Corporation. Legacy 802.11b products (Prism 2/2.5/3) The open-source HostAP driver supports the IEEE 802.11b Prism 2/2.5/3 family of chips. Wireless adaptors which use the Prism chipset are known for compatibility, and are preferred for specialist applications such as packet capture. No win64 drivers are known to exist. With the Intersil firmware the chips support WEP and, after a firmware update, WPA (TKIP) and WPA2 (CCMP); with the Lucent/Agere firmware they support WEP and WPA (TKIP in hardware). 802.11b/g products (Prism54, ISL38xx) The chipset has undergone a major redesign for 802.11g compatibility and cost reduction, and newer "Prism54" chipsets are not compatible with their predecessors. Intersil initially provided a Linux driver for the first Prism54 chips which implemented a large part of the 802.11 stack in the firmware. However, further cost reductions caused a new, lighter firmware to be designed and the amount of on-chip memory to shrink, making it impossible to run the older version of the firmware on the latest chips. In the meantime, the PRISM business was sold to Conexant, which never published information about the newer firmware API that would enable a Linux driver to be written. However, a reverse engineering effort eventually made it possible to use the new Prism54 chipsets under the Linux and BSD operating systems. See also HostAP driver for prism chipsets External links PRISM solutions at Conexant GPL drivers and firmware for the ISL38xx-based Prism chipsets (mostly reverse engineered) Wireless networking hardware
Prism (chipset)
[ "Technology" ]
380
[ "Wireless networking hardware", "Wireless networking" ]
4,099,694
https://en.wikipedia.org/wiki/G%2029-38
Giclas 29-38, also known as ZZ Piscium, is a variable white dwarf star of the DAV (or ZZ Ceti) type, whose variability is due to large-amplitude, non-radial pulsations known as gravity waves. It was first reported to be variable by Shulov and Kopatskaya in 1974. DAV stars are like normal white dwarfs but have luminosity variations with amplitudes as high as 30%, arising from a superposition of vibrational modes with periods from 100 to 1,000 seconds. Large-amplitude DAVs generally differ from lower-amplitude DAVs by having lower temperatures, longer primary periodicities, and many peaks in their vibrational spectra with frequencies which are sums of other vibrational modes. G29-38, like other complex, large-amplitude DAV variables, has proven difficult to understand. The power spectrum or periodogram of the light curve varies over times which range from weeks to years. Usually, one strong mode dominates, although many smaller-amplitude modes are often observed. The larger-amplitude modes, however, fluctuate in and out of observability; some low-power areas show more stability. Asteroseismology uses the observed spectrum of pulsations from stars like G29-38 to infer the structure of their interiors. Debris disk The circumstellar environment of G29-38 first attracted attention in the late 1980s during a near-infrared survey of 200 white dwarfs conducted by Ben Zuckerman and Eric Becklin to search for low mass companion stars and brown dwarfs. G29-38 was shown to radiate substantial emission between 2 and 5 micrometres, far in excess of that expected from extrapolation of the visual and near infrared spectrum of the star. Like other young, hot white dwarfs, G29-38 is thought to have formed relatively recently (600 million years ago) from its AGB progenitor, and therefore the excess was naturally explained by emission from a Jupiter-like brown dwarf with a temperature of 1200 K and a radius of 0.15 solar radius. However, later observations, including speckle interferometry, failed to detect a brown dwarf. Infrared observations made in 2004 by NASA's Spitzer Space Telescope indicated the presence of a dust cloud around G29-38, which may have been created by tidal disruption of an exocomet or exoasteroid passing close to the white dwarf. This may mean that G29-38 is still orbited by a ring of surviving comets and, possibly, outer planets. This is the first observation supporting the idea that comets persist to the white dwarf stage of stellar evolution. Infrared emission at 9–11 microns from Spitzer spectroscopy was interpreted as a mixture of amorphous olivine and a small amount of forsterite in the disk. Modelling of the disk has shown that the inner edge of the disk lies at around 96±4 white dwarf radii and that the disk has a width of about 1–10 white dwarf radii. The dust mass of the disk is about 4–5 × 10¹⁸ g (about half the mass of a massive asteroid) and the disk has a temperature of less than 1000 K. The white dwarf has been detected in X-rays with Chandra and XMM-Newton. This is seen as evidence for accretion from the disk, and while the number of counts is small, there is evidence that this X-ray emission could come from iron. See also List of exoplanets and planetary debris around white dwarfs GD 362, the second white dwarf discovered to have a disk References External links Pulsating white dwarfs Pisces (constellation) Piscium, ZZ
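The mode-hunting described above can be illustrated with a small, self-contained sketch. Nothing below comes from observations of G29-38: the sampling, periods, amplitudes and noise level are invented, and SciPy's Lomb-Scargle routine is chosen simply because it handles the uneven time sampling typical of such light curves.

```python
# Illustrative sketch: a Lomb-Scargle periodogram of a synthetic,
# unevenly sampled light curve with two toy pulsation modes.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

# Simulated observation times (seconds), unevenly sampled over ~4 hours
t = np.sort(rng.uniform(0.0, 4 * 3600.0, 2000))

# Toy light curve: two modes in the 100-1000 s period range (615 s and 268 s
# are arbitrary choices) plus Gaussian noise
flux = (1.0
        + 0.02 * np.sin(2 * np.pi * t / 615.0)
        + 0.01 * np.sin(2 * np.pi * t / 268.0)
        + 0.005 * rng.standard_normal(t.size))

# Trial periods from 100 s to 1000 s, converted to angular frequencies
periods = np.linspace(100.0, 1000.0, 5000)
omega = 2 * np.pi / periods

# Lomb-Scargle copes with the uneven sampling of ground-based runs
power = lombscargle(t, flux - flux.mean(), omega)

best = periods[np.argmax(power)]
print(f"strongest periodicity near {best:.0f} s")
```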
G 29-38
[ "Astronomy" ]
763
[ "Pisces (constellation)", "Constellations" ]
4,100,584
https://en.wikipedia.org/wiki/QuickWin
QuickWin was a library from Microsoft that made it possible to compile command-line MS-DOS programs as Windows 3.1 applications, displaying their output in a window. Since the release of Windows NT, Microsoft has included support for console applications in the Windows operating system itself via the Windows Console, eliminating the need for QuickWin; Intel Visual Fortran, however, still provides a QuickWin library. Borland's equivalent in Borland C++ 5 was called EasyWin. A program called QuickWin, published on CodeProject, does a similar thing. See also Command-line interface References Computer libraries
QuickWin
[ "Technology" ]
122
[ "IT infrastructure", "Computer libraries" ]
4,100,725
https://en.wikipedia.org/wiki/Air%20flow%20bench
An air flow bench is a device used for testing the internal aerodynamic qualities of an engine component and is related to the more familiar wind tunnel. It is used primarily for testing the intake and exhaust ports of cylinder heads of internal combustion engines. It is also used to test the flow capabilities of any component such as air filters, carburetors, manifolds or any other part that is required to flow gas. A flow bench is one of the primary tools of high performance engine builders, and porting cylinder heads would be strictly hit or miss without it. A flow bench consists of an air pump of some sort, a metering element, pressure and temperature measuring instruments such as manometers, and various controls. The test piece is attached in series with the pump and measuring element and air is pumped through the whole system. Therefore, all the air passing through the metering element also passes through the test piece. Because the volumetric flow rate through the metering element is known and the flow through the test piece is the same, it is also known. The mass flow rate can be calculated using the known pressure and temperature data to calculate air densities, and multiplying by the volume flow rate. Air pump The air pump used must be able to deliver the volume required at the pressure required. Most flow testing is done at 10 and 28 inches of water pressure (2.5 to 7 kilopascals). Although other test pressures will work, the results would have to be converted for comparison to the work of others. The pressure developed must account for the test pressure plus the loss across the metering element plus all other system losses. The greater the accuracy of the metering element the greater is the loss. Flow volume of between 100 and 600 cubic feet per minute (0.05 to 0.28 m³/s) would serve almost all applications depending on the size of the engine under test. Any type of pump that can deliver the required pressure difference and flow volume can be used. Most often used is the dynamic-compression centrifugal type compressor, which is familiar to most as being used in vacuum cleaners and turbochargers, but multistaged axial-flow compressor types, similar to those used in most jet engines, could work as well, although there would be little need for the added cost and complexities involved, as they typically don't require such a high flow rate as a jet engine, nor are they limited by the aerodynamic drag considerations which makes a narrow-diameter axial compressor more effective in jet engines than a centrifugal compressor of equal air flow. Positive displacement types such as piston compressors, or rotary types such as a Roots blower could also be used with suitable provisions for damping the pulsations in the air flow (however, other rotary types such as twin screw compressors are capable of providing a steady supply of compressed fluid). The pressure ratio of a single fan blade is too low and cannot be used. Metering element There are several possible types of metering element in use. Flow benches ordinarily use one of three types: orifice plate, venturi meter and pitot/static tube, all of which deliver similar accuracy. Most commercial machines use orifice plates due to their simple construction and the ease of providing multiple flow ranges. Although the venturi offers substantial improvements in efficiency, its cost is higher. Instrumentation Air flow conditions must be measured at two locations, across the test piece and across the metering element. 
The pressure difference across the test piece allows the standardization of tests from one to another. The pressure across the metering element allows calculation of the actual flow through the whole system. The pressure across the test piece is typically measured with a U tube manometer while, for increased sensitivity and accuracy, the pressure difference across the metering element is measured with an inclined manometer. One end of each manometer is connected to its respective plenum chamber while the other is open to the atmosphere. Ordinarily all flow bench manometers measure in inches of water although the inclined manometer's scale is usually replaced with a logarithmic scale reading in percentage of total flow of the selected metering element which makes flow calculation simpler. Temperature must also be accounted for because the air pump will heat the air passing through it making the air down stream of it less dense and more viscous. This difference must be corrected for. Temperature is measured at the test piece plenum and at the metering element plenum. Correction factors are then applied during flow calculations. Some flow bench designs place the air pump after the metering element so that heating by the air pump is not as large a concern. Additional manometers can be installed for use with hand held probes, which are used to explore local flow conditions in the port. Flow bench data The air flow bench can give a wealth of data about the characteristics of a cylinder head or whatever part is tested. The result of main interest is bulk flow. It is the volume of air that flows through the port in a given time. Expressed in cubic feet per minute or cubic meters per second/minute. Valve lift can be expressed as an actual dimension in decimal inches or mm. It can also be specified as a ratio between a characteristic diameter and the lift L/D. Most often used is the valve head diameter. Normally engines have an L/D ratio from 0 up to a maximum of 0.35. For example, a valve would be lifted a maximum of 0.350 inch. During flow testing the valve would be set at L/D 0.05 0.1 0.15 0.2 0.25 0.3 and readings taken successively. This allows the comparison of efficiencies of ports with other valve sizes, as the valve lift is proportional rather than absolute. For comparison with tests by others the characteristic diameter used to determine lift must be the same. Flow coefficients are determined by comparing the actual flow of a test piece to the theoretical flow of a perfect orifice of equal area. Thus the flow coefficient should be a close measure of efficiency. It cannot be exact because the L/D does not indicate the actual minimum size of the duct. An orifice with a flow coefficient of 0.59 would flow the same amount of fluid as a perfect orifice with 59% of its area or 59% of the flow of a perfect orifice with the same area (orifice plates of the type shown would have a coefficient of between 0.58 and 0.62 depending on the precise details of construction and the surrounding installation). Valve/port coefficient is non dimensional and is derived by multiplying a characteristic physical area of the port and by the bulk flow figures and comparing the result to an ideal orifice of the same area. It is here that air flow bench norms differ from fluid dynamics or aerodynamics at large. The coefficient may be based on the inner valve seat diameter, the outer valve head diameter, the port throat area or the valve open curtain area. 
Each of these methods is valid for some purpose but none of them represents the true minimum area for the valve/port in question and each results in a different flow coefficient. The great difficulty of measuring the actual minimum area at all the various valve lifts precludes using this as a characteristic measurement. This is due to the minimum area changing shape and location throughout the lift cycle. Because of this non-standardization, port flow coefficients are not "true" flow coefficients, which would be based on the actual minimum area in the flow path. Which method to choose depends on what use is intended for the data. Engine simulation applications each require their own specification. If the result is to be compared to the work of others then the same method would have to be selected. Using extra instrumentation (manometers and probes) the detailed flow through the port can be mapped by measuring multiple points within the port. Using these tools, the velocity profile throughout the port can be mapped, which gives insight into what the port is doing and what might be done to improve it. Of less interest is mass flow per minute or second, since the test is not of a running engine which would be affected by it. It is the weight of air that flows through the port in a given time, expressed in pounds per minute or hour, or kilograms per second or minute. Mass flow is derived from the volume flow result to which a density correction is applied. With the information gathered on the flow bench, engine power curve and system dynamics can be roughly estimated by applying various formulae. With the advent of accurate engine simulation software, however, it is much more useful to use flow data to create an engine model for a simulator. Determining air velocity is a useful part of flow testing. For incompressible flow (below 230 ft/s or 70 m/s this equation gives less than 1% error, corresponding to a test pressure of 12 inches of water or about 305 mm of water) it is calculated as follows. For one set of English units: V = 1096.7 × √(H / d) Where: V, velocity in feet per minute H, pressure drop across test piece in inches of water measured by the test pressure manometer d, density of air in pounds per cubic foot (0.075 pound per cubic foot at standard conditions) For SI units: V = √(2H / d) Where: V, velocity in meters per second H, pressure drop across test piece in pascals measured by the test pressure manometer d, density of air in kilograms per cubic meter (1.20 kilograms per cubic meter at standard conditions) This represents the highest speed of the air in the flow path of a normally shaped port, at or near the section of minimum area (through the valve seat at low values of L/D for instance). That would not apply to other shapes such as a venturi tube where the local speed in the throat can be much higher than indicated by the pressure drop across the whole system. (When a pitot tube is used to measure velocities (adiabatic) above 230 ft/s or 70 m/s, the error due to compressibility increases progressively with this formula from 1% to ~26% at Mach 1.) Once velocity has been calculated, the volume can be calculated by multiplying the velocity by the orifice area times its flow coefficient; a short worked example is sketched at the end of this article. Limitations A flow bench is capable of giving flow data which is closely but not perfectly related to actual engine performance. There are a number of limiting factors which contribute to the discrepancy. 
Steady state flow vs dynamic flow A flow bench tests ports under a steady pressure difference while in the actual engine the pressure difference varies widely during the whole cycle. The exact flow conditions existing in the flow bench test exist only fleetingly if at all in an actual running engine. Running engines cause the air to flow in strong waves rather than the steady stream of the flow bench. This acceleration/deceleration of the fuel/air column causes effects not accounted for in flow bench tests. This graph, generated with an engine simulation program, shows how widely the pressures vary in a running engine vs. the steady test pressure of the flow bench. (Note, on the graph, that, in this case, when the intake valve opens, the cylinder pressure is above atmospheric (nearly 50% above or 1.5 bar or 150 kPa). This will cause reverse flow into the intake port until pressure in the cylinder falls below the ports pressure). Pressure differential The coefficient of the port may change somewhat at different pressure differentials due to changes in Reynolds number regime leading to a possible loss of dynamic similitude. Flow bench test pressure are typically conducted at 10 to 28 inches of water (2.5 to 7 kPa) while a real engine may see 190 inches of water (47 kPa) pressure difference. Air only vs mixed gas/fuel mist flow The flow bench tests using only air while a real engine usually uses air mixed with fuel droplets and fuel vapor, which is significantly different. Evaporating fuel passing through the port-runner has the effect of adding gas to and lowering the temperature of the air stream along the runner and giving the outlet flow rate slightly higher than the flow rate entering the port-runner. A port which flows dry air well might cause fuel droplets to fall out of suspension causing a loss of power not indicated by flow figures alone. Bulk flow vs flow velocity Large ports and valves can show high flow rates on a flow bench but the velocity can be lowered to the point that the gas dynamics of a real engine are ruined. Overly large ports also contribute to fuel fall out. Even room temperature vs. uneven high temperature A running engine is much hotter than room temperature and the temperature in various parts of the system vary widely. This affects the actual flow, fuel effects as well as the dynamic wave effects in the engine which do not exist on the flow bench. Physical and mechanical differences The proximity, shape and movement of the piston as well as the movement of the valve itself significantly alters the flow conditions in a real engine that do not exist in flow bench tests. Exhaust port conditions The flow simulated on a flow bench bears almost no similarity to the flow in a real exhaust port. Here even the coefficients measured on flow benches are inaccurate. This is due to the very high and wide-ranging pressures and temperatures. From the graph above it can be seen that the pressure in the port reaches 2.5 bar (250 kPa) and the cylinder pressure at opening is 6 bar (600 kPa) and more. This is many times more than the capabilities of a typical flow bench of 0.06 bar (6 kPa). The flow in a real exhaust port can easily be sonic with choked flow occurring and even supersonic flow in areas. The very high temperature causes the viscosity of the gas to increase, all of which alters the Reynolds number drastically. Added to the above is the profound effect that downstream elements have on the flow of the exhaust port. 
Far more than upstream elements found on the intake side. Exhaust port size and flow information might be considered as vague, but there are certain guidelines which are used when creating a base-line to optimum performance. This base line, of course, is further tuned and qualified through a dynamometer. See also Air flow meter References External links Free demo engine simulator used to generate graph above Plans for a home built flow bench Original forum for those interested in the design and construction of flow benches Latest forums for those interested in the design and construction of flow benches Engine tuning instruments Aerodynamics
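The worked example promised above is sketched here in Python, in SI form. It is an illustration only: the port area, flow coefficient and test conditions are invented, and the script simply chains together the density, velocity, volume-flow and mass-flow steps described under "Metering element", "Instrumentation" and "Flow bench data".

```python
# Illustrative only: all numbers are invented, not taken from the article.
import math

def air_density(pressure_pa: float, temp_k: float) -> float:
    """Dry-air density from the ideal-gas law (specific gas constant 287.05 J/(kg*K))."""
    return pressure_pa / (287.05 * temp_k)

def port_velocity(dp_pa: float, density: float) -> float:
    """Incompressible velocity V = sqrt(2H/d), valid below roughly 70 m/s."""
    return math.sqrt(2.0 * dp_pa / density)

# Assumed test conditions: 10 inches of water across the test piece, room air
dp = 10 * 249.1                        # pressure drop, Pa
rho = air_density(101_325.0, 295.0)    # about 1.20 kg/m3 at these conditions
area = 8.0e-4                          # assumed effective orifice area, m^2
cd = 0.60                              # assumed flow coefficient of the test piece

v = port_velocity(dp, rho)             # local velocity near the minimum section
q = v * area * cd                      # volumetric flow, m^3/s
m_dot = q * rho                        # mass flow, kg/s

print(f"velocity    : {v:5.1f} m/s")
print(f"volume flow : {q * 1000:5.1f} L/s  ({q * 2118.9:4.0f} CFM)")
print(f"mass flow   : {m_dot * 1000:5.1f} g/s")
```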
Air flow bench
[ "Chemistry", "Technology", "Engineering" ]
2,939
[ "Engine tuning instruments", "Measuring instruments", "Aerodynamics", "Mechanical engineering", "Aerospace engineering", "Fluid dynamics" ]
4,101,208
https://en.wikipedia.org/wiki/Takt%20time
Takt time, or simply takt, is a manufacturing term to describe the required product assembly duration that is needed to match the demand. Often confused with cycle time, takt time is a tool used to design work and it measures the average time interval between the start of production of one unit and the start of production of the next unit when items are produced sequentially. For calculations, it is the time to produce parts divided by the number of parts demanded in that time interval. The takt time is based on customer demand; if a process or a production line is unable to produce at takt time, either demand leveling, additional resources, or process re-engineering is needed to ensure on-time delivery. For example, if the customer demand is 10 units per week, then, given a 40-hour workweek and steady flow through the production line, the average duration between production starts should be 4 hours, ideally. This interval is further reduced to account for things like machine downtime and scheduled employee breaks. Etymology Takt time is a borrowing of the Japanese word takuto taimu (タクトタイム), which in turn was borrowed from the German word Taktzeit, meaning 'cycle time'. The word was likely introduced to Japan by German engineers in the 1930s. The word originates from the Latin word "tactus" meaning "touch, sense of touch, feeling". Some earlier meanings include: (16th century) "beat triggered by regular contact, clock beat", then in music "beat indicating the rhythm" and (18th century) "regular unit of note values". History Takt time has played an important role in production systems even before the industrial revolution, from 16th-century shipbuilding in Venice and mass production of the Model T by Henry Ford to the synchronization of airframe movement in the German aviation industry, among others. Cooperation between the German aviation industry and Mitsubishi brought takt to Japan, where Toyota incorporated it in the Toyota Production System (TPS). James P. Womack and Daniel T. Jones in The Machine That Changed the World (1990) and Lean Thinking (1996) introduced the world to the concept of "lean". Through this, takt was connected to lean systems. In the Toyota Production System (TPS), takt time is a central element of the just-in-time (JIT) pillar of this production system. Definition Assuming a product is made one unit at a time at a constant rate during the net available work time, the takt time is the amount of time that must elapse between two consecutive unit completions in order to meet the demand. Takt time can first be determined with the formula: T = Ta / D Where: T = Takt time (or takt), e.g. [work time between two consecutive units] Ta = Net time available to work during the period, e.g. [work time per period] D = Demand (customer demand) during the period, e.g. [units required per period] Net available time is the amount of time available for work to be done. This excludes break times and any expected stoppage time (for example scheduled maintenance, team briefings, etc.). Example: If there are a total of 8 hours (or 480 minutes) in a shift (gross time) less 30 minutes lunch, 30 minutes for breaks (2 × 15 mins), 10 minutes for a team briefing and 10 minutes for basic maintenance checks, then the net available time to work = 480 − 30 − 30 − 10 − 10 = 400 minutes. If customer demand were 400 units a day and one shift was being run, then the line would be required to output at a minimum rate of one part per minute in order to be able to keep up with customer demand. 
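A minimal sketch of this calculation in Python, using the same shift and demand figures as the example above (an illustration added here, not part of the original article):

```python
def takt_time(net_available_minutes: float, demand_units: float) -> float:
    """T = Ta / D: the average time allowed per unit in order to meet demand."""
    return net_available_minutes / demand_units

gross_shift = 480            # 8-hour shift in minutes
losses = 30 + 30 + 10 + 10   # lunch, two breaks, team briefing, maintenance checks
net_available = gross_shift - losses    # 400 minutes

demand = 400                 # units required per shift

print(f"net available time: {net_available} min")
print(f"takt time: {takt_time(net_available, demand):.2f} min per unit")
# -> 1.00 min per unit: the line must complete one part per minute
```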
Takt time may be adjusted according to requirements within a company. For example, if one department delivers parts to several manufacturing lines, it often makes sense to use similar takt times on all lines to smooth outflow from the preceding station. Customer demand can still be met by adjusting daily working time, reducing down times on machines, and so on. Implementation Takt time is common in production lines that move a product along a line of stations that each performs a set of predefined tasks. Manufacturing: casting of parts, drilling holes, or preparing a workplace for another task Control tasks: testing of parts or adjusting machinery Administration: answering standard inquiries or call center operation Construction Management: scheduling process steps within a phase of the project Takt in construction With the adoption of lean thinking in the construction industry, takt time has found its way into the project-based production systems of the industry. Starting with construction methods that have highly repetitive products like bridge construction, tunnel construction, and repetitive buildings like hotels and residential high-rises, implementation of takt is increasing. According to Koskela (1992), an ideal production system has continuous flow and creates value for the customer while transforming raw materials into products. Construction projects use critical path method (CPM) or program evaluation and review technique (PERT) for planning and scheduling. These methods do not generate flow in the production and tend to be vulnerable to variation in the system. Due to common cost and schedule overruns, industry professionals and academia have started to regard CPM and PERT as outdated methods that often fail to anticipate uncertainties and allocate resources accurately and optimally in a dynamic construction environment. This has led to increasing developments and implementation of takt. Space scheduling Takt, as used in takt planning or takt-time planning (TTP) for construction, is considered one of the several ways of planning and scheduling construction projects based on their utilization of space rather than just time, as done traditionally in the critical path method. Also, to visualize and create flow of work on a construction site, utilization of space becomes essential. Some other space scheduling methods include: Linear scheduling method (LSM) and vertical production method (VPM) which are used to schedule horizontal and vertical repetitive projects respectively, Line-of-balance (LOB) method used for any type of repetitive projects. Location-based management system (LBMS) uses flowlines with the production rates of the crews, as they move through locations with an objective of optimizing work continuity. Comparison with manufacturing In manufacturing, the product being built keeps moving on the assembly line, while the workstations are stationary. On contrary, construction product, i.e. the building or infrastructure facilities being constructed, is stationary and the workers move from one location to another. Takt planning needs an accurate definition of work at each workstation, which in construction is done through defining spaces, called "zones". Due to the non-repetitive distribution of work in construction, achieving work completion within the defined takt for each zone, becomes difficult. Capacity buffer is used to deal with this variability in the system. 
The rationale behind defining these zones and setting the takt is not standardized and varies as per the style of the planner. Work density method (WDM) is one of the methods being used to assist in this process. Work density is expressed as a unit of time per unit of area. For a certain work area, work density describes how much time a trade will require to do their work in that area (zone), based on: the product's design, i.e., what is in the construction project drawings and specifications the scope of the trade's work, the specific task in their schedule (depending on work already in place and work that will follow later in the same or another process), the means and methods the trade will use (e.g., when prefabricating off-site, the work density on-site is expected to decrease), while accounting for crew capabilities and size. Benefits of takt time Once a takt system is implemented there are a number of benefits: The product moves along a line, so bottlenecks (stations that need more time than planned) are easily identified when the product does not move on in time. Correspondingly, stations that don't operate reliably (suffer a frequent breakdown, etc.) are easily identified. The takt leaves only a certain amount of time to perform the actual value added work. Therefore, there is a strong motivation to get rid of all non-value-adding tasks (like machine set-up, gathering of tools, transporting products, etc.) Workers and machines perform sets of similar tasks, so they don't have to adapt to new processes every day, increasing their productivity. There is no place in the takt system for removal of a product from the assembly line at any point before completion, so opportunities for shrink and damage in transit are minimized. Problems of takt time Once a takt system is implemented there are a number of problems: When customer demand rises so much that takt time has to come down, quite a few tasks have to be either reorganized to take even less time to fit into the shorter takt time, or they have to be split up between two stations (which means another station has to be squeezed into the line and workers have to adapt to the new setup) When one station in the line breaks down for whatever reason the whole line comes to a grinding halt, unless there are buffer capacities for preceding stations to get rid of their products and following stations to feed from. A built-in buffer of three to five percent downtime allows needed adjustments or recovery from failures. Short takt time can put considerable stress on the "moving parts" of a production system or subsystem. In automated systems/subsystems, increased mechanical stress increases the likelihood of a breakdown, and in non-automated systems/subsystems, personnel face both increased physical stress (which increases the risk of repetitive motion (also "stress" or "strain") injury), intensified emotional stress, and lowered motivation, sometimes to the point of increased absenteeism. Tasks have to be leveled to make sure tasks don't bulk in front of certain stations due to peaks in workload. This decreases the flexibility of the system as a whole. The concept of takt time doesn't account for human factors such as an operator needing an unexpected bathroom break or a brief rest period between units (especially for processes involving significant physical labor). 
In practice, this means that the production processes must be realistically capable of operation above peak takt and demand must be leveled in order to avoid wasted line capacity See also Turnaround time Lean manufacturing Toyota Production System Muri Lean construction Factory Physics, a book on manufacturing management Clock-face scheduling, sometimes referred to as Taktplan References External links Lean Manufacturing site about Takt time Six Sigma site about Takt time On Line business processes simulator Takt Time - a vision for Lean Manufacturing Understanding Takt Time and Cycle Time Further reading Ohno, Taiichi, Toyota Production System: Beyond Large-Scale Production, Productivity Press (1988). Baudin, Michel, Lean Assembly: The Nuts and Bolts of Making Assembly Operations Flow, Productivity Press (2002). Ortiz, Chris A., Kaizen Assembly: Designing, Constructing, and Managing a Lean Assembly Line, CRC Press. Production and manufacturing Lean manufacturing
Takt time
[ "Engineering" ]
2,281
[ "Lean manufacturing" ]
4,101,529
https://en.wikipedia.org/wiki/Rocker%20box
A rocker box (also known as a cradle or a big box) is a gold mining implement for separating alluvial placer gold from sand and gravel which was used in placer mining in the 19th century. It consists of a high-sided box, which is open on one end and on top, and was placed on rockers. The inside bottom of the box is lined with riffles and usually a carpet (called Miner's Moss) similar to a sluice box. On top of the box is a classifier sieve (usually with half-inch or quarter-inch openings) which screens-out larger pieces of rock and other material, allowing only finer sand and gravel through. Between the sieve and the lower sluice section is a baffle, which acts as another trap for fine gold and also ensures that the aggregate material being processed is evenly distributed before it enters the sluice section. It sits at an angle and points towards the closed back of the box. Traditionally, the baffle consisted of a flexible apron made of canvas or a similar material, which had a sag of about an inch and a half in the center, to act as a collection pocket for fine gold. Later rockers (including most modern ones) dispensed with the flexible apron and used a pair of solid wood or metal baffle boards. These are sometimes covered with carpet to trap fine gold. The entire device sits on rockers at a slight gradient, which allows it to be rocked side to side. Today, the rocker box is not used as extensively as the sluice, but still is an effective method of recovering gold in areas where there is not enough available water to operate a sluice effectively. Like a sluice box, the rocker box has riffles and a carpet in it to trap gold. It was designed to be used in areas with less water than a sluice box. The mineral processing involves pouring water out of a small cup and then rocking the small sluice box like a cradle, thus the name rocker box or cradle. Rocker boxes must be manipulated carefully, to prevent losing the gold. Although big, and difficult to move, the rocker can pick up twice the amount of the gravel, and therefore more gold in one day than an ordinary gold mining pan. The rocker, like the pan, is used extensively in small-scale placer work, in sampling, and for washing sluice concentrates and material cleaned by hand from bedrock in other placer operations. One to three cubic yards, bank measure, can be dug and washed in a rocker per man-shift, depending upon the distance the gravel or water has to be carried, the character of the gravel, and the size of the rocker. Rockers are usually homemade and display a variety of designs. A favorite design consists essentially of a combination washing box and screen, a canvas or carpet apron under the screen, a short sluice with two or more riffles, and rockers under the sluice. The bottom of the washing box consists of sheet metal with holes about a half an inch in diameter punched in it, or a half-inch mesh screen can be used. Dimensions shown are satisfactory, but variations are possible. The bottom of the rocker should be made of a single wide, smooth board, which will greatly facilitate cleanups. The materials for building a rocker cost only a few dollars, depending mainly on the source of lumber. Notes References Further reading Recreational Gold Panning Gold Ankauf (in German) Fossicking Gold mining Mining equipment
Rocker box
[ "Engineering" ]
728
[ "Mining equipment" ]
4,101,904
https://en.wikipedia.org/wiki/Flux%20balance%20analysis
In biochemistry, flux balance analysis (FBA) is a mathematical method for simulating the metabolism of cells or entire unicellular organisms, such as E. coli or yeast, using genome-scale reconstructions of metabolic networks. Genome-scale reconstructions describe all the biochemical reactions in an organism based on its entire genome. These reconstructions model metabolism by focusing on the interactions between metabolites, identifying which metabolites are involved in the various reactions taking place in a cell or organism, and determining the genes that encode the enzymes which catalyze these reactions (if any). In comparison to traditional methods of modeling, FBA is less intensive in terms of the input data required for constructing the model. Simulations performed using FBA are computationally inexpensive and can calculate steady-state metabolic fluxes for large models (over 10,000 reactions) in a few seconds on modern personal computers. The related method of metabolic pathway analysis seeks to find and list all possible pathways between metabolites. FBA finds applications in bioprocess engineering to systematically identify modifications to the metabolic networks of microbes used in fermentation processes that improve product yields of industrially important chemicals such as ethanol and succinic acid. It has also been used for the identification of putative drug targets in cancer and pathogens, rational design of culture media, and host–pathogen interactions. The results of FBA can be visualized for smaller networks using flux maps similar to the image on the right, which illustrates the steady-state fluxes carried by reactions in glycolysis. The thickness of the arrows is proportional to the flux through the reaction. FBA formalizes the system of equations describing the concentration changes in a metabolic network as the dot product of a matrix of the stoichiometric coefficients (the stoichiometric matrix S) and the vector v of the unsolved fluxes. The right-hand side of the dot product is a vector of zeros representing the system at steady state. At steady state, metabolite concentrations remain constant as the rates of production and consumption are balanced, resulting in no net change over time. Since the system of equations is often underdetermined, there can be multiple possible solutions. To obtain a single solution, the flux that maximizes a reaction of interest, such as biomass or ATP production, is selected. Linear programming is then used to calculate one of the possible solutions of fluxes corresponding to the steady state. History Some of the earliest work in FBA dates back to the early 1980s. Papoutsakis demonstrated that it was possible to construct flux balance equations using a metabolic map. It was Watson, however, who first introduced the idea of using linear programming and an objective function to solve for the fluxes in a pathway. The first significant study was subsequently published by Fell and Small, who used flux balance analysis together with more elaborate objective functions to study the constraints in fat synthesis. Simulations FBA is not computationally intensive, taking on the order of seconds to calculate optimal fluxes for biomass production for a typical network (around 10,000 reactions). This means that the effect of deleting reactions from the network and/or changing flux constraints can be sensibly modelled on a single computer. 
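As a concrete illustration of the formulation described above (the steady-state constraint S·v = 0 solved by linear programming) and of the gene-deletion studies discussed in the next section, here is a deliberately tiny, hypothetical example. The three-reaction network, the gene names and the GPR rules are all invented for demonstration, and SciPy's linprog is used as the LP solver.

```python
# Toy FBA sketch: maximize biomass flux subject to S v = 0 and flux bounds,
# with a simple Boolean GPR evaluation to simulate gene knockouts.
import numpy as np
from scipy.optimize import linprog

# Columns: uptake (-> A), conversion (A -> B), biomass (A + B ->)
reactions = ["uptake", "conv", "biomass"]
S = np.array([
    [1, -1, -1],   # metabolite A
    [0,  1, -1],   # metabolite B
])

# Gene-protein-reaction rules (Boolean strings); uptake and biomass have no gene
gpr = {"uptake": None, "conv": "g1 or g2", "biomass": None}

def reaction_active(rule, knocked_out):
    """Evaluate a GPR Boolean expression after deleting the given genes."""
    if rule is None:
        return True
    env = {g: (g not in knocked_out) for g in ("g1", "g2")}
    return bool(eval(rule, {"__builtins__": {}}, env))  # fine for this tiny sketch

def max_biomass(knocked_out=()):
    bounds = [(0, 10), (0, None), (0, None)]          # uptake capped at 10 units
    for i, name in enumerate(reactions):              # knocked-out reactions carry zero flux
        if not reaction_active(gpr[name], set(knocked_out)):
            bounds[i] = (0, 0)
    # maximize biomass flux = minimize its negative, subject to S v = 0
    res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2),
                  bounds=bounds, method="highs")
    return res.x[reactions.index("biomass")]

print("wild type      :", max_biomass())               # expect 5.0
print("delete g1      :", max_biomass(("g1",)))        # isozyme g2 keeps conv active
print("delete g1 & g2 :", max_biomass(("g1", "g2")))   # conv off, so no biomass
```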
Gene/reaction deletion and perturbation studies Single reaction deletion A frequently used technique to search a metabolic network for reactions that are particularly critical to the production of biomass. By removing each reaction in a network in turn and measuring the predicted flux through the biomass function or any other objective such as ATP production, each reaction can be classified as either essential (if the flux through the biomass function is substantially reduced) or non-essential (if the flux through the biomass function is unchanged or only slightly reduced). Pairwise reaction deletion Pairwise reaction deletion of all possible pairs of reactions is useful when looking for drug targets, as it allows the simulation of multi-target treatments, either by a single drug with multiple targets or by drug combinations. Double deletion studies can also quantify the synthetic lethal interactions between different pathways providing a measure of the contribution of the pathway to overall network robustness. Single and multiple gene deletions Genes are connected to enzyme-catalyzed reactions by Boolean expressions known as Gene-Protein-Reaction expressions (GPR). Typically a GPR takes the form (Gene A AND Gene B) to indicate that the products of genes A and B are protein sub-units that assemble to form the complete protein and therefore the absence of either would result in deletion of the reaction. On the other hand, if the GPR is (Gene A OR Gene B) it implies that the products of genes A and B are isozymes, meaning that the expression of either one is sufficient to maintain an active reaction. A reaction can also be regulated by a single gene, or in the case of diffusion, it may not be associated with any gene at all. Therefore, it is possible to evaluate the effect of single or multiple gene deletions by evaluation of the GPR as a Boolean expression. If the GPR evaluates to false, the reaction is constrained to zero in the model prior to performing FBA. Thus gene knockouts can be simulated using FBA. Logically, reactions that are not associated with any genes cannot be deleted. Interpretation of gene and reaction deletion results The utility of reaction inhibition and deletion analyses becomes most apparent if a gene-protein-reaction matrix has been assembled for the network being studied with FBA. The gene-protein-reaction matrix is a binary matrix connecting genes with the proteins made from them. Using this matrix, reaction essentiality can be converted into gene essentiality indicating the gene defects which may cause a certain disease phenotype or the proteins/enzymes which are essential (and thus what enzymes are the most promising drug targets in pathogens). However, the gene-protein-reaction matrix does not specify the Boolean relationship between genes with respect to the enzyme, instead it merely indicates an association between them. Therefore, it should be used only if the Boolean GPR expression is unavailable. Reaction inhibition The effect of inhibiting a reaction, rather than removing it entirely, can be simulated in FBA by restricting the allowed flux through it. The effect of an inhibition can be classified as lethal or non-lethal by applying the same criteria as in the case of a deletion where a suitable threshold is used to distinguish “substantially reduced” from “slightly reduced”. 
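A minimal sketch of the GPR-based knockout logic described above, with invented gene names and rules; a real implementation would parse the Boolean rules properly rather than rely on string substitution and eval.

```python
# Hypothetical Gene-Protein-Reaction (GPR) rules for three reactions.
gprs = {
    "R1": "(geneA and geneB)",   # protein complex: both subunits required
    "R2": "(geneC or geneD)",    # isozymes: either gene suffices
    "R3": "",                    # spontaneous/diffusion: no gene associated
}

def reaction_active(gpr: str, knocked_out: set) -> bool:
    """Evaluate a GPR rule; genes in `knocked_out` are treated as absent."""
    if not gpr:                  # no gene association -> cannot be deleted
        return True
    expr = gpr
    for gene in ("geneA", "geneB", "geneC", "geneD"):
        expr = expr.replace(gene, str(gene not in knocked_out))
    return eval(expr)            # rule now contains only True/False, and, or

knockout = {"geneA", "geneD"}
for rxn, gpr in gprs.items():
    if reaction_active(gpr, knockout):
        print(rxn, "-> still active")
    else:
        print(rxn, "-> constrained to zero flux before running FBA")
```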
Generally the choice of threshold is arbitrary but a reasonable estimate can be obtained from growth experiments where the simulated inhibitions/deletions are actually performed and growth rate is measured. Growth media optimization To design optimal growth media with respect to enhanced growth rates or useful by-product secretion, it is possible to use a method known as Phenotypic Phase Plane analysis. PhPP involves applying FBA repeatedly on the model while co-varying the nutrient uptake constraints and observing the value of the objective function (or by-product fluxes). PhPP makes it possible to find the optimal combination of nutrients that favor a particular phenotype or a mode of metabolism resulting in higher growth rates or secretion of industrially useful by-products. The predicted growth rates of bacteria in varying media have been shown to correlate well with experimental results, and the approach has been used to define precise minimal media for the culture of Salmonella typhimurium. Host-pathogen interactions The human microbiota is a complex system with as many as 400 trillion microbes and bacteria interacting with each other and the host. To understand key factors in this system, a multi-scale, dynamic flux-balance analysis has been proposed, since FBA is comparatively inexpensive computationally. Mathematical description In contrast to the traditionally followed approach of metabolic modeling using coupled ordinary differential equations, flux balance analysis requires very little information in terms of the enzyme kinetic parameters and concentration of metabolites in the system. It achieves this by making two assumptions, steady state and optimality. The first assumption is that the modeled system has entered a steady state, where the metabolite concentrations no longer change, i.e. in each metabolite node the producing and consuming fluxes cancel each other out. The second assumption is that the organism has been optimized through evolution for some biological goal, such as optimal growth or conservation of resources. The steady-state assumption reduces the system to a set of linear equations, which is then solved to find a flux distribution that satisfies the steady-state condition subject to the stoichiometry constraints while maximizing the value of a pseudo-reaction (the objective function) representing the conversion of biomass precursors into biomass. The steady-state assumption dates to the ideas of material balance developed to model the growth of microbial cells in fermenters in bioprocess engineering. During microbial growth, a substrate consisting of a complex mixture of carbon, hydrogen, oxygen and nitrogen sources along with trace elements is consumed to generate biomass. The material balance model for this process is accumulation = input − output + generation − consumption. If we consider the system of microbial cells to be at steady state then we may set the accumulation term to zero and reduce the material balance equations to simple algebraic equations. In such a system, substrate becomes the input to the system which is consumed and biomass is produced becoming the output from the system. The material balance may then be represented as input (substrate consumed) = output (biomass produced). Mathematically, the algebraic equations can be represented as a dot product of a matrix of coefficients and a vector of the unknowns. Since the steady-state assumption sets the accumulation term to zero, the system can be written as the homogeneous linear system $A \cdot x = 0$. Extending this idea to metabolic networks, it is possible to represent a metabolic network as a stoichiometry balanced set of equations.
Moving to the matrix formalism, we can represent the equations as the dot product of a matrix of stoichiometry coefficients (stoichiometric matrix ) and the vector of fluxes as the unknowns and set the right hand side to 0 implying the steady state. Metabolic networks typically have more reactions than metabolites and this gives an under-determined system of linear equations containing more variables than equations. The standard approach to solve such under-determined systems is to apply linear programming. Linear programs are problems that can be expressed in canonical form: where x represents the vector of variables (to be determined), c and b are vectors of (known) coefficients, A is a (known) matrix of coefficients, and is the matrix transpose. The expression to be maximized or minimized is called the objective function (cTx in this case). The inequalities Ax ≤ b are the constraints which specify a convex polytope over which the objective function is to be optimized. Linear Programming requires the definition of an objective function. The optimal solution to the LP problem is considered to be the solution which maximizes or minimizes the value of the objective function depending on the case in point. In the case of flux balance analysis, the objective function Z for the LP is often defined as biomass production. Biomass production is simulated by an equation representing a lumped reaction that converts various biomass precursors into one unit of biomass. Therefore, the canonical form of a Flux Balance Analysis problem would be: where represents the vector of fluxes (to be determined), is a (known) matrix of coefficients. The expression to be maximized or minimized is called the objective function ( in this case). The inequalities and define, respectively, the minimal and the maximal rates of flux for every reaction corresponding to the columns of the matrix. These rates can be experimentally determined to constrain and improve the predictive accuracy of the model even further or they can be specified to an arbitrarily high value indicating no constraint on the flux through the reaction. The main advantage of the flux balance approach is that it does not require any knowledge of the metabolite concentrations, or more importantly, the enzyme kinetics of the system; the homeostasis assumption precludes the need for knowledge of metabolite concentrations at any time as long as that quantity remains constant, and additionally it removes the need for specific rate laws since it assumes that at steady state, there is no change in the size of the metabolite pool in the system. The stoichiometric coefficients alone are sufficient for the mathematical maximization of a specific objective function. The objective function is essentially a measure of how each component in the system contributes to the production of the desired product. The product itself depends on the purpose of the model, but one of the most common examples is the study of total biomass. A notable example of the success of FBA is the ability to accurately predict the growth rate of the prokaryote E. coli when cultured in different conditions. In this case, the metabolic system was optimized to maximize the biomass objective function. However this model can be used to optimize the production of any product, and is often used to determine the output level of some biotechnologically relevant product. 
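Spelled out in conventional notation (using the usual symbols for the stoichiometric matrix, the flux vector and its bounds), the canonical flux-balance problem described above reads:

```latex
\begin{aligned}
\text{maximize}   \quad & Z = c^{\mathsf{T}} v \\
\text{subject to} \quad & S\,v = 0 \\
                        & v_{\min} \le v \le v_{\max}
\end{aligned}
```

where $S$ is the stoichiometric matrix, $v$ the vector of fluxes, $c$ the vector of objective weights (for plain biomass maximization, a single 1 in the position of the biomass reaction), and $v_{\min}$, $v_{\max}$ the per-reaction flux bounds.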
The model itself can be experimentally verified by cultivating organisms using a chemostat or similar tools to ensure that nutrient concentrations are held constant. Measurements of the production of the desired objective can then be used to correct the model. A good description of the basic concepts of FBA can be found in the freely available supplementary material to Edwards et al. 2001 which can be found at the Nature website. Further sources include the book "Systems Biology" by B. Palsson dedicated to the subject and a useful tutorial and paper by J. Orth. Many other sources of information on the technique exist in published scientific literature including Lee et al. 2006, Feist et al. 2008, and Lewis et al. 2012. Model preparation and refinement The key parts of model preparation are: creating a metabolic network without gaps, adding constraints to the model, and finally adding an objective function (often called the Biomass function), usually to simulate the growth of the organism being modelled. Metabolic network and software tools Metabolic networks can vary in scope from those describing a single pathway, up to the cell, tissue or organism. The main requirement of a metabolic network that forms the basis of an FBA-ready network is that it contains no gaps. This typically means that extensive manual curation is required, making the preparation of a metabolic network for flux-balance analysis a process that can take months or years. However, recent advances such as so-called gap-filling methods can reduce the required time to weeks or months. Software packages for creation of FBA models include: Pathway Tools/MetaFlux, Simpheny, MetNetMaker, COBRApy, CarveMe, MIOM, or COBREXA.jl. Generally models are created in BioPAX or SBML format so that further analysis or visualization can take place in other software although this is not a requirement. Constraints A key part of FBA is the ability to add constraints to the flux rates of reactions within networks, forcing them to stay within a range of selected values. This lets the model more accurately simulate real metabolism. The constraints belong to two subsets from a biological perspective; boundary constraints that limit nutrient uptake/excretion and internal constraints that limit the flux through reactions within the organism. In mathematical terms, the application of constraints can be considered to reduce the solution space of the FBA model. In addition to constraints applied at the edges of a metabolic network, constraints can be applied to reactions deep within the network. These constraints are usually simple; they may constrain the direction of a reaction due to energy considerations or constrain the maximum speed of a reaction due to the finite speed of all reactions in nature. Growth media constraints Organisms, and all other metabolic systems, require some input of nutrients. Typically the rate of uptake of nutrients is dictated by their availability (a nutrient that is not present cannot be absorbed), their concentration and diffusion constants (higher concentrations of quickly-diffusing metabolites are absorbed more quickly) and the method of absorption (such as active transport or facilitated diffusion versus simple diffusion). If the rate of absorption (and/or excretion) of certain nutrients can be experimentally measured then this information can be added as a constraint on the flux rate at the edges of a metabolic model. 
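As a concrete example of adding such an uptake constraint and running FBA with one of the packages named above (COBRApy), the sketch below loads an SBML model, limits a glucose exchange reaction, and optimizes the default biomass objective; the file name and reaction identifier are assumptions and will differ between models.

```python
from cobra.io import read_sbml_model

model = read_sbml_model("e_coli_core.xml")      # hypothetical model file

# Growth-media constraint: cap glucose uptake at a measured 10 mmol/gDW/h.
# In COBRApy, uptake through an exchange reaction is a negative flux.
model.reactions.get_by_id("EX_glc__D_e").lower_bound = -10.0

solution = model.optimize()                     # FBA on the default objective
print("predicted growth rate:", solution.objective_value)
print(solution.fluxes.head())                   # steady-state flux per reaction
```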
This ensures that nutrients that are not present or not absorbed by the organism do not enter its metabolism (the flux rate is constrained to zero) and also means that known nutrient uptake rates are adhered to by the simulation. This provides a secondary method of making sure that the simulated metabolism has experimentally verified properties rather than just mathematically acceptable ones. Thermodynamical reaction constraints In principle, all reactions are reversible however in practice reactions often effectively occur in only one direction. This may be due to significantly higher concentration of reactants compared to the concentration of the products of the reaction. But more often it happens because the products of a reaction have a much lower free energy than the reactants and therefore the forward direction of a reaction is favored more. For ideal reactions, For certain reactions a thermodynamic constraint can be applied implying direction (in this case forward) Realistically the flux through a reaction cannot be infinite (given that enzymes in the real system are finite) which implies that, Experimentally measured flux constraints Certain flux rates can be measured experimentally () and the fluxes within a metabolic model can be constrained, within some error (), to ensure these known flux rates are accurately reproduced in the simulation. Flux rates are most easily measured for nutrient uptake at the edge of the network. Measurements of internal fluxes is possible using radioactively labelled or NMR visible metabolites. Constrained FBA-ready metabolic models can be analyzed using software such as the COBRA toolbox (available implementations in MATLAB and Python), SurreyFBA, or the web-based FAME. Additional software packages have been listed elsewhere. A comprehensive review of all such software and their functionalities has been recently reviewed. An open-source alternative is available in the R (programming language) as the packages or sybil for performing FBA and other constraint based modeling techniques. Objective function FBA can give a large number of mathematically acceptable solutions to the steady-state problem . However solutions of biological interest are the ones which produce the desired metabolites in the correct proportion. The objective function defines the proportion of these metabolites. For instance when modelling the growth of an organism the objective function is generally defined as biomass. Mathematically, it is a column in the stoichiometry matrix the entries of which place a "demand" or act as a "sink" for biosynthetic precursors such as fatty acids, amino acids and cell wall components which are present on the corresponding rows of the S matrix. These entries represent experimentally measured, dry weight proportions of cellular components. Therefore, this column becomes a lumped reaction that simulates growth and reproduction. Therefore, the accuracy of experimental measurements plays an essential role in the correct definition of the biomass function and makes the results of FBA biologically applicable by ensuring that the correct proportion of metabolites are produced by metabolism. When modeling smaller networks the objective function can be changed accordingly. An example of this would be in the study of the carbohydrate metabolism pathways where the objective function would probably be defined as a certain proportion of ATP and NADH and thus simulate the production of high energy metabolites by this pathway. 
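Continuing the COBRApy sketch above, the objective can be swapped for a smaller-scale target such as ATP production, mirroring the carbohydrate-metabolism example just described; the reaction identifiers here are assumptions common to E. coli models, not universal names.

```python
# Maximize flux through the ATP maintenance reaction instead of biomass.
# "ATPM" is a common identifier in E. coli models, but this is an assumption.
model.objective = "ATPM"
print("maximal ATP production flux:", model.optimize().objective_value)

# Internal (thermodynamic-style) constraint: force a reaction to run forward only.
model.reactions.get_by_id("PGI").lower_bound = 0.0
```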
Optimization of the objective/biomass function Linear programming can be used to find a single optimal solution. The most common biological optimization goal for a whole-organism metabolic network would be to choose the flux vector that maximises the flux through a biomass function composed of the constituent metabolites of the organism placed into the stoichiometric matrix and denoted or simply In the more general case any reaction can be defined and added to the biomass function with either the condition that it be maximised or minimised if a single “optimal” solution is desired. Alternatively, and in the most general case, a vector can be introduced, which defines the weighted set of reactions that the linear programming model should aim to maximise or minimise, In the case of there being only a single separate biomass function/reaction within the stoichiometric matrix would simplify to all zeroes with a value of 1 (or any non-zero value) in the position corresponding to that biomass function. Where there were multiple separate objective functions would simplify to all zeroes with weighted values in the positions corresponding to all objective functions. Reducing the solution space – biological considerations for the system The analysis of the null space of matrices is implemented in software packages specialized for matrix operations such as Matlab and Octave. Determination of the null space of tells us all the possible collections of flux vectors (or linear combinations thereof) that balance fluxes within the biological network. The advantage of this approach becomes evident in biological systems which are described by differential equation systems with many unknowns. The velocities in the differential equations above - and - are dependent on the reaction rates of the underlying equations. The velocities are generally taken from the Michaelis–Menten kinetic theory, which involves the kinetic parameters of the enzymes catalyzing the reactions and the concentration of the metabolites themselves. Isolating enzymes from living organisms and measuring their kinetic parameters is a difficult task, as is measuring the internal concentrations and diffusion constants of metabolites within an organism. Therefore, the differential equation approach to metabolic modeling is beyond the current scope of science for all but the most studied organisms. FBA avoids this impediment by applying the homeostatic assumption, which is a reasonably approximate description of biological systems. Although FBA avoids that biological obstacle, the mathematical issue of a large solution space remains. FBA has a two-fold purpose. Accurately representing the biological limits of the system and returning the flux distribution closest to the natural fluxes within the target system/organism. Certain biological principles can help overcome the mathematical difficulties. While the stoichiometric matrix is almost always under-determined initially (meaning that the solution space to is very large), the size of the solution space can be reduced and be made more reflective of the biology of the problem through the application of certain constraints on the solutions. Extensions The success of FBA and the realization of its limitations has led to extensions that attempt to mediate the limitations of the technique. Flux variability analysis The optimal solution to the flux-balance problem is rarely unique with many possible, and equally optimal, solutions existing. 
Flux variability analysis (FVA), built into some analysis software, returns the boundaries for the fluxes through each reaction that can, paired with the right combination of other fluxes, estimate the optimal solution. Reactions which can support a low variability of fluxes through them are likely to be of a higher importance to an organism and FVA is a promising technique for the identification of reactions that are important. Minimization of metabolic adjustment (MOMA) When simulating knockouts or growth on media, FBA gives the final steady-state flux distribution. This final steady state is reached in varying time-scales. For example, the predicted growth rate of E. coli on glycerol as the primary carbon source did not match the FBA predictions; however, on sub-culturing for 40 days or 700 generations, the growth rate adaptively evolved to match the FBA prediction. Sometimes it is of interest to find out what is the immediate effect of a perturbation or knockout, since it takes time for regulatory changes to occur and for the organism to re-organize fluxes to optimally utilize a different carbon source or circumvent the effect of the knockout. MOMA predicts the immediate sub-optimal flux distribution following the perturbation by minimizing the distance (Euclidean) between the wild-type FBA flux distribution and the mutant flux distribution using quadratic programming. This yields an optimization problem of the form. where represents the wild-type (or unperturbed state) flux distribution and represents the flux distribution on gene deletion that is to be solved for. This simplifies to: This is the MOMA solution which represents the flux distribution immediately post-perturbation. Regulatory on-off minimization (ROOM) ROOM attempts to improve the prediction of the metabolic state of an organism after a gene knockout. It follows the same premise as MOMA that an organism would try to restore a flux distribution as close as possible to the wild-type after a knockout. However it further hypothesizes that this steady state would be reached through a series of transient metabolic changes by the regulatory network and that the organism would try to minimize the number of regulatory changes required to reach the wild-type state. Instead of using a distance metric minimization however it uses a mixed integer linear programming method. Dynamic FBA Dynamic FBA attempts to add the ability for models to change over time, thus in some ways avoiding the strict steady state condition of pure FBA. Typically the technique involves running an FBA simulation, changing the model based on the outputs of that simulation, and rerunning the simulation. By repeating this process an element of feedback is achieved over time. Comparison with other techniques FBA provides a less simplistic analysis than Choke Point Analysis while requiring far less information on reaction rates and a much less complete network reconstruction than a full dynamic simulation would require. In filling this niche, FBA has been shown to be a very useful technique for analysis of the metabolic capabilities of cellular systems. Choke point analysis Unlike choke point analysis which only considers points in the network where metabolites are produced but not consumed or vice versa, FBA is a true form of metabolic network modelling because it considers the metabolic network as a single complete entity (the stoichiometric matrix) at all stages of analysis. 
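A flux variability analysis of the kind described above can likewise be run with COBRApy's flux_analysis module; the model file name is again a placeholder.

```python
from cobra.io import read_sbml_model
from cobra.flux_analysis import flux_variability_analysis

model = read_sbml_model("e_coli_core.xml")          # placeholder file name

# Minimum and maximum flux each reaction can carry while still achieving
# at least 90% of the optimal objective value.
fva = flux_variability_analysis(model, fraction_of_optimum=0.9)
print(fva.head())                                   # columns: minimum, maximum
```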
This means that network effects, such as chemical reactions in distant pathways affecting each other, can be reproduced in the model. The upside to the inability of choke point analysis to simulate network effects is that it considers each reaction within a network in isolation and thus can suggest important reactions in a network even if a network is highly fragmented and contains many gaps. Dynamic metabolic simulation Unlike dynamic metabolic simulation, FBA assumes that the internal concentration of metabolites within a system stays constant over time and thus is unable to provide anything other than steady-state solutions. It is unlikely that FBA could, for example, simulate the functioning of a nerve cell. Since the internal concentration of metabolites is not considered within a model, it is possible that an FBA solution could contain metabolites at a concentration too high to be biologically acceptable. This is a problem that dynamic metabolic simulations would probably avoid. One advantage of the simplicity of FBA over dynamic simulations is that they are far less computationally expensive, allowing the simulation of large numbers of perturbations to the network. A second advantage is that the reconstructed model can be substantially simpler by avoiding the need to consider enzyme rates and the effect of complex interactions on enzyme kinetics. See also Isotopic labeling Metabolomics Metabolic engineering Metabolic network modelling References Bioinformatics Systems biology Computational biology
Flux balance analysis
[ "Engineering", "Biology" ]
5,559
[ "Bioinformatics", "Biological engineering", "Computational biology", "Systems biology" ]
4,102,130
https://en.wikipedia.org/wiki/Open%20implementation
In computing, open implementation platforms are systems where the implementation is accessible. Open implementation allows developers of a program to alter pieces of the underlying software to fit their specific needs. With this technique, it is far easier to write general tools, though it makes the programs themselves more complex to design and use. There are also open language implementations, which make aspects of the language implementation accessible to application programmers. Open implementation is not to be confused with open source, which allows users to change implementation source code, rather than using existing application programming interfaces. See also Aspect-oriented programming as a successor concept in research Metaobject protocol for the primary implementation means Software architecture for organization of software in general External links Links pertaining to open implementation Free software culture and documents
Open implementation
[ "Technology" ]
150
[ "Computing stubs" ]
4,102,360
https://en.wikipedia.org/wiki/Filopodia
Filopodia (singular: filopodium) are slender cytoplasmic projections that extend beyond the leading edge of lamellipodia in migrating cells. Within the lamellipodium, actin ribs are known as microspikes, and when they extend beyond the lamellipodia, they are known as filopodia. They contain microfilaments (also called actin filaments) cross-linked into bundles by actin-bundling proteins, such as fascin and fimbrin. Filopodia form focal adhesions with the substratum, linking them to the cell surface. Many types of migrating cells display filopodia, which are thought to be involved both in the sensation of chemotropic cues and in the resulting changes in directed locomotion. Activation of the Rho family of GTPases, particularly Cdc42, and their downstream intermediates results in the polymerization of actin fibers by Ena/Vasp homology proteins. Growth factors bind to receptor tyrosine kinases, resulting in the polymerization of actin filaments, which, when cross-linked, make up the supporting cytoskeletal elements of filopodia. Rho activity also results in activation, by phosphorylation, of ezrin-moesin-radixin family proteins that link actin filaments to the filopodia membrane. Filopodia have roles in sensing, migration, neurite outgrowth, and cell-cell interaction. To close a wound in vertebrates, growth factors stimulate the formation of filopodia in fibroblasts to direct fibroblast migration and wound closure. In macrophages, filopodia act as phagocytic tentacles, pulling bound objects towards the cell for phagocytosis. Functions and variants Many cell types have filopodia. The functions of filopodia have been attributed to pathfinding of neurons, early stages of synapse formation, antigen presentation by dendritic cells of the immune system, force generation by macrophages and virus transmission. They have been associated with wound closure, dorsal closure of Drosophila embryos, chemotaxis in Dictyostelium, Delta-Notch signaling, vasculogenesis, cell adhesion, cell migration, and cancer metastasis. Specific kinds of filopodia have been given various names: microspikes, pseudopods, thin filopodia, thick filopodia, gliopodia, myopodia, invadopodia, podosomes, telopodes, tunneling nanotubes and dendrites. In infections Filopodia are also used for movement of bacteria between cells, so as to evade the host immune system. The intracellular bacteria Ehrlichia are transported between cells through the host cell filopodia induced by the pathogen during initial stages of infection. Filopodia are the initial contact that human retinal pigment epithelial (RPE) cells make with elementary bodies of Chlamydia trachomatis, the bacterium that causes chlamydia. Viruses have been shown to be transported along filopodia toward the cell body, leading to cell infection. Directed transport of receptor-bound epidermal growth factor (EGF) along filopodia has also been described, supporting the proposed sensing function of filopodia. SARS-CoV-2, the strain of coronavirus responsible for COVID-19, produces filopodia in infected cells. In brain cells In developing neurons, filopodia extend from the growth cone at the leading edge. In neurons deprived of filopodia by partial inhibition of actin filament polymerization, growth cone extension continues as normal, but the direction of growth is disrupted and highly irregular. Filopodia-like projections have also been linked to dendrite creation when new synapses are formed in the brain.
A study deploying protein imaging of adult mice showed that filopodia in the explored regions were an order of magnitude more abundant than previously believed, comprising about 30% of all dendritic protrusions. At their tips, they contain "silent synapses" that are inactive until recruited as part of neural plasticity and flexible learning or memories; such synapses were previously thought to be present mainly in the developing pre-adult brain and to die off with time. References External links MBInfo - Filopodia MBInfo - Filopodia Assembly New Form of Cinema: Cellular Film, proposal for documentaries with cellular imaging Cell movement Cytoskeleton Cell biology Neurons Actin-based structures
Filopodia
[ "Biology" ]
980
[ "Cell biology" ]
4,102,366
https://en.wikipedia.org/wiki/Verneuil%20method
The Verneuil method (or Verneuil process or Verneuil technique), also called flame fusion, was the first commercially successful method of manufacturing synthetic gemstones, developed in the late 1883 by the French chemist Auguste Verneuil. It is primarily used to produce the ruby, sapphire and padparadscha varieties of corundum, as well as the diamond simulants rutile, strontium titanate and spinel. The principle of the process involves melting a finely powdered substance using an oxyhydrogen flame, and crystallising the melted droplets into a boule. The process is considered to be the founding step of modern industrial crystal growth technology, and remains in wide use to this day. History Since the study of alchemy began, there have been attempts to synthetically produce precious stones, and ruby, being one of the prized cardinal gems, has long been a prime candidate. In the 19th century, significant advances were achieved, with the first ruby formed by melting two smaller rubies together in 1817, and the first microscopic crystals created from alumina (aluminium oxide) in a laboratory in 1837. By 1877, chemist Edmond Frémy had devised an effective method for commercial ruby manufacture by using molten baths of alumina, yielding the first gemstone-quality synthetic stones. The Parisian chemist Auguste Verneuil collaborated with Frémy on developing the method, but soon went on to independently develop the flame fusion process, which would eventually come to bear his name. One of Verneuil's sources of inspiration for developing his own method was the appearance of synthetic rubies sold by an unknown Genevan merchant in 1880. These "Geneva rubies" were dismissed as artificial at the time, but are now believed to be the first rubies produced by flame fusion, predating Verneuil's work on the process by 20 years. After examining the "Geneva rubies", Verneuil came to the conclusion that it was possible to recrystallise finely ground aluminium oxide into a large gemstone. This realisation, along with the availability of the recently developed oxyhydrogen torch and growing demand for synthetic rubies, led him to design the Verneuil furnace, where finely ground purified alumina and chromium oxide were melted by a flame of at least , and recrystallised on a support below the flame, creating a large crystal. He announced his work in 1902, publishing details outlining the process in 1904. By 1910, Verneuil's laboratory had expanded into a 30-furnace production facility, with annual gemstone production by the Verneuil process having reached in 1907. By 1912, production reached , and would go on to reach in 1980 and in 2000, led by Hrand Djevahirdjian's factory in Monthey, Switzerland, founded in 1914. The most notable improvements in the process were made in 1932, by S. K. Popov, who helped establish the capability for producing high-quality sapphires in the Soviet Union through the next 20 years. A large production capability was also established in the United States during World War II, when European sources were not available, and jewels were in high demand for their military applications such as for timepieces. The process was designed primarily for the synthesis of rubies, which became the first gemstone to be produced on an industrial scale. 
However, the Verneuil process could also be used for the production of other stones, including blue sapphire, which required oxides of iron and titanium to be used in place of chromium oxide, as well as more elaborate ones, such as star sapphires, where titania (titanium dioxide) was added and the boule was kept in the heat longer, allowing needles of rutile to crystallise within it. In 1947, the Linde Air Products division of Union Carbide pioneered the use of the Verneuil process for creating such star sapphires, until production was discontinued in 1974 owing to overseas competition. Despite some improvements in the method, the Verneuil process remains virtually unchanged to this day, while maintaining a leading position in the manufacture of synthetic corundum and spinel gemstones. Its most significant setback came in 1917, when Jan Czochralski introduced the Czochralski process, which has found numerous applications in the semiconductor industry, where a much higher quality of crystals is required than the Verneuil process can produce. Other alternatives to the process emerged in 1957, when Bell Labs introduced the hydrothermal process, and in 1958, when Carroll Chatham introduced the flux method. In 1989 Larry P Kelley of ICT, Inc. also developed a variant of the Czochralski process where natural ruby is used as the 'feed' material. Process One of the most crucial factors in successfully crystallising an artificial gemstone is obtaining highly pure starting material, with at least 99.9995% purity. In the case of manufacturing rubies, sapphires or padparadscha, this material is alumina. The presence of sodium impurities is especially undesirable, as it makes the crystal opaque. But because the bauxite from which alumina is obtained is most likely by way of the Bayer process (the first stage of which introduces caustic soda in order to separate the Al2O3) particular attention must be paid to the feedstock. Depending on the desired colouration of the crystal, small quantities of various oxides are added, such as chromium oxide for a red ruby, or ferric oxide and titania for a blue sapphire. Other starting materials include titania for producing rutile, or titanyl double oxalate for producing strontium titanate. Alternatively, small, valueless crystals of the desired product can be used. This starting material is finely powdered, and placed in a container within a Verneuil furnace, with an opening at the bottom through which the powder can escape when the container is vibrated. While the powder is being released, oxygen is supplied into the furnace, and travels with the powder down a narrow tube. This tube is located within a larger tube, into which hydrogen is supplied. At the point where the narrow tube opens into the larger one, combustion occurs, with a flame of at least at its core. As the powder passes through the flame, it melts into small droplets, which fall onto an earthen support rod placed below. The droplets gradually form a sinter cone on the rod, the tip of which is close enough to the core to remain liquid. It is at that tip that the seed crystal eventually forms. As more droplets fall onto the tip, a single crystal, called a boule, starts to form, and the support is slowly moved downward, allowing the base of the boule to crystallise, while its cap always remains liquid. The boule is formed in the shape of a tapered cylinder, with a diameter broadening away from the base and eventually remaining more or less constant. 
With a constant supply of powder and withdrawal of the support, very long cylindrical boules can be obtained. Once removed from the furnace and allowed to cool, the boule is split along its vertical axis to relieve internal pressure, otherwise the crystal will be prone to fracture when the stalk is broken due to a vertical parting plane. When initially outlining the process, Verneuil specified a number of conditions crucial for good results. These include: a flame temperature that is not higher than necessary for fusion; always keeping the melted product in the same part of the oxyhydrogen flame; and reducing the point of contact between the melted product and support to as small an area as possible. The average commercially produced boule using the process is in diameter and long, weighing about . The process can also be performed with a custom-oriented seed crystal to achieve a specific desired crystallographic orientation. Crystals produced by the Verneuil process are chemically and physically equivalent to their naturally occurring counterparts, and strong magnification is usually required to distinguish between the two. A telltale characteristic is the Verneuil crystal is curved growth lines (curved striae) form, as the cylindrical boule grows upwards in an environment with a high thermal gradient, while the equivalent lines in natural crystals are straight. Another distinguishing feature is the common presence of microscopic gas bubbles formed due to an excess of oxygen in the furnace; imperfections in natural crystals are usually solid impurities. See also Bridgman–Stockbarger method Czochralski method Float-zone silicon Kyropoulos method Laser-heated pedestal growth Micro-pulling-down Shelby Gem Factory References R. T. Liddicoat Jr., Gem, McGraw-Hill AccessScience, January 2002, Page 2. Chemical processes Mineralogy Gemology French inventions Industrial processes Crystals Science and technology in France Methods of crystal growth
Verneuil method
[ "Chemistry", "Materials_science" ]
1,813
[ "Methods of crystal growth", "Chemical processes", "Crystallography", "Crystals", "nan", "Chemical process engineering" ]
4,102,521
https://en.wikipedia.org/wiki/Glycol%20cleavage
Glycol cleavage is a specific type of oxidation in organic chemistry. The carbon–carbon bond in a vicinal diol (glycol) is cleaved and instead the two oxygen atoms become double-bonded to their respective carbon atoms. Depending on the substitution pattern in the diol, these carbonyls will be ketones and/or aldehydes. Glycol cleavage is an important tool for determining the structures of sugars. After cleavage of the glycol, the ketone and aldehyde fragments can be inspected and the location of the former hydroxyl groups ascertained. Reagents Iodine-based reagents such as periodic acid (HIO4) and (diacetoxyiodo)benzene (PhI(OAc)2) are commonly used. Another reagent is lead tetraacetate (Pb(OAc)4). These I- and Pb-based methods are called the Malaprade reaction and Criegee oxidation, respectively. The former is favored for aqueous solutions, the latter for nonaqueous solutions. Cyclic intermediates are invariably invoked. The ring then fragments, with cleavage of the carbon–carbon bond and formation of carbonyl groups. Warm concentrated potassium permanganate (KMnO4) will react with an alkene to form a glycol. Following this dihydroxylation, the KMnO4 can then cleave the glycol to give aldehydes or ketones. The aldehydes will react further with KMnO4, being oxidized to carboxylic acids. Controlling the temperature, the concentration of the reagent and the pH of the solution can keep the reaction from continuing past the formation of the glycol. References External links www.cem.msu.edu Periodate oxidation of polysaccharides Organic redox reactions
Glycol cleavage
[ "Chemistry" ]
397
[ "Organic redox reactions", "Organic reactions" ]
1,511,686
https://en.wikipedia.org/wiki/S%C3%A2nzian%C4%83
Sânziană is the Romanian name for gentle fairies who play an important part in local folklore, also used to designate the Galium verum or Cruciata laevipes flowers. Under the plural form Sânziene, the word designates an annual festival in the fairies' honor. Etymologically, the name comes from the Latin Sancta Diana, the Roman goddess of the hunt and moon, also celebrated in Roman Dacia (ancient Romania). Diana was known to be the virgin goddess and looked after virgins and women. She was one of the three maiden goddesses, Diana, Minerva and Vesta, who swore never to marry. People in the western Carpathian Mountains celebrate the Sânziene holiday annually, on June 24. This is similar to the Swedish Midsummer holiday, and is believed to be a pagan celebration of the summer solstice in June. According to the official position of the Romanian Orthodox Church, the customs actually relate to the celebration of Saint John the Baptist's Nativity, which also happens on June 24. Sânziene rituals The folk practices of Sânziene imply that the most beautiful maidens in the village dress in white and spend all day searching for and picking flowers, of which one MUST be Galium verum (Lady's bedstraw or Yellow bedstraw) which in Romanian is also named "Sânziànă". Using the flowers they picked during the day, the girls braid floral crowns which they wear upon returning to the village at nightfall. There they meet with their beloved and they dance around a bonfire. The crowns are thrown over the houses, and whenever the crown falls, it is said that someone will die in that house; if the crown stays on the roof of the house, then good harvest and wealth will be bestowed upon the owners. As with other bonfire celebrations, jumping over the embers after the bonfire is not raging anymore is done to purify the person and also to bring health. Another folk belief is that during the Sânziene Eve night, the heavens open up, making it the strongest night for magic spells, especially for the love spells. Also it is said that the plants harvested during this night will have tremendous magical powers. It is not a good thing though to be a male and walk at night during Sanziene Eve night, as that is the time when the fairies dance in the air, blessing the crops and bestowing health on people - they do not like to be seen by males, and whoever sees them will be maimed, or the fairies will take their hearing/speech or make them mad. In some areas of the Carpathians, the villagers then light a big wheel of hay from the ceremonial bonfire and push it down a hill. This has been interpreted as a symbol for the setting sun (from the solstice to come and until the midwinter solstice, the days will be getting shorter). In cultural references The consequences of heavens opening on Sânziene are connected by some to paranormal events reported during that period of each year. According to popular beliefs, strange things, both positive and negative, may happen to a person wandering alone on Sânziene night. Strange ethereal activities are believed to happen especially in places such as the Băneasa forest (near the capital of Bucharest) or the Baciu forest (near the city of Cluj-Napoca). Mircea Eliade's novel, Noaptea de Sânziene (translated as The Forbidden Forest), includes references to the folk belief about skies opening at night, as well as to paranormal events happening in the Băneasa Forest. In the form Sânziana ("the sânziană"), the word has also come to be used as a female name. 
It is notably used as such in Vasile Alecsandri's comedy Sânziana şi Pepelea (later an opera by George Stephănescu). The fairy Sânziene, "the fairy of the summer solstice", is described in a colinda (Romanian folk song) as the "sister of the Sun". Moldovan band Zdob şi Zdub recorded a song called Sânziene, which tells the story of a search for one's soulmate throughout a midsummer night festival. See also Diana (mythology) Ileana Cosânzeana Rusalii References Further reading Details about the Sânziene tradition Details about the Sânziene tradition from the National Museum of Romanian History External links Sânziene in Enciclopedia Dacica Sânziene picture and description Sânziene celebrated at the National Museum of Romanian History Fairies Festivals in Romania Saint John's Day June observances Romanian legendary creatures Romanian words and phrases Female legendary creatures Nature spirits Summer solstice
Sânziană
[ "Astronomy" ]
971
[ "Time in astronomy", "Summer solstice" ]
1,512,013
https://en.wikipedia.org/wiki/Wald%27s%20equation
In probability theory, Wald's equation, Wald's identity or Wald's lemma is an important identity that simplifies the calculation of the expected value of the sum of a random number of random quantities. In its simplest form, it relates the expectation of a sum of randomly many finite-mean, independent and identically distributed random variables to the expected number of terms in the sum and the random variables' common expectation under the condition that the number of terms in the sum is independent of the summands. The equation is named after the mathematician Abraham Wald. An identity for the second moment is given by the Blackwell–Girshick equation. Basic version Let be a sequence of real-valued, independent and identically distributed random variables and let be an integer-valued random variable that is independent of the sequence . Suppose that and the have finite expectations. Then Example Roll a six-sided dice. Take the number on the die (call it ) and roll that number of six-sided dice to get the numbers , and add up their values. By Wald's equation, the resulting value on average is General version Let be an infinite sequence of real-valued random variables and let be a nonnegative integer-valued random variable. Assume that: . are all integrable (finite-mean) random variables, . for every natural number , and . the infinite series satisfies Then the random sums are integrable and If, in addition, . all have the same expectation, and . has finite expectation, then Remark: Usually, the name Wald's equation refers to this last equality. Discussion of assumptions Clearly, assumption () is needed to formulate assumption () and Wald's equation. Assumption () controls the amount of dependence allowed between the sequence and the number of terms; see the counterexample below for the necessity. Note that assumption () is satisfied when is a stopping time for a sequence of independent random variables . Assumption () is of more technical nature, implying absolute convergence and therefore allowing arbitrary rearrangement of an infinite series in the proof. If assumption () is satisfied, then assumption () can be strengthened to the simpler condition . there exists a real constant such that for all natural numbers . Indeed, using assumption (), and the last series equals the expectation of  [Proof], which is finite by assumption (). Therefore, () and () imply assumption (). Assume in addition to () and () that . is independent of the sequence and . there exists a constant such that for all natural numbers . Then all the assumptions (), (), () and (), hence also () are satisfied. In particular, the conditions () and () are satisfied if . the random variables all have the same distribution. Note that the random variables of the sequence don't need to be independent. The interesting point is to admit some dependence between the random number of terms and the sequence . A standard version is to assume (), (), () and the existence of a filtration such that . is a stopping time with respect to the filtration, and . and are independent for every . Then () implies that the event is in , hence by () independent of . This implies (), and together with () it implies (). For convenience (see the proof below using the optional stopping theorem) and to specify the relation of the sequence and the filtration , the following additional assumption is often imposed: . the sequence is adapted to the filtration , meaning the is -measurable for every . 
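For the dice example above, Wald's equation gives an expected total of E[N]·E[X₁] = 3.5 × 3.5 = 12.25; a short Monte Carlo sketch confirms that value numerically.

```python
import random

def wald_dice_trial() -> int:
    """Roll one die to get N, then roll N further dice and sum them."""
    n = random.randint(1, 6)
    return sum(random.randint(1, 6) for _ in range(n))

samples = [wald_dice_trial() for _ in range(200_000)]
print(sum(samples) / len(samples))   # close to E[N] * E[X1] = 3.5 * 3.5 = 12.25
```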
Note that () and () together imply that the random variables are independent. Application An application is in actuarial science when considering the total claim amount follows a compound Poisson process within a certain time period, say one year, arising from a random number of individual insurance claims, whose sizes are described by the random variables . Under the above assumptions, Wald's equation can be used to calculate the expected total claim amount when information about the average claim number per year and the average claim size is available. Under stronger assumptions and with more information about the underlying distributions, Panjer's recursion can be used to calculate the distribution of . Examples Example with dependent terms Let be an integrable, -valued random variable, which is independent of the integrable, real-valued random variable with . Define for all . Then assumptions (), (), (), and () with are satisfied, hence also () and (), and Wald's equation applies. If the distribution of is not symmetric, then () does not hold. Note that, when is not almost surely equal to the zero random variable, then () and () cannot hold simultaneously for any filtration , because cannot be independent of itself as is impossible. Example where the number of terms depends on the sequence Let be a sequence of independent, symmetric, and }-valued random variables. For every let be the σ-algebra generated by and define when is the first random variable taking the value . Note that , hence by the ratio test. The assumptions (), () and (), hence () and () with , (), (), and () hold, hence also (), and () and Wald's equation applies. However, () does not hold, because is defined in terms of the sequence . Intuitively, one might expect to have in this example, because the summation stops right after a one, thereby apparently creating a positive bias. However, Wald's equation shows that this intuition is misleading. Counterexamples A counterexample illustrating the necessity of assumption () Consider a sequence of i.i.d. (Independent and identically distributed random variables) random variables, taking each of the two values 0 and 1 with probability  (actually, only is needed in the following). Define . Then is identically equal to zero, hence , but and and therefore Wald's equation does not hold. Indeed, the assumptions (), (), () and () are satisfied, however, the equation in assumption () holds for all except for . A counterexample illustrating the necessity of assumption () Very similar to the second example above, let be a sequence of independent, symmetric random variables, where takes each of the values and with probability . Let be the first such that . Then, as above, has finite expectation, hence assumption () holds. Since for all , assumptions () and () hold. However, since almost surely, Wald's equation cannot hold. Since is a stopping time with respect to the filtration generated by , assumption () holds, see above. Therefore, only assumption () can fail, and indeed, since and therefore for every , it follows that A proof using the optional stopping theorem Assume (), (), (), (), () and (). Using assumption (), define the sequence of random variables Assumption () implies that the conditional expectation of given equals almost surely for every , hence is a martingale with respect to the filtration by assumption (). 
Assumptions (), () and () make sure that we can apply the optional stopping theorem, hence is integrable and Due to assumption (), and due to assumption () this upper bound is integrable. Hence we can add the expectation of to both sides of Equation () and obtain by linearity Remark: Note that this proof does not cover the above example with dependent terms. General proof This proof uses only Lebesgue's monotone and dominated convergence theorems. We prove the statement as given above in three steps. Step 1: Integrability of the random sum We first show that the random sum is integrable. Define the partial sums Since takes its values in and since , it follows that The Lebesgue monotone convergence theorem implies that By the triangle inequality, Using this upper estimate and changing the order of summation (which is permitted because all terms are non-negative), we obtain where the second inequality follows using the monotone convergence theorem. By assumption (), the infinite sequence on the right-hand side of () converges, hence is integrable. Step 2: Integrability of the random sum We now show that the random sum is integrable. Define the partial sums of real numbers. Since takes its values in and since , it follows that As in step 1, the Lebesgue monotone convergence theorem implies that By the triangle inequality, Using this upper estimate and changing the order of summation (which is permitted because all terms are non-negative), we obtain By assumption (), Substituting this into () yields which is finite by assumption (), hence is integrable. Step 3: Proof of the identity To prove Wald's equation, we essentially go through the same steps again without the absolute value, making use of the integrability of the random sums and in order to show that they have the same expectation. Using the dominated convergence theorem with dominating random variable and the definition of the partial sum given in (), it follows that Due to the absolute convergence proved in () above using assumption (), we may rearrange the summation and obtain that where we used assumption () and the dominated convergence theorem with dominating random variable for the second equality. Due to assumption () and the σ-additivity of the probability measure, Substituting this result into the previous equation, rearranging the summation (which is permitted due to absolute convergence, see () above), using linearity of expectation and the definition of the partial sum of expectations given in (), By using dominated convergence again with dominating random variable , If assumptions () and () are satisfied, then by linearity of expectation, This completes the proof. Further generalizations Wald's equation can be transferred to -valued random variables by applying the one-dimensional version to every component. If are Bochner-integrable random variables taking values in a Banach space, then the general proof above can be adjusted accordingly. See also Lorden's inequality Wald's martingale Spitzer's formula Notes References External links Probability theory Articles containing proofs Actuarial science
Wald's equation
[ "Mathematics" ]
2,136
[ "Articles containing proofs", "Actuarial science", "Applied mathematics" ]
1,512,119
https://en.wikipedia.org/wiki/Prime%20constant
The prime constant is the real number $\rho$ whose $n$th binary digit is 1 if $n$ is prime and 0 if $n$ is composite or 1. In other words, $\rho$ is the number whose binary expansion corresponds to the indicator function of the set of prime numbers. That is, $\rho = \sum_{p} \frac{1}{2^{p}} = \sum_{n=1}^{\infty} \frac{\chi_{\mathbb{P}}(n)}{2^{n}}$, where $p$ indicates a prime and $\chi_{\mathbb{P}}$ is the characteristic function of the set $\mathbb{P}$ of prime numbers. The beginning of the decimal expansion of ρ is: $\rho \approx 0.4146825098511116\ldots$ The beginning of the binary expansion is: $\rho \approx 0.011010100010100010100010\ldots_2$ Irrationality The number $\rho$ is irrational. Proof by contradiction Suppose $\rho$ were rational. Denote the $k$th digit of the binary expansion of $\rho$ by $r_k$. Then, since $\rho$ is assumed rational, its binary expansion is eventually periodic, and so there exist positive integers $N$ and $k$ such that $r_n = r_{n+ik}$ for all $n > N$ and all $i \in \mathbb{N}$. Since there are an infinite number of primes, we may choose a prime $p > N$. By definition we see that $r_p = 1$. As noted, we have $r_p = r_{p+ik}$ for all $i \in \mathbb{N}$. Now consider the case $i = p$. We have $r_{p + p \cdot k} = r_{p(k+1)} = 0$, since $p(k+1)$ is composite because $k + 1 \ge 2$. Since $r_p \neq r_{p+pk}$ we see that $\rho$ is irrational. References External links Irrational numbers Prime numbers Articles containing proofs Mathematical constants
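A short numerical sketch of the definition: summing 2⁻ᵖ over the primes below an arbitrary cutoff reproduces the decimal value quoted above, and listing the indicator bits reproduces the binary expansion.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# Partial sum of 2**-p over primes p < 64; the tail is below double precision.
rho = sum(2.0 ** -p for p in range(2, 64) if is_prime(p))
print(rho)                                   # ≈ 0.414682509851111...

# Binary digits of the prime constant: 1 exactly at the prime positions.
bits = "".join("1" if is_prime(n) else "0" for n in range(1, 32))
print("0." + bits)                           # 0.0110101000101000101000100000101
```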
Prime constant
[ "Mathematics" ]
201
[ "Irrational numbers", "Prime numbers", "Mathematical objects", "nan", "Articles containing proofs", "Mathematical constants", "Numbers", "Number theory" ]
1,512,189
https://en.wikipedia.org/wiki/Hot%20springs%20in%20Taiwan
Taiwan is part of the collision zone between the Yangtze Plate and Philippine Sea Plate. Eastern and southern Taiwan are the northern end of the Philippine Mobile Belt. Located next to an oceanic trench and volcanic system in a tectonic collision zone, Taiwan has evolved a unique environment that produces high-temperature springs with crystal-clear water, usually both clean and safe to drink. These hot springs are commonly used for spas and resorts. Soaking in hot springs became popular in Taiwan around 1895 during the 50-year-long colonial rule by Japan. History The first mention of Taiwan's hot springs came from a 1697 manuscript, but they were not developed until 1893, when a German businessman discovered Beitou and later established a small local spa. Under Japanese rule, the government constantly promoted and further enhanced the natural hot springs. Japanese rule brought with it a rich onsen culture of spring soaking, which had a great influence on Taiwan. In March 1896, a businessman from Osaka, Japan opened Taiwan's first hot spring hotel. He not only heralded a new era of hot spring bathing in Beitou, but also paved the way for a whole new hot spring culture for Taiwan. In the Japanese onsen culture, hot springs are claimed to offer many health benefits. As well as raising energy levels, the minerals in the water are commonly suggested to help treat chronic fatigue, eczema or arthritis. During Japanese rule, the four major hot springs in Taiwan were in modern-day Beitou, Yangmingshan, Guanziling and Sichongxi. However, under Republic of China administration starting from 1945, the hot spring culture in Taiwan gradually lost momentum. It was not until 1999 that the authorities again started large-scale promotion of Taiwan's hot springs, setting off a renewed hot spring fever. In recent years, hot spring spas and resorts in Taiwan have grown in popularity. With the support of the government, hot springs have become not only an industry but once again a part of Taiwanese culture. Taiwan has one of the highest concentrations (more than 100 hot springs) and greatest varieties of thermal springs in the world, ranging from hot springs to cold springs, mud springs, and seabed hot springs. Geology Taiwan is located on a faultline where several continental plates meet; the Philippine Sea Plate and the Eurasian Plate intersect in the Circum-Pacific seismic zone. Types of springs Sodium carbonate springs Sulfur springs Ferrous springs Sodium hydrogen carbonate springs Mud springs (spring water is alkaline, contains iodine, is salty and has a light sulfuric smell) Salt or hydrogen sulfide springs Partial list of hot springs in Taiwan Jiaoxi Dakeng Beitou - is considered the "hot spring capital of Taiwan". Zhiben Tai-an - is an odorless and colorless alkaline carbonate hot spring. Yangmingshan Guguan Guanziling - is known for its mud baths. Sichongxi Wulai Ruisui, Hualien - this hot spring has a high iron content; consequently, the water has a brownish tint. Zhaori See also Onsen Culture of Taiwan List of hot springs References External links Taiwanzen, website about Taiwan and hot springs Taiwanese Hot Springs Taiwan Journal Hot spring tour, Tourism Bureau, R.O.C. Tourism in Taiwan Geothermal areas Culture of Taiwan Balneotherapy Geology of Taiwan Hydrology Spa towns Thermal treatment
Hot springs in Taiwan
[ "Chemistry", "Engineering", "Environmental_science" ]
684
[ "Hydrology", "Environmental engineering" ]
1,512,214
https://en.wikipedia.org/wiki/Germplasm
Germplasm refers to genetic resources such as seeds, tissues, and DNA sequences that are maintained for the purpose of animal and plant breeding, conservation efforts, agriculture, and other research uses. These resources may take the form of seed collections stored in seed banks, trees growing in nurseries, animal breeding lines maintained in animal breeding programs or gene banks. Germplasm collections can range from collections of wild species to elite, domesticated breeding lines that have undergone extensive human selection. Germplasm collection is important for the maintenance of biological diversity, food security, and conservation efforts. In the United States, germplasm resources are regulated by the National Genetic Resources Program (NGRP), created by the U.S. Congress in 1990. In addition, the web server of the Germplasm Resources Information Network (GRIN) provides information about germplasm as it pertains to agricultural production. Regulation In the United States, germplasm resources are regulated by the National Genetic Resources Program (NGRP), created by the U.S. Congress in 1990. In addition, the web server of the Germplasm Resources Information Network (GRIN) provides information about germplasm as it pertains to agricultural production. Specifically for plants, there is the U.S. National Plant Germplasm System (NPGS), which holds more than 450,000 accessions covering 10,000 species of the 85 most commonly grown crops. Many accessions held are international species, and NPGS distributes germplasm resources internationally. As genetic information moves increasingly online, germplasm information is transitioning from physical repositories (seed banks, cryopreservation facilities) to online platforms containing genetic sequences. In addition, there are issues surrounding how germplasm information is collected and where it is shared. Historically, some germplasm has been collected in developing countries and shared with researchers, who then sold the donor country modified versions of the original germplasm; the lack of compensation to donor countries remains a concern. Storage methods Effective germplasm work includes the collection, storage, analysis, documentation, and exchange of genetic information. This information can be stored as accessions of DNA sequence information, or as live cells and tissues that can be preserved. However, only about 5% of current germplasm resources are living samples. Live cells and tissues can be stored ex situ in seed banks or botanic gardens, or through cryopreservation. Cryopreservation is the process of storing germplasm at very low temperatures, such as in liquid nitrogen. This process ensures that cells do not degrade and keeps the germplasm intact. In addition, resources can be stored in situ, in the natural area where the species was found. Conservation efforts Humans began to domesticate plant species for food, seed, and vegetation about 10,000 years ago. Since then, agriculture has been a staple of human civilizations, and plant breeding has broadened gene pools and increased genetic diversity. Germplasm resources allow more genetic assets to be used and integrated into agricultural systems for plant breeding and the development of new varieties. In addition, researchers are looking at crop wild relatives (CWRs) that could expand the gene pools of crop species and provide greater ability to select for target traits. 
Furthermore, a biodiversity crisis driven by human activities and industrialization is currently under way. Many plants and animals have gone extinct due to habitat loss, habitat degradation by contaminants, and climate change. Germplasm resources are a way to conserve pre-existing biological diversity and possibly to regenerate habitats. Storing this genetic information provides data about which species are present, including plants, animals, bacteria, and fungi, and about what a complete ecosystem in a given area looks like. See also Animal genetic resources for food and agriculture Conservation biology Cryoconservation of animal genetic resources Forest genetic resources International Treaty on Plant Genetic Resources for Food and Agriculture Plant genetic resources Seed saving Germ plasm References Day-Rubenstein, K. and Heisey, P. 2003. Plant Genetic Resources: New Rules for International Exchange. 63 p. Economic Research Service. Global Resources and Productivity: Questions and Answers. 174 p. SeedQuest Primer: Germplasm Resources. External links USDA-ARS Germplasm Resources Information Network (GRIN) Bioversity International Bioversity International: Germplasm Collection Bioversity International: Germplasm Databases Bioversity International: Germplasm Documentation - overview Bioversity International: Germplasm Health DAD-IS: Domestic Animal Diversity Information System Developmental biology Conservation biology Food security Biorepositories
Germplasm
[ "Biology" ]
999
[ "Behavior", "Developmental biology", "Reproduction", "Bioinformatics", "Conservation biology", "Biorepositories" ]
1,512,566
https://en.wikipedia.org/wiki/BS%208110
BS 8110 is a withdrawn British Standard for the design and construction of reinforced and prestressed concrete structures. It is based on limit state design principles. Although it was used for most civil engineering and building structures, bridges and water-retaining structures are covered by separate standards (BS 5400 and BS 8007). The relevant committee of the British Standards Institution considers that there is no need to support BS 8110. In 2004, BS 8110 was replaced by EN 1992 (Eurocode 2 or EC2). In general, EC2, used in conjunction with the National Annex, is not wildly different from BS 8110 in terms of the design approach. It gives similar answers and offers scope for more economical structures. Overall, EC2 is less prescriptive and its scope is more extensive than that of BS 8110, for example in permitting higher concrete strengths. In this sense the new code permits designs not previously permitted in the UK, and this gives designers the opportunity to derive benefit from the considerable advances in concrete technology over recent years. References 08110 Reinforced concrete Structural engineering standards
BS 8110
[ "Engineering" ]
216
[ "Structural engineering", "Structural engineering standards" ]
1,512,655
https://en.wikipedia.org/wiki/Hagen%20number
The Hagen number (Hg) is a dimensionless number used in forced flow calculations. It is the forced flow equivalent of the Grashof number and was named after the German hydraulic engineer G. H. L. Hagen. Definition It is defined as $\mathrm{Hg} = -\frac{1}{\rho}\frac{\mathrm{d}p}{\mathrm{d}x}\frac{L^3}{\nu^2}$ where: $\frac{\mathrm{d}p}{\mathrm{d}x}$ is the pressure gradient, L is a characteristic length, ρ is the fluid density, ν is the kinematic viscosity. For natural convection $\frac{\mathrm{d}p}{\mathrm{d}x} = -\rho g \beta \Delta T$, and so the Hagen number then coincides with the Grashof number. Hagen number vs. Bejan number Awad presented a comparison of the Hagen number with the Bejan number. Although their physical meaning is not the same, because the former represents the dimensionless pressure gradient while the latter represents the dimensionless pressure drop, the Hagen number coincides with the Bejan number in cases where the characteristic length (l) is equal to the flow length (L). Awad also introduced a new expression of the Bejan number for Hagen-Poiseuille flow, and extended the Hagen number to a general form. For the case of the Reynolds analogy (Pr = Sc = 1), all three definitions of the Hagen number are the same. The general form of the Hagen number is $\mathrm{Hg} = -\frac{1}{\rho}\frac{\mathrm{d}p}{\mathrm{d}x}\frac{L^3}{\delta^2}$ where $\delta$ is the corresponding diffusivity of the process in consideration References Dimensionless numbers of physics
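As a numerical illustration of the definition reconstructed above, the sketch below evaluates Hg = -(1/ρ)(dp/dx)(L³/ν²) for representative values. The function name and the input values are assumptions chosen only to show how the quantities combine into a dimensionless result; they are not from the article.

```python
def hagen_number(dp_dx: float, length: float, density: float, kinematic_viscosity: float) -> float:
    """Hagen number Hg = -(1/rho) * (dp/dx) * L**3 / nu**2 (dimensionless).

    dp_dx               -- streamwise pressure gradient, Pa/m
    length              -- characteristic length L, m
    density             -- fluid density rho, kg/m^3
    kinematic_viscosity -- nu, m^2/s
    """
    return -(1.0 / density) * dp_dx * length**3 / kinematic_viscosity**2

if __name__ == "__main__":
    # Water-like properties with an assumed favourable pressure gradient of -100 Pa/m.
    hg = hagen_number(dp_dx=-100.0, length=0.05, density=1000.0, kinematic_viscosity=1.0e-6)
    print(f"Hg = {hg:.3e}")  # positive, dimensionless; about 1.25e7 for these inputs
```

The leading minus sign makes Hg positive for a favourable (negative) pressure gradient, which is what allows it to reduce to the Grashof number in the natural convection case noted above.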
Hagen number
[ "Mathematics" ]
265
[ "Mathematical objects", "Numbers", "Number stubs" ]
1,512,690
https://en.wikipedia.org/wiki/A%20Naturalist%20in%20Indian%20Seas
A Naturalist in Indian Seas, or, Four Years with the Royal Indian Marine Survey Ship Investigator is a 1902 publication by Alfred William Alcock, a British naturalist and carcinologist. The book is mostly a narrative describing the Investigator's journey through areas of the Indian Ocean, such as the Laccadive Sea, the Bay of Bengal and the Andaman Sea. It also details the history of the Investigator, as well as the marine biology of the Indian Ocean. The book is considered a classic in natural history travel, and in 1903, The Geographical Journal described it as "a most fascinating and complete popular account of the deep-sea fauna of the Indian seas. The book is one of intense interest throughout to a zoologist". In its original edition, A Naturalist in Indian Seas was 328 pages long and published in 8 volumes in London. References External links A naturalist in Indian seas; or, Four years with the Royal Indian marine survey ship "Investigator,", Alfred Alcock, Marine Survey of India. J. Murray, 1902. 1902 non-fiction books Books about India Indian Ocean Marine biology British travel books
A Naturalist in Indian Seas
[ "Biology" ]
227
[ "Marine biology" ]
1,512,695
https://en.wikipedia.org/wiki/Malemute
Malemute is the designation of an American sounding rocket family. The original Malemute had a maximum flight altitude of 165 km, a liftoff thrust of 57.00 kN, a total mass of 100 kg, a diameter of 0.41 m and a total length of 2.40 m. It was a single-stage vehicle powered by a Thiokol Malemute TU-758 engine, operated by Sandia National Laboratories (SNL). It was used for conducting upper-atmosphere research in various missions to study phenomena such as auroras, the ionosphere and cosmic radiation. Over the years more advanced versions were developed. Versions Improved versions exist, with the addition of a second stage and the use of different first-stage engines. Launches Malemute rockets were launched from Andøya Space Center, ESRANGE, Kauai Test Facility, Tonopah Test Range and White Sands Missile Range. References Sounding rockets of the United States Meteorological instrumentation and equipment
Malemute
[ "Astronomy", "Technology", "Engineering" ]
193
[ "Rocketry stubs", "Meteorological instrumentation and equipment", "Astronomy stubs", "Measuring instruments" ]
1,512,810
https://en.wikipedia.org/wiki/Landmark%20%28hotel%20and%20casino%29
The Landmark was a hotel and casino located in Winchester, Nevada, east of the Las Vegas Strip and across from the Las Vegas Convention Center. Frank Caroll, the project's original owner, purchased the property in 1961. Fremont Construction began work on the tower that September, while Caroll opened the adjacent Landmark Plaza shopping center and Landmark Apartments by the end of the year. The tower's completion was expected for early 1963, but because of a lack of financing, construction was stopped in 1962, with the resort approximately 80 percent complete. The topped-off tower remained the tallest building in Nevada until the completion of the International Hotel across the street in 1969. In 1966, the Central Teamsters Pension Fund provided a $5.5 million construction loan to finish the project, with ownership transferred to a group of investors that included Caroll and his wife. The Landmark's completion and opening were delayed several more times. In April 1968, Caroll withdrew his request for a gaming license after he was charged with assault and battery against the project's interior designer. The Landmark was put up for sale that month. Billionaire Howard Hughes, through Hughes Tool Company, purchased the Landmark in 1969 at a cost of $17.3 million. Hughes spent approximately $3 million to add his own touches to the resort before opening it on July 1, 1969, with 400 slot machines and 503 hotel rooms. In addition to a ground-floor casino, the resort also had a second, smaller casino on the 29th floor; it was the first high-rise casino in Nevada. Aside from the second casino, the five-story cupola dome at the top of the tower also featured restaurants, lounges, and a night club. During the 1970s, the Landmark became known for its performances by country music artists. The resort also played host to celebrities such as Danny Thomas and Frank Sinatra. However, the resort suffered financial problems after its opening and underwent several ownership changes, none of which resulted in success. The Landmark entered bankruptcy in 1985, and ultimately closed on August 8, 1990, unable to compete with new megaresorts. The Las Vegas Convention and Visitors Authority purchased the property in September 1993, and demolished the resort in November 1995, to add a 2,200-space parking lot for its convention center. In 2019, work was underway on a convention center expansion which includes the former site of the Landmark. The Las Vegas Convention Center's West Hall expansion opened on the site in June 2021. History Frank Caroll, also known as Frank Caracciolo, was a building developer from Kansas City. In 1960, he and his wife Susan decided to construct a hotel-casino and shopping center in Las Vegas. Frank Caroll received a gaming license that year. In 1961, the Carolls purchased of land at the northwest corner of Convention Center Drive and Paradise Road in Winchester, Nevada, approximately half a mile east of the Las Vegas Strip and across from the Las Vegas Convention Center. Aside from a gas station, the property was vacant. Construction (1961–1968) Commencement The Landmark was initially planned as a 14-story hotel with a casino, although the floor count increased as the project progressed. Fremont Construction, owned by Louis P. Scherer of Redlands, California, began construction of the tower at the end of September 1961, under a $1.5 million contract. Frank Caroll's company, Caroll Construction Company, also worked on the tower. 
At the start of construction, the tower was to include 20 stories, while completion was planned for early 1963. The tower was built on a five-foot-thick base of concrete and steel, measuring 80 feet in diameter and resting on a base of caliche that descended 30 feet into the ground. Consolidated Construction Company was the concrete subcontractor for the tower. By December 1961, Caroll had opened the two-story Landmark Plaza shopping center, built out in an L-shape at the base of the tower. The Landmark Apartments, with 120 units, were also built near the tower and operational by the end of 1961. In 1962, a bar known as Shannon's Saloon and a western music radio station, KVEG, began operating in the Landmark Plaza. In addition to studios, KVEG also had its offices in the shopping center. By February 1962, the tower was planned to include 31 floors, making it the tallest building in Nevada. While plans for a separate hotel structure were being made, work began on the tower by pouring concrete on a continuous 24-hour schedule. The concrete pour was done with a slip forming method. With 21 floors expected to be added to the tower over a 12-day period, it was expected to reach the 24th floor by the end of the month. In March 1962, at the request of Caroll, Clark County Commissioners removed a restriction which specified that gaming licenses could only be issued for ground-level casinos, as Caroll wanted to open a casino on the second floor of the Landmark's shopping center. That month, Caroll received a $450,000 loan from Appliance Buyers Credit Corporation (ABCC), a subsidiary of RCA-Whirlpool. Construction had reached the 26th floor by the end of April 1962. Upon completion of the floor, work was to begin on the tower's bubble dome. By June 1962, ABCC loaned an additional $300,000 to Caroll, who reached his $3 million loan limit with the company. Caroll ultimately owed ABCC a total of $3.5 million. In August 1962, the Landmark tower was designated as a civilian fallout shelter, with the capacity to hold 3,500 people after its completion. That month, work was underway on the steel framework base for the tower's glass bubble dome. By September 1962, the Landmark tower was nearing completion and had become the tallest building in Las Vegas and the state, being visible from 20 miles away. By that time, many stores in the Landmark Plaza had closed due to falling debris that included welding sparks, steel, tools, rivets, and cement. A construction delay occurred in September 1962, when shipments of steel for the tower's dome were deemed inadequate and crews had to wait for new shipments. Construction was progressing rapidly on the tower's dome during October 1962, with steel and concrete still being added to the tower. Completion was still scheduled for early 1963. The Aluminium Division of Apex Steel Corporation Limited was contracted to install a $40,000 aluminum undershine on the tower's dome, to provide a maintenance-free and clean-looking appearance for viewers on the ground. Crews used scaffolding and hoists to reach the area where aluminum sheets needed to be placed. Each day, it took crews 18 minutes to be lifted up. Due to delays arising from strong winds, it took crews two months for the aluminium to be attached. Delay In December 1962, construction of the tower was stopped when ABCC denied further funding and alleged that the Carolls had defaulted on payments. 
The 31-story tower had been topped off and the resort was approximately 80 percent complete, with $5 million already spent on the project. The tower's planned opening was delayed until April 1963, but it did not occur as scheduled. In May 1963, ABCC was planning a sale of the apartments, shopping center, and unfinished tower for the following month. The Carolls sought to halt the sale, and filed a $2.1 million damage suit against ABCC, alleging that the company stopped construction and refused to pay the contractors. An injunction against foreclosure was granted in June 1963, but was dissolved the following year. In October 1964, a sale of the tower was approved for later that month, after being requested by ABCC, which was still owed $3.5 million by Landmark Plaza Corporation. Up to that time, the tower had been appraised several times and was valued between $8 million and $9 million. Ownership subsequently changed, as did the resort's design plans. In August 1965, Maury Friedman was working on a deal with RCA Victor to convert the Landmark's tower and apartment buildings into office space. By the following month, Inter-Nation Tower, Inc. – a Beverly Hills-based corporation – was negotiating with RCA-Whirlpool to develop the tower and adjacent land as an international market place, an idea that was supported by local retailers and resorts. In December 1965, architect Gerald Moffitt said the Landmark's design had gone through many revisions and that his design plans had been impounded by a court; a spokesman said there were no plans to resume construction in the near future. It was estimated that an additional six months were needed to complete the tower. The unfinished tower became an eyesore for visitors to the nearby convention center. During its vacancy, people noted that the building appeared to be tilted, similar to the Leaning Tower of Pisa; experts stated that this was an illusion caused when the building was viewed with nearby power poles, which were tilted rather than the building itself. Local residents nicknamed it the "Leaning Tower of Plaza", the "Leaning Tower of Las Vegas", and "Frank's Folly." Moffitt said, "It doesn't tilt. There is only three-eights of an inch difference in diameter from top to bottom." In May 1966, early negotiations were being held with a prospective buyer of the Landmark. Resumption In July 1966, new design plans were filed with the county for the completion of the tower. Scherer planned to acquire additional property for use as a parking lot to accommodate the redesigned project. In August 1966, the Central Teamsters Pension Fund provided a $5.5 million construction loan for the project. By that time, ownership had been transferred to Plaza Tower, Inc., made up of several investors, including the Carolls and Scherer, whose construction company was awarded a $2.5 million contract to finish the Landmark tower. Because of legal problems involved with the project, the acquisition of title required over 5,000 hours of legal work and the settlement of more than 40 lawsuits. Construction was underway again in early September 1966, with completion expected in early 1967. The shops and taverns in the Landmark Plaza were closed, and the shopping center and gas station were demolished, so the land around the tower could be used to construct a casino, a hotel lobby, offices, and new shops. The adjacent Landmark apartments were to be converted into hotel rooms for the new resort. 
In November 1966, Caroll planned to install two slot machines inside the Landmark Coffee Shop, which sold food to construction workers from inside a temporary structure that was to become the site of a permanent building eventually. Caroll's plans were denied as his gaming license did not apply to the coffee shop. At the time, Caroll was also accused by sheriff Ralph Lamb of being uncooperative with police officers who were searching for a hoodlum at the Landmark Apartments. The Landmark had been scheduled to open on September 15, 1967, but its opening was further delayed because of construction problems. A new opening date of November 15 was announced, with an official grand opening to be held on December 31, 1967. In early November 1967, Scherer was awarded a $2.2 million contract for the final construction phase of the Landmark. Construction crews worked 24 hours a day for each day of the week during the final phase to have the 650-seat dinner showroom theater ready for the planned New Year's Eve opening. Also included in the final phase were clothing and jewelry shops, as well as a recreation area with swimming pools and a 20-foot waterfall. By the time of its planned New Year's Eve opening, the tower was nearly complete, with an opening now scheduled for mid-January 1968. Two groups – Plaza Tower Inc., the property's landlord group; and Plaza Tower Operating Corporation, the casino operating group – submitted a request for a gaming license to the Nevada Gaming Control Board, which investigates licensees and top casino employees prior to issuing gaming licenses. The Landmark's opening did not occur as scheduled. During February and March 1968, the Landmark was declared as being completed, although it was stated the following year that some construction work remained unfinished. At the time of its stated completion in 1968, a total of 200,000 hours had been spent working on the project, which used 100,000 yards of concrete and 100 tons of steel. The tower occupied of the property, and remained as the tallest building in the state. Further developments (1968–1969) Gaming license In February 1968, an updated list of top casino employees was submitted to the gaming control board, which had up to 90 days to make a decision regarding the issuance of a gaming license. An opening date of mid-April 1968 was considered possible. In March 1968, the Nevada Gaming Control Board recommended against the issuance of a gaming license due to "inadequate financial capabilities and resources of the operating corporation and of its principal investor", referring to Caroll. However, the Nevada Gaming Commission had the Gaming Control Board reevaluate the license application. On April 5, 1968, the Las Vegas media was given a tour of the Landmark. During the event, Caroll beat the Landmark's interior designer, Leonard Edward England, for allegedly flirting with Caroll's wife. Caroll was arrested on April 17, 1968, on charges of assault and battery against England. On April 22, 1968, Caroll withdrew his request for a gaming license, a decision that was approved two days later. The company then planned to receive new financing and to eventually submit a new gaming application. Approximately 600 people were expected to be employed at the Landmark upon its opening. The Landmark was put up for sale in April 1968, and the charges against Caroll were dropped two months later on the condition that he not renew his gaming license application. 
Financial problems In May 1968, the Teamsters Pension Fund filed a notice of breach on the trust deed, alleging that Caroll, Plaza Tower Inc. and Plaza Tower Operating had been defaulting on loan payments since October 1967. In late August 1968, the Las Vegas-based Supreme Mattress Company filed a lawsuit stating that it had only received $4,250 in payments for $25,505 worth of bedding material that was sold to the Landmark in December 1967. On August 29, 1968, a joint petition was filed to declare the Landmark bankrupt. The petition was filed by Vegas Valley Electric, Inc., a plumbing contractor, and Landmark architects George Tate and Thomas Dobrusky. By that time, the Teamsters Union Pension Fund agreed to delay its foreclosure until the property was sold. Simultaneously, Sylvania Electric Company had intended to foreclose on the property because of an unpaid $3.7 million bill relating to electronic equipment installed in the Landmark. The joint petition prevented Sylvania from taking over ownership of the property. Plane crash On the night of August 2, 1968, Everett Wayne Shaw, a 39-year-old mechanic depressed by the break-up of his month-long marriage, stole a Cessna 180 plane as part of an apparent suicide attempt. Shaw flew the plane toward the Landmark tower and pulled up just before hitting it. The plane brushed the top of the tower before crashing into the Las Vegas Convention Center across the street, approximately away. Shaw was killed in the crash, which did not harm anyone else. Plane debris was found on the Landmark's roof and at its base, but the crash was not believed to have caused any damage to the building. Sale negotiations and Howard Hughes In July 1968, there were five firms interested in purchasing the Landmark, which was expected to sell for $16 million to $17 million. One of the firms, Olla Corporation, withdrew consideration of a purchase later that month, while an announcement of the resort's sale was expected within several days. Multiple companies made purchase offers that were ultimately rejected, including Rosco Industries Inc., based in Los Angeles. On October 12, 1968, Caroll denied a report that the Landmark would be leased to Royal Inns of America, Inc. and operated without a casino. At the time, negotiations were underway with three corporations interested in purchasing the resort. On October 23, 1968, billionaire Howard Hughes reached an agreement to purchase the Landmark through Hughes Tool Company for $17.3 million, after denying reports earlier in the year that he was interested in purchasing the project. As part of the sale agreement, Hughes' Hotel Properties, Inc. would accept responsibility for approximately $8.9 million owed to the Teamster Union, as well as approximately $5.9 million in other debts and a balance of $2.4 million to Plaza Tower, Inc. At the time of the agreement, Hughes also owned five other hotel-casinos in Las Vegas. The United States Department of Justice launched an antitrust investigation into Hughes' proposed purchase, after previously investigating his attempt to purchase the Stardust Resort and Casino. As part of the investigation, the Department of Justice tried to determine whether there were other prospective buyers for the Landmark. By December 1968, negotiations were underway with several interested firms, including a $20 million offer from Tanger Industries, a holding company based in El Monte, California. 
Hughes purchase and opening preparations On January 17, 1969, the Department of Justice approved Hughes' plan to purchase the Landmark as his sixth Las Vegas resort. Later that month, a $1.5 million lawsuit was filed against Hughes Tool Company by Pennsylvania resident James U. Meiler and New York brokerage firm John R. Roake and Son, Inc. Meiler and the brokerage firm stated that they were entitled to a $500,000 brokerage fee for previously arranging a sale of the Landmark to Republic Investors Holding Company, before Hughes Tool Company agreed to purchase it. The lawsuit alleged that Hughes Tool Company "purposely and intentionally caused a restraining of interstate commerce". At the end of January 1969, Hughes spokesmen stated that some construction on the resort was never finished; that some maintenance systems had not yet been installed; and that some repairs were needed. Hughes also planned to have some of the hotel rooms refurbished. Because of the additional work, the resort was not expected to open until at least July 1, 1969. Approximately 1,000 to 1,100 people were expected to be employed at the Landmark. The Landmark was the only casino that Hughes had taken over before it was opened. As a result, Hughes was heavily involved in details regarding the project. Hughes spent approximately $3 million to give the interior a lavish design and to add other touches to the resort, while the exterior of the Landmark buildings was left unchanged. In March 1969, Hughes applied for approval to operate the Landmark's gambling operations, with a tentative opening date of July 1, 1969. Hughes planned to operate the casino through his Nevada company, Hughes Properties Inc., which was overseen by Hughes executive Edward H. Nigro. Hughes planned for the resort to include 26 table games and 401 slot machines. Hughes' purchase of the Landmark was not complete at that time, and his representatives stated that the sale would not be completed unless gambling and liquor licenses were issued by the state. In April 1969, Hughes received approval from the Gaming Control Board and from the state. Hughes planned to personally oversee planning for the Landmark's grand opening; Robert Maheu, who had worked for Hughes since the 1950s, said "I knew from that point on that I was in trouble. He was completely incapable of making decisions." Hughes and Maheu never met each other in person due to Hughes' reclusive lifestyle. Instead, they communicated by telephone and through written messages. For months, they had intense arguments regarding the Landmark's opening date. Maheu believed the Landmark should open on July 1, 1969, but Hughes did not want to commit to an exact date for various reasons. Across the street from the Landmark, Kirk Kerkorian was planning to open his International Hotel on July 2, 1969. Hughes had wanted the Landmark's grand opening event to be better than Kerkorian's, but was concerned that the opening night would not go as planned. Hughes also did not want the opening date to be publicly announced too soon in the event that it should be delayed; Hughes wrote to Maheu: "With my reputation for unreliability in the keeping of engagements, I dont [sic] want this event announced until the date is absolutely firmly established." Additionally, Hughes wrote to Maheu: "I would hate to see the Landmark open on the 1st of July and then watch the International open a few days later and make the Landmark opening look like small potatoes by comparison." 
Maheu became concerned, as it was difficult to plan the grand opening without knowing the date. As the tentative opening date approached, Hughes became concerned about other events scheduled for July 1969 – such as the Apollo 11 Moon landing – which might distract from the publicity of the Landmark's opening. By mid-June 1969, Hughes had still not given a definite opening date, which was still tentatively scheduled for July 1, although Hughes had wanted the Landmark to open sometime after the International Hotel. Weeks before the tentative opening, Hughes obsessively made repeated changes to the guest list for the resort's opening night. Regarding who should be invited, Hughes had complex specifications for Maheu to follow. Maheu ultimately had to decide the guest list himself. On June 16, 1969, Sun Realty filed a claim against Plaza Tower, Inc., thus delaying Hughes' purchase of the Landmark and threatening its planned opening. Sun Realty alleged that it was owed a $500,000 finder's fee for locating Hughes as a buyer. The case was dismissed on June 25, 1969. On June 30, 1969, Sun Realty appealed the decision but was denied that day as it was unable to post a bond that would pay the $5.8 million worth of claims, filed by approximately 120 other creditors after Plaza Towers Inc. entered bankruptcy. Hughes' $17.3 million acquisition of the Landmark, through Hughes Tool Company, was completed on July 1, 1969, a day after Hughes issued checks to three different entities to complete the purchase: $2.5 million to Plaza Towers; $5.8 million to fully pay unsecured creditors; and $9 million to pay off the Teamsters Union. Opening and operation (1969–1990) The Landmark opened on the night of July 1, 1969, a day before the International Hotel. The resort was first unveiled to 480 VIP guests prior to the public opening, which was scheduled for after 9:00 p.m. Apollo 10 astronauts Thomas P. Stafford and Eugene Cernan attended the grand opening, and were the first people to enter the new resort. Other guests included Cary Grant, Dean Martin, Jimmy Webb, Phil Harris, Tony Bennett, Sammy Cahn, Steve and Eydie, and Wilt Chamberlain. Nevada governor Paul Laxalt, as well as senators Alan Bible and Howard Cannon, were also at the opening. Three members of the Los Angeles Rams were also in attendance: Jack Snow, Lamar Lundy, and Roger Brown. Local, national and international media were also present for the grand opening, which was described by the Las Vegas Sun as resembling a Hollywood premiere. A closed-circuit television camera filmed the festivities in the Landmark on opening night, with the footage being shown live to guests at Hughes' other hotels, the Sands and the Frontier. Hughes – who lived in a secluded penthouse at his nearby Desert Inn hotel-casino – did not attend the grand opening. For opening night, comedian Danny Thomas was the first to perform in the Landmark's theater-restaurant showroom. Hughes had earlier suggested a Rat Pack reunion or a Bob Hope-Bing Crosby reunion as the opening act, both of which were considered unlikely to happen. Television advertisements for the resort stated: "In France, it's the Eiffel Tower. In India, it's the Taj Mahal. In Las Vegas, it's the Landmark." Dick Parker, executive vice president for the Landmark, had stated during the previous year that the International and the nearby Las Vegas Convention Center would not harm the Landmark's business. 
The Landmark reportedly lost $5 million in its first week of operations, and despite its close proximity to the convention center, the resort failed to make a profit during the subsequent years of its operation. In October 1969, Sun Realty filed a damages lawsuit against Hughes Tool Company and Plaza Tower, Inc, alleging that the two companies conspired to avoid paying the realty company its $500,000 finder's fee. Aside from the finder's fee, Sun Realty also sought an additional $5 million in punitive damages. In February 1971, the Nevada supreme court rejected the lawsuit, which had sought $3 million by that time. In December 1971, Hughes paid a little over $1 million to purchase of adjacent land located west of the Landmark. Hughes had previously leased the property, which he had been using as a parking lot for the resort. In January 1973, ownership of the Landmark was transferred to Hughes' Summa Corporation, formerly Hughes Tool Company. That year, the Landmark was valued at $25 million in a property appraisal. By 1974, William Bennett and William Pennington made an offer to buy the Landmark, but Hughes raised the price several times, from $15 million to $20 million; they bought the Circus Circus resort instead. In January 1976, the Landmark began offering foreign-language gaming video tapes to its German, Japanese, and Spanish hotel guests, who frequently limited themselves to playing slot machines rather than table games because of language barriers. Summa general manager E. H. Milligan said, "As far as we know, we are the first hotel in Las Vegas to present this service in this manner." The hotel and casino briefly closed in March 1976, as part of a hotel worker strike consisting of nearly 25,000 employees, affecting 15 Las Vegas resorts. The strike lasted two weeks before ending in late March. Hughes died of kidney failure the following month. By May 1977, Summa was financially struggling; that month, the brokerage firm of Merrill, Lynch, Pierce, Fenner & Smith recommended that Summa sell its various holdings, including the Landmark. According to the brokerage firm, the Landmark "has proven highly inefficient for hotel/casino operations and, in the opinion of Summa Corporation's management, does not warrant further investment." Gas leak and fire On July 15, 1977, shortly after 4:00 a.m., a water pipe burst in the tower's subbasement, two floors below ground level. Two feet of water flooded the basement room and shorted out the main power panel, thereby cutting out electricity for the resort shortly before 5:00 a.m. An auxiliary power generator provided lighting for the resort. However, telephones, air conditioning, and four of the tower's five elevators were left non-functional because of the main power failure. Carbon monoxide, freon and methane, all originating from the auxiliary generator, infiltrated the tower through ventilation ducts, forcing an evacuation of the building. Between 9:00 a.m. and 11:00 a.m., crews from the Southwest Gas Corporation inspected the building with firemen and found no further traces of gas, allowing guests and employees to re-enter the building. A second evacuation was ordered at 2:30 p.m. after another power failure, which rendered the elevators inoperable once again. During the outage, 21 table games remained open with the use of emergency lights, while a bar gave away free drinks. Power was restored at 6:45 p.m., although telephones remained inoperable. Guests were given the option to stay at one of Summa's other hotel properties. 
Despite the incident, hotel executives stated that the resort maintained 95-percent occupancy. An investigation into the cause of the gas leaks could not begin that day due to the presence of fumes in the basement. During the incident, a news reporter and a cameraman for the local KLAS-TV news channel – also owned by Summa – were beaten and forced out of the hotel lobby by Landmark guards who were armed with clubs and flashlights. Damaged in the altercation was the recording unit for a $37,000 camera owned by KLAS. Other local news crews were allowed to stay at the property to cover the incident. Orders to remove KLAS were given to the guards by hotel management, which had been irritated by recent KLAS news stories that related to Summa's properties, including a story stating that negotiations were underway to sell the Landmark to an Arabian investor. A total of 138 people were hospitalized after inhaling the poisonous gases; they were treated at four local hospitals. Among the hospitalized were nearly 100 hotel guests, and several firemen and ambulance drivers; most of the patients were released from the hospitals within three days of the incident. A 55-year-old man was the sole casualty in the incident. An investigation into the cause of the gas leaks concluded on July 19, 1977, and found that a defective exhaust line on one of the emergency generators was responsible. The line had been installed during the hotel's construction. John Pisciotta, director of the Clark County Building Department, did not believe that he or anyone else would be able to determine how the line became damaged. Summa brought in the company which installed the system to have it repaired. On October 23, 1977, at 3:44 p.m., a two-alarm fire was reported in a hotel room on the 22nd floor, after a bartender in the 27th floor lounge smelled smoke. The entire room had caught on fire from a cigarette. The fire was extinguished with help from 45 firefighters, who put it out within five minutes of their arrival. However, the fire led to heavy smoke infiltrating the entire hotel and ground-floor through elevator shafts. The Landmark was evacuated, and hundreds of guests and employees were allowed to return inside at approximately 5:15 p.m., after smoke had been cleared from the resort's interior. The 22nd through 27th floors had moderate smoke damage. Five hotel guests were treated for smoke inhalation, but none required hospitalization. Prospective buyers During October 1977, Summa was in negotiations with several prospective buyers for the Landmark, which had approximately 1,200 employees at the time. One interested buyer was a group of Chicago investors led by an attorney. Summa was also in negotiations to sell the Landmark for $12 million to Nick Lardakis, a tavern owner who lived in Akron, Ohio. Simultaneously, Summa was holding discussions with the Scott Corporation – a group of downtown Las Vegas entrepreneurs led by Frank Scott – which wanted to purchase the resort at a price of nearly $10 million. Lardakis' acquisition of the Landmark was rejected that month as he was unable to raise the necessary funds to make the purchase; according to Summa, Lardakis' terms were "unrealistic." The Chicago group made a $12 million offer, but Summa's board of directors favored the offer by Scott Corporation, which had no down payment and included a 20-year payout period, while the Chicago group was opposed to a long-term mortgage arrangement with Summa. 
The Chicago group noted that Summa officials repeatedly declined to let the group examine the Landmark's 1973 property appraisal. Other $12 million offers came from Las Vegas heiress JoAnn Seigal and Beverly Hills management consultant Charles Fink. Seigal also complained that Summa would not provide her with a property appraisal to base her negotiations. The Beverly Hills-based Acro Management Consultants offered $16 million for the Landmark, the highest of five bids up to that time. Summa spokesman Fred Lewis said that Acro's bid was considered "more of an inquiry" than a serious offer, a belief that was disputed by Leonard Gale, vice president of Acro. Gale acknowledged that the Landmark was "the biggest lemon in Las Vegas", but was confident it could become a successful property under Acro's ownership. After weeks of negotiations, Summa announced that no decision had been made on a sale of the Landmark, reportedly due to disagreements within the company. William Lummis, a cousin of Hughes, had been named chairman of the Summa board earlier in the year. Lummis wanted to sell all of Summa's non-profitable properties, while chief operating officer Frank William Gay, citing the purported desires of Hughes, wanted to expand and modernize such properties. The Landmark was considered the weakest of Summa's six gaming and hotel properties in Nevada, as it had never made a profit up to that time. Summa officials held a meeting on November 3, 1977, but the company made no decision on selling the Landmark, which lost an average of $500,000 per month. By that time, the Scott Corporation stated that it would likely withdraw its offer to purchase the Landmark because of inability to obtain long-term financing. In January 1978, Summa announced that the Landmark would be sold to the Scott Corporation, with the sale price reportedly ranging between $10 million and $12 million. Up to that time, the resort had reportedly lost $15 million since its opening, despite numerous attempts to increase business. Experts believed that the Landmark suffered financially as a result of its low room-count (486 guest rooms at the time) and its location across the street from the Las Vegas Hilton (formerly the International), which was the world's largest hotel at the time. Frank Scott owned downtown Las Vegas' Union Plaza Hotel, which had become one of the city's most successful casinos, and he said the same management principles used at the Union Plaza would be applied to the Landmark. Scott intended to change the name of the resort, with "The Plaza Tower" as the favorite among several names under consideration. Scott planned to take over operations once the sale received approval from Summa, county and state gaming officials, and courts that were handling Hughes' estate. Because higher offers were subsequently made for the Landmark, the Scott Corporation's offer was rejected by a judge who was monitoring the Hughes estate. Wolfram/Tickel ownership A group of midwestern investors purchased the Landmark from the Summa Corporation in February 1978, at a cost of $12.5 million. The group was led by Lou Tickel and Zula Wolfram, and it included Gary Yelverton. The purchase was financed using money that Wolfram's husband, Ed Wolfram, embezzled from his brokerage firm, Bell & Beckwith. Faye Todd, the Landmark's entertainment director and a corporate executive assistant, primarily oversaw the Landmark's operations for the Wolframs, who lived in Ohio. 
The Wolframs were high rollers who frequently stayed at the Desert Inn resort when visiting Las Vegas. Todd met the Wolframs while working for the Desert Inn as special events coordinator, and she became close friends with Zula Wolfram, who had been planning to purchase a Las Vegas hotel with her husband. Tickel, a former magistrate judge and a resident of Salina, Kansas, previously owned several other hotels. The group was confident that the Landmark would overcome its financial problems, and they planned to add a 750-room hotel tower to the property within two years. The sale was completed on March 31, 1978, under the new ownership of Zula Wolfram, and Lou and Jo Ann Tickel. However, the new owners were unable to find someone with a gaming license and sufficient funds to continue operating the casino ahead of the sale's completion. The investment group had yet to apply for gaming and liquor licenses, and the Summa Corporation declined to continue operating the casino, citing a lack of interest. The Landmark's casino, which had 272 employees, was closed on April 1, 1978, due to the lack of gaming licenses. The owners began a search for a suitable licensed individual who could temporarily operate the casino until they could receive their own gaming license. The hotel, restaurants, and shops remained open, with 700 other employees. The casino reopened on June 2, 1978, after a one-year gaming license had been granted to Frank Modica, a Las Vegas gaming figure who would temporarily operate the casino on the owners' behalf. The casino's bingo parlor remained closed as it was undergoing renovations. In October 1978, Tickel, Wolfram, and Yelverton were approved by the state to be licensed as the landlords of the Landmark. At the time, Ed Wolfram was listed as a financial adviser on the licensing plan. In 1979, Jesse Jackson Jr. was the Landmark hotel manager, and was the only such manager in the Las Vegas hotel industry to be black. The Tickels remained as co-owners of the Landmark until 1980, following Zula Wolfram's approval to purchase their interest in the resort. In 1982, architect Martin Stern Jr. was hired to design a large expansion of the Landmark. Revenue for the Landmark exceeded $26 million that year, although the resort lost $500,000 during the month of November 1982. Up to that time, the Landmark had lost an average of $3 million every year since its opening. Federal investigators shut down Wolfram's firm on February 7, 1983, after they discovered $36 million missing from six accounts that were managed by him and his wife, ultimately leading to the discovery of his embezzlement. Lawyer Patrick McGraw, trustee for Bell & Beckwith, was approved later that month to operate the Landmark until it could be liquidated. The expansion designed by Stern was cancelled, and Ed Wolfram was convicted of embezzling later that year, after admitting to using money from his firm to pay for various business ventures, with the Landmark being the most expensive. Zula Wolfram, who had owed $5 million to Summa since her purchase of the Landmark, was forced to sell her majority share in the resort. Morris ownership The Landmark was entangled in a Toledo bankruptcy court in July 1983, at which point Bill Morris, a Las Vegas lawyer, made plans to purchase the resort. Morris, also a member of the Las Vegas Convention and Visitors Authority (LVCVA), had previously owned the Holiday Inn Center Strip hotel-casino, as well as the Riverside Resort in nearby Laughlin. 
Morris had also previously represented Plaza Tower, Inc. at the time that Hughes completed his purchase of the resort. Morris intended to eventually expand the resort to 1,100 hotel rooms. Yelverton and his wife stated that they had been sold a five-percent interest in the Landmark in 1979, but that the document was never filed with the county recorder's office. In August 1983, the Yelvertons filed a state suit to prevent the sale to Morris, stating that they would not be compensated for their interest if the sale proceeded. At the time, Gary Yelverton was the Landmark's casino manager. The Nevada Gaming Control Board delayed approval of Morris' purchase until his offer could be updated to include what Zula Wolfram owed to Summa. Morris purchased the Landmark for $18.7 million, and took over ownership on October 30, 1983. The struggling resort had a profitable first month under its new management. Morris worked 18 hours a day to ensure the Landmark's success. He said the Landmark had "never really been given a fair chance," citing the absence of "on-hands management on a day-in, day-out basis" as one reason for its lack of success. Morris also believed that previous operators tried to make the Landmark "do something it was not meant to do" by competing with "superstar productions," whereas he believed the resort's location made it more ideal for serving attendees of the Las Vegas Convention Center. The Landmark remained open while Morris spent nearly $3.5 million on a renovation, which was underway in late 1983. Morris said the Landmark would compete against rivals with its "budget prices and good service." He intended to capitalize on the resort's location with a planned expansion that would feature three 15-story towers with 1,500 hotel rooms, accompanied by a large domed family entertainment center. The expansion was to be built west of the Landmark on of vacant land that Morris had purchased along with the resort. The expansion did not occur, and the Landmark struggled throughout the 1980s. By the middle of 1985, Morris was negotiating a $28 million loan to pay for improvements and fire safety updates for the Landmark. Clark County officials considered taking action against the resort because of its failed compliance with fire safety standards. On July 29, 1985, the Internal Revenue Service (IRS) filed a $2.1 million lien against the property, because of Morris' failure to pay withholding and payroll taxes for the resort's employees for the previous six months. Two days after the lien was filed, the Landmark filed for Chapter 11 bankruptcy to prevent the IRS from seizing assets such as casino cage money. The resort remained open despite the bankruptcy filing, and the casino had enough money to remain operational. The Landmark had debts totaling $30.6 million, while it had $30.6 million in assets. Morris blamed the bankruptcy on McGraw, alleging that he derailed a $28.8 million refinancing of the Landmark 24 hours prior to the finalization of the loan. Morris said operations would continue as normal despite the bankruptcy filing. The Nevada National Bank requested in early 1986 that the bankruptcy be converted to a liquidation proceeding to pay off creditors, stating that the Landmark's bankruptcy reorganization plan could not succeed. Morris said he would have to cancel his reorganization plan and lay off 700 to 800 Landmark employees if a bankruptcy court did not allow the resort to abandon its union labor contracts. 
Part of Morris' reorganization plan involved cutting employee wages by 15 percent, including his own yearly salary of $145,000. The pay cut would give the Landmark an additional $6,500 per month, which would allow the resort to make its mortgage payments. Morris hoped to increase the hotel's room count after the resort's eventual emergence from bankruptcy, with additional financing from a national franchise hotel chain. He hoped that the Landmark would be out of Chapter 11 bankruptcy by March 1, 1986, although it would ultimately remain in bankruptcy for the rest of its operation. In January 1987, a small fire broke out in the resort's showroom, located next to the casino. Five employees were evacuated, and there were no injuries. Customers in the casino were unaware of the fire, which was quickly extinguished by the local fire department. The fire was determined to have likely been caused by an arsonist. In July 1987, the Landmark began offering poker tournaments in its Nightcap Lounge each weekday night. To help bring in customers, two cash drawings were held during each tournament. Morris and bank company Drexel Burnham Lambert began a search in 1989 for a new owner to take over the Landmark. At the end of the year, a U.S. bankruptcy court judge gave Morris until 1990 to find a buyer or refinancing. Otherwise, the Landmark would be liquidated to pay off creditors, in accordance with a court order. On January 2, 1990, the Landmark was ordered into Chapter 7 bankruptcy after a judge ruled that the creditors would not be able to receive compensation under the reorganization plan. Between $43 million and $46 million was owed to various creditors. Morris' gaming license expired that month after the resort failed to pay $500,000 in taxes and penalties. Richard Davis, a Las Vegas-based real estate agent, was appointed by the bankruptcy court that month to temporarily operate the resort. On February 21, 1990, the Nevada Gaming Commission extended the gaming license and allowed the resort to stay open for at least two additional weeks while its financial problems were analyzed by state experts. At that time, the hotel had $562,000 in cash, including $175,000 in revenue that had accumulated in the prior six weeks. The Landmark continued to struggle, although the introduction of various casino programs helped improve revenue. A U.S. bankruptcy court judge approved a request for the Landmark to be sold seven weeks later in a public auction scheduled for August 6, 1990. The request was made by Davis, who cited numerous failed attempts to sell the resort. More than 200 prospective buyers had inquired about the Landmark, but only five to ten of them were considered as having serious interest in the resort. In July 1990, two Denver businessmen, David M. Droubay and Martin Heckmaster, offered $35.5 million to purchase the bankrupt resort. Morris was dissatisfied with the offer, stating that the property had been appraised as high as $70 million. Closure (1990–1995) On August 6, 1990, the bankruptcy hearing failed to attract a buyer for the Landmark. Ralph Engelstad and Charles Frias, who both held substantial interest in the resort, had made $100,000 deposits which allowed them to bid at the hearing, but they did not do so and left the hearing without commenting. Droubay and Heckmaster were ineligible to bid as they did not make a deposit. At the request of Davis' attorney, a U.S. bankruptcy judge granted permission to close the Landmark. 
Gaming operations began shutting down that afternoon, within an hour of the failed hearing. Slot machine and hotel operations were scheduled to shut down later in the week. With 498 rooms at the time, the Landmark was unable to compete with new megaresorts, and was fully closed on August 8, 1990. Morris, upset about the failed auction, said, "Sometimes it comes down to good luck and bad luck. I had nothing but bad luck. Someone is going to come in and run the Landmark and look like a genius." Forrest Woodward, who managed the casino for Davis, said, "This is just an obsolete gaming property that no one's interested in, considering the debt," which included $48 million; a portion of that was $10 million in unsecured claims. Davis' attorney predicted the Landmark would be closed for 100 days or more while creditors pursued a foreclosure sale. A week after the closure, Davis received permission from the U.S. bankruptcy court to abandon the property as trustee, due to the cost of maintaining security at the closed resort. Davis' attorney said it would cost between $60,000 and $200,000 each month to maintain the property. Creditors would be left to pay bills relating to the property until a foreclosure sale could take place. In December 1990, the property was purchased through a foreclosure sale by Lloyds Bank of London for $20 million. Lloyds Bank made the purchase in order to protect a $25 million loan it had made to Morris in 1988. By March 1993, the Landmark's contents had been liquidated through a sale conducted by National Content Liquidators. By July 1993, representatives of Lloyds Bank had approached the LVCVA about the possibility of purchasing the Landmark. LVCVA was interested in the proposal, with plans to use the Landmark's 21-acre property either for a parking lot or expansion. LVCVA purchased the Landmark in September 1993, at a cost of $15.1 million. During 1994, board members of LVCVA debated on whether to restore the Landmark or demolish it, ultimately deciding on the latter. Only three LVCVA board members voted to save the building. Among those voting in support was Lorraine Hunt, who later said that the Landmark "was iconic and part of the history of Las Vegas. Had they kept it, it could have been the office for the Las Vegas Convention and Visitors Authority." Demolition LVCVA paid $800,000 for asbestos removal in the tower. Central Environmental Inc. was hired to remove the asbestos, while AB-Haz Environmental, Inc. was the asbestos removal consultant. In mid-1994, AB-Haz Environmental began removing asbestos insulation from the Landmark. The removal, scheduled for completion in August 1994, took nearly six months. In October 1994, it was announced that the Landmark would be demolished the following month to make way for a 21-acre parking lot, to be used by the Las Vegas Convention Center. Demolition of the tower was delayed several times, to allow for the removal of additional asbestos. The Clark County Health District proposed penalties against the asbestos companies. By February 1995, AB-Haz had twice declared the Landmark to be asbestos-free and safe for demolition, although Clark County officials discovered that some hotel floors still contained 90 percent of the asbestos. Up to that time, LVCVA had already paid a total of $1 million to the asbestos companies to have the asbestos removed from the hotel and an adjacent apartment complex, allowing for their demolition. 
The Clark County Air Pollution Control Division recommended a $450,000 fine against AB-Haz for failure to remove the asbestos, while LVCVA would have to spend an additional $1 million for further asbestos removal. AB-Haz was ultimately cited for violating air emission standards during the asbestos removal, and signed a settlement in which the company agreed to pay an $18,000 fine. Central Environmental was removing asbestos from the tower as of August 1995. Because of previous delays, officials for LVCVA had given up on setting a demolition date until all the asbestos was removed. In October 1995, LVCVA paid Iconco Inc. $740,000 to remove remaining asbestos from the resort, hoping to have it demolished in time for ConExpo to be held on the property's new parking lot in March 1996. Controlled Demolition, Inc. (CDI) was hired to implode the tower. No blueprints could be found for the tower, which CDI president Mark Loizeaux considered unusual. Demolition crews discovered secret stairwells in the tower, and Loizeaux said, "We have learned everything as we have gone in. It was a very strange structure, very unique." A week before the Landmark tower was demolished, crews removed the remaining asbestos from the low-rise structures and subsequently tore them down. Crews then spent the final days of demolition by drilling in the tower to weaken and prepare it ahead of its planned implosion. Less than 100 pounds of dynamite was placed in certain locations throughout the tower's first four floors. At 5:37 a.m. on November 7, 1995, the Landmark tower was demolished through implosion. An estimated 7,000 people arrived to witness the implosion. Upon detonation, the tower's northwest half was brought down, followed by the second half, which caved in on itself, followed by a black cloud of dust ascending 150 feet into the air. Most of the material from the demolished structure was to be recycled and used in other construction projects. The 31-story tower was the tallest reinforced concrete building ever demolished in North America, and the second tallest building in the world to be demolished. Demolition and related expenses cost $3 million. Frank Wright, curator of the Nevada State Museum and Historical Society, said "I kind of hate to see it come down," stating that the Landmark tower still represented what the then-upcoming Stratosphere tower represented: "the biggest and the tallest." The property was to become occupied by 2,200 parking spaces, expected to be ready by March 1996. One of the Landmark's ground-level signs, with gold and blue cursive neon lettering, was restored by the Neon Museum and installed at the parking lot. As of 2017, the property contains 2,948 parking spaces for the Las Vegas Convention Center. In 2019, work was underway on an expansion of the convention center, to be built on the former sites of the Landmark and the nearby Riviera. The sign was removed from the site and temporarily put into storage by the Neon Museum. The convention center's West Hall expansion opened on the site in June 2021. Architecture The Landmark tower was designed by architects Gerald Moffitt and Ed Hendricks. The uniquely designed Landmark tower was the first of its kind to be built in Nevada; its design was inspired by the Space Needle located in Seattle, Washington. When construction stopped in 1962, the project consisted of of floor space, and included two basements that were 30 feet deep. The tower's height measured 297 feet, while its diameter measured 60 feet. 
The tower's dome measured 141 feet in diameter. In 1966 – the year that construction resumed – architects George Tate and Thomas Dobrusky were hired to design new portions of the resort, including the ground-floor casino. Height The Landmark tower was billed as having 31 floors, although it skipped floors 13 and 28. The Landmark tower was the tallest building in the state from 1962 to 1969. In 1967, a revolving letter "L" neon sign was installed at the top of the tower. Excluding its rooftop sign, the tower stood , seven feet taller than the Mint hotel in downtown Las Vegas. Conflicting numbers have been given for the tower's total height. According to Scherer, the sign measured , and the tower measured , including the sign. At the time of opening, the Landmark tower was billed as having a height of . By that time, the new 30-story International Hotel had become the tallest building in the state at . When it was demolished, the tower reportedly stood . According to Emporis, the tower stood from the ground to its roof, while the tip raised the height to a total of . Features When the Landmark opened, it had a total of 400 slot machines. The ground-floor casino was , while a second casino, consisting of , was located in the dome on the 29th floor; it was the first high-rise casino in the state. At the time of opening, the ground-floor casino featured red and black colors, while the upper casino used orange coloring and wood. The hotel contained 476 rooms and 27 suites for a total of 503, a small number in comparison to other Las Vegas resorts, which commonly had 1,000 rooms. The tower included 157 hotel rooms, while the remaining units were located on ground level. The tower used an octagonal floorplan, and the rooms in the tower used a layout that had them shaped like pie slices. By 1977, the room count had increased to 524, before ultimately being lowered to 498 at the time of the Landmark's closure in 1990. The Landmark's interior designer was Las Vegas resident Leonard Edward England, who designed the ground floor to include a colorful and primitive Incan theme, which gradually changed to a Space Age theme on subsequent floors. The interior included $200,000 light fixtures, glowing, red-colored Incan masks, and a burnished metal wall sculpture representing a Cape Kennedy launch. The interior also included 65 tons of black and white polished marble, and carved mahogany woodwork from Mexico. In addition, the interior featured murals depicting the eight Wonders of the World, which included the Landmark tower. After Hughes agreed to purchase the resort, he had an island built in the middle of the hotel's 240-foot swimming pool, which cost $200,000 and was the longest in the world. The Landmark's pool included waterfalls and three carpeted bridges leading to its center island, which featured palm trees. For the hotel, Hughes replaced 72-inch beds with 80-inch beds and had color televisions built into the walls of each room ahead of the resort's opening. The Landmark's second floor was used for offices. The tower's dome included five floors, although floors 26 and 30 were used by employees for maintenance equipment, elevator equipment, and dressing rooms. The shape and strength of the tower's bubble dome was maintained by perlite concrete and steel girders. The Landmark included a high-speed exterior glass elevator, which took people up to the five-story cupola dome. The elevator was located on the tower's west side, facing the Las Vegas Strip. 
It was capable of moving 1,000 feet per minute, allowing people to go from the ground floor to the 31st floor in 20 seconds. It was the fastest elevator in the Western United States. Hughes biographer Michael Drosnin stated that the elevator was prone to constant malfunctions, and that the Landmark's air-conditioning system "never really worked." The dome provided wraparound views of the city, and was capable of holding over 2,000 people. The dome included lounges and a night club, as well as the high-rise casino on the 29th floor. At the time of the Landmark's opening, the showroom and the Cascade Terrace coffee shop were located on the first floor, while a steak and seafood gourmet restaurant known as Towers Restaurant was located on the 27th floor and a Chinese restaurant known as the Mandarin Room was located on the 29th floor. In April 1971, plans were announced for a $750,000 expansion that would include luxury suites on the 29th floor, the highest in Las Vegas at the time. Also planned was the remodeling of the casino and lobby, and the expansion of a coffee shop. The Skytop Rendezvous, a piano bar and dance floor on the top floor of the tower, was reopened as a discotheque on February 3, 1975, specializing in middle of the road music. The Landmark was the only major hotel in the state to have a discotheque. When Morris' renovation began in December 1983, the tower contained 150 rooms, a number that was expected to be reduced as the rooms would be enlarged and upgraded to first class standards. Other plans included changes to the coffee shop, new casino carpeting, and redesigning and renaming the 27th-floor restaurant as Anthony's Seafood and Prime Rib Room. The renovation was financed by Valley Bank of Nevada. The Love Song Lounge operated on the top floor during the mid-1980s, before and after Morris' renovation, and offered dancing. During 1985 through 1987, the resort also operated the Sunset Room on the 27th floor, offering piano-bar music and fine dining, with an emphasis on steaks and seafood. The Poolside Room operated on the ground level. The Nightcap Lounge opened at the Landmark in 1986, and offered comedy acts. Reception In 1962, the Los Angeles Times called the $6 million Landmark, "By far the most spectacular project", out of several Las Vegas resorts that were under construction; the newspaper further wrote that the Landmark was "destined to become the Mark Hopkins of Las Vegas." The following year, the Reno Evening Gazette opined that the Landmark had "the most unusual exterior architecture in Nevada." In 1966, Billboard wrote that the mushroom-shaped Landmark tower had "the most spectacular design" of all recent high-rise structures in the city. In 1993, architecture critic Alan Hess noted the simplicity of the Landmark and the nearby International Hotel when compared with previous Las Vegas casinos, writing, "As singular, self-contained forms, they showed none of the complexity of the different pieces and sequential additions that made the original Strip visually and urbanistically richer." In 2002, Geoff Carter of Las Vegas Weekly wrote that the demolished Landmark was "Vegas' coolest building and a veritable shrine to 1960s 'Googie' architecture." Performances Peggy Lee performed at the Landmark during the year of its opening. 
In its early years, the Landmark became well known for its performances by country singers, including Kay Starr, Jimmy Dean, Patti Page, Bobbie Gentry, and Danny Davis with his Nashville Brass band, as well as a four-week show starring Ferlin Husky and Archie Campbell. Frank Sinatra also performed at the Landmark, and Bobby Darin made one of his final appearances there. In 1974, the Landmark launched Red McIlvaine's Star Search, a variety show featuring people from across the United States. The following year, The Jim Halsey Company began Country Music USA, a show at the Landmark that featured a different country music headliner every two to three weeks. The show was usually sold out. Roy Clark and Mel Tillis made their debuts in Country Music USA, as did Freddy Fender. The Oak Ridge Boys made their Las Vegas debut in Country Music USA. Leroy Van Dyke performed in the show, with Fender as his opening act. Van Dyke performed again at the Landmark later in the 1970s, with Sons of the Pioneers as his opening act. Other artists who performed in Country Music USA included Barbara Fairchild, Johnny Paycheck and Tommy Overstreet, as well as Jody Miller, Roy Head, and Hank Thompson. Country Music USA ran for two years, until 1977. Spellcaster, an 80-minute family oriented show featuring country-western singer Roy Clayborne, debuted at the Landmark in 1982. Spellcaster, a production show with dancers and showgirls, featured Clayborne singing 15 songs. Spellcaster was named after one of the Wolframs' racing horses, and was produced through Zula Wolfram's Las Vegas production company, Zula Productions. The show was designed and directed by Larry Hart, a 1979 Grammy Award winner, and it ran for approximately eight months. At the time of Spellcasters debut, Danny Hein and Terri Dancer also began performing in the resort's Galaxy Lounge. Hein and Dancer had four different shows consisting of various costumes and set decorations, and were accompanied by a five-person band of musicians who backed up the duo. In the late 1980s, the Landmark's showroom hosted minor acts and was considered small in comparison to other Las Vegas resorts. The Landmark hosted magician Melinda Saxe in a family-friendly magic show, which was initially known as 88 Follies Revue and was renamed Follies Revue '89 the following year before concluding its run. In 1990, the main showroom featured Spellbound, a magic show consisting of two illusionist teams. Dick Foster was the show's director and producer. In popular culture The unfinished tower briefly appears in the 1964 film, Viva Las Vegas. In 1971, Sean Connery and stuntmen rode atop the Landmark's exterior elevator as part of filming for scenes in the James Bond film Diamonds Are Forever; the tower was among other Las Vegas resorts that stood in as the fictional Whyte House hotel-casino. In the 1980s, the Landmark appeared in the television series Vega$ and Crime Story. In October 1994, the exterior entrance of the Landmark was lit up for one night so it could be used for outdoor shots as the fictional Tangiers casino, featured in the 1995 film, Casino. The Landmark's implosion was filmed for use in director Tim Burton's 1996 film, Mars Attacks!. In the film, the Landmark is portrayed as the fictional Galaxy Hotel, which is destroyed by an alien spaceship. Burton had stayed at the hotel a few times and was upset by the decision to demolish it, so he wanted to immortalize it in his film. A scale model of the Landmark tower was also made for the production of Mars Attacks!. 
The demolition of the Landmark also appears during the closing credits of the 2003 film, The Cooler. The Lucky 38, a fictional tower casino featured in the 2010 video game Fallout: New Vegas, partially resembles the Landmark. A near-exact replica of the Landmark called the Bikini Atoll Casino can be seen in the Saints Row (2022) reboot, in the El Dorado district (which is based on the Las Vegas Strip) of Santo Ileso. It is portrayed as an abandoned casino. See also Fontainebleau Las Vegas, tallest building in Nevada since 2008; opened in 2023 after a construction delay Stratosphere Las Vegas, still-extant hotel with a similarly-designed tower Notes References External links Slideshow of Landmark photos Landmark demolition video Footage of the Landmark's implosion used for Mars Attacks! Eyewitness News Las Vegas news coverage KLAS-TV news coverage KSNV news coverage KTNV-TV news coverage Tribute to the Landmark Casinos completed in 1969 Hotel buildings completed in 1969 Hotels established in 1969 1990 disestablishments in Nevada Defunct casinos in the Las Vegas Valley Defunct hotels in the Las Vegas Valley Skyscraper hotels in Winchester, Nevada Former skyscraper hotels Demolished hotels in Clark County, Nevada Buildings and structures demolished by controlled implosion Buildings and structures demolished in 1995 1969 establishments in Nevada Casino hotels
Landmark (hotel and casino)
[ "Engineering" ]
13,140
[ "Buildings and structures demolished by controlled implosion", "Architecture" ]
1,513,007
https://en.wikipedia.org/wiki/Helmert%E2%80%93Wolf%20blocking
The Helmert–Wolf blocking (HWB) is a least squares method for solving a sparse block system of linear equations. It was first reported by F. R. Helmert for use in geodesy problems in 1880; Helmut Wolf (1910–1994) published his direct semianalytic solution in 1978. It is based on ordinary Gaussian elimination in matrix form or partial minimization form. Description Limitations The HWB solution is very fast to compute but it is optimal only if observational errors do not correlate between the data blocks. The generalized canonical correlation analysis (gCCA) is the statistical method of choice for making those harmful cross-covariances vanish. This may, however, become quite tedious depending on the nature of the problem. Applications The HWB method is critical to satellite geodesy and similar large problems. The HWB method can be extended to fast Kalman filtering (FKF) by augmenting its linear regression equation system to take into account information from numerical forecasts, physical constraints and other ancillary data sources that are available in real time. Operational accuracies can then be computed reliably from the theory of minimum-norm quadratic unbiased estimation (MINQUE) of C. R. Rao. See also Block matrix Notes Statistical algorithms Least squares Geodesy
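The article above does not spell out the block elimination itself, so here is a minimal dense-NumPy sketch of the idea, written for this edit rather than taken from any published HWB code. It assumes each data block i contributes observations l_i that depend on block-local parameters x_i and on parameters y common to all blocks; the block-local normal equations are eliminated one block at a time (a Schur-complement reduction), the small reduced system in y is solved, and each x_i is then recovered by back-substitution. The function name, the use of explicit inverses, and the dense arrays are all illustrative; real geodetic implementations exploit sparsity and numerically safer factorisations.

```python
import numpy as np

def helmert_wolf_blocking(blocks, n_common):
    """Solve min over {x_i}, y of sum_i ||A_i x_i + B_i y - l_i||^2.

    `blocks` is an iterable of (A_i, B_i, l_i) NumPy arrays, one tuple per
    independent data block; `n_common` is the number of shared parameters y.
    Dense illustration of the Helmert-Wolf reduction only.
    """
    N_yy = np.zeros((n_common, n_common))   # reduced normal matrix for y
    b_y = np.zeros(n_common)                 # reduced right-hand side
    partials = []                            # per-block factors kept for back-substitution

    for A, B, l in blocks:
        N_xx = A.T @ A                       # block-local normal matrix
        N_xy = A.T @ B
        b_x = A.T @ l
        N_xx_inv = np.linalg.inv(N_xx)       # small, block-sized inverse
        # Schur-complement update: eliminate x_i from the common system.
        N_yy += B.T @ B - N_xy.T @ N_xx_inv @ N_xy
        b_y += B.T @ l - N_xy.T @ N_xx_inv @ b_x
        partials.append((N_xx_inv, N_xy, b_x))

    y = np.linalg.solve(N_yy, b_y)           # solve the small common system
    xs = [N_xx_inv @ (b_x - N_xy @ y)        # back-substitute each block
          for N_xx_inv, N_xy, b_x in partials]
    return xs, y
```

Because each block is reduced independently, the per-block work can be distributed across machines, which is what makes this kind of scheme attractive for very large satellite-geodesy adjustments, subject to the caveat noted above that observational errors must not correlate across blocks.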
Helmert–Wolf blocking
[ "Mathematics" ]
278
[ "Applied mathematics", "Geodesy" ]
1,513,065
https://en.wikipedia.org/wiki/Palladium-hydrogen%20electrode
The palladium-hydrogen electrode (abbreviation: Pd/H2) is one of the common reference electrodes used in electrochemical studies. Most of its characteristics are similar to the standard hydrogen electrode (with platinum). Palladium, however, has one significant feature: the capability to absorb (dissolve into itself) molecular hydrogen. Electrode operation Two phases can coexist in palladium when hydrogen is absorbed: the alpha-phase, at hydrogen concentrations of less than 0.025 atoms per atom of palladium, and the beta-phase, at hydrogen concentrations corresponding to the non-stoichiometric formula PdH0.6. The electrochemical behaviour of a palladium electrode in equilibrium with H3O+ ions in solution parallels the behaviour of palladium with molecular hydrogen. Thus the equilibrium is controlled in one case by the partial pressure or fugacity of molecular hydrogen and in the other case by the activity of H+ ions in solution. When palladium is electrochemically charged with hydrogen, the existence of the two phases is manifested by a constant potential of approximately +50 mV compared to the reversible hydrogen electrode. This potential is independent of the amount of hydrogen absorbed over a wide range. This property has been utilized in the construction of a palladium/hydrogen reference electrode. The main feature of such an electrode is that it does not require the continuous bubbling of molecular hydrogen through the solution that is absolutely necessary for the standard hydrogen electrode. See also Dynamic hydrogen electrode Reversible hydrogen electrode References External links Electrochimica Acta Electrodes Palladium Hydrogen technologies
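For orientation, the equilibrium described in the article is the ordinary hydrogen couple; the half-reaction and Nernst equation below are standard electrochemistry rather than material reproduced from this article:

```latex
% Hydrogen couple at a Pd (or Pt) electrode and the corresponding Nernst equation
2\,\mathrm{H_3O^+} + 2e^- \;\rightleftharpoons\; \mathrm{H_2} + 2\,\mathrm{H_2O}
\qquad
E \;=\; E^\circ - \frac{RT}{2F}\,\ln\!\frac{f_{\mathrm{H_2}}}{a_{\mathrm{H^+}}^{2}}
```

The potential is therefore fixed either by the hydrogen fugacity f_H2 or by the H+ activity, which is the two-way dependence described above. During alpha/beta coexistence the chemical potential of the absorbed hydrogen is pinned by the phase equilibrium, which is the usual explanation for the nearly constant offset of about +50 mV from the reversible hydrogen electrode that makes the Pd/H2 electrode useful as a reference.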
Palladium-hydrogen electrode
[ "Chemistry" ]
316
[ "Physical chemistry stubs", "Electrochemistry", "Electrodes", "Electrochemistry stubs" ]
1,513,212
https://en.wikipedia.org/wiki/Japan%20black
Japan black (also called black japan and bicycle paint) is a lacquer or varnish suitable for many substrates but known especially for its use on iron and steel. It can also be called japan lacquer and Brunswick black. Its name comes from the association between the finish and Japanese products in the West. Used as a verb, japan means "to finish in japan black". Thus japanning and japanned are terms describing the process and its products. Its high bitumen content provides a protective finish that is durable and dries quickly. This allowed japan black to be used extensively in the production of automobiles in the early 20th century in the United States. Ingredients Japan black consists mostly of an asphaltic base dissolved in naphtha or turpentine, sometimes with other varnish ingredients, such as linseed oil. It is applied directly to metal parts, and then baked at about 200°C (400°F) for up to an hour. Automobile use Japan black's popularity was due in part to its durability as an automotive finish; however, it was the ability of japan black to dry quickly that made it a favorite of early mass-produced automobiles such as Henry Ford's Model T. While other colors were available for automotive finishes, early colored variants of automotive lacquers could take up to 14 days to cure, whereas japan black would cure in 48 hours or less. Thus, variously colored pre-1925 car bodies were usually consigned to special orders, or custom-bodied luxury automobiles. The development of quick-drying nitrocellulose lacquers (pyroxylins) which could be colored to suit the needs of the buying public in the 1920s led to the disuse of japan black by the end of the 1920s. In 1924, General Motors introduced "True Blue" Duco (a product of DuPont) nitrocellulose lacquer on its 1925 model Oakland automobile marque products. Ford's formulations Ford used two formulations of japan black, F-101 and F-102 (renamed M-101 and M-102 after March 15, 1922). F-101, the "First Coat Black Elastic Japan", was used as the basic coat applied directly to the metal, while F-102, "Finish Coat Elastic Black Japan", was applied over the first layer. Their compositions were similar: 25–35% asphalt and 10% linseed oil with lead and iron-based dryers, dissolved in 55% thinners (mineral spirits, turpentine substitute or naphtha). The F-101 also had 1–3% of carbon black added as a pigment. The asphalt used in the Ford formulations was specified to be Gilsonite. This has long been used in formulations of paint for use on ironware as it increases the elasticity of the paint layer, allowing it to adhere to steel subjected to vibration, deformation and thermal expansion without cracking or peeling. It is also cheap, yields a glossy dark surface, and acts as a curing agent for the oil. See also Pontypool japan Rustproofing References Painting materials Ford Motor Company Paints Auto parts
Japan black
[ "Chemistry" ]
650
[ "Paints", "Coatings" ]
1,513,277
https://en.wikipedia.org/wiki/Microtome
A microtome (from the Greek mikros, meaning "small", and temnein, meaning "to cut") is a cutting tool used to produce extremely thin slices of material known as sections, with the process being termed microsectioning. Important in science, microtomes are used in microscopy for the preparation of samples for observation under transmitted light or electron radiation. Microtomes use steel, glass or diamond blades depending upon the specimen being sliced and the desired thickness of the sections being cut. Steel blades are used to prepare histological sections of animal or plant tissues for light microscopy. Glass knives are used to slice sections for light microscopy and to slice very thin sections for electron microscopy. Industrial grade diamond knives are used to slice hard materials such as bone, teeth and tough plant matter for both light microscopy and for electron microscopy. Gem-quality diamond knives are also used for slicing thin sections for electron microscopy. Microtomy is a method for the preparation of thin sections for materials such as bones, minerals and teeth, and an alternative to electropolishing and ion milling. Microtome sections can be made thin enough to section a human hair across its breadth, with section thickness between 50 nm and 100 μm. History In the early days of light microscope development, sections from plants and animals were prepared manually using razor blades. It was found that, to observe the structure of the specimen, it was important to make clean, reproducible cuts on the order of 100 μm, through which light can be transmitted. This allowed for the observation of samples using light microscopes in a transmission mode. One of the first devices for the preparation of such cuts was invented in 1770 by George Adams, Jr. (1750–1795) and further developed by Alexander Cummings. The device was hand operated: the sample was held in a cylinder, and sections were created from the top of the sample using a hand crank. In 1835, Andrew Prichard developed a table-based model which allowed the vibration to be isolated by affixing the device to the table, separating the operator from the knife. Occasionally, attribution for the invention of the microtome is given to the anatomist Wilhelm His, Sr. (1865). In his Beschreibung eines Mikrotoms (German for "Description of a Microtome"), His described such an instrument. Other sources further attribute the development to the Czech physiologist Jan Evangelista Purkyně. Several sources describe the Purkyně model as the first in practical use. The obscurities in the origins of the microtome are due to the fact that the first microtomes were simply cutting apparatuses, and the developmental phase of the early devices is widely undocumented. At the end of the 1800s, the development of very thin and consistently thin samples by microtomy, together with the selective staining of important cell components or molecules, allowed for the visualisation of fine details under the microscope. Today, the majority of microtomes are of a knife-block design with a changeable knife, a specimen holder and an advancement mechanism. In most devices the cutting of the sample begins by moving the sample over the knife, where the advancement mechanism automatically moves forward such that the next cut for a chosen thickness can be made. The section thickness is controlled by an adjustment mechanism, allowing for precise control.
Applications The most common applications of microtomes are: Traditional Histology Technique: tissues are fixed, dehydrated, cleared, and embedded in melted paraffin, which when cooled forms a solid block. The tissue is then cut in the microtome at thicknesses varying from 2 to 50 μm. From there the tissue can be mounted on a microscope slide, stained with appropriate aqueous dye(s) after removal of the paraffin, and examined using a light microscope. Frozen section procedure: water-rich tissues are hardened by freezing and cut in the frozen state with a freezing microtome or microtome-cryostat; sections are stained and examined with a light microscope. This technique is much faster than traditional histology (5 minutes vs 16 hours) and is used in conjunction with medical procedures to achieve a quick diagnosis. Cryosections can also be used in immunohistochemistry as freezing tissue stops degradation of tissue faster than using a fixative and does not alter or mask its chemical composition as much. Electron Microscopy Technique: after embedding tissues in epoxy resin, a microtome equipped with a glass or gem grade diamond knife is used to cut very thin sections (typically 60 to 100 nanometer). Sections are stained with an aqueous solution of an appropriate heavy metal salt and examined with a transmission electron microscope. This instrument is often called an ultramicrotome. The ultramicrotome is also used with its glass knife or an industrial grade diamond knife to cut survey sections prior to thin sectioning. These survey sections are generally 0.5 to 1 μm thick and are mounted on a glass slide and stained to locate areas of interest under a light microscope prior to thin sectioning for the TEM. Thin sectioning for the TEM is often done with a gem quality diamond knife. Complementing traditional TEM techniques ultramicrotomes are increasingly found mounted inside an SEM chamber so the surface of the block face can be imaged and then removed with the microtome to uncover the next surface for imaging. This technique is called serial block-face scanning electron microscopy (SBFSEM). Botanical Microtomy Technique: hard materials like wood, bone and leather require a sledge microtome. These microtomes have heavier blades and cannot cut as thin as a regular microtome. Spectroscopy (especially FTIR or infrared spectroscopy) Technique: thin polymer sections are needed in order that the infra-red beam can penetrate the sample under examination. It is normal to cut samples to between 20 and 100 μm in thickness. For more detailed analysis of much smaller areas in a thin section, FTIR microscopy can be used for sample inspection. A recent development is the laser microtome, which cuts the target specimen with a femtosecond laser instead of a mechanical knife. This method is contact-free and does not require sample preparation techniques. The laser microtome has the ability to slice almost every tissue in its native state. Depending on the material being processed, slice thicknesses of 10 to 100 μm are feasible. Sectioning intervals can be classified mainly into either: Serial sectioning: obtaining a continuous ribbon of sections from a paraffin block and using all for slides. Step sections: collected at specified depths in the block. Precision cut kidney slices Precision-cut kidney slices refer to thin sections of the kidney tissue that are prepared using a microtome to study kidney functions, drug metabolism or disease processes. 
Researchers use these slices to study the impact of substances on renal function. This includes drug metabolism and the effects of toxic substances. Types Sledge A sledge microtome is a device where the sample is placed into a fixed holder (shuttle), which then moves backwards and forwards across a knife. Modern sled microtomes have the sled placed upon a linear bearing, a design that allows the microtome to readily cut many coarse sections. By adjusting the angles between the sample and the microtome knife, the pressure applied to the sample during the cut can be reduced. Typical applications for this design of microtome are of the preparation of large samples, such as those embedded in paraffin for biological preparations. Typical cut thickness achievable on a sledge microtome is between 1 and 60 μm. Rotary This instrument is a common microtome design. This device operates with a staged rotary action such that the actual cutting is part of the rotary motion. In a rotary microtome, the knife is typically fixed in a vertical position. In the figure to the left, the principle of the cut is explained. Through the motion of the sample holder, the sample is cut by the knife position 1 to position 2, at which point the fresh section remains on the knife. At the highest point of the rotary motion, the sample holder is advanced by the same thickness as the section that is to be made, allowing the next section to be made. The flywheel in many microtomes can be operated by hand. This has the advantage that a clean cut can be made, as the relatively large mass of the flywheel prevents the sample from being stopped during the sample cut. The flywheel in newer models is often integrated inside the microtome casing. The typical cut thickness for a rotary microtome is between 1 and 60 μm. For hard materials, such as a sample embedded in a synthetic resin, this design of microtome can allow good "semi-thin" sections with a thickness of as low as 0.5 μm. Cryomicrotome For the cutting of frozen samples, many rotary microtomes can be adapted to cut in a liquid-nitrogen chamber, in a so-called cryomicrotome setup. The reduced temperature allows the hardness of the sample to be increased, such as by undergoing a glass transition, which allows the preparation of semi-thin samples. However the sample temperature and the knife temperature must be controlled in order to optimise the resultant sample thickness. Ultramicrotome An ultramicrotome is a main tool of ultramicrotomy. It allows the preparation of extremely thin sections, with the device functioning in the same manner as a rotational microtome, but with very tight tolerances on the mechanical construction. As a result of the careful mechanical construction, the linear thermal expansion of the mounting is used to provide very fine control of the thickness. These extremely thin cuts are important for use with transmission electron microscope (TEM) and serial block-face scanning electron microscopy (SBFSEM), and are sometimes also important for light-optical microscopy. The typical thickness of these cuts is between 40 and 100 nm for transmission electron microscopy and often between 30 and 50 nm for SBFSEM. Thicker sections up to 500 nm thick are also taken for specialized TEM applications or for light-microscopy survey sections to select an area for the final thin sections. Diamond knives (preferably) and glass knives are used with ultramicrotomes. 
To collect the sections, they are floated on top of a liquid as they are cut and are carefully picked up onto grids suitable for TEM specimen viewing. The thickness of the section can be estimated by the thin-film interference colors of reflected light that are seen as a result of the extremely low sample thickness. Vibrating The vibrating microtome operates by cutting using a vibrating blade, allowing the resultant cut to be made with less pressure than would be required for a stationary blade. The vibrating microtome is usually used for difficult biological samples. The cut thickness is usually around 30–500 μm for live tissue and 10–500 μm for fixed tissue. Saw The saw microtome is especially for hard materials such as teeth or bones. The microtome of this type has a recessed rotating saw, which slices through the sample. The minimal cut thickness is approximately 30 μm and can be made for comparatively large samples. Laser The laser microtome is an instrument for contact-free slicing. Prior preparation of the sample through embedding, freezing or chemical fixation is not required, thereby minimizing the artifacts from preparation methods. Alternately this design of microtome can also be used for very hard materials, such as bones or teeth, as well as some ceramics. Dependent upon the properties of the sample material, the thickness achievable is between 10 and 100 μm. The device operates using a cutting action of an infrared laser. As the laser emits a radiation in the near infrared, in this wavelength regime the laser can interact with biological materials. Through sharp focusing of the probe within the sample, a focal point of very high intensity, up to TW/cm2, can be achieved. Through the non-linear interaction of the optical penetration in the focal region a material separation in a process known as photo-disruption is introduced. By limiting the laser pulse durations to the femtoseconds range, the energy expended at the target region is precisely controlled, thereby limiting the interaction zone of the cut to under a micrometre. External to this zone the ultra-short beam application time introduces minimal to no thermal damage to the remainder of the sample. The laser radiation is directed onto a fast scanning mirror-based optical system, which allows three-dimensional positioning of the beam crossover, whilst allowing beam traversal to the desired region of interest. The combination of high power with a high raster rate allows the scanner to cut large areas of sample in a short time. In the laser microtome the laser-microdissection of internal areas in tissues, cellular structures, and other types of small features is also possible. Knives The selection of microtome knife blade profile depends upon the material and preparation of the samples, as well as the final sample requirements (e.g. cut thickness and quality). Design and cut types Generally, knives are characterized by the profile of the knife blade, which falls under the categories of planar concave, wedge shaped or chisel shaped designs. Planar concave microtome knives are extremely sharp, but are also very delicate and are therefore only used with very soft samples. The wedge profile knives are somewhat more stable and find use in moderately hard materials, such as in epoxy or cryogenic sample cutting. Finally, the chisel profile with its blunt edge, raises the stability of the knife, whilst requiring significantly more force to achieve the cut. 
For ultramicrotomes, glass and diamond knives are required, the cut breadth of the blade is therefore on the order of a few millimetres and is therefore significantly smaller than for classical microtome knives. Glass knives are usually manufactured by the fracture of glass bars using special "knife-maker" fracturing devices. Glass knives may be used for initial sample preparations even where diamond knives may be used for final sectioning. Glass knives usually have small troughs, made with plastic tape, which are filled with water to allow the sample to float for later collection. Diamond blades may be built into such an existing trough, allowing for the same collection method. Sectioning Prior to cutting by microtome, biological materials are usually placed in a more rigid fixative, in a process known as embedding. This is achieved by the inflow of a liquid substance around the sample, such as paraffin (wax) or epoxy, which is placed in a mold and later hardened to produce a "block" which is readily cut. The declination is the angle of contact between the sample vertical and knife blade. If the knife blade is at right angles (declination=90) the cut is made directly using a pressure based mode, and the forces are therefore proportionally larger. If the knife is tilted, however, the relative motion of the knife is increasingly parallel to sample motion, allowing for a slicing action. This behaviour is very important for large or hard samples The inclination of the knife is the angle between the knife face and the sample. For an optimal result, this angle must be chosen appropriately. The optimal angle depends upon the knife geometry, the cut speed and many other parameters. If the angle is adjusted to zero, the knife cut can often become erratic, and a new location of the knife must be used to smooth this out. If the angle is too large, the sample can crumple and the knife can induce periodic thickness variations in the cut. By further increasing the angle such that it is too large one can damage the knife blade itself. See also Histology Microscope Precision cut lung slices References External links Histology Forestry tools Cutting machines
Microtome
[ "Physics", "Chemistry", "Technology" ]
3,223
[ "Machines", "Cutting machines", "Physical systems", "Histology", "Microscopy" ]
1,513,441
https://en.wikipedia.org/wiki/Calcium%20channel
A calcium channel is an ion channel which shows selective permeability to calcium ions. It is sometimes synonymous with voltage-gated calcium channel, which are a type of calcium channel regulated by changes in membrane potential. Some calcium channels are regulated by the binding of a ligand. Other calcium channels can also be regulated by both voltage and ligands to provide precise control over ion flow. Some cation channels allow calcium as well as other cations to pass through the membrane. Calcium channels can participate in the creation of action potentials across cell membranes. Calcium channels can also be used to release calcium ions as second messengers within the cell, affecting downstream signaling pathways. Comparison tables The following tables explain gating, gene, location and function of different types of calcium channels, both voltage and ligand-gated. Voltage-gated voltage-operated calcium channels Ligand-gated receptor-operated calcium channels Non-selective channels permeable to calcium There are several cation channel families that allow positively charged ions including calcium to pass through. These include P2X receptors, Transient Receptor Potential (TRP) channels, Cyclic nucleotide-gated (CNG) channels, Acid-sensing ion channels, and SOC channels. These channels can be regulated by membrane voltage potentials, ligands, and/or other cellular conditions. Cat-Sper channels, found in mammalian sperm, are one example of this as they are voltage gated and ligand regulated. Pharmacology L-type calcium channel blockers are used to treat hypertension. In most areas of the body, depolarization is mediated by sodium influx into a cell; changing the calcium permeability has little effect on action potentials. However, in many smooth muscle tissues, depolarization is mediated primarily by calcium influx into the cell. L-type calcium channel blockers selectively inhibit these action potentials in smooth muscle which leads to dilation of blood vessels; this in turn corrects hypertension. T-type calcium channel blockers are used to treat epilepsy. Increased calcium conductance in the neurons leads to increased depolarization and excitability. This leads to a greater predisposition to epileptic episodes. Calcium channel blockers reduce the neuronal calcium conductance and reduce the likelihood of experiencing epileptic attacks. See also . References External links Ion channels Electrophysiology Integral membrane proteins Calcium channels
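To make the pharmacology argument above concrete (why opening calcium channels depolarizes a cell), it helps to compare the membrane potential with the Nernst equilibrium potential for Ca2+. The relation below is a standard textbook formula, not taken from this article, and the concentrations are merely typical illustrative values:

```latex
% Nernst potential for Ca2+ (valence z = 2) near 37 degrees C, with illustrative concentrations
E_{\mathrm{Ca}} = \frac{RT}{2F}\,\ln\!\frac{[\mathrm{Ca^{2+}}]_{\mathrm{out}}}{[\mathrm{Ca^{2+}}]_{\mathrm{in}}}
\approx \frac{26.7\ \mathrm{mV}}{2}\,\ln\!\frac{2\times10^{-3}\ \mathrm{M}}{10^{-7}\ \mathrm{M}}
\approx +130\ \mathrm{mV}
```

Because E_Ca lies far above typical resting potentials, any increase in calcium permeability pulls the membrane toward more positive voltages; this is why blocking L-type channels relaxes vascular smooth muscle and why reducing T-type conductance damps neuronal excitability, as described in the Pharmacology section.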
Calcium channel
[ "Chemistry" ]
489
[ "Neurochemistry", "Ion channels" ]
1,513,442
https://en.wikipedia.org/wiki/Chloride%20channel
Chloride channels are a superfamily of poorly understood ion channels specific for chloride. These channels may conduct many different ions, but are named for chloride because its concentration in vivo is much higher than that of other anions. Several families of voltage-gated channels and ligand-gated channels (e.g., the CaCC families) have been characterized in humans. Voltage-gated chloride channels perform numerous crucial physiological and cellular functions, such as controlling pH and volume homeostasis, transporting organic solutes, and regulating cell migration, proliferation, and differentiation. Based on sequence homology the chloride channels can be subdivided into a number of groups. General functions Voltage-gated chloride channels are important for setting the cell resting membrane potential and maintaining proper cell volume. These channels conduct Cl− as well as other anions such as HCO3−. The structure of these channels is not like that of other known channels. The chloride channel subunits contain between 1 and 12 transmembrane segments. Some chloride channels are activated only by voltage (i.e., voltage-gated), while others are activated by Ca2+, other extracellular ligands, or pH. CLC family The CLC family of chloride channels contains 10 or 12 transmembrane helices. Each protein forms a single pore. It has been shown that some members of this family form homodimers. In terms of primary structure, they are unrelated to known cation channels or other types of anion channels. Three CLC subfamilies are found in animals. CLCN1 is involved in setting and restoring the resting membrane potential of skeletal muscle, while other channels play important parts in solute concentration mechanisms in the kidney. These proteins contain two CBS domains. Chloride channels are also important for maintaining safe ion concentrations within plant cells. Structure and mechanism The CLC channel structure has not yet been resolved; however, the structure of the CLC exchangers has been resolved by x-ray crystallography. Because the primary structures of the channels and exchangers are so similar, most assumptions about the structure of the channels are based on the structure established for the bacterial exchangers. Each channel or exchanger is composed of two similar subunits—a dimer—each subunit containing one pore. The proteins are formed from two copies of the same protein—a homodimer—though scientists have artificially combined subunits from different channels to form heterodimers. Each subunit binds ions independently of the other, meaning that conduction or exchange occurs independently in each subunit. Each subunit consists of two related halves oriented in opposite directions, forming an 'antiparallel' structure. These halves come together to form the anion pore. The pore has a filter through which chloride and other anions can pass, but lets little else through. These water-filled pores filter anions via three binding sites—Sint, Scen, and Sext—which bind chloride and other anions. The names of these binding sites correspond to their positions within the membrane. Sint is exposed to intracellular fluid, Scen lies inside the membrane or in the center of the filter, and Sext is exposed to extracellular fluid.[4] Each binding site binds different chloride anions simultaneously. In the exchangers, these chloride ions do not interact strongly with one another, due to compensating interactions with the protein. In the channels, the protein does not shield chloride ions at one binding site from the neighboring negatively charged chlorides.
Each negative charge exerts a repulsive force on the negative charges next to it. Researchers have suggested that this mutual repulsion contributes to the high rate of conduction through the pore. CLC transporters shuttle H+ across the membrane. The H+ pathway in CLC transporters utilizes two glutamate residues—one on the extracellular side, Gluex, and one on the intracellular side, Gluin. Gluex also serves to regulate chloride exchange between the protein and the extracellular solution. This means that the chloride and the proton share a common pathway on the extracellular side, but diverge on the intracellular side. CLC channels also have a dependence on H+, but for gating rather than Cl− exchange. Instead of utilizing gradients to exchange two Cl− for one H+, the CLC channels transport one H+ while simultaneously transporting millions of anions. This corresponds with one cycle of the slow gate. Eukaryotic CLC channels also contain cytoplasmic domains. These domains have a pair of CBS motifs, whose function is not yet fully characterized. Though the precise function of these domains is not fully characterized, their importance is illustrated by the pathologies resulting from their mutation. Thomsen's disease, Dent's disease, infantile malignant osteopetrosis, and Bartter's syndrome are all genetic disorders due to such mutations. At least one role of the cytoplasmic CBS domains concerns regulation by adenosine nucleotides. Particular CLC transporters and proteins have modulated activity when bound with ATP, ADP, AMP, or adenosine at the CBS domains. The specific effect is unique to each protein, but the implication is that certain CLC transporters and proteins are sensitive to the metabolic state of the cell. Selectivity The Scen site acts as the primary selectivity filter for most CLC proteins, allowing the following anions to pass through, from most selected to least: SCN−, Cl−, Br−, NO3−, I−. Altering a serine residue at the selectivity filter, labeled Sercen, to a different amino acid alters the selectivity. Gating and kinetics Gating occurs through two mechanisms: protopore or fast gating and common or slow gating. Common gating involves both protein subunits closing their pores at the same time (cooperation), while protopore gating involves independent opening and closing of each pore. As the names imply, fast gating occurs at a much faster rate than slow gating. Precise molecular mechanisms for gating are still being studied. For the channels, when the slow gate is closed, no ions permeate through the pore. When the slow gate is open, the fast gates open spontaneously and independently of one another. Thus, the protein could have both gates open, or both gates closed, or just one of the two gates open. Single-channel patch-clamp studies demonstrated this biophysical property even before the dual-pore structure of CLC channels had been resolved. Each fast gate opens independently of the other, and the ion conductance measured during these studies reflects a binomial distribution (a short numerical illustration is sketched at the end of this article). H+ transport promotes opening of the common gate in CLC channels. For every opening and closing of the common gate, one H+ is transported across the membrane. The common gate is also affected by the binding of adenosine nucleotides to the intracellular CBS domains. Inhibition or activation of the protein by these domains is specific to each protein. Function The CLC channels allow chloride to flow down its electrochemical gradient, when open. These channels are expressed on the cell membrane.
CLC channels contribute to the excitability of these membranes as well as transport ions across the membrane. The CLC exchangers are localized to intracellular components like endosomes or lysosomes and help regulate the pH of their compartments. Pathology Bartter's syndrome, which is associated with renal salt wasting and hypokalemic alkalosis, is due to the defective transport of chloride ions and associated ions in the thick ascending loop of Henle. CLCNKB has been implicated. Another inherited disease that affects the kidney organs is Dent's disease, characterised by low molecular weight proteinuria and hypercalciuria where mutations in CLCN5 are implicated. Thomsen disease is associated with dominant mutations and Becker disease with recessive mutations in CLCN1. Genes CLCN1, CLCN2, CLCN3, CLCN4, CLCN5, CLCN6, CLCN7, CLCNKA, CLCNKB BSND - encodes barttin, accessory subunit beta for CLCNKA and CLCNKB E-ClC family Members of Epithelial Chloride Channel (E-ClC) Family (TC# 1.A.13) catalyze bidirectional transport of chloride ions. Mammals have multiple isoforms (at least 6 different gene products plus splice variants) of epithelial chloride channel proteins, catalogued into the Chloride channel accessory (CLCA) family. The first member of this family to be characterized was a respiratory epithelium, Ca2+-regulated, chloride channel protein isolated from bovine tracheal apical membranes. It was biochemically characterized as a 140 kDa complex. The bovine EClC protein has 903 amino acids and four putative transmembrane segments. The purified complex, when reconstituted in a planar lipid bilayer, behaved as an anion-selective channel. It was regulated by Ca2+ via a calmodulin kinase II-dependent mechanism. Distant homologues may be present in plants, ciliates and bacteria, Synechocystis and Escherichia coli, so at least some domains within E-ClC family proteins have an ancient origin. Genes CLCA1, CLCA2, CLCA3, CLCA4 CLIC family The Chloride Intracellular Ion Channel (CLIC) Family (TC# 1.A.12) consists of six conserved proteins in humans (CLIC1, CLIC2, CLIC3, CLIC4, CLIC5, CLIC6). Members exist as both monomeric soluble proteins and integral membrane proteins where they function as chloride-selective ion channels. These proteins are thought to function in the regulation of the membrane potential and in transepithelial ion absorption and secretion in the kidney. They are a member of the glutathione S-transferase (GST) superfamily. Structure They possess one or two putative transmembrane α-helical segments (TMSs). The bovine p64 protein is 437 amino acyl residues in length and has the two putative TMSs at positions 223-239 and 367-385. The N- and C-termini are cytoplasmic, and the large central luminal loop may be glycosylated. The human nuclear protein (CLIC1 or NCC27) is much smaller (241 residues) and has only one putative TMS at positions 30-36. It is homologous to the second half of p64. Structural studies showed that in the soluble form, CLIC proteins adopt a GST fold with an active site exhibiting a conserved glutaredoxin monothiol motif, similar to the omega class GSTs. Al Khamici et al. demonstrated that CLIC proteins have glutaredoxin-like glutathione-dependent oxidoreductase enzymatic activity. CLICs 1, 2 and 4 demonstrate typical glutaredoxin-like activity using 2-hydroxyethyl disulfide as a substrate. This activity may regulate CLIC ion channel function. 
Transport reaction The generalized transport reaction believed to be catalyzed by chloride channels is: Cl− (cytoplasm) → Cl− (intraorganellar space) CFTR CFTR is a chloride channel belonging to the superfamily of ABC transporters. Each channel has two transmembrane domains and two nucleotide binding domains. ATP binding to both nucleotide binding domains causes these domains to associate, further causing changes that open up the ion pore. When ATP is hydrolyzed, the nucleotide binding domains dissociate again and the pore closes. Pathology Cystic fibrosis is caused by mutations in the CFTR gene on chromosome 7, the most common mutation being deltaF508 (a deletion of a codon coding for phenylalanine, which occupies the 508th amino acid position in the normal CFTR polypeptide). Any of these mutations can prevent the proper folding of the protein and induce its subsequent degradation, resulting in decreased numbers of chloride channels in the body. This causes the buildup of mucus in the body and chronic infections. Other chloride channels and families GABAA Glycine Receptor Calcium-activated chloride channel Anion-conducting channelrhodopsin References Further reading External links - CLC chloride channels Membrane proteins Transmembrane transporters Integral membrane proteins Protein domains
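As a small numerical check of the binomial claim made in the Gating and kinetics section above, the sketch below simulates two protopore gates that open independently with the same probability. It is illustrative only: the open probability and sample size are made-up values, and this is not code from any patch-clamp analysis package.

```python
from math import comb

import numpy as np

# If the two "fast" protopore gates of a CLC dimer open independently, each
# with probability p, the number of simultaneously open pores in a single-
# channel record should follow a Binomial(2, p) distribution.
rng = np.random.default_rng(0)
p = 0.7                                      # assumed single-gate open probability
samples = rng.binomial(2, p, size=100_000)   # number of open pores per observation

for k in range(3):
    observed = (samples == k).mean()
    predicted = comb(2, k) * p**k * (1 - p) ** (2 - k)
    print(f"{k} open pore(s): observed {observed:.3f}, binomial prediction {predicted:.3f}")
```

In a real recording one would compare the dwell-time-weighted occupancies of the zero-, one- and two-pore conductance levels against these binomial predictions; agreement is what supports the picture of two independent fast gates behind a shared slow gate.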
Chloride channel
[ "Biology" ]
2,587
[ "Protein domains", "Protein classification", "Membrane proteins" ]
1,513,538
https://en.wikipedia.org/wiki/Jonathan%20Sarfati
Jonathan David Sarfati (born 1 October 1964) is a young Earth creationist who writes articles for Creation Ministries International (CMI), a non-profit Christian apologetics ministry. Sarfati has a PhD in chemistry, and was New Zealand national chess champion in 1987 and 1988. Background Born in Ararat, Victoria, Sarfati moved with his family to New Zealand as a child, where he became a dual Australian and New Zealand citizen. He attended Wellington College in New Zealand, later graduating from Victoria University of Wellington with a BSc (Hons.) in chemistry, and a PhD in the same subject for a thesis entitled "A Spectroscopic Study of some Chalcogenide Ring and Cage Molecules". He co-authored a paper on high-temperature superconductors that was published in Nature in 1987 ("Letters to Nature"), and from 1988 to 1995, had five papers on spectroscopy of condensed matter samples published in other peer-reviewed scientific journals. In 1996, he returned to Brisbane, Australia to work for the Creation Science Foundation, then Answers in Genesis, then its current name Creation Ministries International. In 2010, he moved to the American office of that ministry. Creationism Sarfati was a founder of the Wellington Christian Apologetics Society in New Zealand, and has long retained an interest in Christian apologetics and the creation–evolution controversy. His first two books, Refuting Evolution in 1999, and Refuting Evolution 2 in 2002, are intended as rebuttals to the National Academy of Sciences' publication Teaching about Evolution and the Nature of Science and the PBS/Nova series Evolution, respectively. Refuting Compromise, published in 2004, is Sarfati's rebuttal of the day-age creationist teachings of Hugh Ross, who attempts to harmonise the Genesis account of creation with mainstream science regarding the age of the Earth and the possible size of the Biblical Flood, against which Sarfati defends a literal biblical timeline and a global flood. Eugenie Scott and Glenn Branch of the National Center for Science Education called Sarfati's Refuting Evolution 2 a "crude piece of propaganda". Sarfati is a critic of geocentrism, the Myth of the flat Earth and flat Earth teaching, homosexual behaviour, and abortion except to save the life of the mother. While opposing embryonic stem cell research, he supports adult stem cell research. Sarfati also supports vaccination and rebuts anti-vaccination arguments. Chess Sarfati is a chess FIDE Master, and achieved a draw against former world champion Boris Spassky during a tournament in Wellington in 1988, and was New Zealand's national chess champion in 1987–88. Although tied with Rey Casse for first place in the Australian Junior Championship of 1981, he was not eligible to share the title as he was a resident of New Zealand at the time. He represented New Zealand in three Chess Olympiads: the 27th in Dubai in 1986, the 28th in Thessaloniki in 1988, and the 30th in Manila in 1992. He also represented New Zealand on top board at the 5th Asian Teams in New Delhi. He has given blindfold chess exhibitions at chess clubs and other events, and has played twelve such games simultaneously. His previous best was winning 11/11 at the Kāpiti Chess Club in New Zealand. Bibliography The Genesis Account: A theological, historical, and scientific commentary on Genesis 1-11, 2015, Creation Book Publishers Christianity for Skeptics, 2012, with Steve Kumar (first author), Creation Book Publishers The Greatest Hoax on Earth? 
Refuting Dawkins on Evolution, 2010, Creation Book Publishers By Design: Evidence for nature's Intelligent Designer—the God of the Bible, 2008, Creation Book Publishers ISBN 978-0-949906-72-4 Refuting Compromise: A Biblical and Scientific Refutation of Progressive Creationism, 2004, Creation Book Publishers The Revised & Expanded Answers Book, 2003, with Carl Wieland and David Catchpoole, edited by Don Batten, Refuting Evolution 2, 2002/2011, Creation Book Publishers Refuting Evolution, 1999–2010, Creation Book Publishers References External links Jonathan D. Sarfati at creation.com 1964 births Living people 20th-century Australian male writers 20th-century Australian non-fiction writers 20th-century evangelicals 21st-century Australian male writers 21st-century Australian non-fiction writers 21st-century evangelicals Australian chess players Australian emigrants to New Zealand Australian male non-fiction writers Chess FIDE Masters Christian apologists Christian fundamentalists Christian Young Earth creationists Evangelical writers New Zealand chemists New Zealand chess players New Zealand evangelicals People educated at Wellington College, Wellington People from Ararat, Victoria Physical chemists Spectroscopists Victoria University of Wellington alumni
Jonathan Sarfati
[ "Physics", "Chemistry" ]
970
[ "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Spectroscopists", "Spectroscopy" ]
1,513,620
https://en.wikipedia.org/wiki/Desert%20Inn
The Desert Inn, also known as the D.I., was a hotel and casino on the Las Vegas Strip in Paradise, Nevada, which operated from April 24, 1950, to August 28, 2000. Designed by architect Hugh Taylor, with interiors by Jac Lessman, it was the fifth resort to open on the Strip, the first four being El Rancho Vegas, The New Frontier, Flamingo, and the Thunderbird (later renamed the El Rancho). It was situated between Desert Inn Road and Sands Avenue. The Desert Inn opened with 300 rooms and the Sky Room restaurant, which once had the highest vantage point on the Las Vegas Strip and was headed by a chef formerly of the Ritz Paris. The casino, at , was one of the largest in Nevada at the time. The nine-story St. Andrews Tower was completed during the first renovation in 1963, and the 14-story Augusta Tower became the Desert Inn's main tower when it was completed in 1978 along with the seven-story Wimbledon Tower. The Palms Tower was completed in 1997 with the second and final renovation. The Desert Inn was the first hotel in Las Vegas to feature a fountain at the entrance. In 1997, the Desert Inn underwent a $200 million renovation and expansion, but after it was purchased for $270 million by Steve Wynn in 2000, he decided to demolish it and build the Wynn Las Vegas resort and casino where the Desert Inn once stood, and later, Encore. The remaining towers of the Desert Inn were imploded in 2004. The original performance venue at the Desert Inn was the Painted Desert Room, later the Crystal Room, which opened in 1950 with 450 seats. Frank Sinatra made his Las Vegas debut there on September 13, 1951, and became a regular performer. The property included an 18-hole golf course which hosted the PGA Tour Tournament of Champions from 1953 to 1966. The golf course remained in place and is now a part of the Wynn resort. History The hotel was situated at 3145 Las Vegas Boulevard South, between Desert Inn Road and Sands Avenue. The original name was Wilbur Clark's Desert Inn. Wilbur Clark, described by Frank Sinatra biographer James Kaplan as a "onetime San Diego bellhop and Reno craps dealer", originally began building the resort with his brother in 1947 with $250,000, but ran out of money. Author Hal Rothman notes that "for nearly two years the framed structure sat in the hot desert sun, looking more like an ancient relic than a nascent casino". Clark approached the Reconstruction Finance Corporation for investment, but it was struggling financially. In 1949, he met with Moe Dalitz, the head of the notorious Cleveland Syndicate, which had ties to the Mayfield Road Mob; Dalitz agreed to fund 75% of the project with $1.3 million, and construction resumed. Much of the financing came from the American National Insurance Company (ANICO), though Clark became the public frontman of the resort while Dalitz remained quietly in the background as the principal owner. The resort would eventually be renamed Desert Inn and was called the "D.I." by Las Vegas locals and regular guests. The Desert Inn opened formally on April 24, 1950, at a two-day gala which was heavily publicized nationally. Journalists from all of the major newspapers and magazines were invited, and the hotel paid $5,700 to cover air tickets. 150 invitations were sent out by Clark to VIPs with a credit limit of $10,000. About half the attendees at the opening were from California and Nevada.
At the opening show in the Painted Desert Room were performers such as Edgar Bergen and Charlie McCarthy, Vivian Blaine, Pat Patrick, The Donn Arden Dancers, Van Heflin, Abbott and Costello, and the Desert Inn Orchestra, led by Ray Noble. In attendance were a number of mafiosi, including Black Bill Tocco, Joe Massei, Sam Maceo, Peter Licavoli, and Frank Malone, in a gala which Barbara Greenspun believed marked the beginning of heavy involvement of the mafia in the development of Las Vegas. Sidney Korshak was one of its early investors. The Desert Inn became known for its "opulence" and top-notch service. The first manager of the Desert Inn had previously worked as the manager at the Clift Hotel in San Francisco. Lew and Edie Wasserman were frequent guests of the hotel. During the 1950s, the hotel often hosted the Duke and Duchess of Windsor, Winston Churchill, Adlai Stevenson, Senator John F. Kennedy, and former President Harry S. Truman. In the mid-1940s and early 1950s the city and its Chamber of Commerce worked to keep the Vegas nickname of the "Atomic City" going to attract tourists. After the Desert Inn opened, so-called "bomb parties" famously took place in the hotel's panoramic Sky Room, where patrons could view the detonations from a relatively safe distance while drinking Atomic Cocktails. In 1959, Lawrence Wien, owner of New York City's Plaza Hotel, purchased the hotel, but signed a management deal for Clark to remain as manager. In the early 1960s, the mafia-financed casino hotels of the Las Vegas Strip and Nevada came under close scrutiny by the FBI, which placed increased pressure on the Nevada Gaming Control Board to force the mobsters out of Las Vegas. After Sam Giancana was spotted on the premises of Frank Sinatra's Cal Neva Lodge & Casino at Lake Tahoe, Sinatra's gambling license was revoked by the Board and he was forced to sell up and forfeit his share in the Sands Hotel and Casino. The Desert Inn faced similar scrutiny by the FBI, attracting controversy at the same time for the involvement of Dalitz and his mobster associates, while it simultaneously called for the prosecution of the FBI for illegal wiretapping. In 1964, Clark sold his remaining share in the hotel to Dalitz and business associates Morris Kleinman, Thomas McGinty and Sam Tucker. He died of a heart attack the following year. The bell captain of the Desert Inn, Jack Butler, remembered Clark: "Wilbur was the greatest guy. Without him this town never would've got off the ground. Everyone came into the club just to see him and he was all over the postcards. He was the only boss who would agree to have his picture taken". The Desert Inn's most famous guest, businessman Howard Hughes, arrived on Thanksgiving Day 1966, renting the hotel's entire top two floors. After staying past his initial ten-day reservation, he was asked to leave in December so that the resort could accommodate the high rollers who were expected for New Year's Eve. Instead of leaving, Hughes started negotiations to buy the Desert Inn. On March 27, 1967, Hughes purchased the resort from Dalitz for $6.2 million in cash and $7 million in loans. This was the first of many Las Vegas resort purchases by Hughes, including the Sands Hotel and Casino ($14.6 million) and the Frontier Hotel and Casino ($23 million). However, Hughes refused to include the PGA Tour Tournament of Champions in the deal, so Dalitz moved the tournament to his Stardust Resort and Casino in 1967 and 1968.
The reclusive Hughes continued to live in his penthouse suite at the Desert Inn for four years, never leaving his bedroom. Usually unclothed, he spent his time "negotiating purchases and business deals with the curtains drawn and windows and doors sealed shut with tape", and did not allow anyone from the hotel staff to come in and clean his room. On the eve of Thanksgiving 1970, he was removed from his room on a stretcher and flown to the Bahamas. After Hughes's death in 1976, the hotel remained under the Summa Corporation, which completed the extensive renovation that he had ordered. Summa sold the hotel to Kirk Kerkorian and the Tracinda Corporation in 1986, and it became known as the MGM Desert Inn. In 1992, Frank Sinatra celebrated his 77th birthday at the hotel in an event that generated much media attention. Dick Taylor, the CEO of public relations firm Rogers & Cowan recalled: "We had the stars assemble in the casino's presidential suite and then took them in limos to the entrance of the hotel, where the press and hundreds of fans were gathered, like a Hollywood movie premiere. The stars were interviewed on the red carpet and in they went to the famed Crystal Room. It was a very big deal." Modern history Kerkorian sold the resort to ITT Sheraton in 1993 for $160 million and it was renamed the Sheraton Desert Inn. In May 1994, ITT Sheraton announced plans to build the Sheraton Desert Kingdom, a $750 million, 3,500-room megaresort on the property, adjacent to the existing Sheraton Desert Inn. When ITT Sheraton bought Caesars World in December 1994, plans for the new resort were shelved. In 1997, ITT Sheraton undertook a $200 million renovation of the Augusta Tower and St. Andrews Tower and expansion, with the building and completion of the Palms Tower. The resort was returned to its historic name, The Desert Inn, dropping the Sheraton name, and was placed in the ITT Sheraton Luxury Collection division. ITT Sheraton itself was sold the following year to Starwood. Due to losing money, Starwood immediately put The Desert Inn up for sale, and contracted a sale to Sun International Hotels Ltd. on May 19, 1999, for $275 million. The sale to Sun International fell through the following March, however. Also in 1999, Sinatra's and the Rat Pack's estate managers, Sheffield Enterprises Inc., sued the Desert Inn, claiming an infringement of rights in their use of Sinatra's name and persona in its advertising and sales, including the words "Frank", "Ol' Blue Eyes", "the chairman of the Board" and "The Rat Pack". Sinatra's estate specifically objected to their use in "billboard advertising, marquees, alcoholic beverages and wine menus, and on the front and back of tee-shirts and caps at its gift shop" and alleged photographs of Sinatra and his signature on the walls behind the bar near the entrance to the Starlight Lounge of the Desert Inn. The Desert Inn celebrated its 50th anniversary on April 24, 2000. Celebrations were held for a week and a celebrity golf tournament was held with the likes of Robert Loggia, Chris O'Donnell, Robert Urich, Susan Anton, Vincent Van Patten and Tony Curtis. As part of the festivities, a time capsule was buried in a granite burial chamber on April 25, to be reopened on April 25, 2050. Three days later, on April 27, Steve Wynn purchased the resort from Starwood for $270 million. Wynn closed the Desert Inn at 2:00 a.m. on August 28, 2000. 
On October 23, 2001, the Augusta Tower, the Desert Inn's southernmost building, was imploded to make room for a mega-resort that Wynn planned to build. Coming a month after the September 11 attacks, the implosion was marked with less fanfare than previous Las Vegas demolition spectacles due to its similarity to the collapse of the Twin Towers. Originally intended to be named Le Rêve, the new project opened as Wynn Las Vegas. The remaining two towers, the St. Andrews Tower and the Palms Tower, were both temporarily used as the Wynn Gallery, spanning to display some of Wynn's art collection. The St. Andrews Tower and Palms Tower were finally imploded on November 16, 2004. Architecture and features The initial hotel, a $6.5 million property set in 200 acres, was designed by Hugh Taylor, who was hired after Wilbur Clark and Wayne McAllister could not agree on the design. Interiors were by noted New York architect Jac Lessman. The property conveyed the image of a "southwestern spa" that was "half ranch house, half nightclub". It was built of "cinder blocks but trimmed with sandstone and finished throughout the inside with redwood". The logo of the hotel was a Joshua tree cactus. The driveway into the hotel passed under an "old-fashioned ranch sign" bearing the name Wilbur Clark's Desert Inn in scripted letters. The Desert Inn was the first hotel in Las Vegas to feature a fountain at the entrance. A "Dancing Waters" show involved the fountain jets choreographed to music. The interior of the hotel was finished in redwood with flagstone flooring. The public space included a registration area, a casino, two bars, a coffee shop, a restaurant, various commercial shops and services, and a broadcasting station for K-RAM radio. Guest rooms were located in wings situated behind the main building, surrounding the figure-eight swimming pool. The hotel originally had 300 rooms, each outfitted with air conditioning with individual thermostats. The lounge was located in a three-story, glass-sided tower at the front of the hotel known as the Sky Room, which was the largest structure on the Strip at the time of its construction, commanded views of the mountains and desert all around, and overlooked the "Dancing Waters" feature. The Sky Room restaurant was headed by a chef formerly of the Ritz Paris. The original performance venue at the Desert Inn was the 450-seat Painted Desert Room, later the Crystal Room, which opened in 1950. Charles Cobelle created the handpainted murals, and a "band car" was used to move the orchestra within the showroom. Next door was a restaurant, the Cactus Room. The Kachina Doll Ranch was a supervised play area for guests' children. The hotel had a ladies' salon and health club from the outset. Another performance venue at the hotel was the Lady Luck Lounge. The hotel had its first addition in 1963, when the St. Andrews Tower was built. The tower was designed by William B. Tabler. A near-identical tower by Tabler was built concurrently at the Stardust, which was under common ownership. In the 1970s, the hotel underwent a $54-million renovation under Howard Hughes, which continued under the responsibility of the Summa Corporation after his death in 1976. The 14-story Augusta Tower became the Desert Inn's main tower when it was completed in 1978. The seven-story Wimbledon Tower contained duplex suites, and resembled a modern version of a Mayan pyramid. It overlooked the golf course and was built at the same time, bringing the total room count to 825.
By 1978, most of the 1950s structures on the property had been replaced with modern buildings and the property was renamed the Desert Inn and Country Club. It featured full country club amenities open to guests of the hotel, including a club house, driving range, pro shop, restaurant and lounge at the golf club; 10 tournament-class outdoor tennis courts; and a spa. Three restaurants were added: the "small, intimate" Monte Carlo Room, the "gourmet" Portofino Room, and the Ho Wan Chinese restaurant. At the time of its sale to ITT-Sheraton in 1993, the Desert Inn had the largest frontage of any casino hotel on the Las Vegas Strip, measuring feet. In 1997, the Desert Inn underwent a $200 million renovation and expansion by Steelman Partners, giving it a new Mediterranean-looking exterior with white stucco and red clay tile roofs. The room count was reduced to 715 to provide more luxurious accommodations. The nine-story Palm Tower was completed, the lagoon-style pool was added, and notable changes were made to the Grand Lobby Atrium, Starlight Lounge, Villas Del Lago, and new golf shop and country club. The seven-story lobby, fully built in marble, was also a major part of the renovation. Casino At its opening in 1950, the casino, at , was one of the largest in Nevada at the time. The windowless room included "five crap tables, three roulette wheels, four black jack tables and 75 slot machines", together with a sportsbook. Hundreds of coin-operated gambling machines – including slot machines, video poker, 21, and keno – were installed during the 1978 renovation. The casino acquired a reputation for attracting the high rollers. On January 27, 2000, the Megabucks jackpot record for Las Vegas was broken when $34,955,489 was won by an anonymous gambler at the Desert Inn, playing a bank of six Megabucks machines near the hotel's coffee shop. Golf course and country club The 18-hole, par-72 Desert Inn Golf Club opened in 1952. Initially, Dalitz had pushed the idea of opening a golf course next to the hotel with an entrance off the Strip, which would be accessible to other hotels and boost the city's profile as a resort destination. When other hotel owners rejected this idea, Dalitz built the course on the hotel premises. He also opened an outdoor dining area, to accommodate golfers and swimmers who might prefer a more informal atmosphere. The course hosted the PGA Tour Tournament of Champions from 1953 to 1966, attracting professional golfers such as Sam Snead, Arnold Palmer, and Jack Nicklaus. Allard Roen was director of the tournament for many years and was instrumental in breaking down the race barrier on the Strip. He broke the all-white club convention by permitting Sammy Davis, Jr. to play on the course. From 1958 it hosted the Golf Cup Golf Tournament, the largest tournament in the world for amateur golfers. According to the Las Vegas Sun, the course "held the distinction of being the only golf course in the United States to have annually hosted three championship tour events – the PGA Tour's Las Vegas Invitational, the Las Vegas Senior Classic and the LPGA Las Vegas International". The Panasonic Las Vegas Invitational, now the Las Vegas Invitational, returned to the Desert Inn in 1983, and became known as the wealthiest PGA event in the world. It has since been won by the likes of Fuzzy Zoeller, Curtis Strange, Greg Norman, and Paul Azinger. 
The Las Vegas Senior Classic event at the Desert Inn was added to the Senior PGA Tour in 1986 and has since been won by Bruce Crampton (1986), Al Geiberger, who equaled the then-course record of 62 (1987), Larry Mowry (1988) and Lee Trevino (1992). Wilbur Clark was the first to build a home on the golf course in the 1950s. Additional homes were added to the Desert Inn Country Club Estates from the 1960s on. During his ownership of the hotel, Howard Hughes built 100 residential units on the property. After Steve Wynn purchased the resort in 2000 and announced that the real estate was too valuable to leave as a golf course, homeowners were forced to sell their properties to Wynn and his property developer Irwin Molasky. Molasky bought homes closest to the golf course for $2 million each, and homes on the perimeter of the resort for $900,000 to $1.2 million each. The Junior League of Las Vegas convinced Wynn to save one house from demolition and moved it to a lot in downtown Las Vegas to serve as its headquarters. This was the Morelli House, designed by architect Hugh Taylor for Antonio Morelli, a "rare example of modernist architecture in Las Vegas". The house was subsequently listed on the City and State historic registers. Performances Almost every major star of the latter half of the 20th century played at the Desert Inn. Frank Sinatra made his Las Vegas debut at the Desert Inn on September 13, 1951. He later said of it: "Wilbur Clark gave me my first job in Las Vegas. That was in 1951. For six bucks you got a filet mignon dinner and me". Noël Coward performed at the Inn on one occasion for an entire month. In 1954, after a performance at the Desert Inn, Betty Hutton announced one of her several retirements. In 1958, Tony Martin was signed to a five-year deal at $25,000 per week, making him the highest-paid performer in Las Vegas. Eddie Fisher was heckled by a disguised Elizabeth Taylor during a 1961 performance, in a year which saw Dinah Shore booked for her fourth performance and debut Vegas performances at the Desert Inn by both Benny Goodman and Rosemary Clooney. In 1979, Jet magazine noted that Wayne Newton was "enthroned" at the Desert Inn as "king of entertainment idols", earning $10 million a year, which made him the highest-paid nightclub performer of all time. Other performers in its famous "crystal showroom" over the years included Patti Page, Ted Lewis, Joe E. Lewis, Bobby Darin, Jimmy Durante, Tony Bennett, Paul Anka, Dionne Warwick, Louise Mandrell, and more. Louis Prima and Keely Smith recorded their 1960 Dot Records LP On Stage live at the Desert Inn. Bobby Darin's famous album Live! At the Desert Inn was recorded at the hotel in February 1971. In 1992, a week-long celebration of Frank Sinatra's 77th birthday was held at the Desert Inn, and the following January it was announced that Sinatra, Liza Minnelli, Paul Anka, Shirley MacLaine, Dean Martin, Steve Lawrence and Eydie Gorme had all signed a two-year engagement agreeing to perform at least five weeks annually.
In the 1985 film Lost in America, Julie Hagerty's character Linda Howard loses the couple's "nest egg" at the Desert Inn, leading to a memorable scene in which Albert Brooks' character David Howard tries to convince the casino manager (Garry Marshall) to give them their money back. David, an ad man, proposes a campaign centered around the generosity of the casino in his case, replete with a jingle: "The Desert Inn has heart... The Desert Inn has heart." The opening scene of the 1993 film Sister Act 2: Back in the Habit took place in the Grand Ballroom of the hotel. The Desert Inn saw its last commercial use in the 2001 film Rush Hour 2, shortly before it was imploded. It was converted into the "Red Dragon", an Asian-themed casino set. The hotel served as the primary backdrop for the TV show Vega$, which aired on ABC from 1978 to 1981. The 1980s Aaron Spelling soap opera Dynasty included footage of the hotel, and use of the Presidential Suite. The hit 1980s NBC TV series Remington Steele filmed its 60th episode, set in Las Vegas, at the inn; both the exterior and the interior are shown regularly throughout the episode. Legacy The closure of the Desert Inn in 2000 and its subsequent demolition were unpopular with many, as they seemed to mark the end of old Las Vegas. Historian Michael Green stated: "To a lot of people outside of Las Vegas, these two places (the Desert Inn and the Sands) really meant Las Vegas. These were the places that represent the images of Las Vegas, in a far greater way than the Dunes, the Aladdin, the Hacienda and the Landmark". Robert Maheu, Howard Hughes's head of Nevada operations and publicist for many years, remarked that the "Desert Inn was the gem of Las Vegas". The hotel remained popular with locals until the end, even as the heavily tourism-driven modern Las Vegas emerged in the 1990s. Desert Inn Road Desert Inn Road is a 17¼-mile west–east road that forms part of the Las Vegas Valley grid road system. It travels through residential, commercial, and industrial areas and is a major thoroughfare in the area. At the Las Vegas Strip, a 2½-mile expressway portion of the road, officially called the Desert Inn Road Super Arterial, acts as an arterial route between Winchester and Paradise. The expressway opened in 1996 and had a construction cost of US$84 million. See also References Citations Sources External links Video of Desert Inn implosion, October 23, 2001 Jac Lessman architectural records and papers, 1925-1975. Held by the Department of Drawings & Archives, Avery Architectural & Fine Arts Library, Columbia University. Defunct casinos in the Las Vegas Valley Defunct hotels in the Las Vegas Valley Las Vegas Strip Howard Hughes Landmarks in Nevada Skyscraper hotels in Paradise, Nevada Casinos completed in 1950 Hotel buildings completed in 1950 Hotel buildings completed in 1967 Hotel buildings completed in 1997 Demolished hotels in Clark County, Nevada Buildings and structures demolished by controlled implosion Buildings and structures demolished in 2001 Buildings and structures demolished in 2004 Hotels established in 1950 1950 establishments in Nevada 2000 disestablishments in Nevada Casino hotels Hotels disestablished in 2000
Desert Inn
[ "Engineering" ]
5,088
[ "Buildings and structures demolished by controlled implosion", "Architecture" ]
1,513,690
https://en.wikipedia.org/wiki/Miguel%20Alcubierre
Miguel Alcubierre Moya (born March 28, 1964) is a Mexican theoretical physicist. Alcubierre is known for the proposed Alcubierre drive, a speculative warp drive by which a spacecraft could achieve faster-than-light travel. Personal life Alcubierre was born in Mexico City. His father, Miguel Alcubierre Ortiz, a Spanish refugee, arrived in Mexico shortly after the Spanish Civil War with his own father, Miguel Alcubierre Pérez. Alcubierre has three younger siblings, among them the historian Beatriz Alcubierre Moya. From elementary school through high school, Alcubierre attended Colegio Ciudad de México. When he was 13, his father bought him a small telescope, which, together with sci-fi shows such as Star Trek, motivated him to pursue a scientific career. At the age of 15, after having read Patrick Moore and David Hardy's Challenge of the Stars, Alcubierre decided that he wanted to become an astronomer. He knew that to achieve this he needed to study physics first. Academic life Alcubierre obtained a Licentiate degree in physics in 1988 and an MSc degree in theoretical physics in 1990, both at the National Autonomous University of Mexico (UNAM). At the end of 1990, Alcubierre moved to Wales to attend graduate school at Cardiff University, receiving his PhD degree in 1994 for research in numerical general relativity. After 1996 he worked at the Max Planck Institute for Gravitational Physics in Potsdam, Germany, developing new numerical techniques used in the description of black holes. Since 2002, he has worked at the Nuclear Sciences Institute of UNAM, where he conducts research in numerical relativity, employing computers to formulate and solve the physical equations first proposed by Albert Einstein. The solitary wave solutions proposed by Alcubierre for the Einsteinian field equations may prove general relativity consistent with the experimentally verified non-locality of quantum mechanics. This work militates against the idea that quantum non-locality would ultimately require abandoning the mathematical structure of general relativity. On June 11, 2012, Alcubierre was appointed Director of the Nuclear Sciences Institute at UNAM. On June 14, 2016, the Governing Board of UNAM re-elected Alcubierre as Director of the Nuclear Sciences Institute for another four-year period. May 1994 paper Alcubierre is best known for the proposal "The Warp Drive: Hyper-fast travel within general relativity", published in the science journal Classical and Quantum Gravity. In it, he describes the Alcubierre drive, a theoretical means of traveling faster than light that does not violate the physical principle that nothing can locally travel faster than light. In this paper, he constructed a model that might transport a volume of flat space inside a "bubble" of curved space. This bubble, referred to as a hyper-relativistic local-dynamic space, is driven forward by a local expansion of space-time behind it and an opposite contraction in front of it, so that, theoretically, a spaceship inside the bubble would be carried along by the deformation of space-time rather than by any locally applied force. Media appearances Miguel Alcubierre made a special appearance on the TV productions How William Shatner Changed the World and Michio Kaku's Sci Fi Science: Physics of the Impossible, in which his warp bubble theory was discussed.
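The warp-bubble geometry referred to here is conventionally summarized by the line element of the 1994 paper. The following is a standard rendering of the Alcubierre metric, reproduced as general background rather than quoted from this article, in units where c = 1:

ds^2 = -dt^2 + (dx - v_s f(r_s) dt)^2 + dy^2 + dz^2

Here x_s(t) is the trajectory of the bubble's centre, v_s = dx_s/dt is its coordinate velocity, r_s is the distance from that centre, and f(r_s) is a smooth "top-hat" shaping function equal to 1 inside the bubble and falling to 0 far outside it; the expansion of space behind the bubble and the contraction in front of it follow from this choice of f.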
Alcubierre has been invited twice to interviews on radio station Radio Educación XEEP (1060 AM), first on February 18, 2011, and later on March 4, 2011, on the technology-related talk show Interfase, where he explained his views on the current state of scientific and technology research in Mexico, and gave a brief introduction to his warp drive model and how it came to be. Textbook Introduction to 3+1 Numerical Relativity (International Series of Monographs on Physics, Paperback, 2012, ) References 1964 births Living people 21st-century Mexican physicists Mexican nuclear physicists Scientists from Mexico City National Autonomous University of Mexico alumni Academic staff of the National Autonomous University of Mexico Alumni of Cardiff University Theoretical physicists
Miguel Alcubierre
[ "Physics" ]
834
[ "Theoretical physics", "Theoretical physicists" ]
1,513,755
https://en.wikipedia.org/wiki/Multiuser%20DOS
Multiuser DOS is a real-time multi-user multi-tasking operating system for IBM PC-compatible microcomputers. An evolution of the older Concurrent CP/M-86, Concurrent DOS and Concurrent DOS 386 operating systems, it was originally developed by Digital Research and acquired and further developed by Novell in 1991. Its ancestry lies in the earlier Digital Research 8-bit operating systems CP/M and MP/M, and the 16-bit single-tasking CP/M-86 which evolved from CP/M. When Novell abandoned Multiuser DOS in 1992, the three master value-added resellers (VARs) DataPac Australasia, Concurrent Controls and Intelligent Micro Software were allowed to take over and continued independent development into Datapac Multiuser DOS and System Manager, CCI Multiuser DOS, and IMS Multiuser DOS and REAL/32. The FlexOS line, which evolved from Concurrent DOS 286 and Concurrent DOS 68K, was sold off to Integrated Systems, Inc. (ISI) in July 1994. Concurrent CP/M-86 The initial version of CP/M-86 1.0 (with BDOS 2.x) was adapted and became available to the IBM PC in 1982. It was commercially unsuccessful as IBM's PC DOS 1.0 offered much the same facilities for a considerably lower price. Neither PC DOS nor CP/M-86 could fully exploit the power and capabilities of the new 16-bit machine. It was soon supplemented by an implementation of CP/M's multitasking 'big brother', MP/M-86 2.0, since September 1981. This turned a PC into a multiuser machine capable of supporting multiple concurrent users using dumb terminals attached by serial ports. The environment presented to each user made it seem as if they had the entire computer to themselves. Since terminals cost a fraction of the then-substantial price of a complete PC, this offered considerable cost savings, as well as facilitating multi-user applications such as accounts or stock control in a time when PC networks were rare, very expensive and difficult to implement. CP/M-86 1.1 (with BDOS 2.2) and MP/M-86 2.1 were merged to create Concurrent CP/M-86 3.0 (also known as CCP/M-86) with BDOS 3.0 in late 1982. Kathryn Strutynski, the project manager for CP/M-86, was also the project manager for Concurrent CP/M-86. One of its designers was Francis "Frank" R. Holsworth. Initially, this was a single-user operating system supporting true multi-tasking of up to four (in its default configuration) CP/M-86 compatible programs. Like its predecessors it could be configured for multi-processor support (see e.g. Concurrent CP/M-86/80) and also added "virtual screens" letting an operator switch between the interactions of multiple programs. Later versions supported dumb terminals and so could be deployed as multiuser systems. Concurrent CP/M-86 3.1 (BDOS 3.1) shipped on 21 February 1984. Adaptations Concurrent CP/M-86 with Windows In February 1984 Digital Research also offered a version of Concurrent CP/M-86 with windowing capabilities named Concurrent CP/M with Windows for the IBM Personal Computer and Personal Computer XT. Concurrent CP/M-86/80 This was an adaptation of Concurrent CP/M-86 for the LSI-M4, LSI Octopus and CAL PC computers. These machines had both 16-bit and 8-bit processors, because in the early days of 16-bit personal computing, 8-bit software was more available and often ran faster than the corresponding 16-bit software. Concurrent CP/M-86/80 allowed users to run both CP/M (8-bit) and CP/M-86 (16-bit) applications. 
When a command was entered, the operating system ran the corresponding application on either the 8-bit or the 16-bit processor, depending on whether the executable file had a .COM or .CMD extension. It emulated a CP/M environment for 8-bit programs by translating CP/M system calls into CP/M-86 system calls, which were then executed by the 16-bit processor. Concurrent DOS In August 1983, Bruce Skidmore, Raymond D. Pedrizetti, Dave Brown and Gordon Edmonds teamed up to create PC-MODE, an optional module for Concurrent CP/M-86 3.1 (with BDOS 3.1) to provide basic compatibility with PC DOS 1.1 (and MS-DOS 1.1). This was shown publicly at COMDEX in December 1983 and shipped in March 1984 as Concurrent DOS 3.1 (a.k.a. CDOS with BDOS 3.1) to hardware vendors. Simple DOS applications, which did not directly access the screen or other hardware, could be run. For example, although a console program such as PKZIP worked perfectly and offered more facilities than the CP/M-native ARC archiver, applications which performed screen manipulations, such as the WordStar word processor for DOS, would not, and native Concurrent CP/M (or CP/M-86) versions were required. While Concurrent DOS 3.1 up to 4.1 had been developed in the US, OEM adaptations and localizations were carried out by DR Europe's OEM Support Group in Newbury, UK, since 1983. Digital Research positioned Concurrent DOS 4.1 with GEM as alternative for IBM's TopView in 1985. Concurrent PC DOS Concurrent DOS 3.2 (with BDOS 3.2) in 1984 was compatible with applications for CP/M-86 1.x, Concurrent CP/M-86 3.x and PC DOS 2.0. It was available for many different hardware platforms. The version with an IBM PC compatible BIOS/XIOS was named Concurrent PC DOS 3.2. Kathryn Strutynski was the product manager for Concurrent PC DOS. Concurrent DOS 68K and FlexOS 68K Efforts being part of a cooperation with Motorola since 1984 led to the development of Concurrent DOS 68K in Austin, Texas, as a successor to CP/M-68K written in C. One of its main architects was Francis "Frank" R. Holsworth (using siglum FRH). Concurrent DOS 68K 1.0 became available for OEM evaluation in early 1985. The effort received considerable funding worth several million dollars from Motorola and was designed for their 68000/68010 processors. Like the earlier GEMDOS system for 68000 processors it initially ran on the Motorola VME/10 development system. Concurrent DOS 68K 1.20/1.21 was available in April 1986, offered for about to OEMs. This system evolved into FlexOS 68K in late 1986. Known versions include: Concurrent DOS 68K 1.0 (1985) Concurrent DOS 68K 1.1 Concurrent DOS 68K 1.20 (April 1986, 1986-05-27) Concurrent DOS 68K 1.21 (1986) Concurrent DOS 286 and FlexOS 286 In parallel to the Concurrent DOS 68K effort, Digital Research also previewed Concurrent DOS 286 in cooperation with Intel in January 1985. This was based on MP/M-286 and Concurrent CP/M-286, on which Digital Research had worked since 1982. Concurrent DOS 286 was a complete rewrite in the C language based on a new system architecture with dynamically loadable device drivers instead of a static BIOS or XIOS. One of its main architects was Francis "Frank" R. Holsworth. The operating system would function strictly in 80286 native mode, allowing protected mode multi-user, multitasking operation while running 8086 emulation. 
While this worked on the B-1 step of prototype chip samples, Digital Research, with evaluation copies of their operating system already shipping in April, discovered problems with the emulation on the production level C-1 step of the processor in May, which would not allow Concurrent DOS 286 to run 8086 software in protected mode. The release of Concurrent DOS 286 had been scheduled for late May, but was delayed until Intel could develop a new version of the chip. In August, after extensive testing E-1 step samples of the 80286, Digital Research said that Intel had corrected all documented 286 errata, but that there were still undocumented chip performance problems with the prerelease version of Concurrent DOS 286 running on the E-1 step. Intel said that the approach Digital Research wished to take in emulating 8086 software in protected mode differed from the original specifications; nevertheless they incorporated into the E-2 step minor changes in the microcode that allowed Digital Research to run emulation mode much faster (see LOADALL). These same limitations affected FlexOS 286 version 1.x, a reengineered derivation of Concurrent DOS 286, which was developed by Digital Research's new Flexible Automation Business Unit in Monterey, California, since 1986. Later versions added compatibility with PC DOS 2.x and 3.x. Known versions include: Concurrent DOS 286 1.0 (1985) Concurrent DOS 286 1.1 (1986-01-07) Concurrent DOS 286 1.2 (1986) FlexOS 286 1.3 (November 1986) FlexOS 286 1.31 (May 1987) Concurrent DOS XM and Concurrent DOS 386 The OEM Support Group was relocated into Digital Research's newly created European Development Centre (EDC) in Hungerford, UK in 1986, which started to take over further development of the Concurrent DOS family since Concurrent DOS 4.11, including siblings like DOS Plus and successors. Developed in Hungerford, UK, versions 5 and 6 (Concurrent DOS XM, with XM standing for Expanded Memory) could bank switch up to 8 MB of EEMS to provide a real-mode environment to run multiple CP/M-86 and DOS programs concurrently and support up to three users (one local and up to two hooked up via serial terminals). In 1987, Concurrent DOS 86 was rewritten to become Concurrent DOS 386, still a continuation of the classical XIOS & BDOS architecture. This ran on machines equipped with the Intel 80386 and later processors, using the 386's hardware facilities for virtualizing the hardware, allowing most DOS applications to run unmodified under Concurrent DOS 386, even on terminals. The OS supported concurrent multiuser file access, allowing multiuser applications to run as if they were on individual PCs attached to a network server. Concurrent DOS 386 allowed a single server to support a number of users on dumb terminals or inexpensive low-specification PCs running terminal emulation software, without the need for expensive workstations and then-expensive network cards. It was a true multiuser system; several users could use a single database with record locking to prevent mutual interference. Concurrent DOS 6.0 represented also the starting point for the DR DOS family, which was carved out of it. 
Known versions include: DR Concurrent PC DOS XM 5.0 (BDOS 5.0) DR Concurrent DOS XM 5.0 (BDOS 5.0, October 1986) DR Concurrent DOS XM 5.1 (BDOS 5.1?, January 1987) DR Concurrent DOS XM 5.2 (BDOS 5.2?, September 1987) DR Concurrent DOS XM 6.0 (BDOS 6.0, 1987-11-18), 6.01 (1987) DR Concurrent DOS XM 6.2 (BDOS 6.2), 6.21 DR Concurrent DOS 386 1.0 (BDOS 5.0?, 1987) DR Concurrent DOS 386 1.1 (BDOS 5.2?, September 1987) DR Concurrent DOS 386 2.0 (BDOS 6.0, 1987-11-18), 2.01 DR Concurrent DOS 386 3.0 (BDOS 6.2, December 1988, January 1989), 3.01 (1989-05-19), 3.02 (1989) Concurrent PC DOS XM 5.0 emulated IBM PC DOS 2.10, whereas Concurrent DOS XM 6.0 and Concurrent DOS 386 2.0 were compatible with IBM PC DOS 3.30. Adaptations Known CCI Concurrent DOS adaptations by Concurrent Controls, Inc. include: CCI Concurrent DOS 386 1.12 (BDOS 5.0?, October 1987) CCI Concurrent DOS 386 2.01 (BDOS 6.0?, May 1988) CCI Concurrent DOS 386 3.01 (BDOS 6.2?, March 1989) CCI Concurrent DOS 386 3.02 (April 1990) CCI Concurrent DOS 386 3.03 (March 1991) CCI Concurrent DOS 386 3.04 (July 1991) aka "CCI Concurrent DOS 4.0" CCI Concurrent DOS 3.05 R1 (1992-02), R2 (1992), R3+R4 (1992), R5+R6 (1992), R7+R8 (1993), R9+R10 (1993), R11 (August 1993) CCI Concurrent DOS 3.06 R1 (December 1993), R2+R3 (1994), R4+R5+R6 (1994), R7 (July 1994) CCI Concurrent DOS 3.07 R1 (March 1995), R2 (1995), R3 (1996), R4 (1996), R5 (1997), R6 (1997), R7 (June 1998) CCI Concurrent DOS 3.08 CCI Concurrent DOS 3.10 R1 (2003-10-05) Other adaptations include: Apricot Concurrent DOS 386 2.01 (1987) for Apricot Quad Version Level 4.3 Multiuser DOS Later versions of Concurrent DOS 386 incorporated some of the enhanced functionality of DR's later single-user PC DOS clone DR DOS 5.0, after which the product was given the more explanatory name "Multiuser DOS" (a.k.a. MDOS), starting with version 5.0 (with BDOS 6.5) in 1991. Multiuser DOS suffered from several technical limitations that restricted its ability to compete with LANs based on PC DOS. It required its own special device drivers for much common hardware, as PC DOS drivers were not multiuser or multi-tasking aware. Driver installation was more complex than the simple PC DOS method of copying the files onto the boot disk and modifying CONFIG.SYS appropriately: it was necessary to relink the Multiuser DOS kernel (known as a nucleus) using the SYSGEN command. Multiuser DOS was also unable to use many common PC DOS additions such as network stacks, and it was limited in its ability to support later developments in the PC-compatible world, such as graphics adaptors, sound cards, CD-ROM drives and mice. Although many of these shortcomings were soon rectified (for example, graphical terminals were developed, allowing users to use CGA, EGA and VGA software), it was less flexible in this regard than a network of individual PCs, and as the prices of these fell, it became less and less competitive, although it still offered benefits in terms of management and lower total cost of ownership. As a multi-user operating system its price was, of course, higher than that of a single-user system, and it required special device drivers, unlike single-user multitasking DOS add-ons such as Quarterdeck's DESQview. Unlike MP/M, it never became popular for single-user, multitasking use.
When Novell acquired Digital Research in 1991 and abandoned Multiuser DOS in 1992, the three Master VARs DataPac Australasia, Concurrent Controls and Intelligent Micro Software were allowed to license the source code of the system to take over and continue independent development of their derivations in 1994. Known versions include: DR Multiuser DOS 5.00 (1991), 5.01 Novell DR Multiuser DOS 5.10 (1992-04-13), 5.11 Novell DR Multiuser DOS 5.13 (BDOS 6.6, 1992) All versions of Digital Research and Novell DR Multiuser DOS reported themselves as "IBM PC DOS" version 3.31. Adaptations DataPac Australasia Known versions by DataPac Australasia Pty Limited include: Datapac Multiuser DOS 5.0 Datapac Multiuser DOS 5.1 (BDOS 6.6) Datapac System Manager 7.0 (1996-08-22) In 1997, Datapac was bought by Citrix Systems, Inc., and System Manager was abandoned soon after. In 2002 the Sydney-based unit was spun out into Citrix' Advanced Products Group. Concurrent Controls Known CCI Multiuser DOS versions by Concurrent Controls, Inc. (CCI) include: CCI Multiuser DOS 7.00 CCI Multiuser DOS 7.10 CCI Multiuser DOS 7.21 CCI Multiuser DOS 7.22 R1 (September 1996), R2 (1996), R3 (1997), R4 GOLD/PLUS/LITE (BDOS 6.6, 1997-02-10), R5 GOLD (1997), R6 GOLD (1997), R7 GOLD (June 1998), R8 GOLD, R9 GOLD, R10 GOLD, R11 GOLD (2000-09-25), R12 GOLD (2002-05-15), R13 GOLD (2002-07-15), R14 GOLD (2002-09-13), R15 GOLD, R16 GOLD (2003-10-10), R17 GOLD (2004-02-09), R18 GOLD (2005-04-21) All versions of CCI Multiuser DOS report themselves as "IBM PC DOS" version 3.31. Similar to SETVER under DOS, this can be changed using the Multiuser DOS utility. In 1999, CCI changed its name to Applica, Inc. In 2002 Applica Technology became Aplycon Technologies, Inc. Intelligent Micro Software, Itera and Integrated Solutions DOS 386 Professional IMS Multiuser DOS Known adaptations of IMS Multiuser DOS include: IMS Multiuser DOS Enhanced Release 5.1 (1992) IMS Multiuser DOS 5.11 IMS Multiuser DOS 5.14 IMS Multiuser DOS 7.0 IMS Multiuser DOS 7.1 (BDOS 6.7, 1994) All versions of IMS Multiuser DOS report themselves as "IBM PC DOS" version 3.31. REAL/32 Intelligent Micro Software Ltd. (IMS) of Thatcham, UK, acquired a license to further develop Multiuser DOS from Novell in 1994 and renamed their product REAL/32 in 1995. Similar to FlexOS/4690 OS before, IBM in 1995 licensed REAL/32 7.50 to bundle it with their 4695 POS terminals. IMS REAL/32 versions: IMS REAL/32 7.50 (BDOS 6.8, 1995-07-01), 7.51 (BDOS 6.8), 7.52 (BDOS 6.9), 7.53 (BDOS 6.9, 1996-04-01), 7.54 (BDOS 6.9, 1996-08-01) IMS REAL/32 7.60 (BDOS 6.9, February 1997), 7.61, 7.62, 7.63 IMS REAL/32 7.70 (November 1997), 7.71, 7.72, 7.73, 7.74 (1998) IMS REAL/32 7.80, 7.81 (February 1999), 7.82, 7.83 (BDOS 6.10) IMS REAL/32 7.90 (1999), 7.91, 7.92 ITERA IMS REAL/32 7.93 (June 2002), 7.94 (BDOS 6.13, 2003-01-31) Integrated Solutions IMS REAL/32 7.95 REAL/32 7.50 to 7.74 report themselves as "IBM PC DOS" version 3.31, whereas 7.80 and higher report a version of 6.20. LBA and FAT32 support was added with REAL/32 7.90 in 1999. On 19 April 2002, Intelligent Micro Software Ltd. filed for insolvency and was taken over by one of its major customers, Barry Quittenton's Itera Ltd. This company was dissolved on 2006-03-28. As of 2010 REAL/32 was supplied by Integrated Solutions of Thatcham, UK, but the company, at the same address, was later listed as builders. REAL/NG REAL/NG was IMS' attempt to create the "Next Generation" of REAL/32, also named "REAL/32 for the internet age". 
REAL/NG promised "increased range of hardware from PCs to x86 multi-processor server systems". Advertised feature list, as of 2003: Runs with Red Hat 7.3 or later version of Linux Backward compatible with DOS and REAL/32 Max 65535 virtual consoles; each of these can be a user No Linux expertise required Administration/setup/upgrade by web browser (local and remote) Supplied with TCP/IP Linux-/Windows-based terminal emulator for the number of users purchased Print and file sharing built in Drive mapping between Linux and REAL/NG servers built in User hardware support Increased performance Vastly increased TPA Multi-processor support Improved hardware support Built-in firewall support Very low cost per seat Low total cost of ownership Supplied on CD Supplied with a set of Red Hat CDs By 10 December 2003, IMS made "REALNG V1.60-V1.19-V1.12" available, which, based on the Internet Archive, seems to be the latest release. By 2005, the realng.com website was mirroring the IMS main website, and had no mention of REAL/NG, only REAL/32. Application software While the various releases of this operating system had increasing ability to run DOS programs, software written for the platform could take advantage of its features by using function calls specifically suitable for multiuser operation. It used pre-emptive multitasking, preventing badly-written applications from delaying other processes by retaining control of the processor. To this day, Multiuser DOS is supported by popular SSL/TLS libraries such as wolfSSL. The API provided support for blocking and non-blocking message queues, mutual-exclusion queues, the ability to create sub-process threads which executed independently from the parent, and a method of pausing execution which did not waste processor cycles, unlike idle loops used by single-user operating systems. Applications were started as "attached" to a console. However, if an application did not need user interaction it could "detach" from the console and run as a background process, later reattaching to a console if needed. Another key feature was that the memory management supported a "shared" memory model for processes (in addition to the usual models available to normal DOS programs). In the shared memory model the "code" and "data" sections of a program were isolated from each other. Because the "code" contained no modifiable data, code sections in memory could be shared by several processes running the same program, thereby reducing memory requirements. Programs written, or adapted, for any multitasking platform need to avoid the technique used by single-tasking systems of going into endless loops until interrupted when, for example, waiting for a user to press a key; this wasted processor time that could be used by other processes. Instead, Concurrent DOS provided an API call which a process could call to "sleep" for a period of time. Later versions of the Concurrent DOS kernel included Idle Detection, which monitored DOS API calls to determine whether the application was doing useful work or in fact idle, in which case the process was suspended allowing other processes to run. Idle Detection was the catalyst for the patented DR-DOS Dynamic Idle Detection power management feature invented in 1989 by Roger Alan Gross and John P. Constant and marketed as BatteryMAX. 
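To make the contrast between the two programming styles described above concrete, the following minimal C sketch shows a busy-wait loop and a cooperative, sleeping loop. The helper names key_pressed() and p_delay() are hypothetical placeholders; the actual Concurrent DOS and Multiuser DOS API names and signatures are not given in this article.

#include <stdbool.h>

/* Hypothetical placeholders standing in for a keyboard-status check and the
   "sleep" call described above; the real API names are not given here. */
bool key_pressed(void);
void p_delay(unsigned int milliseconds);

void wait_for_key(void)
{
    /* Single-tasking style: spin until a key arrives, wasting processor time
       that other processes could otherwise use:
       while (!key_pressed()) { }                                             */

    /* Multitasking-friendly style: poll, then yield the processor. */
    while (!key_pressed())
        p_delay(50);   /* suspend this process for roughly 50 milliseconds */
}

As noted above, later Concurrent DOS kernels could also detect this kind of idle polling themselves and suspend the process, but explicitly sleeping remained the cooperative approach the API encouraged.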
See also CP/M MP/M Concurrent DOS V60 FlexOS DR DOS PC DOS – IBM's OEM version of (single-user) MS-DOS MS-DOS 4.0 (multitasking) PC-MOS/386 – unrelated multitasking DOS clone VM/386 – unrelated multitasking DOS environment Virtual DOS machine Multiuser DOS Federation Timeline of operating systems List of mergers and acquisitions by Citrix References Further reading External links former Intelligent Micro Software (IMS) website (vendors of IMS Multiuser DOS, IMS REAL/32, and REAL/NG) former Logan Industries (LLI) website (IMS REAL/32 US distributor up to 2002-05-01) former Concurrent Controls website (CCI Multiuser DOS) Applica, Inc. website former Aplycon Technologies, Inc. website CP/M variants Disk operating systems DOS variants Real-time operating systems Digital Research operating systems Novell operating systems Microcomputer software Discontinued operating systems Proprietary operating systems
Multiuser DOS
[ "Technology" ]
5,216
[ "Real-time computing", "Real-time operating systems" ]
1,513,850
https://en.wikipedia.org/wiki/Aldolase%20A
Aldolase A (ALDOA, or ALDA), also known as fructose-bisphosphate aldolase, is an enzyme that in humans is encoded by the ALDOA gene on chromosome 16. The protein encoded by this gene is a glycolytic enzyme that catalyzes the reversible conversion of fructose-1,6-bisphosphate to glyceraldehyde 3-phosphate (G3P) and dihydroxyacetone phosphate (DHAP). Three aldolase isozymes (A, B, and C), encoded by three different genes, are differentially expressed during development. Aldolase A is found in the developing embryo and is produced in even greater amounts in adult muscle. Aldolase A expression is repressed in adult liver, kidney and intestine and similar to aldolase C levels in brain and other nervous tissue. Aldolase A deficiency has been associated with myopathy and hemolytic anemia. Alternative splicing and alternative promoter usage results in multiple transcript variants. Related pseudogenes have been identified on chromosomes 3 and 10. Structure ALDOA is a homotetramer and one of the three aldolase isozymes (A, B, and C), encoded by three different genes. The ALDOA gene contains 8 exons and the 5' UTR IB. Key amino acids responsible for its catalytic function have been identified. The residue Tyr363 functions as the acid–base catalyst for protonating C3 of the substrate, while Lys146 is proposed to stabilize the negative charge of the resulting conjugate base of Tyr363 and the strained configuration of the C-terminal. Residue Glu187 participates in multiple functions, including FBP aldolase catalysis, acid–base catalysis during substrate binding, dehydration, and substrate cleavage. Though ALDOA localizes to the nucleus, it lacks any known nuclear localization signals (NLS). Mechanism In mammalian aldolase, the key catalytic amino acid residues involved in the reaction are lysine and tyrosine. The tyrosine acts as an efficient hydrogen acceptor while the lysine covalently binds and stabilizes the intermediates. Many bacteria use two magnesium ions in place of the lysine. The numbering of the carbon atoms indicates the fate of the carbons according to their position in fructose 6-phosphate. Function ALDOA is a key enzyme in the fourth step of glycolysis, as well as in the reverse pathway gluconeogenesis. It catalyzes the reversible conversion of fructose-1,6-bisphosphate to glyceraldehydes-3-phosphate and dihydroxyacetone phosphate by aldol cleavage of the C3–C4 bond. As a result, it is a crucial player in ATP biosynthesis. ALDOA also contributes to other "moonlighting" functions such as muscle maintenance, regulation of cell shape and motility, striated muscle contraction, actin cytoskeleton organization, and regulation of cell proliferation. ALDOA likely regulates actin cytoskeleton remodeling through interacting with cytohesin-2 (ARNO) and Arf6. ALDOA is ubiquitously expressed in most tissues, though it is predominantly expressed in developing embryo and adult muscle. In lymphocytes, ALDOA is the predominant aldolase isoform. Within the cell, ALDOA typically localizes to the cytoplasm, but it can localize to the nucleus during DNA synthesis of the cell cycle S phase. This nuclear localization is regulated by the protein kinases AKT and p38. It is suggested that the nucleus serves as a reservoir for ALDOA in low glucose conditions. ALDOA has also been found in mitochondria. ALDOA is regulated by the energy metabolism substrates glucose, lactate, and glutamine. 
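For reference, the reversible aldol cleavage described above can be written as the overall reaction, which is standard glycolysis chemistry rather than information specific to this article:

D-fructose 1,6-bisphosphate ⇌ dihydroxyacetone phosphate (DHAP) + D-glyceraldehyde 3-phosphate (G3P)

In glycolysis the equilibrium is drawn toward cleavage because downstream reactions consume G3P, while in gluconeogenesis the same enzyme catalyzes the condensation in the reverse direction.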
In human mast cells (MCs), ALDOA has been observed to undergo post-translational regulation by protein tyrosine nitration, which may alter its relative affinity for FBP and/or IP3. This change then affects IP3 and PLC signaling cascades in IgE-dependent responses. Clinical significance Aldolase A (ALDOA) is highly expressed in multiple cancers, including lung squamous cell carcinoma (LSCC), renal cancer, and hepatocellular carcinoma. It is proposed that ALDOA overexpression enhances glycolysis in these tumor cells, promoting their growth. In LSCC, its upregulation correlates with metastasis and poor prognosis, while its downregulation reduces tumor cell motility and tumorigenesis. Thus, ALDOA could be a potential LSCC biomarker and therapeutic drug target. Aldolase A deficiency is a rare, autosomal recessive disorder that is linked to hemolysis and accompanied by weakness, muscle pain, and myopathy. Interactive pathway map Interactions Aldolase A has been shown to interact with: PLD2, actin, GLUT4, phospholipase D2, light chain 8 of dynein, erythrocyte anion exchanger Band 3 protein, ryanodine receptor, Cytohesin-2, and V-ATPase (vacuolar-type H+-ATPase). See also ALDOB ALDOC Fructose-bisphosphate aldolase References Further reading External links http://pdbdev.sdsc.edu:48346/pdb/molecules/pdb50_5.html PDBe-KB provides an overview of all the structure information available in the PDB for Human Fructose-bisphosphate aldolase A Enzymes Glycolysis
Aldolase A
[ "Chemistry" ]
1,213
[ "Carbohydrate metabolism", "Glycolysis" ]
1,513,917
https://en.wikipedia.org/wiki/Potassium%20selective%20electrode
Potassium selective electrodes are a type of ion selective electrode used in biochemical and biophysical research, where measurements of potassium concentration in an aqueous solution are required, usually on a real time basis. These electrodes are typical ion exchange resin membrane electrodes, using valinomycin, a potassium ionophore, as the ion carrier in the membrane to provide the potassium specificity. This type of ion-selective electrode is subject to interference from (in declining order of magnitude) rubidium, caesium, ammonium, sodium, calcium, magnesium, and lithium. The most significant interference with measurement of potassium concentration is from the ammonium ion, which in practice is a problem where the ammonium concentration is approximately equal to or greater than the potassium concentration. Although sodium is usually present in high concentrations in biological preparations, the degree of interference is low enough to represent an error on the order of only 0.05 parts per million for the normal range of sodium concentration, requiring reduction of sodium only for measurements of very low potassium concentrations. Although the interference from rubidium or caesium is strong enough to require that these ions be present in much lower concentration than the potassium to be measured, this is not usually a problem in most experiments. Interference from calcium, magnesium, or lithium, on the other hand, is weak enough that their presence in normal concentrations is also usually not a problem. Further reading Ionophores for potassium-selective electrodes Potassium ion-selective electrodes Electrodes
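The interference behaviour described above is conventionally quantified by the Nikolsky–Eisenman equation, a standard relation for ion-selective electrodes given here as background rather than taken from this article:

E = E0 + (RT / z_K F) ln( a_K + Σ_j K_K,j a_j^(z_K/z_j) )

Here E is the measured electrode potential, E0 is a constant, R is the gas constant, T the absolute temperature, F the Faraday constant, a_K the potassium activity, a_j the activity of an interfering ion j with charge z_j, and K_K,j the selectivity coefficient for that ion. The ordering of interferences quoted above, from rubidium down to lithium, corresponds to decreasing values of the selectivity coefficient.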
Potassium selective electrode
[ "Chemistry" ]
303
[ "Physical chemistry stubs", "Electrochemistry", "Electrodes", "Electrochemistry stubs" ]
1,514,075
https://en.wikipedia.org/wiki/Sodium%20phosphate
A sodium phosphate is a generic variety of salts of sodium (Na+) and phosphate (PO4^3-). Phosphate also forms families or condensed anions including di-, tri-, tetra-, and polyphosphates. Most of these salts are known in both anhydrous (water-free) and hydrated forms. The hydrates are more common than the anhydrous forms. Uses Sodium phosphates have many applications in food and for water treatment. For example, sodium phosphates are often used as emulsifiers (as in processed cheese), thickening agents, and leavening agents for baked goods. They are also used to control pH of processed foods. They are also used in medicine for constipation and to prepare the bowel for medical procedures. They are also used in detergents for softening water and as an efficient anti-rust solution. Adverse effects Sodium phosphates are popular in commerce in part because they are inexpensive and because they are nontoxic at normal levels of consumption. However, oral sodium phosphates taken at high doses for bowel preparation before colonoscopy may, in some individuals, carry a risk of kidney injury in the form of phosphate nephropathy. There are several oral phosphate formulations which are prepared extemporaneously. Oral phosphate prep drugs have been withdrawn in the United States, although evidence of causality is equivocal. Since safe and effective replacements for phosphate purgatives are available, several medical authorities have recommended general disuse of oral phosphates. Monophosphates Three families of sodium monophosphates are common: those derived from orthophosphate (PO4^3-), hydrogen phosphate (HPO4^2-), and dihydrogen phosphate (H2PO4^-). Some of the best known salts are shown in the following table. Di- and polyphosphates In addition to these phosphates, sodium forms a number of useful salts with pyrophosphates (also called diphosphates), triphosphates and high polymers. Of these salts, those of the diphosphates are particularly common commercially. Beyond the diphosphates, sodium salts are known of the triphosphates, e.g. sodium triphosphate, and of the tetraphosphates. The cyclic polyphosphates, called metaphosphates, include the trimer sodium trimetaphosphate and the tetramer, Na3P3O9 and Na4P4O12, respectively. Polymeric sodium phosphates are formed upon heating mixtures of NaH2PO4 and Na2HPO4, which induces a condensation reaction. The specific polyphosphate generated depends on the details of the heating and annealing. One derivative is the glassy (i.e., amorphous) Graham's salt (sodium hexametaphosphate). It is a cyclic polyphosphate with the formula (NaPO3)6. Crystalline high molecular weight polyphosphates include Kurrol's salt and Maddrell's salt (CAS#10361-03-2). These species have the formula (NaPO3)n, where n can be as great as 2000, and they are white powders practically insoluble in water. In terms of their structures, these polymers consist of PO3^- units, with the chains terminated by protonated phosphates. References External links Phosphates Sodium compounds Edible thickening agents
Sodium phosphate
[ "Chemistry" ]
683
[ "Phosphates", "Salts" ]
1,514,086
https://en.wikipedia.org/wiki/Techno-progressivism
Techno-progressivism, or tech-progressivism, is a stance of active support for the convergence of technological change and social change. Techno-progressives argue that technological developments can be profoundly empowering and emancipatory when they are regulated by legitimate democratic and accountable authorities to ensure that their costs, risks and benefits are all fairly shared by the actual stakeholders to those developments. One of the first mentions of techno-progressivism appeared within extropian jargon in 1999 as the removal of "all political, cultural, biological, and psychological limits to self-actualization and self-realization". Stance Techno-progressivism maintains that accounts of progress should focus on scientific and technical dimensions, as well as ethical and social ones. For most techno-progressive perspectives, then, the growth of scientific knowledge or the accumulation of technological powers will not represent the achievement of proper progress unless and until it is accompanied by a just distribution of the costs, risks, and benefits of these new knowledges and capacities. At the same time, for most techno-progressive critics and advocates, the achievement of better democracy, greater fairness, less violence, and a wider rights culture are all desirable, but inadequate in themselves to confront the quandaries of contemporary technological societies unless and until they are accompanied by progress in science and technology to support and implement these values. Strong techno-progressive positions include support for the civil right of a person to either maintain or modify his or her own mind and body, on his or her own terms, through informed, consensual recourse to, or refusal of, available therapeutic or enabling biomedical technology. During the November 2014 Transvision Conference, many of the leading transhumanist organizations signed the Technoprogressive Declaration. The Declaration stated the values of technoprogressivism. Contrasting stance Bioconservatism (a portmanteau word combining "biology" and "conservatism") is a stance of hesitancy about technological development especially if it is perceived to threaten a given social order. Strong bioconservative positions include opposition to genetic modification of food crops, the cloning and genetic engineering of livestock and pets, and, most prominently, rejection of the genetic, prosthetic, and cognitive modification of human beings to overcome what are broadly perceived as current human biological and cultural limitations. Bioconservatives range in political perspective from right-leaning religious and cultural conservatives to left-leaning environmentalists and technology critics. What unifies bioconservatives is skepticism about medical and other biotechnological transformations of the living world. Typically less sweeping as a critique of technological society than bioluddism, the bioconservative perspective is characterized by its defense of the natural, deployed as a moral category. Although techno-progressivism is the stance which contrasts with bioconservatism in the biopolitical spectrum, both techno-progressivism and bioconservatism, in their more moderate expressions, share an opposition to unsafe, unfair, undemocratic forms of technological development, and both recognize that such developmental modes can facilitate unacceptable recklessness and exploitation, exacerbate injustice and incubate dangerous social discontent. 
List of notable techno-progressive social critics Technocritic Dale Carrico with his accounts of techno-progressivism Philosopher Donna Haraway with her accounts of cyborg theory. Media theorist Douglas Rushkoff with his accounts of open source. Cultural critic Mark Dery and his accounts of cyberculture. Science journalist Chris Mooney with his account of the U.S. Republican Party's "war on science". Futurist Bruce Sterling with his Viridian design movement. Futurist Alex Steffen and his accounts of bright green environmentalism through the Worldchanging blog. Science journalist Annalee Newitz with her accounts of the Biopunk. Bioethicist James Hughes of the Institute for Ethics and Emerging Technologies with his accounts of democratic transhumanism. Controversy Technocritic Dale Carrico, who has used "techno-progressive" as a shorthand to describe progressive politics that emphasize technoscientific issues, has expressed concern that some "transhumanists" are using the term to describe themselves, with the consequence of possibly misleading the public regarding their actual cultural, social and political views, which may or may not be compatible with critical techno-progressivism. See also Algocracy Body modification Bioethics Biopolitics Digital freedom Free software movement Frontierism Fordism High modernism Manifest Destiny New Frontier Post-scarcity economy Progress Studies Scientism Technocentrism Techno-utopianism Transhumanist politics References External links Institute for Ethics and Emerging Technologies Overview of Biopolitics Ideologies Technology in society Political ideologies Progressivism Science and technology studies Transhumanism Ethics of science and technology Transhumanist politics Politics and technology
Techno-progressivism
[ "Technology", "Engineering", "Biology" ]
991
[ "Transhumanist politics", "Science and technology studies", "Genetic engineering", "Transhumanism", "Ethics of science and technology" ]
1,514,091
https://en.wikipedia.org/wiki/Linear%20referencing
Linear referencing, also called linear reference system or linear referencing system (LRS), is a method of spatial referencing in engineering and construction, in which the locations of physical features along a linear element are described in terms of measurements from a fixed point, such as a milestone along a road. Each feature is located by either a point (e.g. a signpost) or a line (e.g. a no-passing zone). If a segment of the linear element or route is changed, only those locations on the changed segment need to be updated. Linear referencing is suitable for management of data related to linear features like roads, railways, oil and gas transmission pipelines, power and data transmission lines, and rivers. Motivation A system for identifying the location of pipeline features and characteristics is by measuring distance from the start of the pipeline. An example linear reference address is: Engineering Station 1145 + 86 on pipeline Alpha = 114,586 feet from the start of the pipeline. With a reroute, cumulative stationing might not be the same as engineering stationing, because of the addition of the extra pipeline. Linear referencing systems compute the differences to resolve this dilemma. Linear referencing is one of a family of methods of expressing location. Coordinates such as latitude and longitude are another member of the family, as are landmark references such as "5 km south of Ayers Rock." Linear referencing has traditionally been the expression of choice in engineering applications such as road and pipeline maintenance. One can more realistically dispatch a worker to a bridge 12.7 km along a road from a reference point, rather than to a pair of coordinates or a landmark. The road serves as the reference frame, just as the earth serves as the reference frame for latitude and longitude. Benefits Linear referencing can be used to define points along a linear feature with just a small amount of information such as the name of a road and the distance and bearing from a landmark along the road. This information can be communicated concisely via plaintext. For example: "State route 4, 20 feet east of mile marker 187." Giving a latitude and longitude coordinate to a work crew is not meaningful unless the coordinate is plotted on a map. Often work crews work in remote areas without wireless connectivity which makes on-line digital maps not practical, and the relatively higher effort of providing offline maps or printed maps is not as economical as simply stating locations as offsets, or ranges of offsets, along a linear feature. Linear referencing systems can also be made to be both very precise and very accurate at a much lower cost than is needed to collect latitude and longitude coordinates with high accuracy, especially when the goal is sub-meter accuracy. This is highly dependent upon the width of the linear feature, its centerline, and the visibility of the landmarks and markers that are used to define linear reference offsets. Often, roads are created by engineers using CAD tools that have no geospatial reference at all, and LRS is the preferred method of defining data for linear features. Limitations Consequently, a major limitation of linear referencing is that specifying points that are not on a linear feature is troublesome and error-prone, though not entirely impossible. Consider for example a ski lodge located 100 meters to the right of the road, traveling north. The linear referencing system can be extended by specifying a lateral offset, but the absolute location (i.e. 
coordinates) of the lodge cannot be determined unless coordinates are specified for the road; that process is prone to error particularly on curved roads. Another major drawback of linear referencing is that a modification in the alignment of a road (e.g. constructing a bypass around a town) changes the measurements that reference all downstream points. The system requires an extensive network of reference stations, and constant maintenance. In an era of mobile maps and GPS, this maintenance overhead for linear referencing systems challenges its long-term viability. (But see below for US Federal Highway Administration requirement that all State DOTs use LRS.) Nonetheless, travel along a road is a linear experience, and at the very least, linear referencing will continue to have a conversational role. Linear referencing systems are recognized by the US Federal government as a valuable tool for specifying right of way data, and are now actually required for the States. Therefore, it is not likely to see LRS usage decline any time soon. Applications ARNOLD: US Federal Requirements for Highways The US Federal Highway Administration is pushing states to move closer to standardization of LRS data with the ARNOLD requirement. To wit: "On August 7, 2012, FHWA announced that the HPMS is expanding the requirement for State Departments of Transportation (DOTs) to submit their LRS to include all public roads. This requirement will be referred to as the All Road Network of Linear Referenced Data (ARNOLD)". The ARNOLD requirement sets the stage for systems that utilize both LRS and coordinates. Both systems are useful in different contexts, and while using latitude and longitude is becoming very popular due to the availability of practical and affordable devices for capturing and displaying global coordinate data, the use of LRS has widely been adopted for planning, engineering, and maintenance. Supported platforms Linear referencing is supported for example by several Geographic Information System software, including: Intergraph GE Global Transmission Office ArcGIS GEOMAP GIS GRASS GIS PostGIS QGIS See also Chainage (also known as station references) in construction surveying Exit number Geocoding Geographic coordinate system Length measurement Milestone Road map Zero mile References Further reading Geographic data and information Construction surveying Coordinate systems Curves Length, distance, or range measuring devices
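As an illustration of the stationing arithmetic in the pipeline example above, the following minimal Python sketch converts an engineering station such as "1145+86" into a distance in feet and looks up which feature a given measure falls on. The event table and the locate helper are hypothetical illustrations, not part of any particular LRS product.

def station_to_feet(station):
    """Convert an engineering station like '1145+86' to feet from the line's origin.
    Each full station is 100 feet, so 1145+86 -> 114,586 feet."""
    full, partial = station.split("+")
    return int(full) * 100 + int(partial)

def locate(events, measure):
    """Return the name of the event whose measure range contains the given offset.
    'events' is a hypothetical list of (name, start_measure, end_measure) tuples."""
    for name, start, end in events:
        if start <= measure <= end:
            return name
    return None

assert station_to_feet("1145+86") == 114586
segments = [("no-passing zone", 0, 5000), ("bridge", 5000, 5200)]
print(locate(segments, 5100))  # -> bridge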
Linear referencing
[ "Mathematics", "Technology", "Engineering" ]
1,132
[ "Construction surveying", "Construction", "Data", "Coordinate systems", "Geographic data and information" ]
1,514,171
https://en.wikipedia.org/wiki/Defining%20length
In genetic algorithms and genetic programming, the defining length L(H) is the maximum distance between two defining symbols (that is, symbols that have a fixed value, as opposed to symbols that can take any value, commonly denoted as # or *) in a schema H. In tree GP schemata, L(H) is the number of links in the minimum tree fragment including all the non-= symbols within a schema H. Example Schemata "00##0", "1###1", "01###", and "##0##" have defining lengths of 4, 4, 1, and 0, respectively. Lengths are computed by determining the last fixed position and subtracting from it the first fixed position. In genetic algorithms, as the defining length of a solution increases, so does the susceptibility of the solution to disruption due to mutation or cross-over. References Evolutionary algorithms
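For string schemata, the computation described above is only a few lines of code. The following Python sketch is an illustration, not taken from the cited literature, and reproduces the example values given in the article.

def defining_length(schema, wildcard="#"):
    """L(H): distance between the first and the last fixed (non-wildcard) position."""
    fixed = [i for i, symbol in enumerate(schema) if symbol != wildcard]
    return fixed[-1] - fixed[0] if fixed else 0

# Matches the example schemata given above.
assert [defining_length(h) for h in ("00##0", "1###1", "01###", "##0##")] == [4, 4, 1, 0]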
Defining length
[ "Technology" ]
190
[ "Computing stubs", "Computer science", "Computer science stubs" ]
1,514,252
https://en.wikipedia.org/wiki/SILC%20%28protocol%29
SILC (Secure Internet Live Conferencing protocol) is a protocol that provides secure synchronous conferencing services (very much like IRC) over the Internet. Components The SILC protocol can be divided in three main parts: SILC Key Exchange (SKE) protocol, SILC Authentication protocol and SILC Packet protocol. SILC protocol additionally defines SILC Commands that are used to manage the SILC session. SILC provides channels (groups), nicknames, private messages, and other common features. However, SILC nicknames, in contrast to many other protocols (e.g. IRC), are not unique; a user is able to use any nickname, even if one is already in use. The real identification in the protocol is performed by unique Client ID. The SILC protocol uses this to overcome nickname collision, a problem present in many other protocols. All messages sent in a SILC network are binary, allowing them to contain any type of data, including text, video, audio, and other multimedia data. The SKE protocol is used to establish session key and other security parameters for protecting the SILC Packet protocol. The SKE itself is based on the Diffie–Hellman key exchange algorithm (a form of asymmetric cryptography) and the exchange is protected with digital signatures. The SILC Authentication protocol is performed after successful SKE protocol execution to authenticate a client and/or a server. The authentication may be based on passphrase or on digital signatures, and if successful gives access to the relevant SILC network. The SILC Packet protocol is intended to be a secure binary packet protocol, assuring that the content of each packet (consisting of a packet header and packet payload) is secured and authenticated. The packets are secured using algorithms based on symmetric cryptography and authenticated by using Message Authentication Code algorithm, HMAC. SILC channels (groups) are protected by using symmetric channel keys. It is optionally possible to digitally sign all channel messages. It is also possible to protect messages with a privately generated channel key that has been previously agreed upon by channel members. Private messages between users in a SILC network are protected with session keys. It is, however, possible to execute SKE protocol between two users and use the generated key to protect private messages. Private messages may be optionally digitally signed. When messages are secured with key material generated with the SKE protocol or previously agreed upon key material (for example, passphrases) SILC provides security even when the SILC server may be compromised. History SILC was designed by Pekka Riikonen between 1996 and 1999 and first released in public in summer 2000. A client and a server were written. Protocol specifications were proposed, but ultimately request for publication was denied in June 2004 by IESG and no RFC has been published to date. At present time, there are several clients, the most advanced being the official SILC client and an irssi plugin. SILC protocol is also integrated to the popular Pidgin instant messaging client. Other GUI clients are Silky and Colloquy. The Silky client was put on hold and abandoned on the 18th of July 2007, due to inactivity for several years. The latest news on the Silky website was that the client was to be completely rewritten. As of 2008, three SILC protocol implementations have been written. Most SILC clients use libsilc, part of the SILC Toolkit. 
The SILC Toolkit is dual-licensed and distributed under both the GNU General Public License (GPL) and the revised BSD license. Security As described in the SILC FAQ, chats are secured through the generation of symmetric encryption keys. These keys have to be generated somewhere, and this occurs on the server. This means that chats might be compromised if the server itself is compromised; this is just a version of the man-in-the-middle attack. The solution offered is that chat members generate their own public-private keypair for asymmetric encryption. The private key is shared only by the chat members, and this is done out of band. The public key is used to encrypt messages into the channel. This approach is still open to compromise if one of the members of the chat should have their private key compromised, or if they should share the key with another without the agreement of the group. Networks SILC uses a similar pattern to IRC, in that there is no global "SILC network" but many small independent networks consisting of one or several servers each, although it is claimed that SILC can scale better with many servers in a single network. The "original" network is called SILCNet, whose servers are reached through a round-robin address. However, as of May 2014, it has only one active (though unstable) server out of four. Most SILC networks have shut down due to the declining popularity of SILC. See also Synchronous conferencing Comparison of instant messaging protocols Multiprotocol instant messaging application Public-key cryptography References External links The SILC Project Internet protocols Instant messaging protocols Online chat
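The SKE description above states that the exchange is built on Diffie–Hellman key agreement. The Python sketch below shows only that underlying primitive, with toy parameters; it is not the SILC Key Exchange itself, which additionally negotiates security parameters and authenticates the exchanged values with digital signatures.

import secrets

# Toy Diffie-Hellman parameters for illustration only; a real exchange uses a
# large, standardized prime group.
p = 4294967291   # a small prime (2**32 - 5)
g = 5

a = secrets.randbelow(p - 2) + 1      # initiator's private exponent
b = secrets.randbelow(p - 2) + 1      # responder's private exponent
A = pow(g, a, p)                      # sent initiator -> responder
B = pow(g, b, p)                      # sent responder -> initiator

# Both sides derive the same shared secret, from which session keys would be drawn.
assert pow(B, a, p) == pow(A, b, p)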
SILC (protocol)
[ "Technology" ]
1,055
[ "Instant messaging", "Instant messaging protocols" ]
1,514,271
https://en.wikipedia.org/wiki/Stress-induced%20leakage%20current
Stress-induced leakage current (SILC), a term used in semiconductor physics, is an increase in the gate leakage current of a MOSFET. It occurs due to defects created in the gate oxide during electrical stressing. SILC is perhaps the largest factor inhibiting device miniaturization. Increased leakage is a common failure mode of electronic devices. Oxide defects The most well-studied defects assisting in the leakage current are those produced by charge trapping in the oxide. This model provides a point of attack and has stimulated researchers to develop methods to decrease the rate of charge trapping by mechanisms such as nitrous oxide (N2O) nitridation of the oxide. SILC is linked to the trap density in an oxide, i.e. the density of defects. The SILC may be measured to determine the neutral trap density in that oxide. However, the oxide traps responsible for SILC are not necessarily responsible for oxide breakdown, as SILC and oxide breakdown have different annealing kinetics. References Semiconductor device defects
Stress-induced leakage current
[ "Technology" ]
210
[ "Technological failures", "Semiconductor device defects" ]
1,514,313
https://en.wikipedia.org/wiki/Nike-Cajun
The Nike-Cajun was a two-stage sounding rocket built by combining a Nike base stage with a Cajun upper stage. The Nike-Cajun was known as a CAN for Cajun And Nike. The Cajun was developed from the Deacon rocket. It retained the external size, shape and configuration of the Deacon but had 36 percent greater impulse than the Deacon due to improved propellant. It was launched 714 times between 1956 and 1976 and was the most frequently used sounding rocket of the western world. The Nike Cajun had a launch weight of 698 kg (1538 lb), a payload of 23 kg (51 lb), a launch thrust of 246 kN (55,300 lbf) and a maximum altitude of 120 km (394,000 ft). It had a diameter of 42 cm (1 ft 4 in) and a length of 7.70 m (25 ft 3 in). The maximum speed of the Nike-Cajun was . The Cajun stage of this rocket was named for the Cajun people of South Louisiana because one of the rocket's designers, J. G. Thibodaux, was a Cajun. The Nike-Cajun configuration was also used by one variation of the MQR-13 BMTS target rocket. Engine: 1st stage: Allegheny Ballistics Lab. X216A2 solid-fueled rocket; 246 kN (55,000 lb) for 3 s 2nd stage: Thiokol TE-82-1 Cajun solid-fueled rocket; 37 kN (8,300 lb) for 2.8 s See also Nike-Apache References Nike-Cajun at Encyclopedia Astronautica External links University of Michigan/NACA RM-85/PWN-3 Nike-Cajun Nike (rocket family)
Nike-Cajun
[ "Astronomy" ]
366
[ "Rocketry stubs", "Astronomy stubs" ]
1,514,469
https://en.wikipedia.org/wiki/Column%20chromatography
Column chromatography in chemistry is a chromatography method used to isolate a single chemical compound from a mixture. Chromatography is able to separate substances based on differential adsorption of compounds to the adsorbent; compounds move through the column at different rates, allowing them to be separated into fractions. The technique is widely applicable, as many different adsorbents (normal phase, reversed phase, or otherwise) can be used with a wide range of solvents. The technique can be used on scales from micrograms up to kilograms. The main advantage of column chromatography is the relatively low cost and disposability of the stationary phase used in the process. The latter prevents cross-contamination and stationary phase degradation due to recycling. Column chromatography can be done using gravity to move the solvent, or using compressed gas to push the solvent through the column. A thin-layer chromatogram can show how a mixture of compounds will behave when purified by column chromatography. The separation is first optimised using thin-layer chromatography before performing column chromatography. Column preparation A column is prepared by packing a solid adsorbent into a cylindrical glass or plastic tube. The size will depend on the amount of compound being isolated. The base of the tube contains a filter, either a cotton or glass wool plug, or glass frit to hold the solid phase in place. A solvent reservoir may be attached at the top of the column. Two methods are generally used to prepare a column: the dry method and the wet method. For the dry method, the column is first filled with dry stationary phase powder, followed by the addition of mobile phase, which is flushed through the column until it is completely wet, and from this point is never allowed to run dry. For the wet method, a slurry is prepared of the eluent with the stationary phase powder and then carefully poured into the column. The top of the silica should be flat, and the top of the silica can be protected by a layer of sand. Eluent is slowly passed through the column to advance the organic material. The individual components are retained by the stationary phase differently and separate from each other while they are running at different speeds through the column with the eluent. At the end of the column they elute one at a time. During the entire chromatography process the eluent is collected in a series of fractions. Fractions can be collected automatically by means of fraction collectors. The productivity of chromatography can be increased by running several columns at a time. In this case multi stream collectors are used. The composition of the eluent flow can be monitored and each fraction is analyzed for dissolved compounds, e.g. by analytical chromatography, UV absorption spectra, or fluorescence. Colored compounds (or fluorescent compounds with the aid of a UV lamp) can be seen through the glass wall as moving bands. Stationary phase The stationary phase or adsorbent in column chromatography is a solid. The most common stationary phase for column chromatography is silica gel, the next most common being alumina. Cellulose powder has often been used in the past. A wide range of stationary phases are available in order to perform ion exchange chromatography, reversed-phase chromatography (RP), affinity chromatography or expanded bed adsorption (EBA). 
The stationary phases are usually finely ground powders or gels and/or are microporous for an increased surface, though in EBA a fluidized bed is used. There is an important ratio between the stationary phase weight and the dry weight of the analyte mixture that can be applied onto the column. For silica column chromatography, this ratio lies within 20:1 to 100:1, depending on how close to each other the analyte components are being eluted. Mobile phase (eluent) The mobile phase or eluent is a solvent or a mixture of solvents used to move the compounds through the column. It is chosen so that the retention factor value of the compound of interest is roughly around 0.2 - 0.3 in order to minimize the time and the amount of eluent to run the chromatography. The eluent has also been chosen so that the different compounds can be separated effectively. The eluent is optimized in small scale pretests, often using thin layer chromatography (TLC) with the same stationary phase, using solvents of different polarity until a suitable solvent system is found. Common mobile phase solvents, in order of increasing polarity, include hexane, dichloromethane, ethyl acetate, acetone, and methanol. A common solvent system is a mixture of hexane and ethyl acetate, with proportions adjusted until the target compound has a retention factor of 0.2 - 0.3. Contrary to common misconception, methanol alone can be used as an eluent for highly polar compounds, and does not dissolve silica gel. There is an optimum flow rate for each particular separation. A faster flow rate of the eluent minimizes the time required to run a column and thereby minimizes diffusion, resulting in a better separation. However, the maximum flow rate is limited because a finite time is required for the analyte to equilibrate between the stationary phase and mobile phase, see Van Deemter's equation. A simple laboratory column runs by gravity flow. The flow rate of such a column can be increased by extending the fresh eluent filled column above the top of the stationary phase or decreased by the tap controls. Faster flow rates can be achieved by using a pump or by using compressed gas (e.g. air, nitrogen, or argon) to push the solvent through the column (flash column chromatography). The particle size of the stationary phase is generally finer in flash column chromatography than in gravity column chromatography. For example, one of the most widely used silica gel grades in the former technique is mesh 230 – 400 (40 – 63 μm), while the latter technique typically requires mesh 70 – 230 (63 – 200 μm) silica gel. A spreadsheet that assists in the successful development of flash columns has been developed. The spreadsheet estimates the retention volume and band volume of analytes, the fraction numbers expected to contain each analyte, and the resolution between adjacent peaks. This information allows users to select optimal parameters for preparative-scale separations before the flash column itself is attempted. Automated systems Column chromatography is an extremely time-consuming stage in any lab and can quickly become the bottleneck for any process lab. Many manufacturers like Biotage, Buchi, Interchim and Teledyne Isco have developed automated flash chromatography systems (typically referred to as LPLC, low pressure liquid chromatography, around ) that minimize human involvement in the purification process. 
Automated systems will include components normally found on more expensive high performance liquid chromatography (HPLC) systems such as a gradient pump, sample injection ports, a UV detector and a fraction collector to collect the eluent. Typically these automated systems can separate samples from a few milligrams up to an industrial many kilogram scale and offer a much cheaper and quicker solution to doing multiple injections on prep-HPLC systems. The resolution (or the ability to separate a mixture) on an LPLC system will always be lower compared to HPLC, as the packing material in an HPLC column can be much smaller, typically only 5 micrometre thus increasing stationary phase surface area, increasing surface interactions and giving better separation. However, the use of this small packing media causes the high back pressure and is why it is termed high pressure liquid chromatography. The LPLC columns are typically packed with silica of around 50 micrometres, thus reducing back pressure and resolution, but it also removes the need for expensive high pressure pumps. Manufacturers are now starting to move into higher pressure flash chromatography systems and have termed these as medium pressure liquid chromatography (MPLC) systems which operate above . Column chromatogram resolution calculation Typically, column chromatography is set up with peristaltic pumps, flowing buffers and the solution sample through the top of the column. The solutions and buffers pass through the column where a fraction collector at the end of the column setup collects the eluted samples. Prior to the fraction collection, the samples that are eluted from the column pass through a detector such as a spectrophotometer or mass spectrometer so that the concentration of the separated samples in the sample solution mixture can be determined. For example, if you were to separate two different proteins with different binding capacities to the column from a solution sample, a good type of detector would be a spectrophotometer using a wavelength of 280 nm. The higher the concentration of protein that passes through the eluted solution through the column, the higher the absorbance of that wavelength. Because the column chromatography has a constant flow of eluted solution passing through the detector at varying concentrations, the detector must plot the concentration of the eluted sample over a course of time. This plot of sample concentration versus time is called a chromatogram. The ultimate goal of chromatography is to separate different components from a solution mixture. The resolution expresses the extent of separation between the components from the mixture. The higher the resolution of the chromatogram, the better the extent of separation of the samples the column gives. This data is a good way of determining the column's separation properties of that particular sample. The resolution can be calculated from the chromatogram. The separate curves in the diagram represent different sample elution concentration profiles over time based on their affinity to the column resin. To calculate resolution, the retention time and curve width are required. Retention time is the time from the start of signal detection by the detector to the peak height of the elution concentration profile of each different sample. Curve width is the width of the concentration profile curve of the different samples in the chromatogram in units of time. A simplified method of calculating chromatogram resolution is to use the plate model. 
The plate model assumes that the column can be divided into a certain number of sections, or plates, and the mass balance can be calculated for each individual plate. This approach approximates a typical chromatogram curve as a Gaussian distribution curve. By doing this, the curve width is estimated as 4 times the standard deviation of the curve, 4σ. The retention time is the time from the start of signal detection to the time of the peak height of the Gaussian curve. From the variables in the figure above, the resolution, plate number, and plate height of the column plate model can be calculated using the equations: Resolution (Rs): Rs = 2(tRB – tRA)/(wB + wA), where: tRB = retention time of solute B tRA = retention time of solute A wB = Gaussian curve width of solute B wA = Gaussian curve width of solute A Plate Number (N): N = (tR)^2/(w/4)^2 Plate Height (H): H = L/N where L is the length of the column. Column adsorption equilibrium For an adsorption column, the column resin (the stationary phase) is composed of microbeads. Even smaller particles such as proteins, carbohydrates, metal ions, or other chemical compounds are conjugated onto the microbeads. Each binding particle that is attached to the microbead can be assumed to bind in a 1:1 ratio with the solute sample sent through the column that needs to be purified or separated. Binding between the target molecule to be separated and the binding molecule on the column beads can be modeled using a simple equilibrium reaction Keq = [CS]/([C][S]) where Keq is the equilibrium constant, [C] and [S] are the concentrations of the target molecule and the binding molecule on the column resin, respectively. [CS] is the concentration of the complex of the target molecule bound to the column resin. Using this as a basis, three different isotherms can be used to describe the binding dynamics of column chromatography: linear, Langmuir, and Freundlich. The linear isotherm occurs when the solute concentration needed to be purified is very small relative to the binding molecule. Thus, the equilibrium can be defined as: [CS] = Keq[C]. For industrial scale uses, the total binding molecules on the column resin beads must be factored in because unoccupied sites must be taken into account. The Langmuir isotherm and Freundlich isotherm are useful in describing this equilibrium. The Langmuir isotherm is given by: [CS] = (KeqStot[C])/(1 + Keq[C]), where Stot is the total binding molecules on the beads. The Freundlich isotherm is given by: [CS] = Keq[C]^(1/n) The Freundlich isotherm is used when the column can bind to many different samples in the solution that needs to be purified. Because the many different samples have different binding constants to the beads, there are many different Keqs. Therefore, the Langmuir isotherm is not a good model for binding in this case. See also Fast protein liquid chromatography (FPLC) – separation of proteins using column chromatography High-performance liquid chromatography (HPLC) – column chromatography using high pressure References External links Flash Column Chromatography Guide (pdf) Radial Flow Chromatography Chromatography Laboratory techniques
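The plate-model formulas above are straightforward to evaluate. The short Python sketch below does so for two peaks; the retention times, widths, and column length are made-up illustration values, not measurements.

def resolution(t_ra, w_a, t_rb, w_b):
    """Rs = 2*(tRB - tRA) / (wB + wA), with widths taken at the peak base (4 sigma)."""
    return 2.0 * (t_rb - t_ra) / (w_b + w_a)

def plate_number(t_r, w):
    """N = (tR)^2 / (w/4)^2 for a Gaussian peak of base width w."""
    return (t_r / (w / 4.0)) ** 2

def plate_height(column_length, n):
    """H = L / N."""
    return column_length / n

# Hypothetical peaks: solute A at 8.0 min (width 1.0 min), solute B at 10.0 min (width 1.2 min).
rs = resolution(8.0, 1.0, 10.0, 1.2)   # ~1.8
n = plate_number(10.0, 1.2)            # ~1111 theoretical plates
h = plate_height(25.0, n)              # plate height for a 25 cm column, in cm
print(round(rs, 2), round(n), round(h, 4))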
Column chromatography
[ "Chemistry" ]
2,916
[ "Chromatography", "nan", "Separation processes" ]
1,514,566
https://en.wikipedia.org/wiki/Mating%20pool
Mating pool is a concept used in evolutionary algorithms and means a population of parents for the next population. The mating pool is formed by candidate solutions that the selection operators deem to have the highest fitness in the current population. Solutions that are included in the mating pool are referred to as parents. Individual solutions can be repeatedly included in the mating pool, with individuals of higher fitness values having a higher chance of being included multiple times. Crossover operators are then applied to the parents, resulting in recombination of genes recognized as superior. Lastly, random changes in the genes are introduced through mutation operators, increasing the genetic variation in the gene pool. Those two operators improve the chance of creating new, superior solutions. A new generation of solutions is thereby created, the children, who will constitute the next population. Depending on the selection method, the total number of parents in the mating pool can be different to the size of the initial population, resulting in a new population that’s smaller. To continue the algorithm with an equally sized population, random individuals from the old populations can be chosen and added to the new population. At this point, the fitness value of the new solutions is evaluated. If the termination conditions are fulfilled, processes come to an end. Otherwise, they are repeated. The repetition of the steps result in candidate solutions that evolve towards the most optimal solution over time. The genes will become increasingly uniform towards the most optimal gene, a process called convergence. If 95% of the population share the same version of a gene, the gene has converged. When all the individual fitness values have reached the value of the best individual, i.e. all the genes have converged, population convergence is achieved. Mating pool creation Several methods can be applied to create a mating pool. All of these processes involve the selective breeding of a particular number of individuals within a population. There are multiple criteria that can be employed to determine which individuals make it into the mating pool and which are left behind. The selection methods can be split into three general types: fitness proportionate selection, ordinal based selection and threshold based selection. Fitness proportionate selection In the case of fitness proportionate selection, random individuals are selected to enter the pool. However, the ones with a higher level of fitness are more likely to be picked and therefore have a greater chance of passing on their features to the next generation. One of the techniques used in this type of parental selection is the roulette wheel selection. This approach divides a hypothetical circular wheel into different slots, the size of which is equal to the fitness values of each potential candidate. Afterwards, the wheel is rotated and a fixed point determines which individual gets picked. The greater the fitness value of an individual, the higher the probability of being chosen as a parent by the random spin of the wheel. Alternatively, stochastic universal sampling can be implemented. This selection method is also based on the rotation of a spinning wheel. However, in this case there is more than one fixed point and as a result all of the mating pool members will be selected simultaneously. Ordinal based selection The ordinal based selection methods include the tournament and ranking selection. 
Tournament selection involves the random selection of individuals of a population and the subsequent comparison of their fitness levels. The winners of these “tournaments” are the ones with the highest values and will be put into the mating pool as parents. In ranking selection all the individuals are sorted based on their fitness values. Then, the selection of the parents is made according to the rank of the candidates. Every individual has a chance of being chosen, but higher ranked ones are favored Threshold based selection The last type of selection method is referred to as the threshold based method. This includes the truncation selection method, which sorts individuals based on their phenotypic values on a specific trait and later selects the proportion of them that are within a certain threshold as parents. References Population genetics Evolutionary algorithms Genetic algorithms
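The roulette wheel method described above can be sketched in a few lines of Python; the bit-string population and its fitness function below are hypothetical examples, not taken from a particular study.

import random

def roulette_wheel_selection(population, fitnesses, pool_size):
    """Fitness-proportionate selection: each spin of the wheel picks one parent with
    probability proportional to its (non-negative) fitness; fit individuals may be
    picked repeatedly, as noted above."""
    total = sum(fitnesses)
    pool = []
    for _ in range(pool_size):
        spin = random.uniform(0, total)
        cumulative = 0.0
        for individual, fitness in zip(population, fitnesses):
            cumulative += fitness
            if cumulative >= spin:
                pool.append(individual)
                break
    return pool

# Hypothetical population of bit strings, with fitness = number of ones.
population = ["0101", "1111", "0011", "1000"]
fitnesses = [s.count("1") for s in population]
mating_pool = roulette_wheel_selection(population, fitnesses, pool_size=4)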
Mating pool
[ "Biology" ]
794
[ "Genetics techniques", "Genetic algorithms" ]
1,514,641
https://en.wikipedia.org/wiki/Mercury%28II%29%20oxide
Mercury(II) oxide, also called mercuric oxide or simply mercury oxide, is the inorganic compound with the formula HgO. It has a red or orange color. Mercury(II) oxide is a solid at room temperature and pressure. The mineral form montroydite is very rarely found. History An experiment for the preparation of mercuric oxide was first described by the 11th-century Arab-Spanish alchemist Maslama al-Majriti in Rutbat al-hakim. It was historically called red precipitate (as opposed to white precipitate, which is mercuric amidochloride). In 1774, Joseph Priestley discovered that oxygen was released by heating mercuric oxide, although he did not identify the gas as oxygen (rather, Priestley called it "dephlogisticated air," as that was the paradigm that he was working under at the time). Synthesis and reactions The red form of HgO can be made by heating Hg in oxygen at roughly 350 °C, or by pyrolysis of Hg(NO3)2. The yellow form can be obtained by precipitation of aqueous Hg2+ with alkali. The difference in color is due to particle size; both forms have the same structure consisting of near linear O-Hg-O units linked in zigzag chains with an Hg-O-Hg angle of 108°. It is sometimes said that HgO "is soluble in acids", but in fact it reacts with acids to make mercuric salts. Structure Under atmospheric pressure mercuric oxide has two crystalline forms: one is called montroydite (orthorhombic, 2/m 2/m 2/m, Pnma), and the second is analogous to the sulfide mineral cinnabar (hexagonal, hP6, P3221); both are characterized by Hg-O chains. At pressures above 10 GPa both structures convert to a tetragonal form. Uses Mercury oxide is sometimes used in the production of mercury as it decomposes quite easily. When it decomposes, oxygen gas is generated. It is also used as a material for cathodes in mercury batteries. Health issues Mercury oxide is a highly toxic substance which can be absorbed into the body by inhalation of its aerosol, through the skin and by ingestion. The substance is irritating to the eyes, the skin and the respiratory tract and may have effects on the kidneys, resulting in kidney impairment. In the food chain important to humans, bioaccumulation takes place, specifically in aquatic organisms. The substance is banned as a pesticide in the EU. Evaporation at 20 °C is negligible. HgO decomposes on exposure to light or on heating above 500 °C. Heating produces highly toxic mercury fumes and oxygen, which increases the fire hazard. Mercury(II) oxide reacts violently with reducing agents, chlorine, hydrogen peroxide, magnesium (when heated), disulfur dichloride and hydrogen trisulfide. Shock-sensitive compounds are formed with metals and elements such as sulfur and phosphorus. References External links National Pollutant Inventory – Mercury and compounds fact sheet Information at Webelements. Oxides Mercury(II) compounds Inorganic compounds
Mercury(II) oxide
[ "Chemistry" ]
682
[ "Oxides", "Inorganic compounds", "Salts" ]
1,514,696
https://en.wikipedia.org/wiki/Parity%20benchmark
Parity problems are widely used as benchmark problems in genetic programming, having been inherited from the artificial neural network community. Parity is calculated by summing all the binary inputs and reporting whether the sum is odd or even. The problem is considered difficult because a very simple artificial neural network cannot solve it, and because all inputs need to be considered: a change to any one of them changes the answer. References Foundations of Genetic Programming Genetic programming
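The target function of the benchmark is trivial to state in code, which is part of why it is a popular test: the difficulty lies in evolving a program or network that reproduces it. A minimal Python sketch of the even-parity target:

from itertools import product

def even_parity(bits):
    """Even-parity target: True when the number of 1s among the binary inputs is even."""
    return sum(bits) % 2 == 0

# Truth table for the 3-input even-parity problem: flipping any single input flips the answer.
for bits in product((0, 1), repeat=3):
    print(bits, even_parity(bits))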
Parity benchmark
[ "Biology" ]
87
[ "Genetics techniques", "Genetic programming" ]
1,514,713
https://en.wikipedia.org/wiki/Premature%20convergence
Premature convergence is an unwanted effect in evolutionary algorithms (EA), a metaheuristic that mimics the basic principles of biological evolution as a computer algorithm for solving an optimization problem. The effect means that the population of an EA has converged too early, with the result that the solution it settles on is suboptimal. In this context, the parental solutions, through the aid of genetic operators, are not able to generate offspring that are superior to, or outperform, their parents. Premature convergence is a common problem found in evolutionary algorithms, as it leads to a loss, or convergence of, a large number of alleles, subsequently making it very difficult to search for a specific gene in which the alleles were present. An allele is considered lost if, in a population, all individuals share the same value for that particular gene. An allele is, as defined by De Jong, considered to be a converged allele when 95% of a population share the same value for a certain gene. Strategies for preventing premature convergence Strategies to regain genetic variation can be: a mating strategy called incest prevention, uniform crossover, mimicking sexual selection, favored replacement of similar individuals (preselection or crowding), segmentation of individuals of similar fitness (fitness sharing), and increasing population size. The genetic variation can also be regained by mutation, though this process is highly random. A general strategy to reduce the risk of premature convergence is to use structured populations instead of the commonly used panmictic ones. Identification of the occurrence of premature convergence It is hard to determine when premature convergence has occurred, and it is equally hard to predict its presence in the future. One measure is to use the difference between the average and maximum fitness values, as used by Patnaik & Srinivas, to then vary the crossover and mutation probabilities. Population diversity is another measure which has been extensively used in studies to measure premature convergence. However, although it has been widely accepted that a decrease in population diversity directly leads to premature convergence, there have been few studies on the analysis of population diversity itself. In other words, an argument that appeals to population diversity as a way of preventing premature convergence lacks robustness unless it specifies what definition of population diversity is being used. Causes for premature convergence There are a number of presumed or hypothesized causes for the occurrence of premature convergence. Self-adaptive mutations Rechenberg introduced the idea of self-adaptation of mutation distributions in evolution strategies. According to Rechenberg, the control parameters for these mutation distributions evolved internally through self-adaptation, rather than predetermination. He called it the 1/5-success rule of evolution strategies (1 + 1)-ES: the step size control parameter would be increased by some factor if the relative frequency of positive mutations through a determined period of time is larger than 1/5, and decreased if it is smaller than 1/5. Self-adaptive mutations may very well be one of the causes for premature convergence. Self-adaptive mutation can improve the accuracy with which optima are located and can accelerate the search for them. This has been widely recognized, though the underpinnings of the mechanism have been poorly studied, as it is often unclear whether the optima are found locally or globally. 
Self-adaptive methods can cause global convergence to the global optimum, provided that the selection methods used employ elitism and that the rule of self-adaptation does not interfere with the mutation distribution, which must keep the property of ensuring a positive minimum probability of hitting any random subset. This holds for non-convex objective functions whose bounded lower level sets have non-zero measure. A study by Rudolph suggests that self-adaptation mechanisms among elitist evolution strategies that resemble the 1/5-success rule could very well become trapped in a local optimum with a positive probability. Panmictic populations Most EAs use unstructured or panmictic populations, where basically every individual in the population is eligible for mate selection based on fitness. Thus, the genetic information of an only slightly better individual can spread in a population within a few generations, provided that no other, better offspring is produced during this time. Especially in comparatively small populations, this can quickly lead to a loss of genotypic diversity and thus to premature convergence. A well-known countermeasure is to switch to alternative population models which introduce substructures into the population that preserve genotypic diversity over a longer period of time and thus counteract the tendency towards premature convergence. This has been shown for various EAs such as genetic algorithms, the evolution strategy, other EAs or memetic algorithms. See also Evolutionary computation Evolution References Evolutionary biology Evolutionary algorithms Convergence (mathematics)
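De Jong's 95% criterion quoted above is easy to check directly. The following Python sketch is an illustration only, with a made-up bit-string population, and flags the gene positions that have converged by that definition.

def converged_genes(population, threshold=0.95):
    """Return the positions at which at least 'threshold' of the population
    shares the same allele value (De Jong's convergence criterion)."""
    n = len(population)
    positions = []
    for i in range(len(population[0])):
        counts = {}
        for genome in population:
            counts[genome[i]] = counts.get(genome[i], 0) + 1
        if max(counts.values()) / n >= threshold:
            positions.append(i)
    return positions

# Hypothetical population: gene 0 has converged, gene 1 has not.
pop = ["10", "10", "11", "10", "11"]
print(converged_genes(pop, threshold=0.95))  # -> [0]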
Premature convergence
[ "Mathematics", "Biology" ]
975
[ "Sequences and series", "Evolutionary biology", "Functions and mappings", "Convergence (mathematics)", "Mathematical structures", "Mathematical objects", "Mathematical relations" ]
1,514,907
https://en.wikipedia.org/wiki/Unary%20function
In mathematics, a unary function is a function that takes one argument. A unary operator is a unary function whose codomain coincides with its domain; unary operators therefore form a subset of the unary functions. In contrast, the domain of a general unary function need not coincide with its codomain. Examples The successor function, denoted succ, is a unary operator. Its domain and codomain are the natural numbers; its definition is succ(n) = n + 1. In some programming languages such as C, executing this operation is denoted by postfixing ++ to the operand, i.e. the use of x++ is equivalent to executing the assignment x = x + 1. Many of the elementary functions are unary functions, including the trigonometric functions, logarithm with a specified base, exponentiation to a particular power or base, and hyperbolic functions. See also Arity Binary function Binary operation Iterated binary operation Ternary operation Unary operation References Foundations of Genetic Programming Functions and mappings Types of functions
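The distinction drawn above between a unary function and a unary operator can be made concrete with a short, hypothetical Python example:

def successor(n: int) -> int:
    """A unary operator: one argument, and the codomain (int) coincides with the domain."""
    return n + 1

def length(s: str) -> int:
    """A unary function that is not a unary operator: its codomain (int) differs from its domain (str)."""
    return len(s)

assert successor(4) == 5
assert length("unary") == 5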
Unary function
[ "Mathematics" ]
196
[ "Mathematical analysis", "Functions and mappings", "Unary operations", "Mathematical objects", "Mathematical relations", "Types of functions" ]
1,514,954
https://en.wikipedia.org/wiki/Load-balanced%20switch
A load-balanced switch is a switch architecture which guarantees 100% throughput with no central arbitration at all, at the cost of sending each packet across the crossbar twice. Load-balanced switches are a subject of research for large routers scaled past the point of practical central arbitration. Introduction Internet routers are typically built using line cards connected with a switch. Routers supporting moderate total bandwidth may use a bus as their switch, but high bandwidth routers typically use some sort of crossbar interconnection. In a crossbar, each output connects to one input, so that information can flow through every output simultaneously. Crossbars used for packet switching are typically reconfigured tens of millions of times per second. The schedule of these configurations is determined by a central arbiter, for example a Wavefront arbiter, in response to requests by the line cards to send information to one another. Perfect arbitration would result in throughput limited only by the maximum throughput of each crossbar input or output. For example, if all traffic coming into line cards A and B is destined for line card C, then the maximum traffic that cards A and B can process together is limited by C. Perfect arbitration has been shown to require massive amounts of computation, that scales up much faster than the number of ports on the crossbar. Practical systems use imperfect arbitration heuristics (such as iSLIP) that can be computed in reasonable amounts of time. A load-balanced switch is not related to a load balancing switch, which refers to a kind of router used as a front end to a farm of web servers to spread requests to a single website across many servers. Basic architecture As shown in the figure to the right, a load-balanced switch has N input line cards, each of rate R, each connected to N buffers by a link of rate R/N. Those buffers are in turn each connected to N output line cards, each of rate R, by links of rate R/N. The buffers in the center are partitioned into N virtual output queues. Each input line card spreads its packets evenly to the N buffers, something it can clearly do without contention. Each buffer writes these packets into a single buffer-local memory at a combined rate of R. Simultaneously, each buffer sends packets at the head of each virtual output queue to each output line card, again at rate R/N to each card. The output line card can clearly forward these packets out the line with no contention. Each buffer in a load-balanced switch acts as a shared-memory switch, and a load-balanced switch is essentially a way to scale up a shared-memory switch, at the cost of additional latency associated with forwarding packets at rate R/N twice. The Stanford group investigating load-balanced switches is concentrating on implementations where the number of buffers is equal to the number of line cards. One buffer is placed on each line cards, and the two interconnection meshes are actually the same mesh, supplying rate 2R/N between every pair of line cards. But the basic load-balanced switch architecture does not require that the buffers be placed on the line cards, or that there be the same number of buffers and line cards. One interesting property of a load-balanced switch is that, although the mesh connecting line cards to buffers is required to connect every line card to every buffer, there is no requirement that the mesh act as a non-blocking crossbar, nor that the connections be responsive to any traffic pattern. 
Such a connection is far simpler than a centrally arbitrated crossbar. Keeping packets in-order If two packets destined for the same output arrive back-to-back at one line card, they will be spread to two different buffers, which could have two different occupancies, and so the packets could be reordered by the time they are delivered to the output. Although reordering is legal, it is typically undesirable because TCP does not perform well with reordered packets. By adding yet more latency and buffering, the load-balanced switch can maintain packet order within flows using only local information. One such algorithm is FOFF (Fully Ordered Frames First). FOFF has the additional benefits of removing any vulnerability to pathological traffic patterns, and providing a mechanism for implementing priorities. Implementations Single chip crossbar plus load-balancing arbiter The Stanford University Tiny Tera project (see Abrizio) introduced a switch architecture that required at least two chip designs for the switching fabric itself (the crossbar slice and the arbiter). Upgrading the arbiter to include load-balancing and combining these devices could have reliability, cost and throughput advantages. Single global router Since the line cards in a load-balanced switch do not need to be physically near one another, one possible implementation is to use an entire continent- or global-sized backbone network as the interconnection mesh, and core routers as the "line cards". Such an implementation suffers from having all latencies increased to twice the worst-case transmission latency. But it has a number of intriguing advantages: Large backbone packet networks typically have massive overcapacity (10x or more) to deal with imperfect capacity planning, congestion, and other problems. A load-balanced switch backbone can deliver 100% throughput with an overcapacity of just 2x, as measured across the whole system. The underpinnings of large backbone networks are usually optical channels that cannot be quickly switched. These map well to the constant-rate 2R/N channels of the load-balanced switch's mesh. No route tables need be changed based on global congestion information, because there is no global congestion. Rerouting in the case of a node failure does require changing the configuration of the optical channels. But the reroute can be precomputed (there are only a finite number of nodes that can fail), and the reroute causes no congestion that would then require further route table changes. References External links Optimal Load-Balancing I. Keslassy, C. Chang, N. McKeown, and D. Lee Scaling Internet Routers Using Optics I. Keslassy, S. Chuang, K. Yu, D. Miller, M. Horowitz, O. Solgaard, and N. McKeown Computer networking Media access control
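The two-stage data path described in the basic architecture section can be sketched as a toy Python model; it omits timing, link rates, and the FOFF reordering logic, and all names are illustrative.

from collections import deque

N = 4
# voqs[buffer][output]: each intermediate buffer keeps one virtual output queue per output card.
voqs = [[deque() for _ in range(N)] for _ in range(N)]
spread = [0] * N            # per-input round-robin pointer for the first mesh

def stage1_arrive(inp, out, packet):
    """First stage: spread arriving packets over the buffers regardless of destination."""
    b = spread[inp]
    voqs[b][out].append(packet)
    spread[inp] = (b + 1) % N

def stage2_service(out, buf):
    """Second stage: each output drains the matching VOQ in each buffer in turn."""
    q = voqs[buf][out]
    return q.popleft() if q else None

# Two back-to-back packets from input 0 to output 2 land in different buffers,
# which is why packet order must be restored by a scheme such as FOFF.
stage1_arrive(0, 2, "pkt-A")
stage1_arrive(0, 2, "pkt-B")
print(stage2_service(2, 0), stage2_service(2, 1))  # -> pkt-A pkt-B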
Load-balanced switch
[ "Technology", "Engineering" ]
1,319
[ "Computer networking", "Computer science", "Computer engineering" ]
1,514,981
https://en.wikipedia.org/wiki/Surrogacy
Surrogacy is an arrangement, often supported by a legal agreement, whereby a woman agrees to pregnancy and childbirth on behalf of (an)other person(s) who will become the child's legal parent(s) after birth. People pursue surrogacy for a variety of reasons such as infertility, dangers or undesirable factors of pregnancy, or when pregnancy is a medical impossibility. A surrogacy relationship or legal agreement contains the person who carries the pregnancy and gives birth and the person or persons who take custody of the child after birth. The person giving birth is called the birth mother or gestational carrier or surrogate mother or surrogate. The person(s) taking custody is/are called the commissioning parent(s) or intended parent(s). The biological mother may be the surrogate or the intended parent or neither. Surrogate mothers are usually introduced to intended parent(s) through third-party agencies, or other matching channels. They are usually required to participate in processes of insemination (no matter traditional or IVF), pregnancy, delivery, and newborn feeding early after birth. In surrogacy arrangements, monetary compensation may or may not be involved. Receiving money for the arrangement is known as commercial surrogacy. The legality and cost of surrogacy varies widely between jurisdictions, contributing to fertility tourism, and sometimes resulting in problematic international or interstate surrogacy arrangements. For example, those living in a country where surrogacy is banned travel to a jurisdiction that permits it. In some countries, surrogacy is legal if there is no financial gain. Where commercial surrogacy is legal, third-party agencies may assist by finding a surrogate and arranging a surrogacy contract with her. These agencies often obtain medical tests to ensure healthy gestation and delivery. They also usually facilitate legal matters concerning the intended parents and the surrogate. Methods Surrogacy may be either traditional or gestational, which are differentiated by the genetic origin of the egg. Gestational surrogacy tends to be more common than traditional surrogacy and is considered less legally complex. Traditional surrogacy A traditional surrogacy (also known as partial, natural, or straight surrogacy) is one where the surrogate's egg is fertilised by the intended father's or a donor's sperm. Insemination of the surrogate can be either through sex (natural insemination) or artificial insemination. Using the sperm of a donor results in a child who is not genetically related to the intended parent(s). If the intended father's sperm is used in the insemination, the resulting child is genetically related to both him and the surrogate. Some choose to inseminate privately without the intervention of a doctor or physician. In some jurisdictions, the intended parents using donor sperm need to go through an adoption process to have legal parental rights of the resulting child. Many fertility centres that provide for surrogacy assist the parties through the legal process. Gestational surrogacy Gestational surrogacy (also known as host or full surrogacy) was first achieved in April 1986. It takes place when an embryo created by in vitro fertilization (IVF) technology is implanted in a surrogate, sometimes called a gestational carrier. 
Gestational surrogacy has several forms, and in each form, the resulting child is genetically unrelated to the surrogate: The embryo is created using the intended father's sperm and the intended mother's eggs; The embryo is created using the intended father's sperm and a donor egg; The embryo is created using the intended mother's egg and donor sperm; A donor embryo is transferred to a surrogate. Such an embryo may be available when others undergoing IVF have embryos left over, which they donate to others. Risks Embryo The embryo implanted in gestational surrogacy faces the same risks as anyone using IVF would. Preimplantation risks of the embryo include unintentional epigenetic effects, influence of the media in which the embryo is cultured, and undesirable consequences of invasive manipulation of the embryo. Often, multiple embryos are transferred to increase the chance of implantation, and if multiple gestations occur, both the surrogate and the embryos face higher risks of complications. Children born through singleton IVF surrogacy have been shown to have no physical or mental abnormalities compared to those children born through natural conception. However, multiple gestations in gestational carriers often result in preterm labor and delivery, with consequent prematurity and physical and/or mental anomalies. Surrogate mothers "Pregnancy surrogates seem to have a higher risk of developing complications such as postpartum haemorrhage and severe pre-eclampsia and are more likely to give birth prematurely." Gestational surrogates have a smaller chance of having hypertensive disorder during pregnancy compared to mothers pregnant by oocyte donation. This is possibly because gestational carriers tend to be healthier and more fertile than women who use oocyte donation. Gestational carriers also have low rates of placenta previa / placental abruptions (1.1–7.9%). In many countries, such as China, there is a wide gap between the legislation on surrogacy and its actual regulation. Because official oversight is insufficient, surrogacy arrangements and the safety of surrogate mothers lack professional support and reliable operating standards, and adequate medical conditions cannot be guaranteed either. All these precarious factors increase the safety risks of procedures such as egg retrieval and insemination. Moreover, underground contracts can inflict serious physiological harm on surrogate mothers. Surrogacy agencies ignore surrogate mothers' health risks and deaths: enforced foetal sex selection through forced abortion is very common, and multiple implantations and foetal reduction procedures may also be repeated on the same surrogate mother, causing health hazards such as miscarriage, infertility, and even death. Outcomes Among gestational surrogacy arrangements, between 19% and 33% of gestational surrogates will successfully become pregnant from an embryo transfer. Of these cases, 30–70% will result in live birth. For surrogate pregnancies where only one child is born, the preterm birth rate in surrogacy is marginally lower than that of babies born from standard IVF (11.5% vs 14%). Babies born from surrogacy also have a similar average gestational age to infants born through in vitro fertilization and oocyte donation; approximately weeks. The preterm birth rate was higher for surrogate twin pregnancies compared to single births.
There are fewer babies with low birth weight when born through surrogacy compared to those born through in vitro fertilization but both methods have similar rates of birth defects. Indications for surrogacy Opting for surrogacy is a choice for single men desiring to raise a child from infancy, same sex couples unable or unwilling for pregnancy, or women unable or unwilling to carry children on their own. Surrogacy is chosen by women for a number of medical reasons, such as abnormal or absent uterus, either congenitally (also known as Mayer–Rokitansky–Kuster–Hauser syndrome) or post-hysterectomy. Women may have a hysterectomy due to complications in childbirth such as heavy bleeding or a ruptured uterus. Medical diseases such as cervical cancer or endometrial cancer can also lead to surgical removal of the uterus. Past implantation failures, history of multiple miscarriages, or concurrent severe heart or renal conditions that can make pregnancy harmful may also prompt women to consider surrogacy. The biological impossibility of single men and same-sex couples having a baby also may indicate surrogacy as an option. Gestational surrogacy In gestational surrogacy, the child is not biologically related to the surrogate, who is often referred to as a gestational carrier. Instead, the embryo is created via in vitro fertilization (IVF), using the eggs and sperm of the intended parents or donors, and is then transferred to the surrogate. Because gestational surrogacy includes at least one round of IVF, it is always more expensive than a round of IVF alone. According to recommendations made by the European Society of Human Reproduction and Embryology and American Society for Reproductive Medicine, a gestational carrier is preferably between the ages of 21 and 45, has had one full-term, uncomplicated pregnancy where she successfully had at least one child, and has had no more than five deliveries or three Caesarean sections.   The International Federation of Gynaecology and Obstetrics recommends that the surrogate's autonomy should be respected throughout the pregnancy even if her wishes conflict with what the intended parents want. The most commonly reported motivation given by gestational surrogates is an altruistic desire to help a childless couple. Other less commonly given reasons include enjoying the experience of pregnancy, and financial compensation. History Having another woman bear a child for a couple to raise, usually with the male half of the couple as the genetic father, has been referenced since the ancient times. Babylonian law and custom allowed this practice, and a woman unable to give birth could use the practice to avoid a divorce, which would otherwise be inevitable. Many developments in medicine, social customs, and legal proceedings around the world paved the way for modern surrogacy: 1936 In the U.S., drug companies Schering-Kahlbaum and Parke-Davis started the pharmaceutical production of estrogen. 1944 Harvard Medical School professor John Rock became the first person to fertilize human ovum outside the uterus. 1953 Researchers successfully performed the first cryopreservation of sperm. 1976 Michigan lawyer Noel Keane wrote the first surrogacy contract in the United States. 1978 Louise Brown, the first "test-tube baby", was born in England, the product of the first successful IVF procedure. 1985–1986 A woman carried the first successful gestational surrogate pregnancy. 1986 Melissa Stern, otherwise known as "Baby M," was born in the U.S. 
The surrogate and biological mother, Mary Beth Whitehead, refused to give up custody of Melissa to the couple with whom she made the surrogacy agreement. The courts of New Jersey found that Whitehead was the child's legal mother and declared contracts for gestational carrierhood illegal and invalid. However, the court found it in the best interest of the infant to award custody of Melissa to the child's biological father, William Stern, and his wife Elizabeth Stern, rather than to Whitehead, the gestational carrier. 1990 In California, gestational carrier Anna Johnson refused to give up the baby to intended parents Mark and Crispina Calvert. The couple sued her for custody (Calvert v. Johnson), and the court upheld their parental rights. In doing so, it defined the legal mother as the woman who, according to the surrogacy agreement, intends to create and raise a child. 2009 Ukraine, one of the most requested countries in Europe for this treatment, has its first Surrogacy Law approved. 2021 The Supreme Court of Mexico ruled that every individual, regardless of sexual orientation, marital status, or nationality, has the right to access assisted reproductive technology to form a family, and that the Civil Code of the state of Tabasco that restricts surrogacy to Mexican married couples is unconstitutional. It also ruled that legal parentage should be based on the presence of procreational will, not genetic or gestational relationship. Psychological concerns Surrogate Anthropological studies of surrogates have shown that surrogates engage in various distancing techniques throughout the surrogate pregnancy so as to avoid becoming emotionally attached to the baby. Many surrogates intentionally try to foster the development of emotional attachment between the intended mother and the surrogate child. Some surrogates describe feeling empowered by the experience. Although gestational surrogates generally report being satisfied with their experience as surrogates, there are cases in which they are not. Unmet expectations are associated with dissatisfaction. Some women did not feel a certain level of closeness with the couple and others did not feel respected by the couple. Some gestational surrogates report emotional distress during the process of surrogacy. There may be a lack of access to therapy and emotional support through the surrogate process. Gestational surrogates may struggle with postpartum depression and issues with relinquishing the child to their intended parents. Immediate postpartum depression has been observed in gestational surrogates at a rate of 0-20%. Some surrogates report negative feelings with relinquishing rights to the child immediately after birth, but most negative feelings resolve after some time. Child and intended parents A systematic review of 55 studies examining the outcomes for surrogacy for surrogates and resulting families showed that there were no major psychological differences in children up to the age of 10 years old that were born from surrogacy compared to those children born from other assisted reproductive technology or those children conceived naturally. Gay men who have become fathers using surrogacy have reported similar experiences to those of other couples who have used surrogacy, including their relationship with both their child and their surrogate. 
A study has followed a cohort of 32 surrogacy, 32 egg donation, and 54 natural conception families through to age seven, reporting the impact of surrogacy on the families and children at ages one, two, and seven. At age one, parents through surrogacy showed greater psychological well-being and adaptation to parenthood than those who conceived naturally; there were no differences in infant temperament. At age two, parents through surrogacy showed more positive mother–child relationships and less parenting stress on the part of fathers than their natural conception counterparts; there were no differences in child development between these two groups. At age seven, the surrogacy and egg donation families showed less positive mother–child interaction than the natural conception families, but there were no differences in maternal positive or negative attitudes or child adjustment. The researchers concluded that the surrogacy families continued to function well. Legal issues The legality of surrogacy varies around the world. Many countries do not have laws which specifically deal with surrogacy. Some countries ban surrogacy outright, while others ban commercial surrogacy but allow altruistic surrogacy (in which the surrogate is not financially compensated). Some countries allow commercial surrogacy, with few restrictions. Some jurisdictions extend a ban on surrogacy to international surrogacy. In some jurisdictions rules applicable to adoptions apply while others do not regulate the practice. Commercial surrogacy is banned in Canada and most of Europe. The US, Ukraine, Russia and Georgia have the least restrictive laws in the world, allowing commercial surrogacy, including for foreigners. Surrogacy is legal and common in Iran, and monetary remuneration is practiced and allowed by religious authorities.. Several Asian countries used to have less restrictive laws, but the practice has since been restricted. In 2013, Thailand banned commercial surrogacy, and restricted altruistic surrogacy to Thai couples. In 2016, Cambodia also banned commercial surrogacy. Nepal, Mexico, and India have also recently banned foreign commercial surrogacy. Laws dealing with surrogacy must deal with: Enforceability of surrogacy agreements. In some jurisdictions, they are void or prohibited, and some jurisdictions distinguish between commercial and altruistic surrogacy. The different issues raised by traditional and gestational surrogacy. Mechanisms for the legal recognition of the intended parents as the legal parents, either by pre-birth orders or by post-birth adoption. Although laws differ widely from one jurisdiction to another, some generalizations are possible: The historical legal assumption has been that the woman giving birth to a child is that child's legal mother, and the only way for another woman to be recognized as the legal mother is through adoption (usually requiring the birth mother's formal abandonment of parental rights). Even in jurisdictions that do not recognize surrogacy arrangements, if the potential adoptive parents and the birth mother proceed without any intervention from the government and do not change their mind along the way, they will likely be able to achieve the effects of surrogacy by having the gestational carrier give birth and then give the child up for private adoption to the intended parents. If the jurisdiction specifically bans surrogacy, however, and authorities find out about the arrangement, there may be financial and legal consequences for the parties involved. 
One jurisdiction (Quebec) prevented the genetic mother's adoption of the child even though that left the child with no legal mother. Some jurisdictions specifically prohibit only commercial and not altruistic surrogacy. Even jurisdictions that do not prohibit surrogacy may rule that surrogacy contracts (commercial, altruistic, or both) are void. If the contract is either prohibited or void, then there is no recourse if one party to the agreement has a change of heart: if a surrogate changes her mind and decides to keep the child, the intended mother has no claim to the child even if it is her genetic offspring, and the couple cannot get back any money they may have paid the surrogate; if the intended parents change their mind and do not want the child after all, the surrogate cannot get any money to make up for the expenses, or any promised payment, and she will be left with legal custody of the child. Jurisdictions that permit surrogacy sometimes offer a way for the intended mother, especially if she is also the genetic mother, to be recognized as the legal mother without going through the process of abandonment and adoption. Often this is via a birth order in which a court rules on the legal parentage of a child. These orders usually require the consent of all parties involved, sometimes even including the husband of a married gestational surrogate. Most jurisdictions provide for only a post-birth order, often out of an unwillingness to force the gestational carrier to give up parental rights if she changes her mind after the birth. A few jurisdictions do provide for pre-birth orders, generally only in cases when the gestational carrier is not genetically related to the expected child. Some jurisdictions impose other requirements in order to issue birth orders: for example, that the intended parents be heterosexual and married to one another. Jurisdictions that provide for pre-birth orders are also more likely to provide for some kind of enforcement of surrogacy contracts. Citizenship The citizenship and legal status of the children resulting from surrogacy arrangements can be problematic. The Hague Conference Permanent Bureau identified the question of citizenship of these children as a "pressing problem" in the Permanent Bureau 2014 Study (Hague Conference Permanent Bureau, 2014a: 84–94). According to U.S. Department of State, Bureau of Consular Affairs, for a child born abroad to be a U.S. citizen one or both of the child's genetic parents must be a U.S. citizen. In other words, the only way for a foreign born surrogate child to acquire U.S. citizenship automatically at birth is if they are the biological child of a U.S. citizen. Furthermore, in some countries, the child will not be a citizen of the country in which they are born because the gestational carrier is not legally the parent of said child. This could result in a child being born without citizenship. Canada “In Canada, it is a crime to pay (in cash, goods, property or services), offer to pay or advertise to pay a woman to be a surrogate mother.” East Asia In South Korea, Hong Kong, Malaysia, Thailand, and India, surrogacies are all regulated “through national laws that expressly ban it or explicitly set the parameters for its legality”. China Particularly in China, surrogacy operates within a legally gray area. Scholars mostly claim that surrogacy incites social instability both for the Chinese Government and the public, such as civil disputes, gender disproportion, crime, and the spread of disease. 
However, no legislation or enforcement action has been published against surrogacy itself, whether directed at surrogate mothers or at intermediary agencies, despite the state government's declared intention to ban the practice. Any medical organization involved in surrogacy is considered to be in violation of the law, including any institution that organizes, implements, or facilitates the retrieval and sale of women's eggs. Statistics suggest that more than 400 surrogacy agencies facilitate the birth of more than 10,000 surrogate children every year on average, operating underground despite the legal prohibitions. Because of this legal ambiguity, surrogate mothers have become an underprivileged group facing infringement of their reproductive rights and the absence of formal legal protection. Many of the conditions they should have, such as emotional care and social resources, are absent, with research claiming that surrogacy contracts usually blindly meet client needs while ignoring the health and well-being of the surrogate mothers. They are marginalized by society and lack the companionship of their partners and proper medical checkups during the nearly one year of pregnancy. Australia "Australian states and territories allow altruistic surrogacy but prohibit commercial surrogacy." Europe Some countries in Europe allow altruistic surrogacy (where the surrogate is not paid), but most European countries prohibit commercial surrogacy. Ethical issues Numerous ethical questions have been raised with regard to surrogacy. They generally stem from concerns relating to social justice, women's rights, child welfare, bioethics, and traditional societal values. Surrogate Those who view surrogacy as a social justice issue argue that it leads to the exploitation of women whose wombs are commodified to meet the reproductive desires of the more affluent. They argue that creating a commercial market for human bodies is inherently exploitative: "A steady supply of women's bodies is needed in order to meet the demands of rich couples who can afford to pay extravagant fees to agencies." While some hold that any consensual process is not a human rights violation, other human rights activists argue that human rights are not just about survival but about human dignity and respect. Almost all countries ban the sale of human organs (e.g., selling a spare kidney), and it is argued that renting out the use of human organs and bodily processes should be prohibited for similar reasons. Some feminists have also argued that surrogacy is an assault on a woman's dignity and right to autonomy over her body. By degrading women to purchasable "baby producers", commercial surrogacy has been accused by feminists of commodifying women's bodies in a manner akin to prostitution. Some feminists also express concerns over links between surrogacy and patriarchal expressions of domination, as numerous reports have been cited of women in developing countries coerced into commercial surrogacy by their husbands wanting to "earn money off of their wives' bodies". Surrogate contracts can impose restrictions on the surrogate that some say violate the surrogate mother's rights, such as the right to freedom of movement. These contracts can allow other people to legally impose requirements on the pregnant person that some argue result in "your body, my choice". Other human rights activists express concern over the conditions under which gestational carriers are kept by surrogacy clinics, which exercise much power and control over the process of surrogate pregnancy.
It is argued that gestational carriers, isolated from friends and family and required to live in separate surrogacy hostels on the pretext of ensuring consistent prenatal care, may face psychological challenges that cannot be offset by the (limited) economic benefits of surrogacy. Other psychological issues are noted, such as the implications of gestational carriers emotionally detaching themselves from their babies in anticipation of relinquishing them at birth. Some argue that women in developing countries are particularly vulnerable to exploitation from surrogacy. Decisions cannot be defined as involving agency if they are driven by coercion, violence, or extreme poverty, which is often the case with women in developing countries who pursue surrogacy due to economic need or aggressive persuasion from their husbands. While opponents of this stance argue that surrogacy provides a much-needed source of revenue for women facing poverty in developing countries, others purport that the lack of legislation in such countries often leads to much of the profit accruing to middlemen and commercial agencies rather than the gestational carriers themselves. Supporters of surrogacy have argued for mandatory education of gestational carriers regarding their rights and the risks involved in the process, in order both to address the ethical issues that arise and to enhance their autonomy. Both opponents and supporters of surrogacy have agreed that implementing international laws on surrogacy can limit the social justice issues that gestational carriers face in transnational surrogacy. Some argue that commercial surrogacy strips birthmothers of their natural rights. Most countries consider the birthmother to be the legal mother unless she freely chooses to put her child up for adoption (without coercion or payment). When a woman elects to use a donor egg to become pregnant, she is not the biological mother, but is still considered the legal mother because she is the birthmother; similarly, a surrogate is still the birthmother even if she was paid to use a donor egg. Some argue that birth mothers cannot be coerced (or paid) to relinquish custody of the child they bore (though any birthmother might need to share custody with another). It has been argued that under the laws of countries where surrogacy falls under the umbrella of adoption, commercial surrogacy can be considered problematic, as payment for adoption is unethical. Child Those concerned with the rights of the child in the context of surrogacy reference issues related to identity and parenthood, abandonment and abuse, and child trafficking. It is argued that in commercial surrogacy, the rights of the child are often neglected as the baby becomes a mere commodity within an economic transaction of a good and a service. Such opponents of surrogacy argue that transferring the duties of parenthood from the birthing mother to a contracting couple denies the child any claim to its birth mother and to its biological parents if the egg and/or sperm is/are not that of the contracting parents. In addition, they claim that the child has no right to information about any siblings he or she may have in the latter instance. Disclosing to the child that surrogacy was used as an assisted reproductive technique has also been argued to be important, both because of health risks and for the rights of the child. It has been argued that bans on surrogacy are violations of human rights under the Inter-American Court of Human Rights' landmark jurisprudence on reproductive rights.
However, it has been stated that there is no "right to a child" under international law. The United Nations Report of the Special Rapporteur on the sale and sexual exploitation of children states, "A child is not a good or service that the State can guarantee or provide, but rather a rights-bearing human being" and argues that commercial surrogacy (where transfer of the child is a condition for payment) violates human rights as it is considered to be the sale of children (and humans cannot be bought or sold). UNICEF says "A legally binding contractual relationship between the surrogate mother and the intending parent(s) established pre-birth, in which the transfer of the child would be made conditional upon payment, would constitute the sale of a child….The identity and family relations of a child cannot be for sale." Traditional values in Chinese society In China, surrogacy has been argued to contradict traditional Chinese values. Traditional Chinese values focus on blood ties and family ties. The physical connection between parents and children and the process by which parents give birth to children are considered virtuous ("生恩 shēng'ēn"). There is also an ancient Chinese saying that "the body, hair, and skin come from the parents who gave birth to one" ("身体发肤受之父母 shēntǐ fà fū shòu zhī fùmǔ"), meaning that blood relatives should be respected and that one should not harm oneself at will. Because Chinese people regard blood relations as an important way to demonstrate filial piety and family intimacy, these traditional concepts are rooted in society's cognitive norms. Such emphasis on biological parents and blood relations has undoubtedly resulted in conflicts with the practice of surrogacy, which regards childbirth as only a physiological process. Correspondingly, this valuing of kinship relations strongly affects the social status of surrogate mothers. They are easily regarded as "heartless" or as not caring about their own children in Chinese society, because they are responsible only for the birth process, hand the children over to others, and do not take part in the upbringing. However, there are also opinions that this separation from the children is not voluntary for surrogate mothers, but is forced by third-party agencies or imposed by unfair contracts. They can only give up the right to raise their children and send them away, despite suffering great psychological and emotional trauma. Financial aspects According to the Assisted Human Reproduction Act adopted in 2004, it is prohibited in Canada to compensate a woman for acting as a surrogate mother or to advertise the payment of such compensation. However, on October 1, 2016, Health Canada announced its intention to update and strengthen the Assisted Human Reproduction Act to regulate the financial aspects of contracts between intended parents and surrogate mothers. According to research, surrogate mothers are mostly motivated by their low socioeconomic status or family debt; they are more likely to be forced into surrogacy due to financial pressures. Since 2020, Section 12 of the Assisted Human Reproduction Act has provided for the reimbursement of expenses and monetary compensation to the surrogate mother to alleviate the financial burden associated with surrogacy. According to this proposed regulation, the reimbursement of eligible expenses is not obligatory, a provision aimed at emphasizing the voluntary nature of the gesture.
The proposed regulation provides a non-exhaustive list of different categories of eligible expenses, such as parking fees, travel expenses, caregiver expenses, meals, psychological consultations, etc. Additionally, the surrogate mother can be reimbursed for any lost wages during pregnancy if she obtains written confirmation from a qualified physician that the work posed a risk to the pregnancy. In the US, the total costs for gestational surrogacy usually exceed US$100,000 per pregnancy. This includes hiring an agency to find a woman willing to carry the baby, the medical and health insurance costs for the pregnancy, legal fees, and IVF to create the embryos.  Additionally, some people have additional fees for egg or sperm donations, travel, money paid to the surrogate for lost work, maternity clothes, or other expenses. Religious issues Different religions take different approaches to surrogacy, often related to their stances on assisted reproductive technology in general. Buddhism Buddhist thought is inconclusive on the matter of surrogacy. The prominent belief is that Buddhism totally accepts surrogacy since there are no Buddhist teachings suggesting that infertility treatments or surrogacy are immoral. This stance is further supported by the common conception that serving as a gestational carrier is an expression of compassion and therefore automatically aligns with Buddhist values. However, numerous Buddhist thinkers have expressed concerns with certain aspects of surrogacy. One Buddhist perspective on surrogacy arises from the Buddhist belief in reincarnation as a manifestation of karma. According to this view, gestational carrierhood circumvents the workings of karma by interfering with the natural cycle of rebirth. Others reference the Buddha directly who purportedly taught that trade in sentient beings, including human beings, is not a righteous practice as it almost always involves exploitation that causes suffering. Susumu Shimazono, professor of Religious Studies at the University of Tokyo, contends in the magazine Dharma World that surrogacy places the childbearing surrogate in a position of subservience, in which her body becomes a "tool" for another. Simultaneously, other Buddhist thinkers argue that as long as the primary purpose of being a gestational carrier is out of compassion instead of profit, it is not exploitative and is therefore morally permissible. This further highlights the lack of consensus on surrogacy within the Buddhist community. Christianity Catholicism The Roman Catholic Church is opposed to surrogacy, which it views as immoral and incompatible with Biblical texts surrounding topics of birth, marriage, and life. Paragraph 2376 of the Catechism of the Catholic Church states that: "Techniques that entail the dissociation of husband and wife, by the intrusion of a person other than the couple (donation of sperm or ovum, surrogate uterus), are gravely immoral.". Paragraph 2378 states, “A child is not something owed to one, but is a gift. the "supreme gift of marriage" is a human person. A child may not be considered a piece of property, an idea to which an alleged "right to a child" would lead. 
In this area, only the child possesses genuine rights: the right "to be the fruit of the specific act of the conjugal love of his parents," and "the right to be respected as a person from the moment of his conception.”" Many proponents of this stance express concern that the sanctity of marriage may be compromised by the insertion of a third party into the marriage contract. Additionally, the practice of in vitro fertilisation involved in gestational surrogacy is generally viewed as morally impermissible due to its removal of human conception from the act of sexual intercourse. Roman Catholics also condemn in vitro fertilisation due to the destruction of embryos that accompanies the frequent practice of discarding, freezing, or donating non-implanted eggs to stem cell research. As such, the Roman Catholic Church deems all practices involving in vitro fertilisation, including gestational surrogacy, as morally problematic. Hinduism Surrogacy does not conflict with the Hindu religion. Surrogacy and other scientific methods of assisted reproduction are generally supported within the Hindu community. While Hindu scholars have not debated the issue extensively, T. C. Anand Kumar, an Indian reproductive biologist, argues that there is no conflict between Hinduism and assisted reproduction. Others have supported this stance with reference to Hindu faith, including a story in the Bhagavata Purana which suggests the practice of gestational carrier-hood: Kamsa, the wicked king of Mathura, had imprisoned his sister Devaki and her husband Vasudeva because oracles had informed him that her child would be his killer. Every time she delivered a child, he smashed its head on the floor. He killed six children. When the seventh child was conceived, the gods intervened. They summoned the goddess Yogamaya and had her transfer the fetus from the womb of Devaki to the womb of Rohini (Vasudeva's other wife who lived with her sister Yashoda across the river Yamuna, in the village of cowherds at Gokulam). Thus the child conceived in one womb was incubated in and delivered through another womb. Additionally, infertility is often associated with karma in the Hindu tradition and consequently treated as a pathology to be treated. This has led to general acceptance of medical intervention for addressing infertility amongst Hindus. As such, surrogacy and other scientific methods of assisted reproduction are generally supported within the Hindu community. Nonetheless, Hindu women do not commonly use surrogacy as an option to treat infertility, despite often serving as surrogates for Western commissioning couples. When surrogacy is practiced by Hindus, it is more likely to be used within the family circle as opposed to involving anonymous donors. Islam For Muslims, the Qur'anic injunction that "their mothers are only those who conceived them and gave birth to them (waladna hum)" denies the distinction between genetic and gestational mothers, hence complicating notions of lineage within the context of surrogacy, which are central to the Muslim faith. Jainism Jain scholars have not debated the issue of surrogacy extensively. Nonetheless, the practice of surrogacy is referenced in the Śvētāmbara tradition of Jainism according to which the embryo of Lord Mahavira was transferred from a Brahmin woman Devananada to the womb of Trishala, the queen of Kshatriya ruler Siddharth, by a divinity named Harinegameshin. This account is not present in Digambara Jain texts, however. 
Other sources state that surrogacy is not objectionable in the Jain view as it is seen as a physical operation akin to any other medical treatment used to treat a bodily deficiency. However, some religious concerns related to surrogacy have been raised within the Jain community including the loss of non-implanted embryos, destruction of traditional marriage relationships, and adulterous implications of gestational surrogacy. Judaism In general, there is a lack of consensus within the Jewish community on the matter of surrogacy. Jewish scholars and rabbis have long debated this topic, expressing conflicting views on both sides of the debate. Those supportive of surrogacy within the Jewish religion generally view it as a morally permissible way for Jewish women who cannot conceive to fulfill their religious obligations of procreation. Rabbis who favour this stance often cite Genesis 9:1 which commands all Jews to "be fruitful and multiply". In 1988, the Committee on Jewish Law and Standards associated with the Conservative Jewish movement issued formal approval for surrogacy, concluding that "the mitzvah of parenthood is so great that ovum surrogacy is permissible". Jewish scholars and rabbis which hold an anti-surrogacy stance often see it as a form of modern slavery wherein women's bodies are exploited and children are commodified. As Jews possess the religious obligation to "actively engage in the redemption of those who are enslaved", practices seen as involving human exploitation are morally condemned. This thinking aligns with concerns brought forth by other groups regarding the relation between surrogacy practices and forms of human trafficking in certain countries with large fertility tourism industries. Several Jewish scholars and rabbis also cite ethical concerns surrounding the "broken relationship" between the child and its surrogate birth mother. Rabbi Immanuel Jacovits, chief rabbi of the United Hebrew Congregation from 1976 to 1991, reported in his 1975 publication Jewish Medical Ethics that "to use another person as an incubator and then take from her the child that she carried and delivered for a fee is a revolting degradation of maternity and an affront to human dignity." Another point of contention surrounding surrogacy within the Jewish community is the issue of defining motherhood. There are generally three conflicting views on this topic: 1) the ovum donor is the mother, 2) the gestational carrier is the mother, and 3) the child has two mothers—both the ovum donor and the gestational carrier. While most contend that parenthood is determined by the woman giving birth, a minority opt to consider the genetic parents the legal parents, citing the well-known passage in Sanhedrin 91b of the Talmud which states that life begins at conception. Also controversial is the issue of defining Judaism in the context of surrogacy. Jewish Law states that if a Jewish woman is the surrogate, then the child is Jewish. However, this often raises issues when the child is raised by a non-Jewish family and approaches for addressing this issue are also widely debated within the Jewish community. Fertility tourism Some countries, such as the United States, Canada, Greece, Georgia and Mexico are popular surrogacy destinations for foreign intended parents. Ukraine, Belarus and Russia were also destinations before the Russian invasion of Ukraine. Eligibility, processes and costs differ from country to country. 
Fertility tourism for surrogacy is driven by legal restrictions in the home country or the incentive of lower prices abroad. Previously popular destinations, India, Nepal and Thailand have all recently implemented bans on commercial surrogacy for non-residents. China is also a famous destination, even though surrogacy is legally banned. See also Adoption Artificial insemination Bioethics Commercial animal cloning Egg donation Embryo transfer Fertility Infertility Sexual surrogate Sperm donation Surrogacy laws by country Third-party reproduction Mitochondrial replacement therapy References Further reading Teman, Elly (March, 2010). "Birthing a Mother: The Surrogate Body and the Pregnant Self". Berkeley: University of California Press. Siegel-Itzkovich, Judy (April 3, 2010). "Womb to Let". The Jerusalem Post. Li, Shan (February 18, 2012). "Chinese Couples Come to U.S. to Have Children Through Surrogacy". Los Angeles Times. External links "Surrogacy", Better Health Channel, State Government of Victoria, Australia Human pregnancy Obstetrics Gendered occupations Bioethics Human commodity
Surrogacy
[ "Technology" ]
8,667
[ "Bioethics", "Ethics of science and technology" ]
1,514,995
https://en.wikipedia.org/wiki/Staatsgalerie%20Stuttgart
The Staatsgalerie Stuttgart ("State Gallery") is an art museum in Stuttgart, Germany; it opened in 1843. In 1984, the opening of the Neue Staatsgalerie (New State Gallery) designed by James Stirling transformed the once provincial gallery into one of Europe's leading museums. Alte Staatsgalerie The classicist building of the Alte Staatsgalerie, built in 1843, was originally also the home of the Royal Art School. After being severely damaged in World War II, it was rebuilt in 1945–1947 and reopened in 1958. It houses the following collections: Old German paintings 1300–1550 Italian paintings 1300–1800 Dutch paintings 1500–1700 German paintings of the baroque period Art from 1800–1900 (romanticism, impressionism) Neue Staatsgalerie The Neue Staatsgalerie, a controversial architectural design by James Stirling, opened on March 9, 1984, on a site right next to the old building. It houses a collection of 20th-century modern art, from Pablo Picasso to Oskar Schlemmer, Joan Miró and Joseph Beuys. The building layout bears resemblance to Schinkel's Altes Museum, with a series of connected galleries around three sides of a central rotunda. However, the front of the museum is not as symmetrical as that of the Altes Museum, and the traditional configuration is slanted, with the entrance set at an angle. Notable works in collection Annibale Carracci's Corpse of Christ (1583–1585) Max Beckmann's Journey on the Fish Salvador Dalí's The Raised Instant (1938) Otto Dix's The Match Seller (1920) George Grosz's The Funeral (1918) Franz Marc's The Small Yellow Horses (1912) Henri Matisse's With the Toilet (La Hair-style) (1907) Joan Miró's The Bird with the Calm View, the Wings in Flames (1952) Piet Mondrian's Composition in White, Red and Blue (1936) Pablo Picasso's Tumblers (Mother and Son) (1905), Laufende Frauen am Strand (1922), The Breakfast in the Free One (1961) Barnett Newman's Who's Afraid of Red, Yellow and Blue II (1967) Works by: Paul Klee, Marc Chagall, Wassily Kandinsky, Willi Baumeister, Gerhard Richter In 2013, the Staatsgalerie returned Virgin and Child, a 15th-century painting attributed to the Master of Flémalle (1375–1444), to the estate of Max Stern, a German-born Jewish dealer who fled the Nazis and later operated the Dominion Gallery in Montreal. See also Max Silberberg References External links Museums in Stuttgart Art museums and galleries in Baden-Württemberg Postmodern architecture Art museums and galleries established in 1843 1843 establishments in the German Confederation 19th-century establishments in Württemberg
Staatsgalerie Stuttgart
[ "Engineering" ]
605
[ "Postmodern architecture", "Architecture" ]
1,515,407
https://en.wikipedia.org/wiki/Comparison%20of%20command%20shells
A command shell is a command-line interface to interact with and manipulate a computer's operating system. General characteristics Interactive features Background execution Background execution allows a shell to run a command without user interaction in the terminal, freeing the command line for additional work with the shell. POSIX shells and other Unix shells allow background execution by using the & character at the end of the command. In PowerShell, the Start-Process or Start-Job cmdlets can be used. Completions Completion features assist the user in typing commands at the command line, by looking for and suggesting matching words for incomplete ones. Completion is generally requested by pressing the completion key (often the Tab key). Command name completion is the completion of the name of a command. In most shells, a command can be a program in the command path (usually $PATH), a builtin command, a function or alias. Path completion is the completion of the path to a file, relative or absolute. Wildcard completion is a generalization of path completion, where an expression matches any number of files, using any supported syntax for file matching. Variable completion is the completion of the name of a variable (environment variable or shell variable). Bash, zsh, and fish have completion for all variable names. PowerShell has completions for environment variable names, shell variable names and, from within user-defined functions, parameter names. Command argument completion is the completion of a specific command's arguments. There are two types of arguments, named and positional: Named arguments, often called options, are identified by their name or letter preceding a value, whereas positional arguments consist only of the value. Some shells allow completion of argument names, but few support completing values. Bash, zsh and fish offer parameter name completion through a definition external to the command, distributed in a separate completion definition file. For command parameter name/value completions, these shells assume path/filename completion if no completion is defined for the command. Completion can be set up to suggest completions by calling a shell function. The fish shell additionally supports parsing of man pages to extract parameter information that can be used to improve completions/suggestions. In PowerShell, all types of commands (cmdlets, functions, script files) inherently expose data about the names, types and valid value ranges/lists for each argument. This metadata is used by PowerShell to automatically support argument name and value completion for built-in commands/functions, user-defined commands/functions as well as for script files. Individual cmdlets can also define dynamic completion of argument values where the completion values are computed dynamically on the running system. Command history Users of a shell may find themselves typing something similar to what they have typed before. Support for command history means that a user can recall a previous command into the command-line editor and edit it before issuing the potentially modified command. Shells that support completion may also be able to directly complete the command from the command history given a partial/initial part of the previous command. Most modern shells support command history. Shells that support command history generally also support completion from the history rather than just recalling commands from it.
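As a rough illustration of how completion and history hang together in a line editor, the sketch below wires a small, hypothetical command list into Python's readline module (available on Unix-like builds of Python); real shells instead resolve candidates from the command path, builtins, aliases and per-command completion definitions as described above.

```python
import readline

COMMANDS = ["list", "load", "quit", "status"]  # hypothetical command names

def complete(text, state):
    """Return the state-th known command starting with the typed prefix."""
    matches = [c for c in COMMANDS if c.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer(complete)
readline.parse_and_bind("tab: complete")  # ask readline to complete on Tab

while True:
    try:
        line = input("demo> ")  # input() records each line in readline's history
    except EOFError:
        break
    if line.strip() == "quit":
        break
    print("you typed:", line)  # the up-arrow key now recalls earlier lines for editing
```

The division of labour is the same in real shells: the line editor asks a completer for candidates for the word under the cursor and keeps a history buffer from which previous lines can be recalled and edited.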
In addition to the plain command text, PowerShell also records execution start and end times and execution status in the command history. Mandatory argument prompt Mandatory arguments/parameters are arguments/parameters which must be assigned a value upon invocation of the command, function or script file. A shell that can determine ahead of invocation that there are missing mandatory values can assist the interactive user by prompting for those values instead of letting the command fail. Having the shell prompt for missing values allows the author of a script, command or function to mark a parameter as mandatory instead of creating script code to either prompt for the missing values (after determining that it is being run interactively) or fail with a message. PowerShell allows commands, functions and scripts to define arguments/parameters as mandatory. The shell determines prior to invocation whether there are any mandatory arguments/parameters which have not been bound, and will then prompt the user for the value(s) before actual invocation. Automatic suggestions Shells featuring automatic suggestions display optional command-line completions as the user types. The PowerShell and fish shells natively support this feature; pressing the right-arrow key inserts the completion. Implementations of this feature can differ between shells; for example, PowerShell and zsh use an external module to provide completions, and fish derives its completions from the user's command history. Directory history, stack or similar features Shells may record a history of directories the user has been in and allow for fast switching to any recorded location. This is referred to as a "directory stack". The concept had been realized as early as 1978 in the release of the C shell (csh). PowerShell allows multiple named stacks to be used. Locations (directories) can be pushed onto/popped from the current stack or a named stack. Any stack can become the current (default) stack. Unlike most other shells, PowerShell's location concept allows location stacks to hold file system locations as well as other location types such as Active Directory organizational units/groups, SQL Server databases/tables/objects, and Internet Information Server applications/sites/virtual directories. The command line interpreters 4DOS and its graphical successor Take Command Console also feature a directory stack. Implicit directory change A directory name can be used directly as a command, which implicitly changes the current location to that directory. This must be distinguished from an unrelated load drive feature supported by Concurrent DOS, Multiuser DOS, System Manager and REAL/32, where the drive letter L: will be implicitly updated to point to the load path of a loaded application, thereby allowing applications to refer to files residing in their load directory under a standardized drive letter instead of under an absolute path. Autocorrection When a command line does not match a command or arguments directly, spell checking can automatically correct common typing mistakes (such as case sensitivity, missing letters). There are two approaches to this; the shell can either suggest probable corrections upon command invocation, or this can happen earlier as part of a completion or autosuggestion. The tcsh and zsh shells feature optional spell checking/correction upon command invocation. Fish applies its correction as part of completion and autosuggestion.
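A toy version of this suggestion-style correction can be built from fuzzy matching against the set of known command names. The sketch below uses Python's difflib with a hypothetical command list purely as an illustration of the idea; it is not how tcsh, zsh, fish or PSReadLine implement their correction.

```python
import difflib

KNOWN_COMMANDS = ["status", "stash", "checkout", "commit"]  # hypothetical command set

def suggest(mistyped, commands=KNOWN_COMMANDS):
    """Return the closest known command name, or None if nothing is similar enough."""
    matches = difflib.get_close_matches(mistyped, commands, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(suggest("stats"))  # -> 'status'
print(suggest("comit"))  # -> 'commit'
print(suggest("xyz"))    # -> None, nothing close enough to propose
```

A shell doing this at invocation time would ask the user to confirm the proposed command before running it, while a shell doing it at completion time would simply offer the corrected name as one of the candidates.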
Because fish applies correction as part of completion and autosuggestion, the feature is not in the way when typing out the whole command and pressing enter, whereas extensive use of the tab and right-arrow keys makes the shell behave as mostly case insensitive. The PSReadLine PowerShell module (which is shipped with version 5.0) provides the option to specify a CommandValidationHandler ScriptBlock which runs before submitting the command. This allows for custom correcting of commonly mistyped commands, and verification before actually running the command. Progress indicator A shell script (or job) can report progress of long running tasks to the interactive user. Unix/Linux systems may offer other tools that support progress indicators from scripts or as standalone commands, such as the program "pv". These are not integrated features of the shells, however. PowerShell has a built-in command and API functions (to be used when authoring commands) for writing/updating a progress bar. Progress bar messages are sent separately from regular command output, and the progress bar is always displayed at the interactive user's console regardless of whether the progress messages originate from an interactive script, from a background job or from a remote session. Colored directory listings JP Software command-line processors provide user-configurable colorization of file and directory names in directory listings based on their file extension and/or attributes through an optionally defined environment variable. For the Unix/Linux shells, this is a feature of the ls command and the terminal. Text highlighting The command line processors in DOS Plus, Multiuser DOS, REAL/32 and in all versions of DR-DOS support a number of optional environment variables to define escape sequences that control text highlighting, reversion or colorization for display or print purposes in commands like TYPE. All mentioned command line processors support %$ON% and %$OFF%. If defined, these sequences will be emitted before and after filenames. The actual escape sequences depend on the output device, for example whether output is rendered through ANSI.SYS, on an ASCII terminal, or on an IBM or ESC/P printer. The variables %$HEADER% and %$FOOTER% are only supported by COMMAND.COM in DR-DOS 7.02 and higher to define sequences emitted before and after text blocks in order to control text highlighting, pagination or other formatting options. For the Unix/Linux shells, this is a feature of the terminal. Syntax highlighting A defining feature of the fish shell is built-in syntax highlighting. As the user types, text is colored to represent whether the input is a valid command or not (the executable exists and the user has permissions to run it), and valid file paths are underlined. An independent project offers syntax highlighting as an add-on to the Z Shell (zsh). This is not part of the shell, however. PowerShell provides customizable syntax highlighting on the command line through the PSReadLine module. This module can be used with PowerShell v3.0+, and is bundled with v5.0 onwards. It is loaded by default in the command line host "powershell.exe" since v5.0. Take Command Console (TCC) offers syntax highlighting in the integrated environment. Context sensitive help 4DOS, 4OS2, 4NT / Take Command Console and PowerShell (in PowerShell ISE) look up context-sensitive help information when a designated help key is pressed. Zsh provides various forms of configurable context-sensitive help as part of its widgets and commands, or in the completion of options for some commands.
The fish shell provides brief descriptions of a command's flags during tab completion. Programming features String processing and filename matching Inter-process communication Keystroke stacking In anticipation of what a given running application may accept as keyboard input, the user of the shell instructs the shell to generate a sequence of simulated keystrokes, which the application will interpret as keyboard input from an interactive user. By sending keystroke sequences the user may be able to direct the application to perform actions that would be impossible to achieve through input redirection or would otherwise require an interactive user. This may be needed, for example, if an application acts on keystrokes that cannot be redirected, distinguishes between normal and extended keys, flushes the queue before accepting new input on startup or under certain conditions, or does not read through standard input at all. Keystroke stacking typically also provides means to control the timing of simulated keys being sent, or to delay new keys until the queue has been flushed, etc. It also allows the user to simulate keys which are not present on the keyboard (because the corresponding keys do not physically exist or because a different keyboard layout is being used) and which would therefore be impossible to type. Security features Secure prompt Some shell scripts need to query the user for sensitive information such as passwords, private digital keys, PIN codes or other confidential information. Sensitive input should not be echoed back to the screen/input device where it could be gleaned by unauthorized persons. Plaintext memory representation of sensitive information should also be avoided, as it could allow the information to be compromised, e.g., through swap files, core dumps etc. The shells bash, zsh and PowerShell offer this as a specific feature. Shells which do not offer this as a specific feature may still be able to turn off echoing through some other means. Shells executing on a Unix/Linux operating system can use the external stty command to switch echoing of input characters off and on. In addition to not echoing back the characters, PowerShell's secure input option also encrypts the input character-by-character during the input process, ensuring that the string is never represented unencrypted in memory where it could be compromised through memory dumps, scanning, transcription etc. Execute permission Some operating systems define an execute permission which can be granted to users/groups for a file when the file system itself supports it. On Unix systems, the execute permission controls access to invoking the file as a program, and applies both to executables and scripts. As the permission is enforced in the program loader, no cooperation is needed from either the invoking program or the invoked program to enforce the execute permission; this also goes for shells and other interpreter programs. The behaviour is mandated by the POSIX C library that is used for interfacing with the kernel. POSIX specifies that the exec family of functions shall fail with EACCES (permission denied) if the file denies execution permission. The execute permission only applies when the script is run directly. If a script is invoked as an argument to the interpreting shell, it will be executed regardless of whether the user holds the execute permission for that script. Although Windows also specifies an execute permission, none of the Windows-specific shells block script execution if the permission has not been granted.
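On platforms where these shell features are not available to a script, the same two effects, prompting without echo and checking the execute bit, can be obtained from standard libraries. The snippet below is an assumed, generic illustration using Python's getpass and os.access; unlike PowerShell's secure input it does not keep the value encrypted in memory, and the file path shown is hypothetical.

```python
import getpass
import os

# Prompt for a secret without echoing the typed characters to the terminal.
secret = getpass.getpass("Enter passphrase: ")
print("received", len(secret), "characters")  # never print the secret itself

# POSIX-style check: may the current user execute this file directly?
script = "/usr/local/bin/deploy.sh"  # hypothetical path used for the example
if os.access(script, os.X_OK):
    print(script, "is executable by the current user")
else:
    print(script, "lacks execute permission (or does not exist)")
```

As with the shells discussed above, this check only matters when the file is to be run directly; passing it as an argument to an interpreter bypasses the execute bit entirely.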
Restricted shell subset Several shells can be started or be configured to start in a mode where only a limited set of commands and actions is available to the user. While not a security boundary (the command accessing a resource is blocked rather than the resource), this is nevertheless typically used to restrict users' actions before logging in. A restricted mode is part of the POSIX specification for shells, and most of the Linux/Unix shells support such a mode, in which several of the built-in commands are disabled and only external commands from a certain directory can be invoked. PowerShell supports restricted modes through session configuration files or session configurations. A session configuration file can define visible (available) cmdlets, aliases, functions, path providers and more. Safe data subset Scripts that invoke other scripts can be a security risk, as they can potentially execute foreign code in the context of the user who launched the initial script. Scripts will usually be designed to include scripts exclusively from known safe locations; but in some instances, e.g. when offering the user a way to configure the environment or when loading localized messages, the script may need to include other scripts/files. One way to address this risk is for the shell to offer a safe subset of commands which can be executed by an included script. PowerShell data sections can contain constants and expressions using a restricted subset of operators and commands. PowerShell data sections are used when, e.g., localized strings need to be read from an external source while protecting against unwanted side effects.
Comparison of command shells
[ "Technology" ]
3,025
[ "Software comparisons", "Computing comparisons" ]
1,515,417
https://en.wikipedia.org/wiki/Desipramine
Desipramine, sold under the brand name Norpramin among others, is a tricyclic antidepressant (TCA) used in the treatment of depression. It acts as a relatively selective norepinephrine reuptake inhibitor, though it does also have other activities such as weak serotonin reuptake inhibitory, α1-blocking, antihistamine, and anticholinergic effects. The drug is not considered a first-line treatment for depression since the introduction of selective serotonin reuptake inhibitor (SSRI) antidepressants, which have fewer side effects and are safer in overdose. Medical uses Desipramine is primarily used for the treatment of depression. It may also be useful to treat symptoms of attention-deficit hyperactivity disorder (ADHD). Evidence of benefit is only in the short term, and with concerns of side effects its overall usefulness is not clear. Desipramine at very low doses is also used to help reduce the pain associated with functional dyspepsia. It has also been tried, albeit with little evidence of effectiveness, in the treatment of cocaine dependence. Evidence for usefulness in neuropathic pain is also poor. Side effects Desipramine tends to be less sedating than other TCAs and tends to produce fewer anticholinergic effects such as dry mouth, constipation, urinary retention, blurred vision, and cognitive or memory impairments. Overdose Desipramine is particularly toxic in cases of overdose, compared to other antidepressants. Any overdose or suspected overdose of desipramine is considered to be a medical emergency and can result in death without prompt medical intervention. Pharmacology Pharmacodynamics Desipramine is a very potent and relatively selective norepinephrine reuptake inhibitor (NRI), which is thought to enhance noradrenergic neurotransmission. Based on one study, it has the highest affinity for the norepinephrine transporter (NET) of any other TCA, and is said to be the most noradrenergic and the most selective for the NET of the TCAs. The observed effectiveness of desipramine in the treatment of ADHD was the basis for the development of the selective NRI atomoxetine and its use in ADHD. Desipramine has the weakest antihistamine and anticholinergic effects of the TCAs. It tends to be slightly activating/stimulating rather than sedating, unlike most others TCAs. Whereas other TCAs are useful for treating insomnia, desipramine can cause insomnia as a side effect due to its activating properties. The drug is also not associated with weight gain, in contrast to many other TCAs. Secondary amine TCAs like desipramine and nortriptyline have a lower risk of orthostatic hypotension than other TCAs, although desipramine can still cause moderate orthostatic hypotension. Pharmacokinetics Desipramine is the major metabolite of imipramine and lofepramine. Chemistry Desipramine is a tricyclic compound, specifically a dibenzazepine, and possesses three rings fused together with a side chain attached in its chemical structure. Other dibenzazepine TCAs include imipramine (N-methyldesipramine), clomipramine, trimipramine, and lofepramine (N-(4-chlorobenzoylmethyl)desipramine). Desipramine is a secondary amine TCA, with its N-methylated parent imipramine being a tertiary amine. Other secondary amine TCAs include nortriptyline and protriptyline. The chemical name of desipramine is 3-(10,11-dihydro-5H-dibenzo[b,f]azepin-5-yl)-N-methylpropan-1-amine and its free base form has a chemical formula of C18H22N2 with a molecular weight of 266.381 g/mol. 
The drug is used commercially mostly as the hydrochloride salt; the dibudinate salt is or has been used for intramuscular injection in Argentina (brand name Nebril) and the free base form is not used. The CAS Registry Number of the free base is 50-47-5, of the hydrochloride is 58-28-6, and of the dibudinate is 62265-06-9. History Desipramine was developed by Geigy. It first appeared in the literature in 1959 and was patented in 1962. The drug was first introduced for the treatment of depression in 1963 or 1964. Society and culture Generic names Desipramine is the generic name of the drug and its and , while desipramine hydrochloride is its , , , and . Its generic name in French and its are désipramine, in Spanish and Italian and its are desipramina, in German is desipramin, and in Latin is desipraminum. Brand names Desipramine is or has been marketed throughout the world under a variety of brand names, including Irene, Nebril, Norpramin, Pertofran, Pertofrane, Pertrofran, and Petylyl among others. References External links Desipramine - MedlinePlus Alpha-1 blockers Antihistamines CYP2D6 inhibitors Dibenzazepines Human drug metabolites M1 receptor antagonists M2 receptor antagonists M3 receptor antagonists M4 receptor antagonists M5 receptor antagonists Norepinephrine reuptake inhibitors Secondary amines Serotonin receptor antagonists Sodium channel blockers Stimulants Tricyclic antidepressants Wakefulness-promoting agents
Desipramine
[ "Chemistry" ]
1,250
[ "Chemicals in medicine", "Human drug metabolites" ]
1,515,472
https://en.wikipedia.org/wiki/Stokes%20parameters
The Stokes parameters are a set of values that describe the polarization state of electromagnetic radiation. They were defined by George Gabriel Stokes in 1851, as a mathematically convenient alternative to the more common description of incoherent or partially polarized radiation in terms of its total intensity (I), (fractional) degree of polarization (p), and the shape parameters of the polarization ellipse. The effect of an optical system on the polarization of light can be determined by constructing the Stokes vector for the input light and applying Mueller calculus, to obtain the Stokes vector of the light leaving the system. They can be determined from directly observable phenomena. The original Stokes paper was discovered independently by Francis Perrin in 1942 and by Subrahmanyan Chandrasekhar in 1947, who named them the Stokes parameters. Definitions The relationship of the Stokes parameters S0, S1, S2, S3 to intensity and polarization ellipse parameters is shown in the equations below and the figure on the right. Here , and are the spherical coordinates of the three-dimensional vector of cartesian coordinates . is the total intensity of the beam, and is the degree of polarization, constrained by . The factor of two before represents the fact that any polarization ellipse is indistinguishable from one rotated by 180°, while the factor of two before indicates that an ellipse is indistinguishable from one with the semi-axis lengths swapped accompanied by a 90° rotation. The phase information of the polarized light is not recorded in the Stokes parameters. The four Stokes parameters are sometimes denoted I, Q, U and V, respectively. Given the Stokes parameters, one can solve for the spherical coordinates with the following equations: Stokes vectors The Stokes parameters are often combined into a vector, known as the Stokes vector: The Stokes vector spans the space of unpolarized, partially polarized, and fully polarized light. For comparison, the Jones vector only spans the space of fully polarized light, but is more useful for problems involving coherent light. The four Stokes parameters are not a preferred coordinate system of the space, but rather were chosen because they can be easily measured or calculated. Note that there is an ambiguous sign for the component depending on the physical convention used. In practice, two separate conventions are used, defining the Stokes parameters either when looking down the beam towards the source (opposite the direction of light propagation) or when looking down the beam away from the source (coincident with the direction of light propagation). These two conventions result in different signs for , and a convention must be chosen and adhered to. Examples Common states of polarization of light for which the Stokes vector takes a simple form include: linearly polarized (horizontal), linearly polarized (vertical), linearly polarized (+45°), linearly polarized (−45°), right-hand circularly polarized, left-hand circularly polarized, and unpolarized light. Alternative explanation A monochromatic plane wave is specified by its propagation vector, , and the complex amplitudes of the electric field, and , in a basis . The pair is called a Jones vector. Alternatively, one may specify the propagation vector, the phase, , and the polarization state, , where is the curve traced out by the electric field as a function of time in a fixed plane.
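In terms of the total intensity I, the degree of polarization p, and the ellipse angles ψ and χ introduced under Definitions above, the Stokes parameters take the standard form found in common optics references (a reference restatement using only the symbols already defined above):

```latex
S_0 = I, \qquad
S_1 = I p \cos 2\psi \cos 2\chi, \qquad
S_2 = I p \sin 2\psi \cos 2\chi, \qquad
S_3 = I p \sin 2\chi
```

and, inverting,

```latex
I = S_0, \qquad
p = \frac{\sqrt{S_1^2 + S_2^2 + S_3^2}}{S_0}, \qquad
2\psi = \arctan\frac{S_2}{S_1}, \qquad
2\chi = \arctan\frac{S_3}{\sqrt{S_1^2 + S_2^2}}
```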
The most familiar polarization states are linear and circular, which are degenerate cases of the most general state, an ellipse. One way to describe polarization is by giving the semi-major and semi-minor axes of the polarization ellipse, its orientation, and the direction of rotation (See the above figure). The Stokes parameters , , , and , provide an alternative description of the polarization state which is experimentally convenient because each parameter corresponds to a sum or difference of measurable intensities. The next figure shows examples of the Stokes parameters in degenerate states. Definitions The Stokes parameters are defined by where the subscripts refer to three different bases of the space of Jones vectors: the standard Cartesian basis (), a Cartesian basis rotated by 45° (), and a circular basis (). The circular basis is defined so that , . The symbols ⟨⋅⟩ represent expectation values. The light can be viewed as a random variable taking values in the space C2 of Jones vectors . Any given measurement yields a specific wave (with a specific phase, polarization ellipse, and magnitude), but it keeps flickering and wobbling between different outcomes. The expectation values are various averages of these outcomes. Intense, but unpolarized light will have I > 0 but Q = U = V = 0, reflecting that no polarization type predominates. A convincing waveform is depicted at the article on coherence. The opposite would be perfectly polarized light which, in addition, has a fixed, nonvarying amplitude—a pure sine curve. This is represented by a random variable with only a single possible value, say . In this case one may replace the brackets by absolute value bars, obtaining a well-defined quadratic map from the Jones vectors to the corresponding Stokes vectors; more convenient forms are given below. The map takes its image in the cone defined by |I |2 = |Q |2 + |U |2 + |V |2, where the purity of the state satisfies p = 1 (see below). The next figure shows how the signs of the Stokes parameters are determined by the helicity and the orientation of the semi-major axis of the polarization ellipse. Representations in fixed bases In a fixed () basis, the Stokes parameters when using an increasing phase convention are while for , they are and for , they are Properties For purely monochromatic coherent radiation, it follows from the above equations that whereas for the whole (non-coherent) beam radiation, the Stokes parameters are defined as averaged quantities, and the previous equation becomes an inequality: However, we can define a total polarization intensity , so that where is the total polarization fraction. Let us define the complex intensity of linear polarization to be Under a rotation of the polarization ellipse, it can be shown that and are invariant, but With these properties, the Stokes parameters may be thought of as constituting three generalized intensities: where is the total intensity, is the intensity of circular polarization, and is the intensity of linear polarization. The total intensity of polarization is , and the orientation and sense of rotation are given by Since and , we have Relation to the polarization ellipse In terms of the parameters of the polarization ellipse, the Stokes parameters are Inverting the previous equation gives Measurement The Stokes parameters (and thus the polarization of some electromagnetic radiation) can be directly determined from observation. 
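For fully polarized light, the quadratic map from a Jones vector to the corresponding Stokes vector described above can be written down directly. The short Python/NumPy sketch below is illustrative only; the function name is our own, and the sign chosen for S3 is one common choice whose physical meaning (right- versus left-handed) depends on the convention adopted, as noted earlier.

```python
import numpy as np

def jones_to_stokes(Ex: complex, Ey: complex) -> np.ndarray:
    """Map a fully polarized Jones vector (Ex, Ey) to a Stokes vector (S0, S1, S2, S3).

    Uses S1 = |Ex|^2 - |Ey|^2, S2 = 2 Re(Ex Ey*), S3 = -2 Im(Ex Ey*);
    the sign of S3 is convention-dependent.
    """
    s0 = abs(Ex) ** 2 + abs(Ey) ** 2
    s1 = abs(Ex) ** 2 - abs(Ey) ** 2
    s2 = 2.0 * (Ex * np.conj(Ey)).real
    s3 = -2.0 * (Ex * np.conj(Ey)).imag
    return np.array([s0, s1, s2, s3])

print(jones_to_stokes(1.0, 0.0))                         # horizontal: [1, 1, 0, 0]
print(jones_to_stokes(1 / np.sqrt(2), 1 / np.sqrt(2)))   # +45 degrees: [1, 0, 1, 0]
print(jones_to_stokes(1 / np.sqrt(2), 1j / np.sqrt(2)))  # circular: [1, 0, 0, 1] with this sign choice; handedness depends on convention
```

Because the map is quadratic, the overall phase of the Jones vector drops out, which is exactly the phase information that, as stated above, is not recorded in the Stokes parameters.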
Using a linear polarizer and a quarter-wave plate, the following system of equations relating the Stokes parameters to measured intensity can be obtained: where is the irradiance of the radiation at a point when the linear polarizer is rotated at an angle of , and similarly is the irradiance at a point when the quarter-wave plate is rotated at an angle of . A system can be implemented using both plates at once at different angles to measure the parameters. This can give a more accurate measure of the relative magnitudes of the parameters (which is often the main result desired) due to all parameters being affected by the same losses. Relationship to Hermitian operators and quantum mixed states From a geometric and algebraic point of view, the Stokes parameters stand in one-to-one correspondence with the closed, convex, 4-real-dimensional cone of nonnegative Hermitian operators on the Hilbert space C2. The parameter I serves as the trace of the operator, whereas the entries of the matrix of the operator are simple linear functions of the four parameters I, Q, U, V, serving as coefficients in a linear combination of the Stokes operators. The eigenvalues and eigenvectors of the operator can be calculated from the polarization ellipse parameters I, p, ψ, χ. The Stokes parameters with I set equal to 1 (i.e. the trace 1 operators) are in one-to-one correspondence with the closed unit 3-dimensional ball of mixed states (or density operators) of the quantum space C2, whose boundary is the Bloch sphere. The Jones vectors correspond to the underlying space C2, that is, the (unnormalized) pure states of the same system. Note that the overall phase (i.e. the common phase factor between the two component waves on the two perpendicular polarization axes) is lost when passing from a pure state |φ⟩ to the corresponding mixed state |φ⟩⟨φ|, just as it is lost when passing from a Jones vector to the corresponding Stokes vector. In the basis of horizontal polarization state and vertical polarization state , the +45° linear polarization state is , the -45° linear polarization state is , the left hand circular polarization state is , and the right hand circular polarization state is . It's easy to see that these states are the eigenvectors of Pauli matrices, and that the normalized Stokes parameters (U/I, V/I, Q/I) correspond to the coordinates of the Bloch vector (, , ). Equivalently, we have , , , where is the density matrix of the mixed state. Generally, a linear polarization at angle θ has a pure quantum state ; therefore, the transmittance of a linear polarizer/analyzer at angle θ for a mixed state light source with density matrix is , with a maximum transmittance of at if , or at if ; the minimum transmittance of is reached at the perpendicular to the maximum transmittance direction. Here, the ratio of maximum transmittance to minimum transmittance is defined as the extinction ratio , where the degree of linear polarization is . Equivalently, the formula for the transmittance can be rewritten as , which is an extended form of Malus's law; here, are both non-negative, and is related to the extinction ratio by . Two of the normalized Stokes parameters can also be calculated by . It's also worth noting that a rotation of polarization axis by angle θ corresponds to the Bloch sphere rotation operator . For example, the horizontal polarization state would rotate to . 
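As an illustration of the extended form of Malus's law discussed above, the following sketch (the function name and example numbers are purely illustrative) computes the fraction of intensity passed by an ideal linear polarizer directly from a Stokes vector, using the first row of the ideal polarizer's Mueller matrix:

```python
import numpy as np

def polarizer_transmittance(stokes, theta):
    """Transmittance of an ideal linear polarizer at angle theta (radians).

    From the first row of the ideal linear polarizer's Mueller matrix:
    I_out = (I + Q cos 2*theta + U sin 2*theta) / 2.
    """
    I, Q, U, V = stokes
    return 0.5 * (I + Q * np.cos(2 * theta) + U * np.sin(2 * theta)) / I

# Partially polarized light with degree of linear polarization 0.5
stokes = (1.0, 0.5, 0.0, 0.0)
for theta in np.linspace(0.0, np.pi / 2, 3):
    print(f"theta = {np.degrees(theta):5.1f} deg  T = {polarizer_transmittance(stokes, theta):.2f}")
# theta =   0.0 deg  T = 0.75  (maximum)
# theta =  45.0 deg  T = 0.50
# theta =  90.0 deg  T = 0.25  (minimum), giving an extinction ratio of 3
```

With a degree of linear polarization of 0.5 the extinction ratio comes out as (1 + 0.5)/(1 − 0.5) = 3, consistent with the relation between extinction ratio and degree of linear polarization given above.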
The effect of a quarter-wave plate aligned to the horizontal axis is described by , or equivalently the Phase gate S, and the resulting Bloch vector becomes . With this configuration, if we perform the rotating analyzer method to measure the extinction ratio, we will be able to calculate and also verify . For this method to work, the fast axis and the slow axis of the waveplate must be aligned with the reference directions for the basis states. The effect of a quarter-wave plate rotated by angle θ can be determined by Rodrigues' rotation formula as , with . The transmittance of the resulting light through a linear polarizer (analyzer plate) along the horizontal axis can be calculated using the same Rodrigues' rotation formula and focusing on its components on and : The above expression is the theoretical basis of many polarimeters. For unpolarized light, T=1/2 is a constant. For purely circularly polarized light, T has a sinusoidal dependence on angle θ with a period of 180 degrees, and can reach absolute extinction where T=0. For purely linearly polarized light, T has a sinusoidal dependence on angle θ with a period of 90 degrees, and absolute extinction is only reachable when the original light's polarization is at 90 degrees from the polarizer (i.e. ). In this configuration, and , with a maximum of 1/2 at θ=45°, and an extinction point at θ=0°. This result can be used to precisely determine the fast or slow axis of a quarter-wave plate, for example, by using a polarizing beam splitter to obtain linearly polarized light aligned to the analyzer plate and rotating the quarter-wave plate in between. Similarly, the effect of a half-wave plate rotated by angle θ is described by , which transforms the density matrix to: The above expression demonstrates that if the original light is of pure linear polarization (i.e. ), the resulting light after the half-wave plate is still of pure linear polarization (i.e. without component) with a rotated major axis. Such rotation of the linear polarization has a sinusoidal dependence on angle θ with a period of 90 degrees. See also Mueller calculus Jones calculus Polarization (waves) Rayleigh Sky Model Stokes operators Polarization mixing Notes References Jackson, J. D., Classical Electrodynamics, John Wiley & Sons, 1999. Stone, J. M., Radiation and Optics, McGraw-Hill, 1963. Collett, E., Field Guide to Polarization, SPIE Field Guides vol. FG05, SPIE, 2005. Hecht, E., Optics, 2nd ed., Addison-Wesley, 1987. Polarization (waves) Radiometry
Stokes parameters
[ "Physics", "Engineering" ]
2,743
[ "Telecommunications engineering", "Polarization (waves)", "Astrophysics", "Radiometry" ]
1,515,653
https://en.wikipedia.org/wiki/Satellite%20navigation
A satellite navigation or satnav system is a system that uses satellites to provide autonomous geopositioning. A satellite navigation system with global coverage is termed global navigation satellite system (GNSS). , four global systems are operational: the United States's Global Positioning System (GPS), Russia's Global Navigation Satellite System (GLONASS), China's BeiDou Navigation Satellite System (BDS), and the European Union's Galileo. Satellite-based augmentation systems (SBAS), designed to enhance the accuracy of GNSS, include Japan's Quasi-Zenith Satellite System (QZSS), India's GAGAN and the European EGNOS, all of them based on GPS. Previous iterations of the BeiDou navigation system and the present Indian Regional Navigation Satellite System (IRNSS), operationally known as NavIC, are examples of stand-alone operating regional navigation satellite systems (RNSS). Satellite navigation devices determine their location (longitude, latitude, and altitude/elevation) to high precision (within a few centimeters to meters) using time signals transmitted along a line of sight by radio from satellites. The system can be used for providing position, navigation or for tracking the position of something fitted with a receiver (satellite tracking). The signals also allow the electronic receiver to calculate the current local time to a high precision, which allows time synchronisation. These uses are collectively known as Positioning, Navigation and Timing (PNT). Satnav systems operate independently of any telephonic or internet reception, though these technologies can enhance the usefulness of the positioning information generated. Global coverage for each system is generally achieved by a satellite constellation of 18–30 medium Earth orbit (MEO) satellites spread between several orbital planes. The actual systems vary, but all use orbital inclinations of >50° and orbital periods of roughly twelve hours (at an altitude of about ). Classification GNSS systems that provide enhanced accuracy and integrity monitoring usable for civil navigation are classified as follows: is the first generation system and is the combination of existing satellite navigation systems (GPS and GLONASS), with Satellite Based Augmentation Systems (SBAS) or Ground Based Augmentation Systems (GBAS). In the United States, the satellite-based component is the Wide Area Augmentation System (WAAS); in Europe, it is the European Geostationary Navigation Overlay Service (EGNOS); in Japan, it is the Multi-Functional Satellite Augmentation System (MSAS); and in India, it is the GPS-aided GEO augmented navigation (GAGAN). Ground-based augmentation is provided by systems like the Local Area Augmentation System (LAAS). is the second generation of systems that independently provide a full civilian satellite navigation system, exemplified by the European Galileo positioning system. These systems will provide the accuracy and integrity monitoring necessary for civil navigation; including aircraft. Initially, this system consisted of only Upper L Band frequency sets (L1 for GPS, E1 for Galileo, and G1 for GLONASS). In recent years, GNSS systems have begun activating Lower L Band frequency sets (L2 and L5 for GPS, E5a and E5b for Galileo, and G3 for GLONASS) for civilian use; they feature higher aggregate accuracy and fewer problems with signal reflection. As of late 2018, a few consumer-grade GNSS devices are being sold that leverage both. They are typically called "Dual band GNSS" or "Dual band GPS" devices. 
By their roles in the navigation system, systems can be classified as: There are four global satellite navigation systems, currently GPS (United States), GLONASS (Russian Federation), Beidou (China) and Galileo (European Union). Global Satellite-Based Augmentation Systems (SBAS) such as OmniSTAR and StarFire. Regional SBAS including WAAS (US), EGNOS (EU), MSAS (Japan), GAGAN (India) and SDCM (Russia). Regional Satellite Navigation Systems such as India's NAVIC, and Japan's QZSS. Continental scale Ground Based Augmentation Systems (GBAS) for example the Australian GRAS and the joint US Coast Guard, Canadian Coast Guard, US Army Corps of Engineers and US Department of Transportation National Differential GPS (DGPS) service. Regional scale GBAS such as CORS networks. Local GBAS typified by a single GPS reference station operating Real Time Kinematic (RTK) corrections. As many of the global GNSS systems (and augmentation systems) use similar frequencies and signals around L1, many "Multi-GNSS" receivers capable of using multiple systems have been produced. While some systems strive to interoperate with GPS as well as possible by providing the same clock, others do not. History Ground-based radio navigation is decades old. The DECCA, LORAN, GEE and Omega systems used terrestrial longwave radio transmitters which broadcast a radio pulse from a known "master" location, followed by a pulse repeated from a number of "slave" stations. The delay between the reception of the master signal and the slave signals allowed the receiver to deduce the distance to each of the slaves, providing a fix. The first satellite navigation system was Transit, a system deployed by the US military in the 1960s. Transit's operation was based on the Doppler effect: the satellites travelled on well-known paths and broadcast their signals on a well-known radio frequency. The received frequency will differ slightly from the broadcast frequency because of the movement of the satellite with respect to the receiver. By monitoring this frequency shift over a short time interval, the receiver can determine its location to one side or the other of the satellite, and several such measurements combined with a precise knowledge of the satellite's orbit can fix a particular position. Satellite orbital position errors are caused by radio-wave refraction, gravity field changes (as the Earth's gravitational field is not uniform), and other phenomena. A team, led by Harold L Jury of Pan Am Aerospace Division in Florida from 1970 to 1973, found solutions and/or corrections for many error sources. Using real-time data and recursive estimation, the systematic and residual errors were narrowed down to accuracy sufficient for navigation. Principles Part of an orbiting satellite's broadcast includes its precise orbital data. Originally, the US Naval Observatory (USNO) continuously observed the precise orbits of these satellites. As a satellite's orbit deviated, the USNO sent the updated information to the satellite. Subsequent broadcasts from an updated satellite would contain its most recent ephemeris. Modern systems are more direct. The satellite broadcasts a signal that contains orbital data (from which the position of the satellite can be calculated) and the precise time the signal was transmitted. Orbital data include a rough almanac for all satellites to aid in finding them, and a precise ephemeris for this satellite. The orbital ephemeris is transmitted in a data message that is superimposed on a code that serves as a timing reference. 
The satellite uses an atomic clock to maintain synchronization of all the satellites in the constellation. The receiver compares the time of broadcast encoded in the transmission of three (at sea level) or four (which allows an altitude calculation also) different satellites, measuring the time-of-flight to each satellite. Several such measurements can be made at the same time to different satellites, allowing a continual fix to be generated in real time using an adapted version of trilateration: see GNSS positioning calculation for details. Each distance measurement, regardless of the system being used, places the receiver on a spherical shell centred on the broadcaster, at the measured distance from the broadcaster. By taking several such measurements and then looking for a point where the shells meet, a fix is generated. However, in the case of fast-moving receivers, the position of the receiver moves as signals are received from several satellites. In addition, the radio signals slow slightly as they pass through the ionosphere, and this slowing varies with the receiver's angle to the satellite, because that angle corresponds to the distance which the signal travels through the ionosphere. The basic computation thus attempts to find the shortest directed line tangent to four oblate spherical shells centred on four satellites. Satellite navigation receivers reduce errors by using combinations of signals from multiple satellites and multiple correlators, and then using techniques such as Kalman filtering to combine the noisy, partial, and constantly changing data into a single estimate for position, time, and velocity. Einstein's theory of general relativity is applied to GPS time correction, the net result is that time on a GPS satellite clock advances faster than a clock on the ground by about 38 microseconds per day. Applications The original motivation for satellite navigation was for military applications. Satellite navigation allows precision in the delivery of weapons to targets, greatly increasing their lethality whilst reducing inadvertent casualties from mis-directed weapons. (See Guided bomb). Satellite navigation also allows forces to be directed and to locate themselves more easily, reducing the fog of war. Now a global navigation satellite system, such as Galileo, is used to determine users location and the location of other people or objects at any given moment. The range of application of satellite navigation in the future is enormous, including both the public and private sectors across numerous market segments such as science, transport, agriculture, insurance, energy, etc. The ability to supply satellite navigation signals is also the ability to deny their availability. The operator of a satellite navigation system potentially has the ability to degrade or eliminate satellite navigation services over any territory it desires. Global navigation satellite systems In order of first launch year: GPS First launch year: 1978 The United States' Global Positioning System (GPS) consists of up to 32 medium Earth orbit satellites in six different orbital planes. The exact number of satellites varies as older satellites are retired and replaced. Operational since 1978 and globally available since 1994, GPS is the world's most utilized satellite navigation system. 
GLONASS First launch year: 1982 The formerly Soviet, and now Russian, Global'naya Navigatsionnaya Sputnikovaya Sistema, (GLObal NAvigation Satellite System or GLONASS), is a space-based satellite navigation system that provides a civilian radionavigation-satellite service and is also used by the Russian Aerospace Defence Forces. GLONASS has full global coverage since 1995 and with 24 active satellites. BeiDou First launch year: 2000 BeiDou started as the now-decommissioned Beidou-1, an Asia-Pacific local network on the geostationary orbits. The second generation of the system BeiDou-2 became operational in China in December 2011. The BeiDou-3 system is proposed to consist of 30 MEO satellites and five geostationary satellites (IGSO). A 16-satellite regional version (covering Asia and Pacific area) was completed by December 2012. Global service was completed by December 2018. On 23 June 2020, the BDS-3 constellation deployment is fully completed after the last satellite was successfully launched at the Xichang Satellite Launch Center. Galileo First launch year: 2011 The European Union and European Space Agency agreed in March 2002 to introduce their own alternative to GPS, called the Galileo positioning system. Galileo became operational on 15 December 2016 (global Early Operational Capability, EOC). At an estimated cost of €10 billion, the system of 30 MEO satellites was originally scheduled to be operational in 2010. The original year to become operational was 2014. The first experimental satellite was launched on 28 December 2005. Galileo is expected to be compatible with the modernized GPS system. The receivers will be able to combine the signals from both Galileo and GPS satellites to greatly increase the accuracy. The full Galileo constellation consists of 24 active satellites, the last of which was launched in December 2021. The main modulation used in Galileo Open Service signal is the Composite Binary Offset Carrier (CBOC) modulation. Regional navigation satellite systems NavIC The NavIC (acronym for Navigation with Indian Constellation) is an autonomous regional satellite navigation system developed by the Indian Space Research Organisation (ISRO). The Indian government approved the project in May 2006. It consists of a constellation of 7 navigational satellites. Three of the satellites are placed in geostationary orbit (GEO) and the remaining 4 in geosynchronous orbit (GSO) to have a larger signal footprint and lower number of satellites to map the region. It is intended to provide an all-weather absolute position accuracy of better than throughout India and within a region extending approximately around it. An Extended Service Area lies between the primary service area and a rectangle area enclosed by the 30th parallel south to the 50th parallel north and the 30th meridian east to the 130th meridian east, 1,500–6,000 km beyond borders. A goal of complete Indian control has been stated, with the space segment, ground segment and user receivers all being built in India. The constellation was in orbit as of 2018, and the system was available for public use in early 2018. NavIC provides two levels of service, the "standard positioning service", which will be open for civilian use, and a "restricted service" (an encrypted one) for authorized users (including military). There are plans to expand NavIC system by increasing constellation size from 7 to 11. India plans to make the NavIC global by adding 24 more MEO satellites. The Global NavIC will be free to use for the global public. 
Early BeiDou The first two generations of China's BeiDou navigation system were designed to provide regional coverage. Augmentation GNSS augmentation is a method of improving a navigation system's attributes, such as accuracy, reliability, and availability, through the integration of external information into the calculation process, for example, the Wide Area Augmentation System, the European Geostationary Navigation Overlay Service, the Multi-functional Satellite Augmentation System, Differential GPS, GPS-aided GEO augmented navigation (GAGAN) and inertial navigation systems. QZSS The Quasi-Zenith Satellite System (QZSS) is a four-satellite regional time transfer system and enhancement for GPS covering Japan and the Asia-Oceania regions. QZSS services were available on a trial basis as of January 12, 2018, and were started in November 2018. The first satellite was launched in September 2010. An independent satellite navigation system (from GPS) with 7 satellites is planned for 2023. EGNOS Comparison of systems Using multiple GNSS systems for user positioning increases the number of visible satellites, improves precise point positioning (PPP) and shortens the average convergence time. The signal-in-space ranging error (SISRE) in November 2019 were 1.6 cm for Galileo, 2.3 cm for GPS, 5.2 cm for GLONASS and 5.5 cm for BeiDou when using real-time corrections for satellite orbits and clocks. The average SISREs of the BDS-3 MEO, IGSO, and GEO satellites were 0.52 m, 0.90 m and 1.15 m, respectively. Compared to the four major global satellite navigation systems consisting of MEO satellites, the SISRE of the BDS-3 MEO satellites was slightly inferior to 0.4 m of Galileo, slightly superior to 0.59 m of GPS, and remarkably superior to 2.33 m of GLONASS. The SISRE of BDS-3 IGSO was 0.90 m, which was on par with the 0.92 m of QZSS IGSO. However, as the BDS-3 GEO satellites were newly launched and not completely functioning in orbit, their average SISRE was marginally worse than the 0.91 m of the QZSS GEO satellites. Related techniques DORIS Doppler Orbitography and Radio-positioning Integrated by Satellite (DORIS) is a French precision navigation system. Unlike other GNSS systems, it is based on static emitting stations around the world, the receivers being on satellites, in order to precisely determine their orbital position. The system may be used also for mobile receivers on land with more limited usage and coverage. Used with traditional GNSS systems, it pushes the accuracy of positions to centimetric precision (and to millimetric precision for altimetric application and also allows monitoring very tiny seasonal changes of Earth rotation and deformations), in order to build a much more precise geodesic reference system. LEO satellites The two current operational low Earth orbit (LEO) satellite phone networks are able to track transceiver units with accuracy of a few kilometres using doppler shift calculations from the satellite. The coordinates are sent back to the transceiver unit where they can be read using AT commands or a graphical user interface. This can also be used by the gateway to enforce restrictions on geographically bound calling plans. International regulation The International Telecommunication Union (ITU) defines a radionavigation-satellite service (RNSS) as "a radiodetermination-satellite service used for the purpose of radionavigation. This service may also include feeder links necessary for its operation". 
RNSS is regarded as a safety-of-life service and an essential part of navigation which must be protected from interferences. Aeronautical radionavigation-satellite (ARNSS) is – according to Article 1.47 of the International Telecommunication Union's (ITU) Radio Regulations (RR) – defined as «A radionavigation service in which earth stations are located on board aircraft.» Maritime radionavigation-satellite service (MRNSS) is – according to Article 1.45 of the International Telecommunication Union's (ITU) Radio Regulations (RR) – defined as «A radionavigation-satellite service in which earth stations are located on board ships.» Classification ITU Radio Regulations (article 1) classifies radiocommunication services as: Radiodetermination service (article 1.40) Radiodetermination-satellite service (article 1.41) Radionavigation service (article 1.42) Radionavigation-satellite service (article 1.43) Maritime radionavigation service (article 1.44) Maritime radionavigation-satellite service (article 1.45) Aeronautical radionavigation service (article 1.46) Aeronautical radionavigation-satellite service (article 1.47) Examples of RNSS use Augmentation system GNSS augmentation Automatic Dependent Surveillance–Broadcast BeiDou Navigation Satellite System (BDS) GALILEO, European GNSS Global Positioning System (GPS), with Differential GPS (DGPS) GLONASS NAVIC Quasi-Zenith Satellite System (QZSS) Frequency allocation The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (edition 2012). To improve harmonisation in spectrum utilisation, most service allocations are incorporated in national Tables of Frequency Allocations and Utilisations within the responsibility of the appropriate national administration. Allocations are: primary: indicated by writing in capital letters secondary: indicated by small letters exclusive or shared utilization: within the responsibility of administrations. See also Acronyms and abbreviations in avionics Geoinformatics GNSS positioning calculation GNSS reflectometry GPS spoofing GPS-aided geo-augmented navigation List of emerging technologies Moving map display Pseudolite Receiver Autonomous Integrity Monitoring Software GNSS Receiver Space Integrated GPS/INS (SIGI) United Kingdom Global Navigation Satellite System UNSW School of Surveying and Geospatial Engineering Notes References Further reading Office for Outer Space Affairs of the United Nations (2010), Report on Current and Planned Global and Regional Navigation Satellite Systems and Satellite-based Augmentation Systems. 
External links Information on specific GNSS systems ESA information on EGNOS Information on the Beidou system Global Navigation Satellite System Fundamentals Organizations related to GNSS United Nations International Committee on Global Navigation Satellite Systems (ICG) Institute of Navigation (ION) GNSS Meetings The International GNSS Service (IGS) International Global Navigation Satellite Systems Society Inc (IGNSS) International Earth Rotation and Reference Systems Service (IERS) International GNSS Service (IGS) US National Executive Committee for Space-Based Positioning, Navigation, and Timing US National Geodetic Survey Orbits for the Global Positioning System satellites in the Global Navigation Satellite System UNAVCO GNSS Modernization Asia-Pacific Economic Cooperation (APEC) GNSS Implementation Team Supportive or illustrative sites GPS and GLONASS Simulation (Java applet) Simulation and graphical depiction of the motion of space vehicles, including DOP computation. GPS, GNSS, Geodesy and Navigation Concepts in depth American inventions Aircraft instruments Avionics Geodesy Maritime communication Navigational equipment
Satellite navigation
[ "Mathematics", "Technology", "Engineering" ]
4,275
[ "Applied mathematics", "Avionics", "Measuring instruments", "Aircraft instruments", "Geodesy" ]
1,515,708
https://en.wikipedia.org/wiki/Brown%20note
The brown note, also sometimes called the brown frequency or brown noise, is a hypothetical infrasonic frequency capable of causing fecal incontinence by creating acoustic resonance in the human bowel. Considered an urban myth, the name is a metonym for the common color of human faeces. Attempts to demonstrate the existence of a "brown note" using sound waves transmitted through the air have failed. Frequencies supposedly involved are between 5 and 9 Hz, which are below the lower frequency limit of human hearing. High-power sound waves below 20 Hz are felt in the body. Physiological effects of low frequency vibration Air is a very inefficient medium for transferring low frequency vibration from a transducer to the human body. Mechanical connection of the vibration source to the human body, however, provides a potentially dangerous combination. The U.S. space program, worried about the harmful effects of rocket flight on astronauts, ordered vibration tests that used cockpit seats mounted on vibration tables to transfer "brown note" and other frequencies directly to the human subjects. Very high power levels of 160 dB were achieved at frequencies of 2–3 Hz. Test frequencies ranged from 0.5 Hz to 40 Hz. Test subjects suffered motor ataxia, nausea, visual disturbance, degraded task performance and difficulties in communication. These tests are assumed by researchers to be the nucleus of the current urban myth. Testing by Mythbusters In February 2005 the television show MythBusters attempted to verify whether the "brown note" was a reality. They used twelve Meyer Sound 700-HP subwoofers—a model and quantity that has been employed for major rock concerts. Normal operating frequency range of the selected subwoofer model was 28 Hz to 150 Hz but the 12 enclosures at MythBusters had been specially modified for deeper bass extension. Roger Schwenke and John Meyer directed the Meyer Sound team in devising a special test rig that would produce very high sound levels at infrasonic frequencies. The subwoofers' tuning ports were blocked and their input cards were altered. The modified cabinets were positioned in an open ring configuration, in four stacks, with each stack containing three subwoofers. Test signals were generated by a SIM 3 audio analyzer, with its software modified to produce infrasonic tones. A Brüel & Kjær sound level analyzer, fed with an attenuated signal from a model 4189 measurement microphone, displayed and recorded sound pressure levels. The hosts on the show tried a series of frequencies as low as 5 Hz, attaining a level of 120 decibels of sound pressure at 9 Hz and up to 153 dB at frequencies above 20 Hz, but the rumored physiological effects did not materialize. The test subjects all reported some physical anxiety and shortness of breath, even a small amount of nausea, but this was dismissed by the hosts, noting that sound at that frequency and intensity moves air rapidly in and out of one's lungs. The show declared the brown note myth "busted". 
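To put the quoted levels in physical terms, sound pressure level in decibels relates to pressure amplitude through SPL = 20·log10(p/p0), with reference pressure p0 = 20 µPa. A small illustrative Python calculation (not part of the original test report) converts the figures above:

```python
P_REF = 20e-6  # reference sound pressure, 20 micropascals

def pressure_from_spl(spl_db: float) -> float:
    """Convert sound pressure level in dB (re 20 uPa) to RMS pressure in pascals."""
    return P_REF * 10 ** (spl_db / 20)

for level in (120, 153):
    print(f"{level} dB SPL is about {pressure_from_spl(level):.0f} Pa")
# 120 dB SPL is about 20 Pa; 153 dB SPL is about 893 Pa (still under 1% of atmospheric pressure)
```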
See also Acoustic resonance Feraliminal Lycanthropizer – A fictional psychotechnographic machine The Mosquito – A commercial device that deters loitering by emitting sound with a very high frequency The Republic XF-84H – An experimental aircraft that produced enough noise to cause headaches, nausea and seizures among its ground crew Tesla's oscillator – A vibrating machine which is claimed to have the effect of a "mechanical laxative", causing subjects to run straight to the bathroom after use "World Wide Recorder Concert" – A South Park episode involving a fictional brown note References Sound Urban legends Ultrasound Fictional energy weapons Defecation
Brown note
[ "Biology" ]
745
[ "Excretion", "Defecation" ]
1,515,736
https://en.wikipedia.org/wiki/Eudiometer
A eudiometer is a laboratory device that measures the change in volume of a gas mixture following a physical or chemical change. Description Depending on the reaction being measured, the device can take a variety of forms. In general, it is similar to a graduated cylinder, and is most commonly found in two sizes: 50 mL and 100 mL. It is closed at the top end with the bottom end immersed in water or mercury. The liquid traps a sample of gas in the cylinder, and the graduation allows the volume of the gas to be measured. For some reactions, two platinum wires (chosen for their non-reactivity) are placed in the sealed end so an electric spark can be created between them. The electric spark can initiate a reaction in the gas mixture and the graduation on the cylinder can be read to determine the change in volume resulting from the reaction. The use of the device is quite similar to the original barometer, except that the gas inside displaces some of the liquid that is used. History In 1772, Joseph Priestley began experimenting with different "airs" using his own redesigned pneumatic trough in which mercury instead of water would trap gases that were usually soluble in water. From these experiments Priestley is credited with discovering many new gases such as oxygen, hydrogen chloride, and ammonia. He also discovered a way to find the purity or "goodness" of air using "nitrous air test". The eudiometer functions on the greater solubility of NO2 in water over NO, and the oxidation reaction of NO into NO2 by air oxygen: 2 NO + O2 → 2 NO2. A quantity of air is combined with NO over water, and the more soluble compound NO2 dissolves, leaving the remaining air somewhat contracted in volume. The richer the air was in oxygen, the greater was the contraction. Marsilio Landriani was studying pneumatic chemistry with Pietro Moscati when they attempted to quantify Priestley's nitric acid test for air quality. Landriani used a pneumatic trough in the form of a tall, graduated cylinder over water. As it measured the salubrity of air, he called it a eudiometer An associate of Moscati's, Felice Fontana also designed a eudiometer on the same principles and quantified the salubrity of the air. The eudiometer with the nitrous air test was the way Jan Ingenhousz verified that the bubbles given off under water by plant leaves exposed to sunlight were oxygen bubbles. His description of photosynthesis was published in 1779, and in 1785 he wrote about eudiometers in Journal de Physique (v 26, p 339). According to a biographer, Ingenhousz indicated that "many instruments were called eudiometers although strictly speaking they didn't deserve the name ... misunderstandings could exist when not everybody was using the same instruments." An electrified version of the eudiometer was developed by Count Alessandro Volta (1745–1827), an Italian physicist who is well known for his contributions to the electric battery and electricity. Aside from its laboratory function, the eudiometer is also known for its part in the "Volta pistol". Volta invented this instrument in 1777 for the purpose of testing the "goodness" of air, analyzing the flammability of gases, or to demonstrate the chemical effects of electricity. Volta's Pistol had a long glass tube that was closed at the top, like a eudiometer. Two electrodes were fed through the tube and produced a spark gap inside the tube. Volta's initial use of this instrument concerned the study of swamp gases in particular. Volta's pistol was filled with oxygen and another gas. 
The homogeneous mixture was taped shut with a cork. A spark could be introduced into the gas chamber by electrodes, and possibly catalyze a reaction by static electricity, using Volta's electrophorus. If the gases were flammable, they would explode, and increase the pressure within the gas chamber. This pressure would be too great and eventually cause the cork to become airborne. Volta's pistol was made with either glass or brass, however due to the electricity the glass was vulnerable to exploding. Volta's extensive studies on measuring and creating high levels of electric currents caused the electrical unit, the volt, to be named after him. In 1785 Henry Cavendish used a eudiometer to determine the fraction of oxygen in the Earth's atmosphere. Etymology The name "eudiometer" comes from the Greek meaning clear or mild, which is the combination of the prefix meaning "good", and meaning "heavenly" or "of Zeus" (the god of the sky and atmosphere), with the suffix -meter meaning "measure". Because the eudiometer was originally used to measure the amount of oxygen in the air, which was thought to be greater in "nice" weather, the root appropriately describes the apparatus. Usage Applications of a eudiometer include the analysis of gases and the determination of volume differences in chemical reactions. The eudiometer is filled with water, inverted so that its open end is facing the ground (while holding the open end so that no water escapes), and then submersed in a basin of water. A chemical reaction is taking place through which gas is created. One reactant is typically at the bottom of the eudiometer (which flows downward when the eudiometer is inverted) and the other reactant is suspended on the rim of the eudiometer, typically by means of a platinum or copper wire (due to their low reactivity). When the gas created by the chemical reaction is released, it should rise into the eudiometer so that the experimenter may accurately read the volume of the gas produced at any given time. Normally a person would read the volume when the reaction is completed. This procedure is followed in many experiments, including an experiment in which one experimentally determines the Ideal gas law constant R. The eudiometer is similar in structure to the meteorological barometer. Similarly, a eudiometer uses water to release gas into the eudiometer tube, converting the gas into a visible, measurable amount. A correct measurement of the pressure when performing these experiments is crucial for the calculations involved in the PV=nRT equation, because the pressure could change the density of the gas. See also Dalton's law Distillation Ideal gas law Laboratory glassware References Further reading Magellan, J. H. De. (2007) Description of a Glass Apparatus for Making Mineral Waters- Like those of Pyrmot, Spa, Seltzer, Etc., In a Few Minutes, and With a Very Little Expense: Together With the Description Of Some New Eudiometers, Inman Press. Marcet, William (1888) "A New Form of Eudiometer", Proceedings of the Royal Society of London 44: 383-387. Osman, W. A. (1958) "Alessandro Volta and the inflammable air eudiometer", Annals of Science Vol 14, Number 4: 215-242 (28). Weekes, W. H. (1828) A Memoir On the Universal Portable Eudiometer: An Apparatus Designed With a View To Operative Convenience and Accuracy Of Result In the Researches Of Philosophical Chemistry, T. E. Stow publisher. Laboratory glassware Volumetric instruments Measuring instruments
Eudiometer
[ "Technology", "Engineering" ]
1,517
[ "Volumetric instruments", "Measuring instruments" ]
1,515,853
https://en.wikipedia.org/wiki/Food%20engineering
Food engineering is a scientific, academic, and professional field that interprets and applies principles of engineering, science, and mathematics to food manufacturing and operations, including the processing, production, handling, storage, conservation, control, packaging and distribution of food products. Given its reliance on food science and broader engineering disciplines such as electrical, mechanical, civil, chemical, industrial and agricultural engineering, food engineering is considered a multidisciplinary and narrow field. Due to the complex nature of food materials, food engineering also combines the study of more specific chemical and physical concepts such as biochemistry, microbiology, food chemistry, thermodynamics, transport phenomena, rheology, and heat transfer. Food engineers apply this knowledge to the cost-effective design, production, and commercialization of sustainable, safe, nutritious, healthy, appealing, affordable and high-quality ingredients and foods, as well as to the development of food systems, machinery, and instrumentation. History Although food engineering is a relatively recent and evolving field of study, it is based on long-established concepts and activities. The traditional focus of food engineering was preservation, which involved stabilizing and sterilizing foods, preventing spoilage, and preserving nutrients in food for prolonged periods of time. More specific traditional activities include food dehydration and concentration, protective packaging, canning and freeze-drying. The development of food technologies was greatly influenced and driven by wars and long voyages, including space missions, where long-lasting and nutritious foods were essential for survival. Other ancient activities include milling, storage, and fermentation processes. Although several traditional activities remain of concern and form the basis of today's technologies and innovations, the focus of food engineering has recently shifted to food quality, safety, taste, health and sustainability. Application and practices The following are some of the applications and practices used in food engineering to produce safe, healthy, tasty, and sustainable food: Refrigeration and freezing The main objective of food refrigeration and/or freezing is to preserve the quality and safety of food materials. Refrigeration and freezing contribute to the preservation of perishable foods, and to the conservation of some food quality factors such as visual appearance, texture, taste, flavor and nutritional content. Freezing food slows the growth of bacteria that could potentially harm consumers. Evaporation Evaporation is used to pre-concentrate, increase the solid content, change the color, and reduce the water content of food and liquid products. This process is mostly seen when processing milk, starch derivatives, coffee, fruit juices, vegetable pastes and concentrates, seasonings, sauces, sugar, and edible oil. Evaporation is also used in food dehydration processes. The purpose of dehydration is to prevent the growth of molds in food, which only grow when moisture is present. This process can be applied to vegetables, fruits, meats, and fish, for example. Packaging Food packaging technologies are used to extend the shelf-life of products, to stabilize food (preserve taste, appearance, and quality), and to keep the food clean, protected, and appealing to the consumer.
This can be achieved, for example, by packaging food in cans and jars. Because food production creates large amounts of waste, many companies are transitioning to eco-friendly packaging to preserve the environment and attract the attention of environmentally conscious consumers. Some types of environmentally friendly packaging include plastics made from corn or potato, bio-compostable plastic and paper products which disintegrate, and recycled content. Even though transitioning to eco-friendly packaging has positive effects on the environment, many companies are finding other benefits such as reducing excess packaging material, helping to attract and retain customers, and showing that companies care about the environment. Energy for food processing To increase sustainability of food processing there is a need for energy efficiency and waste heat recovery. The replacement of conventional energy-intensive food processes with new technologies like thermodynamic cycles and non-thermal heating processes provide another potential to reduce energy consumption, reduce production costs, and improve the sustainability in food production. Heat transfer in food processing Heat transfer is important in the processing of almost every commercialized food product and is important to preserve the hygienic, nutritional and sensory qualities of food. Heat transfer methods include induction, convection, and radiation. These methods are used to create variations in the physical properties of food when freezing, baking, or deep frying products, and also when applying ohmic heating or infrared radiation to food. These tools allow food engineers to innovate in the creation and transformation of food products. Food Safety Management Systems (FSMS) A Food Safety Management System (FSMS) is "a systematic approach to controlling food safety hazards within a business in order to ensure that the food product is safe to consume." In some countries FSMS is a legal requirement, which obliges all food production businesses to use and maintain a FSMS based on the principles of Hazard Analysis Critical Control Point (HACCP). HACCP is a management system that addresses food safety through the analysis and control of biological, chemical, and physical hazards in all stages of the food supply chain. The ISO 22000 standard specifies the requirements for FSMS. Emerging technologies The following technologies, which continue to evolve, have contributed to the innovation and advancement of food engineering practices: Three-dimensional printing of food Three-dimensional (3D) printing, also known as additive manufacturing, is the process of using digital files to create three dimensional objects. In the food industry, 3D printing of food is used for the processing of food layers using computer equipment. The process of 3D printing is slow, but is improving over time with the goal of reducing costs and processing times. Some of the successful food items that have been printed through 3D technology are: chocolate, cheese, cake frosting, turkey, pizza, celery, among others. This technology is continuously improving, and has the potential of providing cost-effective, energy efficient food that meets nutritional stability, safety and variety. Biosensors Biosensors can be used for quality control in laboratories and in different stages of food processing. Biosensor technology is one way in which farmers and food processors have adapted to the worldwide increase in demand for food, while maintaining their food production and quality high. 
Furthermore, since millions of people are affected by food-borne diseases caused by bacteria and viruses, biosensors are becoming an important tool to ensure the safety of food. They help track and analyze food quality during several parts of the supply chain: in food processing, shipping and commercialization. Biosensors can also help with the detection of genetically modified organisms (GMOs), to help regulate GMO products. With the advancement of technologies such as nanotechnology, the quality and uses of biosensors are constantly being improved. Milk pasteurization by microwave When storage conditions of milk are controlled, milk tends to have a very good flavor. However, oxidized flavor is a problem that negatively affects the taste and safety of milk. To prevent the growth of pathogenic bacteria and extend the shelf life of milk, pasteurization processes were developed. Microwave pasteurization of milk has been studied and developed as a way to reduce oxidation compared with traditional pasteurization methods, and it has been concluded that milk quality is better when microwave pasteurization is used. Education and training In the 1950s, food engineering emerged as an academic discipline, when several U.S. universities included food science and food technology in their curricula, and important works on food engineering appeared. Today, educational institutions throughout the world offer bachelor's, master's, and doctoral degrees in food engineering. However, due to the unique character of food engineering, its training is more often offered as a branch of broader programs on food science, food technology, biotechnology, or agricultural and chemical engineering. In other cases, institutions offer food engineering education through concentrations, specializations, or minors. Food engineering candidates receive multidisciplinary training in areas like mathematics, chemistry, biochemistry, physics, microbiology, nutrition, and law. Food engineering is still growing and developing as a field of study, and academic curricula continue to evolve. Future food engineering programs are subject to change due to the current challenges in the food industry, including bio-economics, food security, population growth, food safety, changing eating behavior, globalization, climate change, energy cost and change in value chain, fossil fuel prices, and sustainability. To address these challenges, which require the development of new products, services, and processes, academic programs are incorporating innovative and practical forms of training. For example, innovation laboratories, research programs, and projects with food companies and equipment manufacturers are being adopted by some universities. In addition, food engineering competitions and competitions from other scientific disciplines are appearing. With the growing demand for safe, sustainable, and healthy food, and for environmentally friendly processes and packaging, there is a large job market for prospective food engineers. Food engineers are typically employed by the food industry, academia, government agencies, research centers, consulting firms, pharmaceutical companies, healthcare firms, and entrepreneurial projects. Job descriptions include, but are not limited to, food engineer, food microbiologist, bioengineering/biotechnology, nutrition, traceability, and food safety and quality management.
Challenges Sustainability Food engineering has negative impacts on the environment such as the generation of large quantities of waste and the pollution of water and air, which must be addressed by food engineers in the future development of food production and processing operations. Scientists and engineers are experimenting with different ways to create improved processes that reduce pollution, but these must continue to be improved in order to achieve a sustainable food supply chain. Food engineers must reevaluate current practices and technologies to focus on increasing productivity and efficiency while reducing the consumption of water and energy, and decreasing the amount of waste produced. Population growth Even though food supply expands yearly, there has also been an increase in the number of hungry people. The world population is expected to reach 9-10 billion people by 2050 and the problem of malnutrition remains a priority. To achieve food security, food engineers must address land and water scarcity in order to provide enough food for undernourished people. In addition, food production depends on land and water supply, which are under stress as the population size increases. There is a growing pressure on land resources driven by expanding populations, leading to expansions of croplands; this usually involves the destruction of forests and exploitation of arable land. Food engineers face the challenge of finding sustainable ways to produce enough food for the growing population. Human health Food engineers must adapt food technologies and operations to the recent consumer trend toward the consumption of healthy and nutritious food. To supply foods with these qualities, and for the benefit of human health, food engineers must work collaboratively with professionals in other domains such as medicine, biochemistry, chemistry, and consumerism. New technologies and practices must be developed to increase the production of foods that have a positive impact on human health. See also Pharmaceuticals Food science Food technology Aseptic processing Dietary supplement Food and biological process engineering Food fortification Food preservation Food rheology Food supplements Future food technology Nutraceutical Nutrification Food and Bioprocess Technology Food safety Food chemistry Food physical chemistry Pasteurization Food dehydration Biosensors Biochemistry Microbiology Food quality Stabiliser References Engineering disciplines Food science Food industry
Food engineering
[ "Engineering" ]
2,391
[ "Food engineering", "nan" ]
1,515,898
https://en.wikipedia.org/wiki/Thermodynamic%20equations
Thermodynamics is expressed by a mathematical framework of thermodynamic equations which relate various thermodynamic quantities and physical properties measured in a laboratory or production process. Thermodynamics is based on a fundamental set of postulates that became the laws of thermodynamics. Introduction One of the fundamental thermodynamic equations is the description of thermodynamic work in analogy to mechanical work, or weight lifted through an elevation against gravity, as defined in 1824 by French physicist Sadi Carnot. Carnot used the phrase motive power for work. In the footnotes to his famous On the Motive Power of Fire, he states: “We use here the expression motive power to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised.” With the inclusion of a unit of time in Carnot's definition, one arrives at the modern definition for power, P = W/t, the work performed per unit time. During the latter half of the 19th century, physicists such as Rudolf Clausius, Peter Guthrie Tait, and Willard Gibbs worked to develop the concept of a thermodynamic system and the correlative energetic laws which govern its associated processes. The equilibrium state of a thermodynamic system is described by specifying its "state". The state of a thermodynamic system is specified by a number of extensive quantities, the most familiar of which are volume, internal energy, and the amount of each constituent particle (particle numbers). Extensive parameters are properties of the entire system, as contrasted with intensive parameters which can be defined at a single point, such as temperature and pressure. The extensive parameters (except entropy) are generally conserved in some way as long as the system is "insulated" to changes to that parameter from the outside. The truth of this statement for volume is trivial, for particles one might say that the total particle number of each atomic element is conserved. In the case of energy, the statement of the conservation of energy is known as the first law of thermodynamics. A thermodynamic system is in equilibrium when it is no longer changing in time. This may happen in a very short time, or it may happen with glacial slowness. A thermodynamic system may be composed of many subsystems which may or may not be "insulated" from each other with respect to the various extensive quantities. If we have a thermodynamic system in equilibrium in which we relax some of its constraints, it will move to a new equilibrium state. The thermodynamic parameters may now be thought of as variables and the state may be thought of as a particular point in a space of thermodynamic parameters. The change in the state of the system can be seen as a path in this state space. This change is called a thermodynamic process. Thermodynamic equations are now used to express the relationships between the state parameters at these different equilibrium states. The concept which governs the path that a thermodynamic system traces in state space as it goes from one equilibrium state to another is that of entropy. The entropy is first viewed as an extensive function of all of the extensive thermodynamic parameters.
If we have a thermodynamic system in equilibrium, and we release some of the extensive constraints on the system, there are many equilibrium states that it could move to consistent with the conservation of energy, volume, etc. The second law of thermodynamics specifies that the equilibrium state that it moves to is in fact the one with the greatest entropy. Once we know the entropy as a function of the extensive variables of the system, we will be able to predict the final equilibrium state. Notation Some of the most common thermodynamic quantities are: The conjugate variable pairs are the fundamental state variables used to formulate the thermodynamic functions. The most important thermodynamic potentials are the following functions: Thermodynamic systems are typically affected by the following types of system interactions. The types under consideration are used to classify systems as open systems, closed systems, and isolated systems. Common material properties determined from the thermodynamic functions are the following: The following constants are constants that occur in many relationships due to the application of a standard system of units. Laws of thermodynamics The behavior of a thermodynamic system is summarized in the laws of thermodynamics, which concisely are: Zeroth law of thermodynamics If A, B, C are thermodynamic systems such that A is in thermal equilibrium with B and B is in thermal equilibrium with C, then A is in thermal equilibrium with C. The zeroth law is of importance in thermometry, because it implies the existence of temperature scales. In practice, C is a thermometer, and the zeroth law says that systems that are in thermodynamic equilibrium with each other have the same temperature. The law was actually the last of the laws to be formulated. First law of thermodynamics dU = δQ − δW, where dU is the infinitesimal increase in internal energy of the system, δQ is the infinitesimal heat flow into the system, and δW is the infinitesimal work done by the system. The first law is the law of conservation of energy. The symbol δ, instead of the plain d, originated in the work of German mathematician Carl Gottfried Neumann and is used to denote an inexact differential and to indicate that Q and W are path-dependent (i.e., they are not state functions). In some fields such as physical chemistry, positive work is conventionally considered work done on the system rather than by the system, and the law is expressed as dU = δQ + δW. Second law of thermodynamics The entropy of an isolated system never decreases: dS ≥ 0 for an isolated system. A concept related to the second law which is important in thermodynamics is that of reversibility. A process within a given isolated system is said to be reversible if throughout the process the entropy never increases (i.e. the entropy remains unchanged). Third law of thermodynamics S → 0 when T → 0. The third law of thermodynamics states that at the absolute zero of temperature, the entropy is zero for a perfect crystalline structure. Onsager reciprocal relations – sometimes called the Fourth law of thermodynamics The fourth law of thermodynamics is not yet an agreed upon law (many supposed variations exist); historically, however, the Onsager reciprocal relations have been frequently referred to as the fourth law. The fundamental equation The first and second law of thermodynamics are the most fundamental equations of thermodynamics.
They may be combined into what is known as the fundamental thermodynamic relation, which describes all of the changes of thermodynamic state functions of a system of uniform temperature and pressure. As a simple example, consider a system composed of k different types of particles which has the volume as its only external variable. The fundamental thermodynamic relation may then be expressed in terms of the internal energy as: dU = T dS − P dV + Σi μi dNi. Some important aspects of this equation should be noted: The thermodynamic space has k+2 dimensions The differential quantities (U, S, V, Ni) are all extensive quantities. The coefficients of the differential quantities are intensive quantities (temperature, pressure, chemical potential). Each pair in the equation is known as a conjugate pair with respect to the internal energy. The intensive variables may be viewed as a generalized "force". An imbalance in the intensive variable will cause a "flow" of the extensive variable in a direction to counter the imbalance. The equation may be seen as a particular case of the chain rule. In other words: dU = (∂U/∂S) dS + (∂U/∂V) dV + Σi (∂U/∂Ni) dNi, from which the following identifications can be made: T = (∂U/∂S) at constant V and Ni, −P = (∂U/∂V) at constant S and Ni, and μi = (∂U/∂Ni) at constant S, V and the other Nj. These equations are known as "equations of state" with respect to the internal energy. (Note - the relation between pressure, volume, temperature, and particle number which is commonly called "the equation of state" is just one of many possible equations of state.) If we know all k+2 of the above equations of state, we may reconstitute the fundamental equation and recover all thermodynamic properties of the system. The fundamental equation can be solved for any other differential and similar expressions can be found. For example, we may solve for dS and find that dS = (1/T) dU + (P/T) dV − Σi (μi/T) dNi. Thermodynamic potentials By the principle of minimum energy, the second law can be restated by saying that for a fixed entropy, when the constraints on the system are relaxed, the internal energy assumes a minimum value. This will require that the system be connected to its surroundings, since otherwise the energy would remain constant. By the principle of minimum energy, there are a number of other state functions which may be defined which have the dimensions of energy and which are minimized according to the second law under certain conditions other than constant entropy. These are called thermodynamic potentials. For each such potential, the relevant fundamental equation results from the same Second-Law principle that gives rise to energy minimization under restricted conditions: that the total entropy of the system and its environment is maximized in equilibrium. The intensive parameters give the derivatives of the environment entropy with respect to the extensive properties of the system. The four most common thermodynamic potentials are: the internal energy U (natural variables S, V, Ni), the Helmholtz free energy F = U − TS (natural variables T, V, Ni), the enthalpy H = U + PV (natural variables S, P, Ni), and the Gibbs free energy G = U + PV − TS (natural variables T, P, Ni). After each potential is shown its "natural variables". These variables are important because if the thermodynamic potential is expressed in terms of its natural variables, then it will contain all of the thermodynamic relationships necessary to derive any other relationship. In other words, it too will be a fundamental equation. For the above four potentials, the fundamental equations are expressed as: dU = T dS − P dV + Σi μi dNi, dF = −S dT − P dV + Σi μi dNi, dH = T dS + V dP + Σi μi dNi, and dG = −S dT + V dP + Σi μi dNi. The thermodynamic square can be used as a tool to recall and derive these potentials. First order equations Just as with the internal energy version of the fundamental equation, the chain rule can be used on the above equations to find k+2 equations of state with respect to the particular potential.
If Φ is a thermodynamic potential, then the fundamental equation may be expressed as: dΦ = Σi (∂Φ/∂Xi) dXi, where the Xi are the natural variables of the potential. If γi is conjugate to Xi then we have the equations of state for that potential, one for each set of conjugate variables: γi = ∂Φ/∂Xi. Only one equation of state will not be sufficient to reconstitute the fundamental equation. All equations of state will be needed to fully characterize the thermodynamic system. Note that what is commonly called "the equation of state" is just the "mechanical" equation of state involving the Helmholtz potential and the volume: P = −(∂F/∂V) at constant T and Ni. For an ideal gas, this becomes the familiar PV=NkBT. Euler integrals Because all of the natural variables of the internal energy U are extensive quantities, it follows from Euler's homogeneous function theorem that U = T S − P V + Σi μi Ni. Substituting into the expressions for the other main potentials we have the following expressions for the thermodynamic potentials: F = −P V + Σi μi Ni, H = T S + Σi μi Ni, and G = Σi μi Ni. Note that the Euler integrals are sometimes also referred to as fundamental equations. Gibbs–Duhem relationship Differentiating the Euler equation for the internal energy and combining with the fundamental equation for internal energy, it follows that: 0 = S dT − V dP + Σi Ni dμi, which is known as the Gibbs-Duhem relationship. The Gibbs-Duhem relationship is a relationship among the intensive parameters of the system. It follows that for a simple system with r components, there will be r+1 independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and volume for example. The law is named after Willard Gibbs and Pierre Duhem. Second order equations There are many relationships that follow mathematically from the above basic equations. See Exact differential for a list of mathematical relationships. Many equations are expressed as second derivatives of the thermodynamic potentials (see Bridgman equations). Maxwell relations Maxwell relations are equalities involving the second derivatives of thermodynamic potentials with respect to their natural variables. They follow directly from the fact that the order of differentiation does not matter when taking the second derivative. The four most common Maxwell relations are: (∂T/∂V)S = −(∂P/∂S)V, (∂T/∂P)S = (∂V/∂S)P, (∂S/∂V)T = (∂P/∂T)V, and (∂S/∂P)T = −(∂V/∂T)P. The thermodynamic square can be used as a tool to recall and derive these relations. Material properties Second derivatives of thermodynamic potentials generally describe the response of the system to small changes. The number of second derivatives which are independent of each other is relatively small, which means that most material properties can be described in terms of just a few "standard" properties. For the case of a single component system, there are three properties generally considered "standard" from which all others may be derived: Compressibility at constant temperature or constant entropy Specific heat (per-particle) at constant pressure or constant volume Coefficient of thermal expansion These properties are seen to be the three possible second derivatives of the Gibbs free energy with respect to temperature and pressure. Thermodynamic property relations Properties such as pressure, volume, temperature, unit cell volume, bulk modulus and mass are easily measured. Other properties are measured through simple relations, such as density, specific volume, specific weight. Properties such as internal energy, entropy, enthalpy, and heat transfer are not so easily measured or determined through simple relations.
Thus, we use more complex relations such as Maxwell relations, the Clapeyron equation, and the Mayer relation. Maxwell relations in thermodynamics are critical because they provide a means of simply measuring the change in properties of pressure, temperature, and specific volume, to determine a change in entropy. Entropy cannot be measured directly. The change in entropy with respect to pressure at a constant temperature is the same as the negative change in specific volume with respect to temperature at a constant pressure, for a simple compressible system. Maxwell relations in thermodynamics are often used to derive thermodynamic relations. The Clapeyron equation allows us to use pressure, temperature, and specific volume to determine an enthalpy change that is connected to a phase change. It is significant to any phase change process that happens at a constant pressure and temperature. One of the results it yields is the enthalpy of vaporization at a given temperature, obtained by measuring the slope of the saturation curve on a pressure vs. temperature graph. It also allows us to determine the specific volume of a saturated vapor and liquid at that given temperature. In the equation below, L represents the specific latent heat, T represents temperature, and Δv represents the change in specific volume: dP/dT = L / (T Δv). The Mayer relation states that the specific heat capacity of a gas at constant volume is slightly less than at constant pressure. This relation was built on the reasoning that energy must be supplied to raise the temperature of the gas and for the gas to do work in a volume changing case. According to this relation, the difference between the specific heat capacities is the same as the universal gas constant. This relation is represented by the difference between Cp and Cv: Cp – Cv = R See also Thermodynamics Timeline of thermodynamics Notes References Chapters 1 - 10, Part 1: Equilibrium. (reprinted from Oxford University Press, 1978) Thermodynamics Chemical engineering
Thermodynamic equations
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
3,198
[ "Thermodynamic equations", "Equations of physics", "Chemical engineering", "Thermodynamics", "nan", "Dynamical systems" ]
1,515,923
https://en.wikipedia.org/wiki/Atom%20Smasher%20%28DC%20Comics%29
Albert Julian Rothstein (known by the aliases Nuklon and Atom Smasher) is a superhero appearing in American comic books published by DC Comics. Atom Smasher is known for his power of growth and super strength. The character made his live-action debut in The Flash, portrayed by Adam Copeland. He also appears in Black Adam, portrayed by Noah Centineo. Publication history Atom Smasher was created by Roy Thomas and Jerry Ordway, and first appeared in The All-Star Squadron #25 (September 1983). He was named after Thomas' friend Allan Rothstein. Creation Thomas spoke in the character's genesis stating, In All-Star Squadron #21, I'd had the non-super-powered Atom of 1942 knocked around by an atomically-charged villain I called Cyclotron. (An "atom-smasher," get it?) The artists were a couple of guys named Ordway and Machlan. It was hinted that radiation absorbed from Cyclotron would act slowly on The Atom--a subtle (?) foreshadowing of the atomic strength the Mighty Mite would gain in Flash and All-Star in 1948. Cyclotron was given a capeless costume otherwise nearly identical to The Atom's '48-'51 duds, thus retroactively establishing that Al adapted it from Cyclotron's. By the time All-Star Squadron introduced Cyclotron's newborn daughter Terri in its first Annual, the new teen-group's comic was well in the works; the Squadron connection was done to establish that Terri's radiation-altered genes would be passed on to her children. It had already been long enough since World War II that one of our new stars was going to be the grandson of a costumed character of that period--and a villain, to boot. Cyclotron--Dr. Terry Curtis, who had been a supporting character in a very early Superman/Ultra-Humanite story--thus became the grandpa of Albert Rothstein, whom Dann and I named after science-fiction/comics fan (and friend) Alan Rothstein out in L.A. We thought it high time comic books had an overtly Jewish super-hero. (Maybe we were first with that bit, maybe we weren't; we didn't know and didn't much care.) And so was born Nuklon, who ultimately got his strength from the same source as The Atom--and whom we made a virtual giant to contrast with his godfather's short stature. Of course, Nuklon, too, was not strictly a son or daughter of a JSAer. Why didn't we make him the son of The Atom? I can't remember, but maybe Al and Mary Pratt had been depicted as childless in one of those "Whatever Happened to...? backup features I had hated in DC Comics Presents. Fictional character biography Origin The godson of Al Pratt, the Golden Age Atom, Albert Rothstein acquired his metahuman powers of super strength and control over his molecular structure, allowing him to alter the size and density of his body, from his grandfather, a reluctant supervillain known as Cyclotron. This allowed him to fight crime first as Nuklon, and then, later, as Atom Smasher. As Nuklon, Albert was a charter member of Infinity, Inc. and subsequently served in the Justice League. He was considered a dependable, but rather insecure and indecisive superhero while in Infinity, Inc. During this time he had a mohawk haircut. While in the JLA, he forged a strong friendship with fellow former Infinity Inc. teammate Obsidian. The Justice Society Albert finally gets his dream and is invited to join the reunited JSA under his new name and identity, Atom Smasher. 
For years, Atom Smasher cherishes his role in upholding Pratt's legacy and constantly seeks to prove himself worthy to his Golden Age idols – especially when many of them became his teammates in the JSA. He looks up to the elder JSA members, but is himself looked up to by young rookie member Stargirl. When Albert's mother is murdered in a plane crash engineered by the terrorist Kobra, he becomes consumed by vengeance, nearly crushing Kobra in his hands before he is talked down by his teammate Jack Knight, who convinces him that he should not taint the memory of his mother by associating it with Kobra's murder. Not long after the fatal crash, Albert and Metron travel back in time and force the weakened villain Extant into a position where he takes the place of Albert's mother. Black Adam When Captain Marvel's longtime adversary Black Adam reforms and joins the JSA, he and Rothstein develop a rivalry at first as Al refuses to believe Adam has reformed. This soon turns to kinship after Adam justifies Al's murderous actions towards Extant. Indeed, Black Adam comments that he thinks of Atom Smasher as the brother he never had. Encouraged by Adam, Atom Smasher grows frustrated with the JSA's moral boundaries, especially when Kobra blackmails authorities into granting his release. Albert and Adam promptly quit the JSA after Kobra's escape. Shortly thereafter, the unlikely duo settle each other's personal scores. Adam kills Kobra, while Rothstein kills the dictatorial president of Khandaq, Adam's home country. Atom Smasher helps lead a team of rogue metahumans (including former Infinity Inc. teammates Brainwave and Northwind) in an invasion of Khandaq and overthrow its oppressive regime. Atom Smasher initially fights against his JSA teammates in Khandaq before deciding instead to help forge an uneasy truce—Black Adam and his compatriots can remain in power so long as they never leave the country. Atom Smasher remains in the Middle Eastern nation for a time, although he eventually begins to question Adam's motives. Rothstein perishes in JSA #75 while fighting against the Spectre, but is revived by Black Adam's lightning, and carried back to JSA headquarters. He is later put on trial for his actions in Khandaq and pleads guilty to all charges. Teammate Stargirl promises to "be there for him" when he gets out. Whilst in jail, he is approached by the founder of the Suicide Squad, Amanda Waller. In 52, he is seen assembling a new Suicide Squad under Waller's orders, instructed to fight Black Adam, and, unbeknownst to Atom Smasher himself, push his family to overreact. They succeed, and Osiris is disgraced and exposed for having killed a Squad member, as Amanda Waller was filming the events, leading to the downfall of the whole Black Marvel Family, and a murderous rampage of Black Adam, dubbed World War III. He then sides with the Justice Society, trying to apprehend Black Adam, but refuses to condemn him in any way, not even believing him guilty of the genocide in Bialya. When Adam is robbed of his powers by Captain Marvel, and is about to plunge to his death, it is Atom Smasher who saves him, though no character ever sees this, and Al keeps it hidden. In the Black Adam: The Dark Age series, Albert is shown searching for his former friend, who is intent upon resurrecting his dead wife Isis. In Black Adam #5, Albert brings Adam a bone from Isis' remains and tries unsuccessfully to persuade his friend to go into hiding. 
Modern-day JSA issues In the Justice Society of America: The Kingdom special, Stargirl recruits Atom Smasher to knock some sense into Damage, who has become an evangelist of sorts for Gog after the cosmic being temporarily healed his face. He views Pratt's son as a brother figure, since he was brought up by Pratt in the first place. Atom Smasher finally returns to the JSA during the "Black Adam and Isis" arc printed in Justice Society of America #23–25. Asking the team for a second chance at honoring the memory of Al Pratt, Atom Smasher joins the Justice Society in battling Black Adam and Isis, who have robbed Captain Marvel of his powers and his throne at the Rock of Eternity. At the conclusion of the story, despite Wildcat's distrust, Atom Smasher is readmitted into the JSA as a full member, along with all the other members of the team who had acted poorly in recent issues. He vanishes for several issues, but he reappears in the JSA: All-Stars book as a victim of kidnapping. In Doomsday Clock, Atom Smasher and the Justice Society are restored after Doctor Manhattan undoes his changes to the timeline that erased them. Powers and abilities Atom Smasher possesses superhuman strength and durability, and can further increase in size and strength at will. His strength and density increase proportionately to whatever size he chooses. Relationships with women Albert has had complicated relationships with women during his tenure on various super-hero teams. While on Infinity, Inc., he was shown to be clearly in love with teammate Fury, despite her engagement to his friend Silver Scarab. Many other characters make note of this, though none of them begrudge Al, and actually feel sorry for him because he will inevitably have his heart broken. Looking up to her even as children, he eventually proposes when Hector is killed and she is left pregnant, so that she will not be alone. She turns him down, saying that she prefers them to be friends. He also has a brief flirtation with the second Wildcat Yolanda Montez, but things never developed between them. During his time with the League, he dates Fire, but he discontinues the relationship because she is not Jewish — even though this did not stop his earlier or later crushes. During JSA All-Stars, he shown to be flirting and interested in Anna Fortune during the All-Stars' beach volleyball hangout. His relationship with Stargirl is even more complex. While Stargirl has shown some romantic feelings for Atom Smasher in the past, there is never any reciprocation on his part. Later issues clearly establish Stargirl's true feelings, as various friends (such as Captain Marvel or her friend Mary) accuse her of liking Al, and she promises to wait for him upon his return from prison. When Al is killed temporarily by Spectre, she reveals the depths of her feelings for him, weeping over his dead body. Albert finally acknowledges his own feelings when he rejoins the JSA to fight Black Adam, admitting that Billy Batson deserves her far more than Al himself does, in a regretful tone. Al's teammates realize the couple's mutual attraction once they start openly fawning over each other in public, and while Power Girl is supportive ("Go rescue your fair maiden"), the elder members force Al to turn Courtney down due to the age difference. This leaves Al melancholy, and Courtney runs off crying. Later issues of JSA: All-Stars reveal the two still love each other, but after Johnny Sorrow mimics Al to force a kiss from the young girl, they both recognize the need for "space." 
Other versions Al Rothstein / Atom-Smasher appears in Kingdom Come as a member of Superman's Justice League. In other media Television Tom Turbine, an original character based on Atom Smasher, Superman, and Al Pratt / Atom, appears in the Justice League episode "Legends", voiced by Ted McGinley. Albert Rothstein as Atom Smasher makes non-speaking cameo appearances in Justice League Unlimited as a member of the Justice League. A villainous Earth-2 incarnation of Albert Rothstein / Atom Smasher appears in The Flash episode "The Man Who Saved Central City", portrayed by Adam Copeland. While Eobard Thawne listed the Earth-1 version of Rothstein as a casualty of the S.T.A.R. Labs particle accelerator accident, the latter was retroactively stated to have been in Hawaii at the time and thus never acquired powers. The Earth-2 Rothstein kills his Earth-1 counterpart before attempting to do the same to the Flash on Zoom's behalf, having been promised that he will be able to return to his native Earth, only to be defeated and killed by the Flash. Film Albert Rothstein / Atom Smasher appears in Black Adam, portrayed by Noah Centineo. This version is a member of the Justice Society and Al Pratt's nephew. Video games Atom Smasher makes a background appearance in Injustice: Gods Among Us via the Hall of Justice stage. Merchandise Atom Smasher received an action figure in Mattel's Justice League Unlimited toyline in the summer of 2005. In February 2009, Atom Smasher received a Collect-and-Connect figure of the DC Universe Classics line. In December 2023, Australia's National Basketball League (NBL) held an event called the "DC Multiverse Round", wherein the teams wore stylized jerseys based on DC Comics characters, which were also made available to the public. Atom Smasher was featured on the Perth Wildcats' jersey. References External links Atom Smasher at Comic Vine Characters created by Jerry Ordway Characters created by Roy Thomas Comics characters introduced in 1983 DC Comics shapeshifters DC Comics characters who can move at superhuman speeds DC Comics characters with superhuman durability or invulnerability DC Comics characters with superhuman strength DC Comics male superheroes DC Comics metahumans Fictional American Jews in comics Fictional characters who can change size Fictional characters with density control abilities Fictional giants Earth-Two Jewish superheroes
Atom Smasher (DC Comics)
[ "Physics" ]
2,798
[ "Density", "Fictional characters with density control abilities", "Physical quantities" ]
1,515,964
https://en.wikipedia.org/wiki/Wedge%20strategy
The Wedge Strategy is a creationist political and social agenda authored by the Discovery Institute, the hub of the pseudoscientific intelligent design movement. The strategy was presented in a Discovery Institute internal memorandum known as the Wedge Document. Its goal is to change American culture by shaping public policy to reflect politically conservative fundamentalist evangelical Protestant values. The wedge metaphor is attributed to Phillip E. Johnson and depicts a metal wedge splitting a log. Intelligent design is the pseudoscientific religious belief that certain features of the universe and of living things are best explained by an intelligent cause, not a naturalistic process such as evolution by natural selection. Implicit in the intelligent design doctrine is a redefining of science and how it is conducted (see theistic science). Wedge strategy proponents are opposed to materialism, naturalism, and evolution, and have made the removal of each from how science is conducted and taught an explicit goal. The strategy was originally brought to the public's attention when the Wedge Document was leaked on the Web. The Wedge strategy forms the governing basis of a wide range of Discovery Institute intelligent design campaigns. Overview The Wedge Document outlines a public relations campaign meant to sway the opinion of the public, popular media, charitable funding agencies, and public policy makers. The document sets forth the short-term and long-term goals with milestones for the intelligent design movement, with its governing goals stated in the opening paragraph: "To defeat scientific materialism and its destructive moral, cultural and political legacies" "To replace materialistic explanations with the theistic understanding that nature and human beings are created by God" There are three Wedge Projects, referred to in the strategy as three phases designed to reach a governing goal: Scientific Research, Writing, and Publicity Publicity and Opinion-making Cultural Confrontation & Renewal Recognizing the need for support, the institute affirms the strategy's Christian, evangelistic orientation: The wedge strategy was designed with both five-year and twenty-year goals in mind in order to achieve the conversion of the mainstream. One notable component of the work was its desire to address perceived social consequences and to promote a social conservative agenda on a wide range of issues including abortion, euthanasia, sexuality, and other social reform movements. It criticized "materialist reformers [who] advocated coercive government programs" which it referred to as "a virulent strain of utopianism". Beyond promotion of the Phase I goals of proposing Intelligent Design-related research, publications, and attempted integration into academia, the wedge strategy places an emphasis on Phases II and III advocacy aimed at increasing popular support of the Discovery Institute's ideas. Support for the creation of popular-level books, newspaper and magazine articles, op-ed pieces, video productions, and apologetics seminars was hoped to embolden believers and sway the broader culture towards acceptance of intelligent design. This, in turn, would lead the ultimate goal of the wedge strategy; a social and political reformation of American culture. In 20 years, the group hopes that they will have achieved their goal of making intelligent design the main perspective in science as well as to branch out to ethics, politics, philosophy, theology, and the fine arts. 
A goal of the wedge strategy is to see intelligent design "permeate religious, cultural, moral and political life." By accomplishing this goal the ultimate goal as stated by the Center for Science and Culture (CSC) of the "overthrow of materialism and its damning cultural legacies" and reinstating the idea that humans are made in the image of God, thereby reforming American culture to reflect conservative Christian values, will be achieved. The preamble of the Wedge Document is mirrored largely word-for-word in the early mission statement of the CSC, then called the Center for the Renewal of Science and Culture. The theme is again picked up in the controversial book From Darwin to Hitler authored by Center for Science and Culture Fellow Richard Weikart and published with the center's assistance. The wedge strategy was largely authored by Phillip E. Johnson, and features in his book The Wedge of Truth: Splitting the Foundations of Naturalism. Origins Wedge Document Drafted in 1998 by Discovery Institute staff, the Wedge Document first appeared publicly after it was posted to the World Wide Web on February 5, 1999, by Tim Rhodes, having been shared with him in late January 1999 by Matt Duss, a part-time employee of a Seattle-based international human-resources firm. There Duss had been given a document to copy titled The Wedge and marked "Top Secret" and "Not For Distribution." Meyer once claimed that the Wedge Document was stolen from the Discovery Institute's offices. Discovery Institute co-founder and CSC Vice President Stephen C. Meyer eventually acknowledged the Institute as the source of the document. The Institute still seeks to downplay its significance, saying "Conspiracy theorists in the media continue to recycle the urban legend of the 'Wedge' document". The Institute also portrays the scientific community's reaction to the Wedge document as driven by "Darwinist Paranoia." Despite insisting that intelligent design is not a form of creationism, the Discovery Institute chose to use an image of Michelangelo's The Creation of Adam, depicting God reaching out to impart life from his finger into Adam. Movement and strategy According to Phillip E. Johnson, the wedge movement, if not the term, began in 1992: In 1993, a year after the SMU conference, "the Johnson-Behe cadre of scholars met at Pajaro Dunes. Here, Behe presented for the first time the seed thoughts that had been brewing in his mind for a year--the idea of 'irreducibly complex' molecular machinery." Nancy Pearcey, a CSC fellow, and Johnson associate acknowledges Johnson's leadership of the intelligent design movement in two of her most recent publications. In an interview with Johnson for World magazine, Pearcey says, "It is not only in politics that leaders forge movements. Phillip Johnson has developed what is called the 'Intelligent Design' movement." In Christianity Today, she reveals Johnson's religious beliefs and his animosity toward evolution and affirms Johnson as "The unofficial spokesman for ID." In his 1997 book Defeating Darwinism by Opening Minds Johnson summed up the underlying philosophy of the strategy: At the 1999 "Reclaiming America for Christ Conference" called by Reverend D. James Kennedy of Coral Ridge Ministries, Johnson gave a speech called "How the Evolution Debate Can Be Won". 
In it he summed up the theological and epistemological underpinnings of intelligent design and its strategy for winning the battle: Johnson cites the foundation of intelligent design as The Gospel According to Saint John, in the New Testament, specifically, Chapter 1:1: "In the beginning was the Word, and the Word was with God, and the Word was God" (King James Version). The 1999 establishment of the Michael Polanyi Center at Baylor University by Baylor president Robert B. Sloan was a major step forward in the Wedge Strategy. The center was directed by William Dembski and Bruce L. Gordon, with funding from the John Templeton Foundation via the Discovery Institute. The center was disbanded the next year in the face of protests from Baylor's faculty and the recommendation of an outside advisory council. By 2005 Baylor had also hired two other wedge proponents, Walter Bradley and Francis J. Beckwith. Elaborating on the goals and methods of wedge strategy, Johnson stated in an interview conducted in 2002 for Touchstone Magazine that "The mechanism of the wedge strategy is to make it attractive to Catholics, Orthodox, non-fundamentalist Protestants, observant Jews, and so on." He went on to elaborate: Other statements of Johnson's acknowledge that the goal of the intelligent design movement is to promote a theistic and creationist agenda cast as a scientific concept. Critics claim that Johnson's statements validate claims leveled by those who allege that the Discovery Institute and its allied organizations are merely stripping religious content from their anti-evolution, creationist assertions as a means of avoiding the separation of church and state mandated by the Establishment Clause of the First Amendment. The statements, when viewed in the light of the Wedge document and the US District Court's Kitzmiller decision, show ID and the ID movement is an attempt to put a gloss of secularity on top of what is a fundamentally religious belief. The wedge strategy details a simultaneous assault on state boards of education, state and federal legislatures and on the print and broadcast media. The Discovery Institute has carried out the strategy through its role in the intelligent design movement, where it aggressively promoted ID and its Teach the Controversy campaign to the public, education officials and public policymakers. Intelligent design proponents, through the Discovery Institute, have employed a number of specific political strategies and tactics in their furtherance of their goals. These range from attempts at the state level to undermine or remove altogether the presence of evolutionary theory from the public school classroom, to having the federal government mandate the teaching of intelligent design, to 'stacking' municipal, county and state school boards with ID proponents. The Discovery Institute has provided material support and assisted federal, state and local elected representatives in drafting legislation that would deemphasize or refute evolution in science curricula. The DI has also supported and advised individual parents and local groups who raise the subject with school boards. During school board meetings in Kansas, Ohio, and Texas, the political and social agenda of the Discovery Institute were used to call into question both the motives of the intelligent design proponents and the validity of their position. 
The Discovery Institute fellows have significant advantages in money, political sophistication, and experience over their opponents in the scientific and educational communities, who do not have the benefit of funding from wealthy benefactors, clerical and technical support staff, and expensive advertising campaigns and extensive political networking. The Discovery Institute's "Teach the Controversy" campaign is designed to leave the scientific establishment looking close-minded, appearing as if it is attempting to stifle and suppress new scientific discoveries that challenge the status quo. This is made with the knowledge that it's unlikely many in the public understand advanced biology or can consult the current scientific literature or contact major scientific organizations to verify Discovery Institute claims. This part of the strategy also plays on undercurrents of anti-intellectualism and distrust of science and scientists that can be found in particular segments of American society. There is a noticeable conflict between what intelligent design backers tell the public through the media and what they say before conservative Christian audiences. This is studied and deliberate as advocated by wedge strategy author Phillip E. Johnson. When speaking to a mainstream audience and to the media, ID proponents cast ID as a secular, scientific theory. But when speaking to what the Wedge Document calls their "natural constituency, namely (conservative) Christians," ID proponents express themselves in unambiguously religious language. This in the belief that they cannot afford to alienate their constituency and major funding sources, virtually all of which are conservative religious organizations and individuals such as Howard Ahmanson. Having written extensively about ID, philosopher of science Robert Pennock says "When lobbying for ID in the public schools, wedge members sometimes deny that ID makes any claims about the identity of the designer. It is ironic that their political strategy leads them to deny God in the public square more often than Peter did." The term "intelligent design" has become a liability for wedge advocates since the ruling in Kitzmiller v. Dover Area School District. Because of the success of the Discovery Institute's public relations campaign to make "intelligent design" a household phrase, and the ruling in Kitzmiller v. Dover Area School District that ID is essentially religious in nature more people recognize it as the religious concept of creationism. Having come closest to accomplishing getting ID into public school science classes in Kansas and Ohio where they succeeded in getting the State Board of Education to adopt ID lesson plans, intelligent design proponents advocated "teach the controversy" as a legally defensible alternative to teaching intelligent design. The Kitzmiller ruling also characterized "teaching the controversy" as part of the same religious ploy as presenting intelligent design as an alternative to evolution. This prompted a move to a fallback position, teaching "critical analysis" of evolutionary theory. Teaching "critical analysis" is viewed as a means of teaching all the ID arguments without using that label. It also picks up the themes of the teach the controversy strategy, emphasizing what they say are the "strengths and weaknesses" of evolutionary theory and "arguments against evolution," which they falsely portray as "a theory in crisis." Critics state about the wedge strategy that its "ultimate goal is to create a theocratic state". 
See also Intelligent design movement Kansas evolution hearings Santorum Amendment Naturalism (philosophy) Footnotes References External links (pdf format) Tim Rhodes puts a text copy of the wedge on the Internet The "Wedge" Archives at the Access Research Network website. The Wedge Strategy: So What? The Wedge at Work: How Intelligent Design Creationism Is Wedging Its Way into the Cultural and Academic Mainstream Chapter 1 of the book Intelligent Design Creationism and Its Critics by Barbara Forrest, Ph.D. MIT Press, 2001 The Wedge: Breaking the Modernist Monopoly on Science by Phillip E. Johnson. Originally published in Touchstone: A Journal of Mere Christianity. July/August 1999. Wedging Creationism into the Academy. Proponents of a controversial theory struggle to gain purchase within academia. A case study of the quest for academic legitimacy. By Barbara Forrest and Glenn Branch. 2005. Published in Academe. Southern Baptist Convention 1998 webpage about "The Wedge" Creationism Intelligent design movement Intelligent design controversies Discovery Institute campaigns Conservatism in the United States Right-wing politics in the United States Theocracy Strategy
Wedge strategy
[ "Biology" ]
2,813
[ "Creationism", "Biology theories", "Obsolete biology theories" ]
1,516,033
https://en.wikipedia.org/wiki/Kummer%27s%20function
In mathematics, there are several functions known as Kummer's function. One is known as the confluent hypergeometric function of Kummer. Another one, defined below, is related to the polylogarithm. Both are named for Ernst Kummer. Kummer's function is defined by The duplication formula is . Compare this to the duplication formula for the polylogarithm: Li_n(z^2) = 2^(n-1) (Li_n(z) + Li_n(-z)). An explicit link to the polylogarithm is given by References Special functions
Kummer's function
[ "Mathematics" ]
113
[ "Mathematical analysis", "Special functions", "Mathematical analysis stubs", "Combinatorics" ]
1,516,049
https://en.wikipedia.org/wiki/Pylon%20%28architecture%29
A pylon is a monumental gate of an Egyptian temple (Egyptian: bxn.t in the Manuel de Codage transliteration). The word comes from the Greek term 'gate'. It consists of two pyramidal towers, each tapered and surmounted by a cornice, joined by a less elevated section enclosing the entrance between them. The gate was generally about half the height of the towers. Contemporary paintings of pylons show them with long poles flying banners. Egyptian architecture In ancient Egyptian religion, the pylon mirrored the hieroglyph akhet 'horizon', which was a depiction of two hills "between which the sun rose and set". Consequently, it played a critical role in the symbolic architecture of a building associated with the place of re-creation and rebirth. Pylons were often decorated with scenes emphasizing a king's authority since it was the public face of a building. On the first pylon of the temple of Isis at Philae, the pharaoh is shown slaying his enemies while Isis, Horus and Hathor look on. Other examples of pylons can be seen in Karnak, Luxor Temple and Edfu. Rituals to the god Amun were often carried out on the top of temple pylons. A pair of obelisks usually stood in front of a pylon. In addition to standard vertical grooves on the exterior face of a pylon wall which were designed to hold flag poles, some pylons also contained internal stairways and rooms. The oldest intact pylons belong to mortuary temples from the Ramesside period in the 13th and 12th centuries BCE. Revival architecture Both Neoclassical and Egyptian Revival architecture employ the pylon form, with Boodle's gentlemen's club in London being an example of the Neoclassical style. The 19th and 20th centuries saw pylon architecture employed for bridges such as the Sydney Harbour Bridge and as stand-alone monuments such as the Patcham Pylon in Brighton and Hove, England. Gallery See also Ancient Egyptian architecture Column References External links Architectural elements Gates
Pylon (architecture)
[ "Technology", "Engineering" ]
429
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
1,516,095
https://en.wikipedia.org/wiki/Confluent%20hypergeometric%20function
In mathematics, a confluent hypergeometric function is a solution of a confluent hypergeometric equation, which is a degenerate form of a hypergeometric differential equation where two of the three regular singularities merge into an irregular singularity. The term confluent refers to the merging of singular points of families of differential equations; confluere is Latin for "to flow together". There are several common standard forms of confluent hypergeometric functions: Kummer's (confluent hypergeometric) function , introduced by , is a solution to Kummer's differential equation. This is also known as the confluent hypergeometric function of the first kind. There is a different and unrelated Kummer's function bearing the same name. Tricomi's (confluent hypergeometric) function introduced by , sometimes denoted by , is another solution to Kummer's equation. This is also known as the confluent hypergeometric function of the second kind. Whittaker functions (for Edmund Taylor Whittaker) are solutions to Whittaker's equation. Coulomb wave functions are solutions to the Coulomb wave equation. The Kummer functions, Whittaker functions, and Coulomb wave functions are essentially the same, and differ from each other only by elementary functions and change of variables. Kummer's equation Kummer's equation may be written as: with a regular singular point at and an irregular singular point at . It has two (usually) linearly independent solutions and . Kummer's function of the first kind is a generalized hypergeometric series introduced in , given by: where: is the rising factorial. Another common notation for this solution is . Considered as a function of , , or with the other two held constant, this defines an entire function of or , except when As a function of it is analytic except for poles at the non-positive integers. Some values of and yield solutions that can be expressed in terms of other known functions. See #Special cases. When is a non-positive integer, then Kummer's function (if it is defined) is a generalized Laguerre polynomial. Just as the confluent differential equation is a limit of the hypergeometric differential equation as the singular point at 1 is moved towards the singular point at ∞, the confluent hypergeometric function can be given as a limit of the hypergeometric function and many of the properties of the confluent hypergeometric function are limiting cases of properties of the hypergeometric function. Since Kummer's equation is second order there must be another, independent, solution. The indicial equation of the method of Frobenius tells us that the lowest power of a power series solution to the Kummer equation is either 0 or . If we let be then the differential equation gives which, upon dividing out and simplifying, becomes This means that is a solution so long as is not an integer greater than 1, just as is a solution so long as is not an integer less than 1. We can also use the Tricomi confluent hypergeometric function introduced by , and sometimes denoted by . It is a combination of the above two solutions, defined by Although this expression is undefined for integer , it has the advantage that it can be extended to any integer by continuity. Unlike Kummer's function which is an entire function of , usually has a singularity at zero. For example, if and then is asymptotic to as goes to zero. But see #Special cases for some examples where it is an entire function (polynomial). 
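For reference, the objects discussed above can be stated explicitly in the usual notation; the choice of z as the independent variable and the compact display below are editorial, but the formulas themselves are the standard definitions. Kummer's equation is

\[ z\,\frac{d^{2}w}{dz^{2}} + (b - z)\,\frac{dw}{dz} - a\,w = 0 , \]

Kummer's function of the first kind is the series

\[ M(a,b,z) = \sum_{n=0}^{\infty} \frac{(a)_{n}}{(b)_{n}}\,\frac{z^{n}}{n!}, \qquad (a)_{n} = a(a+1)(a+2)\cdots(a+n-1), \]

the second Frobenius solution referred to above is \( z^{1-b}\,M(a+1-b,\,2-b,\,z) \), and Tricomi's function is the combination

\[ U(a,b,z) = \frac{\Gamma(1-b)}{\Gamma(a+1-b)}\,M(a,b,z) + \frac{\Gamma(b-1)}{\Gamma(a)}\,z^{1-b}\,M(a+1-b,\,2-b,\,z). \]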
Note that the solution to Kummer's equation is the same as the solution , see #Kummer's transformation. For most combinations of real or complex and , the functions and are independent, and if is a non-positive integer, so doesn't exist, then we may be able to use as a second solution. But if is a non-positive integer and is not a non-positive integer, then is a multiple of . In that case as well, can be used as a second solution if it exists and is different. But when is an integer greater than 1, this solution doesn't exist, and if then it exists but is a multiple of and of In those cases a second solution exists of the following form and is valid for any real or complex and any positive integer except when is a positive integer less than : When a = 0 we can alternatively use: When this is the exponential integral . A similar problem occurs when is a negative integer and is an integer less than 1. In this case doesn't exist, and is a multiple of A second solution is then of the form: Other equations Confluent Hypergeometric Functions can be used to solve the Extended Confluent Hypergeometric Equation whose general form is given as: Note that for or when the summation involves just one term, it reduces to the conventional Confluent Hypergeometric Equation. Thus Confluent Hypergeometric Functions can be used to solve "most" second-order ordinary differential equations whose variable coefficients are all linear functions of , because they can be transformed to the Extended Confluent Hypergeometric Equation. Consider the equation: First we move the regular singular point to by using the substitution of , which converts the equation to: with new values of , and . Next we use the substitution: and multiply the equation by the same factor, obtaining: whose solution is where is a solution to Kummer's equation with Note that the square root may give an imaginary or complex number. If it is zero, another solution must be used, namely where is a confluent hypergeometric limit function satisfying As noted below, even the Bessel equation can be solved using confluent hypergeometric functions. Integral representations If , can be represented as an integral thus is the characteristic function of the beta distribution. For with positive real part can be obtained by the Laplace integral The integral defines a solution in the right half-plane . They can also be represented as Barnes integrals where the contour passes to one side of the poles of and to the other side of the poles of . Asymptotic behavior If a solution to Kummer's equation is asymptotic to a power of as , then the power must be . This is in fact the case for Tricomi's solution . Its asymptotic behavior as can be deduced from the integral representations. If , then making a change of variables in the integral followed by expanding the binomial series and integrating it formally term by term gives rise to an asymptotic series expansion, valid as : where is a generalized hypergeometric series with 1 as leading term, which generally converges nowhere, but exists as a formal power series in . This asymptotic expansion is also valid for complex instead of real , with The asymptotic behavior of Kummer's solution for large is: The powers of are taken using . The first term is not needed when is finite, that is when is not a non-positive integer and the real part of goes to negative infinity, whereas the second term is not needed when is finite, that is, when is a not a non-positive integer and the real part of goes to positive infinity. 
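For reference, the integral representations and the large-argument behaviour described above take the following standard forms, stated here in the same notation (the presentation, not the results, is editorial). For Re b > Re a > 0,

\[ M(a,b,z) = \frac{\Gamma(b)}{\Gamma(a)\,\Gamma(b-a)} \int_{0}^{1} e^{zt}\, t^{a-1} (1-t)^{b-a-1}\, dt , \]

and for Re a > 0 and Re z > 0 the Laplace integral gives

\[ U(a,b,z) = \frac{1}{\Gamma(a)} \int_{0}^{\infty} e^{-zt}\, t^{a-1} (1+t)^{b-a-1}\, dt . \]

As z → ∞ in a suitable sector,

\[ U(a,b,z) \sim z^{-a}, \qquad M(a,b,z) \sim \Gamma(b)\left( \frac{e^{z} z^{a-b}}{\Gamma(a)} + \frac{(-z)^{-a}}{\Gamma(b-a)} \right), \]

where, as noted above, the first term for M dominates when the real part of z goes to positive infinity and the second when it goes to negative infinity.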
There is always some solution to Kummer's equation asymptotic to as . Usually this will be a combination of both and but can also be expressed as . Relations There are many relations between Kummer functions for various arguments and their derivatives. This section gives a few typical examples. Contiguous relations Given , the four functions are called contiguous to . The function can be written as a linear combination of any two of its contiguous functions, with rational coefficients in terms of , and . This gives relations, given by identifying any two lines on the right hand side of In the notation above, , , and so on. Repeatedly applying these relations gives a linear relation between any three functions of the form (and their higher derivatives), where , are integers. There are similar relations for . Kummer's transformation Kummer's functions are also related by Kummer's transformations: . Multiplication theorem The following multiplication theorems hold true: Connection with Laguerre polynomials and similar representations In terms of Laguerre polynomials, Kummer's functions have several expansions, for example or Special cases Functions that can be expressed as special cases of the confluent hypergeometric function include: Some elementary functions where the left-hand side is not defined when is a non-positive integer, but the right-hand side is still a solution of the corresponding Kummer equation: (a polynomial if is a non-positive integer) for non-positive integer is a generalized Laguerre polynomial. for non-positive integer is a multiple of a generalized Laguerre polynomial, equal to when the latter exists. when is a positive integer is a closed form with powers of , equal to when the latter exists. for non-negative integer is a Bessel polynomial (see lower down). etc. Using the contiguous relation we get, for example, Bateman's function Bessel functions and many related functions such as Airy functions, Kelvin functions, Hankel functions. For example, in the special case the function reduces to a Bessel function: This identity is sometimes also referred to as Kummer's second transformation. Similarly When is a non-positive integer, this equals where is a Bessel polynomial. The error function can be expressed as Coulomb wave function Cunningham functions Exponential integral and related functions such as the sine integral, logarithmic integral Hermite polynomials Incomplete gamma function Laguerre polynomials Parabolic cylinder function (or Weber function) Poisson–Charlier function Toronto functions Whittaker functions are solutions of Whittaker's equation that can be expressed in terms of Kummer functions and by The general -th raw moment ( not necessarily an integer) can be expressed as In the second formula the function's second branch cut can be chosen by multiplying with . Application to continued fractions By applying a limiting argument to Gauss's continued fraction it can be shown that and that this continued fraction converges uniformly to a meromorphic function of in every bounded domain that does not include a pole. Notes References External links Confluent Hypergeometric Functions in NIST Digital Library of Mathematical Functions Kummer hypergeometric function on the Wolfram Functions site Tricomi hypergeometric function on the Wolfram Functions site Hypergeometric functions Special hypergeometric functions Special functions
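Two identities from the relations and special-cases material above are easy to spot-check numerically: Kummer's transformation M(a, b, z) = e^z M(b − a, b, −z), and the error-function special case erf(x) = (2x/√π) M(1/2, 3/2, −x²). The sketch below does so with SciPy at arbitrary test points and is purely illustrative.

```python
# Sketch: spot-check Kummer's transformation and the error-function special case.
import math
from scipy.special import hyp1f1, erf

a, b, z = 0.3, 1.7, 1.2   # arbitrary test values

# Kummer's transformation: M(a, b, z) = exp(z) * M(b - a, b, -z)
print(hyp1f1(a, b, z), math.exp(z) * hyp1f1(b - a, b, -z))

# Special case: erf(x) = (2x / sqrt(pi)) * M(1/2, 3/2, -x**2)
x = 0.9
print(erf(x), 2 * x / math.sqrt(math.pi) * hyp1f1(0.5, 1.5, -x**2))
```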
Confluent hypergeometric function
[ "Mathematics" ]
2,201
[ "Special functions", "Combinatorics" ]
1,516,103
https://en.wikipedia.org/wiki/Basalt%20fiber
Basalt fibers are produced from basalt rocks by melting them and converting the melt into fibers. Basalts are rocks of igneous origin. The main energy consumption for the preparation of basalt raw materials to produce of fibers is made in natural conditions. Basalt fibers are classified into 3 types: Basalt continuous fibers (BCF), used for the production of reinforcing materials and composite products, fabrics, and non-woven materials; Basalt staple fibers, for the production of thermal insulation materials; and Basalt superthin fibers (BSTF), for the production of high quality heat- and sound-insulating and fireproof materials. Manufacturing process The technology of production of basalt continuous fiber (BCF) is a one-stage process: melting, homogenization of basalt and extraction of fibers. Basalt is heated only once. Further processing of BCF into materials is carried out using "cold technologies" with low energy costs. Basalt fiber is made from a single material, crushed basalt, from a carefully chosen quarry source. Basalt of high acidity (over 46% silica content) and low iron content is considered desirable for fiber production. Unlike with other composites, such as glass fiber, essentially no materials are added during its production. The basalt is simply washed and then melted. The manufacture of basalt fiber requires the melting of the crushed and washed basalt rock at about . The molten rock is then extruded through small nozzles to produce continuous filaments of basalt fiber. The basalt fibers typically have a filament diameter of between 10 and 20 μm which is far enough above the respiratory limit of 5 μm to make basalt fiber a suitable replacement for asbestos. They also have a high elastic modulus, resulting in high specific strength—three times that of steel. Thin fiber is usually used for textile applications mainly for production of woven fabric. Thicker fiber is used in filament winding, for example, for production of compressed natural gas (CNG) cylinders or pipes. The thickest fiber is used for pultrusion, geogrid, unidirectional fabric, multiaxial fabric production and in form of chopped strand for concrete reinforcement. One of the most prospective applications for continuous basalt fiber and the most modern trend at the moment is production of basalt rebar that more and more substitutes traditional steel rebar on construction market. Properties The table refers to the continuous basalt fiber specific producer. Data from all the manufacturers are different, the difference is sometimes very large values. Comparison: History The first attempts to produce basalt fiber were made in the United States in 1923 by Paul Dhe who was granted . These were further developed after World War II by researchers in the US, Europe and the Soviet Union especially for military and aerospace applications. Since declassification in 1995 basalt fibers have been used in a wider range of civilian applications. Schools RWTH Aachen University. Every two year RWTH Aachen University's Institut für Textiltechnik hosts the International Glass Fibers Symposium where basalt fiber is devoted a separate section. The university conducts regular research to study and improve basalt fiber properties. Textile concrete is also more corrosion-resistant and more malleable than conventional concrete. Replacement of carbon fibers with basalt fibers can significantly enhance the application fields of the innovative composite material that is textile concrete, says Andreas Koch. 
The Institute for Lightweight Design Materials Science at the University of Hannover The German Plastics Institute (DKI) in Darmstadt The Technical University of Dresden had contributed in the studying of basalt fibers. Textile reinforcements in concrete construction - basic research and applications. The Peter Offermann covers the range from the beginning of fundamental research work at the TU Dresden in the early 90s to the present. The idea that textile lattice structures made of high-performance threads for constructional reinforcement could open up completely new possibilities in construction was the starting point for today's large research network. Textile reinforcements in concrete construction - basic research and applications. As a novelty, parallel applications to the research with the required approvals in individual cases, such as the world's first textile reinforced concrete bridges and the upgrading of shell structures with the thinnest layers of textile concrete, are reported. University of Applied Sciences Regensburg, Department of Mechanical Engineering. Mechanical characterization of basalt fibre reinforced plastic with different fabric reinforcements – Tensile tests and FE-calculations with representative volume elements (RVEs). Marco Romano, Ingo Ehrlich. Uses Heat protection Friction materials Windmill blades Lamp posts Ship hulls Car bodies Sports equipment Speaker cones Cavity wall ties Rebar Load bearing profiles CNG cylinders and pipes Absorbent for oil spills Chopped strand for concrete reinforcement High pressure vessels (e.g. tanks and gas cylinders) Pultruded rebar for concrete reinforcement (e.g. for bridges and buildings) Design codes Russia Since October 18, 2017, JV 297.1325800.2017 "Fibreconcrete constructions with nonmetallic fiber has been put into operation. Design rules, "which eliminated the legal vacuum in the design of basalt reinforced fiber reinforced concrete. According to paragraph 1.1. the standard extends to all types of non-metallic fibers (polymers, polypropylene, glass, basalt and carbon). When comparing different fibers, it can be noted that polymer fibers are inferior to mineral strengths, but their use makes it possible to improve the characteristics of building composites. See also Pele's hair Mineral wool Glass wool Beta cloth References Bibliography E. Lauterborn, Dokumentation Ultraschalluntersuchung Eingangsprüfung, Internal Report wiweb Erding, Erding,bOctober (2011). K. Moser, Faser-Kunststoff-Verbund – Entwurfs- und Berechnungsgrundlagen. VDI-Verlag, Düsseldorf, (1992). N. K. Naik, Woven Fabric Composites. Technomic Publishing Co., Lancaster (PA), (1994). Bericht 2004-1535 – Prüfung eines Sitzes nach BS 5852:1990 section 5 – ignition source crib 7, für die Fa. Franz Kiel gmbh&Co. KG. Siemens AG, A&D SP, Frankfurt am Main, (2004). DIN EN 2559 – Luft- und Raumfahrt – Kohlenstoffaser-Prepregs – Bestimmung des Harz- und Fasermasseanteils und der flächenbezogenen Fasermasse. Normenstelle Luftfahrt (NL) im DIN Deutsches Institut für Normung e.V., Beuth Verlag, Berlin, (1997). Epoxidharz L, Härter L – Technische Daten. Technical Data Sheet by R&G, (2011). Quality Certificates for Fabrics and Rovings. Incotelogy Ltd., Bonn, January (2012). L. Papula, Mathematische Formelsammlung für Naturwissenschaftler und Ingenieure. 10. Auflage, Vieweg+Teubner, Wiesbaden, (2009). • Osnos S, Osnos M, «BCF: developing industrial production for reinforcement materials and composites». JEC Composites magazine / N° 139 March - April 2021, p.19 – 24. 
• Osnos S., Rozhkov I. «Application of basalt rock-based materials in the automotive industry». JEC Composites magazine / N° 147, 2022, p. 33 – 36. External links The production of basalt fibers Information from the Uzbekistan state scientific committee Basalt Continuous Fiber - Information and Characteristics Basalt Roving Dome Video demonstration of concrete construction reinforced with basalt fiber Generation 2.0 of Continuous Basalt Fiber Comparing the technologies used in CBF production Compressive behavior of Basalt Fiber Reinforced Composite Product range of Basfiber products offered by Kamenny Vek Extruded Acrylic Sheet - Excellent Thermoforming Capabilities Some aspects of the technological process of continuous basalt fiber CBF Video demonstration of production of continuous basalt fiber at Kamenny Vek Basalt Composite materials Insulation fibers Synthetic fibers
Basalt fiber
[ "Physics", "Chemistry" ]
1,665
[ "Synthetic fibers", "Synthetic materials", "Composite materials", "Materials", "Matter" ]
1,516,108
https://en.wikipedia.org/wiki/Crystallographic%20texture
In materials science and related fields, crystallographic texture is the distribution of crystallographic orientations of a polycrystalline sample. A sample in which these orientations are fully random, or which is amorphous and thus has no crystallographic planes, is said to have no texture. If the crystallographic orientations are not random, but have some preferred orientation, then the sample may have a weak, moderate or strong texture. The degree depends on the percentage of crystals having the preferred orientation. Texture is seen in almost all engineered materials, and can have a great influence on materials properties. Texture forms in materials during thermo-mechanical processes, for example production processes such as rolling. Consequently, the rolling process is often followed by a heat treatment to reduce the amount of unwanted texture. Controlling the production process, in combination with the characterization of texture and the material's microstructure, helps to determine the material's properties, i.e. the processing-microstructure-texture-property relationship. Geologic rocks also show texture due to the thermo-mechanical history of their formation. One extreme case is a complete lack of texture: a solid with perfectly random crystallite orientation will have isotropic properties at length scales sufficiently larger than the size of the crystallites. The opposite extreme is a perfect single crystal, which has anisotropic properties by geometric necessity. Characterization and representation Texture can be determined by various methods. Some methods allow a quantitative analysis of the texture, while others are only qualitative. Among the quantitative techniques, the most widely used is X-ray diffraction using texture goniometers, followed by the electron backscatter diffraction (EBSD) method in scanning electron microscopes. Qualitative analysis can be done by Laue photography, simple X-ray diffraction or with a polarized microscope. Neutron and synchrotron high-energy X-ray diffraction are suitable for determining textures of bulk materials and in situ analysis, whereas laboratory X-ray diffraction instruments are more appropriate for analyzing textures of thin films. Texture is often represented using a pole figure, in which a specified crystallographic axis (or pole) from each of a representative number of crystallites is plotted in a stereographic projection, along with directions relevant to the material's processing history. These directions define the so-called sample reference frame and are, because the investigation of textures started from the cold working of metals, usually referred to as the rolling direction RD, the transverse direction TD and the normal direction ND. For drawn metal wires the cylindrical fiber axis turned out to be the sample direction around which preferred orientation is typically observed (see below). Common textures There are several textures that are commonly found in processed (cubic) materials. They are named either after the scientist who discovered them, or after the material in which they are most often found. These are given in Miller indices for simplicity. Cube component: (001)[100] Brass component: (110)[-112] Copper component: (112)[11-1] S component: (123)[63-4] Orientation distribution function The full 3D representation of crystallographic texture is given by the orientation distribution function (ODF), which can be obtained through evaluation of a set of pole figures or diffraction patterns. 
Subsequently, all pole figures can be derived from the ODF. The ODF is defined as the volume fraction of grains with a certain orientation . The orientation is normally identified using three Euler angles. The Euler angles then describe the transition from the sample’s reference frame into the crystallographic reference frame of each individual grain of the polycrystal. One thus ends up with a large set of different Euler angles, the distribution of which is described by the ODF. The orientation distribution function, ODF, cannot be measured directly by any technique. Traditionally both X-ray diffraction and EBSD may collect pole figures. Different methodologies exist to obtain the ODF from the pole figures or data in general. They can be classified based on how they represent the ODF. Some represent the ODF as a function, sum of functions or expand it in a series of harmonic functions. Others, known as discrete methods, divide the ODF space in cells and focus on determining the value of the ODF in each cell. Origins In wire and fiber, all crystals tend to have nearly identical orientation in the axial direction, but nearly random radial orientation. The most familiar exceptions to this rule are fiberglass, which has no crystal structure, and carbon fiber, in which the crystalline anisotropy is so great that a good-quality filament will be a distorted single crystal with approximately cylindrical symmetry (often compared to a jelly roll). Single-crystal fibers are also not uncommon. The making of metal sheet often involves compression in one direction and, in efficient rolling operations, tension in another, which can orient crystallites in both axes by a process known as grain flow. However, cold work destroys much of the crystalline order, and the new crystallites that arise with annealing usually have a different texture. Control of texture is extremely important in the making of silicon steel sheet for transformer cores (to reduce magnetic hysteresis) and of aluminium cans (since deep drawing requires extreme and relatively uniform plasticity). Texture in ceramics usually arises because the crystallites in a slurry have shapes that depend on crystalline orientation, often needle- or plate-shaped. These particles align themselves as water leaves the slurry, or as clay is formed. Casting or other fluid-to-solid transitions (i.e., thin-film deposition) produce textured solids when there is enough time and activation energy for atoms to find places in existing crystals, rather than condensing as an amorphous solid or starting new crystals of random orientation. Some facets of a crystal (often the close-packed planes) grow more rapidly than others, and the crystallites for which one of these planes faces in the direction of growth will usually out-compete crystals in other orientations. In the extreme, only one crystal will survive after a certain length: this is exploited in the Czochralski process (unless a seed crystal is used) and in the casting of turbine blades and other creep-sensitive parts. Texture and materials properties Material properties such as strength, chemical reactivity, stress corrosion cracking resistance, weldability, deformation behavior, resistance to radiation damage, and magnetic susceptibility can be highly dependent on the material’s texture and related changes in microstructure. 
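To make the Euler-angle description of a grain orientation given earlier in this entry concrete, the sketch below builds an orientation matrix from three Euler angles. It assumes the Bunge (Z-X-Z) convention that is common in texture analysis, although the text above does not fix a particular convention; the angle values are arbitrary and the function name is my own.

```python
# Sketch: orientation matrix g from Bunge Euler angles (phi1, Phi, phi2),
# i.e. successive passive rotations about Z, then X, then Z (sample -> crystal frame).
import numpy as np

def bunge_matrix(phi1, Phi, phi2):
    """Return the 3x3 orientation matrix for Euler angles given in degrees."""
    p1, P, p2 = np.radians([phi1, Phi, phi2])

    def rz(t):  # passive rotation about Z
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

    def rx(t):  # passive rotation about X
        c, s = np.cos(t), np.sin(t)
        return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

    return rz(p2) @ rx(P) @ rz(p1)

g = bunge_matrix(30.0, 45.0, 60.0)      # arbitrary example angles
print(g)
print(np.allclose(g @ g.T, np.eye(3)))  # a proper rotation matrix: g * g^T = identity
```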
In many materials, properties are texture-specific, and development of unfavorable textures when the material is fabricated or in use can create weaknesses that can initiate or exacerbate failures. Parts can fail to perform due to unfavorable textures in their component materials. Failures can correlate with the crystalline textures formed during fabrication or use of that component. Consequently, consideration of textures that are present in and that could form in engineered components while in use can be a critical when making decisions about the selection of some materials and methods employed to manufacture parts with those materials. When parts fail during use or abuse, understanding the textures that occur within those parts can be crucial to meaningful interpretation of failure analysis data. Thin film textures As the result of substrate effects producing preferred crystallite orientations, pronounced textures tend to occur in thin films. Modern technological devices to a large extent rely on polycrystalline thin films with thicknesses in the nanometer and micrometer ranges. This holds, for instance, for all microelectronic and most optoelectronic systems or sensoric and superconducting layers. Most thin film textures may be categorized as one of two different types: (1) for so-called fiber textures the orientation of a certain lattice plane is preferentially parallel to the substrate plane; (2) in biaxial textures the in-plane orientation of crystallites also tend to align with respect to the sample. The latter phenomenon is accordingly observed in nearly epitaxial growth processes, where certain crystallographic axes of crystals in the layer tend to align along a particular crystallographic orientation of the (single-crystal) substrate. Tailoring the texture on demand has become an important task in thin film technology. In the case of oxide compounds intended for transparent conducting films or surface acoustic wave (SAW) devices, for instance, the polar axis should be aligned along the substrate normal. Another example is given by cables from high-temperature superconductors that are being developed as oxide multilayer systems deposited on metallic ribbons. The adjustment of the biaxial texture in YBa2Cu3O7−δ layers turned out as the decisive prerequisite for achieving sufficiently large critical currents. The degree of texture is often subjected to an evolution during thin film growth and the most pronounced textures are only obtained after the layer has achieved a certain thickness. Thin film growers thus require information about the texture profile or the texture gradient in order to optimize the deposition process. The determination of texture gradients by x-ray scattering, however, is not straightforward, because different depths of a specimen contribute to the signal. Techniques that allow for the adequate deconvolution of diffraction intensity were developed only recently. References Further reading Bunge, H.-J. "Mathematische Methoden der Texturanalyse" (1969) Akademie-Verlag, Berlin Bunge, H.-J. "Texture Analysis in Materials Science" (1983) Butterworth, London Kocks, U. F., Tomé, C. N., Wenk, H.-R., Beaudoin, A. J., Mecking, H. 
"Texture and Anisotropy – Preferred Orientations in Polycrystals and Their Effect on Materials Properties" (2000) Cambridge University Press Birkholz, M., chapter 5 of "Thin Film Analysis by X-ray Scattering" (2006) Wiley-VCH, Weinheim External links aluMatter: Representing Texture MAUD program for Texture Analysis (diffraction, from patterns or pole figures) MTEX MATLAB toolbox for Texture Analysis (EBSD or diffraction, from pole figures) Labotex, ODF/texture analysis software for Microsoft Windows (EBSD or diffraction, from pole figures) Crystallographic Texture Combined Analysis Crystallography Metallurgy Materials science Neutron-related techniques Petrology Synchrotron-related techniques
Crystallographic texture
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,178
[ "Applied and interdisciplinary physics", "Metallurgy", "Materials science", "Crystallography", "Condensed matter physics", "nan" ]
1,516,120
https://en.wikipedia.org/wiki/Giant%20retinal%20ganglion%20cells
Giant retinal ganglion cells are photosensitive ganglion cells with large dendritic trees discovered in the human and macaque retina by Dacey et al. (2005). Giant retinal ganglion cells contain a photo-pigment, melanopsin, allowing them to respond directly to light. They also receive connections from rods and cones, allowing them to encode colour and spatial information. Dacey et al. found the giants' receptive field sizes to be about three times the diameter of those of parasol ganglion cells. When a giant is responding directly to light, Dacey et al. found its spectral sensitivity function to be similar in shape to those of rods and cones, but with a peak at 482 nm, in between S cones and rods. Dacey et al. also found giants' dynamic range to be 3-4 log units, far larger than any other photoreceptor type's and covering nearly the entire range of illuminations of natural daylight. Under naturalistic lighting conditions, responses to the rods and cones are superimposed on the melanopsin response of giant retinal ganglion cells. Giants encode colour via an S-Off, (L + M)-On opponency. Their spatial modulation transfer function is low-pass, with an upper limit of about 0.6 cycles per degree. Dacey et al. propose that the giants subserve the subconscious, 'non-image-forming' functions of circadian photoentrainment and pupillary diameter, and via the rod and cone inputs, may help mediate conscious perception of irradiance. Human eye anatomy Histology Circadian rhythm
Giant retinal ganglion cells
[ "Chemistry", "Biology" ]
346
[ "Behavior", "Histology", "Circadian rhythm", "Microscopy", "Sleep" ]
1,516,144
https://en.wikipedia.org/wiki/Related-key%20attack
In cryptography, a related-key attack is any form of cryptanalysis where the attacker can observe the operation of a cipher under several different keys whose values are initially unknown, but where some mathematical relationship connecting the keys is known to the attacker. For example, the attacker might know that the last 80 bits of the keys are always the same, even though they don't know, at first, what the bits are. KASUMI KASUMI is an eight round, 64-bit block cipher with a 128-bit key. It is based upon MISTY1 and was designed to form the basis of the 3G confidentiality and integrity algorithms. Mark Blunden and Adrian Escott described differential related key attacks on five and six rounds of KASUMI. Differential attacks were introduced by Biham and Shamir. Related key attacks were first introduced by Biham. Differential related key attacks are discussed in Kelsey et al. WEP An important example of a cryptographic protocol that failed because of a related-key attack is Wired Equivalent Privacy (WEP) used in Wi-Fi wireless networks. Each client Wi-Fi network adapter and wireless access point in a WEP-protected network shares the same WEP key. Encryption uses the RC4 algorithm, a stream cipher. It is essential that the same key never be used twice with a stream cipher. To prevent this from happening, WEP includes a 24-bit initialization vector (IV) in each message packet. The RC4 key for that packet is the IV concatenated with the WEP key. WEP keys have to be changed manually and this typically happens infrequently. An attacker therefore can assume that all the keys used to encrypt packets share a single WEP key. This fact opened up WEP to a series of attacks which proved devastating. The simplest to understand uses the fact that the 24-bit IV only allows a little under 17 million possibilities. Because of the birthday paradox, it is likely that for every 4096 packets, two will share the same IV and hence the same RC4 key, allowing the packets to be attacked. More devastating attacks take advantage of certain weak keys in RC4 and eventually allow the WEP key itself to be recovered. In 2005, agents from the U.S. Federal Bureau of Investigation publicly demonstrated the ability to do this with widely available software tools in about three minutes. Preventing related-key attacks One approach to preventing related-key attacks is to design protocols and applications so that encryption keys will never have a simple relationship with each other. For example, each encryption key can be generated from the underlying key material using a key derivation function. For example, a replacement for WEP, Wi-Fi Protected Access (WPA), uses three levels of keys: master key, working key and RC4 key. The master WPA key is shared with each client and access point and is used in a protocol called Temporal Key Integrity Protocol (TKIP) to create new working keys frequently enough to thwart known attack methods. The working keys are then combined with a longer, 48-bit IV to form the RC4 key for each packet. This design mimics the WEP approach enough to allow WPA to be used with first-generation Wi-Fi network cards, some of which implemented portions of WEP in hardware. However, not all first-generation access points can run WPA. Another, more conservative approach is to employ a cipher designed to prevent related-key attacks altogether, usually by incorporating a strong key schedule. A newer version of Wi-Fi Protected Access, WPA2, uses the AES block cipher instead of RC4, in part for this reason. 
There are related-key attacks against AES, but unlike those against RC4, they're far from practical to implement, and WPA2's key generation functions may provide some security against them. Many older network cards cannot run WPA2. References Cryptographic attacks
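Returning to the WEP discussion above: the claim that a 24-bit IV space makes repeated IVs likely after only a few thousand packets follows directly from the birthday paradox. The sketch below merely computes that collision probability; it is an illustration of the arithmetic, not of any attack code.

```python
# Sketch: probability that at least two packets share a 24-bit IV (birthday paradox).
# P(no collision after n packets) = prod_{i=0}^{n-1} (1 - i / 2**24)
import math

def iv_collision_probability(n_packets, iv_bits=24):
    space = 2 ** iv_bits
    log_no_collision = sum(math.log1p(-i / space) for i in range(n_packets))
    return 1.0 - math.exp(log_no_collision)

for n in (1000, 4096, 5000, 10000):
    print(n, round(iv_collision_probability(n), 3))
# At n = 4096 the probability is already around 0.4, and passes 0.5 near 4,800 packets.
```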
Related-key attack
[ "Technology" ]
809
[ "Cryptographic attacks", "Computer security exploits" ]
1,516,251
https://en.wikipedia.org/wiki/Column%20inch
A column inch was the standard measurement of the amount of content in published works that use multiple columns per page. A column inch is a unit of space one column wide by high. A newspaper page Newspaper pages are laid out on a grid that consists of a margin on 4 sides, a number of vertical columns and space in between columns, called gutters. Broadsheet newspaper pages in the United States usually have 6-9 columns, while tabloid sized publications have 5 columns. Column width In the United States, a common newspaper column measurement is about 11 picas wide —about —though this measure varies from paper to paper and in other countries. The examples in this article follow this assumption for illustrative purposes only. Column inches and advertising Newspapers sell advertising space on a page to retail advertisers, advertising agencies and other media buyers. Newspapers publish a "per column inch" rate based on their circulation and demographic figures. Generally, the more readers the higher the column inch rate is. Newspapers with more affluent readers may be able to command an even higher column inch rate. For most newspapers, however, the published rate is just a starting point. Sales representatives generally negotiate lower rates for frequent advertisers. Advertisements are measured using column inches. An advertisement that is 1 column inch square is 11 picas wide by 1 inch high. The column inch size for advertisements that spread over more than one column is determined by multiplying the number of inches high by number of columns. For example, an advertisement that is 3 columns wide by 6 inches high takes up 18 column inches (3 columns wide multiplied by 6 inches high). To determine the cost of the advertisement, multiply the number of column inches by the newspaper's rate. So, if a newspaper charges $10 per column inch, the cost for the advertisement discussed above would be $180.00 (18 column inches multiplied by $10.00). Advertisements that span over more than one column also gain a small amount of extra space in between columns because they stretch across the gutters. Gutters are the empty space between columns. Gutters range from about 10 points to about 1 pica wide. In addition, most newspapers charge for an extra column if an advertisement is a double truck. Terminology In the newspaper industry, ad and newsroom staffers will refer to an advertisement's size by saying "it's a (column width) by (inch height) ad," replacing the words in parentheses () with a single figure. This can be confusing because it refers to two distinct measurements as if they were measured with the same unit. Normally one would think a 3 by 6 advertisement would be 3 inches wide by 6 inches high — but in reality it's actually about 5.5 inches wide by 6 inches high. In writing, an "×" is usually used to separate the two figures. Whether written or spoken, most, if not all, newspaper professionals understand the first figure is the column width and the second figure is the inch height. Nowadays, most newspapers and magazines have converted to a "modular" system that simplifies ad size and eliminates the need to figure out column inches. In a modular system ad sizes are represented by the amount of the total page the ad takes up. For example, 1/2 page, 1/4 page, 1/8 page, etc. This has been a popular system among many newspapers because it simplifies the layout process (i.e. less ad sizes to fit in newspaper) and makes pricing much easier for an advertiser to understand. 
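A minimal sketch of the cost arithmetic described above (columns wide × inches high × per-column-inch rate), reusing the same illustrative figures as the text; the function name is my own.

```python
# Sketch: size and cost of a display ad measured in column inches.
def ad_cost(columns_wide, inches_high, rate_per_column_inch):
    column_inches = columns_wide * inches_high
    return column_inches, column_inches * rate_per_column_inch

size, cost = ad_cost(3, 6, 10.00)   # the 3-column by 6-inch example from the text
print(size, cost)                   # -> 18 column inches, 180.0 dollars
```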
Use and number of words Column inches are also used as an ad hoc estimate of the importance of a news story and are also used to tell how much copy a reporter should write or has written, and how much should be cut from a story to fit the available space. This harks back to the days of the late 19th century linotype machine and its relatively uniform newspaper column-widths, echoed in the phototypesetting and paste up days of the late 20th Century, when typeset newspaper stories were still printed on long strips of paper one-column wide, then pasted into page layouts. Even when pages were designed using various wider column widths, story lengths were still estimated "as if" set in the standard narrower columns. Correspondents, for example, might be paid "by the inch" for their stories, and some organizations explained their nickname Stringer (journalism) as originating when regular-width columns of set type were measured with lengths of string. The software used in most present-day newsrooms still measures column inches to give reporters and editors an estimate on how much space a story will take up on a page. Reporters usually refer to story lengths in inches, which actually refers to how many column inches a story takes up. Although it varies, it is generally agreed upon that there are 25-35 words in a column inch. Newsroom staffers also measure items such as photographs and infographics using column inches. See also Inch References Newspaper terminology Units of measurement
Column inch
[ "Mathematics" ]
1,026
[ "Quantity", "Units of measurement" ]
1,516,323
https://en.wikipedia.org/wiki/Military%20Grid%20Reference%20System
The Military Grid Reference System (MGRS) is the geocoordinate standard used by NATO militaries for locating points on Earth. The MGRS is derived from the Universal Transverse Mercator (UTM) grid system and the Universal Polar Stereographic (UPS) grid system, but uses a different labeling convention. The MGRS is used as geocode for the entire Earth. It’s also referred as 10-digit coordinates. An example of an MGRS coordinate, or grid reference, would be [21_18_34.0_N_157_55_0.7_W_&language=en 4QFJ12345678], which consists of three parts: 4Q (grid zone designator, GZD) FJ (the 100,000-meter square identifier) 1234 5678 (numerical location; easting is 1234 and northing is 5678, in this case specifying a location with 10 m resolution) An MGRS grid reference is a point reference system. When the term 'grid square' is used, it can refer to a square with a side length of , 1 km, , 10 m or 1 m, depending on the precision of the coordinates provided. (In some cases, squares adjacent to a Grid Zone Junction (GZJ) are clipped, so polygon is a better descriptor of these areas.) The number of digits in the numerical location must be even: 0, 2, 4, 6, 8 or 10, depending on the desired precision. When changing precision levels, it is important to truncate rather than round the easting and northing values to ensure the more precise polygon will remain within the boundaries of the less precise polygon. Related to this is the primacy of the southwest corner of the polygon being the labeling point for an entire polygon. In instances where the polygon is not a square and has been clipped by a grid zone junction, the polygon keeps the label of the southwest corner as if it had not been clipped. 4Q ......................GZD only, precision level 6° × 8° (in most cases) 4Q FJ ...................GZD and 100 km Grid Square ID, precision level 100 km 4Q FJ 1 6 ...............precision level 10 km 4Q FJ 12 67 .............precision level 1 km 4Q FJ 123 678 ...........precision level 100 m 4Q FJ 1234 6789 .........precision level 10 m 4Q FJ 12345 67890 .......precision level 1 m Grid zone designation The first part of an MGRS coordinate is the grid-zone designation. The 6° wide UTM zones, numbered 1–60, are intersected by latitude bands that are normally 8° high, lettered C–X (omitting I and O). The northmost latitude band, X, is 12° high. The intersection of a UTM zone and a latitude band is (normally) a 6° × 8° polygon called a grid zone, whose designation in MGRS is formed by the zone number (one or two digits – the number for zones 1 to 9 is just a single digit, according to the example in DMA TM 8358.1, Section 3-2, Figure 7), followed by the latitude band letter (uppercase). This same notation is used in both UTM and MGRS, i.e. the UTM grid reference system; the article on Universal Transverse Mercator shows many maps of these grid zones, including the irregularities for Svalbard and southwest Norway. As Figure 1 illustrates, Honolulu is in grid zone 4Q. 100,000-meter square identification The second part of an MGRS coordinate is the 100,000-meter square identification. Each UTM zone is divided into 100,000 meter squares, so that their corners have UTM-coordinates that are multiples of 100,000 meters. The identification consists of a column letter (A–Z, omitting I and O) followed by a row letter (A–V, omitting I and O). Near the equator, the columns of UTM zone 1 have the letters A–H, the columns of UTM zone 2 have the letters J–R (omitting O), and the columns of UTM zone 3 have the letters S–Z. 
At zone 4, the column letters start over from A, and so on around the world. For the row letters, there are actually two alternative lettering schemes within MGRS: In the AA scheme, also known as MGRS-New, which is used for WGS84 and some other modern geodetic datums, the letter for the first row – just north of the equator – is A in odd-numbered zones, and F in even-numbered zones, as shown in figure 1. Note that the westmost square in this row, in zone 1, has identification AA. In the alternative AL scheme, also known as MGRS-Old, which is used for some older geodetic datums, the row letters are shifted 10 steps in the alphabet. This means that the letter for the first row is L in odd-numbered zones and R in even-numbered zones. The westmost square in the first row, in zone 1, has identification AL. If an MGRS coordinate is complete (with both a grid zone designation and a 100,000 meter square identification), and is valid in one lettering scheme, then it is usually invalid in the other scheme, which will have no such 100,000 meter square in the grid zone. (Latitude band X is the exception to this rule.) Therefore, a position reported in a modern datum usually cannot be misunderstood as using an old datum, and vice versa – provided the datums use different MGRS lettering schemes. In the map (figure 1), which uses the AA scheme, we see that Honolulu is in grid zone 4Q, and square FJ. To give the position of Honolulu with 100 km resolution, we write 4QFJ. Numerical location The third part of an MGRS coordinate is the numerical location within a 100,000 meter square, given as n + n digits, where n is 1, 2, 3, 4, or 5. If 5 + 5 digits is used, the first 5 digits give the easting in meters, measured from the left edge of the square, and the last 5 digits give the northing in meters, measured from the bottom edge of the square. The resolution in this case is 1 meter, so the MGRS coordinate would represent a 1-meter square, where the easting and northing are measured to its southwest corner. If a resolution of 10 meters is enough, the final digit of the easting and northing can be dropped, so that only 4 + 4 digits are used, representing a 10-meter square. If a 100-meter resolution is enough, 3 + 3 digits suffice; if a 1 km resolution is enough, 2 + 2 digits suffice; if 10 km resolution is enough, 1 + 1 digits suffice. 10 meter resolution (4 + 4 digits) is sufficient for many purposes, and is the NATO standard for specifying coordinates. If we zoom in on Hawaii (figure 2), we see that the square that contains Honolulu, if we use 10 km resolution, would be written 4QFJ15. If the grid zone or 100,000-meter square are clear from context, they can be dropped, and only the numerical location is specified. For example: If every position being located is within the same grid zone, only the 100,000-meter square and numerical location are specified. If every position being located is within the same grid zone and 100,000-meter square, only the numerical location is specified. However, even if every position being located is within a small area, but the area overlaps multiple 100,000-meter squares or grid zones, the entire grid reference is required. One always reads map coordinates from west to east first (easting), then from south to north (northing). Common mnemonics include "in the house, up the stairs", "left-to-right, bottom-to-top" and "Read Right Up". 
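The structure just described, a grid zone designator, a 100,000-meter square identification, and an even number of digits split evenly between easting and northing, can be pulled apart mechanically, with the precision of an n + n digit reference being 10^(5 − n) meters. The sketch below is a minimal, illustrative parser for the non-polar case; the function name and regular expression are my own and it performs no validity checking of zone or square letters against the actual lettering schemes.

```python
# Sketch: split an MGRS grid reference into its parts and report the precision.
import re

_MGRS = re.compile(r"^(\d{1,2}[C-HJ-NP-X])([A-HJ-NP-Z]{2})(\d*)$")

def parse_mgrs(ref):
    m = _MGRS.match(ref.replace(" ", "").upper())
    if not m or len(m.group(3)) % 2 != 0:
        raise ValueError("not a well-formed MGRS reference: %r" % ref)
    gzd, square, digits = m.groups()
    n = len(digits) // 2
    easting, northing = digits[:n], digits[n:]
    precision_m = 10 ** (5 - n) if n else None   # None: 100,000 m square only
    return gzd, square, easting, northing, precision_m

print(parse_mgrs("4QFJ12345678"))   # ('4Q', 'FJ', '1234', '5678', 10)
print(parse_mgrs("4Q FJ 15"))       # ('4Q', 'FJ', '1', '5', 10000)
```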
Truncation, not rounding As mentioned above, when converting UTM coordinates to an MGRS grid reference, or when abbreviating an MGRS grid reference to lower precision, one should truncate the coordinates, not round them. This has been controversial in the past, since the oldest specification, TM8358.1, used rounding, as did GEOTRANS before version 3.0. However, truncation is used in GEOTRANS since version 3.0, and in NGA Military Map Reading 201 (page 5) and in the US Army Field Manual 3-25.26. The civilian version of MGRS, USNG, also uses truncation. Squares that cross a latitude band boundary The boundaries of the latitude bands are parallel circles (dashed black lines in figure 1), which do not coincide with the boundaries of the 100,000-meter squares (blue lines in figure 1). For example, at the boundary between grid zones 1P and 1Q, we find a 100,000-meter square BT, of which about two thirds is south of latitude 16° and therefore in grid zone 1P, while one third is north of 16° and therefore in 1Q. So, an MGRS grid reference for a position in BT should begin with 1PBT in the south part of BT, and with 1QBT in the north part of BT. At least, this is possible if the precision of the grid reference is enough to place the denoted area completely inside either 1P or 1Q. But an MGRS grid reference can denote an area that crosses a latitude band boundary. For example, when describing the entire square BT, should it be called 1PBT or 1QBT? Or when describing the 1000-meter square BT8569, should it be called 1PBT8569 or 1QBT8569? In these cases, software that interprets an MGRS grid reference should accept both of the possible latitude band letters. A practical motivation was given in the release notes for GEOTRANS, Release 2.0.2, 1999: The MGRS module was changed to make the final latitude check on MGRS to UTM conversions sensitive to the precision of the input MGRS coordinate string. The lower the input precision, the more "slop" is allowed in the final check on the latitude zone letter. This is to handle an issue raised by some F-16 pilots, who truncate MGRS strings that they receive from the Army. This truncation can put them on the wrong side of a latitude zone boundary, causing the truncated MGRS string to be considered invalid. The correction causes truncated strings to be considered valid if any part of the square which they denote lies within the latitude zone specified by the third letter of the string. 
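A tiny illustration of the truncation rule discussed above: when an easting or northing is reduced to a coarser precision, dropping digits keeps the reference inside the square that actually contains the point, whereas rounding can move it into the neighbouring square. The helper names below are hypothetical and for illustration only.

```python
# Sketch: reduce a 5-digit easting/northing to n digits by truncation (correct for MGRS)
# versus rounding (incorrect for MGRS).
def truncate(coord, n):
    return coord[:n]

def rounded(coord, n):                       # what one must NOT do
    return str(round(int(coord) / 10 ** (len(coord) - n))).zfill(n)

easting = "12987"
print(truncate(easting, 3), rounded(easting, 3))   # '129' vs '130'
# '129' labels the 100 m column that actually contains the point;
# '130' labels the column immediately to the east of it.
```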
The restricted column alphabet for UPS ensures that no UPS square will be adjacent to a UTM square with the same identification. In the polar regions, there is only one version of the lettering scheme. See also There are other geographic naming systems of this alphanumeric kind: Global Area Reference System (GARS) has been adopted by the National Geospatial-Intelligence Agency for use across the Department of Defense for certain applications. GARS defines areas of 30x30, 15x15, and no smaller than 5x5 minutes of latitude and longitude. Ordnance Survey National Grid is another Transverse Mercator system designed for locations in the British Isles Irish Transverse Mercator has replaced the Irish grid reference system United States National Grid (USNG), developed by the Federal Geographic Data Committee. World Geographic Reference System (GEOREF) has been used for air navigation, but is rarely seen today. Maidenhead Locator System is used by amateur radio operators. Natural Area Code References External links A list of coordinate systems – by Prof. Peter H. Dana at the Univ. of Colorado Grids and Reference Systems, by NGA. Army Study Guide: Locate a point using the US Army Military Grid Reference System (MGRS) Geographic coordinate systems Military cartography Geodesy Geocodes
Military Grid Reference System
[ "Mathematics" ]
2,878
[ "Geographic coordinate systems", "Applied mathematics", "Geodesy", "Coordinate systems" ]
1,516,371
https://en.wikipedia.org/wiki/Cavendish%20Astrophysics%20Group
The Cavendish Astrophysics Group (formerly the Radio Astronomy Group) is based at the Cavendish Laboratory at the University of Cambridge. The group operates all of the telescopes at the Mullard Radio Astronomy Observatory except for the 32m MERLIN telescope, which is operated by Jodrell Bank. The group is the second largest of three astronomy departments in the University of Cambridge. Instruments under development by the group The Atacama Large Millimeter Array (ALMA) - several modules of this international project The Magdalena Ridge Observatory Interferometer (MRO Interferometer) The SKA The Radio Experiment for the Analysis of Cosmic Hydrogen (REACH) Instruments in service The Arcminute Microkelvin Imager (AMI) A Heterodyne Array Receiver for B-band (HARP-B) at the James Clerk Maxwell Telescope The Planck Surveyor Previous instruments The CLOVER telescope The Very Small Array The 5 km Ryle Telescope The Cambridge Optical Aperture Synthesis Telescope (COAST) The Cosmic Anisotropy Telescope The Cambridge Low Frequency Synthesis Telescope The Half-Mile Telescope The One-Mile Telescope The Interplanetary Scintillation Array which discovered the first pulsar The 4C Array which made the 4C catalogue The Cambridge Interferometer The Long Michelson Interferometer Various aperture masking instruments for optical aperture synthesis Catalogues published by the group Preliminary survey of the radio stars in the Northern Hemisphere (sometimes called the 1C catalogue) at 81.5-MHz (unreliable at low flux levels) 2C catalogue 81.5-MHz (unreliable at low flux levels) 3C catalogue 159 MHz 4C catalogue 178 MHz 5C catalogue 408 MHz and 1407 MHz 6C catalogue 151 MHz 7C catalogue 151 MHz 8C catalogue 38 MHz 9C catalogue 15 GHz 10C catalogue 14–18 GHz Cambridge Interplanetary Scintillation survey Famous Group Members Sir Martin Ryle, 1918–1984, Nobel Prize for Physics, founder of the group, former British Astronomer Royal Tony Hewish, Nobel Prize for Physics, designed the telescope which discovered the first pulsars Malcolm Longair Jacksonian Professor of Natural Philosophy, former head of the Cavendish Laboratory Jocelyn Bell Burnell, detected the first signal from a pulsar John E. Baldwin Richard Edwin Hills F. Graham Smith - early co-worker with Ryle, later Astronomer Royal David Saint-Jacques Canadian astronaut External links Cavendish Astrophysics Group webpage Cavendish Laboratory Astronomy institutes and departments
Cavendish Astrophysics Group
[ "Astronomy" ]
485
[ "Astronomy organizations", "Astronomy institutes and departments" ]
1,516,465
https://en.wikipedia.org/wiki/Nota%20accusativi
Nota accusativi is a grammatical term for a particle (an uninflected word) that marks a noun as being in the accusative case. An example is the use of the word in Spanish before an animate direct object: . Esperanto Officially, in Esperanto, the suffix letter is used to mark an accusative. But a few modern speakers use the unofficial preposition instead of the final . Hebrew In Hebrew the preposition is used for definite nouns in the accusative. Those nouns might be used with the definite article ( ). Otherwise, the object is modified by a possessive pronominal suffix, by virtue of being a within a genitive phrasing, or as a proper name. To continue with the Hebrew example: On the other hand, "I see a dog" is simply This example is obviously a specialized use of the , since Hebrew does not use the unless the noun is in the definitive. Japanese In Japanese, the particle (pronounced ) is the direct object marker and marks the recipient of an action. Korean In Korean, the postposition or is the direct object marker and marks the recipient of an action. For example: is used when the previous syllable ( in this case) is closed, i.e. when it ends with a consonant ( in in this case). is used when the previous syllable ( in this case) is open, i.e. when it ends with a vowel ( in in this case). Toki Pona In Toki Pona, the word is used to mark a direct object. Other languages Nota accusativi also exists in Armenian, Greek and other languages. In other languages, especially those with grammatical case, there is usually a separate form (for each declension if declensions exist) of the accusative case. The nota accusativi should not be confused with such case forms, as the term is a separate particle of the accusative case. See also Accusative case References Grammatical cases Parts of speech
Nota accusativi
[ "Technology" ]
421
[ "Parts of speech", "Components" ]
1,516,469
https://en.wikipedia.org/wiki/Web%20usability
Web usability of a website consists of broad goals of usability, presentation of information, choices made in a clear and concise way, a lack of ambiguity and the placement of important items in appropriate areas as well as ensuring that the content works on various devices and browsers. Definition and components Web usability includes a small learning curve, easy content exploration, findability, task efficiency, user satisfaction, and automation. These new components of usability are due to the evolution of the Web and personal devices. Examples: automation: auto fill, databases, personal account; efficiency: voice commands (Siri, Alexa, and other artificial intelligence assistants); findability. The number of websites has surpassed 1.5 billion thus increasing the need for well-designed websites that serve their users as best as possible in the constantly more competitive market. With good usability, users can find what they are looking for quickly. With the wide spread of mobile devices and wireless internet access, companies are now able to reach a global market with users of all nationalities at any time and almost any place in the world. It is important for websites to be usable regardless of users' language and culture. Most users in developed countries conduct their personal business online: banking, studying, errands, etc., which has enabled people with disabilities to be independent. Websites also need to be accessible for those users. The goal of web usability is to provide user experience satisfaction by minimizing the time it takes to the user to learn new functionality and page navigation system, allowing the user to accomplish a task efficiently without major roadblocks, providing the user easy ways to overcome roadblocks, and fixing errors and re-adapting to the website or application system and functionality with minimum effort. ISO approach According to ISO 9241 (Ergonomic Requirements for Office Work with Visual Display Terminals), usability is "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use". Therefore, web usability can be defined as the ability of web applications to support web-related tasks with effectiveness, efficiency and satisfaction. Effectiveness represents accuracy and completeness when users achieve a specified goal. Efficiency is resource cost in relation to the accuracy and completeness. Satisfaction is the comfort and acceptability of use. Considerations Accessibility To attain universal usability for web-based services, designers and developers should take technology variety, user diversity and gaps in user knowledge into consideration. Web usability improvements may include providing a strong contrast mode for people with color vision deficiency. Language Multilingual websites should offer the same experience to the users. UI Alterations because the language and characters used should still provide the different components of usability. Mobile usability With many different mobile devices, it is crucial to consider how the users accomplish their task on a small screen. Web usability components should be appropriate for the mobile device. The users should be awarded with a similar level of satisfaction and accomplishment as if they had used a desktop or laptop. According to a survey conducted by Google, users want mobile-friendly websites, especially for research. 
They found that mobile users value short load times, big buttons and readable text, and simple input boxes. Moreover, if a website is mobile friendly, the users are more likely to return, but they will abandon the website if it is not. Google found that the three most sought-after pieces of information for mobile users are locations, opening hours and contact information. Google has also created an online tool called "mobile friendly test" on the Google search console which allows to check mobile-friendliness of a website. Criteria Nielsen's 10 heuristics Jakob Nielsen's heuristics are widely adopted in interface design. It provides expert reviewers with a set of principles to discover usability problems and then categorize and rate them in a quick way. This set of heuristics includes visibility of system status, match between system and the real world and so on. According to Nielsen, there are 10 general principles: Visibility of system status: the users should be informed by a system all the time that people can make better decisions. Match between system and the real world: the systems' language should be similar to users' language. User control and freedom: It happens many times that users choose the wrong system functions by a mistake, therefore, the system needs to contain the "emergency exit" to give an option for users to leave the unwanted state without any problem. Consistency and standards: the users have to be aware that different words, actions and situations can mean the same thing. Error prevention: System should have a careful design that can prevent a problem that can occur in the first place. Recognition rather than recall: the system should have the actions and options visible so the users do not need to memorize everything from previous steps. Instructions about the system usability should be always visible. Flexibility and efficiency of use: the system should have an accelerator to help experienced and inexperienced users to make work faster and easier. Aesthetic and minimalist design: the systems needs to contain only relevant and useful information. The information should be clear and short. Help users recognize, diagnose, and recover from errors: the error messages should be presented in a clear language and understandable form (no codes), and suggest the solutions. Help and documentation: the information about help and documentation should be easy to find and should focus on the users' tasks. Accessibility guidelines The W3C publishes a set of guidelines on Web accessibility called Web Content Accessibility Guidelines (WGAC). The second revision of WCAG, WCAG 2.0, is composed of twelve guidelines, distilled following the four principles that Web content should adhere to: being Perceivable, Operable, Understandable and Robust. W3C also provides a detailed checklist for this set of guidelines. Understanding usability The concept of usability, particularly in the context of the internet, is most effectively understood from the perspective of the users. Digital literacy has been growing steadily, leading to a transformation in what Steve Krug highlights "how we really use the internet". Owing to the familiarity and frequency of internet use, users have evolved from reading websites thoroughly to scanning them quickly, often in pursuit of specific information. This shift reflects the efficiency with which users have learned to filter and identify only the information they need, delving deeper only if the initial information doesn't fully meet their requirements. 
Moreover, users tend to prioritize satisfactory outcomes over optimal ones when browsing the web, a behavior known as satisficing. This is largely due to the fast-paced nature of internet use and the negligible consequences of incorrect choices, such as clicking a wrong link, which can be easily rectified with a single click of the back button. This lack of penalty for guessing eliminates the need for users to deliberate extensively over which options to select. As a consequence, most users are less concerned with understanding the underlying mechanics of websites as long as they can effectively navigate and use them. However, this behavior can lead to unanticipated thinking patterns and usage methods, which may deviate from the intended functionality of a website. Usability testing and improvement As more results of usability research become available, methodologies for enhancing web usability continue to develop. Usability testing evaluates the different components of web usability (learnability, efficiency, memorability, errors and satisfaction) by watching users accomplish their tasks. It uncovers the roadblocks and errors users encounter while completing a task. Testing is not a one-time event but an ongoing process. See also Eye tracking, a fast and accurate usability tool Multivariate testing, statistical testing of user responses Web development Web navigation References External links Usability.gov—usability basics with focus on web usability Evaluating Websites for Accessibility—accessibility is a crucial subset of usability for people with disabilities. This W3C/WAI suite includes a section on involving users in testing for accessibility. Usability News from the Software Usability Research Laboratory at Wichita State University Usability Web design
Web usability
[ "Engineering" ]
1,677
[ "Design", "Web design" ]
1,516,500
https://en.wikipedia.org/wiki/Chemical%20file%20format
A chemical file format is a type of data file which is used specifically for representing molecular data. One of the most widely used is the chemical table file format, which is similar to Structure Data Format (SDF) files. They are text files that represent multiple chemical structure records and associated data fields. The XYZ file format is a simple format that usually gives the number of atoms in the first line, a comment on the second, followed by a number of lines with atomic symbols (or atomic numbers) and cartesian coordinates. The Protein Data Bank Format is commonly used for proteins but is also used for other types of molecules. There are many other types, which are detailed below. Various software systems are available to convert from one format to another. Distinguishing formats Chemical information is usually provided as files or streams, and many formats have been created with varying degrees of documentation. The format is indicated in three ways: file extension (usually 3 letters). This is widely used, but fragile, as common suffixes such as .mol and .dat are used by many systems, including non-chemical ones. self-describing files, where the format information is included in the file. Examples are CIF and CML. chemical/MIME type added by a chemically aware server. Chemical Markup Language Chemical Markup Language (CML) is an open standard for representing molecular and other chemical data. The open source project includes XML Schema, source code for parsing and working with CML data, and an active community. The articles Tools for Working with Chemical Markup Language and XML for Chemistry and Biosciences discuss CML in more detail. CML data files are accepted by many tools, including JChemPaint, Jmol, XDrawChem and MarvinView. Protein Data Bank Format The Protein Data Bank Format is an obsolete format for protein structures developed in 1972. It is a fixed-width format and thus limited to a maximum number of atoms, residues, and chains; this resulted in splitting very large structures, such as ribosomes, into multiple files. For example, the E. coli 70S ribosome was represented as four PDB files in 2009: 3I1M, 3I1N, 3I1O, and 3I1P. In 2014, they were consolidated into a single file, 4V6C. In 2014, the PDB format was officially replaced with mmCIF, and newer PDB structures may not have PDB files available. Some PDB files contained an optional section describing atom connectivity as well as position. Because these files were sometimes used to describe macromolecular assemblies or molecules represented in explicit solvent, they could grow very large and were often compressed. Some tools, such as Jmol and KiNG, could read PDB files in gzipped format. The wwPDB maintained the specifications of the PDB file format and its XML alternative, PDBML. There was a fairly major change in the PDB format specification (to version 3.0) in August 2007, and a remediation of many file problems in the existing database. The typical file extension for a PDB file was .pdb, although some older files used .ent or .brk. Some molecular modeling tools wrote nonstandard PDB-style files that adapted the basic format to their own needs. GROMACS format The GROMACS file format family was created for use with the molecular simulation software package GROMACS. 
It closely resembles the PDB format but was designed for storing output from molecular dynamics simulations, so it allows for additional numerical precision and optionally retains information about particle velocity as well as position at a given point in the simulation trajectory. It does not allow for the storage of connectivity information, which in GROMACS is obtained from separate molecule and system topology files. The typical file extension for a GROMACS file is .gro. CHARMM format The CHARMM molecular dynamics package can read and write a number of standard chemical and biochemical file formats; however, the CARD (coordinate) and PSF (protein structure file) formats are largely unique to CHARMM. The CARD format is fixed-column-width, resembles the PDB format, and is used exclusively for storing atomic coordinates. The PSF file contains atomic connectivity information (which describes atomic bonds) and is required before beginning a simulation. The typical file extensions used are .crd and .psf respectively. GSD format The General Simulation Data (GSD) file format was created for efficient reading and writing of generic particle simulations, primarily, but not restricted to, those from HOOMD-blue. The package also contains a Python module that reads and writes HOOMD-schema GSD files with an easy-to-use syntax. Ghemical file format The Ghemical software can use OpenBabel to import and export a number of file formats. However, by default, it uses the GPR format. This file is composed of several parts, separated by tags (!Header, !Info, !Atoms, !Bonds, !Coord, !PartialCharges and !End). The proposed MIME type for this format is application/x-ghemical. SYBYL Line Notation SYBYL Line Notation (SLN) is a chemical line notation. Based on SMILES, it incorporates a complete syntax for specifying relative stereochemistry. SLN has a rich query syntax that allows for the specification of Markush structure queries. The syntax also supports the specification of combinatorial libraries. Example SLNs: benzene is written C[1]H:CH:CH:CH:CH:CH:@1; alanine is NH2C[s=n]H(CH3)C(=O)OH; a query showing an R sidechain is R1[hac>1]C[1]:C:C:C:C:C:@1; and a query for an amide/sulfamide is NHC=M1{M1:O,S}. SMILES The simplified molecular input line entry system, or SMILES, is a line notation for molecules. SMILES strings include connectivity but do not include 2D or 3D coordinates. Hydrogen atoms are not represented. Other atoms are represented by their element symbols B, C, N, O, F, P, S, Cl, Br, and I. The symbol = represents double bonds and # represents triple bonds. Branching is indicated by ( ). Rings are indicated by pairs of digits. Some examples are methane (CH4), written C; ethanol (C2H6O), written CCO; benzene (C6H6), written C1=CC=CC=C1 or c1ccccc1; and ethylene (C2H4), written C=C (a short parsing sketch using these strings is given at the end of this article). XYZ The XYZ file format is a simple format that usually gives the number of atoms in the first line, a comment on the second, followed by a number of lines with atomic symbols (or atomic numbers) and cartesian coordinates (a small illustrative file is given at the end of this article). MDL number The MDL number contains a unique identification number for each reaction and variation. The format is RXXXnnnnnnnn. R indicates a reaction, XXX indicates which database contains the reaction record. The numeric portion, nnnnnnnn, is an 8-digit number. Other common formats Among the most widely used industry standards are chemical table file formats, such as Structure Data Format (SDF) files. 
They are text files that adhere to a strict format for representing multiple chemical structure records and associated data fields. The format was originally developed and published by Molecular Design Limited (MDL). MOL is another file format from MDL. It is documented in Chapter 4 of CTfile Formats. PubChem also has XML and ASN1 file formats, which are export options from the PubChem online database. Both are text based (although ASN1 is more often used as a binary format elsewhere). A large number of other formats are listed in the table below. Converting between formats OpenBabel and JOELib are freely available open source tools specifically designed for converting between file formats. Their chemical expert systems support large atom type conversion tables. The general form of the command is obabel -i input_format input_file -o output_format -O output_file For example, to convert the file epinephrine.sdf from SDF to CML, use the command obabel -i sdf epinephrine.sdf -o cml -O epinephrine.cml The resulting file is epinephrine.cml (a Python sketch of the same conversion is given at the end of this article). IOData is a free and open-source Python library for parsing, storing, and converting various file formats commonly used by quantum chemistry, molecular dynamics, and plane-wave density-functional-theory software programs. It also supports a flexible framework for generating input files for various software packages. For a complete list of supported formats, please go to https://iodata.readthedocs.io/en/latest/formats.html. A number of tools intended for viewing and editing molecular structures are able to read files in a number of formats and write them out in other formats. The tools JChemPaint (based on the Chemistry Development Kit), XDrawChem (based on OpenBabel), Chime, Jmol, Mol2mol and Discovery Studio fit into this category. The Chemical MIME Project "Chemical MIME" is a de facto approach for adding MIME types to chemical streams. This project started in January 1994, and was first announced during the Chemistry workshop at the First WWW International Conference, held at CERN in May 1994. ... The first version of an Internet draft was published during May–October 1994, and the second revised version during April–September 1995. A paper presented to the CPEP (Committee on Printed and Electronic Publications) at the IUPAC meeting in August 1996 is available for discussion. In 1998 the work was formally published in the JCIM. {| class="wikitable" ! File extension ! MIME Type ! Proper Name ! Description |- | .alc | chemical/x-alchemy | Alchemy Format | |- | .csf | chemical/x-cache-csf | CAChe MolStruct CSF | |- | .cbin, .cascii, .ctab | chemical/x-cactvs-binary | CACTVS format | |- | .cdx | chemical/x-cdx | ChemDraw eXchange file | |- | .cer | chemical/x-cerius | MSI Cerius II format | |- | .c3d | chemical/x-chem3d | Chem3D Format | |- | .chm | chemical/x-chemdraw | ChemDraw file | |- | .cif | chemical/x-cif | Crystallographic Information File, Crystallographic Information Framework | Promulgated by the International Union of Crystallography |- | .cmdf | chemical/x-cmdf | CrystalMaker Data format | |- | .cml | chemical/x-cml | Chemical Markup Language | XML based Chemical Markup Language. 
|- | .cpa | chemical/x-compass | Compass program of the Takahashi | |- | .bsd | chemical/x-crossfire | Crossfire file | |- | .csm, .csml | chemical/x-csml | Chemical Style Markup Language | |- | .ctx | chemical/x-ctx | Gasteiger group CTX file format | |- | .cxf, .cef | chemical/x-cxf | Chemical eXchange Format | |- | .emb, .embl | chemical/x-embl-dl-nucleotide | EMBL Nucleotide Format | |- | .spc | chemical/x-galactic-spc | SPC format for spectral and chromatographic data | |- | .inp, .gam, .gamin | chemical/x-gamess-input | GAMESS Input format | |- | .fch, .fchk | chemical/x-gaussian-checkpoint | Gaussian Checkpoint Format | |- | .cub | chemical/x-gaussian-cube | Gaussian Cube (Wavefunction) Format | |- | .gau, .gjc, .gjf, .com | chemical/x-gaussian-input | Gaussian Input Format | |- | .gcg | chemical/x-gcg8-sequence | Protein Sequence Format | |- | .gen | chemical/x-genbank | ToGenBank Format | |- | .istr, .ist | chemical/x-isostar | IsoStar Library of Intermolecular Interactions | |- | .jdx, .dx | chemical/x-jcamp-dx | JCAMP Spectroscopic Data Exchange Format | |- | .kin | chemical/x-kinemage | Kinetic (Protein Structure) Images; Kinemage | |- | .mcm | chemical/x-macmolecule | MacMolecule File Format | |- | .mmd, .mmod | chemical/x-macromodel-input | MacroModel Molecular Mechanics | |- | .mol | chemical/x-mdl-molfile | MDL Molfile | |- | .smiles, .smi | chemical/x-daylight-smiles | Simplified molecular input line entry specification | A line notation for molecules. |- | .sdf | chemical/x-mdl-sdfile | Structure-Data File | |- | .el | chemical/x-sketchel | SketchEl Molecule | |- | .ds | chemical/x-datasheet | SketchEl XML DataSheet | |- | .inchi | chemical/x-inchi | IUPAC International Chemical Identifier (InChI) | |- | .jsd, .jsdraw | chemical/x-jsdraw | JSDraw native file format | |- | .helm, .ihelm | chemical/x-helm | Pistoia Alliance HELM string | A line notation for biological molecules |- | .xhelm | chemical/x-xhelm | Pistoia Alliance XHELM XML file | XML based HELM including monomer definitions |} Support For Linux/Unix, configuration files are available as a "chemical-mime-data" package in .deb, RPM and tar.gz formats to register chemical MIME types on a web server. Programs can then register as a viewer, editor or processor for these formats, so that full support for chemical MIME types is available. Sources of chemical data Here is a short list of sources of freely available molecular data; many more resources are available on the Internet. Links to these sources are given in the references below. The US National Institutes of Health PubChem database is a huge source of chemical data. All of the data is two-dimensional. Data includes SDF, SMILES, PubChem XML, and PubChem ASN1 formats. The worldwide Protein Data Bank (wwPDB) is an excellent source of protein and nucleic acid molecular coordinate data. The data is three-dimensional and provided in Protein Data Bank (PDB) format. One commercial database for molecular data provides a two-dimensional structure diagram and a SMILES string for each compound, and supports fast substructure searching based on parts of the molecular structure. ChemExper is a commercial database for molecular data. The search results include a two-dimensional structure diagram and a mol file for many compounds. New York University Library of 3-D Molecular Structures. 
The US Environmental Protection Agency's Distributed Structure-Searchable Toxicity (DSSTox) Database Network is a project of the EPA's Computational Toxicology Program. The database provides SDF molecular files with a focus on carcinogenic and otherwise toxic substances. See also File format OpenBabel, JOELib, OELib Chemistry Development Kit Chemical Markup Language Software for molecular modeling NCI/CADD Chemical Identifier Resolver References External links
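As an illustration of the XYZ format described above, a minimal file for a single water molecule might look like the following sketch. The layout follows the description in this article; the coordinate values are only illustrative placeholders, given in angstroms.
3
water (illustrative coordinates in angstroms)
O    0.000    0.000    0.000
H    0.757    0.586    0.000
H   -0.757    0.586    0.000
The first line gives the number of atoms, the second line is a free-form comment, and each remaining line pairs an atomic symbol with its cartesian coordinates.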
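The SMILES strings quoted in the examples above can also be parsed programmatically. The following is a minimal sketch, assuming the Open Babel 3.x Python bindings (the pybel module) are installed; the molecule names and strings are the ones used in this article.
# Parse the article's example SMILES strings with Open Babel's pybel bindings
# (assumes Open Babel 3.x built with Python support).
from openbabel import pybel

examples = [("methane", "C"), ("ethanol", "CCO"),
            ("benzene", "c1ccccc1"), ("ethylene", "C=C")]
for name, smiles in examples:
    mol = pybel.readstring("smi", smiles)          # build a molecule from the SMILES string
    print(name, mol.formula, round(mol.molwt, 2))  # print formula and molecular weight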
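The SDF-to-CML conversion shown with the obabel command above can also be scripted. This is a minimal sketch, again assuming the pybel bindings and a local file named epinephrine.sdf; because an SDF file can contain several records, each record is written to its own CML file here.
# Convert every record of an SDF file to CML with pybel
# (assumes Open Babel 3.x and epinephrine.sdf in the working directory).
from openbabel import pybel

for i, mol in enumerate(pybel.readfile("sdf", "epinephrine.sdf")):
    mol.write("cml", "epinephrine_%d.cml" % i, overwrite=True)
An equivalent single-molecule conversion can be done by reading one record and calling its write method once.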
Chemical file format
[ "Chemistry" ]
3,432
[ "Chemistry software", "Chemical file formats" ]
1,516,575
https://en.wikipedia.org/wiki/Institute%20of%20Astronomy%2C%20Cambridge
The Institute of Astronomy (IoA) is the largest of the three astronomy departments in the University of Cambridge, and one of the largest astronomy sites in the United Kingdom. Around 180 academics, postdocs, visitors and assistant staff work at the department. Research at the department is conducted in a number of scientific areas, including exoplanets, stars, star clusters, cosmology, gravitational-wave astronomy, the high-redshift universe, AGN, galaxies and galaxy clusters. This is a mixture of observational astronomy over the entire electromagnetic spectrum, computational theoretical astronomy, and analytic theoretical research. The Kavli Institute for Cosmology is also located on the department site. This institute has an emphasis on the universe at high redshifts. The Cavendish Astrophysics Group is based in the Battcock Centre, a building in the same grounds. History The institute was formed in 1972 from the amalgamation of earlier institutions: The University Observatory, founded in 1823. Its Cambridge Observatory building now houses offices and the department library. The Solar Physics Observatory, which started in Cambridge in 1912. The building was partly demolished in 2008 to make way for the Kavli Institute for Cosmology. The Institute of Theoretical Astronomy, which was created by Fred Hoyle in 1967. Its building is the main departmental site (the Hoyle Building), with a lecture theatre added in 1999, and a second two-storey wing built in 2002. From 1990 to 1998, the Royal Greenwich Observatory was based in Cambridge, where it occupied Greenwich House on a site adjacent to the Institute of Astronomy. Teaching The department teaches 3rd and 4th year undergraduates as part of the Natural Sciences Tripos or Mathematical Tripos. Around 30 students normally take the master's course, which consists of a substantial research project (around a third of the course) together with taught courses such as General Relativity, Cosmology, Black Holes, Extrasolar Planets, Astrophysical Fluid Dynamics, Structure and Evolution of Stars and Formation of Galaxies. In addition, there are around 12 to 18 graduate PhD students at the department per year, mainly funded by the STFC. The graduate programme is particularly unusual in the UK in that students are free to choose their own PhD supervisor or adviser from the staff at the department, and this choice is often made as late as the end of their first term. Notable current staff An incomplete list of notable current members of the department: Cathie Clarke Carolin Crawford George Efstathiou Andrew Fabian Paul Hewett Mike Irwin Gerry Gilmore Douglas Gough Nikku Madhusudhan Richard McMahon Hiranya Peiris Max Pettini James E. Pringle Martin Rees Christopher Tout Anna Zytkow Notable past members and students Here are some notable members of the department and its former institutes: Sverre Aarseth Suzanne Aigrain George Airy Robert Stawell Ball James Challis Donald Clayton John Couch Adams Arthur Eddington Richard Ellis Roger Griffin Stephen Hawking Cyril Hazard Fred Hoyle Ofer Lahav Mike Irwin Jamal Nazrul Islam Harold Jeffreys Robert Kennicutt Donald Lynden-Bell Jayant Narlikar Jeremiah Ostriker Christopher S. Reynolds Robert Woodhouse Telescopes The Institute houses several telescopes on its site. Although some scientific work is done with the telescopes, they are mostly used for public observing and by astronomical societies. The poor weather and light pollution in Cambridge make most modern astronomy difficult. 
The telescopes on the site include: The Northumberland Telescope, donated by the Duke of Northumberland in 1833; this is a refractor on an English mount. The smaller Thorrowgood Telescope, on extended loan from the Royal Astronomical Society; this telescope is a refractor. The 36-inch Telescope, built in 1951. The Three-Mirror Telescope, a prototype telescope with a unique design intended to give a wide field of view, sharp images and all-reflection optics. The institute's former 24" Schmidt Camera was donated to the Spaceguard Centre in Knighton, Powys, Wales, in June 2009. The Cambridge University Astronomical Society (CUAS) and the Cambridge Astronomical Association (CAA) both regularly observe there. The Institute holds public observing evenings on Wednesdays from October to March. Public activities The department holds a number of events involving the general public in astronomy. These include or have included: Open evenings on Wednesdays during the winter, with a talk given by a member of the institute followed by observing in clear weather Hosting the Astroblast conference Annual sculpture exhibition showing work from Anglia Ruskin University Annual open day during the Cambridge Science Festival A monthly podcast, the 'Astropod', aimed at the general public (the Astropod originally ran from 2009 to 2011, and was relaunched in 2020) Extra observing nights for special events such as IYA Moonwatch and BBC Stargazing Live Library The institute library is housed in the old Cambridge Observatory building. It is a specialist library concentrating on the subjects of astronomy, astrophysics and cosmology. The collection has approximately 17,000 books and subscribes to about 80 current journals. The library also has a collection of rare astronomical books, many of which belonged to John Couch Adams. Achievements Among the institute's significant contributions to astronomy, the now decommissioned Automatic Plate Measuring (APM) machine was used to create a major catalogue of astronomical objects in the northern sky. References External links Institute of Astronomy at the University of Cambridge Kavli Institute of Cosmology, Cambridge Images from the Institute of Astronomy Library Astronomy institutes and departments Astronomy, Institute of Research institutes in the United Kingdom Astronomy in the United Kingdom Research institutes established in 1972
Institute of Astronomy, Cambridge
[ "Astronomy" ]
1,138
[ "Astronomy organizations", "Astronomy institutes and departments" ]