Dataset columns: id (int64, 39 to 79M); url (string, length 31 to 227); text (string, length 6 to 334k); source (string, length 1 to 150); categories (list, length 1 to 6); token_count (int64, 3 to 71.8k); subcategories (list, length 0 to 30).
17,656,945
https://en.wikipedia.org/wiki/Karl%20Schwarzschild%20Medal
The Karl Schwarzschild Medal, named after the astrophysicist Karl Schwarzschild, is an award presented by the Astronomische Gesellschaft (German Astronomical Society) to eminent astronomers and astrophysicists. Recipients Source: German Astronomical Society See also List of astronomy awards References Astronomy prizes German awards Awards established in 1959
Karl Schwarzschild Medal
[ "Astronomy", "Technology" ]
69
[ "Science and technology awards", "Astronomy prizes" ]
17,656,978
https://en.wikipedia.org/wiki/Jos%C3%A9%20Cardero
José Cardero (also Josef Cardero, in full Manuel, José, (Josef, Joseph) Antonio Cardero) (1766 – after 1811) was a Spanish draughtsman and artist. He is most remembered for his work on the expedition of Alejandro Malaspina and the related expedition of Dionisio Alcalá Galiano. During the Galiano voyage Cordero Channel was named in his honor. Other places in British Columbia were later named in his honor as well, including Dibuxante Point, "dibuxante" being Spanish for "draughtsman". Biography He was born in 1766 in Écija, Spain. Nothing is known about Cardero's life until he sailed with Malaspina in 1789. He was a member of the crew of Malaspina's corvette, the Descubierta, perhaps as a servant. He showed an aptitude for drawing early in the voyage, and after Juan del Pozo Bauzá, one of the official artists, was discharged in Peru, Cardero began producing drawings regularly. In 1791, when the expedition was in Acapulco, New Spain (Mexico), Cardero was officially confirmed as an artist and map drawer of the expedition. He sailed with Malaspina to Alaska, where he made many drawings of the Tlingit. After returning to Mexico, Malaspina assigned him to serve as an artist on the expedition of Galiano and Cayetano Valdés, both officers of Malaspina's who were given ships and the task of exploring the Strait of Georgia. Cardero sailed on Valdés's ship, the Mexicana, in 1792. During the voyage the Spanish met and worked with George Vancouver, who was exploring the Strait of Georgia for the British. Both expeditions sailed around Vancouver Island. Cardero's duties on the Galiano expedition included not only making drawings and fair copies of sketch maps, but also serving in boat parties sent out to explore. After the voyage many of Cardero's drawings were copied and improved upon by other artists, especially the painter Fernando Brambila in Madrid. Brambila, who had never been to the Pacific Northwest, produced higher quality artwork but sometimes added unrealistic details. After the Galiano voyage, Cardero returned to Spain and worked with Valdés and Malaspina briefly. In 1795 he was reassigned as a Ship Accountant in the Spanish Navy and sent to Cádiz. His name appears on a list of permanent officers of the navy from 1797 to 1811, after which there is no further mention of him in known records. The reason for the removal of his name from the list of officers in 1811 is not known. Legacy Cordero Channel, originally Canal de Cardero, commemorates José Cardero. Cardero Street, in Vancouver's West End, is named for the channel, and thus only indirectly for José Cardero. See also Spanish expeditions to the Pacific Northwest References Explorers of British Columbia Spanish explorers of North America Spanish history in the Pacific Northwest 18th-century Spanish military personnel 18th-century Spanish explorers 18th-century Spanish artists 1766 births Explorers of Alaska 19th-century deaths People from Écija Draughtsmen 19th-century Spanish military personnel
José Cardero
[ "Engineering" ]
644
[ "Design engineering", "Draughtsmen" ]
17,657,397
https://en.wikipedia.org/wiki/MHV%20amplitudes
In theoretical particle physics, maximally helicity violating amplitudes (MHV) are amplitudes with massless external gauge bosons, where all but two of the gauge bosons have a particular helicity and the other two have the opposite helicity. These amplitudes are called MHV amplitudes because, at tree level, they violate helicity conservation to the maximum extent possible. The tree amplitudes in which all gauge bosons have the same helicity, or all but one have the same helicity, vanish. MHV amplitudes may be calculated very efficiently by means of the Parke–Taylor formula. Although developed for pure gluon scattering, extensions exist for massive particles, scalars (the Higgs) and for fermions (quarks and their interactions in QCD). Parke–Taylor amplitudes Work done in the 1980s by Stephen Parke and Tomasz Taylor found that, when considering the scattering of many gluons, certain classes of amplitude vanish at tree level; in particular, this happens when fewer than two gluons have negative helicity (and all the rest have positive helicity). The first non-vanishing case occurs when two gluons have negative helicity. Such amplitudes are known as "maximally helicity violating" and have an extremely simple form in terms of momentum bilinears, independent of the number of gluons present. The compactness of these amplitudes makes them extremely attractive, particularly for data taking at the LHC, for which it is necessary to remove the dominant background of standard model events. A rigorous derivation of the Parke–Taylor amplitudes was given by Berends and Giele. CSW rules The MHV amplitudes were given a geometrical interpretation using Witten's twistor string theory, which in turn inspired a technique of "sewing" MHV amplitudes together (with some off-shell continuation) to build arbitrarily complex tree diagrams. The rules for this formalism are called the CSW rules (after Freddy Cachazo, Peter Svrcek, Edward Witten). The CSW rules can be generalised to the quantum level by forming loop diagrams out of MHV vertices. There are missing pieces in this framework, most importantly the three-point (+ + −) vertex, which is clearly non-MHV in form. In pure Yang–Mills theory this vertex vanishes on-shell, but it is necessary to construct the all-plus amplitude at one loop. This amplitude vanishes in any supersymmetric theory, but does not in the non-supersymmetric case. The other drawback is the reliance on cut-constructibility to compute the loop integrals. The approach therefore cannot recover the rational parts of amplitudes (i.e. those not containing cuts). The MHV Lagrangian A Lagrangian whose perturbation theory gives rise to the CSW rules can be obtained by performing a canonical change of variables on the light-cone Yang–Mills (LCYM) Lagrangian. The LCYM Lagrangian splits into terms of definite helicity structure. The transformation absorbs the non-MHV three-point vertex into the kinetic term via a new field variable; when this transformation is solved as a series expansion in the new field variable, it gives rise to an effective Lagrangian with an infinite series of MHV terms. The perturbation theory of this Lagrangian has been shown (up to the five-point vertex) to recover the CSW rules. Moreover, the missing amplitudes which plague the CSW approach turn out to be recovered within the MHV Lagrangian framework via evasions of the S-matrix equivalence theorem. An alternative approach to the MHV Lagrangian recovers the missing pieces mentioned above by using Lorentz-violating counterterms.
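For reference, the Parke–Taylor formula mentioned above has a standard closed form in spinor-helicity notation. As a sketch (conventions for coupling constants and the overall momentum-conserving delta function vary between references), the colour-ordered n-gluon tree amplitude in which only gluons i and j carry negative helicity is

A_n^{\mathrm{tree}}(1^+,\dots,i^-,\dots,j^-,\dots,n^+) \;\propto\; \frac{\langle i\,j\rangle^{4}}{\langle 1\,2\rangle\,\langle 2\,3\rangle\cdots\langle n\,1\rangle},

where \langle a\,b\rangle denotes the spinor inner product built from the massless momenta of gluons a and b.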
BCFW recursion BCFW recursion, also known as the Britto–Cachazo–Feng–Witten (BCFW) on-shell recursion method, is a way of calculating scattering amplitudes. Extensive use is now made of these techniques. References Scattering theory Quantum chromodynamics
MHV amplitudes
[ "Chemistry" ]
859
[ "Scattering", "Scattering theory" ]
17,657,779
https://en.wikipedia.org/wiki/Painting%20and%20Decorating%20Contractors%20of%20America
The Painting Contractors Association (PCA) is a non-profit association established in 1884 to represent the painting and decorating industry. It was founded as the "Master House Painters Association of the United States and Canada". PCA has established industry standards, issues publications, and has a contractor accreditation program through their 'Contractor College' website. Members of PDCA can be found throughout the United States, Canada and other countries. They are served regionally and locally by more than 100 volunteer-based Councils, Chapters and Forums. Scholarships The A. E. Robert Friedman/PDCA Scholarship Fund of PDCA was formed in 1978, to honor Bob Friedman on the occasion of his fortieth anniversary as legal counsel for the PDCA. Since 1978, over 158 scholarships have been given to students between the ages of 18 and 24, of any background and career choice, who have been nominated by an active member of PDCA. It is intended to show the commitment of PDCA members to supporting the educational efforts of young people in search of promising careers. In 2013, seven scholarships of $3,000 per recipient were awarded, up from five scholarships the previous year. Annual convention The association hosts an annual convention, the "Painting Contractors EXPO", which presents new developments in the painting industry. During the event, PCA presents awards to members of the industry. Education PDCA offers online education and an accreditation program at Contractor College. Publications created by PDCA have been recognized by other professional organizations for training purposes and reference. Industry leadership PCA has connected with a variety of groups to provide services to the painting and decorating profession. Some examples of these initiatives have focused on improving occupational safety and health, and the role of women in painting. References External links External Website Professional associations based in the United States Organizations established in 1884 Paint and coatings industry
Painting and Decorating Contractors of America
[ "Chemistry" ]
367
[ "Coatings", "Paint and coatings industry" ]
17,658,218
https://en.wikipedia.org/wiki/Consortium%20of%20Local%20Authorities%20Special%20Programme
The Consortium of Local Authorities Special Programme (abbreviated and more commonly referred to as CLASP) was formed in England in 1957 to combine the resources of Local Authorities with the purpose of developing a prefabricated school building programme. Initially developed by Charles Herbert Aslin, the county architect for Hertfordshire, the system was used as a model for several other counties, most notably Nottinghamshire and Derbyshire. CLASP's popularity in these coal mining areas was in part because the system permitted fairly straightforward replacement of subsidence-damaged sections of building. Characteristics The system utilised prefabricated light gauge steel frames which could be built economically up to a maximum of 4 storeys. The frames were finished in a variety of claddings, and their modular nature could be employed to produce architecturally satisfying buildings. Initially developed solely for schools, the system was also used to provide offices and housing. Later developments were known as SCOLA (Second Consortium of Local Authorities) and MACE (Metropolitan Architectural Consortium for Education). The cynics' definition of the CLASP acronym, circulating in the 1970s, was "collection of loosely assembled steel parts". CLASP buildings fell out of favour in the late 1970s. Budgetary advances and changing architectural tastes made the scheme obsolete. Examples of use Important examples include many Hertfordshire schools, some of which have since been listed. The system was also used in the construction of the independent St Paul's School, London, designed by Philip Powell and Hidalgo Moya, which was constructed on unstable ground on a former reservoir and completed in 1968. In addition to schools, the CLASP system was also used in the 1960s for the buildings of the University of York, designed by architect Andrew Derbyshire between 1961 and 1963. An unusual, perhaps unique use of the system is the Catholic church of St Michael and All Angels in Wombwell, South Yorkshire. Wombwell is prone to mining subsidence and the first church on the site was condemned only ten years after it was built. The replacement church, designed by David and Patricia Brown of Weightman & Bullen and opened in 1968, is on a hexagonal plan and clad in concrete panels; the windows are polyester resin instead of stained glass. Railways Between the late 1960s and the early 1970s, the CLASP system was implemented by British Rail, particularly in the former Southern Region. Modernisation projects Mid-century CLASP buildings are coming to the end of their designed operational life. However, many projects have been carried out over the years to modernise the building fabric and increase energy efficiency. Such projects involve re-roofing work, which can increase energy efficiency; re-cladding or painting the external skin of the building to give a modern look; replacement of skylights and atriums with double-glazed solar-reduction glass; and internal refits in which additional insulation is added when internal rooms are renovated. Internal renovations can include new carpets, new ceiling tiles, efficient LED lighting and smart building management system controls. CLASP buildings are structurally strong and robust: the design is based on strong concrete foundations, metal framing supports and concrete cladding, which give the building an effectively unlimited lifetime provided minor maintenance is carried out.
These design fundamentals can allow CLASP buildings to last over a hundred years. A report commissioned by Nottinghamshire County Council in 2008 stated that it is far more efficient and environmentally friendly to modernise CLASP buildings than to knock them down and replace them. When the cost of repairing a CLASP building matches or exceeds the cost of a new build, a factor which is never taken into consideration is the environmental damage caused by removing one building and using up precious resources to build another. There must be a strong business case to justify ignoring the environmental aspect of demolishing and rebuilding when it comes to modernising assets. Asbestos in CLASP buildings Around 3,000 CLASP buildings are still in use across Britain. Since they were built using asbestos, including as fire-proofing on structural columns and as a replacement for materials of which there were shortages, they are a particular focus of the campaign to remove asbestos from school buildings in the UK. Asbestos is now known to present a serious health concern. References Notes Bibliography Ford, Boris, The Cambridge Cultural History of Britain Jones, Martyn and Saad, Mohammed, Managing Innovation in Construction Cook, Martin, Design Quality Manual: Improving Building Performance External links From Here to Modernity Buildings Architecture in England Education in England Local government in England Prefabricated buildings
Consortium of Local Authorities Special Programme
[ "Engineering" ]
902
[ "Building engineering", "Prefabricated buildings" ]
8,974,925
https://en.wikipedia.org/wiki/Apdex
Apdex (Application Performance Index) is an open standard developed by an alliance of companies for measuring performance of software applications in computing. Its purpose is to convert measurements into insights about user satisfaction, by specifying a uniform way to analyze and report on the degree to which measured performance meets user expectations. It is based on counts of "satisfied", "tolerating", and "frustrated" users, given a maximum satisfactory response time of t, a maximum tolerable response time of 4t, and where users are assumed to be frustrated above 4t. The score is equivalent to a weighted average of these user counts with weights 1, 0.5, and 0, respectively. Problems addressed When engaging in application performance management, for example in the course of website monitoring, enterprises collect many measurements of the performance of information technology applications. However, this measurement data may not provide a clear and simple picture of how well those applications are performing from a business point of view, a characteristic desired in metrics that are used as key performance indicators. Reporting several different kinds of data can be confusing. Reducing measurement data to a single well understood metric is a convenient way to track and report on quality of experience. Measurements of application response times, in particular, may be difficult to evaluate because: Viewed alone, they do not reveal whether people using the application consider its behavior to be highly responsive to their particular needs, merely tolerable, or frustratingly slow. Using averages to summarize many measurement samples washes out important details in the measurement distribution, and may obscure evidence that many users may have been frustrated with application response times that were significantly slower than the average value. The objectives (or goals or targets) set for response time values are not uniform across different applications. This makes it difficult to view comparable data for several applications side-by-side (such as in a digital dashboard), and see quickly which are meeting their objectives and which are not. The Apdex method seeks to address these problems. Apdex method Proponents of the Apdex standard believe that it offers a better way to "measure what matters". The Apdex method converts many measurements into one number on a uniform scale of 0 to 1 (0 = no users satisfied, 1 = all users satisfied). The resulting Apdex score is a numerical measure of user satisfaction with the performance of enterprise applications. This metric can be used to report on any source of end-user performance measurements for which a performance objective has been defined. The Apdex formula is the number of satisfied samples, plus half of the tolerating samples, plus none of the frustrated samples, divided by the total number of samples; the subscript t denotes the target time, and the tolerable time is assumed to be 4 times the target time. So it is easy to see how this ratio is always directly related to users' perceptions of satisfactory application responsiveness.
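The following is a minimal sketch of the Apdex computation as described above (the function name, sample values and target time are illustrative only, not part of the specification):

def apdex(response_times, t):
    # Satisfied samples (<= t) count 1, tolerating samples (<= 4t) count 0.5,
    # frustrated samples (> 4t) count 0.
    satisfied = sum(1 for r in response_times if r <= t)
    tolerating = sum(1 for r in response_times if t < r <= 4 * t)
    return (satisfied + tolerating / 2) / len(response_times)

# Matches the worked example in the text: 100 samples with a 3-second objective,
# 60 satisfied, 30 tolerating, 10 frustrated -> (60 + 30/2) / 100 = 0.75
samples = [1.0] * 60 + [5.0] * 30 + [20.0] * 10
print(apdex(samples, t=3.0))  # 0.75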
Example: assuming a performance objective of 3 seconds or better, and a tolerable standard of 12 seconds or better, given a dataset with 100 samples where 60 are below 3 seconds, 30 are between 3 and 12 seconds, and the remaining 10 are above 12 seconds, the Apdex score is (60 + 30/2) / 100 = 0.75. The Apdex formula is equivalent to a weighted average, where a satisfied user is given a score of 1, a tolerating user is given a score of 0.5, and a frustrated user is given a score of 0. Apdex Alliance The Apdex Alliance, headquartered in Charlottesville, Virginia, was founded in 2004 by Peter Sevcik, President of NetForecast, Inc. The Alliance is a group of companies that are collaborating to establish the Apdex standard. These companies have perceived the need for a simple and uniform way to report on application performance, are adopting the Apdex method in their internal operations or software products, and are participating in the work of refining and extending the definition of the Apdex specifications. Alliance contributing members who incorporate the standard into their products may use the Apdex name or logo where the Alliance has certified them as compliant. In January 2007, the Alliance comprised 11 contributing member companies and over 200 individual members. While the number of contributing companies has remained relatively stable, individual membership grew to over 800 by December 2008, and reached 2000 in 2010. In 2008 the Alliance began publishing a blog, the Apdex Exchange, and in 2010 began offering educational webinars. These activities address performance management topics, with an emphasis on how to apply the Apdex methodology. External links Apdex website Apdex specifications Defining The Application Performance Index by Peter Sevcik, Business Communications Review, March 2005. CloudNetCare (Load testing tool using APDEX) by NLiive, April 2012 Standards organizations in the United States Computer performance Computer standards
Apdex
[ "Technology" ]
982
[ "Computer standards", "Computer performance" ]
8,975,395
https://en.wikipedia.org/wiki/Word%20mark%20%28computer%20hardware%29
In computer hardware, a word mark or flag is a bit in each memory location on some early variable word length computers (e.g., IBM 1401, 1410, 1620) used to mark the end of a word. Sometimes the actual bit used as a word mark on a given machine is not called word mark, but has a different name (e.g., flag on the IBM 1620, because on this machine it is multipurpose). The term word mark should not be confused with group mark or with record mark, which are distinct characters. References Computing terminology Early computers
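A minimal sketch of the idea (the data layout below is a hypothetical illustration, not the actual encoding of the IBM 1401, 1410 or 1620): each memory location holds a character together with a one-bit flag, and a variable-length word is read until the location carrying the word mark is reached.

# Each cell is (character, word_mark_bit); the set bit marks the end of the word.
memory = [('H', 0), ('E', 0), ('L', 0), ('L', 0), ('O', 1), ('X', 0)]

def read_word(mem, start):
    # Collect characters from `start` up to and including the cell whose word mark is set.
    chars = []
    for ch, mark in mem[start:]:
        chars.append(ch)
        if mark:
            break
    return ''.join(chars)

print(read_word(memory, 0))  # HELLO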
Word mark (computer hardware)
[ "Technology" ]
122
[ "Computing terminology" ]
8,975,663
https://en.wikipedia.org/wiki/Coding%20gain
In coding theory, telecommunications engineering and other related engineering problems, coding gain is the measure of the difference between the signal-to-noise ratio (SNR) levels of the uncoded system and the coded system required to reach the same bit error rate (BER) when used with the error correcting code (ECC). Example If the uncoded BPSK system in an AWGN environment has a bit error rate (BER) of 10−2 at an SNR of 4 dB, and the corresponding coded (e.g., BCH) system has the same BER at an SNR of 2.5 dB, then we say the coding gain = 4 dB − 2.5 dB = 1.5 dB, due to the code used (in this case BCH). Power-limited regime In the power-limited regime (where the nominal spectral efficiency is at most 2 b/2D or b/s/Hz, i.e. the domain of binary signaling), the effective coding gain of a signal set at a given target error probability per bit is defined as the difference in dB between the Eb/N0 required to achieve the target with the coded signal set and the Eb/N0 required to achieve the target with 2-PAM or (2×2)-QAM (i.e. no coding). The nominal coding gain is defined and normalized so that it equals 1 (0 dB) for 2-PAM or (2×2)-QAM. If the average number of nearest neighbors per transmitted bit is equal to one, the effective coding gain is approximately equal to the nominal coding gain. However, if the average number of nearest neighbors per bit is larger than one, the effective coding gain is less than the nominal coding gain by an amount which depends on the steepness of the error-probability versus Eb/N0 curve at the target error probability. This curve can be plotted using the union bound estimate (UBE), where Q is the Gaussian probability-of-error function. For the special case of a binary linear block code with parameters (n, k, d), the nominal spectral efficiency is 2k/n and the nominal coding gain is kd/n. Example The nominal spectral efficiency, nominal coding gain and effective coding gain at a given target error probability can be tabulated for Reed–Muller codes of various lengths. Bandwidth-limited regime In the bandwidth-limited regime (nominal spectral efficiency greater than 2 b/2D, i.e. the domain of non-binary signaling), the effective coding gain of a signal set at a given target error rate is defined as the difference in dB between the normalized SNR required to achieve the target with the coded signal set and the normalized SNR required to achieve the target with M-PAM or (M×M)-QAM (i.e. no coding). The nominal coding gain is defined analogously and normalized so that it equals 1 (0 dB) for M-PAM or (M×M)-QAM. The UBE takes an analogous form, in which the multiplying factor is the average number of nearest neighbors per two dimensions. See also Channel capacity Eb/N0 References MIT OpenCourseWare, 6.451 Principles of Digital Communication II, Lecture Notes sections 5.3, 5.5, 6.3, 6.4 Coding theory Error detection and correction
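A small sketch tying these definitions together (the (7, 4, 3) Hamming-code parameters below are just an assumed illustrative choice of (n, k, d), not taken from the article):

import math

def coding_gain_db(snr_uncoded_db, snr_coded_db):
    # Coding gain: the dB difference in SNR needed to reach the same BER.
    return snr_uncoded_db - snr_coded_db

def nominal_coding_gain_db(n, k, d):
    # Nominal coding gain k*d/n of an (n, k, d) binary linear block code, in dB.
    return 10 * math.log10(k * d / n)

# Example from the text: uncoded BPSK needs 4 dB, the coded system 2.5 dB, at BER = 1e-2.
print(coding_gain_db(4.0, 2.5))                    # 1.5 (dB)
print(round(nominal_coding_gain_db(7, 4, 3), 2))   # about 2.34 (dB) for the assumed (7, 4, 3) code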
Coding gain
[ "Mathematics", "Engineering" ]
601
[ "Discrete mathematics", "Coding theory", "Reliability engineering", "Error detection and correction" ]
8,976,534
https://en.wikipedia.org/wiki/Revenue%20management
Revenue management (RM) is a discipline to maximize profit by optimizing rate (ADR) and occupancy (Occ). In its day-to-day application the maximization of Revenue per Available Room (RevPAR) is paramount. It is seen by some as synonymous with yield management. Overview Businesses face important decisions regarding what to sell, when to sell, to whom to sell, and for how much. Revenue management uses data-driven tactics and strategy to answer these questions in order to increase revenue. The discipline of revenue management (RM), also known as yield management (YM), is a cross-disciplinary field. It combines operations research or management science, analytics, economics, human resource management, software development, marketing, e-commerce, consumer behaviour, and consulting. For destinations with benchmark data available, the maximization of RGI (Revenue Generated Index, or RevPAR Index) is the focus of this discipline. History Before the emergence of revenue management, BOAC (now British Airways) experimented with differentiated fare products by offering capacity-controlled "Earlybird" discounts to stimulate demand for seats that would otherwise fly empty. Taking it a step further, Robert Crandall, former chairman and CEO of American Airlines (AA), pioneered a practice he called yield management, which focused primarily on maximizing revenue through analytics-based inventory control. Under Crandall's leadership, American continued to invest in yield management's forecasting, inventory control and overbooking capabilities. By the early 1980s, the combination of a mild recession and new competition spawned by the Airline Deregulation Act (1978) posed an additional threat. Low-cost, low-fare airlines like People Express were growing rapidly because of their ability to charge even less than American's Super Saver fares. After investing millions in the next-generation capability which they would call DINAMO (Dynamic Inventory Optimization and Maintenance Optimizer), American announced Ultimate Super Saver Fares in 1985 that were priced lower than those of People Express. These fares were non-refundable in addition to being advance-purchase restricted and capacity controlled. This yield management system targeted those discounts to only those situations where they had a surplus of empty seats. The system and analysts engaged in continual re-evaluation of the placement of the discounts to maximize their use. Over the next year, American's revenue increased 14.5% and its profits were up 47.8%. Other industries took note of AA's success and implemented similar systems. Robert Crandall discussed his success with yield management with J. W. "Bill" Marriott, Jr., CEO of Marriott International. Marriott International had many of the same issues that airlines did: perishable inventory, customers booking in advance, lower-cost competition and wide swings with regard to balancing supply and demand. Since "yield" was an airline term and did not necessarily pertain to hotels, Marriott International and others began calling the practice Revenue Management. The company created a Revenue Management organization and invested in automated Revenue Management Systems (RMS) that would provide daily forecasts of demand and make inventory recommendations for each of its 160,000 rooms at its Marriott, Courtyard Marriott and Residence Inn brands.
They also created "fenced rate" logic similar to airlines, which would allow them to offer targeted discounts to price-sensitive market segments based on demand. To address the additional complexity created by variable lengths-of-stay, Marriott's Demand Forecast System (DFS) was built to forecast guest booking patterns and optimize room availability by price and length of stay. By the mid-1990s, Marriott's successful execution of revenue management was adding between $150 million and $200 million in annual revenue. A natural extension of hotel revenue management was to rental car firms, which experienced similar issues of discount availability and duration control. In 1994, revenue management saved National Car Rental from bankruptcy. Their revival from near collapse to making profits served as an indicator of revenue management's potential. Up to this point, revenue management had focused on driving revenue from Business to Consumer (B2C) relationships. In the early 1990s UPS developed revenue management further by revitalizing their Business to Business (B2B) pricing strategy. Faced with the need for volume growth in a competitive market, UPS began building a pricing organization that focused on discounting. Prices began to erode rapidly, however, as they began offering greater discounts to win business. The executive team at UPS prioritized specific targeting of their discounts but could not strictly follow the example set by airlines and hotels. Rather than optimizing the revenue for a discrete event such as the purchase of an airline seat or a hotel room, UPS was negotiating annual rates for large-volume customers using a multitude of services over the course of a year. To alleviate the discounting issue, they formulated the problem as a customized bid-response model, which used historical data to predict the probability of winning at different price points. They called the system Target Pricing. With this system, they were able to forecast the outcomes of any contractual bid at various net prices and identify where they could command a price premium over competitors and where deeper discounts were required to land deals. In the first year of this revenue management system, UPS reported increased profits of over $100 million. The concept of maximizing revenue on negotiated deals found its way back to the hospitality industry. Marriott's original application of revenue management was limited to individual bookings, not groups or other negotiated deals. In 2007, Marriott introduced a "Group Price Optimizer" that used a competitive bid-response model to predict the probability of winning at any price point, thus providing accurate price guidance to the sales force. The initial system generated an incremental $46 million in profit. This led to an Honorable Mention for the Franz Edelman Award for Achievement in Operations Research and the Management Sciences in 2009. By the early 1990s revenue management also began to influence television ad sales. Companies like the Canadian Broadcasting Corporation, ABC, and NBC developed systems that automated the placement of ads in proposals based on total forecasted demand and forecasted ratings by program. Today, many television networks around the globe have revenue management systems. Revenue management to this point had been utilized in the pricing of perishable products.
In the 1990s, however, the Ford Motor Company began adopting revenue management to maximize profitability of its vehicles by segmenting customers into micro-markets and creating a differentiated and targeted price structure. Pricing for vehicles and options packages had been set based upon annual volume estimates and profitability projections. The company found that certain products were overpriced and some were underpriced. Understanding the range of customer preferences across a product line and geographical market, Ford leadership created a revenue management organization to measure the price-responsiveness of different customer segments for each incentive type and to develop an approach that would target the optimal incentive by product and region. By the end of the decade, Ford estimated that roughly $3 billion in additional profits came from revenue management initiatives. The public success of pricing and revenue management at Ford solidified the ability of the discipline to address the revenue generation issues of virtually any company. Many auto manufacturers have adopted the practice for both vehicle sales and the sale of parts. Retailers have leveraged the concepts pioneered at Ford to create more dynamic, targeted pricing in the form of discounts and promotions to more accurately match supply with demand. Promotions planning and optimization assisted retailers with the timing and prediction of the incremental lift of a promotion for targeted products and customer sets. Companies have rapidly adopted price markdown optimization to maximize revenue from end-of-season or end-of-life items. Furthermore, strategies driving promotion roll-offs and discount expirations have allowed companies to increase revenue from newly acquired customers. By 2000, virtually all major airlines, hotel firms, cruise lines and rental car firms had implemented revenue management systems to predict customer demand and optimize available price. These revenue management systems limited "optimization" to managing the availability of pre-defined prices in pre-established price categories. The objective function was to select the best blends of predicted demand given existing prices. The sophisticated technology and optimization algorithms had been focused on selling the right amount of inventory at a given price, not on the price itself. Realizing that controlling inventory was no longer sufficient, InterContinental Hotels Group (IHG) launched an initiative to better understand the price sensitivity of customer demand. IHG determined that calculating price elasticity at very granular levels to a high degree of accuracy still was not enough. Rate transparency had elevated the importance of incorporating market positioning against substitutable alternatives. IHG recognized that when a competitor changes its rate, the consumer's perception of IHG's rate also changes. Working with third-party competitive data, the IHG team was able to analyze historical price, volume and share data to accurately measure price elasticity in every local market for multiple lengths of stay. These elements were incorporated into a system that also measured differences in customer elasticity based upon how far in advance the booking is being made relative to the arrival date. The incremental revenue from the system was significant, as this new Price Optimization capability increased Revenue per Available Room (RevPAR) by 2.7%.
IHG and Revenue Analytics, a pricing and revenue management consulting firm, were selected as finalists for the Franz Edelman Award for Achievement in Operations Research and the Management Sciences for their joint effort in implementing Price Optimization at IHG. In 2017, Holiday Retirement and Prorize LLC were awarded the Franz Edelman Award for Achievement in Operations Research and the Management Sciences for their use of operations research (O.R.) to improve the pricing model for more than 300 senior living communities. Holiday Retirement partnered with Prorize LLC, an Atlanta-based revenue management firm that leveraged O.R. to develop its Senior Living Rent Optimizer. The revenue management system developed by Prorize enabled a consistent and proactive pricing process across Holiday, while simultaneously providing optimal pricing recommendations for each unit in every one of their communities. As a result of their joint efforts, they were able to consistently raise revenues by over 10%. Levers Whereas yield management involves specific actions to generate yield through perishable inventory management, revenue management encompasses a wide range of opportunities to increase revenue. A company can utilize these different categories like a series of levers in the sense that all are usually available, but only one or two may drive revenue in a given situation. The primary levers are: Pricing This category of revenue management involves redefining pricing strategy and developing disciplined pricing tactics. The key objective of a pricing strategy is anticipating the value created for customers and then setting specific prices to capture that value. A company may decide to price against their competitors or even their own products, but the most value comes from pricing strategies that closely follow market conditions and demand, especially at a segment level. Once a pricing strategy dictates what a company wants to do, pricing tactics determine how a company actually captures the value. Tactics involve creating pricing tools that change dynamically, in order to react to changes and continually capture value and gain revenue. Price optimization, for example, involves constantly optimizing multiple variables such as price sensitivity, price ratios, and inventory to maximize revenues. A successful pricing strategy, supported by analytically based pricing tactics, can drastically improve a firm's profitability. Inventory When focused on controlling inventory, revenue management is mainly concerned with how best to price or allocate capacity. First, a company can discount products in order to increase volume. By lowering prices on products, a company can overcome weak demand and gain market share, which ultimately increases revenue. On the other hand, in situations where demand for a product is strong but there is a threat of cancellations (e.g. hotel rooms or airline seats), firms often overbook in order to maximize revenue from full capacity. Overbooking's focus is increasing the total volume of sales in the presence of cancellations rather than optimizing customer mix. Marketing Price promotions allow companies to sell higher volumes by temporarily decreasing the price of their products. Revenue management techniques measure customer responsiveness to promotions in order to strike a balance between volume growth and profitability. An effective promotion helps maximize revenue when there is uncertainty about the distribution of customer willingness to pay.
When a company's products are sold in the form of long-term commitments, such as internet or telephone service, promotions help attract customers who will then commit to contracts and produce revenue over a long time horizon. When this occurs, companies must also strategize their promotion roll-off policies; they must decide when to begin increasing the contract fees and by what magnitude to raise the fees in order to avoid losing customers. Revenue management optimization proves useful in balancing promotion roll-off variables in order to maximize revenue while minimizing churn. Channels Revenue management through channels involves strategically driving revenue through different distribution channels. Different channels may represent customers with different price sensitivities. For example, customers who shop online are usually more price sensitive than customers who shop in a physical store. Different channels often have different costs and margins associated with those channels. When faced with multiple channels to retailers and distributors, revenue management techniques can calculate appropriate levels of discounts for companies to offer distributors through opaque channels to push more products without losing integrity with respect to public perception of quality. Since the advent of the Internet the distribution network and control has become a major concern for service providers. When the producer collaborates with a powerful provider, sacrifices may be necessary, particularly concerning the selling price/commission rate, in exchange for the capacity to reach a certain clientele and sales volumes. Process Data collection The revenue management process begins with data collection. Relevant data is paramount to a revenue management system's capability to provide accurate, actionable information. A system must collect and store historical data for inventory, prices, demand, and other causal factors. Any data that reflects the details of products offered, their prices, competition, and customer behavior must be collected, stored, and analyzed. In some markets, specialized data collection methods have rapidly emerged to service their relevant sector, and sometimes have even become a norm. In the European Union for example, the European Commission makes sure businesses and governments stick to EU rules on fair competition, while still leaving space for innovation, unified standards, and the development of small businesses. To support this, third-party sources are utilized to collect data and make only averages available for commercial purposes, such as is the case with the hotel sector – in Europe and the Middle East & North Africa region, where key operating indicators are monitored, such as Occupancy Rate (OR), Average Daily Rate (ADR) and Revenue per Available Room (RevPAR). Data is supplied directly by hotel chains and groups (as well as independent properties) and benchmark averages are produced by direct market (competitive set) or wider macro market. This data is also utilized for financial reporting, forecasting trends and development purposes. Information about customer behavior is a valuable asset that can reveal consumer behavioral patterns, the impact of competitors' actions, and other important market information. This information is crucial to starting the revenue management process. Segmentation After collecting the relevant data, market segmentation is the key to market-based pricing and revenue maximization. 
Airlines, for example, employed this tactic in differentiating between price-sensitive leisure customers and price-insensitive business customers. Leisure customers tend to book earlier and are flexible about when they fly and are willing to sit in coach seats to save more money for their destination, whereas business customers tend to book closer to departure and are typically less price sensitive. Success hinges on the ability to segment customers into similar groups based on a calculation of price responsiveness of customers to certain products based upon the circumstances of time and place. Revenue management strives to determine the value of a product to a very narrow micro-market at a specific moment in time and then chart customer behavior at the margin to determine the maximum obtainable revenue from those micro-markets. Micro-markets can be derived qualitatively by conducting a dimensional analysis. Business customers and leisure customers are two segments, but business customers could be further segmented by the time they fly (those who book late and fly in the morning etc.). Useful tools such as Cluster Analysis allow Revenue Managers to create a set of data-driven partitioning techniques that gather interpretable groups of objects together for consideration. Market segmentation based upon customer behavior is essential to the next step, which is forecasting demand associated with the clustered segments. Forecasting Revenue management requires forecasting various elements such as demand, inventory availability, market share, and total market. Its performance depends critically on the quality of these forecasts. Forecasting is a critical task of revenue management and takes much time to develop, maintain, and implement; see Financial forecast. Quantity-based forecasts, which use time-series models, booking curves, cancellation curves, etc., project future quantities of demand, such as reservations or products bought. See Demand forecasting and Production budget. Price-based forecasts seek to forecast demand as a function of marketing variables, such as price or promotion. These involve building specialized forecasts such as market response models or cross price elasticity of demand estimates to predict customer behavior at certain price points. By combining these forecasts with calculated price sensitivities and price ratios, a revenue management system can then quantify these benefits and develop price optimization strategies to maximize revenue. Optimization While forecasting suggests what customers are likely to do, optimization suggests how a firm should respond. Often considered the pinnacle of the revenue management process, optimization is about evaluating multiple options on how to sell your product and to whom to sell your product. Optimization involves solving two important problems in order to achieve the highest possible revenue. The first is determining which objective function to optimize. A business must decide between optimizing prices, total sales, contribution margins, or even customer lifetime values. Secondly, the business must decide which optimization technique to utilize. For example, many firms utilize linear programming, a complex technique for determining the best outcome from a set of linear relationships, to set prices in order to maximize revenue. Regression analysis, another statistical tool, involves finding the ideal relationship between several variables through complex models and analysis. 
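As a toy illustration of the price-optimization step just described (the demand model, its parameters, the candidate prices and the capacity figure are invented for illustration and do not represent any firm's actual system), a price-response forecast can be evaluated at each candidate price point and the revenue-maximizing one selected:

# Hypothetical constant-elasticity price-response forecast (illustrative parameters only).
def forecast_demand(price, base_demand=1000.0, reference_price=100.0, elasticity=1.5):
    return base_demand * (price / reference_price) ** (-elasticity)

def best_price(candidate_prices, capacity):
    # Pick the candidate price that maximizes expected revenue, capped by capacity.
    def revenue(p):
        return p * min(forecast_demand(p), capacity)
    return max(candidate_prices, key=revenue)

print(best_price([79, 99, 119, 139, 159], capacity=600))  # 139 under these assumed numbers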
Discrete choice models can serve to predict customer behavior in order to target them with the right products for the right price. Tools such as these allow a firm to optimize its product offerings, inventory levels, and pricing points in order to achieve the highest revenue possible. Dynamic re-evaluation Revenue management requires that a firm continually re-evaluate its prices, products, and processes in order to maximize revenue. In a dynamic market, an effective revenue management system constantly re-evaluates the variables involved in order to move dynamically with the market. As micro-markets evolve, the strategy and tactics of revenue management must adjust as well. (See Volume risk.) In an organization Revenue management's fit within the organizational structure depends on the type of industry and the company itself. Some companies place revenue management teams within Marketing because marketing initiatives typically focus on attracting and selling to customers. Other firms dedicate a section of Finance to handle revenue management responsibilities because of the tremendous bottom-line implications. Some companies have elevated the position of chief revenue officer, or CRO, to the senior management level. This position typically oversees functions like sales, pricing, new product development, and advertising and promotions. A CRO in this sense would be responsible for all activities that generate revenue and for directing the company to become more "revenue-focused". Supply chain management and revenue management have many natural synergies. Supply chain management (SCM) is a vital process in many companies today and several are integrating this process with a revenue management system. On one hand, supply chain management often focuses on filling current and anticipated orders at the lowest cost, while assuming that demand is primarily exogenous. Conversely, revenue management generally assumes costs and sometimes capacity are fixed and instead looks to set prices and customer allocations that maximize revenue given these constraints. A company that has achieved excellence in supply chain management and revenue management individually may have many opportunities to increase profitability by linking their respective operational focus and customer-facing focus together. Business intelligence platforms have also become increasingly integrated with the revenue management process. These platforms, driven by data mining processes, offer a centralized data and technology environment that delivers business intelligence by combining historical reporting and advanced analytics to explain and evaluate past events, deliver recommended actions and eventually optimize decision-making. Business intelligence is not synonymous with Customer Relationship Management (CRM): business intelligence generates proactive forecasts, whereas CRM strategies track and document a company's current and past interactions with customers. Data mining this CRM information, however, can help drive a business intelligence platform and provide actionable information to aid decision-making. Developing industries The ability for revenue management to optimize price based on forecast demand, price elasticity, and competitive rates has considerable benefits, and many companies rushed to develop their own revenue management capabilities in the early 2000s.
Industries embracing revenue management include the following: Hotel, hospitality, and tourism services – daily revenue or yield management strategies are a popular practice within the hotel sector, particularly prominent in mature and large hotel markets such as Western Europe and North America. Key operating indicators Occupancy Rate (OR), Average Daily Rate (ADR) and Revenue per Available Room (RevPAR) are tracked using third-party sources to follow direct competitor set averages in demand and price, thereby indicating penetration rate and performance index. Leisure industries Media/Telecommunications – a promotion-driven industry often focused on attracting customers with discounted plans and then retaining them at higher price points. Businesses in this industry often face regulatory constraints, demand volatility, and sales through multiple channels to both business and consumer segments. Revenue management can help these companies understand micro-markets and forecast demand in order to optimize advertising sales and long-term contracts. Retail industries Distributors – face a complex environment that often includes thousands of individual SKUs with several different product lifecycles. Each distributor must account for factors such as channel conflict, cross-product cannibalization, and competitive actions. Revenue management has proved useful to distributors in promotion analysis and negotiated contracts. Medical products and services – deal with large fluctuations in demand depending on time of day and day of week. Hospital surgeries are often overflowing on weekday mornings but sit empty and underutilized on the weekend. Hospitals may experiment with optimizing their inventory of services and products based on different demand points. Additionally, revenue management techniques allow hospitals to mitigate claim underpayments and denials, thus preventing significant revenue leakage. Financial services – offer a wide range of products to a wide range of customers. Banks have applied segmented pricing tactics to loan holders, often utilizing heavy amounts of data and modeling to project interest rates based on how much a customer is willing to pay. Industry organizations RMS/RMAPI (UK) The Revenue Management Society (RMS), now operating under the name Revenue Management and Pricing International Limited, is the industry body representing companies and practitioners working in this area in the UK. It was founded as the Revenue Management Club by Steve Marchant and Tim Rosen in 2003, becoming incorporated as the Revenue Management Society in 2007. In 2013, Marchant resigned, and Tim Rosen, with the support of the committee, restructured the organisation (still retaining the company name) and started operating under the name Revenue Management and Pricing International Limited (RMAPI). Originally covering the tourism and leisure industries, RMS/RMAPI expanded into other sectors using the same disciplines, including retail, telecommunications, and media. Membership is by annual subscription, and it aims to provide a forum for practitioners of revenue management and pricing and related disciplines, including conferences and other events. Others The equivalent in France is the Revenue Management Club. Journals A bi-monthly journal, the Journal of Revenue and Pricing Management, provides an international forum for research in revenue management and pricing.
It publishes applied research papers, case studies, models and theories, along with new trends and future ideas by experts and practitioners. It is aimed at senior professionals in private and public sector organisations as well as academics in universities and business schools. The editorial board, headed by Ian Yeoman, Professor of Innovation, Disruption and New Phenomena, at the Stenden University of Applied Sciences, Netherlands, is drawn from across the globe. See also Forecasting Inventory theory Linear programming Operations research Optimization Regression analysis Target income sales References Revenue Marketing analytics Consumer behaviour
Revenue management
[ "Biology" ]
4,997
[ "Behavior", "Consumer behaviour", "Human behavior" ]
8,976,994
https://en.wikipedia.org/wiki/International%20Association%20of%20GeoChemistry
The IAGC (International Association of GeoChemistry, formerly known as the International Association of Geochemistry and Cosmochemistry) is affiliated with the International Union of Geological Sciences and has been one of the pre-eminent international geochemical organizations for over thirty-five years. The principal objective of the IAGC is to foster co-operation in, and advancement of, geochemistry in the broadest sense. This is achieved by: working with any interested group in planning symposia and other types of meetings related to geochemistry; sponsoring publications in geochemistry of a type not normally covered by existing organizations; and the activities of working groups which study problems that require, or would benefit from, international co-operation. The scientific thrust of the IAGC takes place through its Working Groups (many of which organize regular symposia) and the official journal, Applied Geochemistry. The specific objectives of the IAGC are: To foster the use of the tools and techniques of chemistry to advance the understanding of the earth and its component systems for the benefit of mankind and modern society; To contribute to advancement in geochemical research throughout the world, including both fundamental geochemical research aimed at understanding the global earth system and applied geochemical research that addresses problems of particular relevance to the welfare of mankind and society; To promote international and educational cooperation in geochemistry through outreach activities that include: establishing internal specialty-area working groups in topic areas that would benefit from international scientific cooperation, sponsoring international scientific meetings related to geochemistry, disseminating new knowledge through publication of the journal "Applied Geochemistry", fostering communication in geochemistry across the international scientific community, encouraging the early career development of young geochemists, contributing to geochemical education, and enhancing the visibility of the science of geochemistry and demonstrating its importance to mankind and society. History The International Association of Geochemistry and Cosmochemistry (IAGC) was formally founded on 8 May 1967. Prior to that time the organization of international geochemical affairs was largely carried out through the Inorganic Chemistry section of the International Union of Pure and Applied Chemistry (IUPAC), starting in 1960. It was at the twenty-first International Geological Congress (IGC) at Copenhagen in 1960 that the International Union of Geological Sciences (IUGS) was formally established and geochemists formed a close bond with the world geological community. Earl Ingerson, as Chairman or Secretary to three of the then-existing international geochemical organizations, coordinated a meeting of members of the committees on geochemistry of the IGC, IUGG and IUPAC in New Delhi in 1964, but was himself unable to attend. This meeting, chaired by Ken Sugawara, drew up draft statutes and nominated temporary officers, with the result that in November 1965, Earl Ingerson called a meeting in Paris to name the association, complete the statutes, elect temporary officers and apply to IUGS for immediate affiliation. The first Council meeting was held on 8 May 1967 at UNESCO headquarters in Paris, presided over by Earl Ingerson. Until 2000, the Association's governing body was the General Assembly, which met during each IGC.
The main internal financial support was provided by National Members, who voted at the General Assembly. Some outside funding also came from UNESCO and IUGS. Day-to-day operations between each General Assembly were carried out by a Council of five officers and eight Council members. During its existence, IAGC has, through its various working groups and members, sponsored or co-sponsored more than 40 international meetings, which represent its main financial expenditure. Many of these meetings result from close cooperation with other associations affiliated with IUGS and IUGG, as well as various international, national, provincial and academic organizations. Proceedings of these meetings are usually published. In 1986 the IAGC launched its official journal, Applied Geochemistry. At the General Assembly of the IAGC in Rio de Janeiro, National Memberships were terminated, as it was widely felt that the IAGC was sufficiently mature and financially stable that the control and support of individual countries on the IAGC, through designated representatives (who may not have been geochemists), was redundant and potentially counter-productive. Thus, the IAGC evolved into a self-supported organization whose activities were controlled by its members, through an elected Executive and Council. Recently, the Statutes of the IAGC have undergone important revisions to be more applicable to current plans and operations. Also, as described on the IAGC homepage, there has been a name change to reflect the applied geochemical nature of the IAGC (now the International Association of GeoChemistry). References External links IAGC Geochemistry organizations International scientific organizations
International Association of GeoChemistry
[ "Chemistry" ]
956
[ "Geochemistry organizations" ]
8,977,541
https://en.wikipedia.org/wiki/Leaf%20sensor
A leaf sensor is a phytometric device (one that measures plant physiological processes) that measures water loss or water deficit stress (WDS) in plants by monitoring the moisture level in plant leaves in real time. The first leaf sensor was developed by LeafSens, an Israeli company granted a US patent for a mechanical leaf thickness sensing device in 2001. LeafSens has made strides incorporating its leaf sensing technology into citrus orchards in Israel. A solid state smart leaf sensor technology was developed by the University of Colorado at Boulder for NASA in 2007. It was designed to help monitor and control agricultural water demand. AgriHouse received a National Science Foundation (NSF) STTR grant in conjunction with the University of Colorado to further develop the solid state leaf sensor technology for precision irrigation control in 2007. Precision monitoring Water deficit stress measurements A Phase I research grant from the National Science Foundation in 2007 showed that the leaf sensor technology has the potential to save between 30% and 50% of irrigation water by reducing irrigation from once every 24 hours to about once every 2 to 2.5 days by sensing impending water deficit stress. Leaf sensor technology developed by AgriHouse indicates water deficit stress by measuring the turgor pressure of a leaf, which decreases dramatically at the onset of leaf dehydration. Early detection of impending water deficit stress in plants can be used as an input parameter for precision irrigation control by allowing plants to communicate water requirements directly to humans and/or electronic interfaces. For example, a base system utilizing the wirelessly transmitted information of several sensors appropriately distributed over the sectors of a round field irrigated by a center-pivot irrigation system could indicate exactly when, and in which field sector, irrigation is needed. Irrigation control In a 2008 USDA-sponsored field study, AgriHouse's SG-1000 Leaf Sensor attached to dry beans demonstrated a 25% saving in irrigation water and pumping cost. In 2010 the University of Colorado at Boulder granted AgriHouse Inc. an exclusive license to its patented leaf sensor technology. Precision irrigation monitoring using the SG-1000 leaf sensor and commercial data loggers for irrigation control has been achieved in recent years. Researchers have found a direct correlation between leaf thickness and the Relative Water Content (RWC) of plant leaves using the SG-1000 Leaf Sensor under field conditions. Water and energy conservation The agricultural sustainability benefits of water and energy savings have been established using the SG-1000 leaf sensor under field conditions and in greenhouses. Plant science researchers and agronomists have utilized the SG-1000 Leaf Sensor to study the relationship between water content, leaf cell turgidity potential and leaf thickness. Plant leaf characteristics, including water potential and osmotic water potential relationships, have been studied with the device. See also Aeroponics Agriculture Agronomy Drought Conservation Greenhouse Hydroponics Irrigation Plant physiology Plant Science Soil Sustainability Turgor Pressure Water Footnotes Sensors Plant physiology
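The sector-by-sector irrigation scheduling described above can be illustrated with a minimal sketch. This is not vendor code; the sensor interface, the 5% thickness-drop trigger threshold, the field layout and all names are hypothetical assumptions made purely for illustration.

```python
# Hypothetical sketch of threshold-based irrigation scheduling from leaf-thickness
# readings. All values, names and the 5% trigger are illustrative assumptions.

def sectors_needing_irrigation(readings, baselines, drop_threshold=0.05):
    """Return sector ids whose current leaf thickness has fallen more than
    `drop_threshold` (as a fraction) below the well-watered baseline,
    indicating impending water deficit stress."""
    stressed = []
    for sector, current in readings.items():
        baseline = baselines[sector]
        if (baseline - current) / baseline > drop_threshold:
            stressed.append(sector)
    return stressed

# Example: leaf thickness (mm) reported by one sensor per center-pivot sector.
baselines = {"sector_1": 0.310, "sector_2": 0.295, "sector_3": 0.305}
readings  = {"sector_1": 0.308, "sector_2": 0.272, "sector_3": 0.301}

print(sectors_needing_irrigation(readings, baselines))  # -> ['sector_2']
```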
Leaf sensor
[ "Technology", "Engineering", "Biology" ]
581
[ "Plant physiology", "Sensors", "Plants", "Measuring instruments" ]
8,977,564
https://en.wikipedia.org/wiki/Recording%20practices%20of%20the%20Beatles
The studio practices of the Beatles evolved during the 1960s and, in some cases, influenced the way popular music was recorded. Some of the effects they employed were sampling, artificial double tracking (ADT) and the elaborate use of multitrack recording machines. They also used classical instruments on their recordings and guitar feedback. The group's attitude towards the recording process was summed up by Paul McCartney: "We would say, 'Try it. Just try it for us. If it sounds crappy, OK, we'll lose it. But it might just sound good.' We were always pushing ahead: Louder, further, longer, more, different." Studios EMI (Abbey Road) In the early part of the 1960s, EMI's Abbey Road Studios was equipped with EMI-made British Tape Recorders (BTR) which were developed in 1948, as copies of German wartime recorders. The BTR was a twin-track, valve-based machine. When recording on the twin-track machine there was very little opportunity for overdubbing; the recording was essentially that of a live music performance. The first two Beatles albums, Please Please Me and With The Beatles, were recorded on the BTR two-track machines; with the introduction of four-track machines in 1963 (the first 4-track Beatles recording was "I Want to Hold Your Hand") there came a change in the way recordings were made—tracks could be built up layer by layer, encouraging experimentation in the multitrack recording process. In 1968 eight-track recorders became available, but Abbey Road was somewhat slow in adopting the new technology and a number of Beatles tracks (including "Hey Jude") were recorded in other studios in London to get access to the new eight-track recorders. The Beatles' album Abbey Road, was the only one to be recorded using a transistorised mixing console, the EMI TG12345, rather than the earlier REDD valve consoles. Let It Be was recorded largely at the Beatles' own Apple Studios, using borrowed REDD valve consoles from EMI after the designer Magic Alex (Alex Mardas) failed to come up with a suitable desk for the studio. Engineer Geoff Emerick has said that the transistorised console played a large part in shaping overall sound of Abbey Road, lacking the aggressive edge of the valve consoles. Personnel The Beatles The success of the Beatles meant that EMI gave them carte blanche access to the Abbey Road studios—they were not charged for studio time and could spend as long as they wanted working on music. Starting around 1965 with the Rubber Soul sessions, the Beatles increasingly used the studio as an instrument in itself, spending long hours experimenting and writing. The Beatles demanded a lot from the studio; Lennon allegedly wanted to know why the bass on a certain Wilson Pickett record far exceeded the bass on any Beatles records. This prompted EMI engineer Geoff Emerick to try new techniques for "Paperback Writer". He explains that the song "was the first time the bass sound had been heard in all its excitement ... To get the loud bass sound Paul played a different bass, a Rickenbacker. Then we boosted it further by using a loudspeaker as a microphone. We positioned it directly in front of the bass speaker and the moving diaphragm of the second speaker made the electric current." Combined with this was the conscious desire to be different. McCartney said, "Each time we just want to do something different. After Please Please Me we decided we must do something different for the next song... Why should we ever want to go back? That would be soft." 
The desire to "do something different" pushed EMI's recording technology through overloading the mixing desk as early as 1964 in tracks such as "Eight Days a Week" even at this relatively early date, the track begins with a gradual fade-in, a device which had rarely been employed in rock music. Paul McCartney would create more sophisticated bass lines by overdubbing in counterpoint to Beatles tracks that were previously completed. Also overdubbed vocals were used for new artistic purposes on "Julia" with John Lennon overlapping the end of one vocal phrase with the beginning of his next. On "I Want to Hold Your Hand" (1963) the Beatles innovated using organ sounding guitars which was achieved by extreme compression on Lennon's rhythm guitar. Engineers and other Abbey Road staff have reported that the Beatles would try to take advantage of accidental occurrences in the recording process; "I Feel Fine" and "It's All Too Much"'s feedback and "Long, Long, Long"'s resonating glass bottle (towards the end of the track) are examples of this. In other instances the group deliberately toyed with situations and techniques which would foster chance effects, such as the live (and thereby unpredictable) mixing of a UK radio broadcast into the fade of "I Am the Walrus" or the chaotic assemblage of "Tomorrow Never Knows". The Beatles' song "You Like Me Too Much" has one of the earliest examples of this technique: the Beatles recorded the electric piano through a Hammond B-3's rotating Leslie speaker, a 122 or 122RV, a trick they would come back to over and over again. (At the end of the intro, the switching off of the Leslie is audible.) Also on "Tomorrow Never Knows" the vocal was sent through a Leslie speaker. Although not the first recorded vocal use of a Leslie speaker, the technique would later be used by the Grateful Dead, Cream, The Moody Blues and others. All of the Beatles had Brenell tape recorders at home, which allowed them to record outside of the studio. Some of their home experiments were used at Abbey Road and ended up on finished masters, in particular on "Tomorrow Never Knows". Engineers and producers Norman Smith Session musicians Although strings were commonly used on pop recordings, George Martin's suggestion that a string quartet be used for the recording of "Yesterday" marked a major departure for the Beatles. McCartney recalled playing it to the other Beatles and Starr saying it did not make sense to have drums on the track and Lennon and Harrison saying there was no point having extra guitars. George Martin suggested a solo acoustic guitar and a string quartet. As the Beatles' musical work developed, particularly in the studio, classical instruments were increasingly added to tracks. Lennon recalled the two way education; the Beatles and Martin learning from each other – George Martin asking if they'd heard an oboe and the Beatles saying, "No, which one's that one?" Geoff Emerick documented the change in attitude to pop, as opposed to classical music during the Beatles career. In EMI at the start of the 1960s, balance engineers were either "classical" or "pop". Similarly, Paul McCartney recalled a large "Pop/Classical" switch on the mixing console. Emerick also noted a tension between the classical and pop people - even eating separately in the canteen. The tension was also increased as it was the money from pop sales that paid for the classical sessions. 
Emerick was the engineer on "A Day in the Life", which used a 40-piece orchestra and recalled "dismay" amongst the classical musicians when they were told to improvise between the lowest and highest notes of their instruments (whilst wearing rubber noses). However, Emerick also saw a change in attitude at the end of the recording when everyone present (including the orchestra) broke into spontaneous applause. Emerick recalled the evening as the "passing of the torch" between the old attitudes to pop music and the new. Techniques Guitar feedback Audio feedback was used by composers such as Robert Ashley in the early 60s. Ashley's The Wolfman, which uses feedback extensively, was composed early in 1964, though not heard publicly until the autumn of that year. In the same year as Ashley's feedback experiments, The Beatles song "I Feel Fine", recorded on 18 October, starts with a feedback note produced by plucking the A-note on McCartney's bass guitar, which was picked up on Lennon's semi-acoustic guitar. It was distinguished from its predecessors by a more complex guitar sound, particularly in its introduction, a sustained plucked electric note that after a few seconds swelled in volume and buzzed like an electric razor. This was the very first use of feedback on a rock record. Speaking in one of his last interviews — with the BBC's Andy Peebles — Lennon said this was the first intentional use of feedback on a music record. In The Beatles Anthology series, George Harrison said that the feedback started accidentally when a guitar was placed on an amplifier but that Lennon had worked out how to achieve the effect live on stage. In The Complete Beatles Recording Sessions, Mark Lewisohn states that all the takes of the song included the feedback. The Beatles continued to use feedback on later songs. "It's All Too Much", for instance, begins with sustained guitar feedback. Close miking of acoustic instruments During the recording of "Eleanor Rigby" on 28 April 1966, McCartney said he wanted to avoid "Mancini" strings. To fulfil this brief, Geoff Emerick close-miked the strings—the microphones were almost touching the strings. George Martin had to instruct the players not to back away from the microphones. Microphones began to be placed closer to the instruments in order to produce a fuller sound. Ringo's drums had a large sweater stuffed in the bass drum to 'deaden' the sound while the bass drum microphone was positioned very close, which resulted in the drum being more prominent in the mix. "Eleanor Rigby" features just McCartney and a double string quartet that has the instruments miked so close to the string that 'the musicians were in horror'. In "Got to Get You into My Life", the brass were miked in the bells of their instruments then put through a Fairchild limiter. According to Emerick, in 1966, this was considered a radically new way of recording strings; nowadays it is common practice. Direct input Direct input was first used by the Beatles on 1 February 1967 to record McCartney's bass on "Sgt. Pepper's Lonely Hearts Club Band". With direct input the guitar pick-up is connected to the recording console via an impedance matching DI box. 
Ken Townsend claimed this as the first use anywhere in the world, although Joe Meek, an independent producer from London, is known to have done it earlier (early 1960s) and in America, Motown's engineers had been using Direct Input since the early 1960s for guitars and bass guitars, primarily due to restrictions of space in their small 'Snakepit' recording studio. Tape manipulation Artificial double tracking Artificial double tracking (ADT) was invented by Ken Townsend in 1966, during the recording of Revolver. With the advent of four-track recordings, it became possible to double track vocals whereby the performer sings along with their own previously recorded vocal. Phil McDonald, a member of the studio staff, recalled that Lennon did not really like singing a song twice - it was obviously important to sing exactly the same words with the same phrasing—and after a particularly trying evening of double tracking vocals, Townsend "had an idea" while driving home one evening hearing the sound of the car in front. ADT works by taking the original recording of a vocal part and duplicating it onto a second tape machine which has a variable speed control. The manipulation of the speed of the second machine during playback introduces a delay between the original vocal and the second recording of it, giving the effect of double tracking without having to sing the part twice. The effect had been created "accidentally" earlier, when recording "Yesterday": loudspeakers were used to cue the string quartet and some of McCartney's voice was recorded onto the string track, which can be heard on the final recording. It has been claimed that George Martin's pseudoscientific explanation of ADT ("We take the original image and we split it through a double-bifurcated sploshing flange") given to Lennon originated the phrase flanging in recording, as Lennon would refer to ADT as "Ken's flanger", although other sources claim the term originated from pressing a finger on the tape recorder's tape supply reel (the flange) to make small adjustments to the phase of the copy relative to the original. ADT greatly influenced recording—virtually all the tracks on Revolver and Sgt. Pepper's Lonely Hearts Club Band had the treatment and it is still widely used for instruments and voices. Nowadays, the effect is more often known as automatic double tracking. ADT can be heard on the lead guitar on "Here, There and Everywhere" and the vocals on "Eleanor Rigby" for example. The technique was used later by bands like the Grateful Dead and Iron Butterfly, amongst others. Sampling The Beatles first used samples of other music on "Yellow Submarine", the samples being added on 1 June 1966. The brass band solo was constructed from a Sousa march by George Martin and Geoff Emerick, the original solo was in the same key and was transferred to tape, cut into small segments and re-arranged to form a brief solo which was added to the song. A similar technique was used for "Being for the Benefit of Mr. Kite" on 20 February 1967. To try to create the atmosphere of a circus, Martin first proposed the use of a calliope (a steam-driven organ). Such was the power of the Beatles within EMI that phone calls were made to see if a calliope could be hired and brought into the studio. However, only automatic calliopes, controlled by punched cards, were available, so other techniques had to be used. 
Martin came up with taking taped samples from several steam organ pieces, cutting them into short lengths, "throwing them in the air" and splicing them together. It took two trials; in the first attempt, the pieces coincidentally came back in more or less original order. More obvious, and therefore more influential samples were used on "I Am the Walrus"—a live BBC Third Programme broadcast of King Lear was mixed into the track on 29 September 1967. McCartney has also described a lost opportunity of live sampling: the EMI studio was set up in such a way that the echo track from the echo chamber could be picked up in any of the control rooms. Paul Jones was recording in a studio whilst "I Am the Walrus" was being mixed and the Beatles were tempted to "nick" (steal) some of Jones's singing to put into the mix. Synchronising tape machines One way of increasing the number of tracks available for recording is to synchronise tape machines together. Since the early 1970s SMPTE timecode has been used to synchronise tape machines. Modern SMPTE timecode controlled recorders provide a mechanism so that the second machine will automatically position the tape correctly and start and stop simultaneously with the master machine. However, in 1967, SMPTE timecode was not available and other techniques had to be used. On 10 February 1967 during the recording of "A Day in the Life", Ken Townsend synchronised two machines so that extra tracks were available for recording the orchestra. Speaking in an interview with Australia's ABC, Geoff Emerick described the technique; EMI tape machines' speed could be controlled using an external speed controller which adjusted the frequency of the mains supply to the motor. By using the same controller to control two machines, they were synchronised. Townsend thereby effectively used pilottone, a technique that was common in 16mm news gathering whereby a 50/60 Hz tone was sent from the movie camera to a tape recorder during filming in order to achieve lip-synch sound recording. With the simple tone used for "A Day in the Life", the start position was marked with a wax pencil on the two machines and the tape operator had to align the tapes by eye and attempt to press play and record simultaneously for each take. Although the technique was reasonably successful, Townsend recalled that when they tried to use the tape on a different machine, the synchronisation was sometimes lost. George Martin claimed this as the first time tape machines had been synchronised, although SMPTE synchronisation for video/audio synchronisation was developed around 1967. Backwards tapes As the Beatles pioneered the use of musique concrète in pop music (i.e. the sped-up tape loops in "Tomorrow Never Knows"), backward recordings came as a natural exponent of this experimentation. "Rain", the first rock song featuring a backwards vocal (Lennon singing the first verse of the song), came about when Lennon (claiming the influence of marijuana) accidentally loaded a reel-to-reel tape of the song on his machine backwards and essentially liked what he heard so much he quickly had the reversed overdub. A quick follow-up was the reversed guitar on "I'm Only Sleeping", which features a dual guitar solo by George Harrison played backwards. Harrison worked out a guitar part, learned to play the part in reverse, and recorded it backwards. Likewise, a backing track of reversed drums and cymbals made its way into the verses of "Strawberry Fields Forever". 
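The varispeed delay at the heart of ADT, described in the "Artificial double tracking" section above, can be approximated digitally: a copy of the signal is played back with a small, slowly drifting delay and mixed with the original. The following sketch is only an illustrative modern analogue of the tape process, not the historical method; the delay range, modulation rate and function names are assumptions.

```python
import numpy as np

def adt(signal, sample_rate, base_delay_ms=25.0, depth_ms=5.0, rate_hz=0.3):
    """Mix a signal with a copy delayed by a slowly varying amount, imitating
    the varispeed second tape machine used for ADT. Parameter values are
    illustrative guesses, not the original studio settings."""
    n = np.arange(len(signal))
    # Slowly drifting delay (milliseconds), converted to fractional samples.
    delay_ms = base_delay_ms + depth_ms * np.sin(2 * np.pi * rate_hz * n / sample_rate)
    src = n - delay_ms * sample_rate / 1000.0   # fractional read positions
    src = np.clip(src, 0, len(signal) - 1)
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, len(signal) - 1)
    frac = src - lo
    delayed = (1 - frac) * signal[lo] + frac * signal[hi]  # linear interpolation
    return 0.5 * (signal + delayed)              # equal mix of original and copy

# Example: apply the effect to one second of a 220 Hz test tone.
sr = 44100
t = np.arange(sr) / sr
doubled = adt(np.sin(2 * np.pi * 220 * t), sr)
```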
The Beatles' well-known use of reversed tapes led to rumours of backwards messages, including many that fueled the Paul is Dead urban myth. However, only "Rain" and "Free as a Bird" include intentional reversed vocals in Beatles songs. The stereo version of George Harrison's "Blue Jay Way" (1967, Magical Mystery Tour) also includes backwards vocals, which is actually a backwards copy of the entire mix, including all instruments, which is faded up at the end of each phrase. See also Outline of the Beatles The Beatles timeline References Bibliography Recording technology British music industry Sound recording technology Music production
Recording practices of the Beatles
[ "Technology" ]
3,665
[ "Recording devices", "Sound recording technology" ]
8,978,043
https://en.wikipedia.org/wiki/Loop-O-Plane
The Loop-O-Plane is an amusement park ride that originated in America. It was invented by Lee Eyerly and manufactured by the Eyerly Aircraft Company of Salem, Oregon, in 1933. The ride was immediately popular with customers and became a staple of amusement parks. The ride was imported into Europe, where it was first used in the UK in 1937. The ride has two 16-foot-long arms, each with an enclosed car at one end and a counterweight at the other. Each car holds four riders seated in pairs facing opposite directions, giving a maximum occupancy of eight riders. Propelled by an electric motor, the arms swing in opposite directions until they 'loop', taking the riders upside down. The minimum rider height requirement is 46 inches. An updated version of this ride, known as the Roll-O-Plane, also exists. Some of the surviving machines were also converted into a variation named the Rock-O-Plane. Ride locations A partial list containing both open and closed rides and their locations follows. Green Machine (Eyerly Loop-O-Plane) - Hydro Free Fair - Hydro, Oklahoma Loop-O-Plane - Keansburg Amusement Park, Keansburg, New Jersey Loop-O-Plane - Idora Park - Youngstown, Ohio Loop-O-Plane - Kennywood - West Mifflin, Pennsylvania Loop-O-Plane - Lagoon - Farmington, Utah Loop-O-Plane - Lakeside Amusement Park - Lakeside, Colorado Bullet - Miracle Strip Amusement Park - Panama City Beach, Florida References External links The Flat Joint Loop-O-Plane page with photos Amusement rides
Loop-O-Plane
[ "Physics", "Technology" ]
328
[ "Physical systems", "Machines", "Amusement rides" ]
8,978,246
https://en.wikipedia.org/wiki/Stone%20sealer
Stone sealing is the application of a surface treatment to products constructed of natural stone to retard staining and corrosion. All bulk natural stone is riddled with interconnected capillary channels that permit penetration by liquids and gases. This is true for igneous rock types such as granite and basalt, metamorphic rocks such as marble and slate, and sedimentary rocks such as limestone, travertine, and sandstone. These porous channels act like a sponge, and capillary action draws in liquids over time, along with any dissolved salts and other solutes. Very porous stone, such as sandstone absorb liquids relatively quickly, while denser igneous stones such as granite are significantly less porous; they absorb smaller volumes, and more slowly, especially when absorbing viscous liquids. Motivation Natural stone is used in kitchens, floors, walls, bathrooms, dining rooms, around swimming pools, building foyers, public areas and facades. Since ancient times, stone has been popular for building and decorative purposes. It has been valued for its strength, durability, and insulation properties. It can be cut, cleft, or sculpted to shape as required, and the variety of natural stone types, textures, and colors provide an exceptionally versatile range of building materials. The porosity and makeup of most stone does, however, leave it prone to certain types of damage if unsealed. Staining is the most common form of damage. It is the result of oils or other liquids penetrating deeply into the capillary channels and depositing material that is effectively impossible to remove without destroying the stone. Salt Attack occurs when salts dissolved in water are carried into the stone. The two commonest effects are efflorescence and spalling. Salts that expand on crystallization in capillary gaps can cause surface spalling. For example, various magnesium and calcium salts in sea water expand considerably on drying by taking on water of crystallization. However, even sodium chloride, which does not include water of crystallization, can exert considerable expansive forces as its crystals grow. Efflorescence is the formation of a gritty deposit, commonly white, on the surface. Efflorescence is usually the result of mineral solutions in the capillary channels being drawn to the surface. If the water evaporates, the minerals remain as the so-called efflorescence. It also can be the result of chemical reaction; if badly prepared cement-based mortar is applied to maintain the stone in position, free calcium hydroxide may leach out. In the open air the lime reacts with carbon dioxide to form water-insoluble calcium carbonate that might take the form of powdery efflorescence or dripstone-like crusting. Acid Attack. Acid-soluble stone materials such as the calcite in marble, limestone and travertine, as well as the internal cement that binds the resistant grains in sandstone, react with acidic solutions on contact, or on absorbing acid-forming gases in polluted air, such as oxides of sulfur or nitrogen. Acid erodes the stone, leaving dull marks on polished surfaces. In time it may cause deep pitting, eventually totally obliterating the forms of statues, memorials and other sculptures. Even mild household acids, including cola, wine, vinegar, lemon juice and milk, can damage vulnerable types of stone. The milder the acid, the longer it takes to etch calcite-based stone; stronger acids can cause irreparable damage in seconds. 
Picture Framing occurs when water or grout moves into the edges of the stone to create an unsightly darkening or "frame" effect. Such harm is usually irreversible. Freeze-thaw Spalling results when water freezes in the surface pores. The general term is Frost weathering. The water expands on freezing, causing the stone to spall, crumble, or even to crack through. Protection of stone The longevity and usefulness of stone can be extended by sealing its surface effectively, so as to exclude harmful liquids and gases. The ancient Romans often used olive oil to seal their stone. Such treatment provides some protection by excluding water and other weathering agents, but it stains the stone permanently. During the Renaissance, Europeans experimented with topical varnishes and sealants made from ingredients such as egg white, natural resins and silica, which were clear, could be applied wet and hardened to form a protective skin. Most such measures did not last long, and some proved harmful in the long run. Modern sealers Modern stone sealers are divided into three broad types: topical sealers, penetrating sealers, and impregnating sealers. Topical sealers Topical sealers are generally made from polyurethanes, acrylics, or natural wax. These sealers may be effective at stopping stains but, being exposed on the surface of the material, they tend to wear out relatively quickly, especially on high-traffic areas of flooring. This type of sealer will significantly change the look and slip resistance of the surface, especially when it is wet. These sealers are not breathable, i.e. they do not allow the escape of water vapour and other gases, and are not effective against salt attack, such as efflorescence and spalling. Penetrating sealers Most penetrating sealers use siliconates, fluoro-polymers and siloxanes, which repel liquids. These sealers penetrate the surface of the stone just enough to anchor the material to the surface. They are generally longer lasting than topical sealers and often do not substantially alter the look of the stone, but they can still change the slip characteristics of the surface and do wear relatively quickly. Penetrating sealers often require the use of special cleaners which both clean and top up the repellent ingredient left on the stone surface. These sealers are often breathable to a certain degree, but do not penetrate deeply enough (generally less than 1 mm) to be effective against salt attack, such as efflorescence and spalling. Impregnating sealers Impregnating sealers use silanes or modified silanes. They are a type of penetrating sealer that penetrates deeply into the material, impregnating it with molecules which bond to the capillary pores and repel water and/or oils from within the material. Some modified silane sealers impregnate deeply enough to protect against salt attack, such as efflorescence, spalling, picture framing and freeze-thaw spalling. Some silane stone sealers based on nanotechnology claim to be resistant to UV light and to the higher pH levels found in new masonry and pointing. A good depth of penetration is also essential for protection from weathering and traffic. See also Dimension stone References Stonemasonry Building materials Coatings
Stone sealer
[ "Physics", "Chemistry", "Engineering" ]
1,392
[ "Building engineering", "Coatings", "Construction", "Stonemasonry", "Materials", "Building materials", "Matter", "Architecture" ]
8,978,415
https://en.wikipedia.org/wiki/Cryogenic%20treatment
A cryogenic treatment is the process of treating workpieces by cooling them to cryogenic temperatures (typically around -300°F / -184°C, or as low as -320°F / -196°C, the boiling point of liquid nitrogen) in order to remove residual stresses and improve wear resistance in steels and other metal alloys, such as aluminum. In addition to seeking enhanced stress relief and stabilization, or wear resistance, cryogenic treatment is also sought for its ability to improve corrosion resistance by precipitating micro-fine eta carbides, which can be measured before and after treatment using a quantimet. The process has a wide range of applications, from industrial tooling to the improvement of musical signal transmission. Some of the benefits of cryogenic treatment include longer part life, less failure due to cracking, improved thermal properties, better electrical properties including lower electrical resistance, a reduced coefficient of friction, less creep and walk, improved flatness, and easier machining. Processes Cryogenic tempering Cryogenic tempering is a two-phase metal treatment consisting of a descent phase and an ascent phase: a cryogenic treatment process (known as "cryogenic processing") in which the material is slowly cooled to ultra-low temperatures (typically around -300°F / -184°C), followed by an optional slow reheating (typically up to +325°F / 162°C). Materials do not "harden" during the temperature descent or ascent; rather, their molecular structures are drawn together more tightly and uniformly through a computer-controlled process that typically uses liquid nitrogen to lower temperatures slowly. Invention History of Cryogenic Processing & Cryogenic Tempering The cryogenic treatment process was invented by Ed Busch (CryoTech) in Detroit, Michigan, in 1966, inspired by NASA research. CryoTech merged with 300 Below, Inc. in 2000 to become the world's largest and oldest commercial cryogenic processing company, after Peter Paulin of Decatur, Illinois, had collaborated with process control engineers to invent the world's first computer-controlled "dry" cryogenic processor in 1992 (for which he was featured on the Discovery Channel's Next Step TV show). The industry initially submerged metal parts in liquid nitrogen by dunking them or pouring liquid nitrogen over the parts, but the earliest results proved inconsistent. This led Mr. Paulin to develop 300 Below's "dry" computer-controlled cryogenic processing equipment, which introduces liquid nitrogen into the chamber above its boiling point, in a "dry" gaseous state, so that parts are not thermally shocked by direct liquid contact at ultra-low temperatures and treatment results remain consistent and accurate across every processing run. A "dry" cryogenic process does not submerge parts in liquid, but rather ensures that temperatures are lowered slowly, at less than one degree per minute, using short bursts of cold gas introduced via a solenoid-metered pipe controlled by computer equipment paired with highly accurate RTD (Resistance Temperature Detector) sensors. Science Behind Dry Cryogenic Processing & Cryogenic Tempering Because all changes to the metal take place on the quench, the initial descent phase on its own is called cryogenic processing; when a second phase is added to heat the material after the initial molecular re-alignment, the two phases together are called cryogenic tempering.
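The two-phase descent/soak/ascent profile described above can be sketched as a simple setpoint schedule. This is only an illustration of the kind of ramp a controller might follow, assuming the sub-one-degree-per-minute rate mentioned above and the -300°F and +325°F targets; the exact ramp rate, the soak length and all names are assumptions, not a vendor recipe.

```python
# Illustrative setpoint schedule for cryogenic tempering: slow descent,
# cold soak, optional slow heat-tempering ascent. Values are assumptions.

def tempering_profile(start_f=70.0, cold_f=-300.0, temper_f=325.0,
                      ramp_f_per_min=0.75, soak_min=24 * 60):
    """Return a list of (minute, setpoint_F) pairs for descent, soak and ascent."""
    profile = []
    minute = 0
    temp = start_f
    while temp > cold_f:                     # descent phase, < 1 degree F per minute
        profile.append((minute, temp))
        temp = max(cold_f, temp - ramp_f_per_min)
        minute += 1
    for _ in range(soak_min):                # cold soak at the dwell temperature
        profile.append((minute, cold_f))
        minute += 1
    while temp < temper_f:                   # optional heat-tempering ascent
        profile.append((minute, temp))
        temp = min(temper_f, temp + ramp_f_per_min)
        minute += 1
    profile.append((minute, temp))
    return profile

schedule = tempering_profile()
print(len(schedule), "one-minute setpoints, ending at", schedule[-1][1], "F")
```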
By using liquid nitrogen, the temperature can go as low as −196 °C, though the typical dwell temperature of cryogenic processing equipment is slightly above the boiling point of liquid nitrogen (closer to -300°F / -184°C) due to being injected into the processing chamber as a gaseous state and making every attempt not to introduce liquid into the chamber that could cause parts to become thermally shocked. Cryogenic processing (and especially cryogenic tempering) can have a profound effect on the mechanical properties of certain materials, such as steels or tungsten carbide, but the heating phase in cryogenic tempering is typically omitted for softer metals like brass in musical instruments, for piano strings, in certain aerospace applications, or for sensitive electronic components like vacuum tubes and transistors in high-end audio equipment. In tungsten carbide (WC-Co), the crystal structure of cobalt is transformed from softer FCC to harder HCP phase whereas the hard tungsten carbide particle is unaffected by the treatment. Applications of cryogenic processing Aerospace & Defense: communication, optical housings, satellites, weapons platforms, guidance systems, landing systems. Automotive: brake rotors, transmissions, clutches, brake parts, rods, crank shafts, camshafts axles, bearings, ring and pinion, heads, valve trains, differentials, springs, nuts, bolts, washers. Cutting tools: cutters, knives, blades, drill bits, end mills, turning or milling inserts. Cryogenic treatments of cutting tools can be classified as Deep Cryogenic Treatments (around -196 °C) or Shallow Cryogenic Treatments (around -80 °C). Forming tools: roll form dies, progressive dies, stamping dies. Mechanical industry: pumps, motors, nuts, bolts, washers. Medical: tooling, scalpels. Motorsports and Fleet Vehicles: See Automotive for brake rotors and other automotive components. Musical: Vacuum tubes, Audio cables, brass instruments, guitar strings and fret wire, piano wire, amplifiers, magnetic pickups, cables, connectors. Sports: Firearms, knives, fishing equipment, auto racing, tennis rackets, golf clubs, mountain climbing gear, archery, skiing, aircraft parts, high pressure lines, bicycles, motor cycles. Cryogenic machining Cryogenic machining is a machining process where the traditional flood lubro-cooling liquid (an emulsion of oil into water) is replaced by a jet of either liquid nitrogen (LN2) or pre-compressed carbon dioxide (). Cryogenic machining is useful in rough machining operations, in order to increase the tool life. It can also be useful to preserve the integrity and quality of the machined surfaces in finish machining operations. Cryogenic machining tests have been performed by researchers for several decades, but the actual commercial applications are still limited to very few companies. Both cryogenic machining by turning and milling are possible. Cryogenic machining is a relatively new technique in machining. This concept was applied on various machining processes such as turning, milling, drilling etc. Cryogenic turning technique is generally applied on three major groups of workpiece materials—superalloys, ferrous metals, and viscoelastic polymers/elastomers. The roles of cryogen in machining different materials are unique. Cryogenic deflashing Cryogenic deburring Cryogenic rolling Cryogenic rolling or , is one of the potential techniques to produce nanostructured bulk materials from its bulk counterpart at cryogenic temperatures. 
It can be defined as rolling that is carried out at cryogenic temperatures. Nanostructured materials are produced chiefly by severe plastic deformation processes. The majority of these methods require large plastic deformations (strains much larger than unity). In case of cryorolling, the deformation in the strain hardened metals is preserved as a result of the suppression of the dynamic recovery. Hence large strains can be maintained and after subsequent annealing, ultra-fine-grained structure can be produced. Advantages Comparison of cryorolling and rolling at room temperature: In cryorolling, the strain hardening is retained up to the extent to which rolling is carried out. This implies that there will be no dislocation annihilation and dynamic recovery. Where as in rolling at room temperature, dynamic recovery is inevitable and softening takes place. The flow stress of the material differs for the sample which is subjected to cryorolling. A cryorolled sample has a higher flow stress compared to a sample subjected to rolling at room temperature. Cross slip and climb of dislocations are effectively suppressed during cryorolling leading to high dislocation density which is not the case for room temperature rolling. The corrosion resistance of the cryorolled sample comparatively decreases due to the high residual stress involved. The number of electron scattering centres increases for the cryorolled sample and hence the electrical conductivity decreases significantly. The cryorolled sample shows a high dissolution rate. Ultra-fine-grained structures can be produced from cryorolled samples after subsequent annealing. Cryogenic treatment in specific materials Stainless steel The torsional and tensional deformation under cryogenic temperature of stainless steel is found to be significantly enhance the mechanical strength while incorporating the gradual phase transformation inside the steel. This strength improvement is the result of following phenomenon. The deformation induced phase transformation into martensitic phase which is stronger body centered cubic phase. The torsional and tensional deformation induces higher volume ratio of martensitic phase near the edge to prevent initial mechanical failure from the surface The torsional deformation creates the gradient phase transformation along the radial direction protecting large hydrostatic tension The high deformation triggers dislocation plasticity in martensitic phase to enhance overall ductility and tensile strength Copper Zhang et al. exploited the cryorolling to the dynamic plastic deformed copper at liquid nitrogen temperature (LNT-DPD) to greatly enhance tensile strength with high ductility. The key of this combined approach (Cryogenic hardening and Cryogenic rolling) is to engineer the nano-sized twin boundary embedded in the copper. Processing with the plastic deformation of grained bulk metal decreases the size of the grain boundary and enhances the grain boundary strengthening. However, as the grain gets smaller, the interaction between grain and the dislocation inside impedes further process of grains. Among the grain boundaries, it is known that the twin boundaries, a special type of low-energy grain boundary has lower interaction energy with dislocation leading to much smaller saturation size of the grain. The cryogenic dynamic plastic deformation creates higher fraction of the twin boundaries compared to the severe plastic deformation. 
Following cryorolling further reduces the grain boundary energy with relieving the twin boundary leading to higher Hall-Petch strengthening effect. In addition, this increases the ability of grain boundary to accommodate more dislocation leading to the improvement in ductility from cryorolling. Titanium Cryogenic hardening of Titanium is hard to manipulate compare to other face centered cubic (fcc) metals because these hexagonal close packed (hcp) metals has less symmetry and slip systems to exploit. Recently Zhao et al. introduced the efficient method to manipulate nanotwinned titanium which has higher strength, ductility and thermal stability. By cryoforging repetitively along the three principal axes in liquid nitrogen and following annealing process, pure Titanium can possess hierarchical twin boundary network structure which suppresses the motion of dislocation and significantly enhances its mechanical property. The microstructure analysis found that the repeated twinning and de-twinning keep increasing the fraction of nanosized twin boundaries and refining the grains to render much higher Hall-Petch strengthening effect even after the saturation of microscale twin boundary at high flow stress. Especially, the strength and ductility of nanotwinned titanium at 77 K, reaches about 2 GPa, and ~100% which far outweighs those of conventional cryogenic steels even without any inclusion of alloying. References External links Cryogenics Society of America CSA Cryogenic Treatment Database of Research Articles 300 Below - Founder of Commercial Cryogenic Industry (Since 1966) Understanding how Deep Cryogenics works, and what applications are most effective Cryogenics Metal forming Metal heat treatments
Cryogenic treatment
[ "Physics", "Chemistry" ]
2,337
[ "Metallurgical processes", "Metal heat treatments", "Applied and interdisciplinary physics", "Cryogenics" ]
8,978,620
https://en.wikipedia.org/wiki/Elite%20Residence
Elite Residence is a supertall skyscraper in Dubai, United Arab Emirates in the Dubai Marina district, overlooking one of the human-made palm islands, Palm Jumeirah. The building is tall and has 87 floors. Of the 91 floors, 76 are for 695 apartments and the other 15 include amenities such as car-parking, swimming pools, spas, reception areas, health clubs, a business centre and a gymnasium. The skyscraper has 695 apartments and 12 elevators. The tower was the third-tallest residential building in the world when completed on 21 January 2012. As of 2022, it is the eighth-tallest residential building in the world. See also List of tallest buildings in Dubai List of tallest buildings in the United Arab Emirates References External links Developer website Community website Residential skyscrapers in Dubai 2012 establishments in the United Arab Emirates Residential buildings completed in 2012 Residential skyscrapers High-tech architecture Postmodern architecture
Elite Residence
[ "Engineering" ]
186
[ "Postmodern architecture", "Architecture" ]
8,978,774
https://en.wikipedia.org/wiki/Quantum%20nonlocality
In theoretical physics, quantum nonlocality refers to the phenomenon by which the measurement statistics of a multipartite quantum system do not allow an interpretation with local realism. Quantum nonlocality has been experimentally verified under a variety of physical assumptions. Quantum nonlocality does not allow for faster-than-light communication, and hence is compatible with special relativity and its universal speed limit of objects. Thus, quantum theory is local in the strict sense defined by special relativity and, as such, the term "quantum nonlocality" is sometimes considered a misnomer. Still, it prompts many of the foundational discussions concerning quantum theory. History Einstein, Podolsky and Rosen In the 1935 EPR paper, Albert Einstein, Boris Podolsky and Nathan Rosen described "two spatially separated particles which have both perfectly correlated positions and momenta" as a direct consequence of quantum theory. They intended to use the classical principle of locality to challenge the idea that the quantum wavefunction was a complete description of reality, but instead they sparked a debate on the nature of reality. Afterwards, Einstein presented a variant of these ideas in a letter to Erwin Schrödinger, which is the version that is presented here. The state and notation used here are more modern, and akin to David Bohm's take on EPR. The quantum state of the two particles prior to measurement can be written as where . Here, subscripts “A” and “B” distinguish the two particles, though it is more convenient and usual to refer to these particles as being in the possession of two experimentalists called Alice and Bob. The rules of quantum theory give predictions for the outcomes of measurements performed by the experimentalists. Alice, for example, will measure her particle to be spin-up in an average of fifty percent of measurements. However, according to the Copenhagen interpretation, Alice's measurement causes the state of the two particles to collapse, so that if Alice performs a measurement of spin in the z-direction, that is with respect to the basis , then Bob's system will be left in one of the states . Likewise, if Alice performs a measurement of spin in the x-direction, that is, with respect to the basis , then Bob's system will be left in one of the states . Schrödinger referred to this phenomenon as "steering". This steering occurs in such a way that no signal can be sent by performing such a state update; quantum nonlocality cannot be used to send messages instantaneously and is therefore not in direct conflict with causality concerns in special relativity. In the Copenhagen view of this experiment, Alice's measurement—and particularly her measurement choice—has a direct effect on Bob's state. However, under the assumption of locality, actions on Alice's system do not affect the "true", or "ontic" state of Bob's system. We see that the ontic state of Bob's system must be compatible with one of the quantum states or , since Alice can make a measurement that concludes with one of those states being the quantum description of his system. At the same time, it must also be compatible with one of the quantum states or for the same reason. Therefore, the ontic state of Bob's system must be compatible with at least two quantum states; the quantum state is therefore not a complete descriptor of his system. 
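For reference, the state and basis expansions referred to above were omitted from this copy of the text. In the usual Bohm presentation of the EPR argument they read as follows; this is a reconstruction assuming the standard notation, and the original article's symbols may differ.

```latex
% Bohm singlet state shared by Alice (A) and Bob (B), written in the z-basis
% {|\uparrow\rangle, |\downarrow\rangle}:
\left|\psi\right\rangle_{AB}
  = \frac{1}{\sqrt{2}}\left(\left|\uparrow\right\rangle_A\left|\downarrow\right\rangle_B
  - \left|\downarrow\right\rangle_A\left|\uparrow\right\rangle_B\right),
\qquad
% and the same state re-expressed in the x-basis, with
% |\rightarrow\rangle = (|\uparrow\rangle + |\downarrow\rangle)/\sqrt{2} and
% |\leftarrow\rangle  = (|\uparrow\rangle - |\downarrow\rangle)/\sqrt{2}:
\left|\psi\right\rangle_{AB}
  = \frac{1}{\sqrt{2}}\left(\left|\leftarrow\right\rangle_A\left|\rightarrow\right\rangle_B
  - \left|\rightarrow\right\rangle_A\left|\leftarrow\right\rangle_B\right).
```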
Einstein, Podolsky and Rosen saw this as evidence of the incompleteness of the Copenhagen interpretation of quantum theory, since the wavefunction is explicitly not a complete description of a quantum system under this assumption of locality. Their paper concludes: Although various authors (most notably Niels Bohr) criticised the ambiguous terminology of the EPR paper, the thought experiment nevertheless generated a great deal of interest. Their notion of a "complete description" was later formalised by the suggestion of hidden variables that determine the statistics of measurement results, but to which an observer does not have access. Bohmian mechanics provides such a completion of quantum mechanics, with the introduction of hidden variables; however the theory is explicitly nonlocal. The interpretation therefore does not give an answer to Einstein's question, which was whether or not a complete description of quantum mechanics could be given in terms of local hidden variables in keeping with the "Principle of Local Action". Bell inequality In 1964 John Bell answered Einstein's question by showing that such local hidden variables can never reproduce the full range of statistical outcomes predicted by quantum theory. Bell showed that a local hidden variable hypothesis leads to restrictions on the strength of correlations of measurement results. If the Bell inequalities are violated experimentally as predicted by quantum mechanics, then reality cannot be described by local hidden variables and the mystery of quantum nonlocal causation remains. However, Bell notes that the non-local hidden variable model of Bohm are different: Clauser, Horne, Shimony and Holt (CHSH) reformulated these inequalities in a manner that was more conducive to experimental testing (see CHSH inequality). In the scenario proposed by Bell (a Bell scenario), two experimentalists, Alice and Bob, conduct experiments in separate labs. At each run, Alice (Bob) conducts an experiment in her (his) lab, obtaining outcome . If Alice and Bob repeat their experiments several times, then they can estimate the probabilities , namely, the probability that Alice and Bob respectively observe the results when they respectively conduct the experiments x,y. In the following, each such set of probabilities will be denoted by just . In the quantum nonlocality slang, is termed a box. Bell formalized the idea of a hidden variable by introducing the parameter to locally characterize measurement results on each system: "It is a matter of indifference ... whether λ denotes a single variable or a set ... and whether the variables are discrete or continuous". However, it is equivalent (and more intuitive) to think of as a local "strategy" or "message" that occurs with some probability when Alice and Bob reboot their experimental setup. Bell's assumption of local causality then stipulates that each local strategy defines the distributions of independent outcomes if Alice conducts experiment x and Bob conducts experiment Here () denotes the probability that Alice (Bob) obtains the result when she (he) conducts experiment and the local variable describing her (his) experiment has value (). Suppose that can take values from some set . 
If each pair of values has an associated probability of being selected (shared randomness is allowed, i.e., can be correlated), then one can average over this distribution to obtain a formula for the joint probability of each measurement result: A box admitting such a decomposition is called a Bell local or a classical box. Fixing the number of possible values which can each take, one can represent each box as a finite vector with entries . In that representation, the set of all classical boxes forms a convex polytope. In the Bell scenario studied by CHSH, where can take values within , any Bell local box must satisfy the CHSH inequality: where The above considerations apply to model a quantum experiment. Consider two parties conducting local polarization measurements on a bipartite photonic state. The measurement result for the polarization of a photon can take one of two values (informally, whether the photon is polarized in that direction, or in the orthogonal direction). If each party is allowed to choose between just two different polarization directions, the experiment fits within the CHSH scenario. As noted by CHSH, there exist a quantum state and polarization directions which generate a box with equal to . This demonstrates an explicit way in which a theory with ontological states that are local, with local measurements and only local actions cannot match the probabilistic predictions of quantum theory, disproving Einstein's hypothesis. Experimentalists such as Alain Aspect have verified the quantum violation of the CHSH inequality as well as other formulations of Bell's inequality, to invalidate the local hidden variables hypothesis and confirm that reality is indeed nonlocal in the EPR sense. Possibilistic nonlocality Bell's demonstration is probabilistic in the sense that it shows that the precise probabilities predicted by quantum mechanics for some entangled scenarios cannot be met by a local hidden variable theory. (For short, here and henceforth "local theory" means "local hidden variables theory".) However, quantum mechanics permits an even stronger violation of local theories: a possibilistic one, in which local theories cannot even agree with quantum mechanics on which events are possible or impossible in an entangled scenario. The first proof of this kind was due to Daniel Greenberger, Michael Horne, and Anton Zeilinger in 1993 The state involved is often called the GHZ state. In 1993, Lucien Hardy demonstrated a logical proof of quantum nonlocality that, like the GHZ proof is a possibilistic proof. It starts with the observation that the state defined below can be written in a few suggestive ways: where, as above, . The experiment consists of this entangled state being shared between two experimenters, each of whom has the ability to measure either with respect to the basis or . We see that if they each measure with respect to , then they never see the outcome . If one measures with respect to and the other , they never see the outcomes However, sometimes they see the outcome when measuring with respect to , since This leads to the paradox: having the outcome we conclude that if one of the experimenters had measured with respect to the basis instead, the outcome must have been or , since and are impossible. But then, if they had both measured with respect to the basis, by locality the result must have been , which is also impossible. Nonlocal hidden variable models with a finite propagation speed The work of Bancal et al. 
generalizes Bell's result by proving that correlations achievable in quantum theory are also incompatible with a large class of superluminal hidden variable models. In this framework, faster-than-light signaling is precluded. However, the choice of settings of one party can influence hidden variables at another party's distant location, if there is enough time for a superluminal influence (of finite, but otherwise unknown speed) to propagate from one point to the other. In this scenario, any bipartite experiment revealing Bell nonlocality can just provide lower bounds on the hidden influence's propagation speed. Quantum experiments with three or more parties can, nonetheless, disprove all such non-local hidden variable models. Analogs of Bell’s theorem in more complicated causal structures The random variables measured in a general experiment can depend on each other in complicated ways. In the field of causal inference, such dependencies are represented via Bayesian networks: directed acyclic graphs where each node represents a variable and an edge from a variable to another signifies that the former influences the latter and not otherwise, see the figure. In a standard bipartite Bell experiment, Alice's (Bob's) setting (), together with her (his) local variable (), influence her (his) local outcome (). Bell's theorem can thus be interpreted as a separation between the quantum and classical predictions in a type of causal structures with just one hidden node . Similar separations have been established in other types of causal structures. The characterization of the boundaries for classical correlations in such extended Bell scenarios is challenging, but there exist complete practical computational methods to achieve it. Entanglement and nonlocality Quantum nonlocality is sometimes understood as being equivalent to entanglement. However, this is not the case. Quantum entanglement can be defined only within the formalism of quantum mechanics, i.e., it is a model-dependent property. In contrast, nonlocality refers to the impossibility of a description of observed statistics in terms of a local hidden variable model, so it is independent of the physical model used to describe the experiment. It is true that for any pure entangled state there exists a choice of measurements that produce Bell nonlocal correlations, but the situation is more complex for mixed states. While any Bell nonlocal state must be entangled, there exist (mixed) entangled states which do not produce Bell nonlocal correlations (although, operating on several copies of some of such states, or carrying out local post-selections, it is possible to witness nonlocal effects). Moreover, while there are catalysts for entanglement, there are none for nonlocality. Finally, reasonably simple examples of Bell inequalities have been found for which the quantum state giving the largest violation is never a maximally entangled state, showing that entanglement is, in some sense, not even proportional to nonlocality. Quantum correlations As shown, the statistics achievable by two or more parties conducting experiments in a classical system are constrained in a non-trivial way. Analogously, the statistics achievable by separate observers in a quantum theory also happen to be restricted. The first derivation of a non-trivial statistical limit on the set of quantum correlations, due to B. Tsirelson, is known as Tsirelson's bound. 
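The gap between the classical CHSH bound of 2 and Tsirelson's bound of 2√2 ≈ 2.83 can be checked numerically. The following sketch evaluates the CHSH expression for the singlet state at the standard optimal measurement angles and compares it with the best deterministic local strategy; the angle choices are the usual textbook ones, assumed here for illustration rather than taken from this article.

```python
import numpy as np
from itertools import product

# Pauli matrices and the two-qubit singlet state (in the |00>,|01>,|10>,|11> basis).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):
    """Spin observable along angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def correlator(a_angle, b_angle):
    """E(a, b) = <psi| A (x) B |psi> for the singlet."""
    ab = np.kron(spin(a_angle), spin(b_angle))
    return np.real(singlet.conj() @ ab @ singlet)

# Standard optimal angles (assumed): A0 = 0, A1 = pi/2, B0 = pi/4, B1 = -pi/4.
a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
quantum_chsh = (correlator(a0, b0) + correlator(a0, b1)
                + correlator(a1, b0) - correlator(a1, b1))

# Best deterministic local strategy: each setting is assigned a fixed outcome +-1.
classical_chsh = max(a0v * b0v + a0v * b1v + a1v * b0v - a1v * b1v
                     for a0v, a1v, b0v, b1v in product([-1, 1], repeat=4))

print(abs(quantum_chsh), "vs classical maximum", classical_chsh)  # ~2.828 vs 2
```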
Consider the CHSH Bell scenario detailed before, but this time assume that, in their experiments, Alice and Bob are preparing and measuring quantum systems. In that case, the CHSH parameter can be shown to be bounded by The sets of quantum correlations and Tsirelson’s problem Mathematically, a box admits a quantum realization if and only if there exists a pair of Hilbert spaces , a normalized vector and projection operators such that For all , the sets represent complete measurements. Namely, . , for all . In the following, the set of such boxes will be called . Contrary to the classical set of correlations, when viewed in probability space, is not a polytope. On the contrary, it contains both straight and curved boundaries. In addition, is not closed: this means that there exist boxes which can be arbitrarily well approximated by quantum systems but are themselves not quantum. In the above definition, the space-like separation of the two parties conducting the Bell experiment was modeled by imposing that their associated operator algebras act on different factors of the overall Hilbert space describing the experiment. Alternatively, one could model space-like separation by imposing that these two algebras commute. This leads to a different definition: admits a field quantum realization if and only if there exists a Hilbert space , a normalized vector and projection operators such that For all , the sets represent complete measurements. Namely, . , for all . , for all . Call the set of all such correlations . How does this new set relate to the more conventional defined above? It can be proven that is closed. Moreover, , where denotes the closure of . Tsirelson's problem consists in deciding whether the inclusion relation is strict, i.e., whether or not . This problem only appears in infinite dimensions: when the Hilbert space in the definition of is constrained to be finite-dimensional, the closure of the corresponding set equals . In January 2020, Ji, Natarajan, Vidick, Wright, and Yuen claimed a result in quantum complexity theory that would imply that , thus solving Tsirelson's problem. Tsirelson's problem can be shown equivalent to Connes embedding problem, a famous conjecture in the theory of operator algebras. Characterization of quantum correlations Since the dimensions of and are, in principle, unbounded, determining whether a given box admits a quantum realization is a complicated problem. In fact, the dual problem of establishing whether a quantum box can have a perfect score at a non-local game is known to be undecidable. Moreover, the problem of deciding whether can be approximated by a quantum system with precision is NP-hard. Characterizing quantum boxes is equivalent to characterizing the cone of completely positive semidefinite matrices under a set of linear constraints. For small fixed dimensions , one can explore, using variational methods, whether can be realized in a bipartite quantum system , with , . That method, however, can just be used to prove the realizability of , and not its unrealizability with quantum systems. To prove unrealizability, the most known method is the Navascués–Pironio–Acín (NPA) hierarchy. This is an infinite decreasing sequence of sets of correlations with the properties: If , then for all . If , then there exists such that . For any , deciding whether can be cast as a semidefinite program. The NPA hierarchy thus provides a computational characterization, not of , but of . 
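As a minimal numerical sketch of the CHSH scenario and of Tsirelson's bound discussed above (assuming NumPy and one standard, illustrative choice of singlet-state measurement angles), one can check that no local deterministic strategy exceeds a CHSH value of 2, while suitable quantum measurements on the singlet state reach 2*sqrt(2), roughly 2.83:

```python
import numpy as np
from itertools import product

def chsh(E):
    # E[x][y] is the correlator for settings (x, y)
    return E[0][0] + E[0][1] + E[1][0] - E[1][1]

# Classical side: deterministic strategies assign an outcome in {-1, +1} to each setting.
best_classical = max(
    abs(chsh([[a0 * b0, a0 * b1], [a1 * b0, a1 * b1]]))
    for a0, a1, b0, b1 in product([-1, 1], repeat=4)
)
print("max CHSH over local deterministic boxes:", best_classical)   # 2

# Quantum side: singlet state with spin measurements in the X-Z plane.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def obs(theta):
    return np.cos(theta) * Z + np.sin(theta) * X

def correlator(theta_a, theta_b):
    return np.real(singlet.conj() @ np.kron(obs(theta_a), obs(theta_b)) @ singlet)

a = [0, np.pi / 2]              # Alice's two measurement angles (illustrative choice)
b = [np.pi / 4, -np.pi / 4]     # Bob's two measurement angles (illustrative choice)
E = [[correlator(ax, by) for by in b] for ax in a]
print("quantum CHSH value:", abs(chsh(E)))   # ~2.828, i.e. 2*sqrt(2)
```

Shared randomness only mixes deterministic strategies, so the classical maximum of 2 also bounds every Bell local box.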
If , (as claimed by Ji, Natarajan, Vidick, Wright, and Yuen) then a new method to detect the non-realizability of the correlations in is needed. If Tsirelson's problem was solved in the affirmative, namely, , then the above two methods would provide a practical characterization of . The physics of supra-quantum correlations The works listed above describe what the quantum set of correlations looks like, but they do not explain why. Are quantum correlations unavoidable, even in post-quantum physical theories, or on the contrary, could there exist correlations outside which nonetheless do not lead to any unphysical operational behavior? In their seminal 1994 paper, Popescu and Rohrlich explore whether quantum correlations can be explained by appealing to relativistic causality alone. Namely, whether any hypothetical box would allow building a device capable of transmitting information faster than the speed of light. At the level of correlations between two parties, Einstein's causality translates in the requirement that Alice's measurement choice should not affect Bob's statistics, and vice versa. Otherwise, Alice (Bob) could signal Bob (Alice) instantaneously by choosing her (his) measurement setting appropriately. Mathematically, Popescu and Rohrlich's no-signalling conditions are: Like the set of classical boxes, when represented in probability space, the set of no-signalling boxes forms a polytope. Popescu and Rohrlich identified a box that, while complying with the no-signalling conditions, violates Tsirelson's bound, and is thus unrealizable in quantum physics. Dubbed the PR-box, it can be written as: Here take values in , and denotes the sum modulo two. It can be verified that the CHSH value of this box is 4 (as opposed to the Tsirelson bound of ). This box had been identified earlier, by Rastall and Khalfin and Tsirelson. In view of this mismatch, Popescu and Rohrlich pose the problem of identifying a physical principle, stronger than the no-signalling conditions, that allows deriving the set of quantum correlations. Several proposals followed: Non-trivial communication complexity (NTCC). This principle stipulates that nonlocal correlations should not be so strong as to allow two parties to solve all 1-way communication problems with some probability using just one bit of communication. It can be proven that any box violating Tsirelson's bound by more than is incompatible with NTCC. No Advantage for Nonlocal Computation (NANLC). The following scenario is considered: given a function , two parties are distributed the strings of bits and asked to output the bits so that is a good guess for . The principle of NANLC states that non-local boxes should not give the two parties any advantage to play this game. It is proven that any box violating Tsirelson's bound would provide such an advantage. Information Causality (IC). The starting point is a bipartite communication scenario where one of the parts (Alice) is handed a random string of bits. The second part, Bob, receives a random number . Their goal is to transmit Bob the bit , for which purpose Alice is allowed to transmit Bob bits. The principle of IC states that the sum over of the mutual information between Alice's bit and Bob's guess cannot exceed the number of bits transmitted by Alice. It is shown that any box violating Tsirelson's bound would allow two parties to violate IC. Macroscopic Locality (ML). 
In the considered setup, two separate parties conduct extensive low-resolution measurements over a large number of independently prepared pairs of correlated particles. ML states that any such “macroscopic” experiment must admit a local hidden variable model. It is proven that any microscopic experiment capable of violating Tsirelson's bound would also violate standard Bell nonlocality when brought to the macroscopic scale. Besides Tsirelson's bound, the principle of ML fully recovers the set of all two-point quantum correlators. Local Orthogonality (LO). This principle applies to multipartite Bell scenarios, where parties respectively conduct experiments in their local labs. They respectively obtain the outcomes . The pair of vectors is called an event. Two events , are said to be locally orthogonal if there exists such that and . The principle of LO states that, for any multipartite box, the sum of the probabilities of any set of pair-wise locally orthogonal events cannot exceed 1. It is proven that any bipartite box violating Tsirelson's bound by an amount of violates LO. All these principles can be experimentally falsified under the assumption that we can decide if two or more events are space-like separated. This sets this research program aside from the axiomatic reconstruction of quantum mechanics via Generalized Probabilistic Theories. The works above rely on the implicit assumption that any physical set of correlations must be closed under wirings. This means that any effective box built by combining the inputs and outputs of a number of boxes within the considered set must also belong to the set. Closure under wirings does not seem to enforce any limit on the maximum value of CHSH. However, it is not a void principle: on the contrary, in it is shown that many simple, intuitive families of sets of correlations in probability space happen to violate it. Originally, it was unknown whether any of these principles (or a subset thereof) was strong enough to derive all the constraints defining . This state of affairs continued for some years until the construction of the almost quantum set . is a set of correlations that is closed under wirings and can be characterized via semidefinite programming. It contains all correlations in , but also some non-quantum boxes . Remarkably, all boxes within the almost quantum set are shown to be compatible with the principles of NTCC, NANLC, ML and LO. There is also numerical evidence that almost-quantum boxes also comply with IC. It seems, therefore, that, even when the above principles are taken together, they do not suffice to single out the quantum set in the simplest Bell scenario of two parties, two inputs and two outputs. Device independent protocols Nonlocality can be exploited to conduct quantum information tasks which do not rely on the knowledge of the inner workings of the prepare-and-measurement apparatuses involved in the experiment. The security or reliability of any such protocol just depends on the strength of the experimentally measured correlations . These protocols are termed device-independent. Device-independent quantum key distribution The first device-independent protocol proposed was device-independent quantum key distribution (QKD). In this primitive, two distant parties, Alice and Bob, are distributed an entangled quantum state, that they probe, thus obtaining the statistics . 
Based on how non-local the box happens to be, Alice and Bob estimate how much knowledge an external quantum adversary Eve (the eavesdropper) could possess on the value of Alice and Bob's outputs. This estimation allows them to devise a reconciliation protocol at the end of which Alice and Bob share a perfectly correlated one-time pad of which Eve has no information whatsoever. The one-time pad can then be used to transmit a secret message through a public channel. Although the first security analyses on device-independent QKD relied on Eve carrying out a specific family of attacks, all such protocols have been recently proven unconditionally secure. Device-independent randomness certification, expansion and amplification Nonlocality can be used to certify that the outcomes of one of the parties in a Bell experiment are partially unknown to an external adversary. By feeding a partially random seed to several non-local boxes, and, after processing the outputs, one can end up with a longer (potentially unbounded) string of comparable randomness or with a shorter but more random string. This last primitive can be proven impossible in a classical setting. Device-independent (DI) randomness certification, expansion, and amplification are techniques used to generate high-quality random numbers that are secure against any potential attacks on the underlying devices used to generate random numbers. These techniques have critical applications in cryptography, where high-quality random numbers are essential for ensuring the security of cryptographic protocols. Randomness certification is the process of verifying that the output of a random number generator is truly random and has not been tampered with by an adversary. DI randomness certification does this verification without making assumptions about the underlying devices that generate random numbers. Instead, randomness is certified by observing correlations between the outputs of different devices that are generated using the same physical process. Recent research has demonstrated the feasibility of DI randomness certification using entangled quantum systems, such as photons or electrons. Randomness expansion is taking a small amount of initial random seed and expanding it into a much larger sequence of random numbers. In DI randomness expansion, the expansion is done using measurements of quantum systems that are prepared in a highly entangled state. The security of the expansion is guaranteed by the laws of quantum mechanics, which make it impossible for an adversary to predict the expansion output. Recent research has shown that DI randomness expansion can be achieved using entangled photon pairs and measurement devices that violate a Bell inequality. Randomness amplification is the process of taking a small amount of initial random seed and increasing its randomness by using a cryptographic algorithm. In DI randomness amplification, this process is done using entanglement properties and quantum mechanics. The security of the amplification is guaranteed by the fact that any attempt by an adversary to manipulate the algorithm's output will inevitably introduce errors that can be detected and corrected. Recent research has demonstrated the feasibility of DI randomness amplification using quantum entanglement and the violation of a Bell inequality. 
DI randomness certification, expansion, and amplification are powerful techniques for generating high-quality random numbers that are secure against any potential attacks on the underlying devices used to generate random numbers. These techniques have critical applications in cryptography and are likely to become increasingly crucial as quantum computing technology advances. In addition, a milder approach called semi-DI exists where random numbers can be generated with some assumptions on the working principle of the devices, environment, dimension, energy, etc., in which it benefits from ease-of-implementation and high generation rate. Self-testing Sometimes, the box shared by Alice and Bob is such that it only admits a unique quantum realization. This means that there exist measurement operators and a quantum state giving rise to such that any other physical realization of is connected to via local unitary transformations. This phenomenon, that can be interpreted as an instance of device-independent quantum tomography, was first pointed out by Tsirelson and named self-testing by Mayers and Yao. Self-testing is known to be robust against systematic noise, i.e., if the experimentally measured statistics are close enough to , one can still determine the underlying state and measurement operators up to error bars. Dimension witnesses The degree of non-locality of a quantum box can also provide lower bounds on the Hilbert space dimension of the local systems accessible to Alice and Bob. This problem is equivalent to deciding the existence of a matrix with low completely positive semidefinite rank. Finding lower bounds on the Hilbert space dimension based on statistics happens to be a hard task, and current general methods only provide very low estimates. However, a Bell scenario with five inputs and three outputs suffices to provide arbitrarily high lower bounds on the underlying Hilbert space dimension. Quantum communication protocols which assume a knowledge of the local dimension of Alice and Bob's systems, but otherwise do not make claims on the mathematical description of the preparation and measuring devices involved are termed semi-device independent protocols. Currently, there exist semi-device independent protocols for quantum key distribution and randomness expansion. See also Action at a distance Popper's experiment Quantum pseudo-telepathy Quantum contextuality Quantum foundations References Further reading Nonlocality Nonlocality
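Referring back to the Popescu-Rohrlich box of the supra-quantum correlations section, a short sketch (with outcomes and settings encoded as bits, an illustrative convention) can verify that the box is no-signalling yet reaches the algebraic CHSH maximum of 4, above Tsirelson's bound:

```python
from itertools import product

# PR box: P(a, b | x, y) = 1/2 if a XOR b equals x*y, and 0 otherwise (a, b, x, y in {0, 1}).
def pr_box(a, b, x, y):
    return 0.5 if (a ^ b) == (x & y) else 0.0

# No-signalling: each party's marginal must be independent of the other party's setting.
for x, a in product((0, 1), repeat=2):
    assert sum(pr_box(a, b, x, 0) for b in (0, 1)) == sum(pr_box(a, b, x, 1) for b in (0, 1))
for y, b in product((0, 1), repeat=2):
    assert sum(pr_box(a, b, 0, y) for a in (0, 1)) == sum(pr_box(a, b, 1, y) for a in (0, 1))
print("no-signalling conditions hold")

# Correlators with outcomes mapped to +/-1, then the CHSH combination.
def E(x, y):
    return sum((-1) ** (a + b) * pr_box(a, b, x, y) for a in (0, 1) for b in (0, 1))

S = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
print("CHSH value of the PR box:", S)   # 4.0, above Tsirelson's bound of about 2.83
```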
Quantum nonlocality
[ "Physics" ]
5,932
[ "Quantum field theory", "Quantum measurement", "Quantum mechanics" ]
8,978,968
https://en.wikipedia.org/wiki/Graceful%20labeling
In graph theory, a graceful labeling of a graph with edges is a labeling of its vertices with some subset of the integers from 0 to inclusive, such that no two vertices share a label, and each edge is uniquely identified by the absolute difference between its endpoints, such that this magnitude lies between 1 and inclusive. A graph which admits a graceful labeling is called a graceful graph. The name "graceful labeling" is due to Solomon W. Golomb; this type of labeling was originally given the name β-labeling by Alexander Rosa in a 1967 paper on graph labelings. A major open problem in graph theory is the graceful tree conjecture or Ringel–Kotzig conjecture, named after Gerhard Ringel and Anton Kotzig, and sometimes abbreviated GTC (not to be confused with Kotzig's conjecture on regularly path connected graphs). It hypothesizes that all trees are graceful. It is still an open conjecture, although a related but weaker conjecture known as "Ringel's conjecture" was partially proven in 2020. Kotzig once called the effort to prove the conjecture a "disease". Another weaker version of graceful labelling is near-graceful labeling, in which the vertices can be labeled using some subset of the integers on such that no two vertices share a label, and each edge is uniquely identified by the absolute difference between its endpoints (this magnitude lies on ). Another conjecture in graph theory is Rosa's conjecture, named after Alexander Rosa, which says that all triangular cacti are graceful or nearly-graceful. A graceful graph with edges 0 to is conjectured to have no fewer than vertices, due to sparse ruler results. This conjecture has been verified for all graphs with 213 or fewer edges. A related conjecture is that the smallest 2-valence graceful graph has edges, with the case for 6-valence shown below. Selected results In his original paper, Rosa proved that an Eulerian graph with number of edges m ≡ 1 (mod 4) or m ≡ 2 (mod 4) cannot be graceful. Also in his original paper, Rosa proved that the cycle Cn is graceful if and only if n ≡ 0 (mod 4) or n ≡ 3 (mod 4). All path graphs and caterpillar graphs are graceful. All lobster graphs with a perfect matching are graceful. All trees with at most 27 vertices are graceful; this result was shown by Aldred and McKay in 1998 using a computer program. This was extended to trees with at most 29 vertices in the Honours thesis of Michael Horton. Another extension of this result up to trees with 35 vertices was claimed in 2010 by the Graceful Tree Verification Project, a distributed computing project led by Wenjie Fang. All wheel graphs, web graphs, helm graphs, gear graphs, and rectangular grids are graceful. All n-dimensional hypercubes are graceful. All simple connected graphs with four or fewer vertices are graceful. The only non-graceful simple connected graphs with five vertices are the 5-cycle (pentagon); the complete graph K5; and the butterfly graph. See also Edge-graceful labeling List of conjectures References External links Numberphile video about graceful tree conjecture Graceful labeling in mathworld Further reading (K. Eshghi) Introduction to Graceful Graphs, Sharif University of Technology, 2002. (U. N. Deshmukh and Vasanti N. Bhat-Nayak), New families of graceful banana trees – Proceedings Mathematical Sciences, 1996 – Springer (M. Haviar, M. Ivaska), Vertex Labellings of Simple Graphs, Research and Exposition in Mathematics, Volume 34, 2015. 
(Ping Zhang), A Kaleidoscopic View of Graph Colorings, SpringerBriefs in Mathematics, 2016 – Springer Graph theory objects Conjectures
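As a concrete illustration of the definition, a brute-force search over labelings can find graceful labelings of small graphs; the sketch below is a minimal, unoptimized example, and the path graph used as input is an arbitrary choice:

```python
from itertools import permutations

def graceful_labelings(vertices, edges):
    """Yield vertex labelings (dicts) that are graceful for the given graph.

    A graceful labeling assigns distinct labels from {0, ..., m} (m = number of edges)
    to the vertices so that the m edge labels |f(u) - f(v)| are exactly 1, ..., m.
    """
    m = len(edges)
    for labels in permutations(range(m + 1), len(vertices)):
        f = dict(zip(vertices, labels))
        edge_labels = {abs(f[u] - f[v]) for (u, v) in edges}
        if edge_labels == set(range(1, m + 1)):
            yield f

# Example: the path a-b-c-d (3 edges), which is graceful like every path graph.
path = (["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("c", "d")])
print(next(graceful_labelings(*path)))   # e.g. {'a': 0, 'b': 3, 'c': 1, 'd': 2}
```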
Graceful labeling
[ "Mathematics" ]
771
[ "Unsolved problems in mathematics", "Graph theory objects", "Graph theory", "Conjectures", "Mathematical relations", "Mathematical problems" ]
8,979,437
https://en.wikipedia.org/wiki/Stochastic%20approximation
Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations. In a nutshell, stochastic approximation algorithms deal with a function of the form f(θ) = E_ξ[F(θ, ξ)], which is the expected value of a function depending on a random variable ξ. The goal is to recover properties of such a function f without evaluating it directly. Instead, stochastic approximation algorithms use random samples of F(θ, ξ) to efficiently approximate properties of f such as zeros or extrema. Recently, stochastic approximations have found extensive applications in the fields of statistics and machine learning, especially in settings with big data. These applications range from stochastic optimization methods and algorithms, to online forms of the EM algorithm, reinforcement learning via temporal differences, deep learning, and others. Stochastic approximation algorithms have also been used in the social sciences to describe collective dynamics: fictitious play in learning theory and consensus algorithms can be studied using their theory. The earliest, and prototypical, algorithms of this kind are the Robbins–Monro and Kiefer–Wolfowitz algorithms introduced respectively in 1951 and 1952. Robbins–Monro algorithm The Robbins–Monro algorithm, introduced in 1951 by Herbert Robbins and Sutton Monro, presented a methodology for solving a root finding problem, where the function is represented as an expected value. Assume that we have a function M(θ) and a constant α such that the equation M(θ) = α has a unique root at θ*. It is assumed that while we cannot directly observe the function M(θ), we can instead obtain measurements of the random variable N(θ), where E[N(θ)] = M(θ). The structure of the algorithm is to then generate iterates of the form θ_(n+1) = θ_n - a_n (N(θ_n) - α). Here, a_1, a_2, ... is a sequence of positive step sizes. Robbins and Monro proved (Theorem 2) that θ_n converges in L² (and hence also in probability) to θ*, and Blum later proved that the convergence is actually with probability one, provided that: N(θ) is uniformly bounded, M(θ) is nondecreasing, M′(θ*) exists and is positive, and the sequence a_n satisfies the following requirements: Σ_n a_n = ∞ and Σ_n a_n² < ∞. A particular sequence of steps which satisfies these conditions, and was suggested by Robbins and Monro, has the form a_n = a/n for a > 0. Other series are possible, but in order to average out the noise in N(θ), the above conditions must be met. Example Consider the problem of estimating the mean θ* of a probability distribution from a stream of independent samples X_1, X_2, X_3, .... Let N(θ) = θ - X; then the unique solution to E[N(θ)] = 0 is the desired mean θ*. The RM algorithm gives us θ_(n+1) = θ_n - a_n (θ_n - X_n). This is equivalent to stochastic gradient descent with loss function L(θ) = (X - θ)²/2. It is also equivalent to a weighted average: θ_(n+1) = (1 - a_n) θ_n + a_n X_n. In general, if there exists some function g such that ∇g(θ) = N(θ) - α, then the Robbins–Monro algorithm is equivalent to stochastic gradient descent with loss function g(θ). However, the RM algorithm does not require g to exist in order to converge. Complexity results If f is twice continuously differentiable and strongly convex, and the minimizer of f belongs to the interior of Θ, then the Robbins–Monro algorithm will achieve the asymptotically optimal convergence rate with respect to the objective function, namely E[f(θ_n) - f*] = O(1/n), where f* is the minimal value of f(θ) over θ ∈ Θ. 
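A minimal numerical sketch of the mean-estimation example above (the Gaussian data stream and the classic step sizes a_n = 1/n are illustrative choices):

```python
import random

random.seed(0)
theta = 0.0                # initial guess
true_mean = 3.0

for n in range(1, 10001):
    x_n = random.gauss(true_mean, 1.0)    # noisy observation X_n
    a_n = 1.0 / n                         # step sizes: sum a_n = inf, sum a_n^2 < inf
    theta = theta - a_n * (theta - x_n)   # theta_(n+1) = theta_n - a_n * N(theta_n), N(theta) = theta - X

print(theta)   # close to 3.0; with a_n = 1/n this recursion is exactly the running sample mean
```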
Conversely, in the general convex case, where we lack both the assumption of smoothness and strong convexity, Nemirovski and Yudin have shown that the asymptotically optimal convergence rate, with respect to the objective function values, is . They have also proven that this rate cannot be improved. Subsequent developments and Polyak–Ruppert averaging While the Robbins–Monro algorithm is theoretically able to achieve under the assumption of twice continuous differentiability and strong convexity, it can perform quite poorly upon implementation. This is primarily due to the fact that the algorithm is very sensitive to the choice of the step size sequence, and the supposed asymptotically optimal step size policy can be quite harmful in the beginning. Chung (1954) and Fabian (1968) showed that we would achieve optimal convergence rate with (or ). Lai and Robbins designed adaptive procedures to estimate such that has minimal asymptotic variance. However the application of such optimal methods requires much a priori information which is hard to obtain in most situations. To overcome this shortfall, Polyak (1991) and Ruppert (1988) independently developed a new optimal algorithm based on the idea of averaging the trajectories. Polyak and Juditsky also presented a method of accelerating Robbins–Monro for linear and non-linear root-searching problems through the use of longer steps, and averaging of the iterates. The algorithm would have the following structure:The convergence of to the unique root relies on the condition that the step sequence decreases sufficiently slowly. That is A1) Therefore, the sequence with satisfies this restriction, but does not, hence the longer steps. Under the assumptions outlined in the Robbins–Monro algorithm, the resulting modification will result in the same asymptotically optimal convergence rate yet with a more robust step size policy. Prior to this, the idea of using longer steps and averaging the iterates had already been proposed by Nemirovski and Yudin for the cases of solving the stochastic optimization problem with continuous convex objectives and for convex-concave saddle point problems. These algorithms were observed to attain the nonasymptotic rate . A more general result is given in Chapter 11 of Kushner and Yin by defining interpolated time , interpolated process and interpolated normalized process as Let the iterate average be and the associate normalized error to be . With assumption A1) and the following A2) A2) There is a Hurwitz matrix and a symmetric and positive-definite matrix such that converges weakly to , where is the statisolution to where is a standard Wiener process. satisfied, and define . Then for each , The success of the averaging idea is because of the time scale separation of the original sequence and the averaged sequence , with the time scale of the former one being faster. Application in stochastic optimization Suppose we want to solve the following stochastic optimization problemwhere is differentiable and convex, then this problem is equivalent to find the root of . Here can be interpreted as some "observed" cost as a function of the chosen and random effects . In practice, it might be hard to get an analytical form of , Robbins–Monro method manages to generate a sequence to approximate if one can generate , in which the conditional expectation of given is exactly , i.e. is simulated from a conditional distribution defined by Here is an unbiased estimator of . 
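A toy sketch of Polyak–Ruppert averaging in the stochastic optimization setting just described (the quadratic objective, the noise model, and the admissible step sizes a_n = n^(-2/3) are illustrative choices, not a tuned implementation):

```python
import random

random.seed(1)

def noisy_grad(theta):
    # Unbiased estimate H(theta, X) of grad g(theta) for g(theta) = 0.5 * (theta - 2)^2:
    # the additive noise has mean zero, so E[H(theta, X)] = theta - 2.
    return (theta - 2.0) + random.gauss(0.0, 1.0)

theta, running_sum, N = 0.0, 0.0, 20000
for n in range(1, N + 1):
    a_n = n ** (-2.0 / 3.0)         # decreases more slowly than 1/n, as condition A1 requires
    theta -= a_n * noisy_grad(theta)
    running_sum += theta

theta_bar = running_sum / N         # Polyak–Ruppert averaged iterate
print("last iterate:", theta, "averaged iterate:", theta_bar)   # the average is typically closer to 2
```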
If depends on , there is in general no natural way of generating a random outcome that is an unbiased estimator of the gradient. In some special cases when either IPA or likelihood ratio methods are applicable, then one is able to obtain an unbiased gradient estimator . If is viewed as some "fundamental" underlying random process that is generated independently of , and under some regularization conditions for derivative-integral interchange operations so that , then gives the fundamental gradient unbiased estimate. However, for some applications we have to use finite-difference methods in which has a conditional expectation close to but not exactly equal to it. We then define a recursion analogously to Newton's Method in the deterministic algorithm: Convergence of the algorithm The following result gives sufficient conditions on for the algorithm to converge: C1) C2) C3) C4) C5) Then converges to almost surely. Here are some intuitive explanations about these conditions. Suppose is a uniformly bounded random variables. If C2) is not satisfied, i.e. , thenis a bounded sequence, so the iteration cannot converge to if the initial guess is too far away from . As for C3) note that if converges to then so we must have ,and the condition C3) ensures it. A natural choice would be . Condition C5) is a fairly stringent condition on the shape of ; it gives the search direction of the algorithm. Example (where the stochastic gradient method is appropriate) Suppose , where is differentiable and is a random variable independent of . Then depends on the mean of , and the stochastic gradient method would be appropriate in this problem. We can choose Kiefer–Wolfowitz algorithm The Kiefer–Wolfowitz algorithm was introduced in 1952 by Jacob Wolfowitz and Jack Kiefer, and was motivated by the publication of the Robbins–Monro algorithm. However, the algorithm was presented as a method which would stochastically estimate the maximum of a function. Let be a function which has a maximum at the point . It is assumed that is unknown; however, certain observations , where , can be made at any point . The structure of the algorithm follows a gradient-like method, with the iterates being generated as where and are independent. At every step, the gradient of is approximated akin to a central difference method with . So the sequence specifies the sequence of finite difference widths used for the gradient approximation, while the sequence specifies a sequence of positive step sizes taken along that direction. Kiefer and Wolfowitz proved that, if satisfied certain regularity conditions, then will converge to in probability as , and later Blum in 1954 showed converges to almost surely, provided that: for all . The function has a unique point of maximum (minimum) and is strong concave (convex) The algorithm was first presented with the requirement that the function maintains strong global convexity (concavity) over the entire feasible space. Given this condition is too restrictive to impose over the entire domain, Kiefer and Wolfowitz proposed that it is sufficient to impose the condition over a compact set which is known to include the optimal solution. The function satisfies the regularity conditions as follows: There exists and such that There exists and such that For every , there exists some such that The selected sequences and must be infinite sequences of positive numbers such that A suitable choice of sequences, as recommended by Kiefer and Wolfowitz, would be and . 
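A one-dimensional sketch of the Kiefer–Wolfowitz recursion (the quadratic objective, the noise level, and the sequences a_n = 1/n and c_n = n^(-1/3) recommended above are illustrative choices):

```python
import random

random.seed(2)

def noisy_objective(x):
    # Observation N(x) of M(x) = -(x - 1)^2, which has its maximum at x = 1.
    return -(x - 1.0) ** 2 + random.gauss(0.0, 0.1)

x = 5.0
for n in range(1, 5001):
    a_n = 1.0 / n              # step sizes
    c_n = n ** (-1.0 / 3.0)    # finite-difference widths
    grad_est = (noisy_objective(x + c_n) - noisy_objective(x - c_n)) / (2.0 * c_n)
    x += a_n * grad_est        # ascend towards the maximum

print(x)   # close to 1.0
```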
Subsequent developments and important issues The Kiefer Wolfowitz algorithm requires that for each gradient computation, at least different parameter values must be simulated for every iteration of the algorithm, where is the dimension of the search space. This means that when is large, the Kiefer–Wolfowitz algorithm will require substantial computational effort per iteration, leading to slow convergence. To address this problem, Spall proposed the use of simultaneous perturbations to estimate the gradient. This method would require only two simulations per iteration, regardless of the dimension . In the conditions required for convergence, the ability to specify a predetermined compact set that fulfills strong convexity (or concavity) and contains the unique solution can be difficult to find. With respect to real world applications, if the domain is quite large, these assumptions can be fairly restrictive and highly unrealistic. Further developments An extensive theoretical literature has grown up around these algorithms, concerning conditions for convergence, rates of convergence, multivariate and other generalizations, proper choice of step size, possible noise models, and so on. These methods are also applied in control theory, in which case the unknown function which we wish to optimize or find the zero of may vary in time. In this case, the step size should not converge to zero but should be chosen so as to track the function., 2nd ed., chapter 3 C. Johan Masreliez and R. Douglas Martin were the first to apply stochastic approximation to robust estimation. The main tool for analyzing stochastic approximations algorithms (including the Robbins–Monro and the Kiefer–Wolfowitz algorithms) is a theorem by Aryeh Dvoretzky published in 1956. See also Stochastic gradient descent Stochastic variance reduction References Stochastic optimization Statistical approximations
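The simultaneous-perturbation idea attributed to Spall above can be sketched as follows; the ten-dimensional quadratic loss, the plus/minus one Bernoulli perturbations, and the gain sequences are illustrative choices rather than a tuned implementation:

```python
import random

random.seed(3)
d = 10                                     # dimension of the search space

def loss(theta):                           # noisy measurement of sum_i theta_i^2
    return sum(t * t for t in theta) + random.gauss(0.0, 0.01)

theta = [1.0] * d
for n in range(1, 2001):
    a_n, c_n = 1.0 / (n + 50), n ** (-1.0 / 3.0)
    delta = [random.choice((-1.0, 1.0)) for _ in range(d)]      # simultaneous perturbation
    y_plus = loss([t + c_n * s for t, s in zip(theta, delta)])  # only two loss evaluations
    y_minus = loss([t - c_n * s for t, s in zip(theta, delta)]) # per iteration, for any d
    ghat = [(y_plus - y_minus) / (2.0 * c_n * s) for s in delta]
    theta = [t - a_n * g for t, g in zip(theta, ghat)]

print(max(abs(t) for t in theta))          # much closer to the minimizer at 0 than the start
```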
Stochastic approximation
[ "Mathematics" ]
2,457
[ "Statistical approximations", "Mathematical relations", "Approximations" ]
8,979,708
https://en.wikipedia.org/wiki/Cedars%20of%20God
The Cedars of God ( Arz ar-Rabb "Cedars of the Lord"), located in the Kadisha Valley of Bsharre, Lebanon, is one of the last vestiges of the extensive forests of the Lebanon cedar that thrived across Mount Lebanon in antiquity. All early modern travelers' accounts of the wild cedars appear to refer to the ones in Bsharri; the Christian monks of the monasteries in the Kadisha Valley venerated the trees for centuries. The earliest documented references of the Cedars of God are found in Tablets 4-6 of the great Epic of Gilgamesh, six days walk from Uruk. The Phoenicians, Israelites, Egyptians, Assyrians, Babylonians, Persians, Romans, Arabs, and Turks used Lebanese timber. The Egyptians valued their timber for shipbuilding, and in the Ottoman Empire their timber was used to construct railways. History Ancient history The mountains of Lebanon were once shaded by thick cedar forests and the tree is the symbol of the country. After centuries of persistent deforestation, the extent of these forests has been markedly reduced. It was once said that a battle occurred between the demigods and the humans over the beautiful and divine forest of Cedar trees near southern Mesopotamia. This forest, once protected by the Sumerian god Enlil, was completely bared of its trees when humans entered its grounds 4700 years ago, after winning the battle against the guardians of the forest, the demigods. The story also tells that Gilgamesh used cedar wood to build his city. Over the centuries, cedar wood was exploited by the Phoenicians, Egyptians, Israelites, Assyrians, Babylonians, Persians, Romans, Arabs, and Turks. The Phoenicians used the cedars for their merchant fleets. They needed timbers for their ships and the Cedar woods made them the “first sea trading nation in the world”. The Egyptians used cedar resin for the mummification process and the cedar wood for some of “their first hieroglyph bearing rolls of papyrus”. In the Bible, Solomon procured cedar timber to build the Temple in Jerusalem. The emperor Hadrian claimed these forests as an imperial domain, and destruction of the cedar forests was temporarily halted. Early modern history All early modern travelers' accounts of the wild cedars of Lebanon appear to refer to the Bsharri cedars. Pierre Belon visited the area in 1550, making him the first modern traveler to identify the Cedars of God in his ‘’Observations’’. Belon counted 28 trees: At a considerable height up the mountains the traveler arrives at the Monastery of the Virgin Mary, which is situated in the valley. Thence proceeding four miles up the mountain, he will arrive at the cedars, the Maronites or the monks acting as guides. The cedars stand in a valley, and not on top of the mountain, and they are supposed to amount to 28 in number, though it is difficult to count them, they being distant from each other a few paces. These the Archbishop of Damascus has endeavored to prove to be the same that Solomon planted with his own hands in the quincunx manner as they now stand. No other tree grows in the valley in which they are situated and it is generally so covered with snow as to be only accessible in summer. Leonhard Rauwolf followed in 1573-75, counting 24 trees: ... saw nothing higher, but only a small hill before us, all covered with snow, at the bottom whereof the high cedar trees were standing... 
And, although this hill hath, in former ages, been quite covered with cedars, yet they are since so decreased, that I could tell no more but twenty-four that stood round about in a circle and two others, the branches whereof are quite decayed for age. I also went about this place to look for young ones, but could find none at all. Jean de Thévenot counted 23 trees in 1655: It is a Fobbery to say, that if one reckon the Cedars of Mount Lebanon twice, he shall have a different number, for in all, great and small, there is neither more or less than twenty three of them. Laurent d'Arvieux in 1660 counted 20 trees; and Henry Maundrell in 1697 counted 16 trees of the “very old” type: Sunday, May 9 The noble (cedar] trees grow amongst the snow near the highest part of Lebanon; and are remarkable as well as for their own age and largeness, as for those frequent allusions made to them in the word of God. Here are some of them very old, and of prodigious bulk; and others younger of a smaller size. Of the former I could reckon up only sixteen, and the latter are very numerous. I measured one of the largest, and found it twelve yards six inches in girt, and yet sound; and thirty seven yards in the spread of its boughs. At about five or six yards from the ground, it was divided into five limbs, each of which was equal to a great tree. After about half an hour spent surveying this place, the clouds began to thicken, and to fly along upon the ground; which so obscured the road, that my guide was very much at a loss to find our way back again. We rambled about for seven hours thus bewildered, which gave me no small fear of being forc'd to spend one night more on Libanus. Jean de la Roque in 1722 found 20 trees. In 1738 Richard Pococke provided a detailed description. ... they form a grove about a mile in circumference, which consists of some large cedars that are near to one another, a great number of young cedars, and some pines. The great cedars, at some distance, look like very large spreading oaks; the bodies of the trees are short, dividing at bottom into three or four limbs, some of which growing up together for about ten feet, appear something like those Gothic columns, which seem to be composed of several pillars; higher up they begin to spread horizontally. One that had the roundest body, tho' not the largest, measured twenty four feet in circumference, and another with a sort of triple body, as described above, and of a triangular figure, measured twelve feet on each side. The young cedars are not easily known from pines; I observed they bear a greater quantity of fruit than the large ones. The wood does not differ from white deal in appearance, nor does it seem to be harder; it has a fine smell, but not so fragrant as the juniper of America which is commonly called Cedar; and it also falls short of it in beauty; I took a piece of the wood from a great tree that was blown down by the wind, and left there to rot; there are fifteen large ones standing. The Christians of the several denominations near this place come here to celebrate the festival of the transfiguration, and have built altars against several of the large trees, on which they administer the sacrament. These trees are about half a mile to the north of the road, to which we returned... From the 19th century onwards, the number of writers recording their visits increased substantially, and the number of cedars counted by the writers was in hundreds. 
Alphonse de Lamartine visited the place during his travel in Lebanon (1832–33), mentioning the cedars in some texts. In 1871, Edward Henry Palmer of the Palestine Exploration Fund described the cedars as follows: Descending by a steep zigzag path to the cedars, we pitched our camp and proceeded to examine the sacred and renowned grove, and could not repress a feeling of disappointment at its small extent, and the insignificant appearance of the trees. They consist of a little clump of trees of comparatively modern growth, not more than nine of them showing any indications of a respectable antiquity, and covering only about three acres of ground. They stand on a ridge consisting of five mounds and two spurs running nearly east and west, as in the accompanying plan. The whole number of trees we estimated at about 355; their size has also been grossly exaggerated, none of them being over 80ft. high. The ground is covered with débris of cedar and white limestone, and in the centre of the clump is a hideous little building, a Maronite chapel, the appointments of which are painfully poverty-stricken and inadequate. The trees have been lopped and otherwise maltreated, especially by the irrepressible tourist, who has been at infinite pains to cut his name on every available trunk. One tree, rather a large one, has a hole in it where a branch had broken away, and this has been enlarged into a chamber. They are scrubby scanty specimens, and not half so fine as may be seen in many an English park. Concern for the protection of the biblical "cedars of God" goes back to 1876, when the grove was surrounded by a high stone wall, paid for by Augusta Victoria of Schleswig-Holstein (often erroneously attributed to Queen Victoria of Great Britain, as Augusta Victoria was Queen of Prussia and hence ‘Queen Victoria’) to protect saplings from browsing by goats. Nevertheless, during World War I, British troops used cedar to build railroads. Henry Bordeaux came in 1922 and wrote, Yamilé, a story about the place. Recent history Time, along with the exploitation of the wood and the effects of climate change, has led to a decrease in the number of cedar trees in Lebanon. However, Lebanon is still widely known for its cedar tree history, as they are the emblem of the country and the symbol of the Lebanese flag. The remaining trees survive in mountainous areas, where they are the dominant tree species. This is the case on the slopes of Mount Makmel that tower over the Kadisha Valley, where the Cedars of God are found at an altitude of more than . Four trees have reached a height of , with their trunks reaching . World Heritage Site In 1998, the Cedars of God were added to the UNESCO list of World Heritage Sites. Current status The forest is rigorously protected. It is possible to tour if escorted by an authorized guide. After a preliminary phase in which the land was cleared of detritus, the sick plants treated, and the ground fertilized, the "Committee of the Friends of the Cedar Forest" initiated a reforestation program in 1985. The Committee planted 200,000 cedars, with 180,000 surviving. These efforts will only be appreciable in a few decades due to the slow growth of cedars. In these areas the winter offers incredible scenery, and the trees are covered with a blanket of snow. Biblical and other ancient references The Cedar Forest of ancient Mesopotamian religion appears in several sections of the Epic of Gilgamesh. The Lebanon Cedar is mentioned 103 times in the Bible. 
In the Hebrew text it is named and in the Greek text (LXX) it is named . Example verses include: "Open thy doors, O Lebanon, that the fire may devour thy cedars. Howl, fir tree; for the cedar is fallen; because the mighty are spoiled: howl, O ye oaks of Bashan; for the forest of the vintage is come down." (Zechariah 11:1, 2) "He moves his tail like a cedar; The sinews of his thighs are tightly knit." (Job 40:17) "The priest shall take cedarwood and hyssop and scarlet stuff, and cast them into the midst of the burning of the heifer" (Numbers 19:6) "The voice of the Lord breaks the cedars; the Lord breaks in pieces the cedars of Lebanon" (Psalm 29:5) "The righteous flourish like the palm tree and grow like the cedar in Lebanon" (Psalm 92:12) "I will put in the wilderness the cedar, the acacia, the myrtle, and the olive" (Isaiah 41: 19) "Behold, I will liken you to a cedar in Lebanon, with fair branches and forest shade" (Ezekiel 31:3) "I destroyed the Amorite before them, whose height was like the height of the cedars" (Amos 2:9) "The trees of the Lord are watered abundantly, the cedars of Lebanon that he planted." (Psalm 104:16 NRSV) [King Solomon made] cedar as plentiful as the sycamore-fig trees in the foothills. (1 Kings 10:27, NIV, excerpt) Gallery See also Garden of the Gods Al Shouf Cedar Nature Reserve List of individual trees References Bibliography Aiello, Anthony S., and Michael S. Dosmann. "The quest for the Hardy Cedar-of-lebanon ." Arnoldia: The magazine of the Arnold Arboretum 65.1 (2007): 26–35. Anderson, Mary Perle. “The Cedar of Lebanon.” Torreya, vol. 8, no. 12, 1908, pp. 287–292. JSTOR, www.jstor.org/stable/40594656. External links Lebanon eco-tourism: Cedars of God Cedrus Old-growth forests Forests of Lebanon Sacred groves Environment of Lebanon World Heritage Sites in Lebanon Epic of Gilgamesh Tourist attractions in Lebanon Tourism in Lebanon Oldest trees
Cedars of God
[ "Biology" ]
2,749
[ "Old-growth forests", "Ecosystems" ]
8,979,919
https://en.wikipedia.org/wiki/TaqMan
TaqMan probes are hydrolysis probes that are designed to increase the specificity of quantitative PCR. The method was first reported in 1991 by researcher Kary Mullis at Cetus Corporation, and the technology was subsequently developed by Hoffmann-La Roche for diagnostic assays and by Applied Biosystems (now part of Thermo Fisher Scientific) for research applications. The TaqMan probe principle relies on the 5´–3´ exonuclease activity of Taq polymerase to cleave a dual-labeled probe during hybridization to the complementary target sequence and fluorophore-based detection. As in other quantitative PCR methods, the resulting fluorescence signal permits quantitative measurements of the accumulation of the product during the exponential stages of the PCR; however, the TaqMan probe significantly increases the specificity of the detection. TaqMan probes were named after the videogame Pac-Man (Taq Polymerase + PacMan = TaqMan) as its mechanism is based on the Pac-Man principle. Principle TaqMan probes consist of a fluorophore covalently attached to the 5’-end of the oligonucleotide probe and a quencher at the 3’-end. Several different fluorophores (e.g. 6-carboxyfluorescein, acronym: FAM, or tetrachlorofluorescein, acronym: TET) and quenchers (e.g. tetramethylrhodamine, acronym: TAMRA) are available. The quencher molecule quenches the fluorescence emitted by the fluorophore when excited by the cycler’s light source via Förster resonance energy transfer (FRET). As long as the fluorophore and the quencher are in proximity, quenching inhibits any fluorescence signals. TaqMan probes are designed such that they anneal within a DNA region amplified by a specific set of primers. (Unlike the diagram, the probe binds to single stranded DNA.) TaqMan probes can be conjugated to a minor groove binder (MGB) moiety, dihydrocyclopyrroloindole tripeptide (DPI3), in order to increase its binding affinity to the target sequence; MGB-conjugated probes have a higher melting temperature (Tm) due to increased stabilization of van der Waals forces. As the Taq polymerase extends the primer and synthesizes the nascent strand (from the single-stranded template), the 5' to 3' exonuclease activity of the Taq polymerase degrades the probe that has annealed to the template. Degradation of the probe releases the fluorophore from it and breaks the proximity to the quencher, thus relieving the quenching effect and allowing fluorescence of the fluorophore. Hence, fluorescence detected in the quantitative PCR thermal cycler is directly proportional to the fluorophore released and the amount of DNA template present in the PCR. Applications TaqMan probe-based assays are widely used in quantitative PCR in research and medical laboratories: Gene expression assays Pharmacogenomics Human Leukocyte Antigen (HLA) genotyping Determination of viral load in clinical specimens (HIV, Hepatitis) Bacterial Identification assays DNA quantification SNP genotyping Verification of microarray results See also Quantitative PCR SYBR Green Reverse transcription polymerase chain reaction Molecular beacon Gene Expression Notes and references External links 1. TaqMan RT-PCR resources- primer databases, software, protocols 2. Beacon Designer - Software to design real time PCR primers and probes including SYBR Green primers, TaqMan Probes, Molecular Beacons. Chemical reactions Gene expression Polymerase chain reaction
TaqMan
[ "Chemistry", "Biology" ]
808
[ "Biochemistry methods", "Genetics techniques", "Polymerase chain reaction", "Gene expression", "Molecular genetics", "Cellular processes", "nan", "Molecular biology", "Biochemistry" ]
8,980,050
https://en.wikipedia.org/wiki/Fouling%20community
Fouling communities are communities of organisms found on artificial surfaces like the sides of docks, marinas, harbors, and boats. Settlement panels made from a variety of substances have been used to monitor settlement patterns and to examine several community processes (e.g., succession, recruitment, predation, competition, and invasion resistance). These communities are characterized by the presence of a variety of sessile organisms including ascidians, bryozoans, mussels, tube building polychaetes, sea anemones, sponges, barnacles, and more. Common predators on and around fouling communities include small crabs, starfish, fish, limpets, chitons, other gastropods, and a variety of worms. Ecology Fouling communities follow a distinct succession pattern in a natural environment. Environmental impact Impacts on Humans Fouling communities can have a negative economic impact on humans, by damaging the bottom of boats, docks, and other marine human-made structures. This effect is known as Biofouling, and has been combated by Anti-fouling paint, which is now known to introduce toxic metals to the marine environment. Fouling communities have a variety of species, and many of these are filter feeders, meaning that organisms in the fouling community can also improve water clarity. Invasive Species Fouling communities do grow on natural structures, however these communities are largely made up of native species, whereas the communities growing on man-made structures have larger populations of invasive species. This difference between the species diversity across human structures and natural substrate is likely dependent on human pollution, which is known to weaken native species and create a community and environment dominated by non-indigenous species. These largely non-indigenous species communities living on docks and boats usually have a higher resistance to anthropogenic disturbances. This effect is sorely felt in untouched native marine communities, as non-indigenous species growing on boat hulls are transported across the world, to wherever the boat anchors. Research history Fouling communities were highlighted particularly in the literature of marine ecology as a potential example of alternate stable states through the work of John Sutherland in the 1970s at Duke University, although this was later called into question by Connell and Sousa. Fouling communities have been used to test the ecological effectiveness of artificial coral reefs. See also Biofouling Ecological succession Didemnum vexillum References External links http://research.ncl.ac.uk/biofouling/ is the Newcastle University barnacle and biofouling information site. http://www.imo.org/en/OurWork/Environment/Biofouling/Pages/default.aspx is the International Maritime Organization information about biofouling which includes a comprehensive list of invasive species in the fouling community. https://darchive.mblwhoilibrary.org/bitstream/handle/1912/191/chapter%203.pdf?sequence=11 https://pdxscholar.library.pdx.edu/cgi/viewcontent.cgi?article=4896&context=open_access_etds Aquatic ecology Fouling
Fouling community
[ "Materials_science", "Biology" ]
654
[ "Aquatic ecology", "Materials degradation", "Ecosystems", "Fouling" ]
8,980,531
https://en.wikipedia.org/wiki/Hammock%20activity
A hammock activity (also hammock task) is a schedule or project planning term for a grouping of tasks that "hang" between two end dates it is tied to. A hammock activity can group tasks that are not related in the hierarchical sense of a Work Breakdown Structure, or are not related in a logical sense of a task dependency, where one task must wait for another. Usage includes: Group dissimilar activities that lead to an overall capability, such as preparations under a summary label, e.g. "vacation preparation"; Group unrelated items for the purpose of a summary such as a calendar-based reporting period, e.g. "First-quarter plans"; Group ongoing or overhead activities that run the length of an effort, e.g. "project management". The duration of the hammock activity (the size of the hammock) may also be set by the subtasks within it, so that the abstract grouping has a start date of the earliest of any of the subtasks and the finish date is the latest of any of the contents. A hammock activity is regarded as a form of Summary activity that is similar to a Level of Effort (LOE) activity. Use of hammock activities is also a way to simplify the difficulties of performing Work Breakdown Structure decomposition to low levels. Also, hammock tasks can represent any group of tasks in the Integrated Master Schedule (IMS) regardless of their physical location or parent Work Breakdown Structure (WBS) element. References External links Create a hammock task (Bonnie Biafore) Wideman Comparative Glossary of Project Management Terms Schedule (project management)
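The date roll-up described above can be sketched in a few lines; the task names and dates below are invented purely for illustration:

```python
from datetime import date

# Hypothetical subtasks grouped under a hammock activity such as "project management".
subtasks = {
    "kick-off":         (date(2024, 1, 8),  date(2024, 1, 12)),
    "status reporting": (date(2024, 1, 15), date(2024, 6, 28)),
    "close-out":        (date(2024, 6, 24), date(2024, 7, 5)),
}

# The hammock's start is the earliest subtask start; its finish is the latest subtask finish.
hammock_start = min(start for start, _ in subtasks.values())
hammock_finish = max(finish for _, finish in subtasks.values())
print("hammock spans", hammock_start, "to", hammock_finish,
      "(", (hammock_finish - hammock_start).days, "days )")
```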
Hammock activity
[ "Physics" ]
345
[ "Spacetime", "Physical quantities", "Time", "Schedule (project management)" ]
8,980,638
https://en.wikipedia.org/wiki/Soil%20stockpile
A soil stockpile is formed with excavated topsoil during the construction of buildings or infrastructure. It is considered to be an important resource in construction and ecology. Soil is stockpiled for later use in landscaping or restoration of the region following the removal of construction infrastructure. Before re-use, stockpiled soil may be tested for contamination. References Building engineering
Soil stockpile
[ "Engineering" ]
74
[ "Building engineering", "Civil engineering", "Architecture" ]
8,980,927
https://en.wikipedia.org/wiki/World%20Urbanism%20Day
The international organisation for World Urbanism Day, also known as "World Town Planning Day", was founded in 1949 by the late Professor Carlos Maria della Paolera of the University of Buenos Aires, a graduate at the Institut d'urbanisme in Paris, to advance public and professional interest in planning. It is celebrated in more than 30 countries on four continents each November 8. See also Urbanism Urban planning New Urbanism Institut d'Urbanisme de Paris (French Wikipedia) References External links American Planning Association: World Town Planning Day World Urbanism Day from WN Network Urban planning Planned communities Garden suburbs November observances International observances
World Urbanism Day
[ "Engineering" ]
133
[ "Urban planning", "Architecture" ]
8,981,301
https://en.wikipedia.org/wiki/Vegard%27s%20law
In crystallography, materials science and metallurgy, Vegard's law is an empirical finding (heuristic approach) resembling the rule of mixtures. In 1921, Lars Vegard discovered that the lattice parameter of a solid solution of two constituents is approximately a weighted mean of the two constituents' lattice parameters at the same temperature: a(A(1-x)B(x)) = (1 - x) a(A) + x a(B); e.g., in the case of a mixed oxide of uranium and plutonium as used in the fabrication of MOX nuclear fuel: a(U(1-x)Pu(x)O2) = (1 - x) a(UO2) + x a(PuO2). Vegard's law assumes that both components A and B in their pure form (i.e., before mixing) have the same crystal structure. Here, a(A(1-x)B(x)) is the lattice parameter of the solid solution, a(A) and a(B) are the lattice parameters of the pure constituents, and x is the molar fraction of B in the solid solution. Vegard's law is seldom perfectly obeyed; often deviations from the linear behavior are observed. A detailed study of such deviations was conducted by King. However, it is often used in practice to obtain rough estimates when experimental data are not available for the lattice parameter for the system of interest. For systems known to approximately obey Vegard's law, the approximation may also be used to estimate the composition of a solution from knowledge of its lattice parameters, which are easily obtained from diffraction data. For example, consider the semiconductor compound In(x)Ga(1-x)As. A relation exists between the constituent elements and their associated lattice parameters, a, such that: a(In(x)Ga(1-x)As) = x a(InAs) + (1 - x) a(GaAs). When variations in lattice parameter are very small across the entire composition range, Vegard's law becomes equivalent to Amagat's law. Relationship to band gaps in semiconductors In many binary semiconducting systems, the band gap is approximately a linear function of the lattice parameter. Therefore, if the lattice parameter of a semiconducting system follows Vegard's law, one can also write a linear relationship between the band gap and composition. Using In(x)Ga(1-x)As as before, the band gap energy, Eg, can be written as: Eg(In(x)Ga(1-x)As) = x Eg(InAs) + (1 - x) Eg(GaAs). Sometimes, the linear interpolation between the band gap energies is not accurate enough, and a second term to account for the curvature of the band gap energies as a function of composition is added. This curvature correction is characterized by the bowing parameter, b: Eg(In(x)Ga(1-x)As) = x Eg(InAs) + (1 - x) Eg(GaAs) - b x (1 - x). Mineralogy The following excerpt from Takashi Fujii (1960) summarises well the limits of Vegard's law in the context of mineralogy and also makes the link with the Gladstone–Dale equation: See also When considering the empirical correlation of some physical properties and the chemical composition of solid compounds, other relationships, rules, or laws also closely resemble Vegard's law, and in fact the more general rule of mixtures: Amagat's law Gladstone–Dale equation Kopp's law Kopp–Neumann law Rule of mixtures References Crystallography Materials science Metallurgy Mineralogy Eponyms
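A numerical sketch of both uses of Vegard's law described above, predicting a lattice parameter and inverting the relation to estimate composition; the GaAs and InAs lattice parameters are approximate literature values quoted only for illustration:

```python
a_GaAs, a_InAs = 5.653, 6.058   # lattice parameters in angstroms (approximate values)

def vegard_lattice(x):
    """Lattice parameter of In(x)Ga(1-x)As predicted by Vegard's law."""
    return x * a_InAs + (1 - x) * a_GaAs

def composition_from_lattice(a_measured):
    """Invert Vegard's law to estimate the In fraction x from a measured lattice parameter."""
    return (a_measured - a_GaAs) / (a_InAs - a_GaAs)

print(vegard_lattice(0.53))              # about 5.87 angstroms, roughly lattice-matched to InP
print(composition_from_lattice(5.868))   # about 0.53
```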
Vegard's law
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
579
[ "Applied and interdisciplinary physics", "Metallurgy", "Materials science", "Crystallography", "Condensed matter physics", "nan" ]
8,981,519
https://en.wikipedia.org/wiki/Covarion
The method of covarions, or concomitantly variable codons, is a technique in computational phylogenetics that allows the hypothesized rate of molecular evolution at individual codons in a set of nucleotide sequences to vary in an autocorrelated manner. Under the covarion model, the rates of evolution on different branches of a hypothesized phylogenetic tree vary in an autocorrelated way, and the rates of evolution at different codon sites in an aligned set of DNA or RNA sequences vary in a separate but autocorrelated manner. This provides additional and more realistic constraints on evolutionary rates versus the simpler technique of allowing the rate of evolution on each branch to be selected randomly from a suitable probability distribution such as the gamma distribution. Covarions is a concrete form of the more general concept of heterotachy. Developing a computational algorithm suitable for identifying sites with high evolutionary rates from a static dataset is a challenge due to the constraints of autocorrelation. The original statement of the method used a rough stochastic model of the evolutionary process designed to identify transiently high-variability codon sites. Abandoning the requirement that rates be autocorrelated on a given DNA or RNA molecule allows extension of substitution matrix methods to the covarion model. The matrix at right represents a covarion-based modification to the three-parameter Kimura substitution model, where the vertical axis represents the original state and the horizontal axis the destination state. The two rates, 0 and 1, define a pair of mutation states; transitions can occur between state 0 and state 1 at any time, but nucleotides can only mutate in state 1. That is, the rate of mutation in state 0 is 0. Here α and β are the standard Kimura parameters for transition and transversion mutations, κδ is the rate of transition between a site being invariant (state 0) and variable (state 1), and δ is the rate of transition between a site being variable (state 1) and invariant (state 0). Because nucleotide sequences do not themselves reflect the difference between a 0 or 1 state, an observation of a given nucleotide is treated as ambiguous; that is, if a given site contains a C nucleotide, it is ambiguous between C0 and C1 states. References Felsenstein J. (2004). Inferring Phylogenies Sinauer Associates: Sunderland, MA. Fitch WM, Markowitz E. (1970) An improved method for determining codon variability in a gene and its applications to the rate of fixation of mutations in evolution. Biochem Genet 4: 579–593. PubMed Penny D, McComish BJ, Charleston MA, Hendy MD. (2001) Mathematical elegance with biochemical realism: The covarion model of molecular evolution. J Mol Evol 53: 711–723. DOI Galtier N. (2001) Maximum-likelihood phylogenetic analysis under a covarion-like model. Mol Biol Evol 18:866–873. FullText Computational phylogenetics
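The rate-matrix description above is concrete enough to sketch in code. The following Python sketch is a hypothetical illustration rather than the exact published parameterisation or state ordering: it builds an 8-state (nucleotide × rate-class) rate matrix in which substitutions occur at Kimura rates α (transitions) and β (transversions) only in the variable class, and sites switch between the invariant and variable classes at the rates κδ and δ given in the text.

```python
import numpy as np

NUCS = ["A", "C", "G", "T"]
PURINES = {"A", "G"}

def covarion_kimura_rate_matrix(alpha, beta, delta, kappa):
    """alpha, beta : substitution rates for transitions / transversions,
                     applied only in the 'variable' class (class 1);
    kappa * delta : rate of switching invariant (0) -> variable (1);
    delta         : rate of switching variable (1) -> invariant (0)."""
    states = [(n, c) for c in (0, 1) for n in NUCS]        # A0..T0, A1..T1
    Q = np.zeros((8, 8))
    for i, (ni, ci) in enumerate(states):
        for j, (nj, cj) in enumerate(states):
            if i == j:
                continue
            if ni == nj and ci != cj:                       # class switch, same base
                Q[i, j] = kappa * delta if ci == 0 else delta
            elif ci == cj == 1:                             # substitution while "on"
                transition = (ni in PURINES) == (nj in PURINES)
                Q[i, j] = alpha if transition else beta
            # substitutions within the invariant class keep rate 0
        Q[i, i] = -Q[i].sum()                               # rows sum to zero
    return states, Q

# An observed C is ambiguous between the hidden states C0 and C1, so a
# likelihood calculation would sum over both.
states, Q = covarion_kimura_rate_matrix(alpha=1.0, beta=0.5, delta=0.1, kappa=2.0)
```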
Covarion
[ "Biology" ]
641
[ "Bioinformatics", "Phylogenetics", "Computational phylogenetics", "Genetics techniques" ]
8,981,673
https://en.wikipedia.org/wiki/Blaney%E2%80%93Criddle%20equation
The Blaney–Criddle equation (named after H. F. Blaney and W. D. Criddle) is a method for estimating reference crop evapotranspiration. Usage The Blaney–Criddle equation is a relatively simplistic method for calculating evapotranspiration. When sufficient meteorological data are available the Penman–Monteith equation is usually preferred. However, the Blaney–Criddle equation is ideal when only air-temperature datasets are available for a site. Given the coarse accuracy of the Blaney–Criddle equation, it is recommended that it be used to calculate evapotranspiration for periods of one month or greater. The equation calculates evapotranspiration for a 'reference crop', which is taken as actively growing green grass of 8–15 cm height. Equation ETo = p · (0.457 · Tmean + 8.128) Where: ETo is the reference evapotranspiration [mm day−1] (monthly) Tmean is the mean daily temperature [°C] given as Tmean = (Tmax + Tmin) / 2 p is the mean daily percentage of annual daytime hours. Accuracy and bias Given the limited data input to the equation, the calculated evapotranspiration should be regarded as only broadly accurate. Rather than a precise measure of evapotranspiration, the output of the equation is better thought of as providing an order of magnitude. The inaccuracy of the equation is exacerbated by extreme weather conditions. In particular, evapotranspiration is known to be overestimated by up to 40% in calm, humid, clouded areas and underestimated by up to 60% in windy, dry, sunny areas. See also Jensen–Haise equation (M. E. Jensen and H. R. Haise, 1963) Penman–Monteith equation External links Rational Use of the FAO Blaney-Criddle Formula (Allen 1986) Potential Evapotranspiration Notes and references Agronomy Equations
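Because the formula needs only air temperature and a tabulated daylight fraction, it is straightforward to implement; the sketch below uses the equation exactly as written above. The value of p in the example call is a typical mid-latitude summer value and is purely illustrative.

```python
def blaney_criddle_eto(t_max_c, t_min_c, p):
    """Monthly-mean reference evapotranspiration ETo in mm/day.
    t_max_c, t_min_c : mean daily max/min air temperature for the month [degC]
    p                : mean daily percentage of annual daytime hours for the
                       site's latitude and month (from standard tables)."""
    t_mean = (t_max_c + t_min_c) / 2.0
    return p * (0.457 * t_mean + 8.128)

# Example: Tmax = 30 degC, Tmin = 18 degC (so Tmean = 24 degC), p = 0.27
eto = blaney_criddle_eto(30.0, 18.0, 0.27)   # about 5.2 mm/day
```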
Blaney–Criddle equation
[ "Mathematics" ]
443
[ "Mathematical objects", "Equations" ]
8,982,342
https://en.wikipedia.org/wiki/List%20of%20abductors%20of%20the%20human%20body
Abduction is an anatomical term of motion referring to a movement which draws a limb out to the side, away from the median sagittal plane of the body. It is thus opposed to adduction.
Upper limb
Arm and shoulder – of arm at shoulder (raising arm): Supraspinatus (0–15°), Deltoid (15–90°)
Hand and wrist – of hand at wrist: Flexor carpi radialis, Extensor carpi radialis longus, Extensor carpi radialis brevis
of finger: Abductor digiti minimi, Dorsal interossei of the hand
of thumb: Abductor pollicis longus, Abductor pollicis brevis
Lower limb
of femur at hip: Gluteus maximus muscle, Gluteus medius muscle, Gluteus minimus muscle, Sartorius muscle, Tensor fasciae latae muscle, Piriformis
of toe: Abductor hallucis, Abductor digiti minimi, Dorsal interossei of the foot
Other
vocal folds: Posterior cricoarytenoid muscle
eyeball: Lateral rectus muscle, Superior oblique muscle, Inferior oblique muscle
References See also Abductors (muscles) Anatomical terms of motion
List of abductors of the human body
[ "Biology" ]
237
[ "Behavior", "Anatomical terms of motion", "Motor control" ]
8,982,401
https://en.wikipedia.org/wiki/Nanakshahi%20bricks
Nanakshahi bricks (meaning "belonging to the reign of Guru Nanak"), also known as Lakhuri bricks, were decorative bricks used for structural walls during the Mughal era. They were employed for constructing historical Sikh architecture, such as at the Golden Temple complex. The British colonists also made use of the bricks in Punjab. Uses Nanakshahi bricks were used in the Mughal era more for aesthetic or ornamental reasons than for structural reasons. This variety of brick tile was of moderate dimensions and could be used for reinforcing lime concretes in the structural walls and other thick components. But, as they made moldings, cornices, pilasters, etc. easy to work into a variety of shapes, they were more often used as cladding or decorative material. In the present day, the bricks are sometimes used to give a "historical" look to settings, such as when the surroundings of the Golden Temple complex were heavily renovated in the 2010s. General specifications Nanakshahi bricks are moderate in size. More often than not, the structures on which they were used, especially the Sikh temples (Gurudwaras), were a combination of two systems: trabeated and post-and-lintel, or based on arches. The surfaces were treated with lime or gypsum plaster which was molded into cornices, pilasters, and other structural as well as non-structural embellishments. Brick and lime mortar as well as lime or gypsum plaster, and lime concrete were the most favoured building materials, although stone (such as red stone and white marble) was also used in a number of shrines. Many fortresses were built using these bricks. They come in 4″ × 4″ and 4″ × 6″ sizes. Relationship with Lakhuri bricks Due to a lack of understanding, contemporary writers sometimes confuse the Lakhuri bricks with other similar but distinct regional variants. For example, some writers use "Lakhuri bricks and Nanakshahi bricks" implying two different things, and others use "Lakhuri bricks or Nanakshahi bricks" inadvertently implying either are the same or two different things, leading to confusion as to whether they are the same, especially if these words are casually mentioned interchangeably. Lakhuri bricks were used by the Mughal Empire that spanned across the Indian subcontinent, whereas Nanak Shahi bricks were used mainly across the Sikh Empire, which was spread across the Punjab region in the north-west Indian subcontinent, when Sikhs were in conflict with the Mughal Empire due to the religious persecution of Sikhs by Mughals. Coins struck by Sikh rulers between 1764 CE and 1777 CE were called Gobind Shahi coins (bearing an inscription in the name of Guru Gobind Singh), and coins struck from 1777 onward were called Nanak Shahi coins (bearing an inscription in the name of Guru Nanak). Mughal-era Lakhuri bricks predate Nanakshahi bricks, as seen in Bahadurgarh Fort of Patiala, which was built by the Mughal Nawab Saif Khan in 1658 CE using earlier-era Lakhuri bricks; nearly 180 years later it was renovated using later-era Nanakshahi bricks and renamed in the honor of Guru Tegh Bahadur (as Guru Tegh Bahadur had stayed at this fort for three months and nine days before leaving for Delhi, where he was executed by Aurangzeb in 1675 CE) by Maharaja of Patiala Karam Singh in 1837 CE. Since the timelines of the Mughal Empire and the Sikh Empire overlapped, both Lakhuri and Nanakshahi bricks were used around the same time in their respective dominions.
Restoration architect and author Anil Laul clarifies: "We, therefore, had slim bricks known as the Lakhori and Nanakshahi bricks in India and the slim Roman bricks or their equivalents for many other parts of the world." Conservation Peter Bance, when evaluating the status of Sikh sites in present-day India, where the majority of Sikhs live today, criticizes the destruction of the originality of 19th-century Sikh sites under the guise of "renovation", whereby historical structures are toppled and new buildings take their former place. An example cited by him of sites losing their originality relates to Nanakshahi bricks, which are characteristic of Sikh architecture from the 19th century, being replaced with marble and gold by renovators of historical Sikh sites in India. See also Lakhori bricks Sikh architecture Notes References External links Nanak Shahi Bricks Ancient Home of Baba Sohan Singh Bhakna (of Ghadar Party fame) in trouble Viraasat Haveli frozen in Time Indian architectural history Sikh architecture Mughal architecture elements Building materials
Nanakshahi bricks
[ "Physics", "Engineering" ]
949
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
8,982,920
https://en.wikipedia.org/wiki/Bedtime
Bedtime (also called putting to bed or tucking in) is a ritual part of parenting to help children feel more secure and become accustomed to a more rigid schedule of sleep than they might prefer. The ritual of bedtime is aimed at facilitating the transition from wakefulness to sleep. It may involve bedtime stories, children's songs, nursery rhymes, bed-making and getting children to change into nightwear. In some religious households, prayers are said shortly before going to bed. Sleep training may be part of the bedtime ritual for babies and toddlers. In adult use, the term means simply "time for bed", similar to curfew, as in "It's past my bedtime". Some people are accustomed to drinking a nightcap or herbal tea at bedtime. Sleep coaches are also used to help individuals reach their bedtime goals. Researchers studying sleep have found that cell phone use at night delays going to sleep at one's bedtime and interferes with achieving a good night's sleep. Synonyms In boarding schools and on trips or holidays that involve young people, the equivalent of bedtime is lights out or lights-out; this term is also used in prisons, hospitals, in the military, and in sleep research. Newspapers A print newspaper, usually a daily, was "put to bed" when editorial work on the issue had formally ceased, the content was fixed, and printing could begin. See also Crib talk Lullaby Sleep cycle References Parenting Sleep Culture of beds
Bedtime
[ "Biology" ]
305
[ "Behavior", "Sleep" ]
8,983,001
https://en.wikipedia.org/wiki/The%20Foundations%20of%20Arithmetic
The Foundations of Arithmetic () is a book by Gottlob Frege, published in 1884, which investigates the philosophical foundations of arithmetic. Frege refutes other idealist and materialist theories of number and develops his own platonist theory of numbers. The Grundlagen also helped to motivate Frege's later works in logicism. The book was also seminal in the philosophy of language. Michael Dummett traces the linguistic turn to Frege's Grundlagen and his context principle. The book was not well received and was not read widely when it was published. It did, however, draw the attentions of Bertrand Russell and Ludwig Wittgenstein, who were both heavily influenced by Frege's philosophy. An English translation was published (Oxford, 1950) by J. L. Austin, with a second edition in 1960. Linguistic turn Gottlob Frege, Introduction to The Foundations of Arithmetic (1884/1980) In the enquiry that follows, I have kept to three fundamental principles: always to separate sharply the psychological from the logical, the subjective from the objective; never to ask for the meaning of a word in isolation, but only in the context of a proposition never to lose sight of the distinction between concept and object. In order to answer a Kantian question about numbers, "How are numbers given to us, granted that we have no idea or intuition of them?" Frege invokes his "context principle", stated at the beginning of the book, that only in the context of a proposition do words have meaning, and thus finds the solution to be in defining "the sense of a proposition in which a number word occurs." Thus an ontological and epistemological problem, traditionally solved along idealist lines, is instead solved along linguistic ones. Criticisms of predecessors Psychologistic accounts of mathematics Frege objects to any account of mathematics based on psychologism, that is, the view that mathematics and numbers are relative to the subjective thoughts of the people who think of them. According to Frege, psychological accounts appeal to what is subjective, while mathematics is purely objective: mathematics is completely independent from human thought. Mathematical entities, according to Frege, have objective properties regardless of humans thinking of them: it is not possible to think of mathematical statements as something that evolved naturally through human history and evolution. He sees a fundamental distinction between logic (and its extension, according to Frege, math) and psychology. Logic explains necessary facts, whereas psychology studies certain thought processes in individual minds. Ideas are private, so idealism about mathematics implies there is "my two" and "your two" rather than simply the number two. Kant Frege greatly appreciates the work of Immanuel Kant. However, he criticizes him mainly on the grounds that numerical statements are not synthetic-a priori, but rather analytic-a priori. Kant claims that 7+5=12 is an unprovable synthetic statement. No matter how much we analyze the idea of 7+5 we will not find there the idea of 12. We must arrive at the idea of 12 by application to objects in the intuition. Kant points out that this becomes all the more clear with bigger numbers. Frege, on this point precisely, argues towards the opposite direction. Kant wrongly assumes that in a proposition containing "big" numbers we must count points or some such thing to assert their truth value. 
Frege argues that without ever having any intuition toward any of the numbers in the following equation: 654,768+436,382=1,091,150 we nevertheless can assert it is true. This is provided as evidence that such a proposition is analytic. While Frege agrees that geometry is indeed synthetic a priori, arithmetic must be analytic. Mill Frege roundly criticizes the empiricism of John Stuart Mill. He claims that Mill's idea that numbers correspond to the various ways of splitting collections of objects into subcollections is inconsistent with confidence in calculations involving large numbers. He further quips, "thank goodness everything is not nailed down!" Frege also denies that Mill's philosophy deals adequately with the concept of zero. He goes on to argue that the operation of addition cannot be understood as referring to physical quantities, and that Mill's confusion on this point is a symptom of a larger problem of confounding the applications of arithmetic with arithmetic itself. Frege uses the example of a deck of cards to show numbers do not inhere in objects. Asking "how many" is nonsense without the further clarification of cards or suits or what, showing numbers belong to concepts, not to objects. Julius Caesar problem The book contains Frege's famous anti-structuralist Julius Caesar problem. Frege contends a proper theory of mathematics would explain why Julius Caesar is not a number. Development of Frege's own view of a number Frege makes a distinction between particular numerical statements such as 1+1=2, and general statements such as a+b=b+a. The latter are statements true of numbers just as well as the former. Therefore, it is necessary to ask for a definition of the concept of number itself. Frege investigates the possibility that number is determined in external things. He demonstrates how numbers function in natural language just as adjectives. "This desk has 5 drawers" is similar in form to "This desk has green drawers". The drawers being green is an objective fact, grounded in the external world. But this is not the case with 5. Frege argues that each drawer is on its own green, but not every drawer is 5. Frege urges us to remember that from this it does not follow that numbers may be subjective. Indeed, numbers are similar to colors at least in that both are wholly objective. Frege tells us that we can convert number statements where number words appear adjectivally (e.g., 'there are four horses') into statements where number terms appear as singular terms ('the number of horses is four'). Frege recommends such translations because he takes numbers to be objects. It makes no sense to ask whether any objects fall under 4. After Frege gives some reasons for thinking that numbers are objects, he concludes that statements of numbers are assertions about concepts. Frege takes this observation to be the fundamental thought of Grundlagen. For example, the sentence "the number of horses in the barn is four" means that four objects fall under the concept horse in the barn. Frege attempts to explain our grasp of numbers through a contextual definition of the cardinality operation ('the number of...', or ). He attempts to construct the content of a judgment involving numerical identity by relying on Hume's principle (which states that the number of Fs equals the number of Gs if and only if F and G are equinumerous, i.e. in one-one correspondence). 
He rejects this definition because it doesn't fix the truth value of identity statements when a singular term not of the form 'the number of Fs' flanks the identity sign. Frege goes on to give an explicit definition of number in terms of extensions of concepts, but expresses some hesitation. Frege's definition of a number Frege argues that numbers are objects and assert something about a concept. Frege defines numbers as extensions of concepts. 'The number of F's' is defined as the extension of the concept '... is a concept that is equinumerous to F'. The concept in question leads to an equivalence class of all concepts that have the number of F (including F). Frege defines 0 as the extension of the concept being non self-identical. So, the number of this concept is the extension of the concept of all concepts that have no objects falling under them. The number 1 is the extension of being identical with 0. Legacy The book was fundamental in the development of two main disciplines, the foundations of mathematics and philosophy. Although Bertrand Russell later found a major flaw in Frege's Basic Law V (this flaw is known as Russell's paradox, which is resolved by axiomatic set theory), the book was influential in subsequent developments, such as Principia Mathematica. The book can also be considered the starting point in analytic philosophy, since it revolves mainly around the analysis of language, with the goal of clarifying the concept of number. Frege's views on mathematics are also a starting point on the philosophy of mathematics, since it introduces an innovative account on the epistemology of numbers and mathematics in general, known as logicism. Editions See also Begriffsschrift Foundationalism Round square copula References Sources External links Frege, Gottlob (1960). Foundations of Arithmetic – Free, full-text German edition Die Grundlagen der Arithmetik at archive.org – Free, full-text German edition (Book from the collections of Harvard University) Die Grundlagen der Arithmetik at archive.org – Free, full-text German edition (Book from the collections of Oxford University) 1884 non-fiction books Books by Gottlob Frege Logic books Philosophy of mathematics literature
The Foundations of Arithmetic
[ "Mathematics" ]
1,897
[ "Philosophy of mathematics literature" ]
8,983,045
https://en.wikipedia.org/wiki/ISCSI%20Extensions%20for%20RDMA
The iSCSI Extensions for RDMA (iSER) is a computer network protocol that extends the Internet Small Computer System Interface (iSCSI) protocol to use Remote Direct Memory Access (RDMA). RDMA can be provided by the Transmission Control Protocol (TCP) with RDMA services (iWARP), which uses an existing Ethernet setup and therefore has lower hardware costs, RoCE (RDMA over Converged Ethernet), which does not need the TCP layer and therefore provides lower latency, or InfiniBand. iSER permits data to be transferred directly into and out of SCSI computer memory buffers (those which connect computers and storage devices) without intermediate data copies and with minimal CPU involvement. History An RDMA consortium was announced on May 31, 2002, with a goal of product implementations by 2003. The consortium released their proposal in July, 2003. The protocol specifications were published as drafts in September 2004 in the Internet Engineering Task Force and issued as RFCs in October 2007. The OpenIB Alliance was renamed in 2007 to be the OpenFabrics Alliance, and then released an open source software package. Description The motivation for iSER is to use RDMA to avoid unnecessary data copying on the target and initiator. The Datamover Architecture (DA) defines an abstract model in which the movement of data between iSCSI end nodes is logically separated from the rest of the iSCSI protocol; iSER is one Datamover protocol. The interface between the iSCSI and a Datamover protocol, iSER in this case, is called Datamover Interface (DI). The main difference between the standard iSCSI and iSCSI over iSER is the execution of SCSI read/write commands. With iSER the target drives all data transfer (with the exception of iSCSI unsolicited data) by issuing RDMA write/read operations, respectively. When the iSCSI layer issues an iSCSI command PDU, it calls the Send_Control primitive, which is part of the DI. The Send_Control primitive sends the STag with the PDU. The iSER layer in the target side notifies the target that the PDU was received with the Control_Notify primitive (which is part of the DI). The target calls the Put_Data or Get_Data primitives (which are part of the DI) to perform an RDMA write/read operation respectively. Then, the target calls the Send_Control primitive to send a response to the initiator. An example is shown in the figures (time progresses from top to bottom). All iSCSI control-type PDUs contain an iSER header, which allows the initiator to advertise the STags that were generated during buffer registration. The target will use the STags later for RDMA read/write operations. See also LIO Linux SCSI Target The SCST Linux SCSI target software stack SCSI RDMA Protocol References Further reading Thesis for Master of Science in Computer Science External links iSER and DA Frequently Asked Questions Computer networking SCSI
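The read-command sequence described above can be sketched as pseudo-code. This is a hypothetical illustration only: the primitive names (Send_Control, Control_Notify, Put_Data) come from the Datamover Interface described in the text, but every object, signature and helper below is invented and does not correspond to any real iSER or RDMA library API. A SCSI write would mirror this flow with Get_Data and an RDMA Read instead.

```python
def initiator_issue_scsi_read(transport, command_pdu, read_buffer):
    # The initiator registers its buffer and advertises the resulting STag
    # in the iSER header of the command PDU (DI primitive: Send_Control).
    stag = transport.register_buffer(read_buffer)
    transport.send_control(command_pdu, iser_header={"read_stag": stag})

def target_on_control_notify(transport, scsi_device, command_pdu, iser_header):
    # Control_Notify: the target's iSER layer reports that a command PDU arrived.
    data = scsi_device.read(command_pdu)
    # Put_Data: the target drives the transfer with an RDMA Write directly into
    # the initiator's pre-registered buffer -- no intermediate data copy.
    transport.rdma_write(iser_header["read_stag"], data)
    # The SCSI response PDU is then returned with Send_Control.
    transport.send_control(scsi_device.response_pdu(command_pdu))
```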
ISCSI Extensions for RDMA
[ "Technology", "Engineering" ]
638
[ "Computer networking", "Computer science", "Computer engineering" ]
8,983,270
https://en.wikipedia.org/wiki/Trimmer%20%28electronics%29
A trimmer, or preset, is a miniature adjustable electrical component. It is meant to be set correctly when installed in some device, and never seen or adjusted by the device's user. Trimmers can be variable resistors (potentiometers), variable capacitors, or trimmable inductors. They are common in precision circuitry like A/V components, and may need to be adjusted when the equipment is serviced. Trimpots (trimmer potentiometers) are often used to initially calibrate equipment after manufacturing. Unlike many other variable controls, trimmers are mounted directly on circuit boards, turned with a small screwdriver and rated for many fewer adjustments over their lifetime. Trimmers like trimmable inductors and trimmable capacitors are usually found in superhet radio and television receivers, in the intermediate frequency (IF), oscillator and radio frequency (RF) circuits. They are adjusted into the right position during the alignment procedure of the receiver. General considerations Trimmers come in a variety of sizes and levels of precision. For example, multi-turn trim potentiometers exist, in which it takes several turns of the adjustment screw to reach the end value. This allows for very high degrees of accuracy. Often they make use of a worm-gear (rotary track) or a leadscrew (linear track). The position of the adjustment on the component often needs to be considered for accessibility after the circuit is assembled. Both top- and side-adjust trimmers are available to facilitate this. The adjustment of presets is often fixed in place with sealing wax after the adjustment is made to prevent movement by vibration. This also serves as an indication if the device has been tampered with. Resistors Resistor trimmers generally come in the form of a potentiometer (pot), often called a trimpot. Potentiometers have three terminals, but can be used as a normal two-terminal resistor by joining the wiper to one of the other terminals, or just using two terminals. Trimpot is a registered trademark of Bourns, Inc., and the device was patented by Marlan Bourns in 1952. The term has since become generic. Two types of preset resistor are commonly found in circuits. The skeleton potentiometer works like a regular circular potentiometer, but is stripped of its enclosure, shaft, and fixings. The full movement of a skeleton potentiometer is less than a single turn. The other type is the multi-turn potentiometer which moves the slider along the resistive track via a gearing arrangement. The gearing is such that multiple turns of the adjustment screw are required to move the slider the full distance along the resistive track, leading to very high precision of setting. Some, possibly the majority, of multi-turn pots have a linear track rather than a circular one. Typically, a worm gear is used with rotary track presets and a leadscrew is used with linear track presets. Capacitors Trimming capacitors can be multi-plate parallel-plate capacitors with a dielectric between the plates for increased capacitance. However, at SHF only very small values of capacitance are needed. Presets at these frequencies are commonly a glass tube with plates at either end. The top plate is adjusted by means of a screw to which it is attached at the top of the cylinder. Inductors A common way of making preset inductors in radios is to wind the inductor on a plastic tube. A high permeability core material is inserted into the cylinder in the form of a screw.
Winding the core further into the inductor increases inductance and vice versa. It is normally necessary to use non-metallic tools to adjust inductors. A steel screwdriver will increase the inductance while it is being adjusted and it will fall again when the screwdriver is removed. At VHF and SHF, only small values of inductance are usually needed. Inductors can be made of open coils of a few turns. They can be tuned by squeezing the coils together or by pulling them apart as the inductance needs to be increased or decreased respectively. Tuned circuits An adjustable tuned circuit can be formed in the same way as a preset inductor. The inductor and its resonant capacitor are commonly contained in a metal can for shielding with a hole at the top to give access to the adjustable core. Tuned transformers can also be constructed this way with two windings on the same core. This is a common component in the IF stage of radios which have a double-tuned amplifier format. Distributed-element circuit Distributed-element circuits often use the component known as a stub. In printed planar formats such as microstrip, stubs can be trimmed by removing material with a scalpel or adding material by soldering on copper foil or even just pressing on strips of indium. This is useful for prototypes and pre-production runs, but is usually not done on production items. Applications They are common in precision circuitry like A/V components, and may need to be adjusted when the equipment is serviced. Trimpots are often used to initially calibrate equipment after manufacturing. Unlike many other variable controls, trimmers are mounted directly on circuit boards, turned with a small screwdriver and rated for many fewer adjustments over their lifetime. Trimmers like trimmable inductors and trimmable capacitors are usually found in superhet radio and television receivers, in the intermediate frequency (IF), oscillator and radio frequency (RF) circuits. They are adjusted into the correct position during the alignment procedure. Electronic symbols In circuit diagrams, the symbol for a variable component is the symbol for a fixed component with a diagonal line through it terminating in an arrow head. For a preset component, the diagonal line terminates in a bar. See also Laser trimming References External links Trimmer potentiometers (examples and internals), Robot Room Highlights from Trimmer Primers - Bourns Resistive components Capacitors
Trimmer (electronics)
[ "Physics" ]
1,284
[ "Physical quantities", "Resistive components", "Capacitors", "Capacitance", "Electrical resistance and conductance" ]
8,983,708
https://en.wikipedia.org/wiki/Lam%C3%A9%20parameters
In continuum mechanics, Lamé parameters (also called the Lamé coefficients, Lamé constants or Lamé moduli) are two material-dependent quantities denoted by λ and μ that arise in strain-stress relationships. In general, λ and μ are individually referred to as Lamé's first parameter and Lamé's second parameter, respectively. Other names are sometimes employed for one or both parameters, depending on context. For example, the parameter μ is referred to in fluid dynamics as the dynamic viscosity of a fluid (not expressed in the same units); whereas in the context of elasticity, μ is called the shear modulus, and is sometimes denoted by G instead of μ. Typically the notation G is seen paired with the use of Young's modulus E, and the notation μ is paired with the use of λ. In homogeneous and isotropic materials, these define Hooke's law in 3D: σ = 2μ ε + λ tr(ε) I, where σ is the stress tensor, ε the strain tensor, I the identity matrix and tr the trace function. Hooke's law may be written in terms of tensor components using index notation as σ_ij = 2μ ε_ij + λ δ_ij ε_kk, where δ_ij is the Kronecker delta. The two parameters together constitute a parameterization of the elastic moduli for homogeneous isotropic media, popular in mathematical literature, and are thus related to the other elastic moduli; for instance, the bulk modulus can be expressed as K = λ + (2/3)μ. Relations for other moduli are found in the (λ, G) row of the standard conversion table of elastic moduli. Although the shear modulus, μ, must be positive, Lamé's first parameter, λ, can be negative, in principle; however, for most materials it is also positive. The parameters are named after Gabriel Lamé. They have the same dimension as stress and are usually given in the SI unit of stress, the pascal [Pa]. See also Elasticity tensor Further reading K. Feng, Z.-C. Shi, Mathematical Theory of Elastic Structures, Springer New York (1981) G. Mavko, T. Mukerji, J. Dvorkin, The Rock Physics Handbook, Cambridge University Press (paperback) (2003) W.S. Slaughter, The Linearized Theory of Elasticity, Birkhäuser (2002) References Elasticity (physics)
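For concreteness, here is a minimal Python sketch of the isotropic Hooke's law written with the Lamé parameters as reconstructed above, together with the common conversion from Young's modulus and Poisson's ratio; the numerical values are illustrative, not material data.

```python
import numpy as np

def stress_from_strain(eps, lam, mu):
    """Isotropic Hooke's law: sigma = 2*mu*eps + lambda*tr(eps)*I.
    eps : 3x3 symmetric strain tensor; lam, mu : Lame parameters [Pa]."""
    return 2.0 * mu * eps + lam * np.trace(eps) * np.eye(3)

def lame_from_young_poisson(E, nu):
    """Convert Young's modulus E and Poisson's ratio nu to (lambda, mu)."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu

lam, mu = lame_from_young_poisson(E=200e9, nu=0.3)   # roughly steel-like values
eps = np.diag([1e-4, 0.0, 0.0])                      # a uniaxial strain state
sigma = stress_from_strain(eps, lam, mu)
bulk_modulus = lam + 2.0 * mu / 3.0                  # K = lambda + (2/3)*mu
```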
Lamé parameters
[ "Physics", "Materials_science" ]
464
[ "Deformation (mechanics)", "Physical phenomena", "Physical properties", "Elasticity (physics)" ]
8,983,968
https://en.wikipedia.org/wiki/Chlorophyllum%20molybdites
Chlorophyllum molybdites, commonly known as the green-spored parasol, false parasol, green-spored lepiota and vomiter, is a widespread mushroom. Poisonous and producing severe gastrointestinal symptoms of vomiting and diarrhea, it is commonly confused with the shaggy parasol (Chlorophyllum rhacodes) or shaggy mane (Coprinus comatus), and is the most commonly misidentified poisonous mushroom in North America. Its large size and similarity to the edible parasol mushroom (Macrolepiota procera), as well as its habit of growing in areas near human habitation, are reasons cited for this. The nature of the poisoning is predominantly gastrointestinal. Description It is an imposing mushroom with a pileus (cap) ranging from in diameter, hemispherical and with a flattened top. The cap is whitish in colour with coarse brownish scales. The gills are free and white, usually turning dark and green with maturity. It has a rare green spore print. The stipe ranges from tall and bears a double-edged ring. Its stem lacks the snakeskin pattern that is generally present on the parasol mushroom. The flesh is thick, and though firm at first, softens with age. It is white, though the base of the foot can sporadically become reddish-brown to pale reddish-pink or almost orange when cut or crushed. Distribution and habitat Chlorophyllum molybdites grows in lawns and parks across eastern North America, as well as temperate and subtropical regions around the world. Fruiting bodies generally appear after summer and autumn rains. It appears to have spread to other countries, with reports from Scotland, Australia, and Cyprus. Toxicity Chlorophyllum molybdites is the most frequently eaten poisonous mushroom in North America. The symptoms, caused by molybdophyllysin, are predominantly gastrointestinal in nature, with vomiting, diarrhea and colic, often severe, occurring 1–3 hours after consumption. Although these poisonings can be severe, particularly in children, none have yet resulted in death. Professor James Kimbrough writes:Chlorophyllum molybdites, the green-spored Morgan's Lepiota, is responsible for the greatest number of cases of mushroom poisonings in North America, and in Florida. This is probably due to the fact that it is easily confused with choice edible species such as Lepiota procera and L. rhacodes, and it is one of the most common mushrooms found on lawns and pastures throughout the country, with the exception of the Pacific Northwest. When eaten raw C. molybdites produce severe symptoms, including bloody stools, within a couple of hours. When cooked well, or parboiled and decanting the liquid before cooking, others eat and enjoy it. Eilers and Nelso (1974) found a heat-labile, high molecular weight protein which showed an adverse effect when given by intraperitoneal injection into laboratory animals. Cases of poisoning from these mushrooms are also reported in Malaysia, where they are often mistaken for Termitomyces mushrooms that are found locally. Gallery References External links Mushroom Expert – Chlorophyllum molybdites Tom Volk's Fungus of the Month – Chlorophyllim molybdites Your Yard Might Be Home to the "Vomiter" Mushroom | Huffington Post Poisonous fungi Agaricaceae Fungi found in fairy rings Fungi of Europe Fungi of North America Fungi of Africa Fungus species
Chlorophyllum molybdites
[ "Biology", "Environmental_science" ]
737
[ "Poisonous fungi", "Fungi", "Toxicology", "Fungus species" ]
8,984,137
https://en.wikipedia.org/wiki/Midland%20Radio
Midland Radio Corporation, also known as Midland Radio, or just Midland, is a manufacturing company headquartered in Kansas City, Missouri, US. Midland Radio develops radio communications products. History and structure Midland Radio Corporation was established in 1959 in Kansas City, Missouri, US. Midland Radio Corporation is owned by private investors. Midland is the oldest U.S. manufacturer of CB radios. In the 1990s, CTE International acquired a significant share of Midland Radio Corporation. Midland Radio is the U.S. affiliate of an international group of companies with offices in Bulgaria, Germany, Hong Kong, Italy, Poland, Russia, Spain, Ukraine, and the United Kingdom. MRC is headquartered in a distribution facility in Kansas City, Missouri, which houses its entire U.S. operations. Products Midland Radio develops, manufactures, and imports both consumer and business radio communications products. They were the first to introduce a 14-channel FRS radio to the market. Here is an overview of their product categories: NOAA Weather Radios: NOAA stands for the National Oceanic and Atmospheric Administration. This federal agency is responsible for the majority of weather forecasts and forewarning about weather events. Midland Radio has four different NOAA Weather Radios: the WR120, WR300, WR400 and the HH50B. Emergency Alert Radios: This line of products alerts consumers when there is a public or weather-related emergency. The two main Emergency Alert products are the ER210 and ER310. Both are equipped with a hand crank and solar panel for a sustainable backup power source, and offer multiple power source options – USB, LED flashlight, hand crank and rechargeable batteries. Two-Way Radios Midland Radio offers four lines of two-way radios: X-Talker, LXT, GXT and XT511. MicroMobile high-powered GMRS radios can communicate with any Midland Radio two-way radio. Midland Radio is the official communication sponsor of Jeep Jamboree, which is now transitioning from CB to MicroMobile radios. CB Radios Midland offers classic and portable CB radios. Business Radios Midland Radio has three product lines within its business sector – Heavy Duty, Medium Duty and Light Duty. Portable Power The PPG1000 runs on a silent 924 Wh lithium-ion battery, which makes it a clean source of power and safe to use indoors as well as outdoors. It can run a coffee maker, blender, mini fridge and similar appliances. Amateur Radios Midland Radio offers a dual-band amateur two-way radio featuring UHF and VHF bands and NOAA weather channels. CTE International Midland Radio and the Midland trademark are represented in Europe by CTE International. CTE International acquired a "significant share" of Midland Radio Corporation in the 1990s. References External links USA site EU site Manufacturing companies based in Kansas City, Missouri Manufacturing companies established in 1959 1959 establishments in Missouri American brands Radio manufacturers
Midland Radio
[ "Engineering" ]
592
[ "Radio electronics", "Radio manufacturers" ]
8,984,493
https://en.wikipedia.org/wiki/Ion%20pump
An ion pump (also referred to as a sputter ion pump) is a type of vacuum pump which operates by sputtering a metal getter. Under ideal conditions, ion pumps are capable of reaching pressures as low as 10−11 mbar. An ion pump first ionizes gas within the vessel it is attached to and employs a strong electrical potential, typically 3–7 kV, which accelerates the ions into a solid electrode. Small bits of the electrode are sputtered into the chamber. Gasses are trapped by a combination of chemical reactions with the surface of the highly-reactive sputtered material, and being physically trapped underneath that material. History The first evidence for pumping from electrical discharge was found 1858 by Julius Plücker, who did early experiments on electrical discharge in vacuum tubes. In 1937, Frans Michel Penning observed some evidence of pumping in the operation of his cold cathode gauge. These early effects were comparatively slow to pump, and were therefore not commercialized. A major advance came in the 1950s, when Varian Associates were researching improvements for the performance of vacuum tubes, particularly on improving the vacuum inside the klystron. In 1957, Lewis D Hall, John C Helmer, and Robert L Jepsen filed a patent for a significantly improved pump, one of the earliest pumps that could get a vacuum chamber to ultra-high vacuum pressures. Working principle The basic element of the common ion pump is a Penning trap. A swirling cloud of electrons produced by an electric discharge is temporarily stored in the anode region of a Penning trap. These electrons ionize incoming gas atoms and molecules. The resultant swirling ions are accelerated to strike a chemically active cathode (usually titanium). On impact the accelerated ions will either become buried within the cathode or sputter cathode material onto the walls of the pump. The freshly sputtered chemically active cathode material acts as a getter that then evacuates the gas by both chemisorption and physisorption resulting in a net pumping action. Inert and lighter gases, such as He and H2 tend not to sputter and are absorbed by physisorption. Some fraction of the energetic gas ions (including gas that is not chemically active with the cathode material) can strike the cathode and acquire an electron from the surface, neutralizing it as it rebounds. These rebounding energetic neutrals are buried in exposed pump surfaces. Both the pumping rate and capacity of such capture methods are dependent on the specific gas species being collected and the cathode material absorbing it. Some species, such as carbon monoxide, will chemically bind to the surface of a cathode material. Others, such as hydrogen, will diffuse into the metallic structure. In the former example, the pump rate can drop as the cathode material becomes coated. In the latter, the rate remains fixed by the rate at which the hydrogen diffuses. Types There are three main types of ion pumps: the conventional or standard diode pump, the noble diode pump and the triode pump. Standard diode pump A standard diode pump is a type of ion pump employed in high vacuum processes which contains only chemically active cathodes, in contrast to noble diode pumps. Two sub-types may be distinguished: the sputter ion pumps and the orbitron ion pumps. Sputter ion pump In the sputter ion pumps, one or more hollow anodes are placed between two cathode plates, with an intense magnetic field parallel to the axis of the anodes in order to augment the path of the electrons in the anode cells. 
Orbitron ion pump In the orbitron vacuum pumps, electrons are caused to travel in spiral orbits between a central anode, normally in the form of a cylindrical wire or rod, and an outer or boundary cathode, generally in the form of a cylindrical wall or cage. The orbiting of the electrons is achieved without the use of a magnetic field, even though a weak axial magnetic field may be employed. Noble diode pump A noble diode pump is a type of ion pump used in high-vacuum applications that employs both a chemically reactive cathode, such as titanium, and an additional cathode composed of tantalum. The tantalum cathode serves as a high-inertia crystal lattice structure for the reflection and burial of neutrals, increasing pumping effectiveness of inert gas ions. Pumping intermittently high quantities of hydrogen with noble diodes should be done with great care, as hydrogen might over months get re-emitted out of the tantalum. Applications Ion pumps are commonly used in ultra-high vacuum (UHV) systems, as they can attain ultimate pressures less than 10−11 mbar. In contrast to other common UHV pumps, such as turbomolecular pumps and diffusion pumps, ion pumps have no moving parts and use no oil. They are therefore clean, need little maintenance, and produce no vibrations. These advantages make ion pumps well-suited for use in scanning probe microscopy, molecular beam epitaxy and other high-precision apparatuses. Radicals Recent work has suggested that free radicals escaping from ion pumps can influence the results of some experiments. See also Electroosmotic flow Marklund convection References Sources External links An Introduction to Ion Pumps Vacuum pumps
Ion pump
[ "Physics", "Engineering" ]
1,089
[ "Vacuum pumps", "Vacuum systems", "Vacuum", "Matter" ]
8,984,536
https://en.wikipedia.org/wiki/Extreme%20Ultraviolet%20Explorer
The Extreme Ultraviolet Explorer (EUVE or Explorer 67) was a NASA space telescope for ultraviolet astronomy. EUVE was a part of NASA's Explorer spacecraft series. Launched on 7 June 1992 with instruments for ultraviolet (UV) radiation between wavelengths of 7 and 76 nm (equivalent to 0.016–0.163 keV in energy), the EUVE was the first satellite mission especially for the short-wave ultraviolet range. The satellite compiled an all-sky survey of 801 astronomical targets before being decommissioned on 31 January 2001. Mission The Extreme-Ultraviolet Explorer (EUVE) was a spinning spacecraft designed to rotate about the Earth/Sun line. EUVE was a part of NASA's Explorer spacecraft series and designed to operate in the extreme ultraviolet (EUV) range of the spectrum, from 70 to 760 Ångström (Å). This spacecraft's objective was to carry out a full-sky survey, and subsequently, a deep survey and pointed observations. Science objectives included discovering and studying UV sources radiating in this spectral region, and analyzing effects of the interstellar medium on the radiation from these sources.The proposal for the craft originated with the Space Astrophysics Group at the University of Berkeley who had previously been involved with the EUV telescope on the Apollo element of the Apollo–Soyuz mission. The full-sky survey was accomplished by three Wolter-Schwarzschild grazing-incidence telescopes. During the sky survey, the satellite was spun three times per orbit to image a 2° wide band of sky in each of four EUV passbands. The deep survey was accomplished with a fourth Wolter-Schwarzschild grazing-incidence telescope, within a 2 × 180° region of sky. This telescope was also used for three-EUV bandpass spectroscopy of individual sources, providing ~ 1–2 Å resolution spectra. The goals of the mission included several different areas of observation using the extreme ultraviolet (EUV) range of frequencies: To make an all-sky survey in the extreme ultraviolet band; To make a deep survey in the EUV range on two separate bandpasses; To make spectroscopic observations of targets found by other missions; To observe EUV sources such as hot white dwarfs and coronal stars; To study the composition of the interstellar medium using EUV spectroscopy; To determine whether it would be beneficial to create another, more sensitive EUV telescope. Spacecraft The science instruments were attached to a Multi-mission Modular Spacecraft (MMS). The MMS was 3-axis stabilized, with a stellar reference control system and solar arrays. Payload instruments NASA described these instruments: 2 Wolter-Schwarzschild Type I grazing incidence mirror, each with an imaging microchannel plate (MCP detector) (Scanner A & B) FoV ~5° diameter; two passbands 44–220 Å 140–360 Å; 1 Wolter-Schwarzschild Type II grazing incidence mirror, with an imaging Micro-Channel Plate (MCP detector) FoV ~4° diameter; two passbands 520–750 Å and 400–600 Å; 1 Wolter-Schwarzschild Type II grazing incidence mirror Deep Survey/Spectrometer Telescope. The light is split, with half of the light fed to: An imaging deep survey MCP detector, and Three spectrometers which are each combinations of a grating and MCP detector: SW (70–190 Å), MW (140–380 Å), LW (280–760 Å). Experiments Extreme Ultraviolet Deep-Sky Survey The EUVE Spectrometer was a three-fold symmetric slitless objective design based on variable line space grazing incidence reflection gratings. 
Photon images are accumulated simultaneously in three bandpasses with effective spectral resolutions of 200–400 in 3 bandpasses from 70 to 760 Å. The Spectrometer and Deep Survey instruments share the DS/S mirror. The regions of the mirror devoted to the spectrometer and Deep Survey were defined at the front aperture, which was an annulus divided into six segments. Each of the spectrometer channels receives a beam of light from one of three alternating segments. This division gives each channel a geometric area of . After the mirror, each converging beam then strikes one of three gratings which focus the spectra onto three detectors, arranged in a circle around the central Deep Survey detector. The throughput of the EUVE Spectrometer was determined by the combined effects of the mirrors' and gratings' coating reflectivities, which were functions of both wavelength and grazing angle, the filter transmissions, and the quantum efficiency functions of the detector photocathode materials. Collimators and Sky Background In order to achieve good spectral resolution, any EUV spectrometer must be designed to limit the effect of diffuse sky radiation. The medium and long wavelength channels of the EUVE Spectrometer have wire-grid collimators placed directly after the aperture before the mirror, which limit the grazing angles of the incident light to exclude some of the sky background. They consist of 15 etched molybdenum grids, spaced exponentially and held in a thermally stable claw structure, also of molybdenum. The transmission profile of the stack is triangular in the dispersion direction and limits the beam to 20 arcminutes FWHM. The transmission of each collimator assembly was tested in visible light. The collimator relative transmissions were measured in the EUV by comparing the Spectrometer throughputs, measured as a function of off-axis angle, before and after installation of the collimators in the medium and long wavelength channels. Alignment to the boresight of the instrument was also determined. Both collimators functioned as designed, with peak transmissions of 64.2% and 65.4% in the medium and long wavelength channels, respectively. Variable Line Space Gratings The EUVE Spectrometer incorporated plane diffraction gratings with continuously varying line spacing, placed in the converging beam of the telescope to diffract the light as it approached the focus. Like concave gratings, they obviate the use of other focusing optics after dispersion. Unlike uniformly spaced rulings, variable line space gratings can produce nearly stigmatic spectra using straight, conventionally ruled grooves. The gratings are blazed for use in the first inside order. "Inside" was used to mean diffracted orders at angles between the surface normal and the specular direction, and was referred to with a minus sign when represented numerically, e.g. −1st order. The gratings cover three overlapping bandpasses; short wavelengths from 70 to 190 A, medium wavelengths from 140 to 380 A, and long wavelengths from 280 to 760 Å. The groove densities range from 415 to 3550 grooves/mm. The gratings were ruled by Hitachi, Inc. at the Naka Optical Works in Japan. The short wavelength grating is coated with rhodium to optimize the reflectivity between 70 and 190 Å. The medium and long wavelength gratings have platinum surface coatings. Spectrometer Filters Thin film filters, a few thousand Å thick, completely covered each detector. 
They define broad bandpasses while screening out bright geocoronal and interplanetary lines such as Lyman alpha radiation and some higher orders of diffraction. The materials were Lexan and boron in the short wavelength, aluminum and carbon in the medium, and aluminum in the long wavelength channel. The two longer wavelength filters have an off-axis quadrant of material which covered the same bandpass as one of the shorter channels. At these positions, which correspond to off-axis angles of approximately 0.5°, some wavelengths that would normally lie in the shorter channel's range appear in the longer wavelength channel in second order (n=−2), and are passed by the alternate filter. Wavelengths from parts of the shorter bandpass that overlap the longer channel also appear in first order. These off-axis locations are configured to be used as backups to duplicate the short and medium channels, should either of these detectors fail. Micro-Channel Plate Detectors All the EUVE detectors were microchannel plate (MCP) detectors. MCP detectors are electron-amplification devices that provided two-dimensional imaging and time-tagging of individual EUV photon events. Each detector employs a biased stack of three porous quartz MCPs with a channel length-to-diameter ratio of approximately 80:1. The stack acts as an electron multiplier, and is backed by a conducting anode, partitioned into a graduated "wedge, strip, and zigzag" pattern. The top plate has an applied photocathode of potassium bromide (KBr) to enhance the photoelectric response at EUV wavelengths. When a photon excites the front surface, a bias of 4–5 kV causes cascading electrons to form a charge cloud, which then strikes the divided anode. Event positions (X, Y) are calculated by onboard instrument software (ISW) from the division of the charge cloud among the wedge, strip, and zigzag areas of the anode. The detectors record positions 0–2047 in each dimension, and a single pixel is about 29×29 microns. This resulted in a pixel size of roughly 4.25 arcseconds when remapped to the sky. All the detectors were equipped with four stim-pulser, or "stim", pins, which periodically excite the anode at standard positions, and are used to monitor position stability. The detectors have been placed at the sagittal intersection to produce good imaging over the whole detector, rather than optimized spectral focus at one point. Extreme Ultraviolet Full-Sky Survey This investigation is designed to perform a full-sky survey, searching for EUV sources. The instrument package contains four Wolter-Schwarzschild grazing-incidence telescopes (with EUV thin-film filters) to collect and to isolate radiation. The detector system for each telescope was a wedge and strip anode image converter, consisting of a micro-channel plate, a wedge and strip anode, and detector amplifiers designed to produce images of sky fields in selected wavelength ranges. Three telescopes are designed to operate at right angles to the spin axis and to carry out the sky survey, with bandpass filters (tentatively) for the wavelength ranges 80 to 190 Å, 170 to 330 Å, and 500 to 750 Å. These three telescopes point perpendicular to the Earth-Sun line and sweep out a great circle in the sky with each revolution of the spacecraft. As the Earth moves around the Sun, the great circle is shifted by 1° each day and so the entire celestial sphere is surveyed in 6 months. The fourth telescope points in the anti-solar direction, within the Earth's shadow cone.
In this limited direction, the He II 304 Å background is almost completely absent, and thus higher sensitivity can be obtained for observing selected interesting objects. Spectroscopic observations of the brightest EUV sources are carried out with a resolving power of 100 from 80 to 800 Å. The full-sky survey was completed in August 1993 by which time 801 UV sources had been observed. Atmospheric entry The EUVE mission was extended twice, but cost and scientific merit issues led NASA to a decision to terminate the mission in 2000. EUVE satellite operations ended on 31 January 2001 when the spacecraft was placed in a safehold. Transmitters were commanded off on 2 February 2001. EUVE re-entered in the atmosphere of Earth over central Egypt at approximately 04:15 UTC on 31 January 2002. The mission is considered a success since it accomplished its scientific, technological, and outreach goals. See also Explorer program 1992 in spaceflight References External links EUVE page at Space Sciences Lab (links to science highlights and publications) EUVE page at NASA GSFC EUVE page at NASA-STScI (MAST) (has stellar map of EUVE observations) Explorers Program Space telescopes Extreme ultraviolet telescopes Satellites formerly orbiting Earth Spacecraft launched by Delta II rockets Spacecraft launched in 1992 Spacecraft which reentered in 2002
Extreme Ultraviolet Explorer
[ "Astronomy" ]
2,485
[ "Space telescopes" ]
8,984,619
https://en.wikipedia.org/wiki/Feature-oriented%20scanning
Feature-oriented scanning (FOS) is a method of precision measurement of surface topography with a scanning probe microscope in which surface features (objects) are used as reference points for microscope probe attachment. In the FOS method, the probe is moved from one surface feature to another located nearby, and the relative distance between the features and the topography of each feature's neighborhood are measured. This approach allows an intended area of a surface to be scanned in parts, after which the whole image is reconstructed from the obtained fragments. The method is also known as object-oriented scanning (OOS). Topography Any topography element that, in a broad sense, looks like a hill or a pit may be taken as a surface feature. Examples of surface features (objects) are: atoms, interstices, molecules, grains, nanoparticles, clusters, crystallites, quantum dots, nanoislets, pillars, pores, short nanowires, short nanorods, short nanotubes, viruses, bacteria, organelles, cells, etc. FOS is designed for high-precision measurement of surface topography as well as of other surface properties and characteristics. Moreover, in comparison with conventional scanning, FOS provides a higher spatial resolution. Thanks to a number of techniques embedded in FOS, distortions caused by thermal drift and creep are practically eliminated. Applications FOS has the following fields of application: surface metrology, precise probe positioning, automatic surface characterization, automatic surface modification/stimulation, automatic manipulation of nanoobjects, nanotechnological processes of “bottom-up” assembly, coordinated control of analytical and technological probes in multiprobe instruments, control of atomic/molecular assemblers, control of probe nanolithographs, etc. See also Counter-scanning Feature-oriented positioning External links Feature-oriented scanning, Research section, Lapshin's Personal Page on SPM & Nanotechnology Microscopes Nanotechnology
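As a rough illustration of the scanning strategy described above, the Python sketch below hops between neighboring features, records each feature's local neighborhood scan and its displacement from the previous feature, and stitches the fragments into one map. All object and method names here (attach_to_feature, scan_neighborhood, measure_relative_distance, and so on) are hypothetical placeholders for an SPM control interface, not part of any real instrument API.

def feature_oriented_scan(probe, first_feature):
    """Sketch of a feature-oriented scan: measure each feature's neighborhood,
    chain relative displacements, and stitch the fragments into one surface map."""
    surface_map = []            # list of (absolute_offset, fragment) pairs
    position = (0.0, 0.0)       # running estimate of the current feature position
    feature = first_feature
    visited = set()
    while feature is not None and feature.id not in visited:
        visited.add(feature.id)
        probe.attach_to_feature(feature)               # use the feature as a reference point
        fragment = probe.scan_neighborhood(feature)    # small local topography scan
        surface_map.append((position, fragment))
        nxt = probe.find_nearest_unvisited_feature(feature, visited)
        if nxt is None:
            break
        dx, dy = probe.measure_relative_distance(feature, nxt)
        position = (position[0] + dx, position[1] + dy)  # only relative steps, tolerant of slow drift
        feature = nxt
    return surface_map

Because each step is measured relative to a nearby physical feature rather than to the scanner's absolute coordinates, slow thermal drift mostly cancels out of the stitched result, which is the point of the method.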
Feature-oriented scanning
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
448
[ "Materials science", "Measuring instruments", "Microscopes", "Microscopy", "Nanotechnology" ]
8,984,724
https://en.wikipedia.org/wiki/Dirichlet%20algebra
In mathematics, a Dirichlet algebra is a particular type of algebra associated to a compact Hausdorff space X. It is a closed subalgebra of C(X), the uniform algebra of bounded continuous functions on X, whose real parts are dense in the algebra of bounded continuous real functions on X. The concept was introduced by . Example Let R(X) be the set of all rational functions that are continuous on X; in other words, functions that have no poles in X. Then R(X) is a subalgebra of C(X). If the real parts of the functions in R(X) are dense in the bounded continuous real functions on X, we say R(X) is a Dirichlet algebra. It can be shown that if an operator T has X as a spectral set, and R(X) is a Dirichlet algebra, then T has a normal boundary dilation. This generalises Sz.-Nagy's dilation theorem, which can be seen as a consequence of this by letting X be the closed unit disc. References Vern Paulsen, Completely Bounded Maps and Operator Algebras, 2002. Functional analysis C*-algebras
Dirichlet algebra
[ "Mathematics" ]
202
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical analysis stubs", "Mathematical objects", "Mathematical relations" ]
8,984,861
https://en.wikipedia.org/wiki/Spectral%20set
In operator theory, a subset X of the complex plane is said to be a spectral set for a (possibly unbounded) linear operator T on a Banach space if the spectrum of T is contained in X and von Neumann's inequality holds for T on X; i.e. for all rational functions r with no poles on X,
$\|r(T)\| \le \|r\|_{X} = \sup\{\, |r(x)| : x \in X \,\}.$
This concept is related to the topic of analytic functional calculus of operators. In general, one wants to get more details about the operators constructed from functions with the original operator as the variable. A detailed discussion of spectral sets and von Neumann's inequality can be found in the operator-theory literature. Functional analysis
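As a concrete illustration of the definition above, here is a standard worked example (stated as general operator-theory background, not as a claim drawn from this article's sources): for any contraction T on a Hilbert space, the closed unit disc is a spectral set, because von Neumann's inequality gives

\[
\|T\| \le 1 \;\Longrightarrow\; \|p(T)\| \;\le\; \sup_{|z| \le 1} |p(z)|
\]

for every polynomial $p$, and hence, by approximation, for every rational function with poles off the closed disc $\overline{\mathbb{D}} = \{\, z \in \mathbb{C} : |z| \le 1 \,\}$.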
Spectral set
[ "Mathematics" ]
114
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical analysis stubs", "Mathematical objects", "Mathematical relations" ]
8,985,148
https://en.wikipedia.org/wiki/Jan%20G.%20Smith
Jan Gustav Salomon Smith, born 19 June 1895 in Stockholm, Sweden, died 30 April 1966 in Stockholm. In the literature he is known as Jan G. Smith. He was an engineer with an M.Sc. degree from KTH, Stockholm. For many years he worked in the American automobile industry, returning to Sweden in 1924. His experience from the American automobile industry was probably the main reason why Gustav Larson asked him to join the team of engineers that started the design work for Volvo's first automobile, the ÖV 4, in 1924. He worked for Gustav Larson in the temporary "design office" in Gustav Larson's private flat in Stockholm for about a year. Many of Jan G. Smith's original drawings for the Volvo ÖV4 (the gearbox and the main chassis components), together with technical papers that he had collected in America in the form of a private design book, are preserved in the archive of the National Museum of Science and Technology, Stockholm, Sweden. After the Volvo project he was employed by ASEA in Västerås and later worked for the same company in Stockholm. Smith was replaced in the Volvo project by engineer Henry Westerberg, who stayed with Volvo as a designer until he retired in 1980 at the age of 79. Jan G. Smith was awarded a gold medal in 1929 by the Royal Swedish Academy of Engineering Sciences (IVA), together with Gustav Larson, "for their contribution to the national automobile industry in Sweden". References Björn Eric Lindh, Volvo Personvagnar från 20-tal till 80-tal, 1984 (in Swedish). Christer Olsson, Volvo Göteborg Sverige, 1996 (in Swedish). External links National Museum of Science and Technology, Sweden. Official website. Royal Swedish Academy of Engineering Sciences. Official website. 20th-century Swedish engineers Automotive engineers Volvo people 1895 births 1966 deaths
Jan G. Smith
[ "Engineering" ]
386
[ "Automotive engineering", "Automotive engineers" ]
8,985,569
https://en.wikipedia.org/wiki/Load%20dump
Load dump means the disconnection of a powered load. It can cause two problems: failure of supply to equipment or customers, and large voltage spikes from the inductive generator(s). In automotive electronics, it refers to the disconnection of the vehicle battery from the alternator while the battery is being charged. Due to such a disconnection of the battery, other loads connected to the alternator experience a surge in the voltage on the battery bus. This surge may be as high as 120 volts and may take up to 400 ms to decay. It is typically clamped to 40 V in 12 V vehicles and about 60 V in 24 V systems. Overview The field winding of an alternator has a large inductance. When the vehicle battery is being charged, the alternator generates a large current, the magnitude of which is controlled by the current in the field winding. If the battery becomes disconnected while it is being charged, the load on the alternator suddenly decreases. However, the vehicle's voltage regulator cannot quickly cause the field current to decrease sufficiently, so the alternator continues to generate a large current. This large current causes the voltage on the vehicle bus to increase significantly, well above the normal, regulated level. All the loads connected to the alternator see this high voltage spike. The strength of the spike depends on many factors, including the speed at which the alternator is rotating and the current which was being supplied to the battery before it was disconnected. These spikes may peak at as high as 120 V and may take up to 400 ms to decay. This kind of spike would damage many semiconductor devices, e.g. ECUs, that may be connected to the alternator. Special protection devices, such as TVS diodes and varistors, which can withstand and absorb the energy of these spikes, may be added to protect such semiconductor devices. Various automotive standards such as ISO 7637-2 and SAE J1113-11 specify a standard shape of the load dump pulse against which automotive electronic components may be designed. There can also be a smaller inductive spike due to the inductance of the stator windings. That may have a larger voltage, but it will be for a much shorter duration, as relatively little energy is stored in the inductance of these windings. Load dump can be more damaging because the alternator continues to generate power until the field current can decrease, so much more energy can be released. References Power electronics
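As a rough illustration of the surge behaviour described above, here is a small Python sketch that models an unclamped load-dump transient as a simple exponential decay and shows the effect of a suppressor clamp. The peak voltage, decay time and 40 V clamp level are taken from the figures quoted in the text; the exponential shape and the chosen time constant are simplifying assumptions for illustration, not the normative ISO 7637-2 pulse.

import math

def load_dump_voltage(t, v_nominal=14.0, v_peak=120.0, tau=0.100, clamp=None):
    """Illustrative load-dump transient on a 12 V bus.
    t        -- time since battery disconnection, in seconds
    v_peak   -- unclamped peak bus voltage (the text quotes up to ~120 V)
    tau      -- decay time constant; chosen so the surge dies out within ~400 ms
    clamp    -- optional suppressor clamping level (e.g. 40 V TVS on a 12 V system)
    """
    v = v_nominal + (v_peak - v_nominal) * math.exp(-t / tau)
    if clamp is not None:
        v = min(v, clamp)   # an ideal clamp simply limits the bus voltage
    return v

for ms in (0, 50, 100, 200, 400):
    t = ms / 1000.0
    print(f"{ms:4d} ms  unclamped {load_dump_voltage(t):6.1f} V   "
          f"clamped {load_dump_voltage(t, clamp=40.0):5.1f} V")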
Load dump
[ "Engineering" ]
503
[ "Electronic engineering", "Power electronics" ]
8,985,806
https://en.wikipedia.org/wiki/Roaming%20SIM
A roaming SIM is a mobile phone SIM card that operates on more than one network within its home country. Roaming SIMs currently have two main applications, the least cost call routing for roaming mobile calls and machine to machine. Using a normal network locked SIM, travelers can use their own roaming enabled mobile phone in any country that has a roaming agreement with their home network, or for global networks like Vodafone, with another Vodafone OpCo. This manifests itself to most users when they receive a text message welcoming the traveler to a local network. Once they return home, their SIM will only work on the network with which they have a contract. A roaming SIM however, also known as a global roaming SIM, will work with whichever network it can detect, at home or abroad. Roaming mobile calls The use of roaming SIM cards in its most common form is in normal voice applications such as mobile phone calls. The common application of roaming SIMs for voice is where mobile calls are automatically routed to, and made on, the least cost network. This typically means that incoming calls are free, no matter which network a mobile user is on. This also means that a caller enjoys the lowest cost when making a call, significantly reducing call costs, especially compared to normal network charges for International Roaming. Global roaming SIMs are very often combined with callback technology, whereby the user dials a number in the normal way, but the call is intercepted by an application on the SIM card and turned from an outbound call to an inbound call which the user answers. This ensures that the call travels exclusively through the least cost route, and also it is taking advantage of the fact that inbound call charges are typically lower than outbound ones. Some providers achieve this automatic call interception and callback by encoding a program onto the SIM card. Other providers use Multi-IMSI (International Mobile Subscriber Identity) technology to lower the cost of roaming. In this case, there is a program on the SIM card that selects the lowest cost IMSI (or 'profile') to use in a specific country. Increasingly, data services are being added to roaming SIM cards to reduce the cost of roaming data charges. Mobile users are increasingly using data services, and it can be very difficult to predict the cost of using data because it is invoiced based on volume. Machine to machine This technology is also used in various machine to machine (M2M) applications where devices communicate directly, such as vehicle tracking systems, smart meters, and industrial monitoring. By seamlessly switching between multiple networks, it ensures more comprehensive coverage, even in remote areas, while minimizing costs through least-cost routing, which selects the most economical network available. Alternatives For some applications (particularly where regular travel between two countries is the main purpose) a Dual SIM can be considered as an alternative. They have the advantage that it is possible to buy a local SIM card and use that next to the primary SIM card. Voice over IP apps (softphones) may be installed on smartphones to inexpensively call international numbers. As these use Wi-Fi where available, costs may be substantially lower. Voice quality using VoIP for international calls may vary. References Mobile telecommunications
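As an illustration of the multi-IMSI, least-cost profile selection described above, the Python sketch below picks the cheapest usable profile for the country a phone finds itself in. The rate table, country codes and profile names are invented for the example; real SIM applets use operator-specific data and signalling logic not shown here.

# Hypothetical per-minute rates (in cents) for each IMSI profile by country code.
RATES = {
    "profile_A": {"DE": 12, "FR": 15, "US": 25},
    "profile_B": {"DE": 20, "FR": 9,  "US": 30},
    "profile_C": {"DE": 18, "FR": 14, "US": 11},
}

def select_profile(country_code, available_profiles=RATES):
    """Return the (profile, rate) pair with the lowest cost in this country,
    ignoring profiles that have no roaming agreement there."""
    candidates = [
        (rates[country_code], name)
        for name, rates in available_profiles.items()
        if country_code in rates
    ]
    if not candidates:
        raise LookupError(f"no usable profile in {country_code}")
    rate, name = min(candidates)
    return name, rate

print(select_profile("FR"))   # -> ('profile_B', 9): the cheapest profile for France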
Roaming SIM
[ "Technology" ]
650
[ "Mobile telecommunications" ]
8,985,875
https://en.wikipedia.org/wiki/Squitter
Squitter refers to random pulses, pulse-pairs and other unsolicited messages used for signal maintenance in various aviation radio systems. Squitter pulses were originally, and are still, used in the DME/TACAN air navigation systems. Because of their randomness and their identical appearance to standard reply pulse-pairs, squitter pulses look to other interrogating aircraft like unsolicited, unsynchronised replies. Squitter was first used in the original IFF systems. These used a superregenerative receiver, which greatly amplified input signals using positive feedback. If the gain was set too high, random radio noise such as static would enter the amplifier and cause it to transmit, producing random output pulses. An automatic gain control system on subsequent models cured this problem. Primarily, squitter is used to maintain a regular signal from the ground beacon. In the TACAN system, signal strength variation due to rotation of the transmitting beam (amplitude modulation) determines the course bearing function. This function would be lost without a constant, carrier-like signal of 2700–4800 pulse-pairs per second when few or no aircraft are interrogating. In the Mode S secondary surveillance radar system, the term is used to describe messages that are unsolicited downlink transmissions from an automatic dependent surveillance-broadcast (ADS-B) Mode S transponder system. Mode S transponders transmit acquisition squitter (unsolicited downlink transmissions) to permit passive acquisition by interrogators with broad antenna beams, where active acquisition may be hindered by all-call synchronous garble. Examples of such interrogators are an airborne collision avoidance system and an airport surface system. External links SSR at Radartutorial AIS-P/TailLight Air traffic control Radar Avionics
Squitter
[ "Technology" ]
409
[ "Avionics", "Aircraft instruments" ]
4,288,963
https://en.wikipedia.org/wiki/Maximum%20common%20induced%20subgraph
In graph theory and theoretical computer science, a maximum common induced subgraph of two graphs G and H is a graph that is an induced subgraph of both G and H, and that has as many vertices as possible. Finding this graph is NP-hard. In the associated decision problem, the input is two graphs G and H and a number k. The problem is to decide whether G and H have a common induced subgraph with at least k vertices. This problem is NP-complete. It is a generalization of the induced subgraph isomorphism problem, which arises when k equals the number of vertices in the smaller of G and H, so that this entire graph must appear as an induced subgraph of the other graph. Based on hardness of approximation results for the maximum independent set problem, the maximum common induced subgraph problem is also hard to approximate. This implies that, unless P = NP, there is no approximation algorithm that, in polynomial time on n-vertex graphs, always finds a solution within a factor of n^{1−ε} of optimal, for any ε > 0. One possible solution for this problem is to build a modular product graph of G and H. In this graph, the largest clique corresponds to a maximum common induced subgraph of G and H. Therefore, algorithms for finding maximum cliques can be used to find the maximum common induced subgraph. Moreover, a modified maximum-clique algorithm can be used to find a maximum common connected subgraph. The McSplit algorithm (along with its McSplit↓ variant) is a forward checking algorithm that does not use the clique encoding, but uses a compact data structure to keep track of the vertices in graph H to which each vertex in graph G may be mapped. Both versions of the McSplit algorithm outperform the clique encoding for many graph classes. A more efficient implementation of McSplit is McSplitDAL+PR, which combines a Reinforcement Learning agent with some heuristic scores computed with the PageRank algorithm. Applications Maximum common induced subgraph algorithms form the basis for both graph differencing and graph alignment. Graph differencing identifies and highlights differences between two graphs by pinpointing changes, additions, or deletions. Graph alignment involves finding correspondences between the vertices and edges of two graphs to identify similar structures. Maximum common induced subgraph algorithms have a long tradition in bioinformatics, cheminformatics, pharmacophore mapping, pattern recognition, computer vision, code analysis, compilers, and model checking. The problem is also particularly useful in software engineering and model-based systems engineering, where software code and engineering models (e.g., Simulink, UML diagrams) are represented as graph data structures. Graph differencing can be used to detect changes between different versions of software code and models for change auditing, debugging, version control and collaborative team development. See also Molecule mining Maximum common edge subgraph References NP-complete problems Cheminformatics Computational problems in graph theory
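The modular-product-plus-clique approach described above can be sketched in a few lines of Python with NetworkX. This is an illustrative brute-force version (maximum clique search is exponential in the worst case), not an implementation of McSplit or of any published tool; the example graphs are arbitrary.

import itertools
import networkx as nx

def modular_product(G, H):
    """Modular product: vertices are pairs (u, v); two pairs are adjacent when
    u != u', v != v', and the pairs are either both edges or both non-edges."""
    P = nx.Graph()
    P.add_nodes_from(itertools.product(G.nodes, H.nodes))
    for (u, v), (u2, v2) in itertools.combinations(P.nodes, 2):
        if u != u2 and v != v2 and G.has_edge(u, u2) == H.has_edge(v, v2):
            P.add_edge((u, v), (u2, v2))
    return P

def max_common_induced_subgraph(G, H):
    """A largest clique in the modular product gives a maximum common induced
    subgraph, returned as a list of (vertex in G, vertex in H) correspondences."""
    P = modular_product(G, H)
    return max(nx.find_cliques(P), key=len) if P.number_of_nodes() else []

G = nx.cycle_graph(5)           # C5
H = nx.path_graph(5)            # P5
mapping = max_common_induced_subgraph(G, H)
print(len(mapping), mapping)    # a path on 4 vertices is a largest common induced subgraph here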
Maximum common induced subgraph
[ "Chemistry", "Mathematics" ]
610
[ "Computational problems in graph theory", "Computational mathematics", "Graph theory", "Computational problems", "Computational chemistry", "Mathematical relations", "Cheminformatics", "nan", "Mathematical problems", "NP-complete problems" ]
4,289,311
https://en.wikipedia.org/wiki/Tm%20ligands
Tm is an abbreviation for anionic tridentate ligand based on three imidazole-2-thioketone groups bonded to a borohydride center. They are examples of scorpionate ligands. Various ligands in this family are known, differing in what substituents are on the imidazoles. The most common is TmMe, which has a methyl group on the nitrogen. It is easily prepared by the reaction of molten methimazole (1-methylimidazole-2-thione) with sodium borohydride, giving the sodium salt of the ligand. Salts of the TmMe anion are known also for lithium and potassium. Other alkyl- and aryl-group variations are likewise named TmR according to those groups. Ligand characteristics, comparison with Tp− The TmMe anion is a tridentate, tripodal ligand topologically similar to the more common Tp ligands, but the two classes of ligands differ in several ways. TmMe has three "soft" sulfur donor atoms, whereas Tp− has three nitrogen donor atoms. The thioamide sulfur is highly basic, as found for other thioureas. The TmR anion simulates the environment provided by three facial thiolate ligands but without the 3- charge of a facial trithiolate. The large 8-membered SCNBNCSM chelate rings in M(TmMe) complexes are more flexible than the 6-membered CNBNCM rings in M(Tp) complexes. This flexibility enables the formation of boron-metal bonds, after loss of the B-H bond. This degradation of the coordinated TmMe anion gives a dehydrogenated boranamide B(mt)3 where mt = methimazolate. References Inorganic chemistry Tripodal ligands
Tm ligands
[ "Chemistry" ]
389
[ "nan" ]
4,289,748
https://en.wikipedia.org/wiki/Susan%20Headley
Susan Headley (born 1959, also known as Susy Thunder or Susan Thunder) is an American former phreaker and early computer hacker during the late 1970s and early 1980s. A member of the so-called Cyberpunks, Headley specialized in social engineering, a type of hacking which uses pretexting and misrepresentation of oneself in contact with targeted organizations in order to elicit information vital to hacking those organizations. Biography Born in Altona, Illinois, in 1959, Headley claims to have dropped out of school in the eighth grade after a difficult childhood. She later moved to Los Angeles, California, where she worked as a teenage prostitute and was a rock 'n' roll groupie, claiming all four former members of the Beatles among her conquests. She met computer hacker Kevin Mitnick (also known as Condor) in 1980, and together with another hacker, Lewis de Payne (also known as Roscoe), formed a gang of phone phreaks. In The Hacker's Handbook, Headley is referred to as "one of the earliest of the present generation of hackers" and described as successfully hacking the US phone system as a 17-year-old in 1977. On October 25, 1983, Headley testified in front of the Governmental Affairs oversight committee as to the technical capabilities and possible motivations of modern-day hackers and phone phreaks. Public service Headley was elected to public office in California in 1994, as City Clerk of California City. Personal life Headley is married, and lives in the Midwest. She is a coin collector. References External links Esquire magazine article on Mitnick, including interview with Susan Thunder Cyberpunks: Outlaws and Hackers on the Computer Frontier Book by Katie Hafner Hendon Mob poker players' database entry for Susy Thunder Searching for Susy Thunder by Claire L. Evans Living people 1959 births American cybercriminals California City, California Groupies Hackers
Susan Headley
[ "Technology" ]
400
[ "Lists of people in STEM fields", "Hackers" ]
4,290,115
https://en.wikipedia.org/wiki/Avipes
Avipes (meaning "bird foot") is a genus of extinct archosaurs represented by the single species Avipes dillstedtianus, which lived during the middle Triassic period. The only known fossil specimen, a partial foot (metatarsals), was found in Bedheim, Thuringia, Germany, in deposits of Lettenkohlensandstein (a form of sandstone). Avipes was named in 1932 by Huene. Although originally classified as a coelurosaur or a ceratosaur, a new study of the fossil specimen found that it was too incomplete to assign to a group more specific than Archosauria, and so it was regarded as indeterminate by Rauhut and Hungerbuhler in 2000. References Information on Avipes Nomina dubia Middle Triassic reptiles of Europe Middle Triassic archosaurs Prehistoric reptile genera
Avipes
[ "Biology" ]
179
[ "Biological hypotheses", "Nomina dubia", "Controversial taxa" ]
4,290,268
https://en.wikipedia.org/wiki/Indira%20Gandhi%20Canal
The Indira Gandhi Canal (originally, Rajasthan Canal) is the longest canal in India. It starts at the Harike Barrage near Harike, a few kilometers downriver from the confluence of the Satluj and Beas rivers in Punjab state, and ends in irrigation facilities in the Thar Desert in the northwest of Rajasthan state. Previously known as the Rajasthan Canal, it was renamed the Indira Gandhi Canal on 2 November 1984 following the assassination of Prime Minister Indira Gandhi. The canal consists of the Rajasthan feeder canal with the first in Punjab and Haryana state and a further in Rajasthan. This is followed by the of the Rajasthan main canal, which is entirely within Rajasthan. The canal enters Haryana from Punjab near Lohgarh and runs through the western part of the Sirsa district before entering Rajasthan near Kharakhera village in the Tibbi tehsil of the Hanumangarh district. It traverses seven districts of Rajasthan: Barmer, Bikaner, Hanumangarh, Jaisalmer, Jodhpur, and Sriganganagar. The main canal is long, which is 1458 RD (reduced distance). From 1458 RD, a long branch starts, known as the Sagar Mal Gopa Branch or the SMGS. From the end point of SMGS, another 92-kilometer-long sub-branch starts, the last of the Baba Ramdev sub-branch. It ends near Gunjangarh village in Jaisalmer district. Design and construction The idea of bringing the waters from the Himalayan Rivers flowing through Punjab and into Pakistan was conceived by hydraulic engineer Kanwar Sain in the late 1940s. Sain estimated that of desert land in Bikaner and the northwest corner of Jaisalmer could be irrigated by the stored waters of Punjab rivers. In 1960, the Indus Water Treaty was signed between India and Pakistan, which gave India the right to use the water from three rivers: the Satluj, Beas and Ravi. The proposed Rajasthan Canal envisioned use of of water. The initial plan was to build the canal in two stages. Stage I consisted of a feeder canal from Harike barrage, Firozpur, Punjab to Masitawali (Hanumangarh) with the main canal of from Masitawali (Hanumangarh) to Pugal, (Bikaner) in Rajasthan. Stage I also included constructing a distributary canal system of about in length. Stage II involved constructing a long main canal from Pugal (Bikaner) to Mohangarh (Jaisalmer) along with a distributary canal network of . The main canal was planned to be wide at the top and wide at the bottom with a water depth of . It was scheduled to be completed by 1971. The canal faced severe financial constraints, neglect and corruption. In 1970 the plan was revised and it was decided that the entire canal would be lined with concrete tiles. Five more lift schemes were added and the flow command of Stage II was increased by . With increased requirements, the total length of main, feeder and distribution canals was about . Stage I was completed in 1983 around 20 years behind the completion schedule. Effect on the region After the construction of the Indira Gandhi Canal, irrigation facilities were available over an area of in Jaisalmer district and in Barmer district. Irrigation had already been provided in an area of in Jaisalmer district. Mustard, cotton, and wheat now grows in this semi-arid northwestern region, replacing the soil there previously. However, many dispute the success of this canal in arid regions and question whether it has achieved its goals. References Sources Anon. 1998. Statistical Abstract Rajasthan. Directorate of Economic and Statistics, Rajasthan, Jaipur. Balak Ram, 1999. 
Report on Wastelands in Hanumangarh district, Rajasthan. CAZRI, Jodhpur. Karimkoshteh, M. H. 1995. Greening the Desert (Agro-Economic impact of IG canal). Renaissance Publication, New Delhi. Kavadia, P.S. 1991. Problem of waterlogging in Indira Gandhi Nahar Project and outline of Action Plan to tackle it. Singh, S. and Kar, A. 1997. Desertification Control - In the arid ecosystem of India for sustainable development. Agro-Botanical Publishers, Bikaner. Burdak, L. R. 1982. Recent advances in Desert Afforestation, Dehradun. Canals in Punjab, India Interbasin transfer Canals in Rajasthan Irrigation canals 1983 establishments in Rajasthan Canals opened in 1983
Indira Gandhi Canal
[ "Environmental_science" ]
918
[ "Hydrology", "Interbasin transfer" ]
4,290,841
https://en.wikipedia.org/wiki/HHV%20Infected%20Cell%20Polypeptide%200
Human Herpes Virus (HHV) Infected Cell Polypeptide 0 (ICP0) is a protein, encoded by the DNA of herpes viruses. It is produced by herpes viruses during the earliest stage of infection, when the virus has recently entered the host cell; this stage is known as the immediate-early or α ("alpha") phase of viral gene expression. During these early stages of infection, ICP0 protein is synthesized and transported to the nucleus of the infected host cell. Here, ICP0 promotes transcription from viral genes, disrupts structures in the nucleus known as nuclear dots or promyelocytic leukemia (PML) nuclear bodies, and alters the expression of host and viral genes in combination with a neuron specific protein. At later stages of cellular infection, ICP0 relocates to the cell cytoplasm to be incorporated into new virion particles. History and background ICP0 was identified as an immediate-early polypeptide product of Herpes simplex virus-1 (HSV-1) infection in 1976. The gene, in HSV-1, from which ICP0 is produced is known as HSV-1 α0 ("alpha zero"), Immediate Early (IE) gene 1, or simply as the HSV-1 ICP0 gene. The HSV-1 ICP0 gene was characterized and sequenced in 1986. This sequence predicted a 775 amino acid sequence with a molecular weight of 78.5 KDa. At the time of gene isolation, ICP0 was known as IE110 as gel electrophoresis experiments performed prior to obtaining the gene sequence indicated the ICP0 protein weighed 110 kDa. Post-translational modifications, such as phosphorylation or sumoylation, were presumed to account for the actual protein size appearing 30 kDa larger than that of the predicted amino acid sequence. Functions Dismantle microtubule networks ICP0 co-localizes with α-tubulin, and dismantles host cell microtubule networks once it translocates to the cytoplasm. Transcription In HSV-1 infected cells, ICP0 activates the transcription of many viral and cellular genes. It acts synergistically with HSV-1 immediate early (IE) protein, ICP4, and is essential for the reactivation of latent herpes virus and viral replication. Degradation of antiviral pathways ICP0 is responsible for overcoming a variety of cellular antiviral responses. After translocating to the nucleus early in infection, ICP0 promotes the degradation of many cellular antiviral genes, including those for nuclear body-associated proteins promyelocytic leukemia protein (PML) and Sp100, causing disruption of PML nuclear bodies and reduced cellular antiviral capacity. ICP0 also inhibits the activity of IFN regulatory factors (IRF3) and IRF7, which are key transcription factors that induce production of antiviral cytokines called interferons. Barriers to viral replication induced by interferons can also be overcome by the action of ICP0. This function of ICP0 also prevents the production of RNase L, an enzyme that degrades single-stranded viral and cellular RNAs and induces host cell apoptosis in virus infected cells. Interaction with host cell SUMO-1 protein and disruption PML Nuclear Bodies Small ubiquitin-related modifier 1 (SUMO-1) is a protein produced by human cells that is involved in the modification of many proteins, including human PML protein. HSV-1 ICP0 and several of its homologs in other herpes viruses bind to SUMO-1 in a manner similar to endogenous proteins, causing depletion of SUMO-1, and disruption of nuclear bodies. 
Interaction with neuron-differentiating protein NRSF and protein cofactor coREST ICP0 interacts with a human protein, known as Neuronal Restrictive Silencer Factor (NRSF) or RE1-silencing transcription factor (REST) that regulates differences in gene expression between cells of neuronal or non-neuronal origin; NRSF is found in non-neuronal cells but not in fully differentiated neurons. This interaction is attributed to the partial similarity of ICP0 to the human protein CoREST, also called REST corepressor 1 (RCOR1), which combines with NRSF to repress expression of neuronal genes in non-neuronal cells. Although the full NRSF protein is not typically found in neurons, truncated forms of NRSF are produced that selectively control the expression of certain neurotransmitter channels in specialized neurons. Combination of ICP0 with these NRSF-like neuronal factors may silence herpes genes in neurons, blocking the production of other immediate-early genes such as ICP4 and reducing production of ICP22. The repressed production of immediate-early HSV genes may contribute to the establishment of latency during infection with herpes viruses. CoREST and NRSF combine with another cellular protein, histone deacetylase-1 (HDAC) to form a HDAC/CoREST/NRSF complex. This complex silences production of the HSV-1 protein ICP4 by interfering with chromatin remodeling of the viral DNA that is necessary to allow viral gene transcription; it deacetylates histones associated with viral DNA in viral chromatin. Furthermore, an NRSF-binding region is located between the viral genes expressing proteins ICP4 and ICP22. ICP0 interacts with coREST, dissociating HDAC1 from CoREST/NRSF in the HDAC/CoREST/NRSF complex and preventing the silencing of the HSV genome in non-neuronal cells. Suppression of ICP0 activity Interaction with latency-associated RNA transcript (LAT) During latent infection a viral RNA transcript inhibits expression of the herpes virus ICP0 gene via an antisense RNA mechanism. The RNA transcript is produced by the virus and accumulates in host cells during latent infection; it is known as Latency Associated Transcript (LAT). A chromatin insulator region between promoters of the LAT and ICP0 genes of the HSV-1 genome may allow for the independent regulation of their expression. Silencing of ICP0 gene activity by ICP4 Although it is tempting to hypothesize that LAT is the repressor of the ICP0 gene, evidence supporting this hypothesis is lacking. Recent data suggest that ICP4 strongly suppresses the ICP0 gene, and ICP0 antagonizes ICP4. The balance between ICP0 and ICP4 dictates whether the ICP0 gene can be efficiently transcribed. Homologs across Herpes virus species The ICP0 gene and protein from HSV-1 have orthologs in related viruses from the herpes virus family. HSV-2 ICP0 is predicted to produce a polypeptide of 825 amino acids with a predicted molecular weight of 81986 Da, and 61.5% amino acid sequence similarity to HSV-1 ICP0. Simian varicella virus (SVV) is a varicellovirus that, like HSV-1 and HSV-2, belongs to the alphaherpesvirinae subfamily of herpes viruses. SVV expresses an HSV-1 LAT ortholog known as SVV LAT, and an HSV-1 ICP0 ortholog known as SVV ORF-61 (Open Reading Frame 61). Varicella Zoster Virus (VZV) is another varicellovirus in which a homolog of HSV-1 ICP0 gene has been identified; VSV ORF-61 is a partial homolog and a functional replacement for HSV-1 ICP0 gene. See also ICP-47 References Herpesviridae Proteins
HHV Infected Cell Polypeptide 0
[ "Chemistry" ]
1,676
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
4,290,894
https://en.wikipedia.org/wiki/Lie%20bialgebra
In mathematics, a Lie bialgebra is the Lie-theoretic case of a bialgebra: it is a set with a Lie algebra and a Lie coalgebra structure which are compatible. It is a bialgebra where the comultiplication is skew-symmetric and satisfies a dual Jacobi identity, so that the dual vector space is a Lie algebra, whereas the comultiplication is a 1-cocycle, so that the multiplication and comultiplication are compatible. The cocycle condition implies that, in practice, one studies only classes of bialgebras that are cohomologous to a Lie bialgebra on a coboundary. They are also called Poisson-Hopf algebras, and are the Lie algebra of a Poisson–Lie group. Lie bialgebras occur naturally in the study of the Yang–Baxter equations. Definition A vector space $\mathfrak{g}$ is a Lie bialgebra if it is a Lie algebra, and there is the structure of Lie algebra also on the dual vector space $\mathfrak{g}^*$ which is compatible. More precisely, the Lie algebra structure on $\mathfrak{g}$ is given by a Lie bracket $[\cdot,\cdot] : \mathfrak{g} \otimes \mathfrak{g} \to \mathfrak{g}$ and the Lie algebra structure on $\mathfrak{g}^*$ is given by a Lie bracket $\delta^* : \mathfrak{g}^* \otimes \mathfrak{g}^* \to \mathfrak{g}^*$. Then the map $\delta : \mathfrak{g} \to \mathfrak{g} \otimes \mathfrak{g}$ dual to $\delta^*$ is called the cocommutator, and the compatibility condition is the following cocycle relation: $\delta([X,Y]) = \operatorname{ad}_X \delta(Y) - \operatorname{ad}_Y \delta(X)$, where $\operatorname{ad}_X(Y \otimes Z) = [X,Y] \otimes Z + Y \otimes [X,Z]$ is the adjoint action of $X$ on $\mathfrak{g} \otimes \mathfrak{g}$. Note that this definition is symmetric and $\mathfrak{g}^*$ is also a Lie bialgebra, the dual Lie bialgebra. Example Let $\mathfrak{g}$ be any semisimple Lie algebra. To specify a Lie bialgebra structure we thus need to specify a compatible Lie algebra structure on the dual vector space. Choose a Cartan subalgebra $\mathfrak{h}$ and a choice of positive roots. Let $\mathfrak{b}_\pm$ be the corresponding opposite Borel subalgebras, so that $\mathfrak{h} = \mathfrak{b}_- \cap \mathfrak{b}_+$ and there is a natural projection $\pi : \mathfrak{b}_\pm \to \mathfrak{h}$. Then define a Lie algebra $\mathfrak{g}' := \{ (X_-, X_+) \in \mathfrak{b}_- \times \mathfrak{b}_+ : \pi(X_-) + \pi(X_+) = 0 \}$, which is a subalgebra of the product $\mathfrak{b}_- \times \mathfrak{b}_+$, and has the same dimension as $\mathfrak{g}$. Now identify $\mathfrak{g}'$ with the dual of $\mathfrak{g}$ via a pairing defined in terms of the Killing form. This defines a Lie bialgebra structure on $\mathfrak{g}$, and is the "standard" example: it underlies the Drinfeld-Jimbo quantum group. Note that $\mathfrak{g}'$ is solvable, whereas $\mathfrak{g}$ is semisimple. Relation to Poisson–Lie groups The Lie algebra $\mathfrak{g}$ of a Poisson–Lie group G has a natural structure of Lie bialgebra. In brief the Lie group structure gives the Lie bracket on $\mathfrak{g}$ as usual, and the linearisation of the Poisson structure on G gives the Lie bracket on $\mathfrak{g}^*$ (recalling that a linear Poisson structure on a vector space is the same thing as a Lie bracket on the dual vector space). In more detail, let G be a Poisson–Lie group, with $f_1, f_2$ being two smooth functions on the group manifold. Let $\xi_i = (df_i)_e$ be the differential at the identity element. Clearly, $\xi_i \in \mathfrak{g}^*$. The Poisson structure on the group then induces a bracket on $\mathfrak{g}^*$, as $[\xi_1, \xi_2] = (d\{f_1, f_2\})_e$, where $\{\cdot,\cdot\}$ is the Poisson bracket. Given $\eta$ the Poisson bivector on the manifold, define $\eta^R$ to be the right-translate of the bivector to the identity element in G. Then $\eta^R$ is a map from G to $\mathfrak{g} \otimes \mathfrak{g}$ which vanishes at the identity. The cocommutator is then the tangent map $\delta = T_e\,\eta^R$, so that the Lie bracket on $\mathfrak{g}^*$ is the dual of the cocommutator. See also Lie coalgebra Manin triple References H.-D. Doebner, J.-D. Hennig, eds, Quantum groups, Proceedings of the 8th International Workshop on Mathematical Physics, Arnold Sommerfeld Institute, Clausthal, FRG, 1989, Springer-Verlag Berlin. Vyjayanthi Chari and Andrew Pressley, A Guide to Quantum Groups, (1994), Cambridge University Press, Cambridge. Lie algebras Coalgebras Symplectic geometry
Lie bialgebra
[ "Mathematics" ]
766
[ "Mathematical structures", "Algebraic structures", "Coalgebras" ]
4,290,903
https://en.wikipedia.org/wiki/Red%20rain%20in%20Kerala
The Kerala red rain phenomenon was a blood rain event reported in the Wayanad district of the southern Indian state of Kerala on Monday, 15 July 1957, when the colour of the rain subsequently turned yellow, and again from 25 July to 23 September 2001, when heavy downpours of red-coloured rain fell sporadically in Kerala, staining clothes pink. Yellow, green and black rain was also reported. Coloured rain was also reported in Kerala in 1896 and several times since, most recently in June 2012, and from 15 November 2012 to 27 December 2012 in eastern and north-central provinces of Sri Lanka. Following a light-microscopy examination in 2001, it was initially thought that the rains were coloured by fallout from a hypothetical meteor burst, but a study commissioned by the Government of India concluded that the rains had been coloured by airborne spores from a locally prolific terrestrial green alga of the genus Trentepohlia. Occurrence The coloured rain of Kerala began falling on 25 July 2001, in the districts of Kottayam and Idukki in the southern part of the state. Yellow, green, and black rain was also reported. Many more occurrences of the red rain were reported over the following ten days, and then with diminishing frequency until late September. According to locals, the first coloured rain was preceded by a loud thunderclap and flash of light, and followed by groves of trees shedding shrivelled grey "burnt" leaves. Shrivelled leaves and the disappearance and sudden formation of wells were also reported around the same time in the area. It typically fell over small areas, no more than a few square kilometres in size, and was sometimes so localised that normal rain could be falling just a few metres away from the red rain. Red rainfalls typically lasted less than 20 minutes. Each millilitre of rain water contained about 9 million red particles. Extrapolating these figures to the total amount of red rain estimated to have fallen, it was estimated that of red particles had fallen on Kerala. Description of the particles The brownish-red solid separated from the red rain consisted of about 90% round red particles, and the balance consisted of debris. The particles in suspension in the rain water were responsible for the colour of the rain, which at times was strongly red. A small percentage of particles were white or had light yellow, bluish grey and green tints. The particles were typically 4 to 10 μm across and spherical or oval. Electron microscope images showed the particles as having a depressed centre. At still higher magnification some particles showed internal structures. Chemical composition Some water samples were taken to the Centre for Earth Science Studies (CESS) in India, where the suspended particles were separated by filtration. The pH of the water was found to be around 7 (neutral). The electrical conductivity of the rainwater showed the absence of any dissolved salts. Sediment (red particles plus debris) was collected and analysed by the CESS using a combination of inductively coupled plasma mass spectrometry, atomic absorption spectrometry and wet chemical methods. The analysis identified the major elements present and also showed significant amounts of heavy metals, including nickel (43 ppm), manganese (59 ppm), titanium (321 ppm), chromium (67 ppm) and copper (55 ppm). 
Physicists Godfrey Louis and Santhosh Kumar of the Mahatma Gandhi University, Kerala, used energy dispersive X-ray spectroscopy analysis of the red solid and showed that the particles were composed of mostly carbon and oxygen, with trace amounts of silicon and iron. A CHN analyser showed content of 43.03% carbon, 4.43% hydrogen, and 1.84% nitrogen. Tom Brenna in the Division of Nutritional Sciences at Cornell University conducted carbon and nitrogen isotope analyses using a scanning electron microscope with X-ray micro-analysis, an elemental analyser, and an isotope ratio (IR) mass spectrometer. The red particles collapsed when dried, which suggested that they were filled with fluid. The amino acids in the particles were analysed and seven were identified (in order of concentration): phenylalanine, glutamic acid/glutamine, serine, aspartic acid, threonine, and arginine. The results were consistent with a marine origin or a terrestrial plant that uses a C4 photosynthetic pathway. Government report Initially, the Centre for Earth Science Studies (CESS) stated that the likely cause of the red rain was an exploding meteor, which had dispersed about 1,000 kg (one ton) of material. A few days later, following a basic light microscopy evaluation, the CESS retracted this as they noticed the particles resembled spores, and because debris from a meteor would not have continued to fall from the stratosphere onto the same area while unaffected by wind. A sample was, therefore, handed over to the Tropical Botanical Garden and Research Institute (TBGRI) for microbiological studies, where the spores were allowed to grow in a medium suitable for growth of algae and fungi. The inoculated petri dishes and conical flasks were incubated for three to seven days and the cultures were observed under a microscope. In November 2001, commissioned by the Government of India's Department of Science & Technology, the Centre for Earth Science Studies (CESS) and the Tropical Botanical Garden and Research Institute (TBGRI) issued a joint report, which concluded: The site was again visited on 16 August 2001 and it was found that almost all the trees, rocks and even lamp posts in the region were covered with Trentepohlia estimated to be in sufficient amounts to generate the quantity of spores seen in the rainwater. Although red or orange, Trentepohlia is a chlorophyte green alga which can grow abundantly on tree bark or damp soil and rocks, but is also the photosynthetic symbiont or photobiont of many lichens, including some of those abundant on the trees in Changanassery area. The strong orange colour of the algae, which masks the green of the chlorophyll, is caused by the presence of large quantities of orange carotenoid pigments. A lichen is not a single organism, but the result of a partnership (symbiosis) between a fungus and an alga or cyanobacterium. The report also stated that there was no meteoric, volcanic or desert dust origin present in the rainwater and that its colour was not due to any dissolved gases or pollutants. The report concluded that heavy rains in Kerala – in the weeks preceding the red rains – could have caused the widespread growth of lichens, which had given rise to a large quantity of spores into the atmosphere. However, for these lichen to release their spores simultaneously, it is necessary for them to enter their reproductive phase at about the same time. The CESS report noted that while this may be a possibility, it is quite improbable. 
Also, they could find no satisfactory explanation for the apparently extraordinary dispersal, nor for the apparent uptake of the spores into clouds. CESS scientists noted that "While the cause of the colour in the rainfall has been identified, finding the answers to these questions is a challenge." Attempting to explain the unusual spore proliferation and dispersal, researcher Ian Goddard proposed several local atmospheric models. Parts of the CESS/TBGRI report were supported by Milton Wainwright at the University of Sheffield, who, together with Chandra Wickramasinghe, has studied stratospheric spores. In March 2006 Wainwright said the particles were similar in appearance to spores of a rust fungus, later saying that he had confirmed the presence of DNA, and reported their similarity to algal spores, and found no evidence to suggest that the rain contained dust, sand, fat globules, or blood. In November 2012, Rajkumar Gangappa and Stuart Hogg from the University of Glamorgan, UK, confirmed that the red rain cells from Kerala contain DNA. In February 2015, a team of scientists from India and Austria, also supported the identification of the algal spores as Trentepohlia annulata, however, they speculate that the spores from the 2011 incident were carried by winds from Europe to the Indian subcontinent. Alternative hypotheses History records many instances of unusual objects falling with the rain – in 2000, in an example of raining animals, a small waterspout in the North Sea sucked up a school of fish a mile off shore, depositing them shortly afterwards on Great Yarmouth in the United Kingdom. Coloured rain is by no means rare, and can often be explained by the airborne transport of rain dust from deserts or other dry regions which have been washed down by rain. "Red Rains" have been frequently described in southern Europe, with increasing reports in recent years. One such case occurred in England in 1903, when dust was carried from the Sahara and fell with rain in February of that year. At first, the red rain in Kerala was attributed to the same effect, with dust from the deserts of Arabia initially the suspect. LIDAR observations had detected a cloud of dust in the atmosphere near Kerala in the days preceding the outbreak of the red rain. However, laboratory tests from all involved teams ruled out the particles were desert sand. K.K. Sasidharan Pillai, a senior scientific assistant in the Indian Meteorological Department, proposed dust and acidic material from an eruption of Mayon Volcano in the Philippines as an explanation for the coloured rain and the "burnt" leaves. The volcano was erupting in June and July 2001 and Pillai calculated that the Eastern or Equatorial jet stream could have transported volcanic material to Kerala in 25–36 hours. The Equatorial jet stream is unusual in that it sometimes flows from east to west at about 10° N, approximately the same latitude as Kerala (8° N) and Mayon Volcano (13° N). This hypothesis was also ruled out as the particles were neither acidic nor of volcanic origin, but were spores. A study has been published showing a correlation between historic reports of coloured rains and of meteors; the author of the paper, Patrick McCafferty, stated that sixty of these colored rain events, or 36%, were linked to meteoritic or cometary activity, though not always strongly. 
Sometimes the fall of red rain seems to have occurred after an air-burst, as from a meteor exploding in air; other times the odd rainfall is merely recorded in the same year as the appearance of a comet. Panspermia hypothesis In 2003 Godfrey Louis and Santhosh Kumar, physicists at the Mahatma Gandhi University in Kottayam, Kerala, posted an article entitled "Cometary panspermia explains the red rain of Kerala" in the non-peer reviewed arXiv web site. While the CESS report said there was no apparent relationship between the loud sound (possibly a sonic boom) and flash of light which preceded the red rain, to Louis and Kumar it was a key piece of evidence. They proposed that a meteor (from a comet containing the red particles) caused the sound and flash and when it disintegrated over Kerala it released the red particles which slowly fell to the ground. However, they omitted an explanation on how debris from a meteor continued to fall in the same area over a period of two months while unaffected by winds. Their work indicated that the particles were of biological origin (consistent with the CESS report), however, they invoked the panspermia hypothesis to explain the presence of cells in a supposed fall of meteoric material. Additionally, using ethidium bromide they were unable to detect DNA or RNA in the particles. Two months later they posted another paper on the same web site entitled "New biology of red rain extremophiles prove cometary panspermia" in which they reported that The microorganism isolated from the red rain of Kerala shows very extraordinary characteristics, like the ability to grow optimally at and the capacity to metabolise a wide range of organic and inorganic materials. These claims and data have yet to be verified and reported in any peer reviewed publication. In 2006 Louis and Kumar published a paper in Astrophysics and Space Science entitled "The red rain phenomenon of Kerala and its possible extraterrestrial origin" which reiterated their arguments that the red rain was biological matter from an extraterrestrial source but made no mention of their previous claims to having induced the cells to grow. The team also observed the cells using phase contrast fluorescence microscopy, and they concluded that: "The fluorescence behaviour of the red cells is shown to be in remarkable correspondence with the extended red emission observed in the Red Rectangle Nebula and other galactic and extragalactic dust clouds, suggesting, though not proving an extraterrestrial origin." One of their conclusions was that if the red rain particles are biological cells and are of cometary origin, then this phenomenon can be a case of cometary panspermia. In August 2008 Louis and Kumar again presented their case in an astrobiology conference. The abstract for their paper states that The red cells found in the red rain in Kerala, India are now considered as a possible case of extraterrestrial life form. These cells can undergo rapid replication even at an extreme high temperature of . They can also be cultured in diverse unconventional chemical substrates. The molecular composition of these cells is yet to be identified. In September 2010 a similar paper was presented at a conference in California, US. Cosmic ancestry Researcher Chandra Wickramasinghe used Louis and Kumar's "extraterrestrial origin" claim to further support his panspermia hypothesis called cosmic ancestry. 
This hypothesis postulates that life is neither the product of supernatural creation, nor is it spontaneously generated through abiogenesis, but that it has always existed in the universe. Cosmic ancestry speculates that higher life forms, including intelligent life, descend ultimately from pre-existing life which was at least as advanced as the descendants. Criticism Louis and Kumar made their first publication of their finding on a web site in 2003, and have presented papers at conferences and in astrophysics magazines a number of times since. The controversial conclusion of Louis et al. is the only hypothesis suggesting that these organisms are of extraterrestrial origin. Such reports have been popular in the media, with major news agencies like CNN repeating the panspermia theory without critique. The hypothesis' authors – G. Louis and Kumar – did not explain how debris from a meteor could have continued to fall on the same area over a period of two months, despite the changes in climatic conditions and wind pattern spanning over two months. Samples of the red particles were also sent for analysis to his collaborators Milton Wainwright at the University of Sheffield and Chandra Wickramasinghe at Cardiff University. Louis then incorrectly reported on 29 August 2010 in the non-peer reviewed online physics archive "arxiv.org" that they were able to have these cells "reproduce" when incubated at high pressure saturated steam at 121 °C (autoclaved) for up to two hours. Their conclusion is that these cells reproduced, without DNA, at temperatures higher than any known life form on earth is able to. They claimed that the cells, however, were unable to reproduce at temperatures similar to known organisms. Regarding the "absence" of DNA, Louis admits he has no training in biology, and has not reported the use of any standard microbiology growth medium to culture and induce germination and growth of the spores, basing his claim of "biological growth" on light absorption measurements following aggregation by supercritical fluids, an inert physical observation. Both his collaborators, Wickramasinghe and Milton Wainwright independently extracted and confirmed the presence of DNA from the spores. The absence of DNA was key to Louis and Kumar's hypothesis that the cells were of extraterrestrial origins. Louis' only reported attempt to stain the spores' DNA was by the use of malachite green, which is generally used to stain bacterial endospores, not algal spores, whose primary function of their cell wall and their impermeability is to ensure its own survival through periods of environmental stress. They are therefore resistant to ultraviolet and gamma radiation, desiccation, lysozyme, temperature, starvation and chemical disinfectants. Visualizing algal spore DNA under a light microscope can be difficult due to the impermeability of the highly resistant spore wall to dyes and stains used in normal staining procedures. The spores' DNA is tightly packed, encapsulated and desiccated, therefore, the spores must first be cultured in suitable growth medium and temperature to first induce germination, then cell growth followed by reproduction before staining the DNA. Other researchers have noted recurring instances of red rainfalls in 1818, 1846, 1872, 1880, 1896, and 1950 and several times since then. Most recently, coloured rainfall occurred over Kerala during the summers of 2001, 2006, 2007, 2008, and 2012; since 2001, the botanists have found the same Trentepohlia spores every time. 
This supports the notion that the red rain is a seasonal local environmental feature caused by algal spores. In popular culture The science fiction film Red Rain was loosely based on the red rain in Kerala story. It was directed by Rahul Sadasivan and released in India on 6 December 2013. See also References External links Sampath, S., Abraham, T. K., Sasi Kumar, V., & Mohanan, C.N. (2001). Colored Rain: A Report on the Phenomenon. CESS-PR-114-2001, Center for Earth Science Studies and Tropical Botanic Garden and Research Institute. "When aliens rained over India" by Hazel Muir in New Scientist "Searching for 'our alien origins'" by Andrew Thompson in BBC News "Fluorescence Mystery in Red Rain Cells of Kerala, India " Linda Moulton Howe Earthfiles "Home page of Dr A Santhosh Kumar" 2001 in India 2001 meteorology Anomalous weather Environment of Kerala Panspermia Weather events in India Rain Changanassery Meteorological hypotheses Trentepohliaceae
Red rain in Kerala
[ "Physics", "Biology" ]
3,712
[ "Physical phenomena", "Origin of life", "Panspermia", "Weather", "Anomalous weather", "Biological hypotheses" ]
4,291,105
https://en.wikipedia.org/wiki/Gopher%20wood
Gopher wood or gopherwood is a term used once in the Bible, to describe the material used to construct Noah's Ark. states that Noah was instructed to build the Ark of (), commonly transliterated as wood, a word not otherwise used in the Bible or the Hebrew language in general (a ). Although some English Bibles attempt a translation, older English translations such as the King James Version (17th century) leave it untranslated. The word is unrelated to the name of the North American animal known as the gopher. Identity The Greek Septuagint (3rd–1st centuries BC) translates the phrase as (), , translating as . Similarly, the Latin Vulgate (5th century AD) rendered it as (, in the spelling of the Clementine Vulgate), . The Jewish Encyclopedia states that it was most likely a translation of the Akkadian term , , or the Assyrian , . The Aramaic Targum Onkelos, considered by many Jews to be an authoritative translation of the Hebrew scripture, renders this word as , . The Syriac Peshitta translates this word as , (boxwood). Many modern English translations favor cypress (otherwise referred to in Biblical Hebrew as ). This was espoused (among others) by Adam Clarke, a Methodist theologian famous for his commentary on the Bible: Clarke cited a resemblance between the Greek word for cypress, , and the Hebrew word . Likewise, the (20th century) has it as . Others, noting the visual similarity between the Hebrew letters () and (), suggest that the word may actually be , the Hebrew word meaning : thus wood would be . Later suggestions for a dynamic equivalent of the word have included (to strengthen the Ark), or a now-lost type of tree, but there is no consensus. References External links Gopherwood and Construction of the Ark The Free Dictionary - "Gopherwood" (giving a definition of Cladrastis kentukea) Noah's Ark Wood Plants in the Bible Plant common names Cedrus Cupressus Biblical studies
Gopher wood
[ "Biology" ]
424
[ "Plants", "Plant common names", "Common names of organisms" ]
4,291,327
https://en.wikipedia.org/wiki/Nintendo%20European%20Research%20%26%20Development
Nintendo European Research & Development (NERD) is a French subsidiary of Nintendo, located in Paris, which develops software technologies and middleware for Nintendo platforms. This includes retro console emulators, patented video codecs, and digital rights management technology. The organization originated as Mobiclip and Actimagine, with notable customers including Nintendo, Sony Pictures Digital, and Fisher-Price. Nintendo licensed Mobiclip compression technology for the Game Boy Advance and Nintendo DS video game consoles, used by popular games such as Square Enix's Final Fantasy III and Konami's Contra 4. Fisher-Price used the technology for its Pixter Multi-Media educational toy. Sony Pictures Digital and The Carphone Warehouse used Mobiclip software to deliver TV-like full-length movies on microSD memory cards for smartphones. Nintendo purchased the company to create NERD. History Actimagine was established in March 2003 by a team of engineers (Eric Bécourt, Alexandre Delattre, Laurent Hiriart, Jérôme Larrieu, Sylvain Quendez) and a businessman (André Pagnac). Actimagine initially focused on mobile gaming consoles. The video compression technology offered by Mobiclip was an optimized response to the battery life and video quality requirements of Nintendo video gaming platforms: Game Boy Advance, Nintendo DS, Wii, and Nintendo 3DS. The Mobiclip codec provides high video quality with low battery consumption and has been selected by major studios, such as Sony Pictures Digital, Paramount, Fox and Gaumont Columbia TriStar Films, and by leading handset manufacturers, such as Nokia and Sony Ericsson, to deliver video on memory cards for mobile phones. In April 2006, Actimagine raised €3 million in equity financing from US venture capital firm GRP Partners. This first round of institutional fund raising enabled Actimagine to accelerate its business development in the US and Japan. The same year, Adobe acquired Actimagine's Flash rendering engine optimized for mobile devices. In 2008, Mobiclip launched the first application delivering live TV on the iPhone, a year before Apple. In October 2011, Mobiclip was bought by Nintendo and is now a subsidiary of the latter. It has since been known as "Nintendo European Research & Development" or "NERD". In 2017, the United States branch was merged with Nintendo Technology Development. Mobiclip video codecs Mobiclip was developed with a completely different algorithm from the one used for other video codecs on the market, based on minimal use of processor resources, allowing battery life to be increased considerably and the cost of the hardware to be reduced. Nintendo licensing Nintendo selected Mobiclip as its main provider of video codec technologies on the Game Boy Advance, Nintendo DS, Nintendo Wii and Nintendo 3DS. Major software titles used it for in-game cinematics, including: GBA Video series on the Game Boy Advance Dragon Quest IX: Sentinels of the Starry Skies on Nintendo DS Professor Layton series on Nintendo DS and Nintendo 3DS Fire Emblem Awakening on Nintendo 3DS. Wii no Ma and Nintendo Channel on Wii. eCrew Development Program, the extremely rare Japanese McDonald's training game for the Nintendo DS. The Legendary Starfy on Nintendo DS. Kingdom Hearts 358/2 Days on Nintendo DS. 
List of technologies developed by NERD Software Emulation Kachikachi: NES emulation for the NES Classic Edition Canoe: Super NES emulation for the Super NES Classic Edition L-CLASSICS: NES & Super NES emulation for Nintendo Switch Online Hiyoko: Game Boy & Game Boy Color emulation for Nintendo Switch Online Hovercraft: Nintendo 64 emulation for Nintendo Switch Online and Super Mario 3D All-Stars (co-developed with iQue) m2engage: Sega Genesis emulation for Nintendo Switch Online (co-developed with M2) Sloop: Game Boy Advance emulation for Nintendo Switch Online (co-developed with Panasonic Vietnam) Hagi: GameCube & Wii emulation for Super Mario 3D All-Stars and other Nintendo Switch re-releases, e.g. Pikmin 1 & 2 Hachihachi: Nintendo DS emulation for the Wii U Virtual Console Other technologies Mobiclip video codecs for smartphone / Game Boy Advance / Nintendo DS / Nintendo 3DS / Wii Media player for Wii U Internet Browser Super-stable 3D display on New Nintendo 3DS Nintendo Labo VR Kit (co-developed with Nintendo EPD) Deep learning middleware for Dr Kawashima's Brain Training for Nintendo Switch Heart rate detection system in Joy-Con, used in Ring Fit Adventure Providing expertise in areas such as steering control, low-latency video capture and streaming and location tracking for Mario Kart Live: Home Circuit Filtering Expertise and Gesture Tracking for Nintendo Switch Sports Rendering System, WYSIWYG, Texture Compression and Animations for The Legend of Zelda: Tears of the Kingdom'' References Mobile content Nintendo divisions and subsidiaries Video game companies established in 2003 Companies based in Paris Video game development companies French subsidiaries of foreign companies
Nintendo European Research & Development
[ "Technology" ]
1,041
[ "Mobile content" ]
4,292,072
https://en.wikipedia.org/wiki/Charcoal%20lighter%20fluid
Charcoal lighter fluid is a flammable fluid used to accelerate the ignition of charcoal in a barbecue grill. It can either be petroleum based (e.g., mineral spirits) or alcohol based (usually methanol or ethanol). It can be used both with lump charcoal and briquettes. Lighter-fluid infused briquettes, that eliminate the need for separate application of lighter fluid, are available. The use of lighter fluid is somewhat controversial as the substance is combustible, harmful or fatal if swallowed, and may impart an unpleasant flavor to food cooked upon fires lit with it. The sale of petroleum-based charcoal lighter fluid is regulated in some jurisdictions due to its potential to cause photochemical smog through evaporation of its volatile organic compounds. The South Coast Air Quality Management District requires that all charcoal lighter fluids sold in its jurisdiction (essentially Southern California) meet the air quality standards set forth in District Rule 1174. Common substitutes to aid in the starting of charcoal fires are chimney and electric fire starters. In former Soviet countries, the alcohol-based lighter fluid is sometimes consumed as a surrogate alcohol among very poor alcoholics because of its cheap price compared to vodka, just as it is with Troynoy Eau de Cologne. Lighter fluid is poisonous and should never be consumed. Charcoal lighter fluid, known as LF-1, was used in the Pratt & Whitney J57 engine, which powered the Lockheed U-2 aircraft. With an additive to improve thermal oxidative stability, it was covered by a military specification and known as Thermally Stable Jet Fuel, JPTS. References Fuels Barbecue
Charcoal lighter fluid
[ "Chemistry" ]
329
[ "Fuels", "Chemical energy sources" ]
4,292,269
https://en.wikipedia.org/wiki/Alphabet%20%28formal%20languages%29
In formal language theory, an alphabet, sometimes called a vocabulary, is a non-empty set of indivisible symbols/characters/glyphs, typically thought of as representing letters, characters, digits, phonemes, or even words. Alphabets in this technical sense of a set are used in a diverse range of fields including logic, mathematics, computer science, and linguistics. An alphabet may have any cardinality ("size") and, depending on its purpose, may be finite (e.g., the alphabet of letters "a" through "z"), countable (e.g., {v1, v2, …}), or even uncountable (e.g., {vx : x ∈ ℝ}). Strings, also known as "words" or "sentences", over an alphabet are defined as sequences of symbols from the alphabet set. For example, the alphabet of lowercase letters "a" through "z" can be used to form English words like "iceberg" while the alphabet of both upper and lower case letters can also be used to form proper names like "Wikipedia". A common alphabet is {0,1}, the binary alphabet, and "00101111" is an example of a binary string. Infinite sequences of symbols may be considered as well (see Omega language). It is often necessary for practical purposes to restrict the symbols in an alphabet so that they are unambiguous when interpreted. For instance, if the two-member alphabet is {00,0}, a string written on paper as "000" is ambiguous because it is unclear if it is a sequence of three "0" symbols, a "00" followed by a "0", or a "0" followed by a "00". Notation If L is a formal language, i.e. a (possibly infinite) set of finite-length strings, the alphabet of L is the set of all symbols that may occur in any string in L. For example, if L is the set of all variable identifiers in the programming language C, L's alphabet is the set { a, b, c, ..., x, y, z, A, B, C, ..., X, Y, Z, 0, 1, 2, ..., 7, 8, 9, _ }. Given an alphabet Σ, the set of all strings of length n over the alphabet Σ is indicated by Σ^n. The set of all finite strings (regardless of their length) is indicated by the Kleene star operator as Σ*, and is also called the Kleene closure of Σ. The notation Σ^ω indicates the set of all infinite sequences over the alphabet Σ, and Σ^∞ indicates the set of all finite or infinite sequences. For example, using the binary alphabet {0,1}, the strings ε, 0, 1, 00, 01, 10, 11, 000, etc. are all in the Kleene closure of the alphabet (where ε represents the empty string). Applications Alphabets are important in the use of formal languages, automata and semiautomata. In most cases, for defining instances of automata, such as deterministic finite automata (DFAs), it is required to specify an alphabet from which the input strings for the automaton are built. In these applications, an alphabet is usually required to be a finite set, but is not otherwise restricted. When using automata, regular expressions, or formal grammars as part of string-processing algorithms, the alphabet may be assumed to be the character set of the text to be processed by these algorithms, or a subset of allowable characters from the character set. See also Combinatorics on words Terminal and nonterminal symbols References Literature John E. Hopcroft and Jeffrey D. Ullman, Introduction to Automata Theory, Languages, and Computation, Addison-Wesley Publishing, Reading, Massachusetts, 1979. Formal languages Combinatorics on words
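The Σ^n and Σ* constructions described above can be made concrete with a short sketch. The following Python snippet is purely illustrative (the function names are our own, not a standard API): it enumerates all strings of a given length over an alphabet and a finite prefix of the Kleene closure.

```python
from itertools import product

def strings_of_length(alphabet, n):
    """All strings of length n over the given alphabet (Sigma^n)."""
    return ["".join(p) for p in product(alphabet, repeat=n)]

def kleene_closure_up_to(alphabet, max_len):
    """Finite prefix of the Kleene closure Sigma*: every string of
    length 0..max_len, starting with the empty string (epsilon)."""
    result = []
    for n in range(max_len + 1):
        result.extend(strings_of_length(alphabet, n))
    return result

binary = ["0", "1"]
print(strings_of_length(binary, 2))       # ['00', '01', '10', '11']
print(kleene_closure_up_to(binary, 2))    # ['', '0', '1', '00', '01', '10', '11']
```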
Alphabet (formal languages)
[ "Mathematics" ]
806
[ "Formal languages", "Mathematical logic", "Combinatorics on words", "Combinatorics" ]
4,292,604
https://en.wikipedia.org/wiki/Asiamericana
Asiamericana is a dubious genus of coelurosaur known only from isolated teeth found in the Bissekty Formation of Uzbekistan. It was named to recognize the occurrence of similar fossil teeth in Central Asia and North America. These regions once formed a connected land mass during the Cretaceous period. Discovery and naming The holotype teeth were discovered during the Uzbek-Russian-British-American-Canadian (URBAC) expedition by Lev Alexandrovich Nessov between 1974 and 1985 and were first described by Nesov (1985). The type species is A. asiatica, which was named and described by Nesov (1995). The holotype of A. asiatica is CCMGE 460/12457, and two other teeth (ZIN PH 1110/16 and ZIN PH 1129/16) are also known. All three teeth come from the CBI-14 site of the Bissekty Formation of Uzbekistan. Description The teeth themselves are straight, lack a constriction at the base, and lack serrations. Classification In his initial description of the unusual teeth, Nesov speculated that they may belong either to saurodont fish or to spinosaurid dinosaurs. He later changed his opinion, deciding that they definitely represented theropod remains, and this opinion was followed by most later researchers, who excluded them from reviews of spinosaurid teeth for this reason. However, in 2013 a study assumed that the teeth were identical to those of the possibly dromaeosaurid Richardoestesia isosceles, and renamed the species Richardoestesia asiatica. A subsequent study confirmed this in 2019. References Coelurosaurs Late Cretaceous dinosaurs of Asia Fossil taxa described in 1995 Nomina dubia
Asiamericana
[ "Biology" ]
366
[ "Biological hypotheses", "Nomina dubia", "Controversial taxa" ]
4,292,852
https://en.wikipedia.org/wiki/X-ray%20interferometer
An X-ray interferometer is analogous to a neutron interferometer. It has been suggested that it may offer the very highest spatial resolution in astronomy, though the technology is unproven as of 2008. One technique is triple Laue interferometry (LLL interferometry). See also High energy X-rays References X-Ray and Neutron Interferometry Author: Ulrich Bonse at uni-dortmund.de, 10 February 2005 Interferometers
X-ray interferometer
[ "Technology", "Engineering" ]
101
[ "Interferometers", "Measuring instruments" ]
4,293,361
https://en.wikipedia.org/wiki/Astronomical%20interferometer
An astronomical interferometer or telescope array is a set of separate telescopes, mirror segments, or radio telescope antennas that work together as a single telescope to provide higher resolution images of astronomical objects such as stars, nebulas and galaxies by means of interferometry. The advantage of this technique is that it can theoretically produce images with the angular resolution of a huge telescope with an aperture equal to the separation, called baseline, between the component telescopes. The main drawback is that it does not collect as much light as the complete instrument's mirror. Thus it is mainly useful for fine resolution of more luminous astronomical objects, such as close binary stars. Another drawback is that the maximum angular size of a detectable emission source is limited by the minimum gap between detectors in the collector array. Interferometry is most widely used in radio astronomy, in which signals from separate radio telescopes are combined. A mathematical signal processing technique called aperture synthesis is used to combine the separate signals to create high-resolution images. In Very Long Baseline Interferometry (VLBI) radio telescopes separated by thousands of kilometers are combined to form a radio interferometer with a resolution which would be given by a hypothetical single dish with an aperture thousands of kilometers in diameter. At the shorter wavelengths used in infrared astronomy and optical astronomy it is more difficult to combine the light from separate telescopes, because the light must be kept coherent within a fraction of a wavelength over long optical paths, requiring very precise optics. Practical infrared and optical astronomical interferometers have only recently been developed, and are at the cutting edge of astronomical research. At optical wavelengths, aperture synthesis allows the atmospheric seeing resolution limit to be overcome, allowing the angular resolution to reach the diffraction limit of the optics. Astronomical interferometers can produce higher resolution astronomical images than any other type of telescope. At radio wavelengths, image resolutions of a few micro-arcseconds have been obtained, and image resolutions of a fractional milliarcsecond have been achieved at visible and infrared wavelengths. One simple layout of an astronomical interferometer is a parabolic arrangement of mirror pieces, giving a partially complete reflecting telescope but with a "sparse" or "dilute" aperture. In fact, the parabolic arrangement of the mirrors is not important, as long as the optical path lengths from the astronomical object to the beam combiner (focus) are the same as would be given by the complete mirror case. Instead, most existing arrays use a planar geometry, and Labeyrie's hypertelescope will use a spherical geometry. History One of the first uses of optical interferometry was applied by the Michelson stellar interferometer on the Mount Wilson Observatory's reflector telescope to measure the diameters of stars. The red giant star Betelgeuse was the first to have its diameter determined in this way on December 13, 1920. In the 1940s radio interferometry was used to perform the first high resolution radio astronomy observations. For the next three decades astronomical interferometry research was dominated by research at radio wavelengths, leading to the development of large instruments such as the Very Large Array and the Atacama Large Millimeter Array. 
Optical/infrared interferometry was extended to measurements using separated telescopes by Johnson, Betz and Townes (1974) in the infrared and by Labeyrie (1975) in the visible. In the late 1970s improvements in computer processing allowed for the first "fringe-tracking" interferometer, which operates fast enough to follow the blurring effects of astronomical seeing, leading to the Mk I, II and III series of interferometers. Similar techniques have now been applied at other astronomical telescope arrays, including the Keck Interferometer and the Palomar Testbed Interferometer. In the 1980s the aperture synthesis interferometric imaging technique was extended to visible light and infrared astronomy by the Cavendish Astrophysics Group, providing the first very high resolution images of nearby stars. In 1995 this technique was demonstrated on an array of separate optical telescopes for the first time, allowing a further improvement in resolution and even higher-resolution imaging of stellar surfaces. Software packages such as BSMEM or MIRA are used to convert the measured visibility amplitudes and closure phases into astronomical images. The same techniques have now been applied at a number of other astronomical telescope arrays, including the Navy Precision Optical Interferometer, the Infrared Spatial Interferometer and the IOTA array. A number of other interferometers have made closure phase measurements and are expected to produce their first images soon, including the VLTI, the CHARA array and Le Coroller and Dejonghe's Hypertelescope prototype. If completed, the MRO Interferometer with up to ten movable telescopes will produce among the first higher fidelity images from a long baseline interferometer. The Navy Optical Interferometer took the first step in this direction in 1996, achieving 3-way synthesis of an image of Mizar; then a first-ever six-way synthesis of Eta Virginis in 2002; and most recently "closure phase" measurements as a step toward the first synthesized images of geostationary satellites. Modern astronomical interferometry Astronomical interferometry is principally conducted using Michelson (and sometimes other types of) interferometers. The principal operational interferometric observatories which use this type of instrumentation include VLTI, NPOI, and CHARA. Current projects will use interferometers to search for extrasolar planets, either by astrometric measurements of the reciprocal motion of the star (as used by the Palomar Testbed Interferometer and the VLTI), through the use of nulling (as will be used by the Keck Interferometer and Darwin) or through direct imaging (as proposed for Labeyrie's Hypertelescope). Engineers at the European Southern Observatory (ESO) designed the Very Large Telescope (VLT) so that it can also be used as an interferometer. Along with the four unit telescopes, four mobile 1.8-metre auxiliary telescopes (ATs) were included in the overall VLT concept to form the Very Large Telescope Interferometer (VLTI). The ATs can move between 30 different stations, and at present, the telescopes can form groups of two or three for interferometry. When using interferometry, a complex system of mirrors brings the light from the different telescopes to the astronomical instruments where it is combined and processed. This is technically demanding as the light paths must be kept equal to within 1/1000 mm (the same order as the wavelength of light) over distances of a few hundred metres. 
For the Unit Telescopes, this gives an equivalent mirror diameter of up to , and when combining the auxiliary telescopes, equivalent mirror diameters of up to can be achieved. This is up to 25 times better than the resolution of a single VLT unit telescope. The VLTI gives astronomers the ability to study celestial objects in unprecedented detail. It is possible to see details on the surfaces of stars and even to study the environment close to a black hole. With a spatial resolution of 4 milliarcseconds, the VLTI has allowed astronomers to obtain one of the sharpest images ever of a star. This is equivalent to resolving the head of a screw at a distance of . Notable 1990s results included the Mark III measurement of diameters of 100 stars and many accurate stellar positions, COAST and NPOI producing many very high resolution images, and the Infrared Spatial Interferometer making measurements of stars in the mid-infrared for the first time. Additional results include direct measurements of the sizes of and distances to Cepheid variable stars, and young stellar objects. High on the Chajnantor plateau in the Chilean Andes, the European Southern Observatory (ESO), together with its international partners, built ALMA, which gathers radiation from some of the coldest objects in the Universe. ALMA is a single telescope of a new design, composed of 66 high-precision antennas operating at wavelengths of 0.3 to 9.6 mm. Its main 12-metre array has fifty antennas, 12 metres in diameter, acting together as a single telescope – an interferometer. An additional compact array of four 12-metre and twelve 7-metre antennas complements this. The antennas can be spread across the desert plateau over distances from 150 metres to 16 kilometres, giving ALMA a powerful variable "zoom". It is able to probe the Universe at millimetre and submillimetre wavelengths with unprecedented sensitivity and resolution, with a resolution up to ten times greater than the Hubble Space Telescope, complementing images made with the VLT interferometer. Optical interferometers are mostly seen by astronomers as very specialized instruments, capable of a very limited range of observations. It is often said that an interferometer achieves the effect of a telescope the size of the distance between the apertures; this is only true in the limited sense of angular resolution. The amount of light gathered—and hence the dimmest object that can be seen—depends on the real aperture size, so an interferometer offers little improvement when the image is dim (the thinned-array curse). The combined effects of limited aperture area and atmospheric turbulence generally limit interferometers to observations of comparatively bright stars and active galactic nuclei. However, they have proven useful for making very high precision measurements of simple stellar parameters such as size and position (astrometry), for imaging the nearest giant stars and probing the cores of nearby active galaxies. For details of individual instruments, see the list of astronomical interferometers at visible and infrared wavelengths. At radio wavelengths, interferometers such as the Very Large Array and MERLIN have been in operation for many years. The distances between telescopes are typically , although arrays with much longer baselines utilize the techniques of Very Long Baseline Interferometry. In the (sub)-millimetre, existing arrays include the Submillimeter Array and the IRAM Plateau de Bure facility. 
The Atacama Large Millimeter Array has been fully operational since March 2013. Max Tegmark and Matias Zaldarriaga have proposed the Fast Fourier Transform Telescope which would rely on extensive computer power rather than standard lenses and mirrors. If Moore's law continues, such designs may become practical and cheap in a few years. Progressing quantum computing might eventually allow more extensive use of interferometry, as newer proposals suggest. See also Event Horizon Telescope (EHT) and Laser Interferometer Space Antenna (LISA) ExoLife Finder, a proposed hybrid interferometric telescope Hypertelescope Cambridge Optical Aperture Synthesis Telescope, an optical interferometer Navy Precision Optical Interferometer, a Michelson Optical Interferometer List 4C Array Akeno Giant Air Shower Array (AGASA) Allen Telescope Array (ATA), formerly known as the One Hectare Telescope (1hT) Antarctic Muon And Neutrino Detector Array (AMANDA) Atacama Large Millimeter Array (ALMA) Australia Telescope Compact Array CHARA array Cherenkov Telescope Array (CTA) Chicago Air Shower Array (CASA) Infrared Optical Telescope Array (IOTA) Interplanetary Scintillation Array (IPS array) also called the Pulsar Array LOFAR (LOw Frequency ARray) Modular Neutron Array (MoNA) Murchison Widefield Array (MWA) Northern Extended Millimeter Array (NOEMA) Nuclear Spectroscopic Telescope Array (NuSTAR) Square Kilometre Array (SKA) Submillimeter Array (SMA) Sunyaev-Zel'dovich Array (SZA) Telescope Array Project Very Large Array (VLA) Very Long Baseline Array (VLBA) Very Small Array References Further reading M. Ryle & D. Vonberg, 1946 Solar radiation on 175Mc/s, Nature 158 pp 339 Govert Schilling, New Scientist, 23 February 2006 The hypertelescope: a zoom with a view External links How to combine the light from multiple telescopes for astrometric measurements at NPOI... Why an Optical Interferometer? Remote Sensing the potential and limits of astronomical interferometry The Antoine Labeyrie's hypertelescope project's website pt:Interferômetro de Michelson
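As a rough illustration of the baseline–resolution relationship described at the start of the article, the following Python sketch uses the common small-angle estimate θ ≈ λ/B; the exact numerical prefactor depends on how resolution is defined and on the array geometry, and the example wavelengths and baselines below are assumed round numbers rather than any facility's specification.

```python
import math

def angular_resolution_mas(wavelength_m, baseline_m):
    """Approximate interferometer resolution, theta ~ lambda / B,
    converted from radians to milliarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return math.degrees(theta_rad) * 3600.0 * 1000.0

# Near-infrared light (2.2 micrometres) on a 130 m optical baseline
print(round(angular_resolution_mas(2.2e-6, 130.0), 2), "mas")   # about 3.5 mas

# 1.3 cm radio waves on a 5000 km VLBI baseline
print(round(angular_resolution_mas(0.013, 5.0e6), 2), "mas")    # about 0.54 mas
```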
Astronomical interferometer
[ "Astronomy" ]
2,514
[ "Astronomical interferometers", "Astronomical instruments" ]
4,293,467
https://en.wikipedia.org/wiki/Prony%20brake
The Prony brake is a simple device invented by Gaspard de Prony in 1821 to measure the torque produced by an engine. The term "brake horsepower" is one measurement of power derived from this method of measuring torque. (Power is calculated by multiplying torque by rotational speed.) Essentially the measurement is made by wrapping a cord or belt around the output shaft of the engine and measuring the force transferred to the belt through friction. The friction is increased by tightening the belt until the frequency of rotation of the shaft is reduced to a desired rotational speed. In practice more engine power can then be applied until the limit of the engine is reached. In its simplest form an engine is connected to a rotating drum by means of an output shaft. A friction band is wrapped around half the drum's circumference and each end attached to a separate spring balance. A substantial pre-load is then applied to the ends of the band, so that each spring balance has an initial and identical reading. When the engine is running, the frictional force between the drum and the band will increase the force reading on one balance and decrease it on the other. The difference between the two readings multiplied by the radius of the driven drum is equal to the torque. If the engine speed is measured with a tachometer, the brake horsepower is easily calculated. An alternate mechanism is to clamp a lever to the shaft and measure using a single balance. The torque is then related to the lever length, shaft diameter and measured force. The device is generally used over a range of engine speeds to obtain power and torque curves for the engine, since there is a non-linear relationship between torque and engine speed for most engine types. Power output in SI units may be calculated as follows: Rotary power (in newton-meters per second, N·m/s) = 2π × the distance from the center-line of the drum (the friction device) to the point of measurement (in meters, m) × rotational speed (in revolutions per second) × measured force (in newtons, N). Or in Imperial units: Rotary power (in pound-feet per second, lbf·ft/s) = 2π × distance from center-line of the drum (the friction device) to the point of measurement (in feet, ft) × rotational speed (in revolutions per second) × measured force (in pounds, lbf). References Dynamometers French inventions
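The power relation stated above translates directly into a short calculation. A minimal Python sketch follows; the drum radius, spring-balance readings and shaft speed are made-up example numbers, not measurements from the article.

```python
import math

def brake_power_watts(radius_m, rev_per_s, force_n):
    """Prony brake power: P = 2*pi * r * n * F
    (torque r*F multiplied by angular speed 2*pi*n)."""
    return 2.0 * math.pi * radius_m * rev_per_s * force_n

# The difference between the two spring-balance readings gives the net friction force
net_force = 180.0 - 40.0                           # newtons (example readings)
power = brake_power_watts(0.15, 20.0, net_force)   # 0.15 m radius, 20 rev/s
print(f"{power:.0f} W (about {power / 745.7:.1f} hp)")
```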
Prony brake
[ "Technology", "Engineering" ]
500
[ "Dynamometers", "Measuring instruments" ]
4,293,804
https://en.wikipedia.org/wiki/Smokeless%20tobacco
Smokeless tobacco is a tobacco product that is used by means other than smoking. Its use involves chewing, sniffing, or placing the product between gum and the cheek or lip. Smokeless tobacco products are produced in various forms, such as chewing tobacco, snuff, snus, and dissolvable tobacco products. Smokeless tobacco is widely used in South Asia, which accounts for about 80% of global consumption. All smokeless tobacco products contain nicotine and are therefore highly addictive. Quitting smokeless tobacco use is as challenging as smoking cessation. Using smokeless tobacco can cause various harmful effects such as dental disease, oral cancer, oesophagus cancer, pancreas cancer, and coronary heart disease, as well as negative reproductive effects including stillbirth, premature birth and low birth weight. Smokeless tobacco poses a lower health risk than traditional combusted products. However, it is not a healthy alternative to cigarette smoking. The level of risk varies between different types of products (e.g., low nitrosamine Swedish-type snus versus other smokeless tobacco with high nitrosamine levels) and producing regions. There is no safe level of smokeless tobacco use. Globally it contributes to 650 000 deaths each year. Smokeless tobacco products typically contain over 3000 constituents, including multiple cancer-causing chemicals. Approximately 28 chemical constituents present in smokeless tobacco can cause cancer, among which nitrosamines are the most prominent. Smokeless tobacco consumption is widespread throughout the world. Once addicted to nicotine from smokeless tobacco use, many people, particularly young people, expand their tobacco use by smoking cigarettes. Males are more likely than females to use smokeless tobacco. Types Most smokeless tobacco use involves placing the product between the gum and the cheek or lip. Smokeless tobacco is a noncombustible tobacco product. Types of smokeless tobacco include: Mixed routes of administration: Kuber, a smokeless tobacco product known for its highly addictive properties and its unique presentation disguised as a mouth freshener. Users commonly add it to tea or consume it directly by placing a pinch under the lower lip. Nasal administration: Snuff, a type of tobacco that is inhaled or "snuffed" into the nasal cavity. Traditionally, a specialized tool called a snuff spoon was used for this purpose. However, modern users may simply pinch the snuff between their thumb and forefinger or use pre-measured packets. Oral (buccal, sublabial, or sublingual): Chewing tobacco, a type of tobacco that is chewed Creamy snuff, a fluid tobacco mixture marketed as a dental hygiene aid, albeit used for recreation Dipping tobacco, a type of tobacco that is placed between the lower or upper lip and gums. This form of tobacco (Hindi: Khaini) is commonly used in the Indian subcontinent. It is the second most common form of tobacco consumption in India, after cigarette smoking. Dissolvable tobacco, a variation on chewing tobacco that completely dissolves in the mouth Gutka, a mixture of tobacco, areca nut, and various flavorings sold in South Asia Iqmik, an Alaskan tobacco product which also contains punk ash Naswar, an Afghan tobacco product similar to dipping tobacco Pituri, a nicotine-containing substance traditionally made from Australian tobacco plants, used by Indigenous Australians for chewing and placed between the lower lip and gums. They use it in high doses to induce stupor or trance. 
Snus, similar to dipping tobacco although the tobacco is placed under the upper lip and there is no need for spitting Tobacco chewing gum, a kind of chewing gum containing tobacco Toombak and shammah, preparations found in North Africa, East Africa, and the Arabian peninsula Topical: Topical tobacco paste, a paste applied to the skin and absorbed through the dermis Since there are varied manufacturing methods, products can differ greatly in chemical arrangement and nicotine level. Smokeless tobacco products typically contain over 3000 constituents which play a part in their taste as well as scent. Nicotine levels Smokeless tobacco differs depending on the type of product, the types of tobacco used, and the amount of each tobacco type used within a product. Each variable results in different level of nicotine. Furthermore, nicotine is absorbed by the body to different degrees depending on the pH level of the product, which is known as the free nicotine or unionized nicotine level. Below are some measured nicotine levels of various smokeless tobacco products from 2006 and 2007 and their corresponding free nicotine levels as calculated by the Henderson–Hasselbalch equation. Health effects Various national and international health organizations, including the World Health Organization, the US National Cancer Institute, the UK Royal College of Physicians, stated that, even if it is less dangerous than smoking, using smokeless tobacco is addictive, represents a major health risk, has no safe level use and is not a safe substitute for smoking. Using smokeless tobacco can cause a number of adverse health effects such as dental disease, oral cancer, oesophagus cancer, and pancreatic cancer, cardiovascular disease, asthma, and deformities in the female reproductive system. It also raises the risk of fatal coronary artery disease, fatal stroke and non-fatal ischaemic heart disease Globally it contributes to 650 000 deaths each year with a significant proportion of them in Southeast Asia. Smoking cessation and harm reduction Quitting smokeless tobacco use is as challenging as smoking cessation. There is no scientific evidence that using smokeless tobacco can help a person quit smoking. It is not recommended to use any smokeless tobacco product as part of a harm reduction strategy. Tobacco companies that sell smokeless tobacco products promote them as harm reduction products and a less harmful substitute to cigarettes. This creates a false perception of safety while real risk reduction can be achieved by smoking less. Safety Smokeless tobacco products vary extensively worldwide in both form and health hazards. The level of health risk varies between different types of products (e.g., low nitrosamine Swedish-type snus versus other smokeless tobacco with high nitrosamine levels from South Asia). Even though smokeless tobacco poses a lower health risk than traditional combusted products, contrary to common belief it is not a "safe" alternative to conventional tobacco. There is no safe level of smokeless tobacco use. The declines in smokeless tobacco initiation among adolescents and young adults is particularly relevant to improving their health because smokeless tobacco use is often linked to subsequent cigarette initiation. Smokeless tobacco users can experience negative health consequences at any age. Youth use of tobacco in any form is unsafe. Cancer Smokeless tobacco (including products where tobacco is chewed) is a cause of oral cancer, oesophagus cancer, and pancreas cancer. 
Increased risk of oral cancer caused by smokeless tobacco is present in countries such as the United States but is particularly prevalent in Southeast Asian countries where the use of smokeless tobacco is common. Smokeless tobacco can cause white or gray patches inside the mouth (leukoplakia) that can develop into oral cancer. Carcinogens All tobacco products, including smokeless, contain cancer-causing chemicals. The carcinogenic compounds occurring in smokeless tobacco vary widely and depend upon the kind of product and how it was manufactured. There are 28 known cancer-causing substances in smokeless tobacco products. Carcinogenic compounds in smokeless tobacco belong primarily to three groups of compounds: tobacco-specific nitrosamines (TSNA), N-nitrosoamino acids and N-nitrosamines. Among these, TSNAs are the most abundant in smokeless tobacco and the most carcinogenic. N-Nitrosonornicotine (NNN) and the nicotine-derived nitrosamine ketone (NNK) are Group 1 carcinogens to humans. These two nitrosamines found in smokeless tobacco products are the main agents for the majority of cancers in smokeless tobacco users. Products such as 3-(methylnitrosamino)propionitrile, nitrosamines, and nicotine initiate the production of reactive oxygen species in smokeless tobacco, eventually leading to fibroblast, DNA, and RNA damage with carcinogenic effects in the mouth of tobacco consumers. The metabolic activation of nitrosamine in tobacco by cytochrome P450 enzymes may lead to the formation of N-nitrosonornicotine, a major carcinogen, and micronuclei, which are an indicator of genotoxicity. These effects lead to further DNA damage and, eventually, oral cancer. Other chemicals found in tobacco can also cause cancer. These include the radioactive element polonium-210 found in tobacco fertilizer. Harmful chemicals are also formed when tobacco is cured with heat (polycyclic aromatic hydrocarbons). Furthermore, tobacco contains harmful metals such as arsenic, beryllium, cadmium, chromium, cobalt, lead, nickel, and mercury. Nicotine in saliva from smokeless tobacco use can reach levels that are toxic to cells in the oral cavity. Cardiovascular disease Using smokeless tobacco increases the risk of fatal coronary heart disease and stroke. Use of smokeless tobacco also seems to greatly raise the risk of non-fatal ischaemic heart disease among users in Asia, although not in Europe. Effects during pregnancy Smokeless tobacco can cause adverse reproductive effects including stillbirth, premature birth, and low birth weight. Nicotine in smokeless tobacco products that are used during pregnancy can affect how a baby's brain develops before birth. Management Due to the harm caused by smokeless tobacco, its use might lead to the need for management or treatment. Medications that show some benefit include varenicline and nicotine lozenges. Some behavioural interventions may also help. Prevalence More than 300 million people use smokeless tobacco worldwide. People of many regions, including India, Pakistan, other Asian countries, and North America, have a long history of smokeless tobacco use. Once addicted to nicotine from smokeless tobacco use, many people, particularly young people, expand their tobacco use by smoking cigarettes. Because young people who use smokeless tobacco can become addicted to nicotine, they may be more likely to also become cigarette smokers. Youth are particularly susceptible to starting smokeless tobacco use. United States Males were more likely than females to have used smokeless tobacco in the past month. 
In 2014, 3.3 percent of people aged 12 or older (an estimated 8.7 million people) used smokeless tobacco in the past month. Past month smokeless tobacco use remained relatively stable between 2002 and 2014. Past month smokeless tobacco use between 2002 and 2014 was mostly consistent among adults aged 26 or older. There was more variability in the percentages of young adults aged 18 to 25 and adolescents aged 12 to 17 who used smokeless tobacco between 2002 and 2014. Smokeless tobacco use for adolescents aged 12 to 17 was higher during the mid-2000s, but the 2014 estimates were closer to the lower levels seen in the early 2000s. In 2014, an estimated 1.0 million people aged 12 or older used smokeless tobacco for the first time in the past year; this represents 0.5 percent of people who had not previously used smokeless tobacco. Prevalence of smokeless tobacco types that contain areca nut is increasing in the Western Pacific. In 2016 about 2 of every 100 middle school students in the US (2.2%) reported current use of smokeless tobacco. In 2016 nearly 6 of every 100 high school students in the US (5.8%) reported current use of smokeless tobacco. Public policy WHO FCTC policies The WHO Framework Convention on Tobacco Control (FCTC) contains a set of common goals, minimum standards for tobacco control policy in the 168 countries which signed it. The FCTC policies are also applicable for smokeless tobacco however they are less implemented in regards to these products. Only 57 countries have policies regulating smokeless tobacco use. 13 countries and the European Union apply a ban for advertising and promoting smokeless tobacco. The sale of smokeless tobacco to minors (Article 16 of FCTC) is restricted only in 13 countries and the WHO-defined Eastern Mediterranean region. 11 countries use taxation and pricing measures (Article 6) to reduce use in the general population. In countries where they are applied to smokeless tobacco, FCTC policies had a positive impact on reducing their use. If multiple policies, including large taxes, are implemented, premature deaths can be prevented. However if taxation is higher for smoking products only people might switch to cheaper alternatives like smokeless tobacco. Banning The manufacture, distribution and sale of smokeless tobacco is banned completely in Bhutan, Singapore, and Sri Lanka. Partial bans on import and sales on some products are in effect in Australia, Bahrain, Brazil, India, Iran, Tanzania, Thailand, New Zealand, the UK and the European Union. History Smokeless tobacco was first discussed in the English language in 1683 as a powdered tobacco for breathing into the nose. People have used it for over a thousand years. Cigarette manufacturers have penetrated the smokeless tobacco market. Positions of medical organizations As long ago as 1986, the advisory committee to the Surgeon General concluded that the use of smokeless tobacco "is not a safe substitute for smoking cigarettes. It can cause cancer and a number of noncancerous oral conditions and can lead to nicotine addiction and dependence". According to a 2002 report by the Royal College of Physicians, "As a way of using nicotine, the consumption of non-combustible tobacco is of the order of 10–1,000 times less hazardous than smoking, depending on the product". 
A panel of experts convened by the National Institutes of Health (NIH) in 2006 stated that the "range of risks, including nicotine addiction, from smokeless tobacco products may vary extensively because of differing levels of nicotine, carcinogens, and other toxins in different products". In 2010 the National Cancer Institute stated that "because all tobacco products are harmful and cause cancer, the use of all of these products should be strongly discouraged. There is no safe level of tobacco use. People who use any type of tobacco product should be urged to quit". In 2015 the American Cancer Society stated that "Using any kind of spit or smokeless tobacco is a major health risk. It's less lethal than smoking tobacco, but less lethal is a far cry from safe." In 2017 the World Health Organization states that "Smokeless tobacco use is a significant part of the overall world tobacco problem." Public perceptions Many people who use smokeless tobacco may think it is safer than smoking, but all tobacco products contain toxicants, and use of smokeless tobacco poses its own significant health risks. In South and South-East Asia these products are considered part of the cultural heritage and there is little enthusiasm for regulation. Around 80% of users live in these regions. See also Herbal cigarette Tobacco Tobacco usage in sport References Tobacco products Carcinogens IARC Group 1 carcinogens Articles containing video clips
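The free (unionized) nicotine levels mentioned under "Nicotine levels" follow from the Henderson–Hasselbalch equation for a weak base: free fraction = 1 / (1 + 10^(pKa − pH)). A hedged Python sketch is below; the pKa of roughly 8.0 and the pH and nicotine figures are assumed illustrative values, not measurements from this article.

```python
def free_nicotine_fraction(ph, pka=8.0):
    """Unionized (free) fraction of a weak base from the
    Henderson-Hasselbalch equation: 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

def free_nicotine_mg_per_g(total_mg_per_g, ph, pka=8.0):
    """Free nicotine content given total nicotine and product pH."""
    return total_mg_per_g * free_nicotine_fraction(ph, pka)

# Example with assumed values: 12 mg/g total nicotine at two different product pHs
for ph in (7.5, 8.5):
    print(ph, round(free_nicotine_mg_per_g(12.0, ph), 2), "mg/g free nicotine")
```

As the example shows, a higher product pH shifts more of the nicotine into its free form, which is why pH is tracked alongside total nicotine content.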
Smokeless tobacco
[ "Chemistry", "Environmental_science" ]
3,066
[ "Carcinogens", "Toxicology" ]
4,293,843
https://en.wikipedia.org/wiki/Pleckstrin%20homology%20domain
Pleckstrin homology domain (PH domain) or (PHIP) is a protein domain of approximately 120 amino acids that occurs in a wide range of proteins involved in intracellular signaling or as constituents of the cytoskeleton. This domain can bind phosphatidylinositol lipids within biological membranes (such as phosphatidylinositol (3,4,5)-trisphosphate and phosphatidylinositol (4,5)-bisphosphate), and proteins such as the βγ-subunits of heterotrimeric G proteins, and protein kinase C. Through these interactions, PH domains play a role in recruiting proteins to different membranes, thus targeting them to appropriate cellular compartments or enabling them to interact with other components of the signal transduction pathways. Lipid binding specificity Individual PH domains possess specificities for phosphoinositides phosphorylated at different sites within the inositol ring, e.g., some bind phosphatidylinositol (4,5)-bisphosphate but not phosphatidylinositol (3,4,5)-trisphosphate or phosphatidylinositol (3,4)-bisphosphate, while others may possess the requisite affinity. This is important because it makes the recruitment of different PH domain containing proteins sensitive to the activities of enzymes that either phosphorylate or dephosphorylate these sites on the inositol ring, such as phosphoinositide 3-kinase or PTEN, respectively. Thus, such enzymes exert a part of their effect on cell function by modulating the localization of downstream signaling proteins that possess PH domains that are capable of binding their phospholipid products. Structure The 3D structure of several PH domains has been determined. All known cases have a common structure consisting of two perpendicular anti-parallel beta sheets, followed by a C-terminal amphipathic helix. The loops connecting the beta-strands differ greatly in length, making the PH domain relatively difficult to detect while providing the source of the domain's specificity. The only conserved residue among PH domains is a single tryptophan located within the alpha helix that serves to nucleate the core of the domain. Proteins containing PH domain PH domains can be found in many different proteins, such as OSBP or ARF. Recruitment to the Golgi apparatus in this case is dependent on both PtdIns and ARF. A large number of PH domains have poor affinity for phosphoinositides and are hypothesized to function as protein binding domains. A Genome-wide look in Saccharomyces cerevisiae showed that most of the 33 yeast PH domains are indeed promiscuous in binding to phosphoinositides, while only one (Num1-PH) behaved highly specific . Proteins reported to contain PH domains belong to the following families: Pleckstrin, the protein where this domain was first detected, is the major substrate of protein kinase C in platelets. Pleckstrin contains two PH domains. ARAP proteins contain five PH domains. Serine/threonine-specific protein kinases such as the Akt/Rac family, protein kinase D1, and the trypanosomal NrkA family. Non-receptor tyrosine kinases belonging to the Btk/Itk/Tec subfamily. Insulin receptor substrate 1 (IRS-1). Regulators of small G-proteins: 64 RhoGEFs of the Dbl-like family., and several GTPase activating proteins like ABR, BCR or ARAP proteins. Cytoskeletal proteins such as dynamin (see ), Caenorhabditis elegans kinesin-like protein unc-104 (see ), spectrin beta-chain, syntrophin (2 PH domains), and S. cerevisiae nuclear migration protein NUM1. Oxysterol-binding proteins OSBP, S. cerevisiae OSH1 and YHR073w. 
Ceramide kinase, a lipid kinase that phosphorylates ceramides to ceramide-1-phosphate. G protein receptor kinases (GRK) of GRK2 subfamily (beta-adrenergic receptor kinases): GRK2 and GRK3. Subfamilies Spectrin/pleckstrin-like Examples Human genes encoding proteins containing this domain include: ABR, ADAP2, ADRBK1, ADRBK2, AFAP, AFAP1, AFAP1L1, AFAP1L2, AKAP13, AKT1, AKT2, AKT3, ANLN, APBB1IP, APPL1, APPL2, ARHGAP10, ARHGAP12, ARHGAP15, ARHGAP21, ARHGAP22, ARHGAP23, ARHGAP24, ARHGAP25, ARHGAP26, ARHGAP27, ARHGAP9, ARHGEF16, ARHGEF18, ARHGEF19, ARHGEF2, ARHGEF3, ARHGEF4, ARHGEF5, ARHGEF6, ARHGEF7, ARHGEF9, ASEF2, BMX, BTK, C20orf42, C9orf100, CADPS, CADPS2, CDC42BPA, CDC42BPB, CDC42BPG, CENTA1, CENTB1, CENTB2, CENTB5, CENTD1, CENTD2, CENTD3, CENTG1, CENTG2, CENTG3, CERK, CIT, CNKSR1, CNKSR2, COL4A3BP, CTGLF1, CTGLF2, CTGLF3, * CTGLF4, CTGLF5, CTGLF6, DAB2IP, DAPP1, DDEF1, DDEF2, DDEFL1, DEF6, DEPDC2, DGKD, DGKH, DGKK, DNM1, DNM2, DNM3, DOCK10, DOCK11, DOCK9, DOK1, DOK2, DOK3, DOK4, DOK5, DOK6, DTGCU2, EXOC8, FAM109A, FAM109B, FARP1, FARP2, FGD1, FGD2, FGD3, FGD4, FGD5, FGD6, GAB1, GAB2, GAB3, GAB4, GRB10, GRB14, GRB7, IRS1, IRS2, IRS4, ITK, ITSN1, ITSN2, KALRN, KIF1A, KIF1B, KIF1Bbeta, MCF2, MCF2L, MCF2L2, MRIP, MYO10, NET1, NGEF, OBPH1, OBSCN, OPHN1, OSBP, OSBP2, OSBPL10, OSBPL11, OSBPL3, OSBPL5, OSBPL6, OSBPL7, OSBPL8, OSBPL9, PHLDA2, PHLDA3, PHLDB1, PHLDB2, PHLPP, PIP3-E, PLCD1, PLCD4, PLCG1, PLCG2, PLCH1, PLCH2, PLCL1, PLCL2, PLD1, PLD2, PLEK, PLEK2, PLEKHA1, PLEKHA2, PLEKHA3, PLEKHA4, PLEKHA5, PLEKHA6, PLEKHA7, PLEKHA8, PLEKHB1, PLEKHB2, PLEKHC1, PLEKHF1, PLEKHF2, PLEKHG1, PLEKHG2, PLEKHG3, PLEKHG4, PLEKHG5, PLEKHG6, PLEKHH1, PLEKHH2, PLEKHH3, PLEKHJ1, PLEKHK1, PLEKHM1, PLEKHM2, PLEKHO1, PLEKHQ1, PREX1, PRKCN, PRKD1, PRKD2, PRKD3, PSCD1, PSCD2, PSCD3, PSCD4, PSD, PSD2, PSD3, PSD4, RALGPS1, RALGPS2, RAPH1, RASA1, RASA2, RASA3, RASA4, RASAL1, RASGRF1, RGNEF, ROCK1, ROCK2, RTKN, SBF1, SBF2, SCAP2, SGEF, SH2B, SH2B1, SH2B2, SH2B3, SH3BP2, SKAP1, SKAP2, SNTA1, SNTB1, SNTB2, SOS1, SOS2, SPATA13, SPNB4, SPTBN1, SPTBN2, SPTBN4, SPTBN5, STAP1, SWAP70, SYNGAP1, TBC1D2, TEC, TIAM1, TRIO, TRIOBP, TYL, URP1, URP2, VAV1, VAV2, VAV3, VEPH1 See also Pleckstrin The unrelated FYVE domain binds Phosphatidylinositol 3-phosphate and has been found in over 60 proteins. The GRAM domain is a structurally related protein domain. References External links Nash Lab Protein Interaction Domains - PH domain description - Calculated orientations of PH domains in membranes Peripheral membrane proteins Protein domains Protein superfamilies
Pleckstrin homology domain
[ "Biology" ]
2,139
[ "Protein superfamilies", "Protein domains", "Protein classification" ]
4,293,894
https://en.wikipedia.org/wiki/Saccharimeter
A saccharimeter is an instrument for measuring the concentration of sugar solutions. This is commonly achieved using a measurement of refractive index (refractometer) or the angle of rotation of polarization of optically active sugars (polarimeter). Saccharimeters are used in food processing industries, brewing, and the distilled alcoholic drinks industry. External links Historical Bates Type Saccharimeter NIST Museum object. Measuring instruments
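For the polarimetric type of saccharimeter, concentration follows from Biot's law, α = [α]·l·c (observed rotation = specific rotation × path length × concentration). A small illustrative Python sketch is below; the specific rotation quoted for sucrose and the sample numbers are assumptions for the example, not values from the article.

```python
def sugar_concentration_g_per_ml(observed_rotation_deg, specific_rotation_deg, path_length_dm):
    """Biot's law rearranged: c = alpha / ([alpha] * l),
    with path length l in decimetres and c in g/mL (polarimetric convention)."""
    return observed_rotation_deg / (specific_rotation_deg * path_length_dm)

# Example: +6.65 degrees observed in a 2 dm tube; sucrose [alpha]_D taken as about +66.5 deg
c = sugar_concentration_g_per_ml(6.65, 66.5, 2.0)
print(f"{c:.3f} g/mL ({c * 100:.1f} g per 100 mL)")
```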
Saccharimeter
[ "Technology", "Engineering" ]
93
[ "Measuring instruments" ]
4,294,169
https://en.wikipedia.org/wiki/Magnetic%20detector
The magnetic detector or Marconi magnetic detector, sometimes called the "Maggie", was an early radio wave detector used in some of the first radio receivers to receive Morse code messages during the wireless telegraphy era around the turn of the 20th century. Developed in 1902 by radio pioneer Guglielmo Marconi from a method invented in 1895 by New Zealand physicist Ernest Rutherford, it was used in Marconi wireless stations until around 1912, when it was superseded by vacuum tubes. It was widely used on ships because of its reliability and insensitivity to vibration. A magnetic detector was part of the wireless apparatus in the radio room of the RMS Titanic which was used to summon help during its famous 15 April 1912 sinking. History The primitive spark gap radio transmitters used during the first three decades of radio (1886-1916) could not transmit audio (sound) and instead transmitted information by wireless telegraphy; the operator switched the transmitter on and off with a telegraph key, creating pulses of radio waves to spell out text messages in Morse code. So the radio receiving equipment of the time did not have to convert the radio waves into sound like modern receivers, but merely detect the presence or absence of the radio signal. The device that did this was called a detector. The first widely used detector was the coherer, invented in 1890. The coherer was a very poor detector, insensitive and prone to false triggering due to impulsive noise, which motivated much research to find better radio wave detectors. Ernest Rutherford had first used the hysteresis of iron to detect Hertzian waves in 1896 by the demagnetization of an iron needle when a radio signal passed through a coil around the needle, however the needle had to be remagnetized so this was not suitable for a continuous detector. Many other wireless researchers such as E. Wilson, C. Tissot, Reginald Fessenden, John Ambrose Fleming, Lee De Forest, J.C. Balsillie, and L. Tieri had subsequently devised detectors based on hysteresis, but none had become widely used due to various drawbacks. Many earlier versions had a rotating magnet above a stationary iron band with coils on it. This type was only periodically sensitive, when the magnetic field was changing, which occurred as the magnetic poles passed the iron. During his transatlantic radio communication experiments in December 1902 Marconi found the coherer to be too unreliable and insensitive for detecting the very weak radio signals from long-distance transmissions. It was this need that drove him to develop his magnetic detector. Marconi devised a more effective configuration with a moving iron band driven by a clockwork motor passing by stationary magnets and coils, resulting in a continuous supply of iron that was changing magnetization, and thus continuous sensitivity (Rutherford claimed he had also invented this configuration). The Marconi magnetic detector was the "official" detector used by the Marconi Company from 1902 through 1912, when the company began converting to the Fleming valve and Audion-type vacuum tubes. It was used through 1918. Description See drawing at right. The Marconi version consisted of an endless iron band (B) built up of 70 strands of number 40 gage silk-covered iron wire. In operation, the band passes over two grooved pulleys rotated by a wind-up clockwork motor. The iron band passes through the center of a glass tube which is close wound with a single layer along several millimeters with number 36 gage silk-covered copper wire. 
This coil (C) functions as the radio frequency excitation coil. Over this winding is a small bobbin wound with wire of the same gauge to a resistance of about 140 ohms. This coil (D) functions as the audio pickup coil. Around these coils two permanent horseshoe magnets are arranged to magnetize the iron band as it passes through the glass tube. How it works The device works by hysteresis of the magnetization in the iron wires. The permanent magnets are arranged to create two opposite magnetic fields each directed toward (or away) from the center of the coils in opposite directions along the wire. This functions to magnetize the iron band along its axis, first in one direction as it approaches the center of the coils, then reverse its magnetism to the opposite direction as it leaves from the other side of the coil. Due to the hysteresis (coercivity) of the iron, a certain threshold magnetic field (the coercive field, Hc) is required to reverse the magnetization. So the magnetization in the moving wires does not reverse in the center of the device where the field reverses, but some way toward the departing side of the wires, when the field of the second magnet reaches Hc. Although the wire itself is moving through the coil, in the absence of a radio signal the location where the magnetization "flips" is stationary with respect to the pickup coil, so there is no flux change and no voltage is induced in the pickup coil. The radio signal from the antenna (A) is received by a tuner (not shown) and passed through the excitation coil C, the other end of which is connected to ground (E). The rapidly reversing magnetic field from the coil exceeds the coercivity Hc and cancels the hysteresis of the iron, causing the magnetization change to suddenly move up the wire to the center, between the magnets, where the field reverses. This had an effect similar to thrusting a magnet into the coil, causing the magnetic flux through the pickup coil D to change, inducing a current pulse in the pickup coil. The audio pickup coil is connected to a telephone receiver (earphone) (T) which converts the current pulse to sound. The radio signal from a spark gap transmitter consisted of pulses of radio waves (damped waves) which repeated at an audio rate, around several hundred per second. Each pulse of radio waves produced a pulse of current in the earphone, so the signal sounded like a musical tone or buzz in the earphone. Technical details The iron band was turned by a mainspring and clockwork mechanism inside the case. Differing values have been given for the speed of the band, from 1.6 to 7.5 cm per second; the device could probably function over a wide range of band speeds. The operator had to keep the mainspring wound up, using a crank on the side. Operators would sometimes forget to wind it, so the band would stop turning and the detector stop working, sometimes in the middle of a radio message. The detector produced electronic noise that was heard in the earphone as a "hissing" or "roaring" sound in the background, somewhat fatiguing to listen to. This was Barkhausen noise due to the Barkhausen effect in the iron. As the magnetic field in a given area of the iron wire changed as it moved through the detector, the microscopic domain walls between magnetic domains in the iron moved in a series of jerks, as they got hung up on defects in the iron crystal lattice, then pulled free. Each jerk produced a tiny change in the magnetic field through the coil, and induced a pulse of noise. 
Because the output was an audio alternating current and not a direct current, the detector could only be used with earphones and not with the common recording instrument used in coherer radiotelegraphy receivers, the siphon paper tape recorder. From a technical standpoint, several subtle prerequisites are necessary for operation. The strength of the magnetic field of the permanent magnets at the iron band must be of the same order of magnitude as the strength of the field generated by the radio frequency excitation coil, allowing the radio frequency signal to exceed the hysteresis threshold (coercivity) of the iron. Also, the impedance of the tuner that supplies the radio signal must be low to match the low impedance of the excitation coil, requiring special tuner design considerations. The impedance of the telephone earphone must roughly match the impedance of the audio pickup coil, which is a few hundred ohms. The iron band moves at a few centimeters per second. The magnetic detector was much more sensitive than the coherers commonly in use at the time, although not as sensitive as the Fleming valve, which began to replace it around 1912. Detailed instructions and specifications for the operation and maintenance of Marconi's magnetic detector are given on page 175 of the Handbook of Technical Instruction for Wireless Telegraphists by J. C. Hawkhead (second edition, revised by H. M. Dowsett). References External links The Marconi magnetic detector From the book "A Handbook of Wireless Telegraphy" (1913) by J. Erskine-Murray, D.Sc. Magnetic detector basics History of radio Radio electronics Detectors
Magnetic detector
[ "Engineering" ]
1,797
[ "Radio electronics" ]
4,294,204
https://en.wikipedia.org/wiki/MedCalc
MedCalc is a statistical software package designed for the biomedical sciences. It has an integrated spreadsheet for data input and can import files in several formats (Excel, SPSS, CSV, ...). MedCalc includes basic parametric and non-parametric statistical procedures and graphs such as descriptive statistics, ANOVA, Mann–Whitney test, Wilcoxon test, χ2 test, correlation, linear as well as non-linear regression, logistic regression, and multivariate statistics. Survival analysis includes Cox regression (Proportional hazards model) and Kaplan–Meier survival analysis. Procedures for method evaluation and method comparison include ROC curve analysis, Bland–Altman plot, as well as Deming and Passing–Bablok regression. The software also includes reference interval estimation, meta-analysis and sample size calculations. The first DOS version of MedCalc was released in April 1993 and the first version for Windows was available in November 1996. Version 15.2 introduced a user-interface in English, Chinese (simplified and traditional), French, German, Italian, Japanese, Korean, Polish, Portuguese (Brazilian), Russian and Spanish. Reviews Stephan C, Wesseling S, Schink T, Jung K. “Comparison of eight computer programs for receiver-operating characteristic analysis.” Clinical Chemistry 2003;49:433-439. Lukic IK. “MedCalc Version 7.0.0.2. Software Review.” Croatian Medical Journal 2003;44:120-121. Garber C. “MedCalc Software for Statistics in Medicine. Software review.” Clinical Chemistry, 1998;44:1370. Petrovecki M. “MedCalc for Windows. Software Review.” Croatian Medical Journal, 1997;38:178. See also List of statistical packages Comparison of statistical packages References External links MedCalc Statistical Software Homepage Statistical software Windows-only proprietary software Biostatistics
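MedCalc itself is driven through a graphical interface and has no public scripting API, but the method-comparison procedures listed above can be sketched with open-source tools. The following minimal Bland–Altman calculation in Python uses invented measurement data purely for illustration; it is not MedCalc code.

```python
import numpy as np

# Paired measurements of the same quantity by two methods (invented example data).
method_a = np.array([5.1, 6.3, 7.8, 9.0, 10.2, 11.9, 13.4])
method_b = np.array([5.4, 6.0, 8.1, 8.7, 10.6, 11.5, 13.9])

means = (method_a + method_b) / 2      # x-axis of the Bland-Altman plot
diffs = method_a - method_b            # y-axis: disagreement between the two methods
bias = diffs.mean()                    # systematic difference between the methods
half_width = 1.96 * diffs.std(ddof=1)  # half-width of the 95% limits of agreement

print(f"bias = {bias:+.3f}")
print(f"limits of agreement = [{bias - half_width:+.3f}, {bias + half_width:+.3f}]")
# The plot itself is simply a scatter of `diffs` against `means` with horizontal
# lines drawn at the bias and at the two limits of agreement.
```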
MedCalc
[ "Mathematics" ]
395
[ "Statistical software", "Mathematical software" ]
4,294,270
https://en.wikipedia.org/wiki/Niemeyer%E2%80%93Dolan%20technique
The Niemeyer–Dolan technique, also called the Dolan technique, angular evaporation, or the shadow evaporation technique, is a thin-film lithographic method to create nanometer-sized overlapping structures. This technique uses an evaporation mask that is suspended above the substrate (see figure). The evaporation mask can be formed from two or more layers of resist, to allow creation of the extreme undercut needed. Depending on the evaporation angle, the shadow image of the mask is projected onto different positions on the substrate. By carefully choosing the angle for each material to be deposited, adjacent openings in the mask can be projected onto the same spot, creating an overlay of two thin films with a well-defined geometry. Efforts to create multilayered structures are complicated by a need to align each layer with those below it; as all openings are on the same mask, shadow evaporation reduces this need by being self-aligning. Additionally, this allows the substrate to be kept under high vacuum, as there is no need to increase pressure to switch between multiple masks. Due to its downsides, including restrictions on feature density from excess evaporated material, shadow evaporation is generally only suitable for very low scale integration. Usage The Niemeyer–Dolan technique is used to create multi-layer thin-film electronic nanostructures such as quantum dots and tunnel junctions. References Nanoelectronics
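The overlap of the two shadow images is plain projection geometry: each image is displaced laterally by roughly the mask's suspension height times the tangent of the evaporation angle. The sketch below uses invented, order-of-magnitude dimensions to show the calculation; the geometry is idealized (point source, negligible opening width, no build-up of the first deposited film).

```python
import math

def shadow_shift(suspension_height_nm: float, angle_deg: float) -> float:
    """Lateral displacement of a mask opening's projected image for an
    evaporation tilted by angle_deg away from the substrate normal."""
    return suspension_height_nm * math.tan(math.radians(angle_deg))

height = 250.0        # gap between the suspended mask and the substrate, nm (assumed)
angle = 15.0          # evaporation angle from the normal, degrees (assumed)
bridge_width = 120.0  # separation between the two adjacent mask openings, nm (assumed)

shift = shadow_shift(height, angle)
# First layer evaporated at +angle, second at -angle: the images of the two
# adjacent openings move toward each other by 2*shift in total.
overlap = 2.0 * shift - bridge_width
print(f"shift per layer ≈ {shift:.0f} nm, overlap of the two films ≈ {overlap:.0f} nm")
```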
Niemeyer–Dolan technique
[ "Materials_science" ]
299
[ "Nanotechnology", "Nanoelectronics" ]
4,294,791
https://en.wikipedia.org/wiki/Animal%20model%20of%20stroke
Animal models of stroke are procedures undertaken in animals (including non-human primates) intended to provoke pathophysiological states similar to those of human stroke, in order to study basic processes or potential therapeutic interventions in this disease. The aim is to extend knowledge of human stroke and/or to improve its medical treatment. Classification by cause The term stroke subsumes cerebrovascular disorders of different etiologies, featuring diverse pathophysiological processes. Thus, for each stroke etiology one or more animal models have been developed: Animal models of ischemic stroke Animal models of intracerebral hemorrhage Animal models of subarachnoid hemorrhage and cerebral vasospasm Animal models of sinus vein thrombosis Transferability of animal results to human stroke Although multiple therapies have proven to be effective in animals, only very few have done so in human patients. Reasons for this are (Dirnagl 1999): Side effects: Many highly potent neuroprotective drugs display side effects which prevent the application of effective doses in patients (e.g. MK-801) Delay: Whereas in animal studies the time of onset is known and therapy can be started early, patients often present with delay and an unclear time of symptom onset “Age and associated illnesses: Most experimental studies are conducted on healthy, young animals under rigorously controlled laboratory conditions. However, the typical stroke patient is elderly with numerous risk factors and complicating diseases (for example, diabetes, hypertension and heart diseases)” (Dirnagl 1999) Morphological and functional differences between the brains of humans and animals: Although the basic mechanisms of stroke are identical between humans and other mammals, there are differences. Evaluation of efficacy: In animals, treatment effects are mostly measured as a reduction of lesion volume, whereas in human studies functional evaluation (which reflects the severity of disabilities) is commonly used. Thus, therapies might reduce the size of the cerebral lesion (as found in animals), but not the functional impairment when tested in patients. Ethical considerations Stroke models are carried out on animals, which inevitably suffer during the procedure. These burdens include, for example, social stress during single or group housing (depending on the species), transport, handling, food deprivation, pain after surgical procedures, and neurological disabilities. Thus, according to general consensus, these experiments require ethical justification. The following arguments can be made to justify animal experiments in stroke research: Stroke is very frequent in humans. Stroke is the third leading cause of death in the developed countries. Stroke is the leading cause of permanent disability in the developed countries. Yet there is no effective treatment available for the majority of stroke patients. Currently there are no in vitro methods that could satisfactorily simulate the complex interplay of vasculature, brain tissue, and blood during stroke, and thus could replace the greater part of animal experiments. During animal experimentation the following prerequisites have to be fulfilled to maintain the ethical justification (“the three Rs”): Reduction: Animal numbers have to be kept as low as possible (but as high as necessary to avoid underpowered studies) in order to draw scientific conclusions. 
Refinement: Experiments have to be carefully planned and conducted by trained personnel, both to minimize the suffering of the animals and to gain as much knowledge as possible from the animals used. Replacement: Whenever possible, animal experiments have to be replaced by other methods (e.g. cell culture, computer simulations, etc.). References Stroke
Animal model of stroke
[ "Biology" ]
740
[ "Model organisms", "Animal models" ]
4,294,893
https://en.wikipedia.org/wiki/Link%20%28simplicial%20complex%29
The link in a simplicial complex is a generalization of the neighborhood of a vertex in a graph. The link of a vertex encodes information about the local structure of the complex at the vertex. Link of a vertex Given an abstract simplicial complex X and a vertex v in X, its link Lk(v, X) is the set containing every face s of X such that v is not in s and s ∪ {v} is a face of X. In the special case in which X is a 1-dimensional complex (that is: a graph), Lk(v, X) contains all vertices u such that {u, v} is an edge in the graph; that is, the neighborhood of v in the graph. Given a geometric simplicial complex K and a vertex v, its link Lk(v, K) is the set containing every face s of K such that v is not a vertex of s and there is a simplex in K that has v as a vertex and s as a face. Equivalently, the join v ⋆ s is a face in K. As an example, suppose v is the top vertex of the tetrahedron at the left. Then the link of v is the triangle at the base of the tetrahedron. This is because, for each edge of that triangle, the join of v with the edge is a triangle (one of the three triangles at the sides of the tetrahedron); and the join of v with the triangle itself is the entire tetrahedron. An alternative definition is: the link of a vertex v is the graph Lk(v) constructed as follows. The vertices of Lk(v) are the edges of the complex incident to v. Two such edges are adjacent in Lk(v) iff they are incident to a common 2-cell at v. The graph Lk(v) is often given the topology of a ball of small radius centred at v; it is an analog to a sphere centered at a point. Link of a face The definition of a link can be extended from a single vertex to any face. Given an abstract simplicial complex X and any face s of X, its link Lk(s, X) is the set containing every face t of X such that t and s are disjoint and t ∪ s is a face of X: Lk(s, X) = { t ∈ X : t ∩ s = ∅ and t ∪ s ∈ X }. Given a geometric simplicial complex K and any face s, its link Lk(s, K) is the set containing every face t of K such that t and s are disjoint and there is a simplex in K that has both s and t as faces. Examples The link of a vertex of a tetrahedron is a triangle – the three vertices of the link correspond to the three edges incident to the vertex, and the three edges of the link correspond to the faces incident to the vertex. In this example, the link can be visualized by cutting off the vertex with a plane; formally, intersecting the tetrahedron with a plane near the vertex – the resulting cross-section is the link. Another example is illustrated below. There is a two-dimensional simplicial complex. At the left, a vertex is marked in yellow. At the right, the link of that vertex is marked in green. Properties For any simplicial complex X, every link Lk(s, X) is downward-closed, and therefore it is a simplicial complex too; it is a sub-complex of X. Because X is simplicial, there is a set isomorphism between Lk(s, X) and the set X_s := { u ∈ X : s ⊆ u } of faces that contain s: every t in Lk(s, X) corresponds to t ∪ s, which is in X_s. Link and star A concept closely related to the link is the star. Given an abstract simplicial complex X and any face s of X, its star St(s, X) is the set containing every face t of X such that s is a subset of t (that is, s is a face of t). In the special case in which X is a 1-dimensional complex (that is: a graph), St(v, X) contains all edges {v, u} for all vertices u that are neighbors of v. That is, it is a graph-theoretic star centered at v. Given a geometric simplicial complex K and any face s, its star St(s, K) is the set containing every face t of K such that there is a simplex in K having both s and t as faces. In other words, it is the closure of the set K_s -- the set of simplices having s as a face. So the link is a subset of the (closed) star. The star and link are related as follows: For any face s, the link Lk(s, X) consists of exactly those faces of the closed star of s that are disjoint from s. For any vertex v, the closed star of v is the join v ⋆ Lk(v, X), that is, the star of v is the cone of its link at v. An example is illustrated below. There is a two-dimensional simplicial complex. At the left, a vertex is marked in yellow. 
At the right, the star of that vertex is marked in green. See also Vertex figure - a geometric concept similar to the simplicial link. References Geometry
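For small abstract complexes the link and star definitions above translate directly into code. The sketch below represents a complex as the set of all its faces (as frozensets) and checks the tetrahedron example; the representation and function names are simply conventions chosen for this illustration.

```python
from itertools import combinations

def closure(maximal_simplices):
    """All faces (subsets, including the empty face) of the given simplices."""
    faces = set()
    for s in maximal_simplices:
        s = frozenset(s)
        for r in range(len(s) + 1):
            faces.update(frozenset(f) for f in combinations(s, r))
    return faces

def link(X, s):
    """Lk(s, X): faces t of X disjoint from s such that t ∪ s is also a face of X."""
    s = frozenset(s)
    return {t for t in X if not (t & s) and (t | s) in X}

def closed_star(X, s):
    """Closure of the set of faces of X that contain s."""
    s = frozenset(s)
    return closure(t for t in X if s <= t)

# Tetrahedron on vertices 0..3.  The link of vertex 0 is the opposite triangle
# {1, 2, 3} together with all of its faces (the empty face included).
X = closure([{0, 1, 2, 3}])
print(sorted(sorted(t) for t in link(X, {0})))

# The cone relation: the closed star of a vertex equals the join of the vertex with its link.
print(closed_star(X, {0}) == {t | frozenset({0}) for t in link(X, {0})} | link(X, {0}))
```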
Link (simplicial complex)
[ "Mathematics" ]
869
[ "Geometry" ]
4,295,117
https://en.wikipedia.org/wiki/Object-oriented%20modeling
Object-oriented modeling (OOM) is an approach to modeling an application that is used at the beginning of the software life cycle when using an object-oriented approach to software development. The software life cycle is typically divided up into stages going from abstract descriptions of the problem to designs then to code and testing and finally to deployment. Modeling is done at the beginning of the process. The reasons to model a system before writing the code are: Communication. Users typically cannot understand programming language or code. Model diagrams can be more understandable and can allow users to give developers feedback on the appropriate structure of the system. A key goal of the Object-Oriented approach is to decrease the "semantic gap" between the system and the real world by using terminology that is the same as the functions that users perform. Modeling is an essential tool to facilitate achieving this goal . Abstraction. A goal of most software methodologies is to first address "what" questions and then address "how" questions. I.e., first determine the functionality the system is to provide without consideration of implementation constraints and then consider how to take this abstract description and refine it into an implementable design and code given constraints such as technology and budget. Modeling enables this by allowing abstract descriptions of processes and objects that define their essential structure and behavior. Object-oriented modeling is typically done via use cases and abstract definitions of the most important objects. The most common language used to do object-oriented modeling is the Object Management Group's Unified Modeling Language (UML). See also Object-oriented analysis and design References Object-oriented programming Software design
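As a purely hypothetical illustration of "abstract definitions of the most important objects", the sketch below expresses a fragment of a domain model directly in code, using the vocabulary a user of a lending library would use. The domain, class names and use case are invented for this example and are not taken from any particular method or tool.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Classes named after the users' own terms, keeping the "semantic gap" small.
@dataclass
class Book:
    title: str

@dataclass
class Member:
    name: str
    loans: List["Loan"] = field(default_factory=list)

    def borrow(self, book: Book, due: date) -> "Loan":
        """The use case 'member borrows a book', expressed on the domain object."""
        loan = Loan(book=book, member=self, due=due)
        self.loans.append(loan)
        return loan

@dataclass
class Loan:
    book: Book
    member: Member
    due: date

loan = Member("Ada").borrow(Book("Refactoring"), due=date(2024, 7, 1))
print(loan.book.title, loan.due)
```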
Object-oriented modeling
[ "Engineering" ]
324
[ "Design", "Software design" ]
4,295,188
https://en.wikipedia.org/wiki/Courri%C3%A8res%20mine%20disaster
The Courrières mine disaster, Europe's worst mining accident, caused the death of 1,099 miners in Northern France on 10 March 1906. This disaster was surpassed only by the Benxihu Colliery accident in China on 26 April 1942, which killed 1,549 miners. A coaldust explosion, the cause of which is not known with certainty, devastated a coal mine operated by the Compagnie des mines de houille de Courrières. Victims lived nearby in the villages of Méricourt (404 people killed), Sallaumines (304 killed), Billy-Montigny (114 people killed), and Noyelles-sous-Lens (102 people killed). The mine was 2 km (1 mi) to the east of Lens, in the Pas-de-Calais département (about 220 km, or 140 miles, north of Paris). A large explosion was heard shortly after 06:30 on the morning of Saturday 10 March 1906. An elevator cage at Shaft 3 was thrown to the surface, damaging the pit-head; windows and roofs were blown out on the surface at Shaft 4; an elevator cage raised at Shaft 2 contained only dead or unconscious miners. Initial cause It is generally agreed that the majority of the deaths and destruction were caused by an explosion of coal dust which swept through the mine. However it has never been ascertained what caused the initial ignition of the coal dust. Two main causes have been hypothesized: An accident during the handling of mining explosives. Ignition of methane by the naked flame of a miner's lamp. There is evidence favoring both these hypotheses. Blasting was being done in the area believed to be the source of the explosion, after initial attempts to widen a gallery had been abandoned the previous day for lack of success. Many workers in the mine used lamps with naked flames (as opposed to the more expensive Davy lamps), despite the risk of gas explosions. As Monsieur Delafond, General Inspector of Mines, put it in his report: Rescue attempts Rescue attempts began quickly on the morning of the disaster, but were hampered by the lack of trained mine rescuers in France at that time, and by the scale of the disaster: some two-thirds of the miners in the mine at the time of the explosion perished, while many survivors suffered from the effects of gas inhalation. Expert teams from Paris and from Germany arrived at the scene on 12 March. The first funerals occurred on 13 March, during an unseasonal snowstorm; 15,000 people attended. The funerals were a focus for the anger of the mining communities against the companies which owned the concessions, and the first strikes started the next day in the Courrières area, extending quickly to other areas in the départements of the Pas-de-Calais and the Nord. The slow progress of the rescue exacerbated the tensions between the mining communities and the companies. By 1 April only 194 bodies had been brought to the surface. There were many accusations that the Compagnie des mines de Courrières was deliberately delaying the reopening of blocked shafts to prevent coalface fires (and hence to save the coal seams): more recent studies tend to consider such claims as exaggerated. The mine was unusually complex for its time, with the different pitheads being interconnected by underground galleries on multiple levels. Such complexity was intended to facilitate access for rescuers in the case of an accident—it also helped the coal to be brought to the surface—but it contributed to the large loss of life by allowing the dust explosion to travel further and then by increasing the debris which had to be cleared by the rescuers. 
About 110 km (70 mi) of tunnel are believed to have been affected by the explosion. Gérard Dumont of the Centre historique minier de Lewarde has shown that the plans of the mine existing at the time of the accident were difficult to interpret: some measured the depth of galleries by reference to the minehead, others by reference to sea level. Survivors About 500 miners were able to reach the surface in the hours immediately after the explosion. Many were severely burned and suffering the effects of mine gases. A group of 13 survivors, known later as the rescapés, was found by rescuers on 30 March, 20 days after the explosion. They had survived at first by eating bark from the crossbeams, later by eating a rotting mine horse. They avoided dehydration by drinking the water dripping from the walls. The two eldest (39 and 40 years old) were awarded the Légion d'honneur, the other eleven (including three younger than 18 years of age) received the Médaille d'or du courage. A final survivor was found on 4 April. Public response The disaster at the Courrières mine was one of the first in France to be reported on a large scale by the media of the day. The Law on the Freedom of the Press of 29 July 1881 had specified the basis for a (relative) freedom of the press, and Lille, the regional capital less than 40 km (25 mi) away, had at least five daily newspapers whose reporters engaged in a fierce competition for news from the mine. Photographs could not then be published in newspapers for technical reasons, but were widely distributed as postcards; on average, each French resident sent fifteen postcards during 1906. A postcard of the thirteen rescapés was available nine days after their discovery. The first public appeal for funds to help the victims and their families was established the day after the explosion by Le Réveil du Nord, a Lille daily newspaper. In the newspaper L'Humanité of the next day, socialist and pacifist politician Jean Jaurès wrote: It is a call for social justice that comes to the nation's representatives from the depths of the burning mines. It is the harsh and suffering destiny of work that, once more, manifests itself to all. And would political action be something else than the sad game of ambitions and vanities if it didn't propose to itself the liberation of the workers' people, the organisation of a better life for those who work? Such appeals became widespread, and were supplemented by the sale of special collections of postcards depicting the disaster. The different appeals were eventually subsumed by an official fund—itself established by a law enacted only four days after the explosion—and a total of 750,000 francs was raised. This at a time when the daily wage for a miner (a well-paid job compared to other manual work) was less than six francs. Over half the total was contributed by the Compagnie des mines de houille de Courrières and by the Comité central des houillières de France (Central Committee of French Coal Mines, an employers' association). On 18 March, a strike was publicized and quickly extended itself to the entire region. Minister of Interior Georges Clemenceau visited the region twice, but "no promises were kept", according to L'Humanité. Clemenceau's first visit was filled with optimism and ex-president Jean Casimir-Perier stated that "I have the strongest hope that our discussion... will lead to an understanding which is desirable for all." 
However, the following day the strikers rejected the concessions offered by the mining companies and the number of strikers reached 46,000. See also Kameradschaft, a 1931 dramatic film by G. W. Pabst, based on the disaster Fraternity, a 2016 brass band contest piece by Thierry Deleruyelle, based on the disaster Mining in France Polish immigration to the Nord-Pas-de-Calais coalfield References Further reading Vouters, Bruno (2006). Courrières 10 mars 1906 : la terrible catastrophe. Lille: Editions La Voix du Nord. 48 pp. External links Centre historique minier de Lewarde (each day during 2006, a new article about the March 10, 1906 accident) "Commemorating France's Worst Mining Tragedy: 1099 Workers Perished to Profit the Bosses", article from L'Humanité. Translated from "Ils étaient 1099, morts pour le profit", published on March 11, 2006. Labor disputes in France Gas explosions in France 1906 mining disasters History of the Pas-de-Calais Coal mining disasters in France March 1906 events in Europe Explosions in 1906 1906 disasters in France Dust explosions 1906 fires 1900s fires in Europe 1906 labor disputes and strikes
Courrières mine disaster
[ "Chemistry" ]
1,744
[ "Dust explosions", "Explosions" ]
4,295,240
https://en.wikipedia.org/wiki/2-Phenylphenol
2-Phenylphenol, or o-phenylphenol, is an organic compound. In terms of structure, it is one of the monohydroxylated isomers of biphenyl. It is a white solid. It is a biocide used as a preservative with E number E231 and under the trade names Dowicide, Torsite, Fungal, Preventol, Nipacide and many others. Uses The primary use of 2-phenylphenol is as an agricultural fungicide. It is generally applied post-harvest. It is a fungicide used for waxing citrus fruits. It is no longer a permitted food additive in the European Union, but is still allowed as a post-harvest treatment in 4 EU countries. It is also used for disinfection of seed boxes. It is a general surface disinfectant, used in households, hospitals, nursing homes, farms, laundries, barber shops, and food processing plants. It can be used on fibers and other materials. It is used to disinfect hospital and veterinary equipment. Other uses are in rubber industry and as a laboratory reagent. It is also used in the manufacture of other fungicides, dye stuffs, resins and rubber chemicals. 2-Phenylphenol is a precursor to 9,10-dihydro-9-oxa-10-phosphaphenanthrene-10-oxide, a commercial fire retardant. The sodium salt of orthophenyl phenol, sodium orthophenyl phenol, is a preservative, used to treat the surface of citrus fruits. Orthophenyl phenol is also used as a fungicide in food packaging and may migrate into the contents. Preparation It is prepared by condensation of cyclohexanone to give cyclohexenylcyclohexanone. The latter undergoes dehydrogenation to give 2-phenylphenol. Safety LD50 (rats) is 2700 to 3000 mg/kg. References External links List of brand name products which contain 2-phenylphenol National Center for Biotechnology Information 2-Phenylphenol - Substance Summary 2-Hydroxyphenyl compounds Household chemicals Fungicides Antiseptics Fumigants Preservatives Biphenyls
2-Phenylphenol
[ "Biology" ]
509
[ "Fungicides", "Biocides" ]
4,295,286
https://en.wikipedia.org/wiki/Ghon%20focus
A Ghon focus is a primary lesion usually subpleural, often in the mid to lower zones, caused by Mycobacterium bacilli (tuberculosis) developed in the lung of a nonimmune host (usually a child). It is named for Anton Ghon (1866–1936), an Austrian pathologist. It is a small area of granulomatous inflammation, only detectable by chest X-ray if it calcifies or grows substantially (see tuberculosis radiology). Typically these will heal, but in some cases, especially in immunosuppressed patients, it will progress to miliary tuberculosis (so named due to the granulomas resembling millet seeds on a chest X-ray). The classical location for primary infection is surrounding the lobar fissures, either in the upper part of the lower lobe or lower part of the upper lobe. If the Ghon focus also involves infection of adjacent lymphatics and hilar lymph nodes, it is known as the Ghon's complex or primary complex. When a Ghon's complex undergoes fibrosis and calcification it is called a Ranke complex. References Tuberculosis Histopathology
Ghon focus
[ "Chemistry" ]
251
[ "Histopathology", "Microscopy" ]
4,295,428
https://en.wikipedia.org/wiki/Comparison%20of%20stylesheet%20languages
In computing, the two primary stylesheet languages are Cascading Style Sheets (CSS) and the Extensible Stylesheet Language (XSL). While they are both called stylesheet languages, they have very different purposes and ways of going about their tasks. Cascading Style Sheets CSS is designed for styling documents structured in a markup language, such as HTML and XML (including XHTML and SVG) documents; it was created for that purpose. CSS uses a non-XML syntax to define the style information for the various elements of the document that it styles. A language to structure the document (a markup language) is a prerequisite for CSS. A markup language, like HTML and, to a lesser extent, XUL, may itself define some primitive presentational elements, for example an element such as <emphasis> to render text in bold. CSS styles a document for "screen media" or "paged media". Screen media are displayed as a single page (possibly with hyperlinks) that has a fixed horizontal width but a virtually unlimited vertical height. Scrolling is often the method of choice for viewing parts of screen media. This is in contrast to "paged media", which has multiple pages, each with specific fixed horizontal and vertical dimensions. Styling paged media involves a variety of complexities that screen media does not have. Since CSS was originally designed for screen media, its facilities for paged media were lacking. CSS version 3.0 provides new features that allow CSS to more adequately style documents for paged display. Extensible Stylesheet Language XSL has evolved drastically from its initial design into something very different from its original purpose. The original idea for XSL was to create an XML-based styling language directed toward paged display media. The mechanism its designers used to accomplish this task was to divide the process into two distinct steps. First, the XML document would be transformed into an intermediate form. The process for performing this transformation would be governed by the XSL stylesheet, as defined by the XSL specification. The result of this transformation would be an XML document in an intermediate language, known as XSL-FO (also defined by the XSL specification). However, in the process of designing the transformation step, it was realized that a generic XML transformation language would be useful for more than merely creating a presentation of an XML document. As such, a new working group was split off from the XSL working group, and the XSL Transformations (XSLT) language became something that was considered separate from the styling information of the XSL-FO document. Even that split was expanded when XPath became its own separate specification, though still strongly tied to XSLT. The combination of XSLT and XSL-FO creates a powerful styling language, though much more complex than CSS. XSLT is a Turing complete language, while CSS is not; this demonstrates a degree of power and flexibility not found in CSS. Additionally, XSLT is capable of creating content, such as automatically creating a table of contents just from chapters in a book, or removing/selecting content, such as generating only a glossary from a book. XSLT version 1.0 with the EXSLT extensions, or XSLT version 2.0, is capable of generating multiple documents as well, such as dividing the chapters in a book into their own individual pages. By contrast, CSS can only selectively remove content by not displaying it. XSL-FO is unlike CSS in that the XSL-FO document stands alone. 
CSS modifies a document that is attached to it, while the XSL-FO document (usually the result of the transformation by XSLT of the original document) contains all of the content to be presented in a purely presentational format. It has a wide range of specification options with regard to paged formatting and higher-quality typesetting. But it does not specify the pages themselves. The XSL-FO document must be passed through an XSL-FO processor utility that generates the final paged media, much like HTML+CSS must pass through a web browser to be displayed in its formatted state. The complexity of XSL-FO is a problem, largely because implementing an FO processor is very difficult. CSS implementations in web browsers are still not entirely compatible with one another, and it is much simpler to write a CSS processor than an FO processor. However, for richly specified paged media, such complexity is ultimately required in order to be able to solve various typesetting problems. See also Extensible Stylesheet Language XSL Transformations XSL Formatting Objects XPath External links Why is the W3 producing 2 Style Sheet Languages? W article Using CSS and XSL together W3 article Printing XML Why CSS is better than XSL. Article on SLT for transformations and CSS for web presentation MS XML Team Blog CSS vs. XSL Stylesheet languages
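As a rough illustration of the two-step XSL pipeline described above, and of XSLT's ability to generate new content such as a table of contents, here is a minimal sketch using Python's lxml binding. The input document and stylesheet are invented for this example, and a real pipeline would emit XSL-FO and hand it to an FO processor rather than printing the intermediate tree.

```python
from lxml import etree

# A tiny source document: a book with two chapters.
xml_doc = etree.XML("<book><chapter title='One'/><chapter title='Two'/></book>")

# An XSLT stylesheet that *creates* content (a table of contents) from the chapters,
# something a CSS stylesheet attached to the same document could not do.
xslt_doc = etree.XML("""\
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/book">
    <toc>
      <xsl:for-each select="chapter">
        <entry><xsl:value-of select="@title"/></entry>
      </xsl:for-each>
    </toc>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt_doc)   # step 1: build the transformation
result = transform(xml_doc)        # step 2: produce the intermediate result tree
print(etree.tostring(result, pretty_print=True).decode())
```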
Comparison of stylesheet languages
[ "Technology" ]
1,034
[ "Computing comparisons" ]
4,295,487
https://en.wikipedia.org/wiki/One-electron%20universe
The one-electron universe postulate, proposed by theoretical physicist John Wheeler in a telephone call to Richard Feynman in the spring of 1940, is the hypothesis that all electrons and positrons are actually manifestations of a single entity moving backwards and forwards in time. According to Feynman: A similar "zigzag world line description of pair annihilation" was independently devised by E. C. G. Stueckelberg at the same time. Overview The idea is based on the world lines traced out across spacetime by every electron. Rather than have myriad such lines, Wheeler suggested that they could all be parts of one single line like a huge tangled knot, traced out by the one electron. Any given moment in time is represented by a slice across spacetime, and would meet the knotted line a great many times. Each such meeting point represents a real electron at that moment. At those points, half the lines will be directed forward in time and half will have looped round and be directed backwards. Wheeler suggested that these backwards sections appeared as the antiparticle to the electron, the positron. Many more electrons have been observed than positrons, and electrons are thought to comfortably outnumber them. According to Feynman he raised this issue with Wheeler, who speculated that the missing positrons might be hidden within protons. Feynman was struck by Wheeler's insight that antiparticles could be represented by reversed world lines, and credits this to Wheeler, saying in his Nobel speech: Feynman later proposed this interpretation of the positron as an electron moving backward in time in his 1949 paper "The Theory of Positrons". Yoichiro Nambu later applied it to all production and annihilation of particle-antiparticle pairs, stating that "the eventual creation and annihilation of pairs that may occur now and then, is no creation nor annihilation, but only a change of directions of moving particles, from past to future, or from future to past." See also Eddington number Identical particles Retrocausality T-symmetry References External links Thought experiments in quantum mechanics Quantum electrodynamics 1940 in science Physical cosmology Conceptual models Richard Feynman Electron
One-electron universe
[ "Physics", "Chemistry", "Astronomy" ]
461
[ "Electron", "Astronomical sub-disciplines", "Molecular physics", "Theoretical physics", "Quantum mechanics", "Astrophysics", "Thought experiments in quantum mechanics", "Physical cosmology" ]
4,295,613
https://en.wikipedia.org/wiki/Charles%20Read%20%28mathematician%29
Charles John Read (16 February 1958 – 14 August 2015) was a British mathematician known for his work in functional analysis. In operator theory, he is best known for his work in the 1980s on the invariant subspace problem, where he constructed operators with only trivial invariant subspaces on particular Banach spaces, especially on the sequence space ℓ1. He won the 1985 Junior Berwick Prize for his work on the invariant subspace problem. Read also published on Banach algebras and hypercyclicity; in particular, he constructed the first example of an amenable, commutative, radical Banach algebra. Education and career Read won a scholarship to study mathematics at Trinity College, Cambridge in October 1975, and was awarded a first-class degree in Mathematics in 1978. He completed his PhD thesis, entitled Some Problems in the Geometry of Banach Spaces, at the University of Cambridge under the supervision of Béla Bollobás. He spent the year 1981–82 at Louisiana State University. From 2000 until his death, he was a Professor of Pure Mathematics at the University of Leeds, after having been a fellow of Trinity College for several years. Personal life Christianity On his personal website, formerly hosted on a server at the University of Leeds, Read described himself first and foremost as a Born-Again Christian. Some biographical details could be found in what he described as his "Christian Testimony" on that site, where he described his conversion process. He described losing his father to cancer in 1970, when he was 11 years old, and said that this loss prompted him to ask questions about whether, and in what form, we might continue to live after we die, and whether consciousness may be independent of the body. He came to the conclusion that the conscious mind must survive after death. This also led him to believe that, since we are "immortal beings", we must always try to "do the right thing". The testimony went on to describe a later incident in which he had pushed a smaller boy out of the way in a queue at a sweet shop. He interpreted his subsequent sense of remorse at having done something wrong as the "Classical Christian conviction of sin", and claimed to have had a religious experience on a London Underground train where he felt a sense of joy at being forgiven and simultaneously burst into tears. Read also claimed to have taken part in a miracle of Christian healing at a Christian meeting run by John Wimber, organiser of the Vineyard Movement. Controversy over Read's Christian Testimony An article in The Gryphon, the Leeds University Student Union newspaper, in February 2015 stated that Read had "sparked controversy" by stating at the end of his testimony that "I strongly urge you to seek the truth as a researcher, not trusting anyone else to do your basic investigations for you. That’s right, Jesus is the Way. But you have to find that out for yourself. For those who seek find, but those who can’t be bothered, or who think they’re too cool, end in a very dark place. It won't be cool in Hell." The article was prompted by a third-year Maths student who had expressed the opinion that "I don’t think his university webpage should be showing his personal opinions about faith and religion as it doesn’t have anything to do with someone’s ability to learn maths". Read subsequently displayed on his website a scanned image of the original article, under which a handwritten comment notes that "Leeds Gryphon, 13-2-15 notices Christian Testimony of CJR after several years!" 
Cave diving Read was also a devotee of solo cave diving and wrote extensively about it on his website. Death Read died in Winnipeg in August 2015 while on a research visit at the University of Manitoba. References External links Charles Read's Homepage Functional analysts Operator theorists 1958 births 2015 deaths British Christians 20th-century British mathematicians 21st-century British mathematicians Mathematical analysts Academics of the University of Leeds Alumni of the University of Cambridge
Charles Read (mathematician)
[ "Mathematics" ]
802
[ "Mathematical analysis", "Mathematical analysts" ]
4,295,779
https://en.wikipedia.org/wiki/Perillartine
Perillartine, also known as perillartin and perilla sugar, is a semisynthetic sweetener that is about 2000 times as sweet as sucrose. It is mainly used in Japan. Perillartine is the oxime of perillaldehyde, which is extracted from plants of the genus Perilla (Lamiaceae). See also Sweetener Oxime Perilla Shiso Oxime V References External links Aldoximes Sugar substitutes Monoterpenes Cyclohexenes
Perillartine
[ "Chemistry" ]
106
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
4,295,839
https://en.wikipedia.org/wiki/COGO
COGO is a suite of programs used in civil engineering for modelling horizontal and vertical alignments and solving coordinate geometry problems. Cogo alignments are used as controls for the geometric design of roads, railways, and stream relocations or restorations. COGO was originally a subsystem of MIT's Integrated Civil Engineering System (ICES), developed in the 1960s. Other ICES subsystems included STRUDL, BRIDGE, LEASE, PROJECT, ROADS and TRANSET, and the internal languages ICETRAN and CDL. Evolved versions of COGO are still widely used. Some basic types of elements of COGO are points, Euler spirals, lines and horizontal curves (circular arcs). More complex elements can be developed such as alignments or chains which are made up of a combination of points, curves or spirals. See also Civil engineering software References "Engineer's Guide to ICES COGO I", R67-46, Civil Engineering Dept MIT (Aug 1967) "An Integrated Computer System for Engineering Problem Solving", D. Roos, Proc SJCC 27(2), AFIPS (Spring 1965). Sammet 1969, pp.615-620. Mathematical software Surveying History of software
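Two of the most basic coordinate-geometry problems such a system solves are the "forward" computation (locate a point from a bearing and distance) and the "inverse" computation (bearing and distance between two known points). The sketch below shows them in Python using the surveying convention of northing/easting coordinates and azimuths measured clockwise from north; the coordinate values are invented, and the code is only an illustration, not part of any COGO implementation.

```python
import math

def forward(northing, easting, azimuth_deg, distance):
    """Forward (traverse) computation: coordinates of a new point given an
    azimuth (degrees clockwise from grid north) and a distance."""
    az = math.radians(azimuth_deg)
    return (northing + distance * math.cos(az),
            easting + distance * math.sin(az))

def inverse(n1, e1, n2, e2):
    """Inverse computation: azimuth (degrees clockwise from north) and
    distance from point 1 to point 2."""
    dn, de = n2 - n1, e2 - e1
    azimuth = math.degrees(math.atan2(de, dn)) % 360.0
    return azimuth, math.hypot(dn, de)

p1 = (1000.0, 5000.0)                                # assumed starting point (N, E)
p2 = forward(*p1, azimuth_deg=45.0, distance=141.42) # traverse to the north-east
print(p2)                                            # approximately (1100.0, 5100.0)
print(inverse(*p1, *p2))                             # recovers roughly (45.0, 141.42)
```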
COGO
[ "Mathematics", "Technology", "Engineering" ]
255
[ "Surveying", "Civil engineering", "History of software", "Civil engineering stubs", "History of computing", "Mathematical software" ]
4,296,490
https://en.wikipedia.org/wiki/Completeness%20%28cryptography%29
In cryptography, a boolean function is said to be complete if the value of each output bit depends on all input bits. This is a desirable property to have in an encryption cipher, so that if one bit of the input (plaintext) is changed, every bit of the output (ciphertext) has an average of 50% probability of changing. The easiest way to show why this is good is the following: consider that if we changed our 8-byte plaintext's last byte, it would only have an effect on the 8th byte of the ciphertext. This would mean that if the attacker obtained 256 different plaintext-ciphertext pairs, he would always know the last byte of every 8-byte sequence we send (effectively 12.5% of all our data). Obtaining 256 plaintext-ciphertext pairs is not hard at all in the internet world, given that standard protocols are used, and standard protocols have standard headers and commands (e.g. "get", "put", "mail from:", etc.) which the attacker can safely guess. On the other hand, if our cipher has this property (and is generally secure in other ways, too), the attacker would need to collect on the order of 2^64 (about 1.8 × 10^19) plaintext-ciphertext pairs to crack the cipher in this way. See also Correlation immunity References Cryptography
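A small empirical test makes the property concrete: for every pair (input bit i, output bit j), flip bit i of many random inputs and record how often output bit j changes. The function is complete if every pair shows at least one change, and ideally each frequency is near 50%. The toy 8-bit functions below are invented for the demonstration and are not real ciphers.

```python
import random

def dependence_matrix(f, nbits, trials=2000):
    """probs[i][j]: fraction of random inputs for which flipping input bit i
    changes output bit j of f."""
    counts = [[0] * nbits for _ in range(nbits)]
    for _ in range(trials):
        x = random.getrandbits(nbits)
        y = f(x)
        for i in range(nbits):
            diff = y ^ f(x ^ (1 << i))
            for j in range(nbits):
                counts[i][j] += (diff >> j) & 1
    return [[c / trials for c in row] for row in counts]

def is_complete(probs):
    """Every output bit changed for at least one tested flip of every input bit."""
    return all(p > 0 for row in probs for p in row)

def rotl8(x, r):
    return ((x << r) | (x >> (8 - r))) & 0xFF

weak = lambda x: x ^ 0b10110010   # output bit j depends only on input bit j

def mixed(x):                     # a few multiply/rotate/xor rounds to spread dependence
    for r in (1, 3, 5):
        x = (x * 197 + 0x63) & 0xFF
        x ^= rotl8(x, r)
    return x

probs_weak = dependence_matrix(weak, 8)
probs_mixed = dependence_matrix(mixed, 8)
print(is_complete(probs_weak), min(p for row in probs_weak for p in row))    # False, 0.0
print(is_complete(probs_mixed), min(p for row in probs_mixed for p in row))  # the mixing rounds should
# push every entry above zero (and ideally toward 0.5), but run it to check for yourself.
```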
Completeness (cryptography)
[ "Mathematics", "Engineering" ]
282
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
3,151,313
https://en.wikipedia.org/wiki/LocationFree%20Player
Sony's LocationFree is the marketing name for a group of products and technologies for timeshifting and placeshifting streaming video. The LocationFree Player is an Internet-based multifunctional device used to stream live television broadcasts (including digital cable and satellite), DVDs and DVR content over a home network or the Internet. It is in essence a remote video streaming server product (similar to the Slingbox). It was first announced by Sony in Q1 2004 and launched early in Q4 2004 alongside a co-branded wireless tablet TV. The last LocationFree product was the LF-V30 released in 2007. The LocationFree base station connects to a home network via a wired Ethernet cable, or for newer models, via a wireless connection. Up to three attached video sources can stream content through the network to local content provision devices or across the internet to remote devices. A remote user can connect to the internet at a wireless hotspot or any other internet connection anywhere in the world and receive streamed content. Content may only be streamed to one computer at a time. In addition, the original LocationFree Player software contained a license for only one client computer. Additional (paid) licenses were required to connect to the base station for up to a total of four clients. On November 29, 2007 Sony modified its LocationFree Player policy to provide free access to the latest LocationFree Player LFA-PC30 software for Windows XP/Vista. In addition, the software no longer requires a unique serial number in order to pair it with a LocationFree base station. In December, 2007 Sony Dropped the $30 license fee for the LocationFree client. However, the software still requires registration to Sony's servers after 30 days. Clients The player (server) can stream content to the following (client) devices: Windows or Mac computer - requires additional software Mobile/cellular phones - coming later in 2007 Pocket PCs running Windows Mobile Smartphones/tablets running Android 2.2+ Televisions - requires a Sony adapter Sony (Client) Products: Sony wireless Tablets (listed below) PlayStation Portable (PSP) - system software version 2.50 or later (version 3.11 or later recommended due to inclusion of AVC support) Sony Ericsson P990i - European Base Stations only Sony VAIO laptops - starting Summer 2007, these laptops included an LF Vaio branded compatible client These products do not act as DVRs, since they do not allow content to be recorded to a hard drive. A user can also access and control from anywhere in the world any device connected to the unit, and switch between multiple devices. BASE Station Models Base stations packaged with LocationFree Player installation disc and instructional DVD LF-PK1 ("PK" stands for "Pack" or "Package" as it is a package of the LF-B1 base station, LFA-PC2 LocationFree player software for the PC and instructional DVD) First standalone model sold without a tablet Only model in the North American market to ever come equipped with an RF coaxial input. However the European model did not have an RF coaxial input. Wireless 11a/b/g. Can also be used as a conventional Wi-fi access point if connected to a wired router via Ethernet. Three firmware versions released. Version 1.000 was the original release version for Japan and North America. Version 2.000 added support for the Sony PSP (with PSP firmware version 2.50 or higher). An update was made available for Japanese and North American owners. 
Version 3.000 was the original release version for the European version of the LF-PK1 meaning all European models shipped with this latest firmware version. It increased the maximum number of clients that can be registered from 4 to 8. It also enhanced the way settings were changed through the web interface. Previously, whenever a setting was changed, the base station would have to be rebooted. With the 3.000, setting changes no longer created a reboot. An update program was offered for Japanese owners of the LF-PK1, however no update to firmware 3.000 has ever been made available for the North American model, nor can the Japanese 3.000 update be installed on the North American model. LF-B10 Bundled with LFA-PC20 LocationFree player for the PC and instructional DVD Wired 10/100 Two Infrared Ports LF-B20 Bundled with LFA-PC20 LocationFree player for the PC and instructional DVD Wireless 11a/b/g. Can also be used as a conventional Wi-fi access point if connected to a wired router via Ethernet. Two Infrared Ports LF-V30 Bundled with LFA-PC30 LocationFree player for the PC and instructional DVD Wireless 11a/b/g. Can also be used as a conventional Wi-fi access point if connected to a wired router via Ethernet. Component Support Possibly known as LF-W1HD in Japan Notes: - Wired models can be used via normal wireless routers. Access via internet via firewall provided necessary ports are opened. Can be used with DDNS. Client box - enables users to watch streamed content on a television set, without the need for a PC or laptop. LF-Box 1 LocationFree wireless tablet TV In October 2004 Sony unveiled a portable, wireless and rechargeable SVGA 12.1" LCD tablet screen with dualband Wi-Fi technology (IEEE 802.11a/b/g) which can receive pictures from the LocationFree player up to 100 feet from the source signal. The TV also has web-browsing and email functions, a Memory Stick Duo slot and an on-screen hand-drawing function for use as a drawing tablet. The screen can also be used as an intelligent universal AV remote control. These tablets were bundled with Base Stations. Three versions have been released: LF-X1 Original 12" Model, Aspect ratio 4:3 (LF-X1M is monitor only) Besides included tablet, base station ONLY compatible with LFA-PC1 LocationFree player for the PC, sold separately. LFA-PC2 or later, as well as all other software players and the PSP are NOT compatible with this base station. LF-X5 7" Model, Aspect ratio 16:9 Besides included tablet, base station ONLY compatible with LFA-PC1 LocationFree player for the PC, sold separately. LFA-PC2 or later, as well as all other software players and the PSP are NOT compatible with this base station. LF-X11 Bundled with same LF-B1 base station as LF-PK1. This means the base station also can be paired with other devices just like the LF-PK1 such as a PSP. However the LF-X11 tablet cannot be paired with another LF-B1 or other LocationFree base station, it is permanently bonded with the included LF-B1 base station. Bundled with LFA-PC2 LocationFree player for the PC Wireless 11a/b/g. Base station can also be used as a conventional Wi-fi access point if connected to a wired router via Ethernet. Please read LF-PK1 description above for more information and details about the LF-B1 base station and its different firmware versions, as they are the same base station. Software LFA-PC1 LocationFree Player Software for Windows (Only compatible with LF-X1 and LF-X5 base stations. 
No other software is compatible with these base stations, including all later versions of LocationFree Player for Windows, and players for Macintosh and all other platforms) LFA-PC2 LocationFree Player Software for Windows (Only compatible with LF-PK1 and LF-X11) LFA-PC20 LocationFree Player Software for Windows (Compatible with LF-PK1, LF-X11, LF-B10 and LF-B20) LFA-PC30 LocationFree Player Software for Windows (Latest Windows Version & available as a one week trial. Compatible with all LocationFree base stations except LFA-PC1 specific base stations. Version 4.0.3.53 maintains Windows XP & Vista support. NOTE: This software is NOT YET AVAILABLE for UK LocationFree boxes - the software does region checks on the base station and refuses to install.) TLF-MAC/J is a retail software package for Mac OS X (Only compatible with Japanese base stations) TLF-MAC/E is a retail software package for Mac OS X (Only compatible with North American base stations) Miglia software package for Mac OS X (Only compatible with European base stations) NetFront LocationFree Player for Pocket PC (See link below) ThereTV free Android client for the LocationFree Protocol (See link below) See also Slingbox HDHomeRun Monsoon HAVA Dreambox DBox2 Unibox References External links LocationFree Player and TV at Sony.com Mac Version of Software 3rd party? PocketPC aka Windows mobile software New base station hardware unveiled at Engadget.com LocationFree LF-B20 Review at SpicyGadget.com Time Magazine Bakeoff of Sling box and LocationFree Rumor of Licence fee drop on Gizmodo Review of LF-X5 7" Screen CNET review of V30 model. PC World review of V30 model Television technology Television placeshifting technology Consumer electronics Sony products
LocationFree Player
[ "Technology" ]
1,950
[ "Information and communications technology", "Television technology" ]
3,151,353
https://en.wikipedia.org/wiki/Oppau%20explosion
The Oppau explosion occurred on September 21, 1921, when approximately 4,500 tonnes of a mixture of ammonium sulfate and ammonium nitrate fertilizer stored in a tower silo exploded at a BASF plant in Oppau, now part of Ludwigshafen, Germany, killing 500–600 people and injuring about 2,000 more. Background The plant began producing ammonium sulfate in 1911, but during World War I when Germany was unable to obtain the necessary sulfur, it began to produce ammonium nitrate as well. Ammonia could be produced without overseas resources, using the Haber process, and the plant was the first of its kind to do so in the world. Compared to ammonium sulfate, ammonium nitrate is strongly hygroscopic, thus the mixture of ammonium sulfate and nitrate compresses under its own weight, turning it into a plaster-like substance in the silo. The workers needed to use pickaxes to get it out, a problematic situation because they could not enter the silo and risk being buried in collapsing fertilizer. To ease their work, small charges of dynamite were used to loosen the mixture. This highly dangerous procedure was in fact common practice. It was well known that ammonium nitrate was explosive, having been used extensively for this purpose during World War I, but tests conducted in 1919 had suggested that mixtures of ammonium sulfate and nitrate containing less than 60% nitrate would not explode. On these grounds, the material handled by the plant, nominally a 50/50 mixture, was considered stable enough to be stored in 50,000-tonne lots, more than ten times the amount involved in the disaster. Indeed, nothing extraordinary had happened during an estimated 20,000 firings, until the fateful explosion on September 21. As all involved died in the explosion, the causes are not clear. However, according to modern sources and contrary to the above-mentioned 1919 tests, the "less than 60% nitrate = safe" criterion is inaccurate; in mixtures containing 50% nitrate, any explosion of the mixture is confined to a small volume around the initiating charge, but increasing the proportion of nitrate to 55–60% greatly increases the explosive properties and creates a mixture whose detonation is sufficiently powerful to initiate detonation in a surrounding mixture of a lower nitrate concentration which would normally be considered minimally explosive. Changes in humidity, density, particle size in the mixture and homogeneity of crystal structure also affect the explosive properties. A few months before the incident, the manufacturing process had been changed in such a way as to lower the humidity level of the mixture from 3–4% to 2%, and also to lower the apparent density. Both these factors rendered the substance more likely to explode. There is also evidence that the lot in question was not of uniform composition and contained pockets of up to several dozen metric tons of mixture enriched in ammonium nitrate. It has therefore been proposed that one of the charges had been placed in or near such a pocket, exploding with sufficient violence to set off some of the surrounding lower-nitrate mixture. Two months earlier, at Kriewald, then part of Germany, 19 people had died when 30 metric tons of ammonium nitrate detonated under similar circumstances. It is not clear why this warning was not heeded. Scale of the explosion Two explosions, half a second apart, occurred at 7:32 am on September 21, 1921, at Silo 110 of the plant, forming a crater wide and deep. 
In these explosions 10% of the 4,500 t of fertilizer stored in the silo detonated. The explosions were heard as two loud bangs in north-eastern France and in Munich, more than 300 km away, and are estimated to have contained an energy of 1–2 kilotonnes TNT equivalent. The damage to property was valued in 1922 at 321 million marks, estimated by The New York Times at the time to be equivalent to 1.7 million US dollars (since Germany suffered heavy hyperinflation in 1919–1924, given amounts and exchange rates were not very descriptive). About 80 percent of all buildings in Oppau were destroyed, leaving 6,500 homeless. The pressure wave caused great damage in Mannheim, located just across the Rhine, ripped roofs off up to 25 km away, and destroyed windows farther away, including all the medieval stained-glass windows of Worms cathedral, to the north. In Heidelberg, traffic was stopped by the mass of broken glass on the streets, a tram was derailed, and some roofs were destroyed. Five hundred bodies were recovered within the first 48 hours, with the final death toll recorded being in excess of 560 people. The funeral was attended by German President Friedrich Ebert and Prime Minister Hugo Lerchenfeld, and saw crowds of 70,000 people at the cemetery in Ludwigshafen. See also Ammonium nitrate disasters Largest artificial non-nuclear explosions References External links Photo of the Oppau explosion Deutsche Welle Chemical industry in Germany Explosions in 1921 Explosions in Germany 1921 in Germany 20th century in Rhineland-Palatinate Industrial fires and explosions BASF September 1921 Ammonium nitrate disasters
Oppau explosion
[ "Chemistry" ]
1,060
[ "Industrial fires and explosions", "Explosions" ]
3,151,382
https://en.wikipedia.org/wiki/Archaeoastronomy%20and%20Stonehenge
The prehistoric monument of Stonehenge has long been studied for its possible connections with ancient astronomy. The site is aligned in the direction of the sunrise of the summer solstice and the sunset of the winter solstice. Early work Stonehenge has an opening in the henge earthwork facing northeast, and suggestions that particular significance was placed by its builders on the solstice and equinox points have followed. For example, the summer solstice Sun rose close to the Heel Stone, and the Sun's first rays shone into the centre of the monument between the horseshoe arrangement. While it is possible that such an alignment could be coincidental, this astronomical orientation had been acknowledged since William Stukeley drew the site and first identified its axis along the midsummer sunrise in 1720. Stukeley noticed that the Heel Stone was not precisely aligned on the sunrise. The drifting of the position of the sunrise due to the change in the obliquity of the ecliptic since the monument's erection does not account for this imprecision. Recently, evidence has been found for a neighbour to the Heel Stone, no longer extant. The second stone may have instead been one side of a 'solar corridor' used to frame the sunrise. Stukeley and the renowned astronomer Edmund Halley attempted what amounted to the first scientific attempt to date a prehistoric monument. Stukeley concluded the Stonehenge had been set up "by the use of a magnetic compass to lay out the works, the needle varying so much, at that time, from true north." He attempted to calculate the change in magnetic variation between the observed and theoretical (ideal) Stonehenge sunrise, which he imagined would relate to the date of construction. Their calculations returned three dates, the earliest of which, 460 BC, was accepted by Stukeley. That was incorrect, but this early exercise in dating is a landmark in field archaeology. Early efforts to date Stonehenge exploited changes in astronomical declinations and led to efforts such as H. Broome's 1864 theory that the monument was built in 977 BC, when the star Sirius would have risen over Stonehenge's Avenue. Sir Norman Lockyer proposed a date of 1680 BC based entirely on an incorrect sunrise azimuth for the Avenue, aligning it on a nearby Ordnance Survey trig point, a modern feature. Petrie preferred a later date of 730 AD. The relevant stones were leaning considerably during his survey, and it was not considered accurate. An archaeoastronomy debate was triggered by the 1963 publication of Stonehenge Decoded, by Gerald Hawkins an American astronomer. Hawkins claimed to observe numerous alignments, both lunar and solar. He argued that Stonehenge could have been used to predict eclipses. Hawkins' book received wide publicity, in part because he used a computer in his calculations, then a novelty. Archaeologists were suspicious in the face of further contributions to the debate coming from British astronomer C. A. 'Steve' Newham and Sir Fred Hoyle, the famous Cambridge cosmologist, as well as by Alexander Thom, a retired professor of engineering, who had been studying stone circles for more than 20 years. Their theories have faced criticism in recent decades from Richard J. C. Atkinson and others who have suggested impracticalities in the 'Stone Age calculator' interpretation. Gerald Hawkins' work Gerald Hawkins' work on Stonehenge was first published in Nature in 1963 following analyses he had carried out using the Harvard-Smithsonian IBM computer. 
Hawkins found not one or two alignments but dozens. He had studied 165 significant features of the monument and used the computer to check every alignment between them against every rising and setting point for the Sun, Moon, planets, and bright stars in the positions they would have occupied in 1500 BCE. Thirteen solar and eleven lunar correlations were very precise in relation to the early features at the site but precision was less for later features of the monument. Hawkins also proposed a method for using the Aubrey holes to predict lunar eclipses by moving markers from hole to hole. In 1965 Hawkins and J.B. White wrote Stonehenge Decoded, which detailed his findings and proposed that the monument was a 'Neolithic computer'. Atkinson replied with his article "Moonshine on Stonehenge" in Antiquity in 1966, pointing out that some of the pits which Hawkins had used for his sight lines were more likely to have been natural depressions, and that he had allowed a margin of error of up to 2 degrees in his alignments. Atkinson found the probability of so many alignments being visible from 165 points to be close to 0.5 (in other words, 50:50), rather than the "one in a million" possibility which Hawkins had claimed. That the Station Stones stood on top of the earlier Aubrey Holes meant that many of Hawkins' alignments between the two features were illusory. The same article by Atkinson contains further criticisms of the interpretation of Aubrey Holes as astronomical markers, and of Fred Hoyle's work. A question exists over whether the English climate would have permitted accurate observation of astronomical events. Modern researchers were looking for alignments with phenomena they already knew existed; the prehistoric users of the site did not have this advantage. Newham and the Station Stones In 1966, C. A. 'Peter' Newham described an alignment for the equinoxes by drawing a line between one of the Station Stones and a posthole next to the Heel Stone. He also identified a lunar alignment; the long sides of the rectangle created by the four station stones matched the moonrise and moonset at the major standstill. Newham also suggested that the postholes near the entrance were used for observing the saros cycle. Two of the Station Stones are damaged and although their positions would create an approximate rectangle, their date and thus their relationship with the other features at the site is uncertain. Stonehenge's latitude (51° 10′ 44″ N) is unusual in that only at this approximate latitude (within about 50 km) do the lunar and solar alignments mentioned above occur at right angles to one another. More than 50 km north or south of the latitude of Stonehenge, the station stones could not be set out as a rectangle. Alexander Thom's work Alexander Thom had been examining stone circles since the 1950s in search of astronomical alignments and the megalithic yard. It was not until 1973 that he turned his attention to Stonehenge. Thom chose to ignore alignments between features within the monument, considering them to be too close together to be reliable. He looked for landscape features that could have marked lunar and solar events. However, one of Thom's key sites – Peter's Mound – turned out to be a twentieth-century rubbish dump. Later theories An observation published in 2017 notes that the ratio of the mean diameters of the Moon and the Earth may be reflected in the diameters of the stone circle and the earthwork circle at Stonehenge. 
Though this overlap could be coincidental, the same ratio between the size of the Moon and the Earth is also seen in the size of the earthwork at Stonehenge and the nearby circle at Durrington Walls. Although Stonehenge has become an increasingly popular destination during the summer solstice, with 20,000 people visiting in 2005, scholars have gathered growing evidence indicating that prehistoric people visited the site only during the winter solstice. Among the few megalithic monuments in the British Isles to contain a clear, compelling solar alignment is Maeshowe, which famously faces the winter solstice sunrise. The most recent evidence supporting the theory of winter visits includes bones and teeth from pigs which were slaughtered at nearby Durrington Walls, their age at death indicating that they were slaughtered either in December or January every year. Mike Parker Pearson of the University of Sheffield said in 2005, "We have no evidence that anyone was in the landscape in summer." Later, in light of more recent research and findings, Parker Pearson reconsidered, arguing that it is "reasonable to assume that they came to celebrate the midsummer solstice as well as the midwinter solstice". See also List of archaeoastronomical sites sorted by country References External links Temporal Epoch Calculations, An introduction to research considerations regarding temporal variations in archaeoastronomical and archaeogeodetic variables. Archaeoastronomy Stonehenge Phenomena Astronomical hypotheses
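The claim above that the solar-solstice and lunar-standstill alignments meet at right angles only near Stonehenge's latitude can be checked with elementary spherical astronomy. The sketch below is an illustrative approximation, not a survey-grade computation: it assumes a flat horizon, ignores atmospheric refraction and lunar parallax, and uses rounded declinations (about +24° for the midsummer Sun of the era and about −29° for the Moon at its southern major standstill).

```python
# Rough check that the midsummer-sunrise and major-standstill-moonrise
# directions are nearly perpendicular at Stonehenge's latitude.
# Approximations: flat horizon, no refraction or parallax, rounded declinations.
import math

def rising_azimuth(declination_deg, latitude_deg):
    """Azimuth (degrees east of true north) at which a body with the given
    declination rises, for an observer at the given latitude."""
    d = math.radians(declination_deg)
    phi = math.radians(latitude_deg)
    return math.degrees(math.acos(math.sin(d) / math.cos(phi)))

latitude = 51.1789                                  # Stonehenge, degrees north
sun_midsummer = rising_azimuth(+24.0, latitude)     # approx. solar declination c. 2500 BC
moon_standstill = rising_azimuth(-29.0, latitude)   # approx. southern major lunar standstill

print(f"midsummer sunrise azimuth:    {sun_midsummer:6.1f} deg")
print(f"major-standstill moonrise:    {moon_standstill:6.1f} deg")
print(f"angle between the alignments: {moon_standstill - sun_midsummer:6.1f} deg")
```

Run as written this prints roughly 50° and 141°, about 91° apart; the exact figures shift by a degree or so with refraction, horizon altitude and the declinations chosen, consistent with the approximate nature of the claim in the text.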
Archaeoastronomy and Stonehenge
[ "Astronomy" ]
1,719
[ "Archaeoastronomy", "Astronomical hypotheses", "Astronomical controversies", "Astronomical sub-disciplines" ]
3,152,473
https://en.wikipedia.org/wiki/RootkitRevealer
RootkitRevealer is a proprietary freeware tool for rootkit detection on Microsoft Windows by Bryce Cogswell and Mark Russinovich. It runs on Windows XP and Windows Server 2003 (32-bit versions only). Its output lists Windows Registry and file system API discrepancies that may indicate the presence of a rootkit. It is the tool whose findings triggered the Sony BMG copy protection rootkit scandal. RootkitRevealer is no longer being developed. See also Sysinternals Process Explorer Process Monitor ProcDump References Microsoft software Computer security software Windows security software Windows-only freeware Rootkit detection software 2006 software
RootkitRevealer
[ "Engineering" ]
134
[ "Cybersecurity engineering", "Computer security software" ]
3,152,853
https://en.wikipedia.org/wiki/Large%20set%20%28Ramsey%20theory%29
In Ramsey theory, a set S of natural numbers is considered to be a large set if and only if Van der Waerden's theorem can be generalized to assert the existence of arithmetic progressions with common difference in S. That is, S is large if and only if every finite partition of the natural numbers has a cell containing arbitrarily long arithmetic progressions having common differences in S. Examples The natural numbers are large. This is precisely the assertion of Van der Waerden's theorem. The even numbers are large. Properties Necessary conditions for largeness include: If S is large, for any natural number n, S must contain at least one multiple (equivalently, infinitely many multiples) of n. If S = {s1, s2, s3, ...} is large (with its elements listed in increasing order), it cannot be the case that sk ≥ 3sk−1 for every k ≥ 2. Two sufficient conditions are: If S contains n-cubes for arbitrarily large n, then S is large. If S = {p(1), p(2), p(3), ...}, where p is a polynomial with p(0) = 0 and positive leading coefficient, then S is large. The first sufficient condition implies that if S is a thick set, then S is large. Other facts about large sets include: If S is large and F is finite, then S – F is large. is large. If S is large, is also large. If is large, then for any , is large. 2-large and k-large sets A set is k-large, for a natural number k > 0, when it meets the conditions for largeness with the restatement of van der Waerden's theorem restricted to k-colorings. Every set is either large or k-large for some maximal k. This follows from two important, albeit trivially true, facts: k-largeness implies (k-1)-largeness for k>1 k-largeness for all k implies largeness. It is unknown whether there are 2-large sets that are not also large sets. Brown, Graham, and Landman (1999) conjecture that no such sets exist. See also Partition of a set Further reading External links Mathworld: van der Waerden's Theorem Basic concepts in set theory Ramsey theory Theorems in discrete mathematics
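The defining property in the opening sentences can be stated compactly in symbols. The block below is only a restatement of the definition already given in the text (every finite coloring admits monochromatic arithmetic progressions of every length with common difference drawn from S), not an additional result.

```latex
% Restatement of the definition of a large set given above:
% S is large iff van der Waerden's theorem holds with common differences in S.
\[
S \subseteq \mathbb{N} \text{ is large} \iff
\forall r, \ell \in \mathbb{N},\ \forall\, c : \mathbb{N} \to \{1, \dots, r\},\
\exists\, a \in \mathbb{N},\ \exists\, d \in S :\
c(a) = c(a + d) = \cdots = c\bigl(a + (\ell - 1) d\bigr).
\]
```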
Large set (Ramsey theory)
[ "Mathematics" ]
450
[ "Discrete mathematics", "Mathematical theorems", "Combinatorics", "Theorems in discrete mathematics", "Basic concepts in set theory", "Mathematical problems", "Ramsey theory" ]
3,152,855
https://en.wikipedia.org/wiki/Small%20set%20%28category%20theory%29
In category theory, a small set is one in a fixed universe of sets (as the word universe is used in mathematics in general). Thus, the category of small sets is the category of all sets one cares to consider. This is used when one does not wish to bother with set-theoretic concerns of what is and what is not considered a set, concerns that would arise if one tried to speak of the category of "all sets". A small set is not to be confused with a small category, which is a category in which the collection of arrows (and therefore also the collection of objects) is a set. In other choices of foundations, such as Grothendieck universes, there exist both sets that belong to the universe, called “small sets”, and sets that do not, such as the universe itself, called “large sets”. We gain an intermediate notion of moderate set: a subset of the universe, which may be small or large. Every small set is moderate, but not conversely. Since in many cases the choice of foundations is irrelevant, it makes sense to always say “small set” for emphasis even if one has in mind a foundation where all sets are small. Similarly, a small family is a family indexed by a small set; the axiom of replacement (if it applies in the foundation in question) then says that the image of the family is also small. See also Category of sets References S. Mac Lane, Ieke Moerdijk, Sheaves in geometry and logic: a first introduction to topos theory, the chapter on "Categorical preliminaries" Categories in category theory
Small set (category theory)
[ "Mathematics" ]
337
[ "Mathematical structures", "Category theory", "Categories in category theory" ]
3,152,856
https://en.wikipedia.org/wiki/Large%20set%20%28combinatorics%29
In combinatorial mathematics, a large set of positive integers is one such that the infinite sum of the reciprocals diverges. A small set is any subset of the positive integers that is not large; that is, one whose sum of reciprocals converges. Large sets appear in the Müntz–Szász theorem and in the Erdős conjecture on arithmetic progressions. Examples Every finite subset of the positive integers is small. The set of all positive integers is a large set; this statement is equivalent to the divergence of the harmonic series. More generally, any arithmetic progression (i.e., a set of all integers of the form an + b with a ≥ 1, b ≥ 1 and n = 0, 1, 2, 3, ...) is a large set. The set of square numbers is small (see Basel problem). So is the set of cube numbers, the set of 4th powers, and so on. More generally, the set of positive integer values of any polynomial of degree 2 or larger forms a small set. The set {1, 2, 4, 8, ...} of powers of 2 is a small set, and so is any geometric progression (i.e., a set of numbers of the form abn with a ≥ 1, b ≥ 2 and n = 0, 1, 2, 3, ...). The set of prime numbers is large. The set of twin primes is small (see Brun's constant). The set of prime powers which are not prime (i.e., all numbers of the form pn with n ≥ 2 and p prime) is small although the primes are large. This property is frequently used in analytic number theory. More generally, the set of perfect powers is small; even the set of powerful numbers is small. The set of numbers whose expansions in a given base exclude a given digit is small. For example, the set of integers whose decimal expansion does not include the digit 7 is small. Such series are called Kempner series. Any set whose upper asymptotic density is nonzero is large. The set of all primes in an arithmetic progression an+b where a and b are coprime is large (see Dirichlet's theorem on arithmetic progressions). Properties Every subset of a small set is small. The union of finitely many small sets is small, because the sum of two convergent series is a convergent series. (In set theoretic terminology, the small sets form an ideal.) The complement of every small set is large. The Müntz–Szász theorem states that a set S is large if and only if the set of polynomials spanned by the monomials xn with n in S is dense in the uniform norm topology of continuous functions on a closed interval in the positive real numbers. This is a generalization of the Stone–Weierstrass theorem. Open problems involving large sets Paul Erdős conjectured that all large sets contain arbitrarily long arithmetic progressions. He offered a prize of $3000 for a proof, more than for any of his other conjectures, and joked that this prize offer violated the minimum wage law. The question is still open. It is not known how to identify whether a given set is large or small in general. As a result, there are many sets which are not known to be either large or small. See also List of sums of reciprocals Notes References A. D. Wadhwa (1975). An interesting subseries of the harmonic series. American Mathematical Monthly 82 (9) 931–933. Combinatorics Integer sequences Mathematical series
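Since membership in the large/small classification is defined by the behaviour of the sum of reciprocals, a quick numerical comparison can make the contrast vivid. The snippet below is an illustration only (finite partial sums cannot prove convergence or divergence), and it assumes the SymPy library is available for prime generation.

```python
# Illustration of the small/large contrast: partial sums of reciprocals of the
# squares (a small set) versus the primes (a large set). Finite sums prove
# nothing; they only display the trend described in the text.
from sympy import primerange   # assumes SymPy is installed

N = 100_000
sum_squares = sum(1.0 / (k * k) for k in range(1, N + 1))
sum_primes = sum(1.0 / p for p in primerange(2, N + 1))

print(f"sum of 1/n^2 over n <= {N}:      {sum_squares:.6f} (bounded, tends to pi^2/6)")
print(f"sum of 1/p over primes p <= {N}: {sum_primes:.6f} (unbounded, grows like ln ln N)")
```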
Large set (combinatorics)
[ "Mathematics" ]
754
[ "Sequences and series", "Discrete mathematics", "Mathematical structures", "Series (mathematics)", "Integer sequences", "Calculus", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
3,153,014
https://en.wikipedia.org/wiki/Jean-Baptiste%20L.%20Rom%C3%A9%20de%20l%27Isle
Jean-Baptiste Louis Romé de l'Isle (26 August 1736 – 3 July 1790) was a French mineralogist, considered one of the creators of modern crystallography. Romé was born in Gray, Haute-Saône, in eastern France. As secretary of a company of artillery in the Carnatic Wars he visited the East Indies, was taken prisoner by the English in 1761, and held in captivity for several years. He was also an alumnus of the Collège Sainte-Barbe in Paris. Subsequently, he became distinguished for his researches on mineralogy and crystallography. He was the author of Essai de Cristallographie (1772), the second edition of which, regarded as his principal work, was published as Cristallographie (3 vols. and atlas, 1783). His formulation of the law of constancy of interfacial angles built on observations by the geologist Nicolaus Steno. In 1775, he was elected a foreign member of the Royal Swedish Academy of Sciences. He died in Paris, France on 3 July 1790. References Works External links Romé de L'Isle, Jean-Baptiste Louis de (1736-1790). Des caractères extérieurs des minéraux, ou Réponse à cette question, 1794 1736 births 1790 deaths People from Gray, Haute-Saône Crystallographers French geologists French mineralogists Members of the Royal Swedish Academy of Sciences Members of the Prussian Academy of Sciences
Jean-Baptiste L. Romé de l'Isle
[ "Chemistry", "Materials_science" ]
300
[ "Crystallographers", "Crystallography" ]
3,153,173
https://en.wikipedia.org/wiki/Leptoquark
Leptoquarks are hypothetical particles that would interact with quarks and leptons. Leptoquarks are color-triplet bosons that carry both lepton and baryon numbers. Their other quantum numbers, like spin, (fractional) electric charge and weak isospin, vary among models. Leptoquarks are encountered in various extensions of the Standard Model, such as technicolor theories, theories of quark–lepton unification (e.g., the Pati–Salam model), or GUTs based on SU(5), SO(10), E6, etc. Leptoquarks are currently searched for in the ATLAS and CMS experiments at the Large Hadron Collider at CERN. In March 2021, there were reports hinting at the possible existence of leptoquarks, based on an unexpected difference in how bottom quarks decay to create electrons or muons. The measurement had a statistical significance of 3.1σ, which is well below the 5σ level that is usually considered a discovery. Overview Leptoquarks, if they exist, must be heavier than any of the currently known elementary particles, otherwise they would have already been discovered. Current experimental lower limits on leptoquark mass (depending on their type) are around 1 TeV (i.e., about 1000 times the proton mass). By definition, leptoquarks decay directly into a quark and a lepton or an antilepton. Like most other elementary particles, they live for a very short time and are not present in ordinary matter. However, they might be produced in high energy particle collisions such as in particle colliders or from cosmic rays hitting the Earth's atmosphere. Like quarks, leptoquarks must carry color and therefore must also interact with gluons. This strong interaction is important for their production in hadron colliders (such as the Tevatron or LHC). Simplified typology Several kinds of leptoquarks, depending on their electric charge, can be considered: Q = +5/3: Such a leptoquark decays into up-type quarks (up, charm, top) and charged antileptons (e+, μ+, τ+). Q = +2/3: Such a leptoquark decays into up-type quarks and neutrinos (or antineutrinos), and/or to down-type quarks (down, strange, bottom) and charged antileptons. Q = −1/3: Such a leptoquark decays into down-type quarks and (anti)neutrinos, and/or to an up-type quark and a charged lepton. Q = −4/3: Such a leptoquark decays into down-type quarks and charged leptons. If a leptoquark with a given charge exists, its antiparticle, with the opposite charge and decaying into the conjugate states of those listed above, must exist as well. A leptoquark with given electric charge may, in general, interact with any combination of a lepton and quark with given electric charges (this yields up to 3 × 3 = 9 distinct interactions of a single type of a leptoquark). However, experimental searches usually assume that only one of those "channels" is possible. In particular, a Q = +2/3 leptoquark that decays into a positron and a down quark is called a "first generation leptoquark", a leptoquark that decays into a strange quark and an antimuon is a "second-generation leptoquark", etc. Nevertheless, most theories offer little theoretical motivation to believe that leptoquarks have only a single interaction and that the generation of the quark and lepton involved is the same. Proton decay Existence of pure leptoquarks would not spoil baryon number conservation. However, some theories allow (or require) the leptoquark to also have a diquark interaction vertex. For example, a Q = +2/3 charged leptoquark might also decay into two down-type antiquarks. 
Existence of such a leptoquark-diquark would cause protons to decay. The current limits on proton lifetime are strong probes of the existence of these leptoquark-diquarks. These fields emerge in grand unification theories; for example, in the Georgi–Glashow SU(5) model, they are called X and Y bosons. Experimental searches In 1997, an excess of events at the HERA accelerator created a stir in the particle physics community, because one possible explanation of the excess was the involvement of leptoquarks. However, later studies performed both at HERA and at the Tevatron with larger samples of data ruled out this possibility for masses of the leptoquark up to around . Second-generation leptoquarks were also searched for and not found. Current best limits on leptoquarks are set by the LHC, which has been searching for the first, second, and third generation of leptoquarks and some mixed-generation leptoquarks, and has raised the lower mass limit to about . For leptoquarks coupling to a neutrino and a quark to be proven to exist, the missing energy in particle collisions attributed to neutrinos would have to be excessively energetic. It is likely that the creation of leptoquarks would mimic the creation of massive quarks. For leptoquarks coupling to electrons and up or down quarks, experiments on atomic parity violation and parity-violating electron scattering set the best limits. The LHeC project, which would add an electron ring so that electron bunches could collide with the existing LHC proton ring, has been proposed as a way to look for higher-generation leptoquarks. See also X and Y bosons Quark–lepton complementarity References Hypothetical elementary particles Grand Unified Theory Gauge bosons
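The charges listed in the simplified typology follow from charge conservation in the decay, using the standard quark and lepton charges (these values are standard physics, not quantities quoted in the article itself); a compact bookkeeping check:

```latex
% Charge bookkeeping for the four leptoquark classes described above,
% with Q(u-type) = +2/3, Q(d-type) = -1/3, Q(l+) = +1, Q(nu) = 0.
\begin{align*}
\mathrm{LQ} \to u + \ell^{+}: &\quad Q = +\tfrac{2}{3} + 1 = +\tfrac{5}{3} \\
\mathrm{LQ} \to u + \nu \ \text{or}\ d + \ell^{+}: &\quad Q = +\tfrac{2}{3} \\
\mathrm{LQ} \to d + \nu \ \text{or}\ u + \ell^{-}: &\quad Q = -\tfrac{1}{3} \\
\mathrm{LQ} \to d + \ell^{-}: &\quad Q = -\tfrac{1}{3} - 1 = -\tfrac{4}{3}
\end{align*}
```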
Leptoquark
[ "Physics" ]
1,281
[ "Hypothetical elementary particles", "Unsolved problems in physics", "Physics beyond the Standard Model", "Grand Unified Theory" ]
3,153,312
https://en.wikipedia.org/wiki/Reflector%20%28cipher%20machine%29
A reflector, in cryptology, is a component of some rotor cipher machines, such as the Enigma machine, that sends electrical impulses that have reached it from the machine's rotors back, in reverse order, through those rotors. The reflector made it possible to use the same machine setup for encryption and decryption, but it creates a weakness in the encryption: with a reflector the encrypted version of a given letter can never be that letter itself. That limitation aided World War II code breakers in cracking Enigma encryption. The comparable WW II U.S. cipher machine, SIGABA, did not include a reflector. Other names The reflector is also known as the reversing drum or, from the German, the Umkehrwalze or UKW. Rotor machines
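Functionally, a reflector is a pairing of the alphabet with itself, a fixed-point-free involution, which is exactly why the same settings decrypt what they encrypt and why no letter can ever encrypt to itself. The sketch below uses a made-up pairing for illustration; it is not the wiring of any historical reflector.

```python
# A reflector modelled as a fixed-point-free involution on the alphabet.
# The pairing below (A<->Z, B<->Y, ...) is purely illustrative; it is not the
# wiring of any historical Enigma reflector.
import string

alphabet = string.ascii_uppercase
reflector = {a: b for a, b in zip(alphabet, reversed(alphabet))}

# Involution: sending a letter through the reflector twice returns the original
# letter, which is what lets one machine setup both encrypt and decrypt.
assert all(reflector[reflector[c]] == c for c in alphabet)

# No fixed points: no letter maps to itself, so a letter can never encrypt to
# itself anywhere in the machine, the weakness exploited by WWII code breakers.
assert all(reflector[c] != c for c in alphabet)
print("fixed-point-free involution over", len(reflector), "letters")
```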
Reflector (cipher machine)
[ "Physics", "Technology" ]
165
[ "Physical systems", "Machines", "Rotor machines" ]
3,153,666
https://en.wikipedia.org/wiki/Nuclear%20Waste%20Policy%20Act
The Nuclear Waste Policy Act of 1982 is a United States federal law which established a comprehensive national program for the safe, permanent disposal of highly radioactive wastes. The US Congress amended the act in 1987 to designate Yucca Mountain, Nevada, as the sole repository. The act allowed Nevada to override this designation, which it did in April 2002. Congress overrode Nevada's veto in July 2002. Nevada appealed, and the U.S. Court of Appeals for the District of Columbia sided with Nevada in 2004. At least one other jurisdiction (Aiken County, South Carolina in 2011) filed suit to force Yucca Mountain to accept the nuclear waste from the rest of the US. Historical overview During the first 40 years that nuclear waste was being created in the United States, no legislation was enacted to manage its disposal. Nuclear waste, some of which remains radioactive with a half-life of more than one million years, was kept in various types of temporary storage. Of particular concern during nuclear waste disposal are two long-lived fission products, Tc-99 (half-life 220,000 years) and I-129 (half-life 17 million years), which dominate spent fuel radioactivity after a few thousand years. The most troublesome transuranic elements in spent fuel are Np-237 (half-life two million years) and Pu-239 (half-life 24,000 years). Most existing nuclear waste came from production of nuclear weapons. About 77 million gallons of military nuclear waste in liquid form was stored in steel tanks, mostly in South Carolina, Washington, and Idaho. In the private sector, 82 nuclear plants operating in 1982 used uranium fuel to produce electricity. Highly radioactive spent fuel rods were stored in pools of water at reactor sites, but many utilities were running out of storage space. The Nuclear Waste Policy Act of 1982 created a timetable and procedure for establishing a permanent, underground repository for high-level radioactive waste by the mid-1990s, and provided for some temporary federal storage of waste, including spent fuel from civilian nuclear reactors. State governments were authorized to veto a national government decision to place a waste repository within their borders, and the veto would stand unless both houses of Congress voted to override it. The Act also called for developing plans by 1985 to build monitored retrievable storage (MRS) facilities, where wastes could be kept for 50 to 100 years or more and then be removed for permanent disposal or for reprocessing. Congress assigned responsibility to the U.S. Department of Energy (DOE) to site, construct, operate, and close a repository for the disposal of spent nuclear fuel and high-level radioactive waste. The U.S. Environmental Protection Agency (EPA) was directed to set public health and safety standards for releases of radioactive materials from a repository, and the U.S. Nuclear Regulatory Commission (NRC) was required to promulgate regulations governing construction, operation, and closure of a repository. Generators and owners of spent nuclear fuel and high-level radioactive waste were required to pay the costs of disposal of such radioactive materials. The waste program, which was expected to cost billions of dollars, would be funded through a fee paid by electric utilities on nuclear-generated electricity. An Office of Civilian Radioactive Waste Management was established in the DOE to implement the Act. 
Permanent repositories The Nuclear Waste Policy Act required the Secretary of Energy to issue guidelines for selection of sites for construction of two permanent, underground nuclear waste repositories. DOE was to study five potential sites, and then recommend three to the President by January 1, 1985. Five additional sites were to be studied and three of them recommended to the president by July 1, 1989, as possible locations for a second repository. A full environmental impact statement was required for any site recommended to the President. Locations considered to be leading contenders for a permanent repository were basalt formations at the government's Hanford Nuclear Reservation in Washington, volcanic tuff formations at its Nevada nuclear test site, and several salt formations in Utah, Texas, Louisiana, and Mississippi. Salt and granite formations in other states from Maine to Georgia had also been surveyed, but not evaluated in great detail. The President was required to review site recommendations and submit to Congress by March 31, 1987, his recommendation of one site for the first repository, and by March 31, 1990, his recommendation for a second repository. The amount of high-level waste or spent fuel that could be placed in the first repository was limited to the equivalent of 70,000 metric tons of heavy metal until a second repository was built. The Act required the national government to take ownership of all nuclear waste or spent fuel at the reactor site, transport it to the repository, and thereafter be responsible for its containment. Temporary spent fuel storage The Act authorized DOE to provide up to 1,900 metric tons of temporary storage capacity for spent fuel from civilian nuclear reactors. It required that spent fuel in temporary storage facilities be moved to permanent storage within three years after a permanent waste repository went into operation. Costs of temporary storage would be paid by fees collected from electric utilities using the storage. Monitored retrievable storage The Act required the Secretary of Energy to report to Congress by June 1, 1985, on the need for and feasibility of a monitored retrievable storage facility (MRS) and specified that the report was to include five different combinations of proposed sites and facility designs, involving at least three different locations. Environmental assessments were required for the sites. It barred construction of a MRS facility in a state under consideration for a permanent waste repository. The DOE in 1985 recommended an integral MRS facility. Of the eleven sites identified within the preferred geographic region, the DOE selected three sites in Tennessee for further study. In March 1987, after more than a year of legal action in the federal courts, the DOE submitted its final proposal to Congress for the construction of a MRS facility at the Clinch River Breeder Reactor Site in Oak Ridge, Tennessee. Following considerable public pressure and threat of veto by the Governor of Tennessee, the 1987 amendments to the NWPA "annulled and revoked" MRS plans for all of the proposed sites. There are carefully selected geological locations that build places specifically for disposing nuclear waste in a safe location. 
State veto of site selected The Act required DOE to consult closely throughout the site selection process with states or Indian tribes that might be affected by the location of a waste facility, and allowed a state (governor or legislature) or Indian tribe to veto a federal decision to place within its borders a waste repository or temporary storage facility holding 300 tons or more of spent fuel, but provided that the veto could be overruled by a vote of both houses of Congress. Payment of costs The Act established a Nuclear Waste Fund composed of fees levied against electric utilities to pay for the costs of constructing and operating a permanent repository, and set the fee at one mill per kilowatt-hour of nuclear electricity generated. Utilities were charged a one-time fee for storage of spent fuel created before enactment of the law. Nuclear waste from defense activities was exempted from most provisions of the Act, which required that if military waste were put into a civilian repository, the government would pay its pro rata share of the cost of development, construction, and operation of the repository. The Act authorized impact assistance payments to states or Indian tribes to offset any costs resulting from location of a waste facility within their borders. Nuclear Waste Fund The Nuclear Waste Fund previously received $750 million in fee revenues each year and had an unspent balance of $44.5 billion as of the end of FY2017. However (according to the Draft Report by the Blue Ribbon Commission on America's Nuclear Future), actions by both Congress and the Executive Branch have made the money in the fund effectively inaccessible to serving its original purpose. The commission made several recommendations on how this situation may be corrected. In late 2013, a federal court ruled that the Department of Energy must stop collecting fees for nuclear waste disposal until provisions are made to collect nuclear waste. Yucca Mountain In December 1987, Congress amended the Nuclear Waste Policy Act to designate Yucca Mountain, Nevada, as the only site to be characterized as a permanent repository for all of the nation's nuclear waste. The plan was added to the fiscal 1988 budget reconciliation bill signed on December 22, 1987. Working under the 1982 Act, DOE had narrowed down the search for the first nuclear-waste repository to three Western states: Nevada, Washington, and Texas. The amendment repealed provisions in the 1982 law calling for a second repository in the eastern United States. No one from Nevada participated on the House–Senate conference committee on reconciliation. The amendment explicitly named Yucca Mountain as the only site that DOE was to consider for a permanent repository for the nation's radioactive waste. Years of study and procedural steps remained. The amendment also authorized a monitored retrievable storage facility, but not until the permanent repository was licensed. Early in 2002, the Secretary of Energy recommended approval of Yucca Mountain for development of a repository based on the multiple factors as required in the Nuclear Waste Policy Act of 1987 and, after review, President George W. Bush submitted the recommendation to Congress for its approval. Nevada exercised its state veto in April 2002, but the veto was overridden by both houses of Congress by mid-July 2002. In 2004, the U.S. 
Court of Appeals for the District of Columbia Circuit upheld a challenge by Nevada, ruling that EPA's 10,000-year compliance period for isolation of radioactive waste was not consistent with National Academy of Sciences (NAS) recommendations and was too short. The NAS report had recommended standards be set for the time of peak risk, which might approach a period of one million years. By limiting the compliance time to 10,000 years, EPA did not respect a statutory requirement that it develop standards consistent with NAS recommendations. The EPA subsequently revised the standards to extend out to 1 million years. A license application was submitted in the summer of 2008 and is presently under review by the Nuclear Regulatory Commission. The Obama Administration rejected use of the site in the 2010 United States federal budget, which eliminated all funding except that needed to answer inquiries from the Nuclear Regulatory Commission, "while the Administration devises a new strategy toward nuclear waste disposal." On March 5, 2009, Energy Secretary Steven Chu told a Senate hearing the Yucca Mountain site is no longer viewed as an option for storing reactor waste. In Obama's 2011 budget proposal released February 1, all funding for nuclear waste disposal was zeroed out for the next ten years and it proposed to dissolve the Office of Civilian Waste Management required by the NWPA. In late February 2010, multiple lawsuits were proposed and/or being filed in various federal courts across the country to contest the legality of Chu's direction to DOE to withdraw the license application. These lawsuits were evidently foreseen as eventually being necessary to enforce the NWPA because Section 119 of the NWPA provides for federal court interventions if the President, Secretary of Energy, or the Nuclear Regulatory Commission fail to uphold the NWPA. Prerequisites for radioactive waste management Hannes Alfvén, Nobel laureate in physics, described the as-yet-unresolved dilemma of permanent radioactive waste disposal: "The problem is how to keep radioactive waste in storage until it decays after hundreds of thousands of years. The [geologic] deposit must be absolutely reliable as the quantities of poison are tremendous. It is very difficult to satisfy these requirements for the simple reason that we have had no practical experience with such a long term project. Moreover permanently guarded storage requires a society with unprecedented stability." Thus, Alfvén identified two fundamental prerequisites for effective management of high-level radioactive waste: (1) stable geological formations, and (2) stable human institutions over hundreds of thousands of years. However, no known human civilization has ever endured for so long. Moreover, no geologic formation of adequate size for a permanent radioactive waste repository has yet been discovered that has been stable for so long a period. Because some radioactive species have half-lives longer than one million years, even very low container leakage and radionuclide migration rates must be taken into account. Moreover, it may require more than one half-life until some nuclear waste loses enough radioactivity so that it is no longer lethal to humans. Waste containers have a modeled lifetime of 12,000 to over 100,000 years and it is assumed they will fail in about two million years. 
A 1983 review of the Swedish radioactive waste disposal program by the National Academy of Sciences found that country's estimate of about one million years being necessary for waste isolation "fully justified." The Nuclear Waste Policy Act did not require anything approaching this standard for permanent deep-geologic disposal of high-level radioactive waste in the United States. U.S. Department of Energy guidelines for selecting locations for permanent deep-geologic high-level radioactive waste repositories required containment of waste within waste packages for only 300 years. A site would be disqualified from further consideration only if groundwater travel time from the "disturbed zone" of the underground facility to the "accessible environment" (atmosphere, land surface, surface water, oceans or lithosphere extending no more than 10 kilometers from the underground facility) was expected to be less than 1,000 years along any pathway of radionuclide travel. Sites with groundwater travel time greater than 1,000 years from the original location to the human environment were considered potentially acceptable, even if the waste would be highly radioactive for 200,000 years or more. Moreover, the term "disturbed zone" was defined in the regulations to exclude shafts drilled into geologic structures from the surface, so the standard applied to natural geologic pathways was more stringent than the standard applied to artificial pathways of radionuclide travel created during construction of the facility. Alternative to waste storage Enrico Fermi described an alternative solution: Consume all actinides in fast neutron reactors, leaving only fission products requiring special custody for less than 300 years. This requires continuous fuel reprocessing. PUREX separates plutonium and uranium, but leaves other actinides with fission products, thereby not addressing the long-term custody problem. Pyroelectric refining, as perfected at EBR-II, separates essentially all actinides from fission products. U.S. DOE Research on pyroelectric refining and fast neutron reactors was stopped in 1994. Repository closure Current repository closure plans require backfilling of waste disposal rooms, tunnels, and shafts with rubble from initial excavation and sealing openings at the surface, but do not require complete or perpetual isolation of radioactive waste from the human environment. Current policy relinquishes control over radioactive materials to geohydrologic processes at repository closure. Existing models of these processes are empirically underdetermined, meaning there is not much evidence they are accurate. DOE guidelines contain no requirements for permanent offsite or onsite monitoring after closure. This may seem imprudent, considering repositories will contain millions of dollars worth of spent reactor fuel that might be reprocessed and used again either in reactors generating electricity, in weapons applications, or possibly in terrorist activities. Technology for permanently sealing large-bore-hole walls against water infiltration or fracture does not currently exist. Previous experiences sealing mine tunnels and shafts have not been entirely successful, especially where there is any hydraulic pressure from groundwater infiltration into disturbed underground geologic structures. Historical attempts to seal smaller bore holes created during exploration for oil, gas, and water are notorious for their high failure rates, often in periods less than 50 years. 
See also Non-Proliferation Trust Basalt Waste Isolation Project References External links Nuclear Waste Policy Act of 1982 as amended (PDF/details) in the GPO Statute Compilations collection Summary of Nuclear Waste Policy Act can be found on the EPA site: EPA Laws & Regulations 1982 in American law United States federal energy legislation United States federal environmental legislation 97th United States Congress 1982 in the environment Radioactive waste Acts of the 97th United States Congress Presidency of Ronald Reagan
Nuclear Waste Policy Act
[ "Chemistry", "Technology" ]
3,249
[ "Radioactive waste", "Environmental impact of nuclear power", "Radioactivity", "Hazardous waste" ]
3,153,971
https://en.wikipedia.org/wiki/Mean%20piston%20speed
The mean piston speed is the average speed of the piston in a reciprocating engine. It is a function of stroke and RPM: mean piston speed = 2 × stroke × RPM / 60, with the stroke expressed in metres. There is a factor of 2 in the equation to account for the fact that one stroke occurs in 1/2 of a crank revolution (or alternatively: two strokes per one crank revolution) and a '60' to convert minutes to seconds in the RPM term. For example, a piston in an automobile engine which has a stroke of 90 mm will have a mean speed at 3000 rpm of 2 * (90 / 1000) * 3000 / 60 = 9 m/s. The 5.2-liter V10 that debuted in the 2009 Audi R8 has the highest mean piston speed for any production car (26.9 m/s) thanks to its 92.8 mm stroke and 8700-rpm redline. Classes Low speed diesels: ~8.5 m/s, for marine and electric power generation applications; medium speed diesels: ~11 m/s, for trains or trucks; high speed diesels: ~14–17 m/s, for automobile engines; medium speed petrol: ~16 m/s, for automobile engines; high speed petrol: ~20–25 m/s, for sport automobile engines or motorcycles. Competition Some extreme examples are NASCAR Sprint Cup Series and Formula One engines with ~25 m/s, and Top Fuel and MotoGP engines with ~30 m/s. The mean of any function refers to the average value. In the case of mean piston speed, taken in a narrow mathematical sense, it is zero because half of the time the piston is moving up and half of the time the piston is moving down; this is not useful. The way the term is usually used is to describe the distance traveled by the piston per unit of time, taking distance as positive in both the up and down senses. It is related to the rate at which friction work is done on the cylinder walls, and thus the rate at which heat is generated there. This is something of a non-puzzle: mean piston speed represents a specification to be designed to rather than a result of design, and it is a function of the revolutions per minute; that is, with the rpm held constant, the magnitude of the piston's velocity at the peak of the velocity curve is the same as at the trough, which falls at 286.071 degrees on the crankshaft. At 0 and 180 degrees, the piston velocity is zero. Piston velocity is a test of the strength of the piston and connecting rod subassembly. The alloy used to make the piston is what determines the maximum velocity that the piston can reach before friction, heat and reciprocating stress exceed what the piston can sustain and it begins to fail structurally. As the alloy tends to be fairly consistent across most manufacturers, the maximum velocity of the piston at a given rpm is determined by the length of the stroke, that is, by the throw of the crankshaft (the offset of the crankpin journal from the crankshaft axis). The most common engine types in production are built square, or below square. That is, a square engine has the same diameter of cylinder bore as the total length of the stroke from 0 to 180 degrees, whereas in an undersquare engine, the total length of the stroke is greater than the diameter of the bore. The opposite, oversquare, is mostly used in higher performance engines where the torque curve approaches the peak of the maximum piston velocity. Generally in this type of engine, the cylinder filling can be artificially enhanced with turbochargers or superchargers, increasing the amount of fuel/air available for combustion. 
An example is found in Formula 1 racing engines, where the cylinder diameter is substantially greater than the length of the stroke, resulting in higher available rpm but demanding greater strength from connecting rods and pistons and higher temperature tolerances from bearings. The stroke in these engines is fairly short (under 45 mm), with the bore larger than that, depending on the torque curve and maximum available rpm as designed by the builder. Peak torque is reached at higher rpm and is spread over a wider range of rpm. These specifications are known factors and can be designed to. Torque is a function of the length of the stroke: the shorter the stroke, the less torque is available at lower rpm, but the piston velocity can be taken to much greater speeds, meaning higher engine rpm. These types of engines are much more delicate and require a much higher level of precision in the moving parts than square or undersquare engines. Up until the early 1960s, the focus by designers was on torque rather than piston velocity, probably due to material considerations and machining technologies. As materials have improved, engine rpm has increased. References Piston engines Engine technology
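As a small arithmetic sketch of the formula described above (the function and variable names are illustrative, not from any standard engineering library):

```python
# Mean piston speed = 2 * stroke * rpm / 60, as described in the text.

def mean_piston_speed(stroke_mm: float, rpm: float) -> float:
    """Average piston speed in m/s for a given stroke (mm) and engine speed (rpm)."""
    stroke_m = stroke_mm / 1000.0           # millimetres to metres
    return 2.0 * stroke_m * rpm / 60.0      # two strokes per revolution, 60 s per minute

# Reproduces the two figures quoted in the text:
print(mean_piston_speed(90.0, 3000))             # 9.0 m/s
print(round(mean_piston_speed(92.8, 8700), 1))   # 26.9 m/s (Audi R8 5.2 V10 at redline)
```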
Mean piston speed
[ "Technology" ]
958
[ "Engine technology", "Piston engines", "Engines" ]
3,154,072
https://en.wikipedia.org/wiki/Tracking%20system
A tracking system, also known as a locating system, is used for the observing of persons or objects on the move and supplying a timely ordered sequence of location data for further processing. Applications A myriad of tracking systems exist. Some are 'lag time' indicators, that is, the data is collected after an item has passed a point, for example a bar code, choke point or gate. Others are 'real-time' or 'near real-time' like Global Positioning Systems (GPS), depending on how often the data is refreshed. There are bar-code systems which require items to be scanned and others which have automatic identification (RFID auto-id). For the most part, the tracking worlds are composed of discrete hardware and software systems for different applications. That is, bar-code systems are separate from Electronic Product Code (EPC) systems, and GPS systems are separate from active real time locating systems or RTLS. For example, a passive RFID system would be used in a warehouse to scan the boxes as they are loaded on a truck - then the truck itself is tracked on a different system using GPS with its own features and software. The major technology “silos” in the supply chain are: Distribution/warehousing/manufacturing Indoor assets are tracked by repeatedly reading e.g. a barcode or a passive or active RFID tag, then feeding the read data into Work in Progress (WIP) models, Warehouse Management Systems (WMS) or ERP software. The readers required per choke point are meshed auto-ID or hand-held ID applications. However, tracking could also be capable of providing data monitoring without being bound to a fixed location by using a cooperative tracking capability such as an RTLS. Yard management Outdoor mobile assets of high value are tracked by choke point, 802.11, Received Signal Strength Indication (RSSI), Time Difference of Arrival (TDOA), active RFID or GPS yard management, feeding into either third-party yard management software from the provider or into an existing system. Yard Management Systems (YMS) couple location data collected by RFID and GPS systems to help supply chain managers to optimize utilization of yard assets such as trailers and dock doors. YMS systems can use either active or passive RFID tags. Fleet management Fleet management is applied as a tracking application using GPS, composing tracks from successive vehicle positions. Each vehicle to be tracked is equipped with a GPS receiver and relays the obtained coordinates via cellular or satellite networks to a home station. Fleet management is required by: large fleet operators (vehicle/railcars/trucking/shipping), forwarding operators (containers, machines, heavy cargo, valuable shipments), operators who have high equipment and/or cargo/product costs, and operators who have a dynamic workload. Person tracking Person tracking relies on unique identifiers that are assigned to persons either temporarily (RFID tags) or permanently, such as personal identifiers (including biometric identifiers) or national identification numbers, together with a way to sample their positions, either on short temporal scales as through GPS, or for public administration to keep track of a state's citizens or temporary residents. The purposes for doing so are numerous, ranging from welfare and public security to mass surveillance. Attendance management Mobile phone services Location-based services (LBS) utilise a combination of A-GPS, newer GPS and cellular locating technology that is derived from the telematics and telecom world. 
Line of sight is not necessarily required for a location fix. This is a significant advantage in certain applications since a GPS signal can still be lost indoors. As such, A-GPS enabled cell phones and PDAs can be located indoors and the handset may be tracked more precisely. This enables non-vehicle centric applications and can bridge the indoor location gap, typically the domain of RFID and Real-time locating system (RTLS) systems, with an off the shelf cellular device. Currently, A-GPS enabled handsets are still highly dependent on the LBS carrier system, so handset device choice and application requirements are still not apparent. Enterprise system integrators need the skills and knowledge to correctly choose the pieces that will fit the application and geography. Operational requirements Regardless of the tracking technology, for the most part, the end-users just want to locate themselves or wish to find points of interest. The reality is that there is no "one size fits all" solution with locating technology for all conditions and applications. Application of tracking is a substantial basis for vehicle tracking in fleet management, asset management, individual navigation, social networking, or mobile resource management and more. Company, group or individual interests can benefit from more than one of the offered technologies depending on the context. GPS tracking GPS has global coverage but can be hindered by line-of-sight issues caused by buildings and urban canyons; Map matching techniques, which involve several algorithms, can help improve accuracy in such conditions. RFID is excellent and reliable indoors or in situations where close proximity to tag readers is feasible, but has limited range and still requires costly readers. RFID stands for Radio Frequency Identification. This technology uses electromagnetic waves to receive the signal from the targeting object to then save the location on a reader that can be looked at through specialized software. Real-time locating systems (RTLS) RTLS are enabled by Wireless LAN systems (according to IEEE 802.11) or other wireless systems (according to IEEE 802.15) with multilateration. Such equipment is suitable for certain confined areas, such as campuses and office buildings. RTLS requires system-level deployments and server functions to be effective. In virtual space In virtual space technology, a tracking system is generally a system capable of rendering virtual space to a human observer while tracking the observer's coordinates. For instance, in dynamic virtual auditory space simulations, a head tracker provides information to a central processor in real time and this enables the processor to select what functions are necessary to give feedback to the user in relation to where they are positioned. Additionally, there is vision-based trajectory tracking, that uses a color and depth camera known as a KINECT sensor to track 3D position and movement. This technology can be used in traffic control, human-computer interface, video compression and robotics. See also Data logger Geopositioning GPS tracking Intelligent Mail barcode Internet geolocation Locating engine Location-based service MAC address anonymization Mass surveillance Multilateration Positional tracking Real-time locating RFID in schools Simultaneous localization and mapping Track and trace Vehicle tracking system References Further reading Geopositioning Navigation Radio navigation Technology systems Ubiquitous computing Wireless locating
Tracking system
[ "Technology", "Engineering" ]
1,343
[ "Systems engineering", "Technology systems", "Wireless locating", "Tracking", "nan" ]
3,154,127
https://en.wikipedia.org/wiki/Virtual%20acoustic%20space
Virtual acoustic space (VAS), also known as virtual auditory space, is a technique in which sounds presented over headphones appear to originate from any desired direction in space. The illusion of a virtual sound source outside the listener's head is created. Sound localization cues generate an externalized percept When one listens to sounds over headphones (in what is known as the "closed field") the sound source appears to arise from the center of the head. On the other hand, under normal, so-called free-field, listening conditions sounds are perceived as being externalized. The direction of a sound in space (see sound localization) is determined by the brain when it analyses the interaction of incoming sound with the head and external ears. A sound arising to one side reaches the near ear before the far ear (creating an interaural time difference, ITD), and will also be louder at the near ear (creating an interaural level difference, ILD – also known as interaural intensity difference, IID). These binaural cues allow sounds to be lateralized. Although conventional stereo headphone signals make use of ILDs (not ITDs), the sound is not perceived as being externalized. The perception of an externalized sound source is due to the frequency- and direction-dependent filtering of the pinna which makes up the external ear structure. Unlike ILDs and ITDs, these spectral localization cues are generated monaurally. The same sound presented from different directions will produce at the eardrum a different pattern of peaks and notches across frequency. The pattern of these monaural spectral cues is different for different listeners. Spectral cues are vital for making elevation judgments and distinguishing if a sound arose from in front of or behind the listener. They are also vital for creating the illusion of an externalized sound source. Since only ILDs are present in stereo recordings, the lack of spectral cues means that the sound is not perceived as being externalized. The easiest way of re-creating this illusion is to make a recording using two microphones placed inside a dummy human head. Playing back the recording via headphones will create the illusion of an externalized sound source. VAS creates the perception of an externalized sound source VAS emulates the dummy head technique via digital signal processing. The VAS technique involves two stages: estimating the transfer functions of the head from different directions, and playing sounds through VAS filters with similar transfer functions. The ILDs, ITDs, and spectral cues make up what is known as the head-related transfer function (HRTF), which defines how the head and outer ears filter incoming sound. The HRTF can be measured by placing miniature probe microphones into the subject's ears and recording the impulse responses to broad-band sounds presented from a range of directions in space. Since head size and outer ear shape vary between listeners, a more accurate effect can be created by individualizing the VAS filters in this way. However, a foreign HRTF or an average HRTF taken over many listeners is still very effective. The bank of HRTF impulse responses can then be converted into a filter bank. Any desired sound can now be convolved with one of these filters and played to a listener over headphones. This creates the perception of an externalized sound source. 
This approach has obvious advantages over the "dummy head technique", most notably the fact that once the filter bank has been obtained it can be applied to any desired sound source. Uses for VAS in science In addition to obvious uses in the home entertainment market, VAS has been used to study how the brain processes sound source location. For example, at the Oxford Auditory Neuroscience Lab scientists have presented VAS-filtered sounds whilst recording from neurons in the auditory cortex and mid-brain. See also Sound localization acoustic space Auralization Acoustics Virtual reality Digital signal processing
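The two stages described above (measure HRTF impulse responses for many directions, then convolve an arbitrary source with the left- and right-ear filters for the desired direction) can be sketched in a few lines. The array names and the idea of indexing the filter bank by an (azimuth, elevation) pair are illustrative assumptions only; real HRTF collections (for example those distributed in the SOFA format) have their own conventions, and the impulse responses below are random placeholders rather than measured data.

```python
# Minimal sketch of VAS rendering: convolve a mono source with the left- and
# right-ear HRTF impulse responses for one direction. The filter-bank layout
# and the 256-tap random "HRIRs" are placeholders, not measured data.
import numpy as np
from scipy.signal import fftconvolve   # assumes SciPy is available

def render_binaural(mono, hrir_bank, azimuth, elevation):
    """mono: 1-D array; hrir_bank maps (azimuth, elevation) -> (left_ir, right_ir)."""
    left_ir, right_ir = hrir_bank[(azimuth, elevation)]
    left = fftconvolve(mono, left_ir)          # spectral cues, ITD and ILD live in the IRs
    right = fftconvolve(mono, right_ir)
    return np.stack([left, right], axis=-1)    # two-channel signal for headphone playback

# Toy usage: white noise "placed" 45 degrees to the listener's left.
rng = np.random.default_rng(0)
bank = {(-45, 0): (0.05 * rng.standard_normal(256), 0.05 * rng.standard_normal(256))}
out = render_binaural(rng.standard_normal(48000), bank, -45, 0)
print(out.shape)   # (48255, 2)
```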
Virtual acoustic space
[ "Physics" ]
791
[ "Classical mechanics", "Acoustics" ]
3,155,147
https://en.wikipedia.org/wiki/Albert%20Meyers
Albert I. Meyers (November 22, 1932 – October 23, 2007) was an American organic chemist, University Distinguished Professor Emeritus at Colorado State University, and member of the U.S. National Academy of Sciences. Born in New York City, Meyers earned undergraduate and doctoral degrees from New York University in 1954 and 1957, respectively. After finishing his graduate degree, Meyers worked as a research chemist for a year before joining the faculty of Louisiana State University as an associate professor. He rose to the rank of full professor in 1964, and was a special NIH fellow at Harvard University in 1965–1966. Meyers later moved to Wayne State University in 1970 and finally to Colorado State University in 1972. Meyers has served on the editorial boards and staff of several major chemical journals, including the Journal of the American Chemical Society. For his work in the area of synthetic organic chemistry, particularly in synthesis of heterocyclic compounds, Meyers was elected to the U.S. National Academy of Sciences in 1994. An endowed faculty chair at Colorado State in synthetic organic chemistry and Meyers synthesis is named in honor of Meyers. External links Curriculum Vitae In Memoriam. Professor Albert I. Meyers. 20th-century American chemists Members of the United States National Academy of Sciences 1932 births 2007 deaths American organic chemists Harvard University staff New York University alumni Louisiana State University faculty Wayne State University faculty Colorado State University faculty
Albert Meyers
[ "Chemistry" ]
289
[ "Organic chemists", "American organic chemists" ]
3,155,399
https://en.wikipedia.org/wiki/Mostafa%20El-Sayed
Mostafa A. El-Sayed (Arabic: مصطفى السيد) is an Egyptian-American physical chemist, nanoscience researcher, member of the National Academy of Sciences and US National Medal of Science laureate. He is known for the spectroscopy rule named after him, the El-Sayed rule. Early life and academic career El-Sayed was born in Zifta, Egypt and spent his early life in Cairo. He earned his B.Sc. in chemistry from Ain Shams University Faculty of Science, Cairo in 1953. El-Sayed earned his doctoral degree in chemistry from Florida State University working with Michael Kasha, the last student of the legendary G. N. Lewis. While attending graduate school he met and married Janice Jones, his wife of 48 years. He spent time as a post-doctoral researcher at Harvard University, Yale University and the California Institute of Technology before joining the faculty of the University of California at Los Angeles in 1961. In 1994, he retired from UCLA and accepted the position of Julius Brown Chair and Regents Professor of Chemistry and Biochemistry at the Georgia Institute of Technology. He led the Laser Dynamics Lab there until his full retirement in 2020. El-Sayed is a former editor-in-chief of the Journal of Physical Chemistry (1980–2004). Research El-Sayed's research interests include the use of steady-state and ultra fast laser spectroscopy to understand relaxation, transport and conversion of energy in molecules, in solids, in photosynthetic systems, semiconductor quantum dots and metal nanostructures. The El-Sayed group has also been involved in the development of new techniques such as magnetophotonic selection, picosecond Raman spectroscopy and phosphorescence microwave double resonance spectroscopy. A major focus of his lab is currently on the optical and chemical properties of noble metal nanoparticles and their applications in nanocatalysis, nanophotonics and nanomedicine. His lab is known for the development of the gold nanorod technology. As of 2021, El-Sayed has produced over 1200 publications in refereed journals in the areas of spectroscopy, molecular dynamics and nanoscience, with over 130,000 citations. Honors For his work in the area of applying laser spectroscopic techniques to study of properties and behavior on the nanoscale, El-Sayed was elected to the National Academy of Sciences in 1980. In 1989 he received the Tolman Award, and in 2002, he won the Irving Langmuir Award in Chemical Physics. He has been the recipient of the 1990 King Faisal International Prize ("Arabian Nobel Prize") in Sciences, Georgia Tech's highest award, "The Class of 1943 Distinguished Professor", an honorary doctorate of philosophy from the Hebrew University, and several other awards including some from the different American Chemical Society local sections. He was a Sherman Fairchild Distinguished Scholar at the California Institute of Technology and an Alexander von Humboldt Senior U.S. Scientist Awardee. He served as editor-in-chief of the Journal of Physical Chemistry from 1980 to 2004 and has also served as the U.S. editor of the International Reviews in Physical Chemistry. He is a Fellow of the American Academy of Arts and Sciences, a member of the American Physical Society, the American Association for the Advancement of Science and the Third World Academy of Science. 
Mostafa El-Sayed was awarded the 2007 US National Medal of Science "for his seminal and creative contributions to our understanding of the electronic and optical properties of nanomaterials and to their applications in nanocatalysis and nanomedicine, for his humanitarian efforts of exchange among countries and for his role in developing the scientific leadership of tomorrow." El-Sayed was also announced as the recipient of the 2009 Ahmed Zewail Prize in Molecular Sciences. In 2011, he was ranked #17 in Thomson Reuters' listing of the Top Chemists of the Past Decade. Professor El-Sayed also received the 2016 Priestley Medal, the American Chemical Society’s highest honor, for his decades-long contributions to chemistry. The El-Sayed rule This rule pertains to phosphorescence and similar phenomena. A molecule's electrons can occupy different electronic states, depending on the energy of the system. The rule states that an isoenergetic hop between two such states happens more readily when a change in an electron's spin is compensated by a change in its orbital motion, through spin-orbit coupling. Intersystem crossing (ISC) is a photophysical process involving an isoenergetic radiationless transition between two electronic states having different multiplicities. It often results in a vibrationally excited molecular entity in the lower electronic state, which then usually decays to its lowest molecular vibrational level. ISC is formally forbidden by the rules of conservation of angular momentum. As a consequence, ISC generally occurs on very long time scales. However, the El-Sayed rule states that the rate of intersystem crossing, e.g. from the lowest singlet state to the triplet manifold, is relatively large if the radiationless transition involves a change of molecular orbital type. For example, a (π,π*) singlet could transition to a (n,π*) triplet state, but not readily to a (π,π*) triplet state, and vice versa. Formulated by El-Sayed in the 1960s, the rule is found in most photochemistry textbooks as well as the IUPAC Gold Book. The rule is useful in understanding phosphorescence, vibrational relaxation, intersystem crossing, internal conversion and lifetimes of excited states in molecules. Notes References El-Sayed, M.A., Acc. Chem. Res. 1968, 1, 8. Lower, S.K.; El-Sayed, M.A., Chem. Rev. 1966, 66, 199. Mostafa Amr El-Sayed (born 8 May 1933 in Zifta, Egypt; Egyptian-American) Biographical References: McMurray, Emily J. (ed.), Notable Twentieth-Century Scientists, Gale Research, Inc.: New York, 1995. External links Faculty web page at Georgia Tech Laser Dynamics Lab at Georgia Tech President Bush to laud Georgia Tech’s Mostafa El-Sayed Mostafa El-Sayed praised for contributions to nanotechnology Biochemists Egyptian chemists Egyptian Muslims American Muslims Egyptian inventors Egyptian emigrants to the United States Harvard University staff Members of the United States National Academy of Sciences Florida State University alumni Georgia Tech faculty Living people 1933 births National Medal of Science laureates Ain Shams University alumni American physical chemists Fellows of the American Physical Society
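A schematic restatement of the rule described above, using standard photochemistry state labels; this is an illustrative summary rather than material from the article, and "fast" and "slow" are qualitative labels for the relative intersystem crossing rates.

```latex
% Qualitative summary of the El-Sayed rule (illustrative; not taken from the source article).
% ISC between states of different orbital character is comparatively fast;
% ISC between states of the same orbital character is comparatively slow.
\begin{align*}
{}^{1}(n,\pi^{*}) \longrightarrow {}^{3}(\pi,\pi^{*}) &\quad \text{orbital type changes: ISC relatively fast}\\
{}^{1}(\pi,\pi^{*}) \longrightarrow {}^{3}(n,\pi^{*}) &\quad \text{orbital type changes: ISC relatively fast}\\
{}^{1}(n,\pi^{*}) \longrightarrow {}^{3}(n,\pi^{*}) &\quad \text{same orbital type: ISC relatively slow}\\
{}^{1}(\pi,\pi^{*}) \longrightarrow {}^{3}(\pi,\pi^{*}) &\quad \text{same orbital type: ISC relatively slow}
\end{align*}
```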
Mostafa El-Sayed
[ "Chemistry", "Biology" ]
1,400
[ "Biochemistry", "Biochemists" ]
3,155,512
https://en.wikipedia.org/wiki/Digital%20Video%20Interactive
Digital Video Interactive (DVI) was the first multimedia desktop video standard for IBM-compatible personal computers. It enabled full-screen, full-motion video, as well as stereo audio, still images, and graphics to be presented on a DOS-based desktop computer using a special compression chipset. The scope of Digital Video Interactive encompasses a file format, including a digital container format, a number of video and audio compression formats, as well as hardware associated with the file format. History Development of DVI was started around 1984 by Section 17 of the David Sarnoff Research Center Labs (DSRC), then responsible for the research and development activities of RCA. When General Electric purchased RCA in 1986, GE considered the DSRC redundant with its own labs, and sought a buyer. In 1988, GE sold the DSRC to SRI International, but sold the DVI technology separately to Intel Corporation. DVI technology allowed full-screen, full-motion digital video, as well as stereo audio, still images, and graphics to be presented on a DOS-based desktop computer. DVI content was created using the Authology Multimedia authoring system developed by CEIT Systems and was usually distributed on CD-ROM discs; it was decoded and displayed via specialized add-in card hardware installed in the computer. Audio and video files for DVI were among the first to use data compression, with audio content using ADPCM. DVI was the first technology of its kind for the desktop PC, and ushered in the multimedia revolution for PCs. DVI was announced to a standing ovation at the second annual Microsoft CD-ROM conference in Seattle in 1987. The excitement at the time stemmed from the fact that a CD-ROM drive of the era had a maximum data playback rate of ~1.2 Mbit/s, thought to be insufficient for good-quality motion video. However, the DSRC team was able to extract motion video, stereo audio and still images from this relatively low data rate with good quality. Implementations The first implementation of DVI, developed in the mid-1980s, relied on three 16-bit ISA cards installed inside the computer: one for audio processing, another for video processing, and the last as an interface to a Sony CDU-100 CD-ROM drive. The DVI video card used a custom chipset (later known as the i80750 or i750 chipset) for decompression; one device was known as the pixel processor and the display device was called the VDP (video display processor). Later DVI implementations used a single, more highly integrated card, such as Intel's ActionMedia series (omitting the CD-ROM interface). The ActionMedia (and the later ActionMedia II) were available as both ISA and MCA-bus cards, the latter for use in MCA-bus PCs like IBM's PS/2 series. Intel drew on the i750 technology in driving the creation of the MMX instruction set. Compression The DVI format specified two video compression schemes, Presentation Level Video (also known as Production Level Video) and Real-Time Video, as well as two audio compression schemes, ADPCM and PCM8. The original video compression scheme, called Presentation Level Video, was asymmetric in that a Digital VAX-11/750 minicomputer was used to compress the video in non-real time to 30 frames per second at a resolution of 320x240. Encoding was performed by Intel at its facilities or at licensed encoding facilities set up by Intel. Video compression involved coding both still frames and motion-compensated residuals using vector quantization in dimensions 1, 2, and 4. 
The resulting file (in the .AVS format) was displayed in real time on an IBM PC-AT (i286), with the add-in boards providing decompression and display functions at the NTSC rate (30 frames/s). An IBM PC-AT equipped with the DVI add-in boards therefore had two monitors: the original monochrome control monitor and a second Sony CDP1302 monitor for the color video. Stereo audio at near-FM quality was also available from the system. The Real-Time Video format, then called Edit-Level Video (ELV), was introduced in March 1988. In fall 1992, version 2.1 of the Real-Time Video format was introduced by Intel as Indeo 2. Legacy of DSRC The original team from the DSRC (David Sarnoff Research Center) set up Intel operation NJ1, known as the Princeton Operation. The team occupied new quarters after moving out of the DSRC in Plainsboro, New Jersey. From the original 35 researchers, the Princeton Operation grew to over 200 people at its height. Andy Grove was a great supporter of the Princeton Team during its term of operation. However, in 1992, Ken Fine (a vice president of Intel) decided to shutter the operation and transfer those employees willing to move to other Intel sites in Arizona and Oregon. Fine left the company shortly after he implemented this decision. Final site closure occurred almost a year later, in September 1993. References External links Information on the .DVI file extension, as well as a background on DVI itself A paper titled "The Implication of Digital Video Interaction [sic] (DVI) Technology in Multimedia Post-Production Techniques" Multimedia Video codecs
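The compression passage above names vector quantization as the core coding technique in Presentation Level Video. The sketch below, in Python with NumPy, illustrates generic vector quantization only (build a small codebook, store codeword indices, reconstruct approximations); the codebook size, the 2-dimensional vectors and the k-means training are illustrative assumptions, and this is not the actual DVI/PLV codec.

```python
# Illustrative sketch of generic vector quantization (VQ).
# NOT the DVI/PLV algorithm; all parameters here are arbitrary demonstration choices.
import numpy as np

def train_codebook(vectors, codebook_size=16, iterations=20, seed=0):
    """Build a VQ codebook with plain k-means (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)]
    for _ in range(iterations):
        # Assign each vector to its nearest codeword (squared Euclidean distance).
        dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = dists.argmin(axis=1)
        # Move each codeword to the mean of the vectors assigned to it.
        for k in range(codebook_size):
            members = vectors[nearest == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def encode(vectors, codebook):
    """Replace each vector with the index of its nearest codeword."""
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

def decode(indices, codebook):
    """Reconstruct an approximation of the original vectors."""
    return codebook[indices]

if __name__ == "__main__":
    # Treat fake 8-bit samples as 2-dimensional vectors (pairs of adjacent values),
    # echoing the article's mention of VQ in dimensions 1, 2 and 4.
    rng = np.random.default_rng(1)
    samples = rng.integers(0, 256, size=4096).astype(float)
    vectors = samples.reshape(-1, 2)
    codebook = train_codebook(vectors)
    indices = encode(vectors, codebook)      # the indices are what a codec would store
    reconstructed = decode(indices, codebook)
    mse = ((vectors - reconstructed) ** 2).mean()
    print(f"codewords: {len(codebook)}, mean squared error: {mse:.1f}")
```

Storing only indices (plus a codebook) makes decoding much cheaper than encoding, which fits the asymmetric arrangement described above, with a VAX minicomputer compressing the video and the add-in boards decompressing it.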
Digital Video Interactive
[ "Technology" ]
1,080
[ "Multimedia" ]
3,155,563
https://en.wikipedia.org/wiki/Colipase
Colipase, abbreviated CLPS, is a protein co-enzyme that counteracts the inhibitory effect of intestinal bile acid on the enzymatic activity of pancreatic lipase. It is secreted by the pancreas in an inactive form, procolipase, which is activated in the intestinal lumen by trypsin. Intestinal bile acids (which aid lipid digestion by facilitating micelle formation) adhere to the surface of emulsified fat droplets, displacing lipase (which is only active at the water-fat interface) from the droplet surface. Colipase acts as a bridging molecule, binding to both lipase and bile acids, thus anchoring lipase onto the droplet surface, preventing its displacement. In humans, the colipase protein is encoded by the CLPS gene. Protein domain Colipase is also a family of evolutionarily related proteins. Colipase is a small protein cofactor needed by pancreatic lipase for efficient dietary lipid hydrolysis. Efficient absorption of dietary fats is dependent on the action of pancreatic triglyceride lipase. Colipase binds to the C-terminal, non-catalytic domain of lipase, thereby stabilising an active conformation and considerably increasing the hydrophobicity of its binding site. Structural studies of the complex and of colipase alone have revealed the functionality of its architecture. Colipase is a small protein (12K) with five conserved disulphide bonds. Structural analogies have been recognised between a developmental protein (Dickkopf), the pancreatic lipase C-terminal domain, the N-terminal domains of lipoxygenases and the C-terminal domain of alpha-toxin. These non-catalytic domains in the latter enzymes are important for interaction with membrane. It has not been established if these domains are also involved in eventual protein cofactor binding as is the case for pancreatic lipase. See also Enterostatin References Further reading External links PDBe-KB provides an overview of all the structure information available in the PDB for Pig Colipase Protein domains Membrane proteins Enzymes Protein families
Colipase
[ "Biology" ]
451
[ "Protein families", "Protein domains", "Protein classification", "Membrane proteins" ]
3,155,895
https://en.wikipedia.org/wiki/Robin%20Gandy
Robin Oliver Gandy (22 September 1919 – 20 November 1995) was a British mathematician and logician. He was a friend, student, and associate of Alan Turing, having been supervised by Turing during his PhD at the University of Cambridge, where they worked together. Education and early life Robin Gandy was born in the village of Rotherfield Peppard, Oxfordshire, England. A great-great-grandson of the architect and artist Joseph Gandy (1771–1843), he was the son of Thomas Hall Gandy (1876–1948), a general practitioner, and Ida Caroline née Hony (1885–1977), a social worker and later an author. His brother was the diplomat Christopher Gandy and his sister was the physician Gillian Gandy. Educated at Abbotsholme School in Derbyshire, Gandy took two years of the Mathematical Tripos, at King's College, Cambridge, before enlisting for military service in 1940. During World War II he worked on radio intercept equipment at Hanslope Park, where Alan Turing was working on a speech encipherment project, and he became one of Turing's lifelong friends and associates. In 1946, he completed Part III of the Mathematical Tripos, then began studying for a PhD under Turing's supervision. He completed his thesis, On axiomatic systems in mathematics and theories in Physics, in 1952. He was a member of the Cambridge Apostles. Career and research Gandy held positions at the University of Leicester, the University of Leeds, and the University of Manchester. He was a visiting associate professor at Stanford University from 1966 to 1967 and held a similar position at University of California, Los Angeles in 1968. In 1969, he moved to Wolfson College, Oxford, where he became Reader in Mathematical Logic. Gandy is known for his work in recursion theory. His contributions include the Spector–Gandy theorem, the Gandy Stage Comparison theorem, and the Gandy Selection theorem. He also made a significant contribution to the understanding of the Church–Turing thesis, and his generalisation of the Turing machine is called a Gandy machine. Gandy died in Oxford, England on 20 November 1995. Legacy The Robin Gandy Buildings, a pair of accommodation blocks at Wolfson College, Oxford, are named after Gandy. A one-day centenary Gandy Colloquium was held on 22 February 2020 at the College in Gandy's honour, including contributions by some of his students; the speakers were Marianna Antonutti Marfori (Munich), Andrew Hodges (Oxford), Martin Hyland (Cambridge), Jeff Paris (Manchester), Göran Sundholm (Leiden), Christine Tasson (Paris), and Philip Welch (Bristol). References 1919 births 1995 deaths Military personnel from Oxfordshire British Army soldiers People from South Oxfordshire District Alumni of King's College, Cambridge British Army personnel of World War II English logicians 20th-century English philosophers 20th-century English mathematicians Mathematical logicians Academics of the University of Leicester Academics of the University of Leeds Academics of the Victoria University of Manchester Stanford University Department of Mathematics faculty University of California, Los Angeles faculty Fellows of Wolfson College, Oxford Robin
Robin Gandy
[ "Mathematics" ]
650
[ "Mathematical logic", "Mathematical logicians" ]
3,155,959
https://en.wikipedia.org/wiki/Intercast
Intercast was a short-lived technology developed in 1996 by Intel for broadcasting information such as web pages and computer software alongside a single television channel. It required a compatible TV tuner card installed in a personal computer and a decoding program called Intel Intercast Viewer. The data for Intercast was embedded in the Vertical Blanking Interval (VBI) of the video signal carrying the Intercast-enabled program, at a maximum of 10.5 kilobytes per second in 10 of the 45 lines of the VBI. With Intercast, a computer user could watch the TV broadcast in one window of the Intercast Viewer while viewing HTML web pages in another window. Users could also download software transmitted via Intercast. Most often the web pages received were relevant to the television program being broadcast, such as extra information about the program, or extra news headlines and weather forecasts during a newscast. Intercast can be seen as a more modern version of teletext. The Intercast Viewer software was bundled with several TV tuner cards at the time, such as the Hauppauge Win-TV card. Also at the time of Intercast's introduction, Compaq offered some models of computers with built-in TV tuners installed with the Intercast Viewer software. Upon its debut, Intercast was used by several TV networks, such as NBC, CNN, The Weather Channel, and MTV Networks. On June 25, 1996, Intel and NBC announced an arrangement which enabled users to watch coverage of the 1996 Summer Olympics and other programming from NBC News. Intel discontinued support for Intercast a couple of years later. NBC's series Homicide: Life on the Street was one show that was Intercast-enabled. References External links Archived copy of Intercast's web site from archive.org Article about Intercast, NBC, and the 1996 Summer Olympics Businessweek article Microsoft press release regarding Intercast and Windows 98 Intercast dying of neglect Television technology Multimedia Streaming television in the United States Computer-related introductions in 1996
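For a rough sense of scale, the back-of-envelope sketch below (Python) works the quoted data rate into a per-line payload. The assumptions that the 10 VBI lines carry data once per frame and that "kilobytes" means KiB are mine, for illustration only; they are not details from the article or from Intel's specification.

```python
# Back-of-envelope check of the quoted Intercast data rate.
# Assumptions (not from the article): the 10 VBI lines carry data once per frame,
# and "10.5 kilobytes per second" is read as 10.5 KiB/s.
NTSC_FRAME_RATE = 30000 / 1001          # ~29.97 frames per second
LINES_USED = 10                         # per the article: 10 of the 45 VBI lines
QUOTED_RATE_BYTES = 10.5 * 1024         # 10.5 KiB/s expressed in bytes per second

lines_per_second = LINES_USED * NTSC_FRAME_RATE
payload_per_line = QUOTED_RATE_BYTES / lines_per_second
print(f"{lines_per_second:.0f} data lines/s -> ~{payload_per_line:.0f} bytes of payload per line")
# Under these assumptions each used VBI line carries roughly 36 bytes of payload,
# a plausible amount of data for a single scan line.
```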
Intercast
[ "Technology" ]
421
[ "Information and communications technology", "Multimedia", "Television technology" ]