Dataset schema: id (int64, 39 to 79M), url (string, 31 to 227 chars), text (string, 6 to 334k chars), source (string, 1 to 150 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 71.8k), subcategories (list, 0 to 30 items)
16,153,400
https://en.wikipedia.org/wiki/Harding%20test
The term Harding test is generically understood to mean an automatic test for photosensitive epilepsy (PSE), triggered by provocative image sequences in television content. It has properly been known as a PSE test since the publication of the Digital Production Partnership (DPP) technical requirements and the DPP PSE Devices document (in the UK), updated in November 2018. The Harding Flash and Pattern Analyser (FPA) is proprietary software used to analyse video content for flashing and stationary patterns which may cause harm to those who suffer from photosensitive epilepsy. It is an implementation of the guidelines set by the UK regulator Ofcom, largely based on the findings of Graham Harding, a professor at Aston University. It is available in both tape-based and file-based versions, allowing video streams from SDI, composite, component, HDMI, and files to all be analysed, in resolutions up to 8K. Versions for both Microsoft Windows and Apple macOS are available. Other manufacturers offer similar and different solutions which are also approved on the DPP Devices list.

Photosensitive epilepsy

Photosensitive epilepsy affects approximately one in 4,000 people and is a form of epilepsy in which seizures are triggered by visual stimuli that form patterns in time or space, such as flashing lights, bold regular patterns, or regular moving patterns. In 1993, an advert for Pot Noodles induced seizures in three people in the United Kingdom, leading the then-regulator, the ITC, to introduce these guidelines. The Broadcast Code of Advertising Practice requires that TV ads are tested and pass a PSE test. Companies such as Clearcast are responsible for clearing ads for UK commercial broadcasters and will perform a PSE check on all ads before clearance.

Testing procedures

The algorithms behind PSE testing examine video frames from moment to moment and analyse them for potentially provocative image sequences. Luminance flashes, red flashes and spatial patterns exceeding prescribed amplitude and frequency limits are logged. Any such over-limit violations cause the media to be failed; otherwise the media is passed fit for broadcast and a pass certificate can be automatically generated (an illustrative sketch appears below, after this entry). The first PSE test was developed by Cambridge Research Systems Ltd. and is based on research by Graham Harding. All Harding FPA products implement the same guidelines. There are also other approved manufacturers' products which either use the same algorithm in different packages or have independently developed software and algorithms that broadly provide PSE checks to the same specifications.

PSE testing is currently used by all television stations in the UK to check for compliance with the guidelines. If a program fails, it usually requires re-editing of the offending scenes. Normally, problems can be rectified by reducing the number of flashes in the scene and/or reducing the intensity of colors (most notably saturated red). After re-editing the problem areas, the entire program must be re-tested in order to obtain a PSE test certificate. PSE testing is also used in Japan, particularly for anime content on both broadcast TV and online streaming platforms, following the Pokémon Shock incident in 1997. In 2010, HardingTest.com was launched to provide users with a way of testing video remotely, without the need for an in-house Harding FPA machine.
This provided a much-needed service for freelance editors and production companies, who previously had to export their movie to videotape and send it to a larger post-production facility for testing, all of which increased time and expense. This service means users can upload a digital video file and have it tested, with results returned within minutes rather than hours.

References

British inventions Epilepsy types Medical software Television advertising
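Below is a minimal, illustrative sketch of the flash-counting check described under Testing procedures above. It is not the Harding FPA algorithm; the thresholds (a luminance step of 20 cd/m², a quarter of the screen area, more than three flashes per second) loosely follow published PSE guidance such as Ofcom's rules and ITU-R BT.1702, and the function name and frame format are assumptions made for the example.

```python
import numpy as np

LUMA_DELTA = 20.0           # minimum luminance step, assumed cd/m^2
AREA_FRACTION = 0.25        # share of the frame that must change
MAX_FLASHES_PER_SECOND = 3  # guideline limit on opposing flash pairs

def flash_violation_times(frames, fps):
    """frames: sequence of 2-D numpy arrays of per-pixel screen luminance.
    Returns the times (in seconds) at which the flash-rate limit is exceeded."""
    transition_times = []
    last_sign = 0
    for i in range(1, len(frames)):
        delta = frames[i].astype(float) - frames[i - 1].astype(float)
        changed = np.abs(delta) >= LUMA_DELTA
        # Ignore changes affecting too small an area of the screen.
        if changed.mean() < AREA_FRACTION:
            continue
        sign = 1 if float(delta[changed].mean()) > 0.0 else -1
        if sign != last_sign:  # an opposing luminance change = one transition
            transition_times.append(i / fps)
            last_sign = sign
    # A flash is a pair of opposing transitions; flag any one-second window
    # holding more than MAX_FLASHES_PER_SECOND flashes.
    times = np.array(transition_times)
    return [t for t in times
            if ((times >= t) & (times < t + 1.0)).sum() / 2 > MAX_FLASHES_PER_SECOND]
```

A real conformance tester also tracks red flashes and stationary patterns, which this sketch omits.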
Harding test
[ "Biology" ]
747
[ "Medical software", "Medical technology" ]
16,153,473
https://en.wikipedia.org/wiki/Device%20fingerprint
A device fingerprint or machine fingerprint is information collected about the software and hardware of a remote computing device for the purpose of identification. The information is usually assimilated into a brief identifier using a fingerprinting algorithm. A browser fingerprint is information collected specifically by interaction with the web browser of the device. Device fingerprints can be used to fully or partially identify individual devices even when persistent cookies (and zombie cookies) cannot be read or stored in the browser, the client IP address is hidden, or one switches to another browser on the same device. This may allow a service provider to detect and prevent identity theft and credit card fraud, but also to compile long-term records of individuals' browsing histories (and deliver targeted advertising or targeted exploits) even when they are attempting to avoid tracking – raising a major concern for internet privacy advocates.

History

Basic web browser configuration information has long been collected by web analytics services in an effort to measure real human web traffic and discount various forms of click fraud. Since its introduction in the late 1990s, client-side scripting has gradually enabled the collection of an increasing amount of diverse information, with some computer security experts starting to complain about the ease of bulk parameter extraction offered by web browsers as early as 2003.

In 2005, researchers at the University of California, San Diego showed how TCP timestamps could be used to estimate the clock skew of a device, and consequently to remotely obtain a hardware fingerprint of the device.

In 2010, the Electronic Frontier Foundation launched a website where visitors can test their browser fingerprint. After collecting a sample of 470,161 fingerprints, they measured at least 18.1 bits of entropy possible from browser fingerprinting, but that was before the advancement of canvas fingerprinting, which claims to add another 5.7 bits.

In 2012, Keaton Mowery and Hovav Shacham, researchers at the University of California, San Diego, showed how the HTML5 canvas element could be used to create digital fingerprints of web browsers.

In 2013, at least 0.4% of Alexa top 10,000 sites were found to use fingerprinting scripts provided by a few known third parties. In 2014, 5.5% of Alexa top 10,000 sites were found to use canvas fingerprinting scripts served by a total of 20 domains. The overwhelming majority (95%) of the scripts were served by AddThis, which started using canvas fingerprinting in January of that year, without the knowledge of some of its clients.

In 2015, a feature to protect against browser fingerprinting was introduced in Firefox version 41, but it has since been left in an experimental stage, not enabled by default. The same year, a feature named Enhanced Tracking Protection was introduced in Firefox version 42 to protect against tracking during private browsing, by blocking scripts from third-party domains found in the lists published by Disconnect Mobile. At WWDC 2018, Apple announced that Safari on macOS Mojave "presents simplified system information when users browse the web, preventing them from being tracked based on their system configuration." In 2019, starting from Firefox version 69, Enhanced Tracking Protection was turned on by default for all users, during non-private browsing as well. The feature was first introduced to protect private browsing in 2015 and was then extended to standard browsing as an opt-in feature in 2018.
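As a worked illustration of those entropy figures: 18.1 bits of entropy means that, on average, a given fingerprint is shared by roughly one browser in 2^18.1 ≈ 280,000. A short sketch of how such a figure is estimated from a sample of observed fingerprints follows; the sample data and function name are invented for the example.

```python
import math
from collections import Counter

def fingerprint_entropy_bits(fingerprints):
    """Shannon entropy (in bits) of an observed sample of fingerprint strings."""
    counts = Counter(fingerprints)
    n = len(fingerprints)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Invented toy sample: three browsers share one fingerprint, two are unique.
sample = ["a1", "a1", "a1", "b2", "c3"]
print(round(fingerprint_entropy_bits(sample), 2))  # 1.37 bits
```

The more uniformly fingerprints are spread across a population, the higher the entropy and the more identifying a fingerprint is.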
Diversity and stability

Motivation for the device fingerprint concept stems from the forensic value of human fingerprints. In order to uniquely distinguish devices over time through their fingerprints, the fingerprints must be both sufficiently diverse and sufficiently stable. In practice neither diversity nor stability is fully attainable, and improving one has a tendency to adversely impact the other. For example, the assimilation of an additional browser setting into the browser fingerprint would usually increase diversity, but it would also reduce stability, because if a user changes that setting, then the browser fingerprint would change as well.

A certain degree of instability can be compensated for by linking together fingerprints that, although partially different, probably belong to the same device. This can be accomplished by a simple rule-based linking algorithm (which, for example, links together fingerprints that differ only in the browser version, if that increases with time) or by machine learning algorithms. Entropy is one of several ways to measure diversity.

Sources of identifying information

Applications that are locally installed on a device are allowed to gather a great amount of information about the software and the hardware of the device, often including unique identifiers such as the MAC address and serial numbers assigned to the machine hardware. Indeed, programs that employ digital rights management use this information for the very purpose of uniquely identifying the device. Even if they are not designed to gather and share identifying information, local applications might unwittingly expose identifying information to the remote parties with which they interact. The most prominent example is that of web browsers, which have been shown to expose diverse and stable information in such an amount as to allow remote identification (see the browser fingerprint section below).

Diverse and stable information can also be gathered below the application layer, by leveraging the protocols that are used to transmit data. Sorted by OSI model layer, some examples of protocols that can be utilized for fingerprinting are:

OSI Layer 7: SMB, FTP, HTTP, Telnet, TLS/SSL, DHCP
OSI Layer 5: SNMP, NetBIOS
OSI Layer 4: TCP (see TCP/IP stack fingerprinting)
OSI Layer 3: IPv4, IPv6, ICMP
OSI Layer 2: IEEE 802.11, CDP

Passive fingerprinting techniques merely require the fingerprinter to observe traffic originating from the target device, while active fingerprinting techniques require the fingerprinter to initiate connections to the target device. Techniques that require interaction with the target device over a connection initiated by the latter are sometimes described as semi-passive.

Browser fingerprint

The collection of a large amount of diverse and stable information from web browsers is possible in large part due to client-side scripting languages, which were introduced in the late 1990s. Today there are several open-source browser fingerprinting libraries, such as FingerprintJS, ImprintJS, and ClientJS, of which FingerprintJS is updated most often and supersedes ImprintJS and ClientJS to a large extent.

Browser version

Browsers provide their name and version, together with some compatibility information, in the User-Agent request header. Being a statement freely given by the client, it should not be trusted when assessing its identity.
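For illustration, a naive check might simply read that header, as in the sketch below; the regular expression and example string are assumptions made for the example, not any library's actual parsing rules.

```python
import re

# Naive User-Agent inspection: trivially spoofable, because the client
# can put any string it likes in this header.
ua = "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0"

match = re.search(r"(Firefox|Chrome|Safari|Edg)/([\d.]+)", ua)
if match:
    family, version = match.groups()
    print(family, version)  # Firefox 115.0
```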
Instead, the type and version of the browser can be inferred from the observation of quirks in its behavior: for example, the order and number of HTTP header fields is unique to each browser family and, most importantly, each browser family and version differs in its implementation of HTML5, CSS and JavaScript. Such differences can be remotely tested by using JavaScript. A Hamming distance comparison of parser behaviors has been shown to effectively fingerprint and differentiate a majority of browser versions.

Browser extensions

A combination of extensions or plugins unique to a browser can be added to a fingerprint directly. Extensions may also modify how other browser attributes behave, adding further complexity to the user's fingerprint. Adobe Flash and Java plugins were widely used to access user information before their deprecation.

Hardware properties

User agents may provide system hardware information, such as the phone model, in the HTTP header. Properties of the user's operating system, screen size, screen orientation, and display aspect ratio can also be retrieved by using JavaScript to observe the result of CSS media queries.

Browsing history

The fingerprinter could determine which sites within a list it provided the browser had previously visited, by querying the list using JavaScript with the CSS selector :visited. Typically, a list of 50 popular websites was sufficient to generate a unique user history profile, as well as provide information about the user's interests. However, browsers have since mitigated this risk.

Font metrics

Letter bounding boxes differ between browsers based on anti-aliasing and font hinting configuration and can be measured by JavaScript.

Canvas and WebGL

Canvas fingerprinting uses the HTML5 canvas element, which is used by WebGL to render 2D and 3D graphics in a browser, to gain identifying information about the installed graphics driver, graphics card, or graphics processing unit (GPU). Canvas-based techniques may also be used to identify installed fonts. Furthermore, if the user does not have a GPU, CPU information can be provided to the fingerprinter instead. A canvas fingerprinting script first draws text of a specified font, size, and background color. The image of the text as rendered by the user's browser is then recovered with the toDataURL() canvas API method. The hashed text-encoded data becomes the user's fingerprint. Canvas fingerprinting methods have been shown to produce 5.7 bits of entropy. Because the technique obtains information about the user's GPU, the information entropy gained is "orthogonal" to the entropy of previous browser fingerprint techniques such as screen resolution and JavaScript capabilities.

Hardware benchmarking

Benchmark tests can be used to determine whether a user's CPU utilizes AES-NI or Intel Turbo Boost by comparing the CPU time used to execute various simple or cryptographic algorithms. Specialized APIs can also be used, such as the Battery API, which constructs a short-term fingerprint based on the actual battery state of the device, or OscillatorNode, which can be invoked to produce a waveform based on user entropy. A device's hardware ID, which is a cryptographic hash function specified by the device's vendor, can also be queried to construct a fingerprint.
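Whatever mix of attributes is collected, the final step is usually the same: serialize the observed values and hash them into the "brief identifier" mentioned at the top of this article. A condensed sketch follows, assuming the attribute values (including a canvas data URL) have already been gathered client-side; the attribute names and values here are invented for the example.

```python
import hashlib

# Example attribute values are invented; in practice they would be read
# from the browser (User-Agent, screen size, canvas.toDataURL(), ...).
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "1920x1080x24",
    "timezone": "UTC+2",
    "canvas": "data:image/png;base64,iVBORw0KGgo...",
}

# Stable serialization: sort the keys so identical attribute sets
# always hash to the same identifier.
serialized = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
fingerprint = hashlib.sha256(serialized.encode("utf-8")).hexdigest()
print(fingerprint[:16])  # brief identifier for the device
```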
Mitigation methods for browser fingerprinting

Different approaches exist to mitigate the effects of browser fingerprinting and improve users' privacy by preventing unwanted tracking, but there is no ultimate approach that can prevent fingerprinting while keeping the richness of a modern web browser.

Offering a simplified fingerprint

Users may attempt to reduce their fingerprintability by selecting a web browser which minimizes the availability of identifying information, such as browser fonts, device ID, canvas element rendering, WebGL information, and local IP address. As of 2017, Microsoft Edge is considered to be the most fingerprintable browser, followed by Firefox and Google Chrome, Internet Explorer, and Safari. Among mobile browsers, Google Chrome and Opera Mini are most fingerprintable, followed by mobile Firefox, mobile Edge, and mobile Safari. Tor Browser disables fingerprintable features such as the canvas and WebGL APIs and notifies users of fingerprinting attempts. In order to reduce diversity, Tor Browser does not allow the width and height of the window available to the webpage to be an arbitrary number of pixels, allowing only certain given values. The result is that the webpage is windowboxed: it fills a space that is slightly smaller than the browser window.

Offering a spoofed fingerprint

Spoofing some of the information exposed to the fingerprinter (e.g. the user agent) may create a reduction in diversity, but the contrary could also be achieved if the spoofed information differentiates the user from all the others who do not use such a strategy more than the real browser information would. Spoofing the information differently at each site visit, for example by perturbing the sound and canvas rendering with a small amount of random noise, allows a reduction of stability. This technique was adopted by the Brave browser in 2020.

Blocking scripts

Blindly blocking client-side scripts served from third-party domains, and possibly also first-party domains (e.g. by disabling JavaScript or using NoScript), can sometimes render websites unusable. The preferred approach is to block only third-party domains that seem to track people, either because they are found on a blacklist of tracking domains (the approach followed by most ad blockers) or because the intention of tracking is inferred from past observations (the approach followed by Privacy Badger).

Using multiple browsers

Different browsers on the same machine would usually have different fingerprints, but if neither browser is protected against fingerprinting, then the two fingerprints could be identified as originating from the same machine.

See also

Anonymous web browsing
Browser security
Browser sniffing
Evercookie
Fingerprint (computing)
Internet privacy
Web tracking

References

Further reading

External links

Panopticlick, by the Electronic Frontier Foundation, gathers some elements of a browser's device fingerprint and estimates how identifiable it makes the user.
Am I Unique, by INRIA and INSA Rennes, implements fingerprinting techniques including collecting information through WebGL.

Computer network security Internet privacy Internet fraud Fingerprinting algorithms Web analytics
Device fingerprint
[ "Technology", "Engineering" ]
2,626
[ "Cybersecurity engineering", "Wireless locating", "Computer networks engineering", "Tracking", "Computer network security" ]
16,155,733
https://en.wikipedia.org/wiki/WOT%20Services
WOT Services is the developer of MyWOT (also known as WOT and Web of Trust), an online reputation and Internet safety service which shows indicators of trust about existing websites. The confidence level is based both on user ratings and on third-party malware, phishing, scam and spam blacklists. The service also provides crowdsourced reviews about the extent to which websites are trustworthy, respect user privacy, and meet standards of vendor reliability and child safety. Its website user interface is available in four languages: English, French, Portuguese and Russian. Its website uses machine translation on the domain-name scorecard webpages for logged-in users/commenters.

History

WOT Services was founded in 2006 by Sami Tolvanen and Timo Ala-Kleemola, who wrote the MyWOT software as postgraduates at the Tampere University of Technology in Finland. They launched the service officially in 2007, with Esa Suurio as CEO. Suurio was replaced in November 2009, and both founders left the company in 2014. In 2009, MySQL founder Michael Widenius invested in WOT Services and became a member of the board of directors. WOT Services is no longer a portfolio company of Widenius's venture capital firm, OpenOcean.vc. WOT Services has partnerships with Mail.ru, Facebook, hpHosts, LegitScript, Panda Security, PhishTank, GlobalSign and TRUSTe. By November 2013, WOT Services had over 100 million downloads. In 2016, a Norddeutscher Rundfunk investigation revealed that WOT Services had made money by collecting user activity and browsing history data from its apps and browser extensions and selling that data to third parties, in violation of the privacy policies of the app stores on which the software was distributed; the company said that it anonymized the data before selling it.

Sale of user-related data

In November 2016, a German state media investigation found that WOT Services had secretly collected personal user details and sold or licensed this information to unidentified third-party businesses and entities for data monetization purposes. This activity breached the privacy rules and guidelines set by several browsers. As a result, the browser add-on was involuntarily removed from Mozilla Firefox's add-on store, and voluntarily removed from other browsers' add-on/extension stores. WOT was eventually reinstated.

On November 1, 2016, German public broadcasting station NDR reported the results of an investigation by in-house journalists, showing that WOT collected, recorded, analyzed and sold user-related data to third parties. The data obtained was traceable to WOT and could be assigned to specific individuals, despite WOT's claim that user data was anonymized. The NDR investigative journalism report was based on freely available sample data, and revealed that sensitive private information of more than 50 users could be retrieved. The information included websites visited, account names and email addresses, potentially revealing user illnesses, sexual preferences and drug consumption. The journalists also reconstructed a media company's confidential revenue data, and details about an ongoing police investigation. German media contacted WOT Services with the results of the investigation prior to publication of the report. WOT declined to comment on the findings. A few days after the news story aired, Mozilla removed the browser add-on from the Firefox add-on store.
WOT subsequently removed its browsing tool for other browsers, including Chrome and Opera. The WOT "Mobile Security & Protection" mobile app was removed from Google Play approximately one week after the extension was removed from the Google Chrome extension store. In a blog post published on December 19, 2016, WOT Services stated that they had upgraded their browser extension and released it in the Google Chrome extension gallery. The upgraded version included "several major code updates in order to protect our user's privacy and an opt-out option from the user Settings, for users who do not wish to share data with us but still want to have easy access to WOT." In February 2017, Mozilla reinstated the MyWOT browsing tool in the Firefox add-on store.

MyWOT add-on

WOT Services offers an add-on for web browsers including Firefox, Google Chrome, Opera, Internet Explorer and Baidu. The extension rates websites based on their reputation score and provides end users with a red, yellow, or green indicator, with red meaning that the site has a poor reputation score.

Lawsuits

In February 2011, a lawsuit was filed in Florida (United States) against WOT and some of its forum members, demanding that WOT remove certain website ratings and associated comments cautioning about phishing scams. The court dismissed the case with prejudice. In Germany, courts issued some preliminary injunctions requiring feedback to be deleted.

See also

McAfee SiteAdvisor
Netcraft
Norton Safe Web
Website Reputation Ratings
Google SafeSearch

References

Free Firefox WebExtensions Internet Explorer add-ons Google Chrome extensions Computer security software Social engineering (security) Review websites Reputation management Computational trust Privacy controversies
WOT Services
[ "Technology", "Engineering" ]
1,072
[ "Malware", "Cybersecurity engineering", "Computer security software", "Computer security exploits", "Computational trust" ]
16,157,429
https://en.wikipedia.org/wiki/Part%20%28music%29
A part in music refers to a component of a musical composition. Because there are multiple ways to separate these components, there are several contradictory senses in which the word "part" is used:

any individual melody (or voice), whether vocal or instrumental, that can be abstracted as continuous and independent from other notes being performed simultaneously in polyphony. Within the music played by a single pianist, one can often identify outer parts (the top and bottom parts) or an inner part (those in between). On the other hand, within a choir, "outer parts" and "inner parts" would refer to music performed by different singers. (See Polyphony and part-writing below.)
the musical instructions for any individual instrument or voice (often given as a handwritten, printed, or digitized document) of sheet music (as opposed to the full score, which shows all parts of the ensemble in the same document). A musician's part usually does not contain instructions for the other players in the ensemble, only instructions for that individual.
the music played by any group of musicians who all perform together for a given piece; in a symphony orchestra, a dozen or more cello players may all play "the same part" even if they each have their own physical copy of the music. This part may be in unison or may be harmonized, and may even sometimes contain counter-melodies within it. A percussion part may sometimes only contain rhythm. This sense of "part" does not require a written copy of the music; a bass player in a rock band "plays the bass part" even if there is no written version of the song.
a section in the large-scale form of a piece. (See Musical form below.)

Polyphony and part-writing

Part-writing (or voice leading) is the composition of parts in consideration of harmony and counterpoint. In the context of polyphonic composition the term voice may be used instead of part to denote a single melodic line or textural layer. The term is generic, and is not meant to imply that the line should necessarily be vocal in character; it may instead refer to instrumentation, the function of the line within the counterpoint structure, or simply to register.

The historical development of polyphony and part-writing is a central thread through European music history. The earliest notated pieces of music in Europe were Gregorian chant melodies. The Codex Calixtinus (12th century) appears to contain the earliest extant decipherable part music. Many histories of music trace the development of new rules for dissonances and the shifting stylistic possibilities for relationships between parts. In some places and time periods, part-writing has been systematized as a set of counterpoint rules taught to musicians as part of their early education. One notable example is Johann Fux's Gradus ad Parnassum, which dictates a style of counterpoint writing that resembles the work of the famous Renaissance composer Palestrina. The standard for most Western music theory in the twentieth century is generalized from the work of Classical composers in the common practice period.

Polyphony and part-writing are also present in many popular music and folk music traditions, although they may not be described as explicitly or systematically as they sometimes are in the Western tradition. The lead part or lead voice is the most prominent, melodically most important voice (often, but not necessarily, the highest in pitch) and is played by a lead instrument or performer (e.g. a lead vocalist).
Musical form

In musical forms, a part may refer to a subdivision in the structure of a piece. Sometimes "part" is a title given by the composer or publisher to the main sections of a large-scale work, especially oratorios. For example, Handel's Messiah is organized into Part I, Part II, and Part III, each of which contains multiple scenes and one or two dozen individual arias or choruses. At other times, "part" is used to refer in a more general sense to any identifiable section of the piece. This is, for example, the case in the widely used ternary form, usually schematized as A–B–A. In this form the first and third parts (A) are musically identical, or very nearly so, while the second part (B) in some way provides a contrast with them. In this meaning of part, similar terms used are section, strain, or turn.

See also

Partbook
Cantus firmus

References

Melody Musical texture Polyphonic form Voicing (music) Formal sections in music analysis
Part (music)
[ "Technology" ]
923
[ "Components", "Formal sections in music analysis" ]
16,159,670
https://en.wikipedia.org/wiki/Coxeter%27s%20loxodromic%20sequence%20of%20tangent%20circles
In geometry, Coxeter's loxodromic sequence of tangent circles is an infinite sequence of circles arranged so that any four consecutive circles in the sequence are pairwise mutually tangent. This means that each circle in the sequence is tangent to the three circles that precede it and also to the three circles that follow it.

Properties

The radii of the circles in the sequence form a geometric progression with ratio k = φ + √φ ≈ 2.89005, where φ is the golden ratio. This ratio and its reciprocal satisfy the equation (1 + x + x² + x³)² = 2(1 + x² + x⁴ + x⁶), and so any four consecutive circles in the sequence meet the conditions of Descartes' theorem (a numerical check appears below). The centres of the circles in the sequence lie on a logarithmic spiral. Viewed from the centre of the spiral, the angle between the centres of successive circles is approximately 128.17°. The angle between consecutive triples of centres is the same as one of the angles of the Kepler triangle, a right triangle whose construction also involves the square root of the golden ratio.

History and related constructions

The construction is named after the geometer H. S. M. Coxeter, who generalised the two-dimensional case to sequences of spheres and hyperspheres in higher dimensions. It can be interpreted as a degenerate special case of the Doyle spiral.

See also

Apollonian gasket

References

External links

Circle packing Golden ratio Eponyms in geometry
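A quick numerical check of the ratio above, as a short sketch (variable names are mine): four consecutive circles whose curvatures are in geometric progression 1, k, k², k³ should satisfy Descartes' theorem, (a + b + c + d)² = 2(a² + b² + c² + d²).

```python
import math

phi = (1 + math.sqrt(5)) / 2   # golden ratio
k = phi + math.sqrt(phi)       # common ratio of the progression

curvatures = [k ** i for i in range(4)]
lhs = sum(curvatures) ** 2
rhs = 2 * sum(c * c for c in curvatures)

print(round(k, 5))            # 2.89005
print(abs(lhs - rhs) < 1e-9)  # True: Descartes' theorem holds
```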
Coxeter's loxodromic sequence of tangent circles
[ "Mathematics" ]
263
[ "Geometry problems", "Eponyms in geometry", "Packing problems", "Golden ratio", "Circle packing", "Geometry", "Mathematical problems" ]
16,159,954
https://en.wikipedia.org/wiki/Mohammad%20Aslam%20Khan%20Khalil
Mohammad Aslam Khan Khalil, M.A.K. Khalil or Aslam Khalil (born January 7, 1950) is a theoretical physicist known for his leading research in atmospheric physics. Early in his career, he worked on the quantum field theory of elementary particles. During the last three decades, he has worked on global change science, including the physics, chemistry and biology of greenhouse gases and ozone-depleting compounds. He is a professor of physics at Portland State University.

Selected publications

Scientific papers, books and articles

M.A.K. Khalil. Global Climate Change and Human Life. J. Wiley & Sons, UK, 2022.
M.A.K. Khalil. Non-CO2 greenhouse gases in the atmosphere. Annual Review of Energy/Environment, 24: 245–261, 1999.
M.A.K. Khalil. Earth's atmosphere. In Encyclopedia of Geochemistry (Encyclopedia of Earth Sciences Series), C.P. Marshall and R.W. Fairbridge, editors, Kluwer Academic Publishers, pp. 143–145, 1999.
M.A.K. Khalil, R.A. Rasmussen, M.J. Shearer, R.W. Dalluge, L.X. Ren, and C.-L. Duan. Measurements of methane emissions from rice fields in China. J. Geophys. Res., 103(D19): 25,181–25,210, 1998.
M.A.K. Khalil, R.A. Rasmussen, M.J. Shearer, Z.-L. Chen, H. Yao, and Y. Jun. Emissions of methane, nitrous oxide, and other trace gases from rice fields in China. J. Geophys. Res., 103(D19): 25,241–25,250, 1998.
M.A.K. Khalil, M.J. Shearer, and R.A. Rasmussen. Atmospheric methane over the last century. World Resource Review, 8(4): 481–492, 1996.
Y. Lu and M.A.K. Khalil. The distribution of solar radiation in the Earth's atmosphere: The effects of ozone, aerosols, and clouds. Chemosphere, 32(4): 739–758, 1996.
M.A.K. Khalil. Greenhouse gases in the earth's atmosphere. In Encyclopedia of Environmental Biology, Volume 2, W.A. Nirenberg, editor, Academic Press, Florida, pp. 251–265, 1995.
M.A.K. Khalil and R.A. Rasmussen. The global sources of nitrous oxide. J. Geophys. Res., 97(D13): 14,651–14,660, 1992.
R.M. MacKay and M.A.K. Khalil. Theory and development of a one-dimensional time-dependent radiative convective climate model. Chemosphere, 22(3–4): 383–417, 1991.
M.A.K. Khalil and F.P. Moraes. Linear least squares method for time series analysis with an application to a methane time series. Journal of the Air and Waste Management Association, 45, January 1995.

Bibliography

Atmospheric Methane by Mohammad Aslam Khan Khalil (1993, 2000)

See also

Atmospheric physics
Atmospheric chemistry

References

1950 births Living people Pakistani emigrants to the United States Theoretical physicists Portland State University faculty Pakistani physicists Muslims from Oregon
Mohammad Aslam Khan Khalil
[ "Physics" ]
840
[ "Theoretical physics", "Theoretical physicists" ]
16,160,880
https://en.wikipedia.org/wiki/II%20Pegasi
II Pegasi is a binary star system in the constellation of Pegasus with an apparent magnitude of 7.4 and a distance of 130 light-years. It is a very active RS Canum Venaticorum variable (RS CVn), a close binary system with active starspots.

The primary (II Pegasi A) is a cool subgiant, an orange K-type star. It has begun to evolve off the main sequence and expand. Starspots cover about 40% of its surface. The star produces intense flares observable at all wavelengths. Its smaller companion (II Pegasi B) is too close to have been observed directly. It is a red dwarf, an M-type main-sequence star. The stars are tidally locked in a very close orbit with a period of 6.7 days and a separation of a few stellar radii.

X-ray flares from II Pegasi A were observed with the Ariel 5 satellite in the 1970s and with later X-ray observatories. In December 2005, a superflare was detected by the Swift Gamma-Ray Burst Mission. It was the largest stellar flare ever seen and was a hundred million times more energetic than the Sun's typical solar flare.

References

RS Canum Venaticorum variables Pegasus (constellation) Pegasi, II 224085 K-type subgiants M-type main-sequence stars Durchmusterung objects 117915 4375 Binary stars
II Pegasi
[ "Astronomy" ]
303
[ "Pegasus (constellation)", "Constellations" ]
16,161,100
https://en.wikipedia.org/wiki/SN%201990U
SN 1990U was a type Ic supernova event in the nucleus of the galaxy NGC 7479. It was discovered on July 27, 1990, by the Berkeley Automated Supernova Search. Initially it was classified as a Type Ib supernova, but the weakness of the neutral helium absorption lines led to a reclassification.

References

External links

Spectra on the Open Supernova Catalog Simbad Image SN 1990U

Pegasus (constellation) Supernovae
SN 1990U
[ "Chemistry", "Astronomy" ]
92
[ "Supernovae", "Pegasus (constellation)", "Astronomical events", "Constellations", "Explosions" ]
16,161,136
https://en.wikipedia.org/wiki/Wilde%20Sau
Wilde Sau (literally "wild sow"; generally known in English as "Wild Boar") was the term given by the Luftwaffe to the tactic used from 1943 to 1944 during World War II by which British night bombers were engaged by single-seat day-fighter aircraft flying in the Defence of the Reich. It was adopted when the Allies had gained the advantage over German radar-controlled interception. The fighters had to engage the British bombers freely as the bombers were illuminated by searchlight batteries, while avoiding their own anti-aircraft fire. After some initial successes, rising losses and deteriorating weather conditions led to the abandonment of the tactic.

Background

In 1943 Allied bombing raids against German industry and cities intensified significantly. Strained by fighting on several fronts, the Luftwaffe was not able to answer those raids adequately. Mismanagement by the Luftwaffe leadership led to stagnant production of much-needed aircraft, and indecision regarding aerial doctrine worsened the situation. Another blow was the British capture of a Junkers Ju 88 R-1 night fighter (Werknummer 360 043) when its crew defected and flew to Scotland. The aircraft carried the initial B/C form of the UHF-band Lichtenstein radar, so its existence was revealed to the Allies. RAF Bomber Command began to use a new form of "Window" (or chaff): aluminium strips sized to jam the Lichtenstein B/C radar when dropped. This brought about the need to deploy new night-fighting methods that no longer relied solely on AI radar, until the longer-wavelength, VHF-band Lichtenstein SN-2 radar could be produced for use in German night fighters.

By mid-1943 it became clear that the past approach was not working and a change in the general aerial defensive doctrine was needed. One such change was the introduction of new fighter tactics to counter the increasing Allied bomber threat. On 27 June 1943, Luftwaffe officer Major Hajo Herrmann proposed an experimental approach to counter Allied night bombing. His proposal, which he had secretly tested in trials, was picked up and expanded by Viktor von Loßberg through reports prepared by his staff group. Loßberg presented his proposal on 29 July before the Luftwaffe leadership, Erhard Milch and Hermann Göring. The successful trial runs of the new tactic convinced them, and especially Hitler, to officially put this doctrine into use.

Wilde Sau

The new tactic outlined in Herrmann's report envisaged the use of free-ranging day fighters (and to a lesser extent night fighters) to counter Bomber Command. The single-engined fighters were to supplement the ground-controlled Himmelbett ("four-poster bed") technique by co-operating with searchlight crews, mostly over the target. Pyrotechnic and other visual means were to guide the fighters in operations known as Wilde Sau (Wild Boar). After the fighters had reached the combat zone, pilots tried to identify and intercept enemy bombers visually; searchlights were used to illuminate the sky. Initial tests using former flying instructors experienced in blind-flying techniques suggested the ideal weather conditions were when a certain (not too thick) lower-level cloud cover prevailed, since then the bombers would be silhouetted against the back-lit clouds and the high-flying German fighters could easily spot their targets. During trials, ceasefires with the German flak units were arranged to prevent friendly fire, but it became apparent that co-ordination of ceasefires with Wilde Sau operations was difficult.
To remove this threat from their own flak, the fighters were limited to certain altitudes, so that the German flak could avoid firing on them. Another problem was navigation. As night-flying aids in a day fighter were rudimentary, an elaborate system of visual aids to navigation had to be established, encompassing light beacons, searchlight patterns, flak guns firing combinations of various tracer colours through the clouds, and parachute flares. To make up for the initial lack of visual aids, converted bomber pilots had to be used, because they already had experience with navigating at night. Another navigation aid was simply the Allied bombing target; a city illuminated as it burned would guide the fighters to their target.

Battle

The week-long Battle of Hamburg in July 1943 proved disastrous for the Luftwaffe, when the first use of Window by Bomber Command knocked out the Himmelbett radar defence system. Window jammed the GCI system, airborne radar sets, gun-laying radar and searchlight controls, and British losses to flak and night fighters declined. The raids were aided by fortunate weather conditions resulting in a firestorm. As a result, every other promising measure for preventing a recurrence was considered, and Herrmann's proposal was put into effect. His original experimental unit was rapidly expanded into Jagdgeschwader 300. Jagdgeschwader 301 and Jagdgeschwader 302 were also raised to use the Wilde Sau tactic under the new Fighter Division 30 (30. Jagddivision), which was commanded by Herrmann.

On the night of 3/4 July 1943, 653 Bomber Command aircraft attacked Cologne, and the Wilde Sau squadrons took part in the defence of the city. The Luftwaffe shot down thirty British aircraft, of which twelve were shot down by Wilde Sau units. Anti-aircraft batteries restricted the height of their flak and the fighters operated above that ceiling. After this success and Loßberg's influential report, the use of the Wilde Sau tactic was increased and, together with the Zahme Sau tactic, generally integrated into a new German aerial defence approach. Both were part of a wider reformation of the German aerial defence and armament industry in the summer of 1943. These measures accelerated the abandonment of the Kammhuber Himmelbett system and paved the way for a more flexible approach. The reforms were initially successful, as fighter victories increased and industrial production rose.

During the next air battles, from summer to fall 1943, the Germans were able to deal some blows to the British bombing force with the aid of the new tactic. During the bombing raids on Berlin (a deception attack) and Operation Hydra against the Peenemünde research facilities on the night of 17/18 August, 64 bombers were shot down. In another raid, on the night of 23/24 August, 56 bombers were shot down, representing 8 percent of the attacking force. Those battles also saw the first operational implementation of Schräge Musik: two fuselage-mounted autocannon of at least 20 mm calibre that allowed German night-fighter pilots (Nachtjagdflieger) to shoot upwards from their aircraft. The success continued and the new tactics were improved, with German night fighters able to inflict many losses on the British during the following period in fall and winter 1943. British bombing losses ran as high as 8 percent per sortie. In December alone the British lost 316 bombers.
British losses were amplified by their persistence in the Battle of Berlin; despite the improved German air defences, the British continued their campaign.

Aftermath

Analysis

The success of Wilde Sau was short-lived and proved very costly to the 100 fighters of Fighter Division 30. The tactic provided a stop-gap and more Allied bombers were shot down, but German losses also rose. The Luftwaffe was not able to replace losses, and due to a high attrition rate, fighter readiness dwindled. The dual use of day fighters for Wilde Sau night-fighter operations amplified this effect, and the resulting erratic maintenance schedules led to serviceability rates dropping drastically. With the onset of poorer weather in the autumn of 1943, wastage through accidents and icing soared and German pilots could not implement Wilde Sau as effectively as before. Wilde Sau was discontinued in spring 1944, but it had tided the Luftwaffe air defences over until new radar equipment immune to Window/Düppel had been developed.

Zahme Sau

Simultaneously with Wilde Sau, Zahme Sau (Tame Boar) was introduced, in which the twin-engined night fighters of the Himmelbett system were released from individual ground-controlled interception. The fighters flew against the bomber stream in a co-ordinated operation over a wide area, guided by a running commentary derived from radar, ground observation, wireless interception and contact reports from aircraft tracking the bomber stream. Audio and visual beacons were used to assemble the fighters, which circled the beacons until the target was identified and then intercepted the bombers at a height beyond the range of flak fire.

See also

List of World War II electronic warfare equipment § Tactics
Battle of the Beams
Defence of the Reich

References

Bibliography

Luftwaffe Aerial maneuvers 1943 introductions 1944 disestablishments in Germany Night Searchlights
Wilde Sau
[ "Astronomy" ]
1,759
[ "Time in astronomy", "Night" ]
16,161,178
https://en.wikipedia.org/wiki/Effective%20number%20of%20codons
Effective number of codons (abbreviated ENC or Nc) is a measure of the state of codon usage bias in genes and genomes. The way ENC is computed has obvious similarities to the computation of the effective population size in population genetics. Although ENC values are easy to compute, this measure has been shown to be one of the best measures of codon usage bias. Since the original proposal of the ENC, several investigators have tried to improve the method, but it seems there is still much room to improve this measure.

References

Molecular biology
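The article does not reproduce the estimator itself; for reference, a sketch of the commonly used formulation from the wider literature (attributed to Frank Wright's 1990 paper introducing the measure) is given below. Here F̄_k is the mean homozygosity averaged over the amino acids with k synonymous codons, estimated for each amino acid from its n observed codons with synonymous-codon frequencies p_i.

```latex
% Wright's (1990) ENC estimator, stated from the literature rather than
% from this article. The constants count amino acids in the standard code:
% 2 with a single codon, 9 two-fold, 1 three-fold, 5 four-fold, 3 six-fold.
N_c = 2 + \frac{9}{\bar F_2} + \frac{1}{\bar F_3}
        + \frac{5}{\bar F_4} + \frac{3}{\bar F_6},
\qquad
\hat F = \frac{n \sum_i p_i^{2} - 1}{n - 1}.
```

Under completely uniform usage each F̄_k equals 1/k, giving Nc = 61 (all synonymous codons used equally), while extreme bias (one codon per amino acid, F̄_k = 1) gives the minimum of Nc = 20.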
Effective number of codons
[ "Chemistry", "Biology" ]
119
[ "Biochemistry", "Molecular biology" ]
16,161,443
https://en.wikipedia.org/wiki/IOS
iOS (formerly iPhone OS) is a mobile operating system developed by Apple exclusively for its mobile devices. It was unveiled in January 2007 for the first-generation iPhone, which launched in June 2007. Major versions of iOS are released annually; the current stable version, iOS 18, was released to the public on September 16, 2024. It is the operating system that powers many of the company's mobile devices, including the iPhone, and is the basis for three other operating systems made by Apple: iPadOS, tvOS, and watchOS. iOS formerly also powered iPads, until iPadOS was introduced in 2019, and the iPod Touch line of devices until its discontinuation. iOS is the world's second most widely installed mobile operating system, after Android. As of December 2023, Apple's App Store contains more than 3.8 million iOS mobile apps.

iOS is based on macOS. Like macOS, it includes components of the Mach microkernel and FreeBSD. It is a Unix-like operating system. Although some parts of iOS are open source under the Apple Public Source License and other licenses, iOS is proprietary software.

History

In 2005, when Steve Jobs began planning the iPhone, he had a choice to either "shrink the Mac, which would be an epic feat of engineering, or enlarge the iPod". Jobs favored the former approach but pitted the Macintosh and iPod teams, led by Scott Forstall and Tony Fadell, respectively, against each other in an internal competition, with Forstall winning by creating iPhone OS. The decision enabled the success of the iPhone as a platform for third-party developers: using a well-known desktop operating system as its basis allowed the many third-party Mac developers to write software for the iPhone with minimal retraining. Forstall was also responsible for creating a software development kit for programmers to build iPhone apps, as well as an App Store within iTunes.

The operating system was unveiled with the iPhone at the Macworld Conference & Expo on January 9, 2007, and released in June of that year. At the time of its unveiling in January, Steve Jobs claimed: "iPhone runs OS X" and runs "desktop class applications", but at the time of the iPhone's release, the operating system was renamed "iPhone OS". Initially, third-party native applications were not supported. Jobs' reasoning was that developers could build web applications through the Safari web browser that "would behave like native apps on the iPhone". In October 2007, Apple announced that a native software development kit (SDK) was under development and that they planned to put it "in developers' hands in February". On March 6, 2008, Apple held a press event announcing the iPhone SDK.

The iOS App Store was opened on July 10, 2008, with an initial 500 applications available. This quickly grew to 3,000 in September 2008, 15,000 in January 2009, 50,000 in June 2009, 100,000 in November 2009, 250,000 in August 2010, 650,000 in July 2012, 1 million in October 2013, 2 million in June 2016, and 2.2 million in January 2017. Of these, 1 million apps are natively compatible with the iPad tablet computer. These apps have collectively been downloaded more than 130 billion times. App intelligence firm Sensor Tower estimated that the App Store would reach 5 million apps by 2020.

In September 2007, Apple announced the iPod Touch, a redesigned iPod based on the iPhone form factor.
On January 27, 2010, Apple introduced its much-anticipated media tablet, the iPad. It featured a larger screen than the iPhone and iPod Touch, was designed for web browsing, media consumption, and reading, and offered multi-touch interaction with multimedia formats including newspapers, e-books, photos, videos, music, word-processing documents, and video games, and ran most existing iPhone apps on its larger screen. It also included a mobile version of Safari for web browsing, as well as access to the App Store, iTunes Library, iBookstore, Contacts, and Notes. Content is downloadable via Wi-Fi and optional 3G service or synced through the user's computer. AT&T was initially the sole U.S. provider of 3G wireless access for the iPad.

In June 2010, Apple rebranded iPhone OS as "iOS". The trademark "IOS" had been used by Cisco for over a decade for IOS, the operating system of its routers. To avoid any potential lawsuit, Apple licensed the "IOS" trademark from Cisco.

The Apple Watch smartwatch was announced by Tim Cook on September 9, 2014, introduced as a product with health and fitness tracking, and released on April 24, 2015. It uses watchOS as its operating system; watchOS is based on iOS, with new features created specially for the Apple Watch, such as an activity-tracking app.

In October 2016, Apple opened its first iOS Developer Academy in Naples, inside the University of Naples Federico II's new campus. The course is completely free and aimed at building specific technical skills in the creation and management of applications for the platforms of the Apple ecosystem. The academy also covers business administration topics (business planning and business management with a focus on digital opportunities), and there is a path dedicated to the design of graphical interfaces. Students have the opportunity to participate in the "Enterprise Track", an in-depth training experience on the entire life cycle of an app, from design to implementation, security, troubleshooting, data storage and cloud usage. As of 2020, the academy had graduated almost a thousand students from all over the world, who have worked on 400 app ideas and have already published about 50 apps on the iOS App Store. In the 2018–2019 academic year, students from more than 30 countries attended; 35 of them were selected to attend the Worldwide Developers Conference, Apple's annual developer conference held in California in early June.

On June 3, 2019, iPadOS, the branded version of iOS for iPad, was announced at the 2019 WWDC; it was launched on September 25, 2019.

Features

Interface

The iOS user interface is based upon direct manipulation, using multi-touch gestures such as swipe, tap, pinch, and reverse pinch. Interface control elements include sliders, switches, and buttons. Internal accelerometers are used by some applications to respond to shaking the device (one common result is the undo command) or rotating it in three dimensions (one common result is switching between portrait and landscape mode). Various accessibility functions enable users with vision and hearing disabilities to properly use iOS.

iOS devices boot to the lock screen. The lock screen shows the time and the user's lock screen widgets, which display timely information from apps. Upon unlock, the user is directed to the home screen, which is the primary navigation and information "hub" on iOS devices, analogous to the desktop found on personal computers.
iOS home screens are typically made up of app icons and widgets; app icons launch the associated app, whereas widgets display live, auto-updating content, such as a weather forecast, the user's email inbox, or a news ticker, directly on the home screen. Along the top of the screen is a status bar, showing information about the device and its connectivity. The Control Center can be "pulled" down from the top right of the notch or Dynamic Island (on iPhones with Face ID) or "pulled" up from the bottom of the screen (on iPhones with Touch ID), giving access to various toggles for managing the device more quickly without having to open the Settings. It is possible to manage brightness, volume, wireless connections, the music player, and more. Swiping down from the top left (or from the top on iPhones with Touch ID) opens the Notification Center, which in the latest versions of iOS is very similar to the lock screen. It displays notifications in chronological order and groups them by application. From the notifications of some apps it is possible to interact directly, for example by replying to a message straight from the notification. Notifications are sent in two modes: critical alerts, which are displayed on the lock screen and signaled by a distinctive sound and vibration (e.g. emergency alerts or severe weather alerts), accompanied by a warning banner and the app badge icon; and standard alerts, which use a default sound and vibration. Both can be found in the Notification Center, and show for a set amount of time on the lock screen (unless the user has Notification Center allowed when locked).

On iPhones with Touch ID, screenshots are taken by pressing the home and power buttons simultaneously; in comparison to Android, which requires the buttons to be held down, a short press suffices on iOS. On iPhones with Face ID, screenshots are captured using the volume-up and power buttons instead.

The camera application used a skeuomorphic closing-shutter animation prior to iOS 7. Since then, it uses a simple short blackout effect. Notable additions over time include HDR photography and the option to save both normal and high-dynamic-range photographs simultaneously, where the former prevents ghosting effects from moving objects (since the iPhone 5 on iOS 6), automatic HDR adjustment (since iOS 7.1), "live photo" with a short video bundled with each photo if enabled (iPhone 6s, iOS 9), and a digital zoom shortcut (iPhone 7 Plus, iOS 10). Some camera settings, such as video resolution and frame rate, are not adjustable through the camera interface itself but are outsourced to the system settings.

A feature introduced in iOS 13 called "context menus" shows related actions when the user touches and holds an item. When a context menu is displayed, the background is blurred. To choose from a few options, a selection control is used. Selectors can appear anchored at the bottom or in line with the content. Date selectors take on the appearance of any other selection control, but with a column for day, month, and optionally year. Alerts appear in the center of the screen, but there are also alerts that scroll up from the bottom of the screen (called "action panels"). Destructive actions (such as deleting an element) are colored red.

The official font of iOS is San Francisco. It is designed for small-text readability and is used throughout the operating system, including in third-party apps.
App icons are 180x180px for iPhones with larger screens, usually models over 6 inches, including the iPhone 11 Pro and iPhone 8 Plus, and 120x120px on iPhones with smaller displays.

Home screen

The home screen, rendered by SpringBoard, displays application icons and a dock at the bottom where users can pin their most frequently used apps. The home screen appears whenever the user unlocks the device, presses the physical "Home" button while in an app, or swipes up from the bottom of the screen using the home bar. The screen has a status bar across the top to display data, such as time, battery level, and signal strength. The rest of the screen is devoted to the current application. When a passcode is set and a user switches on the device, the passcode must be entered at the lock screen before access to the home screen is granted.

In iPhone OS 3, Spotlight was introduced, allowing users to search media, apps, emails, contacts, messages, reminders, calendar events, and similar content. In iOS 7 and later, Spotlight is accessed by pulling down anywhere on the home screen (except for the top and bottom edges, which open Notification Center and Control Center). In iOS 9, there are two ways to access Spotlight: as with iOS 7 and 8, pulling down on any home screen shows Spotlight, but it can also be accessed as it was in iOS versions 3 through 6. This endows Spotlight with Siri suggestions, which include app suggestions, contact suggestions and news. In iOS 10, Spotlight is at the top of the now-dedicated "Today" panel.

With the release of iPhone OS 3.2, users gained the ability to set a wallpaper for the home screen. The feature was initially only available on the iPad (1st generation) until the release of iOS 4 a few months after the release of iPhone OS 3.2, which brought the feature to all iPhone and iPod Touch models that could run the operating system, with the exception of the iPhone 3G and the iPod Touch (2nd generation), due to performance issues with icon animations. iOS 7 introduced a parallax effect on the home screen, which shifts the device's wallpaper and icons in response to the movement of the device, creating a 3D effect and an illusion of floating icons. This effect is also visible in the tab view of Mail and Safari.

Researchers have found that users organize icons on their home screens based on usage frequency and relatedness of the applications, as well as for reasons of usability and aesthetics.

System font

iOS originally used Helvetica as the system font. Apple switched to Helvetica Neue exclusively for the iPhone 4 and its Retina Display, and retained Helvetica as the system font for older iPhone devices on iOS 4. With iOS 7, Apple announced that it would change the system font to Helvetica Neue Light, a decision that sparked criticism for inappropriate usage of a light, thin typeface for low-resolution mobile screens. Apple eventually chose Helvetica Neue instead. The release of iOS 7 also introduced the ability to scale text or apply other forms of text accessibility changes through Settings. With iOS 9, Apple changed the font to San Francisco, an Apple-designed font aimed at maximum legibility and font consistency across its product lineup.
Folders iOS 4 introduced folders, which can be created by dragging an application on top of another; from then on, more items can be added to the folder using the same procedure. A title for the folder is automatically selected based on the category of applications inside, but the name can also be edited by the user. When apps inside folders receive notification badges, the individual numbers of notifications are added up and the total number is displayed as a notification badge on the folder itself. Originally, folders on an iPhone could include up to 12 apps, while folders on the iPad could include 20. With increasing display sizes on newer iPhone hardware, iOS 7 updated the folders with pages similar to the home screen layout, allowing for a significant expansion of folder functionality. Each page of a folder can contain up to nine apps, and there can be 15 pages in total, allowing for a total of 135 apps in a single folder. In iOS 9, Apple updated folder sizes for iPad hardware, allowing for 16 apps per page, still at 15 pages maximum, increasing the total to 240 apps. Notification Center Before iOS 5, notifications were delivered in a modal window and could not be viewed after being dismissed. In iOS 5, Apple introduced Notification Center, which allows users to view a history of notifications. The user can tap a notification to open its corresponding app, or clear it. Notifications are now delivered in banners that appear briefly at the top of the screen. If a user taps a received notification, the application that sent the notification will be opened. Users can also choose to view notifications in modal alert windows by adjusting the application's notification settings. Widgets, introduced with iOS 8 and supplied by third parties, are accessible through the Notification Center. When an app sends a notification while closed, a red badge appears on its icon. This badge tells the user, at a glance, how many notifications that app has sent. Opening the app clears the badge. Applications iOS devices come with preinstalled apps developed by Apple including Mail, Maps, TV, Music, FaceTime, Wallet, Health, and many more. Applications ("apps") are the most general form of application software that can be installed on iOS. They are downloaded from the official catalog of the App Store digital store, where apps are subjected to security checks before being made available to users. iOS applications can also be installed directly from an IPA file provided by the software distributor, through unofficial means. They are written using the iOS Software Development Kit (SDK), often in combination with Xcode, in officially supported programming languages, including Swift and Objective-C. Other companies have also created tools that allow for the development of native iOS apps using their respective programming languages. Applications for iOS are mostly built using components of UIKit, a programming framework. It allows applications to have a consistent look and feel with the OS while still offering customization. Elements automatically update along with iOS updates, automatically including new interface rules. UIKit elements are highly adaptable, which allows developers to design a single app that looks the same on any iOS device. In addition to defining the iOS interface, UIKit defines the functionality of the application.
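As a rough illustration of the preceding paragraph, the sketch below shows a single UIKit view controller built entirely from standard components, which therefore inherits the system look and feel; all names are invented for the example:

```swift
import UIKit

// A screen made only of stock UIKit pieces: it picks up system styling,
// dark-mode colors, and accessibility behavior without extra work.
final class GreetingViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBackground  // adapts to light/dark appearance

        let button = UIButton(type: .system)      // standard system-styled control
        button.setTitle("Say hello", for: .normal)
        button.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(button)

        // Auto Layout keeps the control centered on any iOS device.
        NSLayoutConstraint.activate([
            button.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            button.centerYAnchor.constraint(equalTo: view.centerYAnchor),
        ])
    }
}
```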
At first, Apple did not intend to release an SDK to developers, because it did not want third-party native apps to be developed for iOS, expecting developers to build web apps instead. However, web apps never entered into common use, which led Apple to change its position: the SDK for developers was announced in October 2007 and finally released on March 6, 2008. The SDK includes a comprehensive set of development tools, including an audio mixer and an iPhone simulator. It is a free download for Mac users. It is not available for Microsoft Windows PCs. To test applications, get technical support, and distribute applications through the App Store, developers are required to subscribe to the Apple Developer Program. Over the years, the App Store surpassed multiple major milestones, including 50,000, 100,000, 250,000, 500,000, 1 million, and 2 million apps. The billionth application was installed on April 24, 2009. App Library App Library automatically categorizes apps into folders based on their function or type and includes an alphabetical list of all installed apps. For example, it might group all social media apps into one folder and productivity apps into another. Users can quickly find and access apps by using the search bar at the top of the App Library. Users can choose to hide specific app pages from the home screen, making it easier to focus on the apps they use most frequently. Storage iOS enforces strict sandboxing to maintain security and privacy. Apps are generally limited to accessing their own containers and specific system-provided directories, such as the Photos library. To access files outside of their sandbox, iOS uses mechanisms like document pickers, file providers, and app extensions. iOS 8 introduced the Document Picker and Document Provider extensions as part of the document interaction controller. This allows apps to open, save, and interact with documents stored in a central location or cloud storage services. With iOS 11, Apple introduced the Files app and the File Provider extension, providing a central location for users to manage and organize their files. Apps can integrate with the Files app to make their documents accessible and editable directly from the Files app. The storage of iOS devices can be expanded through iCloud, Apple's cloud-based storage solution, which provides 5GB of storage for free to all users, while other plans require a paid subscription. iCloud Drive allows users to store various types of files, such as documents, presentations, and spreadsheets, in the cloud. These files can be accessed across multiple devices as long as the user is signed in with the same Apple ID. Accessibility iOS offers various accessibility features to help users with vision and hearing disabilities. One major feature, VoiceOver, provides a voice that reads information on the screen, including contextual buttons, icons, links, and other user interface elements, and allows the user to navigate the operating system through gestures. Any app built with the default controls of the UIKit framework gets VoiceOver functionality built in. One example includes holding up the iPhone to take a photo, with VoiceOver describing the photo scenery. As part of a "Made for iPhone" program, introduced with the release of iOS 7 in 2013, Apple has developed technology that uses Bluetooth and a special protocol to let compatible third-party equipment connect with iPhones and iPads and stream audio directly to a user's ears.
Additional customization options available for Made for iPhone products include battery tracking and adjustable sound settings for different environments. Apple made further accessibility efforts for the release of iOS 10 in 2016, adding a new pronunciation editor to VoiceOver, adding a Magnifier setting to enlarge objects through the device's camera, adding software TTY support for deaf people to make phone calls from the iPhone, and giving tutorials and guidelines for third-party developers to incorporate proper accessibility functions into their apps. In 2012, Liat Kornowski from The Atlantic wrote that "the iPhone has turned out to be one of the most revolutionary developments since the invention of Braille", and in 2016, Steven Aquino of TechCrunch described Apple as "leading the way in assistive technology", with Sarah Herrlinger, Senior Manager for Global Accessibility Policy and Initiatives at Apple, stating that "We see accessibility as a basic human right. Building into the core of our products supports a vision of an inclusive world where opportunity and access to information are barrier-free, empowering individuals with disabilities to achieve their goals". Criticism has been aimed at iOS's dependence, upon first activation, on both an internet connection (either Wi-Fi or through iTunes) and a working SIM card. This restriction has been loosened in iOS 12, which no longer requires the latter. Multitasking Multitasking for iOS was first released in June 2010 along with the release of iOS 4. Only certain devices—iPhone 4, iPhone 3GS, and iPod Touch 3rd generation—were able to multitask. The iPad did not get multitasking until iOS 4.2.1 that November. The implementation of multitasking in iOS has been criticized for its approach, which restricts the work that applications in the background can perform to a limited function set and requires application developers to add explicit support for it. Before iOS 4, multitasking was limited to a selection of the applications Apple included on the device. Users could, however, "jailbreak" their device in order to unofficially multitask. Starting with iOS 4, on third-generation and newer iOS devices, multitasking is supported through seven background APIs: Background audio – application continues to run in the background as long as it is playing audio or video content Voice over IP – application is suspended when a phone call is not in progress Background location – application is notified of location changes Push notifications Local notifications – application schedules local notifications to be delivered at a predetermined time Task completion – application asks the system for extra time to complete a given task Fast app switching – application does not execute any code and may be removed from memory at any time In iOS 5, three new background APIs were introduced: Newsstand – application can download content in the background to be ready for the user External Accessory – application communicates with an external accessory and shares data at regular intervals Bluetooth Accessory – application communicates with a bluetooth accessory and shares data at regular intervals In iOS 7, Apple introduced a new multitasking feature, providing all apps with the ability to perform background updates. This feature prefers to update the user's most frequently used apps and prefers to use Wi-Fi networks over a cellular network, without markedly reducing the device's battery life.
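A hedged sketch of the background-update mechanism described in the last paragraph, using the background-fetch API UIKit exposed for it starting with iOS 7 (since superseded by newer scheduling APIs); the app delegate and the fetch outcome are illustrative:

```swift
import UIKit

@main
final class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Ask the system to wake the app periodically; iOS chooses the actual
        // cadence, favoring frequently used apps and Wi-Fi, as described above.
        application.setMinimumBackgroundFetchInterval(UIApplication.backgroundFetchIntervalMinimum)
        return true
    }

    func application(_ application: UIApplication,
                     performFetchWithCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
        // Real refresh work would happen here; the reported result lets iOS
        // schedule future fetches intelligently.
        completionHandler(.newData)
    }
}
```

(The app must also declare the background-fetch capability for this delegate method to be called.)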
Switching applications In iOS 4.0 to iOS 6.x, double-clicking the home button activates the application switcher. A scrollable dock-style interface appears from the bottom, moving the contents of the screen up. Choosing an icon switches to an application. To the far left are icons which function as music controls, a rotation lock, and, on iOS 4.2 and above, a volume controller. With the introduction of iOS 7, double-clicking the home button also activates the application switcher. However, unlike previous versions, it displays screenshots of open applications on top of the icon, horizontal scrolling allows for browsing through previous apps, and it is possible to close applications by dragging them up, similar to how webOS handled multiple cards. With the introduction of iOS 9, the application switcher received a significant visual change; while still retaining the card metaphor introduced in iOS 7, the application icon is smaller and appears above the screenshot (which is now larger, due to the removal of "Recent and Favorite Contacts"), and each application "card" overlaps the other, forming a Rolodex-like effect as the user scrolls. Now, instead of the home screen appearing at the leftmost of the application switcher, it appears rightmost. In iOS 11, the application switcher received a major redesign. On the iPad, the Control Center and app switcher are combined. The app switcher on the iPad can also be accessed by swiping up from the bottom. On the iPhone, the app switcher cannot be accessed if there are no apps in RAM. Ending tasks In iOS 4.0 to iOS 6.x, briefly holding the icons in the application switcher makes them "jiggle" (similar to the home screen) and allows the user to force quit the applications by tapping the red minus circle that appears at the corner of the app's icon. Clearing applications from multitasking stayed the same from iOS 4.0 through 6.1.6, the last version of iOS 6. As of iOS 7, the process has become faster and easier. In iOS 7, instead of holding the icons to close them, they are closed by simply swiping them upwards off the screen. Up to three apps can be cleared at a time, compared to one in versions up to iOS 6.1.6. Task completion Task completion allows apps to continue a certain task after the app has been suspended. As of iOS 4.0, apps can request up to ten minutes to complete a task in the background. This does not extend to background uploads and downloads, however (e.g. if a user starts a download in one application, it will not finish if they switch away from the application). Siri Siri is a virtual assistant integrated into iOS. The assistant uses voice queries and a natural-language user interface to answer questions, make recommendations, and perform actions by delegating requests to a set of Internet services. The software adapts to users' individual language usages, searches, and preferences with continuing use. Returned results are individualized. Originally released as an app for iOS in February 2010, it was acquired by Apple two months later, and then integrated into the iPhone 4S at its release in October 2011. At that time, the separate app was also removed from the iOS App Store. Siri supports a wide range of user commands, including performing phone actions, checking basic information, scheduling events and reminders, handling device settings, searching the Internet, navigating areas, finding information on entertainment, and is able to engage with iOS-integrated apps.
With the release of iOS 10 in 2016, Apple opened up limited third-party access to Siri, including third-party messaging apps, as well as payments, ride-sharing, and Internet calling apps. With the release of iOS 11, Apple updated Siri with clearer, more human-sounding voices; it now supports follow-up questions and language translation, as well as additional third-party actions. iOS 17 enabled users to activate Siri by simply saying "Siri", while the previous command, "Hey Siri", is still supported. Game Center Game Center is an online multiplayer "social gaming network" released by Apple. It allows users to "invite friends to play a game, start a multiplayer game through matchmaking, track their achievements, and compare their high scores on a leaderboard." iOS 5 and above adds support for profile photos. Game Center was announced during an iOS 4 preview event hosted by Apple on April 8, 2010. A preview was released to registered Apple developers in August. It was released on September 8, 2010, with iOS 4.1 on iPhone 4, iPhone 3GS, and iPod Touch 2nd generation through 4th generation. Game Center made its public debut on the iPad with iOS 4.2.1. There is no support for the iPhone 3G, original iPhone, and the first-generation iPod Touch (the latter two devices did not have Game Center because they did not get iOS 4). However, Game Center is unofficially available on the iPhone 3G via a hack. Hardware The main hardware platform for iOS is the ARM architecture (ARMv7, ARMv8-A, ARMv8.2-A, ARMv8.3-A). iOS releases before iOS 7 can only be run on iOS devices with 32-bit ARM processors (ARMv6 and ARMv7-A architectures). In 2013, iOS 7 was released with full 64-bit support (which includes a native 64-bit kernel, libraries, and drivers, as well as all built-in applications), after Apple announced that it was switching to 64-bit ARMv8-A processors with the introduction of the Apple A7 chip. 64-bit support was also enforced for all apps in the App Store: all new apps submitted to the App Store had to include it by a deadline of February 2015, and all app updates by a deadline of June 1, 2015. iOS 11 drops support for all iOS devices with 32-bit ARM processors as well as 32-bit applications, making iOS 64-bit only. Development The iOS software development kit (SDK) allows for the development of mobile apps that can run on iOS. While originally developing the iPhone prior to its unveiling in 2007, Apple's then-CEO Steve Jobs did not intend to let third-party developers build native apps for iOS, instead directing them to make web applications for the Safari web browser. However, backlash from developers prompted the company to reconsider, with Jobs announcing in October 2007 that Apple would have a software development kit available for developers by February 2008. The SDK was released on March 6, 2008. The SDK is a free download for users of Mac personal computers. It is not available for Microsoft Windows PCs. The SDK contains toolsets giving developers access to various functions and services of iOS devices, such as hardware and software attributes. It also contains an iPhone simulator to mimic the look and feel of the device on the computer while developing. New versions of the SDK accompany new versions of iOS. In order to test applications, get technical support, and distribute apps through the App Store, developers are required to subscribe to the Apple Developer Program.
Combined with Xcode, the iOS SDK helps developers write iOS apps using officially supported programming languages, including Swift and Objective-C. Other companies have also created tools that allow for the development of native iOS apps using their respective programming languages. Update history & schedule Apple provides major updates to the iOS operating system annually via iTunes and, since iOS 5, also over-the-air. The device checks an XML-based PLIST file on mesu.apple.com for updates. Updates are delivered as unencrypted ZIP files. Updates are checked for regularly, and are downloaded and installed automatically if enabled. Otherwise, the user can install them manually or is prompted to allow automatic installation overnight if the device is plugged in and connected to Wi-Fi. iPod Touch users originally had to pay for system software updates due to accounting rules that designated the device not a "subscription device" like the iPhone or Apple TV, causing many iPod Touch owners not to update. In September 2009, a change in accounting rules won tentative approval, affecting Apple's earnings and stock price, and allowing iPod Touch updates to be delivered free of charge. Apple significantly extended the cycle of updates for iOS-supported devices over the years. The iPhone (1st generation) and iPhone 3G only received two iOS updates, while later models had support for five, six, and seven years. XNU kernel The iOS kernel is the XNU kernel of Darwin. The original iPhone OS (1.0) up to iPhone OS 3.1.3 used Darwin 9.0.0d1. iOS 4 was based on Darwin 10. iOS 5 was based on Darwin 11. iOS 6 was based on Darwin 13. iOS 7 and iOS 8 are based on Darwin 14. iOS 9 is based on Darwin 15. iOS 10 is based on Darwin 16. iOS 11 is based on Darwin 17. iOS 12 is based on Darwin 18. iOS 13 is based on Darwin 19. iOS 14 is based on Darwin 20. iOS 15 is based on Darwin 21. iOS 16 is based on Darwin 22. In iOS 6 the kernel is subject to ASLR, similar to that of OS X Mountain Lion. This makes exploitation more complex, since it is not possible to know the location of kernel code. Apple has made the XNU kernel open source. The source is under a 3-clause BSD license for the original BSD parts, with parts added by Apple under the Apple Public Source License. The versions contained in iOS are not available; only the versions used in macOS are available. iOS does not expose kernel extensions (kexts) in the file system, even though they are actually present. The kernel cache can be decompressed to show the correct kernel, along with the kexts (all packed in the __PRELINK_TEXT section) and their plists (in the __PRELINK_INFO section). The kernel cache can also be directly decompressed (if decrypted) using third-party tools. With the advent of iOS 10 betas and default plain-text kernel caches, these tools can only be used after applying lzssdec to unpack the kernel cache to its full size. The kextstat tool provided by the Cydia alternative software does not work on iOS because it is based on kmod_get_info(...), an API deprecated since iOS 4 and Mac OS X Snow Leopard. Other alternative tools can dump the raw XML data instead. On devices, the kernel is always stored as a statically linked cache at /System/Library/Caches/com.apple.kernelcaches/kernelcache, which is unpacked and executed at boot. In the beginning, iOS had a kernel version usually higher than the corresponding version of macOS. Over time, the kernels of iOS and macOS have gotten closer.
This is not surprising, considering that iOS introduced new features (such as kernel ASLR, the default freezer, and various security-strengthening measures) that first appeared on iOS and subsequently arrived on macOS. It appears Apple is gradually merging the iOS and macOS kernels over time. The build date for each version varies slightly between processors because the builds are sequential. Jailbreaking Since its initial release, iOS has been subject to a variety of different hacks centered around adding functionality not allowed by Apple. Prior to the 2008 debut of Apple's native iOS App Store, the primary motive for jailbreaking was to bypass Apple's purchase mechanism for installing the App Store's native applications. Apple claimed that it would not release iOS software updates designed specifically to break these tools (other than applications that perform SIM unlocking); however, with each subsequent iOS update, previously unpatched jailbreak exploits are usually patched. When a device is booting, it initially loads Apple's own kernel, so a jailbroken device must be exploited and have the kernel patched each time it is booted up. There are different types of jailbreak. An untethered jailbreak uses exploits that are powerful enough to allow the user to turn their device off and back on at will, with the device starting up completely and the kernel being patched without the help of a computer – in other words, it remains jailbroken after each reboot. However, some jailbreaks are tethered. A tethered jailbreak is only able to temporarily jailbreak the device during a single boot. If the user turns the device off and then boots it back up without the help of a jailbreak tool, the device will no longer be running a patched kernel, and it may get stuck in a partially started state, such as Recovery Mode. In order for the device to start completely and with a patched kernel, it must be "re-jailbroken" with a computer (using the "boot tethered" feature of a tool) each time it is turned on. All changes to the files on the device (such as installed package files or edited system files) will persist between reboots, including changes that can only function if the device is jailbroken. In more recent years, two other solutions have been created – semi-tethered and semi-untethered. A semi-tethered solution is one where the device is able to start up on its own, but it will no longer have a patched kernel, and therefore will not be able to run modified code. It will, however, still be usable for normal functions, just like stock iOS. To start with a patched kernel, the user must start the device with the help of the jailbreak tool. A semi-untethered jailbreak gives the ability to start the device on its own. On first boot, the device will not be running a patched kernel. However, rather than having to run a tool from a computer to apply the kernel patches, the user is able to re-jailbreak their device with the help of an app (usually sideloaded using Cydia Impactor) running on their device. In the case of the iOS 9.2–9.3.3 and 64-bit 10.x jailbreaks, Safari-based exploits were available, meaning websites could be used to re-jailbreak. In more detail: each iOS device has a boot chain that tries to make sure only trusted/signed code is loaded.
A device with a tethered jailbreak is able to boot up with the help of a jailbreaking tool because the tool executes exploits via USB that bypass parts of that "chain of trust", bootstrapping to a pwned (no signature check) iBEC or iBoot to finish the boot process. Since the arrival of Apple's native iOS App Store, and—along with it—third-party applications, the general motives for jailbreaking have changed. People jailbreak for many different reasons, including gaining filesystem access, installing custom device themes, and modifying SpringBoard. An additional motivation is that it may enable the installation of pirated apps. On some devices, jailbreaking also makes it possible to install alternative operating systems, such as Android and the Linux kernel. Primarily, users jailbreak their devices because of the limitations of iOS. Depending on the method used, the effects of jailbreaking may be permanent or temporary. In 2010, the Electronic Frontier Foundation (EFF) successfully convinced the U.S. Copyright Office to allow an exemption to the general prohibition on circumvention of copyright protection systems under the Digital Millennium Copyright Act (DMCA). The exemption allows jailbreaking of iPhones for the sole purpose of allowing legally obtained applications to be added to the iPhone. The exemption does not affect the contractual relations between Apple and an iPhone owner, for example, jailbreaking voiding the iPhone warranty; it remains solely at Apple's discretion whether to fix jailbroken devices in the event that they need to be repaired. At the same time, the Copyright Office exempted unlocking an iPhone from the DMCA's anticircumvention prohibitions. Unlocking an iPhone allows the iPhone to be used with any wireless carrier using the same GSM or CDMA technology for which the particular phone model was designed to operate. Unlocking Initially, most wireless carriers in the US did not allow iPhone owners to unlock their phones for use with other carriers. However, AT&T allowed iPhone owners who had satisfied contract requirements to unlock their iPhones. Instructions to unlock the device are available from Apple, but it is ultimately at the sole discretion of the carrier to authorize unlocking the device. This allows the use of a carrier-sourced iPhone on other networks. Modern versions of iOS and the iPhone fully support LTE across multiple carriers wherever the phone was purchased. Programs to remove SIM lock restrictions are available, but are not supported by Apple and most often do not provide a permanent unlock – instead they provide a soft unlock, which modifies the iPhone so that the baseband will accept the SIM card of any GSM carrier. SIM unlocking is not jailbreaking, but a jailbreak is also required for these unofficial software unlocks. The legality of software unlocking varies in each country; for example, in the US, there is a DMCA exemption for unofficial software unlocking of devices purchased before January 26, 2013. Digital rights management The closed and proprietary nature of iOS has garnered criticism, particularly by digital rights advocates such as the Electronic Frontier Foundation, computer engineer and activist Brewster Kahle, Internet-law specialist Jonathan Zittrain, and the Free Software Foundation, which protested the iPad's introductory event and has targeted the iPad with its "Defective by Design" campaign. Competitor Microsoft, via a PR spokesman, criticized Apple's control over its platform.
At issue are restrictions imposed by the design of iOS, namely digital rights management (DRM) intended to lock purchased media to Apple's platform, the development model (requiring a yearly subscription to distribute apps developed for iOS), the centralized approval process for apps, as well as Apple's general control and lockdown of the platform itself. Particularly at issue is the ability for Apple to remotely disable or delete apps at will. Some in the tech community have expressed concern that the locked-down iOS represents a growing trend in Apple's approach to computing, particularly Apple's shift away from machines that hobbyists can "tinker with", and note the potential for such restrictions to stifle software innovation. Former Facebook developer Joe Hewitt protested against Apple's control over its hardware as a "horrible precedent" but praised iOS's sandboxing of apps. Security and privacy iOS utilizes many security features in both hardware and software. Reception Market share iOS is the second most popular mobile operating system in the world, after Android. Sales of iPads in recent years have also fallen behind Android, although by web use (a proxy for all use) iPads (using iOS) are still the most popular. At WWDC 2014, Tim Cook said 800 million devices had been sold by June 2014. During Apple's quarterly earnings call in January 2015, the company announced that it had sold over one billion iOS devices since 2007. By February 2023, 2 billion devices had been activated, and 1.5 billion iPhones had been sold since 2007. By late 2011, iOS accounted for 60% of the market share for smartphones and tablets. By the end of 2014, iOS accounted for 14.8% of the smartphone market and 27.6% of the tablet and two-in-one market. In May 2023, StatCounter reported iOS was used on 31.44% of smartphones and 55.75% of tablets worldwide, measured by internet usage instead of sales. In the third quarter of 2015, research from Strategy Analytics showed that iOS adoption in the worldwide smartphone market was at a record-low 12.1%, attributed to lackluster performance in China and Africa. Android accounted for 87.5% of the market, with Windows Phone and BlackBerry accounting for the rest. Devices See also Comparison of mobile operating systems References External links Official dev center website at Apple Developer Connection iOS Reference Library – on the Apple Developer Connection website 2007 software Apple Inc. Apple Inc. operating systems Apple Inc. software ARM operating systems Computing platforms Mach (kernel) Mobile operating systems Products introduced in 2007 Smartphone operating systems Tablet operating systems
IOS
[ "Technology" ]
9,163
[ "Computing platforms" ]
16,161,577
https://en.wikipedia.org/wiki/Achmatowicz%20reaction
The Achmatowicz reaction, also known as the Achmatowicz rearrangement, is an organic synthesis in which a furan is converted to a dihydropyran. In the original publication by the Polish chemist Osman Achmatowicz Jr. (b. 20 December 1931 in Vilnius) in 1971, furfuryl alcohol is reacted with bromine in methanol to give 2,5-dimethoxy-2,5-dihydrofuran, which rearranges to the dihydropyran with dilute sulfuric acid. Additional reaction steps, alcohol protection (with methyl orthoformate and boron trifluoride) and then ketone reduction (with sodium borohydride), produce an intermediate from which many monosaccharides can be synthesised. The Achmatowicz protocol has been used in total synthesis, including those of desoxoprosophylline and pyrenophorin. Recently it has been used in diversity-oriented synthesis and in enantiomeric scaffolding. References Organic reactions Name reactions
Achmatowicz reaction
[ "Chemistry" ]
218
[ "Name reactions", "Rearrangement reactions", "Organic reactions" ]
16,161,817
https://en.wikipedia.org/wiki/XTE%20J1739-285
|- style="vertical-align: top;" | Distance | 39,000 Ly XTE J1739−285 is a neutron star, in the constellation Ophiuchus, situated approximately 39,000 light-years from Earth. It was first observed on 19 October 1999 by NASA's Rossi X-ray Timing Explorer satellite. It had previously been claimed that XTE J1739−285 was the fastest-spinning celestial body yet known, with a frequency of 1122 Hz. However, a re-analysis of these data by other astronomers has been unable to reproduce this result. XTE J1739−285 has been proposed as a possible quark star, as well as 3C 58. References Ophiuchus Neutron stars
XTE J1739-285
[ "Astronomy" ]
158
[ "Ophiuchus", "Constellations" ]
16,163,098
https://en.wikipedia.org/wiki/MobiTV
MobiTV, Inc. (previously known as Idetic, Inc.) operated as a provider of live and on-demand video delivery solutions, with its headquarters located in Emeryville, California. Founded in 1999 by Paul Scanlan, Phillip Alvelda, and Jeff Annison, MobiTV was a privately held company supported by venture capital. Charlie Nooney became CEO and Chairman of the company on October 15, 2007. History MobiTV was one of the first companies to bring live and on-demand TV to mobile devices through partnerships with numerous content providers and carriers such as Sprint and ESPN. In 2016, MobiTV introduced The MOBITV CONNECT Platform, an IP delivery service which allows cable operators to deliver video to connected retail devices such as Roku, Apple TV, Amazon Fire TV and select smart TVs, as well as Android and iOS, in place of a traditional set-top box. The company has raised $163.8 million in eight funding rounds from investors including Adobe Ventures, Ally Bank, Gefinor Capital, Hearst Ventures, Leader Ventures, Menlo Ventures, Redpoint, and Oak Investment Partners. In February 2017, it raised $21 million from repeat investors Ally Bank and Oak Investment Partners. Products Sprint and Sprint TV Sprint launched Sprint TV in November 2003, which could stream live video and audio at 15 frames per second using MobiTV's technology. The service launched with content from CNN, NBC, Fox Sports, Weather Channel, E! Entertainment, and others. Sprint and MobiTV were recognized with an Engineering Emmy Award for Sprint TV in 2005. In October 2007, MobiTV and Sprint re-signed their deal, under which MobiTV offers Sprint TV, Sprint TV Extra and Sprint TV en Vivo. In April 2010, MobiTV and Sprint announced the availability of Sprint TV for iPhone users, with access to content from ESPN Mobile, Disney, ABC, NBC, CBS and others. Shortly after, MobiTV announced the application had surpassed one million downloads on the Apple App Store, which it attributed to the World Cup tournament. Sprint announced the availability of the new Sprint Spot content application powered by MobiTV in April 2017 for Android device users. MobiTV application offering In November 2003, MobiTV offered a $10 per month live and on-demand television service available to wireless subscribers, first offered via Sprint. MobiTV was the wireless television delivery provider for many major U.S. carriers, including Sprint Nextel, Cingular, AT&T, T-Mobile, and Alltel, as well as Telcel, the largest mobile carrier in Mexico, by 2006. In 2005, MobiTV signed a deal with satellite operator SES Americom to extend its service to customers in Canada, Europe and Latin America. MobiTV and Orange, a former UK mobile network operator, launched the first multi-channel mobile TV service in the UK in 2005. This was MobiTV's first entry into the European market. MobiTV and AT&T launched an online TV service offering 20 live and on-demand channels delivered via broadband connections in 2006, expanding on the existing deal between the companies. In 2007, MobiTV worked with AT&T to bring the U-verse IPTV service to mobile phones, and later expanded the service to be available through PCs. In other partnerships, MobiTV worked with Microsoft to bring the live television service to Windows Mobile-powered phones and devices in 2006. Also that year, it was reported that MobiTV's technology powered Comcast's mobile television offering in Portland and Boston.
MobiTV's subscriber base surpassed 4 million in 2008, and the company announced a unicast/multicast system for its service, which decreased the time lag when changing channels. On June 3, 2009, MobiTV announced that its live television service had upwards of 7 million subscribers, and the service was available on 350 handheld devices and through 20 different carrier networks. MobiTV saw a 49 percent increase in daily viewership in 2009, which it attributed to the growth of live events available to viewers through its service. At the CTIA wireless conference in October 2010, MobiTV announced that it would be expanding its offerings beyond the mobile device, to television sets and personal computers. The MOBITV CONNECT Platform In July 2016, the company announced The MOBITV CONNECT Platform. C Spire announced that it was the first US cable operator to sign on to utilize the platform. C Spire launched the new virtual pay TV service to consumers in July 2017. The service employs a unicast IP delivery platform that features a fully integrated CMS, DRM, Media Player, nDVR, content policy, identity management, billing and authentication. Charlie Nooney was recognized as a "Digital All Star" by Broadcasting & Cable in February 2017 for his leadership over the product's introduction. Cablefax also recognized the service as the winner in the Connected TV/Smart TV Solution category of its annual Tech Awards. In December 2017, the National Cable Television Cooperative (NCTC) announced that it had selected MobiTV as a partner to provide IP-based video to its members. NCTC President and CEO Rich Fickle noted that "the deal represents NCTC's most advanced all-IP solution to-date". NCTC members DirectLink, Citizens Fiber, USA Communication, and Hickory Telephone signed on with MobiTV as of December 2017. Advertising technology MobiTV launched an advertising platform in 2005 that allows local affiliates and advertisers to re-purpose national TV ads for local markets. Mobi4BIZ MobiTV launched Mobi4BIZ in 2008, featuring live and on-demand content from CNBC, Bloomberg, Fox Business and TheStreet.com, available on RIM's BlackBerry Bold. MobiTV2 MobiTV announced at CES in 2006 that MobiTV2 would be made available to Sprint and Cingular subscribers, featuring improved audio and video quality and a programming guide. XM satellite radio application MobiTV developed the XM Satellite Radio applications for Alltel and Cingular, which both launched in 2006. Cingular partnered with MobiTV and Music Choice to offer mobile radio stations to customers with the Nokia 6620 or Sony Ericsson S710 and Z500a devices. References Mobile telephone broadcasting Technology companies based in the San Francisco Bay Area Mass media companies established in 1999 Technology companies established in 1999 1999 establishments in California Companies that filed for Chapter 11 bankruptcy in 2021
MobiTV
[ "Technology" ]
1,335
[ "Mobile telecommunications", "Mobile telephone broadcasting" ]
16,163,264
https://en.wikipedia.org/wiki/Kempe%20Award%20for%20Distinguished%20Ecologists
The Kempe Award for Distinguished Ecologists is a prize awarded biennially from 1994 onwards to recognise outstanding individuals within the science of ecology. The Award is an honorarium of SEK 50,000. The award is given by the Kempe Foundations (Kempefonden), Umeå University and the Swedish University of Agricultural Sciences in cooperation. Kempe Award Laureates 1994 Stuart L. Pimm 1996 F. Stuart Chapin III 1998 John Lawton 2000 Daniel Simberloff 2002 David Read 2004 Mary Power 2006 Peter M. Vitousek 2008 Stephen P. Hubbell 2011 Ilkka Hanski See also List of ecology awards References Awards established in 1994 Ecology awards Swedish awards 1994 establishments in Sweden
Kempe Award for Distinguished Ecologists
[ "Technology" ]
142
[ "Science and technology awards", "Science award stubs" ]
16,163,635
https://en.wikipedia.org/wiki/Hydra%20game
In mathematics, specifically in graph theory and number theory, a hydra game is a single-player iterative mathematical game played on a mathematical tree called a hydra where, usually, the goal is to cut off the hydra's "heads" while the hydra simultaneously expands itself. Hydra games can be used to generate large numbers or infinite ordinals or prove the strength of certain mathematical theories. Unlike their combinatorial counterparts like TREE and SCG, no search is required to compute these fast-growing function values – one must simply keep applying the transformation rule to the tree until the game says to stop. Introduction A simple hydra game can be defined as follows: A hydra is a finite rooted tree, which is a connected graph with no cycles and a specific node R designated as the root of the tree. In a rooted tree, each node has a single parent (with the exception of the root, which has no parent) and a set of children, as opposed to an unrooted tree, where there is no parent-child relationship and we simply refer to edges between nodes. The player selects a leaf node x from the tree and a natural number n during each turn. A leaf node can be defined as a node with no children, or a node of degree 1 which is not R. Remove the leaf node x. Let y be x's parent. If y = R, return to stage 2. Otherwise, if y ≠ R, let z be the parent of y. Then create n leaf nodes as children of z such that the new nodes would appear after any existing children of z during a post-order traversal (visually, these new nodes would appear to the right side of any existing children). Then return to stage 2. Even though the hydra may grow by an unbounded number of leaves at each turn, the game will eventually end in finitely many steps: if d is the greatest distance between the root and a leaf, and m is the number of leaves at this distance, induction on d can be used to demonstrate that the player will always kill the hydra. If d = 1, removing the leaves can never cause the hydra to grow, so the player wins after m turns. For general d, we consider two kinds of moves: those that involve a leaf at a distance less than d from the root, and those that involve a leaf at a distance of exactly d. Since moves of the first kind are also identical to moves in a game with depth d − 1, the induction hypothesis tells us that after finitely many such moves, the player will have no choice but to choose a leaf at depth d. No move introduces new nodes at this depth, so this entire process can only repeat up to m times, after which there are no more leaves at depth d and the game now has depth (at most) d − 1. Invoking the induction hypothesis again, we find that the player must eventually win overall. While this shows that the player will win eventually, it can take a very long time. As an example, consider the following algorithm. Pick the rightmost leaf (i.e., the newest leaf, which will be on the level closest to the root) and set n = 1 the first time, n = 2 the second time, and so on, always increasing n by one. If a hydra has a single branch of length k, then for k = 1 the hydra is killed in a single step, while it is killed in three steps if k = 2. There are 11 steps required for k = 3, and 1114111 steps are required for k = 4. The value for k = 5 has been calculated exactly; it can be written in closed form using repeated nesting of a fast-growing function, but it is far too large to write out in full.
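The simple game and the rightmost-leaf strategy just described are easy to simulate directly. Below is a minimal sketch in Swift (type and function names invented for the illustration); run on a single branch of length 3, it reproduces the 11-step figure quoted above:

```swift
// The simple hydra game: chop the rightmost leaf; if its parent is not the
// root, grow n new leaves on the grandparent, with n = 1, 2, 3, ... per turn.
final class Node {
    weak var parent: Node?
    var children: [Node] = []
    func addLeaf() {
        let leaf = Node()
        leaf.parent = self
        children.append(leaf)
    }
}

// The rightmost leaf is reached by repeatedly descending into the last child.
func rightmostLeaf(of root: Node) -> Node? {
    guard var node = root.children.last else { return nil }  // hydra is dead
    while let next = node.children.last { node = next }
    return node
}

func stepsToKill(_ root: Node) -> Int {
    var steps = 0
    while let leaf = rightmostLeaf(of: root) {
        steps += 1                                   // this turn's n equals `steps`
        let parent = leaf.parent!
        parent.children.removeLast()                 // chop the head
        if let grandparent = parent.parent {         // parent is not the root,
            for _ in 0..<steps { grandparent.addLeaf() }  // so n heads regrow
        }
    }
    return steps
}

// A single branch of length 3: root – a – b – c.
let root = Node()
root.addLeaf()
root.children[0].addLeaf()
root.children[0].children[0].addLeaf()
print(stepsToKill(root))  // prints 11, matching the value above
```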
General Solution The general solution to the hydra game is as follows. Let S(n) denote the number of steps required to decrement a head of depth n when the heads closer to the root are all singular (no further "right" branches). The growth rate of this function is faster than any fixed level of the standard fast-growing hierarchy: S(n) alone already grows at the rate of a function in the fast-growing hierarchy, and the answer for a branch of length n is essentially the nth nesting of S. Kirby–Paris and Buchholz Hydras The Kirby–Paris hydra is defined by altering the fourth rule of the hydra defined above. 4KP: If y ≠ R, let z be the parent of y. Attach n copies of the subtree with root y to z, to the right of all other nodes connected to z. Return to stage 2. Instead of adding only new leaves, this rule adds duplicates of an entire subtree. Keeping everything else the same, the game still terminates, but the number of steps explodes; even a small initial hydra already requires more steps than Graham's number. This function's growth rate is massive, equal to that of $f_{\varepsilon_0}$ in the fast-growing hierarchy. This is not the most powerful hydra. The Buchholz hydra is a more potent hydra. It entails a labelled tree. The root has a unique label (call it +), and each other node has a label that is either a non-negative integer or ω. A hydra is a finite labelled rooted tree. The root should be labelled +. Label all nodes adjacent to the root 0 (important to ensure that the game always ends), and every other node with a non-negative integer or ω. Choose a leaf node x and a natural number n at each stage. Remove the leaf x. Let y be x's parent. Nothing else happens if y is the root; return to stage 2. If the label of x is 0, let z be the parent of y. Attach n copies of the subtree with root y to z, to the right of all other nodes connected to z. Return to stage 2. If x's label is ω, replace it with n + 1. Return to stage 2. If the label of x is a positive integer u, go down the tree looking for a node e with a label v < u. Such a node exists because all nodes adjacent to the root are labelled 0. Take a copy of the subtree with root e. Replace x with this subtree, but relabel e′ (the root of the copy of the subtree) with u − 1. Call x′ the equivalent of x in the copied subtree (so x′ is to e′ as x is to e), and relabel it 0. Go back to stage 2. Surprisingly, even though the hydra can grow enormously taller, this sequence always ends. More about KP hydras For Kirby–Paris hydras, the rules are simple: start with a hydra, which is an unordered unlabelled rooted tree T. At each stage, the player chooses a leaf node x to chop and a non-negative integer n. If x is a child of the root R, it is removed from the tree and nothing else happens that turn. Otherwise, let y be x's parent, and z be y's parent. Remove x from the tree, then add n copies of the modified y as children to z. The game ends when the hydra is reduced to a single node. To obtain a fast-growing function, we can fix n, say, n = 1 at the first step, then n = 2, n = 3, and so on, and decide on a simple rule for where to cut, say, always choosing the rightmost leaf. Then, Hydra(k) is the number of steps needed for the game to end starting with a path of length k, that is, a linear stack of k nodes. Hydra eventually dominates all recursive functions which are provably total in Peano arithmetic, and is itself provably total only in stronger theories. This could alternatively be expressed using strings of brackets: start with a finite sequence of brackets such as (((()))). Pick an empty pair and a non-negative integer n. Delete the pair, and if its parent is not the outermost pair, take its parent and append n copies of it. For example, with n = 2, the string (((()))) becomes ((()()())). More about Buchholz hydras The Buchholz hydra game is a hydra game in mathematical logic, a single-player game based on the idea of chopping pieces off a mathematical tree.
The hydra game can be used to generate a rapidly growing function, which eventually dominates all provably total recursive functions. It is an extension of Kirby–Paris hydras. What we use to obtain a fast-growing function is the same as for Kirby–Paris hydras, but because Buchholz hydras grow not only in width but also in height, the resulting function has a much greater growth rate. This system can also be used to create an ordinal notation for large infinite ordinals. See also Goodstein's theorem References External links The hydra game Hercules and the hydra Kill the Mathematical Hydra by PBS Infinite Series Graph theory Graph theory objects Number theory Set theory
Hydra game
[ "Mathematics" ]
1,674
[ "Discrete mathematics", "Graph theory objects", "Set theory", "Mathematical logic", "Graph theory", "Combinatorics", "Mathematical relations", "Number theory" ]
16,163,906
https://en.wikipedia.org/wiki/NGC%207243
NGC 7243 (also known as Caldwell 16) is an open cluster and Caldwell object in the constellation Lacerta. It shines at magnitude +6.4. Its celestial coordinates are RA , dec . It is located near the naked-eye stars Alpha Lacertae, 4 Lacertae, an A-class double star, and planetary nebula IC 5217. It lies approximately 2,800 light-years away, and is thought to be just over 100 million years old, consisting mainly of white and blue stars. Notes References External links SEDS – NGC 7243 VizieR – NGC 7243 NED – NGC 7243 Open clusters 7243 016b Lacerta
NGC 7243
[ "Astronomy" ]
137
[ "Lacerta", "Constellations" ]
16,164,340
https://en.wikipedia.org/wiki/Dendrophilia%20%28paraphilia%29
Dendrophilia (or less often arborphilia or dendrophily) literally means "love of trees". The term may sometimes refer to a paraphilia in which people are attracted to or sexually aroused by trees. This may involve sexual contact, veneration of trees as phallic symbols, or both. Andrew Marvell made poetry using dendrophilic themes. Description Many people use vegetables and fruits such as cucumbers or carrots to insert into their vagina or anus as an object to receive sexual pleasure or orgasms when they masturbate. In men, holes inside trees or trunks, approximating the shape of a vagina, can be used, through which the penis is inserted. Many people experience feelings toward plants after having sex in a garden, forest, greenhouse, or bedroom with many plants. The use of flowers to caress the body is also included in dendrophilia. It is widely regarded as illegal to act upon such urges in public due to indecent exposure laws, and people have been arrested for such attempts. In popular culture In the film 40 Days and 40 Nights, Josh Hartnett's girlfriend reaches orgasm in this way while he is carrying out a sexual fast for Lent. In an episode of the Mexican-American series Case Closed, a man said that he reached orgasm through a fruit and that he could not please his wife in the usual way. Bibliography Corsini, Raymond J. (1999). The Dictionary of Psychology. Psychology Press, p. 263. ISBN 1-58391-028-X. Love, Brenda (1992). The Encyclopedia of Unusual Sex Practices. Barricade Books, NY. ISBN 1-56980-011-1. Gregor, Thomas (1987). Anxious Pleasures: The Sexual Lives of an Amazonian People. University of Chicago Press. ISBN 9780226307435 References Paraphilias
Dendrophilia (paraphilia)
[ "Biology" ]
391
[ "Behavior", "Sexuality stubs", "Sexuality" ]
16,164,493
https://en.wikipedia.org/wiki/Affordable%20Weapon%20System
The Affordable Weapon System is a US Navy program to design and produce a low-cost "off the shelf" cruise missile launchable from a self-contained unit mounted in a standard shipping container. The need for the US Army to mass-manufacture more affordable, low-overhead weapons became a pressing matter during the 1970s, a decade when costs to operate and support an armed inventory grew rapidly and consequently reduced budgets for new weapons acquisitions. The US weapons inventory is the most advanced in the world, but its volume is deemed insufficient for a theoretical war against China, for example (especially in long-range precision-guided weaponry). To that effect, BAE Systems had developed a kit (Advanced Precision Kill Weapon System) to convert Hydra rockets into smart, precision-guided munitions. Specifications Length (w/o booster): 3.32 m (10 ft 11 in) Diameter: 34.3 cm (13.5 in) Weight: 394 kg (737 lb) Speed: 400 km/h (250 mph) Ceiling: 4570 m (15000 ft) Range: > 1560 km (840 nm) Propulsion: Solid rocket booster and SWB Turbines SWB-65 turbojet sustainer. Payload: 200 lbs. Guidance: GPS and in-flight datalink. Program status April 2002 - International Systems LLC of San Diego, Calif. (subsidiary of Titan Corp.) awarded a $25,657,312 cost-plus-fixed-fee contract for continuing development and implementation. June 2005 - Titan awarded a $32.4 million contract modification to produce approximately 85 missiles for demonstration, test and evaluation. The contract also includes work for the AWS launcher design and ship integration. September 2005 - Titan awards contract for launch systems to BAE Systems. 2007 - Duncan L. Hunter pushed a $30 million budget line in the yearly defense appropriations bill to continue the development of AWS, despite inconclusive 2006 tryouts. July 2008 - DOD Research, Development, Test, and Evaluation budget earmarks $15,200,000 for the program. November 2008 - MBDA Incorporated is awarded a $4,530,231 contract for research into the best material approach and the completion of risk reduction tasks for the AWS. References External links SWB Turbines SWB-65 Titan shoots for bargain missile - San Diego Union-Tribune Cruise missiles of the United States Area denial weapons Naval weapons Proposed weapons of the United States Equipment of the United States Navy
Affordable Weapon System
[ "Engineering" ]
492
[ "Area denial weapons", "Military engineering" ]
16,164,511
https://en.wikipedia.org/wiki/IC%205146
IC 5146 (also Caldwell 19, Sh 2-125, Barnard 168, and the Cocoon Nebula) is a reflection/emission nebula and Caldwell object in the constellation Cygnus. The NGC description refers to IC 5146 as a cluster of 9.5 mag stars involved in a bright and dark nebula. The cluster is also known as Collinder 470. It shines at magnitude +10.0/+9.3/+7.2. Its celestial coordinates are RA , dec . It is located near the naked-eye star Pi Cygni, the open cluster NGC 7209 in Lacerta, and the bright open cluster M39. The cluster is about 4,000 ly away, and the central star that lights it formed about 100,000 years ago; the nebula is about 12 arcmins across, which is equivalent to a span of 15 light years. When viewing IC 5146, observers find that the dark nebula Barnard 168 (B168) is an inseparable part of the experience; it forms a dark lane that surrounds the cluster and projects westward, forming the appearance of a trail behind the Cocoon. Young Stellar Objects IC 5146 is a stellar nursery where star formation is ongoing. Observations by both the Spitzer Space Telescope and the Chandra X-ray Observatory have collectively identified hundreds of young stellar objects. Young stars are seen in both the emission nebula, where gas has been ionized by massive young stars, and in the infrared-dark molecular cloud that forms the "tail". One of the most massive stars in the region is BD +46 3474, a star of class B1 that is an estimated 14±4 times the mass of the sun. Another interesting star in the nebula is BD +46 3471, which is an example of a HAeBe star, an intermediate-mass star with strong emission lines in its spectrum. References External links Diffuse nebulae Reflection nebulae Emission nebulae Open clusters 019b 5146 Cygnus (constellation) Sharpless objects Star-forming regions
IC 5146
[ "Astronomy" ]
411
[ "Cygnus (constellation)", "Constellations" ]
16,164,690
https://en.wikipedia.org/wiki/Normal%20Accidents
Normal Accidents: Living with High-Risk Technologies is a 1984 book by Yale sociologist Charles Perrow, which analyses complex systems from a sociological perspective. Perrow argues that multiple and unexpected failures are built into society's complex and tightly coupled systems, and that accidents are unavoidable and cannot be designed around. System accidents "Normal" accidents, or system accidents, are so-called by Perrow because such accidents are inevitable in extremely complex systems. Given the characteristics of the system involved, multiple failures that interact with each other will occur, despite efforts to avoid them. Perrow said that, while operator error is a very common problem, many failures relate to organizations rather than technology, and major accidents almost always have very small beginnings. Such events appear trivial to begin with before unpredictably cascading through the system to create a large event with severe consequences. Normal Accidents contributed key concepts to a set of intellectual developments in the 1980s that revolutionized the conception of safety and risk. It made the case for examining technological failures as the product of highly interacting systems, and highlighted organizational and management factors as the main causes of failures. Technological disasters could no longer be ascribed to isolated equipment malfunction, operator error, or acts of God. Perrow identifies three conditions that make a system likely to be susceptible to Normal Accidents. These are: The system is complex The system is tightly coupled The system has catastrophic potential Three Mile Island The inspiration for Perrow's book was the 1979 Three Mile Island accident, where a nuclear accident resulted from an unanticipated interaction of multiple failures in a complex system. The event was an example of a normal accident because it was "unexpected, incomprehensible, uncontrollable and unavoidable". Perrow concluded that the failure at Three Mile Island was a consequence of the system's immense complexity. Such modern high-risk systems, he realized, were prone to failures however well they were managed. It was inevitable that they would eventually suffer what he termed a 'normal accident'. Therefore, he suggested, we might do better to contemplate a radical redesign, or if that was not possible, to abandon such technology entirely. New reactor designs One disadvantage of any new nuclear reactor technology is that safety risks may be greater initially as reactor operators have little experience with the new design. Nuclear engineer David Lochbaum has said that almost all serious nuclear accidents have occurred with what was at the time the most recent technology. He argues that "the problem with new reactors and accidents is twofold: scenarios arise that are impossible to plan for in simulations; and humans make mistakes". As Dennis Berry, Director Emeritus of Sandia National Laboratory, put it, "fabrication, construction, operation, and maintenance of new reactors will face a steep learning curve: advanced technologies will have a heightened risk of accidents and mistakes. The technology may be proven, but people are not". Sometimes, engineering redundancies which are put in place to help ensure safety may backfire and produce less, not more, reliability. This may happen in three ways: First, redundant safety devices result in a more complex system, more prone to errors and accidents. Second, redundancy may lead to shirking of responsibility among workers.
Third, redundancy may lead to increased production pressures, resulting in a system that operates at higher speeds, but less safely.

Readership
Normal Accidents had received more than 1,000 citations in the Social Sciences Citation Index and Science Citation Index by 2003. A German translation of the book was published in 1987, with a second edition in 1992.

See also
List of books about nuclear issues
Black Swan theory
Megaprojects and Risk
Northeast Blackout of 2003
Brittle Power
Fukushima nuclear disaster
International Nuclear Event Scale
Small Is Beautiful
Space Shuttle Challenger disaster
Meltdown: Why Our Systems Fail and What We Can Do about It
Paul Virilio

Literature
Charles Perrow (1984). Normal Accidents: Living with High-Risk Technologies.
Charles Perrow, "Accidents, Normal", in: International Encyclopedia of the Social & Behavioral Sciences, Elsevier, 2001, pp. 33–38 (online).

References

Aviation safety
Nuclear safety and security
Safety engineering
Failure
Three Mile Island accident
Books about nuclear issues
1984 non-fiction books
Normal Accidents
[ "Engineering" ]
855
[ "Safety engineering", "Systems engineering" ]
16,164,987
https://en.wikipedia.org/wiki/NGC%204244
NGC 4244, also known as Caldwell 26, is an edge-on loose spiral galaxy in the constellation Canes Venatici, and is part of the M94 Group (Canes Venatici I Group), a galaxy group relatively close to the Local Group, which contains the Milky Way. In the sky, it is located near the yellow naked-eye star Beta Canum Venaticorum, and also near the barred spiral galaxy NGC 4151 and the irregular galaxy NGC 4214. With an apparent V-band magnitude of 10.18, NGC 4244 lies approximately 4.3 megaparsecs (14 million light-years) away. A nuclear star cluster and halo are located near the centre of this galaxy.

See also
IC 5052 – a similar edge-on galaxy

Notes

References

External links

Unbarred spiral galaxies
Canes Venatici
M94 Group
NGC 4244
[ "Astronomy" ]
187
[ "Canes Venatici", "Constellations" ]
16,165,687
https://en.wikipedia.org/wiki/Rings%20of%20Rhea
Rhea, the second-largest moon of Saturn, may have a tenuous ring system consisting of three narrow, relatively dense bands within a particulate disk. This would be the first discovery of rings around a moon. The potential discovery was announced in the journal Science on March 6, 2008.

In November 2005 the Cassini orbiter found that Saturn's magnetosphere is depleted of energetic electrons near Rhea. According to the discovery team, the pattern of depletion is best explained by assuming the electrons are absorbed by solid material in the form of an equatorial disk of particles, perhaps several decimeters to approximately a meter in diameter, that contains several denser rings or arcs. Subsequent targeted optical searches of the putative ring plane from several angles by Cassini's narrow-angle camera failed to find any evidence of the expected ring material, and in August 2010 it was announced that Rhea was unlikely to have rings, and that the reason for the depletion pattern, which is unique to Rhea, is unknown. However, an equatorial chain of bluish marks on the Rhean surface suggests past impacts of deorbiting ring material and leaves the question unresolved.

Detection
Voyager 1 observed a broad depletion of energetic electrons trapped in Saturn's magnetic field downstream from Rhea in 1980. These measurements, which were never explained, were made at a greater distance than the Cassini data.

On November 26, 2005, Cassini made the one targeted Rhea flyby of its primary mission. It passed within 500 km of Rhea's surface, downstream of Saturn's magnetic field, and observed the resulting plasma wake as it had with other moons, such as Dione and Tethys. In those cases, there was an abrupt cutoff of energetic electrons as Cassini crossed into the moons' plasma shadows (the regions where the moons themselves blocked the magnetospheric plasma from reaching Cassini). In the case of Rhea, however, the electron plasma started to drop off slightly at eight times that distance, and decreased gradually until the expected sharp drop-off as Cassini entered Rhea's plasma shadow. The extended distance corresponds to Rhea's Hill sphere, the distance of 7.7 times Rhea's radius inside of which orbits are dominated by Rhea's rather than Saturn's gravity. When Cassini emerged from Rhea's plasma shadow, the reverse pattern occurred: a sharp surge in energetic electrons, then a gradual increase out to Rhea's Hill-sphere radius.

These readings are similar to those of Enceladus, where water venting from its south pole absorbs the electron plasma. In the case of Rhea, however, the absorption pattern is symmetrical. In addition, the Magnetospheric Imaging Instrument (MIMI) observed that this gentle gradient was punctuated by three sharp drops in plasma flow on each side of the moon, a pattern that was also nearly symmetrical.

In August 2007, Cassini passed through Rhea's plasma shadow again, but further downstream. Its readings were similar to those of Voyager 1.

Two years later, in October 2009, it was announced that a set of small ultraviolet-bright spots distributed in a line that extends three quarters of the way around Rhea's circumference, within 2 degrees of the equator, may represent further evidence for a ring. The spots presumably represent the impact points of deorbiting ring material. There are no images or direct observations of the material thought to be absorbing the plasma, but the likely candidates would be difficult to detect directly. Further observations during Cassini's targeted flyby on March 2, 2010 found no evidence of orbiting ring material.
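The Hill-sphere figure quoted above can be sanity-checked with the standard formula. The following worked estimate is a sketch using approximate published values for Rhea and Saturn, not numbers taken from the detection papers:

$$ r_H \approx a \left( \frac{m}{3M} \right)^{1/3} $$

With $a \approx 527{,}000$ km (Rhea's orbital semi-major axis), $m \approx 2.3 \times 10^{21}$ kg (Rhea's mass) and $M \approx 5.68 \times 10^{26}$ kg (Saturn's mass):

$$ r_H \approx 527{,}000 \times \left( \frac{2.3 \times 10^{21}}{3 \times 5.68 \times 10^{26}} \right)^{1/3} \approx 5{,}800 \text{ km}, $$

which is roughly 7.6 Rhea radii ($R_{\text{Rhea}} \approx 764$ km), consistent with the factor of 7.7 quoted above.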
Interpretation
Cassini's flyby trajectory makes interpretation of the magnetic readings difficult. The obvious candidates for magnetospheric plasma-absorbing matter are neutral gas and dust, but the quantities required to explain the observed depletion are far greater than Cassini's measurements allow. Therefore, the discoverers, led by Geraint Jones of the Cassini MIMI team, argue that the depletions must be caused by solid particles orbiting Rhea: "An analysis of the electron data indicates that this obstacle is most likely in the form of a low optical depth disk of material near Rhea's equatorial plane and that the disk contains solid bodies up to ~1 m in size."

The simplest explanation for the symmetrical punctuations in plasma flow is "extended arcs or rings of material" orbiting Rhea in its equatorial plane. These symmetric dips bear some similarity to the method by which the rings of Uranus were discovered in 1977. The slight deviations from absolute symmetry may be due to "a modest tilt to the local magnetic field" or "common plasma flow deviations" rather than to asymmetry of the rings themselves, which may be circular.

Not all scientists are convinced that the observed signatures are caused by a ring system. No rings have been seen in images, which puts a very low limit on dust-sized particles. Furthermore, a ring of boulders would be expected to generate dust that would likely have been seen in the images.

Physics
Simulations suggest that solid bodies can stably orbit Rhea near its equatorial plane over astronomical timescales. They may not be stable around Dione and Tethys, because those moons are much nearer to Saturn and therefore have far smaller Hill spheres, or around Titan, because of drag from its dense atmosphere.

Several suggestions have been made for the possible origin of the rings. An impact could have ejected material into orbit; this could have happened as recently as 70 million years ago. A small body could have been disrupted when caught in orbit about Rhea. In either case, the debris would eventually have settled into circular equatorial orbits. Given the possibility of long-term orbital stability, however, it is possible that the material survives from the formation of Rhea itself. For discrete rings to persist, something must confine them. Suggestions include moonlets or clumps of material within the disk, similar to those observed within Saturn's A ring.

See also
Subsatellite

References

External links
NASA podcast

Moons of Saturn
Planetary rings
Rhea (moon)
Solar System
Rings of Rhea
[ "Astronomy" ]
1,236
[ "Outer space", "Solar System" ]
16,166,462
https://en.wikipedia.org/wiki/Plant%20disease%20epidemiology
Plant disease epidemiology is the study of disease in plant populations. Much like diseases of humans and other animals, plant diseases occur due to pathogens such as bacteria, viruses, fungi, oomycetes, nematodes, phytoplasmas, protozoa, and parasitic plants. Plant disease epidemiologists strive for an understanding of the cause and effects of disease and develop strategies to intervene in situations where crop losses may occur. Destructive and non-destructive methods are used to detect diseases in plants; additionally, understanding the responses of the plant immune system will further help to limit the loss of crops. Typically, a successful intervention will lead to a level of disease low enough to be acceptable, depending upon the value of the crop.

Plant disease epidemiology is often looked at from a multi-disciplinary approach, requiring biological, statistical, agronomic and ecological perspectives. Biology is necessary for understanding the pathogen and its life cycle, and for understanding the physiology of the crop and how the pathogen is adversely affecting it. Agronomic practices often influence disease incidence, for better or for worse. Ecological influences are numerous: native species of plants may serve as reservoirs for pathogens that cause disease in crops. Statistical models are often applied in order to summarize and describe the complexity of plant disease epidemiology, so that disease processes can be more readily understood. For example, comparisons between patterns of disease progress for different diseases, cultivars, management strategies, or environmental settings can help in determining how plant diseases may best be managed. Policy can be influential in the occurrence of diseases, through actions such as restrictions on imports from sources where a disease occurs.

In 1963 J. E. van der Plank published Plant Diseases: Epidemics and Control, which provided a theoretical framework, based on experiments in many different host–pathogen systems, for the study of the epidemiology of plant diseases, and moved the field forward rapidly, especially for fungal foliar pathogens. Using this framework we can now model, and determine thresholds for, epidemics that take place in a homogeneous environment such as a mono-cultural crop field.

Elements of an epidemic
Disease epidemics in plants can cause huge losses in crop yields, as well as threatening to wipe out an entire species, as was the case with Dutch elm disease and could occur with sudden oak death. An epidemic of potato late blight, caused by Phytophthora infestans, led to the Great Irish Famine and the loss of many lives.

Commonly the elements of an epidemic are referred to as the "disease triangle": a susceptible host, a pathogen, and a conducive environment. For disease to occur, all three of these must be present; where all three coincide, there is disease. A fourth element needed for an epidemic to develop is time. As long as all three elements of the triangle are present, disease can initiate; an epidemic will ensue only if all three continue to be present. Any one of the three might, however, be removed from the equation: the host may outgrow susceptibility, as with high-temperature adult-plant resistance; the environment may change and no longer be conducive for the pathogen to cause disease; or the pathogen may be controlled through a fungicide application.
The time at which a particular infection occurs, and the length of time conditions remain viable for that infection, can also play an important role in epidemics. The age of the plant species can play a role as well, as certain species change in their level of disease resistance as they mature, in a process known as ontogenic resistance.

If not all of the criteria are met (for example, a susceptible host and pathogen are present, but the environment is not conducive to the pathogen infecting and causing disease), disease cannot occur. For example, if corn is planted into a field with corn residue carrying the fungus Cercospora zea-maydis, the causal agent of grey leaf spot of corn, but the weather is too dry and there is no leaf wetness, the spores of the fungus in the residue cannot germinate and initiate infection. Likewise, if the host is susceptible and the environment favours the development of disease, but the pathogen is not present, there is no disease: if the corn is planted into a ploughed field with no corn residue carrying Cercospora zea-maydis, then even if the weather brings extended periods of leaf wetness, no infection is initiated. When a pathogen requires a vector to spread, then for an epidemic to occur the vector must also be plentiful and active.

Types of epidemics
Monocyclic epidemics are caused by pathogens with a low birth rate and death rate, meaning they have only one infection cycle per season. They are typical of soil-borne diseases such as Fusarium wilt of flax. Polycyclic epidemics are caused by pathogens capable of several infection cycles a season. They are most often caused by airborne diseases such as powdery mildew. Bimodal polycyclic epidemics can also occur; for example, in brown rot of stone fruits the blossoms and the fruits are infected at different times.

For some diseases the disease occurrence needs to be evaluated over several growing seasons, especially when crops are grown in monoculture year after year or when perennial plants are grown. Under such conditions the inoculum produced in one season can be carried over to the next, leading to a build-up over the years, especially in the tropics where there are no clear-cut breaks between growing seasons. Epidemics under these conditions are called polyetic; they can be caused by both monocyclic and polycyclic pathogens. Apple powdery mildew is an example of a polyetic epidemic caused by a polycyclic pathogen; Dutch elm disease is a polyetic epidemic caused by a monocyclic pathogen. A sketch of the classic disease-progress models associated with monocyclic and polycyclic epidemics is given below.
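As a hedged illustration of the modelling tradition begun by van der Plank, the monomolecular and logistic disease-progress curves conventionally associated with monocyclic and polycyclic epidemics, respectively, can be computed as follows. This is a minimal sketch; the rate parameter and initial incidence are invented for demonstration and are not drawn from any particular study.

```python
import math

def monomolecular(y0, r, t):
    """Monomolecular model, conventionally used for monocyclic epidemics:
    dy/dt = r * (1 - y); no secondary spread within the season."""
    return 1 - (1 - y0) * math.exp(-r * t)

def logistic(y0, r, t):
    """Logistic model, conventionally used for polycyclic epidemics:
    dy/dt = r * y * (1 - y); diseased tissue drives new infections."""
    return 1 / (1 + ((1 - y0) / y0) * math.exp(-r * t))

y0 = 0.001  # initial proportion of diseased tissue (illustrative value)
r = 0.15    # apparent infection rate per day (illustrative value)

for t in range(0, 71, 10):  # days after disease onset
    print(f"day {t:2d}: monocyclic {monomolecular(y0, r, t):.3f}  "
          f"polycyclic {logistic(y0, r, t):.3f}")
```

The qualitative difference is visible even in this toy run: the logistic curve stays near zero and then rises sharply once enough infectious tissue has accumulated, which is why polycyclic pathogens such as powdery mildews can overwhelm a crop within a single season.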
Detecting diseases
There are many different ways to detect disease, both destructively and non-destructively. In order to understand the cause and effects of a disease, and its cure, non-destructive methods are preferable: these are techniques in which sample preparation and repetitive processing are not necessary for measuring and observing the condition of a plant's health. Non-destructive approaches include image processing, imaging-based methods, spectroscopy-based methods, and remote sensing.

Photography, digital imaging, and image-analysis technology are useful tools for image processing: valuable data are extracted from the images and then analyzed for disease. Before any analysis happens, however, the first step is image acquisition, which involves three stages: the energy source, i.e. the light illuminating the object of interest; the optical system, such as a camera, that focuses the energy; and the sensor that measures the energy. Image processing then continues with a pre-processing step, which ensures that factors such as the background, the size and shape of the leaf, the lighting, and camera effects do not distort the analysis. After pre-processing, image segmentation is used to split the image into diseased and non-diseased regions (a toy sketch of this step is given below). From these images, features of color, texture, and shape can be extracted and used for the analysis.

Imaging-based approaches to detection rest on two main methods: fluorescence imaging and hyperspectral imaging. Fluorescence imaging helps identify the metabolic condition of the plant; to do so, a light source illuminates the chlorophyll complex of the plant. Hyperspectral imaging is used to obtain reflectance images. Such methods include spectral information divergence (SID), which assesses spectral reflectance across wavelength bands.

Another non-destructive approach is spectroscopy, which concerns the interaction of electromagnetic radiation with matter. There are visible and infrared spectroscopy, fluorescence spectroscopy, and electrical impedance spectroscopy; each gives information including the type of radiation energy, the type of material, the nature of the interaction, and more.

Finally, the last non-destructive approach is the application of remote sensing to plant diseases, in which data are obtained without the observer having to be physically present with the plant. Remote sensing may be hyperspectral, providing high spectral and spatial resolution, or multispectral, indicating the severity of the disease. There is a need for further development of antibody- and molecular-marker tests for new pathogens and for the occurrence of known pathogens in new hosts, and also a need for further global integration of quarantine and surveillance.
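To make the segmentation step mentioned above concrete, the following is a minimal, hedged sketch of threshold-based segmentation of a leaf image. The input filename, hue thresholds and saturation cutoff are all invented for illustration; production systems use far more robust pre-processing and trained classifiers rather than fixed thresholds.

```python
# A toy leaf-image segmentation: healthy tissue is assumed green,
# lesions brown/yellow. All thresholds are illustrative only.
from PIL import Image
import numpy as np

img = Image.open("leaf.png").convert("HSV")   # hypothetical input file
h, s, v = np.array(img).transpose(2, 0, 1).astype(float)

hue_degrees = h * 360.0 / 255.0   # PIL stores hue on a 0-255 scale
leaf_mask = s > 40                # drop washed-out background pixels
green = (hue_degrees > 70) & (hue_degrees < 170)
diseased = leaf_mask & ~green     # leaf pixels that are not green

severity = diseased.sum() / max(leaf_mask.sum(), 1)
print(f"Estimated diseased leaf fraction: {severity:.1%}")
```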
Immune system
Plants can show many signs or physical evidence of fungal, viral or bacterial infections, ranging from rusts or molds to showing nothing at all when a pathogen invades the plant (as occurs in some viral diseases of plants). Symptoms, the visible effects of disease on the plant, consist of changes in color, shape or function, and correspond to the plant's response to the pathogen or other foreign organism adversely affecting its system. Even though plants do not have cells that can move and fight foreign organisms, and do not have a somatic adaptive immune system, they do have, and depend on, the innate immunity of each cell and on systemic signals.

In response to infection, plants have a two-branched innate immune system. The first branch recognizes and responds to molecules common to whole classes of microbes, including non-pathogens. The second branch responds to pathogen virulence factors, either directly or through their effects on the host. Pattern recognition receptors (PRRs) are activated by the recognition of pathogen- or microbial-associated molecular patterns, known as PAMPs or MAMPs. This leads to PAMP-triggered immunity, or pattern-triggered immunity (PTI), in which PRRs cause intracellular signaling, transcriptional reprogramming, and the biosynthesis of a complex output response that limits colonization. In addition, effector-triggered immunity (ETI), mediated by R genes, is activated by specific pathogen "effectors" that can trigger a strong antimicrobial response. Both PTI and ETI assist in plant defense through the activation of damage-associated molecular patterns (DAMPs). Cellular changes or changes in gene expression are activated through ion-channel gating, oxidative bursts, cellular redox changes, or protein kinase cascades, via PTI and ETI receptors.

Impact
Through 2013, invasive tree diseases had killed about 100 million elm trees combined in the United Kingdom and United States, and 3.5 billion American chestnut trees.

See also
Distance Diagnostics Through Digital Imaging (DDDI)
Landscape epidemiology
Plant disease forecasting
Robert Hartig
Forest pathology
Phytopathology, with historical landmarks in plant pathology

References

Further reading
Crop disease epidemiology

External links
Ecology and epidemiology in the R programming environment – open-access modules published in The Plant Health Instructor

Phytopathology
Agronomy
Epidemiology
Plant diseases
Plant disease epidemiology
[ "Environmental_science" ]
2,276
[ "Epidemiology", "Environmental social science" ]
13,457,839
https://en.wikipedia.org/wiki/Social%20commerce
Social commerce is a subset of electronic commerce that involves social media and online media that support social interaction, and user contributions to assist the online buying and selling of products and services. More succinctly, social commerce is the use of social networks in the context of e-commerce transactions, from browsing to checkout, without ever leaving a social media platform.

The term social commerce was introduced by Yahoo! in November 2005 to describe a set of online collaborative shopping tools such as shared pick lists, user ratings and other user-generated content for sharing online product information and advice. The concept of social commerce was developed by David Beisel to denote user-generated advertorial content on e-commerce sites, and by Steve Rubel to include collaborative e-commerce tools that enable shoppers "to get advice from trusted individuals, find goods and services and then purchase them". The social networks that spread this advice have been found to increase the customer's trust in one retailer over another.

Social commerce aims to assist companies in achieving the following purposes. Firstly, it helps companies engage customers with their brands according to the customers' social behaviors. Secondly, it provides an incentive for customers to return to a company's website. Thirdly, it provides customers with a platform to talk about the brand on the company's website. Fourthly, it provides all the information customers need to research, compare, and ultimately choose one retailer over its competitors, and thus to purchase from that retailer and not another.

Today, the range of social commerce has expanded to include social media tools and content used in the context of e-commerce, especially in the fashion industry. Examples of social commerce include customer ratings and reviews, user recommendations and referrals, social shopping tools (sharing the act of shopping online), forums and communities, social media optimization, social applications and social advertising. Technologies such as augmented reality have also been integrated with social commerce, allowing shoppers to visualize apparel items on themselves and solicit feedback through social media tools. Some academics have sought to distinguish "social commerce" from "social shopping", with the former being referred to as collaborative networks of online vendors, and the latter the collaborative activity of online shoppers.

Timeline
2005: The term "social commerce" was first introduced by Yahoo!.
2021: The Global Web Index linked social media use to eagerness to buy. Social media, with its entertaining and inspirational content, can increase a product's profitability. This explains why Instagram expanded its Checkout feature to similar content like IG Stories, IGTV, and Reels.

Elements
The attraction and effectiveness of social commerce can be understood in terms of Robert Cialdini's principles of influence from Influence: Science and Practice:
Reciprocity – When a company gives a person something for free, that person will feel the need to return the favor, whether by buying again or giving good recommendations for the company.
Community – When people find an individual or a group that shares the same values, likes, beliefs, etc., they find community. People are more committed to a community that they feel accepted within.
When this commitment happens, they tend to follow the same trends as the group, and when one member introduces a new idea or product, it is accepted more readily on the basis of the trust that has already been established. It would be beneficial for companies to develop partnerships with social media sites to engage social communities with their products.
Social proof – To receive positive feedback, a company needs to be willing to accept social feedback and to show proof that other people are buying, and liking, the same things that a given customer likes. This can be seen in many online companies, such as eBay and Amazon, which allow public feedback on products and, when a purchase is made, immediately generate a list showing purchases that other people have made in relation to the recent purchase. It is beneficial to encourage open recommendations and feedback; this creates trust in the seller. 55% of buyers turn to social media when they are looking for information.
Authority – Many people need proof that a product is of good quality. This proof can be based on the recommendations of others who have bought the same product. If there are many user reviews of a product, then a consumer will be more willing to trust their own decision to buy the item.
Liking – People trust based on the recommendations of others. If there are a lot of "likes" of a particular product, then the consumer will feel more confident and justified in making the purchase.
Scarcity – As part of supply and demand, a greater value is assigned to products that are regarded as either being in high demand or being in a shortage. Therefore, if people are convinced that they are purchasing something unique, special, or not easy to acquire, they will have more of a willingness to make the purchase, and if trust has been established with the seller, they will want to buy immediately. This can be seen in the cases of Zara and Apple Inc., which create demand for their products by convincing the public that there is a possibility of missing out on being able to purchase them.

Types
Social commerce has become a very broad term encapsulating many different technologies. It can be categorized as onsite and offsite social commerce.

Onsite
Onsite social commerce refers to retailers including social sharing and other social functionality on their own websites. Some notable examples include Zazzle, which enables users to share their purchases; Macy's, which allows users to create a poll to find the right product; and Fab.com, which shows a live feed of what other shoppers are buying. Onsite user reviews are also considered a part of social commerce. This approach has been successful in improving customer engagement, conversion and word-of-mouth branding, according to several industry sources.

Offsite
Offsite social commerce includes activities that happen outside of the retailer's website. These may include Facebook storefronts, posting products on Facebook, Twitter, Pinterest and other social networks, and advertising. However, many large brands seem to be abandoning that approach: a study by W3B suggests that just two percent of Facebook's 1.5 billion users have ever made a purchase through the social network.

Measurements
Social commerce can be measured by any of the principal ways of measuring social media.
Return on investment: measures the effect or action of social media on sales.
Reputation: indices measure the influence of social media investment in terms of changes to online reputation, made up of the volume and valence of social media mentions.
Reach: metrics use traditional media-advertising measures to gauge the exposure rates and levels of an audience reached through social media.

Business applications
This category is based on individuals' shopping, selling and recommending behaviors.
Social network-driven sales (Soldsie) – Facebook commerce and Twitter commerce belong to this category. Sales take place on established social network sites.
Peer-to-peer sales platforms (eBay, Etsy, Amazon) – On these websites, users can directly communicate with and sell products to other users.
Group buying (Groupon, LivingSocial) – Users can buy products or services at a lower price when enough users agree to make the purchase.
Peer recommendations and reviews (Amazon, Yelp, Bazaarvoice) – Users can see recommendations and reviews from other users.
User-curated shopping (The Fancy, Lyst) – Users create and share lists of products and services for others to shop from.
Participatory commerce (Betabrand, Threadless, Kickstarter) – Users can get involved in the production process.
Social shopping (Squadded) – Allows e-commerce sites to provide their users with live chat sessions and shared shopping lists, so that they can communicate with their friends or other shoppers for advice.

Business examples
Here are some notable business examples of social commerce:
Betabrand: an online brand using participatory design to release new, community-created ideas every week.
Cafepress: an online retailer of stock and user-customized on-demand products.
Etsy: an e-commerce website focused on handmade or vintage items and supplies, as well as unique factory-manufactured items under Etsy's guidelines.
Eventbrite: an online ticketing service that allows event organizers to plan, set up ticket sales and promote events (event management) and publish them across Facebook, Twitter and other social-networking tools directly from the site's interface.
Groupon: a deal-of-the-day website that features discounted gift certificates usable at local or national companies.
Houzz: a website and online community about architecture, interior design and decorating, landscape design and home improvement.
LivingSocial: an online marketplace that allows clients to buy and share things to do in their city.
Lockerz: an international social commerce website based in Seattle, Washington.
OpenSky: an online social shopping marketplace.
Pinterest: a web and mobile application company that offers a visual discovery, collection, sharing and storage tool.
Polyvore: a community-powered social commerce website. Members curate products into a shared product index and use them to create image collages called "Sets".
Solavei: a social commerce network offering contract-free mobile service in the United States.

Facebook commerce (f-commerce)
Facebook commerce, f-commerce and f-comm refer to the buying and selling of goods or services through Facebook, either through Facebook directly or through the Facebook Open Graph. By March 2010, 1.5 million businesses had pages on Facebook built with Facebook Markup Language (FBML). A year later, in March 2011, Facebook deprecated FBML and adopted iframes. This allowed developers to gather more information about their Facebook visitors.
History The "2011 Social Commerce Study" estimated that 42% of online consumers had "followed" a retailer proactively through Facebook, Twitter or the retailer's blog, and that a full one-third of shoppers said they would be likely to make a purchase directly from Facebook (35%) or Twitter (32%). Influencer marketing Micro-influencers are designers, photographers, writers, athletes, bohemian world-wanderers, professors, or any professional who could authentically channel things that speak about a brand. It is clear that these channels have fewer followers than the average celebrity accounts, most of the time they have less than 10,000 followers (according to Georgia Hatton from Social Media Today), but the quality of the audiences tends to be better, with a higher potential for like-minded tight-knit community of shoppers eager to take recommendations from one another. This topic has been also discussed by many other organizations such as Adweek, Medium, Forbes, Brand24, and many others. See also Referral marketing Web 2.0 References External links Academic paper on social commerce from Columbia University Collaboration E-commerce Social media Social networks
Social commerce
[ "Technology" ]
2,257
[ "Information technology", "Computing and society", "E-commerce", "Social media" ]
13,458,509
https://en.wikipedia.org/wiki/Internet%20linguistics
Internet linguistics is a domain of linguistics advocated by the English linguist David Crystal. It studies new language styles and forms that have arisen under the influence of the Internet and of other new media, such as Short Message Service (SMS) text messaging. Since the beginning of human–computer interaction (HCI), leading to computer-mediated communication (CMC) and Internet-mediated communication (IMC), experts such as Gretchen McCulloch have acknowledged that linguistics has a contributing role to play in it, in terms of web interface and usability. Studying the emerging language of the Internet can help improve conceptual organization, translation and web usability, to the benefit of linguists and web users alike.

The study of Internet linguistics can take place through four main perspectives: sociolinguistics, education, stylistics and applied linguistics. Further dimensions have developed as a result of technological advances, including the development of the Web as a corpus and the spread and influence of the stylistic variations brought forth by the spread of the Internet, through the mass media and through literary works. In view of the increasing number of users connected to the Internet, the linguistic future of the Internet remains to be determined, as new computer-mediated technologies continue to emerge and people adapt their languages to suit these new media. The Internet continues to play a significant role both in encouraging and in diverting attention away from the usage of languages.

Main perspectives
David Crystal has identified four main perspectives for further investigation: the sociolinguistic perspective, the educational perspective, the stylistic perspective and the applied perspective. The four perspectives are effectively interlinked and affect one another.

Sociolinguistic perspective
This perspective deals with how society views the impact of Internet development on languages. The advent of the Internet has revolutionized communication in many ways: it has changed the way people communicate and has created new platforms with far-reaching social impact. Significant avenues include, but are not limited to, SMS text messaging, e-mails, chatgroups, virtual worlds and the Web.

The evolution of these new media of communication has raised much concern with regard to the way language is being used. According to Crystal (2005), these concerns are neither without grounds nor unprecedented in history: they surface almost always when a new technological breakthrough influences languages, as seen in the 15th century when printing was introduced, the 19th century when the telephone was invented, and the 20th century when broadcasting began to penetrate society.

At a personal level, CMC such as SMS text messaging and mobile e-mailing (push mail) has greatly enhanced instantaneous communication; examples include the iPhone and the BlackBerry.

In schools, it is not uncommon for educators and students to be given personalized school e-mail accounts for communication and interaction purposes. Classroom discussions are increasingly being brought onto the Internet in the form of discussion forums. For instance, at Nanyang Technological University, students engage in collaborative learning on the university's portal edveNTUre, where they participate in discussions on forums and online quizzes and view streaming podcasts prepared by their course instructors, among other activities.
In 2008, iTunes U began to collaborate with universities, converting the Apple music service into a store making academic lectures and scholastic materials available for free; it has partnered with more than 600 institutions in 18 countries, including Oxford, Cambridge and Yale Universities. These forms of academic social networking and media are slated to rise as educators from all over the world continue to seek new ways to better engage students. It is commonplace for students at New York University to interact with "guest speakers weighing in via Skype, library staffs providing support via instant messaging, and students accessing library resources from off campus". This will affect the way language is used, as students and teachers begin to use more of these CMC platforms.

At a professional level, it is a common sight for companies to have their computers and laptops hooked up to the Internet (via wired and wireless connections), and for employees to have individual e-mail accounts. This greatly facilitates internal communication (among the staff of the company) and external communication (with other parties outside one's organization). Mobile communications such as smartphones are increasingly making their way into the corporate world. For instance, in 2008 Apple announced its intention to actively step up its efforts to help companies incorporate the iPhone into the enterprise environment, facilitated by technological developments in streamlining integrated features (push e-mail, calendar and contact management) using ActiveSync.

In general, these new CMCs made possible by the Internet have altered the way people use language: there is heightened informality and, consequently, a growing fear of the language's deterioration. However, as David Crystal puts it, these developments should be seen positively, as they reflect the power of the creativity of a language.

Themes
The sociolinguistics of the Internet may also be examined through five interconnected themes.
Multilingualism – the prevalence and status of the various languages on the Internet.
Language change – From a sociolinguistic perspective, language change is influenced by the physical constraints of technology (e.g. typed text) and by shifting social-economic priorities such as globalization. This theme explores linguistic changes over time, with emphasis on Internet lingo.
Conversation discourse – the changes in patterns of social interaction and communicative practice on the Internet.
Stylistic diffusion – the study of the spread of Internet jargon and related linguistic forms into common usage. Language change, conversation discourse and stylistic diffusion overlap with the aspect of language stylistics (see the stylistic perspective below).
Metalanguage and folk linguistics – the way these linguistic forms and changes on the Internet are labelled and discussed (e.g. the impact of Internet lingo has resulted in the "death" of the apostrophe and the loss of capitalization).

Educational perspective
The educational perspective of Internet linguistics examines the Internet's impact on formal language use, specifically on Standard English, which in turn affects language education. The rise and rapid spread of Internet use has brought about new linguistic features specific to the Internet platform.
These include, but are not limited to, an increase in the use of informal written language, inconsistency in written styles and stylistics, and the use of new abbreviations in Internet chats and SMS text messaging, where technological constraints on message length contributed to the rise of new abbreviations. Such acronyms exist primarily for practical reasons: beyond the technological limitations, they reduce the time and effort required to communicate through these media. Examples of common acronyms include lol (for "laughing out loud", a general expression of laughter), omg ("oh my god") and gtg ("got to go").

The educational perspective has been considerably established in research on the Internet's impact on language education. It is a crucial aspect, as it affects the education of current and future generations of students in the appropriate and timely use of the informal language that arises from Internet usage. There are concerns about the growing infiltration of informal language use and incorrect word use into academic and formal situations, such as the usage of casual words like "guy" or the choice of the word "preclude" in place of "precede" in students' academic papers. Educators have also noted spelling and grammar issues occurring at a higher frequency in students' academic work, the use of abbreviations such as "u" for "you" and "2" for "to" being the most common.

Linguists and professors like Eleanor Johnson suspect that widespread mistakes in writing are strongly connected to Internet usage, and educators have similarly reported new kinds of spelling and grammar mistakes in student work. There is, however, no scientific evidence to confirm the proposed connection. Naomi S. Baron argues in Always On that student writing suffers little impact from the use of Internet-mediated communication (IMC) such as Internet chat, SMS text messaging and e-mail. A study published in 2009 in the British Journal of Developmental Psychology found that students who regularly texted (sent messages via SMS on a mobile phone) displayed a wider range of vocabulary, which may have a positive impact on their reading development.

Though the use of the Internet has resulted in stylistics deemed inappropriate for academic and formal language use, Internet use may not hinder language education but instead aid it. The Internet has proven in different ways that it can provide potential benefits in enhancing language learning, especially second- or foreign-language learning. Language education through the Internet in relation to Internet linguistics is, most significantly, applied through the communication aspect (use of e-mails, discussion forums, chat messengers, blogs, etc.). IMC allows greater interaction between language learners and native speakers of the language, providing greater error correction and better opportunities to learn the standard language, and in the process allows learners to pick up specific skills such as negotiation and persuasion.

Stylistic perspective
This perspective examines how the Internet and its related technologies have encouraged new and different forms of creativity in language, especially in literature. It looks at the Internet as a medium through which new language phenomena have arisen. This new mode of language is interesting to study because it is an amalgam of both spoken and written language.
For example, traditional writing is static compared to the dynamic nature of the new language on the Internet, where words can appear in different colors and font sizes on the computer screen. Yet this new mode of language also contains elements not found in natural languages. One example is the concept of framing found in e-mails and discussion forums. In replying to e-mails, people generally use the sender's message as a frame within which to write their own, and can choose to respond to certain parts of an e-mail message while leaving other bits out. In discussion forums, one can start a new thread, and anyone, regardless of their physical location, can respond to the idea or thought that was set down through the Internet. This is something usually not found in written language.

Future research also includes the new varieties of expression that the Internet and its various technologies are constantly producing, and their effects not only on written languages but also on their spoken forms. The communicative style of Internet language is best observed in the CMC channels below, as there are often attempts to overcome technological restraints such as transmission time lags, and to re-establish the social cues that are often vague in written text.

Mobile phones
Mobile phones (also called cell phones) have an expressive potential beyond their basic communicative functions. This can be seen in text-messaging poetry competitions, such as the one held by The Guardian. The 160-character limit imposed by the cell phone has motivated users to exercise their linguistic creativity to overcome it. A similar example of a new technology with character constraints is Twitter, which has a 280-character limit. There have been debates as to whether the new abbreviated forms introduced in users' Tweets are "lazy" or are creative fragments of communication. Despite the ongoing debate, there is no doubt that Twitter has contributed new lingo to the linguistic landscape and has brought about a new dimension of communication.

The cell phone has also created a new literary genre: cell phone novels. A typical cell phone novel consists of several chapters, which readers download in short installments. These novels are in their "raw" form, as they do not go through the editing processes of traditional novels, and they are written in short sentences, similar to text messaging. Authors of such novels are also able to receive feedback and new ideas from their readers through e-mails or online feedback channels. Unlike in traditional novel writing, readers' ideas sometimes get incorporated into the storyline, and authors may decide to change the plot according to the demand and popularity of the novel (typically gauged by the number of download hits). Despite their popularity, there has also been criticism regarding the novels' "lack of diverse vocabulary" and poor grammar.

Blogs
Blogging has brought about new ways of writing diaries, and from a linguistic perspective the language used in blogs is "in its most 'naked' form", published for the world to see without undergoing the formal editing process. This is what makes blogs stand out, because almost all other forms of printed language have gone through some form of editing and standardization. David Crystal stated that blogs were "the beginning of a new stage in the evolution of the written language".
Blogs have become so popular that they have expanded beyond the written form, with the emergence of photoblogs, videoblogs, audioblogs and moblogs. These developments in interactive blogging have created new linguistic conventions and styles, with more expected to arise in the future.

Virtual worlds
Virtual worlds provide insights into how users are adapting the usage of natural language for communication within these new media. The Internet language that has arisen through user interactions in text-based chatrooms and computer-simulated worlds has led to the development of slang within digital communities; examples include "pwn" and "noob". Emoticons are further examples of how users have adapted different expressions to suit the limitations of cyberspace communication, one of which is the "loss of emotivity".

Communication in niches such as role-playing games (RPGs) in multi-user domains (MUDs) and virtual worlds is highly interactive, with emphasis on speed, brevity and spontaneity. As a result, CMC is generally more vibrant, volatile, unstructured and open. There is often a complex organization of sequences and exchange structures, evident in the connection of conversational strands and short turns. Some of the CMC strategies used include capitalization for words such as EMPHASIS, the use of symbols such as the asterisk to enclose words, as seen in *stress*, and the creative use of punctuation, like ???!?!?!?. Symbols are also used for discourse functions, such as the asterisk as a conversational repair marker, and arrows and carets as deixis and referent markers. Besides contributing to these new forms of language, virtual worlds are also being used to teach languages. Virtual-world language learning provides students with simulations of real-life environments, allowing them to find creative ways to improve their language skills. Virtual worlds are good tools for language learning among younger learners, because they already see such places as a "natural place to learn and play".

E-mail
One of the most popular Internet-related technologies to be studied under this perspective is e-mail, which has expanded the stylistics of languages in many ways. A study of the linguistic profile of e-mails has shown that they are a hybrid of speech and writing styles in terms of format, grammar and style. E-mail is rapidly replacing traditional letter-writing because of its convenience, speed and spontaneity. It is often associated with informality, as it feels temporary and can be deleted easily. However, as this medium of communication matures, e-mail is no longer confined to sending informal messages between friends and relatives. Instead, business correspondence is increasingly carried out through e-mail, and job seekers are using e-mail to send their resumes to potential employers. The result of a move towards more formal usage will be a medium representing a range of formal and informal stylistics.

While e-mail has been blamed for students' increased usage of informal language in their written work, David Crystal argues that e-mail is "not a threat, for language education", because e-mail, with its array of stylistic expressiveness, can act as a domain for language learners to make their own linguistic choices responsibly. Furthermore, the younger generation's high propensity for using e-mail may improve their writing and communication skills, because of the efforts they make to formulate their thoughts and ideas, albeit through a digital medium.
Instant messaging
Like other forms of online communication, instant messaging has developed its own acronyms and short forms. However, instant messaging is quite different from e-mail and chatgroups, because it allows participants to interact with one another in real time while conversing in private. With instant messaging, there is an added dimension of familiarity among participants. This increased degree of intimacy allows greater informality in language and "typographical idiosyncrasies". There are also greater occurrences of stylistic variation, because there can be a very wide age gap between participants; for example, a granddaughter can catch up with her grandmother through instant messaging. Unlike in chatgroups, where participants come together with shared interests, there is no pressure to conform in language here.

Applied perspective
The applied perspective views the linguistic exploitation of the Internet in terms of its communicative capabilities, both good and bad. The Internet provides a platform where users can experience multilingualism. Although English is still the dominant language used on the Internet, other languages are gradually increasing in their number of users. The Global Internet usage page provides some information on the number of Internet users by language, nationality and geography. This multilingual environment continues to increase in diversity as more language communities become connected to the Internet. The Internet is thus a platform where minority and endangered languages can seek to revive their use and create awareness, as it provides these languages opportunities for progress in two important regards: language documentation and language revitalization.

Language documentation
Firstly, the Internet facilitates language documentation. Digital archives of media such as audio and video recordings not only help to preserve language documentation, but also allow for global dissemination through the Internet. Publicity about endangered languages has helped to spur worldwide interest in linguistic documentation.

Foundations such as the Hans Rausing Endangered Languages Project (HRELP), funded by Arcadia, also help to develop interest in linguistic documentation. The HRELP is a project that seeks to document endangered languages, and to preserve and disseminate documentation materials, among other goals. The materials gathered are made available online under its Endangered Languages Archive (ELAR) program.

Other online materials that support language documentation include the Language Archive Newsletter, which provides news and articles about topics in endangered languages. The web version of Ethnologue also provides brief information on all of the world's known living languages. By making resources and information about endangered languages and language documentation available on the Internet, researchers can build on these materials and hence help preserve endangered languages.

Language revitalization
Secondly, the Internet facilitates language revitalization. Throughout the years, the digital environment has developed in various sophisticated ways that allow for virtual contact. From e-mail and chat to instant messaging, these virtual environments have helped to bridge the spatial distance between communicators. The use of e-mail has been adopted in language courses to encourage students to communicate in various styles, such as conference-type formats, and to generate discussions.
Similarly, the use of e-mail facilitates language revitalization in the sense that speakers of a minority language who have moved to a location where their native language is not spoken can take advantage of the Internet to communicate with their family and friends, thus maintaining the use of their native language. With the development and increasing use of broadband telephony such as Skype, language revitalization through the Internet is no longer restricted to literate users.

Hawaiian educators have been taking advantage of the Internet in their language revitalization programs. The graphical bulletin-board system Leoki (Powerful Voice) was established in 1994; its content, interface and menus are entirely in the Hawaiian language. It is installed throughout the immersion school system and includes components for e-mail, chat, a dictionary and an online newspaper, among others. In higher institutions such as colleges and universities, where the Leoki system is not yet installed, educators make use of other software and Internet tools, such as Daedalus Interchange, e-mail and the Web, to connect students of the Hawaiian language with the broader community.

Another use of the Internet is having students of minority languages write about their native cultures in their native languages for distant audiences. Also, in an attempt to preserve their language and culture, Occitan speakers have been taking advantage of the Internet to reach out to other Occitan speakers around the world. These methods provide reasons for using minority languages by communicating in them. In addition, the use of digital technologies, which the younger generation thinks of as "cool", appeals to them and in turn maintains their interest in, and usage of, their native languages.

Exploitation of the Internet
The Internet can also be exploited for activities such as terrorism, Internet fraud and Internet crimes against children. In recent years, there has been an increase in crimes involving the Internet, such as through e-mail and Internet Relay Chat (IRC), as it is relatively easy to remain anonymous. Such conspiracies carry concerns for security and protection. From a forensic-linguistic point of view, there are many potential areas to explore. While developing a chat-room child-protection procedure based on the filtering of search terms is effective, there is still minimal linguistically oriented literature to facilitate the task. In other areas, it is observed that the Semantic Web has been involved in tasks such as personal data protection, which helps to prevent fraud.

Dimensions
The dimensions covered in this section include looking at the Web as a corpus and issues of language identification and normalization. The impacts of Internet linguistics on everyday life are examined under the spread and influence of Internet stylistics, trends of language change on the Internet, and conversation discourse.

The Web as a corpus
With the Web being a huge reservoir of data and resources, language scientists and technologists are increasingly turning to it for language data. Corpora were first formally mentioned in the field of computational linguistics at the 1989 ACL meeting in Vancouver. They were met with much controversy, as they were held to lack theoretical integrity, which led to much skepticism about their role in the field, until the publication of "Using Large Corpora" in 1993, after which the relationship between computational linguistics and corpora became widely accepted.
To establish whether the Web is a corpus, it is worthwhile to turn to the definition established by McEnery and Wilson (1996, p. 21); relating more closely to the Web as a corpus, Manning and Schütze (1999, p. 120) further streamline the definition. Hit counts from carefully constructed search engine queries have been used to identify rank orders for word-sense frequencies, as an input to a word-sense disambiguation engine. This method was further explored with the introduction of the concept of parallel corpora, in which Web pages that exist in parallel in local and major languages are brought together. It has also been demonstrated that it is possible to build a language-specific corpus from a single document in that specific language.

Themes
There has been much discussion of possible developments in the arena of the Web as a corpus. The use of the web as a data source for word-sense disambiguation was brought forward in the EU MEANING project in 2002, which used the assumption that, within a domain, words often have a single meaning, and that domains are identifiable on the Web. This was further explored by using Web technology to gather manual word-sense annotations on the Word Expert website.

In areas of language modeling, the Web has been used to address data sparseness. Lexical statistics have been gathered for resolving prepositional-phrase attachments, and Web documents have been used to seek a balance in the corpus. In areas of information retrieval, a Web track was integrated as a component of the community's TREC evaluation initiative. The sample of the Web used for this exercise amounts to around 100 GB, comprising largely documents in the .gov top-level domain.

British National Corpus
The British National Corpus contains ample information on the dominant meanings and usage patterns of the 10,000 words that form the core of English. The number of words in the British National Corpus (about 100 million) is sufficient for many empirical strategies for learning about language, whether for linguists and lexicographers or for technologies that utilize quantitative information about the behavior of words as input (such as parsing). However, for some other purposes it is insufficient, as an outcome of the Zipfian nature of word frequencies: because the bulk of the lexical stock occurs fewer than 50 times in the British National Corpus, it is insufficient for statistically stable conclusions about such words. Furthermore, for some rarer words, rare meanings of common words, and combinations of words, no data have been found. Researchers find that probabilistic models of language based on very large quantities of data are better than ones based on estimates from smaller, cleaner data sets.

The multilingual Web
The Web was clearly perceived as a multilingual corpus as early as 2013. According to the Observatory of Linguistic and Cultural Diversity on the Internet, as of 2024 around 750 languages have digital codification, and amongst the estimated 50 billion webpages on 200 million active websites, around 20% are in English or Chinese, followed by Spanish (7.7%), Hindi (3.8%), Russian (3.7%), Arabic (3.7%), French (3.4%) and Portuguese (3.1%). The same source mentions figures for a "cybergeography of languages", regrouping data by language family and highlighting the fact that the Internet is the most multilingual realm that has ever existed.
While today more than 95% of people can use their first or second language to interact with the Internet, 90% of existing languages remain without a digital existence, mainly minority and endangered languages. Challenges In areas of language modeling, there are limitations on the applicability of any language model, as the statistics for different types of text will be different. When a language technology application is put into use (applied to a new text type), it is not certain that the language model will fare the same way as it did on the training corpus. It is found that there are substantial variations in model performance when the training corpus changes. This lack of a theory of text types limits the assessment of the usefulness of language-modeling work. As Web texts are easily produced (in terms of cost and time) and with many different authors working on them, there is often little concern for accuracy. Grammatical and typographical errors are regarded as “erroneous” forms that cause the Web to be a dirty corpus. Nonetheless, it may still be useful even with some noise. The issue of whether sublanguages should be included remains unsettled. Proponents of their inclusion argue that removing all sublanguages would result in an impoverished view of language. Since language is made up of lexicons, grammar and a wide array of different sublanguages, they should be included. However, it is only recently that including them has become a viable option. Striking a middle ground by including some sublanguages is contentious because the choice of which to include and which to exclude is arbitrary. The decision of what to include in a corpus lies with corpus developers, and it has been made pragmatically. The desiderata and criteria used for the British National Corpus serve as a good model for a general-purpose, general-language corpus, with the focus on being representative replaced with being balanced. Search engines such as Google serve as the default means of access to the Web and its wide array of linguistic resources. However, for linguists working in the field of corpora, they present a number of challenges. These include the limited number of instances returned by the search engines (1,000 or 5,000 maximum); insufficient context for each instance (Google provides a fragment of around ten words); results selected according to criteria that are distorted from a linguistic point of view, as search terms in titles and headings often occupy the top result slots; the inability to specify searches according to linguistic criteria, such as the citation form for a word or its word class; and the unreliability of statistics, with results varying according to search engine load and many other factors. At present, in view of the conflicting priorities among the different stakeholders, the best solution is for linguists to attempt to correct these problems themselves. This would then open up a large number of possibilities for harnessing the rich potential of the Web. Representation Despite the sheer size of the Web, it may still not be representative of all the languages and domains in the world, and neither are other corpora. However, the huge quantities of text, in numerous languages and language types and on a huge range of topics, make the Web a good starting point that opens up a large number of possibilities in the study of corpora.
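The hit-count approach to word-sense frequencies described earlier can be made concrete. The following minimal Python sketch is illustrative only: hit_count is a hypothetical placeholder for whatever search-engine API is available (real APIs, quotas and counts vary), and the collocate lists are invented for the example, not drawn from any published system.

def hit_count(query: str) -> int:
    """Return the approximate number of Web pages matching the query.

    Placeholder: wire this up to an actual search API in practice.
    """
    raise NotImplementedError

def rank_senses(word, sense_collocates):
    """Rank the senses of a word by the summed hit counts of queries
    pairing the word with sense-specific collocates."""
    scores = {
        sense: sum(hit_count(f'"{word}" "{c}"') for c in collocates)
        for sense, collocates in sense_collocates.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical usage for two senses of "bank":
# rank_senses("bank", {
#     "financial institution": ["loan", "deposit", "account"],
#     "river side": ["river", "shore", "erosion"],
# })

The resulting rank order plays the same role as the hit-count rankings described above: a cheap, Web-derived prior over word senses, subject to the caveats about search-engine statistics discussed under Challenges.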
Impact of its spread and influence Stylistics arising from Internet usage has spread beyond the new media into other areas and platforms, including but not limited to films, music and literary works. The infiltration of Internet stylistics is important because mass audiences are exposed to the works, reinforcing certain Internet-specific language styles which may not be acceptable in standard or more formal forms of language. Apart from internet slang, grammatical errors and typographical errors are features of writing on the Internet and other CMC channels. As users of the Internet get accustomed to these errors, they progressively infiltrate everyday language use, in both written and spoken forms. It is also common to witness such errors in mass media works, from typographical errors in news articles to grammatical errors in advertisements and even internet slang in drama dialogues. The more the internet is incorporated into daily life, the greater the impact it has on formal language. This is especially true in modern Language Arts classes through the use of smart phones, tablets, and social media. Students are exposed to the language of the internet more than ever, and as such, the grammatical structure and slang of the internet are bleeding into their formal writing. Full immersion in a language is always the best way to learn it. Mark Lester in his book Teaching Grammar and Usage states: “The biggest single problem that basic writers have in developing successful strategies for coping with errors is simply their lack of exposure to formal written English ... We would think it absurd to expect a student to master a foreign language without extensive exposure to it.” Since students are immersed in internet language, that is the form and structure they are mirroring. In addition, the rise of the Internet and people's overall immersion in it has brought forth a new wave of internet activism that has an impact on the public every day. Memes The origin of the term "meme" can be traced back to Richard Dawkins, an ethologist, who describes it as "a noun that conveys the idea of a unit of cultural transmission, or a unit of imitation". The term was later adapted to the realm of the Internet by David Beskow, Sumeet Kumar, and Kathleen Carley, who labeled Internet memes as "any digital unit that transfers culture". Shifman's definition of Internet memes also includes their status as "(a) a group of digital items sharing common characteristics of content, form, and/or stance, which (b) were created with awareness of each other, and (c) were circulated, imitated, and/or transformed via the Internet by many users." Mass media There have been instances of television advertisements using Internet slang, reinforcing the penetration of Internet stylistics into everyday language use. For example, in a Cingular advertisement in the United States, acronyms such as "BFF Jill" (which means "Best Friend Forever, Jill") were used. More businesses have adopted the use of Internet slang in their advertisements as more people grow up using the Internet and other CMC platforms, in an attempt to relate and connect with them better. Such advertisements have received relatively enthusiastic feedback from their audiences. The use of Internet lingo has also spread into the arena of music, significantly seen in popular music. A recent example is Trey Songz's lyrics for , which incorporated much Internet lingo and mentions of Twitter and texting.
The spread of Internet linguistics is also present in films made by both commercial and independent filmmakers. Though primarily screened at film festivals, DVDs of independent films are often available for purchase over the internet, including paid live streamings, making the films more easily accessible to the public. The very nature of commercial films being screened at public cinemas allows wide exposure to the mainstream mass audience, resulting in a faster and wider spread of Internet slang. A recent commercial film is titled "LOL" (an acronym for Laugh Out Loud or Laughing Out Loud), starring Miley Cyrus and Demi Moore. This movie is a 2011 remake of Lisa Azuelos's popular 2008 French film similarly titled "LOL (Laughing Out Loud)". The use of internet slang is not limited to the English language but extends to other languages as well. The Korean language has incorporated the English alphabet in the formation of its slang, while other slang terms were formed from common misspellings arising from fast typing. The new Korean slang is further reinforced and brought into everyday language use by television shows such as soap operas or comedy dramas like “High Kick Through the Roof”, released in 2009. Linguistic future of the Internet With the emergence of greater computer/Internet-mediated communication systems, coupled with the readiness with which people adapt to meet the new demands of a more technologically sophisticated world, it is expected that users will continue to remain under pressure to alter their language use to suit the new dimensions of communication. As the number of Internet users increases rapidly around the world, the cultural backgrounds, linguistic habits and language differences among users are brought into the Web at a much faster pace. These individual differences among Internet users are predicted to significantly impact the future of Internet linguistics, notably in the aspect of the multilingual web. As seen from 2000 to 2010, Internet penetration experienced its greatest growth in non-English-speaking countries such as China and India and countries in Africa, resulting in more languages apart from English penetrating the Web. Also, the interaction between English and other languages is predicted to be an important area of study. As global users interact with each other, possible references to different languages may continue to increase, resulting in the formation of new Internet stylistics that span across languages. The Chinese and Korean languages have already experienced the English language's infiltration, leading to the formation of their own multilingual Internet lingo. At present, the Internet provides a form of education and promotion for minority languages. However, similar to how cross-language interaction has resulted in the English language's infiltration into Chinese and Korean to form new slang, minority languages are also affected by the more common languages used on the Internet (such as English and Spanish). While language interaction can cause a loss in the authentic standard of minority languages, familiarity with the majority language can also affect minority languages in adverse ways. For example, users attempting to learn a minority language may opt to read and learn about it in a majority language and stop there, resulting in a loss rather than a gain of potential speakers of the minority language.
Also, speakers of minority languages may be encouraged to learn the more common languages that are being used on the Web in order to gain access to more resources, in turn leading to a decline in the use of their own languages. The future of endangered minority languages in view of the spread of the Internet remains to be seen. See also Appendix: Internet Slang Applied linguistics Enron Corpus, publicly available database of 600,000 emails within the Enron Corporation Glossary of Internet-related terminology Internetlinguistik (German) References Further reading Aitchison, J., & Lewis, D. M. (Eds.). (2003). New Media Language. London and New York: Routledge. Baron, N. S. (2000). Alphabet to Email: How Written English Evolved and Where It's Heading. London and New York: Routledge. Beard, A. (2004). Language Change. London and New York: Routledge. Biewer, C., Nesselhauf, N., & Hundt, M. (Eds.). (2006). Corpus Linguistics and the Web. The Netherlands: Rodopi. Boardman, M. (2005). The Language of Websites. New York and London: Routledge. Crystal, D. (2004). A Glossary of Netspeak and Textspeak. Edinburgh: Edinburgh University Press. Crystal, D. (2004). The Language Revolution (Themes for the 21st Century). United Kingdom: Polity Press Ltd. Crystal, D. (2006). Language and the Internet (2nd ed.). Cambridge: Cambridge University Press. Crystal, D. (2011). Internet Linguistics: A Student Guide. New York: Routledge. Dieter, J. (2007). Webliteralität: Lesen und Schreiben im World Wide Web. Enteen, J. (2010). Virtual English: Internet Use, Language, and Global Subjects. London and New York: Routledge. Gerrand, P. (2009). Minority Languages on the Internet: Promoting the Regional Languages of Spain. VDM Verlag. Gibbs, D., & Krause, K. (Eds.). (2006). Cyberlines 2.0: Languages and Cultures of the Internet. Australia: James Nicholas Publishers. Jenkins, J. (2003). World Englishes: A Resource Book for Students. London and New York: Routledge. Macfadyen, L. P., Roche, J., & Doff, S. (2005). Communicating Across Cultures in Cyberspace: A Bibliographical Review of Intercultural Communication Online. Lit Verlag. Pimienta, D. (2022). Internet and linguistic diversity: The cyber-geography of languages with the largest number of speakers. LinguaPax Review 2021, Language Technologies and Language Diversity. Pimienta, D., & Müller de Oliveira, G. (2022). "Cyber-geography of languages. Part 1: Method, results and focus on English"; "Part 2: The demographic factor and the growth of Asian languages and Arabic". International Review of Information Ethics, vol. 32, no. 1: Emerging Technologies and Changing Dynamics of Information (ETCDI) special issue. Thurlow, C., Lengel, L. B., & Tomic, A. (2004). Computer Mediated Communication: Social Interaction and the Internet. London: Sage Publications. Internet culture Natural language and computing Applied linguistics Sociolinguistics
Internet linguistics
[ "Technology" ]
7,881
[ "Natural language and computing" ]
13,459,016
https://en.wikipedia.org/wiki/Cognitive%20description
Cognitive description is a term used in psychology to describe the cognitive workings of the human mind. A cognitive description specifies what information is utilized during a cognitive action, how this information is processed and transformed, what data structures are used, and what behaviour is generated. Cognitive description, a fundamental concept in cognitive science, refers to the elucidation of the processes and mechanisms underlying cognitive actions. It specifies the nature of information utilized, the processes of transforming this information, the data structures involved, and the resulting behaviour. This domain is interdisciplinary, intertwining psychology, neuroscience, linguistics, and computer science. Definition and Core Aspects Cognitive description concerns itself with detailing how cognitive actions are executed from start to finish. It addresses several key aspects: Information Utilization: This involves identifying what specific information is required and accessed during a cognitive action, such as sensory data or memories. Information Processing and Transformation: Here, the focus is on how information is processed — the mental algorithms and operations applied to transform the input information. Data Structures: This relates to the internal cognitive structures, such as schemas and mental models, that organize and store information. Generated Behaviour: Finally, cognitive description explains the behaviour that results from these processes, including decision-making, problem-solving, and physical actions. Significance in Cognitive Science The significance of cognitive descriptions lies in their ability to offer a structured, detailed analysis of mental operations. This analysis is instrumental in formulating theories about the human mind and its functioning. Additionally, it provides a framework for designing and interpreting cognitive research experiments. Applications and Real-World Relevance Cognitive descriptions have practical applications across various fields: Education: They aid in developing teaching methods that align with how information is processed and understood. Artificial Intelligence: Insights from cognitive descriptions inform the development of AI algorithms that mimic human cognitive processes. Clinical Psychology: They are crucial in diagnosing and treating cognitive impairments and understanding mental health disorders. Future Directions Future advancements in cognitive description are expected to integrate more deeply with neuroscience, linking cognitive processes with brain activities and structures. There is also a growing emphasis on understanding these processes in diverse cultural and developmental contexts. See also Cognitive module Cognition Cognition Disorder References Behavioural sciences Cognitive psychology Evolutionary psychology Ethology Semantics
Cognitive description
[ "Biology" ]
446
[ "Behavioural sciences", "Ethology", "Behavior", "Cognitive psychology" ]
13,459,280
https://en.wikipedia.org/wiki/Kernfysische%20Dienst
The Kernfysische dienst (Department of Nuclear Safety, Security and Safeguards) is the Dutch nuclear regulatory organisation. It is part of the Ministry of Economic Affairs (Netherlands). It is the legal supervisor of the nuclear reactors in Borssele, Petten, Dodewaard and Delft, as well as of other installations dealing with civil radioactive substances. The IAEA has raised as an issue of concern that the nuclear regulator is part of the same governmental agency that is also in charge of stimulating nuclear power. References Nuclear regulatory organizations
Kernfysische Dienst
[ "Engineering" ]
117
[ "Nuclear regulatory organizations", "Nuclear organizations" ]
13,459,326
https://en.wikipedia.org/wiki/International%20Nuclear%20Regulators%27%20Association
The International Nuclear Regulators' Association (INRA) was established in January 1997 and is an association of the most senior officials of the nuclear regulatory authorities of the following countries: Canada: Canadian Nuclear Safety Commission France: Autorité de sûreté nucléaire Germany: Japan: Japanese Atomic Energy Commission Republic of Korea: Nuclear Safety and Security Commission Spain: Nuclear Safety Council (Spain) Sweden: Swedish Radiation Safety Authority United Kingdom: Office for Nuclear Regulation United States: Nuclear Regulatory Commission The main purpose of the association is to influence and enhance nuclear safety, from the regulatory perspective, among its members and worldwide. Other international nuclear organizations include the International Atomic Energy Agency and the Nuclear Energy Agency. Notable people Laurence Williams, Chairman from 2000 to 2002 References Nuclear regulatory organizations
International Nuclear Regulators' Association
[ "Engineering" ]
149
[ "Nuclear regulatory organizations", "Nuclear organizations" ]
13,459,707
https://en.wikipedia.org/wiki/Aminocoumarin
Aminocoumarin is a class of antibiotics that act by inhibiting the DNA gyrase enzyme involved in cell division in bacteria. They are derived from Streptomyces species, whose best-known representative – Streptomyces coelicolor – was completely sequenced in 2002. The aminocoumarin antibiotics include: Novobiocin, Albamycin (Pharmacia And Upjohn) Coumermycin Clorobiocin Structure The core of aminocoumarin antibiotics is made up of a 3-amino-4,7-dihydroxycoumarin ring, which is linked, for example, with a sugar at the 7-position and a benzoic acid derivative at the 3-position. Clorobiocin is a natural antibiotic isolated from several Streptomyces strains and differs from novobiocin in that the methyl group at the 8 position in the coumarin ring of novobiocin is replaced by a chlorine atom, and the carbamoyl at the 3' position of the noviose sugar is substituted by a 5-methyl-2-pyrrolylcarbonyl group. Mechanism of action The aminocoumarin antibiotics are known inhibitors of DNA gyrase. Antibiotics of the aminocoumarin family exert their therapeutic activity by binding tightly to the B subunit of bacterial DNA gyrase, thereby inhibiting this essential enzyme. They compete with ATP for binding to the B subunit of this enzyme and inhibit the ATP-dependent DNA supercoiling catalysed by gyrase. X-ray crystallography studies have confirmed binding at the ATP-binding site located on the gyrB subunit of DNA gyrase. Their affinity for gyrase is considerably higher than that of modern fluoroquinolones, which also target DNA gyrase but at the gyrA subunit. Resistance Resistance to this class of antibiotics usually results from genetic mutation in the gyrB subunit. Other mechanisms include de novo synthesis of a coumarin-resistant gyrase B subunit by the novobiocin producer S. sphaeroides. Clinical use The clinical use of this antibiotic class has been restricted due to its low water solubility, low activity against gram-negative bacteria, and in vivo toxicity. References Antibiotics Coumarin drugs
Aminocoumarin
[ "Biology" ]
484
[ "Antibiotics", "Biocides", "Biotechnology products" ]
13,459,781
https://en.wikipedia.org/wiki/Micellar%20solution
In colloid science, a micellar solution consists of a dispersion of micelles (small particles) in a solvent (most usually water). Micelles are made of chemicals, known as amphiphiles, that are attracted to both water and oily solvents. In a micellar solution, some amphiphiles are clumped together and some are dispersed. Micellar solutions form when the concentration of amphiphile exceeds the critical micelle concentration (CMC) or critical aggregation concentration (CAC), which is when there are enough amphiphiles in the solution to clump together to form micelles. Micellar solutions persist until the amphiphile concentration becomes sufficiently high to form a lyotropic liquid crystal phase. Although micelles are often depicted as being spherical, they can be cylindrical or oblate depending on the chemical structure of the amphiphile. Micellar solutions are isotropic phases. History Micellar water originated in France, with its use in skincare dating back to 1913. Its popularity boomed internationally when the French pharmaceutical company Bioderma released its product Sensibio H2O micellar water in 1991, which is said to be sold every two seconds worldwide today. Commercial uses Micellar water is used to remove makeup and oil from the face. References Colloidal chemistry
Micellar solution
[ "Chemistry" ]
272
[ "Colloidal chemistry", "Surface science", "Colloids" ]
13,459,955
https://en.wikipedia.org/wiki/Micellar%20cubic
A micellar cubic phase is a lyotropic liquid crystal phase formed when the concentration of micelles dispersed in a solvent (usually water) is sufficiently high that they are forced to pack into a structure having long-ranged positional (translational) order. For example, spherical micelles may adopt the cubic packing of a body-centered cubic lattice. Normal topology micellar cubic phases, denoted by the symbol I1, are the first lyotropic liquid crystalline phases that are formed by type I amphiphiles. The amphiphiles' hydrocarbon tails are contained on the inside of the micelle and hence the polar-apolar interface of the aggregates has a positive mean curvature, by definition (it curves away from the polar phase). The first pure surfactant system found to exhibit three different type I (oil-in-water) micellar cubic phases was the dodecaoxyethylene mono-n-dodecyl ether (C12EO12)/water system. Inverse topology micellar cubic phases (such as the Fd3m phase) are observed for some type II amphiphiles at very high amphiphile concentrations. These aggregates, in which water is the minority phase, have a polar-apolar interface with a negative mean curvature. The structures of the normal topology micellar cubic phases that are formed by some types of amphiphiles (e.g. the oligoethyleneoxide monoalkyl ether series of non-ionic surfactants) are the subject of debate. Micellar cubic phases are isotropic phases but are distinguished from micellar solutions by their very high viscosity. When thin film samples of micellar cubic phases are viewed under a polarising microscope they appear dark and featureless. Small air bubbles trapped in these preparations tend to appear highly distorted and occasionally have faceted surfaces. A reversed micellar cubic phase has been observed, although it is much less common. A reverse micellar cubic phase with Fd3m (Q227) symmetry was observed to form in a ternary system of an amphiphilic diblock copolymer (EO17BO10, where EO represents ethylene oxide and BO represents butylene oxide), water, and p-xylene. References Phases of matter Liquid crystals
Micellar cubic
[ "Physics", "Chemistry" ]
482
[ "Phases of matter", "Matter" ]
13,460,581
https://en.wikipedia.org/wiki/Hexagonal%20phase
A hexagonal phase of lyotropic liquid crystal is formed by some amphiphilic molecules when they are mixed with water or another polar solvent. In this phase, the amphiphile molecules are aggregated into cylindrical structures of indefinite length and these cylindrical aggregates are disposed on a hexagonal lattice, giving the phase long-range orientational order. In normal topology hexagonal phases, which are formed by type I amphiphiles, the hydrocarbon chains are contained within the cylindrical aggregates such that the polar-apolar interface has a positive mean curvature. Inverse topology hexagonal phases have water within the cylindrical aggregates and the hydrocarbon chains fill the voids between the hexagonally packed cylinders. Normal topology hexagonal phases are denoted by HI while inverse topology hexagonal phases are denoted by HII. When viewed by polarization microscopy, thin films of both normal and inverse topology hexagonal phases exhibit birefringence, giving rise to characteristic optical textures. Typically, these textures are smoke-like, fan-like or mosaic in appearance. The phases are highly viscous and small air bubbles trapped within the preparation have highly distorted shapes. The sizes and shapes of lamellar, micellar and hexagonal phases arising from lipid bilayer phase behavior and mixed lipid polymorphism in aqueous dispersions can also be easily identified and characterized by negative-staining transmission electron microscopy. See also Lamellar phase Lipid polymorphism Micelle References Surfactants Liquid crystals Colloidal chemistry Biophysics
Hexagonal phase
[ "Physics", "Chemistry", "Biology" ]
316
[ "Colloidal chemistry", "Applied and interdisciplinary physics", "Colloids", "Surface science", "Biophysics" ]
13,460,646
https://en.wikipedia.org/wiki/Dimaprit
Dimaprit is a histamine analog working as a selective H2 histamine receptor agonist. References Biogenic amines Amidines Thioethers
Dimaprit
[ "Chemistry" ]
35
[ "Biomolecules by chemical classification", "Amidines", "Biogenic amines", "Functional groups", "Bases (chemistry)" ]
13,460,657
https://en.wikipedia.org/wiki/New%20York%20v.%20United%20States
New York v. United States, 505 U.S. 144 (1992), was a decision of the United States Supreme Court. Justice Sandra Day O'Connor, writing for the majority, found that the federal government may not require states to “take title” to radioactive waste through the "Take Title" provision of the Low-Level Radioactive Waste Policy Amendments Act, which the Court found to exceed Congress's power under the Commerce Clause. The Court permitted the federal government to induce shifts in state waste policy through other means. Background The Low-Level Radioactive Waste Policy Amendments Act was an attempt to imbue a negotiated agreement of states with federal incentives for compliance. The problem of what to do with radioactive waste was a national issue, complicated by the political reluctance of the states to deal with the problem individually. New York was a willing participant in the compromise. After the Act was passed, it announced locations in the counties of Allegany and Cortland as potential places for waste storage. Public opposition in both counties was immediate and very determined, and it eventually helped motivate New York to challenge the law. Decision The Act provided three "incentives" for states to comply with the agreement. The first two incentives were held constitutional. The first incentive allowed states to collect gradually increasing surcharges for waste that was received from other states. The Secretary of Energy would then collect a portion of the income and redistribute it to reward states that achieved a series of milestones in waste disposal. That was held to be within Congress's power under the Taxing and Spending Clause, an "unexceptionable" exercise of that power. The second incentive, the "access" incentive, allowed states to reprimand other states that missed certain deadlines by raising surcharges or eventually denying access to disposal at their facilities completely. That was held to be a permitted exercise of Congress's power under the Commerce Clause. The third incentive, requiring states to "take title" and assume liability for waste generated within their borders if they failed to comply, was held to be impermissibly coercive and a threat to state sovereignty, thereby violating the Tenth Amendment. After noting the constitutionality of the first two incentives, Justice O'Connor characterized the "take title" incentive as an attempt to "commandeer" the state governments by directly compelling them to participate in the federal regulatory program. The federal government "crossed the line distinguishing encouragement from coercion." The distinction was that, with respect to the "take title" provision, states had to choose between conforming to federal regulations or taking title to the waste. Since Congress cannot directly force states to legislate according to its scheme, and since Congress likewise cannot force them to take title to radioactive waste, O'Connor reasoned that Congress cannot force states to choose between the two. Such coercion would be counter to the federalist structure of government, in which a "core of state sovereignty" is enshrined in the Tenth Amendment. The Court found the "take title" provision to be severable and, noting the seriousness of the "pressing national problem" being addressed, allowed the remainder of the Act to survive. Dissenting opinion Justice White wrote a dissenting opinion that was joined by Justices Blackmun and Stevens.
White stressed that the Act was a product of "cooperative federalism," as the states "bargained among themselves to achieve compromises for Congress to sanction." Noting that Congress can directly regulate radioactive waste, as opposed to "compelling state legislatures" to regulate according to their scheme, he said that the "ultimate irony of the decision today is that in its formalistically rigid obeisance to 'federalism,' the Court gives Congress fewer incentives to defer to the wishes of state officials in achieving local solutions to local problems." See also List of United States Supreme Court cases, volume 505 List of United States Supreme Court cases Lists of United States Supreme Court cases by volume List of United States Supreme Court cases by the Rehnquist Court References External links United States Constitution Article One case law United States Tenth Amendment case law United States Supreme Court cases United States Supreme Court cases of the Rehnquist Court United States Commerce Clause case law 1992 in the environment 1992 in United States case law Radioactive waste Energy in New York (state)
New York v. United States
[ "Chemistry", "Technology" ]
875
[ "Radioactive waste", "Environmental impact of nuclear power", "Radioactivity", "Hazardous waste" ]
13,460,892
https://en.wikipedia.org/wiki/Operation%20Sandcastle
Operation Sandcastle was a United Kingdom non-combat military operation conducted between 1955 and 1956. Its purpose was to dispose of chemical weapons by dumping them in the sea. Background The British possessed almost 71,000 air-dropped bombs of 250 kilograms in weight, each of which was filled with tabun. These had been seized from German ammunition dumps during the final months of World War II. A total of 250,000 tons of German chemical weapons had been discovered, the majority of which were destroyed because they comprised warfare agents which the allies already possessed in great abundance, e.g. mustard gas, at sites such as RAF Bowes Moor. However, the stocks of tabun and sarin were considered more valuable because the allies did not possess nerve agent technology at that time. As a result, captured stocks of German nerve agents were divided between Britain and the United States after discussion, with the Americans taking the sarin. The British transferred their 14,000 tons of ordnance containing tabun in October 1945, via Hamburg and Newport, to temporary storage at the RAF strategic reserve ammunition store at Llanberis. Longer term facilities were prepared at RAF Llandwrog where the bombs were to be stored in stacks, out in the open, on the runways of the disused airfield. The intention was that any leaks of nerve agent would be dispersed by the prevailing winds. The bombs were transported to Llandwrog by truck from August 1946 to July 1947. In July 1947 it was discovered that the bombs were fuzed and a number of them were leaking nerve agent. The fact that the bombs had fuzes inserted meant that they were inherently unsafe: to reduce the risk of accidental detonation, standard practice is to avoid installing the fuze in any air-dropped bomb until shortly before it is loaded onto an aircraft to be used in combat. For similar reasons bomb fuzes are always stored separately, well away from bombs. This was not the case with the 250 kilogram tabun bombs at RAF Llandwrog. Not only had the bombs been left with fuzes inserted for a considerable amount of time (possibly years), but they were also left exposed to the elements creating a corrosion risk, together with the inevitable temperature fluctuations which resulted from changing weather. None of these factors was accepted practice regarding the safe, long-term storage of bomb fuzes or explosive ordnance in general. At a rate of 500 bombs a week they were defuzed and individually coated in a waxy preservative to seal them. Seventy-two irreparable devices were neutralised on-site by being drained into individual pits filled with caustic soda crystals. Despite being given a preservative covering, the bombs continued to suffer in the damp Welsh climate, and in 1951 twenty-one Bellman hangars were erected on the site to store the bombs. Finally in June 1954 it was decided to dispose of the entire stock because by then it was recognised that not only did the weapons have no military value but they had actually become a liability, which could only become worse as time passed. Logistics Operation Sandcastle was divided into two sections: a sea voyage to Cairnryan and then a transfer to suitable hulks there for later sinking north-west of Ireland beyond the continental shelf. It was intended to process 16,000 bombs in the first attempt in mid-1955. The work began with the construction of a road between Llandwrog and the nearby port of Fort Belan where six tank landing craft were assembled.
Loading trials in June indicated that only 400 bombs could be loaded on each craft, fewer than hoped. It was then decided to remove the tail-fins from the bombs to reduce their length, and to pack them in new boxes. This work increased each craft's load to 800 bombs and by mid-July all 16,000 devices had been safely carried to Cairnryan. Disposal at sea The SS Empire Claire was the first scuttling ship. Its loading began in late June, and by 23 July all 16,000 bombs were aboard, although an ill-considered loading plan had given it a noticeable list to starboard. The three scuttling charges of TNT were positioned to ensure its sinking would be steady and flat, and the nine-man crew embarked. Departure was delayed by industrial action on the Firth of Clyde, which prevented the departure of the ocean-going tugboat Forester. On 25 July 1955 the SS Empire Claire, SS Forester, and navy escorts Mull and Sir Walter Campbell left Cairnryan. The Empire Claire soon broke down and was taken under tow. They reached the scuttling point () in the early morning of 27 July, but waited until 10:00am for the arrival of an RAF photo-reconnaissance aircraft to observe the operation. The initial two scuttling charges blew and dramatically increased the vessel's starboard list, forcing the use of the emergency charge to open its stern and cause it to sink rapidly, bows up, to a depth of around . The later sinkings went without any problems. MV Vogtland was scuttled on 30 May 1956 at the same site, taking 28,737 bombs with it, and on 21 July 1956 the SS Kotka was sunk (at ) with 26,000 bombs, 330 tons of arsenic compounds, and three tons of toxic seed dressings. References Bibliography Bless 'em all - aspects of the war in North West Wales, Reg Chambers Jones, Bridge Books, The Tale of Tabun - Nazi chemical weapons in North Wales, Roy Sloan, Carreg Gwalch, 1955 in the United Kingdom 1956 in the United Kingdom 1950s in the United Kingdom 20th-century military history of the United Kingdom 1955 in military history 1956 in military history Chemical weapons demilitarization Military operations involving chemical weapons Sandcastle Ocean pollution United Kingdom chemical weapons program
Operation Sandcastle
[ "Chemistry", "Environmental_science" ]
1,171
[ "Ocean pollution", "Chemical weapons", "Chemical weapons demilitarization", "Water pollution", "Military operations involving chemical weapons" ]
13,461,905
https://en.wikipedia.org/wiki/Oil%20Companies%20International%20Marine%20Forum
Oil Companies International Marine Forum (OCIMF) is a voluntary association of oil companies having an interest in the shipment and terminalling of crude oil, oil products, petrochemicals and gas, and includes companies engaged in offshore marine operations supporting oil and gas exploration, development and production. OCIMF's aim is to ensure that the global marine industry causes no harm to people or the environment. OCIMF's mission is to lead the global marine industry in the promotion of safe and environmentally responsible transportation of crude oil, oil products, petrochemicals and gas, and to drive the same values in the management of related offshore marine operations. This is to be done by developing best practices in the design, construction and safe operation of tankers, barges and offshore vessels and their interfaces with terminals, and by considering human factors in everything done. History OCIMF was formed at a meeting in London on 8 April 1970. It was initially the oil industry's response to increasing public awareness of marine pollution, particularly by oil, after the Torrey Canyon incident. Governments had reacted to this incident by debating the development of international conventions and national legislation, and the oil industry sought to play its part by making its professional expertise available and its views known to governmental and inter-governmental bodies. The role of OCIMF has broadened over the intervening period. Most recently the organisation has contributed to the EU discussion on tanker safety and the draft EU Directive on Environmental Liability, and has provided support to the European Union (EU) and the International Maritime Organization (IMO) debate on the accelerated phasing out of single-hull tankers and on the carriage of heavy grades of oil. OCIMF was incorporated in Bermuda in 1977 and a branch office was established in London, primarily to maintain contact with the IMO. Organisation OCIMF has 110 members. OCIMF's committee structure comprises the Executive Committee at its head and four senior standing committees with the power to establish sub-committees or forums as necessary. The Executive Committee is the senior policymaking committee of OCIMF. The membership of the Executive Committee is limited to a maximum of 15 members plus the Chairman and Vice Chairmen, who are ex officio members. Members of the Executive Committee are elected at the Annual General Meeting. The present chairman is Mark Ross from Chevron Shipping Company. A full-time Director, currently Rob Drysdale from ExxonMobil, is in charge of a small permanent Secretariat located in London. This Secretariat comprises full-time employees and technical staff seconded from member companies. The work of OCIMF is carried out through four main committees: the General Purposes Committee (GPC), the Ports and Terminals Committee, the Offshore Marine Committee and the Legal Committee, together with sub-committees, forums, work groups and task forces composed of members' representatives and assisted by the Secretariat. Publications OCIMF produces industry guidance for oil tankers and oil terminals, including the leading industry title 'International Safety Guide for Oil Tankers and Terminals' (the 6th edition was published in 2020). OCIMF, along with the Society of International Gas Tanker and Terminal Operators (SIGTTO), developed the Jetty Maintenance and Inspection Guide (JMIG) to provide guidelines for effective maintenance of oil and liquefied gas terminal jetty equipment.
References External links Official website Tanker shipping companies Petroleum industry Energy business associations
Oil Companies International Marine Forum
[ "Chemistry" ]
688
[ "Chemical process engineering", "Petroleum", "Petroleum industry" ]
13,461,936
https://en.wikipedia.org/wiki/Timoshenko%E2%80%93Ehrenfest%20beam%20theory
The Timoshenko–Ehrenfest beam theory was developed by Stephen Timoshenko and Paul Ehrenfest early in the 20th century. The model takes into account shear deformation and rotational bending effects, making it suitable for describing the behaviour of thick beams, sandwich composite beams, or beams subject to high-frequency excitation when the wavelength approaches the thickness of the beam. The resulting equation is of fourth order but, unlike Euler–Bernoulli beam theory, there is also a second-order partial derivative present. Physically, taking into account the added mechanisms of deformation effectively lowers the stiffness of the beam, and the result is a larger deflection under a static load and lower predicted eigenfrequencies for a given set of boundary conditions. The latter effect is more noticeable for higher frequencies as the wavelength becomes shorter (in principle comparable to the height of the beam or shorter), and thus the distance between opposing shear forces decreases. The rotary inertia effect was introduced by Bresse and Rayleigh. If the shear modulus of the beam material approaches infinity—and thus the beam becomes rigid in shear—and if rotational inertia effects are neglected, Timoshenko beam theory converges towards Euler–Bernoulli beam theory. Quasistatic Timoshenko beam In static Timoshenko beam theory without axial effects, the displacements of the beam are assumed to be given by $u_x(x,y,z) = -z\varphi(x)$, $u_y(x,y,z) = 0$, $u_z(x,y,z) = w(x)$, where $(x,y,z)$ are the coordinates of a point in the beam, $u_x, u_y, u_z$ are the components of the displacement vector in the three coordinate directions, $\varphi$ is the angle of rotation of the normal to the mid-surface of the beam, and $w$ is the displacement of the mid-surface in the $z$-direction. The governing equations are the following coupled system of ordinary differential equations: $$\frac{\mathrm{d}^2}{\mathrm{d}x^2}\left(EI\frac{\mathrm{d}\varphi}{\mathrm{d}x}\right) = q(x), \qquad \frac{\mathrm{d}w}{\mathrm{d}x} = \varphi - \frac{1}{\kappa AG}\frac{\mathrm{d}}{\mathrm{d}x}\left(EI\frac{\mathrm{d}\varphi}{\mathrm{d}x}\right).$$ The Timoshenko beam theory for the static case is equivalent to the Euler–Bernoulli theory when the last term above is neglected, an approximation that is valid when $\frac{3EI}{\kappa L^2 AG} \ll 1$, where $L$ is the length of the beam, $A$ is the cross section area, $E$ is the elastic modulus, $G$ is the shear modulus, $I$ is the second moment of area, $\kappa$, called the Timoshenko shear coefficient, depends on the geometry (normally, $\kappa = 5/6$ for a rectangular section), $q(x)$ is a distributed load (force per length), $w$ is the displacement of the mid-surface in the $z$-direction, and $\varphi$ is the angle of rotation of the normal to the mid-surface of the beam. Combining the two equations gives, for a homogeneous beam of constant cross-section, $$EI\frac{\mathrm{d}^4 w}{\mathrm{d}x^4} = q(x) - \frac{EI}{\kappa AG}\frac{\mathrm{d}^2 q}{\mathrm{d}x^2}.$$ The bending moment $M_{xx}$ and the shear force $Q_x$ in the beam are related to the displacement $w$ and the rotation $\varphi$.
These relations, for a linear elastic Timoshenko beam, are: $$M_{xx} = -EI\frac{\partial \varphi}{\partial x}, \qquad Q_x = \kappa AG\left(-\varphi + \frac{\partial w}{\partial x}\right).$$ Derivation of quasistatic Timoshenko beam equations: From the kinematic assumptions for a Timoshenko beam, the displacements of the beam are given by $u_x = -z\varphi(x)$, $u_y = 0$, $u_z = w(x)$. Then, from the strain-displacement relations for small strains, the non-zero strains based on the Timoshenko assumptions are $\varepsilon_{xx} = -z\,\frac{\mathrm{d}\varphi}{\mathrm{d}x}$ and $\gamma_{xz} = \frac{\mathrm{d}w}{\mathrm{d}x} - \varphi$. Since the actual shear strain in the beam is not constant over the cross section, we introduce a correction factor $\kappa$ such that $\gamma_{xz} = \kappa\left(\frac{\mathrm{d}w}{\mathrm{d}x} - \varphi\right)$. The variation in the internal energy of the beam is Define Then Integration by parts, and noting that because of the boundary conditions the variations are zero at the ends of the beam, leads to The variation in the external work done on the beam by a transverse load per unit length is Then, for a quasistatic beam, the principle of virtual work gives The governing equations for the beam are, from the fundamental theorem of variational calculus, For a linear elastic beam Therefore the governing equations for the beam may be expressed as Combining the two equations together gives Boundary conditions The two equations that describe the deformation of a Timoshenko beam have to be augmented with boundary conditions if they are to be solved. Four boundary conditions are needed for the problem to be well-posed. Typical boundary conditions are: Simply supported beams: The displacement $w$ is zero at the locations of the two supports. The bending moment $M_{xx}$ applied to the beam also has to be specified. The rotation $\varphi$ and the transverse shear force $Q_x$ are not specified. Clamped beams: The displacement $w$ and the rotation $\varphi$ are specified to be zero at the clamped end. If one end is free, shear force and bending moment have to be specified at that end. Strain energy of a Timoshenko beam The strain energy of a Timoshenko beam is expressed as a sum of the strain energy due to bending and that due to shear. Both these components are quadratic in their variables. The strain energy function of a Timoshenko beam can be written as $$U = \frac{1}{2}\int_0^L\left[EI\left(\frac{\mathrm{d}\varphi}{\mathrm{d}x}\right)^2 + \kappa AG\left(\frac{\mathrm{d}w}{\mathrm{d}x} - \varphi\right)^2\right]\mathrm{d}x.$$ Example: Cantilever beam For a cantilever beam, one boundary is clamped while the other is free. Let us use a right-handed coordinate system where the $x$ direction is positive towards the right and the $z$ direction is positive upward. Following normal convention, we assume that positive forces act in the positive directions of the $x$ and $z$ axes and positive moments act in the clockwise direction. We also assume that the sign convention of the stress resultants ($M_{xx}$ and $Q_x$) is such that positive bending moments compress the material at the bottom of the beam (lower $z$ coordinates) and positive shear forces rotate the beam in a counterclockwise direction. Let us assume that the clamped end is at and the free end is at .
If a point load is applied to the free end in the positive direction, a free body diagram of the beam gives us and Therefore, from the expressions for the bending moment and shear force, we have Integration of the first equation, and application of the boundary condition at , leads to The second equation can then be written as Integration and application of the boundary condition at gives The axial stress is given by Dynamic Timoshenko beam In Timoshenko beam theory without axial effects, the displacements of the beam are assumed to be given by $u_x(x,y,z,t) = -z\varphi(x,t)$, $u_y(x,y,z,t) = 0$, $u_z(x,y,z,t) = w(x,t)$, where $(x,y,z)$ are the coordinates of a point in the beam, $u_x, u_y, u_z$ are the components of the displacement vector in the three coordinate directions, $\varphi$ is the angle of rotation of the normal to the mid-surface of the beam, and $w$ is the displacement of the mid-surface in the $z$-direction. Starting from the above assumption, the Timoshenko beam theory, allowing for vibrations, may be described with the coupled linear partial differential equations: $$\rho A\frac{\partial^2 w}{\partial t^2} - q(x,t) = \frac{\partial}{\partial x}\left[\kappa AG\left(\frac{\partial w}{\partial x} - \varphi\right)\right],$$ $$\rho I\frac{\partial^2 \varphi}{\partial t^2} = \frac{\partial}{\partial x}\left(EI\frac{\partial \varphi}{\partial x}\right) + \kappa AG\left(\frac{\partial w}{\partial x} - \varphi\right),$$ where the dependent variables are $w(x,t)$, the translational displacement of the beam, and $\varphi(x,t)$, the angular displacement. Note that unlike the Euler–Bernoulli theory, the angular deflection is another variable and not approximated by the slope of the deflection. Also, $\rho$ is the density of the beam material (but not the linear density), $A$ is the cross section area, $E$ is the elastic modulus, $G$ is the shear modulus, $I$ is the second moment of area, $\kappa$, called the Timoshenko shear coefficient, depends on the geometry (normally, $\kappa = 5/6$ for a rectangular section), $q(x,t)$ is a distributed load (force per length), $w$ is the displacement of the mid-surface in the $z$-direction, and $\varphi$ is the angle of rotation of the normal to the mid-surface of the beam. These parameters are not necessarily constants. For a linear elastic, isotropic, homogeneous beam of constant cross-section these two equations can be combined to give $$EI\frac{\partial^4 w}{\partial x^4} + \rho A\frac{\partial^2 w}{\partial t^2} - \rho I\left(1 + \frac{E}{\kappa G}\right)\frac{\partial^4 w}{\partial x^2\,\partial t^2} + \frac{\rho^2 I}{\kappa G}\frac{\partial^4 w}{\partial t^4} = q(x,t) + \frac{\rho I}{\kappa AG}\frac{\partial^2 q}{\partial t^2} - \frac{EI}{\kappa AG}\frac{\partial^2 q}{\partial x^2}.$$ Derivation of combined Timoshenko beam equation: The equations governing the bending of a homogeneous Timoshenko beam of constant cross-section are From equation (1), assuming appropriate smoothness, we have Differentiating equation (2) gives Substituting equations (3), (4) and (5) into equation (6) and rearranging, we get However, it can easily be shown that this equation is incorrect. Consider the case where q is constant and does not depend on x or t; combined with the presence of a small damping, all time derivatives will go to zero when t goes to infinity. The shear terms are not present in this situation, resulting in the Euler–Bernoulli beam theory, where shear deformation is neglected. The Timoshenko equation predicts a critical frequency $\omega_C = 2\pi f_c = \sqrt{\frac{\kappa GA}{\rho I}}$. For normal modes the Timoshenko equation can be solved. Being a fourth order equation, there are four independent solutions, two oscillatory and two evanescent for frequencies below $f_c$. For frequencies larger than $f_c$ all solutions are oscillatory and, as a consequence, a second spectrum appears. Axial effects If the displacements of the beam are given by $u_x(x,y,z,t) = u_0(x,t) - z\varphi(x,t)$, $u_y(x,y,z,t) = 0$, $u_z(x,y,z,t) = w(x,t)$, where $u_0$ is an additional displacement in the $x$-direction, then the governing equations of a Timoshenko beam take the form where and is an externally applied axial force. Any external axial force is balanced by the stress resultant $$N_{xx}(x,t) = \int_{-h}^{h} \sigma_{xx}\,\mathrm{d}z,$$ where $\sigma_{xx}$ is the axial stress and the thickness of the beam has been assumed to be $2h$.
The combined beam equation with axial force effects included is Damping If, in addition to axial forces, we assume a damping force that is proportional to the velocity with the form $\eta(x)\,\frac{\partial w}{\partial t}$, the coupled governing equations for a Timoshenko beam take the form and the combined equation becomes A caveat to this Ansatz damping force (resembling viscosity) is that, whereas viscosity leads to a frequency-dependent and amplitude-independent damping rate of beam oscillations, the empirically measured damping rates are frequency-insensitive, but depend on the amplitude of beam deflection. Shear coefficient Determining the shear coefficient is not straightforward (nor are the determined values widely accepted, i.e. there is more than one answer); generally it must satisfy: . The shear coefficient depends on Poisson's ratio. Attempts to provide precise expressions were made by many scientists, including Stephen Timoshenko, Raymond D. Mindlin, G. R. Cowper, N. G. Stephen, J. R. Hutchinson etc. (see also the derivation of the Timoshenko beam theory as a refined beam theory based on the variational-asymptotic method in the book by Khanh C. Le, leading to different shear coefficients in the static and dynamic cases). In engineering practice, the expressions by Stephen Timoshenko are sufficient in most cases. In 1975 Kaneko published a review of studies of the shear coefficient. More recently, experimental data have shown that the shear coefficient is underestimated. Corrective shear coefficients for a homogeneous isotropic beam according to Cowper (selection), where $\nu$ is Poisson's ratio. See also Plate theory Sandwich theory References Beam theory Continuum mechanics Structural analysis
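The quasistatic theory lends itself to a quick numerical check. The following minimal Python sketch is illustrative only, not taken from this article: it uses the standard textbook tip deflection of an end-loaded rectangular cantilever, $PL^3/(3EI)$ under Euler–Bernoulli theory plus the shear term $PL/(\kappa AG)$ under Timoshenko theory, with $\kappa = 5/6$ for a rectangular section as noted above; the load, dimensions and material constants are assumed example values.

def tip_deflections(P, L, E, G, b, h, kappa=5.0 / 6.0):
    """Return (Euler-Bernoulli, Timoshenko) tip deflections in metres
    for an end-loaded rectangular cantilever."""
    A = b * h                # cross-section area
    I = b * h**3 / 12.0      # second moment of area of a rectangle
    delta_eb = P * L**3 / (3.0 * E * I)           # bending contribution only
    delta_t = delta_eb + P * L / (kappa * A * G)  # bending plus shear
    return delta_eb, delta_t

# Assumed example: a steel beam, deliberately short and deep so that
# shear deformation is not negligible.
P = 10e3         # end load, N
L = 0.5          # length, m
E = 210e9        # elastic modulus, Pa
G = 80e9         # shear modulus, Pa
b, h = 0.1, 0.2  # width and height of the rectangular section, m

d_eb, d_t = tip_deflections(P, L, E, G, b, h)
print(f"Euler-Bernoulli: {d_eb:.3e} m, Timoshenko: {d_t:.3e} m")

Consistent with the discussion of effective stiffness above, the Timoshenko deflection is always the larger of the two, and the difference grows as the ratio of length to section height shrinks.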
Timoshenko–Ehrenfest beam theory
[ "Physics", "Engineering" ]
2,125
[ "Structural engineering", "Continuum mechanics", "Structural analysis", "Classical mechanics", "Mechanical engineering", "Aerospace engineering" ]
13,463,690
https://en.wikipedia.org/wiki/Contraction%20principle%20%28large%20deviations%20theory%29
In mathematics — specifically, in large deviations theory — the contraction principle is a theorem that states how a large deviation principle on one space "pushes forward" (via the pushforward of a probability measure) to a large deviation principle on another space via a continuous function. Statement Let X and Y be Hausdorff topological spaces and let (με)ε>0 be a family of probability measures on X that satisfies the large deviation principle with rate function I : X → [0, +∞]. Let T : X → Y be a continuous function, and let νε = T∗(με) be the push-forward measure of με by T, i.e., for each measurable set/event E ⊆ Y, νε(E) = με(T−1(E)). Let $$J(y) := \inf\{\, I(x) \mid x \in X,\ T(x) = y \,\},$$ with the convention that the infimum of I over the empty set ∅ is +∞. Then: J : Y → [0, +∞] is a rate function on Y, J is a good rate function on Y if I is a good rate function on X, and (νε)ε>0 satisfies the large deviation principle on Y with rate function J. References (See chapter 4.2.1) Asymptotic analysis Large deviations theory Mathematical principles Probability theorems
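As a worked illustration (an assumed example, not taken from the cited reference), push a Gaussian-type rate function on the plane forward through the continuous map T(x1, x2) = x1 + x2:

% Illustrative computation: contraction principle for a linear map.
\[
X = \mathbb{R}^2, \qquad I(x_1, x_2) = \tfrac{1}{2}\left(x_1^2 + x_2^2\right), \qquad
T(x_1, x_2) = x_1 + x_2 ,
\]
\[
J(y) = \inf\bigl\{\, I(x_1, x_2) : x_1 + x_2 = y \,\bigr\}
     = \tfrac{1}{2}\Bigl(\tfrac{y^2}{4} + \tfrac{y^2}{4}\Bigr)
     = \frac{y^2}{4} ,
\]
% the infimum being attained at the symmetric point x_1 = x_2 = y/2
% (e.g. by Lagrange multipliers).

Here I is a good rate function and T is continuous, so the theorem yields the large deviation principle on the real line with the good rate function J(y) = y²/4.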
Contraction principle (large deviations theory)
[ "Mathematics" ]
276
[ "Mathematical principles", "Mathematical analysis", "Mathematical theorems", "Theorems in probability theory", "Asymptotic analysis", "Mathematical problems" ]
13,463,844
https://en.wikipedia.org/wiki/Richardson%27s%20theorem
In mathematics, Richardson's theorem establishes the undecidability of the equality of real numbers defined by expressions involving integers, π, and exponential and sine functions. It was proved in 1968 by the mathematician and computer scientist Daniel Richardson of the University of Bath. Specifically, the class of expressions for which the theorem holds is that generated by rational numbers, the number π, the number ln 2, the variable x, the operations of addition, subtraction, multiplication, composition, and the sin, exp, and abs functions. For some classes of expressions generated by primitives other than those in Richardson's theorem, there exist algorithms that can determine whether an expression is zero. Statement of the theorem Richardson's theorem can be stated as follows: Let E be a set of expressions that represent functions. Suppose that E includes these expressions: x (representing the identity function) e^x (representing the exponential function) sin x (representing the sin function) all rational numbers, ln 2, and π (representing constant functions that ignore their input and produce the given number as output) Suppose E is also closed under a few standard operations. Specifically, suppose that if A and B are in E, then all of the following are also in E: A + B (representing the pointwise addition of the functions that A and B represent) A − B (representing pointwise subtraction) AB (representing pointwise multiplication) A∘B (representing the composition of the functions represented by A and B) Then the following decision problems are unsolvable: Deciding whether an expression A in E represents a function that is nonnegative everywhere If E includes also the expression |x| (representing the absolute value function), deciding whether an expression A in E represents a function that is zero everywhere If E includes an expression B representing a function whose antiderivative has no representative in E, deciding whether an expression A in E represents a function whose antiderivative can be represented in E. (Example: has an antiderivative in the elementary functions if and only if .) Extensions After Hilbert's tenth problem was solved in 1970, B. F. Caviness observed that the use of e^x and ln 2 could be removed. Wang later noted that, under the same assumptions under which the question of whether there was x with A(x) < 0 was insolvable, the question of whether there was x with A(x) = 0 was also insolvable. Miklós Laczkovich also removed the need for π and reduced the use of composition. In particular, given an expression A(x) in the ring generated by the integers, x, sin x^n, and sin(x sin x^n) (for n ranging over positive integers), both the question of whether A(x) > 0 for some x and whether A(x) = 0 for some x are unsolvable. By contrast, the Tarski–Seidenberg theorem says that the first-order theory of the real field is decidable, so it is not possible to remove the sine function entirely. See also References Further reading External links Undecidable problems Functions and mappings Theorems in the foundations of mathematics
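In practice, this undecidability is why computer algebra systems attack zero-equivalence only heuristically. The following Python sketch using the SymPy library is illustrative (the example expressions are assumptions chosen for the demonstration, not taken from Richardson's paper):

# SymPy's simplify() is a heuristic zero test: it succeeds on easy members
# of Richardson's expression class but, by the theorem, no such routine
# can decide every case.
import sympy as sp

x = sp.symbols('x', real=True)

candidates = [
    sp.sin(x)**2 + sp.cos(x)**2 - 1,   # identically zero
    sp.exp(x * sp.log(2)) - 2**x,      # identically zero
    sp.sin(x) - x,                     # not identically zero
]

for expr in candidates:
    simplified = sp.simplify(expr)
    print(expr, '->', simplified, '| zero:', simplified == 0)

simplify() recognises the first two identities, but Richardson's theorem guarantees that no algorithm extends this success to the whole class generated by the rationals, π, ln 2, x, sin, exp and abs.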
Richardson's theorem
[ "Mathematics" ]
656
[ "Functions and mappings", "Mathematical analysis", "Foundations of mathematics", "Mathematical logic", "Mathematical objects", "Computational problems", "Mathematical relations", "Undecidable problems", "Mathematical problems", "Mathematical theorems", "Theorems in the foundations of mathematics...
13,463,881
https://en.wikipedia.org/wiki/Map%20of%20lattices
The concept of a lattice arises in order theory, a branch of mathematics. The Hasse diagram below depicts the inclusion relationships among some important subclasses of lattices. Proofs of the relationships in the map 1. A Boolean algebra is a complemented distributive lattice. (def) 2. A Boolean algebra is a Heyting algebra. 3. A Boolean algebra is orthocomplemented. 4. A distributive orthocomplemented lattice is orthomodular. 5. A Boolean algebra is orthomodular. (1,3,4) 6. An orthomodular lattice is orthocomplemented. (def) 7. An orthocomplemented lattice is complemented. (def) 8. A complemented lattice is bounded. (def) 9. An algebraic lattice is complete. (def) 10. A complete lattice is bounded. 11. A Heyting algebra is bounded. (def) 12. A bounded lattice is a lattice. (def) 13. A Heyting algebra is residuated. 14. A residuated lattice is a lattice. (def) 15. A distributive lattice is modular. 16. A modular complemented lattice is relatively complemented. 17. A Boolean algebra is relatively complemented. (1,15,16) 18. A relatively complemented lattice is a lattice. (def) 19. A Heyting algebra is distributive. 20. A totally ordered set is a distributive lattice. 21. A metric lattice is modular. 22. A modular lattice is semi-modular. 23. A projective lattice is modular. 24. A projective lattice is geometric. (def) 25. A geometric lattice is semi-modular. 26. A semi-modular lattice is atomic. 27. An atomic lattice is a lattice. (def) 28. A lattice is a semi-lattice. (def) 29. A semi-lattice is a partially ordered set. (def) Notes References Lattice theory
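Relationship 15 (every distributive lattice is modular) can be checked by brute force on a small concrete lattice. The Python sketch below is an assumed illustration, not part of the original map: it uses the divisors of 36 ordered by divisibility, with gcd as meet and lcm as join, which form a distributive lattice.

from math import gcd
from itertools import product

def lcm(a, b):
    return a * b // gcd(a, b)

divisors = [d for d in range(1, 37) if 36 % d == 0]

# Distributive law: a meet (b join c) == (a meet b) join (a meet c).
distributive = all(
    gcd(a, lcm(b, c)) == lcm(gcd(a, b), gcd(a, c))
    for a, b, c in product(divisors, repeat=3)
)

# Modular law: if a <= c (here: a divides c),
# then a join (b meet c) == (a join b) meet c.
modular = all(
    lcm(a, gcd(b, c)) == gcd(lcm(a, b), c)
    for a, b, c in product(divisors, repeat=3)
    if c % a == 0
)

print(f"distributive: {distributive}, modular: {modular}")  # both True

Both checks pass, as the map predicts: distributivity holds, and modularity follows from it.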
Map of lattices
[ "Mathematics" ]
434
[ "Fields of abstract algebra", "Order theory", "Lattice theory" ]
13,464,118
https://en.wikipedia.org/wiki/Gary%20Taubes
Gary Taubes (born April 30, 1956) is an American journalist, writer, and low-carbohydrate / high-fat (LCHF) diet advocate. His central claim is that carbohydrates, especially sugar and high-fructose corn syrup, overstimulate the secretion of insulin, causing the body to store fat in fat cells and the liver, and that it is primarily a high level of dietary carbohydrate consumption that accounts for obesity and other metabolic syndrome conditions. He is the author of Nobel Dreams (1987); Bad Science: The Short Life and Weird Times of Cold Fusion (1993); Good Calories, Bad Calories (2007), titled The Diet Delusion (2008) in the UK and Australia; Why We Get Fat: And What to Do About It (2010); The Case Against Sugar (2016); and The Case for Keto: Rethinking Weight Control and the Science and Practice of Low-Carb/High-Fat Eating (2020). Taubes's work often goes against accepted scientific, governmental, and popular tenets, such as that obesity is caused by eating too much and exercising too little and that excessive consumption of fat, especially saturated fat in animal products, leads to cardiovascular disease. Biography Born in Rochester, New York, Taubes studied physics at Harvard University (BS, 1977) and aerospace engineering at Stanford University (MS, 1978). After receiving a master's degree in journalism at Columbia University in 1981, Taubes joined Discover magazine as a staff reporter in 1982. Since then he has written numerous articles for Discover, Science and other magazines. Originally focusing on physics issues, his interests have more recently turned to medicine and nutrition. His brother, Clifford Henry Taubes, is the William Petschek Professor of Mathematics at Harvard University. Scientific controversies Taubes's books have all dealt with scientific controversies. Nobel Dreams takes a critical look at the politics and experimental techniques behind the Nobel Prize-winning work of physicist Carlo Rubbia. In Bad Science: The Short Life and Weird Times of Cold Fusion, he chronicles the short-lived media frenzy surrounding the Pons–Fleischmann cold fusion experiments of 1989. He opines in the book that heat generation in the experiments of Drs. Martin Fleischmann and Stanley Pons was due entirely to the difference in ionic conductivity of deuterated salt solutions compared to normal aqueous solutions. He also made an allegation of fraud regarding the results from John Bockris's research group. Diet advocacy Taubes gained prominence in the low-carb diet debate following the publication of his 2002 New York Times Magazine piece "What if It's All Been a Big Fat Lie?". The article, which questioned the efficacy and health benefits of low-fat diets, was seen as defending the Atkins diet against the medical establishment, and it became extremely controversial. Some scholars interviewed for the article complained that Taubes misinterpreted their words or quoted them out of context. Taubes himself stated: "[E]ven though I knew the article would be the most controversial article the Times Magazine ran all year, [the reaction] still shocked me." The Center for Science in the Public Interest published a rebuttal to the Times article in its November 2002 newsletter. Cardiologist John W. Farquhar commented that "Gary Taubes tricked us all into coming across as supporters of the Atkins diet." Taubes is an advocate of eating beef. Beef industry leader Amanda Radke has written in Beef Daily that "Today's best beef advocates wear a variety of hats [...]
like Nina Teicholz or Gary Taubes who turn against conventional health advice to promote diets rich in animal fats and proteins". Good Calories, Bad Calories In 2007, Taubes published his book Good Calories, Bad Calories: Challenging the Conventional Wisdom on Diet, Weight Control, and Disease (published as The Diet Delusion in the UK). The book proposes that the hypothesis that dietary fat is the cause of obesity and heart disease became dogma, and claims to show how the scientific method was circumvented so that a contestable hypothesis could remain unchallenged. The book uses data and studies compiled from more than a century of dietary research to support what Taubes calls "the alternative hypothesis." Taubes's argument is that the medical community and the U.S. federal government have relied upon misinterpreted scientific data on nutrition to build the prevailing paradigm about what constitutes healthful eating. Taubes argues that, contrary to conventional nutritional science, it is a carbohydrate-laced diet, augmented with sugar, that leads to heart disease, type 2 diabetes, obesity, cancer, and other "maladies of civilization." In the Epilogue to Good Calories, Bad Calories on page 454, Taubes sets out ten "inescapable" conclusions, the first of which is, "Dietary fat, whether saturated or not, is not a cause of obesity, heart disease, or any other chronic disease of civilization." Reviewing Good Calories, Bad Calories, obesity researcher George A. Bray wrote that the book "...has much useful information and is well worth reading" but that "obese people clearly eat more than do lean ones" and that "some of the conclusions that the author reaches are not consistent with current concepts about obesity." In 2007, New York Times science writer John Tierney cited Taubes's book Good Calories, Bad Calories and discussed information cascades and the role of physiologist Ancel Keys in widely held beliefs related to diet and fat. Tierney follows Taubes in noting that a 2001 Cochrane meta-analysis of low-fat diets found that they had "no significant effect on mortality". Harriet A. Hall, however, has criticized Taubes for selectively quoting the meta-analysis, and, writing for Science-Based Medicine, states that although it is possible some of Taubes's hypotheses may be borne out by subsequent evidence, his idea that carbohydrate restriction can lead to weight loss independently of calorie restriction is "simply wrong". The Case Against Sugar Taubes authored The Case Against Sugar in 2016. The book argues that sugar is an addictive drug and is the cause of obesity and many health-related problems. It was positively reviewed by chef and food writer Dan Barber, who described Taubes's writing as "inflammatory and copiously researched". Food journalist Joanna Blythman also praised the book, noting "his clear and persuasive argument that obesity is a hormonal disorder, switched on by sugar, is one that urgently needs wider airing." Harriet Hall, who is known as a skeptic in the medical community, wrote that Taubes made a compelling case against sugar but that the evidence was inconclusive. C. Albert Yeung in the Journal of Public Health described the book as very informative but insufficient to draw any conclusion, and as a "polemic, not a balanced scientific review." NuSI In September 2012, Taubes and Peter Attia launched the Nutrition Science Initiative (NuSI), a nonprofit organization they described as "a Manhattan Project-like effort to solve" the problem of obesity.
The project set out to validate the "carbohydrate-insulin hypothesis", a model by which carbohydrate is proposed to be uniquely fattening because of its influence on insulin levels. A pilot study funded by NuSI was conducted in 2014 by a team led by NIH researcher Kevin Hall, and produced evidence that did not support the hypothesis. In 2017, Kevin Hall wrote that the hypothesis had been falsified by experiment. Not long after the completion of that study, NuSI was confronted with a number of issues: it lost a significant source of funding, and co-founder Peter Attia left the organization. In 2018, NuSI was described as having "two part-time employees and an unpaid volunteer hanging around". Awards Taubes has won the Science in Society Journalism Award of the National Association of Science Writers three times and was awarded an MIT Knight Science Journalism Fellowship for 1996–97. He is a Robert Wood Johnson Foundation independent investigator in health policy. Selected bibliography (Also published as The Diet Delusion) References External links 1956 births Living people American nutritionists American science writers Cold fusion Columbia University Graduate School of Journalism alumni Harvard John A. Paulson School of Engineering and Applied Sciences alumni Low-carbohydrate diet advocates Stanford University alumni Writers from Rochester, New York 20th-century American Jews 21st-century American Jews Discover (magazine) people
Gary Taubes
[ "Physics", "Chemistry" ]
1,789
[ "Nuclear fusion", "Cold fusion", "Nuclear physics" ]
13,464,520
https://en.wikipedia.org/wiki/Husky%20VMMD
The Husky VMMD (Vehicle-Mounted Mine Detection) is a configurable counter-IED MRAP (Mine-Resistant Ambush Protected) vehicle, developed by South African-based DCD Protected Mobility and American C-IED company Critical Solutions International. Designed for use in route clearance and de-mining operations, the Husky is equipped with technologies to help detect explosives and minimise blast damage. The Husky VMMD helps operators detect land mines and improvised explosive devices (IEDs) using basic sensor equipment and imaging systems. The Husky is equipped with countermeasures such as jamming systems to help disrupt the effect of IEDs. The Husky's armour is also able to withstand damage from basic explosives. Development The Husky traces its lineage to the Pookie, a Rhodesian mine clearance vehicle. Originally used as the lead element of a mine removal convoy, the Husky was employed as part of the Chubby mine detection system. The early Chubby system comprised a lead detection vehicle (the Meerkat), a second proofing vehicle (the Husky) towing a mine detonation trailer, and a third vehicle carrying spare parts for expedient blast repair. The Husky was initially deployed in the 1970s. During the South African Border War, the South African Defence Force used the Husky extensively to clear mines from military convoy routes in Namibia and Angola. In the mid-1990s, DCD Group and Critical Solutions International planned to bring the technology to the U.S. and underwent a two-year foreign comparative test program with the United States Department of Defense and follow-on modifications and testing. In 1997, CSI was directed to produce and deliver production systems under the U.S. Army Interim Vehicle Mounted Mine Detection Program. Over the next twenty years, the Husky underwent several iterations and upgrades. U.S. military clearance units currently train on and employ Husky vehicles as detection assets and clearance vehicles. Design The Husky is part of a class of MRAP vehicles developed from South African blast protection designs. The sharp V-hull of the Husky helps reduce blast effects by increasing ground clearance and standoff from the blast, increasing structural hull rigidity, and diverting blast energy and fragmentation away from the platform and its occupants. The Husky is designed to break apart in a blast event, allowing energy to transfer to the detachable front and rear modules rather than the critical components of the vehicle or the occupants located in the cab. Its three main components (a center cab with front and rear wheel modules) are connected by shear pins. Critical components are engineered to break apart predictably, helping to prevent catastrophic damage and enabling users to quickly replace modules on site. This approach increases the lifespan of the vehicle and limits the need for recovery teams to evacuate the vehicle to maintenance facilities. The cabin of the Husky is fitted with bulletproof glass windows. There is an entry hatch on the roof. The Husky Mk III and 2G are powered by a Mercedes-Benz OM906LA turbo diesel engine coupled with an Allison Transmission 2500 SP 5-speed automatic transmission. It can reach a maximum speed of 72 km/h and has a range of 350 km. Variants Husky Mk I First Husky production model. Replaced by Husky Mk II. Husky Mk II Second Husky model. Replaced by Husky Mk III. Husky Mk III Modern single-occupant Husky model.
The platform is integrated with pulse-induction metal detector panels and overpass tires that enable operators to regulate tire air pressure in order to pass over land mines without detonating them. The Mk III, like other Husky models, is engineered in a modular, frangible configuration. Husky 2G Project type: mine clearance vehicle. Manufacturer: DCD Protected Mobility. Crew: two. Operating weight: 9,200 kg. The Husky 2G is a two-seat variant of the Husky Mk III vehicle-mounted mine detector (VMMD) designed and manufactured by South African firm DCD Protected Mobility (DCD PM). Equipped with a number of sensors, the vehicle is well suited for mine-clearing operations, including the detection, identification and destruction of improvised explosive devices (IEDs), land mines and other explosive materials. Development of the Husky 2G was prompted by the need to conduct longer missions and employ multiple detection systems. The Husky 2G was designed with added high-sensitivity detectors, ground-penetrating radar, video optics suites, and remote weapon stations. These additional components required a second operator to manage the additional workload, hence the two-person crew. Equipment The Husky is capable of carrying the following equipment and payloads: Autonomous vehicle upgrades Rocket-propelled grenade armor and netting Smoke grenade launchers Electronic countermeasures Remote weapon station Metal detectors Ground-penetrating radar Nonlinear junction detectors Gunfire detectors Robotic arms Blowers Water diggers Thermal cameras Optics suite Mine-clearing line charges Mine rollers Rhino Passive Infrared Defeat System Mine plows Proofing rollers Electrostatic discharge Red Pack repair kit Operators Husky Mk III United States Army United States Marine Corps Canadian Army Australian Army South African Defence Force Kenyan Army Husky 2G Islamic Republic of Iran Army Iraqi Army Turkish Army Spanish Army Royal Saudi Land Forces Egyptian Army Jordanian Army Latvian Army United States Army (limited fielding in support of Operation Enduring Freedom) Recognitions The Husky was listed on the U.S. Army’s Top Ten inventions of 2010. References External links Critical Solutions International (CSI) Soldier Armed magazine article Military engineering vehicles Cold War military equipment of South Africa Mine warfare countermeasures Military vehicles of the United States Military vehicles of South Africa Bomb disposal Military vehicles introduced in the 1970s
Husky VMMD
[ "Chemistry", "Engineering" ]
1,127
[ "Explosion protection", "Military engineering", "Military engineering vehicles", "Bomb disposal", "Engineering vehicles" ]
13,464,676
https://en.wikipedia.org/wiki/Pile%20%28abstract%20data%20type%29
In computer science, a pile is an abstract data type for storing data in a loosely ordered way. There are two different usages of the term; one refers to an ordered double-ended queue, the other to an improved heap. Ordered double-ended queue The first version combines the properties of the double-ended queue (deque) and a priority queue and may be described as an ordered deque. An item may be added to the head of the list if the new item is less than or equal to the current head, or to the tail of the list if the new item is greater than or equal to the current tail. Elements may be removed from both the head and the tail. Piles of this kind are used in the "UnShuffle sort" sorting algorithm. Improved heap The second version is the subject of patents and improves on the heap data structure. References Abstract data types
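A minimal sketch of the first kind of pile (the ordered deque) in Python; the class and method names are invented for illustration. The invariant is that stored items always read in nondecreasing order from head to tail, so insertion is permitted only at an end that preserves that order.

```python
from collections import deque

class Pile:
    """Ordered deque: items read in nondecreasing order from head to tail."""
    def __init__(self):
        self._items = deque()

    def push(self, item) -> bool:
        """Insert at head or tail if the ordering permits; report success."""
        if not self._items:
            self._items.append(item)
        elif item <= self._items[0]:
            self._items.appendleft(item)      # new minimum goes on the head
        elif item >= self._items[-1]:
            self._items.append(item)          # new maximum goes on the tail
        else:
            return False                      # fits neither end of this pile
        return True

    def pop_head(self):
        return self._items.popleft()

    def pop_tail(self):
        return self._items.pop()
```

In an UnShuffle-style sort, an item that fits neither end of any existing pile opens a new pile, and the sorted output is then obtained by merging the piles.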
Pile (abstract data type)
[ "Mathematics" ]
192
[ "Type theory", "Mathematical structures", "Abstract data types" ]
13,464,844
https://en.wikipedia.org/wiki/Bell%20Laboratories%20Layered%20Space-Time
Bell Laboratories Layered Space-Time (BLAST) is a transceiver architecture for offering spatial multiplexing over multiple-antenna wireless communication systems. Such systems have multiple antennas at both the transmitter and the receiver in an effort to exploit the many different paths between the two in a highly scattering wireless environment. BLAST was developed by Gerard Foschini at Lucent Technologies' Bell Laboratories (now Nokia Bell Labs). By careful allocation of the data to be transmitted to the transmitting antennas, multiple data streams can be transmitted simultaneously within a single frequency band; the data capacity of the system then grows directly in line with the number of antennas (subject to certain assumptions). This represents a significant advance on single-antenna systems. V-BLAST V-BLAST (Vertical-Bell Laboratories Layered Space-Time) is a detection algorithm for the receiver of multi-antenna MIMO systems, first proposed in 1996 at Bell Laboratories in New Jersey in the United States by Gerard J. Foschini. It works by successively cancelling the interference caused by the individual transmitters. Its principle is quite simple: first detect the most powerful signal; regenerate the received contribution of that stream from the detection decision; subtract the regenerated signal from the received signal; and then, with this cleaned-up signal, proceed to detect the most powerful of the remaining streams, since the first has already been removed, and so forth. Each step yields a received vector containing less interference. The complete detection algorithm can be summarized as a recursion consisting of an initialization step followed by repeated nulling, detection, and cancellation steps. See also Space–time code: a means for using multiple antennas to improve reliability rather than data rate. Telecommunication References Further reading External links http://www.alcatel-lucent.com/wps/portal/BellLabs Antennas Detection theory
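The recursion described above can be made concrete with a zero-forcing variant of successive interference cancellation (ZF-SIC). The sketch below is illustrative rather than a reproduction of Foschini's original formulation: the QPSK constellation, the 2x2 Rayleigh channel, and the ordering rule (detect first the stream with the smallest pseudoinverse row norm, i.e. the best post-nulling SNR) are all assumptions.

```python
# V-BLAST-style detection: zero-forcing nulling plus successive cancellation.
import numpy as np

rng = np.random.default_rng(0)

def qpsk_slice(z):
    """Map a complex value to the nearest unit-energy QPSK point."""
    return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

nt, nr = 2, 2
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
s = qpsk_slice(rng.normal(size=nt) + 1j * rng.normal(size=nt))  # transmitted
y = H @ s + 0.01 * (rng.normal(size=nr) + 1j * rng.normal(size=nr))

detected = np.zeros(nt, dtype=complex)
active = list(range(nt))                 # streams not yet detected
while active:
    W = np.linalg.pinv(H[:, active])     # zero-forcing nulling matrix
    k = int(np.argmin(np.sum(np.abs(W) ** 2, axis=1)))  # strongest stream
    est = qpsk_slice(W[k] @ y)           # detect it
    col = active.pop(k)
    detected[col] = est
    y = y - H[:, col] * est              # cancel its contribution, recurse

print(np.allclose(detected, s))          # True at this noise level
```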
Bell Laboratories Layered Space-Time
[ "Engineering" ]
377
[ "Antennas", "Telecommunications engineering" ]
13,464,959
https://en.wikipedia.org/wiki/Bittern%20%28salt%29
Bittern (pl. bitterns), or nigari, is the salt solution formed when halite (table salt) precipitates from seawater or brines. Bitterns contain magnesium, calcium, and potassium ions as well as chloride, sulfate, iodide, and other ions. Bittern is commonly formed in salt ponds where the evaporation of water prompts the precipitation of halite. These salt ponds can be part of a salt-producing industrial facility, or they can be used as a waste storage location for brines produced in desalination processes. Bittern is a source of many useful salts. It is used as a natural source of Mg2+, and it can be used as a coagulant both in the production of tofu and in the treatment of industrial wastewater. History Bittern has been extracted for at least several centuries. The Dutch chemist Petrus Jacobus Kipp (1808–1864) experimented with saturated solutions of bittern. The term for the solution is a modification of "bitter". Uses Salt derivation Bittern is a source of many salts, including magnesium sulfate (Epsom salt). Multiple methods exist for removing these salts from the bittern, and the method ultimately used depends on the target product. Products that would naturally precipitate from the bitterns crystallize as evaporation proceeds (e.g. kainite). Products that do not preferentially precipitate from bitterns may precipitate through the addition of another compound or through ion exchange. Potassium-magnesium sulfate double salt, a good fertilizer, is a salt that precipitates from bitterns upon the addition of methanol. Ethanol is also used, but it exhibits a preference for potassium sulfate precipitation. The solution can furthermore be used in the production of potash and potassium salts. Tartaric acid is one compound that can facilitate the precipitation of these salts. Magnesium hydroxide (Mg(OH)2) can be derived from bittern. Adding an alkaline solution such as sodium hydroxide (NaOH) or lime will cause magnesium hydroxide to precipitate, although lime is not as effective. Slower addition of the alkaline solution results in the precipitation of larger particles that are easier to remove from solution. Coagulation Tofu Nigari is produced from seawater after first removing sodium chloride. It contains mostly magnesium chloride, smaller amounts of magnesium sulfate (Epsom salt), potassium chloride, calcium chloride, and trace amounts of other naturally occurring salts. Nigari was the first coagulant used to make tofu in Japan. It is still used today because tofu made using bittern preserves the original flavor of the soybeans used to make it. Bittern causes rapid coagulation, which influences the quality of the tofu. Alternatively, calcium sulfate, calcium chloride or other substances are also used. Wastewater treatment Bittern can be used instead of aluminum-based coagulants in the treatment of wastewater produced during the fabric-dyeing process. The wastewater pH is basic, which is favorable for the use of bittern. After the addition of bittern, precipitated magnesium hydroxide works as the coagulant to collect dye, solids, organic matter, and heavy metals from the wastewater before settling out of solution. The sludge produced from this wastewater treatment is also easier to dispose of than sludge produced by aluminum-based coagulants, because there are fewer restrictions surrounding the disposal of magnesium, and it may be possible to recycle the sludge as fertilizer.
Bittern can also be used as a source of magnesium ions (Mg2+) for the precipitation of struvite, a useful fertilizer, from wastewater containing nitrogen and phosphorus. One source of useful wastewater is landfill leachate. Bittern is just as good as other sources of magnesium ions at removing phosphorus from wastewater streams, but it lags behind other magnesium ion sources in the removal of ammonia (a nitrogen compound). Other uses Bittern can be used to culture Haloquadratum archaea. Haloquadratum are distinctly square-shaped and are abundant in hypersaline environments such as salt ponds. Their cultivation is necessary for understanding both their ecological function in those environments and their unique morphology. The presence of Haloquadratum in an environment deemed inhospitable for most life has prompted closer study of these archaea. A study has been performed exploring the use of bittern as a natural magnesium supplement to reduce the rise in blood lipids after a meal (postprandial hyperlipidemia). Due to its high salinity, bittern can also be used as a draw solution for an osmotic process that concentrates sucrose in sugarcane juice. Because forward osmosis is being used, the process is relatively energy-efficient. Epsom salt can also be recovered from the bittern draw solution once it has been used. This method is particularly useful in areas where sugarcane and salt production are in close proximity, avoiding the costs associated with moving either the sugarcane juice or the bittern. Environmental impact In some jurisdictions, most bitterns are used for other production instead of being directly discarded. In other jurisdictions, each tonne of salt produced can create three or more tonnes of waste bitterns. Although bittern generally contains the same compounds as seawater, it is much more concentrated than seawater. If bittern is released directly into seawater, the ensuing salinity increase may harm marine life around the point of release. Even small increases in salinity can disrupt marine species' osmotic balances, which may result in the death of the organism in some cases. In December 1997, 94 corpses of green sea turtles, Chelonia mydas, were found at the Ojo de Liebre Lagoon (OLL) in Mexico, adjacent to the industrial operation of Exportadora de Sal S.A. (ESSA), the largest saltworks in the world. The fluoride ion (F−) content of the bitterns was 60.5 times that of seawater. The bitterns' osmolality was 11,000 mOsm/kg of water, whereas the turtles' plasma osmolality was about 400 mOsm/kg of water. Researchers concluded that the dumping of bitterns into the ocean should be avoided. The lack of adequate disposal methods for bitterns, and the concerns of local commercial and recreational fishing associations about bitterns' deleterious impacts upon local fish and prawn hatchery areas, led the Western Australian EPA in 2008 to recommend against the proposed 4.2 million tonne per annum Straits Salt project in the Pilbara region of Western Australia. References Salts Chemistry Evaporite
Bittern (salt)
[ "Chemistry" ]
1,384
[ "Salts" ]
13,465,362
https://en.wikipedia.org/wiki/Aggregation%20number
In colloidal chemistry, an aggregation number is a description of the number of molecules present in a micelle once the critical micelle concentration (CMC) has been reached. In more detail, it has been defined as the average number of surfactant monomers in a spherical micelle. The aggregation number of micelles can be determined by isothermal titration calorimetry when the aggregation number is not too high. Another classical experiment to determine the mean aggregation number involves a luminescent probe, a quencher, and a known concentration of surfactant. If the concentration of the quencher is varied and the CMC of the surfactant is known, the mean aggregation number can be calculated. References Colloidal chemistry
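The quenching experiment mentioned above is commonly analyzed with the Turro–Yekta relation, which assumes the quenchers distribute over the micelles according to Poisson statistics, giving ln(I0/I) = N[Q]/([S] − CMC). A sketch of the fit, with all concentrations and intensities invented for illustration:

```python
# Mean aggregation number N from steady-state fluorescence quenching:
# the slope of ln(I0/I) versus quencher concentration [Q] equals N/([S]-CMC).
import numpy as np

S, CMC = 0.050, 0.008                 # surfactant conc. and CMC, mol/L (assumed)
Q = np.array([0.0, 0.1, 0.2, 0.3, 0.4]) * 1e-3   # quencher conc., mol/L
I0 = 1000.0                           # unquenched intensity
N_true = 60                           # pretend value used to synthesize data
I = I0 * np.exp(-N_true * Q / (S - CMC))         # synthetic intensities

slope = np.polyfit(Q, np.log(I0 / I), 1)[0]      # least-squares slope
N_est = slope * (S - CMC)
print(round(N_est))                   # -> 60, recovering the assumed value
```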
Aggregation number
[ "Chemistry" ]
157
[ "Colloidal chemistry", "Surface science", "Physical chemistry stubs", "Colloids" ]
13,466,165
https://en.wikipedia.org/wiki/Duration%20%28philosophy%29
Duration (French: la durée) is a theory of time and consciousness posited by the French philosopher Henri Bergson. Bergson sought to improve upon what he perceived as inadequacies in the philosophy of Herbert Spencer, which he attributed to Spencer's lack of comprehension of mechanics, and which led Bergson to the conclusion that time eluded mathematics and science. Bergson became aware that the moment one attempted to measure a moment, it would be gone: one measures an immobile, complete line, whereas time is mobile and incomplete. For the individual, time may speed up or slow down, whereas, for science, it would remain the same. Hence Bergson decided to explore the inner life of man, which is a kind of duration, neither a unity nor a quantitative multiplicity. Duration is ineffable and can only be shown indirectly through images that can never reveal a complete picture. It can only be grasped through a simple intuition of the imagination. Bergson first introduced his notion of duration in his essay Time and Free Will: An Essay on the Immediate Data of Consciousness. It is used as a defense of free will in a response to Immanuel Kant, who believed free will was only possible outside time and space. Responses to Kant and Zeno Zeno of Elea believed reality was an uncreated and indestructible immobile whole. He formulated four paradoxes to present mobility as an impossibility. We can never, he said, move past a single point because each point is infinitely divisible, and it is impossible to cross an infinite space. But to Bergson, the problem only arises when mobility and time, that is, duration, are mistaken for the spatial line that underlies them. Time and mobility are mistakenly treated as things, not progressions. They are treated retrospectively as a thing's spatial trajectory, which can be divided ad infinitum, whereas they are, in fact, an indivisible whole. Bergson's response to Kant is that free will is possible within a duration within which time resides. Free will is not really a problem but merely a common confusion among philosophers caused by the immobile time of science. To measure duration (durée), it must be translated into the immobile, spatial time (temps) of science, a translation of the unextended into the extended. It is through this translation that the problem of free will arises. Since space is a homogeneous, quantitative multiplicity, as opposed to what Bergson calls a heterogeneous, qualitative multiplicity, duration becomes juxtaposed and converted into a succession of distinct parts, one coming after the other and therefore "caused" by one another. Nothing within a duration can be the cause of anything else within it. Hence determinism, the belief that everything is determined by a prior cause, is an impossibility. One must accept time as it really is through placing oneself within duration, where freedom can be identified and experienced as pure mobility. Images of duration The first image is of two spools, one unrolling to represent the continuous flow of ageing as one feels oneself moving toward the end of one's life-span, the other rolling up to represent the continuous growth of memory which, for Bergson, equals consciousness. No two successive moments are identical, for the one will always contain the memory left by the other. A person with no memory might experience two identical moments but, Bergson says, that person's consciousness would thus be in a constant state of death and rebirth, which he identifies with unconsciousness.
The image of two spools, however, is of a homogeneous and commensurable thread, whereas, according to Bergson, no two moments can be the same, hence duration is heterogeneous. Bergson then presents the image of a spectrum of a thousand gradually changing shades with a line of feeling running through them, being both affected by and maintaining each of the shades. Yet even this image is inaccurate and incomplete, for it represents duration as a fixed and complete spectrum with all the shades spatially juxtaposed, whereas duration is incomplete and continuously growing, its states not beginning or ending but intermingling. Even this image is incomplete, because the wealth of colouring is forgotten when it is invoked. But as the three images illustrate, it can be stated that duration is qualitative, unextended, multiple yet a unity, mobile and continuously interpenetrating itself. Yet these concepts put side-by-side can never adequately represent duration itself; The truth is we change without ceasing...there is no essential difference between passing from one state to another and persisting in the same state. If the state which "remains the same" is more varied than we think, [then] on the other hand the passing of one state to another resembles—more than we imagine—a single state being prolonged: the transition is continuous. Just because we close our eyes to the unceasing variation of every physical state, we are obliged when the change has become so formidable as to force itself on our attention, to speak as if a new state were placed alongside the previous one. Of this new state we assume that it remains unvarying in its turn and so on endlessly. Because a qualitative multiplicity is heterogeneous and yet interpenetrating, it cannot be adequately represented by a symbol; indeed, for Bergson, a qualitative multiplicity is inexpressible. Thus, to grasp duration, one must reverse habitual modes of thought and place oneself within duration by intuition. Influence on Gilles Deleuze Gilles Deleuze was profoundly influenced by Bergson's theory of duration, particularly in his work Cinema 1: The Movement Image, in which he described cinema as providing people with continuity of movement (duration) rather than still images strung together. Physics and Bergson's ideas Bergson had a correspondence with physicist Albert Einstein in 1922 and a debate over Einstein's theory of relativity and its implications. For Bergson, the primary disagreement was over metaphysical and epistemological claims made by the theory of relativity, rather than a dispute about scientific evidence for or against the theory. Bergson famously stated of the theory that it is "a metaphysics grafted upon science, it is not science". See also Loop quantum gravity Problem of time Specious present Uncertainty principle References External links 1910 English translation of Time and Free Will Multiple formats at Internet Archive Metaphysical properties Concepts in the philosophy of mind Free will Henri Bergson Philosophy of time
Duration (philosophy)
[ "Physics" ]
1,345
[ "Spacetime", "Philosophy of time", "Physical quantities", "Time" ]
13,467,020
https://en.wikipedia.org/wiki/WiMAX%20MIMO
WiMAX MIMO refers to the use of multiple-input multiple-output (MIMO) communications technology on WiMAX, which is the technology brand name for the implementation of the standard IEEE 802.16. Background WiMAX WiMAX is the technology brand name for the implementation of the standard IEEE 802.16, which specifies the air interface at the PHY (physical layer) and at the MAC (medium access control layer). Aside from specifying the support of various channel bandwidths and adaptive modulation and coding, it also specifies support for MIMO antennas to provide good non-line-of-sight (NLOS) characteristics. See also: WiMAX Forum MIMO MIMO stands for Multiple Input and Multiple Output, and refers to the technology where there are multiple antennas at the base station and multiple antennas at the mobile device. Typical usage of multiple-antenna technology includes cellular phones with two antennas, laptops with two antennas (e.g. built in the left and right side of the screen), as well as CPE devices with multiple external antennas. The predominant cellular network implementation is to have multiple antennas at the base station and a single antenna on the mobile device. This minimizes the cost of the mobile radio. As the costs of radio frequency (RF) components in mobile devices go down, second antennas in mobile devices may become more common. Multiple mobile device antennas are currently used in Wi-Fi technology (e.g. IEEE 802.11n), where Wi-Fi-enabled cellular phones, laptops and other devices often have two or more antennas. MIMO Technology in WiMAX WiMAX implementations that use MIMO technology have become important. The use of MIMO technology improves reception and allows for better reach and rates of transmission. The implementation of MIMO also gives WiMAX a significant increase in spectral efficiency. MIMO auto-negotiation The 802.16-defined MIMO configuration is negotiated dynamically between each individual base station and mobile station. The 802.16 specification supports a mix of mobile stations with different MIMO capabilities. This helps to maximize the sector throughput by leveraging the different capabilities of a diverse set of vendor mobile stations. Space Time Code The 802.16 specification supports the multiple-input and single-output (MISO) technique of Transmit Diversity, which is commonly referred to as Space Time Code (STC); a worked sketch of this scheme appears at the end of this article. With this method, two or more antennas are employed at the transmitter and one antenna at the receiver. The use of multiple receive antennas (thus MIMO) can further improve the reception of STC-transmitted signals. With a Transmit Diversity rate = 1 (a.k.a. "Matrix A" in the 802.16 standard), different data bit constellations are transferred on two different antennas during the same symbol. The conjugate and/or inverse of the same two constellations are transferred again on the same antennas during the next symbol. The data transfer rate with STC remains the same as in the baseline case. The received signal is more robust with this method due to the transmission redundancy. This configuration delivers similar performance to the case of two receive antennas and one transmit antenna. Spatial Multiplexing The 802.16 specification also supports the MIMO technique of Spatial Multiplexing (SMX), also known as Transmit Diversity rate = 2 (a.k.a. "Matrix B" in the 802.16 standard).
Instead of transmitting the same bit over two antennas, this method transmits one data bit from the first antenna and another bit from the second antenna simultaneously, per symbol. As long as the receiver has more than one antenna and the signal is of sufficient quality, the receiver can separate the signals. This method involves added complexity and expense at both the transmitter and receiver. However, with two transmit antennas and two receive antennas, data can be transmitted twice as fast as in systems using Space Time Codes with only one receive antenna. WiMAX Network use of Spatial Multiplexing One specific use of Spatial Multiplexing is to apply it to users who have the best signal quality, so that less time is spent transmitting to them. Users whose signal quality is too low to allow the spatially multiplexed signals to be resolved stay with conventional transmission. This allows an operator to offer higher data rates to some users and/or to serve more users. The WiMAX specification's dynamic negotiation mechanism helps enable this use. WiMAX MISO/MIMO with four antennas The 802.16 specification also supports the use of four antennas. Three configurations are supported. WiMAX four antenna mode 1 With rate = 1 using four antennas, data is transmitted four times per symbol, each time conjugated and/or inverted. This does not change the data rate, but it does give the signal more robustness and avoids sudden increases in error rates. WiMAX four antenna mode 2 With rate = 2 using four antennas, the data rate is only doubled, but robustness increases, since the same data is transmitted twice, compared with only once when using two antennas. WiMAX four antenna Matrix C mode The third configuration that is only available using four antennas is Matrix C, where a different data bit is transmitted from each of the four antennas per symbol, giving four times the baseline data rate. Note: MRC (Maximum Ratio Combining) is vendor discretionary and improves rate and range. In WiMAX, MRC at the Base Station is sometimes also referred to as Receive Beamforming. See also: Space Time Coding and Spatial Multiplexing Other advanced MIMO techniques applied to WiMAX Uplink Collaborative MIMO A related technique is called Uplink Collaborative MIMO, where users transmit at the same time in the same frequency. This type of spatial multiplexing improves the sector throughput without requiring multiple transmit antennas at the mobile device. The common non-MIMO method for this in OFDMA is to schedule different mobile stations at different points in an OFDMA time-frequency map. Collaborative Spatial Multiplexing (Collaborative MIMO) is comparable to regular spatial multiplexing, where multiple data streams are transmitted from multiple antennas on the same device. WiMAX Uplink Collaborative MIMO In the case of WiMAX, Uplink Collaborative MIMO is spatial multiplexing with two different devices, each with one antenna. These transmitting devices are collaborating in the sense that both devices must be synchronized in time and frequency so that the intentional overlapping occurs under controlled circumstances. The two streams of data will then interfere with each other. As long as the signal quality is sufficiently good and the receiver at the base station has at least two antennas, the two data streams can be separated again. This technique is sometimes also termed Virtual Spatial Multiplexing. Other MIMO-related radio techniques applied to WiMAX Adaptive Antenna Steering (AAS), a.k.a.
Beamforming A MIMO-related technique that can be used with WiMAX is called AAS or Beamforming. Multiple antennas and multiple signals are employed, which shape the beam with the intent of improving transmission to the desired station. The result is reduced interference, because the signal going to the desired user is increased and the signal going to other users is reduced. Cyclic Delay Diversity Another MIMO-related technique that can be used in WiMAX systems, but which is outside the scope of the 802.16 specification, is known as Cyclic Delay Diversity. In this technique, one or more of the signals are delayed before transmission. Because the signals are coming out of two antennas, their receive spectra differ, as each spectrum is characterized by humps and notches due to multipath fading. At the receiver the signals combine, which improves reception because the joint reception results in shallower spectral humps and fewer spectral notches. The closer the signal can get to a flat channel at a certain power level, the higher the throughput that can be obtained. Radio Conformance Test of WiMAX MIMO The WiMAX Forum has a set of standardized conformance test procedures for PHY and MAC specification compliance called the Radio Conformance Test (RCT). Any technology aspect of a particular implementation of a radio interface must first undergo the RCT. Generally, any aspect of the IEEE 802.16 standard that does not have a test procedure in the RCT may be assumed to not yet be widely implemented. Silicon implementations of WiMAX MIMO Companies that make RFICs that support WiMAX MIMO include Intel, Beceem, NXP Semiconductors and PMC-Sierra. See also Advanced MIMO communications IEEE 802.16 Integrated Circuit Design MIMO OFDM WiMAX Wi-Fi References Louay M.A. Jalloul and Sam. P. Alex, "Evaluation Methodology and Performance of an IEEE 802.16e System", Presented to the IEEE Communications and Signal Processing Society, Orange County Joint Chapter (ComSig), December 7, 2006. Available at: http://chapters.comsoc.org/comsig/meet.html Alex, S.P.; Jalloul, L.M.A.; "Performance Evaluation of MIMO in IEEE802.16e/WiMAX," IEEE Journal of Selected Topics in Signal Processing, vol.2, no.2, pp. 181–190, April 2008 External links The WiMAX Forum IEEE website for 802.16 PMC-Sierra WiMAX Products WiMAX Evolution: Emerging Technologies and Applications, edited by M. Katz and F. Fitzek, 2009. Chapter 16, MIMO Technologies for WiMAX Systems: Present and Future, by C.-B. Chae, K. Huang, and T. Inoue GEDOMIS (GEneric hardware DemOnstrator for MIMO Systems): PHY-layer implementation of MIMO mobile WiMAX Network access WiMAX
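As a concrete illustration of the rate = 1 transmit-diversity scheme discussed under Space Time Code above, the sketch below implements Alamouti-style encoding and linear combining, the classic two-antenna realization of that idea. Treating it as the exact 802.16 "Matrix A" mapping, along with the flat quasi-static channel and the single receive antenna, are simplifying assumptions.

```python
# Rate-1 two-antenna transmit diversity (Alamouti-style space-time coding).
# Assumes the channel is constant over the two-symbol block and noiseless.
import numpy as np

rng = np.random.default_rng(1)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)   # two QPSK symbols
h1 = rng.normal() + 1j * rng.normal()                   # antenna-1 channel
h2 = rng.normal() + 1j * rng.normal()                   # antenna-2 channel

# Symbol time 1: antenna 1 sends s1, antenna 2 sends s2.
# Symbol time 2: antenna 1 sends -conj(s2), antenna 2 sends conj(s1).
r1 = h1 * s1 + h2 * s2
r2 = h1 * (-np.conj(s2)) + h2 * np.conj(s1)

# Linear combining recovers each symbol scaled by |h1|^2 + |h2|^2,
# which is the diversity gain; no rate is lost relative to one antenna.
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
print(np.allclose([s1_hat, s2_hat], [s1, s2]))          # True (noiseless)
```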
WiMAX MIMO
[ "Technology", "Engineering" ]
1,986
[ "Electronic engineering", "WiMAX", "Wireless networking", "Network access" ]
13,467,266
https://en.wikipedia.org/wiki/StretchText
StretchText is a hypertext feature that has not gained mass adoption in systems like the World Wide Web, but which gives the reader more control over the level of detail they read at. Authors write content at several levels of detail in a work. StretchText is similar to outlining; however, instead of drilling down through lists to greater detail, the current node is replaced with a more detailed node. This ‘stretching’ to increase the amount of writing, or contracting to decrease it, gives the feature its name. It is analogous to zooming in to get more detail. Ted Nelson coined the term in 1967. Conceptually, StretchText is similar to existing hypertext systems where a link provides a more descriptive or exhaustive explanation of something, but there is a key difference between a link and a piece of StretchText. A link completely replaces the current piece of hypertext with the destination, whereas StretchText expands or contracts the content in place. Thus, the existing hypertext serves as context. References “Stretchtext – hypertext note #8” by Ted Nelson (April 29, 1967). Part of Nelson’s Project Xanadu. (TIFF) Hypertext 1960s neologisms
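A toy sketch of the stretch/contract mechanic in Python; the class and the three-level sample text are invented for illustration. The point is that render() always returns text for the same position in the document, so stretching refines in place rather than navigating away as a link would.

```python
class StretchNode:
    """One passage of a document, stored at several levels of detail."""
    def __init__(self, levels):
        self.levels = levels          # levels[0] is tersest, levels[-1] fullest
        self.depth = 0

    def stretch(self):                # show more detail, in place
        self.depth = min(self.depth + 1, len(self.levels) - 1)

    def contract(self):               # show less detail, in place
        self.depth = max(self.depth - 1, 0)

    def render(self):
        return self.levels[self.depth]

node = StretchNode([
    "Nelson coined StretchText.",
    "Ted Nelson coined the term StretchText in 1967.",
    "Ted Nelson coined the term StretchText in 1967, in hypertext note #8 "
    "of Project Xanadu; stretching expands text in place, keeping context.",
])
print(node.render())   # tersest version
node.stretch()
print(node.render())   # same position in the document, more detail
```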
StretchText
[ "Technology" ]
239
[ "Computing stubs", "World Wide Web stubs" ]
3,056,454
https://en.wikipedia.org/wiki/Str%C3%B6mberg%20%28company%29
Stromberg Oy or Strömberg Ab was a company founded by Gottfrid Strömberg in 1889 in Helsinki, Finland, which manufactured electromechanical products such as generators, electric motors and small power plants. The company was initially founded as Gottfrid Strömbergin sähköyhtiö in Finnish, Gottfrid Strömbergs elföretag in Swedish. Strömberg was acquired by Swedish ASEA in 1987; then, in 1988, when ASEA merged with Brown, Boveri & Cie to form ABB, the company became a division of ABB and hence became known as ABB Strömberg. In the late 1990s, the company name was changed from ABB Strömberg Oy to ABB Oy, and the company became a more integral part of ABB. Strömberg expanded at an early stage and founded another factory branch in Vaasa, Finland, in an area which is now known as Strömberg Park. The Strömberg Park area was planned, and parts of the buildings there today were designed, by Alvar Aalto. The company has many inventions and firsts to its credit, particularly in the protection relay industry. It was one of the first to manufacture numerical relays in the 1980s, which are still in production and in high demand. The company manufactured the electrical components of numerous rail vehicles built by other Finnish companies, as well as for the Finnish State Railways' Soviet-built Sr1 class locomotives. External links ABB Heritage Brands - Stromberg ABB Strömberg Park Vaasa Parks - Strömberg Park Engineering companies of Finland Electrical engineering companies
Strömberg (company)
[ "Engineering" ]
337
[ "Electrical engineering companies", "Electrical engineering organizations", "Engineering companies" ]
3,056,687
https://en.wikipedia.org/wiki/Hydrosere
A hydrosere is a plant succession which occurs in an area of fresh water such as in oxbow lakes and kettle lakes. In time, an area of open freshwater will naturally dry out, ultimately becoming woodland. During this change, a range of different land types such as swamp and marsh will succeed each other. The succession from open water to climax woodland takes centuries or millennia. Some intermediate stages will last a shorter time than others. For example, swamp may change to marsh within a decade or less. How long it takes will depend largely on the amount of siltation occurring in the area of open water. Stages Hydrosere is the primary succession sequence which develops in aquatic environments such as lakes and ponds. It results in the conversion of a water body and its community into a land community. The early changes are allogenic, as inorganic particles such as sand and clay are washed from catchment areas and begin filling the basin of the water body. Later, the remains of dead plants also fill up these bodies and contribute to further changes in the environment. If a water body is large and very deep, strong wave action is at work, and therefore a noticeable change cannot easily be observed in these bodies. However, in smaller water bodies such as a pond the succession is easily recognizable. Different plant communities occupy different zones in a water body and exhibit concentric zonation. The edges of the water body are occupied by rooted species, submerged species are found in the littoral zone, and plankton and floating species occupy the open water zone. There is nevertheless still debate about whether dry woodland is always the final climax community, or whether a watery, bog community can also be the final, stable, climax community. Phytoplankton stage Phytoplankton such as cyanobacteria, green algae (Spirogyra, Oedogonium) and diatoms are the pioneer colonizers in the initial stage, starting from a water body such as a pond. Their spores are carried by air to the pond. The phytoplankton are followed by zooplankton. They settle down to the bottom of the pond after death, and decay into humus that mixes with silt and clay particles brought into the basin by run-off water and wave action and forms soil. As soil builds up, the pond becomes shallower and further environmental changes follow. Submerged stage As the water body becomes shallower, more submerged rooted species are able to become established due to increasing light penetration in the shallower water. This is suitable for the growth of rooted submerged species such as Myriophyllum, Vallisneria, Elodea, Hydrilla, and Ceratophyllum. These plants root themselves in mud. Once submerged species colonize, the successional changes are more rapid and are mainly autogenic as organic matter accumulates. Inorganic sediment is still entering the lake and is trapped more quickly by the net of plant roots and rhizomes growing on the pond floor. The pond becomes sufficiently shallow (2–5 ft) for floating species and less suitable for rooted submerged plants. Floating stage The floating plants are rooted in the mud, but some or all of their leaves float on the surface of the water. These include species like Nymphaea, Nelumbo and Potamogeton. Some free-floating species also become associated with rooted plants. The large and broad leaves of floating plants shade the water surface and conditions become unsuitable for the growth of submerged species, which start disappearing. The plants decay to form organic mud which makes the pond shallower yet (1–3 ft).
Reed swamp stage The pond is now invaded by emergent plants such as Phragmites (reed-grasses), Typha (cattail), and Zizania (wild rice) to form a reed-swamp (in North American usage, this habitat is called a marsh). These plants have creeping rhizomes which knit the mud together and produce large quantities of leaf litter. This litter is resistant to decay and reed peat builds up, accelerating the autogenic change. The surface of the pond is converted into water-saturated marshy land. Sedge-meadow stage Successive decreases in water level and changes in substratum help members of the Cyperaceae and Gramineae, such as Carex spp. and Juncus, to establish themselves. They form a mat of vegetation extending towards the centre of the pond. Their rhizomes knit the soil further. The above-water leaves transpire water to lower the water level further and add additional leaf litter to the soil. Eventually the sedge peat accumulates above the water level and the soil is no longer totally waterlogged. The habitat becomes suitable for the invasion of herbs (secondary species) such as Mentha, Caltha, Iris, and Galium, which grow luxuriantly and bring further changes to the environment. Mesic conditions develop and marshy vegetation begins to disappear. Woodland stage The soil now remains drier for most of the year and becomes suitable for the development of wet woodland. It is invaded by shrubs and trees such as Salix (willow), Alnus (alder), and Populus (poplar). These plants react upon the habitat by producing shade, lowering the water table still further by transpiration, building up the soil, and leading to the accumulation of humus with associated microorganisms. This type of wet woodland is also known as carr. Climax stage Finally a self-perpetuating climax community develops. It may be a forest if the climate is humid, grassland in a sub-humid environment, or a desert in arid and semi-arid conditions. A forest is characterized by the presence of all types of vegetation including herbs, shrubs, mosses, shade-loving plants and trees. Decomposers are frequent in climax vegetation. The overall changes taking place during the development of successional communities are the building up of substratum, shallowing of water, addition of humus and minerals, soil building and aeration of soil. As the water body fills in with sediment, the area of open water decreases and the vegetation types move inwards as the water becomes shallower. Many of the above-mentioned communities can be seen growing together in a water body. The center is occupied by floating and submerged plants with reeds nearer the shores, followed by sedges and rushes growing at the edges. Still further out are shrubs and trees occupying the dry land. Examples An example is a small kettle lake called Sweetmere, in Shropshire, UK. Sweetmere is one of many small kettle lakes which formed at the end of the last glacial period when temperatures began to increase. The ice began to melt and retreat approximately 10,000 years ago. As the climate slowly began to warm, this allowed algae, water lilies and floating aquatic plants to begin to colonise the lake. These, in essence, were the pioneer species. Once these began to die, they provided organic matter to the lake-bed sediment and therefore increased fertility and reduced depth. As a result, this allowed deeper-rooted species to develop, such as reed, bulrush and reedmace. At this point there is a growing floating raft of thick organic matter within the lake.
Because the bulrushes and reeds have relatively deep roots, this encouraged bioconstruction, which trapped more sediment, allowing sedges, willow and alder to become established. This process further decreased the water depth and raised the lakebed, thus making it drier. Drier conditions now meant that a wider range of species could inhabit the area. Birch and alder came into dominance. All of these species arrived through seed transfer by animals, birds, wind, or water. The water level was further reduced as a result of further bioconstruction, and also, due to increasing temperatures, there was increased evaporation from the lake. Underneath the birch canopy, terrestrial shrubs and grasses developed. This then increased the acidity, which increased the rates of nutrient exchange. The area has been artificially drained, and this allowed the oak and ash community to develop. This is the seral stage. The lake is now being managed by cutting down certain species in order to stop the whole lake from drying up and becoming dominated by the oak and ash woodland. Another example of a hydrosere is Loch a' Mhuilin, located on the Isle of Arran, Scotland. This small lake lies behind a ridge of material deposited towards the end of the last ice age. The lake exhibits characteristic features of a hydrosere, the succession from a fresh water surface with small pioneer plant species to a sub-climax vegetation of alder and willow. The climax vegetation of oak and beech woodland has not been achieved due to the impact of human activities of clearing grazing land, as well as grazing by red deer and rabbits. See also Psammosere Lithosere Seral community Xerosere References Ecological succession Limnology Lakes
Hydrosere
[ "Environmental_science" ]
1,813
[ "Lakes", "Hydrology" ]
3,056,960
https://en.wikipedia.org/wiki/1%2C3%2C5-Trioxane
1,3,5-Trioxane, sometimes also called trioxane or trioxin, is a chemical compound with molecular formula C3H6O3. It is a white, highly water-soluble solid with a chloroform-like odor. It is a stable cyclic trimer of formaldehyde, and one of the three trioxane isomers; its molecular backbone consists of a six-membered ring with three carbon atoms alternating with three oxygen atoms. Production Trioxane can be obtained by the acid-catalyzed cyclic trimerization of formaldehyde in concentrated aqueous solution. Uses Trioxane can be used interchangeably with formaldehyde and with paraformaldehyde; however, the cyclic structure is more stable, and it can require high temperatures in order to react. It is a precursor for the production of polyoxymethylene plastics, of which about one million tons per year are produced. Other applications exploit its tendency to release formaldehyde. As such it is used as a binder in textiles, wood products, etc. Trioxane is combined with hexamine and compressed into solid bars to make hexamine fuel tablets, used by the military and outdoorsmen as a cooking fuel. In the laboratory, trioxane is used as an anhydrous source of formaldehyde. See also Formaldehyde Paraformaldehyde Dioxane 1,3,5-Trioxanetrione References Acetals Trioxanes
1,3,5-Trioxane
[ "Chemistry" ]
304
[ "Acetals", "Functional groups" ]
3,056,987
https://en.wikipedia.org/wiki/1%2C2%2C4-Trioxane
1,2,4-Trioxane is one of the isomers of trioxane. It has the molecular formula C3H6O3 and consists of a six-membered ring with three carbon atoms and three oxygen atoms. The two adjacent oxygen atoms form a peroxide functional group and the other forms an ether functional group. It is like a cyclic acetal, but with one of the oxygen atoms in the acetal group replaced by a peroxide group. 1,2,4-Trioxane itself has not been isolated or characterized, but rather only studied computationally. However, it constitutes an important structural element of some more complex organic compounds. The natural compound artemisinin, isolated from the sweet wormwood plant (Artemisia annua), and some semi-synthetic derivatives are important antimalarial drugs containing the 1,2,4-trioxane ring. Completely synthetic analogs containing the 1,2,4-trioxane ring are important potential improvements over the naturally derived artemisinins. The peroxide group in the 1,2,4-trioxane core of artemisinin is cleaved in the presence of the malaria parasite, leading to reactive oxygen radicals that are damaging to the parasite. References Organic peroxides Trioxanes Hypothetical chemical compounds
1,2,4-Trioxane
[ "Chemistry" ]
260
[ "Theoretical chemistry stubs", "Hypotheses in chemistry", "Organic compounds", "Theoretical chemistry", "Hypothetical chemical compounds", "Organic peroxides" ]
3,057,073
https://en.wikipedia.org/wiki/John%20Leighfield
John Percival Leighfield (born 1938) is a British IT industry businessman and was previously chairman of RM plc from 1993 until 2011. John Leighfield is currently a director of Getmapping, a UK supplier of aerial photography, mapping products and data hosting solutions. He is also chairman of governors of the WMG Academy Trust (which operates two university technical colleges). John Leighfield was born in Oxford, England, and was a pupil at Magdalen College School. He then read Greats at Exeter College, Oxford. He has an MA from Oxford and honorary doctorates from the University of Central England in Birmingham (DUniv), De Montfort University (DTech), Wolverhampton University (DTech) and the University of Warwick (DLL). He is a Fellow of the RSA, RGS, CMI, IET, and BCS. Leighfield has pursued a career in IT, initially in the 1960s with the Ford Motor Company, where he did pioneering work on computer systems in finance and manufacturing, then with Plessey (where he was head of management services) and British Leyland (from the early 1970s). In 1987, he led an employee buyout of Istel Ltd, which he had established as a subsidiary of British Leyland. The company was taken over by AT&T in 1989. He was the executive chairman of AT&T Istel until April 1993. In November 1993, he joined RM (a British educational computing company) as a non-executive director and in October 1994 became the non-executive chairman. He has also been a non-executive director of a number of other companies, including Halifax plc and Synstar plc (of which he is also non-executive chairman). Leighfield was president of the British Computer Society (1993–4) and the Computing Services and Software Association (1995–6). He is president of the Institute for the Management of Information Systems (IMIS), a UK professional association. He has been a member of the council of the University of Warwick, chairman of the advisory board, and an honorary visiting professor at the Warwick Business School. He was pro-chancellor and chairman of the council at the University of Warwick from 2002 to 2011. In the Queen's Birthday Honours 1998, Leighfield was appointed a Commander of the Most Excellent Order of the British Empire. In 2006, Leighfield was awarded the Mountbatten Medal. In 2005, he was appointed as a non-executive director of Getmapping plc and Master of the Worshipful Company of Information Technologists. Leighfield lives in Oxford. He was formerly chairman of the governors of Magdalen College School. He chairs the advisory council of the Oxford Philomusica, the resident professional orchestra at the University of Oxford. In his spare time, he has an interest in maps, especially of Oxfordshire. He is married with children and grandchildren. On 15 January 2016 Leighfield gave an in-depth interview on his life and career to Alan Cane, former editor of the Financial Times, for Archives of IT.
References External links Synstar information BCS Strategic Panel Members Intellect UK information BCS Oxfordshire Branch photograph 1938 births Living people Businesspeople from Oxford Alumni of Exeter College, Oxford British businesspeople Businesspeople in computing People associated with the University of Warwick Fellows of the British Computer Society Fellows of the Royal Geographical Society Fellows of the Institution of Engineering and Technology Commanders of the Order of the British Empire Presidents of the British Computer Society Masters of the Worshipful Company of Information Technologists Masters of the Worshipful Company of Educators
John Leighfield
[ "Engineering" ]
723
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
3,057,518
https://en.wikipedia.org/wiki/Backward-wave%20oscillator
A backward wave oscillator (BWO), also called carcinotron or backward wave tube, is a vacuum tube that is used to generate microwaves up to the terahertz range. Belonging to the traveling-wave tube family, it is an oscillator with a wide electronic tuning range. An electron gun generates an electron beam that interacts with a slow-wave structure. It sustains the oscillations by propagating a traveling wave backwards against the beam. The generated electromagnetic wave power has its group velocity directed oppositely to the direction of motion of the electrons. The output power is coupled out near the electron gun. It has two main subtypes, the M-type (M-BWO), the most powerful, and the O-type (O-BWO). The output power of the O-type is typically in the range of 1 mW at 1000 GHz to 50 mW at 200 GHz. Carcinotrons are used as powerful and stable microwave sources. Due to the good-quality wavefront they produce (see below), they find use as illuminators in terahertz imaging. Backward wave oscillators were first demonstrated in 1951, the M-type by Bernard Epsztein and the O-type by Rudolf Kompfner. The M-type BWO is a voltage-controlled non-resonant extrapolation of magnetron interaction. Both types are tunable over a wide range of frequencies by varying the accelerating voltage. They can be swept through the band fast enough that they appear to radiate over the whole band at once, which makes them suitable for effective radar jamming, quickly tuning into the radar frequency. Carcinotrons allowed airborne radar jammers to be highly effective. However, frequency-agile radars can hop frequencies fast enough to force the jammer to use barrage jamming, diluting its output power over a wide band and significantly impairing its efficiency. Carcinotrons are used in research, civilian and military applications. For example, the Czechoslovak Kopac passive sensor and Ramona passive sensor air defense detection systems employed carcinotrons in their receiver systems. Basic concept All travelling-wave tubes operate in the same general fashion, and differ primarily in details of their construction. The concept is dependent on a steady stream of electrons from an electron gun that travel down the center of the tube. Surrounding the electron beam is some sort of radio frequency source signal; in the case of the traditional klystron this is a resonant cavity fed with an external signal, whereas in more modern devices there is a series of these cavities or a helical metal wire fed with the same signal. As the electrons travel down the tube, they interact with the RF signal. The electrons are attracted to areas with maximum positive bias and repelled from negative areas. This causes the electrons to bunch up as they are repelled or attracted along the length of the tube, a process known as velocity modulation. This process gives the electron beam the same general structure as the original signal: the density of the electrons in the beam matches the relative amplitude of the RF signal in the induction system. The electron current is a function of the details of the gun, and is generally orders of magnitude more powerful than the input RF signal. The result is a signal in the electron beam that is an amplified version of the original RF signal. As the electrons are moving, they induce a magnetic field in any nearby conductor. This allows the now-amplified signal to be extracted. In systems like the magnetron or klystron, this is accomplished with another resonant cavity.
In the helical designs, this process occurs along the entire length of the tube, reinforcing the original signal in the helical conductor. The "problem" with traditional designs is that they have relatively narrow bandwidths; designs based on resonators will work with signals within 10% or 20% of their design frequency, as this is physically built into the resonator design, while helix designs have a much wider bandwidth, perhaps 100% on either side of the design peak. BWO The BWO is built in a fashion similar to the helical TWT. However, instead of the RF signal propagating in the same (or similar) direction as the electron beam, the original signal travels at right angles to the beam. This is normally accomplished by drilling a hole through a rectangular waveguide and shooting the beam through the hole. The waveguide then goes through two right-angle turns, forming a C-shape and crossing the beam again. This basic pattern is repeated along the length of the tube so that the waveguide passes across the beam several times, forming a series of S-shapes. The original RF signal enters from what would be the far end of the TWT, where the energy would normally be extracted. The effect of the signal on the passing beam causes the same velocity-modulation effect, but because of the direction of the RF signal and the specifics of the waveguide, this modulation travels backward along the beam, instead of forward. This propagation, the slow-wave, reaches the next hole in the folded waveguide just as the same phase of the RF signal does. This causes amplification just as in the traditional TWT. In a traditional TWT, the speed of propagation of the signal in the induction system has to be similar to that of the electrons in the beam. This is required so that the phase of the signal lines up with the bunched electrons as they pass the inductors. This places limits on the selection of wavelengths the device can amplify, based on the physical construction of the wires or resonant chambers. This is not the case in the BWO, where the electrons pass the signal at right angles and their speed of propagation is independent of that of the input signal. The complex serpentine waveguide places strict limits on the bandwidth of the input signal, such that a standing wave is formed within the guide. But the velocity of the electrons is limited only by the allowable voltages applied to the electron gun, which can be easily and rapidly changed. Thus the BWO takes a single input frequency and produces a wide range of output frequencies.
In contrast, the carcinotron could sweep through all the possible frequencies so rapidly that it appeared to be a constant signal on all of the frequencies at once. Typical designs could generate hundreds or low thousands of watts, so at any one frequency there might be only a few watts of power received by the radar station. However, at long range the amount of energy from the original radar broadcast that reaches the aircraft is only a few watts at most, so the carcinotron can overpower them. The system was so powerful that a carcinotron operating on an aircraft was found to become effective even before it rose above the radar horizon. As it swept through the frequencies it would broadcast on the radar's operating frequency at what were effectively random times, filling the display with random dots any time the antenna was pointed near it, perhaps 3 degrees on either side of the target. There were so many dots that the display simply filled with white noise in that area. As it approached the station, the signal would also begin to appear in the antenna's sidelobes, creating further areas that were blanked out by noise. At close range, on the order of , the entire radar display would be completely filled with noise, rendering it useless. The concept was so powerful as a jammer that there were serious concerns that ground-based radars were obsolete. Airborne radars had the advantage that they could approach the aircraft carrying the jammer, and, eventually, the huge output from their transmitter would "burn through" the jamming. However, interceptors of the era relied on ground direction to get into range, using ground-based radars. This represented an enormous threat to air defense operations. For ground radars, the threat was eventually solved in two ways. The first was that radars were upgraded to operate on many different frequencies and switch among them randomly from pulse to pulse, a concept now known as frequency agility. Some of these frequencies were never used in peacetime and were highly secret, with the hope that they would not be known to the jammer in wartime. The carcinotron could still sweep through the entire band, but then it would be broadcasting on the same frequency as the radar only at random times, reducing its effectiveness. The other solution was to add passive receivers that triangulated on the carcinotron broadcasts, allowing the ground stations to produce accurate tracking information on the location of the jammer, which could then be attacked. The slow-wave structure The needed slow-wave structures must support a radio frequency (RF) electric field with a longitudinal component; the structures are periodic in the direction of the beam and behave like microwave filters with passbands and stopbands. Due to the periodicity of the geometry, the fields are identical from cell to cell except for a constant phase shift Φ. This phase shift, a purely real number in a passband of a lossless structure, varies with frequency. According to Floquet's theorem (see Floquet theory), the RF electric field E(z,t) can be described at an angular frequency ω by a sum of an infinity of "spatial or space harmonics" En, where the wave number or propagation constant kn of each harmonic is expressed as kn = (Φ + 2nπ) / p (−π < Φ < +π), where z is the direction of propagation, p the pitch of the circuit and n an integer.
Two examples of slow-wave circuit characteristics are shown in the ω-k or Brillouin diagram: on figure (a), the fundamental n = 0 is a forward space harmonic (the phase velocity vn = ω/kn has the same sign as the group velocity vg = dω/dkn); the synchronism condition for backward interaction is at point B, the intersection of the line of slope ve (the beam velocity) with the first backward (n = −1) space harmonic; on figure (b), the fundamental (n = 0) is backward. A periodic structure can support both forward and backward space harmonics, which are not modes of the field and cannot exist independently, even if a beam can be coupled to only one of them. As the magnitude of the space harmonics decreases rapidly when the value of n is large, the interaction can be significant only with the fundamental or the first space harmonic. M-type BWO The M-type carcinotron, or M-type backward wave oscillator, uses crossed static electric field E and magnetic field B, similar to the magnetron, for focussing an electron sheet beam drifting perpendicularly to E and B, along a slow-wave circuit, with a velocity E/B. Strong interaction occurs when the phase velocity of one space harmonic of the wave is equal to the electron velocity. Both the Ez and Ey components of the RF field are involved in the interaction (Ey parallel to the static E field). Electrons which are in a decelerating Ez electric field of the slow-wave lose the potential energy they have in the static electric field E and reach the circuit. The sole electrode is more negative than the cathode, in order to avoid collecting those electrons having gained energy while interacting with the slow-wave space harmonic. O-type BWO The O-type carcinotron, or O-type backward wave oscillator, uses an electron beam longitudinally focused by a magnetic field, and a slow-wave circuit interacting with the beam. A collector collects the beam at the end of the tube. O-BWO spectral purity and noise The BWO is a voltage-tunable oscillator, whose voltage tuning rate is directly related to the propagation characteristics of the circuit. The oscillation starts at a frequency where the wave propagating on the circuit is synchronous with the slow space-charge wave of the beam. Inherently the BWO is more sensitive than other oscillators to external fluctuations. Nevertheless, its ability to be phase- or frequency-locked has been demonstrated, leading to successful operation as a heterodyne local oscillator. Frequency stability The frequency–voltage sensitivity is given by the relation Δf/f = (1/2) [1/(1 + |vΦ/vg|)] (ΔV0/V0). The oscillation frequency is also sensitive to the beam current (called "frequency pushing"). The current fluctuations at low frequencies are mainly due to the anode voltage supply, and the sensitivity to the anode voltage is given by Δf/f = (3/4) [(ωq/ω)/(1 + |vΦ/vg|)] (ΔVa/Va). This sensitivity, as compared to the cathode voltage sensitivity, is reduced by the ratio ωq/ω, where ωq is the angular plasma frequency; this ratio is of the order of a few times 10^−2. Noise Measurements on submillimeter-wave BWOs (de Graauw et al., 1978) have shown that a signal-to-noise ratio of 120 dB per MHz could be expected in this wavelength range. In heterodyne detection using a BWO as a local oscillator, this figure corresponds to a noise temperature added by the oscillator of only 1000–3000 K. Notes References Johnson, H. R. (1955). Backward-wave oscillators. Proceedings of the IRE, 43(6), 684–697. Ramo S., Whinnery J. R., Van Duzer T.
- Fields and Waves in Communication Electronics (3rd ed. 1994) John Wiley & Sons Kantorowicz G., Palluel P. - Backward Wave Oscillators, in Infrared and Millimeter Waves, Vol 1, Chap. 4, K. Button ed., Academic Press 1979 de Graauw Th., Anderegg M., Fitton B., Bonnefoy R., Gustincic J. J. - 3rd Int. Conf. Submm. Waves, Guildford, University of Surrey (1978) Convert G., Yeou T., in Millimeter and Submillimeter Waves, Chap. 4, (1964) Iliffe Books, London External links Virtual Valve Museum Thomson CSF CV6124 (Wayback Machine) Microwave technology Terahertz technology Vacuum tubes
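As an illustration of the tuning-sensitivity relation quoted in the Frequency stability subsection above, here is a minimal Python sketch. The ratio |vΦ/vg| and the voltage ripple used are invented, illustrative numbers, not values from the article.

```python
# Fractional frequency shift of an O-BWO for a fractional change in
# cathode voltage, per the relation quoted above:
#     df/f = (1/2) * [1 / (1 + |v_phi/v_g|)] * (dV0/V0)

def relative_freq_shift(dv0_over_v0: float, vphi_over_vg: float) -> float:
    """Fractional frequency shift for a fractional cathode-voltage change."""
    return 0.5 * dv0_over_v0 / (1.0 + abs(vphi_over_vg))

# Assumed, illustrative numbers: comparable phase- and group-velocity
# magnitudes on the circuit, and a 0.1% ripple on the cathode supply.
df_over_f = relative_freq_shift(1e-3, 1.0)
print(f"df/f = {df_over_f:.2e}")                               # 2.50e-04
print(f"shift at 300 GHz: {df_over_f * 300e9 / 1e6:.1f} MHz")  # 75.0 MHz
```

The same structure applies to the anode-voltage ("frequency pushing") relation, with the prefactor 3/4 and the additional reduction factor ωq/ω.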
Backward-wave oscillator
[ "Physics" ]
3,173
[ "Spectrum (physical sciences)", "Terahertz technology", "Vacuum tubes", "Electromagnetic spectrum", "Vacuum", "Matter" ]
3,057,530
https://en.wikipedia.org/wiki/Mired
Contracted from the term micro reciprocal degree, the mired is a unit of measurement used to express color temperature. Values in mireds are calculated by the formula M = 1,000,000 K / T, where T is the colour temperature in units of kelvins and M denotes the resulting dimensionless mired number; the constant in the numerator is one million kelvins. The SI term for this unit is the reciprocal megakelvin (MK^−1), shortened to mirek, but this term has not gained traction. For convenience, decamireds are sometimes used, with each decamired equaling ten mireds. The use of the term mired dates back to Irwin G. Priest's observation in 1932 that the just-noticeable difference between two illuminants is based on the difference of the reciprocals of their temperatures, rather than the difference in the temperatures themselves. Examples A blue sky, which has a color temperature T of about , has a mired value of M = 40 mireds, while a standard electronic photography flash, having a color temperature T of 5000 K, has a mired value of M = 200 mireds. Applications Photographic filter and gel In photography, mireds are used to indicate the color temperature shift provided by a filter or gel for a given film and light source. For instance, to use daylight film (5700 K) to take a photograph under a tungsten light source (3200 K) without introducing a color cast, one would need a corrective filter or gel providing a mired shift of about −137 mireds. This corresponds to a color temperature blue (CTB) filter. Color gels with negative mired values appear green or blue, while those with positive values appear amber or red. CCT calculation A number of mathematical methods, including Robertson's, calculate the correlated color temperature of a light source from its chromaticity values. These methods exploit the relatively even spacing of the mired scale in their internal calculations. Color description Apple's HomeKit uses the mired unit for specifying color temperature. References Units of measurement Non-SI metric units Color
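The conversion and the filter-shift arithmetic above can be made concrete with a short Python sketch (illustrative, not from the article; the 25,000 K blue-sky figure is inferred from the stated M = 40):

```python
# Mired conversion: M = 1_000_000 / T, with T in kelvins.

def mired(t_kelvin: float) -> float:
    """Convert a colour temperature in kelvins to mireds."""
    return 1_000_000 / t_kelvin

print(mired(25_000))  # blue sky (inferred T): 40.0 mireds
print(mired(5_000))   # electronic flash:      200.0 mireds

# Mired shift needed to use daylight film (5700 K) under tungsten (3200 K):
shift = mired(5700) - mired(3200)
print(round(shift))   # about -137: negative, hence a blue (CTB) filter
```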
Mired
[ "Mathematics" ]
418
[ "Non-SI metric units", "Quantity", "Units of measurement" ]
3,057,532
https://en.wikipedia.org/wiki/Complex%20measure
In mathematics, specifically measure theory, a complex measure generalizes the concept of measure by letting it have complex values. In other words, one allows for sets whose size (length, area, volume) is a complex number. Definition Formally, a complex measure μ on a measurable space (X, Σ) is a complex-valued function defined on Σ that is sigma-additive. In other words, for any sequence (An)n of disjoint sets belonging to Σ, one has μ(∪n An) = Σn μ(An). As this holds for any permutation (bijection) of the indices, it follows that the series converges unconditionally (hence, since the complex plane is finite-dimensional, converges absolutely). Integration with respect to a complex measure One can define the integral of a complex-valued measurable function with respect to a complex measure in the same way as the Lebesgue integral of a real-valued measurable function with respect to a non-negative measure, by approximating a measurable function with simple functions. Just as in the case of ordinary integration, this more general integral might fail to exist, or its value might be infinite (the complex infinity). Another approach is to not develop a theory of integration from scratch, but rather use the already available concept of integral of a real-valued function with respect to a non-negative measure. To that end, it is a quick check that the real and imaginary parts μ1 and μ2 of a complex measure μ are finite-valued signed measures. One can apply the Hahn–Jordan decomposition to these measures to split them as μ1 = μ1+ − μ1− and μ2 = μ2+ − μ2−, where μ1+, μ1−, μ2+, μ2− are finite-valued non-negative measures (which are unique in some sense). Then, for a measurable function f which is real-valued for the moment, one can define ∫ f dμ = (∫ f dμ1+ − ∫ f dμ1−) + i (∫ f dμ2+ − ∫ f dμ2−) as long as the expression on the right-hand side is defined, that is, all four integrals exist and when adding them up one does not encounter the indeterminate ∞−∞. Given now a complex-valued measurable function, one can integrate its real and imaginary components separately as illustrated above and define, as expected, ∫ f dμ = ∫ Re(f) dμ + i ∫ Im(f) dμ. Variation of a complex measure and polar decomposition For a complex measure μ, one defines its variation, or absolute value, |μ| by the formula |μ|(A) = sup Σn |μ(An)|, where A is in Σ and the supremum runs over all sequences of disjoint sets (An)n whose union is A. Taking only finite partitions of the set A into measurable subsets, one obtains an equivalent definition. It turns out that |μ| is a non-negative finite measure. In the same way as a complex number can be represented in a polar form, one has a polar decomposition for a complex measure: there exists a measurable function θ with real values such that dμ = e^(iθ) d|μ|, meaning ∫ f dμ = ∫ f e^(iθ) d|μ| for any absolutely integrable measurable function f, i.e., f satisfying ∫ |f| d|μ| < ∞. One can use the Radon–Nikodym theorem to prove that the variation is a measure and the existence of the polar decomposition. The space of complex measures The sum of two complex measures is a complex measure, as is the product of a complex measure by a complex number. That is to say, the set of all complex measures on a measure space (X, Σ) forms a vector space over the complex numbers. Moreover, the total variation defined as ||μ|| = |μ|(X) is a norm, with respect to which the space of complex measures is a Banach space. See also Riesz representation theorem Signed measure Vector measure References Further reading External links Complex measure on MathWorld Measures (measure theory) Complex numbers
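The article invokes the Radon–Nikodym theorem for the polar decomposition without spelling out the step; the following LaTeX sketch outlines the standard textbook argument (the notation h for the derivative is ours):

```latex
% Since |mu(A)| <= |mu|(A) for every A in Sigma, mu is absolutely
% continuous with respect to its variation: mu << |mu|.
% The Radon-Nikodym theorem then yields a derivative h in L^1(|mu|):
\mu(A) = \int_A h \, \mathrm{d}|\mu| \qquad \text{for all } A \in \Sigma .
% An averaging argument shows that |h| = 1 holds |mu|-almost everywhere,
% so h = e^{i\theta} for some real-valued measurable \theta, giving
\mathrm{d}\mu = e^{i\theta} \, \mathrm{d}|\mu| .
```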
Complex measure
[ "Physics", "Mathematics" ]
712
[ "Physical quantities", "Measures (measure theory)", "Quantity", "Mathematical objects", "Size", "Complex numbers", "Numbers" ]
3,057,557
https://en.wikipedia.org/wiki/Rubble%20pile
In astronomy, a rubble pile is a celestial body that consists of numerous pieces of debris that have coalesced under the influence of gravity. Rubble piles have low density because there are large cavities between the various chunks that make them up. The asteroids Bennu and Ryugu have measured bulk densities which suggest that their internal structure is a rubble pile. Many comets and most smaller minor planets (<10 km in diameter) are thought to be composed of coalesced rubble. Minor planets Most smaller asteroids are thought to be rubble piles. Rubble piles form when an asteroid or moon (which may originally be monolithic) is smashed to pieces by an impact, and the shattered pieces subsequently fall back together, primarily because of self-gravitation. This coalescing usually takes from several hours to weeks. When a rubble-pile asteroid passes a much more massive object, tidal forces change its shape. Scientists first suspected that asteroids are often rubble piles when asteroid densities were determined. Many of the calculated densities were significantly less than those of meteorites, which in some cases had been determined to be pieces of asteroids. Many asteroids with low densities are thought to be rubble piles, for example 253 Mathilde. The mass of Mathilde, as determined by the NEAR Shoemaker mission, is far too low for the volume observed, considering the surface is rock. Even ice with a thin crust of rock would not provide a suitable density. Also, the large impact craters on Mathilde would have shattered a rigid body. However, the first unambiguous rubble pile to be photographed was 25143 Itokawa, which has no obvious impact craters and is thus almost certainly a coalescence of shattered fragments. The asteroid 433 Eros, the primary destination of NEAR Shoemaker, was determined to be riven with cracks but otherwise solid. Other asteroids, possibly including Itokawa, have been found to be contact binaries, two major bodies touching, with or without rubble filling the boundary. Large interior voids are possible because of the very low gravity of most asteroids. Despite a fine regolith on the outside (at least to the resolution that has been seen with spacecraft), the asteroid's gravity is so weak that friction between fragments dominates and prevents small pieces from falling inwards and filling the voids. All the largest asteroids (1 Ceres, 2 Pallas, 4 Vesta, 10 Hygiea, 704 Interamnia) are solid objects without any macroscopic internal porosity. This may be because they have been large enough to withstand all impacts, and have never been shattered. Alternatively, Ceres and a few others of the largest asteroids may be massive enough that, even if they were shattered but not dispersed, their gravity would collapse most voids upon recoalescing. Vesta, at least, has withstood one major impact since its formation while remaining intact, and the resultant crater shows signs of internal structure from differentiation, which assures that it is not a rubble pile. This serves as evidence that size offers protection from shattering into rubble. Comets Observational evidence suggests that the cometary nucleus may not be a well-consolidated single body, but may instead be a loosely bound agglomeration of smaller fragments, weakly bonded and subject to occasional or even frequent disruptive events, although the larger cometary fragments are expected to be primordial condensations rather than collisionally derived debris as in the asteroid case.
However, in situ observations by the Rosetta mission indicate that it may be more complex than that. Moons The moon Phobos, the larger of the two natural satellites of the planet Mars, is also thought to be a rubble pile bound together by a thin regolith crust about thick. A rubble-pile morphology may point towards an in situ origin of the Martian moons. Based on this, it has been proposed that Phobos and Deimos may originate from a single destroyed moon. Alternatively, Phobos may have undergone repeated 'recycling,' having been torn apart into a ring before reaccreting and migrating outwards. See also Circumplanetary disk Comet nucleus List of slow rotators (minor planets) References External links Close-up images of Itokawa, a rubble pile asteroid Hyper-Velocity Impacts on Rubble Pile Asteroids pdf online @ kent.ac.uk Astrophysics Bodies of the Solar System
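The density argument described above can be made concrete with a short Python sketch. The numbers used are approximate literature values chosen for illustration, not figures from this article:

```python
# Macro-porosity estimate: if a body's bulk density is far below the
# grain density of its likely meteorite analog, the remainder is void
# space -- the signature of a rubble pile.

def macroporosity(bulk_density: float, grain_density: float) -> float:
    """Fraction of the body's volume that is empty space."""
    return 1.0 - bulk_density / grain_density

# Approximate, illustrative values (g/cm^3):
mathilde_bulk = 1.3     # 253 Mathilde bulk density (NEAR Shoemaker era estimate)
chondrite_grain = 3.3   # typical chondritic grain density

p = macroporosity(mathilde_bulk, chondrite_grain)
print(f"implied porosity: {p:.0%}")  # roughly 60% void space
```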
Rubble pile
[ "Physics", "Astronomy" ]
885
[ "Astronomical sub-disciplines", "Bodies of the Solar System", "Astrophysics", "Astronomical objects", "Solar System" ]
3,057,614
https://en.wikipedia.org/wiki/P-form%20electrodynamics
In theoretical physics, p-form electrodynamics is a generalization of Maxwell's theory of electromagnetism. Ordinary (viz. one-form) Abelian electrodynamics We have a one-form A, a gauge symmetry A → A + dα, where α is any arbitrary fixed 0-form and d is the exterior derivative, and a gauge-invariant vector current J with density 1 satisfying the continuity equation d⋆J = 0, where ⋆ is the Hodge star operator. Alternatively, we may express the current as a closed form, but we do not consider that case here. F is a gauge-invariant 2-form defined as the exterior derivative F = dA. F satisfies the equation of motion d⋆F = ⋆J (this equation obviously implies the continuity equation). This can be derived from an action of the form sketched below, where M is the spacetime manifold. p-form Abelian electrodynamics We have a p-form A, a gauge symmetry A → A + dα, where α is any arbitrary fixed (p − 1)-form and d is the exterior derivative, and a gauge-invariant p-vector J with density 1 satisfying the continuity equation d⋆J = 0, where ⋆ is the Hodge star operator. Alternatively, we may express the current as a closed form. F is a gauge-invariant (p + 1)-form defined as the exterior derivative F = dA. F satisfies the equation of motion d⋆F = ⋆J (this equation obviously implies the continuity equation). This can be derived from an action of the form sketched below, where M is the spacetime manifold. Other sign conventions do exist. The Kalb–Ramond field is an example with p = 2 in string theory; the Ramond–Ramond fields whose charged sources are D-branes are examples for all values of p. In eleven-dimensional supergravity or M-theory, we have a 3-form electrodynamics. Non-abelian generalization Just as we have non-abelian generalizations of electrodynamics, leading to Yang–Mills theories, we also have nonabelian generalizations of p-form electrodynamics. They typically require the use of gerbes. References Henneaux; Teitelboim (1986), "p-Form electrodynamics", Foundations of Physics 16 (7): 593-617, Navarro; Sancho (2012), "Energy and electromagnetism of a differential p-form", J. Math. Phys. 53, 102501 (2012) Electrodynamics String theory
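Since the displayed formulas in this article were lost in extraction, here is a hedged LaTeX reconstruction of the standard p-form Maxwell system in one common convention (as the article notes, other sign conventions exist; the normalization here is ours):

```latex
% Gauge symmetry, field strength, and equation of motion for a p-form A:
A \;\mapsto\; A + d\alpha , \qquad F = dA , \qquad d{\star}F = {\star}J ,
% the last of which implies the continuity equation d(star J) = 0
% automatically, because d^2 = 0.

% One common form of the action on a spacetime manifold M
% (signs and normalizations vary between references):
S = \int_M \left( -\tfrac{1}{2}\, F \wedge {\star}F \;+\; A \wedge {\star}J \right) .
```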
P-form electrodynamics
[ "Astronomy", "Mathematics" ]
459
[ "String theory", "Astronomical hypotheses", "Electrodynamics", "Dynamical systems" ]
3,057,806
https://en.wikipedia.org/wiki/Observations%20and%20explorations%20of%20Venus
Observations of the planet Venus include those made in antiquity, telescopic observations, and observations from visiting spacecraft. Spacecraft have performed various flybys, orbits, and landings on Venus, including balloon probes that floated in the atmosphere of Venus. Study of the planet is aided by its relative proximity to the Earth compared to other planets, but the surface of Venus is obscured by an atmosphere opaque to visible light. Historical observations and impact As one of the brightest objects in the sky, Venus has been known since prehistoric times, and as such, many ancient cultures recorded observations of the planet. A cylinder seal from the Jemdet Nasr period indicates that the ancient Sumerians already knew that the morning and evening stars were the same celestial object. The Sumerians named the planet after the goddess Inanna, who was known as Ishtar by the later Akkadians and Babylonians. She had a dual role as a goddess of both love and war, thereby representing a deity that presided over birth and death. One of the oldest surviving astronomical documents, preserved in the Babylonian library of Ashurbanipal, is a 21-year record of the appearances of Venus dating to around 1600 BC. Because the movements of Venus appear to be discontinuous (it disappears, owing to its proximity to the Sun, for many days at a time, and then reappears on the other horizon), some cultures did not immediately recognize Venus as a single entity; instead, they assumed it to be two separate stars on each horizon: the morning star and the evening star. The Ancient Egyptians, for example, believed Venus to be two separate bodies and knew the morning star as Tioumoutiri and the evening star as Ouaiti. The Ancient Greeks called the morning star Φωσφόρος (Latinized Phosphorus), the "Bringer of Light", or Ἑωσφόρος (Latinized Eosphorus), the "Bringer of Dawn". The evening star they called Ἕσπερος (Latinized Hesperus), the "star of the evening". By Hellenistic times, the ancient Greeks identified it as a single planet, which they named after their goddess of love, Aphrodite (Ἀφροδίτη), Phoenician Astarte, a planetary name that is retained in modern Greek. Hesperos was translated into Latin as Vesper and Phosphoros as Lucifer ("Light Bearer"). Venus was considered the most important celestial body observed by the Maya, who called it Chac ek or Noh Ek', "the Great Star", and Xux Ek', "the Wasp Star". The Maya based their religious calendar partially upon the movements of Venus and monitored its movements closely, including in the daytime. The positions of Venus and other planets were thought to influence life on Earth, so the Maya and other ancient Mesoamerican cultures timed wars and other important events based on their observations. In the Dresden Codex, the Maya included an almanac showing Venus's full cycle, in five sets of 584 days each (approximately eight years), after which the patterns repeated (since Venus has a synodic period of 583.92 days). The Maya were aware of this synodic period, and could compute it to within a hundredth part of a day. Phases Because its orbit takes it between the Earth and the Sun, Venus as seen from Earth exhibits visible phases in much the same manner as the Earth's Moon. Galileo Galilei observed the phases of Venus in December 1610, an observation which supported Copernicus's then-contentious heliocentric description of the Solar System.
He also noted changes in Venus's apparent diameter when it was in different phases, suggesting that it was farther from Earth when it was full and nearer when it was a crescent. This observation strongly supported the heliocentric model. Venus (and also Mercury) is not visible from Earth when it is full, since at that time it is at superior conjunction, rising and setting concomitantly with the Sun and hence lost in the Sun's glare. Venus is brightest when approximately 25% of its disk is illuminated; this typically occurs 37 days both before (in the evening sky) and after (in the morning sky) its inferior conjunction. Its greatest elongations occur approximately 70 days before and after inferior conjunction, at which time it is half full; between these two intervals Venus is actually visible in broad daylight, if the observer knows specifically where to look for it. The planet's period of retrograde motion is 20 days on either side of the inferior conjunction. In fact, through a telescope Venus at greatest elongation appears less than half full, owing to the Schröter effect, first noticed in 1793 and shown in 1996 to be caused by its thick atmosphere. On rare occasions, Venus can actually be seen in both the morning (before sunrise) and evening (after sunset) on the same day. This scenario arises when Venus is at its maximum separation from the ecliptic and concomitantly at inferior conjunction; then one hemisphere (Northern or Southern) will be able to see it at both times. This opportunity presented itself most recently for Northern Hemisphere observers within a few days on either side of March 29, 2001, and for those in the Southern Hemisphere, on and around August 19, 1999. These events repeat every eight years, in accordance with the planet's synodic cycle. Ground-based observations Transits of Venus directly between the Earth and the Sun's visible disc are rare astronomical events. The first such transit to be predicted and observed was the Transit of Venus, 1639, seen and recorded by English astronomers Jeremiah Horrocks and William Crabtree. The observation by Mikhail Lomonosov of the transit of 1761 provided the first evidence that Venus had an atmosphere, and the 19th-century observations of parallax during Venus transits allowed the distance between the Earth and Sun to be accurately calculated for the first time. Transits can only occur either in early June or early December, these being the points at which Venus crosses the ecliptic (the orbital plane of the Earth), and occur in pairs at eight-year intervals, with each such pair more than a century apart. The most recent pair of transits of Venus occurred in 2004 and 2012, while the prior pair occurred in 1874 and 1882. In the 19th century, many observers stated that Venus had a period of rotation of roughly 24 hours. Italian astronomer Giovanni Schiaparelli was the first to predict a significantly slower rotation, proposing that Venus was tidally locked with the Sun (as he had also proposed for Mercury). While not actually true for either body, this was still a reasonably accurate estimate. The near-resonance between its rotation and its closest approach to Earth helped to create this impression, as Venus always seemed to be facing the same direction when it was in the best location for observations to be made.
The rotation rate of Venus was first measured during the 1961 conjunction, observed by radar from a 26 m antenna at Goldstone, California, the Jodrell Bank Radio Observatory in the UK, and the Soviet deep space facility in Yevpatoria, Crimea. Accuracy was refined at each subsequent conjunction, primarily from measurements made from Goldstone and Eupatoria. The fact that rotation was retrograde was not confirmed until 1964. Before radio observations in the 1960s, many believed that Venus contained a lush, Earth-like environment. This was due to the planet's size and orbital radius, which suggested a fairly Earth-like situation as well as to the thick layer of clouds which prevented the surface from being seen. Among the speculations on Venus were that it had a jungle-like environment or that it had oceans of either petroleum or carbonated water. However, microwave observations by C. Mayer et al. indicated a high-temperature source (600 K). Strangely, millimetre-band observations made by A. D. Kuzmin indicated much lower temperatures. Two competing theories explained the unusual radio spectrum, one suggesting the high temperatures originated in the ionosphere, and another suggesting a hot planetary surface. In September 2020, a team at Cardiff University announced that observations of Venus using the James Clerk Maxwell Telescope and Atacama Large Millimeter Array in 2017 and 2019 indicated that the Venusian atmosphere contained phosphine (PH3) in concentrations 10,000 times higher than those that could be ascribed to any known non-biological source on Venus. The phosphine was detected at heights of at least above the surface of Venus, and was detected primarily at mid-latitudes with none detected at the poles of Venus. This could have indicated the potential presence of biological organisms on Venus, however, this measurement was later shown to be in error. Terrestrial radar mapping After the Moon, Venus was the second object in the Solar System to be explored by radar from the Earth. The first studies were carried out in 1961 at NASA's Goldstone Observatory, part of the Deep Space Network. At successive inferior conjunctions, Venus was observed both by Goldstone and the National Astronomy and Ionosphere Center in Arecibo. The studies carried out were similar to the earlier measurement of transits of the meridian, which had revealed in 1963 that the rotation of Venus was retrograde (it rotates in the opposite direction to that in which it orbits the Sun). The radar observations also allowed astronomers to determine that the rotation period of Venus was 243.1 days, and that its axis of rotation was almost perpendicular to its orbital plane. It was also established that the radius of the planet was , some less than the best previous figure obtained with terrestrial telescopes. Interest in the geological characteristics of Venus was stimulated by the refinement of imaging techniques between 1970 and 1985. Early radar observations suggested merely that the surface of Venus was more compacted than the dusty surface of the Moon. The first radar images taken from the Earth showed very bright (radar-reflective) highlands christened Alpha Regio, Beta Regio, and Maxwell Montes; improvements in radar techniques later achieved an image resolution of 1–2 kilometres. Observation by spacecraft There have been numerous uncrewed missions to Venus. Ten Soviet Venera probes achieved a soft landing on the surface, with up to 110 minutes of communication from the surface, all without return. 
Launch windows occur every 19 months. Early flybys On February 12, 1961, the Soviet spacecraft Venera 1 was the first flyby probe launched to another planet. An overheated orientation sensor caused it to malfunction, losing contact with Earth before its closest approach to Venus of 100,000 km. However, the probe was the first to combine all the necessary features of an interplanetary spacecraft: solar panels, a parabolic telemetry antenna, 3-axis stabilization, a course-correction engine, and the first launch from parking orbit. The first successful flyby Venus probe was the American Mariner 2 spacecraft, which flew past Venus in 1962, coming within 35,000 km. A modified Ranger Moon probe, it established that Venus has practically no intrinsic magnetic field and measured the temperature of the planet's atmosphere to be approximately . The Soviet Union launched the Zond 1 probe to Venus in 1964, but it malfunctioned sometime after its May 16 telemetry session. During another American flyby in 1967, Mariner 5 measured the strength of Venus's magnetic field. In 1974, Mariner 10 swung by Venus on its way to Mercury and took ultraviolet photographs of the clouds, revealing the extraordinarily high wind speeds in the Venusian atmosphere. Mariner 10 provided the best images of Venus taken up to that time; the series of images clearly demonstrated the high speeds of the planet's atmosphere, first seen in the Doppler-effect velocity measurements of Venera 4 through Venera 8. Early landings On March 1, 1966, the Venera 3 Soviet space probe crash-landed on Venus, becoming the first spacecraft to reach the surface of another planet. Its sister craft Venera 2 had failed due to overheating shortly before completing its flyby mission. The descent capsule of Venera 4 entered the atmosphere of Venus on October 18, 1967, making it the first probe to return direct measurements from another planet's atmosphere. The capsule measured temperature, pressure and density, and performed 11 automatic chemical experiments to analyze the atmosphere. It discovered that the atmosphere of Venus was 95% carbon dioxide (CO2), and in combination with radio occultation data from the Mariner 5 probe, showed that surface pressures were far greater than expected (75 to 100 atmospheres). These results were verified and refined by Venera 5 and Venera 6 in May 1969. But thus far, none of these missions had reached the surface while still transmitting. Venera 4's battery ran out while the probe was still slowly floating through the massive atmosphere, and Venera 5 and 6 were crushed by high pressure 18 km (60,000 ft) above the surface. The first successful landing on Venus was by Venera 7 on December 15, 1970 — the first successful soft (non-crash) landing on another planet, as well as the first successful transmission of data from another planet's surface to Earth. Venera 7 remained in contact with Earth for 23 minutes, relaying surface temperatures of , and an atmospheric pressure of 92 bar. Venera 8 landed on July 22, 1972. In addition to pressure and temperature profiles, a photometer showed that the clouds of Venus formed a layer ending over above the surface. A gamma ray spectrometer analyzed the chemical composition of the crust. Venera 8 measured the light level as being suitable for surface photography, finding it to be similar to the amount of light on Earth on an overcast day with roughly 1 km visibility. Lander/orbiter pairs Venera 9 and 10 The Soviet probe Venera 9 entered orbit on October 22, 1975, becoming the first artificial satellite of Venus.
A battery of cameras and spectrometers returned information about the planet's clouds, ionosphere and magnetosphere, as well as performing bi-static radar measurements of the surface. The descent vehicle separated from Venera 9 and landed, taking the first pictures of the surface and analyzing the crust with a gamma ray spectrometer and a densitometer. During descent, pressure, temperature and photometric measurements were made, as well as backscattering and multi-angle scattering (nephelometer) measurements of cloud density. It was discovered that the clouds of Venus are formed in three distinct layers. On October 25, Venera 10 arrived and carried out a similar program of study. Pioneer Venus In 1978, NASA sent two Pioneer spacecraft to Venus. The Pioneer mission consisted of two components, launched separately: an orbiter and a multiprobe. The Pioneer Venus Multiprobe carried one large and three small atmospheric probes. The large probe was released on November 16, 1978, and the three small probes on November 20. All four probes entered the Venusian atmosphere on December 9, followed by the delivery vehicle. Although not expected to survive the descent through the atmosphere, one probe continued to operate for 45 minutes after reaching the surface. The Pioneer Venus Orbiter was inserted into an elliptical orbit around Venus on December 4, 1978. It carried 17 experiments and operated until the fuel used to maintain its orbit was exhausted and atmospheric entry destroyed the spacecraft in August 1992. Further Soviet missions Also in 1978, Venera 11 and Venera 12 flew past Venus, dropping descent vehicles on December 21 and December 25 respectively. The landers carried colour cameras and a soil drill and analyzer, which unfortunately malfunctioned. Each lander made measurements with a nephelometer, mass spectrometer, gas chromatograph, and a cloud-droplet chemical analyzer using X-ray fluorescence that unexpectedly discovered a large proportion of chlorine in the clouds, in addition to sulfur. Strong lightning activity was also detected. In 1982, the Soviet Venera 13 sent the first colour image of Venus's surface, revealing an orange-brown flat bedrock surface covered with loose regolith and small flat thin angular rocks, and analysed the X-ray fluorescence of an excavated soil sample. The probe operated for a record 127 minutes on the planet's hostile surface. Also in 1982, the Venera 14 lander detected possible seismic activity in the planet's crust. In December 1984, during the apparition of Halley's Comet, the Soviet Union launched the two Vega probes to Venus. Vega 1 and Vega 2 encountered Venus in June 1985, each deploying a lander and an instrumented helium balloon. The balloon-borne aerostat probes floated at about 53 km altitude for 46 and 60 hours respectively, traveling about 1/3 of the way around the planet and allowing scientists to study the dynamics of the most active part of Venus's atmosphere. These measured wind speed, temperature, pressure and cloud density. More turbulence and convection activity than expected was discovered, including occasional plunges of 1 to 3 km in downdrafts. The landing vehicles carried experiments focusing on cloud aerosol composition and structure. Each carried an ultraviolet absorption spectrometer, aerosol particle-size analyzers, and devices for collecting aerosol material and analyzing it with a mass spectrometer, a gas chromatograph, and an X-ray fluorescence spectrometer. 
The upper two layers of the clouds were found to be sulfuric acid droplets, but the lower layer is probably composed of phosphoric acid solution. The crust of Venus was analyzed with the soil drill experiment and a gamma ray spectrometer. As the landers carried no cameras on board, no images were returned from the surface. They would be the last probes to land on Venus for decades. The Vega spacecraft continued on to rendezvous with Halley's Comet nine months later, carrying an additional 14 instruments and cameras for that mission. The multi-purpose Soviet Vesta mission, developed in cooperation with European countries for realisation in 1991–1994 but cancelled owing to the disbanding of the Soviet Union, would under its first plan have included the delivery of balloons and a small lander to Venus. Orbiters Venera 15 and 16 In October 1983, Venera 15 and Venera 16 entered polar orbits around Venus. The images had a resolution comparable to those obtained by the best Earth-based radars. Venera 15 analyzed and mapped the upper atmosphere with an infrared Fourier spectrometer. From November 11, 1983, to July 10, 1984, both satellites mapped the northern third of the planet with synthetic aperture radar. These results provided the first detailed understanding of the surface geology of Venus, including the discovery of massive shield volcanoes and of unusual volcanic features such as coronae and arachnoids. Venus showed no evidence of plate tectonics, unless the northern third of the planet happened to be a single plate. The altimetry data obtained by the Venera missions had a resolution four times better than Pioneer's. Magellan On August 10, 1990, the American Magellan probe, named after the explorer Ferdinand Magellan, arrived in its orbit around the planet and started a mission of detailed radar mapping at a frequency of 2.38 GHz. Whereas previous probes had created low-resolution radar maps of continent-sized formations, Magellan mapped 98% of the surface with a resolution of approximately 100 m. The resulting maps were comparable to visible-light photographs of other planets, and are still the most detailed in existence. Magellan greatly improved scientific understanding of the geology of Venus: the probe found no signs of plate tectonics, but the scarcity of impact craters suggested the surface was relatively young, and there were lava channels thousands of kilometers long. After a four-year mission, Magellan, as planned, plunged into the atmosphere on October 11, 1994, and partly vaporized; some sections are thought to have hit the planet's surface. Venus Express Venus Express was a mission by the European Space Agency to study the atmosphere and surface characteristics of Venus from orbit. The design was based on ESA's Mars Express and Rosetta missions. The probe's main objective was the long-term observation of the Venusian atmosphere, which, it was hoped, would also contribute to an understanding of Earth's atmosphere and climate. It also made global maps of Venusian surface temperatures, and attempted to observe signs of life on Earth from a distance. Venus Express successfully assumed a polar orbit on April 11, 2006. The mission was originally planned to last for two Venusian years (about 500 Earth days), but was extended to the end of 2014, until its propellant was exhausted. Some of the first results emerging from Venus Express included evidence of past oceans, the discovery of a huge double atmospheric vortex at the south pole, and the detection of hydroxyl in the atmosphere.
Akatsuki Akatsuki was launched on May 20, 2010, by JAXA, and was planned to enter Venusian orbit in December 2010. However, the orbital insertion maneuver failed and the spacecraft was left in heliocentric orbit. It was placed in an alternative elliptical Venusian orbit on December 7, 2015, by firing its attitude control thrusters for 1,233 seconds. The probe imaged the surface in ultraviolet, infrared, microwaves, and radio, and looked for evidence of lightning and volcanism on the planet. Astronomers working on the mission reported detecting a possible gravity wave that occurred on the planet Venus in December 2015. Akatsuki's mission ended in 2024. Flybys Several space probes en route to other destinations have used flybys of Venus to increase their speed via the gravitational slingshot method. These include the Galileo mission to Jupiter and the Cassini–Huygens mission to Saturn, which made two flybys. During Cassini's examination of the radio frequency emissions of Venus with its radio and plasma wave science instrument during both the 1998 and 1999 flybys, it reported no high-frequency radio waves (0.125 to 16 MHz), which are commonly associated with lightning. This was in direct opposition to the findings of the Soviet Venera missions 20 years earlier. It was postulated that perhaps if Venus did have lightning, it might be some type of low-frequency electrical activity, because radio signals cannot penetrate the ionosphere at frequencies below about 1 megahertz. An examination of Venus's radio emissions by the Galileo spacecraft during its flyby in 1990 was interpreted at the time to be indicative of lightning. However, the Galileo probe was over 60 times further from Venus than Cassini was during its flyby, making its observations substantially less significant. In 2007, the Venus Express mission confirmed the presence of lightning on Venus, finding that it is more common on Venus than it is on Earth. MESSENGER passed by Venus twice on its way to Mercury. The first time, it flew by on October 24, 2006, passing 3000 km from Venus. As Earth was on the other side of the Sun, no data was recorded. The second flyby was on July 6, 2007, where the spacecraft passed only 325 km from the cloudtops. BepiColombo also flew by Venus twice on its way to Mercury, the first time on October 15, 2020. During its second flyby of Venus, on August 10, 2021, BepiColombo passed within 552 km of Venus's surface. As BepiColombo approached Venus for its second flyby, two monitoring cameras and seven science instruments were switched on. Johannes Benkhoff, project scientist, believes BepiColombo's MERTIS (Mercury Radiometer and Thermal Infrared Spectrometer) could possibly detect phosphine, but "we do not know if our instrument is sensitive enough". Parker Solar Probe has performed seven Venus flybys, which occurred on October 3, 2018, December 26, 2019, July 11, 2020, February 20, 2021, October 16, 2021, August 21, 2023, and November 6, 2024. Parker Solar Probe makes observations of the Sun and solar wind, and these Venus encounters enable Parker Solar Probe to perform gravity assists and travel closer to the Sun. Future missions The Venera-D spacecraft was proposed to Roscosmos in 2003 and the concept has been matured since then. It is planned to be launched in 2029, and its prime purpose is to map Venus's surface using a powerful radar. The mission would also include a lander capable of functioning for a long duration on the surface.
As of late 2018, NASA was working with Russia on providing some instruments for the mission, but the collaboration had not been formalized, and in the wake of American sanctions on Russia in 2022, Roscosmos Director Dmitry Rogozin deemed American collaboration "inappropriate". India's ISRO is developing Shukrayaan-1, an orbiter and an atmospheric probe with a balloon aerobot, which, as of 2024, is still in the development phase. In 2017 it was planned to be launched in December 2024, but this was later pushed back to 2028. In June 2021, NASA announced the selection of two new Venus spacecraft, both part of NASA's Discovery Program: VERITAS and DAVINCI. These spacecraft are the first NASA missions to focus on Venus since Magellan in 1990 and Pioneer Venus in 1978. VERITAS, an orbiter, will seek to map the surface of Venus in high resolution, while DAVINCI will send both an orbiter, which will map Venus at multiple wavelengths, and a descent probe, which will study the chemistry of the Venusian atmosphere while taking photographs during its descent. DAVINCI and VERITAS were initially slated to launch in 2029 and 2028 respectively, but funding issues have pushed VERITAS' launch date back to at least 2029–2031. In June 2021, soon after NASA announced VERITAS and DAVINCI, ESA announced the Venus orbiter EnVision as part of its Cosmic Vision program. EnVision is planned to perform high-resolution radar mapping and atmospheric studies of Venus, and is planned to launch in 2031. On October 6, 2021, the United Arab Emirates announced its intention to send a probe to Venus as soon as 2028. The probe would make observations of the planet while using it for a gravity assist to propel it to the asteroid belt. In 2022, China's CNSA revealed the Venus Volcano Imaging and Climate Explorer (VOICE) orbiter mission, which would launch in 2026 and arrive at Venus by 2027. VOICE's mission was expected to last 3–4 years and to carry the following payloads: a Microwave Radiometric Sounder (MRS), a Polarimetric Synthetic Aperture Radar (PolSAR), and an Ultraviolet-Visible-Near Infrared Multispectral Imager (UVN-MSI). The probe would return images of the surface with one-meter resolution and search the clouds for habitability and biosignatures. Rocket Lab, a private aerospace manufacturer, hopes to launch the first private Venus mission in collaboration with MIT as soon as 2024. The spacecraft, Venus Life Finder, will send a lightweight atmospheric probe into the Venusian atmosphere to search for signs of life. Timeline of Venus exploration Unofficial names used during development are listed in italics. Past missions Current missions Missions under study Proposals To overcome the high pressure and temperature at the surface, a team led by Geoffrey Landis of NASA's Glenn Research Center produced a concept in 2007 of a solar-powered aircraft that would control a resistant surface rover on the ground. The aircraft would carry the mission's sensitive electronics in the relatively mild temperatures of Venus' upper atmosphere. Another concept from 2007 suggests equipping a rover with a Stirling cooler powered by a nuclear power source to keep an electronics package at an operational temperature of about . In 2020 NASA's JPL launched an open competition, titled "Exploring Hell: Avoiding Obstacles on a Clockwork Rover", to design a sensor that could work on Venus's surface.
Other examples of mission concepts and proposals include: Impact Research on the atmosphere of Venus has produced significant insights not only about its own state but also about the atmospheres of other planetary objects, especially that of Earth. It helped in finding and understanding the depletion of Earth's ozone in the 1970s and 1980s. The voyage of James Cook and his crew of HMS Endeavour to observe the Venus transit of 1769 brought about the claiming of Australia at Possession Island for colonisation by Europeans. See also Aspects of Venus Manned Venus Flyby Notes In Isaiah 14:12 in the Latin Vulgate translation of the Bible, Jerome translated the Greek term heosphoros in the Septuagint and the Hebrew term helel in the Hebrew Bible as lucifer, meaning "light bearer". Later English translators, influenced by the Vulgate's rendering of lucifer for helel, introduced Lucifer with a capital into the English translations of the Bible, thereby changing the Latin descriptive term to a personal name. This has caused "Lucifer" to become viewed as a code name for Satan, instead of being a descriptive term by which Isaiah compared the Babylonian king to the bright planet Venus. References External links Widemann, T., Smrekar, S., Garvin, J. et al., Venus Evolution Through Time: Key Science Questions, Selected Mission Concepts and Future Investigations, Space Science Reviews vol. 219, Oct. 3, 2023 Double vortex at Venus South Pole unveiled! Planetary Missions at National Space Science Data Center (NASA) Soviet Venus-rover ХМ-ВД2 Exploring Venus by Solar Airplane – G. Landis Venus Spaceflight Discovery and exploration of the Solar System Solar System
Observations and explorations of Venus
[ "Astronomy" ]
6,066
[ "Outer space", "History of astronomy", "Spaceflight", "Solar System", "Discovery and exploration of the Solar System" ]
3,057,994
https://en.wikipedia.org/wiki/Atmospheric-pressure%20chemical%20ionization
Atmospheric pressure chemical ionization (APCI) is an ionization method used in mass spectrometry which utilizes gas-phase ion-molecule reactions at atmospheric pressure (10^5 Pa), commonly coupled with high-performance liquid chromatography (HPLC). APCI is a soft ionization method similar to chemical ionization where primary ions are produced on a solvent spray. The main usage of APCI is for polar and relatively less polar thermally stable compounds with molecular weight less than 1500 Da. The application of APCI with HPLC has gained wide popularity in trace analysis, for compounds such as steroids and pesticides, and also in pharmacology for drug metabolites. Instrument structure A typical APCI source usually consists of three main parts: a sample inlet, a corona discharge needle, and an ion transfer region under intermediate pressure. In the case of the heated nebulizer inlet from an LC, the eluate flows at 0.2 to 2.0 mL/min into a pneumatic nebulizer which creates a mist of fine droplets. Droplets are vaporized by impact with the heated walls at 350–500 °C and carried by the nebulizer gas and an auxiliary gas into the ion-molecule reaction region between the corona electrode and the exit counter-electrode. A constant current of 2–5 microamps is maintained from the corona needle. Sample ions are produced by ion-molecule reactions (as described below), and pass through a small orifice or tube into the ion transfer region leading to the mass spectrometer. Various geometries of ion source are possible, depending on application. When used with liquid chromatography, particularly at higher flow rates, the nebulizer is often positioned orthogonal to (or at a similarly steep angle to) the inlet of the mass spectrometer, so that solvent and neutral material do not contaminate the actual inlet of the mass spectrometer. Ionization mechanism Ionization in the gas phase by APCI follows the sequence: sample in solution, sample vapor, and sample ions. The effluent from the HPLC is evaporated completely. The mixture of solvent and sample vapor is then ionized by ion-molecule reaction. The ionization can be carried out in either positive or negative ionization mode. In the positive mode, the relative proton affinities of the reactant ions and the gaseous analyte molecules allow either proton transfer or adduction of reactant gas ions to produce the ions [M+H]+ of the molecular species. In the negative mode, [M−H]− ions are produced by proton abstraction, or [M+X]− ions are produced by anion attachment. Most APCI-MS analysis has been carried out in positive mode. In the positive mode, when the corona discharge current is 1–5 μA on the nebulized solvent, N2 gas molecules are excited and ionized, producing N4+*. The evaporated mobile phase of the LC acts as the ionization gas and source of reactant ions. If water is the only solvent in the evaporated mobile phase, the excited nitrogen cluster ions N4+* react with H2O molecules to produce water cluster ions H+(H2O)n. Then, analyte molecules M are protonated by the water cluster ions. Finally, the ionization products MH+(H2O)m transfer out of the atmospheric-pressure ion source. Declustering (removal of water molecules from the protonated analyte molecule) of MH+(H2O)m takes place in the high vacuum of the mass analyzer. The analyte molecule ions detected by MS are [M+H]+. The chemical reactions of the ionization process are shown below. 
Primary and secondary reagent ion formation in a nitrogen atmosphere in the presence of water:
N2 + e → N2+* + 2e
N2+* + 2N2 → N4+* + N2
N4+* + H2O → H2O+ + 2N2
H2O+ + H2O → H3O+ + OH•
H3O+ + H2O + N2 → H+(H2O)2 + N2
H+(H2O)n−1 + H2O + N2 → H+(H2O)n + N2
Ionization of product ions:
H+(H2O)n + M → MH+(H2O)m + (n−m)H2O
Declustering in the high vacuum of the mass analyzer:
MH+(H2O)m → MH+ + mH2O
If the mobile phase contains solvents with a higher proton affinity than water, proton-transfer reactions take place that lead to protonation of the solvent with the higher proton affinity. For example, when methanol solvent is present, the cluster solvent ions would be CH3OH2+(H2O)n(CH3OH)m. Fragmentation does not normally occur inside the APCI source. If a fragment ion of a sample is observed, thermal degradation has taken place in the heated nebulizer interface, followed by the ionization of the decomposition products. In a major distinction from chemical ionization, the electrons needed for the primary ionization are not produced by a heated filament, as a heated filament cannot be used under atmospheric pressure conditions. Instead, the ionization must occur using either corona discharges or β-particle emitters, which are both electron sources capable of handling the presence of corrosive or oxidizing gases. History The origins of atmospheric pressure chemical ionization sources combined with mass spectrometry can be found in the 1960s in studies of ions in flames and of ion chemistry in corona discharges up to atmospheric pressure. The first application of APCI combined with mass spectrometry for trace chemical analysis was by the Franklin GNO Corporation, which in 1971 developed an instrument combining APCI with ion mobility and mass spectrometry. Horning, Carroll and their co-workers in the 1970s at the Baylor College of Medicine (Houston, TX) demonstrated the advantages of APCI for coupling gas chromatography (GC) and liquid chromatography (LC) to a mass spectrometer. High sensitivity and simple mass spectra were shown in these studies. For LC-MS, the LC eluate was vaporized and ionized in a heated metal block. Initially, a 63Ni foil was used as a source of electrons to perform ionization. In 1975, a corona discharge electrode was developed, providing a larger dynamic response range. APCI with the corona discharge electrode became the model for modern commercially available APCI interfaces. In the late 1970s an APCI mass spectrometer system (the TAGA, for Trace Atmospheric Gas Analyzer), mounted in a van for mobile operation, was introduced by SCIEX, providing high sensitivity for monitoring polar organics in ambient air in real time. In 1981 a triple quadrupole mass spectrometer version was produced, allowing real-time direct air monitoring by APCI-MS/MS. A similar platform was used for the SCIEX AROMIC system (part of the CONDOR contraband detection system developed together with British Aerospace) for the detection of drugs, explosives and alcohol in shipping containers at border crossings, by sampling the interior airspace. In the mid-1980s and into the early 1990s, the advantages of performing LC/MS with APCI and with electrospray, both atmospheric pressure ionization techniques, began to capture the attention of the analytical community. Together they have dramatically expanded the role of mass spectrometry in the pharmaceutical industry for both drug development and drug discovery applications. 
The sensitivity of APCI combined with the specificity of LC-MS and LC-MS/MS often makes it the method of choice for the quantification of drugs and drug metabolites. Advantages Ionization of the substrate is very efficient as it occurs at atmospheric pressure, and thus has a high collision frequency. Additionally, APCI considerably reduces the thermal decomposition of the analyte because of the rapid desolvation and vaporization of the droplets in the initial stages of the ionization. This combination of factors most typically results in the production of ions of the molecular species with fewer fragmentations than many other ionization methods, making it a soft ionization method. Another advantage of APCI over other ionization methods is that it allows the high flow rates typical of standard-bore HPLC (0.2–2.0 mL/min) to be used directly, often without diverting the larger fraction of volume to waste. Additionally, APCI can often be performed in a modified ESI source. The ionization occurs in the gas phase, unlike ESI, where the ionization occurs in the liquid phase. A potential advantage of APCI is that it is possible to use a nonpolar solvent as a mobile phase solution, instead of a polar solvent, because the solvent and molecules of interest are converted to a gaseous state before reaching the corona discharge needle. Because APCI involves gas-phase chemistry, there is no need for special LC conditions such as particular solvents, conductivity, or pH. APCI appears to be a more versatile LC/MS interface and is more compatible with reversed-phase LC than ESI. Application APCI is suited for thermally stable samples with low to medium molecular weight (less than 1500 Da) and medium to high polarity. It is particularly useful for analytes that are not sufficiently polar for electrospray. The application area of APCI is the analysis of drugs, nonpolar lipids, natural compounds, pesticides and various organic compounds, but it is of limited use in the analysis of biopolymers, organometallics, ionic compounds and other labile analytes. See also Chemical ionization Corona discharge Electrospray ionization Secondary electrospray ionization References Ion source
Atmospheric-pressure chemical ionization
[ "Physics" ]
2,058
[ "Ion source", "Mass spectrometry", "Spectrum (physical sciences)" ]
3,058,053
https://en.wikipedia.org/wiki/Electronics%20manufacturing%20services
Electronics manufacturing services (EMS) is a term used for companies that design, manufacture, test, distribute, and provide return/repair services for electronic components and assemblies for original equipment manufacturers (OEMs). The concept is also referred to as electronics contract manufacturing (ECM). Many high-volume consumer electronic products have been built in China and countries of Southeast Asia, due to the speed of manufacture of high-volume low-cost electronics in those locations, as opposed to the United States. Cities such as Shenzhen, China and Penang, Malaysia have become important production centres for the industry, attracting many consumer electronics companies such as Apple Inc. Some companies such as Flex and Wistron are original design manufacturers and providers of electronics manufacturing services. History The EMS industry was initially established in 1961 by SCI Systems of Huntsville, Alabama. The industry realized its most significant growth in the 1980s; at the time, most electronics manufacturing for large-scale product runs was handled by OEMs' in-house assembly operations. These new companies offered flexibility and eased human resources issues for smaller companies doing limited runs. The business model for the EMS industry is to specialize in large economies of scale in manufacturing, raw materials procurement and pooling together resources, and industrial design expertise, as well as to create added-value services such as warranty and repairs. This frees up the customer, who does not need to manufacture and keep huge inventories of products, and can therefore respond to sudden spikes in demand more quickly and efficiently. The development of surface mount technology (SMT) on printed circuit boards (PCB) allowed for the rapid assembly of electronics. By the mid-1990s the advantages of the EMS concept became compelling and OEMs began outsourcing PCB assembly (PCBA) on a large scale. By the end of the 1990s and early 2000s, many OEMs sold their assembly plants to EMSs aggressively vying for market share. A wave of consolidation followed as the more cash-flush EMS firms were able to quickly buy up both existing plants and smaller EMS companies. Market segments The global market for Electronics Manufacturing Services (EMS) reached an estimated value of US$504.2 billion in 2023 and is projected to grow at a CAGR of 4.94% during the forecast period 2024–2030, reaching a revised size of US$707.5 billion by 2030. The EMS industry is commonly divided into tiers by revenue:
Tier 1: >$5B
Tier 2: $500M to $5B
Tier 3: $100M to $500M
Tier 4: <$100M
There is no hard rule on the actual revenue designation at this time. Other categories have been suggested by StepBeyond/EMSinsider and CIRCUITS ASSEMBLY: Micro Tier (<$50M), Tier 4 (<$10M), and "Tier Mega", referring to the Big 2, Foxconn and Flex. Another distinction is drawn between EMS that specializes in High Mix Low Volume (HMLV) and High Volume Low Mix (HVLM). Mix refers generally to the complexity or different models of the PCB assembly. Volume refers to the number of units built, with products like consumer electronics on the high end and prototype, medical electronics or machinery on the low end. Typically, lower Tier EMS provide HMLV and higher Tier provide HVLM. During technology's late-1990s heyday, EMS players routinely acquired assets in high-cost locations. EMS players largely focused on printed circuit board fabrication, leaving system assembly to the OEMs. 
EMS companies largely disdained industries outside the world of information processing (computers) and communications. In recent years, EMS players have shifted production to low-cost geographies; embraced non-traditional industries including consumer electronics, industrial, medical and instrumentation; and added substantial vertical capabilities, stretching from design and ODM through system assembly, test, delivery and logistics, warranty and repair, network services, software and silicon design, and customer service. EMS companies have also started to provide design services, offering conceptual product development advice and mechanical, electrical and software design assistance. Testing services perform in-circuit, functional, environmental, agency compliance, and analytical laboratory testing. Electronics manufacturing services are located throughout the world and provide numerous benefits. They vary in terms of production capabilities and comply with various quality standards and regulatory requirements. E2MS E2MS (Electronic Engineering Manufacturing Service) refers to the strategy of integrating product development, prototyping and industrialization services into a traditional EMS business, with the aim of harnessing potential synergies. A typical E2MS offering will start in the design phase, then continue to support the client in development, prototyping, tooling and production all the way to the testing phase, allowing for faster ramp-up as the product is prepared for mass production up front. The term E2MS was first coined by Escatec and has since been adopted by numerous Tier 2 and Tier 3 producers. Larger companies (Tier 1) have gone even further, offering full concept-to-mass-production services and often taking a stake in the intellectual property, becoming more similar to ODM companies. References Electronics manufacturing
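As a quick arithmetic check of the market projection quoted in the Market segments section above, here is a minimal Python sketch; the figures come from the text, while the script itself is purely illustrative:

```python
# Compound-growth check of the EMS market projection:
# US$504.2 billion in 2023, growing at a 4.94% CAGR over 2024-2030 (7 years).
base_2023 = 504.2   # US$ billions
cagr = 0.0494

projected_2030 = base_2023 * (1 + cagr) ** 7
print(f"Projected 2030 market: US${projected_2030:.1f}bn")
# -> ~US$706.6bn, consistent with the quoted US$707.5bn
# (the small gap comes from rounding the growth rate to two decimals)
```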
Electronics manufacturing services
[ "Engineering" ]
1,051
[ "Electronic engineering", "Electronics manufacturing" ]
3,058,220
https://en.wikipedia.org/wiki/Free%20Boolean%20algebra
In mathematics, a free Boolean algebra is a Boolean algebra with a distinguished set of elements, called generators, such that:
Each element of the Boolean algebra can be expressed as a finite combination of generators, using the Boolean operations, and
The generators are as independent as possible, in the sense that there are no relationships among them (again in terms of finite expressions using the Boolean operations) that do not hold in every Boolean algebra no matter which elements are chosen.
A simple example The generators of a free Boolean algebra can represent independent propositions. Consider, for example, the propositions "John is tall" and "Mary is rich". These generate a Boolean algebra with four atoms, namely: John is tall, and Mary is rich; John is tall, and Mary is not rich; John is not tall, and Mary is rich; John is not tall, and Mary is not rich. Other elements of the Boolean algebra are then logical disjunctions of the atoms, such as "John is tall and Mary is not rich, or John is not tall and Mary is rich". In addition there is one more element, FALSE, which can be thought of as the empty disjunction; that is, the disjunction of no atoms. This example yields a Boolean algebra with 16 elements; in general, for finite n, the free Boolean algebra with n generators has 2^n atoms, and therefore 2^(2^n) elements. If there are infinitely many generators, a similar situation prevails except that now there are no atoms. Each element of the Boolean algebra is a combination of finitely many of the generating propositions, with two such elements deemed identical if they are logically equivalent. Another way to see why the free Boolean algebra on an n-element set has 2^(2^n) elements is to note that each element is a function from n bits to one. There are 2^n possible inputs to such a function, and the function will choose 0 or 1 to output for each input, so there are 2^(2^n) possible functions. Category-theoretic definition In the language of category theory, free Boolean algebras can be defined simply in terms of an adjunction between the category of sets and functions, Set, and the category of Boolean algebras and Boolean algebra homomorphisms, BA. In fact, this approach generalizes to any algebraic structure definable in the framework of universal algebra. Above, we said that a free Boolean algebra is a Boolean algebra with a set of generators that behave a certain way; alternatively, one might start with a set and ask which algebra it generates. Every set X generates a free Boolean algebra FX defined as the algebra such that for every algebra B and function f : X → B, there is a unique Boolean algebra homomorphism f′ : FX → B that extends f. Diagrammatically, f′ ∘ iX = f, where iX is the inclusion, and the dashed arrow denotes uniqueness. The idea is that once one chooses where to send the elements of X, the laws for Boolean algebra homomorphisms determine where to send everything else in the free algebra FX. If FX contained elements inexpressible as combinations of elements of X, then f′ wouldn't be unique, and if the elements of X weren't sufficiently independent, then f′ wouldn't be well defined! It is easily shown that FX is unique (up to isomorphism), so this definition makes sense. It is also easily shown that a free Boolean algebra with generating set X, as defined originally, is isomorphic to FX, so the two definitions agree. One shortcoming of the above definition is that the diagram doesn't capture that f′ is a homomorphism; since it is a diagram in Set each arrow denotes a mere function. 
We can fix this by separating it into two diagrams, one in BA and one in Set. To relate the two, we introduce a functor U : BA → Set that "forgets" the algebraic structure, mapping algebras and homomorphisms to their underlying sets and functions. If we interpret the top arrow as a diagram in BA and the bottom triangle as a diagram in Set, then this diagram properly expresses that every function f : X → UB extends to a unique Boolean algebra homomorphism f′ : FX → B. The functor U can be thought of as a device to pull the homomorphism f′ back into Set so it can be related to f. The remarkable aspect of this is that the latter diagram is one of the various (equivalent) definitions of when two functors are adjoint. Our F easily extends to a functor Set → BA, and our definition of X generating a free Boolean algebra FX is precisely that U has a left adjoint F. Topological realization The free Boolean algebra with κ generators, where κ is a finite or infinite cardinal number, may be realized as the collection of all clopen subsets of {0,1}^κ, given the product topology assuming that {0,1} has the discrete topology. For each α<κ, the αth generator is the set of all elements of {0,1}^κ whose αth coordinate is 1. In particular, the free Boolean algebra with ℵ0 generators is the collection of all clopen subsets of a Cantor space, sometimes called the Cantor algebra. This collection is countable. In fact, while the free Boolean algebra with n generators, n finite, has cardinality 2^(2^n), the free Boolean algebra with ℵ0 generators, as for any free algebra with ℵ0 generators and countably many finitary operations, has cardinality ℵ0. For more on this topological approach to free Boolean algebra, see Stone's representation theorem for Boolean algebras. See also Boolean algebra (structure) Generating set References Steve Awodey (2006) Category Theory (Oxford Logic Guides 49). Oxford University Press. Paul Halmos and Steven Givant (1998) Logic as Algebra. Mathematical Association of America. Saunders Mac Lane (1998) Categories for the Working Mathematician. 2nd ed. (Graduate Texts in Mathematics 5). Springer-Verlag. Saunders Mac Lane (1999) Algebra, 3d. ed. American Mathematical Society. Robert R. Stoll, 1963. Set Theory and Logic, chpt. 6.7. Dover reprint 1979. Boolean algebra Free algebraic structures
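To make the counting argument in the simple example above concrete, here is a minimal Python sketch (illustrative only, not from the source) that counts the elements of the free Boolean algebra on n generators by enumerating truth tables:

```python
from itertools import product

def free_boolean_algebra_size(n: int) -> int:
    """Count the elements of the free Boolean algebra on n generators
    by enumerating all functions from n bits to one bit."""
    inputs = list(product([0, 1], repeat=n))            # 2**n possible inputs
    truth_tables = product([0, 1], repeat=len(inputs))  # one output bit per input
    return sum(1 for _ in truth_tables)                 # 2**(2**n) functions

print(free_boolean_algebra_size(2))  # 16, matching the John/Mary example
print(free_boolean_algebra_size(3))  # 256
```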
Free Boolean algebra
[ "Mathematics" ]
1,290
[ "Boolean algebra", "Mathematical structures", "Mathematical logic", "Fields of abstract algebra", "Category theory", "Algebraic structures", "Free algebraic structures" ]
3,058,258
https://en.wikipedia.org/wiki/Animal%20song
Animal song is not a well-defined term in scientific literature, and the more broadly defined term vocalizations is in more common use. Song generally consists of several successive vocal sounds incorporating multiple syllables. Some sources distinguish between simpler vocalizations, termed “calls”, reserving the term “song” for more complex productions. Song-like productions have been identified in several groups of animals, including cetaceans (whales and dolphins), avians (birds), anurans (frogs), and humans. Social transmission of song has been found in groups including birds and cetaceans. Anatomy of sound production Mammals Most mammalian species produce sound by passing air from the lungs across the larynx, vibrating the vocal folds. Sound then enters the supralaryngeal vocal tract, which can be adjusted to produce various changes in sound output, providing refinement of vocalizations. Although morphological differences between species affect production of sound, neural control is thought to be a more essential factor in producing the variations within human speech and song compared to those of other mammals. Cetacean vocalizations are an exception to this general mechanism. Toothed whales (odontocetes) pass air through a system of air sacs and muscular phonic lips, which vibrate to produce audible vocalizations, thus serving the function of vocal folds in other mammals. Sound vibrations are conveyed to an organ in the head called the melon, which can be changed in shape to control and direct vocalizations. Unlike humans and other mammals, toothed whales are able to recycle air used in vocal production, allowing whales to sing without releasing air. Some cetaceans, such as humpback whales, sing continuously for hours. Anurans Like mammals, anurans possess a larynx and vocal folds, which are used to create vibrations in sound production. However, frogs also use structures called vocal sacs, elastic membranes in the base of the mouth which inflate during sound production. These sacs provide both amplification and fine-tuning of sounds, and also allow air to be pushed back into the lungs during vocalizations. This allows air used in sound production to be recycled, and is thought to have evolved to increase song efficiency. Increased efficiency of sound production is important, as some frogs may produce calls lasting for several hours during mating seasons. The New River tree frog (Trachycephalus hadroceps), for example, spends hours producing up to 38,000 calls in a single night, which is made possible through the efficient recycling of air by the vocal sac. Birds When birds inhale, air is passed from the mouth through the trachea, which forks into two bronchi connecting to the lungs. The primary vocal organ of birds is called the syrinx, which is located at the fork of the trachea and is not present in mammals. As air passes through the respiratory tract, the syrinx and the membranes within vibrate to produce sound. Birds are capable of producing continuous song during both inhalation and exhalation, and may sing continuously for several minutes. For example, the skylark (Alauda arvensis) is capable of producing non-stop song for up to one hour. Some birds change their song characteristics during inhalation versus exhalation. The Brewer's sparrow (Spizella breweri) produces rapid trilling during exhalation, interspersed with lower-rate trills during short inhalations. 
The two halves of the syrinx connect to separate lungs and can be controlled independently, allowing some birds to produce two separate notes simultaneously. Insects Insects such as crickets (family Gryllidae) are well known for their ability to produce loud song; however, the mechanism of sound production differs greatly from that of most other animals. Many insects generate sound by mechanical rubbing of body structures, a mechanism known as stridulation. Orthopteran insects, including crickets and katydids (family Tettigoniidae), have been especially well studied for sound production. These insects use scraper-like structures on one wing to sweep over file structures on an opposing wing to create vibrations, producing a variety of trilling and chirping sounds. Locusts and other grasshoppers (suborder Caelifera) stridulate by rubbing their hind legs against pegs on wing surfaces in an up-and-down motion. Cicadas (superfamily Cicadoidea) produce sound at much greater volumes than orthopterans, relying on a pair of organs called tymbals at the base of the abdomen behind the wings. Muscle contraction rapidly deforms the tymbal membrane, emitting several different types of sounds. Insects thus produce a variety of sounds, using various mechanisms distinct from other animals. Functions of vocalizations Vocalizations can play a wide variety of roles. In groups such as anurans and birds, several distinct types of notes are incorporated to form songs, which are sung in different situations and serve distinct functions. For example, many frogs may use trilling notes in mate attraction, but switch to different vocal patterns in aggressive territorial displays. In some species, a single song incorporates several note types which serve different purposes, with one type of note eliciting responses from females, and another note of the same song responsible for warning competitor males of aggression. Mating and courtship Vocalizations play an important role in the mating behaviour of many animals. In many groups (birds, frogs, crickets, whales, etc.), song production is more common in males of the species, and is often used to attract females. Bird song is thought to have evolved through sexual selection. Female songbirds often assess potential mates using song, based on qualities such as high song output, complexity and difficulty of songs, as well as the presence of a local dialect. Song output serves as a fitness indicator of males, since vocalizations require both energy and time to produce, and thus males capable of producing high song output for long durations may have higher fitness than less vocal males. It is thought that song complexity may serve as an indicator of male fitness by providing an indication of successful brain development despite potential early-life stressors, such as lack of food. Social transmission of songs allows for the development of local dialects of song, and female songbirds also typically prefer to choose mates producing local song dialects. One hypothesis for this phenomenon is that selecting local mates allows the female to choose genes specially adapted to suit local conditions. Frog song also plays a prominent role in courtship. In túngara frogs (Engystomops pustulosus), male frogs increase the complexity of their calls, adding additional note types when greater numbers of competitor males are present, which has been found to attract greater numbers of female frogs. Some species change their courtship calls when females are especially nearby. 
In male glass frogs (Hyalinobatrachium fleischmanni), a long frequency-modulated vocalization is produced upon noticing another nearby frog, but is changed to a short chirping song when a female approaches. Several species (e.g., dendrobatid frogs (Mannophryne trinitatis), ornate frogs (Cophixalus ornatus), splendid poison frogs (Dendrobates speciosus)) switch from long-range loud trilling sounds to short-range quieter chirps when females move closer, which is thought to allow mate attraction without alerting competitor males to female locations. Although highly complex song-like production has been identified in whales, its function is still somewhat elusive. It is thought to be involved in courtship behaviour and sexual selection, and singing behaviour becomes more common during the breeding season. Aggression and territorial defense Another major function of song output is to indicate aggression among males during breeding seasons. Both anurans and birds use singing in territorial displays to convey aggressive intent. For eastern smooth frogs (Geocrinia victoriana), for example, courtship songs involve shorter notes to attract potential mates, and are followed by longer tones to repel males. The frequency of the sounds produced generally correlates negatively with body size, both within and among species, and allows competing males to assess the body size of vocalizing neighbouring frogs. Male frogs typically approach higher-frequency sounds more readily than lower frequencies, likely because the frog producing the sound is assessed to be a smaller, less dangerous competitor. In territorial birds, males increase their song production rate when neighbouring males encroach on their territory. In great tits (Parus major), nightingales (Luscinia megarhynchos), blackbirds (Turdus merula) and sparrows (family Passeridae), playing song recordings slows the rate at which males establish territories in an unoccupied region, suggesting these birds rely on song output in establishing territorial boundaries. Experimentally muted Scott's seaside sparrows (Ammodramus maritimus) lose control of their territories to other males. Thus, territorial birds often rely on song production to repel conspecific males. Individual recognition Like the human voice, bird song typically contains sufficient individual variability to allow discrimination of individual vocal patterns by conspecifics. Such discrimination is important to mate recognition in many monogamous species. Seabirds, for example, often use vocalization patterns to recognize their mate upon reunion during the breeding season. In many colonial nesting birds, parent-offspring recognition is critical to allow parents to locate their own offspring upon return to nesting sites. Cliff swallows (Petrochelidon pyrrhonota) have been demonstrated to preferentially respond to parental songs at a young age, providing a means of vocalization-based offspring recognition. Social transmission and learning Learning and development of birdsong Learned vocalizations have been identified in groups including whales, elephants, seals, and primates; however, the most well-established examples of learned singing are in birds. In many species, young birds learn songs from adult males of the same species, typically their fathers. This was first demonstrated in chaffinches (Fringilla coelebs). Chaffinches raised in social isolation develop abnormal songs; however, playing recordings of chaffinch songs allows the young birds to learn their species-specific songs. 
Song learning generally involves a sensitive learning period in early life, during which young birds must be exposed to song from tutor animals in order to develop normal singing as adults. Song learning occurs in two stages: the sensory phase and the sensorimotor phase. During the sensory phase, birds memorize the song of a tutor animal, forming a template representation of the species-specific song. The sensorimotor phase follows and may overlap with the sensory phase. During the sensorimotor phase, young birds initially produce variable, rambling versions of adult song, called subsong. As learning progresses, the subsong is replaced with a more refined version containing elements of adult song, called plastic song. Finally, the song learning crystallizes into adult song. For song learning to occur properly, young birds must be able to hear and refine their vocal productions, and birds deafened before the development of subsong do not learn to produce normal adult song. The sensitive period in which birds must be exposed to song tutoring varies across species, but typically occurs within the first year of life. Birds in which song learning is limited to the initial sensitive period are referred to as closed-ended learners, whereas some birds (e.g., canaries; Serinus canaria) continue to learn new songs later in life and are called open-ended learners. Some species of birds, such as the brown-headed cowbird (Molothrus ater), parasitize other bird species, laying their eggs in the nests of other birds such that the heterospecific bird raises the chicks. Although most birds acquire song learning within the first year, brown-headed cowbirds have a delayed sensitive period, occurring approximately one year after hatching. This may be an adaptation to prevent the young birds from learning the songs of the foreign bird species. Instead, the young birds have a year in which to find conspecifics and learn their own species-specific song. Birds are generally predisposed to favour learning of conspecific songs, and will typically preferentially learn song from conspecific animals rather than heterospecifics. However, song learning is not completely restricted to within-species songs. If exposed to heterospecific birds of another species in the absence of same-species birds, young birds will often adopt the song of the species to which they were exposed. Although birds are capable of learning song production purely from audio recordings of birdsong, tutor-student interaction may be important in some species. For example, white-crowned sparrows (Zonotrichia leucophrys) preferentially learn the songs of song sparrows (Melospiza melodia) when exposed to recordings of white-crowned sparrows and live song sparrows. In other words, the interactive nature of a live tutor seems to trump the familiarity of the recordings from conspecifics. 
The cultural transmission of these songs has been found to occur across great geographic distances over years, with one study noting song transmission across the western and central South Pacific Ocean populations over an 11-year period. See also Animal communication Animal language Bird song Vocal learning Whale song References External links Listen to Nature 400 examples of animal songs and calls Washington U. Mice Songs Cornell Animal Sound Library (over 300,000 audio recordings from various species of mammals, birds, amphibians, fish, arthropods and reptiles). The British Library Sound Archive has more than 150,000 recordings of 10,000 species Canadian Centre for Wolf Research International Bioacoustics Council many links to animal sound sites Zoosemiotics Song forms Song
Animal song
[ "Biology" ]
2,922
[ "Ethology", "Behavior", "Zoosemiotics" ]
3,058,867
https://en.wikipedia.org/wiki/GEH%20statistic
The GEH Statistic is a formula used in traffic engineering, traffic forecasting, and traffic modelling to compare two sets of traffic volumes. The GEH formula gets its name from Geoffrey E. Havers, who invented it in the 1970s while working as a transport planner in London, England. Although its mathematical form is similar to a chi-squared test, it is not a true statistical test. Rather, it is an empirical formula that has proven useful for a variety of traffic analysis purposes. The formula for the GEH statistic is: GEH = √(2(M − C)² / (M + C)), where M is the hourly traffic volume from the traffic model (or new count) and C is the real-world hourly traffic count (or the old count). Using the GEH statistic avoids some pitfalls that occur when using simple percentages to compare two sets of volumes. This is because the traffic volumes in real-world transportation systems vary over a wide range. For example, the mainline of a freeway/motorway might carry 5000 vehicles per hour, while one of the on-ramps leading to the freeway might carry only 50 vehicles per hour (in that situation it would not be possible to select a single percentage of variation that is acceptable for both volumes). The GEH statistic reduces this problem; because the GEH statistic is non-linear, a single acceptance threshold based on GEH can be used over a fairly wide range of traffic volumes. The use of GEH as an acceptance criterion for travel demand forecasting models is recognised in the UK Highways Agency's Design Manual for Roads and Bridges, the Wisconsin microsimulation modeling guidelines, the Transport for London Traffic Modelling Guidelines and other references. For traffic modelling work in the "baseline" scenario, a GEH of less than 5.0 is considered a good match between the modelled and observed hourly volumes (flows of longer or shorter durations should be converted to hourly equivalents to use these thresholds). According to DMRB, 85% of the volumes in a traffic model should have a GEH less than 5.0. GEHs in the range of 5.0 to 10.0 may warrant investigation. If the GEH is greater than 10.0, there is a high probability that there is a problem with either the travel demand model or the data (this could be something as simple as a data-entry error, or as complicated as a serious model calibration problem). Applications The GEH formula is useful in situations such as the following:
Comparing a set of traffic volumes from manual traffic counts with a set of volumes done at the same locations using automation (e.g. a pneumatic tube traffic counter is used to check the total entering volumes at an intersection to affirm the work done by technicians doing a manual count of the turn volumes).
Comparing the traffic volumes obtained from this year's traffic counts with a group of counts done at the same locations in a previous year.
Comparing the traffic volumes obtained from a travel demand forecasting model (for the "base year" scenario) with the real-world traffic volumes.
Adjusting traffic volume data collected at different times to create a mathematically consistent data set that can be used as input for travel demand forecasting models or traffic simulation models (as discussed in NCHRP 765).
Common criticism about GEH statistic The GEH statistic depends on the magnitude of the values. Thus, the GEH statistics of two counts of different duration (e.g., daily vs. hourly values) cannot be directly compared. Therefore, the GEH statistic is not suitable for evaluating other indicators, e.g., trip distance. 
Deviations are evaluated differently upward versus downward, so the calculation is not symmetrical. Moreover, the GEH statistic is not unitless, but has the unit √(vehicles/hour) (s^(−1/2) in SI base units). The GEH statistic does not fall within a range of values between 0 (no match) and 1 (perfect match). Thus, the range of values can only be interpreted with sufficient experience (i.e., non-intuitively). Furthermore, it is criticized that the value does not have a well-founded statistical derivation. Development of the SQV statistic An alternative measure to the GEH statistic is the Scalable Quality Value (SQV), which solves the above-mentioned problems: it is applicable to various indicators, it is symmetric, it has no units, and it has a range of values between 0 and 1. Moreover, Friedrich et al. derive the relationship between the GEH statistic and the normal distribution, and thus the relationship between the SQV statistic and the normal distribution. The SQV statistic is calculated using an empirical formula with a scaling factor f: SQV = 1 / (1 + √((M − C)² / (f · C))), with M and C as above. Fields of application By introducing the scaling factor f, the SQV statistic can be used to evaluate other mobility indicators. The scaling factor is based on the typical magnitude of the mobility indicator (taking into account the corresponding unit). According to Friedrich et al., the SQV statistic is suitable for assessing:
Traffic volumes (if necessary, differentiation can be made not only by time of day, but also by mode).
Person-related mobility indicators: the number of trips per person, the mean travel times per trip in minutes, and the mean travel distances per trip in kilometers (each either not differentiated, or differentiated by mode and/or trip purpose).
However, the SQV statistic should not be used for the following indicators:
Percentage of modal split or modal shares: here there is a fixed upper limit of 100% that cannot be exceeded. Instead, the number of trips per person per mode can be used for validation with the SQV statistic.
Travel times for paths between 2 points in the network: this indicator does not depend on the path taken by a single person, but represents a sequence of distances along a route.
Quality categories Friedrich et al. recommend quality categories for interpreting SQV values; depending on the indicator under comparison, different quality categories may be required. Consideration of standard deviation and sample size The survey of mobility indicators or traffic volumes is often conducted under non-ideal conditions, e.g. large standard deviations or small sample sizes. For these cases, a procedure was described by Friedrich et al. that integrates these two cases into the calculation of the SQV statistic. See also Microsimulation Traffic counter Traffic flow Traffic engineering (transportation) Transportation planning Trip generation External links UK Highways Agency's Design Manual for Roads & Bridges (DMRB) Wisconsin Microsimulation Modeling Guidelines Transport for London Traffic Modelling Guidelines National Cooperative Highway Research Program Report 765 References Transportation engineering Transportation planning
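To make the two measures concrete, here is a minimal Python sketch of the GEH and SQV formulas as reconstructed above; the function names, the default scaling factor, and the example volumes are illustrative assumptions, not values from the source:

```python
import math

def geh(m: float, c: float) -> float:
    """GEH statistic for a modelled hourly volume m and an observed hourly count c."""
    return math.sqrt(2 * (m - c) ** 2 / (m + c))

def sqv(m: float, c: float, f: float = 1000.0) -> float:
    """Scalable Quality Value; f is the magnitude-dependent scaling factor
    (1000 here is an assumed placeholder, not a recommended value)."""
    return 1.0 / (1.0 + math.sqrt((m - c) ** 2 / (f * c)))

# The same absolute error of 100 veh/h on a freeway mainline and on a
# lightly used on-ramp gives very different GEH values (non-linearity):
print(geh(5100, 5000))  # ~1.4  -> good match (< 5.0)
print(geh(150, 50))     # 10.0  -> likely a model or data problem
print(sqv(5100, 5000))  # ~0.96 -> close to 1 (near-perfect match)
```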
GEH statistic
[ "Engineering" ]
1,370
[ "Transportation engineering", "Civil engineering", "Industrial engineering" ]
3,059,064
https://en.wikipedia.org/wiki/Wood%20warping
Wood warping is a deviation from flatness in timber as a result of internal residual stress caused by uneven shrinkage. Warping primarily occurs due to uneven expansion or contraction caused by changes in moisture content. Warping can occur in wood considered "dry" (wood can take up and release moisture indefinitely) when it takes up moisture unevenly, or when it is allowed to return to its "dry" equilibrium state unevenly, too slowly, or too quickly. Many factors can contribute to susceptibility to warp: wood species, grain orientation, air flow, sunlight, uneven finishing, temperature, and cutting season. The types of wood warping include:
bow: a warp along the length of the face of the wood
crook: a warp along the length of the edge of the wood
kink: a localized crook, often due to a knot
cup: a warp across the width of the face, in which the edges are higher or lower than the center of the wood
twist or wind: a distortion in which the two ends do not lie on the same plane; winding sticks assist in viewing this defect
curl: a warp in the center that creates a sort of bow
Wood warping costs the wood industry in the U.S. millions of dollars per year. Straight wood boards that leave a cutting facility sometimes arrive at the store yard warped. Although wood warping has been studied for years, the warping control model for manufacturing composite wood has not been updated for about 40 years. Zhiyong Cai, a researcher at Texas A&M University, has researched wood warping and was working on a computer software program in 2003 to help manufacturers make changes in the manufacturing process so that wood does not arrive at its destination warped after it leaves the mill or factory. See also Drunken trees Forest pathology Dancing Forest Crooked Forest References Further reading WoodWeb – Warp in Drying Society of American Foresters – Warped Wood Woodworking Timber industry Deformation (mechanics) Wood-related terminology
Wood warping
[ "Materials_science", "Engineering" ]
399
[ "Deformation (mechanics)", "Materials science" ]
3,059,114
https://en.wikipedia.org/wiki/Urban%20studies
Urban studies is based on the study of the urban development of cities and regions—it makes up the theory portion of the field of urban planning. This includes studying everything from the history of city development from an architectural point of view to the impact of urban design on community development efforts. Urban studies is a major field of study used by practitioners of urban planning; it helps with the understanding of human values, development, and the interactions people have with their physical environment. History The study of cities has changed dramatically since the 1800s, with new frames of analysis being applied to the development of urban areas. The first college programs were created to observe how cities were developed, based on anthropological research of ghetto communities. In the mid-1900s, urban studies programs expanded beyond just looking at the current and historical impacts of city design and began studying how those designs impacted the future interactions of people and how to improve city development through architecture, open spaces, the interactions of people, and the different types of capital that form a community. Urban history plays an important role in this field of study because it reveals how cities have developed previously. History plays a large role in determining how cities will change in the future. Such areas change continuously as part of larger processes and create new histories that researchers study on both large-scale and individual levels. Overall, three different themes have influenced how researchers have and will continue to study urban areas:
Spatial structures: reflect how the city is physically organized
Processes that support spatial structure: question how the city's structure operates
Normative analysis: construct opinions supported by facts to promote better urban planning methods
Scholars have also researched how cities outside of the United Kingdom and the United States have developed, but only to a limited degree. Urban history previously focused mostly on how European and American cities developed over time, instead of focusing on how non-European cities developed. Additional geographic areas researched in this field include South Africa, Australia, Latin America, and India. This is changing as more research is performed in developing economies, leading to more contextual urban and infrastructural development in various parts of the world. The racial segregation of urban residents in the United States has played an important role in developing this field. One program founded to research African-American urban residents, the Harvard-MIT Joint Center for Urban Studies, was established in 1959 to study residential segregation and to support affected communities. More recently, studies related to race and urban life have started to focus on ethnographic methods to study how individuals lived in relation to the city and their respective systems as a whole. Israel Zangwill wrote one of the first books on the Ghettos of Europe and how they impacted the Jewish children who were descendants of the original residents, Children of the Ghetto (1892); he also wrote two other books about the European Ghettos. Louis Wirth was the next scholar to write about the Ghettos; he wrote about them from a sociological perspective. Louis Wirth and Robert Ezra Park also became the first sociologists to publish about the immigrant neighbourhoods in America with suggestions on their future design. Robert Ezra Park, who later worked in Chicago, was a student of Georg Simmel. 
Other famous scholars who studied segregation, American Ghettos, and impoverished neighbourhoods include Du Bois (1903), Haynes (1913), Johnson (1943), Horace Cayton (1944), Kenneth Clark (1965), and William Julius Wilson (1987). Areas of research This field is transdisciplinary because it uses theories from a variety of academic fields and places them within an urban context. A wide variety of academic fields refer to the urban environment as a studied location, among them environmental studies, economics, geography, public health, and sociology. However, scholars in this field research how specific elements contribute to how the city operates, such as how housing and transportation will change. In addition, researchers also study how residents interact within the city, such as how race and gender differences lead to social inequalities, or concentrated disadvantage in urban areas. Urban studies is a major field of study used by paraprofessional practitioners of urban planning. Criticism Researchers struggle to define basic terms precisely, such as how a city is defined, because the roles of cities change. Researchers must be careful in how they describe urban areas, as their work can be appropriated as promotional material by city boosters wanting to promote a specific city. See also Index of urban studies articles List of urban theorists Urban theory Urban ecology Urban economics Urban geography Urban planning Urban sociology Urban vitality References External links Guide to the University of Chicago Center for Urban Studies Records 1967-1968 at the University of Chicago Special Collections Research Center Poverty in the United States Race in the United States Black studies
Urban studies
[ "Engineering" ]
946
[ "Urban planning", "Architecture" ]
3,059,333
https://en.wikipedia.org/wiki/Valence%20bond%20programs
Valence bond (VB) computer programs for modern valence bond calculations:
CRUNCH, by Gordon A. Gallup and his group.
GAMESS (UK), which includes calculation of VB wave functions by the TURTLE code, due to J.H. van Lenthe.
GAMESS (US), which has links to interface VB2000 and XMVB.
MOLPRO and MOLCAS, which include code by David L. Cooper for generating spin-coupled VB wave functions from CASSCF calculations.
VB2000 version 3.0 (released 2022), by Jiabo Li, Brian Duke, David W. O. de Sousa, Rodrigo S. Bitzer and Roy McWeeny, allows the use of Group Function theory, whereby different groups can be handled by different methods (VB or Hartree–Fock). Many types of VB calculations are possible, including spin-coupled VB and CASVB. It is part of the GAMESS (US) release and can be compiled into the GAMESS (US) executable. There is a more limited stand-alone program. Earlier versions were interfaced to GAUSSIAN.
XMVB (previously known as XIAMEN), by Lingchun Song, Yirong Mo, Qianer Zhang and Wei Wu, allows several VB methods, including breathing orbital VB. The code now interfaces to GAMESS (US) in a similar manner to VB2000. Earlier versions interfaced to GAUSSIAN 98.
Note that several other programs, as well as some of those above, can do Goddard's Generalized Valence Bond (GVB) methods. GAMESS (US) does this either with or without the VB2000 interface. See also Quantum chemistry computer programs References Computational chemistry software Quantum chemistry
Valence bond programs
[ "Physics", "Chemistry" ]
374
[ "Quantum chemistry", "Computational chemistry software", "Chemistry software", "Theoretical chemistry stubs", "Quantum mechanics", "Theoretical chemistry", "Computational chemistry stubs", "Computational chemistry", " molecular", "Atomic", "Physical chemistry stubs", " and optical physics" ]
3,059,400
https://en.wikipedia.org/wiki/CR-39
Poly(allyl diglycol carbonate) (PADC) is a plastic commonly used in the manufacture of eyeglass lenses alongside the material PMMA (polymethyl methacrylate). The monomer is allyl diglycol carbonate (ADC). The term CR-39 technically refers to the ADC monomer, but is more commonly used to refer to the finished plastic. The abbreviation stands for "Columbia Resin #39", which was the 39th formula of a thermosetting plastic developed by the Columbia Resins project in 1940. The first commercial use of CR-39 monomer (ADC) was to help create glass-reinforced plastic fuel tanks for the B-17 bomber aircraft in World War II, reducing the weight and increasing the range of the bomber. After the war, the Armorlite Lens Company in California is credited with manufacturing the first CR-39 eyeglass lenses in 1947. CR-39 plastic has an index of refraction of 1.498 and an Abbe number of 58. CR-39 is now a trademarked product of PPG Industries. In an alternative use, a purified version is used to measure ionising radiation such as alpha particles and neutrons. Although CR-39 is a type of polycarbonate, it should not be confused with the general term "polycarbonate", a tough homopolymer usually made from bisphenol A. Synthesis CR-39 is made by polymerization of ADC in the presence of a diisopropyl peroxydicarbonate (IPP) initiator. The presence of the allyl groups allows the polymer to form cross-links; thus, it is a thermoset resin. The polymerization schedule of ADC monomers using IPP is generally 20 hours long with a maximum temperature of 95 °C. The elevated temperatures can be supplied using a water bath or a forced-air oven. Benzoyl peroxide (BPO) is an alternative organic peroxide that may be used to polymerize ADC. Pure benzoyl peroxide is crystalline and less volatile than diisopropyl peroxydicarbonate. Using BPO results in a polymer that has a higher yellowness index, and the peroxide takes longer to dissolve into ADC at room temperature than IPP. Applications Optics CR-39 is transparent in the visible spectrum and is almost completely opaque in the ultraviolet range. It has high abrasion resistance, in fact the highest abrasion/scratch resistance of any uncoated optical plastic. CR-39 is about half the weight of glass with an index of refraction only slightly lower than that of crown glass, and its high Abbe number yields low chromatic aberration, altogether making it an advantageous material for eyeglasses and sunglasses. A wide range of colors can be achieved by dyeing the surface or the bulk of the material. CR-39 is also resistant to most solvents and other chemicals, gamma radiation, aging, and material fatigue. It can withstand the small hot sparks from welding, something glass cannot do. It can be used continuously at temperatures up to 100 °C and for up to one hour at 130 °C. Radiation detection In the radiation detection application, CR-39 is used as a solid-state nuclear track detector (SSNTD) to detect the presence of ionising radiation. Energetic particles colliding with the polymer structure leave a trail of broken chemical bonds within the CR-39. When the plastic is immersed in a concentrated alkali solution (typically sodium hydroxide), hydroxide ions attack and break the polymer structure, etching away the bulk of the plastic at a nominally fixed rate. 
However, along the damage trails left by charged-particle interactions, the concentration of radiation damage allows the etchant to attack the polymer more rapidly than it does in the bulk, revealing the paths of the charged particles as ion tracks. The etched plastic therefore contains a permanent record not only of the location of the radiation on the plastic but also of spectroscopic information about the source. Principally used for the detection of alpha-emitting radionuclides (especially radon gas), the radiation-sensitivity properties of CR-39 are also used for proton and neutron dosimetry and, historically, cosmic-ray investigations. The ability of CR-39 to record the location of a radiation source, even at extremely low concentrations, is exploited in autoradiography studies with alpha particles, and for (comparatively cheap) detection of alpha-emitters like uranium. Typically, a thin section of a biological material is fixed against CR-39 and kept frozen for a timescale of months to years in an environment that is shielded as much as possible from possible radiological contaminants. Before etching, photographs are taken of the biological sample with the affixed CR-39 detector, with care taken to ensure that prescribed location marks on the detector are noted. After the etching process, automated or manual 'scanning' of the CR-39 is used to physically locate the ionising radiation recorded, which can then be mapped to the position of the radionuclide within the biological sample. There is no other non-destructive method for accurately identifying the location of trace quantities of radionuclides in biological samples at such low emission levels. See also Corrective lens References Plastics Polycarbonates Optical materials Particle detectors PPG Industries
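The etching behaviour described above is commonly summarized by the ratio of the etch rate along the damage trail to the bulk etch rate. As a rough illustration of standard SSNTD practice (not stated in the article itself), a track is revealed only when the etchant advances faster along the trail than through undamaged bulk, which gives a critical registration angle satisfying sin θc = Vb/Vt. The sketch below uses purely illustrative etch-rate values, and the function names are hypothetical:

```python
import math

def critical_angle_deg(v_bulk: float, v_track: float) -> float:
    """Critical registration angle (degrees, measured from the detector
    surface): below this incidence angle the surface is etched away
    faster than the track is revealed.  sin(theta_c) = V_b / V_t."""
    if v_track <= v_bulk:
        raise ValueError("no track is registered unless V_t > V_b")
    return math.degrees(math.asin(v_bulk / v_track))

def etch_efficiency(v_bulk: float, v_track: float) -> float:
    """Fraction of isotropically incident particles registered,
    eta = 1 - sin(theta_c) = 1 - V_b / V_t."""
    return 1.0 - v_bulk / v_track

# Illustrative numbers only: bulk and track etch rates in um/h.
print(critical_angle_deg(1.2, 6.0))   # ~11.5 degrees
print(etch_efficiency(1.2, 6.0))      # 0.8
```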
CR-39
[ "Physics", "Technology", "Engineering" ]
1,095
[ "Unsolved problems in physics", "Measuring instruments", "Particle detectors", "Materials", "Optical materials", "Amorphous solids", "Matter", "Plastics" ]
3,059,692
https://en.wikipedia.org/wiki/Stokes%20radius
The Stokes radius or Stokes–Einstein radius of a solute is the radius of a hard sphere that diffuses at the same rate as that solute. Named after George Gabriel Stokes, it is closely related to solute mobility, factoring in not only size but also solvent effects. A smaller ion with stronger hydration, for example, may have a greater Stokes radius than a larger ion with weaker hydration. This is because the smaller ion drags a greater number of water molecules with it as it moves through the solution. Stokes radius is sometimes used synonymously with effective hydrated radius in solution. The hydrodynamic radius, $R_H$, can refer to the Stokes radius of a polymer or other macromolecule.

Spherical case

According to Stokes' law, a perfect sphere traveling through a viscous liquid feels a drag force $F_d$ proportional to the frictional coefficient $f$:

$$F_d = f v = 6 \pi \eta r v,$$

where $\eta$ is the liquid's viscosity, $v$ is the sphere's drift speed, and $r$ is its radius. Because ionic mobility $\mu$ is directly proportional to drift speed, it is inversely proportional to the frictional coefficient:

$$\mu = \frac{z e}{f},$$

where $ze$ represents ionic charge in integer multiples $z$ of electron charges $e$. In 1905, Albert Einstein found the diffusion coefficient $D$ of an ion to be proportional to its mobility constant:

$$D = \frac{\mu k_B T}{q},$$

where $k_B$ is the Boltzmann constant and $q$ is the electrical charge. This is known as the Einstein relation. Substituting in the frictional coefficient of a perfect sphere from Stokes' law yields

$$D = \frac{k_B T}{6 \pi \eta r},$$

which can be rearranged to solve for $r$, the radius:

$$r = \frac{k_B T}{6 \pi \eta D}.$$

In non-spherical systems, the frictional coefficient is determined by the size and shape of the species under consideration.

Research applications

Stokes radii are often determined experimentally by gel-permeation or gel-filtration chromatography. They are useful in characterizing biological species due to the size-dependence of processes like enzyme-substrate interaction and membrane diffusion. The Stokes radii of sediment, soil, and aerosol particles are considered in ecological measurements and models. They likewise play a role in the study of polymer and other macromolecular systems. See also Born equation Capillary electrophoresis Dynamic light scattering Equivalent spherical diameter Einstein relation (kinetic theory) Ionic radius Ion transport number Molar conductivity References Fluid dynamics Radii
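As a minimal sketch of the rearranged relation above, the following computes a Stokes radius from a measured diffusion coefficient. The example values (a diffusion coefficient typical of a small protein, and water's viscosity near 25 °C) are illustrative assumptions, not data from the article:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_radius(diff_coeff: float, temperature: float, viscosity: float) -> float:
    """Stokes-Einstein radius r = k_B * T / (6 * pi * eta * D).

    diff_coeff  -- diffusion coefficient D in m^2/s
    temperature -- absolute temperature T in K
    viscosity   -- dynamic viscosity eta of the solvent in Pa*s
    """
    return K_B * temperature / (6.0 * math.pi * viscosity * diff_coeff)

# Assumed example: D ~ 1e-10 m^2/s in water at 25 C (eta ~ 8.9e-4 Pa*s)
# gives a Stokes radius of about 2.5 nm.
r = stokes_radius(diff_coeff=1e-10, temperature=298.15, viscosity=8.9e-4)
print(f"{r * 1e9:.2f} nm")
```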
Stokes radius
[ "Chemistry", "Engineering" ]
455
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
3,060,011
https://en.wikipedia.org/wiki/MMC-1
Mobile Module Connector 1 (MMC-1) is a 280-pin microprocessor cartridge developed by Intel for use with its mobile Pentium, Pentium MMX, Pentium II and Celeron processors. It contains the microprocessor and its associated L2 cache, a northbridge (a 430TX for the Pentium, a 443BX for the Pentium II), and a voltage regulator. External links Intel Datasheet Intel products
MMC-1
[ "Technology" ]
96
[ "Computing stubs", "Computer hardware stubs" ]
3,060,044
https://en.wikipedia.org/wiki/Process%20design
In chemical engineering, process design is the choice and sequencing of units for desired physical and/or chemical transformation of materials. Process design is central to chemical engineering, and it can be considered to be the summit of that field, bringing together all of the field's components. Process design can be the design of new facilities or it can be the modification or expansion of existing facilities. The design starts at a conceptual level and ultimately ends in the form of fabrication and construction plans. Process design is distinct from equipment design, which is closer in spirit to the design of unit operations. Processes often include many unit operations.

Documentation

Process design documents serve to define the design and they ensure that the design components fit together. They are useful in communicating ideas and plans to other engineers involved with the design, to external regulatory agencies, to equipment vendors, and to construction contractors. In order of increasing detail, process design documents include:

Block flow diagrams (BFD): Very simple diagrams composed of rectangles and lines indicating major material or energy flows.

Process flow diagrams (PFD): Typically more complex diagrams of major unit operations as well as flow lines. They usually include a material balance, and sometimes an energy balance, showing typical or design flowrates, stream compositions, and stream and equipment pressures and temperatures. The PFD is the key document in process design.

Piping and instrumentation diagrams (P&ID): Diagrams showing each and every pipeline with piping class (e.g., carbon steel or stainless steel) and pipe size (diameter). They also show valving along with instrument locations and process control schemes.

Specifications: Written design requirements of all major equipment items.

Process designers typically write operating manuals on how to start up, operate and shut down the process. They often also develop accident plans and projections of process operation on the environment. Documents are maintained after construction of the process facility for the operating personnel to refer to. The documents also are useful when modifications to the facility are planned. A primary method of developing the process documents is process flowsheeting.

Design considerations

Design conceptualization and considerations can begin once objectives are defined and constraints identified. Objectives that a design may strive to meet include throughput rate, process yield and product purity. Constraints include:

Capital cost: the investment required to implement the design, including the cost of new equipment and disposal of obsolete equipment.

Available space: the area of land or room in a building available to place new or modified equipment.

Safety concerns: risks of accidents and those posed by hazardous materials.

Environmental impact and projected effluents, emissions, and waste production.

Operating and maintenance costs.

Other factors that designers may include are reliability, redundancy, flexibility, and anticipated variability in feedstock and allowable variability in product.

Sources of design information

Designers usually do not start from scratch, especially for complex projects. Often the engineers have pilot plant data available or data from full-scale operating facilities. Other sources of information include proprietary design criteria provided by process licensors, published scientific data, laboratory experiments, and suppliers of feedstocks and utilities.
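To make the material balance behind a PFD concrete, here is a minimal sketch of a steady-state balance around a single hypothetical separation unit. The stream names, components and split fractions are all assumed for illustration only:

```python
# A PFD's material balance enforces conservation of mass around each unit.
# Toy steady-state example: one separator splits a feed stream into
# overhead and bottoms using assumed per-component split fractions.

feed = {"water": 80.0, "ethanol": 20.0}                # kg/h per component
split_to_overhead = {"water": 0.10, "ethanol": 0.85}   # assumed recoveries

overhead = {c: m * split_to_overhead[c] for c, m in feed.items()}
bottoms = {c: m - overhead[c] for c, m in feed.items()}

for name, stream in [("feed", feed), ("overhead", overhead), ("bottoms", bottoms)]:
    comps = ", ".join(f"{c}: {m:.1f} kg/h" for c, m in stream.items())
    print(f"{name:9s} total {sum(stream.values()):6.1f} kg/h ({comps})")

# Mass is conserved: feed total equals overhead plus bottoms totals.
assert abs(sum(feed.values())
           - sum(overhead.values()) - sum(bottoms.values())) < 1e-9
```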
Design process Design starts with process synthesis - the choice of technology and combinations of industrial units to achieve goals. More detailed design proceeds as other engineers and stakeholders sign off on each stage: conceptual to detailed design. Simulation software is often used by design engineers. Simulations can identify weaknesses in designs and allow engineers to choose better alternatives. However, engineers still rely on heuristics, intuition, and experience when designing a process. Human creativity is an element in complex designs. See also Chemical engineer Chemical plant Chemical process Process integration Process simulation Chemical process modeling Environmental engineering Industrial process List of chemical process simulators Process engineering Process safety Unit process Recommended chemical engineering books References External links Chemical Process Design Open Textbook (Northwestern University by Fengqi You) A General Framework for Process Synthesis, Integration, and Intensification (OSTI / Texas A&M University) Process engineering
Process design
[ "Engineering" ]
779
[ "Process engineering", "Mechanical engineering by discipline" ]
3,060,062
https://en.wikipedia.org/wiki/MMC-2
Mobile Module Connector 2 (MMC-2) is Intel's 400-pin processor cartridge used with Pentium II, Celeron and Pentium III mobile processors. It contains the CPU, a 443BX northbridge (Pentium II), off-die L2 cache (early Pentium II only) and a voltage regulator. It is the successor of MMC-1, the main differences being an AGP interface and a 100 MHz FSB for the Pentium III. This processor cartridge was widely used in laptops from the late 1990s to the early 2000s. The fastest processors in the MMC-2 form factor are the Pentium II 400/256, the Pentium III 850/256 and the Celeron 700/128. Unofficially, a Pentium III 1000/256 was achieved by removing the chip from an MMC-2 socket card and soldering a Pentium III 1000 processor onto the board. See also Notebook processor External links Intel Datasheet Intel products
MMC-2
[ "Technology" ]
191
[ "Computing stubs", "Computer hardware stubs" ]
3,060,150
https://en.wikipedia.org/wiki/Mini-Cartridge
The Mini-Cartridge or Mobile Mini-Cartridge was Intel's 240-pin multi-chip module for its mobile Pentium II processors. It contained the CPU core, as well as separate cache chips and a thermal sensor. References Chip carriers
Mini-Cartridge
[ "Technology" ]
50
[ "Computing stubs", "Computer hardware stubs" ]
3,060,924
https://en.wikipedia.org/wiki/Volcanic%20gas
Volcanic gases are gases given off by active (or, at times, by dormant) volcanoes. These include gases trapped in cavities (vesicles) in volcanic rocks, dissolved or dissociated gases in magma and lava, or gases emanating from lava, from volcanic craters or vents. Volcanic gases can also be emitted through groundwater heated by volcanic action. The sources of volcanic gases on Earth include: primordial and recycled constituents from the Earth's mantle, assimilated constituents from the Earth's crust, groundwater and the Earth's atmosphere. Substances that may become gaseous or give off gases when heated are termed volatile substances.

Composition

The principal components of volcanic gases are water vapor (H2O), carbon dioxide (CO2), sulfur either as sulfur dioxide (SO2) (high-temperature volcanic gases) or hydrogen sulfide (H2S) (low-temperature volcanic gases), nitrogen, argon, helium, neon, methane, carbon monoxide and hydrogen. Other compounds detected in volcanic gases are oxygen (meteoric), hydrogen chloride, hydrogen fluoride, hydrogen bromide, sulfur hexafluoride, carbonyl sulfide, and organic compounds. Exotic trace compounds include mercury, halocarbons (including CFCs), and halogen oxide radicals. The abundance of gases varies considerably from volcano to volcano, with volcanic activity and with tectonic setting. Water vapor is consistently the most abundant volcanic gas, normally comprising more than 60% of total emissions. Carbon dioxide typically accounts for 10 to 40% of emissions. Volcanoes located at convergent plate boundaries emit more water vapor and chlorine than volcanoes at hot spots or divergent plate boundaries. This is caused by the addition of seawater into magmas formed at subduction zones. Convergent plate boundary volcanoes also have higher H2O/H2, H2O/CO2, CO2/He and N2/He ratios than hot spot or divergent plate boundary volcanoes.

Magmatic gases and high-temperature volcanic gases

Magma contains dissolved volatile components, as described above. The solubilities of the different volatile constituents are dependent on pressure, temperature and the composition of the magma. As magma ascends towards the surface, the ambient pressure decreases, which decreases the solubility of the dissolved volatiles. Once the solubility decreases below the volatile concentration, the volatiles will tend to come out of solution within the magma (exsolve) and form a separate gas phase (the magma is super-saturated in volatiles). The gas will initially be distributed throughout the magma as small bubbles that cannot rise quickly through the magma. As the magma ascends, the bubbles grow through a combination of expansion due to decompression and growth as the solubility of volatiles in the magma decreases further, causing more gas to exsolve. Depending on the viscosity of the magma, the bubbles may start to rise through the magma and coalesce, or they remain relatively fixed in place until they begin to connect and form a continuously connected network. In the former case, the bubbles may rise through the magma and accumulate at a horizontal surface, e.g. the 'roof' of a magma chamber. In volcanoes with an open path to the surface, e.g. Stromboli in Italy, the bubbles may reach the surface, and as they pop, small explosions occur. In the latter case, the gas can flow rapidly through the continuous permeable network towards the surface. This mechanism has been used to explain activity at Santiaguito, Santa Maria volcano, Guatemala and Soufrière Hills Volcano, Montserrat.
If the gas cannot escape fast enough from the magma, it will fragment the magma into small particles of ash. The fluidised ash has a much lower resistance to motion than the viscous magma, so accelerates, causing further expansion of the gases and acceleration of the mixture. This sequence of events drives explosive volcanism. Whether gas can escape gently (passive eruptions) or not (explosive eruptions) is determined by the total volatile contents of the initial magma and the viscosity of the magma, which is controlled by its composition. The term 'closed system' degassing refers to the case where gas and its parent magma ascend together and in equilibrium with each other. The composition of the emitted gas is in equilibrium with the composition of the magma at the pressure and temperature at which the gas leaves the system. In 'open system' degassing, the gas leaves its parent magma and rises up through the overlying magma without remaining in equilibrium with that magma. The gas released at the surface has a composition that is a mass-flow average of the gas exsolved at various depths and is not representative of the magma conditions at any one depth. Molten rock (either magma or lava) near the atmosphere releases high-temperature volcanic gas (>400 °C). In explosive volcanic eruptions, the sudden release of gases from magma may cause rapid movements of the molten rock. When the magma encounters water, seawater, lake water or groundwater, it can be rapidly fragmented. The rapid expansion of gases is the driving mechanism of most explosive volcanic eruptions. However, a significant portion of volcanic gas release occurs during quasi-continuous quiescent phases of active volcanism.

Low-temperature volcanic gases and hydrothermal systems

As magmatic gas travelling upward encounters meteoric water in an aquifer, steam is produced. Latent magmatic heat can also cause meteoric waters to ascend as a vapor phase. Extended fluid-rock interaction of this hot mixture can leach constituents out of the cooling magmatic rock and also the country rock, causing volume changes and phase transitions, reactions and thus an increase in ionic strength of the upward percolating fluid. This process also decreases the fluid's pH. Cooling can cause phase separation and mineral deposition, accompanied by a shift toward more reducing conditions. At the surface expression of such hydrothermal systems, low-temperature volcanic gases (<400 °C) are either emanating as steam-gas mixtures or in dissolved form in hot springs. At the ocean floor, such hot supersaturated hydrothermal fluids form gigantic chimney structures called black smokers, at the point of emission into the cold seawater. Over geological time, this process of hydrothermal leaching, alteration, and/or redeposition of minerals in the country rock is an effective process of concentration that generates certain types of economically valuable ore deposits.

Non-explosive volcanic gas release

The gas release can occur by advection through fractures, or via diffuse degassing through large areas of permeable ground as diffuse degassing structures (DDS). At sites of advective gas loss, precipitation of sulfur and rare minerals forms sulfur deposits and small sulfur chimneys, called fumaroles. Very low-temperature (below 100 °C) fumarolic structures are also known as solfataras. Sites of cold degassing of predominantly carbon dioxide are called mofettes. Hot springs on volcanoes often show a measurable amount of magmatic gas in dissolved form.
Current emissions of volcanic gases to the atmosphere

Present-day global emissions of volcanic gases to the atmosphere can be classified as eruptive or non-eruptive. Although all volcanic gas species are emitted to the atmosphere, the emissions of CO2 (a greenhouse gas) and SO2 have received the most study. It has long been recognized that eruptions contribute much lower total SO2 emissions than passive degassing does. Fischer et al. (2019) estimated that, from 2005 to 2015, SO2 emissions during eruptions were 2.6 teragrams (Tg; 1 Tg = 10¹² g, or one million tonnes) per year and during non-eruptive periods of passive degassing were 23.2 ± 2 Tg per year. During the same time interval, CO2 emissions from volcanoes during eruptions were estimated to be 1.8 ± 0.9 Tg per year and during non-eruptive activity were 51.3 ± 5.7 Tg per year. Therefore, CO2 emissions during volcanic eruptions are less than 10% of CO2 emissions released during non-eruptive volcanic activity. The 15 June 1991 eruption of Mount Pinatubo (VEI 6) in the Philippines released a total of 18 ± 4 Tg of SO2. Such large VEI 6 eruptions are rare and only occur once every 50–100 years. The 2010 eruptions of Eyjafjallajökull (VEI 4) in Iceland emitted a total of 5.1 Tg CO2. VEI 4 eruptions occur about once per year. For comparison, Le Quéré et al. estimate that human burning of fossil fuels and production of cement released about 9.3 Gt of carbon per year from 2006 through 2015, equivalent to as much as 34.1 Gt of CO2 annually. Some recent volcanic CO2 emission estimates are higher than Fischer et al. (2019). The estimates of Burton et al. (2013) of 540 Tg CO2/year and of Werner et al. (2019) of 220–300 Tg CO2/year take into account diffuse CO2 emissions from volcanic regions.

Sensing, collection and measurement

Volcanic gases were collected and analysed as long ago as 1790 by Scipione Breislak in Italy. The composition of volcanic gases is dependent on the movement of magma within the volcano. Therefore, sudden changes in gas composition often presage a change in volcanic activity. Accordingly, a large part of hazard monitoring of volcanoes involves regular measurement of gaseous emissions. For example, an increase in the CO2 content of gases at Stromboli has been ascribed to injection of fresh volatile-rich magma at depth within the system. Volcanic gases can be sensed (measured in-situ) or sampled for further analysis. Volcanic gas sensing can be: within the gas by means of electrochemical sensors and flow-through infrared-spectroscopic gas cells, or outside the gas by ground-based or airborne remote spectroscopy, e.g., correlation spectroscopy (COSPEC), differential optical absorption spectroscopy (DOAS), or Fourier transform infrared spectroscopy (FTIR). Sulphur dioxide (SO2) absorbs strongly in the ultraviolet wavelengths and has low background concentrations in the atmosphere. These characteristics make sulphur dioxide a good target for volcanic gas monitoring. It can be detected by satellite-based instruments, which allow for global monitoring, and by ground-based instruments such as DOAS. DOAS arrays are placed near some well-monitored volcanoes and used to estimate the flux of SO2 emitted. The Multi-Component Gas Analyzer System (Multi-GAS) is also used to remotely measure CO2, SO2 and H2S. The fluxes of other gases are usually estimated by measuring the ratios of different gases within the volcanic plume, e.g.
by FTIR, electrochemical sensors at the volcano crater rim, or direct sampling, and multiplying the ratio of the gas of interest to SO2 by the SO2 flux. Direct sampling of volcanic gases is often done by a method involving an evacuated flask with caustic solution, first used by Robert W. Bunsen (1811–1899) and later refined by the German chemist Werner F. Giggenbach (1937–1997), dubbed the Giggenbach bottle. Other methods include collection in evacuated empty containers, in flow-through glass tubes, in gas wash bottles (cryogenic scrubbers), on impregnated filter packs and on solid adsorbent tubes. Analytical techniques for gas samples comprise gas chromatography with thermal conductivity detection (TCD), flame ionization detection (FID) and mass spectrometry (GC-MS) for gases, and various wet chemical techniques for dissolved species (e.g., acidimetric titration for dissolved CO2, and ion chromatography for sulfate, chloride, fluoride). The trace metal, trace organic and isotopic composition is usually determined by different mass spectrometric methods.

Volcanic gases and volcano monitoring

Certain constituents of volcanic gases may show very early signs of changing conditions at depth, making them a powerful tool for predicting imminent unrest. Used in conjunction with monitoring data on seismicity and deformation, correlative monitoring gains great efficiency. Volcanic gas monitoring is a standard tool of any volcano observatory. Unfortunately, the most precise compositional data still require dangerous field sampling campaigns. However, remote sensing techniques have advanced tremendously through the 1990s. The Deep Earth Carbon Degassing Project is employing Multi-GAS remote sensing to monitor 9 volcanoes on a continuous basis.

Hazards

Volcanic gases were directly responsible for approximately 3% of all volcano-related deaths of humans between 1900 and 1986. Some volcanic gases kill by acidic corrosion; others kill by asphyxiation. Some volcanic gases, including sulfur dioxide, hydrogen chloride, hydrogen sulfide and hydrogen fluoride, react with other atmospheric particles to form aerosols. Gallery See also References External links USGS Volcano Hazards Program: Volcanic Gases and Their Effects IVHHN; USGS: The Health Hazards of Volcanic and Geothermal Gases. A Guide for the Public. Volcanic degassing Gases Greenhouse gases
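As a rough illustration of the ratio method just described, the sketch below scales an SO2 flux by a plume CO2/SO2 molar ratio to estimate a CO2 flux. The flux and ratio values are hypothetical, chosen only to show the unit handling:

```python
# Ratio method: SO2 flux (e.g. from DOAS) times a plume CO2/SO2 molar
# ratio (e.g. from Multi-GAS), converted between masses via molar masses.

M_SO2 = 64.066  # g/mol
M_CO2 = 44.009  # g/mol

def co2_flux(so2_flux: float, co2_so2_molar_ratio: float) -> float:
    """CO2 mass flux from an SO2 mass flux (same mass unit, e.g. t/day)
    and a measured CO2/SO2 molar ratio in the plume."""
    return so2_flux / M_SO2 * co2_so2_molar_ratio * M_CO2

# Hypothetical quiescent degassing: 500 t/day SO2, CO2/SO2 molar ratio 4.
print(f"{co2_flux(500.0, 4.0):.0f} t/day CO2")   # ~1374 t/day
```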
Volcanic gas
[ "Physics", "Chemistry", "Environmental_science" ]
2,706
[ "Matter", "Environmental chemistry", "Phases of matter", "Greenhouse gases", "Statistical mechanics", "Gases" ]
3,060,999
https://en.wikipedia.org/wiki/Shell%20builtin
In computing, a shell builtin is a command or a function, called from a shell, that is executed directly in the shell itself, instead of an external executable program which the shell would load and execute. Shell builtins work significantly faster than external programs, because there is no program loading overhead. However, their code is inherently present in the shell, and thus modifying or updating them requires modifications to the shell. Therefore, shell builtins are usually used for simple, almost trivial, functions, such as text output. Because of the nature of some operating systems, some functions of the systems must necessarily be implemented as shell builtins. The most notable example is the cd command, which changes the working directory of the shell. Since each executable program runs in a separate process, and working directories are specific to each process, loading cd as an external program would not affect the working directory of the shell that loaded it. See also BusyBox Internal DOS command References External links List of special shell builtin commands List of MS-DOS internal commands Command shells
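The point about cd can be demonstrated directly: a child process may change its own working directory, but the parent's directory is untouched. A minimal sketch, assuming a POSIX sh is available on the system:

```python
# Demonstrates why cd must be a shell builtin: the working directory is
# per-process state, so a child process changing its own directory
# cannot affect the parent that launched it.
import os
import subprocess

print("parent cwd before:", os.getcwd())

# The child shell changes ITS working directory, prints it, then exits.
subprocess.run(["sh", "-c", "cd /tmp && pwd"], check=True)

# The parent's working directory is unchanged.
print("parent cwd after: ", os.getcwd())
```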
Shell builtin
[ "Technology" ]
217
[ "Windows commands", "Computing stubs", "Computing commands", "Operating system stubs" ]
3,061,740
https://en.wikipedia.org/wiki/Schubert%20calculus
In mathematics, Schubert calculus is a branch of algebraic geometry introduced in the nineteenth century by Hermann Schubert in order to solve various counting problems of projective geometry and, as such, is viewed as part of enumerative geometry. Giving it a more rigorous foundation was the aim of Hilbert's 15th problem. It is related to several more modern concepts, such as characteristic classes, and both its algorithmic aspects and applications remain of current interest. The term Schubert calculus is sometimes used to mean the enumerative geometry of linear subspaces of a vector space, which is roughly equivalent to describing the cohomology ring of Grassmannians. Sometimes it is used to mean the more general enumerative geometry of algebraic varieties that are homogeneous spaces of simple Lie groups. Even more generally, Schubert calculus is sometimes understood as encompassing the study of analogous questions in generalized cohomology theories.

The objects introduced by Schubert are the Schubert cells, which are locally closed sets in a Grassmannian defined by conditions of incidence of a linear subspace in projective space with a given flag. For further details see Schubert variety. The intersection theory of these cells, which can be seen as the product structure in the cohomology ring of the Grassmannian, consisting of associated cohomology classes, allows in particular the determination of cases in which the intersections of cells result in a finite set of points. A key result is that the Schubert cells (or rather, the classes of their Zariski closures, the Schubert cycles or Schubert varieties) span the whole cohomology ring. The combinatorial aspects mainly arise in relation to computing intersections of Schubert cycles. Lifted from the Grassmannian, which is a homogeneous space, to the general linear group that acts on it, similar questions are involved in the Bruhat decomposition and classification of parabolic subgroups (as block triangular matrices).

Construction

Schubert calculus can be constructed using the Chow ring of the Grassmannian, where the generating cycles are represented by geometrically defined data. Denote the Grassmannian of $k$-planes in a fixed $n$-dimensional vector space $V$ as $\mathbf{Gr}(k,V)$, and its Chow ring as $A^*(\mathbf{Gr}(k,V))$. (Note that the Grassmannian is sometimes denoted $\mathbf{Gr}(k,n)$ if the vector space isn't explicitly given, or as $\mathbb{G}(k-1,n-1)$ if the ambient space and its $k$-dimensional subspaces are replaced by their projectizations.) Choosing an (arbitrary) complete flag

$$\mathcal{V}: \quad V_1 \subset V_2 \subset \cdots \subset V_n = V, \qquad \dim V_i = i,$$

to each weakly decreasing $k$-tuple of integers $\lambda = (\lambda_1, \dots, \lambda_k)$, where

$$n-k \ge \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_k \ge 0,$$

i.e., to each partition of weight $|\lambda| = \sum_{i=1}^{k} \lambda_i$ whose Young diagram fits into the $k \times (n-k)$ rectangular one for the partition $(n-k)^k$, we associate a Schubert variety (or Schubert cycle) $X_\lambda(\mathcal{V}) \subset \mathbf{Gr}(k,V)$, defined as

$$X_\lambda(\mathcal{V}) = \{ w \in \mathbf{Gr}(k,V) \mid \dim(w \cap V_{n-k+i-\lambda_i}) \ge i, \ 1 \le i \le k \}.$$

This is the closure, in the Zariski topology, of the Schubert cell

$$E_\lambda(\mathcal{V}) = \{ w \in \mathbf{Gr}(k,V) \mid \dim(w \cap V_j) = i \ \text{for} \ n-k+i-\lambda_i \le j \le n-k+i-\lambda_{i+1}, \ 1 \le i \le k \}$$

(with the convention $\lambda_{k+1} := 0$), which is used when considering cellular homology instead of the Chow ring. The latter are disjoint affine spaces, of dimension $k(n-k) - |\lambda|$, whose union is $\mathbf{Gr}(k,V)$. An equivalent characterization of the Schubert cell may be given in terms of a basis $(e_1, \dots, e_n)$ adapted to the flag, with $V_i = \operatorname{span}(e_1, \dots, e_i)$: then $E_\lambda(\mathcal{V})$ consists of those $k$-dimensional subspaces $w$ that have a basis $(w_1, \dots, w_k)$ consisting of elements

$$w_i \in V_{n-k+i-\lambda_i} \setminus V_{n-k+i-\lambda_i-1}$$

of the subspaces $V_{n-k+i-\lambda_i}$. Since the homology class $[X_\lambda(\mathcal{V})]$, called a Schubert class, does not depend on the choice of complete flag $\mathcal{V}$, it can be written as

$$\sigma_\lambda := [X_\lambda] \in A^*(\mathbf{Gr}(k,V)).$$

It can be shown that these classes are linearly independent and generate the Chow ring as their linear span. The associated intersection theory is called Schubert calculus. For a given sequence $\lambda = (\lambda_1, \dots, \lambda_k)$ with $\lambda_m > 0$ and $\lambda_{m+1} = \cdots = \lambda_k = 0$, the Schubert class is usually just denoted $\sigma_{(\lambda_1, \dots, \lambda_m)}$.
The Schubert classes given by a single integer $b$, $\sigma_b = \sigma_{(b, 0, \dots, 0)}$ (i.e., a horizontal partition), are called special classes. Using the Giambelli formula below, all the Schubert classes can be generated from these special classes.

Other notational conventions

In some sources, the Schubert cells and Schubert varieties are labelled differently, by the complementary partition $\hat\lambda$, with parts $\hat\lambda_i = n - k - \lambda_{k+1-i}$, whose Young diagram is the complement of the one for $\lambda$ within the $k \times (n-k)$ rectangular one (reversed, both horizontally and vertically). Another labelling convention for $E_\lambda$ and $X_\lambda$ is $E_L$ and $X_L$, respectively, where $L = (L_1, \dots, L_k)$ is the multi-index defined by $L_i = n - k + i - \lambda_i$. The integers $L_i$ are the pivot locations of the representations of elements of $E_\lambda$ in reduced matricial echelon form.

Explanation

In order to explain the definition, consider a generic $k$-plane $w$. It will have only a zero intersection with $V_j$ for $j \le n-k$, whereas $\dim(w \cap V_j) = j - (n-k)$ for $j \ge n-k$. For example, in $\mathbf{Gr}(4, 9)$, a $4$-plane $w$ is the solution space of a system of five independent homogeneous linear equations. These equations will generically span when restricted to a subspace $V_j$ with $j = \dim V_j \le 5$, in which case the solution space (the intersection of $V_j$ with $w$) will consist only of the zero vector. However, if $\dim V_j + \dim w > n$, then $V_j$ and $w$ will necessarily have nonzero intersection. For example, the expected dimension of intersection of $V_6$ and $w$ is $1$, the intersection of $V_7$ and $w$ has expected dimension $2$, and so on. The definition of a Schubert variety states that the first value of $j$ with $\dim(w \cap V_j) \ge i$ is generically smaller than the expected value $n-k+i$ by the parameter $\lambda_i$. The $k$-planes given by these constraints then define special subvarieties of $\mathbf{Gr}(k,n)$.

Properties

Inclusion

There is a partial ordering on all $k$-tuples where $\lambda \ge \mu$ if $\lambda_i \ge \mu_i$ for every $i$. This gives the inclusion of Schubert varieties

$$X_\lambda \subset X_\mu \iff \lambda \ge \mu,$$

showing an increase of the indices corresponds to an even greater specialization of subvarieties.

Dimension formula

A Schubert variety $X_\lambda$ has codimension equal to the weight $|\lambda|$ of the partition $\lambda$; equivalently, its dimension in $\mathbf{Gr}(k,n)$ is the weight $k(n-k) - |\lambda|$ of the complementary partition $\hat\lambda$ in the $k \times (n-k)$ dimensional rectangular Young diagram. This is stable under inclusions of Grassmannians. That is, the inclusion

$$i : \mathbf{Gr}(k, n) \hookrightarrow \mathbf{Gr}(k, n+1)$$

defined, for $w \in \mathbf{Gr}(k,n)$, by $i(w) = w \subset \mathbb{C}^n \subset \mathbb{C}^{n+1}$, has the property $i^*(\sigma_\lambda) = \sigma_\lambda$, and the inclusion

$$\hat{i} : \mathbf{Gr}(k, n) \hookrightarrow \mathbf{Gr}(k+1, n+1), \qquad \hat{i}(w) = w \oplus \mathbb{C} e_{n+1},$$

defined by adding the extra basis element $e_{n+1}$ to each $k$-plane, giving a $(k+1)$-plane, does as well: $\hat{i}^*(\sigma_\lambda) = \sigma_\lambda$. Thus, if $E_\lambda$ and $X_\lambda$ are a cell and a subvariety in the Grassmannian $\mathbf{Gr}(k,n)$, they may also be viewed as a cell and a subvariety within the Grassmannian $\mathbf{Gr}(k', n')$ for any pair $(k', n')$ with $k' \ge k$ and $n' - k' \ge n - k$.

Intersection product

The intersection product was first established using the Pieri and Giambelli formulas.

Pieri formula

In the special case $\mu = (b, 0, \dots, 0)$, there is an explicit formula of the product of $\sigma_b$ with an arbitrary Schubert class $\sigma_\lambda$ given by

$$\sigma_b \cdot \sigma_\lambda = \sum_{\substack{|\mu| = |\lambda| + b \\ \mu_1 \ge \lambda_1 \ge \mu_2 \ge \lambda_2 \ge \cdots \ge \mu_k \ge \lambda_k}} \sigma_\mu,$$

where $|\lambda|$, $|\mu|$ are the weights of the partitions. This is called the Pieri formula, and can be used to determine the intersection product of any two Schubert classes when combined with the Giambelli formula. For example,

$$\sigma_1 \cdot \sigma_1 = \sigma_2 + \sigma_{1,1}$$

and

$$\sigma_2 \cdot \sigma_{1,1} = \sigma_{3,1} + \sigma_{2,1,1}.$$

Giambelli formula

Schubert classes for partitions of any length $\ell(\lambda) \le k$ can be expressed as the determinant of an $\ell(\lambda) \times \ell(\lambda)$ matrix having the special classes as entries:

$$\sigma_\lambda = \det\left( \sigma_{\lambda_i + j - i} \right)_{1 \le i, j \le \ell(\lambda)},$$

with the conventions $\sigma_0 = 1$ and $\sigma_m = 0$ for $m < 0$. This is known as the Giambelli formula. It has the same form as the first Jacobi-Trudi identity, expressing arbitrary Schur functions $s_\lambda$ as determinants in terms of the complete symmetric functions $h_j$. For example,

$$\sigma_{2,1} = \det\begin{pmatrix} \sigma_2 & \sigma_3 \\ 1 & \sigma_1 \end{pmatrix} = \sigma_1 \sigma_2 - \sigma_3$$

and

$$\sigma_{2,2} = \det\begin{pmatrix} \sigma_2 & \sigma_3 \\ \sigma_1 & \sigma_2 \end{pmatrix} = \sigma_2^2 - \sigma_1 \sigma_3.$$

General case

The intersection product between any pair of Schubert classes $\sigma_\lambda$, $\sigma_\mu$ is given by

$$\sigma_\lambda \cdot \sigma_\mu = \sum_{|\nu| = |\lambda| + |\mu|} c^{\nu}_{\lambda\mu} \sigma_\nu,$$

where the $c^{\nu}_{\lambda\mu}$ are the Littlewood-Richardson coefficients. The Pieri formula is a special case of this, when $\mu = (b)$ has length $\ell(\mu) = 1$.
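The Pieri formula lends itself to a short computation. The following sketch (the function name and interface are illustrative, not from any library) enumerates the partitions appearing in the product of a special class with an arbitrary Schubert class inside Gr(k, n), encoding both the horizontal-strip condition and the requirement that diagrams fit in the k x (n-k) rectangle:

```python
def pieri(b, lam, k, n):
    """Partitions mu appearing in sigma_b * sigma_lam inside A*(Gr(k, n))."""
    lam = list(lam) + [0] * (k - len(lam))   # pad lambda to k parts
    results = []

    def build(i, remaining, mu):
        if i == k:
            if remaining == 0:
                results.append(tuple(p for p in mu if p > 0))
            return
        # "No two added boxes in the same column" forces mu_i <= lam_{i-1};
        # the first row is bounded by the rectangle width n - k.
        cap = (n - k) if i == 0 else lam[i - 1]
        for m in range(lam[i], min(cap, lam[i] + remaining) + 1):
            build(i + 1, remaining - (m - lam[i]), mu + [m])

    build(0, b, [])
    return results

# sigma_1 * sigma_1 = sigma_{1,1} + sigma_2 in Gr(2, 4):
print(pieri(1, (1,), 2, 4))        # [(1, 1), (2,)]
# sigma_2 * sigma_{1,1} = sigma_{2,1,1} + sigma_{3,1} in Gr(3, 6):
print(pieri(2, (1, 1), 3, 6))      # [(2, 1, 1), (3, 1)]
```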
Relation with Chern classes

There is an easy description of the cohomology ring, or the Chow ring, of the Grassmannian using the Chern classes of two natural vector bundles over $\mathbf{Gr}(k,V)$. We have the exact sequence of vector bundles over $\mathbf{Gr}(k,V)$

$$0 \to S \to \underline{V} \to Q \to 0,$$

where $S$ is the tautological bundle of rank $k$ whose fiber, over any element $w \in \mathbf{Gr}(k,V)$, is the subspace $w \subset V$ itself, $\underline{V} = \mathbf{Gr}(k,V) \times V$ is the trivial vector bundle of rank $n$, with $V$ as fiber, and $Q$ is the quotient vector bundle of rank $n-k$, with $V/w$ as fiber. The Chern classes of the bundles $S$ and $Q$ are

$$c_i(S) = (-1)^i \sigma_{(1)^i},$$

where $(1)^i$ is the partition whose Young diagram consists of a single column of length $i$, and

$$c_i(Q) = \sigma_i.$$

The tautological sequence then gives the presentation of the Chow ring as

$$A^*(\mathbf{Gr}(k,V)) = \mathbb{Z}[c_1(S), \dots, c_k(S), c_1(Q), \dots, c_{n-k}(Q)] / (c(S)c(Q) - 1).$$

One of the classical examples analyzed is the Grassmannian $\mathbf{Gr}(2,4)$ since it parameterizes lines in $\mathbb{P}^3$. Using the Chow ring $A^*(\mathbf{Gr}(2,4))$, Schubert calculus can be used to compute the number of lines on a cubic surface.

Chow ring

The Chow ring has the presentation

$$A^*(\mathbf{Gr}(2,4)) = \frac{\mathbb{Z}[\sigma_1, \sigma_2]}{(\sigma_1^3 - 2\sigma_1\sigma_2, \ \sigma_1^2\sigma_2 - \sigma_2^2)}$$

and as a graded Abelian group it is given by

$$A^0 = \mathbb{Z} \cdot 1, \quad A^1 = \mathbb{Z} \cdot \sigma_1, \quad A^2 = \mathbb{Z} \cdot \sigma_2 \oplus \mathbb{Z} \cdot \sigma_{1,1}, \quad A^3 = \mathbb{Z} \cdot \sigma_{2,1}, \quad A^4 = \mathbb{Z} \cdot \sigma_{2,2}.$$

Lines on a cubic surface

Recall that a line in $\mathbb{P}^3$ gives a dimension $2$ subspace of $\mathbb{C}^4$, hence an element of $\mathbb{G}(1,3) \cong \mathbf{Gr}(2,4)$. Also, the equation of a line can be given as a section of $\Gamma(\mathbb{G}(1,3), S^\vee)$. Since a cubic surface $X$ is given as a generic homogeneous cubic polynomial, this is given as a generic section $s \in \Gamma(\mathbb{G}(1,3), \operatorname{Sym}^3(S^\vee))$. A line $L \subset \mathbb{P}^3$ is a subvariety of $X$ if and only if the section vanishes on $[L] \in \mathbb{G}(1,3)$. Therefore, the Euler class of $\operatorname{Sym}^3(S^\vee)$ can be integrated over $\mathbb{G}(1,3)$ to get the number of points where the generic section vanishes on $\mathbb{G}(1,3)$. In order to get the Euler class, the total Chern class of $S^\vee$ must be computed, which is given as

$$c(S^\vee) = 1 + \sigma_1 + \sigma_{1,1}.$$

The splitting formula then reads as the formal equation

$$c(S^\vee) = (1 + \alpha)(1 + \beta) = 1 + \alpha + \beta + \alpha\beta,$$

where $c(\mathcal{L}) = 1 + \alpha$ and $c(\mathcal{M}) = 1 + \beta$ for formal line bundles $\mathcal{L}, \mathcal{M}$. The splitting equation gives the relations $\sigma_1 = \alpha + \beta$ and $\sigma_{1,1} = \alpha\beta$. Since $\operatorname{Sym}^3(S^\vee)$ can be viewed as the direct sum of formal line bundles

$$\operatorname{Sym}^3(S^\vee) = \mathcal{L}^{\otimes 3} \oplus (\mathcal{L}^{\otimes 2} \otimes \mathcal{M}) \oplus (\mathcal{L} \otimes \mathcal{M}^{\otimes 2}) \oplus \mathcal{M}^{\otimes 3},$$

whose total Chern class is

$$c(\operatorname{Sym}^3(S^\vee)) = (1 + 3\alpha)(1 + 2\alpha + \beta)(1 + \alpha + 2\beta)(1 + 3\beta),$$

it follows that the Euler class is

$$c_4(\operatorname{Sym}^3(S^\vee)) = 3\alpha(2\alpha + \beta)(\alpha + 2\beta)3\beta = 9\alpha\beta\big(2(\alpha+\beta)^2 + \alpha\beta\big) = 9\sigma_{1,1}(2\sigma_1^2 + \sigma_{1,1}),$$

using the fact that $\alpha\beta = \sigma_{1,1}$ and $\alpha + \beta = \sigma_1$. Since $\sigma_{2,2}$ is the top class, the integral is then

$$\int_{\mathbb{G}(1,3)} 9\sigma_{1,1}(2\sigma_1^2 + \sigma_{1,1}) = 27,$$

using the relations $\sigma_1^2 = \sigma_2 + \sigma_{1,1}$, $\sigma_{1,1}\sigma_2 = 0$ and $\sigma_{1,1}^2 = \sigma_{2,2}$. Therefore, there are $27$ lines on a cubic surface. See also Enumerative geometry Chow ring Intersection theory Grassmannian Giambelli's formula Pieri's formula Chern class Quintic threefold Mirror symmetry conjecture References Summer school notes http://homepages.math.uic.edu/~coskun/poland.html Phillip Griffiths and Joseph Harris (1978), Principles of Algebraic Geometry, Chapter 1.5 David Eisenbud and Joseph Harris (2016), "3264 and All That: A Second Course in Algebraic Geometry". Algebraic geometry Topology of homogeneous spaces
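The 27-lines computation can also be checked mechanically. The sketch below hardcodes the multiplication table of A*(Gr(2,4)) in the Schubert basis, which follows from the Pieri and Giambelli formulas above, and evaluates 9·σ11·(2σ1² + σ11); only the products needed for this particular check are tabulated:

```python
from collections import defaultdict
from itertools import product

# Products in the Schubert basis of A*(Gr(2,4)):
# s1*s1 = s2 + s11; s1*s2 = s1*s11 = s21; s1*s21 = s2*s2 = s11*s11 = s22;
# s2*s11 = 0.  Only the entries needed for this check are listed.
TABLE = {
    ("s1", "s1"): {"s2": 1, "s11": 1},
    ("s1", "s2"): {"s21": 1},
    ("s1", "s11"): {"s21": 1},
    ("s1", "s21"): {"s22": 1},
    ("s2", "s2"): {"s22": 1},
    ("s2", "s11"): {},
    ("s11", "s11"): {"s22": 1},
}

def mul(x, y):
    """Multiply classes given as {basis element: integer coefficient}."""
    out = defaultdict(int)
    for (a, ca), (b, cb) in product(x.items(), y.items()):
        key = (a, b) if (a, b) in TABLE else (b, a)
        for c, m in TABLE[key].items():
            out[c] += ca * cb * m
    return dict(out)

s1, s11 = {"s1": 1}, {"s11": 1}

s1_sq = mul(s1, s1)                              # s2 + s11
inner = {c: 2 * m for c, m in s1_sq.items()}     # 2 * s1^2
inner["s11"] += 1                                # ... + s11
euler = {c: 9 * m for c, m in mul(s11, inner).items()}
print(euler)   # {'s22': 27}: 27 lines on a cubic surface
```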
Schubert calculus
[ "Mathematics" ]
1,857
[ "Fields of abstract algebra", "Algebraic geometry" ]
3,061,787
https://en.wikipedia.org/wiki/Methane%20%28data%20page%29
This page provides supplementary chemical data on methane. Material Safety Data Sheet The handling of this chemical may incur notable safety precautions. Structure and properties Thermodynamic properties Vapor pressure of liquid Table data obtained from CRC Handbook of Chemistry and Physics 44th ed. Annotation "(s)" indicates equilibrium temperature of vapor over solid. Otherwise temperature is equilibrium of vapor over liquid. Note that these are all negative temperature values. Spectral data References Cited sources Chemical data pages Methane Chemical data pages cleanup
Methane (data page)
[ "Chemistry" ]
101
[ "Greenhouse gases", "Methane", "Chemical data pages", "nan" ]
3,061,815
https://en.wikipedia.org/wiki/Homogeneous%20broadening
Homogeneous broadening is a type of emission spectrum broadening in which all atoms radiating from a specific level under consideration radiate with equal opportunity. If an optical emitter (e.g. an atom) shows homogeneous broadening, its spectral linewidth is its natural linewidth, with a Lorentzian profile.

Broadening in laser systems

Broadening in laser physics is a physical phenomenon that affects the spectroscopic line shape of the laser emission profile. The laser emission is due to the (excitation and subsequent) relaxation of a quantum system (atom, molecule, ion, etc.) between an excited state (higher in energy) and a lower one. These states can be thought of as the eigenstates of the energy operator. The difference in energy between these states is proportional to the frequency/wavelength of the photon emitted. Since this energy difference fluctuates, the frequency/wavelength of the "macroscopic emission" (the beam) will have a certain width (i.e. it will be "broadened" with respect to the "ideal" perfectly monochromatic emission). Depending on the nature of the fluctuation, there can be two types of broadening. If the fluctuation in the frequency/wavelength is due to a phenomenon that is the same for each quantum emitter, there is homogeneous broadening, while if each quantum emitter has a different type of fluctuation, the broadening is inhomogeneous. Examples of situations where the fluctuation is the same for each system (homogeneous broadening) are natural or lifetime broadening, and collisional or pressure broadening. In these cases each system is affected "on average" in the same way (e.g. by the collisions due to the pressure). The most frequent situation in solid state systems where the fluctuation is different for each system (inhomogeneous broadening) is when, because of the presence of dopants, the local electric field is different for each emitter, and so the Stark effect changes the energy levels in an inhomogeneous way. The homogeneously broadened emission line will have a Lorentzian profile (i.e. it will be best fitted by a Lorentzian function), while the inhomogeneously broadened emission will have a Gaussian profile. One or more phenomena may be present at the same time, but if one has a wider fluctuation, it will be the one responsible for the character of the broadening. These effects are not limited to laser systems, or even to optical spectroscopy. They are relevant in magnetic resonance as well, where the frequency range is in the radiofrequency region for NMR, and one can also refer to these effects in EPR, where the lineshape is observed at fixed (microwave) frequency in a magnetic field range.

Semiconductors

In semiconductors, if all oscillations have the same eigenfrequency $\omega_0$ and the broadening in the imaginary part of the dielectric function results only from a finite damping $\gamma$, the system is said to be homogeneously broadened, and has a Lorentzian profile. If the system contains many oscillators with slightly different frequencies about $\omega_0$, however, then the system is inhomogeneously broadened. See also Homogeneity (physics) Voigt profile Spectral line shape References Laser science Atomic, molecular, and optical physics
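Since the two regimes are distinguished by their line shapes, a minimal sketch of the normalized Lorentzian and Gaussian profiles (standard textbook forms, not taken from this article) makes the difference concrete; note how much more slowly the Lorentzian wings decay:

```python
import math

def lorentzian(omega, omega0, gamma):
    """Normalized Lorentzian profile (homogeneous broadening):
    L(w) = (gamma / pi) / ((w - w0)^2 + gamma^2), with HWHM gamma."""
    return (gamma / math.pi) / ((omega - omega0) ** 2 + gamma ** 2)

def gaussian(omega, omega0, sigma):
    """Normalized Gaussian profile (inhomogeneous broadening),
    with standard deviation sigma."""
    return math.exp(-((omega - omega0) ** 2) / (2 * sigma ** 2)) / (
        sigma * math.sqrt(2 * math.pi))

# Compare the two shapes at increasing detuning from line center:
for w in (0.0, 1.0, 3.0, 10.0):
    print(f"detuning {w:5.1f}: L = {lorentzian(w, 0.0, 1.0):.2e}, "
          f"G = {gaussian(w, 0.0, 1.0):.2e}")
```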
Homogeneous broadening
[ "Physics", "Chemistry" ]
707
[ "Atomic", " molecular", " and optical physics" ]
3,062,336
https://en.wikipedia.org/wiki/Human%20right%20to%20water%20and%20sanitation
The human right to water and sanitation (HRWS) is a principle stating that clean drinking water and sanitation are a universal human right because of their high importance in sustaining every person's life. It was recognized as a human right by the United Nations General Assembly on 28 July 2010. The HRWS has been recognized in international law through human rights treaties, declarations and other standards. Some commentators have based an argument for the existence of a universal human right to water on grounds independent of the 2010 General Assembly resolution, such as Article 11.1 of the International Covenant on Economic, Social and Cultural Rights (ICESCR); among those commentators, those who accept the existence of international ius cogens and consider it to include the Covenant's provisions hold that such a right is a universally binding principle of international law. Other treaties that explicitly recognize the HRWS include the 1979 Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) and the 1989 Convention on the Rights of the Child (CRC). The clearest definition of the human right to water was issued by the United Nations Committee on Economic, Social and Cultural Rights in General Comment 15, drafted in 2002. It was a non-binding interpretation that access to water was a condition for the enjoyment of the right to an adequate standard of living, inextricably related to the right to the highest attainable standard of health, and therefore a human right. It stated: "The human right to water entitles everyone to sufficient, safe, acceptable, physically accessible and affordable water for personal and domestic uses." The first resolutions about the HRWS were passed by the UN General Assembly and the UN Human Rights Council in 2010. They stated that there was a human right to sanitation connected to the human right to water, since the lack of sanitation reduces the quality of water downstream, so subsequent discussions have continued emphasizing both rights together. In July 2010, United Nations (UN) General Assembly Resolution 64/292 reasserted the human right to receive safe, affordable, and clean accessible water and sanitation services. The General Assembly acknowledged safe and clean drinking water and sanitation as a human right, essential to the full enjoyment of life and all human rights. General Assembly Resolution 64/292's assertion of a free human right of access to safe and clean drinking water and sanitation raises issues regarding governments' rights of control over, and responsibilities for securing, that water and sanitation. The United Nations Development Programme has stated that broad recognition of the significance of accessing dependable and clean water and sanitation services will promote wide expansion of the achievement of a healthy and fulfilling life. A revised UN resolution in 2015 highlighted that the two rights were separate but equal. The HRWS obliges governments to ensure that people can enjoy quality, available, acceptable, accessible, and affordable water and sanitation. Affordability of water considers the extent to which the cost of water becomes prohibitive, such that it requires one to sacrifice access to other essential goods and services. Generally, a rule of thumb for the affordability of water is that it should not surpass 3–5% of a household's income.
Accessibility of water considers the time taken, the convenience of reaching the source and the risks involved in getting to the source of water. Water must be accessible to every citizen, meaning that the source should be no further than 1,000 meters (3,280 feet) away, with a collection time within 30 minutes. Availability of water considers whether the supply of water is available in adequate amounts, reliable and sustainable. Quality of water considers whether water is safe for consumption, including for drinking or other activities. For acceptability of water, it must not have any odor and should not consist of any color. The ICESCR requires signatory countries to progressively achieve and respect all human rights, including those of water and sanitation. They should work quickly and efficiently to increase access and improve service.

International context

The WHO/UNICEF Joint Monitoring Programme for Water Supply and Sanitation reported that 663 million people did not have access to improved sources of drinking water and more than 2.4 billion people lacked access to basic sanitation services in 2015. Access to clean water is a major problem for many parts of the world. Acceptable sources include "household connections, public standpipes, boreholes, protected dug wells, protected springs and rainwater collections." Although 9 percent of the global population lacks access to water, there are "regions particularly delayed, such as Sub-Saharan Africa". The UN further emphasizes that "about 1.5 million children under the age of five die each year and 443 million school days are lost because of water- and sanitation-related diseases." In 2022, over 2 billion people, 25% of the world's population, lacked consistent access to clean drinking water. 4.2 billion lacked access to safe sanitation services. By 2024, new estimates are much higher, with 4.4 billion people in low- and middle-income countries lacking access to safe household drinking water.

Legal foundations and recognition

The International Covenant on Economic, Social and Cultural Rights (ICESCR) of 1966 codified the economic, social, and cultural rights found within the Universal Declaration of Human Rights (UDHR) of 1948. Neither of these early documents explicitly recognized human rights to water and sanitation. Several later international human rights conventions, however, had provisions that explicitly recognized rights to water and sanitation. The 1979 Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) has Article 14.2 that states that "parties shall take all appropriate measures to eliminate discrimination against women in rural areas to ensure, on a basis of equality of men and women, that they participate in and benefit from rural development and, in particular shall ensure to women the right: ... (h) To enjoy adequate living conditions, particularly in relation to housing, sanitation, electricity and water supply, transport and communications." The 1989 Convention on the Rights of the Child (CRC) has Article 24 that provides that "parties recognize the right of the child to the enjoyment of the highest attainable standard of health and to facilities for the treatment of illness and rehabilitation of health ... 2. States parties shall pursue full implementation of this right and, in particular, shall take appropriate measures... (c) To combat disease and malnutrition, including within the framework of primary health care, through, inter alia... the provision of adequate nutritious foods and clean drinking water..."
The 2006 Convention on the Rights of Persons with Disabilities (CRPD) has Article 28(2)(a) that requires that "parties recognize the right of persons with disabilities to social protection and to the enjoyment of that right without discrimination on the basis of disability, and shall take appropriate steps to safeguard and promote the realization of this right, including measures to ensure equal access by persons with disabilities to clean water services, and to ensure access to appropriate and affordable services, devices and other assistance for disability-related needs." "The International Bill of Human Rights", which comprises the International Covenant on Civil and Political Rights (ICCPR, 1966), Articles 11 and 12 of the International Covenant on Economic, Social and Cultural Rights (ICESCR, 1966), and Article 25 of the Universal Declaration of Human Rights (UDHR, 1948), documented the evolution of the human right to water and sanitation and other water-associated rights toward recognition in worldwide law. Scholars also called attention to the importance of possible UN recognition of human rights to water and sanitation at the end of the twentieth century. Two early efforts to define the human right to water came from law professor Stephen McCaffrey of the University of the Pacific in 1992 and Dr. Peter Gleick in 1999. McCaffrey stated that "Such a right could be envisaged as part and parcel of the right to food or sustenance, the right to health, or most fundamentally, the right to life." Gleick added "that access to a basic water requirement is a fundamental human right implicitly and explicitly supported by international law, declarations, and State practice." The UN Committee on Economic, Social and Cultural Rights (CESCR) overseeing ICESCR compliance came to similar conclusions as these scholars with General Comment 15 in 2002. It found that the right to water was implicitly part of the right to an adequate standard of living and related to the right to the highest attainable standard of health and the rights to adequate housing and adequate food. It defines that "The human right to water entitles everyone to sufficient, safe, acceptable, physically accessible and affordable water for personal and domestic uses. An adequate amount of safe water is necessary to prevent death from dehydration, to reduce the risk of water-related disease and to provide for consumption, cooking, personal and domestic hygienic requirements." Several countries agreed and formally acknowledged the right to water to be part of their treaty obligations under the ICESCR (e.g., Germany; United Kingdom; Netherlands) after publication of General Comment 15. A further step was taken in 2005 by the former UN Sub-Commission on the Promotion and Protection of Human Rights, which issued guidelines to assist governments to achieve and respect the human right to water and sanitation. These guidelines led the UN Human Rights Council to assign Catarina de Albuquerque as an independent expert on the issue of human rights obligations related to access to safe drinking water and sanitation in 2008. She wrote a detailed report in 2009 that outlined human rights obligations to sanitation, and the CESCR responded by stating that sanitation should be recognized by all states. Following intense negotiations, 122 countries formally acknowledged "the Human Right to Water and Sanitation" in General Assembly Resolution 64/292 on 28 July 2010.
It recognized the right of every human being to have access to sufficient water for personal and domestic uses (between 50 and 100 liters of water per person per day), which must be safe, acceptable and affordable (water costs should not exceed 3% of household income), and physically accessible (the water source has to be within 1,000 meters of the home and collection time should not exceed 30 minutes). The General Assembly declared that clean drinking water is "essential to the full enjoyment of life and all other human rights". In September 2010, the UN Human Rights Council adopted a resolution recognizing that the human right to water and sanitation forms part of the right to an adequate standard of living. The mandate of Catarina de Albuquerque as "Independent expert on the issue of human rights obligations related to access to safe drinking water and sanitation" was extended and renamed as "Special Rapporteur on the human right to safe drinking water and sanitation" after the resolutions in 2010. Through her reports to the Human Rights Council and the UN General Assembly, she continued clarifying the scope and content of the human right to water and sanitation. As Special Rapporteur, she addressed issues such as: Human Rights Obligations Related to Non-State Service Provision in Water and Sanitation (2010); Financing for the Realization of the Rights to Water and Sanitation (2011); Wastewater management in the realization of the rights to water and sanitation (2013); and Sustainability and non-retrogression in the realization of the rights to water and sanitation (2013). Léo Heller was appointed in 2014 to be the second Special Rapporteur on the human rights to safe drinking water and sanitation. Subsequent resolutions extended the mandate of the Special Rapporteur and defined each state's role in the respect of these rights. The most recent General Assembly Resolution, 70/169 of 2015, has been called a declaration of "The Human Rights to Safe Drinking Water and Sanitation". It recognized the distinction between the right to water and the right to sanitation. This decision was made due to concern about the right to sanitation being overlooked when compared to the right to water.

International jurisprudence

Inter-American Court of Human Rights

The right to water has been considered in the Inter-American Court of Human Rights case of the Sawhoyamaxa Indigenous Community v. Paraguay. The issues involved the state's failure to acknowledge indigenous communities' property rights over ancestral lands. In 1991, the state removed the indigenous Sawhoyamaxa community from the land, resulting in their loss of access to basic essential services, like water, food, schooling and health services. This fell within the scope of the American Convention on Human Rights, as encroaching on the right to life. Water is included in this right, as part of access to land. The courts required the lands to be returned, compensation provided, and basic goods and services to be implemented, while the community was in the process of having their lands returned.

International Centre for Settlement of Investment Disputes

The following cases from the International Centre for Settlement of Investment Disputes (ICSID) concern the contracts established between governments and corporations for the maintenance of waterways. Although the cases regard questions of investment, commentators have noted that the indirect impact of the right to water upon the verdicts is significant.
World Bank data show that water privatization spiked starting in the 1990s, and significant growth in privatization continued into the 2000s.

Azurix Corp v. Argentina

The first notable case regarding the right to water in the ICSID is that of Azurix Corp v. Argentina. The dispute was between the Argentine Republic and Azurix Corporation regarding discrepancies arising from a 30-year contract between the parties to operate the water supply of various provinces. The right to water was implicitly considered during the arbitration over compensation, where it was held that Azurix was entitled to a fair return on the market value of the investment rather than the requested US$438.6 million, since a reasonable business person could not have expected such a return, given the limits on water price increases and the improvements that would be required to ensure a well-functioning, clean water system.

Biwater Gauff Ltd v. Tanzania

Secondly, a similar case encountered by the ICSID is that of Biwater Gauff Ltd v. Tanzania. This was again a case of a private water company in a contractual dispute with a government, this time the United Republic of Tanzania. This contract was for the operation and management of the Dar es Salaam water system. In May 2005, the Tanzanian government ended the contract with Biwater Gauff for its alleged failure to meet performance guarantees. In July 2008, the Tribunal issued its decision on the case, declaring that the Tanzanian government had violated the agreement with Biwater Gauff. It did not, however, award monetary damages to Biwater, acknowledging that public interest concerns were paramount in the dispute.

Right to water in domestic law

Without the existence of an international body that can enforce it, the human right to water relies upon the activity of national courts. The basis for this has been established through the constitutionalisation of economic, social and cultural rights (ESCR) through one of two means: as "directive principles" that are goals and are often non-justiciable, or as rights expressly protected and enforceable through the courts.

South Africa

In South Africa, the right to water is enshrined in the constitution and implemented by ordinary statutes. This is evidence of a slight modification of the second technique of constitutionalisation, referred to as the "subsidiary legislation model". This means that a large portion of the content and implementation of the right is provided by an ordinary domestic statute with some constitutional standing.

Residents of Bon Vista Mansions v. Southern Metropolitan Local Council

The first notable case in which the courts did so was Residents of Bon Vista Mansions v. Southern Metropolitan Local Council. The case was brought by residents of a block of flats (Bon Vista Mansions), following the disconnection of the water supply by the local Council, resulting from the failure to pay water charges. The court held that, in adherence to the South African Constitution, all persons ought to have access to water as a right. Further reasoning for the decision was based on General Comment 12 on the Right to Food, made by the UN Committee on Economic, Social and Cultural Rights, which imposes upon parties to the agreement the obligation to observe and respect already existing access to adequate food by not implementing any encroaching measures.
The court found that the discontinuation of the existing water source, which had not adhered to the "fair and reasonable" requirements of the South African Water Services Act, was illegal. It is important to note that the decision pre-dates the adoption of UN General Comment No. 15.

Mazibuko v. City of Johannesburg

The quantity of water to be provided was further discussed in Mazibuko v City of Johannesburg. The case revolved around the distribution of water through pipes to Phiri, one of the oldest areas of Soweto. The case concerned two major issues: the first was whether or not the city's policy regarding the supply of free basic water (6 kilolitres per month to each account holder in the city) was in conflict with Section 27 of the South African Constitution or Section 11 of the Water Services Act; the second was whether or not the installation of pre-paid water meters was lawful. It was held in the High Court that the city's by-laws did not provide for the installation of meters and that their installation was unlawful. Further, as the meters halted the supply of water to residences once the free basic water supply had ended, this was deemed an unlawful discontinuation of the water supply. The court held that the residents of Phiri should be provided with a free basic water supply of 50 litres per person per day. The work of the Centre for Applied Legal Studies (CALS) of the University of the Witwatersrand in Johannesburg, South Africa and the Pacific Institute in Oakland, California, shared a 2008 Business Ethics Network BENNY Award for their work on this case. The Pacific Institute contributed legal testimony based on the work of Dr. Peter Gleick defining a human right to water and quantifying basic human needs for water. The respondents took the case to the Supreme Court of Appeal (SCA), which held that the city's water policy had been formulated on the basis of a material error of law regarding the city's obligation to provide the minimum set in the South African National Standard, and it was therefore set aside. The court also held that the quantity for dignified human existence in compliance with section 27 of the constitution was in fact 42 litres per person per day rather than 50 litres per person per day. The SCA declared that the installation of water meters was illegal, but suspended the order for two years to give the city an opportunity to rectify the situation. The issues went further to the Constitutional Court, which held that the duty created by the constitution required that the state take reasonable legislative and other measures progressively to realise the achievement of the right of access to water, within its available resources. The Constitutional Court also held that it is a matter for the legislature and executive institutions of government to act within the allowance of their budgets and that the scrutiny of their programs is a matter of democratic accountability. Therefore, the minimum content set out by regulation 3(b) is constitutional, leaving the responsible bodies free to deviate upwards from it, and it is further inappropriate for a court to determine the achievement of any social and economic right the government has taken steps to implement. The courts had instead focused their inquiry on whether the steps taken by Government are reasonable, and whether the Government subjects its policies to regular review. The judgment has been criticized for deploying an "unnecessarily limiting concept of judicial deference".
India The two most prominent cases in India regarding the right to water illustrate that, although the right is not explicitly protected in the Constitution of India, the courts have interpreted the right to life to include the right to safe and sufficient water. Delhi Water Supply v. State of Haryana Here a water usage dispute arose because the state of Haryana was using the Jamuna River for irrigation, while the residents of Delhi needed it for drinking. The court reasoned that domestic use overrode the commercial use of water and ruled that Haryana must allow enough water to reach Delhi for consumption and domestic use. Subhash Kumar v. State of Bihar Also notable is the case of Subhash Kumar v. State of Bihar, in which the discharge of sludge from washeries into the Bokaro River was challenged by way of public interest litigation. The courts found that the right to life, as protected by Article 21 of the Constitution of India, includes the right to enjoy pollution-free water. The case failed on the facts, and it was held that the petition had been filed not in any public interest but in the petitioner's personal interest, and that a continuation of the litigation would therefore amount to an abuse of process. World Rights to Water Day Water is essential for the existence of living beings, including humans. Therefore, having access to pure water in adequate quantity is an inalienable human right. Hence, the Eco Needs Foundation (ENF) deems it necessary to recognise the right to water (with an ensured minimum per capita quantity of water) through appropriate express legal provision. The United Nations, through its several covenants, has made it obligatory for all nations to ensure equitable distribution of water amongst all citizens. Accordingly, the ENF began to observe and promote the celebration of World Rights to Water Day on 20 March, the date on which Dr. Babasaheb Ambedkar ("the father of modern India") led the world's first satyagraha for water in 1927. The World Right to Water Day calls for the adoption of special legislation establishing the universal right to water. Under the guidance of founder Dr Priyanand Agale, the ENF arranges a variety of programmes to ensure the right to water for Indian citizens. New Zealand ESCR are not currently given explicit protection in New Zealand by either the Human Rights Act or the Bill of Rights Act, so the right to water is not defended by law there. The New Zealand Law Society has indicated that the country will give further consideration to the legal status of economic, social and cultural rights. United States In Pilchen v. City of Auburn, New York, a single mother named Diane Pilchen was living as a rental tenant in a foreclosed house, whose owner (the landlord) had failed to pay the water bill for some time. The City of Auburn billed Pilchen for the landlord's arrears and repeatedly shut off her water service without notice when she could not pay these debts, making the house uninhabitable. The city condemned the home and forced Pilchen and her child to move out. Pilchen was represented by the Public Utility Law Project of New York (PULP) in the lawsuit. The City of Auburn attempted unsuccessfully to argue that water is not a constitutional right because bottled water could be used instead, an argument that PULP contested as absurd. 
In 2010, Pilchen won summary judgment in which it was determined that shutting off the water violated her constitutional rights, and that she could not be billed for, or denied, water because of an unrelated party's delays in paying water bills. Standing Rock Sioux Tribe v. United States Army Corps of Engineers In 2016, in the prominent case Standing Rock Sioux Tribe v. United States Army Corps of Engineers, the Sioux Tribe challenged the construction of the Dakota Access Pipeline (DAPL). The crude oil pipeline spans four states, beginning in North Dakota, passing through South Dakota and Iowa, and ending in Illinois. The Standing Rock Reservation is located near the border of North and South Dakota, and the pipeline is built within half a mile of it. Because the pipeline was built near the reservation, the tribe feared that the historical and cultural significance of Lake Oahe would be harmed, even though the pipeline does not run directly through the lake. Lake Oahe provides for the Sioux Tribe's basic water needs, such as drinking water and sanitation. The construction of the oil pipeline raises the risk of an oil spill into Lake Oahe, which concerned the tribe. The Sioux Tribe sued the pipeline company, arguing that the creation of the pipeline violated the National Environmental Policy Act and the National Historic Preservation Act. After the 2016 briefing the court was unable to reach a conclusion, so it ordered additional briefings. After five briefings in 2017 and one briefing in 2018, the court allowed the construction of the pipeline, but the Standing Rock tribe continues to fight to have the pipeline removed. Australia Attention in Australia focuses on the rights of Indigenous Australians to water and sanitation. The history of settler-colonialism overshadows today's state governance of water use by Indigenous Australians. There are many governmental agreements, but most are too incomplete to give full effect to an Indigenous right to water and sanitation. In Mabo v Queensland (1992), native title rights were legally recognised for the first time. Indigenous Australians often claim cultural bonds to the land. Although "culture" was recognised by the court alongside land resources, the cultural and spiritual values that Aboriginal peoples attach to bodies of water remain poorly defined in law. Translating those cultural and spiritual values into the legal sphere is challenging but necessary, and so far there has been virtually no progress. Australian water law generally provides that citizens may use surface water but cannot own it. The constitution, however, says nothing about inland and riparian water, so inland and riparian water rights are primarily a mandate of the states. The Commonwealth Government obtains authority over water indirectly, through other heads of power, including the external affairs power, the grants power, and the trade and commerce power. In 2000, the Federal Court concluded an agreement that allowed Indigenous landowners to take water for traditional purposes. However, the use was limited to traditional purposes, which did not include irrigation as a traditional practice. In June 2004, the Council of Australian Governments (CoAG) concluded an intergovernmental accord on a National Water Initiative (NWI), promoting recognition of Indigenous rights to water. 
However, the NWI pays little attention to the complex history of settler-colonialism, which systematically created an unequal pattern of water distribution. Indigenous people in Australia continue to seek the right to water. Remaining discussions Transboundary effects Because access to water is a cross-border source of concern and potential conflict in the Middle East, South Asia, the Eastern Mediterranean and parts of North America, among other places, some non-governmental organizations (NGOs) and scholars argue that the right to water also has a trans-national or extraterritorial aspect. Since water supplies naturally overlap and cross borders, they argue, states also have a legal obligation not to act in ways that might negatively affect the enjoyment of human rights in other states. The formal acknowledgement of this legal obligation could prevent the negative effects of the global "water crunch" (as a future threat and one negative result of human overpopulation). Water shortages and increasing consumption of freshwater make this right extremely complicated to realise. As the world population rapidly increases, freshwater shortages will cause many problems. A shortage in the quantity of water raises the question of whether water should be transferred from one country to another. Water Dispute Between India and Pakistan The water dispute between India and Pakistan is influenced by the scarcity of water in the South Asian region. The two countries have a pre-existing agreement known as the Indus Waters Treaty. The treaty was created to limit conflict between India and Pakistan over the use of the Indus basin and to allocate water supplies between the two countries after they gained independence. However, disagreements regarding it have surfaced. According to the treaty, India may use the western rivers for irrigation and non-consumptive purposes, while Pakistan has the majority of control over the basin. However, Pakistan has voiced concerns that India's construction on the rivers may lead to severe water scarcity in Pakistan. Pakistan has also voiced concern that the dams constructed by India for non-consumptive purposes could be used to divert water flow and disrupt Pakistan's water supply. In addition, the treaty covers rivers originating in Jammu and Kashmir, a region that has been excluded from control over its own water bodies. Water commercialization versus state provision Contention exists regarding whose, if anyone's, responsibility it is to ensure the human right to water and sanitation. Two schools of thought often emerge from such discourse: that it is the state's responsibility to provide access to clean water, versus the privatization of distribution and sanitation. The commercialization of water is offered as a response to the increased scarcity of water that has resulted from the world population tripling while the demand for water has increased six-fold. Market environmentalism uses markets as a solution to environmental problems such as environmental degradation and the inefficient use of resources. Supporters of market environmentalism believe that the managing of water as an economic good by private companies will be more efficient than governments providing water resources to their citizens. 
Such proponents claim that the government costs of developing infrastructure for water resource allocation are not worth the marginal benefits of water provision, thus deeming the state an ineffective provider of water. Moreover, it is argued that water commodification leads to more sustainable water management, owing to the economic incentives for consumers to use water more efficiently. Opponents believe that water's status as a human right excludes private sector involvement and requires that water be provided to all people, because it is essential to life. Access to water as a human right is used by some NGOs as a means to combat privatization efforts. A human right to water "generally rests on two justifications: the non-substitutability of drinking water ('essential for life'), and the fact that many other human rights which are explicitly recognized in the UN Conventions are predicated upon an (assumed) availability of water (e.g. the right to food)." Organizations Organizations working on the rights to water and sanitation are listed below. United Nations organizations OHCHR (UN Office of the High Commissioner on Human Rights) UNDP UNICEF Sanitation and Water for All Governmental cooperation agencies DFID (United Kingdom's Cooperation Agency) GIZ (German Corporation for International Cooperation) SDC (Swiss Agency for Development and Cooperation) EPA (United States Environmental Protection Agency) International non-governmental organizations and networks Action against Hunger (ACF) Blood:Water Center for Water Security and Cooperation Freshwater Action Network (FAN) Pure Water for the World The DigDeep Right to Water Project The Pacific Institute The Water Project Transnational Institute with the Water Justice project UUSC WaterAid WaterLex (defunct as of 2020) PeaceJam Thirst Project See also References External links Special Rapporteur on the human right to safe drinking water and sanitation by the UN High Commissioner for Human Rights WaterLex Archive The Human Right to Water and Sanitation: Translating Theory into Practice (2009) by GIZ Right to Water: Understanding children's right to water on Humanium Water Sanitation Right to health
Human right to water and sanitation
[ "Environmental_science" ]
6,336
[ "Water", "Hydrology" ]
3,062,468
https://en.wikipedia.org/wiki/Butane%20%28data%20page%29
This page provides supplementary chemical data on n-butane. Material Safety Data Sheet The handling of this chemical may require notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet (MSDS) for this chemical from a reliable source such as eChemPortal, and follow its directions. Structure and properties Thermodynamic properties Vapor pressure of liquid n-Butane: Table data obtained from CRC Handbook of Chemistry and Physics, 44th ed. Spectral data References Chemical data pages Data page Chemical data pages cleanup
Butane (data page)
[ "Chemistry" ]
110
[ "Chemical data pages", "nan" ]
3,062,543
https://en.wikipedia.org/wiki/Architects%27%20Journal
Architects' Journal is a professional architecture magazine, published monthly in London by Metropolis International. Each issue includes in-depth features on relevant current affairs, alongside profiles of recently completed buildings. Ten times per year the magazine is accompanied by sister publication AJ Specification. Architects’ Journal’s website – which attracts 8 million views a year – is focused on breaking news, and is where the publication’s investigative journalism and campaigns can be found. This includes the RetroFirst campaign, which helps architects to ensure they embed sustainability into every part of their practice. In 2018 Architects’ Journal was awarded Magazine of the Year at the Professional Publishers Association Awards, and it was named Editorial Brand of the Year at the International Building Press Awards in 2020, 2021 and 2023. History The first edition of what is now Architects' Journal was published in 1895. Originally named The Builder's Journal and Architectural Record, from 1906 to 1910 it was known as The Builder's Journal and Architectural Engineer, and it then became The Architects and Builder's Journal from 1911 until 1919, at which point it was given its current name. In December 2015, then-owner of the title Top Right Group rebranded as Ascential and, in January 2017, announced its intention to sell 13 "heritage titles", including Architects' Journal. It was announced on 1 June 2017 that the brands had been acquired by Metropolis International. See also List of architecture magazines References External links The Architects' Journal Internet Archive publication page 1895 establishments in the United Kingdom Architecture magazines Architecture in the United Kingdom Magazines established in 1895 Magazines published in London Professional and trade magazines published in the United Kingdom
Architects' Journal
[ "Engineering" ]
326
[ "Architecture stubs", "Architecture" ]
3,062,544
https://en.wikipedia.org/wiki/SEMATECH
SEMATECH (from Semiconductor Manufacturing Technology) was a not-for-profit consortium that performed research and development to advance chip manufacturing. SEMATECH involved collaboration between various sectors of the R&D community, including chipmakers, equipment and material suppliers, universities, research institutes, and government partners. SEMATECH's mission was to rejuvenate the U.S. semiconductor industry through collective R&D efforts, focused on improving manufacturing processes and introducing cutting-edge technologies. The group was first funded by the U.S. Department of Defense through the Defense Advanced Research Projects Agency until 1997, and later by member dues. SEMATECH was moved from Austin, Texas to Albany, New York in 2007 after receiving state funding from the state of New York. The consortium was absorbed by SUNY Polytechnic Institute in 2015 after a long decline, leaving behind a mixed legacy. History SEMATECH was conceived in 1986, formed in 1987, and began operating in Austin, Texas in 1988 as a partnership between the United States government and 14 U.S.-based semiconductor manufacturers to solve common manufacturing problems and regain competitiveness for the U.S. semiconductor industry, which had been surpassed by Japanese industry in the mid-1980s. SEMATECH was funded over five years by public subsidies coming from the U.S. Department of Defense via the Defense Advanced Research Projects Agency (DARPA), for a total of $500 million. This represents about $1 billion in 2022 dollars, or only 2 percent of the CHIPS investment. Following a determination by the SEMATECH Board of Directors to eliminate matching funds from the U.S. government after 1996, the organization's focus shifted from the U.S. semiconductor industry to the larger international semiconductor industry, abandoning the initial U.S. government initiative. Its members represented about half of the worldwide chip market. In late 2015, SEMATECH transferred the Critical Materials Council (CMC), a membership group of semiconductor fabricators, to TECHCET CA LLC, an advisory service firm dedicated to providing supply-chain and market information on electronic materials. This group of procurement and quality managers continues to focus on anticipating and remedying materials supply-chain issues and on best practices. The CMC is now an integral part of TECHCET's business and provides guidance through its Critical Materials Reports and CMC Conference activities. Technology focus SEMATECH conducted research on the technical challenges and costs associated with developing new materials, processes, and equipment for semiconductor manufacturing. Advanced technology programs focused on EUV lithography, including photomask blank and photoresist development, materials and emerging technologies for device structures, metrology, manufacturing, and environment and safety issues. In 1989, the partnership spent a substantial amount of its resources to help the struggling GCA Corp., an equipment manufacturer being eclipsed by Japanese competitors. The initial investment helped the Massachusetts-based factory stay afloat, and even modernize, but failed to address the larger issue – a lack of demand. College of Nanoscale Science and Engineering (CNSE) In January 2003 SEMATECH and the University at Albany – State University of New York – established a major partnership to commercialize advanced semiconductor, nanotechnology and other emerging technologies. 
Through its government-university-industry partnership with the State of New York and the College of Nanoscale Science and Engineering (CNSE) of the University at Albany, SEMATECH conducted programs in lithography and metrology at CNSE's Albany NanoTech Complex. In 2010, SEMATECH expanded its cooperation with CNSE with the announcement that the ISMI would relocate its headquarters and operations to CNSE's Albany NanoTech Complex beginning in January 2011. With over $6.5 billion in high-tech investments, CNSE's Albany NanoTech Complex features the only fully integrated, 300 mm wafer, computer chip pilot prototyping and demonstration line housed within Class 1 capable cleanrooms. Location SEMATECH had access to laboratories and development fabs in Austin, Texas (1987-2007) and Albany, New York (2007-2015). Industry participation SEMATECH hosted a variety of worldwide conferences, symposia, and workshops (e.g., Litho Forum, Manufacturing Week) and delivered papers, presentations, and joint reports at major industry conferences (SPIE, IEDM, SEMICON West). References External links SEMATECH homepage Organizations established in 1987 Non-profit organizations based in New York (state) Technology consortia Information technology organizations based in North America Cleanroom technology
SEMATECH
[ "Chemistry" ]
925
[ "Cleanroom technology" ]
3,062,599
https://en.wikipedia.org/wiki/Anonymous%20recursion
In computer science, anonymous recursion is recursion which does not explicitly call a function by name. This can be done either explicitly, by using a higher-order function – passing in a function as an argument and calling it – or implicitly, via reflection features which allow one to access certain functions depending on the current context, especially "the current function" or sometimes "the calling function of the current function". In programming practice, anonymous recursion is notably used in JavaScript, which provides reflection facilities to support it. In general programming practice, however, this is considered poor style, and recursion with named functions is suggested instead. Anonymous recursion via explicitly passing functions as arguments is possible in any language that supports functions as arguments, though this is rarely used in practice, as it is longer and less clear than explicitly recursing by name. In theoretical computer science, anonymous recursion is important, as it shows that one can implement recursion without requiring named functions. This is particularly important for the lambda calculus, which has anonymous unary functions, but is able to compute any recursive function. This anonymous recursion can be produced generically via fixed-point combinators. Use Anonymous recursion is primarily of use in allowing recursion for anonymous functions, particularly when they form closures or are used as callbacks, to avoid having to bind the name of the function. Anonymous recursion primarily consists of calling "the current function", which results in direct recursion. Anonymous indirect recursion is possible, such as by calling "the caller (the previous function)", or, more rarely, by going further up the call stack, and this can be chained to produce mutual recursion. The self-reference of "the current function" is a functional equivalent of the "this" keyword in object-oriented programming, allowing one to refer to the current context. Anonymous recursion can also be used for named functions, rather than calling them by name – say, to specify that one is recursing on the current function, or to allow one to rename the function without needing to change the name where it calls itself. However, as a matter of programming style this is generally not done. Alternatives Named functions The usual alternative is to use named functions and named recursion. Given an anonymous function, this can be done either by binding a name to the function, as in named function expressions in JavaScript, or by assigning the function to a variable and then calling the variable, as in function statements in JavaScript. Since languages that allow anonymous functions generally allow assigning these functions to variables (if not first-class functions), many languages do not provide a way to refer to the function itself, and explicitly reject anonymous recursion; examples include Go. For example, in JavaScript the factorial function can be defined via anonymous recursion as such:

[1, 2, 3, 4, 5].map(function(n) {
    return (!(n > 1)) ? 1 : arguments.callee(n - 1) * n;
});

Rewritten to use a named function expression yields:

[1, 2, 3, 4, 5].map(function factorial(n) {
    return (!(n > 1)) ? 1 : factorial(n - 1) * n;
});

Passing functions as arguments Even without mechanisms to refer to the current function or calling function, anonymous recursion is possible in a language that allows functions as arguments. 
This is done by adding another parameter to the basic recursive function and using this parameter as the function for the recursive call. This creates a higher-order function, and passing this higher-order function itself allows anonymous recursion within the actual recursive function. This can be done purely anonymously by applying a fixed-point combinator to this higher-order function. This is mainly of academic interest, particularly to show that the lambda calculus has recursion, as the resulting expression is significantly more complicated than the original named recursive function. Conversely, the use of fixed-point combinators may be generically referred to as "anonymous recursion", as this is a notable use of them, though they have other applications. This is illustrated below using Python. First, a standard named recursion:

def fact(n):
    if n == 0:
        return 1
    return n * fact(n - 1)

Using a higher-order function so the top-level function recurses anonymously on an argument, but still needing the standard recursive function as an argument:

def fact0(n0):
    if n0 == 0:
        return 1
    return n0 * fact0(n0 - 1)

fact1 = lambda f, n1: 1 if n1 == 0 else n1 * f(n1 - 1)
fact = lambda n: fact1(fact0, n)

We can eliminate the standard recursive function by passing the function argument into the call:

fact1 = lambda f, n1: 1 if n1 == 0 else n1 * f(f, n1 - 1)
fact = lambda n: fact1(fact1, n)

The second line can be replaced by a generic higher-order function called a combinator:

F = lambda f: (lambda x: f(f, x))
fact1 = lambda f, n1: 1 if n1 == 0 else n1 * f(f, n1 - 1)
fact = F(fact1)

Written anonymously:

(lambda f: (lambda x: f(f, x))) \
    (lambda g, n1: 1 if n1 == 0 else n1 * g(g, n1 - 1))

In the lambda calculus, which only uses functions of a single variable, this can be done via the Y combinator. First make the higher-order function of two variables be a function of a single variable, which directly returns a function, by currying:

fact1 = lambda f: (lambda n1: 1 if n1 == 0 else n1 * f(f)(n1 - 1))
fact = fact1(fact1)

There are two "applying a higher-order function to itself" operations here: f(f) in the first line and fact1(fact1) in the second. Factoring out the second double application into a combinator yields:

C = lambda x: x(x)
fact1 = lambda f: (lambda n1: 1 if n1 == 0 else n1 * f(f)(n1 - 1))
fact = C(fact1)

Factoring out the other double application yields:

C = lambda x: x(x)
D = lambda f: (lambda x: f(lambda v: x(x)(v)))
fact1 = lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1))
fact = C(D(fact1))

Combining the two combinators into one yields the Y combinator:

C = lambda x: x(x)
D = lambda f: (lambda x: f(lambda v: x(x)(v)))
Y = lambda y: C(D(y))
fact1 = lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1))
fact = Y(fact1)

Expanding out the Y combinator yields:

Y = lambda f: (lambda x: f(lambda v: x(x)(v))) \
    (lambda x: f(lambda v: x(x)(v)))
fact1 = lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1))
fact = Y(fact1)

Combining these yields a recursive definition of the factorial in lambda calculus (anonymous functions of a single variable):

(lambda f: (lambda x: f(lambda v: x(x)(v)))
           (lambda x: f(lambda v: x(x)(v)))) \
    (lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1)))
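As a quick sanity check, the expanded construction can be evaluated directly; the printed values below are just an illustrative demonstration, not part of the derivation:

# Y combinator and an anonymous factorial built from it.
Y = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
fact = Y(lambda g: (lambda n: 1 if n == 0 else n * g(n - 1)))

print(fact(5))                      # 120
print([fact(n) for n in range(6)])  # [1, 1, 2, 6, 24, 120]

The eta-expansion lambda v: x(x)(v) is what keeps the self-application x(x) from being evaluated eagerly and looping forever under Python's strict evaluation.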
Examples APL In APL, the current dfn is accessible via ∇. This allows anonymous recursion, such as in this implementation of the factorial:

      {0=⍵:1 ⋄ ⍵×∇ ⍵-1} 5
120
      {0=⍵:1 ⋄ ⍵×∇ ⍵-1}¨ ⍳10   ⍝ applied to each element of 0 to 9
1 1 2 6 24 120 720 5040 40320 362880

JavaScript In JavaScript, the current function is accessible via arguments.callee, while the calling function is accessible via arguments.caller. These allow anonymous recursion, such as in this implementation of the factorial:

[1, 2, 3, 4, 5].map(function(n) {
    return (!(n > 1)) ? 1 : arguments.callee(n - 1) * n;
});

Perl Starting with Perl 5.16, the current subroutine is accessible via the __SUB__ token, which returns a reference to the current subroutine, or undef outside a subroutine. This allows anonymous recursion, such as in the following implementation of the factorial:

#!/usr/bin/perl
use feature ":5.16";

print sub {
    my $x = shift;
    $x > 0 ? $x * __SUB__->( $x - 1 ) : 1;
}->(5), "\n";

R In R, the current function can be called using Recall. For example,

sapply(0:5, function(n) {
    if (n == 0) return(1)
    n * Recall(n - 1)
})

Recall will not work, however, if the anonymous function is passed as an argument to another function, e.g. lapply, inside the anonymous function definition. In this case, sys.function(0) can be used. For example, the code below squares a list recursively:

(function(x) {
    if (is.list(x)) {
        lapply(x, sys.function(0))
    } else {
        x^2
    }
})(list(list(1, 2, 3), list(4, 5)))

References Recursion Articles with example R code
Anonymous recursion
[ "Mathematics" ]
2,223
[ "Mathematical logic", "Recursion" ]
3,062,626
https://en.wikipedia.org/wiki/Linear%20encoder
A linear encoder is a sensor, transducer or readhead paired with a scale that encodes position. The sensor reads the scale in order to convert the encoded position into an analog or digital signal, which can then be decoded into position by a digital readout (DRO) or motion controller. The encoder can be either incremental or absolute. In an incremental system, position is determined by motion over time; in contrast, in an absolute system, motion is determined by position over time. Linear encoder technologies include optical, magnetic, inductive, capacitive and eddy current. Optical technologies include shadow, self-imaging and interferometric. Linear encoders are used in metrology instruments, motion systems, inkjet printers and high precision machining tools ranging from digital calipers and coordinate measuring machines to stages, CNC mills, manufacturing gantry tables and semiconductor steppers. Physical principle Linear encoders are transducers that exploit many different physical properties in order to encode position: Scale/reference based Optical Optical linear encoders dominate the high resolution market and may employ shuttering/moiré, diffraction or holographic principles. Optical encoders are the most accurate of the standard styles of encoders, and the most commonly used in industrial automation applications. When specifying an optical encoder, it is important that the encoder have extra protection built in to prevent contamination from dust, vibration and other conditions common to industrial environments. Typical incremental scale periods vary from hundreds of micrometers down to sub-micrometer. Interpolation can provide resolutions as fine as a nanometer. Light sources used include infrared LEDs, visible LEDs, miniature light-bulbs and laser diodes. Magnetic Magnetic linear encoders employ either active (magnetized) or passive (variable reluctance) scales, and position may be sensed using sense-coils, Hall effect or magnetoresistive readheads. With coarser scale periods than optical encoders (typically a few hundred micrometers to several millimeters), resolutions in the order of a micrometer are the norm. Capacitive Capacitive linear encoders work by sensing the capacitance between a reader and scale. Typical applications are digital calipers. One of the disadvantages is sensitivity to uneven dirt, which can locally change the relative permittivity. Inductive Inductive technology is robust to contaminants, allowing calipers and other measurement tools to be coolant-proof. A well-known application of the inductive measuring principle is the Inductosyn. Eddy current US Patent 3820110, "Eddy current type digital encoder and position reference", gives an example of this type of encoder, which uses a scale coded with high and low permeability, non-magnetic materials, which is detected and decoded by monitoring changes in inductance of an AC circuit that includes an inductive coil sensor. Maxon makes an example product, the MILE rotary encoder. Without scales Optical image sensor The sensors are based on an image correlation method. The sensor takes subsequent pictures of the surface being measured and compares the images for displacement. Resolutions down to a nanometer are possible. Applications There are two main areas of application for linear encoders: measurement and motion systems. Measurement Measurement applications include coordinate-measuring machines (CMM), laser scanners, calipers, gear measurement, tension testers, and digital read-outs (DROs). 
Motion systems Servo-controlled motion systems employ linear encoders so as to provide accurate, high-speed movement. Typical applications include robotics, machine tools, pick-and-place PCB assembly equipment, semiconductor handling and test equipment, wire bonders, printers and digital presses. Output signal formats Incremental signals Linear encoders can have analog or digital outputs. Analog The industry standard analog output for linear encoders is sine and cosine quadrature signals. These are usually transmitted differentially so as to improve noise immunity. An early industry standard was 12 μA peak-to-peak current signals, but more recently this has been replaced with 1 V peak-to-peak voltage signals. Compared to digital transmission, the analog signals' lower bandwidth helps to minimise EMC emissions. Quadrature sine/cosine signals can be monitored easily by using an oscilloscope in XY mode to display a circular Lissajous figure. The highest accuracy signals are obtained if the Lissajous figure is circular (no gain or phase error) and perfectly centred. Modern encoder systems employ circuitry to trim these error mechanisms automatically. The overall accuracy of the linear encoder is a combination of the scale accuracy and errors introduced by the readhead. Scale contributions to the error budget include linearity and slope (scaling factor error). Readhead error mechanisms are usually described as cyclic error or sub-divisional error (SDE), as they repeat every scale period. The largest contributor to readhead inaccuracy is signal offset, followed by signal imbalance (ellipticity) and phase error (the quadrature signals not being exactly 90° apart). Overall signal size does not affect encoder accuracy; however, signal-to-noise and jitter performance may degrade with smaller signals. Automatic signal compensation mechanisms can include automatic offset compensation (AOC), automatic balance compensation (ABC) and automatic gain control (AGC). Phase is more difficult to compensate dynamically and is usually applied as a one-time compensation during installation or calibration. Other forms of inaccuracy include signal distortion (frequently harmonic distortion of the sine/cosine signals). Digital A linear incremental encoder has two digital output signals, A and B, which issue quadrature squarewaves. Depending on its internal mechanism, an encoder may derive A and B directly from sensors which are fundamentally digital in nature, or it may interpolate its internal, analogue sine/cosine signals. In the latter case, the interpolation process effectively sub-divides the scale period and thereby achieves higher measurement resolution. In either case, the encoder will output quadrature squarewaves, with the distance between edges of the two channels being the resolution of the encoder. The reference mark or index pulse is also output in digital form, as a pulse which is one to four units-of-resolution wide. The output signals may be directly transmitted to a digital incremental encoder interface for position tracking. The major advantages of linear incremental encoders are improved noise immunity, high measurement accuracy, and low-latency reporting of position changes. However, the high-frequency, fast signal edges may produce more EMC emissions. 
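The two signal formats above can be made concrete with a short sketch. The following Python example is an illustration only, not taken from any encoder or controller interface: it shows how analog sine/cosine signals can be interpolated into a position within one scale period, and how a 4x quadrature decoder accumulates counts from the digital A/B channels. The 20 μm scale period, the function names and the direction convention are all assumptions.

import math

SCALE_PERIOD_UM = 20.0  # assumed scale period, for illustration only

def interpolate_analog(sin_v, cos_v):
    """Position within one scale period from analog sine/cosine signals.

    atan2 recovers the phase angle of the Lissajous figure; dividing by
    2*pi maps it to a fraction of the scale period. Offset, gain and
    phase errors are assumed to have been compensated already.
    """
    phase = math.atan2(sin_v, cos_v)          # -pi .. pi
    fraction = (phase / (2 * math.pi)) % 1.0  # 0 .. 1 within one period
    return fraction * SCALE_PERIOD_UM

# 4x quadrature decoding: every edge on A or B is one count.
# The table maps (previous AB state, current AB state) to a count delta;
# which direction counts as positive is a wiring convention.
_TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def count_quadrature(samples):
    """Accumulate position counts from a sequence of (A, B) bit pairs."""
    position = 0
    prev = samples[0][0] << 1 | samples[0][1]
    for a, b in samples[1:]:
        cur = a << 1 | b
        position += _TRANSITIONS.get((prev, cur), 0)  # 0: no change/invalid
        prev = cur
    return position

# One full quadrature cycle in the forward direction yields 4 counts:
print(count_quadrature([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))  # 4

Real decoder interfaces implement the same state machine in hardware, which is why the encoder's resolution is the edge-to-edge distance rather than the full scale period.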
Absolute reference signals As well as analog or digital incremental output signals, linear encoders can provide absolute reference or positioning signals. Reference mark Most incremental linear encoders can produce an index or reference mark pulse providing a datum position along the scale for use at power-up or following a loss of power. This index signal must be able to identify position within one unique period of the scale. The reference mark may comprise a single feature on the scale, an autocorrelator pattern (typically a Barker code) or a chirp pattern. Distance coded reference marks (DCRM) are placed onto the scale in a unique pattern allowing a minimal movement (typically moving past two reference marks) to define the readhead's position. Multiple, equally spaced reference marks may also be placed onto the scale such that, following installation, the desired mark can be selected (usually via a magnet or optically) and unwanted ones deselected using labels or by being painted over. Absolute code With suitably encoded scales (multitrack, vernier, digital code, or pseudo-random code), an encoder can determine its position without movement or needing to find a reference position. Such absolute encoders also communicate using serial communication protocols. Many of these protocols are proprietary (e.g., Fanuc, Mitsubishi, FeeDat (Fagor Automation), Heidenhain EnDat, DriveCliq, Panasonic, Yaskawa), but open standards such as BiSS are now appearing, which avoid tying users to a particular supplier. Limit switches Many linear encoders include built-in limit switches, either optical or magnetic. Two limit switches are frequently included so that on power-up the controller can determine if the encoder is at an end-of-travel and in which direction to drive the axis. Physical arrangement and protection Linear encoders may be either enclosed or open. Enclosed linear encoders are employed in dirty, hostile environments such as machine tools. They typically comprise an aluminium extrusion enclosing a glass or metal scale. Flexible lip seals allow an internal, guided readhead to read the scale. Accuracy is limited due to the friction and hysteresis imposed by this mechanical arrangement. For the highest accuracy, lowest measurement hysteresis and lowest friction applications, open linear encoders are used. Linear encoders may use transmissive (glass) or reflective scales, employing Ronchi or phase gratings. Scale materials include chrome on glass, metal (stainless steel, gold plated steel, Invar), ceramics (Zerodur) and plastics. The scale may be self-supporting, thermally mastered to the substrate (via adhesive or adhesive tape) or track mounted. Track mounting may allow the scale to maintain its own coefficient of thermal expansion and allows large equipment to be broken down for shipment. Encoder terms Resolution Repeatability Hysteresis Signal-to-noise ratio/noise/jitter Lissajous figure Quadrature Index/reference mark/datum/fiducial Distance coded reference marks (DCRM) See also Rotary encoder References Further reading Electromechanical engineering Position sensors
Linear encoder
[ "Engineering" ]
2,058
[ "Electrical engineering", "Electromechanical engineering", "Mechanical engineering by discipline" ]
3,062,637
https://en.wikipedia.org/wiki/Estimation%20of%20distribution%20algorithm
Estimation of distribution algorithms (EDAs), sometimes called probabilistic model-building genetic algorithms (PMBGAs), are stochastic optimization methods that guide the search for the optimum by building and sampling explicit probabilistic models of promising candidate solutions. Optimization is viewed as a series of incremental updates of a probabilistic model, starting with the model encoding an uninformative prior over admissible solutions and ending with the model that generates only the global optima. EDAs belong to the class of evolutionary algorithms. The main difference between EDAs and most conventional evolutionary algorithms is that evolutionary algorithms generate new candidate solutions using an implicit distribution defined by one or more variation operators, whereas EDAs use an explicit probability distribution encoded by a Bayesian network, a multivariate normal distribution, or another model class. As with other evolutionary algorithms, EDAs can be used to solve optimization problems defined over a number of representations, from vectors to LISP-style S-expressions, and the quality of candidate solutions is often evaluated using one or more objective functions. The general procedure of an EDA is outlined in the following:

t := 0
initialize model M(0) to represent uniform distribution over admissible solutions
while (termination criteria not met) do
    P := generate N > 0 candidate solutions by sampling M(t)
    F := evaluate all candidate solutions in P
    M(t + 1) := adjust_model(P, F, M(t))
    t := t + 1

Using explicit probabilistic models in optimization allowed EDAs to feasibly solve optimization problems that were notoriously difficult for most conventional evolutionary algorithms and traditional optimization techniques, such as problems with high levels of epistasis. Nonetheless, a further advantage of EDAs is that these algorithms provide an optimization practitioner with a series of probabilistic models that reveal a lot of information about the problem being solved. This information can in turn be used to design problem-specific neighborhood operators for local search, to bias future runs of EDAs on a similar problem, or to create an efficient computational model of the problem. For example, if the population is represented by bit strings of length 4, the EDA can represent the population of promising solutions using a single vector of four probabilities (p1, p2, p3, p4), where each component pi defines the probability of that position being a 1. Using this probability vector it is possible to create an arbitrary number of candidate solutions. Estimation of distribution algorithms (EDAs) This section describes the models built by some well-known EDAs of different levels of complexity. Throughout, a population P(t) at generation t is assumed, together with a selection operator S, a model-building operator α and a sampling operator β. Univariate factorizations The simplest EDAs assume that decision variables are independent, i.e. p(X_1, X_2) = p(X_1) · p(X_2). Therefore, univariate EDAs rely only on univariate statistics, and multivariate distributions must be factorized as the product of N univariate probability distributions,

    p(X_1, X_2, ..., X_N) = ∏_{i=1..N} p(X_i).

Such factorizations are used in many different EDAs; next we describe some of them. Univariate marginal distribution algorithm (UMDA) The UMDA is a simple EDA that uses an operator α_UMDA to estimate marginal probabilities from a selected population S(P(t)). Assuming that S(P(t)) contains λ elements, α_UMDA produces the probabilities

    p_{t+1}(X_i) = (1/λ) ∑_{x ∈ S(P(t))} x_i,   for all i in 1, ..., N.

Every UMDA step can then be described as P(t+1) = β( α_UMDA( S(P(t)) ) ). 
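As an illustration of this univariate model, here is a minimal UMDA sketch for maximizing a pseudo-Boolean function. The OneMax objective, the population sizes and the probability clamping are illustrative assumptions, not prescribed by the algorithm's definition:

import random

def umda(fitness, n_bits, pop_size=100, n_select=50, generations=60):
    """Minimal UMDA sketch: estimate marginals from the selected
    individuals, then sample the next population from the product
    distribution."""
    p = [0.5] * n_bits  # uniform initial model
    for _ in range(generations):
        pop = [[1 if random.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)   # truncation selection
        selected = pop[:n_select]
        # marginal estimates, clamped to keep sampling ergodic
        p = [min(0.98, max(0.02, sum(x[i] for x in selected) / n_select))
             for i in range(n_bits)]
    return max(pop, key=fitness)

best = umda(sum, n_bits=20)   # OneMax: fitness = number of ones
print(best, sum(best))

The clamping of the marginals is a common practical safeguard against premature convergence of the probability vector to 0 or 1; it is an implementation choice, not part of UMDA's formal description.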
Population-based incremental learning (PBIL) The PBIL represents the population implicitly by its model, from which it samples new solutions and updates the model. At each generation, μ individuals are sampled and λ ≤ μ of them are selected. Such individuals are then used to update the model as follows:

    p_{t+1}(X_i) = (1 − γ) p_t(X_i) + (γ/λ) ∑_{x ∈ S(P(t))} x_i,

where γ ∈ (0, 1] is a parameter defining the learning rate; a small value determines that the previous model should be only slightly modified by the new solutions sampled. PBIL can be described as P(t+1) = β( α_PBIL( S(P(t)) ) ). Compact genetic algorithm (cGA) The cGA also relies on the implicit populations defined by univariate distributions. At each generation t, two individuals are sampled and sorted in decreasing order of fitness, with u being the best and v being the worst solution. The cGA estimates univariate probabilities as follows:

    p_{t+1}(X_i) = p_t(X_i) + γ (u_i − v_i),

where γ is a constant defining the learning rate, usually set to 1/N. The cGA can be defined as P(t+1) = β( α_cGA( S(P(t)) ) ). Bivariate factorizations Although univariate models can be computed efficiently, in many cases they are not representative enough to provide better performance than GAs. In order to overcome such a drawback, the use of bivariate factorizations was proposed in the EDA community, in which dependencies between pairs of variables can be modeled. A bivariate factorization can be defined as follows:

    p(X_1, X_2, ..., X_N) = ∏_{i=1..N} p(X_i | π_i),

where π_i contains at most one variable on which X_i may depend. Bivariate and multivariate distributions are usually represented as probabilistic graphical models (graphs), in which edges denote statistical dependencies (or conditional probabilities) and vertices denote variables. To learn the structure of a PGM from data, linkage-learning is employed. Mutual information maximizing input clustering (MIMIC) The MIMIC factorizes the joint probability distribution in a chain-like model representing successive dependencies between variables. It finds a permutation r of the decision variables that minimizes the Kullback-Leibler divergence in relation to the true probability distribution. MIMIC models a distribution

    p_{t+1}(X_1, ..., X_N) = p_t(X_{r(1)}) · ∏_{i=2..N} p_t(X_{r(i)} | X_{r(i−1)}).

New solutions are sampled from the leftmost to the rightmost variable; the first is generated independently and the others according to conditional probabilities. Since the estimated distribution must be recomputed each generation, MIMIC uses concrete populations in the following way: P(t+1) = β( α_MIMIC( S(P(t)) ) ). Bivariate marginal distribution algorithm (BMDA) The BMDA factorizes the joint probability distribution in bivariate distributions. First, a randomly chosen variable is added as a node in a graph; then the variable most dependent on one of those already in the graph is chosen from among those not yet in the graph. This procedure is repeated until no remaining variable depends on any variable in the graph (verified according to a threshold value). The resulting model is a forest with multiple trees rooted at a set R of root nodes. Considering the non-root variables, BMDA estimates a factorized distribution in which the root variables can be sampled independently, whereas all the others must be conditioned on their parent variable:

    p(X_1, ..., X_N) = ∏_{X_r ∈ R} p(X_r) · ∏_{X_i ∉ R} p(X_i | π_i).

Each step of BMDA is defined as P(t+1) = β( α_BMDA( S(P(t)) ) ). 
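To make the bivariate statistics concrete, the following sketch estimates pairwise mutual information from a selected population of bitstrings and greedily orders the variables into a dependency chain. This is an illustration under assumed names, not code from MIMIC's or BMDA's authors; the real MIMIC heuristic additionally chooses the starting variable by lowest entropy:

import math
from itertools import product

def mutual_information(pop, i, j):
    """Empirical mutual information between bit positions i and j."""
    n = len(pop)
    mi = 0.0
    for a, b in product((0, 1), repeat=2):
        p_ab = sum(1 for x in pop if x[i] == a and x[j] == b) / n
        p_a = sum(1 for x in pop if x[i] == a) / n
        p_b = sum(1 for x in pop if x[j] == b) / n
        if p_ab > 0:  # implies p_a > 0 and p_b > 0
            mi += p_ab * math.log(p_ab / (p_a * p_b))
    return mi

def greedy_chain(pop, n_bits):
    """Greedy chain ordering: repeatedly append the unused variable
    with the highest mutual information to the chain's last variable."""
    chain = [0]
    remaining = set(range(1, n_bits))
    while remaining:
        nxt = max(remaining,
                  key=lambda j: mutual_information(pop, chain[-1], j))
        chain.append(nxt)
        remaining.remove(nxt)
    return chain

pop = [[1, 1, 0, 0], [1, 1, 1, 1], [0, 0, 0, 1], [0, 0, 1, 0]]
print(greedy_chain(pop, 4))  # a variable ordering, e.g. [0, 1, 3, 2]

Once the chain is fixed, the conditional probabilities along it are estimated from the same selected population and new solutions are sampled variable by variable, as described above.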
Multivariate factorizations The next stage of EDA development was the use of multivariate factorizations. In this case, the joint probability distribution is usually factorized in a number of components of limited size. The learning of PGMs encoding multivariate distributions is a computationally expensive task; therefore, it is usual for EDAs to estimate multivariate statistics from bivariate statistics. Such a relaxation allows the PGM to be built in time polynomial in the number of variables N; however, it also limits the generality of such EDAs. Extended compact genetic algorithm (eCGA) The eCGA was one of the first EDAs to employ multivariate factorizations, in which high-order dependencies among decision variables can be modeled. Its approach factorizes the joint probability distribution in the product of multivariate marginal distributions. Assume T is a set of subsets, in which every τ ∈ T is a linkage set containing |τ| variables. The factorized joint probability distribution is represented as follows:

    p(X_1, ..., X_N) = ∏_{τ ∈ T} p(X_τ).

The eCGA popularized the term "linkage-learning" as denoting procedures that identify linkage sets. Its linkage-learning procedure relies on two measures: (1) the Model Complexity (MC) and (2) the Compressed Population Complexity (CPC). The MC quantifies the model representation size in terms of the number of bits required to store all the marginal probabilities:

    MC = log_2(λ + 1) ∑_{τ ∈ T} (2^{|τ|} − 1).

The CPC, on the other hand, quantifies the data compression in terms of the entropy of the marginal distribution over all partitions:

    CPC = λ ∑_{τ ∈ T} H(τ),

where λ is the selected population size, |τ| is the number of decision variables in the linkage set τ and H(τ) is the joint entropy of the variables in τ. The linkage-learning in eCGA works as follows: (1) insert each variable in its own cluster, (2) compute CCC = MC + CPC of the current linkage sets, (3) evaluate the change in CCC obtained by joining pairs of clusters, (4) join the pair of clusters with the highest CCC improvement. This procedure is repeated until no CCC improvements are possible, and it produces a linkage model T. The eCGA works with concrete populations; therefore, using the factorized distribution modeled by eCGA, it can be described as P(t+1) = β( α_eCGA( S(P(t)) ) ). Bayesian optimization algorithm (BOA) The BOA uses Bayesian networks to model and sample promising solutions. Bayesian networks are directed acyclic graphs, with nodes representing variables and edges representing conditional probabilities between pairs of variables. The value of a variable X_i can be conditioned on a maximum number of other variables, which form its parent set π_i. BOA builds a PGM encoding a factorized joint distribution in which the parameters of the network, i.e. the conditional probabilities, are estimated from the selected population using the maximum likelihood estimator. The Bayesian network structure, on the other hand, must be built iteratively (linkage-learning). It starts with a network without edges and, at each step, adds the edge which best improves some scoring metric (e.g. Bayesian information criterion (BIC) or Bayesian-Dirichlet metric with likelihood equivalence (BDe)). The scoring metric evaluates the network structure according to its accuracy in modeling the selected population. From the built network, BOA samples new promising solutions as follows: (1) it computes the ancestral ordering of the variables, each node being preceded by its parents; (2) each variable is sampled conditionally on its parents. Given such a scenario, every BOA step can be defined as P(t+1) = β( α_BOA( S(P(t)) ) ). 
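BOA's sampling step can be sketched as follows; the three-variable network, the parent sets and the conditional probability tables here are assumed purely for illustration:

import random

# parents[i] lists the parents of X_i (a DAG, here X0 -> X1 -> X2).
parents = {0: [], 1: [0], 2: [1]}
# cpt[i] maps a tuple of parent values to P(X_i = 1 | parents).
cpt = {
    0: {(): 0.7},
    1: {(0,): 0.2, (1,): 0.9},
    2: {(0,): 0.4, (1,): 0.6},
}

def ancestral_order(parents):
    """Topological sort so every variable is preceded by its parents."""
    order, placed = [], set()
    while len(order) < len(parents):
        for v, ps in parents.items():
            if v not in placed and all(p in placed for p in ps):
                order.append(v)
                placed.add(v)
    return order

def sample(parents, cpt):
    """Sample one candidate solution variable by variable."""
    x = {}
    for v in ancestral_order(parents):
        pvals = tuple(x[p] for p in parents[v])
        x[v] = 1 if random.random() < cpt[v][pvals] else 0
    return [x[i] for i in sorted(x)]

print(sample(parents, cpt))  # e.g. [1, 1, 1]

In BOA itself, the conditional probability tables are not fixed as above but re-estimated each generation from the selected population, and the network structure is relearned by the greedy edge-addition procedure described in the text.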
Linkage-tree Genetic Algorithm (LTGA) The LTGA differs from most EDAs in the sense that it does not explicitly model a probability distribution but only a linkage model, called a linkage-tree. A linkage T is a set of linkage sets with no probability distribution associated; therefore, there is no way to sample new solutions directly from T. The linkage model is a linkage-tree produced and stored as a Family Of Sets (FOS). The linkage-tree learning procedure is a hierarchical clustering algorithm, which works as follows. At each step the two closest clusters are merged; this procedure repeats until only one cluster remains, and each subtree is stored as a subset τ ∈ T. The LTGA uses T to guide an "optimal mixing" procedure, which resembles a recombination operator but only accepts improving moves. We denote the transfer of the genetic material indexed by τ from a donor y to a receiver x as x[τ] := y[τ]. The procedure is outlined below:

Input: a family of subsets T and a population P(t)
Output: a population P(t+1)
for each solution x in P(t) do
    for each subset τ in T do
        choose a random donor y from P(t)
        x' := x, then x'[τ] := y[τ]
        if f(x') ≥ f(x) then x := x'
return the resulting population as P(t+1)

The LTGA does not implement typical selection operators; instead, selection is performed during recombination. Similar ideas have usually been applied in local-search heuristics and, in this sense, the LTGA can be seen as a hybrid method. In summary, one step of the LTGA consists of learning the linkage-tree from the current population and then applying optimal mixing to produce the next population. Other Probability collectives (PC) Hill climbing with learning (HCwL) Estimation of multivariate normal algorithm (EMNA) Estimation of Bayesian networks algorithm (EBNA) Stochastic hill climbing with learning by vectors of normal distributions (SHCLVND) Real-coded PBIL Selfish Gene Algorithm (SG) Compact Differential Evolution (cDE) and its variants Compact Particle Swarm Optimization (cPSO) Compact Bacterial Foraging Optimization (cBFO) Probabilistic incremental program evolution (PIPE) Estimation of Gaussian networks algorithm (EGNA) Estimation multivariate normal algorithm with thresheld convergence Dependency Structure Matrix Genetic Algorithm (DSMGA) Related CMA-ES Cross-entropy method Ant colony optimization algorithms References Evolutionary computation Stochastic optimization
Estimation of distribution algorithm
[ "Biology" ]
2,427
[ "Bioinformatics", "Evolutionary computation" ]
3,062,721
https://en.wikipedia.org/wiki/Neuroinformatics
Neuroinformatics is the emergent field that combines informatics and neuroscience. Neuroinformatics is concerned with neuroscience data and with information processing by artificial neural networks. There are three main directions in which neuroinformatics is applied: the development of computational models of the nervous system and neural processes; the development of tools for analyzing and modeling neuroscience data; and the development of tools and databases for the management and sharing of neuroscience data at all levels of analysis. Neuroinformatics encompasses philosophy (computational theory of mind), psychology (information processing theory) and computer science (natural computing, bio-inspired computing), among other disciplines. Neuroinformatics does not deal with matter or energy, so it can be seen as a branch of neurobiology that studies various aspects of nervous systems. The term neuroinformatics seems to be used synonymously with cognitive informatics, described by the Journal of Biomedical Informatics as an interdisciplinary domain that focuses on human information processing, mechanisms and processes within the context of computing and computing applications. According to the German National Library, neuroinformatics is synonymous with neurocomputing. The Proceedings of the 10th IEEE International Conference on Cognitive Informatics and Cognitive Computing introduced the following description: Cognitive Informatics (CI) is a transdisciplinary enquiry of computer science, information sciences, cognitive science, and intelligence science. CI investigates the internal information processing mechanisms and processes of the brain and natural intelligence, as well as their engineering applications in cognitive computing. According to the INCF, neuroinformatics is a research field devoted to the development of neuroscience data and knowledge bases together with computational models. Neuroinformatics in neuropsychology and neurobiology Models of neural computation Models of neural computation are attempts to elucidate, in an abstract and mathematical fashion, the core principles that underlie information processing in biological nervous systems, or functional components thereof. Due to the complexity of nervous system behavior, the associated experimental error bounds are ill-defined, but the relative merit of the different models of a particular subsystem can be compared according to how closely they reproduce real-world behaviors or respond to specific input signals. In the closely related field of computational neuroethology, the practice is to include the environment in the model in such a way that the loop is closed. In the cases where competing models are unavailable, or where only gross responses have been measured or quantified, a clearly formulated model can guide the scientist in designing experiments to probe biochemical mechanisms or network connectivity. Neurocomputing technologies Artificial neural networks Artificial neural networks (ANNs), usually simply called neural networks (NNs), are computing systems vaguely inspired by the biological neural networks that constitute animal brains. An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron that receives a signal then processes it and can signal neurons connected to it. The "signal" at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. The connections are called edges. Neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. 
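As a concrete, deliberately tiny illustration of the description above, the following sketch computes the forward pass of a two-layer network; the weights, biases and sigmoid activation are illustrative assumptions, not taken from any particular system discussed here:

import math

def sigmoid(z):
    # non-linear activation applied to the weighted sum of inputs
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One layer: each neuron outputs a non-linear function of the
    weighted sum of its inputs (the 'edges' carry the weights)."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A 2-input network with one hidden layer of 3 neurons and 1 output neuron.
hidden_w = [[0.5, -0.6], [0.1, 0.8], [-0.3, 0.2]]
hidden_b = [0.0, 0.1, -0.1]
output_w = [[1.2, -0.7, 0.4]]
output_b = [0.05]

x = [0.9, 0.3]                      # input layer
h = layer(x, hidden_w, hidden_b)    # signals travel to the hidden layer
y = layer(h, output_w, output_b)    # then to the output layer
print(y)

Learning would consist of adjusting the weights and biases (for example by gradient descent) so that the output approaches a desired target; only the inference step is shown here.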
The "signal" at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. The connections are called edges. Neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer), to the last layer (the output layer), possibly after traversing the layers multiple times. Brain emulation and mind uploading Brain emulation is the concept of creating a functioning computational model and emulation of a brain or part of a brain. In December 2006, the Blue Brain project completed a simulation of a rat's neocortical column. The neocortical column is considered the smallest functional unit of the neocortex. The neocortex is the part of the brain thought to be responsible for higher-order functions like conscious thought, and contains 10,000 neurons in the rat brain (and 108 synapses). In November 2007, the project reported the end of its first phase, delivering a data-driven process for creating, validating, and researching the neocortical column. An artificial neural network described as being "as big and as complex as half of a mouse brain" was run on an IBM Blue Gene supercomputer by the University of Nevada's research team in 2007. Each second of simulated time took ten seconds of computer time. The researchers claimed to observe "biologically consistent" nerve impulses that flowed through the virtual cortex. However, the simulation lacked the structures seen in real mice brains, and they intend to improve the accuracy of the neuron and synapse models. Mind uploading is the process of scanning a physical structure of the brain accurately enough to create an emulation of the mental state (including long-term memory and "self") and copying it to a computer in a digital form. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind. Substantial mainstream research in related areas is being conducted in animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics, and information extraction from dynamically functioning brains. According to supporters, many of the tools and ideas needed to achieve mind uploading already exist or are currently under active development; however, they will admit that others are, as yet, very speculative, but say they are still in the realm of engineering possibility. Brain–computer interface Research on brain–computer interface began in the 1970s at the University of California, Los Angeles under a grant from the National Science Foundation, followed by a contract from DARPA. The papers published after this research also mark the first appearance of the expression brain–computer interface in scientific literature. 
More recently, human–computer interaction studies applying machine learning to statistical temporal features extracted from frontal-lobe EEG brainwave data have shown high levels of success in classifying mental states (relaxed, neutral, concentrating), mental emotional states (negative, neutral, positive), and thalamocortical dysrhythmia. (A toy example of this kind of pipeline appears below.)
Neuroengineering & Neuroinformatics
Neuroinformatics is the scientific study of information flow and processing in the nervous system. In this applied setting, there are three main directions: the development of computational models of the nervous system and neural processes; the development of tools for analyzing data from neurological diagnostic devices; and the development of tools and databases for managing and sharing patients' brain data in healthcare institutions.
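As a purely illustrative companion to the EEG classification work mentioned above (and not the cited studies' actual pipeline), the following sketch extracts simple statistical temporal features from synthetic signal windows and trains an off-the-shelf classifier with scikit-learn. The data, feature set, and choice of a random forest are all assumptions made for the example.

```python
# Illustrative only: classify synthetic "EEG" windows into three classes
# using simple statistical temporal features of each window.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def features(window):
    # Simple statistical temporal features of one signal window.
    return [window.mean(), window.std(), window.min(), window.max()]

# Fabricated data: three classes with slightly different signal statistics,
# standing in for states such as relaxed / neutral / concentrating.
X, y = [], []
for label, (mu, sigma) in enumerate([(0.0, 1.0), (0.5, 1.5), (1.0, 2.0)]):
    for _ in range(200):
        X.append(features(rng.normal(mu, sigma, size=256)))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```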
Auxiliary sciences of neuroinformatics
Data analysis and knowledge organisation
Neuroinformatics (in the context of library science) is also devoted to the development of neurobiology knowledge through computational models and analytical tools for the sharing, integration, and analysis of experimental data, and the advancement of theories about nervous system function. In the INCF context, the field refers to scientific information about primary experimental data, ontology, metadata, analytical tools, and computational models of the nervous system. The primary data include experiments and experimental conditions at the genomic, molecular, structural, cellular, network, systems, and behavioural levels, in all species and preparations, in both normal and disordered states. In recent decades, as many research groups gathered vast amounts of diverse data about the brain, the question arose of how to integrate data from thousands of publications into efficient tools for further research. Biological and neuroscience data are highly interconnected and complex, and integration itself represents a great challenge for scientists.
History
The United States National Institute of Mental Health (NIMH), the National Institute on Drug Abuse (NIDA), and the National Science Foundation (NSF) provided the National Academy of Sciences Institute of Medicine with funds to undertake a careful analysis and study of the need to introduce computational techniques into brain research. The positive recommendations were reported in 1991. This positive report enabled NIMH, then directed by Allan Leshner, to create the "Human Brain Project" (HBP), with the first grants awarded in 1993. Next, Stephen Koslow of NIMH pursued the globalization of the HBP and of neuroinformatics through the European Union and the Organisation for Economic Co-operation and Development (OECD), Paris, France. Two particular opportunities occurred in 1996. The first was the existence of the US/European Commission Biotechnology Task Force, co-chaired by Mary Clutter of NSF. Within the mandate of this committee, of which Koslow was a member, the United States–European Commission Committee on Neuroinformatics was established, co-chaired by Koslow from the United States. This committee resulted in the European Commission initiating support for neuroinformatics in Framework 5, and it has continued to support activities in neuroinformatics research and training. A second opportunity for the globalization of neuroinformatics occurred when the participating governments of the Mega Science Forum (MSF) of the OECD were asked whether they had any new scientific initiatives to bring forward for scientific cooperation around the globe. The White House Office of Science and Technology Policy requested that agencies in the federal government meet at NIH to decide whether cooperation was needed that would be of global benefit. The NIH held a series of meetings in which proposals from different agencies were discussed. The proposal recommendation from the U.S. for the MSF was a combination of the NSF and NIH proposals. Jim Edwards of NSF supported databases and data-sharing in the area of biodiversity. The two related initiatives were combined to form the United States proposal on "Biological Informatics". This initiative was supported by the White House Office of Science and Technology Policy and presented at the OECD MSF by Edwards and Koslow. An MSF committee on Biological Informatics was established with two subcommittees:
1. Biodiversity (Chair, James Edwards, NSF), and 2. Neuroinformatics (Chair, Stephen Koslow, NIH). At the end of two years, the Neuroinformatics subcommittee of the Biological Working Group issued a report supporting a global neuroinformatics effort. Koslow, working with the NIH and the White House Office of Science and Technology Policy, then established a new neuroinformatics working group to develop specific recommendations to support the more general recommendations of the first report. The Global Science Forum (GSF; renamed from MSF) of the OECD supported these recommendations.
Community
Institute of Neuroinformatics, University of Zurich
The Institute of Neuroinformatics was established at the University of Zurich and ETH Zurich at the end of 1995. The mission of the Institute is to discover the key principles by which brains work and to implement these in artificial systems that interact intelligently with the real world.
Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh
The Computational Neuroscience and Neuroinformatics Group in the Institute for Adaptive and Neural Computation, part of the University of Edinburgh's School of Informatics, studies how the brain processes information.
The International Neuroinformatics Coordinating Facility
An international organization with the mission to develop, evaluate, and endorse standards and best practices that embrace the principles of open, FAIR, and citable neuroscience. As of October 2019, the INCF has active nodes in 18 countries. The GSF neuroinformatics committee had presented three recommendations to the member governments of the GSF:
1. National neuroinformatics programs should be continued or initiated in each country; each country should have a national node both to provide research resources nationally and to serve as the contact point for national and international coordination.
2. An International Neuroinformatics Coordinating Facility should be established. The INCF will coordinate the implementation of a global neuroinformatics network through the integration of national neuroinformatics nodes.
3. A new international funding scheme should be established. This scheme should eliminate national and disciplinary barriers and provide a most efficient approach to global collaborative research and data sharing. In this new scheme, each country will be expected to fund the participating researchers from its own country.
The GSF neuroinformatics committee then developed a business plan for the operation, support, and establishment of the INCF, which was supported and approved by the GSF Science Ministers at their 2004 meeting. In 2006, the INCF was created and its central office established and set into operation at the Karolinska Institute, Stockholm, Sweden, under the leadership of Sten Grillner. Sixteen countries (Australia, Canada, China, the Czech Republic, Denmark, Finland, France, Germany, India, Italy, Japan, the Netherlands, Norway, Sweden, Switzerland, the United Kingdom and the United States) and the EU Commission established the legal basis for the INCF and its Programme in International Neuroinformatics (PIN). To date, eighteen countries (Australia, Belgium, Czech Republic, Finland, France, Germany, India, Italy, Japan, Malaysia, Netherlands, Norway, Poland, Republic of Korea, Sweden, Switzerland, the United Kingdom and the United States) are members of the INCF. Membership is pending for several other countries. The goal of the INCF is to coordinate and promote international activities in neuroinformatics.
The INCF contributes to the development and maintenance of database and computational infrastructure and support mechanisms for neuroscience applications. The system is expected to provide access to all freely accessible human brain data and resources for the international research community. The more general task of the INCF is to provide conditions for developing convenient and flexible applications for neuroscience laboratories, in order to improve our knowledge about the human brain and its disorders.
Laboratory of Neuroinformatics, Nencki Institute of Experimental Biology
The main activity of the group is the development of computational tools and models, and their use in understanding brain structure and function.
Neuroimaging & Neuroinformatics, Howard Florey Institute, University of Melbourne
Institute scientists utilize brain imaging techniques, such as magnetic resonance imaging, to reveal the organization of brain networks involved in human thought. Led by Gary Egan.
Montreal Neurological Institute, McGill University
Led by Alan Evans, MCIN conducts computationally intensive brain research using innovative mathematical and statistical approaches to integrate clinical, psychological, and brain imaging data with genetics. MCIN researchers and staff also develop infrastructure and software tools in the areas of image processing, databasing, and high-performance computing. The MCIN community, together with the Ludmer Centre for Neuroinformatics and Mental Health, collaborates with a broad range of researchers and increasingly focuses on open data sharing and open science, including for the Montreal Neurological Institute.
The THOR Center for Neuroinformatics
Established in April 1998 at the Department of Mathematical Modelling, Technical University of Denmark. Besides pursuing independent research goals, the THOR Center hosts a number of related projects concerning neural networks, functional neuroimaging, multimedia signal processing, and biomedical signal processing.
The Neuroinformatics Portal Pilot
The project is part of a larger effort to enhance the exchange of neuroscience data, data-analysis tools, and modeling software. The portal is supported by many members of the OECD Working Group on Neuroinformatics. The Portal Pilot is promoted by the German Ministry for Science and Education.
Computational Neuroscience, ITB, Humboldt-University Berlin
This group focuses on computational neurobiology, in particular on the dynamics and signal-processing capabilities of systems with spiking neurons. Led by Andreas VM Herz.
The Neuroinformatics Group in Bielefeld
Active in the field of artificial neural networks since 1989. Current research programmes within the group focus on the improvement of man-machine interfaces, robot force control, eye-tracking experiments, machine vision, virtual reality, and distributed systems.
Laboratory of Computational Embodied Neuroscience (LOCEN)
This group, part of the Institute of Cognitive Sciences and Technologies, Italian National Research Council (ISTC-CNR) in Rome, was founded in 2006 and is currently led by Gianluca Baldassarre.
It has two objectives: (a) understanding the brain mechanisms underlying the learning and expression of sensorimotor behaviour, and the related motivations and higher-level cognition grounded on it, on the basis of embodied computational models; and (b) transferring the acquired knowledge to building innovative controllers for autonomous humanoid robots capable of learning in an open-ended fashion on the basis of intrinsic and extrinsic motivations.
Japan national neuroinformatics resource
The Visiome Platform is a neuroinformatics search service that provides access to mathematical models, experimental data, analysis libraries, and related resources. An online portal for neurophysiological data sharing is also available at BrainLiner.jp as part of the MEXT Strategic Research Program for Brain Sciences (SRPBS).
Laboratory for Mathematical Neuroscience, RIKEN Brain Science Institute (Wako, Saitama)
The target of the Laboratory for Mathematical Neuroscience is to establish mathematical foundations of brain-style computations toward the construction of a new type of information science. Led by Shun-ichi Amari.
Netherlands state program in neuroinformatics
Started in the light of the international OECD Global Science Forum, whose aim is to create a worldwide program in neuroinformatics.
NUST-SEECS Neuroinformatics Research Lab
The establishment of the Neuroinformatics Lab at SEECS-NUST has enabled Pakistani researchers and faculty members to actively participate in such efforts, thereby becoming an active part of the above-mentioned experimentation, simulation, and visualization processes. The lab collaborates with leading international institutions to develop highly skilled human resources in the field. It enables neuroscientists and computer scientists in Pakistan to conduct experiments and analyses on collected data, using state-of-the-art research methodologies, without investing in establishing experimental neuroscience facilities of their own. The key goal of the lab is to provide state-of-the-art experimental and simulation facilities to all beneficiaries, including higher education institutes, medical researchers and practitioners, and the technology industry.
The Blue Brain Project
The Blue Brain Project was founded in May 2005 and uses an 8,000-processor Blue Gene/L supercomputer developed by IBM. At the time, this was one of the fastest supercomputers in the world. The project involves:
Databases: 3D reconstructed model neurons, synapses, synaptic pathways, microcircuit statistics, computer model neurons, virtual neurons.
Visualization: a microcircuit builder and a simulation-results visualizer; 2D, 3D, and immersive visualization systems are being developed.
Simulation: a simulation environment for large-scale simulations of morphologically complex neurons on the 8,000 processors of IBM's Blue Gene supercomputer.
Simulations and experiments: iterations between large-scale simulations of neocortical microcircuits and experiments, in order to verify the computational model and explore predictions.
The mission of the Blue Brain Project is to understand mammalian brain function and dysfunction through detailed simulations. The Blue Brain Project will invite researchers to build their own models of different brain regions in different species and at different levels of detail using Blue Brain software for simulation on Blue Gene.
These models will be deposited in an internet database from which Blue Brain software can extract and connect models together to build brain regions and begin the first whole-brain simulations.
Genes to Cognition Project
The Genes to Cognition Project is a neuroscience research programme that studies genes, the brain, and behaviour in an integrated manner. It is engaged in a large-scale investigation of the function of molecules found at the synapse, mainly focused on proteins that interact with the NMDA receptor, a receptor for the neurotransmitter glutamate which is required for processes of synaptic plasticity such as long-term potentiation (LTP). Many of the techniques used are high-throughput in nature, and integrating the various data sources, along with guiding the experiments, has raised numerous informatics questions. The program is primarily run by Professor Seth Grant at the Wellcome Trust Sanger Institute, with many other teams of collaborators across the world.
The CARMEN project
The CARMEN project is a multi-site (11 universities in the United Kingdom) research project aimed at using grid computing to enable experimental neuroscientists to archive their datasets in a structured database, making them widely accessible for further research and for modellers and algorithm developers to exploit.
EBI Computational Neurobiology, EMBL-EBI (Hinxton)
The main goal of the group is to build realistic models of neuronal function at various levels, from the synapse to the micro-circuit, based on precise knowledge of molecule functions and interactions (systems biology). Led by Nicolas Le Novère.
Neurogenetics
GeneNetwork
GeneNetwork started as a component of the NIH Human Brain Project in 1999, with a focus on the genetic analysis of brain structure and function. This international program consists of tightly integrated genome and phenome data sets for human, mouse, and rat, designed specifically for large-scale systems and network studies relating gene variants to differences in mRNA and protein expression and to differences in CNS structure and behavior. The great majority of the data are open access. GeneNetwork has a companion neuroimaging web site, the Mouse Brain Library, which contains high-resolution images for thousands of genetically defined strains of mice.
The Neuronal Time Series Analysis (NTSA)
NTSA Workbench is a set of tools, techniques, and standards designed to meet the needs of neuroscientists who work with neuronal time-series data. The goal of the project is to develop an information system that makes the storage, organization, retrieval, analysis, and sharing of experimental and simulated neuronal data easier. (A minimal storage sketch appears at the end of this section.)
The Cognitive Atlas
The Cognitive Atlas is a project developing a shared knowledge base in cognitive science and neuroscience. It comprises two basic kinds of knowledge, tasks and concepts, providing definitions and properties thereof, as well as relationships between them. An important feature of the site is the ability to cite literature for assertions (e.g. "The Stroop task measures executive control") and to discuss their validity. It contributes to NeuroLex and the Neuroscience Information Framework, allows programmatic access to the database, and is built around semantic web technologies.
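The following is a minimal sketch, not NTSA's actual system, of the kind of structured time-series storage such projects aim to standardize: spike times with attached metadata written to and read back from an HDF5 file using the h5py library. The group name, units, and date are made-up examples.

```python
# Illustrative only: store and retrieve a neuronal spike-time series
# together with descriptive metadata in a single structured file.
import h5py
import numpy as np

spike_times = np.sort(np.random.default_rng(1).uniform(0, 10.0, size=50))

with h5py.File("session.h5", "w") as f:
    ds = f.create_dataset("cell_01/spike_times", data=spike_times)
    ds.attrs["units"] = "seconds"
    ds.attrs["recording_date"] = "2004-06-15"  # made-up metadata

with h5py.File("session.h5", "r") as f:
    ds = f["cell_01/spike_times"]
    print(len(ds), "spikes, units:", ds.attrs["units"])
```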
Brain Big Data research group at the Allen Institute for Brain Science (Seattle, WA)
Led by Hanchuan Peng, this group focuses on using large-scale imaging computing and data-analysis techniques to reconstruct single-neuron models and map them in the brains of different animals.
The Muntz Stereo-Pak, commonly known as the 4-track cartridge, is a magnetic tape sound recording cartridge technology. The Stereo-Pak cartridge was inspired by the Fidelipac 2-track monaural (audio and cue tracks, later 3-track for stereo) tape cartridge system invented by George Eash in 1954 and used by radio broadcasters for commercials and jingles from 1959. The Stereo-Pak was adapted from the Fidelipac cartridge design by Earl "Madman" Muntz in 1962, in partnership with Eash, as a way to play pre-recorded tapes in cars. The tape is arranged in an endless loop that traverses a central hub and crosses a tape head, usually under a pressure pad to assure proper tape contact. The tape is pulled by tension, and spooling is aided by a lubricant, usually graphite.
History
The endless-loop tape cartridge was designed in 1952 by Bernard Cousino of Toledo, Ohio. Previously, music in the car had been restricted mostly to radios. Records, due to their method of operation and size, were not practical for use in a car, although several companies tried to market automobile record players, such as the Highway Hi-Fi and the Auto-Com flexidisc. Entrepreneur Earl "Madman" Muntz of Los Angeles, California, saw potential in Fidelipac broadcast carts for an automobile music tape system, and in 1962 introduced his "Stereo-Pak 4-Track Stereo Tape Cartridge System" and pre-recorded tapes, initially in California and Florida. He licensed popular music albums from the major record companies and duplicated them on these 4-track cartridges, or CARtridges, as they were first advertised. Music came in four cartridge sizes:
AA (single) size: one inch wide by two inches long, carrying the same amount of time per track (6 minutes) as one side of a 45 RPM EP.
A-size: 4 inches wide by 5 inches long, the most common size. The same size as the vast majority of NAB (Fidelipac) carts, it was able to carry a 3-inch reel.
B-size: six inches wide by 7 inches long, able to carry a 5-inch reel. Used infrequently for 2-LP sets and other extended programs.
C-size: able to carry a full 1800-foot 7-inch reel of one-mil tape. Used infrequently for extremely extended 4-LP sets.
Muntz developed and marketed a variety of mobile and stationary players and recorders for his 4-track tapes. The B- and C-size carts would have their stereo sound split to mono and be used for background-music systems all the way up to the early 1990s, when digital took over. In the last part of that period, a last-ditch effort to reduce cost came in the form of reducing the tape speed, first to 1-7/8 IPS and then to 15/16 IPS, while reducing the cartridge size first back to the standard, widely available A-size and then to a hybrid size between the AA (single) size and the A-size (the sketch below shows how reel length and tape speed determine playing time). At first, chromium high-bias tape was used to offset the loss of fidelity from the lower speed; when that proved too expensive, cobalt-based tape was substituted. After riding in Muntz's car and listening to his 4-track cartridge system, electronics and aerospace entrepreneur Bill Lear had an employee of Lear Jet Corporation create a modified derivative, resulting in the more convenient and longer-playing 8-track cartridge system, which quickly supplanted and surpassed the 4-track in the market until being surpassed, itself, by the cassette tape system.
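Given the reel lengths and tape speeds mentioned above, playing time follows from simple arithmetic: time equals tape length divided by tape speed. The sketch below assumes 3.75 IPS as the standard Stereo-Pak speed, a figure not stated in this article, alongside the two reduced background-music speeds that are.

```python
def playing_time_minutes(tape_feet, inches_per_second):
    # time = tape length / tape speed
    return tape_feet * 12 / inches_per_second / 60

# C-size cartridge: a full 1800-foot reel at the assumed standard speed
# and the two reduced speeds cited for late-era background-music carts.
for ips in (3.75, 1.875, 0.9375):  # 3-3/4, 1-7/8, 15/16 IPS
    minutes = playing_time_minutes(1800, ips)
    print(f"{ips:.4f} IPS -> {minutes:.0f} minutes of tape travel")
```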
The history of the Nintendo Entertainment System (NES) spans the 1982 development of the Family Computer, through the 1985 launch of the NES, to Nintendo's rise to global dominance based upon this platform throughout the late 1980s. The Family Computer, or Famicom, was developed in 1982 and launched in 1983 in Japan. Following the North American video game crash of 1983, the Famicom was adapted into the NES, which was launched in North America in 1985. Transitioning the company from its arcade game history onto this combined global 8-bit home video game console platform, the Famicom and NES aggressively competed with next-generation 16-bit consoles, including the Sega Genesis. The platform was succeeded by the Super Famicom in 1990 and the Super Nintendo Entertainment System in 1991, but its support and production continued until 1995. Interest in the NES has been renewed by collectors and emulators, including Nintendo's own Virtual Console platform.
1981–1984: Origins
1981–1983: Development
The video game industry experienced a period of rapid growth and unprecedented popularity during the late 1970s to early 1980s, with the golden age of arcade video games and the second generation of video game consoles: Space Invaders (1978) and its shoot 'em up clones had become a phenomenal success across arcades worldwide, game consoles such as the Atari 2600 and the Intellivision became popular in North American homes, and the Epoch Cassette Vision became the best-selling console in Japan. Many companies, including Nintendo, arose in their wake to exploit the growing industry. The Famicom hardware was inspired by arcade video game hardware. A major influence was Namco's Galaxian (1979), which had replaced the more intensive bitmap rendering system of Space Invaders with a hardware sprite rendering system that animated sprites over a scrolling background, allowing more detailed graphics, faster gameplay, and a scrolling animated starfield background. This provided the basis for Nintendo's Radar Scope (1980) arcade hardware, which Nintendo co-developed with Ikegami Tsushinki, improving on Galaxian with technology such as high-speed emitter-coupled logic (ECL) integrated circuit (IC) chips and memory on a 50 MHz printed circuit board. Following the commercial failure of Radar Scope, the game's arcade hardware was converted for use with Donkey Kong (1981), which became a major arcade hit. Home systems at the time were not powerful enough to handle an accurate port of Donkey Kong, so Nintendo wanted to create a system that would allow a fully accurate conversion of Donkey Kong to be played at home. Led by Masayuki Uemura, Nintendo's R&D2 team began work on a home system in 1982, ambitiously targeted to be less expensive than its competitors yet with performance they could not surpass for at least one year. The console began development under the codename Project GAMECOM. Uemura analyzed the innards of rival consoles, including the Atari 2600 and Magnavox Odyssey, sidestepping their primitive technology. The main competition was the Epoch Cassette Vision, the best-selling console in Japan at the time, and Nintendo president Hiroshi Yamauchi told employees he wanted them to develop a console both more powerful and cheaper than the Cassette Vision. Nintendo R&D2 engineer Katsuya Nakakawa analyzed the IC chips of the more powerful Donkey Kong arcade hardware, concluding that it would be possible to use them as a basis for the console.
Another Nintendo R&D2 engineer, Takao Sawano, proposed that the D-pad of Nintendo's Game & Watch handheld devices be adapted for the console. Meanwhile, in North America, the toy manufacturer Coleco was working on a new home console to compete with the Atari 2600, one capable of handling fairly accurate ports of arcade games. Coleco demonstrated a prototype of the ColecoVision to Nintendo R&D2 engineers, who were impressed by its smoothly animated graphics. It left a strong impression on Sawano and Uemura, who had the ColecoVision in mind while working on Nintendo's new console in Japan. However, while the ColecoVision was a significant improvement over the Atari 2600, there was still no console comparable to the original Donkey Kong arcade hardware. Nevertheless, the bundled port of Donkey Kong helped the ColecoVision become a major success in North America. Uemura sent the engineers Katsuya Nakakawa and Masahiro Ootake to meet with Ricoh, a semiconductor manufacturer that had previously worked with Nintendo on arcade games. One of Ricoh's supervisors was Hiromitsu Yagi, a former Mitsubishi Electronics engineer who had designed the large-scale integration (LSI) chips for the Nintendo Color TV-Game consoles in the 1970s. To determine the system specifications of the new console, Nakakawa and Ootake brought along a Donkey Kong arcade machine for Ricoh to analyze, to help build a console more powerful than any other at the time and comparable to the Donkey Kong arcade hardware. Uemura initially thought of using a modern 16-bit central processing unit (CPU), but instead settled on an 8-bit CPU based on the inexpensive MOS Technology 6502, supplementing it with a custom graphics chip, the Picture Processing Unit, produced by Ricoh. To reduce costs, suggestions to include a keyboard, modem, and floppy disk drive were rejected, but expensive circuitry was added to provide a versatile 15-pin expansion port on the front of the console for future add-on peripheral devices. A keyboard, the Famicom Modem, and the Famicom Disk System would later be released as add-on peripherals, all utilizing the Famicom expansion port. Other peripherals connecting via the expansion port would include the Famicom Light Gun, the Family Trainer, and various specialized controllers; many, such as the Famicom 3D System and the Famicom Disk System, would be released in Japan only. The wireless broadcast functionality of the TV Tennis Electrotennis (1975) led Uemura to consider adding such a capability to the Famicom, but he ultimately did not pursue it, to keep system costs low.
1983–1984: Famicom release in Japan
Nintendo held its own exhibition to unveil the Famicom, which became a sensation among toy show exhibitors. Shortly after, the competing SG-1000 was unveiled at the Tokyo Toy Show. Launching on July 15, 1983, the Family Computer (commonly known by the Japanese-English term Famicom) is an 8-bit console using interchangeable cartridges. The Famicom was released in Japan for ¥14,800 (about US$150 at the time). Its launch games were Donkey Kong, Donkey Kong Junior, and Popeye. The console was intentionally designed to look like a toy, with a bright red-and-white color scheme and two hardwired gamepads that are stored visibly at the sides of the unit. It sold well in its early months, moving 500,000 units in its first two months.
However, many Famicom units reportedly had faulty graphics chips and froze during gameplay. After tracing the problem to a faulty circuit, Nintendo voluntarily recalled all Famicom systems just before the holiday shopping season and temporarily suspended production of the system while the concerns were addressed, costing Nintendo millions of dollars. The Famicom was subsequently reissued with a new motherboard. The Famicom easily outsold its primary competitor, the SG-1000. By the end of 1984, Nintendo had sold more than 2.5 million Famicoms in the Japanese market, making it the best-selling console in Japan, surpassing the Cassette Vision. Sales exceeded Nintendo's expectations and the Famicom sold out, so Nintendo raised projections and increased production for the following year. Nintendo had planned to be the exclusive provider of Famicom games during the console's launch year. Major arcade developer Namco approached Nintendo about Famicom development, as it had no means of cartridge production. It contracted to pay Nintendo a 30% fee per game sold, consisting of 10% as a licensing fee for the console and 20% as the production cost of new cartridges. By 1984, third-party Famicom games were being published. This 30% fee became a de facto standard in console and storefront licensing for video game publishing through the 2010s.
1984–1987: Going international
1983: Marketing negotiations with Atari
Bolstered by its success in Japan, Nintendo soon turned its attention to foreign markets. As a new console manufacturer, Nintendo had to convince a skeptical public to embrace its system. To this end, Nintendo entered into negotiations with Atari to release the Famicom outside Japan as the Nintendo Enhanced Video System, with plans to release the system by the end of 1983. Though the two companies reached a tentative agreement, with final contract papers to be signed at the 1983 Summer Consumer Electronics Show (CES), Atari refused to sign at the last minute, after seeing Coleco, one of its main competitors in the market at that time, demonstrating a prototype of Donkey Kong for its forthcoming Coleco Adam home computer system. Coleco had licensed Donkey Kong for the ColecoVision home console, but Atari held the exclusive computer license for the game. Although the game had been originally produced for the ColecoVision and could thus automatically be played on the backward-compatible Adam computer, Atari took the demonstration as a sign that Nintendo was also dealing with Coleco. Though the issue was cleared up within a month, by then Atari's financial problems stemming from the North American video game crash of 1983, coupled with the departure of Atari CEO Ray Kassar, left the company unable to follow through with the deal in time to make the target launch.
North America
1984–1986: Nintendo VS. System
Famicom hardware debuted in North American arcades in the form of the Nintendo VS. System in 1984; the system's success in arcades paved the way for the official release of the NES console. After the video game crash of 1983, many American retailers considered video games a passing fad and greatly reduced or discontinued their inventory of such products. Nintendo of America's market research was met with warnings to stay away from home consoles, with US retailers refusing to stock game consoles.
Meanwhile, the arcade industry also had a slump as the golden age of arcade video games came to an end, but arcades were able to recover and stabilize with the help of software conversion-kit systems. Hiroshi Yamauchi realized there was still a market for video games in North America, where gamers were gradually returning to arcades in significant numbers, and he still had faith there was a market for the Famicom, so he decided to introduce it to North America through the arcade industry. Nintendo developed the VS. System with the same hardware as the Famicom, and introduced it as the successor to its Nintendo-Pak arcade system, which had been used for games such as Donkey Kong 3 and Mario Bros. (both 1983). Though technologically weaker than Nintendo's more powerful Punch-Out arcade hardware, the VS. System was relatively inexpensive, epitomizing Gunpei Yokoi's philosophy of "lateral thinking with withered technology". The VS. System was also able to offer a wider variety of games, since games could easily be ported over from the Famicom. Upon release, the VS. System generated excitement in the arcade industry, receiving praise for its easy conversions, affordability, flexibility, and multiplayer capabilities. The VS. System became a major success in North American arcades. Between 10,000 and 20,000 arcade units were sold in 1984, and individual VS. games often appeared as top earners on the US arcade charts, such as VS. Tennis and VS. Baseball in 1984, then Duck Hunt and VS. Hogan's Alley in 1985. By 1985, 50,000 units had been sold, establishing Nintendo as an industry leader in the arcades. The VS. System went on to become the highest-grossing arcade machine of 1985 in the United States. By the time the NES launched in North America, nearly 100,000 VS. Systems had been sold to American arcades. The success of the VS. System gave Nintendo the confidence to release the Famicom in North America as a video game console, which would later be called the Nintendo Entertainment System (NES). Nintendo's strong positive reputation in the arcades generated significant interest in the NES. It also gave Nintendo the opportunity to test new games as VS. Paks in the arcades, to determine which games to release for the NES launch. Nintendo's software strategy was to first release games for the Famicom, then the VS. System, and then the NES. This allowed Nintendo to build a solid launch line-up for the NES. Many games made their North American debut on the VS. System before releasing for the NES, which led to many players being "amazed" at the accuracy of the arcade "ports" for the NES, though most VS. System games originated on the Famicom.
1985: Advanced Video System home computer
Nintendo president Hiroshi Yamauchi said in 1986, "Atari collapsed because they gave too much freedom to third-party developers and the market was swamped with rubbish games." After the deal with Atari failed, Nintendo proceeded alone, re-conceiving the Famicom console with a sophisticated design language as the "Nintendo Advanced Video System" (AVS). To keep the software market for its console from becoming similarly oversaturated, Nintendo added a lockout system to obstruct unlicensed software from running on the console, thus allowing Nintendo to enforce strict licensing standards. Licensed software carries the Nintendo Seal of Quality to communicate the company's approval.
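The following toy sketch illustrates the general lock-and-key idea behind such a lockout system (known on the NES as the 10NES chip): a "lock" in the console and a "key" in the cartridge independently generate data from a shared secret and must stay in agreement, or the console is held in reset. This is a conceptual illustration only, with made-up values, not the actual 10NES algorithm.

```python
import random

SHARED_SEED = 0x10  # stand-in for the secret program both chips embody

def chip_stream(seed, length):
    # Each chip deterministically generates the same 4-bit value stream
    # from its embedded program/seed.
    rng = random.Random(seed)
    return [rng.randrange(16) for _ in range(length)]

def console_boots(cartridge_seed):
    lock = chip_stream(SHARED_SEED, 8)    # chip inside the console
    key = chip_stream(cartridge_seed, 8)  # chip inside the cartridge
    return lock == key                    # any mismatch holds the console in reset

print(console_boots(SHARED_SEED))  # licensed cartridge  -> True
print(console_boots(0x0F))         # unlicensed cartridge -> False
```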
Nintendo's product designer Lance Barr, who would continue with the company for decades, retooled the Famicom console with a sleek and sophisticated design language. The toy-like white-and-red color scheme of the Famicom was replaced with a clean and futuristic scheme of grey, black, and red: the top and bottom portions are in different shades of grey, with a black stripe and ribbing along the top and minor red accents. The shape is boxier, flat on top, with a bottom half that tapers down to a smaller footprint. The front of the main unit features a compartment for storing the wireless controllers out of sight. To avoid the stigma of video game consoles, Nintendo issued prerelease marketing of the AVS as a full home computer, with an included keyboard, cassette data recorder, and a BASIC interpreter software cartridge. The BASIC interpreter would later be sold together with a keyboard as the Family BASIC package, and the cassette deck for data storage would later be released as the Famicom Data Recorder. The AVS includes a variety of computer-style input devices: gamepads, a handheld joystick, a 3-octave musical keyboard, and the Zapper light gun. The AVS Zapper is hinged, allowing it to straighten out into a wand form or bend into a gun form. The AVS uses a wireless infrared interface for all its peripherals, including the keyboard, cassette deck, and controllers. Most of the peripherals for the Advanced Video System are on display at the Nintendo World Store. The system's first known advertisement appeared in Consumer Electronics magazine in 1985, saying "The evolution of a species is now complete." The AVS was showcased at the Winter Consumer Electronics Show held in Las Vegas during January 5–8, 1985, in a reportedly "very busy" booth headed by Nintendo of America's president Minoru Arakawa. There, attendees acknowledged the advanced technology but responded poorly to the keyboard and wireless functionality. All of the more than 25 games demonstrated were complete, with no prototypes. No retail pricing information was given by Nintendo, which reportedly seemed to "test the waters" with potential distributors in an unpredictable market. Although Nintendo of America's marketing manager Gail Tilden had reported sales of more than 2.5 million Famicom units across the previous 18 months, yielding a 90% market share in Japan by the beginning of 1985, the American video game press was skeptical that the AVS could have any success in North America. News Wire reported on January 12, 1985, "It's hard to believe, but a Japanese company says it intends to introduce a new video-game machine in the United States, despite the collapse of the video-game industry here." The March 1985 issue of Electronic Games magazine stated that "the videogame market in America has virtually disappeared" and that "this could be a miscalculation on Nintendo's part". Roger Buoy of Mindscape allegedly said that year, "Hasn't anyone told them that the videogame industry is dead?" Video game historian Chris Kohler reflected, "Retailers didn't want to listen to the little startup Nintendo of America talk about how its Japanese parent company had a huge hit with the Famicom (the system from which the NES was adapted from). In America, videogames were dead, dead, dead. Personal computers were the future, and anything that just played games but couldn't do your taxes was hopelessly backwards."
Computer Entertainer openly rebuked the media after attending a humbly optimistic June 1985 Consumer Electronics Show (CES): "Can another video game system buck the trend and become a success? ... Perhaps if the press can avoid jumping all over the Nintendo system and let American consumers make up their own minds, we might find out that video games aren't dead after all."
1985: Redesign as the Nintendo Entertainment System
At the 1985 CES, Nintendo returned with a stripped-down and cost-reduced redesign of the AVS, having abandoned the home computer approach. Nintendo purposefully designed the system so as not to resemble a video game console and avoided terms associated with game consoles, with marketing manager Gail Tilden choosing the term "Game Pak" for cartridges, "Control Deck" for the console, and "Entertainment System" for the platform as a whole. Renamed the "Nintendo Entertainment System" (NES), the cost-reduced version lacks most of the upscale features added in the AVS but retains many of its audiophile-inspired design elements, such as the grey color scheme and boxy form factor. Disappointed with the cosmetically raw prototype part they received from Japan, which they nicknamed "the lunchbox", Nintendo of America designers Lance Barr and Don James added the two-tone gray, the black stripe, and the red lettering. To obscure the video game connotation, the NES replaced the top-loading cartridge slot of the Famicom and AVS with a front-loading chamber that places the inserted cartridge out of view, reminiscent of a VCR. The Famicom's pair of hard-wired controllers, and the AVS's wireless controllers, were replaced with two custom 7-pin sockets for detachable wired controllers. In another approach to marketing the system to North American retailers as an "entertainment system" rather than a video game console, Nintendo positioned the NES more squarely as a toy, emphasizing the Zapper light gun and, more significantly, R.O.B. (Robotic Operating Buddy), a wireless toy robot that responds to special screen flashes with mechanized actions. Although R.O.B. successfully drew a stream of retailers to Nintendo's CES booth to see the NES, they were still unwilling to sign up to distribute the console.
1985–1986: North American launch
In a show of strength and confidence by a company that rejected positions of weakness, an intense direct campaign ensued, carried out by a dedicated 12-person "Nintendo SWAT team" who relocated from Nintendo of America's headquarters in Redmond. The team included Minoru Arakawa, Tukwila warehouse manager and game tester Howard Phillips, Redmond warehouse manager and product designer Don James, product designer Lance Barr, marketer Gail Tilden, her boss Ron Judy, and salesperson Bruce Lowry. Having failed to secure a retail distributor over the previous year, the team would deliver the NES debut itself. This began a series of limited test-market launches in various metropolitan American cities prior to nationwide release. Instead of the traditional practice of test launching in a cheaper mid-sized city, Arakawa boldly chose the nation's largest market, New York City, as the initial test market, with a $50 million budget. Only with R.O.B.'s reclassification of the NES as a toy, telemarketing and shopping mall demonstrations, and a risk-free proposition to retailers did Nintendo secure enough retailer support there, about 500 retailers in New York and New Jersey.
The grandest and most important site was a 15-square-foot area at FAO Schwarz, the bellwether and key toy retailer of New York City. This had a dozen playable NES displays surrounding a giant television showing Baseball played by real Major League Baseball players, who also signed autographs, in order to anchor curious audiences to a familiar American pastime amid all the surreal fantasy games. In a huge gamble by Arakawa, and without having informed headquarters in Japan, Nintendo offered to handle all store setup and marketing, extend 90 days' credit on the merchandise, and accept returns on all unsold inventory. Retailers would pay nothing upfront, and after 90 days would either pay for the merchandise or return it to Nintendo. At Nintendo's unprecedented offer of risk absorption, retailers signed up one by one, with one incredulously saying "It's your funeral." The Nintendo Entertainment System then consisted of the Deluxe Set and an initial library of 17 games, chosen by Phillips. The Deluxe Set included a Control Deck console, two gamepads, R.O.B., the Zapper light gun, and the Game Paks Gyromite and Duck Hunt. Fifteen additional games were sold separately: 10-Yard Fight, Baseball, Clu Clu Land, Excitebike, Golf, Hogan's Alley, Ice Climber, Kung Fu, Pinball, Soccer, Stack-Up, Super Mario Bros., Tennis, Wild Gunman, and Wrecking Crew. The first test launch was in New York City on October 18, 1985, with an initial shipment of 100,000 Deluxe Set systems. Nintendo began marketing the system the same month, in October 1985. Headquartered in a Hackensack warehouse oozing with EPA hazards, "something like rats and snakes and toxic waste", the SWAT team worked every day, even through Christmas Eve 1985, in what Don James called "the longest and hardest I ever worked consecutive days in my life" and what Phillips called "every waking hour ... at the crack of dawn ... seven days a week". President Arakawa joined them at the warehouse and at retail stores, once running a TV up a flight of stairs just to follow in the whole team's footsteps. While unloading their products into stores, the Nintendo of America crew was confronted by strangers who resented any Japanese-influenced company in a time of international trade issues and cheap Japanese clones of American products. A security guard reportedly said, "You're working for the Japs? I hope you fall flat on your ass." Gail Tilden said, "I remember one woman coming up to me, and I don't know what sparked her to do this, but she came up to me and said, 'Nintendo. That's a Japanese company, right? ... I hope you fail!'". Retail staff, resentful of the disastrous video game market, rolled their eyes at Nintendo staff, with one manager looking at Nintendo's inventory and saying "Somebody told me I've got to sell this crap." The first sale came soon and quietly: a Deluxe Set and the 15 additional games, bought by a gentleman whom the team later realized was employed by an unspecified Japanese competitor. Sales were modest but encouraging throughout the holiday season, though sources vary on how many consoles were sold then. In 1986, Nintendo said it had sold nearly 90,000 units in nine weeks during its late-1985 New York City test. Some 460,000 game cartridges were also sold in 1985.
Following its success in the New York City test market, Nintendo planned to release the system gradually across different US states over the first six months of 1986, starting with California at the end of January 1986; Nintendo cited production capacities and other considerations as reasons for the gradual rollout. In January 1986, an independent research firm commissioned by Nintendo delivered a survey of 200 NES owners, showing that the most common reason given for buying an NES was that children wanted R.O.B. the robot, followed by good graphics, variety of games, and the uniqueness and newness of the NES package. R.O.B. is credited as a primary factor in building initial support for the NES in North America, but the accessory itself was not well received for its entertainment value. Its original Famicom counterpart, the Famicom Robot, was already failing in Japan at the time of the North American launch. The NES was also credited with bringing arcade-style gaming to homes. For the nationwide launch in 1986, the NES was available in two different packages: the fully featured Deluxe Set, as configured during the New York City launch, and a scaled-down Control Deck package which included the console, two gamepads, and Super Mario Bros. In early 1986, Nintendo announced its intention to adapt the Famicom Disk System to the NES by late 1986, but the need was obviated by the proliferation of larger and faster cartridge technology; the drive's NES launch was canceled, and the original was discontinued in Japan by the early 1990s. Nintendo added Los Angeles as the second test market in February 1986, followed by Chicago and San Francisco, then the other top 12 US markets, and finally went nationwide in July. Nintendo and Sega, which was similarly exporting its Master System to the US, both planned to spend $15 million in the fourth quarter of 1986 to market their consoles; later, Nintendo said it planned to spend $16 million and Sega said more than $9 million. Nintendo obtained a distribution deal with toymaker Worlds of Wonder, which leveraged its popular Teddy Ruxpin and Lazer Tag products to solicit more stores to carry the NES. From 1986 to 1987, this provided the initially reluctant WoW sales staff with windfall commissions, which Arakawa eventually capped at $1 million per person per year. The largest retailer, Sears, sold the NES through its Christmas catalog, and the second-largest retailer, Kmart, sold it in 700 stores. Nintendo sold 1.1 million consoles in 1986, estimating that it could have sold 1.4 million if inventory had held out. Nintendo earned $310 million in sales, out of total 1986 video game industry sales of $430 million, compared to total 1985 industry sales of $100 million.
Europe and Oceania
The NES was also released in Europe and Australia, in stages and in a rather haphazard manner. It was launched in Scandinavia in September 1986, and in the rest of mainland Europe in different months of 1987 (or, most likely, 1988 in the case of Spain), depending on the country. The United Kingdom, Ireland, Italy, Australia, and New Zealand all received the system in 1987, where it was distributed exclusively by Mattel. In Europe, the NES received a less enthusiastic response than it had elsewhere, and Nintendo lagged in market and retail penetration, though the console was more successful later. During the late 1980s, NES sales were lower than those of the Master System in the United Kingdom.
By 1990, the Master System was the highest-selling console in Europe, though the NES was beginning to build a fast-growing user base in the United Kingdom. Sega continued outselling Nintendo in the UK into 1992; a reason cited at the time by Paul Wooding of Sega Force was that "Nintendo became associated with kids playing alone in their rooms, while Sega was first experienced in the arcades with a gang of friends". Between 1991 and 1992, NES sales were booming in Europe, partly driven by the success of the Game Boy, which helped lift NES sales in the region. By 1994, NES sales had narrowly edged out the Master System overall in Western Europe. Among major European markets, the Master System led in the United Kingdom, Belgium, and Spain, whereas the NES led in France, Germany, Italy, and the Netherlands. In Australia, the NES was less successful than the Master System.
South Korea
In South Korea, the hardware was licensed to Hyundai Electronics, which marketed it as the Comboy from 1991. After World War II, the government of Korea (later South Korea) imposed a wide ban on all Japanese "cultural products". Until this was repealed in 1998, the only way Japanese products could legally enter the South Korean market was through licensing to a third-party (non-Japanese) distributor, as was the case with the Comboy and its successor, the Super Comboy, a version of the Super Nintendo Entertainment System (SNES). Hyundai sold 360,000 Comboy units in South Korea by 1993. This was less than half the total of the Master System (marketed as the Gam*Boy or Aladdinboy by Samsung), which had sold 730,000 units in the country by 1993.
Soviet Union and Russia
After the collapse of the Soviet Union, the introduction of the NES was attempted in two ways. The first was launch through local distributors. The second, much more popular method, was in the form of an unlicensed Taiwanese hardware clone named the Dendy, produced in Russia in the early 1990s. Aesthetically, it is a replica of the original Famicom, with a unique color scheme and labels, and with controller ports on the front using DE-9 serial connectors, identical to those used in the Atari 2600 and the Atari 8-bit computers. All Dendy games sold in Russia were bootleg copies, not licensed by Nintendo. In 1994, Nintendo signed an agreement with the Dendy distributor under which Nintendo had no claims against Dendy and allowed the sale of games and consoles. A total of units were sold in Russia and the former Soviet Union.
1987–1990: Leading the industry
In Japan, about Famicom units had been sold by January 1986, helped by the success of Super Mario Bros. (1985); sales increased to more than units, with 95% of the home video game market, by early 1987. In North America, the NES sold units in 1986, out of worldwide sales of that year. By 1988, the console had sold units in Japan and was projected to top in the United States by the end of the year. The NES widely outsold its primary competitors, the Master System and the Atari 7800. The successful launch of the NES positioned Nintendo to dominate the home video game market for the remainder of the 1980s. Buoyed by the success of the system, NES Game Paks produced similar sales records. The console's library exploded with classic flagship franchise-building and best-selling hits like Super Mario Bros., The Legend of Zelda, and Metroid (the latter two in 1986). Toward the 1987 Christmas season, sales of the NES had dwarfed those of Teddy Ruxpin and all other original products of its American distributor, Worlds of Wonder.
In October 1987, Minoru Arakawa discontinued the NES distribution contract with the failing WoW in favor of Nintendo's own growing clout, while hiring away WoW's sales staff, the same sales staff previously offered to Nintendo by Atari in 1983. The Legend of Zelda was the first NES game with over one million non-bundled cartridges sold in the United States. At more than 40 million copies, Super Mario Bros. was the highest-selling video game in history for many years. Released in 1988 in Japan, Super Mario Bros. 3 would gross more than $500 million, with more than 7 million copies sold in America and 4 million copies in Japan, making it the most popular and fastest-selling standalone home video game in history. By mid-1986, 19% (6.5 million) of Japanese households owned a Famicom; by mid-1988, one third did. By 1990, over units had been sold in the United States, present in 38% of American households, compared to 23% for all personal computers. The NES had reached a larger user base in the United States than any previous console, surpassing the record set by the Atari 2600 in 1982. In 1990, Nintendo also surpassed Toyota as Japan's most successful corporation. By early 1992, more than units had been sold worldwide, with in the United States by early 1993. Its popularity greatly affected the computer-game industry, with executives stating that "Nintendo's success has destroyed the software entertainment market" and "there's been a much greater falling off of disk sales than anyone anticipated". The growth in sales of the Commodore 64 ended; Nintendo sold almost as many consoles in 1988 as the total number of Commodore 64s sold in five years. Trip Hawkins called Nintendo "the last hurrah of the 8-bit world", with Nintendo having completely destroyed the Commodore 64 game market as of Christmas 1988.
1990s: Final years
1990–1992: Market decline
In the late 1980s, Nintendo's dominance was challenged by newer, technologically superior consoles. In 1987, NEC and Hudson Soft released the PC Engine, and in 1988, Sega released the 16-bit Mega Drive. Both were introduced in North America in 1989, where they were respectively marketed as the TurboGrafx-16 and the Genesis. Facing new competition from the PC Engine in Japan and the Genesis in North America, Nintendo's market share began to erode. Nintendo responded with the Super Famicom (Super NES or SNES in North America and Europe), the Famicom's 16-bit successor, in 1990. Although Nintendo announced its intention to continue supporting the Famicom alongside its newer console, the success of the newer offering drew even more gamers and developers away from the NES, whose decline accelerated. Nintendo did continue support of the NES for about three years after the September 1991 release of the SNES, the NES's final first-party games being Zoda's Revenge: StarTropics II and Wario's Woods. At Christmas 1991, both the NES and SNES were outsold by the Genesis in North America, and Nintendo's share of the North American market declined between 1991 and 1992. In contrast, NES sales were then booming in Europe.
1993–1995: New model and discontinuation in North America
A revised Famicom (the HVC-101 model) was released in Japan in 1993, taking some design cues from the SNES. The HVC-101 model replaces the original HVC-001 model's RF modulator with RCA composite audio/video output, eliminates the hardwired controllers, and features a more compact case design.
Retailing for ¥4,800 to ¥7,200 (equivalent to approximately US$42 to US$60), the HVC-101 model remained in production for almost a decade before finally being discontinued in 2003. The case design of the AV Famicom was adopted for a subsequent North American re-release of the NES. The NES-101 model differs from the Japanese HVC-101 model in that it omits the RCA composite output connectors of the original NES-001 model and offers only RF output. ASCII Entertainment reported in early 1993 that stores still offered 100 NES games, compared to 100 on shelves for Genesis and 50 for SNES. Some of the last games released for the system were The Incredible Crash Test Dummies, StarTropics II and Wario's Woods. After a decade of being on sale overseas, the NES was discontinued on August 14, 1995, with the last game being The Lion King. By the end of its production, more than 60 million NES units had been sold throughout the world.

2007–2018: Emulation
After the NES's discontinuation, a secondhand market burgeoned in video rental stores, thrift stores, yard sales, and flea markets, along with games repackaged by Game Time Inc. / Game Trader Inc. and sold at retail stores such as K-Mart. Many people began to rediscover the NES around this time, and by 1997 many older NES games were becoming popular with collectors. At the same time, computer programmers began to develop emulators capable of reproducing the internal workings of the NES on modern personal computers. When paired with a ROM image (a bit-for-bit copy of an NES cartridge's program code), the games can be played on a computer. Emulators also come with a variety of built-in functions that change the gaming experience, such as save states, which allow the player to save and resume progress at an exact spot in the game. Nintendo did not respond positively to these developments and became one of the most vocal opponents of ROM image trading. Nintendo and its supporters claim that such trading represents blatant software piracy. Proponents of ROM image trading argue that emulation preserves many classic games for future generations, outside of their more fragile cartridge formats. On May 30, 2003, Nintendo announced that it would stop production of the Super Famicom in September, along with discontinuing the original Famicom and software for the Famicom Disk System. The last Famicom, serial number HN11033309, was manufactured on September 25; it was kept by Nintendo and subsequently loaned to the organizers of Level X, a video game exhibition held from December 4, 2003, to February 8, 2004, at the Tokyo Metropolitan Museum of Photography, for a Famicom retrospective commemorating the console's 20th anniversary. In 2005, Nintendo announced plans to publish classic NES games on the Virtual Console download service for the Wii console, which is based on its own emulation technology. Initial games released included Mario Bros., The Legend of Zelda and Donkey Kong, with blockbusters such as Super Mario Bros., Punch-Out!!, and Metroid appearing in the following months. In 2007, Nintendo announced that it would no longer repair Famicom systems, due to an increasing shortage of the necessary parts. In 2016, Nintendo announced the NES Classic Edition, a dedicated console designed as a miniature replica of the original NES; it features 30 games along with save-state capability for each game. It was released in Australia on November 10, 2016, and in Europe and North America the following day. 
Nintendo also released a Famicom version of the console, featuring a different set of games, in Japan on November 10. The console immediately sold out upon launch due to high demand; intended as a limited-time release, it was discontinued by Nintendo in April 2017. Following the release of its successor, the Super NES Classic Edition, the NES Classic Edition was re-released on June 29, 2018; both consoles were discontinued after the end of the holiday season that year. As of June 30, 2018, Nintendo had sold 3.6 million units of the NES Classic Edition. See also Third generation of video game consoles References Nintendo Entertainment System Nintendo Entertainment System
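The ROM images described in the emulation section above are, in practice, usually distributed in the common iNES container format, which prefixes the raw cartridge data with a 16-byte header describing the cartridge hardware. A minimal sketch of reading that header in Python, assuming a standard iNES file (the file name in the usage line is hypothetical):

def read_ines_header(path):
    # Parse the 16-byte iNES header that most NES ROM images carry.
    with open(path, "rb") as f:
        header = f.read(16)
    if header[:4] != b"NES\x1a":
        raise ValueError("not an iNES ROM image")
    prg_banks = header[4]                        # PRG ROM size, in 16 KB units
    chr_banks = header[5]                        # CHR ROM size, in 8 KB units
    flags6, flags7 = header[6], header[7]
    mapper = (flags7 & 0xF0) | (flags6 >> 4)     # cartridge mapper number
    return {
        "prg_rom_kb": prg_banks * 16,
        "chr_rom_kb": chr_banks * 8,
        "mapper": mapper,
        "vertical_mirroring": bool(flags6 & 0x01),
    }

# Hypothetical usage:
# print(read_ines_header("some_game.nes"))

An emulator uses the mapper number and ROM sizes from this header to decide how to map the cartridge's banks into the console's address space before execution begins.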
History of the Nintendo Entertainment System
[ "Technology" ]
8,043
[ "History of video games", "History of computing" ]
1,571,427
https://en.wikipedia.org/wiki/Airworthiness%20certificate
A standard certificate of airworthiness is a permit for commercial passenger or cargo operation, issued for an aircraft by the civil aviation authority in the state/nation in which the aircraft is registered. For other aircraft, such as crop-sprayers, a Special Airworthiness Certificate (not for commercial passenger or cargo operations) must be issued.

Legal authority
A certificate of airworthiness (CoA), or an airworthiness certificate, is issued for an aircraft by the civil aviation authority in the state in which the aircraft is registered. The CoA attests that the aircraft is airworthy insofar as it conforms to its type design. Each certificate is issued in one of a number of different categories when the aircraft is registered in the name of the owner. Thereafter, a yearly currency fee is payable to renew the CoA. If this fee is not paid when due, the certificate expires and the owner must apply again for the certificate. The CoA can only be issued once a maintenance release or certificate of release to service (CRS) from the maintenance facility declares that the maintenance due has been carried out, at which point the aircraft is certified as being airworthy. In the US, Australia, and some other countries, a CoA is classified as either a standard airworthiness certificate or a special airworthiness certificate.

Standard airworthiness certificate
A standard airworthiness certificate is an airworthiness certificate issued for an aircraft by the civil aviation authority in the state in which the aircraft is registered. It is one of the certificates that are mandatory if an aircraft is to be used in commercial operations. In the US, Australia, and some other countries, a standard airworthiness certificate is issued in one of the following categories:
Transport
Commuter
Normal
Utility
Acrobatic
Manned free balloons
Special class of aircraft
The airworthiness certificate must be carried on board the aircraft and must be presented to a representative of the aviation authority upon request. A standard airworthiness certificate remains valid as long as the aircraft meets its approved type design and is in a condition for safe operation. In the US, a standard airworthiness certificate remains effective provided that maintenance, preventive maintenance, and alterations are performed in accordance with the relevant requirements and the aircraft remains registered in the USA. A standard airworthiness certificate ceases to be valid when the aircraft ceases to be registered. A change of ownership does not require re-issue or re-validation of the aircraft's standard airworthiness certificate. An aircraft that is not eligible for a standard airworthiness certificate may instead be issued a special airworthiness certificate. Examples of aircraft which are not eligible for standard airworthiness certificates but may be eligible for special airworthiness certificates include agricultural aircraft, experimental aircraft, and some ex-military aircraft.

Special airworthiness certificate
A special airworthiness certificate is an airworthiness certificate that is not sufficient to allow an aircraft to be used in commercial passenger or cargo operations. 
In the United States, a Special Airworthiness Certificate is issued in one or more of the following categories: See also Type certificate Joint Aviation Requirements Notes References The Code of Federal Regulations Title 14, Volume 1, Part 21 External links UK Civil Aviation Authority Sri Lanka Aviation licenses and certifications Aircraft maintenance
Airworthiness certificate
[ "Engineering" ]
658
[ "Aircraft maintenance", "Aerospace engineering" ]
1,571,429
https://en.wikipedia.org/wiki/HD%20330075
HD 330075 is a star in the southern constellation of Norma. It has a yellow hue and an apparent visual magnitude of 9.36, which makes it too faint to be seen with the naked eye – it is visible only with a telescope or powerful binoculars. Parallax measurements provide a distance estimate of 148 light years from the Sun, and it is drifting further away with a radial velocity of 62 km/s. The star is estimated to have come as close as some 409 million years ago. This object appears to be a slightly evolved dwarf with a spectral class of G5. That is, it is nearing the end of its main sequence lifetime and is becoming a subgiant star. The star has very low chromospheric activity and is around five billion years old. It is smaller than the Sun, with 86% of the Sun's mass and 85% of the solar radius. As a consequence, it is radiating just 39% of the luminosity of the Sun from its photosphere at an effective temperature of 4,967 K. It has a super-solar metallicity, which means the abundance of elements other than hydrogen and helium is much higher than in the Sun.

Planetary system
In 2004, the discovery of a hot Jupiter planet orbiting close to the star was announced. This was the first planet discovered with the then-new HARPS spectrograph. See also List of stars with extrasolar planets References External links G-type main-sequence stars Planetary systems with one confirmed planet Norma (constellation) CD-49 10033 330075 077517 J15493770-4957486
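The 39% luminosity figure follows directly from the quoted radius and effective temperature via the Stefan–Boltzmann law; a quick check, assuming a solar effective temperature of 5,772 K (a standard value, not stated in the article):

\frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2} \left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}}\right)^{4} \approx (0.85)^{2} \left(\frac{4967}{5772}\right)^{4} \approx 0.40

which agrees with the quoted 39% to within rounding.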
HD 330075
[ "Astronomy" ]
334
[ "Norma (constellation)", "Constellations" ]
1,571,455
https://en.wikipedia.org/wiki/HD%204208
HD 4208 is a star with an orbiting exoplanetary companion in the southern constellation of Sculptor. It has a yellow hue with an apparent visual magnitude of 7.78, making it too dim to be visible to the naked eye, but with binoculars or a small telescope it is an easy target. This object is located at a distance of 111.6 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +57 km/s. The star HD 4208 is named Cocibolca. The name was selected in the NameExoWorlds campaign by Nicaragua, during the 100th anniversary of the IAU. Cocibolca is the Nahuatl name for Lake Nicaragua. This is a G-type main-sequence star with a stellar classification of , where the suffix notation indicates underabundances of iron and carbyne (CH) in the spectrum. It is roughly 6.6 billion years old and is spinning with a projected rotational velocity of 4.4 km/s. The star has 86% of the Sun's mass and radius, and is radiating 71% of the Sun's luminosity from its photosphere at an effective temperature of 5,717 K. In 2001, a planet was discovered orbiting the star by means of the radial velocity method. This body is orbiting from the host star with a period of and a low eccentricity of 0.042. The position of this planet near the star's habitable zone means that it will have a strong gravitational perturbation effect on any potential Earth-mass planet that may be orbiting within this region. See also HD 4203 HD 4308 List of extrasolar planets References G-type main-sequence stars Planetary systems with one confirmed planet Sculptor (constellation) Durchmusterung objects 9024 004208 003479
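The radial velocity method mentioned above infers the planet from the periodic Doppler wobble it induces in the star. The measured velocity semi-amplitude K relates to the system parameters through the standard textbook relation (not given in the article):

K = \left(\frac{2\pi G}{P}\right)^{1/3} \frac{m_p \sin i}{\left(M_\star + m_p\right)^{2/3}} \frac{1}{\sqrt{1 - e^{2}}}

Because only K, the period P, and the eccentricity e are observed, the orbital inclination i remains unknown, so the method constrains only the minimum mass m_p sin i of the companion.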
HD 4208
[ "Astronomy" ]
385
[ "Constellations", "Sculptor (constellation)" ]
1,571,487
https://en.wikipedia.org/wiki/HD%20114729
HD 114729 is a Sun-like star with an orbiting exoplanet in the southern constellation of Centaurus. Based on parallax measurements, it is located at a distance of 124 light years from the Sun. It is near the lower limit of visibility to the naked eye, having an apparent visual magnitude of 6.68. The system is drifting further away with a heliocentric radial velocity of 26.3 km/s. The system has a relatively high proper motion, traversing the celestial sphere at an angular rate of ·yr−1. The spectrum of HD 114729 presents as an ordinary G-type main-sequence star, a yellow dwarf, with a stellar classification of G0 V. It has a negligible level of magnetic activity, making it chromospherically quiet. The star has about the same mass as the Sun, but its radius has expanded to 44% greater than the Sun's. It is radiating more than double the luminosity of the Sun from its photosphere at an effective temperature of 5,939 K. The size and luminosity suggest a much greater age than the Sun's, perhaps around nine billion years. HD 114729 has a co-moving companion designated HD 114729 B, with the latter having 25.3% of the Sun's mass and a projected separation of .

Planetary system
In 2003 the California and Carnegie Planet Search team announced the discovery of a planet orbiting the star. This planet orbits twice as far from its star as Earth does from the Sun, and its orbit is quite eccentric. It has a mass at least 84% (0.840) that of Jupiter and thus a minimum of 267 times the mass of Earth. See also List of extrasolar planets References External links G-type main-sequence stars Planetary systems with one confirmed planet Centaurus Durchmusterung objects 114729 064459
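The Earth-mass equivalent quoted above follows from the Jupiter-mass minimum, using the standard conversion 1 Jupiter mass ≈ 317.8 Earth masses (a value assumed here, not stated in the article):

m \sin i \approx 0.840\,M_{\mathrm{J}} \times 317.8\,\frac{M_\oplus}{M_{\mathrm{J}}} \approx 267\,M_\oplus

This arithmetic also confirms that the 0.840 figure corresponds to 84% of Jupiter's mass.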
HD 114729
[ "Astronomy" ]
388
[ "Centaurus", "Constellations" ]
1,571,507
https://en.wikipedia.org/wiki/HD%20179949
HD 179949 is a 6th magnitude star in the constellation of Sagittarius. It is a yellow-white dwarf (spectral class F8 V), a type of star hotter and more luminous than the Sun. The star is located about 90 light years from Earth and might be visible under exceptionally good conditions to an experienced observer without technical aid; usually binoculars are needed. The star HD 179949 is named Gumala. The name was selected in the NameExoWorlds campaign by Brunei, during the 100th anniversary of the IAU. Gumala is a Malay word meaning a magic bezoar stone found in snakes, dragons, etc.

Properties
This is an F-type main-sequence star classified with a spectral type of F8V. It has an estimated mass of 1.23 times the solar mass and a radius of 1.20 times the solar radius. Its photosphere is shining with 1.95 times the solar luminosity at an effective temperature of 6,220 K. Its metallicity, the abundance of elements other than hydrogen and helium, is high, with 162% of the solar iron abundance, following the trend that stars with giant planets are more metal-rich. With an estimated age of 1.2 billion years, HD 179949 is a chromospherically active star and has a complex magnetic field with a maximum strength of 10 G. Like the Sun, this star rotates differentially, with the equatorial region having a faster rotation period, of 7.62 ± 0.07 days, compared to a rotation period of 10.3 ± 0.8 days at the poles. The star's projected rotational velocity is 7.0 km/s, corresponding to an inclination angle of about 60°. HD 179949 has been classified as a BY Draconis variable, which varies in brightness due to rotational modulation of spots on the surface. Monitoring of the star's spectral lines suggested a possible correlation between the star's chromospheric activity and the orbital period of its planet, HD 179949 b. Later observations showed that this correlation was not present, with the star's activity being in synchrony with the star's rotation rather than with the exoplanet's orbit. In 2022, stellar X-ray flares from the star were found to be uncorrelated with the exoplanet's orbital period.

Planetary system
The discovery of an extrasolar planet orbiting HD 179949 with a period of only 3.1 days was published in 2001. It was detected with the radial velocity method from observations of the star with the UCLES spectrograph on the Anglo-Australian Telescope, as part of the Anglo-Australian Planet Search. With a minimum mass of 92% of the mass of Jupiter, it is a hot Jupiter, orbiting the star at a distance of only 0.04 AU. Its orbit is nearly circular, with a best-fit orbital eccentricity of 0.022 ± 0.015. Planets this close to their stars have a high probability of transiting, but photometric observations of HD 179949 ruled out this possibility. Infrared observations of HD 179949 with the Spitzer Space Telescope detected 0.14% variations in the system's brightness in phase with the orbital period of the planet, indicating a large luminosity difference between the illuminated side and the dark side of the planet and implying that less than 21% of the incident stellar energy is transferred to the dark side. In 2014, infrared observations of the system with the CRIRES instrument, at the Very Large Telescope, directly detected the thermal spectrum of the planet, revealing absorption features of carbon monoxide and water vapor in its atmosphere. 
The planet's radial velocity varies by 142.8 ± 3.4 km/s due to its orbital motion, which allowed the calculation of a true mass of 0.98 ± 0.04 Jupiter masses and an orbital inclination of 67.7 ± 4.3 degrees. References External links SIMBAD star entry, planet entry 179949 094645 Sagittarius (constellation) F-type main-sequence stars 7291 0749 Durchmusterung objects
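The quoted true mass can be sanity-checked against the minimum mass of 0.92 Jupiter masses given earlier, by dividing out the measured inclination (the standard geometry of the radial velocity method):

m_p = \frac{m_p \sin i}{\sin i} \approx \frac{0.92\,M_{\mathrm{J}}}{\sin 67.7^{\circ}} \approx 0.99\,M_{\mathrm{J}}

which is consistent, within the stated uncertainties, with the quoted 0.98 ± 0.04 Jupiter masses.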
HD 179949
[ "Astronomy" ]
845
[ "Sagittarius (constellation)", "Constellations" ]
1,571,520
https://en.wikipedia.org/wiki/Chi%20Orionis
Chi Orionis (Chi Ori, χ Orionis, χ Ori) is the name of two stars:
Chi1 Orionis (54 Orionis, HD 39587)
Chi2 Orionis (62 Orionis, HD 41117)
Both were members of the asterism 司怪 (Sī Guài), Deity in Charge of Monsters, in the Turtle Beak mansion. References Orionis, Chi Orion (constellation)
Chi Orionis
[ "Astronomy" ]
88
[ "Constellations", "Orion (constellation)" ]
1,571,680
https://en.wikipedia.org/wiki/List%20of%20document%20markup%20languages
The following is a list of document markup languages. You may also find the List of markup languages of interest.

Well-known document markup languages
HyperText Markup Language (HTML) – the original markup language defined as part of implementing the World Wide Web; an ad hoc language inspired by the meta format SGML, and one that in turn inspired many other markup languages.
Keyhole Markup Language (KML/KMZ) – the XML-based markup language used for exchanging geographic information for use with Google Earth.
Markdown – simple plaintext markup popular for blog/CMS posts and comments; multiple implementations exist.
Mathematical Markup Language (MathML)
Scalable Vector Graphics (SVG)
TeX, LaTeX – a format for describing complex type and page layout, often used for mathematical, technical, and academic publications.
Wiki markup – used in Wikipedia, MediaWiki and other wiki installations.
Extensible 3D (X3D)
Extensible HyperText Markup Language (XHTML) – HTML reformulated in XML syntax.
XHTML Basic – a subset of XHTML for simple (typically mobile, handheld) devices; meant to replace WML and C-HTML.
XHTML Mobile Profile (XHTML MP) – a standard designed for mobile phones and other resource-constrained devices.

Metalanguages
Standard Generalized Markup Language (SGML) – a standard pattern for markup languages, to which HTML and DocBook adhere.
Extensible Markup Language (XML) – a newer standard pattern for markup languages; a restricted form of SGML that is intended to be compatible with it.

Lesser-known document markup languages (including some lightweight markup languages)
ABC notation – markup language for music scores in plain text.
AmigaGuide – the Amiga hypertext documentation format, including multimedia support.
AsciiDoc – plaintext markup language similar to Markdown.
Asciidoctor – plaintext markup language (extending AsciiDoc).
Chemical Markup Language (CML)
Compact HyperText Markup Language (C-HTML) – used for some mobile phones.
Computable Document Format – used for interactive technical documents.
ConTeXt – a modular, structured formatting language based on TeX.
Darwin Information Typing Architecture (DITA) – modular open free format for technical and specialized documents.
DocBook – format for technical (but not only) manuals and documentation.
Encoded Archival Description (EAD)
Enriched text – for formatting e-mail text.
Generalized Markup Language (GML)
Geography Markup Language (GML)
Gesture Markup Language (GML)
Graffiti Markup Language (GML)
GNU TeXmacs format – used by the GNU TeXmacs document preparation system.
Guide Markup Language (GuideML) – used by the Hitchhiker's Guide site.
Handheld Device Markup Language (HDML) – designed for smartphones and handheld computers.
Help Markup Language (HelpML)
Hypermedia/Time-based Structuring Language (HyTime)
HyperTeX – for including hyperlinks in TeX (and LaTeX) documents.
Information Presentation Facility (IPF) – a system for presenting online help and hypertext on IBM OS/2 systems; also the default help file format used by the cross-platform fpGUI Toolkit project.
JATS (Journal Article Tag Suite) – a NISO standard of XML used to describe and publish STEM (scientific/technical/engineering/medical) scholarly journal articles.
LilyPond – a system for music notation.
LinuxDoc – used by the Linux Documentation Project.
Lout – a document formatting functional programming language, similar in style to LaTeX. 
Maker Interchange Format (MIF)
Microsoft Assistance Markup Language (MAML)
Music Encoding Initiative (MEI)
Music Extensible Markup Language (MusicXML)
Open Mathematical Documents (OMDoc)
OpenMath – a markup language for mathematical formulae which can complement MathML.
Parameter Value Language – stores mission data in NASA's Planetary Data System.
Plain Old Documentation (POD) – a simple, platform-independent documentation tool for Perl.
Pillar – a markup syntax and associated tools for writing and generating documentation, written in Pharo.
PUB – an early scriptable markup language.
Remote Telescope Markup Language (RTML)
reStructuredText (reST) – plaintext, platform-independent markup used to document Python libraries; supports multiple output formats (HTML, LaTeX, ODT, EPUB, ...).
Retail Template Markup Language (RTML) – e-commerce language based on Lisp.
Revisable-Form Text (RFT) – part of IBM's Document Content Architecture, allowing transfer of formatted documents to other systems.
S1000D – international specification for technical documentation for commercial or military aerospace, sea, or land vehicles and equipment.
Scribble – markup language based on Racket.
Scribe – Brian Reid's seminal markup language.
Script – early IBM markup language on which GML is built.
Semantic, Extensible, Computational, Styled, Tagged markup language (SECST) – a more expressive and semantic alternative to Markdown that also transpiles to HTML.
SiSU (Structured Information, Serialized Units) – generalized markup language with several output formats.
SKiCal – a machine-readable format for the interchange of enhanced yellow-page directory listings.
Skriv – lightweight markup language.
Texinfo – GNU documentation format.
Text Encoding Initiative (TEI) – guidelines for text encoding in the humanities, social sciences and linguistics.
Textile – plaintext markup that converts to XHTML for web text.
Time Management Markup Language (TMML) – for time management; rarely used, e.g. for mobile alarms circa 2008.
troff (typesetter runoff), groff (GNU runoff)
UDO – a lightweight markup language.
Wireless Markup Language (WML)
Wireless TV Markup Language (WTVML)
Extensible Application Markup Language (XAML) – XML-based user interface markup language.
Xupl – a C-style equivalent to XML.

Office document markup languages
Compound Document Format
Office Open XML (OOXML) – open standard format for office documents:
SpreadsheetML – spreadsheet language, part of Office Open XML.
PresentationML – presentations language, part of Office Open XML.
WordprocessingML – word-processing language, part of Office Open XML.
Microsoft Office 2003 XML formats – predecessor of Office Open XML.
OpenDocument (ODF) – open standard format for office documents.
OpenOffice.org XML – predecessor of OpenDocument.
ReportML – report format language originating from Microsoft Access (not a part of Office Open XML, yet).
Rich Text Format (RTF) – Microsoft format for exchanging documents with other vendors' applications. (It is not really a markup language, as it was never meant for intuitive and easy typing.)
Uniform Office Format (UOF) – open format for office documents, being harmonised with OpenDocument.

See also
Comparison of document markup languages
Comparison of Office Open XML and OpenDocument
Lightweight markup language
Page description language
References Document markup languages
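Several of the lightweight languages listed above (Markdown, reStructuredText, Textile) are designed to be converted to HTML. A minimal sketch of such a conversion in Python, assuming the third-party markdown package (one of Markdown's "multiple implementations"; any equivalent converter behaves similarly):

# Convert a lightweight-markup source string to HTML.
# Assumes: pip install markdown (third-party package, not stdlib).
import markdown

source = "# Heading\n\nSome *emphasized* text and a [link](https://example.org)."
html = markdown.markdown(source)
print(html)
# Expected output (roughly):
# <h1>Heading</h1>
# <p>Some <em>emphasized</em> text and a <a href="https://example.org">link</a>.</p>

The same source-to-HTML pipeline underlies most of the blog and documentation tooling these languages were designed for.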
List of document markup languages
[ "Technology" ]
1,509
[ "Computing-related lists", "Lists of computer languages" ]
1,571,683
https://en.wikipedia.org/wiki/HD%20121504
HD 121504 is a star with an orbiting exoplanet in the southern constellation of Centaurus. It is located at a distance of 136 light years from the Sun based on parallax measurements, and is drifting further away with a radial velocity of 19.6 km/s. With an apparent visual magnitude of 7.54, this star is too faint to be visible to the naked eye. It shows a high proper motion, traversing the celestial sphere at an angular rate of . The spectrum of this star presents as an ordinary G-type main-sequence star, a yellow dwarf similar in appearance to the Sun, having a stellar classification of G2V. It is roughly two billion years old and is spinning with a rotation period of 8.6 days. The star has 16% more mass than the Sun and a 15% greater radius. The metallicity (the abundance of elements more massive than helium) is higher than solar. The star is radiating 162% of the luminosity of the Sun from its photosphere at an effective temperature of 6,089 K. A nearby visual companion, designated SAO 241323, has been proposed as a component of the system. However, the pair form an optical binary with an angular separation of , and in reality the companion is a white giant star located thousands of light years away.

Exoplanet
In 2000, the Geneva Extrasolar Planet Search team announced the discovery of an extrasolar planet orbiting the star. See also List of extrasolar planets References G-type main-sequence stars Planetary systems with one confirmed planet Double stars Centaurus Durchmusterung objects 121504 068162
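The parallax behind the quoted distance can be recovered from the standard relation between distance and parallax angle (the 1 pc ≈ 3.26 ly conversion is a standard value, not from the article):

d\,[\mathrm{pc}] = \frac{1}{\pi\,[\mathrm{arcsec}]}, \qquad 136\ \mathrm{ly} \approx \frac{136}{3.26}\ \mathrm{pc} \approx 41.7\ \mathrm{pc} \;\Rightarrow\; \pi \approx \frac{1}{41.7} \approx 0.024'' = 24\ \mathrm{mas}

That is, the quoted 136 light years corresponds to a measured parallax of roughly 24 milliarcseconds.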
HD 121504
[ "Astronomy" ]
337
[ "Centaurus", "Constellations" ]
1,571,686
https://en.wikipedia.org/wiki/List%20of%20web%20service%20protocols
The following is a list of web service protocols.
BEEP – Blocks Extensible Exchange Protocol
CTS – Canonical Text Services Protocol
E-Business XML
Hessian
Internet Open Trading Protocol
JSON-RPC
JSON-WSP
SOAP – outgrowth of XML-RPC, originally an acronym for Simple Object Access Protocol
Universal Description, Discovery, and Integration (UDDI)
Web Processing Service (WPS)
WSCL – Web Services Conversation Language
WSFL – Web Services Flow Language (superseded by BPEL)
XINS Standard Calling Convention – HTTP parameters in (GET/POST/HEAD), POX out
XLANG – XLANG specification (superseded by BPEL)
XML-RPC – XML Remote Procedure Call
See also
List of web service frameworks
List of web service specifications
Service-oriented architecture
Web service
Application layer protocols web service
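Of the protocols listed, XML-RPC is among the simplest to exercise: the client marshals a method name and parameters into an XML methodCall document and POSTs it over HTTP. A minimal sketch using Python's standard library (the endpoint URL and method name are hypothetical):

# Call a remote XML-RPC method with the stdlib client.
import xmlrpc.client

# Attribute access on the proxy is marshalled into XML <methodCall>
# documents and sent to the endpoint; URL and method are hypothetical.
proxy = xmlrpc.client.ServerProxy("https://example.org/RPC2")
try:
    print(proxy.math.add(2, 3))   # returns 5 if the server defines math.add
except xmlrpc.client.Fault as fault:
    print("server fault:", fault.faultString)

JSON-RPC follows the same request/response pattern but encodes the call as a JSON object rather than XML.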
List of web service protocols
[ "Technology" ]
172
[ "Computing-related lists", "Lists of network protocols" ]
1,571,722
https://en.wikipedia.org/wiki/HD%20114783
HD 114783 is a star with two exoplanetary companions in the equatorial constellation of Virgo. With an apparent visual magnitude of 7.56, it is too faint to be visible with the unaided eye, but it is an easy target for binoculars. Based on parallax measurements, it is located at a distance of from the Sun, but is drifting closer with a radial velocity of −12 km/s. This is an orange-hued K-type main-sequence star with a stellar classification of K1V. It is roughly 2.5 billion years old and is chromospherically inactive, with a low projected rotational velocity of 1.9 km/s. The star has 88% of the mass and 81% of the radius of the Sun. It is radiating 42% of the luminosity of the Sun from its photosphere at an effective temperature of 5,114 K. In 2001, the California and Carnegie Planet Search team found an exoplanet, HD 114783 b, orbiting the star using the radial velocity method. The discovery was made with the Keck Telescope. A second companion, HD 114783 c, was discovered in 2016, and in 2023 its inclination and true mass were measured via astrometry. See also HD 114386 List of extrasolar planets References K-type main-sequence stars Planetary systems with two confirmed planets Virgo (constellation) BD-01 2784 3769 114783 064457
HD 114783
[ "Astronomy" ]
308
[ "Virgo (constellation)", "Constellations" ]