| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
27,456,863 | https://en.wikipedia.org/wiki/List%20of%20restriction%20enzyme%20cutting%20sites%3A%20Bst%E2%80%93Bv | This article contains a list of the most studied restriction enzymes whose names start with Bst to Bv inclusive. It contains approximately 200 enzymes.
The following information is given:
Whole list navigation
Restriction enzymes
Bst
Bsu - Bv
Notes
Biotechnology
Restriction enzyme cutting sites
Restriction enzymes | List of restriction enzyme cutting sites: Bst–Bv | Chemistry,Biology | 56 |
47,991,769 | https://en.wikipedia.org/wiki/Penicillium%20viticola | Penicillium viticola is a species of fungus in the genus Penicillium which was isolated from grapes in Yamanashi Prefecture in Japan. Penicillium viticola produces calcium malate.
References
Further reading
viticola
Fungi described in 2011
Fungus species | Penicillium viticola | Biology | 57 |
6,885,779 | https://en.wikipedia.org/wiki/Mobile%20radio | Mobile radio or mobiles refer to wireless communications systems and devices based on radio frequencies (commonly UHF or VHF), where either end of the communications path can be in motion. There are a variety of views about what constitutes mobile equipment. For US licensing purposes, mobiles may include hand-carried (sometimes called portable) equipment. An obsolete term is radiophone.
A salesperson or radio repair shop would understand the word mobile to mean vehicle-mounted: a transmitter-receiver (transceiver) used for radio communications from a vehicle. Mobile radios are mounted to a motor vehicle, usually with the microphone and control panel in reach of the driver. In the US, such a device is typically powered by the host vehicle's 12 volt electrical system.
Some mobile radios are mounted in aircraft (aeronautical mobile), aboard ships (maritime mobile), on motorcycles, or on railroad locomotives. Power may vary with each platform. For example, a mobile radio installed in a locomotive would run off 72 or 30 volt DC power. A large ship with 117 V AC power might have a base station mounted on the ship's bridge.
According to article 1.67 of the ITU, a mobile radio is "A station in the mobile service intended to be used while in motion or during halts at unspecified points."
Nomenclature: Two-way versus telephone
The distinction between radiotelephones and two-way radio is becoming blurred as the two technologies merge. The backbone or infrastructure supporting the system defines which category or taxonomy applies. A parallel to this concept is the convergence of computing and telephones.
Radiotelephones are full-duplex (simultaneous talk and listen), circuit switched, and primarily communicate with telephones connected to the public switched telephone network. The connection is set up when the user dials and is taken down when the end button is pressed. They run on telephony-based infrastructure such as AMPS or GSM.
Two-way radio is primarily a dispatch tool intended to communicate in simplex or half-duplex modes using push-to-talk, and primarily intended to communicate with other radios rather than telephones. These systems run on push-to-talk-based infrastructure such as Nextel's iDEN, Specialized Mobile Radio (SMR), MPT-1327, Enhanced Specialized Mobile Radio (ESMR) or conventional two-way systems. Certain modern two-way radio systems may have full-duplex telephone capability.
History
Early users of mobile radio equipment included transportation companies and government. These systems used one-way broadcasting instead of two-way conversations. Railroads used medium frequency (MF) communications (similar to the AM broadcast band) to improve safety. Voice communications with rolling trains became possible, replacing the practice of hanging out of a locomotive cab and grabbing train orders while rolling past a station. Radios linked the caboose with the locomotive cab. Early police radio systems were initially one-way, using MF frequencies above the AM broadcast band (1.7 MHz). Some early systems talked back to dispatch on a 30–50 MHz link (called crossband).
Early mobile radios used amplitude modulation (AM) to convey intelligence through the communications channel. In time, problems with sources of electrical noise showed that frequency modulation (FM) was superior for its ability to cope with vehicle ignition and power line noise. The frequency range used by most early radio systems, 25–50 MHz (VHF "low band"), is particularly susceptible to electrical noise. This, plus the need for more channels, led to the eventual expansion of two-way radio communications into the VHF "high band" (150–174 MHz) and UHF (450–470 MHz). The UHF band has since been expanded again.
One of the major challenges in early mobile radio technology was converting the six or twelve volt power supply of the vehicle to the high voltage needed to operate the vacuum tubes in the radio. Early tube-type radios used dynamotors: essentially a six or twelve volt motor that turned a generator to provide the high voltages required by the vacuum tubes. Some early mobile radios were the size of a suitcase or had separate boxes for the transmitter and receiver. As time went on, power supply technology evolved to use first electromechanical vibrators, then solid-state power supplies to provide high voltage for the vacuum tubes. These circuits, called "inverters", changed the 6 or 12 V direct current (DC) to alternating current (AC), which could be passed through a transformer to make high voltage. The power supply then rectified this high voltage to make the high-voltage DC required for the vacuum tubes (called valves in British English). The power supplies needed for vacuum tube radios resulted in a common trait of tube-type mobile radios: their heavy weight, due to the iron-core transformers in the power supplies. These high-voltage power supplies were inefficient, and the filaments of the vacuum tubes added to current demands, taxing vehicle electrical systems. Sometimes a generator or alternator upgrade was needed to support the current required for a tube-type mobile radio.
Examples of US 1950s-1960s tube-type mobile radios with no transistors:
Motorola FMTRU-140D (dynamotor powered)
Motorola Twin-V, named for its "universal" 6 or 12 Volt power supply
General Electric Progress Line (Early models without "T-Power" power supply)
Kaar Engineering Model 501
Equipment from different US manufacturers had similar traits. This was partly dictated by Federal Communications Commission (FCC) regulations. The requirement that unauthorized persons be prohibited from using the radio transmitter meant that many radios were wired so they could not transmit unless the vehicle ignition was on. Persons without a key to the vehicle could not transmit. Equipment had to be "type accepted", or technically approved, by the FCC before it could be offered for sale. In order to be type accepted, the radio set had to be equipped with an indicator light, usually green or yellow, that showed power was applied and the radio was ready to transmit. Radios were also required to have a lamp (usually red) indicating when the transmitter was on. These traits continue in the design of modern radios.
Early tube-type radios operated on 50 kHz channel spacing with ±15 kHz modulation deviation. This meant that the number of radio channels that could be accommodated in the available radio frequency spectrum was limited, dictated by the bandwidth of the signal on each channel.
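A rough way to see the trade-off is Carson's rule, which estimates the occupied bandwidth of an FM signal as twice the sum of the peak deviation and the highest modulating frequency. The sketch below is illustrative only; the 3 kHz voice bandwidth and the example band edges are assumed values, not figures specified in this article.
// Illustrative sketch: Carson's rule bandwidth estimate and a rough channel count.
function carsonBandwidthKHz(peakDeviationKHz, maxAudioKHz) {
  // Carson's rule: occupied bandwidth ≈ 2 × (deviation + highest audio frequency)
  return 2 * (peakDeviationKHz + maxAudioKHz);
}
function channelsInBand(bandStartMHz, bandEndMHz, spacingKHz) {
  // How many channels of a given spacing fit between the band edges
  return Math.floor(((bandEndMHz - bandStartMHz) * 1000) / spacingKHz);
}
console.log(carsonBandwidthKHz(15, 3));    // 36 kHz: wideband FM needs roughly 50 kHz channel spacing
console.log(carsonBandwidthKHz(5, 3));     // 16 kHz: narrowband FM fits 20–30 kHz spacing
console.log(channelsInBand(150, 174, 50)); // 480 channels in a 150–174 MHz band at 50 kHz spacing
console.log(channelsInBand(150, 174, 25)); // 960 channels at 25 kHz spacing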
Solid-state electronic equipment arrived in the 1960s, with more efficient circuitry and smaller size. Metal–oxide–semiconductor (MOS) large-scale integration (LSI) provided a practical and economic solution for radio technology, and was used in mobile radio systems by the early 1970s. Channel spacing narrowed to 20–30 kHz with modulation deviation dropping to ±5 kHz. This was done to allow more radio spectrum availability to accommodate the rapidly growing national group of two-way radio users. By the mid-1970s, tube-type transmitter power amplifiers had been replaced with high-power transistors. From the 1960s to the 1980s, large system users with specialized requirements often had custom built radios designed for their unique systems. Systems with multiple-CTCSS tone encoders and more than two channels were unusual. Manufacturers of mobile radios built customized equipment for large radio fleets such as the California Department of Forestry and the California Highway Patrol.
Examples of US hybrid partially solid state mobile radios:
Motorola Motrac
Motorola MJ IMTS Car Telephone (1963)
General Electric Transistorized Progress Line
General Electric MASTR Professional and MASTR Executive
RCA Super Carfone
Today
Custom design for a particular customer is a thing of the past. Modern mobile radio equipment is "feature rich". A mobile radio may have 100 or more channels, be microprocessor controlled and have built-in options such as unit ID. A computer and software are typically required to program the features and channels of the mobile radio. Menus of options may be several levels deep and offer a complicated array of possibilities. Some mobile radios have alphanumeric displays that translate channel numbers (F1, F2) to a phrase more meaningful to the user, such as "Providence Base", "Boston Base", etc. Radios are now designed with a myriad of features to preclude the need for custom design. For example, Hytera's HM68X mobile radio, introduced in September 2022, offers a variety of features including GPS location, emergency alarm, noise cancellation, and more.
Examples of US microprocessor-controlled mobile radios:
Motorola Astro Digital Spectra W9
Kenwood TK-690
PositionPTT mobile-radio-m94g
As use of mobile radio equipment has grown dramatically, channel spacing has had to be narrowed again to 12.5–15 kHz, with modulation deviation dropped to ±2.5 kHz. In order to fit into smaller, more economical vehicles, today's radios are trending toward radically smaller sizes than their tube-type ancestors.
Traditional analogue radio communications have been surpassed by digital voice communications, which provide greater clarity of transmission, enable security features such as encryption and, within the network, allow low-bandwidth data transmission for simple text or picture messaging. (Examples: Project 25 (APCO-25), Terrestrial Trunked Radio (TETRA), DMR.)
Details
Commercial and professional mobile radios are often purchased from an equipment supplier or dealer whose staff will install the equipment into the user's vehicles. Large fleet users may buy radios directly from an equipment manufacturer and may even employ their own technical staff for installation and maintenance.
A modern mobile radio consists of a radio transceiver, housed in a single box, and a microphone with a push-to-talk button. Each installation would also have a vehicle-mounted antenna connected to the transceiver by a coaxial cable. Some models may have an external, separate speaker which can be positioned and oriented facing the driver to overcome ambient road noise present when driving. The installer would have to locate this equipment in a way that does not interfere with the vehicle's sun roof, electronic engine management system, vehicle stability computer, or air bags.
Mobile radios installed on motorcycles are subject to extreme vibration and weather. Professional equipment designed for use on motorcycles is weather and vibration resistant. Shock mounting systems are used to reduce the radio's exposure to vibration imparted by the motorcycle's modal, or resonant, shaking.
Some mobile radios use noise-canceling microphones or headsets. At speeds over 100 MPH, the ambient road and wind noise can make radio communications difficult to understand. For example, California Highway Patrol mobile radios have noise-canceling microphones which reduce road and siren noise heard by the dispatcher. Most fire engines and radios in heavy equipment use noise-canceling headsets. These protect the occupants' hearing and reduce background noise in the transmitted audio. Noise-canceling microphones require the operator to speak directly into the front of the microphone. Hole arrays in the back of the microphone pick up ambient noise, which reaches the rear of the microphone element out of phase, effectively reducing or canceling any sound present at both the front and back of the microphone. Ideally, only the voice present on the front side of the microphone goes out on the air.
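The cancellation principle can be sketched numerically. In the highly simplified model below, the sample values are assumed rather than measured: ambient noise reaching both sides of the microphone element subtracts out, while the voice present only at the front remains.
// Simplified, assumed-value model of a noise-canceling microphone:
// ambient noise reaches both the front and the rear ports, the voice only the front.
const voice   = [0.2, 0.5, -0.3, 0.1];   // close-talking speech samples (assumed)
const ambient = [0.8, -0.6, 0.7, -0.9];  // road/siren noise samples (assumed)
const front = voice.map((v, i) => v + ambient[i]); // front of the element hears voice + noise
const rear  = ambient.slice();                     // rear ports hear mostly the ambient noise
const output = front.map((f, i) => f - rear[i]);   // out-of-phase combination cancels the noise
console.log(output); // ≈ voice: [0.2, 0.5, -0.3, 0.1]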
Many radios are equipped with transmitter time-out timers which limit the length of a transmission. A bane of push-to-talk systems is the stuck microphone: A radio locked on transmit, which disrupts communications on a two-way radio system. One example of this problem occurred in a car with a concealed two-way radio installation where the microphone and coiled cord were hidden inside the glove box. An operator tossed the mike into the glove box and shut it, causing the push-to-talk button to be depressed and locking the transmitter on. On taxi systems, a driver may be upset when a dispatcher assigns a call (s)he wanted to another driver and may deliberately hold the transmit button down (for which the owner can be fined by the FCC). Radios with time-out timers transmit for the preset amount of time, usually 30–60 seconds, after which the transmitter automatically turns off and a loud tone comes out of the radio speaker. The volume level of the tone on some radios is loud and cannot be adjusted. As soon as the push-to-talk button is released, the tone stops and the timer resets.
Mobile radio equipment is manufactured to specifications developed by the Electronic Industries Association/Telecommunications Industry Association (EIA/TIA). These specifications have been developed to help assure the user that mobile radio equipment performs as expected and to prevent the sale and distribution of inferior equipment which could degrade communications.
Antenna
A mobile radio must have an associated antenna. The most common antennas are stainless steel wire or rod whips which protrude vertically from the vehicle. Physics defines the antenna length: it is related to the operating frequency and cannot be arbitrarily lengthened or, more commonly, shortened by the end user. The standard "quarter wave" antenna in the 25–50 MHz range can be over nine feet long. A 900 MHz antenna may be three inches long for a quarter wavelength. A transit bus may have a ruggedized antenna, which looks like a white plastic blade or fin, on its roof. Some vehicles with concealed radio installations have antennas designed to look like the original AM/FM antenna or a rearview mirror, or may be installed inside windows or hidden on the floor pan or underside of the vehicle. Aircraft antennas look like blades or fins, with the size and shape determined by the frequencies used. Microwave antennas may look like flat panels on the aircraft's skin. Temporary installations may have antennas which clip onto vehicle parts or are attached to steel body parts by a strong magnet.
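The quoted lengths follow from the quarter-wavelength relation, length ≈ c / (4 × f). A quick check, with example frequencies chosen purely for illustration:
// Quarter-wave whip length: lambda / 4 = c / (4 × f)
function quarterWaveMetres(freqMHz) {
  const c = 299.792458; // speed of light, expressed in metres × MHz
  return c / (4 * freqMHz);
}
console.log(quarterWaveMetres(25));  // ≈ 3.0 m (about 9.8 ft) near the bottom of the VHF low band
console.log(quarterWaveMetres(900)); // ≈ 0.083 m (about 3.3 in) at 900 MHz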
Though antennas are initially relatively inexpensive mobile radio system components, frequently damaged antennas can be costly to replace since they are usually not included in maintenance contracts for mobile radio fleets. Some types of vehicles in 24-hour use, with stiff suspensions, tall heights, or rough diesel engine idle vibrations, may damage antennas quickly. The location and type of antenna can affect system performance drastically. Large fleets usually test a few vehicles before committing to a certain antenna location or type.
U.S. Occupational Safety and Health Administration guidelines for non-ionizing radio energy generally say the radio antenna must be at least two feet from any vehicle occupants. This rule of thumb is intended to prevent passengers from being exposed to unsafe levels of radio frequency energy when the radio transmits.
Multiple radio sets
Dispatch-reliant services, such as tow cars or ambulances, may have several radios in each vehicle. For example, tow cars may have one radio for towing company communications and a second for emergency road service communications. Ambulances may have a similar arrangement with one radio for government emergency medical services dispatch and one for company dispatch.
Multiple controls, microphones
US ambulances often have radios with dual controls and dual microphones allowing the radio to be used from the patient care area in the rear or from the vehicle's cab.
Data radio
Both tow cars and ambulances may have an additional radio that transmits and receives data to support a mobile data terminal. In the same way that a facsimile machine uses a separate phone line, a dedicated data radio allows data and voice communication to take place simultaneously. Early Federal Express (FedEx) radio systems used a single radio for data and voice. The radio had a request-to-speak button which, when acknowledged, allowed voice communication to the dispatch center.
Each radio works over a single band of frequencies. If a tow car company had a frequency on the same band as its auto club, a single radio with scanning might be employed for both systems; where communications take place over systems on more than one frequency band, multiple radios may be required.
Walkie talkie converters in place of mobile radios
Intended as a cost savings, some systems employ vehicular chargers instead of a mobile radio. Each radio user is issued a walkie talkie. Each vehicle is equipped with a charger system console. The walkie talkie is inserted into a vehicular charger or converter while the user is in the vehicle. The charger or converter (1) connects the walkie talkie to the vehicle's two-way radio antenna, (2) connects an amplified speaker, (3) connects a mobile microphone, and (4) charges the walkie talkie's battery. The weak point of these systems has been connector technology, which has proven unreliable in some installations. Receiver performance is a problem in urban areas with congested radio signals. These installations are sometimes referred to as jerk-and-run systems.
Notes
See also
Land mobile radio
References
External links
Mobile technology | Mobile radio | Technology | 3,485 |
31,562,947 | https://en.wikipedia.org/wiki/MECHATROLINK | MECHATROLINK is an open protocol used for industrial automation, originally developed by Yaskawa and presently maintained by Mechatrolink Members Association (MMA).
The MECHATROLINK protocol has two major variants:
MECHATROLINK-II—Defines protocol communication schemes through serial link equivalent to RS485 with a maximum speed of 10 Mbit/s and maximum 30 slave nodes.
MECHATROLINK-III—Defines protocol communication schemes over Ethernet with a maximum speed of 100 Mbit/s and maximum 62 slave nodes.
References
External links
Protocol Introduction and Supported Products
Industrial computing
Serial buses
Industrial Ethernet | MECHATROLINK | Technology,Engineering | 123 |
12,197,976 | https://en.wikipedia.org/wiki/MMode | mMode was the brand name for the wireless data service offered by the former AT&T Wireless. Based on NTT DoCoMo's i-mode, it was available to any AT&T Wireless subscriber with a WAP-capable phone. Operating over GPRS, EDGE, and UMTS, mMode was the successor to AT&T's unsuccessful CDPD-based Pocketnet. Launched in April 2002, it was no longer available to new subscribers following the Cingular takeover, but legacy AT&T Wireless subscribers were able to access the system until June 2010.
Features
Access to sites with WAP-enabled pages, such as eBay and Yahoo!
"@mmode.com" email account
Ringtones and graphics available for purchase and download
"Find a Friend", a service which enabled one subscriber to find another subscriber's approximate location using triangulation. Cingular has since removed this feature.
mMode Music Store, launched in October 2004, allowed subscribers to purchase music and have it charged to their wireless bill (note that the music could not be played on the phone; it had to be downloaded to the user's computer)
References
AT&T debuts "mMode" wireless Web - CNET News
AT&T Wireless opens mobile music store - CNET News
Cingular to Shut Down mMode LBS Services - PhoneNews.com
AT&T
Mobile telecommunication services
2002 introductions | MMode | Technology | 294 |
81,218 | https://en.wikipedia.org/wiki/Bookmarklet | A bookmarklet is a bookmark stored in a web browser that contains JavaScript commands that add new features to the browser. Bookmarklets are stored as the URL of a bookmark in a web browser or as a hyperlink on a web page, and are usually small snippets of JavaScript executed when the user clicks on them. When clicked, bookmarklets can perform a wide variety of operations, such as running a search query from selected text or extracting data from a table.
Another name for bookmarklet is favelet or favlet, derived from favorites (synonym of bookmark).
History
Steve Kangas of bookmarklets.com coined the word bookmarklet when he started to create short scripts based on a suggestion in Netscape's JavaScript guide. Before that, Tantek Çelik called these scripts favelets and used that word as early as 6 September 2001 (personal email). Brendan Eich, who developed JavaScript at Netscape, has also given an account of the origin of bookmarklets.
The increased implementation of Content Security Policy (CSP) in websites has caused problems with bookmarklet execution and usage (2013-2015), with some suggesting that this hails the end or death of bookmarklets. William Donnelly created a work-around solution for this problem (in the specific instance of loading, referencing and using JavaScript library code) in early 2015 using a Greasemonkey userscript (Firefox / Pale Moon browser add-on extension) and a simple bookmarklet-userscript communication protocol. It allows (library-based) bookmarklets to be executed on any and all websites, including those using CSP and having an https:// URI scheme. Note, however, that if/when browsers support disabling/disallowing inline script execution using CSP, and if/when websites begin to implement that feature, it will "break" this "fix".
Concept
Web browsers use URIs for the href attribute of the <a> tag and for bookmarks. The URI scheme, such as http or ftp, generally specifies the protocol and determines the format of the rest of the string. Browsers also implement javascript: URIs, which to the parser look like any other URI. The browser recognizes the javascript scheme and treats the rest of the string as a JavaScript program, which is then executed. The result of the expression, if any, is treated as the HTML source code for a new page displayed in place of the original.
The executing script has access to the current page, which it may inspect and change. If the script returns an undefined type (rather than, for example, a string), the browser will not load a new page, with the result that the script simply runs against the current page content. This permits changes such as in-place font size and color changes without a page reload.
An immediately invoked function that returns no value or an expression preceded by the void operator will prevent the browser from attempting to parse the result of the evaluation as a snippet of HTML markup:
javascript:(function(){
//Statements returning a non-undefined type, e.g. assignments
})();
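For example, a minimal bookmarklet of this form that modifies the current page in place (the colour value here is arbitrary and chosen only for illustration) might be:
javascript:void(document.body.style.backgroundColor='lightyellow');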
Usage
Bookmarklets are saved and used as normal bookmarks. As such, they are simple "one-click" tools which add functionality to the browser. For example, they can:
Modify the appearance of a web page within the browser (e.g., change font size, background color, etc.)
Extract data from a web page (e.g., hyperlinks, images, text, etc.)
Remove redirects from (e.g. Google) search results, to show the actual target URL
Submit the current page to a blogging service such as Posterous, link-shortening service such as bit.ly, or bookmarking service such as Delicious
Query a search engine or online encyclopedia with highlighted text or by a dialog box
Submit the current page to a link validation service or translation service
Set commonly chosen configuration options when the page itself provides no way to do this
Control HTML5 audio and video playback parameters such as speed, position, toggling looping, and showing/hiding playback controls, the first of which can be adjusted beyond HTML5 players' typical range setting.
Installation
"Installing" a bookmarklet allows you to quickly access and run JavaScript programs with a single click from your browser's bookmarks bar. Follow these detailed steps to install a bookmarklet:
Method 1: Creating a New Bookmark
Open Your Browser: Launch the browser where you want to add the bookmarklet.
Add a New Bookmark:
Navigate to the bookmarks manager. In most browsers, this can be accessed by pressing Ctrl+Shift+O or by selecting 'Bookmarks' from the browser menu and then choosing 'Bookmark manager'.
Right-click in the bookmarks bar or the folder where you want to add the bookmarklet and select 'Add new bookmark' or 'Add page'.
Configure the Bookmark:
In the 'Name' field, enter a descriptive name for your bookmarklet to help you identify its function.
In the 'URL' field, paste the JavaScript code provided for the bookmarklet. Ensure that it starts with javascript: followed by the code snippet.
Save the Bookmark: Click 'Save' or 'Done' to add the bookmarklet to your bookmarks bar or folder.
Method 2: Dragging and Dropping
Locate the Bookmarklet Link: Find the bookmarklet link provided on a webpage. This link will typically appear as a clickable button or link labeled with the function of the bookmarklet.
Drag the Bookmarklet to Your Bookmarks Bar:
Click and hold the bookmarklet link.
Drag it directly onto your bookmarks bar. Some browsers might show a placeholder or highlight where the bookmarklet will be placed.
Release the mouse button to drop the bookmarklet into place.
Confirmation: The bookmarklet should now appear on your bookmarks bar, ready for use.
Running the Bookmarklet
To use the bookmarklet, simply click on its icon or name in your bookmarks bar. The JavaScript code will execute immediately on the current webpage you are viewing. Make sure the webpage is fully loaded before using the bookmarklet for optimal performance.
Tips
Security Warning: Be cautious about adding bookmarklets from untrusted sources as they run JavaScript code that could potentially affect your browsing security or privacy.
Compatibility: While most modern browsers support bookmarklets, the functionality may vary. Check your browser’s documentation for any specific instructions or limitations.
Example
This example bookmarklet performs a Wikipedia search on any highlighted text in the web browser window. In normal use, the following JavaScript code would be installed to a bookmark in a browser bookmarks toolbar. From then on, after selecting any text, clicking the bookmarklet performs the search.
javascript:(function() {
function se(d) {
return d.selection ? d.selection.createRange().text : d.getSelection()
}
var s = se(document);
for (var i=0; i<frames.length && (s==null || s==''); i++) s = se(frames[i].document);
if (!s || s=='') s = prompt('Enter%20search%20terms%20for%20Wikipedia','');
open('https://en.wikipedia.org' + (s ? '/w/index.php?title=Special:Search&search=' + encodeURIComponent(s) : '')).focus();
})();
Bookmarklets can modify the location, e.g. to save a web page to the Wayback Machine,
javascript:location.href='https://web.archive.org/save/'+document.location.href;
Open a new web browser window or tab, e.g. to show the source of a web resource if the web browser supports the view-source URI scheme,
javascript:void(window.open('view-source:'+location));
Show info related to the current URL, e.g.,
javascript:alert('\tdocument.URL\n'+document.URL+'\n\tdocument.lastModified\n'+document.lastModified+'\n\tlocation\n'+location);
References
External links
JavaScript
Web development | Bookmarklet | Engineering | 1,842 |
622,942 | https://en.wikipedia.org/wiki/Pergolide | Pergolide, sold under the brand names Permax and Prascend (veterinary) among others, is an ergoline-based dopamine receptor agonist used in some countries for the treatment of Parkinson's disease. Parkinson's disease is associated with reduced dopamine synthesis in the substantia nigra of the brain. Pergolide acts on many of the same receptors as dopamine to increase receptor activity.
It was patented in 1978 and approved for medical use in 1989. In 2007, pergolide was withdrawn from the U.S. market for human use after several published studies revealed a link between the drug and increased rates of valvular heart disease. However, a veterinary form of pergolide, marketed under the trade name Prascend, is permitted for the treatment of pituitary pars intermedia dysfunction (PPID) also known as equine Cushing's syndrome (ECS) in horses.
Medical uses
Pergolide is no longer available for human use in the United States; however, it is still used in various other countries to treat conditions including Parkinson's disease, hyperprolactinemia, and restless leg syndrome.
Pergolide is available for veterinary use. Under the trade name Prascend, manufactured by Boehringer Ingelheim, it is commonly used for the treatment of pituitary hyperplasia at the pars intermedia or Equine Cushing's Syndrome (ECS) in horses.
Pharmacology
Pharmacodynamics
Pergolide acts as an agonist of dopamine D2 and D1 and serotonin 5-HT1A, 5-HT1B, 5-HT2A, 5-HT2B, and 5-HT2C receptors. It may possess agonist activity at other dopamine receptor subtypes as well, similar to cabergoline. Although pergolide is more potent as an agonist of the D2 receptor, it has high D1 receptor affinity and is one of the most potent D1 receptor agonists of the dopamine receptor agonists that are clinically available. The agonist activity of pergolide at the D1 receptor somewhat alters its clinical and side effect profile in the treatment of Parkinson's disease. Pergolide has been said to be hallucinogenic due to activation of 5-HT2A receptors. However, other sources have stated that the drug is non-hallucinogenic. It has been associated with cardiac valvulopathy due to activation of 5-HT2B receptors.
Side effects
The drug is in decreasing use, as it was reported in 2003 to be associated with a form of heart disease called cardiac fibrosis. In 2007, the United States Food and Drug Administration announced a voluntary withdrawal of the drug by manufacturers due to the possibility of heart valve damage. Pergolide is not currently available in the United States for human use. This problem is thought to be due to pergolide's action at the 5-HT2B serotonin receptors of cardiac myocytes, causing proliferative valve disease by the same mechanism as ergotamine, methysergide, fenfluramine, and other serotonin 5-HT2B agonists, including serotonin itself when elevated in the blood in carcinoid syndrome. Pergolide can rarely cause Raynaud's phenomenon. Among similar antiparkinsonian drugs, cabergoline, but not lisuride, exhibits this same type of serotonin receptor binding. In January 2007, cabergoline (Dostinex) was also reported to be associated with proliferative valvular heart damage. In March 2007, pergolide was withdrawn from the U.S. market for human use due to serious valvular damage that was shown in two independent studies.
Pergolide has also been shown to impair associative learning.
Addictive behaviors
At least one British pergolide user has attracted media attention with claims that the drug caused him to develop a gambling addiction. In June 2010, it was reported that more than 100 Australian users of the drug were suing the manufacturer over both gambling and sex addiction problems they claim are the result of the drug's side effects.
Society and culture
Brand names
Brand names of pergolide include Permax and Prascend (veterinary), among others.
Research
Pergolide has been studied in the treatment of social anxiety disorder in one small study but was found to be ineffective.
References
5-HT2B agonists
D2-receptor agonists
D3 receptor agonists
D4 receptor agonists
Dopamine receptor modulators
Equine medications
Ergolines
Non-hallucinogenic 5-HT2A receptor agonists
Prolactin inhibitors
Thioethers
Withdrawn drugs
Cardiotoxins | Pergolide | Chemistry | 1,030 |
77,926 | https://en.wikipedia.org/wiki/DECnet | DECnet is a suite of network protocols created by Digital Equipment Corporation. Originally released in 1975 in order to connect two PDP-11 minicomputers, it evolved into one of the first peer-to-peer network architectures, thus transforming DEC into a networking powerhouse in the 1980s. Initially built with three layers, it later (1982) evolved into a seven-layer OSI-compliant networking protocol.
DECnet was built into DEC's flagship operating system OpenVMS from its inception. Digital later ported it to Ultrix and OSF/1 (later Tru64), as well as to Apple Macintosh and IBM PC systems running variants of DOS, OS/2 and Microsoft Windows, under the name PATHWORKS, allowing these systems to connect to DECnet networks of VAX machines as terminal nodes.
While the DECnet protocols were designed entirely by Digital Equipment Corporation, DECnet Phase II (and later) were open standards with published specifications, and several implementations were developed outside DEC, including ones for FreeBSD and Linux. DECnet code in the Linux kernel was marked as orphaned on February 18, 2010 and removed August 22, 2022.
Evolution
DECnet refers to a specific set of hardware and software networking products which implement the DIGITAL Network Architecture (DNA). The DIGITAL Network Architecture has a set of documents which define the network architecture in general, state the specifications for each layer of the architecture, and describe the protocols which operate within each layer. Although network protocol analyzer tools tend to categorize all protocols from DIGITAL as "DECnet", strictly speaking, non-routed DIGITAL protocols such as LAT, SCS, AMDS, LAST/LAD are not DECnet protocols and are not part of the DIGITAL Network Architecture.
To trace the evolution of DECnet is to trace the development of DNA. The beginnings of DNA were in the early 1970s. DIGITAL published its first DNA specification at about the same time that IBM announced its Systems Network Architecture (SNA). Since that time, development of DNA has evolved through the following phases:
1970–1980
Phase I (1974)
Support limited to two PDP-11s running the RSX-11 operating system, or a small number of PDP-8s running the RTS-8 operating system, with communication over point-to-point (DDCMP) links between nodes.
Phase II (1975)
Support for networks of up to 32 nodes with multiple, different implementations which could inter-operate with each other. Implementations expanded to include RSTS, TOPS-10, TOPS-20 and VAX/VMS with communications between processors still limited to point-to-point links only. Introduction of downline loading (MOP), and file transfer using File Access Listener (FAL), remote file access using Data Access Protocol (DAP), task-to-task programming interfaces and network management features.
Phase III (1980)
Support for networks of up to 255 nodes with 8-bit addresses, over point-to-point and multi-drop links. Introduction of adaptive routing capability, record access, a network management architecture, and gateways to other types of networks including IBM's SNA and CCITT Recommendation X.25.
1981–1986
Phase IV and Phase IV+ (1982).
Phase IV was released initially for RSX-11 and VMS systems; later, TOPS-20, TOPS-10, ULTRIX, VAXELN, and RSTS/E gained support. Support for networks of up to 64,449 nodes (63 areas of 1023 nodes) with 16-bit addresses, datalink capabilities expanded beyond DDCMP to include Ethernet local area network support as the datalink of choice, expanded adaptive routing capability to include hierarchical routing (areas, level 1 and level 2 routers), VMScluster support (cluster alias) and host services (CTERM). CTERM allowed a user on one computer to log into another computer remotely, performing the same function that Telnet does in the TCP/IP protocol stack. Digital also released a product called the PATHWORKS client, more commonly known as the PATHWORKS 32 client, that implemented much of DECnet Phase IV for DOS and for 16- and 32-bit Microsoft Windows platforms (all the way through to Windows Server 2003).
Phase IV implemented an eight-layer architecture, similar to the seven-layer OSI model especially at the lower levels. Since the OSI standards were not yet fully developed at the time, many of the Phase IV protocols remained proprietary.
The Ethernet implementation was unusual in that the software changed the physical address of the Ethernet interface on the network to AA-00-04-00-xx-yy where xx-yy reflected the DECnet network address of the host. This allowed ARP-less LAN operation because the LAN address could be deduced from the DECnet address. This precluded connecting two NICs from the same DECnet node onto the same LAN segment, however.
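One common description of that mapping, sketched below for illustration only, forms the 16-bit Phase IV address as area × 1024 + node and appends it to the AA-00-04-00 prefix with the low-order byte first; that byte layout is the conventional account rather than something stated in this article.
// Illustrative sketch (assumes the conventional area*1024+node, low-byte-first layout).
function decnetMac(area, node) {
  const addr = area * 1024 + node;          // 16-bit Phase IV address (area 1–63, node 1–1023)
  const lo = addr & 0xff;                   // low-order byte appears first in the MAC address
  const hi = (addr >> 8) & 0xff;
  const hex = b => b.toString(16).toUpperCase().padStart(2, '0');
  return 'AA-00-04-00-' + hex(lo) + '-' + hex(hi);
}
console.log(decnetMac(1, 1));     // AA-00-04-00-01-04 for node 1.1
console.log(decnetMac(63, 1023)); // AA-00-04-00-FF-FF for node 63.1023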
The initial implementations released were for VAX/VMS and RSX-11; this later expanded to virtually every operating system DIGITAL ever shipped, with the notable exception of RT-11. DECnet stacks are found on Linux, SunOS and other platforms, and Cisco and other network vendors offer products that can cooperate with and operate within DECnet networks. Full DECnet Phase IV specifications are available.
At the same time that DECnet Phase IV was released, the company also released a proprietary protocol called LAT for serial terminal access via Terminal servers. LAT shared the OSI physical and datalink layers with DECnet and LAT terminal servers used MOP for the server image download and related bootstrap processing.
Enhancements made to DECnet Phase IV eventually became known as DECnet Phase IV+, although systems running this protocol remained completely interoperable with DECnet Phase IV systems.
1987 & beyond
Phase V and Phase V+ (1987).
Support for very large (architecturally unlimited) networks, a new network management model, local or distributed name service, and improved performance over Phase IV. The move from a proprietary network to Open Systems Interconnection (OSI) through the integration of ISO standards provided multi-vendor connectivity and compatibility with DNA Phase IV; the last two features resulted in a hybrid network architecture (DNA and OSI) with separate "towers" sharing an integrated transport layer. Transparent transport-level links to TCP/IP were added via the IETF RFC 1006 (OSI over IP) and RFC 1859 (NSP over IP) standards.
It was later renamed DECnet/OSI to emphasize its OSI interconnectability, and subsequently DECnet-Plus as TCP/IP protocols were incorporated.
Notable installations
DEC Easynet
DEC's internal corporate network was a DECnet network called Easynet, which had evolved from DEC's Engineering Net (E-NET). It included over 2,000 nodes as of 1984, 15,000 nodes (in 39 countries) as of 1987, and 54,000 nodes as of 1990.
The DECnet Internet
DECnet was used at various scientific research centers which linked their networks to form an international network called the DECnet Internet. This included the U.S. Space Physics Analysis Network (US-SPAN), the European Space Physics Analysis Network (E-SPAN), Energy Sciences Network, and other research and education networks. The network consisted of over 17,000 nodes as of 1989. Routing between networks with different address spaces involved the use of either "poor man's routing" (PMR) or address translation gateways. In December 1988, VAX/VMS hosts on the DECnet Internet were attacked by the Father Christmas worm.
CCNET
CCNET (Computer Center Network) was a DECnet network that connected the campuses of various universities in the eastern regions of the United States during the 1980s. A key benefit was the sharing of systems software developed by the operations staff at the various sites, all of which were using a variety of DEC computers. As of March 1983, it included Columbia University, Carnegie Mellon University, and Case Western Reserve University. By May 1986, New York University, Stevens Institute of Technology, Vassar College and Oberlin College had been added. Several other universities joined later.
Hobbyist DECnet networks
Hobbyist DECnet networks have been in use during the 21st century. These include:
HECnet
Italian Retro DECnet
See also
Protocol Wars
References
General references
Carl Malamud, Analyzing DECnet/OSI Phase V. Van Nostrand Reinhold, 1991.
James Martin, Joe Leben, DECnet Phase V: An OSI Implementation. Digital Press, 1992.
DECnet-Plus manuals for OpenVMS are available at http://www.hp.com/go/openvms/doc/
DECnet Phase IV OpenVMS manuals for DECnet Phase IV; these Phase IV manuals are archived on OpenVMS Freeware V5.0 distribution, at http://www.hp.com/go/openvms/freeware and other sites.
DECnet Phase IV architecture manuals (including DDCMP, MOP, NICE, NSP, DAP, CTERM, routing); at https://web.archive.org/web/20140221225835/http://h71000.www7.hp.com/wizard/decnet/ (the originals are mirrored at DECnet for Linux).
Cisco documentation of DECnet, at http://docwiki.cisco.com/wiki/DECnet
Computer-related introductions in 1975
Products and services discontinued in 2022
2022 disestablishments in the United States
History of computer networks
Network protocols
net
OpenVMS | DECnet | Technology | 1,996 |
65,799,363 | https://en.wikipedia.org/wiki/Valentin%20Goranko | Valentin Feodorov Goranko (born 22 September 1959 in Sofia, Bulgaria) is a Bulgarian-Swedish logician, Professor of Logic and Theoretical Philosophy at the Department of Philosophy, Stockholm University.
He is currently the President of the Division of Logic, Methodology and Philosophy of Science and Technology (DLMPST) of the International Union of History and Philosophy of Science and Technology, under the International Science Council (ISC).
Education and academic career
Goranko studied mathematics (M.Sc., 1984) and obtained a Ph.D. in Mathematical Logic at the Faculty of Mathematics and Informatics of Sofia University "St. Kliment Ohridski" in 1988. He has held academic positions at universities in Bulgaria (until 1992), South Africa (1992–2009), Denmark (2009–2014) and Sweden (since 2014), joining Stockholm University in 2014, and has taught a wide variety of courses in Mathematics, Computer Science, and Logic.
Research fields
Goranko has a broad range of research interests in the theory and applications of Logic to artificial intelligence, multi-agent systems, philosophy, computer science, and game theory, where he has published 4 books and over 140 research papers and chapters in handbooks and other research collections.
Professional service
President (2024–2027) of the Division of Logic, Methodology and Philosophy of Science and Technology (DLMPST) of the International Union of History and Philosophy of Science and Technology (IUHPST)
President (since 2018) of the Scandinavian Logic Society
Past president (2016-2020) of the Association for Logic, Language and Information (FoLLI)
Editor-in-chief (Logic) of the FoLLI Publications series on Logic, Language and Information, a sub-series of Springer LNCS.
Executive member of the Board of the European Association for Computer Science Logic EACSL
Editor-in-chief of the journal Logics
Associate Editor of the ACM Transactions on Computational Logic and member of the editorial boards of several other scientific journals.
Published books
2015 Logic and Discrete Mathematics: A Concise Introduction
2016 Temporal Logics in Computer Science
2016 Logic as a Tool: A Guide to Formal Logical Reasoning
2023 Temporal logics
References
1959 births
Bulgarian logicians
Logicians
Mathematical logicians
Living people | Valentin Goranko | Mathematics | 459 |
45,031,900 | https://en.wikipedia.org/wiki/Denis%20Peri%C5%A1a | Denis Periša (born July 23, 1983) is a political activist, whistleblower and computer hacker from Šibenik, Croatia. He was criminally charged and convicted in September 1999 for hacking the e-mail of politician Veselin Pejnović and planting a backdoor in his network, and was forbidden from using any form of computer system or the internet. He founded the computer security website Jezgra.org in 1997. In 2005, he founded ŠI-WIFI wireless, an organization that was formed for his town.
Activism
During the COVID-19 pandemic, as part of a group, Periša made 3D-printed face shields for the local hospitals in Drniš and Knin.
Law and order
Beginning in 2017, Periša worked on cases for the local police and DORH (the state attorney), helping to identify and prosecute online bullies and people spreading hate or threatening physical violence against others. He spoke about how to report such crimes on national television. Shortly after that, he was threatened and put under police protection. Later on, he searched for the fugitive Ivica Todorić and the servers Todorić used.
Political standoff
In August 2010, Periša claimed that he blew the whistle on the Social Democratic Party (SDP) for overspending the local municipal budget on a wireless network. He informed the city mayor privately, but the mayor showed no interest. Periša then publicly confronted city Mayor Ante Županović, accusing him of stealing public money.
He turned out to be right in 2013, after a local newspaper tested the existing wireless network and concluded it was not working. After the Croatian Democratic Union (HDZ) took power, Periša was appointed to rebuild and maintain the local city wireless, which he did in late 2014.
Anonymous group
After a decade of claiming to lead hacking groups, Periša claims he finally joined the Anonymous group, at first fighting against ACTA and similar acts. He encourages people to use the Linux OS for its supposedly lower power consumption. On June 4, 2015, Periša appeared in a Skype interview for the Alter EGO show, talking about the internet and anonymously representing himself as one of the internet freedom fighters.
Innovations
LoRaWAN
In January 2019, Periša built and installed a new type of sensor network for the town of Šibenik. The technology is LoRaWAN, using the 868 MHz band allocated for this part of the world. Šibenik was the first town in the region to do so.
Biohacking
On 7 October 2019, Periša implanted an xNT NFC chip in his hand and discussed it on national television and other media.
Wireless
Periša was the founder and president of the ŠI-WIFI organization from 21 July 2005 until 20 November 2017, when it was closed and a new city network was established. Periša built a free town wireless network with city funds in April 2014. The network was placed on 10 public nodes with 30 public antennas and access points.
Public appearances
In 2014, Periša began appearing at conferences, giving a guest lecture at the Split Film Festival (STFF) on the subject of "Hacking and (Privacy) Protection". He was interviewed on the same topic by H-Alter magazine in a piece titled "Table to table via smartphones", commenting on his previous work and the WikiLeaks arrests which had taken place shortly before that.
In the summer of 2014, Periša was invited to do a video presentation on the island of Prvić on the subject of hacking and creative thinking.
Periša conducted a live hack on the RTL television channel and demonstrated a man-in-the-middle (MITM) attack on 14 May 2019.
In November 2019, Periša gave a live presentation and lecture at "Potrošači Digitalnog Doba" ("Consumers of the Digital Age") in Split, Croatia, on the subject of personal protection and internet news, hacks and biohacking.
References
Hackers
Living people
1983 births
Ransomware | Denis Periša | Technology | 817 |
34,982,053 | https://en.wikipedia.org/wiki/Mewa%20Singh | Mewa Singh (born 11 April 1951) is an Indian primatologist, ethologist, and conservation biologist. He was a professor of ecology and animal behavior in the Biopsychology Department of the University of Mysore in Mysore, Karnataka, and is currently a Life-Long Distinguished Professor at the University of Mysore. Singh has a bachelor's degree in English and a master's degree and PhD in Psychology, but was never formally trained in the biological or conservation sciences. Nevertheless, he is widely respected for coordinating courses in Evolution, Genetics, Animal Behavior, Conservation Biology and Statistics, not only in his department at the University of Mysore but at academic schools, conferences and faculty refresher courses throughout the country.
A new night frog, Nyctibatrachus mewasinghi, endemic to the Western Ghats, has been named after him. It is generally referred to as Mewa Singh's night frog.
Singh's research centers on primate social behavior, including conflict resolution, cooperation, inequity aversion, food-sharing, primate bereavement, and related topics. He is the author of the book Primate Societies and co-author of Macaque Societies: A Model for the Study of Social Organization. He has published more than 200 research articles on several animal species. Singh also studies the viability of primate populations and is frequently quoted in the media as an expert in this area.
He is a fellow of all three Science Academies of India: Indian Academy of Sciences Bangalore; Indian National Science Academy New Delhi; National Academy of Sciences Allahabad. He is also a Ramanna Fellow, DST, a Fellow of the National Academy of Psychology, India and a Distinguished SERB Fellow (2019).
References
External links
https://web.archive.org/web/20111230065812/http://uni-mysore.ac.in/dr-mewa-singh/
Fellows of the Indian Academy of Sciences
1948 births
Living people
Academic staff of the University of Mysore
Primatologists
Ethologists
20th-century Indian biologists
Scientists from Karnataka | Mewa Singh | Biology | 422 |
40,352,203 | https://en.wikipedia.org/wiki/Picralinal | Picralinal is a bio-active alkaloid from Alstonia scholaris, a medicinal tree of West Africa.
Notes
Tryptamine alkaloids
Quinolizidine alkaloids
Methyl esters
Oxygen heterocycles
Heterocyclic compounds with 6 rings | Picralinal | Chemistry | 59 |
41,664,177 | https://en.wikipedia.org/wiki/Alcohol%20congener%20analysis | Alcohol congener analysis of blood and urine is used to provide an indication of the type of alcoholic beverage consumed. The analysis investigates compounds called congeners, other than water and ethanol, that give a beverage its distinctive appearance, aroma, and flavour. The approach has been investigated since the late 1970s, predominantly in Germany, for "hip-flask" defence (after-drinking) cases. Alcohol congener analysis can play a crucial role in cases where a driver apprehended some time after a motor vehicle incident returns a positive alcohol reading and then claims that this is due to drinking an alcoholic beverage only after the incident. The traditional methodology for congener analysis has focused solely on the detection of fermentation by-product congeners that are found in all alcoholic beverages. By comparing the ratios of a standard set of congeners, the type of alcoholic beverage ingested is proposed.
Ingredient markers
Recently, a novel accompanying-alcohol analysis was developed that targets alcoholic-beverage-specific compounds sourced from the ingredients used during the production of the beverage. These markers should ideally be unique to that beverage and not found in other beverages, food, or the environment. Beer ingestion can be confirmed from blood samples by targeting iso-alpha-acid-type compounds derived from the hops used during the brewing process. Levels of these compounds have been found in blood several hours after ingestion in controlled drinking studies using "high-" and "low-hopped" beers. This methodology presents new possibilities for accompanying-alcohol analysis as ingredient-specific markers are developed for other alcoholic beverages, e.g. wine and spirits.
References
Alcohol chemistry | Alcohol congener analysis | Chemistry | 346 |
66,242,294 | https://en.wikipedia.org/wiki/Gonzalo%20Blumel | Gonzalo Fernando Blumel Mac-Iver (born 17 May 1978) is a Chilean politician, civil environmental engineer and economist who served as Minister of the Interior and Public Security of Chile during Sebastián Piñera's second government (2018–2022).
Biography
He is the son of Juan Enrique Blumel Méndez – grandson of the German explorer Santiago Blümel and Rosa Ancán, daughter of a Mapuche lonko from Nueva Imperial – and Emma Francisca Mac-Iver Prieto, great-granddaughter of Enrique Mac Iver (politician of Radical Party of Chile's liberal faction) and of Emma Ovalle Gutiérrez, granddaughter of Chilean President José Tomás Ovalle (1830–1831).
Political career
He began his career in 2001 as a researcher at the Center for the Environment of the Department of Industrial Engineering at the Pontificia Universidad Católica de Chile. In 2005, he became planning secretary of the Municipality of Futrono. Later, he worked as a researcher for the environment program of Libertad y Desarrollo, a right-wing think tank linked to the libertarian conservative ideas then represented by the Independent Democratic Union party (known in Spanish by its acronym «UDI»), an organization promoted under the Augusto Pinochet dictatorship through its ideologist, the lawyer Jaime Guzmán (the UDI's founder).
He held three positions during President Piñera's first government, serving from March to July 2010 as chief of staff to Cristián Larroulet (UDI), Minister Secretary-General of the Presidency. Later, he was head of the Studies Division in the same ministry and, finally, in March 2013, he became the president's chief adviser.
After Piñera's first government ended, he was CEO of the Avanza Chile Foundation and also taught at PUC, his alma mater, and at Universidad del Desarrollo (UDD). In 2016, he was one of the founding members of the party Political Evolution, known in Chile by its acronym «Evópoli».
When Piñera returned to the presidency in 2018, Blumel was appointed Minister Secretary-General of the Presidency, the same ministry in which he had collaborated with Larroulet. He served there until 28 October 2019, when he was called by Piñera to take charge of the Ministry of the Interior amid the 2019–20 riots in Chile, commonly known as the Estallido Social.
References
External links
1978 births
Living people
Alumni of the University of Birmingham
Chilean civil engineers
Chilean people of German descent
Chilean people of British descent
Chilean people of Mapuche descent
Environmental engineers
Evópoli politicians
Politicians from Santiago, Chile
People from Talca
Pontifical Catholic University of Chile alumni
Ministers of the interior of Chile | Gonzalo Blumel | Chemistry,Engineering | 543 |
46,936,585 | https://en.wikipedia.org/wiki/9%20Algorithms%20That%20Changed%20the%20Future | 9 Algorithms that Changed the Future is a 2012 book by John MacCormick on algorithms. The book seeks to explain commonly encountered computer algorithms to a lay audience.
Summary
The chapters in the book each cover an algorithm.
Search engine indexing
PageRank
Public-key cryptography
Forward error correction
Pattern recognition
Data compression
Database
Digital signature
Response
One reviewer said the book is written in a clear and simple style.
A reviewer for New York Journal of Books suggested that this book would be a good complement to an introductory college-level computer science course.
Another reviewer called the book "a valuable addition to the popular computing literature".
2020 edition
The book was re-released by Princeton University Press in 2020.
References
External links
short video of the author talking about the book
2012 non-fiction books
Computer books
Princeton University Press books | 9 Algorithms That Changed the Future | Technology | 165 |
54,755,886 | https://en.wikipedia.org/wiki/Anova%20Culinary | Anova Culinary, officially known as Anova Applied Electronics, Inc., is a company headquartered in San Francisco that specializes in smart kitchen appliances designed for home cooking. Their product range includes devices such as sous-vide cookers, combination ovens, and vacuum sealers. In 2014, Anova introduced the Anova Precision Cooker, the first sous-vide cooking device with Bluetooth connectivity, followed by a Wi-Fi-enabled version in 2015.
On February 6, 2017, Anova was acquired by Electrolux, a home appliance company, for a total of US$250 million. This acquisition marked the first instance of a multimillion-dollar purchase of a smart kitchen brand.
History
Anova Culinary was established in 2013 by Stephen Svajian, Jeff Wu, and Natalie Vaughn. The company originated as a manufacturer of temperature control products for scientific laboratories across the globe.
In 2010, Wu developed an initial proof of concept for an affordable sous vide device for home use. He subsequently joined forces with Stephen Svajian, the founder and CEO of the marketing agency Get Fresh Inc., which ultimately led to the creation of Anova Culinary. Their inaugural product, the Anova One, was shipped in 2013.
Overview
In 2013, Anova Culinary unveiled the Anova One, the initial sous-vide cooker designed for home use. This device was an immersion circulator that could be attached to an existing pot, circulating and heating water for cooking.
The introduction of the Anova Precision Cooker followed in 2014, marking the first connected sous-vide device. It featured a wand-like immersion circulator that could be attached to a pot or container, ensuring precise temperature control for cooking food at specific times and temperatures. With Bluetooth capability, users could control the device through the company's app on their mobile devices.
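As a rough illustration of the kind of time-and-temperature control such an immersion circulator performs, the sketch below simulates a toy on/off (thermostat-style) heating loop. The setpoint, heating rate, heat-loss term, and duration are invented example values, and real devices typically use more sophisticated closed-loop control than this.

# Toy simulation of thermostat-style water-bath temperature control.
# All numeric values are illustrative assumptions, not device specifications.
def simulate_bath(setpoint_c=60.0, minutes=90, start_c=20.0):
    temp = start_c
    for minute in range(minutes):
        heater_on = temp < setpoint_c            # simple on/off decision
        if heater_on:
            temp += 1.5                          # assumed heating rate, deg C per minute
        temp -= 0.02 * (temp - 20.0)             # assumed heat loss toward room temperature
        if minute % 15 == 0:
            print(f"t={minute:3d} min  temp={temp:5.1f} C  heater={'on' if heater_on else 'off'}")

simulate_bath()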
The Anova Precision Cooker Nano supported Bluetooth communication across multiple devices through the Anova Culinary App, allowing users to synchronize cooking cycles for multiple dishes using Anova products. It offered the same temperature control as other models, with basic controls accessible directly on the device for those who preferred not to use the app.
The Anova Precision Oven, a countertop combi oven, was designed to function independently as a sous vide solution or in conjunction with existing Precision Cooker products. It combined steam and convection cooking methods.
The Anova Culinary App serves as the companion app for the Precision Cooker and is available for Android and iOS devices. It provides access to recipes and allows users to control temperature settings.
Acquisitions
Get Fresh, Inc.
In 2015, Anova Culinary purchased marketing agency Get Fresh, Inc. for a total of US$9.2 million.
Electrolux
On February 6, 2017, Electrolux announced its acquisition of Anova Culinary for a total of US$250 million. The acquisition involved an initial cash payment of $115 million, with an additional $135 million allocated for adjustments and the fulfillment of specific financial targets.
Under the agreement, the Anova brand remains intact and retains its own distinct identity, with Stephen Svajian continuing as CEO. Anova Culinary operates as a subsidiary within the broader Electrolux organization. Electrolux is said to be establishing a "smart home solutions center" in San Francisco to focus on developing connected products in various categories, as reported by Business Insider.
Partnerships
During the 2017 International Home and Housewares Show, Anova revealed a collaboration with Stasher, a manufacturer of silicone bags, to offer reusable and resealable bags specifically designed for sous vide cooking. Additionally, on July 25, 2017, Anova announced a partnership with Field Company, which involved a special release of a limited batch of the Field cast iron skillet before its wider availability to the general public.
References
Electrolux
American companies established in 2013
Companies based in San Francisco
Home automation companies
Kickstarter-funded products
2017 mergers and acquisitions | Anova Culinary | Technology | 801 |
14,389,994 | https://en.wikipedia.org/wiki/Natural%20landscape | A natural landscape is the original landscape that exists before it is acted upon by human culture. The natural landscape and the cultural landscape are separate parts of the landscape. However, in the 21st century, landscapes that are totally untouched by human activity no longer exist, so that reference is sometimes now made to degrees of naturalness within a landscape.
In Silent Spring (1962) Rachel Carson describes a roadside verge as it used to look: "Along the roads, laurel, viburnum and alder, great ferns and wildflowers delighted the traveler’s eye through much of the year" and then how it looks now following the use of herbicides: "The roadsides, once so attractive, were now lined with browned and withered vegetation as though swept by fire". Even though the landscape before it is sprayed is biologically degraded, and may well contain alien species, the concept of what might constitute a natural landscape can still be deduced from the context.
The phrase "natural landscape" was first used in connection with landscape painting, and landscape gardening, to contrast a formal style with a more natural one, closer to nature. Alexander von Humboldt (1769 – 1859) was to further conceptualize this into the idea of a natural landscape separate from the cultural landscape. Then in 1908 geographer Otto Schlüter developed the terms original landscape (Urlandschaft) and its opposite cultural landscape (Kulturlandschaft) in an attempt to give the science of geography a subject matter that was different from the other sciences. An early use of the actual phrase "natural landscape" by a geographer can be found in Carl O. Sauer's paper "The Morphology of Landscape" (1925).
Origins of the term
The concept of a natural landscape was first developed in connection with landscape painting, though the actual term itself was first used in relation to landscape gardening. In both cases it was used to contrast a formal style with a more natural one, that is closer to nature. Chunglin Kwa suggests "that a seventeenth-century or early-eighteenth-century person could experience natural scenery 'just like on a painting,' and so, with or without the use of the word itself, designate it as a landscape." With regard to landscape gardening John Aikin commented in 1794: "Whatever, therefore, there be of novelty in the singular scenery of an artificial garden, it is soon exhausted, whereas the infinite diversity of a natural landscape presents an inexhaustible store of new forms". Writing in 1844 the prominent American landscape gardener Andrew Jackson Downing comments: "straight canals, round or oblong pieces of water, and all the regular forms of the geometric mode ... would evidently be in violent opposition to the whole character and expression of natural landscape".
In his extensive travels in South America, Alexander von Humboldt became the first to conceptualize a natural landscape separate from the cultural landscape, though he does not actually use these terms. Andrew Jackson Downing was aware of, and sympathetic to, Humboldt's ideas, which therefore influenced American landscape gardening.
Subsequently, the geographer Otto Schlüter, in 1908, argued that by defining geography as a Landschaftskunde (landscape science) would give geography a logical subject matter shared by no other discipline. He defined two forms of landscape: the Urlandschaft (original landscape) or landscape that existed before major human induced changes and the Kulturlandschaft (cultural landscape) a landscape created by human culture. Schlüter argued that the major task of geography was to trace the changes in these two landscapes.
The term natural landscape is sometimes used as a synonym for wilderness, but for geographers natural landscape is a scientific term which refers to the biological, geological, climatological and other aspects of a landscape, not the cultural values that are implied by the word wilderness.
The natural and conservation
Matters are complicated by the fact that the words nature and natural have more than one meaning. On the one hand there is the main dictionary meaning for nature: "The phenomena of the physical world collectively, including plants, animals, the landscape, and other features and products of the earth, as opposed to humans or human creations." On the other hand, there is the growing awareness, especially since Charles Darwin, of humanity's biological affinity with nature.
The dualism of the first definition has its roots in an "ancient concept", because early people viewed "nature, or the nonhuman world […] as a divine Other, godlike in its separation from humans." In the West, Christianity's myth of the fall, that is the expulsion of humankind from the Garden of Eden, where all creation lived in harmony, into an imperfect world, has been the major influence. Cartesian dualism, from the seventeenth century on, further reinforced this dualistic thinking about nature.
With this dualism goes value judgement as to the superiority of the natural over the artificial. Modern science, however, is moving towards a holistic view of nature.
America
What is meant by natural, within the American conservation movement, has been changing over the last century and a half.
In the mid-nineteenth century Americans began to realize that the land was becoming more and more domesticated and wildlife was disappearing. This led to the creation of American National Parks and other conservation sites. Initially it was believed that all that needed to be done was to separate what was seen as natural landscape and "avoid disturbances such as logging, grazing, fire and insect outbreaks." This, and subsequent environmental policy, until recently, was influenced by ideas of the wilderness. However, this policy was not consistently applied, and in Yellowstone Park, to take one example, the existing ecology was altered, firstly by the exclusion of Native Americans and later with the virtual extermination of the wolf population.
A century later, in the mid-twentieth century, it began to be believed that the earlier policy of "protection from disturbance was inadequate to preserve park values", and that direct human intervention was necessary to restore the landscape of National Parks to its 'natural' condition. In 1963 the Leopold Report argued that "A national park should represent a vignette of primitive America". This policy change eventually led to the restoration of wolves in Yellowstone Park in the 1990s.
However, recent research in various disciplines indicates that a pristine natural or "primitive" landscape is a myth, and it is now realised that people have been changing the natural into a cultural landscape for a long while, and that there are few places untouched in some way by human influence. The earlier conservation policies are now seen as cultural interventions. The idea of what is natural and what is artificial or cultural, and how to maintain the natural elements in a landscape, has been further complicated by the discovery of global warming and how it is changing natural landscapes.
Also important is a recent reaction amongst scholars against dualistic thinking about nature and culture. Maria Kaika comments: "Nowadays, we are beginning to see nature and culture as intertwined once again – not ontologically separated anymore […]. What I used to perceive as a compartmentalized world, consisting of neatly and tightly sealed, autonomous 'space envelopes' (the home, the city, and nature) was, in fact, a messy socio-spatial continuum". And William Cronon argues against the idea of wilderness because it "involves a dualistic vision in which the human is entirely outside the natural" and affirms that "wildness (as opposed to wilderness) can be found anywhere" even "in the cracks of a Manhattan sidewalk." According to Cronon we have to "abandon the dualism that sees the tree in the garden as artificial […] and the tree in the wilderness as natural […] Both in some ultimate sense are wild." Here he bends somewhat the regular dictionary meaning of wild, to emphasise that nothing natural, even in a garden, is fully under human control.
Europe
The landscape of Europe has been considerably altered by people, and even in an area with a low population density, like the Cairngorm Mountains of Scotland, only "the high summits of the Cairngorm Mountains" consist entirely of natural elements. These high summits are of course only part of the Cairngorms, and there are no longer wolves, bears, wild boar or lynx in Scotland's wilderness. The Scots pine, in the form of the Caledonian forest, also once covered much more of the Scottish landscape than it does today.
The Swiss National Park, however, represents a more natural landscape. It was founded in 1914 and is one of the earliest national parks in Europe.
Visitors are not allowed to leave the motor road or the paths through the park, make fires, or camp. The only building within the park is Chamanna Cluozza, a mountain hut. It is also forbidden to disturb the animals or the plants, or to take home anything found in the park. Dogs are not allowed. Due to these strict rules, the Swiss National Park is the only park in the Alps that has been categorized by the IUCN as a strict nature reserve, which is the highest protection level.
History of natural landscape
No place on the Earth is unaffected by people and their culture. People are part of biodiversity, but human activity affects biodiversity, and this alters the natural landscape. Humans have altered the landscape to such an extent that few places on earth remain pristine, but once free of human influence, a landscape can return to a natural or near-natural state.
Even the remote Yukon and Alaskan wilderness, the bi-national Kluane-Wrangell-St. Elias-Glacier Bay-Tatshenshini-Alsek park system comprising Kluane, Wrangell-St Elias, Glacier Bay and Tatshenshini-Alsek parks, a UNESCO World Heritage Site, is not free from human influence, because the Kluane National Park lies within the traditional territories of the Champagne and Aishihik First Nations and Kluane First Nation who have a long history of living in this region. Through their respective Final Agreements with the Canadian Government, they have made into law their rights to harvest in this region.
Processes
Over different intervals of time, natural landscapes have been shaped into a series of landforms by factors including tectonics, erosion, weathering and vegetation.
Examples of cultural forces
Cultural forces, intentionally or unintentionally, have an influence upon the landscape. Cultural landscapes are places or artifacts created and maintained by people. Examples of cultural intrusions into a landscape are: fences, roads, parking lots, sand pits, buildings, hiking trails, management of plants, including the introduction of invasive species, extraction or removal of plants, management of animals, mining, hunting, natural landscaping, farming and forestry, and pollution. Areas that might be confused with a natural landscape include public parks, farms, orchards, artificial lakes and reservoirs, managed forests, golf courses, nature center trails, and gardens.
See also
Notes
References
External links
Developing a forest naturalness indicator for Europe
Scottish heritage: Natural Spaces
Carl O. Sauer. "The Morphology of Landscape", University of California Publications in Geography, vol. 2, No. 2, 12 October 1925, pp. 19–53 (scroll down)
Biodiversity
Biology terminology
Ecology
Environmental science
Environmental law
Evolution
Geography terminology
Habitats
Landscape
Philosophy of biology
Wilderness | Natural landscape | Biology,Environmental_science | 2,323 |
145,020 | https://en.wikipedia.org/wiki/Sintering | Sintering or frittage is the process of compacting and forming a solid mass of material by pressure or heat without melting it to the point of liquefaction. Sintering happens as part of a manufacturing process used with metals, ceramics, plastics, and other materials. The atoms/molecules in the sintered material diffuse across the boundaries of the particles, fusing the particles together and creating a solid piece.
Since the sintering temperature does not have to reach the melting point of the material, sintering is often chosen as the shaping process for materials with extremely high melting points, such as tungsten and molybdenum. The study of sintering in metallurgical powder-related processes is known as powder metallurgy.
An example of sintering can be observed when ice cubes in a glass of water adhere to each other, which is driven by the temperature difference between the water and the ice. Examples of pressure-driven sintering are the compacting of snowfall to a glacier, or the formation of a hard snowball by pressing loose snow together.
The material produced by sintering is called sinter. The word sinter comes from the Middle High German sinter, a cognate of English cinder.
General sintering
Sintering is generally considered successful when the process reduces porosity and enhances properties such as strength, electrical conductivity, translucency and thermal conductivity. In some special cases, sintering is carefully applied to enhance the strength of a material while preserving porosity (e.g. in filters or catalysts, where gas adsorption is a priority). During the sintering process, atomic diffusion drives powder surface elimination in different stages, starting at the formation of necks between powders to final elimination of small pores at the end of the process.
The driving force for densification is the change in free energy from the decrease in surface area and lowering of the surface free energy by the replacement of solid-vapor interfaces. It forms new but lower-energy solid-solid interfaces with a net decrease in total free energy. On a microscopic scale, material transfer is affected by the change in pressure and differences in free energy across the curved surface. If the size of the particle is small (and its curvature is high), these effects become very large in magnitude. The change in energy is much higher when the radius of curvature is less than a few micrometers, which is one of the main reasons why much ceramic technology is based on the use of fine-particle materials.
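The curvature effect described above is commonly summarized by the Young-Laplace expression for the excess pressure beneath a curved surface. The relation below is quoted as the standard textbook form rather than as an equation recovered from this article; here γ is the surface energy and r the radius of curvature:

\Delta P = \frac{2\gamma}{r}

For r of the order of a micrometre or less this excess pressure, and the associated change in chemical potential, becomes large, which is consistent with the statement above that ceramic technology relies on fine-particle materials.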
The ratio of bond area to particle size is a determining factor for properties such as strength and electrical conductivity. To yield the desired bond area, temperature and initial grain size are precisely controlled over the sintering process. At steady state, the particle radius and the vapor pressure are proportional to (p0)^(2/3) and to (p0)^(1/3), respectively.
The source of power for solid-state processes is the change in free or chemical potential energy between the neck and the surface of the particle. This energy creates a transfer of material through the fastest means possible; if transfer were to take place from the particle volume or the grain boundary between particles, particle count would decrease and pores would be destroyed. Pore elimination is fastest in samples with many pores of uniform size because the boundary diffusion distance is smallest. During the latter portions of the process, boundary and lattice diffusion from the boundary become important.
Control of temperature is very important to the sintering process, since grain-boundary diffusion and volume diffusion rely heavily upon temperature, particle size, particle distribution, material composition, and often other properties of the sintering environment itself.
Ceramic sintering
Sintering is part of the firing process used in the manufacture of pottery and other ceramic objects. Sintering and vitrification (which requires higher temperatures) are the two main mechanisms behind the strength and stability of ceramics. Sintered ceramic objects are made from substances such as glass, alumina, zirconia, silica, magnesia, lime, beryllium oxide, and ferric oxide. Some ceramic raw materials have a lower affinity for water and a lower plasticity index than clay, requiring organic additives in the stages before sintering.
Sintering begins when sufficient temperatures have been reached to mobilize the active elements in the ceramic material, which can start below their melting point (typically at 50–80% of their melting point), e.g. as premelting. When sufficient sintering has taken place, the ceramic body will no longer break down in water; additional sintering can reduce the porosity of the ceramic, increase the bond area between ceramic particles, and increase the material strength.
Industrial procedures to create ceramic objects via sintering of powders generally include:
mixing water, binder, deflocculant, and unfired ceramic powder to form a slurry
spray-drying the slurry
putting the spray dried powder into a mold and pressing it to form a green body (an unsintered ceramic item)
heating the green body at low temperature to burn off the binder
sintering at a high temperature to fuse the ceramic particles together.
All the characteristic temperatures associated with phase transformation, glass transitions, and melting points, occurring during a sinterisation cycle of a particular ceramic's formulation (i.e., tails and frits) can be easily obtained by observing the expansion-temperature curves during optical dilatometer thermal analysis. In fact, sinterisation is associated with a remarkable shrinkage of the material because glass phases flow once their transition temperature is reached, and start consolidating the powdery structure and considerably reducing the porosity of the material.
Sintering is performed at high temperature. Additionally, a second and/or third external force (such as pressure, electric current) could be used. A commonly used second external force is pressure. Sintering performed by only heating is generally termed "pressureless sintering", which is possible with graded metal-ceramic composites, utilising a nanoparticle sintering aid and bulk molding technology. A variant used for 3D shapes is called hot isostatic pressing.
To allow efficient stacking of product in the furnace during sintering and to prevent parts sticking together, many manufacturers separate ware using ceramic powder separator sheets. These sheets are available in various materials such as alumina, zirconia and magnesia. They are additionally categorized by fine, medium and coarse particle sizes. By matching the material and particle size to the ware being sintered, surface damage and contamination can be reduced while maximizing furnace loading.
Sintering of metallic powders
Most, if not all, metals can be sintered. This applies especially to pure metals produced in vacuum which suffer no surface contamination. Sintering under atmospheric pressure requires the use of a protective gas, quite often endothermic gas. Sintering, with subsequent reworking, can produce a great range of material properties. Changes in density, alloying, and heat treatments can alter the physical characteristics of various products. For instance, the Young's modulus En of sintered iron powders remains somewhat insensitive to sintering time, alloying, or particle size in the original powder for lower sintering temperatures, but depends upon the density of the final product:
where D is the density, E is Young's modulus and d is the maximum density of iron.
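The relation referred to above appears to have been dropped from the text. In the powder-metallurgy literature it is usually quoted as an empirical power law of roughly the form below; the exponent of 3.4 is a commonly cited value and should be read as an assumption here, not as a number recovered from this article:

E_n \approx E \left( \frac{D}{d} \right)^{3.4}

where E_n is the Young's modulus of the sintered compact and D, E and d are as defined above.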
Sintering is static when a metal powder under certain external conditions may exhibit coalescence, and yet reverts to its normal behavior when such conditions are removed. In most cases, the density of a collection of grains increases as material flows into voids, causing a decrease in overall volume. Mass movements that occur during sintering consist of the reduction of total porosity by repacking, followed by material transport due to evaporation and condensation from diffusion. In the final stages, metal atoms move along crystal boundaries to the walls of internal pores, redistributing mass from the internal bulk of the object and smoothing pore walls. Surface tension is the driving force for this movement.
A special form of sintering (which is still considered part of powder metallurgy) is liquid-state sintering in which at least one but not all elements are in a liquid state. Liquid-state sintering is required for making cemented carbide and tungsten carbide.
Sintered bronze in particular is frequently used as a material for bearings, since its porosity allows lubricants to flow through it or remain captured within it. Sintered copper may be used as a wicking structure in certain types of heat pipe construction, where the porosity allows a liquid agent to move through the porous material via capillary action. For materials that have high melting points such as molybdenum, tungsten, rhenium, tantalum, osmium and carbon, sintering is one of the few viable manufacturing processes. In these cases, very low porosity is desirable and can often be achieved.
Sintered metal powder is used to make frangible shotgun shells called breaching rounds, as used by military and SWAT teams to quickly force entry into a locked room. These shotgun shells are designed to destroy door deadbolts, locks and hinges without risking lives by ricocheting or by flying on at lethal speed through the door. They work by destroying the object they hit and then dispersing into a relatively harmless powder.
Sintered bronze and stainless steel are used as filter materials in applications requiring high temperature resistance while retaining the ability to regenerate the filter element. For example, sintered stainless steel elements are employed for filtering steam in food and pharmaceutical applications, and sintered bronze in aircraft hydraulic systems.
Sintering of powders containing precious metals such as silver and gold is used to make small jewelry items. Evaporative self-assembly of colloidal silver nanocubes into supercrystals has been shown to allow the sintering of electrical joints at temperatures lower than 200 °C.
Advantages
Particular advantages of the powder technology include:
Very high levels of purity and uniformity in starting materials
Preservation of purity, due to the simpler subsequent fabrication process (fewer steps) that it makes possible
Stabilization of the details of repetitive operations, by control of grain size during the input stages
Absence of binding contact between segregated powder particles – or "inclusions" (called stringering) – as often occurs in melting processes
No deformation needed to produce directional elongation of grains
Capability to produce materials of controlled, uniform porosity.
Capability to produce nearly net-shaped objects.
Capability to produce materials which cannot be produced by any other technology.
Capability to fabricate high-strength material like turbine blades.
After sintering the mechanical strength to handling becomes higher.
The literature contains many references on sintering dissimilar materials to produce solid/solid-phase compounds or solid/melt mixtures at the processing stage. Almost any substance can be obtained in powder form, through either chemical, mechanical or physical processes, so basically any material can be obtained through sintering. When pure elements are sintered, the leftover powder is still pure, so it can be recycled.
Disadvantages
Particular disadvantages of the powder technology include:
sintering cannot create uniform sizes
micro- and nanostructures produced before sintering are often destroyed.
Plastics sintering
Plastic materials are formed by sintering for applications that require materials of specific porosity. Sintered plastic porous components are used in filtration and to control fluid and gas flows. Sintered plastics are used in applications requiring caustic fluid separation processes such as the nibs in whiteboard markers, inhaler filters, and vents for caps and liners on packaging materials. Sintered ultra high molecular weight polyethylene materials are used as ski and snowboard base materials. The porous texture allows wax to be retained within the structure of the base material, thus providing a more durable wax coating.
Liquid phase sintering
For materials that are difficult to sinter, a process called liquid phase sintering is commonly used. Materials for which liquid phase sintering is common are Si3N4, WC, SiC, and more. Liquid phase sintering is the process of adding an additive to the powder which will melt before the matrix phase. The process of liquid phase sintering has three stages:
rearrangement – As the liquid melts capillary action will pull the liquid into pores and also cause grains to rearrange into a more favorable packing arrangement.
solution-precipitation – In areas where capillary pressures are high (particles are close together) atoms will preferentially go into solution and then precipitate in areas of lower chemical potential where particles are not close or in contact. This is called contact flattening. This densifies the system in a way similar to grain boundary diffusion in solid state sintering. Ostwald ripening will also occur where smaller particles will go into solution preferentially and precipitate on larger particles leading to densification.
final densification – densification of solid skeletal network, liquid movement from efficiently packed regions into pores.
For liquid phase sintering to be practical the major phase should be at least slightly soluble in the liquid phase and the additive should melt before any major sintering of the solid particulate network occurs, otherwise rearrangement of grains will not occur. Liquid phase sintering was successfully applied to improve grain growth of thin semiconductor layers from nanoparticle precursor films.
Electric current assisted sintering
These techniques employ electric currents to drive or enhance sintering. English engineer A. G. Bloxam registered in 1906 the first patent on sintering powders using direct current in vacuum. The primary purpose of his inventions was the industrial scale production of filaments for incandescent lamps by compacting tungsten or molybdenum particles. The applied current was particularly effective in reducing surface oxides that increased the emissivity of the filaments.
In 1913, Weintraub and Rush patented a modified sintering method which combined electric current with pressure. The benefits of this method were proved for the sintering of refractory metals as well as conductive carbide or nitride powders. The starting boron–carbon or silicon–carbon powders were placed in an electrically insulating tube and compressed by two rods which also served as electrodes for the current. The estimated sintering temperature was 2000 °C.
In the United States, sintering was first patented by Duval d'Adrian in 1922. His three-step process aimed at producing heat-resistant blocks from such oxide materials as zirconia, thoria or tantalia. The steps were: (i) molding the powder; (ii) annealing it at about 2500 °C to make it conducting; (iii) applying current-pressure sintering as in the method by Weintraub and Rush.
Sintering that uses an arc produced via a capacitance discharge to eliminate oxides before direct current heating, was patented by G. F. Taylor in 1932. This originated sintering methods employing pulsed or alternating current, eventually superimposed to a direct current. Those techniques have been developed over many decades and summarized in more than 640 patents.
Of these technologies the most well known is resistance sintering (also called hot pressing) and spark plasma sintering, while electro sinter forging is the latest advancement in this field.
Spark plasma sintering
In spark plasma sintering (SPS), external pressure and an electric field are applied simultaneously to enhance the densification of the metallic/ceramic powder compacts. However, after commercialization it was determined there is no plasma, so the proper name is spark sintering as coined by Lenel. The electric field driven densification supplements sintering with a form of hot pressing, to enable lower temperatures and taking less time than typical sintering. For a number of years, it was speculated that the existence of sparks or plasma between particles could aid sintering; however, Hulbert and coworkers systematically proved that the electric parameters used during spark plasma sintering make it (highly) unlikely. In light of this, the name "spark plasma sintering" has been rendered obsolete. Terms such as field assisted sintering technique (FAST), electric field assisted sintering (EFAS), and direct current sintering (DCS) have been implemented by the sintering community. Using a direct current (DC) pulse as the electric current, spark plasma, spark impact pressure, joule heating, and an electrical field diffusion effect would be created. By modifying the graphite die design and its assembly, it is possible to perform pressureless sintering in spark plasma sintering facility. This modified die design setup is reported to synergize the advantages of both conventional pressureless sintering and spark plasma sintering techniques.
Electro sinter forging
Electro sinter forging is an electric current assisted sintering (ECAS) technology originated from capacitor discharge sintering. It is used for the production of diamond metal matrix composites and is under evaluation for the production of hard metals, nitinol and other metals and intermetallics. It is characterized by a very low sintering time, allowing machines to sinter at the same speed as a compaction press.
Pressureless sintering
Pressureless sintering is the sintering of a powder compact (sometimes at very high temperatures, depending on the powder) without applied pressure. This avoids density variations in the final component, which occurs with more traditional hot pressing methods.
The powder compact (if a ceramic) can be created by slip casting, injection moulding, and cold isostatic pressing. After presintering, the final green compact can be machined to its final shape before being sintered.
Three different heating schedules can be performed with pressureless sintering: constant-rate of heating (CRH), rate-controlled sintering (RCS), and two-step sintering (TSS). The microstructure and grain size of the ceramics may vary depending on the material and method used.
Constant-rate of heating (CRH), also known as temperature-controlled sintering, consists of heating the green compact at a constant rate up to the sintering temperature. Experiments with zirconia have been performed to optimize the sintering temperature and sintering rate for the CRH method. Results showed that the grain sizes were identical when the samples were sintered to the same density, proving that grain size is a function of specimen density rather than of the CRH temperature mode.
In rate-controlled sintering (RCS), the densification rate in the open-porosity phase is lower than in the CRH method. By definition, the relative density, ρ_rel, in the open-porosity phase is lower than 90%. Although this should prevent separation of pores from grain boundaries, it has been proven statistically that RCS did not produce smaller grain sizes than CRH for alumina, zirconia, and ceria samples.
Two-step sintering (TSS) uses two different sintering temperatures. The first sintering temperature should guarantee a relative density higher than 75% of theoretical sample density. This will remove supercritical pores from the body. The sample will then be cooled down and held at the second sintering temperature until densification is completed. Grains of cubic zirconia and cubic strontium titanate were significantly refined by TSS compared to CRH. However, the grain size changes in other ceramic materials, like tetragonal zirconia and hexagonal alumina, were not statistically significant.
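A schematic way to compare the CRH and TSS schedules described above is to tabulate the target furnace temperature over time. The sketch below builds such profiles; every temperature, ramp rate, and hold time in it is an invented example value rather than a parameter from any particular study.

# Build simple time-temperature profiles (minute, deg C) for constant-rate
# heating (CRH) and two-step sintering (TSS).  All numbers are illustrative.
def crh_profile(ramp_c_per_min=10, target_c=1500, hold_min=120, start_c=25):
    profile, t, temp = [], 0, start_c
    while temp < target_c:
        profile.append((t, temp))
        temp += ramp_c_per_min
        t += 1
    profile.extend((t + i, target_c) for i in range(hold_min))  # isothermal hold
    return profile

def tss_profile(t1_c=1500, t2_c=1350, ramp_c_per_min=10, hold2_min=600, start_c=25):
    profile, t, temp = [], 0, start_c
    while temp < t1_c:                       # first step: brief excursion to T1
        profile.append((t, temp))
        temp += ramp_c_per_min
        t += 1
    profile.append((t, t1_c))
    t += 1
    profile.extend((t + i, t2_c) for i in range(hold2_min))     # long hold at lower T2
    return profile

print(len(crh_profile()), len(tss_profile()))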
Microwave sintering
In microwave sintering, heat is sometimes generated internally within the material, rather than via surface radiative heat transfer from an external heat source. Some materials fail to couple and others exhibit run-away behavior, so it is restricted in usefulness. A benefit of microwave sintering is faster heating for small loads, meaning less time is needed to reach the sintering temperature, less heating energy is required and there are improvements in the product properties.
A failing of microwave sintering is that it generally sinters only one compact at a time, so overall productivity turns out to be poor except for situations involving one of a kind sintering, such as for artists. As microwaves can only penetrate a short distance in materials with a high conductivity and a high permeability, microwave sintering requires the sample to be delivered in powders with a particle size around the penetration depth of microwaves in the particular material. The sintering process and side-reactions run several times faster during microwave sintering at the same temperature, which results in different properties for the sintered product.
This technique is acknowledged to be quite effective in maintaining fine grains/nano sized grains in sintered bioceramics. Magnesium phosphates and calcium phosphates are the examples which have been processed through the microwave sintering technique.
Densification, vitrification and grain growth
Sintering in practice is the control of both densification and grain growth. Densification is the act of reducing porosity in a sample, thereby making it denser. Grain growth is the process of grain boundary motion and Ostwald ripening to increase the average grain size. Many properties (mechanical strength, electrical breakdown strength, etc.) benefit from both a high relative density and a small grain size. Therefore, being able to control these properties during processing is of high technical importance. Since densification of powders requires high temperatures, grain growth naturally occurs during sintering. Reduction of this process is key for many engineering ceramics. Under certain conditions of chemistry and orientation, some grains may grow rapidly at the expense of their neighbours during sintering. This phenomenon, known as abnormal grain growth (AGG), results in a bimodal grain size distribution that has consequences for the mechanical, dielectric and thermal performance of the sintered material.
For densification to occur at a quick pace it is essential to have (1) an amount of liquid phase that is large in size, (2) a near complete solubility of the solid in the liquid, and (3) wetting of the solid by the liquid. The power behind the densification is derived from the capillary pressure of the liquid phase located between the fine solid particles. When the liquid phase wets the solid particles, each space between the particles becomes a capillary in which a substantial capillary pressure is developed. For submicrometre particle sizes, capillaries with diameters in the range of 0.1 to 1 micrometres develop pressures in the range of to for silicate liquids and in the range of to for a metal such as liquid cobalt.
Densification requires constant capillary pressure where just solution-precipitation material transfer would not produce densification. For further densification, additional particle movement while the particle undergoes grain-growth and grain-shape changes occurs. Shrinkage would result when the liquid slips between particles and increases pressure at points of contact causing the material to move away from the contact areas, forcing particle centers to draw near each other.
The sintering of liquid-phase materials involves a fine-grained solid phase to create the needed capillary pressures proportional to its diameter, and the liquid concentration must also create the required capillary pressure within range, else the process ceases. The vitrification rate is dependent upon the pore size, the viscosity and amount of liquid phase present (which together determine the viscosity of the overall composition), and the surface tension. Temperature dependence controls densification because at higher temperatures viscosity decreases and liquid content increases. Therefore, changes to the composition and processing will affect the vitrification process.
Sintering mechanisms
Sintering occurs by diffusion of atoms through the microstructure. This diffusion is caused by a gradient of chemical potential – atoms move from an area of higher chemical potential to an area of lower chemical potential. The different paths the atoms take to get from one spot to another are the "sintering mechanisms" or "matter transport mechanisms".
In solid state sintering, the six common mechanisms are:
surface diffusion – diffusion of atoms along the surface of a particle
vapor transport – evaporation of atoms which condense on a different surface
lattice diffusion from surface – atoms from surface diffuse through lattice
lattice diffusion from grain boundary – atom from grain boundary diffuses through lattice
grain boundary diffusion – atoms diffuse along grain boundary
plastic deformation – dislocation motion causes flow of matter.
Mechanisms 1–3 above are non-densifying (i.e. do not cause the pores and the overall ceramic body to shrink) but can still increase the area of the bond or "neck" between grains; they take atoms from the surface and rearrange them onto another surface or part of the same surface. Mechanisms 4–6 are densifying – atoms are moved from the bulk material or the grain boundaries to the surface of pores, thereby eliminating porosity and increasing the density of the sample.
Grain growth
A grain boundary (GB) is the transition area or interface between adjacent crystallites (or grains) of the same chemical and lattice composition, not to be confused with a phase boundary. The adjacent grains do not have the same orientation of the lattice, thus giving the atoms in GB shifted positions relative to the lattice in the crystals. Due to the shifted positioning of the atoms in the GB they have a higher energy state when compared with the atoms in the crystal lattice of the grains. It is this imperfection that makes it possible to selectively etch the GBs when one wants the microstructure to be visible.
Striving to minimize its energy leads to the coarsening of the microstructure to reach a metastable state within the specimen. This involves minimizing its GB area and changing its topological structure to minimize its energy. This grain growth can either be normal or abnormal, a normal grain growth is characterized by the uniform growth and size of all the grains in the specimen. Abnormal grain growth is when a few grains grow much larger than the remaining majority.
Grain boundary energy/tension
The atoms in the GB are normally in a higher energy state than their equivalent in the bulk material. This is due to their more stretched bonds, which gives rise to a GB tension, σGB. This extra energy that the atoms possess is called the grain boundary energy, γGB. The grain will want to minimize this extra energy, thus striving to make the grain boundary area smaller and this change requires energy.
"Or, in other words, a force has to be applied, in the plane of the grain boundary and acting along a line in the grain-boundary area, in order to extend the grain-boundary area in the direction of the force. The force per unit length, i.e. tension/stress, along the line mentioned is σGB. On the basis of this reasoning it would follow that:
with dA as the increase of grain-boundary area per unit length along the line in the grain-boundary area considered."[pg 478]
The GB tension can also be thought of as the attractive forces between the atoms at the surface and the tension between these atoms is due to the fact that there is a larger interatomic distance between them at the surface compared to the bulk (i.e. surface tension). When the surface area becomes bigger the bonds stretch more and the GB tension increases. To counteract this increase in tension there must be a transport of atoms to the surface keeping the GB tension constant. This diffusion of atoms accounts for the constant surface tension in liquids. Then the argument,
holds true. For solids, on the other hand, diffusion of atoms to the surface might not be sufficient and the surface tension can vary with an increase in surface area.
For a solid, one can derive an expression for the change in Gibbs free energy, dG, upon the change of GB area, dA. dG is given by
which gives
γGB is normally expressed in units of J/m2, while σGB is normally expressed in units of N/m, since they are different physical properties.
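The expressions referred to above appear to have been dropped from the text. The standard thermodynamic forms consistent with this discussion are given below as an assumption about what was intended, not as equations recovered from the article:

dG = d(\gamma_{GB} A) = \gamma_{GB}\, dA + A\, d\gamma_{GB}

\sigma_{GB} = \frac{dG}{dA} = \gamma_{GB} + A \frac{d\gamma_{GB}}{dA}

For a liquid, d\gamma_{GB}/dA = 0, so the tension equals the energy (\sigma_{GB} = \gamma_{GB}); for a solid the derivative term need not vanish, which is the point made above about the surface tension varying with an increase in surface area.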
Mechanical equilibrium
In a two-dimensional isotropic material the grain boundary tension would be the same for the grains. This would give an angle of 120° at GB junctions where three grains meet. This would give the structure a hexagonal pattern which is the metastable state (or mechanical equilibrium) of the 2D specimen. A consequence of this is that, to keep trying to be as close to the equilibrium as possible, grains with fewer sides than six will bend the GB to try to keep the 120° angle between each other. This results in a curved boundary with its curvature towards itself. A grain with six sides will, as mentioned, have straight boundaries, while a grain with more than six sides will have curved boundaries with its curvature away from itself. A grain with six boundaries (i.e. hexagonal structure) is in a metastable state (i.e. local equilibrium) within the 2D structure. In three dimensions structural details are similar but much more complex and the metastable structure for a grain is a non-regular 14-sided polyhedron with doubly curved faces. In practice all arrays of grains are always unstable and thus always grow until prevented by a counterforce.
Grains strive to minimize their energy, and a curved boundary has a higher energy than a straight boundary. This means that the grain boundary will migrate towards the curvature. The consequence of this is that grains with less than 6 sides will decrease in size while grains with more than 6 sides will increase in size.
Grain growth occurs due to the motion of atoms across a grain boundary. Convex surfaces have a higher chemical potential than concave surfaces; therefore grain boundaries will move toward their center of curvature. Smaller particles tend to have a smaller radius of curvature (and hence more sharply curved, higher-energy surfaces), and this results in smaller grains losing atoms to larger grains and shrinking. This is a process called Ostwald ripening. Large grains grow at the expense of small grains.
Grain growth in a simple model is found to follow:
Here G is final average grain size, G0 is the initial average grain size, t is time, m is a factor between 2 and 4, and K is a factor given by:
Here Q is the molar activation energy, R is the ideal gas constant, T is absolute temperature, and K0 is a material dependent factor. In most materials the sintered grain size is proportional to the inverse square root of the fractional porosity, implying that pores are the most effective retardant for grain growth during sintering.
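The two relations referred to in this passage are missing from the text. The standard grain-growth kinetics expressions consistent with the variable definitions given here are quoted below; they are the usual textbook forms rather than equations recovered from this article:

G^m - G_0^m = K t

K = K_0 \exp\left(-\frac{Q}{RT}\right)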
Reducing grain growth
Solute ions
If a dopant is added to the material (example: Nd in BaTiO3) the impurity will tend to stick to the grain boundaries. As the grain boundary tries to move (as atoms jump from the convex to concave surface) the change in concentration of the dopant at the grain boundary will impose a drag on the boundary. The original concentration of solute around the grain boundary will be asymmetrical in most cases. As the grain boundary tries to move, the concentration on the side opposite of motion will have a higher concentration and therefore have a higher chemical potential. This increased chemical potential will act as a backforce to the original chemical potential gradient that is the reason for grain boundary movement. This decrease in net chemical potential will decrease the grain boundary velocity and therefore grain growth.
Fine second phase particles
If particles of a second phase which are insoluble in the matrix phase are added to the powder in the form of a much finer powder, then this will decrease grain boundary movement. When the grain boundary tries to move past the inclusion diffusion of atoms from one grain to the other, it will be hindered by the insoluble particle. This is because it is beneficial for particles to reside in the grain boundaries and they exert a force in opposite direction compared to grain boundary migration. This effect is called the Zener effect after the man who estimated this drag force to
where r is the radius of the particle and λ the interfacial energy of the boundary. If there are N particles per unit volume, their volume fraction f is
assuming they are randomly distributed. A boundary of unit area will intersect all particles within a volume of 2r which is 2Nr particles. So the number of particles n intersecting a unit area of grain boundary is:
Now, assuming that the grains grow only due to the influence of curvature, the driving force of growth is given by an expression in which (for a homogeneous grain structure) R approximates the mean diameter of the grains. With this, the critical diameter that has to be reached before the grains cease to grow is:
This can be reduced to
so the critical diameter of the grains is dependent on the size and volume fraction of the particles at the grain boundaries.
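The expressions referred to in this derivation are missing from the text. The standard Zener-pinning relations consistent with the definitions above are usually written as follows; numerical prefactors vary slightly between derivations, so these should be read as common textbook forms rather than as the article's original equations:

f = \frac{4}{3}\pi r^3 N

n = 2 r N = \frac{3f}{2\pi r^2}

\text{driving force} \approx \frac{2\lambda}{R}, \qquad \text{pinning pressure} \approx \frac{3 f \lambda}{2 r}

D_{crit} \approx \frac{4r}{3f}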
It has also been shown that small bubbles or cavities can act as inclusions.
More complicated interactions which slow grain boundary motion include interactions of the surface energies of the two grains and the inclusion and are discussed in detail by C.S. Smith.
Sintering of catalysts
Sintering is an important cause for loss of catalytic activity, especially on supported metal catalysts. It decreases the surface area of the catalyst and changes the surface structure. For a porous catalytic surface, the pores may collapse due to sintering, resulting in loss of surface area. Sintering is in general an irreversible process.
Small catalyst particles have the highest possible relative surface area and high reaction temperature, both factors that generally increase the reactivity of a catalyst. However, these factors are also the circumstances under which sintering occurs. Specific materials may also increase the rate of sintering. On the other hand, by alloying catalysts with other materials, sintering can be reduced. Rare-earth metals in particular have been shown to reduce sintering of metal catalysts when alloyed.
For many supported metal catalysts, sintering starts to become a significant effect at temperatures over . Catalysts that operate at higher temperatures, such as a car catalyst, use structural improvements to reduce or prevent sintering. These improvements are in general in the form of a support made from an inert and thermally stable material such as silica, carbon or alumina.
See also
Selective laser sintering, a rapid prototyping technology that includes Direct Metal Laser Sintering (DMLS).
– a pioneer of sintering methods
References
Further reading
External links
Particle-Particle-Sintering – a 3D lattice kinetic Monte Carlo simulation
Sphere-Plate-Sintering – a 3D lattice kinetic Monte Carlo simulation
Industrial processes
Metalworking
Plastics industry
Metallurgical processes | Sintering | Chemistry,Materials_science | 7,074 |
72,447,471 | https://en.wikipedia.org/wiki/Robert%20L.%20McGinnis | Robert L. McGinnis is an American scientist, technology entrepreneur, and inventor who has founded a number of technology companies including Prometheus Fuels, Mattershift and Oasys Water.
As a scientist, McGinnis is known for his contributions in the domain of desalination and forward osmosis; in particular, he is credited as a co-inventor of the ammonia–carbon dioxide (NH3/CO2) draw solution for the forward osmosis (FO) desalination process.
McGinnis is CEO at Prometheus Fuels, an environmental technology startup company he founded in 2019.
Background
Robert McGinnis attended Cabrillo College and then Yale University, where he received his B.A. degree in Theater in 2002. He then earned an M.S. in Environmental Engineering in 2007. Continuing his studies at Yale University, McGinnis finished his Ph.D. in Environmental Engineering in 2009; his academic advisor was Menachem Elimelech. His joint work and thesis "Ammonia–Carbon Dioxide Forward Osmosis Desalination and Pressure Retarded Osmosis" was published in the journal Desalination in April 2005.
McGinnis is a veteran of the U.S. Navy Explosive Ordnance Disposal (EOD) team, where he also served during Operation Desert Storm defusing mines in the Persian Gulf's harbors and battlefields.
Academic career
In 2002, McGinnis was appointed CTO and research engineer at Osmotic Technologies Inc. (OTI), a Yale University incubator for the commercialization of forward osmosis desalination and water treatment, which later became a pilot project under the auspices of the EUWP program (Expeditionary Unit Water Purification Consortium). In 2006, McGinnis received an NSF-GRFP Graduate Research Fellowship from the National Science Foundation for his Ph.D. studies under the supervision of Menachem Elimelech, who founded Yale's Environmental Engineering Program.
McGinnis' scientific research interests at Yale included the development of osmotically driven membrane processes, novel membrane design, and nanoscale membrane sensing with the main focus being on engineered forward osmosis methods and its practical applications in desalination and water treatment processes.
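Forward-osmosis processes of the kind studied here are driven by the osmotic-pressure difference between a concentrated draw solution and the feed water. As a rough illustration only, the sketch below estimates osmotic pressures with the ideal van 't Hoff approximation; the concentrations, temperature, and van 't Hoff factors are invented example values, and real NH3/CO2 draw solutions deviate from this ideal-solution estimate.

# Rough van 't Hoff estimate of osmotic pressure, pi = i * M * R * T.
# All inputs are illustrative assumptions, not data from McGinnis's work.
R_L_BAR = 0.083145  # gas constant in L*bar/(mol*K)

def osmotic_pressure_bar(molarity, van_t_hoff_factor, temp_k=298.15):
    return van_t_hoff_factor * molarity * R_L_BAR * temp_k

feed = osmotic_pressure_bar(molarity=0.6, van_t_hoff_factor=2)   # roughly seawater-like NaCl feed
draw = osmotic_pressure_bar(molarity=3.0, van_t_hoff_factor=2)   # hypothetical concentrated draw solution
print(f"feed ~{feed:.0f} bar, draw ~{draw:.0f} bar, osmotic driving difference ~{draw - feed:.0f} bar")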
His work has been published in chemistry and environment technology-related journals. McGinnis is also co-inventor on more than 20 granted patents in the fields of membranes, energy, desalination, and nanotechnology assigned by the United States Patent and Trademark Office. In 2018, McGinnis received an AIChE Innovator Award for Innovation in Chemical Engineering Education granted by the American Institute of Chemical Engineers (AIChE).
Business career
Oasys Water
Research on forward osmosis methods in Elimelech's lab at Yale led to the formation of Oasys Water in 2008, a company based in Cambridge, Massachusetts, with the main purpose of making the engineered osmosis (EO) desalination technology commercially applicable. The company began as a Yale technology startup project. McGinnis directed the company as CTO until 2012. Eventually, Oasys Water built five large water treatment plants in China and later merged with the Beijing-based Woteer Water Technology company.
Mattershift
In 2013, McGinnis launched Mattershift, a technology company developing carbon nanotube membranes for molecular factories. The company further sought to convert CO₂ from the air into fuels, fertilizers, pharmaceuticals, and construction materials without the use of fossil fuels. The San Francisco Bay Area-based company was initially located at the University of Connecticut (UCONN) as part of its Technology Incubation Program.
The company's technology in scaling up carbon nanotube (CNT) membranes was published and peer-reviewed in Science Advances in March 2018. The open-access study was also reviewed by The Chemical Engineer.
Prometheus Fuels
McGinnis's next technology startup was Prometheus Fuels, a Santa Cruz, California-based energy company developing tools to filter atmospheric CO2 using water, electricity, and nanotube membranes to produce commercially viable fuels. He started the company in 2019 and has been its CEO since then. The project was one of two selected for investment in March 2019 by Y Combinator after the incubator's request for proposals to address carbon removal.
Selected publications
Robert L. McGinnis; Kevin Reimund; Jian Ren; Lingling Xia and others, Large-scale polymeric carbon nanotube membranes with sub–1.27-nm pores, in Science Advances, Vol 4, Issue 3, 2018
Robert L. McGinnis; Nathan T. Hancock; Marek S. Nowosielski-Slepowron; Gary D. McGurgan, Pilot demonstration of the NH3/CO2 forward osmosis desalination process on high salinity brines, in Desalination Journal, Volume 312, 1 March 2013, Pages 67–74
Robert L. McGinnis; Tzahi Y. Cath; Menachem Elimelech; Jeffrey R. McCutcheon and others, Standard Methodology for Evaluating Membrane Performance in Osmotically Driven Membrane Processes, in Desalination Journal, Volume 312, 1 March 2013, Pages 31–38
Robert L. McGinnis and Menachem Elimelech, Global Challenges in Energy and Water Supply: The Promise of Engineered Osmosis in Environ. Sci. Technol. 2008, 42, 23, 8625–8629, 1 December 2008
Robert L. McGinnis; Jeffrey R. McCutcheon; Menachem Elimelech, A novel ammonia–carbon dioxide osmotic heat engine for power generation, in Journal of Membrane Science, Volume 305, Issues 1–2, 15 November 2007, Pages 13–19
Robert L. McGinnis; Menachem Elimelech, Energy requirements of ammonia–carbon dioxide forward osmosis desalination, in Desalination Journal, Volume 207, Issues 1–3, 10 March 2007, Pages 370-382
Robert L. McGinnis; Jeffrey R. McCutcheon; Menachem Elimelech, Desalination by ammonia–carbon dioxide forward osmosis: Influence of draw and feed solution concentrations on process performance, in Journal of Membrane Science, Volume 278, Issues 1–2, 5 July 2006, Pages 114-123
Robert L. McGinnis; Jeffrey L. McCutcheon and Menachem Elimelech, The Ammonia-Carbon Dioxide Forward Osmosis Desalination Process, in Water Conditioning and Purification, Jan 1, 2006
Robert L. McGinnis; Jeffrey R. McCutcheon; Menachem Elimelech, A novel ammonia—carbon dioxide forward (direct) osmosis desalination process, in Desalination Journal, Volume 174, Issue 1, 1 April 2005, Pages 1–11
See also
Forward osmosis
Desalination
References
External links
Rob McGinnis on Google Scholar
Google Patents - Inventor Robert L. McGinnis
Living people
Year of birth missing (living people)
American scientists
American environmental scientists
Yale University alumni
American inventors
American technology company founders
American technology businesspeople | Robert L. McGinnis | Environmental_science | 1,471 |
49,274,221 | https://en.wikipedia.org/wiki/Graphs%20and%20Combinatorics | Graphs and Combinatorics (ISSN 0911-0119, abbreviated Graphs Combin.) is a peer-reviewed academic journal in graph theory, combinatorics, and discrete geometry published by Springer Japan. Its editor-in-chief is Katsuhiro Ota of Keio University.
The journal was first published in 1985. Its founding editor in chief was Hoon Heng Teh of Singapore, the president of the Southeast Asian Mathematics Society, and its managing editor was Jin Akiyama. Originally, it was subtitled "An Asian Journal".
In most years since 1999, it has been ranked as a second-quartile journal in discrete mathematics and theoretical computer science by SCImago Journal Rank.
References
Academic journals established in 1985
Combinatorics journals
Graph theory journals
Discrete geometry journals | Graphs and Combinatorics | Mathematics | 163 |
15,072,039 | https://en.wikipedia.org/wiki/ARMS2 | Age-related maculopathy susceptibility protein 2, is a mitochondrial protein that in humans is encoded by the ARMS2 gene.
References
External links
Further reading | ARMS2 | Chemistry | 35 |
45,102,490 | https://en.wikipedia.org/wiki/Stochastic%20empirical%20loading%20and%20dilution%20model | The stochastic empirical loading and dilution model (SELDM) is a stormwater quality model. SELDM is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. Although SELDM is, nominally, a highway runoff model is can be used to estimate flows concentrations and loads of runoff-quality constituents from other land use areas as well. SELDM was developed by the U.S. Geological Survey so the model, source code, and all related documentation are provided free of any copyright restrictions according to U.S. copyright laws and the USGS Software User Rights Notice. SELDM is widely used to assess the potential effect of runoff from highways, bridges, and developed areas on receiving-water quality with and without the use of mitigation measures. Stormwater practitioners evaluating highway runoff commonly use data from the Highway Runoff Database (HRDB) with SELDM to assess the risks for adverse effects of runoff on receiving waters.
SELDM is a stochastic mass-balance model. A mass-balance approach is commonly applied to estimate the concentrations and loads of water-quality constituents in receiving waters downstream of an urban or highway-runoff outfall. In a mass-balance model, the loads from the upstream basin and the runoff source area are added to calculate the discharge, concentration, and load in the receiving water downstream of the discharge point.
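The arithmetic behind this mass balance can be illustrated with a short, self-contained sketch; the function name, variable names, and example values below are illustrative assumptions, not part of SELDM itself.

```python
# Minimal sketch of the mass-balance mixing performed for a single storm event
# (illustrative only; not SELDM source code).

def downstream_event(q_upstream, c_upstream, q_runoff, c_runoff):
    """Mix upstream stormflow with runoff from the site of interest.

    Flows and concentrations must be in mutually consistent units; the event
    load is flow volume times event mean concentration (EMC).
    Returns the downstream flow, load, and EMC.
    """
    load_upstream = q_upstream * c_upstream
    load_runoff = q_runoff * c_runoff
    q_downstream = q_upstream + q_runoff
    c_downstream = (load_upstream + load_runoff) / q_downstream
    return q_downstream, load_upstream + load_runoff, c_downstream

# Example: a small volume of concentrated highway runoff diluted by a larger,
# cleaner upstream stormflow (hypothetical numbers).
print(downstream_event(q_upstream=5000.0, c_upstream=0.02,
                       q_runoff=300.0, c_runoff=0.35))
```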
SELDM can do a stream-basin analysis and a lake-basin analysis. The stream-basin analysis uses a stochastic mass-balance analysis based on multi-year simulations including hundreds to thousands of runoff events. SELDM generates storm-event values for the site of interest (the highway site) and the upstream receiving stream to calculate flows, concentrations, and loads in the receiving stream downstream of the stormwater outfall. The lake-basin analysis also is a stochastic multi-year mass-balance analysis. The lake-basin analysis uses the highway loads that occur during runoff periods and the total annual loads from the lake basin to calculate annual loads to and from the lake. The lake-basin analysis uses the volume of the lake and pollutant-specific attenuation factors to calculate a population of average-annual lake concentrations.
The annual flows and loads SELDM calculates for the stream and lake analyses also can be used to estimate total maximum daily loads (TMDLs) for the site of interest and the upstream lake basin. The TMDL can be based on the average of annual loads because the product of the average load and the number of years of record equals the sum-total load for that (simulated) period of record. The variability in annual values can be used to estimate the risk of exceedance and the margin of safety for the TMDL analysis.
Model description
SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations. Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations, flows, and loads on receiving waters by storm and by year. Unlike deterministic hydrologic models, SELDM is not calibrated by changing values of input variables to match a historical record of values. Instead, input values for SELDM are based on site characteristics and representative statistics for each hydrologic variable. Thus, SELDM is an empirical model based on data and statistics rather than theoretical physicochemical equations.
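The stochastic-population idea can be sketched in a few lines: draw many random storm events, compute the downstream concentration for each, rank the results, and attach plotting positions to express risk. In the sketch below, the distributions, parameter values, and water-quality criterion are invented for illustration; only the Cunnane plotting-position formula (used by recent SELDM versions, as noted under History) is standard.

```python
# Illustrative Monte Carlo sketch of a stochastic population of downstream
# event mean concentrations (EMCs). Not SELDM code; all distributions and
# parameter values are assumptions chosen only to show the bookkeeping.
import random

random.seed(1)

def cunnane_position(rank, n, a=0.4):
    """Cunnane plotting position: p = (rank - a) / (n + 1 - 2a)."""
    return (rank - a) / (n + 1.0 - 2.0 * a)

n_events = 5000
emcs = []
for _ in range(n_events):
    q_up = random.lognormvariate(8.0, 1.0)    # upstream event volume
    q_hw = random.lognormvariate(5.0, 0.8)    # highway-runoff volume
    c_up = random.lognormvariate(-4.0, 0.6)   # upstream EMC
    c_hw = random.lognormvariate(-1.5, 0.9)   # highway-runoff EMC
    emcs.append((q_up * c_up + q_hw * c_hw) / (q_up + q_hw))

emcs.sort()
criterion = 0.05  # hypothetical water-quality criterion
risk = sum(c > criterion for c in emcs) / n_events
print(f"Fraction of simulated events exceeding the criterion: {risk:.3f}")

# Plotting positions give the cumulative (non-exceedance) probability of each
# ranked value, e.g. the EMC at roughly 90 percent non-exceedance:
idx = min(range(n_events),
          key=lambda i: abs(cunnane_position(i + 1, n_events) - 0.9))
print(f"EMC at ~90% non-exceedance: {emcs[idx]:.4f}")
```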
SELDM is a lumped parameter model because the highway site, the upstream basin, and the lake basin each are represented as a single homogeneous unit. Each of these source areas is represented by average basin properties, and results from SELDM are calculated as point estimates for the site of interest. Use of the lumped parameter approach facilitates rapid specification of model parameters to develop planning-level estimates with available data. The approach allows for parsimony in the required inputs to and outputs from the model and flexibility in the use of the model. For example, SELDM can be used to model runoff from various land covers or land uses by using the highway-site definition as long as representative water quality and impervious-fraction data are available.
SELDM is easy to use because it has a simple graphical user interface and because much of the information and data needed to run SELDM are embedded in the model. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from National datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. Information and data from hundreds to thousands of sites across the country were compiled to facilitate use of SELDM. Most of the necessary input data are obtained by defining the location of the site of interest and five simple basin properties. These basin properties are the drainage area, the basin length, the basin slope, the impervious fraction, and the basin development factor
SELDM models the potential effect of mitigation measures by using Monte Carlo methods with statistics that approximate the net effects of structural and nonstructural best management practices (BMPs). Structural BMPs are defined as the components of the drainage pathway between the source of runoff and a stormwater discharge location that affect the volume, timing, or quality of runoff. SELDM uses a simple stochastic statistical model of BMP performance to develop planning-level estimates of runoff-event characteristics. This statistical approach can be used to represent a single BMP or an assemblage of BMPs. The SELDM BMP-treatment module has provisions for stochastic modeling of three stormwater treatments: volume reduction, hydrograph extension, and water-quality treatment. In SELDM, these three treatment variables are modeled by using the trapezoidal distribution and the rank correlation with the associated highway-runoff variables. The SELDM documentation describes methods for calculating the trapezoidal-distribution statistics and rank correlation coefficients for stochastic modeling of volume reduction, hydrograph extension, and water-quality treatment by structural stormwater BMPs and provides the calculated values for these variables. These statistics are different from the statistics commonly used to characterize or compare BMPs. They are designed to provide a stochastic transfer function to approximate the quantity, duration, and quality of BMP effluent given the associated inflow values for a population of storm events.
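A trapezoidal distribution is specified by its lower bound, lower mode, upper mode, and upper bound. The sketch below shows one way to draw BMP concentration-reduction ratios from such a distribution by rejection sampling; the bounds are placeholder values rather than published BMP statistics, and the rank correlation with inflow values that SELDM also applies is omitted for brevity.

```python
# Minimal sketch of sampling a trapezoidal distribution, as used to represent
# BMP treatment variables (placeholder parameters; not SELDM's published
# statistics, and the rank correlation with inflow values is not modeled here).
import random

random.seed(2)

def trapezoidal_sample(a, b, c, d, rng=random):
    """Draw one value from a trapezoidal distribution with lower bound a,
    lower mode b, upper mode c, and upper bound d (requires a < b <= c < d).
    Uses rejection sampling against the unnormalized density."""
    def density(x):
        if x < b:
            return (x - a) / (b - a)   # rising limb
        if x <= c:
            return 1.0                 # flat top
        return (d - x) / (d - c)       # falling limb
    while True:
        x = rng.uniform(a, d)
        if rng.random() <= density(x):
            return x

# Hypothetical effluent/influent concentration ratio for a BMP:
ratios = [trapezoidal_sample(0.1, 0.3, 0.5, 0.9) for _ in range(10000)]
print(f"mean simulated concentration ratio: {sum(ratios) / len(ratios):.3f}")

inflow_emc = 0.35                        # hypothetical inflow EMC
effluent_emc = inflow_emc * ratios[0]    # one stochastic effluent value
print(f"example effluent EMC: {effluent_emc:.3f}")
```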
Model interface
SELDM was developed as a Microsoft Access® database software application to facilitate storage, handling, and use of the hydrologic dataset with a simple graphical user interface (GUI). The program's menu-driven GUI uses standard Microsoft Visual Basic for Applications® (VBA) interface controls to facilitate entry, processing, and output of data. Appendix 4 of the SELDM manual has detailed instructions for using the GUI.
The SELDM user interface has one or more GUI forms that are used to enter four categories of input data, which include documentation, site and region information, hydrologic statistics, and water-quality data. The documentation data include information about the analyst, the project, and the analysis. The site and region data include the highway-site characteristics, the ecoregions, the upstream-basin characteristics, and, if a lake analysis is selected, the lake-basin characteristics. The hydrologic data include precipitation, streamflow, and runoff-coefficient statistics. The water-quality data include highway-runoff-quality statistics, upstream-water-quality statistics, downstream-water-quality definitions, and BMP-performance statistics. There also is a GUI form for running the model and accessing the distinct set of output files. The SELDM interface is designed to populate the database with data and statistics for the analysis and to specify index variables that are used by the program to query the database when SELDM is run. It is necessary to step through the input forms each time an analysis is run.
Model output
The results of each SELDM analysis are written to 5–10 output files, depending on the options that were selected during the analysis-specification process. The five output files that are created for every model run are the output documentation, highway-runoff quality, annual highway runoff, precipitation events, and stormflow file. If the Stream Basin or Stream and Lake Basin output options are selected, then the prestorm streamflow and dilution factor files also are created. If these same two output options are selected and, in addition, one or more downstream water-quality pairs are defined by using the water-quality menu, then the upstream water-quality and downstream water-quality output files also are created by SELDM. If the Stream and Lake Basin Output or Lake Basin Output option is selected, and one or more downstream water-quality pairs are defined by using the water-quality menu, then the Lake Analysis output file is created when the Lake Basin Analysis is run. The output files are written as tab-delimited ASCII text files in a relational database (RDB) format that can be imported into many software packages. This output is designed to facilitate post-modeling analysis and presentation of results.
The benefit of the Monte Carlo analysis is not to decrease uncertainty in the input statistics, but to represent the different combinations of the variables that determine potential risks of water-quality excursions. SELDM provides a method for rapid assessment of information that is otherwise difficult or impossible to obtain because it models the interactions among hydrologic variables (with different probability distributions) that result in a population of values that represent likely long-term outcomes from runoff processes and the potential effects of different mitigation measures. SELDM also provides the means for rapidly doing sensitivity analyses to determine the potential effects of different input assumptions on the risks for water-quality excursions. SELDM produces a population of storm-event and annual values to address the questions about the potential frequency, magnitude, and duration of water-quality excursions. The output represents a collection of random events rather than a time series. Each storm that is generated in SELDM is identified by sequence number and annual-load accounting year. The model generates each storm randomly; there is no serial correlation, and the order of storms does not reflect seasonal patterns. The annual-load accounting years, which are just random collections of events generated with the sum of storm interevent times less than or equal to a year, are used to generate annual highway flows and loads for TMDL analysis and the lake basin analysis.
In 2019, the USGS developed a model post processor for SELDM to facilitate analysis and graphing of results from SELDM simulations; that software, known as InterpretSELDM, is available in the public domain on a USGS ScienceBase site.
History
SELDM was developed between 2010 and 2013 and was published as version 1.0.0 in March 2013. A small problem with the algorithm used to calculate upstream and lake-basin transport curves was discovered and version 1.0.1 was released in July 2013. Version 1.0.2 was released in June 2016 to use the Cunnane plotting position formula for all output files. Version 1.0.3 was released in July 2018 to address issues with load calculations for constituents with concentrations of nanograms per liter or picograms per liter and to address other sundry issues. Version 1.1.0 was released in May 2021 to add batch processing, change the highway runoff duration used for upstream transport curves from the discharge duration, which could vary from BMP to BMP, to the runoff-concurrent duration and volume, and fix a problem that allowed users to simulate a dependent variable in a lake analysis without the explanatory variable, which caused an error. Version 1.1.1 was released in December 2022 to make SELDM compatible with the 32- and 64-bit versions of Microsoft Office; this version has the ability to simulate emerging contaminants including microplastics, PFAS/PFOS (see Per- and polyfluoroalkyl substances and Perfluorooctanesulfonic acid), and tire chemicals (see Tire manufacturing, Rubber pollution, and 6PPD). The code for SELDM is open-source, public-domain code that can be downloaded from the SELDM software support page.
See also
References
External links
SELDM Documentation Page
SELDM Software Support Page
SELDM Software Archive
Stormwater YouTube Page
Environmental engineering
Federal Highway Administration
Stormwater management
Environmental issues with water
Hydrology models
Hydrology and urban planning
Water and the environment
Water resource management in the United States
United States Geological Survey | Stochastic empirical loading and dilution model | Chemistry,Engineering,Biology,Environmental_science | 2,779 |
983,406 | https://en.wikipedia.org/wiki/NGC%203 | NGC 3 is a lenticular galaxy with the morphological type of S0, located in the constellation of Pisces. Other sources classify NGC 3 as a barred spiral galaxy as a type of SBa. It was discovered on November 29, 1864, by Albert Marth.
Observational History
NGC 3 was discovered by Albert Marth on 29 November 1864 and was described as "faint, very small, round, almost stellar".
Properties
NGC 3 is a lenticular galaxy, although other sources classify it as a barred spiral galaxy. It is located at a distance of about 172 million light-years from Earth and has a magnitude of 14.2.
NGC 3 appears to have a faint spiral arm structure, along with a weak bar.
Listing in Astronomical Catalogues
NGC 3 is first cataloged as GC 5080, an addendum to Dreyer's 1877 Supplement to the General Catalogue of Nebulae And Clusters of Stars. The object is cataloged as UGC 58, PGC 565, Ark 1, MCG+01-01-037, and CGCG 408–35.
Gallery
References
External links
Galaxies discovered in 1864
Lenticular galaxies
Pisces (constellation)
0003
00565
00058
18641129 | NGC 3 | Astronomy | 257 |
19,593,040 | https://en.wikipedia.org/wiki/Celsius | The degree Celsius is the unit of temperature on the Celsius temperature scale (originally known as the centigrade scale outside Sweden), one of two temperature scales used in the International System of Units (SI), the other being the closely related Kelvin scale. The degree Celsius (symbol: °C) can refer to a specific point on the Celsius temperature scale or to a difference or range between two temperatures. It is named after the Swedish astronomer Anders Celsius (1701–1744), who proposed the first version of it in 1742. The unit was called centigrade in several languages (from the Latin centum, which means 100, and gradus, which means steps) for many years. In 1948, the International Committee for Weights and Measures renamed it to honor Celsius and also to remove confusion with the term for one hundredth of a gradian in some languages. Most countries use this scale (the Fahrenheit scale is still used in the United States, some island territories, and Liberia).
Throughout the 19th century, the scale was based on 0 °C for the freezing point of water and 100 °C for the boiling point of water at 1 atm pressure. (In Celsius's initial proposal, the values were reversed: the boiling point was 0 degrees and the freezing point was 100 degrees.)
Between 1954 and 2019, the precise definitions of the unit and the Celsius temperature scale used absolute zero and the triple point of water. Since 2007, the Celsius temperature scale has been defined in terms of the kelvin, the SI base unit of thermodynamic temperature (symbol: K). Absolute zero, the lowest temperature, is now defined as being exactly 0 K and −273.15 °C.
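Because the scale is now defined through the kelvin, conversion between the two scales is an exact, fixed offset. A minimal sketch of the relations (the Fahrenheit formula is included only for comparison):

```python
# Exact scale conversions (by definition of the units).
def celsius_to_kelvin(t_c):
    return t_c + 273.15            # T/K = t/°C + 273.15

def kelvin_to_celsius(t_k):
    return t_k - 273.15

def celsius_to_fahrenheit(t_c):
    return t_c * 9.0 / 5.0 + 32.0  # t/°F = (t/°C) × 9/5 + 32

print(celsius_to_kelvin(-273.15))    # 0.0   (absolute zero)
print(celsius_to_kelvin(0.0))        # 273.15 (approximately the ice point)
print(celsius_to_fahrenheit(100.0))  # 212.0
```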
History
In 1742, Swedish astronomer Anders Celsius (1701–1744) created a temperature scale that was the reverse of the scale now known as "Celsius": 0 represented the boiling point of water, while 100 represented the freezing point of water. In his paper Observations of two persistent degrees on a thermometer, he recounted his experiments showing that the melting point of ice is essentially unaffected by pressure. He also determined with remarkable precision how the boiling point of water varied as a function of atmospheric pressure. He proposed that the zero point of his temperature scale, being the boiling point, would be calibrated at the mean barometric pressure at mean sea level. This pressure is known as one standard atmosphere. The BIPM's 10th General Conference on Weights and Measures (CGPM) in 1954 defined one standard atmosphere to equal precisely 1,013,250 dynes per square centimeter (101.325 kPa).
In 1743, the French physicist Jean-Pierre Christin, permanent secretary of the Academy of Lyon, inverted the Celsius temperature scale so that 0 represented the freezing point of water and 100 represented the boiling point of water. Some credit Christin for independently inventing the reverse of Celsius's original scale, while others believe Christin merely reversed Celsius's scale. On 19 May 1743 he published the design of a mercury thermometer, the "Thermometer of Lyon" built by the craftsman Pierre Casati that used this scale.
In 1744, coincident with the death of Anders Celsius, the Swedish botanist Carl Linnaeus (1707–1778) reversed Celsius's scale. His custom-made "Linnaeus-thermometer", for use in his greenhouses, was made by Daniel Ekström, Sweden's leading maker of scientific instruments at the time, whose workshop was located in the basement of the Stockholm observatory. As often happened in this age before modern communications, numerous physicists, scientists, and instrument makers are credited with having independently developed this same scale; among them were Pehr Elvius, the secretary of the Royal Swedish Academy of Sciences (which had an instrument workshop) and with whom Linnaeus had been corresponding; Daniel Ekström, the instrument maker; and Mårten Strömer (1707–1770) who had studied astronomy under Anders Celsius.
The first known Swedish document reporting temperatures in this modern "forward" Celsius temperature scale is the paper Hortus Upsaliensis dated 16 December 1745 that Linnaeus wrote to a student of his, Samuel Nauclér. In it, Linnaeus recounted the temperatures inside the orangery at the University of Uppsala Botanical Garden:
"Centigrade" versus "Celsius"
Since the 19th century, the scientific and thermometry communities worldwide have used the phrase "centigrade scale" and temperatures were often reported simply as "degrees" or, when greater specificity was desired, as "degrees centigrade", with the symbol °C.
In the French language, the term centigrade also means one hundredth of a gradian, when used for angular measurement. The term centesimal degree was later introduced for temperatures but was also problematic, as it means gradian (one hundredth of a right angle) in the French and Spanish languages. The risk of confusion between temperature and angular measurement was eliminated in 1948 when the 9th meeting of the General Conference on Weights and Measures and the Comité International des Poids et Mesures (CIPM) formally adopted "degree Celsius" for temperature.
While "Celsius" is commonly used in scientific work, "centigrade" is still used in French and English-speaking countries, especially in informal contexts. The frequency of the usage of "centigrade" has declined over time.
Due to metrication in Australia, after 1 September 1972 weather reports in the country were exclusively given in Celsius. In the United Kingdom, it was not until February 1985 that forecasts by BBC Weather switched from "centigrade" to "Celsius".
Common temperatures
All phase transitions are at standard atmosphere. Figures are either by definition, or approximated from empirical measurements.
Name and symbol typesetting
The "degree Celsius" has been the only SI unit whose full unit name contains an uppercase letter since 1967, when the SI base unit for temperature became the kelvin, replacing the capitalized term degrees Kelvin. The plural form is "degrees Celsius".
The general rule of the International Bureau of Weights and Measures (BIPM) is that the numerical value always precedes the unit, and a space is always used to separate the unit from the number, e.g. "30.2 °C" (not "30.2°C" or "30.2 ° C"). The only exceptions to this rule are for the unit symbols for degree, minute, and second for plane angle (°, ′, and ″, respectively), for which no space is left between the numerical value and the unit symbol. Other languages, and various publishing houses, may follow different typographical rules.
Unicode character
Unicode provides the Celsius symbol at code point U+2103 ℃ DEGREE CELSIUS. However, this is a compatibility character provided for roundtrip compatibility with legacy encodings. It easily allows correct rendering for vertically written East Asian scripts, such as Chinese. The Unicode standard explicitly discourages the use of this character: "In normal use, it is better to represent degrees Celsius '°C' with a sequence of U+00B0 DEGREE SIGN + U+0043 LATIN CAPITAL LETTER C, rather than U+2103 DEGREE CELSIUS. For searching, treat these two sequences as identical."
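The compatibility relationship can be verified directly; the sketch below uses Python's unicodedata module to show that U+2103 normalizes, under NFKC, to the preferred two-character sequence.

```python
import unicodedata

celsius_sign = "\u2103"   # ℃ DEGREE CELSIUS (compatibility character)
preferred = "\u00b0C"     # DEGREE SIGN followed by capital letter C

print(unicodedata.name(celsius_sign))                            # DEGREE CELSIUS
print(unicodedata.normalize("NFKC", celsius_sign) == preferred)  # True
```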
Temperatures and intervals
The degree Celsius is subject to the same rules as the kelvin with regard to the use of its unit name and symbol. Thus, besides expressing specific temperatures along its scale (e.g. "Gallium melts at 29.7646 °C" and "The temperature outside is 23 degrees Celsius"), the degree Celsius is also suitable for expressing temperature intervals: differences between temperatures or their uncertainties (e.g. "The output of the heat exchanger is hotter by 40 degrees Celsius", and "Our standard uncertainty is ±3 °C"). Because of this dual usage, one must not rely upon the unit name or its symbol to denote that a quantity is a temperature interval; it must be unambiguous through context or explicit statement that the quantity is an interval. This is sometimes solved by using the symbol °C (pronounced "degrees Celsius") for a temperature, and C° (pronounced "Celsius degrees") for a temperature interval, although this usage is non-standard. Another convention found in the literature is to mark an interval explicitly with a delta, for example ΔT = 3 °C.
Celsius measurement follows an interval system but not a ratio system; and it follows a relative scale not an absolute scale. For example, an object at 20 °C does not have twice the energy of when it is 10 °C; and 0 °C is not the lowest Celsius value. Thus, degrees Celsius is a useful interval measurement but does not possess the characteristics of ratio measures like weight or distance.
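A short calculation makes the interval-versus-ratio distinction concrete: doubling the Celsius value does not double the thermodynamic temperature.

```python
# 20 °C is not "twice as hot" as 10 °C: compare ratios on the two scales.
t1_c, t2_c = 10.0, 20.0
t1_k, t2_k = t1_c + 273.15, t2_c + 273.15
print(t2_c / t1_c)            # 2.0 on the Celsius scale (not physically meaningful)
print(round(t2_k / t1_k, 4))  # 1.0353 on the absolute (Kelvin) scale
```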
Coexistence with Kelvin
In science and in engineering, the Celsius and Kelvin scales are often used in combination in close contexts, e.g. "a measured value was 0.01023 °C with an uncertainty of 70 μK". This practice is permissible because the magnitude of the degree Celsius is equal to that of the kelvin. Notwithstanding the official endorsement provided by decision no. 3 of Resolution 3 of the 13th CGPM, which stated "a temperature interval may also be expressed in degrees Celsius", the practice of simultaneously using both °C and K remains widespread throughout the scientific world as the use of SI-prefixed forms of the degree Celsius (such as "μ°C" or "microdegrees Celsius") to express a temperature interval has not been widely adopted.
Melting and boiling points of water
The melting and boiling points of water are no longer part of the definition of the Celsius temperature scale. In 1948, the definition was changed to use the triple point of water. In 2005, the definition was further refined to use water with precisely defined isotopic composition (VSMOW) for the triple point. In 2019, the definition was changed to use the Boltzmann constant, completely decoupling the definition of the kelvin from the properties of water. Each of these formal definitions left the numerical values of the Celsius temperature scale identical to the prior definition to within the limits of accuracy of the metrology of the time.
When the melting and boiling points of water ceased being part of the definition, they became measured quantities instead. This is also true of the triple point.
In 1948 when the 9th General Conference on Weights and Measures (CGPM) in Resolution 3 first considered using the triple point of water as a defining point, the triple point was so close to being 0.01 °C greater than water's known melting point, it was simply defined as precisely 0.01 °C. However, later measurements showed that the difference between the triple and melting points of VSMOW is actually very slightly (< 0.001 °C) greater than 0.01 °C. Thus, the actual melting point of ice is very slightly (less than a thousandth of a degree) below 0 °C. Also, defining water's triple point at 273.16 K precisely defined the magnitude of each 1 °C increment in terms of the absolute thermodynamic temperature scale (referencing absolute zero). Now decoupled from the actual boiling point of water, the value "100 °C" is hotter than 0 °C – in absolute terms – by a factor of exactly 373.15/273.15 (approximately 36.61% thermodynamically hotter). When adhering strictly to the two-point definition for calibration, the boiling point of VSMOW under one standard atmosphere of pressure was actually 373.1339 K (99.9839 °C). When calibrated to ITS-90 (a calibration standard comprising many definition points and commonly used for high-precision instrumentation), the boiling point of VSMOW was slightly less, about 99.974 °C.
This boiling-point difference of 16.1 millikelvins between the Celsius temperature scale's original definition and the previous one (based on absolute zero and the triple point) has little practical meaning in common daily applications because water's boiling point is very sensitive to variations in barometric pressure. For example, an altitude change of only about 28 cm causes the boiling point to change by one millikelvin.
See also
Outline of metrology and measurement
Comparison of temperature scales
Degree of frost
Thermodynamic temperature
Notes
References
External links
NIST, Basic unit definitions: Kelvin
The Uppsala Astronomical Observatory, History of the Celsius temperature scale
London South Bank University, Water, scientific data
BIPM, SI brochure, section 2.1.1.5, Unit of thermodynamic temperature
SI derived units
Scales of temperature
Swedish inventions
1742 introductions
18th-century inventions
Scales in meteorology | Celsius | Physics,Mathematics | 2,609 |
70,346,758 | https://en.wikipedia.org/wiki/Sea%20surface%20skin%20temperature | The sea surface skin temperature (SSTskin), or ocean skin temperature, is the temperature of the sea surface as determined through its infrared spectrum (3.7–12 μm) and represents the temperature of the sublayer of water at a depth of 10–20 μm. High-resolution data of skin temperature gained by satellites in passive infrared measurements is a crucial constituent in determining the sea surface temperature (SST).
Since the skin layer is in radiative equilibrium with the atmosphere and the sun, its temperature undergoes a daily cycle. Even small changes in the skin temperature can lead to large changes in atmospheric circulation. This makes skin temperature a widely used quantity in weather forecasting and climate science.
Remote Sensing
Large-scale sea surface skin temperature measurements started with the use of satellites in remote sensing. The underlying principle of this kind of measurement is to determine the surface temperature via its black body spectrum. Different measurement devices are installed where each device measures a different wavelength. Every wavelength corresponds to different sublayers in the upper 500 μm of the ocean water column. Since this layer shows a strong temperature gradient, the observed temperature depends on the wavelength used. Therefore, the measurements are often indicated with their wavelength band instead of their depths.
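The retrieval rests on inverting the Planck function for each infrared channel to obtain a brightness temperature; the sketch below illustrates the inversion for a single channel. The 11 μm wavelength and the radiance value are arbitrary examples, and a real retrieval adds emissivity and atmospheric corrections.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m s^-1
KB = 1.380649e-23    # Boltzmann constant, J K^-1

def planck_radiance(wavelength, temperature):
    """Blackbody spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    x = H * C / (wavelength * KB * temperature)
    return (2.0 * H * C**2 / wavelength**5) / math.expm1(x)

def brightness_temperature(wavelength, radiance):
    """Temperature of a blackbody that would emit the observed radiance."""
    return (H * C / (wavelength * KB)) / math.log1p(
        2.0 * H * C**2 / (wavelength**5 * radiance))

wavelength = 11e-6                             # an 11 μm thermal-infrared channel
radiance = planck_radiance(wavelength, 290.0)  # simulate an observation at 290 K
print(round(brightness_temperature(wavelength, radiance), 3))  # recovers 290.0
```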
History
The first satellite measurements of the sea surface were conducted as early as 1964 by Nimbus-I. Further satellites were deployed in 1966 and the early 1970s. Early measurements suffered from contamination by atmospheric disturbances. The first satellite to carry a sensor operating on multiple infrared bands was launched late in 1978, which enabled atmospheric correction. This class of sensors is called Advanced very-high-resolution radiometers (AVHRR) and provides information that is also relevant for the tracking of clouds. The current third generation features six channels at wavelength ranges important for cloud observation, cloud/snow differentiation, surface-temperature observation, and atmospheric correction. The modern satellite array is able to provide global coverage at a resolution of 10 km every ~6 h.
Conversion to SST
Sea surface skin temperature measurements are complemented by SSTsubskin measurements in the microwave regime to estimate the sea surface temperature. These measurements have the advantage of being independent of cloud cover and are subject to less variation. The conversion to SST is done via elaborate retrieval algorithms. These algorithms take additional information like the current wind, cloud cover, precipitation and water vapor content into account and model the heat transfer between the layers. The determined SST is validated by in-situ measurements from ships, buoys and profilers. On average, the skin temperature is estimated to be systematically cooler by 0.15 ± 0.1 K compared to the temperature at 5 m depth.
Vertical temperature profile of the sea surface
The vertical temperature profile of the surface layer of the ocean is determined by different heat transport processes. At the very interface, the ocean is in thermal equilibrium with the atmosphere which is dominated by conductive and diffusive heat transfer. Also, evaporation takes place at the interface and thus cools the skin layer. Below the skin layer lies the subskin layer, this layer is defined as the layer where molecular and viscous heat transfer dominates. At larger scales, as the much bigger foundation layer, turbulent heat transport through eddies contributes most to the vertical heat transfer.
During the day, there is additional heating by the sun. The solar radiation entering the ocean heats the surface following the Beer-Lambert law. Here, approximately five percent of the incoming radiation is absorbed in the upper 1 mm of the ocean. Since the heating from above leads to a stable stratification, other processes dominate the heat transport, depending on the considered scale.
Regarding the skin layer with thickness δ, the turbulent diffusion term is negligible. For the stationary case without external heating, the vertical temperature profile obeys the following energy budget:

Q = ρ c κ (∂T/∂z)

Here, ρ and c denote the density and heat capacity of water, κ the molecular thermal diffusivity (so that ρcκ is the molecular thermal conductivity), and ∂T/∂z the vertical partial derivative of the temperature. The vertical heat flux Q consists of latent heat release, sensible heat fluxes and the net longwave thermal radiation. The observed ∂T/∂z in the skin layer is positive, which corresponds to a temperature increasing with depth (note that the z-axis points downward into the ocean). This leads to a cool skin layer. A common empirical description of the vertical temperature profile within the skin layer of depth δ interpolates between the temperature at the surface and the temperature at the lower boundary of the layer. When including the diurnal heating, an additional heating term has to be included, depending on the absorbed shortwave radiation. Integrating over depth, the temperature at depth z can then be expressed in terms of the net shortwave solar radiation at the ocean interface and the fraction of it absorbed down to depth z. The diurnal heating reduces the cool-skin effect. The maximum temperature can be found in the subskin layer, where the external heating per depth is lower than in the skin layer, but where the surface cooling has a smaller effect. With further increasing depth, the temperature declines, as the proportional heating is smaller and the layer is mixed via turbulent processes.
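With the stationary balance above, the temperature drop across the skin layer is approximately the net surface heat loss multiplied by the layer thickness and divided by ρcκ. The sketch below evaluates this with rounded, typical-order values; the numbers are illustrative assumptions, not measurements.

```python
# Order-of-magnitude estimate of the cool-skin temperature difference
# (illustrative values only).
rho = 1025.0     # seawater density, kg m^-3
c_p = 4000.0     # specific heat of seawater, J kg^-1 K^-1
kappa = 1.4e-7   # molecular thermal diffusivity of water, m^2 s^-1
delta = 500e-6   # assumed skin-layer thickness, m (~0.5 mm)
q_net = 100.0    # assumed net surface heat loss, W m^-2

# Constant gradient across the layer: dT/dz = Q / (rho * c_p * kappa)
delta_t = q_net * delta / (rho * c_p * kappa)
print(f"cool-skin temperature difference: {delta_t:.2f} K")  # ≈ 0.09 K
```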
Variation of skin temperature
Daily cycle
The ocean skin temperature is defined as the temperature of the water at 20 μm depth. This means that the SSTskin is very dependent on the heat flux from the ocean to the atmosphere. This results in diurnal warming of the sea surface, high temperatures occur during the day and low temperatures during the night (especially with clear skies and low wind speed conditions).
Because the SSTskin can be measured by satellites and is the temperature almost at the interface of the ocean and the atmosphere, it is a very useful measure to find the heat flux from the ocean. The increased heat flux due to diurnal warming can reach as high as 50-60 W/m2 and has a temporal mean of 10 W/m2. These amounts of heat flux cannot be neglected in atmospheric processes.
Wind and interaction with the atmosphere
The sea surface temperature is also highly dependent on wind and waves. Both processes cause mixing and therefore cooling/heating of the SSTskin. For example, when rough seas occur during the day, colder water from lower layers are mixed with the ocean skin. When gravity waves are present at the sea surface, there is a modulation of ocean skin temperature. In this modulation, the wind plays an important role. The magnitude of this modulation depends on wind speed, the phase is determined by the direction of the wind relative to the waves. When the wind and wave direction are similar, maximum temperatures occur on the forward side of the wave and when the wind blows from the opposite side compared to the waves, maximum temperatures are found at the rear face of the wave.
Interaction with marine lifeforms
On a global scale, skin temperature is an indicator of plankton concentrations. In areas where a relatively cold SSTskin is measured, abundance of phytoplankton is high. This effect is caused by the rise of cold, nutrient-rich water from the sea bottom in these regions. This increase in nutrients causes phytoplankton to thrive. On the other hand, relatively high SSTskin is an indication of higher zooplankton concentrations. These plankton depend on organic matter to thrive and higher temperatures increase production.
On more local scales, surface accumulations of cyanobacteria can cause local increases in SSTskin by up to 1.5 degrees Celsius. Cyanobacteria are bacteria that photosynthesize and therefore chlorophyll is present in these bacteria. This increased chlorophyll concentration causes more absorption of incoming radiation. This increased absorption causes the temperature of the sea surface to rise. This increased temperature is most likely only apparent in the first meter and definitely only in the first five meters, after which no increased temperatures are measured.
See also
Sea surface temperature
Remote sensing
Remote sensing (oceanography)
Thermal radiation
Skin temperature of an atmosphere
Sea surface interface temperature
Sea surface subskin temperature
Group for High Resolution Sea Surface Temperature (GHRSST)
Weather modification
References
Oceans
Temperature | Sea surface skin temperature | Physics,Chemistry | 1,637 |
14,403,153 | https://en.wikipedia.org/wiki/Mongol%20mythology | The Mongol mythology is the traditional religion of the Mongols.
Creation
There are many Mongol creation myths. In one, the creation of the world is attributed to a Buddhist deity Lama. At the start of time, there was only water, and from the heavens, Lama came down to it holding an iron rod with which he began to stir. As he began to stir the water, the stirring brought about a wind and fire which caused a thickening at the centre of the waters to form the earth. Another narrative also attributes the creation of heaven and earth to a lama who is called Udan. Udan began by separating earth from heaven, and then dividing heaven and earth both into nine stories, and creating nine rivers. After the creation of the earth itself, the first male and female couple were created out of clay. They would become the progenitors of all humanity.
In another account, the world began as an agitating gas which grew increasingly warm and damp, precipitating a heavy rain that created the oceans. Dust and sand emerged to the surface and became earth. Yet another account tells of the Buddha Sakyamuni searching the surface of the sea for a means to create the earth and spotting a golden frog. From its east side, Buddha pierced the frog through with an arrow, causing it to spin and face north. From its mouth burst fire, and from its rump streamed water. Buddha tossed golden sand on its back, which became land. This was the origin of the five earthly elements: wood and metal from the arrow, and fire, water, and sand. These myths date from the 17th century, when Yellow Shamanism (Tibetan Buddhism using shamanistic forms) was established in Mongolia. Black Shamanism and White Shamanism from pre-Buddhist times survive only in far-northern Mongolia (around Lake Khuvsgul) and the region around Lake Baikal, where Lamaist persecution had not been effective.
Deities
Bai-Ulgan and Esege Malan are creator deities.
Ot is the goddess of marriage.
Tung-ak is the patron god of tribal chiefs and the ruler of the lesser spirits of Mongol mythology
Erlik Khan is the King of the Underworld.
Daichi Tengri is the red god of war to whom enemy soldiers were sometimes sacrificed during battle campaigns.
Zaarin Tengri is a spirit who gives Khorchi (in the Secret History of the Mongols) a vision of a cow mooing "Heaven and earth have agreed to make Temujin (later Genghis Khan) the lord of the nation".
The sky god Tengri is attested from the Xiongnu of the 2nd century BC. The Xiongnu may not have been Mongol, but Tengri is common to several Central Asian peoples, including the Mongols.
The wolf, falcon, deer and horse were important symbolic animals.
Texts and myths
The Uliger are traditional epic tales and the Epic of King Gesar is shared with much of Central Asia and Tibet.
The Epic of King Gesar (Ges'r, Kesar) is a Mongol religious epic about Geser (also known as Buche Beligte), a prophet of Tengriism.
See also
Alpamysh
Epic of Manas
Manchurian mythology
Mongolian cosmogony
Scythian mythology
Shamanism in Siberia
The Secret History of the Mongols
Tibetan mythology
Tungusic mythology
Turco-Mongol tradition
Turkic mythology
Notes
References
Walter Heissig, The Religions of Mongolia, Kegan Paul (2000).
Myths Connected With Mongol Religion, A Journey in Southern Siberia, by Jeremiah Curtin.
Gerald Hausman, Loretta Hausman, The Mythology of Horses: Horse Legend and Lore Throughout the Ages (2003), 37–46.
Yves Bonnefoy, Wendy Doniger, Asian Mythologies, University Of Chicago Press (1993), 315–339.
满都呼, 中国阿尔泰语系诸民族神话故事(folklores of Chinese Altaic races).民族出版社, 1997. .
贺灵, 新疆宗教古籍资料辑注(materials of old texts of Xinjiang religions).Xinjiang People's Press, May 2006. .
S. G. Klyashtornyj, 'Political Background of the Old Turkic Religion' in: Oelschlägel, Nentwig, Taube (eds.), "Roter Altai, gib dein Echo!" (FS Taube), Leipzig, 2005, , 260–265.
External links
Alpamysh
Shamanism in Mongolia and Tibet
The Altaic Epic
Tengri on Mars
Creation myths
Tengriism | Mongol mythology | Astronomy | 939 |
60,573,119 | https://en.wikipedia.org/wiki/Pavilion%20%28exhibition%29 | A pavilion is a genre of building often found at large international exhibitions such as a World's fair. It may be designed by a well-known architect or designer from the exhibiting country to showcase the latest technology of the exhibitor or be designed in what is considered the national architectural style of the exhibiting country. The German pavilion for the 1929 Barcelona International Exposition, for instance, was designed by noted modernist German architects Ludwig Mies van der Rohe and Lilly Reich.
See also
Aberdeen Pavilion
Bridge Pavilion
Canada Pavilion, British Empire Exhibition
Expo 2010 pavilions
Moscow Pavilion
Pavilion (generally)
References
External links
Architectural terminology | Pavilion (exhibition) | Engineering | 122 |
1,080,226 | https://en.wikipedia.org/wiki/Nitrogenase | Nitrogenases are enzymes () that are produced by certain bacteria, such as cyanobacteria (blue-green bacteria) and rhizobacteria. These enzymes are responsible for the reduction of nitrogen (N2) to ammonia (NH3). Nitrogenases are the only family of enzymes known to catalyze this reaction, which is a step in the process of nitrogen fixation. Nitrogen fixation is required for all forms of life, with nitrogen being essential for the biosynthesis of molecules (nucleotides, amino acids) that create plants, animals and other organisms. They are encoded by the Nif genes or homologs. They are related to protochlorophyllide reductase.
Classification and structure
Although the equilibrium formation of ammonia from molecular hydrogen and nitrogen has an overall negative enthalpy of reaction (ΔH° ≈ −46 kJ per mole of NH3), the activation energy is very high (on the order of several hundred kJ mol−1). Nitrogenase acts as a catalyst, reducing this energy barrier such that the reaction can take place at ambient temperatures.
A usual assembly consists of two components:
The homodimeric Fe-only protein, the reductase which has a high reducing power and is responsible for a supply of electrons.
The heterotetrameric MoFe protein, a nitrogenase which uses the electrons provided to reduce N2 to NH3. In some assemblies it is replaced by a homologous alternative.
Reductase
The Fe protein, the dinitrogenase reductase or NifH, is a dimer of identical subunits which contains one [Fe4S4] cluster and has a mass of approximately 60-64kDa. The function of the Fe protein is to transfer electrons from a reducing agent, such as ferredoxin or flavodoxin to the nitrogenase protein. Ferredoxin or flavodoxin can be reduced by one of six mechanisms: 1. by a pyruvate:ferredoxin oxidoreductase, 2. by a bi-directional hydrogenase, 3. in a photosynthetic reaction center, 4. by coupling electron flow to dissipation of the proton motive force, 5. by electron bifurcation, or 6. by a ferredoxin:NADPH oxidoreductase. The transfer of electrons requires an input of chemical energy which comes from the binding and hydrolysis of ATP. The hydrolysis of ATP also causes a conformational change within the nitrogenase complex, bringing the Fe protein and MoFe protein closer together for easier electron transfer.
Nitrogenase
The MoFe protein is a heterotetramer consisting of two α subunits and two β subunits, with a mass of approximately 240–250 kDa. The MoFe protein also contains two iron–sulfur clusters, known as P-clusters, located at the interface between the α and β subunits, and two FeMo cofactors within the α subunits. The oxidation state of Mo in these nitrogenases was formerly thought to be Mo(V), but more recent evidence points to Mo(III). (Molybdenum in other enzymes is generally bound to molybdopterin as fully oxidized Mo(VI).)
The core (Fe8S7) of the P-cluster takes the form of two [Fe4S3] cubes linked by a central sulfur atom. Each P-cluster is linked to the MoFe protein by six cysteine residues.
Each FeMo cofactor (Fe7MoS9C) consists of two non-identical clusters: [Fe4S3] and [MoFe3S3], which are linked by three sulfide ions. Each FeMo cofactor is covalently linked to the α subunit of the protein by one cysteine residue and one histidine residue.
Electrons from the Fe protein enter the MoFe protein at the P-clusters, which then transfer the electrons to the FeMo cofactors. Each FeMo cofactor then acts as a site for nitrogen fixation, with N2 binding in the central cavity of the cofactor.
Variations
The MoFe protein can be replaced by alternative nitrogenases in environments low in the Mo cofactor. Two types of such nitrogenases are known: the vanadium–iron (VFe; Vnf) type and the iron–iron (FeFe; Anf) type. Both form an assembly of two α subunits, two β subunits, and two δ (sometimes γ) subunits. The delta subunits are homologous to each other, and the alpha and beta subunits themselves are homologous to the ones found in MoFe nitrogenase. The gene clusters are also homologous, and these subunits are interchangeable to some degree. All nitrogenases use a similar Fe-S core cluster, and the variations come in the cofactor metal.
The Anf nitrogenase in Azotobacter vinelandii is organized in an anfHDGKOR operon. This operon still requires some of the Nif genes to function. An engineered minimal 10-gene operon that incorporates these additional essential genes has been constructed.
Mechanism
General mechanism
Nitrogenase is an enzyme responsible for catalyzing nitrogen fixation, which is the reduction of nitrogen (N2) to ammonia (NH3) and a process vital to sustaining life on Earth. There are three types of nitrogenase found in various nitrogen-fixing bacteria: molybdenum (Mo) nitrogenase, vanadium (V) nitrogenase, and iron-only (Fe) nitrogenase. Molybdenum nitrogenase, which can be found in diazotrophs such as legume-associated rhizobia, is the nitrogenase that has been studied the most extensively and thus is the most well characterized. Vanadium nitrogenase and iron-only nitrogenase can both be found in select species of Azotobacter as alternative nitrogenases. Equations 1 and 2 show the commonly cited balanced reactions of nitrogen fixation by molybdenum nitrogenase and vanadium nitrogenase, respectively:

(1) N2 + 8 H+ + 8 e− + 16 MgATP → 2 NH3 + H2 + 16 MgADP + 16 Pi

(2) N2 + 12 H+ + 12 e− + 24 MgATP → 2 NH3 + 3 H2 + 24 MgADP + 24 Pi
All nitrogenases are two-component systems made up of Component I (also known as dinitrogenase) and Component II (also known as dinitrogenase reductase). Component I is a MoFe protein in molybdenum nitrogenase, a VFe protein in vanadium nitrogenase, and an Fe protein in iron-only nitrogenase. Component II is an Fe protein that contains an Fe-S cluster and transfers electrons to Component I. Component I contains two metal clusters: the P-cluster and the FeMo-cofactor (FeMo-co). Mo is replaced by V or Fe in vanadium nitrogenase and iron-only nitrogenase, respectively. During catalysis, 2 equivalents of MgATP are hydrolysed, which helps drive the transfer of electrons from the Fe-S cluster of Component II to the P-cluster, and finally to the FeMo-co, where reduction of N2 to NH3 takes place.
Lowe-Thorneley kinetic model
The reduction of nitrogen to two molecules of ammonia is carried out at the FeMo-co of Component I after the sequential addition of proton and electron equivalents from Component II. Steady state, freeze quench, and stopped-flow kinetics measurements carried out in the 70's and 80's by Lowe, Thorneley, and others provided a kinetic basis for this process. The Lowe-Thorneley (LT) kinetic model was developed from these experiments and documents the eight correlated proton and electron transfers required throughout the reaction. Each intermediate stage is depicted as En where n = 0–8, corresponding to the number of equivalents transferred. The transfer of four equivalents is required before the productive addition of N2, although reaction of E3 with N2 is also possible. Notably, nitrogen reduction has been shown to require 8 equivalents of protons and electrons as opposed to the 6 equivalents predicted by the balanced chemical reaction.
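One way to see why more than eight equivalents can be consumed per N2 in practice is to treat the LT scheme as a ladder of states that can slip back: at E4, relaxation (with loss of H2) competes with productive N2 binding. The toy simulation below is only a schematic of that bookkeeping; the relaxation probability is invented and does not correspond to the published LT rate constants.

```python
# Toy bookkeeping over the Lowe-Thorneley states E0..E8 (schematic only; the
# relaxation probability is an assumption, not a published rate constant).
import random

random.seed(0)

def equivalents_per_n2(p_relax=0.3):
    """Count electron/proton equivalents delivered before one N2 is reduced.
    At E4 the enzyme either relaxes to E2 (releasing H2) with probability
    p_relax, or binds N2 and completes the cycle through E8."""
    state, used = 0, 0
    while True:
        state += 1
        used += 1
        if state == 4 and random.random() < p_relax:
            state = 2                # E4 -> E2 + H2: two equivalents lost
        if state == 8:
            return used              # 2 NH3 released; enzyme returns to E0

trials = [equivalents_per_n2() for _ in range(20000)]
print(f"average equivalents consumed per N2: {sum(trials) / len(trials):.2f}")
# Exceeds 8 whenever relaxation competes with productive N2 binding.
```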
Intermediates E0 through E4
Spectroscopic characterization of these intermediates has allowed for greater understanding of nitrogen reduction by nitrogenase, however, the mechanism remains an active area of research and debate. Briefly listed below are spectroscopic experiments for the intermediates before the addition of nitrogen:
E0 – This is the resting state of the enzyme before catalysis begins. EPR characterization shows that this species has a spin of 3/2.
E1 – The one electron reduced intermediate has been trapped during turnover under N2. Mössbauer spectroscopy of the trapped intermediate indicates that the FeMo-co has an integer spin greater than 1.
E2 – This intermediate is proposed to contain the metal cluster in its resting oxidation state with the two added electrons stored in a bridging hydride and the additional proton bonded to a sulfur atom. Isolation of this intermediate in mutated enzymes shows that the FeMo-co is high spin and has a spin of 3/2.
E3 – This intermediate is proposed to be the singly reduced FeMo-co with one bridging hydride and one hydride.
E4 – Termed the Janus intermediate after the Roman god of transitions, this intermediate is positioned after exactly half of the electron proton transfers and can either decay back to E0 or proceed with nitrogen binding and finish the catalytic cycle. This intermediate is proposed to contain the FeMo-co in its resting oxidation state with two bridging hydrides and two sulfur bonded protons. This intermediate was first observed using freeze quench techniques with a mutated protein in which residue 70, a valine amino acid, is replaced with isoleucine. This modification prevents substrate access to the FeMo-co. EPR characterization of this isolated intermediate shows a new species with a spin of ½. ENDOR experiments have provided insight into the structure of this intermediate, revealing the presence of two bridging hydrides. 95Mo and 57Fe ENDOR show that the hydrides bridge between two iron centers. Cryoannealing of the trapped intermediate at -20 °C results in the successive loss of two hydrogen equivalents upon relaxation, proving that the isolated intermediate is consistent with the E4 state. The decay of E4 to E2 + H2 and finally to E0 and 2H2 has confirmed the EPR signal associated with the E2 intermediate.
The above intermediates suggest that the metal cluster is cycled between its original oxidation state and a singly reduced state with additional electrons being stored in hydrides. It has alternatively been proposed that each step involves the formation of a hydride and that the metal cluster actually cycles between the original oxidation state and a singly oxidized state.
Distal and alternating pathways for N2 fixation
While the mechanism for nitrogen fixation prior to the Janus E4 complex is generally agreed upon, there are currently two hypotheses for the exact pathway in the second half of the mechanism: the "distal" and the "alternating" pathway. In the distal pathway, the terminal nitrogen is hydrogenated first, releases ammonia, then the nitrogen directly bound to the metal is hydrogenated. In the alternating pathway, one hydrogen is added to the terminal nitrogen, then one hydrogen is added to the nitrogen directly bound to the metal. This alternating pattern continues until ammonia is released. Because each pathway favors a unique set of intermediates, attempts to determine which path is correct have generally focused on the isolation of said intermediates, such as the nitrido in the distal pathway, and the diazene and hydrazine in the alternating pathway. Attempts to isolate the intermediates in nitrogenase itself have so far been unsuccessful, but the use of model complexes has allowed for the isolation of intermediates that support both sides depending on the metal center used. Studies with Mo generally point towards a distal pathway, while studies with Fe generally point towards an alternating pathway.
Specific support for the distal pathway has mainly stemmed from the work of Schrock and Chatt, who successfully isolated the nitrido complex using Mo as the metal center in a model complex. Specific support for the alternating pathway stems from a few studies. Iron only model clusters have been shown to catalytically reduce N2. Small tungsten clusters have also been shown to follow an alternating pathway for nitrogen fixation. The vanadium nitrogenase releases hydrazine, an intermediate specific to the alternating mechanism. However, the lack of characterized intermediates in the native enzyme itself means that neither pathway has been definitively proven. Furthermore, computational studies have been found to support both sides, depending on whether the reaction site is assumed to be at Mo (distal) or at Fe (alternating) in the MoFe cofactor.
Mechanism of MgATP binding
Binding of MgATP is one of the central events to occur in the mechanism employed by nitrogenase. Hydrolysis of the terminal phosphate group of MgATP provides the energy needed to transfer electrons from the Fe protein to the MoFe protein. The binding interactions between the MgATP phosphate groups and the amino acid residues of the Fe protein are well understood by comparing to similar enzymes, while the interactions with the rest of the molecule are more elusive due to the lack of a Fe protein crystal structure with MgATP bound (as of 1996). Three protein residues have been shown to have significant interactions with the phosphates. In the absence of MgATP, a salt bridge exists between residue 15, lysine, and residue 125, aspartic acid. Upon binding, this salt bridge is interrupted. Site-specific mutagenesis has demonstrated that when the lysine is substituted for a glutamine, the protein's affinity for MgATP is greatly reduced and when the lysine is substituted for an arginine, MgATP cannot bind due to the salt bridge being too strong. The necessity of specifically aspartic acid at site 125 has been shown through noting altered reactivity upon mutation of this residue to glutamic acid. Residue 16, serine, has been shown to bind MgATP. Site-specific mutagenesis was used to demonstrate this fact. This has led to a model in which the serine remains coordinated to the Mg2+ ion after phosphate hydrolysis in order to facilitate its association with a different phosphate of the now ADP molecule. MgATP binding also induces significant conformational changes within the Fe protein. Site-directed mutagenesis was employed to create mutants in which MgATP binds but does not induce a conformational change. Comparing X-ray scattering data in the mutants versus in the wild-type protein led to the conclusion that the entire protein contracts upon MgATP binding, with a decrease in radius of approximately 2.0 Å.
Other mechanistic details
Many mechanistic aspects of catalysis remain unknown. No crystallographic analysis has been reported on substrate bound to nitrogenase.
Nitrogenase is able to reduce acetylene, but is inhibited by carbon monoxide, which binds to the enzyme and thereby prevents binding of dinitrogen. Dinitrogen prevents acetylene binding, but acetylene does not inhibit binding of dinitrogen and requires only two electrons for reduction to ethylene. Due to the oxidative properties of oxygen, most nitrogenases are irreversibly inhibited by dioxygen, which degradatively oxidizes the Fe-S cofactors. This requires mechanisms for nitrogen fixers to protect nitrogenase from oxygen in vivo. Despite this problem, many nitrogen fixers use oxygen as a terminal electron acceptor for respiration. Although the ability of some nitrogen fixers such as Azotobacteraceae to employ an oxygen-labile nitrogenase under aerobic conditions has been attributed to a high metabolic rate, allowing oxygen reduction at the cell membrane, the effectiveness of such a mechanism has been questioned at oxygen concentrations above 70 μM (ambient concentration is 230 μM O2), as well as during additional nutrient limitations. A molecule found in the nitrogen-fixing nodules of leguminous plants, leghemoglobin, which can bind to dioxygen via a heme prosthetic group, plays a crucial role in buffering O2 at the active site of the nitrogenase, while concomitantly allowing for efficient respiration.
Nonspecific reactions
In addition to dinitrogen reduction, nitrogenases also reduce protons to dihydrogen, meaning nitrogenase is also a hydrogenase. A list of other reactions carried out by nitrogenases is shown below:
HC≡CH → H2C=CH2
N–=N+=O → N2 + H2O
N=N=N– → N2 + NH3
C≡N– → CH4, NH3, H3C–CH3, H2C=CH2 (CH3NH2)
N≡C–R → RCH3 + NH3
C≡N–R → CH4, H3C–CH3, H2C=CH2, C3H8, C3H6, RNH2
O=C=S → CO + H2S
O=C=O → CO + H2O
S=C=N– → H2S + HCN
O=C=N– → H2O + HCN, CO + NH3
Furthermore, dihydrogen functions as a competitive inhibitor, carbon monoxide functions as a non-competitive inhibitor, and carbon disulfide functions as a rapid-equilibrium inhibitor of nitrogenase.
Vanadium nitrogenases have also been shown to catalyze the conversion of CO into alkanes through a reaction comparable to Fischer-Tropsch synthesis.
Organisms that synthesize nitrogenase
There are two types of bacteria that synthesize nitrogenase and are required for nitrogen fixation. These are:
Free-living bacteria (non-symbiotic), examples include:
Cyanobacteria (blue-green algae)
Green sulfur bacteria
Azotobacter
Mutualistic bacteria (symbiotic), examples include:
Rhizobium, associated with legumes
Azospirillum, associated with grasses
Frankia, associated with actinorhizal plants
Similarity to other proteins
The three subunits of nitrogenase exhibit significant sequence similarity to three subunits of the light-independent version of protochlorophyllide reductase that performs the conversion of protochlorophyllide to chlorophyll. This protein is present in gymnosperms, algae, and photosynthetic bacteria but has been lost by angiosperms during evolution.
Separately, two of the nitrogenase subunits (NifD and NifH) have homologues in methanogens that do not fix nitrogen e.g. Methanocaldococcus jannaschii. Little is understood about the function of these "class IV" nif genes, though they occur in many methanogens. In M. jannaschii they are known to interact with each other and are constitutively expressed.
Measurement of nitrogenase activity
As with many assays for enzyme activity, it is possible to estimate nitrogenase activity by measuring the rate of conversion of the substrate (N2) to the product (NH3). Since NH3 is involved in other reactions in the cell, it is often desirable to label the substrate with 15N to provide accounting or "mass balance" of the added substrate. A more common assay, the acetylene reduction assay or ARA, estimates the activity of nitrogenase by taking advantage of the ability of the enzyme to reduce acetylene gas to ethylene gas. These gases are easily quantified using gas chromatography. Though first used in a laboratory setting to measure nitrogenase activity in extracts of Clostridium pasteurianum cells, ARA has been applied to a wide range of test systems, including field studies where other techniques are difficult to deploy. For example, ARA was used successfully to demonstrate that bacteria associated with rice roots undergo seasonal and diurnal rhythms in nitrogenase activity, which were apparently controlled by the plant.
Unfortunately, the conversion of data from nitrogenase assays to actual moles of N2 reduced (particularly in the case of ARA) is not always straightforward and may either underestimate or overestimate the true rate for a variety of reasons. For example, H2 competes with N2 but not acetylene for nitrogenase (leading to overestimates of nitrogenase activity by ARA). Bottle or chamber-based assays may produce negative impacts on microbial systems as a result of containment or disruption of the microenvironment through handling, leading to underestimation of nitrogenase activity. Despite these weaknesses, such assays are very useful in assessing relative rates or temporal patterns in nitrogenase activity.
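As a concrete illustration of the conversion step discussed above, the following minimal sketch turns a measured ethylene production rate into an estimated N2 reduction rate. The function name and the sample numbers are illustrative assumptions, and the default 3:1 ratio is the theoretical electron-based equivalence (2 e− per acetylene reduced versus 6 e− per N2 reduced); empirical calibrations often substitute a ratio closer to 4:1.

```python
def ethylene_to_n2(ethylene_nmol_per_h, conversion_ratio=3.0):
    """Estimate N2 fixation from an acetylene reduction assay (ARA).

    ethylene_nmol_per_h -- measured C2H4 production rate (nmol per hour).
    conversion_ratio    -- moles of C2H4 taken as equivalent to one mole of
                           N2 reduced; 3 is the theoretical value, while
                           calibration against 15N2 often gives about 4.
    Returns the estimated N2 reduction rate in nmol per hour.
    """
    return ethylene_nmol_per_h / conversion_ratio


# Illustrative numbers only: 120 nmol C2H4 per hour from a root sample.
print(ethylene_to_n2(120.0))       # theoretical 3:1 ratio -> 40.0 nmol N2 / h
print(ethylene_to_n2(120.0, 4.0))  # empirical 4:1 ratio   -> 30.0 nmol N2 / h
```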
See also
Nitrogen fixation
Abiological nitrogen fixation
References
Further reading
External links
EC 1.18.6
Iron–sulfur proteins
Nitrogen cycle
Molybdenum enzymes | Nitrogenase | Chemistry | 4,341 |
24,111,988 | https://en.wikipedia.org/wiki/Hydra%20Engine | HYDRA Engine is a brand name for a multi-GPU technology developed by Lucid Logix. Similar to Nvidia's SLI and ATI's CrossFire technologies, Hydra allows linking several video cards together to produce a single output and higher performance. Unlike SLI and CrossFire, however, Hydra allows video cards from different chip manufacturers to be linked together. Lucid claims it can do so with near-linear scaling of performance, i.e. two video cards yield twice the performance. The technology consists of both hardware on the motherboard and device drivers.
Currently there are two chips released under the Hydra Engine brand: Hydra 100 and Hydra 200. The basic concept behind the hardware is to intercept Microsoft DirectX or OpenGL calls sent from the CPU to the video cards and split them up so that the computational load is divided fairly among the installed GPUs.
Reception
SweClockers.com, when testing the MSI Big Bang Fusion which features Hydra 200, gave it very poor ratings citing the following: Poor drivers, poor game support, small if any performance gain over a single video card, graphical artifacts, unstable gameplay and a high price tag.
See also
Scalable Link Interface
ATI CrossFire
References
Graphics cards | Hydra Engine | Technology | 247 |
51,887,672 | https://en.wikipedia.org/wiki/Neural%20circuit%20reconstruction | Neural circuit reconstruction is the reconstruction of the detailed circuitry of the nervous system (or a portion of the nervous system) of an animal. It is sometimes called EM reconstruction since the main method used is the electron microscope (EM). This field is a close relative of reverse engineering of human-made devices, and is part of the field of connectomics, which in turn is a sub-field of neuroanatomy.
Model systems
Some of the model systems used for circuit reconstruction are the fruit fly, the mouse, and the nematode C. elegans.
Sample preparation
The sample must be fixed, stained, and embedded in plastic.
Imaging
The sample may be cut into thin slices with a microtome, then imaged using transmission electron microscopy. Alternatively, the sample may be imaged with a scanning electron microscope, then the surface abraded using a focused ion beam, or trimmed using an in-microscope microtome. Then the sample is re-imaged, and the process repeated until the desired volume is processed.
Image processing
The first step is to align the individual images into a coherent three dimensional volume.
The volume is then annotated using one of two main methods. The first manually identifies the skeletons of each neurite. The second uses machine-learning-based computer vision software to identify voxels belonging to the same neuron. Popular approaches are U-Net architectures that predict voxel-wise affinities paired with a watershed segmentation, and flood-filling networks. These approaches produce an over-segmentation which can be manually or automatically agglomerated to correctly represent a neuron. Even for automatically agglomerated segmentations, large manual proofreading efforts are employed for highest accuracy.
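A minimal sketch of the affinity-plus-watershed approach described above is shown below. It substitutes a smoothed random volume for the output of a trained network, and the two thresholds are arbitrary illustrative choices, so it demonstrates only the over-segmentation step, not a real reconstruction pipeline.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

# Stand-in for a network-predicted affinity map: values near 1 inside
# neurites, near 0 at membranes separating them.
rng = np.random.default_rng(0)
affinity = ndimage.gaussian_filter(rng.random((64, 64, 64)), sigma=4)
affinity = (affinity - affinity.min()) / (affinity.max() - affinity.min())

foreground = affinity > 0.5                        # voxels inside some neurite
seeds, n_seeds = ndimage.label(affinity > 0.8)     # confident "core" regions

# Watershed grows each seed outward through the foreground mask, producing
# an over-segmentation (supervoxels) that would later be agglomerated and
# proofread, as described in the text.
supervoxels = watershed(-affinity, markers=seeds, mask=foreground)
print(n_seeds, "seeds ->", int(supervoxels.max()), "supervoxels")
```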
Notable examples
The connectome of C. elegans was the seminal work in this field. This circuit was obtained with great effort using manually cut sections and purely manual annotation on photographic film. For many years this was the only circuit reconstruction available.
The central brain of the fruit fly Drosophila melanogaster was released in 2020. This data release introduced the first on-line tools to query the connectome.
The Human Cortex H01, released in 2021, is a 1.4 petabyte volume of a small sample of human brain tissue imaged at nanoscale-resolution by serial section electron microscopy, reconstructed and annotated by automated computational techniques, and analyzed for preliminary insights into the structure of human cortex.
In their 2022 study “Connectomic comparison of mouse and human cortex”, the researchers reconstructed nine connectomes across species, with datasets of mouse, macaque, and human.
Querying the connectome
Connectomes of higher organisms' brains require considerable data. For the fruit fly, for example, roughly 10 terabytes of image data are processed, by humans and computers, to generate several gigabytes of connectome data. Easy interaction with this data requires an interactive query interface, where researchers can look at the portion of data they are interested in without downloading the whole data set, and without specific training. A specific example of this technology is the NeuPrint interface to the connectomes generated at HHMI. This mimics the infrastructure of genetics, where interactive query tools such as BLAST are normally used to look at genes of interest, which for most research comprise only a small portion of the genome.
Limitations and future work
Understanding the detailed operation of the reconstructed networks also requires knowledge of gap junctions (hard to see with existing techniques), the identity of neurotransmitters and the locations and identities of receptors. In addition, neuromodulators can diffuse across large distances and still strongly affect function. Currently these features must be obtained through other techniques. Expansion microscopy may provide an alternative method.
References
Brain
Neural coding
Neuroimaging
Neuroinformatics | Neural circuit reconstruction | Biology | 803 |
44,839,293 | https://en.wikipedia.org/wiki/IC%20335 | IC 335 is an edge-on lenticular galaxy about 60 million light years (18 million parsecs) away, in the constellation Fornax. It is part of the Fornax Cluster.
IC 335 appears very similar to NGC 4452, a lenticular galaxy in Virgo. Both galaxies are edge-on, meaning that their characteristics, like spiral arms, are hidden. Lenticular galaxies like these are thought to be intermediate between spiral galaxies and elliptical galaxies, and like elliptical galaxies, they have very little gas for star formation. IC 335 may have once been a spiral galaxy that ran out of interstellar medium, or it may have collided with a galaxy in the past and thus used up all of its gas (see interacting galaxy).
References
External links
0335
13277
Fornax
Fornax Cluster
Lenticular galaxies | IC 335 | Astronomy | 171 |
12,967,535 | https://en.wikipedia.org/wiki/IEEE%20C2 | American National Standard C2 is the American National Standards Institute (ANSI) standard for the National Electrical Safety Code (NESC), published by the Institute of Electrical and Electronics Engineers (IEEE).
The NESC is a document containing voluntary (unless adopted by law) standards for safeguarding persons against electrical hazards during the installation, operation and maintenance of electric supply and communication lines. It includes general updates and critical revisions that directly impact the power utility industry. Adopted by law by the majority of states and Public Service Commissions across the US, the NESC is a performance code considered to be the authoritative source on good electrical engineering practice.
See also
IEC 60364
National Electrical Safety Code
Canadian Electrical Code
PSE law, Japan Electrical Safety Law.
Slash rating
Central Electricity Authority Regulations
References
IEEE Standards Association
C2
Electrical safety
Electrical wiring
Safety codes | IEEE C2 | Physics,Technology,Engineering | 166 |
18,824,671 | https://en.wikipedia.org/wiki/Alcohol%20withdrawal%20syndrome | Alcohol withdrawal syndrome (AWS) is a set of symptoms that can occur following a reduction in alcohol use after a period of excessive use. Symptoms typically include anxiety, shakiness, sweating, vomiting, fast heart rate, and a mild fever. More severe symptoms may include seizures and delirium tremens (DTs), which can be fatal in untreated patients. Symptoms start at around 6 hours after the last drink. Peak incidence of seizures occurs at 24 to 36 hours and peak incidence of delirium tremens is at 48 to 72 hours.
Alcohol withdrawal may occur in those who are alcohol dependent. This may occur following a planned or unplanned decrease in alcohol intake. The underlying mechanism involves a decreased responsiveness of GABA receptors in the brain. The withdrawal process is typically followed using the Clinical Institute Withdrawal Assessment for Alcohol scale (CIWA-Ar).
The typical treatment of alcohol withdrawal is with benzodiazepines such as chlordiazepoxide or diazepam. Often the amounts given are based on a person's symptoms. Thiamine is recommended routinely. Electrolyte problems and low blood sugar should also be treated. Early treatment improves outcomes.
In the Western world about 15% of people have problems with alcoholism at some point in time. Alcohol depresses the central nervous system, slowing cerebral messaging and altering the way signals are sent and received. Progressively larger amounts of alcohol are needed to achieve the same physical and emotional results. The drinker eventually must consume alcohol just to avoid the physical cravings and withdrawal symptoms. About half of people with alcoholism will develop withdrawal symptoms upon reducing their use, with four percent developing severe symptoms. Among those with severe symptoms up to 15% die. Symptoms of alcohol withdrawal have been described at least as early as 400 BC by Hippocrates. It is not believed to have become a widespread problem until the 1700s.
Signs and symptoms
Signs and symptoms of alcohol withdrawal occur primarily in the central nervous system. The severity of withdrawal can vary from mild symptoms such as insomnia, trembling, and anxiety to severe and life-threatening symptoms such as alcoholic hallucinosis, delirium tremens, and autonomic instability.
Withdrawal usually begins 6 to 24 hours after the last drink. Symptoms are worst at 24 to 72 hours, and improve by seven days. To be classified as alcohol withdrawal syndrome, patients must exhibit at least two of the following symptoms: increased hand tremor, insomnia, nausea or vomiting, transient hallucinations (auditory, visual or tactile), psychomotor agitation, anxiety, generalized tonic–clonic seizures, and autonomic instability.
The severity of symptoms is dictated by a number of factors, the most important of which are degree of alcohol intake, length of time the individual has been using alcohol, and previous history of alcohol withdrawal. Symptoms are also grouped together and classified:
Alcohol hallucinosis: patients have transient visual, auditory, or tactile hallucinations, but are otherwise clear.
Withdrawal seizures: seizures occur within 48 hours of alcohol cessation and occur either as a single generalized tonic-clonic seizure or as a brief episode of multiple seizures.
Delirium tremens: hyperadrenergic state, disorientation, tremors, diaphoresis, impaired attention/consciousness, and visual and auditory hallucinations.
Progression
Six to 12 hours after the ingestion of the last drink, withdrawal symptoms such as shaking, headache, sweating, anxiety, nausea or vomiting may occur. Twelve to 24 hours after cessation, the condition may progress to such major symptoms as confusion, hallucinations (with awareness of reality), while less severe symptoms may persist and develop including tremor, agitation, hyperactivity and insomnia.
At 12 to 48 hours following the last ethanol ingestion, the possibility of generalized tonic–clonic seizures should be anticipated, occurring in 3–5% of cases. Meanwhile, none of the earlier withdrawal symptoms will typically have abated. Seizures carry the risk of major complications and death for individuals with an alcohol use disorder.
Although the person's condition usually begins to improve after 48 hours, withdrawal symptoms sometimes continue to increase in severity and advance to the most severe stage of withdrawal, delirium tremens. This occurs in 5–20% of patients experiencing detoxification and one third of untreated cases, which is characterized by hallucinations that are indistinguishable from reality, severe confusion, seizures, high blood pressure, and fever that can persist anywhere from 4 to 12 days.
Protracted withdrawal
A protracted alcohol withdrawal syndrome occurs in many alcoholics when withdrawal symptoms continue beyond the acute withdrawal stage but usually at a subacute level of intensity and gradually decreasing with severity over time. This syndrome is sometimes referred to as the post-acute-withdrawal syndrome. Some withdrawal symptoms can linger for at least a year after discontinuation of alcohol. Symptoms can include a craving for alcohol, inability to feel pleasure from normally pleasurable things (known as anhedonia), clouding of sensorium, disorientation, nausea and vomiting or headache.
Insomnia is a common protracted withdrawal symptom that persists after the acute withdrawal phase of alcohol. Insomnia has also been found to influence relapse rate. Studies have found that magnesium or trazodone can help treat the persisting withdrawal symptom of insomnia in recovering alcoholics. Insomnia can be difficult to treat in these individuals because many of the traditional sleep aids (e.g., benzodiazepine receptor agonists and barbiturate receptor agonists) work via a GABAA receptor mechanism and are cross-tolerant with alcohol. However, trazodone is not cross-tolerant with alcohol. The acute phase of the alcohol withdrawal syndrome can occasionally be protracted. Protracted delirium tremens has been reported in the medical literature as a possible but unusual feature of alcohol withdrawal.
Pathophysiology
Chronic use of alcohol leads to changes in brain chemistry especially in the GABAergic system. Various adaptations occur such as changes in gene expression and down regulation of GABAA receptors. During acute alcohol withdrawal, changes also occur such as upregulation of alpha4 containing GABAA receptors and downregulation of alpha1 and alpha3 containing GABAA receptors. Neurochemical changes occurring during alcohol withdrawal can be minimized with drugs which are used for acute detoxification. With abstinence from alcohol and cross-tolerant drugs these changes in neurochemistry may gradually return towards normal. Adaptations to the NMDA system also occur as a result of repeated alcohol intoxication and are involved in the hyper-excitability of the central nervous system during the alcohol withdrawal syndrome. Homocysteine levels, which are elevated during chronic drinking, increase even further during the withdrawal state, and may result in excitotoxicity. Alterations in ECG (in particular an increase in QT interval) and EEG abnormalities (including abnormal quantified EEG) may occur during early withdrawal. Dysfunction of the hypothalamic–pituitary–adrenal axis and increased release of corticotropin-releasing hormone occur during both acute as well as protracted abstinence from alcohol and contribute to both acute and protracted withdrawal symptoms. Anhedonia/dysphoria symptoms, which can persist as part of a protracted withdrawal, may be due to dopamine underactivity.
Kindling
Kindling is a phenomenon where repeated alcohol detoxifications leads to an increased severity of the withdrawal syndrome. For example, binge drinkers may initially experience no withdrawal symptoms, but with each period of alcohol use followed by cessation, their withdrawal symptoms intensify in severity and may eventually result in full-blown delirium tremens with convulsive seizures. Alcoholics who experience seizures during detoxification are more likely to have had previous episodes of alcohol detoxification than patients who did not have seizures during withdrawal. In addition, people with previous withdrawal syndromes are more likely to have more medically complicated alcohol withdrawal symptoms.
Kindling can cause complications and may increase the risk of relapse, alcohol-related brain damage and cognitive deficits. Chronic alcohol misuse and kindling via multiple alcohol withdrawals may lead to permanent alterations in the GABAA receptors. The mechanism behind kindling is sensitization of some neuronal systems and desensitization of other neuronal systems which leads to increasingly gross neurochemical imbalances. This in turn leads to more profound withdrawal symptoms including anxiety, convulsions and neurotoxicity.
Binge drinking is associated with increased impulsivity, impairments in spatial working memory and impaired emotional learning. These adverse effects are believed to be due to the neurotoxic effects of repeated withdrawal from alcohol on aberrant neuronal plasticity and cortical damage. Repeated periods of acute intoxication followed by acute detoxification has profound effects on the brain and is associated with an increased risk of seizures as well as cognitive deficits. The effects on the brain are similar to those seen in alcoholics who have detoxified repeatedly but not as severe as in alcoholics who have no history of prior detox. Thus, the acute withdrawal syndrome appears to be the most important factor in causing damage or impairment to brain function. The brain regions most sensitive to harm from binge drinking are the amygdala and prefrontal cortex.
People in adolescence who experience repeated withdrawals from binge drinking show impairments of long-term nonverbal memory. Alcoholics who have had two or more alcohol withdrawals show more frontal lobe cognitive dysfunction than those who have experienced one or no prior withdrawals. Kindling of neurons is the proposed cause of withdrawal-related cognitive damage. Kindling from repeated withdrawals leads to accumulating neuroadaptive changes. Kindling may also be the reason for cognitive damage seen in binge drinkers.
Diagnosis
Many hospitals use the Clinical Institute Withdrawal Assessment for Alcohol (CIWA) protocol in order to assess the level of withdrawal present and therefore the amount of medication needed. When overuse of alcohol is suspected but drinking history is unclear, testing for elevated values of carbohydrate-deficient transferrin or gammaglutamyl transferase can help make the diagnosis of alcohol overuse and dependence more clear. The CIWA has also been shortened (now called the CIWA-Ar), while retaining its validity and reliability, to help assess patients more efficiently due to the life-threatening nature of alcohol withdrawal.
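To illustrate how a symptom-triggered protocol uses the scale, the sketch below totals the ten CIWA-Ar item scores (nine items scored 0–7 and orientation scored 0–4, for a maximum of 67) and compares the total with a trigger threshold. The ≥8 cut-off shown is only one commonly used value and, like the example scores, is an illustrative assumption rather than a figure taken from this article; actual scoring is a bedside clinical judgement.

```python
# Illustrative CIWA-Ar tally; item scores themselves come from clinical
# assessment and are not computed here.
CIWA_AR_ITEMS = {                        # item -> maximum score
    "nausea/vomiting": 7, "tremor": 7, "paroxysmal sweats": 7,
    "anxiety": 7, "agitation": 7, "tactile disturbances": 7,
    "auditory disturbances": 7, "visual disturbances": 7,
    "headache": 7, "orientation and clouding of sensorium": 4,
}                                        # maximum total = 67

def ciwa_ar_total(scores):
    """Sum the ten item scores, checking each against its allowed range."""
    total = 0
    for item, maximum in CIWA_AR_ITEMS.items():
        value = scores[item]
        if not 0 <= value <= maximum:
            raise ValueError(f"{item} score {value} outside 0..{maximum}")
        total += value
    return total

TRIGGER_THRESHOLD = 8   # illustrative cut-off for symptom-triggered dosing

example = {item: 1 for item in CIWA_AR_ITEMS}
example.update(tremor=4, anxiety=3)      # a patient with prominent tremor and anxiety
total = ciwa_ar_total(example)
print(total, "-> consider medication" if total >= TRIGGER_THRESHOLD else "-> observe")
```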
Treatment
Benzodiazepines are effective for the management of symptoms as well as the prevention of seizures. Certain vitamins are also an important part of the management of alcohol withdrawal syndrome. In those with severe symptoms inpatient care is often required. In those with lesser symptoms treatment at home may be possible with daily visits with a health care provider.
Cohort studies have demonstrated that the combination of anticonvulsants and benzodiazepines is more effective than other treatments in reducing alcohol withdrawal scores and shortening the duration of intensive care unit stays.
Benzodiazepines
Benzodiazepines are the most commonly used medication for the treatment of alcohol withdrawal and are generally safe and effective in suppressing symptoms of alcohol withdrawal. This class of medication is generally effective in symptom control, but needs to be used carefully. Although benzodiazepines have a long history of successfully treating and preventing withdrawal, there is no consensus on the ideal one to use. The most commonly used agents are long-acting benzodiazepines, such as chlordiazepoxide and diazepam. These are believed to be superior to other benzodiazepines for treatment of delirium and allow for longer periods between doses. However, benzodiazepines with intermediate half-lives like lorazepam may be safer in people with liver problems. Benzodiazepines showed a protective benefit against alcohol withdrawal symptoms, in particular seizures, compared to other common methods of treatment.
The primary debate between use of long-acting benzodiazepines and short-acting is that of ease of use. Longer-acting drugs, such as diazepam, can be administered less frequently. However, evidence does exist that "symptom-triggered regimens" such as those used when treating with lorazepam, are as safe and effective, but have decreased treatment duration and medication quantity used.
Although benzodiazepines are very effective at treating alcohol withdrawal, they should be carefully used. Benzodiazepines should only be used for brief periods in alcoholics who are not already dependent on them, as they share cross tolerance with alcohol. There is a risk of replacing an alcohol addiction with benzodiazepine dependence or adding another addiction. Furthermore, disrupted GABA benzodiazepine receptor function is part of alcohol dependence and chronic benzodiazepines may prevent full recovery from alcohol induced mental effects. The combination of benzodiazepines and alcohol can amplify the adverse psychological effects of each other causing enhanced depressive effects on mood and increase suicidal actions and are generally contraindicated except for alcohol withdrawal.
Vitamins
Alcoholics are often deficient in various nutrients, which can cause severe complications during alcohol withdrawal, such as the development of Wernicke syndrome. To help to prevent Wernicke syndrome, these individuals should be administered a multivitamin preparation with sufficient quantities of thiamine and folic acid. During alcohol withdrawal, the prophylactic administration of thiamine, folic acid, and pyridoxine intravenously is recommended before starting any carbohydrate-containing fluids or food. These vitamins are often combined into a banana bag for intravenous administration.
Anticonvulsants
Very limited evidence indicates that topiramate or pregabalin may be useful in the treatment of alcohol withdrawal syndrome. Limited evidence supports the use of gabapentin or carbamazepine for the treatment of mild or moderate alcohol withdrawal as the sole treatment or as combination therapy with other medications; however, gabapentin does not appear to be effective for treatment of severe alcohol withdrawal and is therefore not recommended for use in this setting. A 2010 Cochrane review similarly reported that the evidence to support the role of anticonvulsants over benzodiazepines in the treatment of alcohol withdrawal is not supported. Paraldehyde combined with chloral hydrate showed superiority over chlordiazepoxide with regard to life-threatening side effects and carbamazepine may have advantages for certain symptoms. Long term anticonvulsant medications are not usually recommended in those who have had prior seizures due to withdrawal.
Prevention of further drinking
There are three medications used to help prevent a return to drinking: naltrexone, acamprosate, and disulfiram. They are used after withdrawal has occurred.
Other
Clonidine may be used in combination with benzodiazepines to help some of the symptoms. No conclusions can be drawn concerning the efficacy or safety of baclofen for alcohol withdrawal syndrome due to the insufficiency and low quality of the evidence.
Antipsychotics, such as haloperidol, are sometimes used in addition to benzodiazepines to control agitation or psychosis. Antipsychotics may potentially worsen alcohol withdrawal as they lower the seizure threshold. Clozapine, olanzapine, or low-potency phenothiazines (such as chlorpromazine) are particularly risky; if used, extreme caution is required.
While intravenous ethanol could theoretically be used, evidence to support this use, at least in those who are very sick, is insufficient.
Hypertension is common, and some doctors also prescribe beta blockers during withdrawal.
Prognosis
Failure to manage the alcohol withdrawal syndrome appropriately can lead to permanent brain damage or death. It has been proposed that brain damage due to alcohol withdrawal may be prevented by the administration of NMDA antagonists, calcium antagonists, and glucocorticoid antagonists.
Substances impairing recovery
Continued use of benzodiazepines may impair recovery from psychomotor and cognitive impairments from alcohol. Cigarette smoking may slow down or interfere with recovery of brain pathways in recovering alcoholics.
References
External links
CIWA-Ar for Alcohol Withdrawal
Alcohol Detox Guidelines Example
Alcohol abuse
Health effects of alcohol
Drug rehabilitation
Neurological disorders
Symptoms and signs of mental disorders
Withdrawal syndromes
Addiction psychiatry
Psychopharmacology
Disorders causing seizures
Wikipedia medicine articles ready to translate
Wikipedia neurology articles ready to translate
Adverse effects of psychoactive drugs | Alcohol withdrawal syndrome | Chemistry | 3,462 |
9,522,674 | https://en.wikipedia.org/wiki/Mendelian%20error | A Mendelian error in the genetic analysis of a species describes an allele in an individual which could not have been received from either of its biological parents by Mendelian inheritance. Inheritance is defined by a set of related individuals who have the same or similar phenotypes for a locus of a particular gene. A Mendelian error means that the very structure of the inheritance as defined by analysis of the parental genes is incorrect: one parent of one individual is not actually the parent indicated; therefore the assumption is that the parental information is incorrect.
Possible explanations for Mendelian errors are genotyping errors, erroneous assignment of the individuals as relatives, or de novo mutations. Mendelian error is established by demonstrating the existence of a trait which is inconsistent with every possible combination of genotype compatible with the individual. This method of determination requires pedigree checking, however, and establishing a contradiction between phenotype and pedigree is an NP-complete problem. Genetic inconsistencies which do not correspond to this definition are Non-Mendelian Errors.
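For a single biallelic locus in a parent–child trio the consistency check itself is straightforward, as the following sketch shows; the function name and the example genotypes are illustrative only, and real pipelines must additionally handle missing genotypes, sex chromosomes, and genotyping error rates.

```python
from itertools import product

def is_mendelian_error(child, mother, father):
    """Return True if the child's genotype at one autosomal locus could not
    have arisen by receiving one allele from each parent (a Mendelian error).

    Genotypes are unordered allele pairs, e.g. ("A", "a").
    """
    for m_allele, f_allele in product(mother, father):
        if sorted((m_allele, f_allele)) == sorted(child):
            return False        # at least one valid transmission exists
    return True                 # no combination works -> Mendelian error

# Illustrative trios at a single locus:
print(is_mendelian_error(("A", "a"), ("A", "A"), ("a", "a")))  # False: consistent
print(is_mendelian_error(("a", "a"), ("A", "A"), ("A", "a")))  # True: an "AA" mother cannot transmit "a"
```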
Statistical genetic analysis is used to detect these errors and to assess whether an individual may be affected by a disease linked to a single gene. Examples of such single-gene diseases in humans are Huntington's disease and Marfan syndrome.
See also
Gregor Mendel
SNP genotyping
Footnotes
Mendelian error detection in complex pedigree using weighted constraint satisfaction techniques
Genetics
error
NP-complete problems | Mendelian error | Mathematics,Biology | 305 |
12,324,407 | https://en.wikipedia.org/wiki/Daiwa%20Adrian%20Prize | The Daiwa Adrian Prize is an award given by The Daiwa Anglo-Japanese Foundation, a UK charity, to scientists who have made significant achievements in science through Anglo-Japanese collaborative research. Prizes are awarded every third year and applications are handled by the foundation with an assessment conducted by a panel of Fellows of The Royal Society.
The prize was initiated in 1992 by Lord Adrian (2nd Baron Adrian), a former Trustee of the Foundation. The physiologist Richard Adrian was Master of Pembroke College, Vice-Chancellor of the University of Cambridge and the only son of the Nobel laureate Edgar Adrian (1st Baron Adrian).
Daiwa Adrian Prizes 2013
The ceremony was held at the Royal Society on 26 November 2013 and was attended by Trustees of the Foundation including the Chairman, Sir Peter Williams, who is former Vice President of the Royal Society. The Prizes were presented by Lord Adrian's wife Lady Adrian.
Chemonostics: Using chemical receptors in the development of simple diagnostic devices for age-related diseases.
Institutions: University of Bath, University of Birmingham, Kyushu University, Tokyo Metropolitan University and University of Kitakyushu.
UK Team Leader: Professor Tony James, University of Bath
Japan Team Leader: Professor Seiji Shinkai, Kyushu University
Circadian regulation of photosynthesis: discovering mechanisms that connect the circadian clock with photosynthesis in chloroplasts in order to understand how circadian and environmental signals optimise photosynthesis and plant productivity.
Institutions: University of Bristol, University of Edinburgh, Chiba University and Tokyo Institute of Technology.
UK Team Leader: Dr Antony Dodd, University of Bristol
Japan Team Leader: Dr Mitsumasa Hanaoka, Chiba University
Exploration of active functionality in abundant oxide materials utilising unique nanostructure: discovering novel properties in traditional materials and addressing the limited availability of technologically important elements through curiosity-driven research.
Institutions: University College London and Tokyo Institute of Technology
UK Team Leader: Professor Alexander Shluger, University College London
Japan Team Leader: Professor Hideo Hosono, Tokyo Institute of Technology
Extension of terrestrial radiocarbon age calibration curve using annually laminated sediment core from Lake Suigetsu, Japan – establishing a reliable calibration for radiocarbon dates thus considerably improving the accuracy of the age determination.
Institutions: University of Newcastle, University of Oxford, NERC Radiocarbon Facility, Aberystwyth University, Nagoya University, Chiba University of Commerce, Osaka City University and University of Tokyo
UK Team Leader: Professor Takeshi Nakagawa, University of Newcastle
Japan Team Leader: Professor Hiroyuki Kitagawa, Nagoya University
Daiwa Adrian Prizes 2010
The ceremony was held at the Royal Society on 2 December 2010 and was attended by Trustees of the Foundation including the then Chairman, Sir John Whitehead, and Sir Peter Williams. The Prizes were presented by Lord Adrian's wife Lady Adrian.
Nonlinear dynamics of cortical neurons and gamma oscillations - from cell to network models. Advancement of knowledge of the basic operation of brain networks, contributing to understanding of disorders such as schizophrenia, Alzheimer’s disease and epilepsy.
University of Cambridge/Harvard University/Karolinska Institutet: Hugh Robinson, Nathan Gouwens, Hugo Zeberg, Rita Kalra
University of Tokyo/Osaka University: Kazuyuki Aihara, Kenji Morita, Kunichika Tsumoto, Takashi Tateno, Kantaro Fujiwara
The evolutionary and spatial dynamics of human viral pathogens. Investigation of the spread of human viruses, particularly HIV and Hepatitis C, why outbreaks begin at certain times and in certain locations, and why virus strains follow particular routes when they disseminate internationally.
University of Oxford: Oliver Pybus, Samir Bhatt, Peter Markov, Joe Parker, Aris Katzourakis
National Institute of Infectious Diseases: Yutaka Takebe, Yue Li, Shigeru Kusagawa, Kok Keng Tee, Takayo Tsuchiura
Photonic quantum information science and technology. Development of new technologies based on harnessing quantum mechanics – the fundamental physics theory governing behaviour at the microscopic scale.
University of Bristol: Jeremy O'Brien
Hokkaido University/Osaka University: Shigeki Takeuchi
Non-linear cosmological perturbations. Providing theoretical predictions from the very early universe physics for the statistical properties of primordial curvature perturbations.
University of Portsmouth: David Wands, Marco Bruni, Robert Crittenden, Kazuya Koyama, Roy Maartens, Cyril Pitrou
Kyoto University: Misao Sasaki, Tetsuya Shiromizu, Jiro Soda, Takahiro Tanaka
Use of genomics to understand plant-pathogen interactions. Understanding plant pathogen interactions to enhance knowledge on plant disease control.
The Sainsbury Laboratory: Sophien Kamoun, Joe Win, Liliana M. Cano, Angela Chaparro-Garcia, Tolga O. Bozkurt, Sebastian Schornack
Iwate Biotechnology Research Center: Ryohei Terauchi, Kentaro Yoshida, Hiromasa Saitoh, Koki Fujisaki, Ayako Miya, Muluneh Tamiru
Phase space analysis of partial differential equations. Analysis of a range of properties exhibited by solutions to evolution partial differential equations which are of major importance in many different sciences.
Imperial College London: Michael Ruzhansky, Jens Wirth, Claudia Garetto, Ilia Kamotski
Nagoya University/Tokai University/Yamaguchi University/Osaka University: Mitsuru Sugimoto, Tokio Matsuyama, Fumihiko Hirosawa
External links
Daiwa Adrian Prize
BBSRC researchers rewarded for cross-cultural collaborations with Japan
Times Higher Education: Daiwa Adrian prizes
UK Plant Scientist Receives International Award
Science and technology awards
British science and technology awards
Japanese awards
Science and technology in Japan
Awards established in 1992
Japan–United Kingdom relations
1992 establishments in Japan
1992 establishments in the United Kingdom
Daiwa Securities Group | Daiwa Adrian Prize | Technology | 1,205 |
42,381,647 | https://en.wikipedia.org/wiki/Serial%20concatenated%20convolutional%20codes | Serial concatenated convolutional codes (SCCC) are a class of forward error correction (FEC) codes highly suitable for turbo (iterative) decoding. Data to be transmitted over a noisy channel may first be encoded using an SCCC. Upon reception, the coding may be used to remove any errors introduced during transmission. The decoding is performed by repeated decoding and [de]interleaving of the received symbols.
SCCCs typically include an inner code, an outer code, and a linking interleaver. A distinguishing feature of SCCCs is the use of a recursive convolutional code as the inner code. The recursive inner code provides the 'interleaver gain' for the SCCC, which is the source of the excellent performance of these codes.
The analysis of SCCCs was spawned in part by the earlier discovery of turbo codes in 1993. This analysis of SCCCs took place in the 1990s in a series of publications from NASA's Jet Propulsion Laboratory (JPL). The research offered SCCCs as a form of turbo-like serial concatenated codes that 1) were iteratively ('turbo') decodable with reasonable complexity, and 2) gave error correction performance comparable with the turbo codes.
Prior forms of serial concatenated codes typically did not use recursive inner codes. Additionally, the constituent codes used in prior forms of serial concatenated codes were generally too complex for reasonable soft-in-soft-out (SISO) decoding. SISO decoding is considered essential for turbo decoding.
Serial concatenated convolutional codes have not found widespread commercial use, although they were proposed for communications standards such as DVB-S2. Nonetheless, the analysis of SCCCs has provided insight into the performance and bounds of all types of iterative decodable codes including turbo codes and LDPC codes.
US patent 6,023,783 covers some forms of SCCCs. The patent expired on May 15, 2016.
History
Serial concatenated convolutional codes were first analyzed with a view toward turbo decoding in "Serial Concatenation of Interleaved Codes: Performance Analysis, Design, and Iterative Decoding" by S. Benedetto, D. Divsalar, G. Montorsi and F. Pollara. This analysis yielded a set of observations for designing high performance, turbo decodable serial concatenated codes that resembled turbo codes. One of these observations was that "the use of a recursive convolutional inner encoder always yields an interleaver gain." This is in contrast to the use of block codes or non-recursive convolutional codes, which do not provide comparable interleaver gain.
Additional analysis of SCCCs was done in "Coding Theorems for 'Turbo-Like' Codes" by D. Divsalar, Hui Jin, and Robert J. McEliece. This paper analyzed repeat-accumulate (RA) codes which are the serial concatenation of an inner two-state recursive convolutional code (also called an 'accumulator' or parity-check code) with a simple repeat code as the outer code, with both codes linked by an interleaver. The performance of the RA codes is quite good considering the simplicity of the constituent codes themselves.
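Because the repeat-accumulate construction is the simplest serial concatenation with a recursive inner code, a brief sketch of an RA encoder is given below. The block length, repetition factor, and fixed-seed random interleaver are arbitrary illustrative choices, not parameters from the papers cited above.

```python
import numpy as np

def ra_encode(bits, repeat=3, seed=0):
    """Repeat-accumulate encoding: an outer repetition code, an interleaver,
    and a rate-1 recursive inner code (an accumulator, i.e. a running mod-2
    sum), illustrating the serial-concatenation structure."""
    bits = np.asarray(bits, dtype=np.uint8)
    repeated = np.repeat(bits, repeat)                    # outer code
    perm = np.random.default_rng(seed).permutation(repeated.size)
    interleaved = repeated[perm]                          # interleaver
    return (np.cumsum(interleaved) % 2).astype(np.uint8)  # recursive inner code

message = [1, 0, 1, 1, 0, 0, 1, 0]
codeword = ra_encode(message)
print(len(message), "info bits ->", len(codeword), "coded bits (rate 1/3)")
print(codeword)
```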
SCCC codes were further analyzed in "Serial Turbo Trellis Coded Modulation with Rate-1 Inner Code". In this paper SCCCs were designed for use with higher order modulation schemes. Excellent performing codes with inner and outer constituent convolutional codes of only two or four states were presented.
Example Encoder
Fig 1 is an example of an SCCC.
The example encoder is composed of a 16-state outer convolutional code and a 2-state inner convolutional code linked by an interleaver. The natural code rate of the configuration shown is 1/4; however, the inner and/or outer codes may be punctured to achieve higher code rates as needed. For example, an overall code rate of 1/2 may be achieved by puncturing the outer convolutional code to rate 3/4 and the inner convolutional code to rate 2/3.
A recursive inner convolutional code is preferable for turbo decoding of the SCCC. The inner code may be punctured to a rate as high as 1/1 with reasonable performance.
Example Decoder
An example of an iterative SCCC decoder.
The SCCC decoder includes two soft-in-soft-out (SISO) decoders and an interleaver. While shown as separate units, the two SISO decoders may share all or part of their circuitry. The SISO decoding may be done in serial or parallel fashion, or some combination thereof. The SISO decoding is typically done using maximum a posteriori (MAP) decoders using the BCJR algorithm.
Performance
SCCCs provide performance comparable to other iteratively decodable codes including turbo codes and LDPC codes. They are noted for having slightly worse performance at lower SNR environments (i.e. worse waterfall region), but slightly better performance at higher SNR environments (i.e. lower error floor).
See also
Convolutional code
Viterbi algorithm
Soft-decision decoding
Interleaver
BCJR algorithm
Low-density parity-check code
Repeat-accumulate code
Turbo equalizer
References
External links
Data
Error detection and correction
Encodings | Serial concatenated convolutional codes | Technology,Engineering | 1,167 |
40,877,639 | https://en.wikipedia.org/wiki/Saya%20%28folklore%29 | Saya or Sayaqan is a summer feast and festival in Turkic Tengriism and Altai folklore. It is arranged for the god called Saya Khan (Turkish: Saya Han or Zaya Han), and is thus a ceremony of blessing, fertility, and abundance.
Description
Saya (Zaya) was a mythological male character associated with summertime in early Turkic mythology, particularly in the Altai, Anatolia, and the Caucasus. He was associated with rituals conducted in rural areas during summertime. Turkic peasants celebrated the Summer Solstice on June 23 by going out to the fields.
In Anatolian folklore, a familiar spirit called "Saya Han" lives in the mountains and protects sheep flocks.
Saya Game / Play
The Saya play and its songs have an important role in the emotional and moral development of children in rural areas: they learn about solidarity and co-operation, and an old tradition is continued through the game. Children wander from house to house collecting food, for instance.
Celebration
The Saya festival (the name can literally be translated as "abundance") is related to the cult of a solar deity and to a fertility cult.
Ancient Yakuts celebrated the New Year at the Yhyakh (23 June) festival. Its traditions include women and children decorating trees and tethering posts with "salama" (nine bunches of horse hair hung on horse-hair ropes). The oldest man, wearing white, opens the holiday. He is accompanied by seven virgin girls and nine virgin boys and starts the ritual by sprinkling kymys on the ground, feeding the fire. He prays to the Ai-ii spirits for the well-being of the people who depend on them and asks the spirits to bless all the people gathered.
Sources
SAYA GELENEĞİ, Hazırlayan ve Yazan: Doğan SIRIKLI / Sivas Halil Rıfat Paşa Lisesi / Tarih Öğretmeni - "SAYA GELENEĞİ"
See also
Paktaqan
Nardoqan
Paynaqan
Kosaqan
References
External links
Saya gezimi geleneği
Küreselleşme Karşısında Geleneksel Kültürümüzün Korunması, Kutlu Özen
Bünyan Yöresinde Saya Geleneği
“SAYALAR” AND “SAYAÇILAR” IN IRAN AZERBAIJAN (URMIYE), Talip Doğan
Çoban Ve Konuk Ağırlaması
Turkish folklore
Turkic mythology
June observances
Christmas-linked holidays
Asian shamanism
Religious festivals in Turkey
Shamanistic festivals
Summer solstice | Saya (folklore) | Astronomy | 533 |
56,997,441 | https://en.wikipedia.org/wiki/H3K9me3 | H3K9me3 is an epigenetic modification to the DNA packaging protein Histone H3. It is a mark that indicates the tri-methylation at the 9th lysine residue of the histone H3 protein and is often associated with heterochromatin.
Nomenclature
H3K9me3 indicates trimethylation of lysine 9 on the histone H3 protein subunit: H3 denotes the H3 family of histones, K9 the lysine residue at position 9 counting from the N-terminal tail, and me3 the addition of three methyl groups.
Lysine methylation
The progressive methylation of a lysine residue proceeds from mono- through di- to tri-methylation; the tri-methylated form is the modification present in H3K9me3.
Understanding histone modifications
The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as Histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal end of these histones contribute to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of the post-translational modifications, such as the one seen in H3K9me3 .
Epigenetic implications
The post-translational modification of histone tails by either histone-modifying complexes or chromatin remodelling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states which define genomic regions by grouping the interactions of different proteins and/or histone modifications together.
Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis placed on the relevance of histone modifications. A look into the data obtained led to the definition of chromatin states based on histone modifications. Certain modifications were mapped, and their enrichment was seen to localize in particular genomic regions. Five core histone modifications were found, each linked to different cell functions.
H3K4me3-promoters
H3K4me1- primed enhancers
H3K36me3-gene bodies
H3K27me3-polycomb repression
H3K9me3-heterochromatin
The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell specific gene regulation.
Significance
Heterochromatin marked with H3K9me3 has a pivotal role in embryonic stem cells at the onset of organogenesis during lineage commitment, and also a role in lineage fidelity maintenance.
Methods
The histone mark can be detected in a variety of ways:
1. Chromatin immunoprecipitation sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. The method is well optimized and is used in vivo to reveal DNA–protein binding occurring in cells. ChIP-seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region (a minimal enrichment calculation is sketched after this list).
2. Micrococcal nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. The micrococcal nuclease enzyme is used to identify nucleosome positioning; well-positioned nucleosomes show enrichment of the protected sequences.
3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses the hyperactive Tn5 transposase to highlight nucleosome localisation.
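The fold-enrichment sketch referenced in item 1 above is shown here; it is a generic depth-normalised calculation for a single genomic bin, and the read counts, library sizes, and pseudocount are illustrative assumptions rather than values from any study discussed in this article.

```python
def fold_enrichment(chip_reads, input_reads, chip_total, input_total,
                    pseudocount=1.0):
    """Depth-normalised fold enrichment of ChIP signal over input for one bin.

    chip_reads, input_reads -- reads falling in the bin for each library.
    chip_total, input_total -- total mapped reads in each library.
    A pseudocount avoids division by zero in empty bins.
    """
    chip_density = (chip_reads + pseudocount) / chip_total
    input_density = (input_reads + pseudocount) / input_total
    return chip_density / input_density

# Illustrative numbers for one 1-kb bin in an H3K9me3 ChIP-seq experiment:
print(round(fold_enrichment(180, 40, 20_000_000, 25_000_000), 2))  # ~5.52
```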
See also
Histone methylation
Histone methyltransferase
Methyllysine
References
Epigenetics
Post-translational modification | H3K9me3 | Chemistry | 1,011 |
642,982 | https://en.wikipedia.org/wiki/Trade%20winds | The trade winds or easterlies are permanent east-to-west prevailing winds that flow in the Earth's equatorial region. The trade winds blow mainly from the northeast in the Northern Hemisphere and from the southeast in the Southern Hemisphere, strengthening during the winter and when the Arctic oscillation is in its warm phase. Trade winds have been used by captains of sailing ships to cross the world's oceans for centuries. They enabled European colonization of the Americas, and trade routes to become established across the Atlantic Ocean and the Pacific Ocean.
In meteorology, they act as the steering flow for tropical storms that form over the Atlantic, Pacific, and southern Indian oceans and cause rainfall in North America, Southeast Asia, and Madagascar and East Africa. Shallow cumulus clouds are seen within trade wind regimes and are capped from becoming taller by a trade wind inversion, which is caused by descending air aloft from within the subtropical ridge. The weaker the trade winds become, the more rainfall can be expected in the neighboring landmasses.
The trade winds also transport nitrate- and phosphate-rich Saharan dust to all Latin America, the Caribbean Sea, and to parts of southeastern and southwestern North America. Sahara dust is on occasion present in sunsets across Florida. When dust from the Sahara travels over land, rainfall is suppressed and the sky changes from a blue to a white appearance which leads to an increase in red sunsets. Its presence negatively impacts air quality by adding to the count of airborne particulates.
History
The term originally derives from the early fourteenth century sense of trade (in late Middle English) still often meaning "path" or "track". The Portuguese recognized the importance of the trade winds (then the volta do mar, meaning in Portuguese "turn of the sea" but also "return from the sea") in navigation in both the north and south Atlantic Ocean as early as the 15th century. From West Africa, the Portuguese had to sail away from continental Africa, that is, to west and northwest. They could then turn northeast, to the area around the Azores islands, and finally east to mainland Europe. They also learned that to reach South Africa, they needed to go far out in the ocean, head for Brazil, and around 30°S go east again. (This is because following the African coast southbound means sailing upwind in the Southern hemisphere.) In the Pacific Ocean, the full wind circulation, which included both the trade wind easterlies and higher-latitude westerlies, was unknown to Europeans until Andres de Urdaneta's voyage in 1565.
The captain of a sailing ship seeks a course along which the winds can be expected to blow in the direction of travel. During the Age of Sail, the pattern of prevailing winds made various points of the globe easy or difficult to access, and therefore had a direct effect on European empire-building and thus on modern political geography. For example, Manila galleons could not sail into the wind at all.
By the 18th century, the importance of the trade winds to England's merchant fleet for crossing the Atlantic Ocean had led both the general public and etymologists to identify the name with a later meaning of "trade": "(foreign) commerce". Between 1847 and 1849, Matthew Fontaine Maury collected enough information to create wind and current charts for the world's oceans.
Cause
As part of the Hadley cell, surface air flows toward the equator while the flow aloft is towards the poles. A low-pressure area of calm, light variable winds near the equator is known as the doldrums, near-equatorial trough, intertropical front, or the Intertropical Convergence Zone. When located within a monsoon region, this zone of low pressure and wind convergence is also known as the monsoon trough. Around 30° in both hemispheres, air begins to descend toward the surface in subtropical high-pressure belts known as subtropical ridges. The subsident (sinking) air is relatively dry because as it descends, the temperature increases, but the moisture content remains constant, which lowers the relative humidity of the air mass. This warm, dry air is known as a superior air mass and normally resides above a maritime tropical (warm and moist) air mass. An increase of temperature with height is known as a temperature inversion. When it occurs within a trade wind regime, it is known as a trade wind inversion.
The surface air that flows from these subtropical high-pressure belts toward the Equator is deflected toward the west in both hemispheres by the Coriolis effect. These winds blow predominantly from the northeast in the Northern Hemisphere and from the southeast in the Southern Hemisphere. Because winds are named for the direction from which the wind is blowing, these winds are called the northeasterly trade winds in the Northern Hemisphere and the southeasterly trade winds in the Southern Hemisphere. The trade winds of both hemispheres meet at the Doldrums.
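The strength of this deflection can be made explicit. The relations below are the standard Coriolis expressions from rotating-frame dynamics rather than formulas quoted in this article; Ω is Earth's angular velocity, v the air velocity, and φ the latitude.

```latex
% Coriolis acceleration on air moving with velocity v in Earth's rotating frame:
\vec{a}_{C} = -2\,\vec{\Omega} \times \vec{v}
% Its horizontal effect is governed by the Coriolis parameter
f = 2\,\Omega \sin\varphi
% f vanishes at the equator and changes sign between hemispheres, so
% equatorward-moving surface air is turned toward the west in both hemispheres,
% producing the northeasterly and southeasterly trade winds.
```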
As they blow across tropical regions, air masses heat up over lower latitudes due to more direct sunlight. Those that develop over land (continental) are drier and hotter than those that develop over oceans (maritime), and travel northward on the western periphery of the subtropical ridge. Maritime tropical air masses are sometimes referred to as trade air masses. All tropical oceans except the northern Indian Ocean have extensive areas of trade winds.
Weather and biodiversity effects
Clouds which form above regions within trade wind regimes are typically composed of shallow cumulus, which are capped from growing taller by the trade wind inversion. Trade winds originate more from the direction of the poles (northeast in the Northern Hemisphere, southeast in the Southern Hemisphere) during the cold season, and are stronger in the winter than the summer. As an example, the windy season in the Guianas, which lie at low latitudes in South America, occurs between January and April. When the phase of the Arctic oscillation (AO) is warm, trade winds are stronger within the tropics. The cold phase of the AO leads to weaker trade winds. When the trade winds are weaker, more extensive areas of rain fall upon landmasses within the tropics, such as Central America.
During mid-summer in the Northern Hemisphere (July), the westward-moving trade winds south of the northward-moving subtropical ridge expand northwestward from the Caribbean Sea into southeastern North America (Florida and Gulf Coast). When dust from the Sahara moving around the southern periphery of the ridge travels over land, rainfall is suppressed and the sky changes from a blue to a white appearance which leads to an increase in red sunsets. Its presence negatively impacts air quality by adding to the count of airborne particulates. Although the Southeast US has some of the cleanest air in North America, much of the African dust that reaches the United States affects Florida. Since 1970, dust outbreaks have worsened due to periods of drought in Africa. There is a large variability in the dust transport to the Caribbean and Florida from year to year. Dust events have been linked to a decline in the health of coral reefs across the Caribbean and Florida, primarily since the 1970s.
Every year, millions of tons of nutrient-rich Saharan dust cross the Atlantic Ocean, bringing vital phosphorus and other fertilizers to depleted Amazon soils.
See also
Intertropical Convergence Zone
Volta do mar
Westerly wind burst
Winds in the Age of Sail
References
Climate patterns
Atmospheric dynamics
Wind
Age of Sail | Trade winds | Chemistry | 1,491 |
244,518 | https://en.wikipedia.org/wiki/Hodge%20conjecture | In mathematics, the Hodge conjecture is a major unsolved problem in algebraic geometry and complex geometry that relates the algebraic topology of a non-singular complex algebraic variety to its subvarieties.
In simple terms, the Hodge conjecture asserts that basic topological information, such as the number of holes in certain geometric spaces (complex algebraic varieties), can be understood by studying the nice shapes sitting inside those spaces, which look like the zero sets of polynomial equations. The latter objects can be studied using algebra and the calculus of analytic functions, and this allows one to indirectly understand the broad shape and structure of often higher-dimensional spaces which cannot otherwise be easily visualized.
More specifically, the conjecture states that certain de Rham cohomology classes are algebraic; that is, they are sums of Poincaré duals of the homology classes of subvarieties. It was formulated by the Scottish mathematician William Vallance Douglas Hodge as a result of work between 1930 and 1940 to enrich the description of de Rham cohomology to include the extra structure that is present in the case of complex algebraic varieties. It received little attention before Hodge presented it in an address during the 1950 International Congress of Mathematicians, held in Cambridge, Massachusetts. The Hodge conjecture is one of the Clay Mathematics Institute's Millennium Prize Problems, with a prize of US$1,000,000 for whoever can prove or disprove it.
Motivation
Let X be a compact complex manifold of complex dimension n. Then X is an orientable smooth manifold of real dimension 2n, so its cohomology groups lie in degrees zero through 2n. Assume X is a Kähler manifold, so that there is a decomposition on its cohomology with complex coefficients
H^k(X, \mathbb{C}) = \bigoplus_{p+q=k} H^{p,q}(X),
where H^{p,q}(X) is the subgroup of cohomology classes which are represented by harmonic forms of type (p, q). That is, these are the cohomology classes represented by differential forms which, in some choice of local coordinates z_1, \ldots, z_n, can be written as a harmonic function times
dz_{i_1} \wedge \cdots \wedge dz_{i_p} \wedge d\bar{z}_{j_1} \wedge \cdots \wedge d\bar{z}_{j_q}.
Since X is a compact oriented manifold, X has a fundamental class, and so X can be integrated over.
Let Z be a complex submanifold of X of dimension k, and let i : Z \to X be the inclusion map. Choose a differential form \alpha of type (p, q). We can integrate \alpha over Z using the pullback function i^*:
\int_Z i^*\alpha.
To evaluate this integral, choose a point of Z and call it 0. Around this point we can choose local coordinates z_1, \ldots, z_n on X such that Z is locally given by z_{k+1} = \cdots = z_n = 0 (rank-nullity theorem). If p > k, then \alpha must contain some dz_i with i > k, and such a dz_i pulls back to zero on Z. The same is true for the d\bar{z}_j if q > k. Consequently, this integral is zero if (p, q) \neq (k, k).
The Hodge conjecture then (loosely) asks:
Which cohomology classes in H^{k,k}(X) come from complex subvarieties Z?
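In LaTeX notation, the computation sketched above can be summarized as follows (an editorial summary added here for convenience, using the same symbols as above):

\[
\int_Z i^*\alpha = 0 \quad \text{for } \alpha \in H^{p,q}(X) \text{ with } p+q = 2k \text{ and } (p,q) \neq (k,k),
\]

so integration over a k-dimensional complex submanifold Z only detects the (k, k)-component of a degree-2k cohomology class, and the Poincaré dual of Z is itself a class of type (n-k, n-k).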
Statement of the Hodge conjecture
Let
\operatorname{Hdg}^k(X) = H^{2k}(X, \mathbb{Q}) \cap H^{k,k}(X).
We call this the group of Hodge classes of degree 2k on X.
The modern statement of the Hodge conjecture is
Hodge conjecture. Let X be a non-singular complex projective manifold. Then every Hodge class on X is a linear combination with rational coefficients of the cohomology classes of complex subvarieties of X.
A projective complex manifold is a complex manifold which can be embedded in complex projective space. Because projective space carries a Kähler metric, the Fubini–Study metric, such a manifold is always a Kähler manifold. By Chow's theorem, a projective complex manifold is also a smooth projective algebraic variety, that is, it is the zero set of a collection of homogeneous polynomials.
Reformulation in terms of algebraic cycles
Another way of phrasing the Hodge conjecture involves the idea of an algebraic cycle. An algebraic cycle on X is a formal combination of subvarieties of X; that is, it is something of the form
Z = c_1 Z_1 + c_2 Z_2 + \cdots + c_m Z_m.
The coefficients are usually taken to be integral or rational. We define the cohomology class of an algebraic cycle to be the sum of the cohomology classes of its components. This is an example of the cycle class map of de Rham cohomology, see Weil cohomology. For example, the cohomology class of the above cycle would be
[Z] = c_1 [Z_1] + c_2 [Z_2] + \cdots + c_m [Z_m].
Such a cohomology class is called algebraic. With this notation, the Hodge conjecture becomes
Let X be a projective complex manifold. Then every Hodge class on X is algebraic.
The assumption in the Hodge conjecture that X be algebraic (projective complex manifold) cannot be weakened. In 1977, Steven Zucker showed that it is possible to construct a counterexample to the Hodge conjecture using complex tori with analytic rational cohomology of type (p, p) which are not projective algebraic (see appendix B of Zucker 1977).
Known cases of the Hodge conjecture
See Theorem 1 of Bouali.
Low dimension and codimension
The first result on the Hodge conjecture is due to Lefschetz (1924). In fact, it predates the conjecture and provided some of Hodge's motivation.
Theorem (Lefschetz theorem on (1,1)-classes). Any element of H^2(X, \mathbb{Z}) \cap H^{1,1}(X) is the cohomology class of a divisor on X. In particular, the Hodge conjecture is true for H^2.
A very quick proof can be given using sheaf cohomology and the exponential exact sequence. (The cohomology class of a divisor turns out to equal to its first Chern class.) Lefschetz's original proof proceeded by normal functions, which were introduced by Henri Poincaré. However, the Griffiths transversality theorem shows that this approach cannot prove the Hodge conjecture for higher codimensional subvarieties.
By the Hard Lefschetz theorem, one can prove:
Theorem. If for some p < n the Hodge conjecture holds for Hodge classes of degree 2p, then the Hodge conjecture holds for Hodge classes of degree 2n - 2p.
Combining the above two theorems implies that the Hodge conjecture is true for Hodge classes of degree 2n - 2. This proves the Hodge conjecture when X has dimension at most three.
The Lefschetz theorem on (1,1)-classes also implies that if all Hodge classes are generated by the Hodge classes of divisors, then the Hodge conjecture is true:
Corollary. If the algebra \operatorname{Hdg}^*(X) = \bigoplus_k \operatorname{Hdg}^k(X) is generated by \operatorname{Hdg}^1(X), then the Hodge conjecture holds for X.
Hypersurfaces
By the strong and weak Lefschetz theorem, the only non-trivial part of the Hodge conjecture for hypersurfaces is the degree m part (i.e., the middle cohomology) of a 2m-dimensional hypersurface X \subset \mathbb{P}^{2m+1}. If the degree d is 2, i.e., X is a quadric, the Hodge conjecture holds for all m. For m = 2, i.e., fourfolds, the Hodge conjecture is known for d \leq 5.
Abelian varieties
For most abelian varieties, the algebra Hdg*(X) is generated in degree one, so the Hodge conjecture holds. In particular, the Hodge conjecture holds for sufficiently general abelian varieties, for products of elliptic curves, and for simple abelian varieties of prime dimension. However, Mumford constructed an example of an abelian variety where Hdg2(X) is not generated by products of divisor classes. Weil generalized this example by showing that whenever the variety has complex multiplication by an imaginary quadratic field, then Hdg2(X) is not generated by products of divisor classes. Moonen and Zarhin proved that in dimension less than 5, either Hdg*(X) is generated in degree one, or the variety has complex multiplication by an imaginary quadratic field. In the latter case, the Hodge conjecture is only known in special cases.
Generalizations
The integral Hodge conjecture
Hodge's original conjecture was:
Integral Hodge conjecture. Let X be a projective complex manifold. Then every cohomology class in H^{2k}(X, \mathbb{Z}) \cap H^{k,k}(X) is the cohomology class of an algebraic cycle with integral coefficients on X.
This is now known to be false. The first counterexample was constructed by Atiyah and Hirzebruch. Using K-theory, they constructed an example of a torsion cohomology class—that is, a cohomology class \alpha such that n\alpha = 0 for some positive integer n—which is not the class of an algebraic cycle. Such a class is necessarily a Hodge class. Totaro reinterpreted their result in the framework of cobordism and found many examples of such classes.
The simplest adjustment of the integral Hodge conjecture is:
Integral Hodge conjecture modulo torsion. Let X be a projective complex manifold. Then every cohomology class in H^{2k}(X, \mathbb{Z}) \cap H^{k,k}(X) is the sum of a torsion class and the cohomology class of an algebraic cycle with integral coefficients on X.
Equivalently, after dividing by torsion classes, every class is the image of the cohomology class of an integral algebraic cycle. This is also false. Kollár found an example of a Hodge class which is not algebraic, but which has an integral multiple which is algebraic.
It has been shown that in order to obtain a correct integral Hodge conjecture, one needs to replace Chow groups, which can also be expressed as motivic cohomology groups, by a variant known as étale (or Lichtenbaum) motivic cohomology. With this modification, the rational Hodge conjecture is equivalent to an integral Hodge conjecture for the modified motivic cohomology.
The Hodge conjecture for Kähler varieties
A natural generalization of the Hodge conjecture would ask:
Hodge conjecture for Kähler varieties, naive version. Let X be a complex Kähler manifold. Then every Hodge class on X is a linear combination with rational coefficients of the cohomology classes of complex subvarieties of X.
This is too optimistic, because there are not enough subvarieties to make this work. A possible substitute is to ask instead one of the two following questions:
Hodge conjecture for Kähler varieties, vector bundle version. Let X be a complex Kähler manifold. Then every Hodge class on X is a linear combination with rational coefficients of Chern classes of vector bundles on X.
Hodge conjecture for Kähler varieties, coherent sheaf version. Let X be a complex Kähler manifold. Then every Hodge class on X is a linear combination with rational coefficients of Chern classes of coherent sheaves on X.
Voisin proved that the Chern classes of coherent sheaves give strictly more Hodge classes than the Chern classes of vector bundles and that the Chern classes of coherent sheaves are insufficient to generate all the Hodge classes. Consequently, the only known formulations of the Hodge conjecture for Kähler varieties are false.
The generalized Hodge conjecture
Hodge made an additional, stronger conjecture than the integral Hodge conjecture. Say that a cohomology class on X is of co-level c (coniveau c) if it is the pushforward of a cohomology class on a c-codimensional subvariety of X. The cohomology classes of co-level at least c filter the cohomology of X, and it is easy to see that the cth step of the filtration, N^c H^k(X, \mathbb{Z}), satisfies
N^c H^k(X, \mathbb{Z}) \subseteq H^k(X, \mathbb{Z}) \cap (H^{k-c,c}(X) \oplus \cdots \oplus H^{c,k-c}(X)).
Hodge's original statement was:
Generalized Hodge conjecture, Hodge's version. N^c H^k(X, \mathbb{Z}) = H^k(X, \mathbb{Z}) \cap (H^{k-c,c}(X) \oplus \cdots \oplus H^{c,k-c}(X)).
Grothendieck observed that this cannot be true, even with rational coefficients, because the right-hand side is not always a Hodge structure. His corrected form of the Hodge conjecture is:
Generalized Hodge conjecture. N^c H^k(X, \mathbb{Q}) is the largest sub-Hodge structure of H^k(X, \mathbb{Z}) contained in H^{k-c,c}(X) \oplus \cdots \oplus H^{c,k-c}(X).
This version is open.
Algebraicity of Hodge loci
The strongest evidence in favor of the Hodge conjecture is the algebraicity result of Cattani, Deligne & Kaplan (1995). Suppose that we vary the complex structure of X over a simply connected base. Then the topological cohomology of X does not change, but the Hodge decomposition does change. It is known that if the Hodge conjecture is true, then the locus of all points on the base where the cohomology of a fiber is a Hodge class is in fact an algebraic subset, that is, it is cut out by polynomial equations. Cattani, Deligne & Kaplan (1995) proved that this is always true, without assuming the Hodge conjecture.
See also
Tate conjecture
Hodge theory
Hodge structure
Period mapping
References
External links
Popular lecture on Hodge Conjecture by Dan Freed (University of Texas) (Real Video) (Slides)
Burt Totaro, Why believe the Hodge Conjecture?
Claire Voisin, Hodge loci
Algebraic geometry
Conjectures
Hodge theory
Homology theory
Millennium Prize Problems | Hodge conjecture | Mathematics,Engineering | 2,484 |
2,235,200 | https://en.wikipedia.org/wiki/PM3%20%28chemistry%29 | PM3, or Parametric Method 3, is a semi-empirical method for the quantum calculation of molecular electronic structure in computational chemistry. It is based on the Neglect of Differential Diatomic Overlap integral approximation.
The PM3 method uses the same formalism and equations as the AM1 method. The only differences are:
1) PM3 uses two Gaussian functions for the core repulsion function, instead of the variable number used by AM1 (which uses between one and four Gaussians per element);
2) the numerical values of the parameters are different. The other differences lie in the philosophy and methodology used during the parameterization: whereas AM1 takes some of the parameter values from spectroscopic measurements, PM3 treats them as optimizable values.
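For illustration, the following Python sketch evaluates a Gaussian core–core correction of the general AM1/PM3 form, (Z_A Z_B / R_AB) \sum_k a_k \exp(-b_k (R_AB - c_k)^2); the parameter values shown are placeholders for illustration only, not the published PM3 parameters.

import math

def gaussian_core_correction(z_a, z_b, r_ab, gaussians_a, gaussians_b):
    # gaussians_a and gaussians_b are lists of (a_k, b_k, c_k) triples
    # for atoms A and B; PM3 uses two such Gaussians per element.
    total = 0.0
    for a_k, b_k, c_k in list(gaussians_a) + list(gaussians_b):
        total += a_k * math.exp(-b_k * (r_ab - c_k) ** 2)
    return (z_a * z_b / r_ab) * total

# Illustrative placeholder parameters, NOT the published PM3 values.
h_gaussians = [(0.12, 5.0, 1.2), (-0.03, 6.0, 1.8)]
print(gaussian_core_correction(1.0, 1.0, 0.74, h_gaussians, h_gaussians))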
The method was developed by J. J. P. Stewart and first published in 1989. It is implemented in the MOPAC program (of which the older versions are public domain), along with the related RM1, AM1, MNDO and MINDO methods, and in several other programs such as Gaussian, CP2K, GAMESS (US), GAMESS (UK), PC GAMESS, Chem3D, AMPAC, ArgusLab, BOSS, and SPARTAN.
The original PM3 publication included parameters for the following elements: H, C, N, O, F, Al, Si, P, S, Cl, Br, and I.
The PM3 implementation in the SPARTAN program includes PM3tm with additional extensions for transition metals supporting calculations on Ca, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Zr, Mo, Tc, Ru, Rh, Pd, Hf, Ta, W, Re, Os, Ir, Pt, and Gd. Many other elements, mostly metals, have been parameterized in subsequent work.
A model for the PM3 calculation of lanthanide complexes, called Sparkle/PM3, was also introduced.
References
Semiempirical quantum chemistry methods | PM3 (chemistry) | Chemistry | 423 |
53,713,152 | https://en.wikipedia.org/wiki/Klosneuvirus | Klosneuvirus (KNV, also KloV) is a new type of giant virus found by the analysis of low-complexity metagenomes from a wastewater treatment plant in Klosterneuburg, Austria. It has a 1.57-Mb genome coding for an unusually high number of genes typically found in cellular organisms, including aminoacyl transfer RNA synthetases with specificities for 19 different amino acids, over 10 translation factors and several tRNA-modifying enzymes. Klosneuvirus, Indivirus, Catovirus and Hokovirus are part of a group of giant viruses denoted as Klosneuviruses or Klosneuvirinae, a proposed subfamily of the Mimiviridae.
Species in this clade include Bodo saltans virus infecting the kinetoplastid Bodo saltans.
The phylogenetic tree topology of Mimiviridae is still under discussion. As Klosneuviruses are related to Mimivirus, it has been proposed to put them all together into a subfamily Megavirinae. Other authors (CNS 2018) prefer to group Klosneuviruses together with Cafeteria roenbergensis virus (CroV) and Bodo saltans virus (BsV) into a tentative subfamily called Aquavirinae.
See also
Nucleocytoplasmic large DNA viruses
Giant Virus
Viral eukaryogenesis
Mimiviridae
References
Virus genera
Unaccepted virus taxa | Klosneuvirus | Biology | 289 |
20,878,968 | https://en.wikipedia.org/wiki/Correct%20sampling | During sampling of granular materials (whether airborne, suspended in liquid, aerosol, or aggregated), correct sampling is defined in Gy's sampling theory as a sampling scenario in which all particles in a population have the same probability of ending up in the sample.
The concentration of the property of interest in a sample can be a biased estimate for the concentration of the property of interest in the population from which the sample is drawn. Although generally non-zero, for correct sampling this bias is thought to be negligible.
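As a rough illustration (a hypothetical numerical example, not part of Gy's original treatment), the following Python sketch compares an equal-probability sample with a size-biased sample drawn from a simulated lot in which small particles are richer in the property of interest; the size-biased sample systematically underestimates the true concentration.

import random

random.seed(0)

# Hypothetical lot: small particles are more likely to carry the property
# of interest (e.g. the mineral being assayed) than large ones.
particles = []
for _ in range(100_000):
    mass = random.uniform(1.0, 5.0)
    valuable = random.random() < (0.4 if mass < 2.0 else 0.1)
    particles.append({"mass": mass, "valuable": valuable})

def concentration(sample):
    # Mass fraction of the sample made up of valuable particles.
    valuable_mass = sum(p["mass"] for p in sample if p["valuable"])
    return valuable_mass / sum(p["mass"] for p in sample)

# Correct sampling: every particle has the same probability of selection.
correct_sample = [p for p in particles if random.random() < 0.05]

# Incorrect sampling: selection probability grows with particle mass, so the
# larger (poorer) particles are over-represented and the estimate is biased.
biased_sample = [p for p in particles if random.random() < 0.05 * p["mass"] / 5.0]

print(concentration(particles), concentration(correct_sample), concentration(biased_sample))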
See also
Particle filter
Particle in a box
Particulate matter sampler
Statistical sampling
Gy's sampling theory
References
Sampling (statistics)
Particulates
Meteorological instrumentation and equipment
Aerosols | Correct sampling | Chemistry,Technology,Engineering | 145 |
11,083,346 | https://en.wikipedia.org/wiki/Penitrem%20A | Penitrem A (tremortin) is an indole-diterpenoid mycotoxin produced by certain species of Aspergillus, Claviceps, and Penicillium, which can be found growing on various plant species such as ryegrass. Penitrem A is one of many secondary metabolites produced following the synthesis of paxilline in Penicillium crustosum. Penitrem A poisoning in humans and animals usually occurs through the consumption of foods contaminated by mycotoxin-producing species; the toxin is then distributed through the body by the bloodstream. It bypasses the blood-brain barrier to exert its toxicological effects on the central nervous system. In humans, penitrem A poisoning has been associated with severe tremors, hyperthermia, nausea/vomiting, diplopia, and bloody diarrhea. In animals, penitrem A poisoning has been associated with symptoms ranging from tremors, seizures, and hyperthermia to ataxia and nystagmus.
Roquefortine C has been commonly detected in documented cases of penitrem A poisoning, making it a possible biomarker for diagnoses.
Mechanism of action
Penitrem A impairs GABAergic amino acid neurotransmission and antagonizes high-conductance Ca2+-activated potassium channels in both humans and animals. Impairment of the GABAergic amino acid neurotransmission comes with the spontaneous release of the excitatory amino acids glutamate and aspartate as well as the inhibitory neurotransmitter γ-aminobutyric acid (GABA). The sudden release of these neurotransmitters results in imbalanced GABAergic signalling, which gives rise to neurological disorders such as the tremors associated with penitrem A poisoning.
Penitrem A also induces the production of reactive oxygen species (ROS) in the neutrophil granulocytes of humans and animals. Increased ROS production results in tissue damage in the brain and other afflicted organs as well as hemorrhages in acute poisonings.
Synthesis
In Penicillium crustosum, synthesis of penitrem A and other secondary metabolites follows the synthesis of paxilline. Synthesis of penitrem A involves six oxidative-transformation enzymes (four cytochrome P450 monooxygenases and two flavin adenine dinucleotide (FAD)-dependent monooxygenases), two acetyltransferases, one oxidoreductase, and one prenyltransferase. These enzymes are encoded by a cluster of genes used in paxilline synthesis and penitrem A-F synthesis. The pathway is described below:
Oxidoreductase catalyzes the reduction of paxilline's ketone and also adds a dimethylallyl group to its aromatic ring.
Acetyltransferases catalyze the removal of the intermediate's lower right-hand hydroxyl group and the reduction of one of the nearby methyl groups to a methylene group.
Oxidative-transformation enzyme catalyzes the addition of a hydroxyl group to the intermediate's dimethylallyl group. The dimethylallyl's double bond migrates down one carbon.
Prenyltransferase catalyzes the formation of a dimethyl-cyclopentane and a cyclobutane using the intermediate's aromatic ring-alcohol group.
Oxidative-transformation enzyme catalyzes the formation of a methylenecyclohexane using the intermediate's dimethyl-cyclopentane, forming secopenitrem D.
Oxidative-transformation enzyme catalyzes the formation of a cyclooctane using cyclobutane's alcohol group and the carbon joining secopenitrem D's cyclohexane and cyclopentane, forming penitrem D.
Oxidative-transformation enzyme catalyzes the addition of a chlorine atom at penitrem D's aromatic ring, forming penitrem C.
Oxidative-transformation enzyme catalyzes the formation of an epoxide ring at penitrem C's oxane-double bond, forming penitrem F.
Oxidative-transformation enzyme catalyzes the addition of a hydroxyl group at the carbon joining penitrem F's methylenecyclohexane and cyclobutane, forming penitrem A.
See also
Paxilline
Roquefortine C
References
Indole alkaloids
Neurotoxins
Penicillium
Cell communication
Chloroarenes
Alcohols
Halogen-containing natural products
Cyclobutanes
Mycotoxins | Penitrem A | Chemistry,Biology | 1,011 |
63,965,880 | https://en.wikipedia.org/wiki/Zero%20dynamics | In mathematics, zero dynamics is the concept of evaluating the effect of a system's zeros on its behaviour.
History
The idea was introduced about thirty years ago as the nonlinear analogue of the concept of transmission zeros. The original purpose of introducing the concept was to develop asymptotic stabilization with a set of guaranteed regions of attraction (semi-global stabilizability), to make the overall system stable.
Initial working
Given the internal dynamics of a system, the zero dynamics are the dynamics that remain when the control action is chosen so that the output variables of the system are kept identically zero. Various systems have distinctive sets of zeros, such as decoupling zeros, invariant zeros, and transmission zeros, and the concept was developed to control non-minimum phase and nonlinear systems effectively.
Applications
The concept is widely utilized in SISO mechanical systems, where a few heuristic approaches can be applied to identify the zeros of various linear systems. Zero dynamics adds an essential feature to the overall analysis of a system and the design of its controllers, and its behavior plays a significant role in measuring the performance limitations of specific feedback systems. In a single-input single-output system, the zero dynamics can be identified by using junction structure patterns; in other words, concepts like bond graph models can help to point out the potential direction of the SISO systems.
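As a simple linear illustration (a hypothetical example, not drawn from the literature cited in this article), the zeros of a SISO transfer function govern its zero dynamics: if any zero has a positive real part, the system is non-minimum phase and its zero dynamics are unstable. A short Python sketch using NumPy:

import numpy as np

# Hypothetical SISO transfer function G(s) = (s - 1) / (s^2 + 3 s + 2).
numerator = [1.0, -1.0]        # s - 1
denominator = [1.0, 3.0, 2.0]  # s^2 + 3 s + 2

zeros = np.roots(numerator)    # zeros of the transfer function
poles = np.roots(denominator)  # poles of the transfer function

# The zero dynamics (output held identically at zero) are stable only if
# every zero lies strictly in the left half-plane (minimum phase system).
minimum_phase = all(z.real < 0 for z in zeros)
print("zeros:", zeros, "poles:", poles, "minimum phase:", minimum_phase)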
Apart from its application in nonlinear standardized systems, similar controlled results can be obtained by using zero dynamics on nonlinear discrete-time systems. In this scenario, the application of zero dynamics can be an interesting tool to measure the performance of nonlinear digital design systems (nonlinear discrete-time systems).
Before the advent of zero dynamics, the problem of obtaining non-interacting control systems while preserving internal stability was not specifically discussed. However, when the zero dynamics of a system are asymptotically stable, stabilizing static feedback can be ensured. Such results make zero dynamics an interesting tool for guaranteeing the internal stability of non-interacting control systems.
References
Differential equations | Zero dynamics | Mathematics | 407 |
3,638,803 | https://en.wikipedia.org/wiki/Constant%20curvature | In mathematics, constant curvature is a concept from differential geometry. Here, curvature refers to the sectional curvature of a space (more precisely a manifold) and is a single number determining its local geometry. The sectional curvature is said to be constant if it has the same value at every point and for every two-dimensional tangent plane at that point. For example, a sphere is a surface of constant positive curvature.
Classification
The Riemannian manifolds of constant curvature can be classified into the following three cases:
Elliptic geometry – constant positive sectional curvature
Euclidean geometry – constant vanishing sectional curvature
Hyperbolic geometry – constant negative sectional curvature.
Properties
Every space of constant curvature is locally symmetric, i.e. its curvature tensor is parallel, \nabla R = 0.
Every space of constant curvature is locally maximally symmetric, i.e. it has the maximal number n(n+1)/2 of local isometries, where n is its dimension.
Conversely, there exists a similar but stronger statement: every maximally symmetric space, i.e. a space which has n(n+1)/2 (global) isometries, has constant curvature.
(Killing–Hopf theorem) The universal cover of a manifold of constant sectional curvature is one of the model spaces:
sphere (sectional curvature positive)
Euclidean space (sectional curvature zero)
hyperbolic space (sectional curvature negative)
A space of constant curvature which is geodesically complete is called a space form, and the study of space forms is intimately related to generalized crystallography (see the article on space form for more details).
Two space forms are isomorphic if and only if they have the same dimension, their metrics possess the same signature and their sectional curvatures are equal.
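For reference, on a Riemannian manifold of constant sectional curvature \kappa the Riemann curvature tensor takes the following form for all tangent vectors X, Y, Z (a standard identity, stated here in LaTeX as an editorial addition):

\[
R(X, Y)Z = \kappa \bigl( \langle Y, Z \rangle \, X - \langle X, Z \rangle \, Y \bigr).
\]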
References
Further reading
Moritz Epple (2003) From Quaternions to Cosmology: Spaces of Constant Curvature ca. 1873 — 1925, invited address to International Congress of Mathematicians
Differential geometry of surfaces
Riemannian geometry
Curvature (mathematics) | Constant curvature | Physics | 373 |
23,982,990 | https://en.wikipedia.org/wiki/IFixit | iFixit ( ) is an American e-commerce and how-to website that publishes free wiki-like online repair guides and tear-downs of consumer electronics and gadgets. It also sells repair parts, tools, and accessories. It is a private company in San Luis Obispo, California founded in 2003, spurred by Kyle Wiens not being able to locate an Apple iBook G3 repair manual while the company's founders were attending Cal Poly San Luis Obispo.
Business model
iFixit has released product tear-downs of new mobile and laptop devices which provide advertising for the company's parts and equipment sales. These tear-downs have been reviewed by PC World, The Mac Observer, NetworkWorld, and other publications.
Co-founder Kyle Wiens has said that he aims to reduce electronic waste by teaching people to repair their own gear, and by offering tools, parts, and a forum to discuss repairs. In 2011, he travelled through Africa with a documentary team to meet a community of electronics technicians who repair and rebuild the world's discarded electronics.
iFixit provides a software as a service platform known as Dozuki to allow others to use iFixit's documentation framework to produce their own documentation. O'Reilly Media's Make and Craft magazines use Dozuki to feature community guides alongside instructions originally written by the staff for the print magazine.
On April 3, 2014 iFixit announced a partnership with Fairphone.
During the COVID-19 pandemic, iFixit and CALPIRG, the California arm of the Public Interest Research Group, worked with hospitals and medical research facilities to gather the largest known database of medical equipment manuals and repair guides to support the healthcare industry during the pandemic.
In 2022, iFixit announced plans to open a new distribution center and office in Chattanooga, Tennessee.
Reception
In September 2015, Apple removed the iFixit app from the App Store after the company published a tear-down of a developer pre-release version of the Apple TV (4th generation) obtained under Apple's Developer Program, violating a signed non-disclosure agreement; accordingly, iFixit's developer account was suspended. In response, iFixit says it has worked on improving its mobile site so that users can access its services through a mobile browser.
In April 2019, it was revealed that some Oculus Quest and Oculus Rift S devices contain a physical Easter egg reading "Hi iFixit! We See You!", demonstrating that device manufacturers are aware of iFixit.
In March 2022, Samsung announced that they would be collaborating with iFixit to provide a self-repair program and parts store for a range of their electronic devices. iFixit ended their collaboration with Samsung in May 2024, with co-founder Kyle Wiens saying "Samsung does not seem interested in enabling repair at scale."
In April 2022, Google announced that they would be partnering with iFixit to provide replacement parts for their Pixel series of smartphones.
See also
Consumer Rights Act 2015
Do it yourself
Magnuson–Moss Warranty Act
Repair Café
Right to repair
References
Knowledge markets
Internet properties established in 2003
Maintenance
Do it yourself
DIY culture
Creative Commons-licensed websites | IFixit | Engineering | 656 |
625,420 | https://en.wikipedia.org/wiki/Paper%20football | Paper football (also called finger football, flick football, tabletop football, thump football, or freaky football) refers to a table-top game, loosely based on American football, in which a sheet of paper folded into a small triangle is slid back and forth across a table top by two opponents. This game is widely practiced for entertainment, mostly by students in primary, middle school (junior high), and high school age in the United States. Though its origin is in dispute, it was widely played at churches in Madison, Wisconsin in the early 1970s. The youth group at Grace Baptist Church held weekly events and competitions including monthly championships.
Gameplay
The game uses a piece of paper folded into a triangle, called the "ball". The starting player begins by kicking off the ball. To perform a kickoff, the ball is placed on the table, suspended by one of the player's hands with the index finger on the upper tip of the ball, then the player flicks the ball with the other hand's thumb and index finger. If the ball ends up flying off the table or hanging on the edge of the table, the kickoff is redone. If the ball lands on the table without reaching the edge of the receiving player's side, players take turns pushing it with a steady fast motion towards the opponent's side.
The player scores points by getting the ball hanging on the edge of the opponent's side, called a touchdown. Every time a touchdown is scored, the player who scored has a chance to make a field goal, which has that player flick the ball as in the kickoff through the opponent's goal post, formed by placing both wrists parallel to the table on the edge, with the tips of both thumbs touching each other and both index fingers pointing straight upward. If the field goal is successful, the kicking player scores one point. The player who conceded points starts the next kickoff.
The game ends based on the agreed-upon rules, be it time limit (the player with the most points when the predetermined amount of time has elapsed wins) or score limit (the first player to reach the predetermined score threshold wins).
See also
Penny football
Button football
Tabletop football
Blow football
References
External links
Children's games
Individual sports
football
Variations of American football
football | Paper football | Mathematics | 470 |
5,131,331 | https://en.wikipedia.org/wiki/Blue%20Book%20%28CD%20standard%29 | The Blue Book is a compact disc standard developed in 1995 by Philips and Sony. It defines the Enhanced Music CD format (E-CD, also known as CD-Extra, CD-Plus and CD+), which combines audio tracks and data tracks on the same disc. The format was created as a way to solve the problem of mixed mode CDs, which were not properly supported by many CD players.
E-CDs are created through the stamped multisession technology, which creates two sessions on a disc. The first session of an E-CD contains audio tracks according to the Red Book. As a consequence, existing compact disc players can play back this first session as an audio disc. The second session contains CD-ROM data files with content often related to the audio tracks in the first session. The second session will only be used by computer systems equipped with a CD-ROM drive, or by special “Enhanced CD players”.
The second session of an E-CD contains one track in CD-ROM XA Mode 2, Form 1 format. It must contain certain specific files inside an ISO 9660 file system, though an HFS file system may also be included for compatibility with the classic Mac OS. The mandatory files and directories include an autorun.inf file compatible with the Windows 95 AutoRun feature and CDPLUS and PICTURES directories; a DATA directory is optional.
The technology was originally developed by Albhy Galuten who, along with Ty Roberts brought the idea to the major record labels where it was built into commercial releases. There were other technologies that solved the same problem, but they were not compatible with many existing CD players and so this approach was brought to Sony and Philips where it was written into the Blue Book standard.
The term "enhanced CD" is also an umbrella term and a certification mark used to refer to different CD formats that support audio and data content, including mixed mode CDs, CD-i and CD-i Ready.
References
Rainbow Books
Optical disc authoring
Digital audio storage
Optical computer storage media | Blue Book (CD standard) | Technology | 414 |
61,227,693 | https://en.wikipedia.org/wiki/List%20of%20largest%20inflorescences | The following is a list of the largest inflorescences known from plants that produce seeds.
See also
List of world records held by plants
List of largest seeds
Largest organisms
List of Superlative trees
List of Largest Fungal Fruit Bodies
References
Lists of flowers
Inflorescences
Inflorescences | List of largest inflorescences | Biology | 59 |
46,930,681 | https://en.wikipedia.org/wiki/EIDORS | EIDORS is an open-source software tool box written mainly in MATLAB/GNU Octave designed primarily for image reconstruction from electrical impedance tomography (EIT) data, in a biomedical, industrial or geophysical setting. The name was originally an acronym for Electrical Impedance Tomography and Diffuse Optical Reconstruction Software. While the name reflects the original intention to cover image reconstruction of data from the mathematically similar near infra red diffuse optical imaging, to date there has been little development in that area.
The project was launched in 1999 with a Matlab code for 2D EIT reconstruction which had its origin in the PhD thesis of Marko Vauhkonen and the work of his supervisor Jari Kaipio at the University of Kuopio. While Kuopio also developed a three dimensional EIT code this was not released as open-source. Instead the three dimensional version of EIDORS was developed from work done at UMIST (now University of Manchester) by Nick Polydorides and William Lionheart.
Methods and models
The forward models in EIDORS use the finite element method and this requires mesh generation for sometimes irregular objects (such as human bodies), and the meshing needs to reflect the electrodes used to drive and measure current in EIT. For this purpose an interface was developed to the Netgen Mesh Generator.
History
As the project grew there was a desire to incorporate forward modelling and reconstruction code from a variety of groups and Andy Adler and Lionheart developed a more extensible software system. The most recent version is 3.10, released in Dec, 2019.
The EIDORS project also includes a repository of EIT data distributed under open-source licenses.
Applications
EIDORS has been extensively used in biomedical applications of EIT, including lung imaging, measuring cardiac output. It has been used for investigation of imaging electrical activity in the brain, and monitoring conductivity changes during radio-frequency ablation. Outside medical imaging the toolbox has been used in process tomography, geophysics and materials science.
References
External links
EIDORS website on Sourceforge
Medical imaging
Inverse problems | EIDORS | Mathematics | 425 |
11,068,873 | https://en.wikipedia.org/wiki/Pseudocercosporella%20capsellae | Pseudocercosporella capsellae is a plant pathogen infecting crucifers (canola, mustard, rapeseed). P. capsellae is the causal pathogen of white leaf spot disease, which is an economically significant disease in global agriculture. P. capsellae has a significant effect on the yields of agricultural products such as canola seed and rapeseed. Researchers are investigating effective methods of controlling this plant pathogen using cultural control, genetic resistance, and chemical control practices. Due to its rapidly changing genome, P. capsellae is a rapidly emerging plant pathogen that is beginning to spread globally and affect farmers around the world.
Habitat and Geographical Distribution
Habitat
Pseudocercosporella capsellae is generally found in humid environments. When P. capsellae is found in environments with low humidity, the fungus is unable to germinate and cause disease. This pathogen is not a thermophile, explaining how it is found in temperate climates without extreme heat. After introduction into an area, P. capsellae is found in most neighboring Brassicaceae agricultural fields. In the wild, P. capsellae can be observed in prairie environments containing mustard weed.
Geographical Distribution
P. capsellae has been identified on four of the seven continents of the world: North America, Europe, Asia, and Australia. Specifically, P. capsellae has been found in agricultural fields in China, Japan, Canada, India, Australia, the Pacific Northwest region of the United States, the United Kingdom, France, Poland, and Scandinavian nations.
Morphology and Microscopic Features
P. capsellae is an ascomycete, meaning it produces ascospores housed in asci as means of sexual reproduction. Sexual structures are found in the sexual stage of this fungus, which has been classified as Mycosphaerella capsellae. The ascocarp of M. capsellae is a cleistothecium, meaning asci are shielded from the environment prior to ascospore release. As means of asexual reproduction, P. capsellae produces chains of septate conidia. Conidia range in size between about 42-71μm in length and about 3μm in width. These chains of conidia are attached to a long conidiophore and stipe, connecting these asexual structures to the sterile hyphal network of the fungal body. In culture, P. capsellae appears black and white on potato dextrose agar (PDA). When observed under a microscope, P. capsellae appears a reddish-purple color due to the fungus' production of a purple-pink pigment. P. capsellae also is known to produce a mycotoxin, cercosporin, which increases the virulence of the pathogen.
Disease Signs/Symptoms, Cycle, and Control
Disease Signs and Symptoms
Infected crucifers display white lesions on leaves when infected by P. capsellae. These white lesions oftentimes have nonuniform shapes, and darken as the fungus matures on its host. Lesions on leaves initially can be 1-2mm in diameter, but can grow up to 10mm in diameter as the disease progresses. Leaves can fall off of host plants if infection is severe and widespread throughout a particular host. Gray or tan lesions may also appear on host stems; these lesions oftentimes harbor the sexual stage of P. capsellae, where ascospores are developed and released. Conidia can be found on the underside of leaves, oftentimes in locations corresponding to where lesions are present.
Disease Cycle
Conidia from the asexual structures of P. capsellae germinate at optimal temperatures of 20-24°C. At these temperate conditions and in ample humidity, conidia can be spread to new host plants via wind, water droplet splash, or by improperly sanitized farm equipment. These conidia penetrate new host leaves or stems and create infection. Crucifers, such as canola or rapeseed, are the primary host for this pathogen. In rare cases, cover crops or neighboring species of weeds can act as secondary hosts for the sexual stage of P. capsellae. P. capsellae overwinters as thick-walled mycelium on infected detritus in fields, and germinates again to infect new hosts as conditions become more ideal for spread. P. capsellae is a hemibiotroph, as indicated by its ability to keep host crucifers alive until host leaves fall off during severe infection.
Control Strategies
Many management strategies have been implemented in an attempt to control the spread of P. capsellae. One common method is chemical control with fungicides. However, fungicides have been found to be largely ineffective against P. capsellae, as the pathogen is resistant to most of the common fungicides used by farmers.
Cultural control methods are the most common management strategy that farmers use against P. capsellae. Methods such as crop rotation, proper sanitation of farm equipment, and planting crucifer crops with more space in between crops are effective methods of managing the spread of P. capsellae in fields. Sanitation of farm equipment and crop rotation are methods of reducing initial inoculum of conidia produced by P. capsellae.
Breeding genetic resistance towards P. capsellae is a promising method for disease management of this pathogen. Researchers across the world have been conducting genetic crosses of Brassica crops to find resistance genes that can make crops less susceptible to P. capsellae infection. Although this method of control is promising, P. capsellae has a genome that is rapidly changing, making it difficult for researchers to identify host resistance genes that remain effective against P. capsellae for substantial periods of time.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Canola diseases
Mycosphaerellaceae
Fungi described in 1887
Taxa named by Benjamin Matlack Everhart
Fungus species | Pseudocercosporella capsellae | Biology | 1,249 |
3,391,379 | https://en.wikipedia.org/wiki/Lighting%20control%20system | A lighting control system is intelligent network-based lighting control that incorporates communication between various system inputs and outputs related to lighting control with the use of one or more central computing devices. Lighting control systems are widely used on both indoor and outdoor lighting of commercial, industrial, and residential spaces. Lighting control systems are sometimes referred to under the term smart lighting. Lighting control systems serve to provide the right amount of light where and when it is needed.
Lighting control systems are employed to maximize the energy savings from the lighting system, satisfy building codes, or comply with green building and energy conservation programs. Lighting control systems may include a lighting technology designed for energy efficiency, convenience and security. This may include high efficiency fixtures and automated controls that make adjustments based on conditions such as occupancy or daylight availability. Lighting is the deliberate application of light to achieve some aesthetic or practical effect (e.g. illumination of a security breach). It includes task lighting, accent lighting, and general lighting.
Lighting controls
The term lighting controls is typically used to indicate stand-alone control of the lighting within a space. This may include occupancy sensors, timeclocks, and photocells that are hard-wired to control fixed groups of lights independently. Adjustment occurs manually at each device's location. The efficiency of and market for residential lighting controls has been characterized by the Consortium for Energy Efficiency.
The term lighting control system refers to an intelligent networked system of devices related to lighting control. These devices may include relays, occupancy sensors, photocells, light control switches or touchscreens, and signals from other building systems (such as fire alarm or HVAC). Adjustment of the system occurs both at device locations and at central computer locations via software programs or other interface devices.
Advantages
The major advantage of a lighting control system over stand-alone lighting controls or conventional manual switching is the ability to control individual lights or groups of lights from a single user interface device. This ability to control multiple light sources from a user device allows complex lighting scenes to be created. A room may have multiple scenes pre-set, each one created for different activities in the room. A major benefit of lighting control systems is reduced energy consumption. Longer lamp life is also gained when dimming and switching off lights when not in use. Wireless lighting control systems provide additional benefits including reduced installation costs and increased flexibility over where switches and sensors may be placed.
Minimizing energy usage
Lighting applications represents 19% of the world's energy use and 6% of all greenhouse emissions. In the United States, 65 percent of energy consumption is used by commercial and industrial sectors, and 22 percent of this is used for lighting.
Smart lighting enables households and users to remotely control cooling, heating, lighting and appliances, minimizing unnecessary light and energy use. This ability saves energy and provides a level of comfort and convenience. The future success of lighting will require the involvement of a number of stakeholders and stakeholder communities from outside the traditional lighting industry. The concept of smart lighting also involves utilizing natural light from the sun to reduce the use of man-made lighting, as well as the simple practice of people turning off lighting when they leave a room.
Convenience
A smart lighting system can ensure that dark areas are illuminated when in use. The lights actively respond to the activities of the occupants based on sensors and intelligence (logic) that anticipates the lighting needs of an occupant.
Security
Lights can be used to dissuade people from entering areas where they should not be. A security breach, for example, is an event that could trigger floodlights at the breach point. Preventative measures include illuminating key access points (such as walkways) at night and automatically adjusting the lighting when a household is away to make it appear as though there are occupants.
Automated control
Lighting control systems typically provide the ability to automatically adjust a lighting device's output based on:
Chronological time (time of day)
Solar time (sunrise/sunset)
Occupancy using occupancy sensors
Daylight availability using photocells
Alarm conditions
Program logic (combination of events)
Chronological time
Chronological time schedules incorporate specific times of the day, week, month or year.
Solar time
Solar time schedules incorporate sunrise and sunset times, often used to switch outdoor lighting. Solar time scheduling requires that the location of the building be set. This is accomplished using the building's geographic location via either latitude and longitude or by picking the nearest city in a given database giving the approximate location and corresponding solar times.
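As a simplified illustration (hypothetical values, with sunrise and sunset assumed to come from an almanac or a solar-position library for the building's location rather than computed here), a solar time schedule can switch outdoor lighting on between sunset and sunrise with optional offsets:

from datetime import datetime, timedelta

def outdoor_lights_on(now, sunrise, sunset, offset_minutes=30):
    # sunrise and sunset are datetimes for the current day, typically looked up
    # from an almanac or solar-position library for the building's location.
    on_time = sunset - timedelta(minutes=offset_minutes)
    off_time = sunrise + timedelta(minutes=offset_minutes)
    # Lights are on from shortly before sunset until shortly after sunrise.
    return now >= on_time or now <= off_time

# Hypothetical example for a single day.
sunrise = datetime(2023, 6, 1, 5, 42)
sunset = datetime(2023, 6, 1, 20, 37)
print(outdoor_lights_on(datetime(2023, 6, 1, 21, 0), sunrise, sunset))  # True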
Occupancy
Space occupancy is primarily determined with occupancy sensors. Smart lighting that utilizes occupancy sensors can work in unison with other lighting connected to the same network to adjust lighting per various conditions. Occupancy-based control can yield substantial electricity savings, with the magnitude of the savings depending on the type of space being controlled.
Ultrasonic
The advantages of ultrasonic devices are that they are sensitive to all types of motion and generally there are zero coverage gaps, since they can detect movements not within the line of sight.
Daylight availability
Electric lighting energy use can be adjusted by automatically dimming and/or switching electric lights in response to the level of available daylight. Reducing the amount of electric lighting used when daylight is available is known as daylight harvesting.
Daylight sensing
In response to daylighting technology, daylight-linked automated response systems have been developed to further reduce energy consumption. These technologies are helpful, but they do have their downfalls. Rapid and frequent switching of the lights on and off can occur, particularly during unstable weather conditions or when daylight levels are changing around the switching illuminance. This not only disturbs occupants, it can also reduce lamp life. A variation of this technology is the 'differential switching' or 'dead-band' photoelectric control, which uses separate switch-on and switch-off illuminances to reduce disturbance to occupants.
Alarm conditions
Alarm conditions typically include inputs from other building systems such as the fire alarm or HVAC system, which may trigger an emergency 'all lights on' or ' all lights flashing' command for example.
Program logic
Program logic can tie all of the above elements together using constructs such as if-then-else statements and logical operators. Digital Addressable Lighting Interface (DALI) is specified in the IEC 62386 standard.
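As a minimal sketch (with hypothetical sensor inputs and threshold values, not tied to any particular control system or to the DALI command set), program logic of this kind can be expressed with simple if-then-else rules:

def lighting_level(occupied, daylight_lux, fire_alarm, hour):
    # Returns a dimming level between 0 (off) and 100 (full output).
    if fire_alarm:
        return 100                 # alarm condition: all lights on
    if not occupied:
        return 0                   # vacant space: lights off
    if daylight_lux >= 500:
        return 20                  # ample daylight: deep dimming
    if hour >= 22 or hour < 6:
        return 50                  # night setback for occupied areas
    return 100                     # occupied, little daylight: full output

print(lighting_level(occupied=True, daylight_lux=650, fire_alarm=False, hour=14))  # 20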
Automatic dimming
The use of automatic light dimming is an aspect of smart lighting that serves to reduce energy consumption. Manual light dimming also has the same effect of reducing energy use.
Use of sensors
In the paper "Energy savings due to occupancy sensors and personal controls: a pilot field study", Galasiu, A.D. and Newsham, G.R have confirmed that automatic lighting systems including occupancy sensors and individual (personal) controls are suitable for open-plan office environments and can save a significant amount of energy (about 32%) when compared to a conventional lighting system, even when the installed lighting power density of the automatic lighting system is ~50% higher than that of the conventional system.
Components
A complete sensor consists of a motion detector, an electronic control unit, and a controllable switch/relay. The detector senses motion and determines whether there are occupants in the space. It also has a timer that signals the electronic control unit after a set period of inactivity. The control unit uses this signal to activate the switch/relay to turn equipment on or off. For lighting applications, there are three main sensor types: passive infrared, ultrasonic, and hybrid.
Others
Motion-detecting (microwave), heating-sensing (infrared), and sound-sensing; optical cameras, infrared motion, optical trip wires, door contact sensors, thermal cameras, micro radars, daylight sensors.
Standards and protocols
In the 1980s there was a strong requirement to make commercial lighting more controllable so that it could become more energy efficient. Initially this was done with analog control, allowing fluorescent ballasts and dimmers to be controlled from a central source. This was a step in the right direction, but cabling was complicated and therefore not cost effective.
Tridonic was an early company to go digital with their broadcast protocols, DSI, in 1991. DSI was a basic protocol as it transmitted one control value to change the brightness of all the fixtures attached to the line. What made this protocol more attractive, and able to compete with the established analog option, was the simple wiring.
There are two types of lighting control systems which are:
Analog lighting control
Digital lighting control
Examples for analog lighting control systems are:
0-10V based system.
AMX192 based systems (often referred to as AMX) (USA standard).
D54 based systems (European standard).
In production lighting, the 0-10 V system was replaced by analog multiplexed systems such as D54 and AMX192, which themselves have been almost completely replaced by DMX512. For dimmable fluorescent lamps (where the system instead operates at 1-10 V, with 1 V as minimum and 0 V as off), it is being replaced by DSI, which itself is in the process of being replaced by DALI.
Examples for digital lighting control systems are:
DALI based system.
DSI based system
KNX based systems
Those are all wired lighting control systems.
There are also wireless lighting control systems based on standard protocols such as MIDI, ZigBee, Bluetooth Mesh, and others. The standard for the digital addressable lighting interface, used mostly in professional and commercial deployments, is IEC 62386-104. This standard specifies the underlying wireless technologies: VEmesh, which operates in the industrial sub-1 GHz frequency band, and Bluetooth Mesh, which operates in the 2.4 GHz frequency band.
Other notable protocols, standards and systems include:
Architecture for Control Networks
Art-Net
Bluetooth mesh Lighting model
C-Bus
Dynalite
INSTEON
Lonworks
MIDI
Modbus
RDM
VSCP
X10
Z-Wave
Bluetooth lighting control
A newer type of lighting control uses a Bluetooth connection directly to the lighting system. It was introduced by Philips Hue, a brand of Signify (formerly known as Philips Lighting). The system requires a smartphone or tablet on which the user installs the Philips Hue Bluetooth app. The Bluetooth bulbs do not need a Philips Hue bridge to function, and no Wi-Fi or data connection is needed to control the lights with this system.
Smart lighting ecosystem
Smart lighting systems can be controlled using the internet to adjust lighting brightness and schedules. One technology involves a smart lighting network that assigns IP addresses to light bulbs.
Information transmitting with smart light
Schubert predicts that revolutionary lighting systems will provide an entirely new means of sensing and broadcasting information. By blinking far too rapidly for any human to notice, the light will pick up data from sensors and carry it from room to room, reporting such information as the location of every person within a high-security building. A major focus of the Future Chips Constellation is smart lighting, a revolutionary new field in photonics based on efficient light sources that are fully tunable in terms of such factors as spectral content, emission pattern, polarization, color temperature, and intensity. Schubert, who leads the group, says smart lighting will not only offer better, more efficient illumination; it will provide "totally new functionalities."
Theatrical lighting control
Architectural lighting control systems can integrate with a theater's on-off and dimmer controls, and are often used for house lights and stage lighting, and can include worklights, rehearsal lighting, and lobby lighting. Control stations can be placed in several locations in the building and range in complexity from single buttons that bring up preset options-looks, to in-wall or desktop LCD touchscreen consoles. Much of the technology is related to residential and commercial lighting control systems.
The benefit of architectural lighting control systems in the theater is the ability for theater staff to turn worklights and house lights on and off without having to use a lighting control console. Alternately, the light designer can control these same lights with light cues from the lighting control console so that, for instance, the transition from houselights being up before a show starts and the first light cue of the show is controlled by one system.
Smart-lighting emergency ballast for fluorescent lamps
The function of a traditional emergency lighting system is to supply a minimum illumination level when a line voltage failure occurs. Emergency lighting systems therefore have to store energy in a battery module to supply the lamps in case of failure. In this kind of lighting system, internal faults such as battery overcharging, damaged lamps and starting circuit failures must be detected and repaired by specialist workers.
For this reason, the smart lighting prototype can check its functional state every fourteen days and show the result on an LED display. With these features, such systems can test themselves, checking their functional state and displaying their internal faults, and the maintenance cost can be decreased.
Overview
The main idea is the substitution of the simple line voltage sensing block that appears in traditional systems by a more complex one based on a microcontroller. This new circuit assumes the functions of line voltage sensing and inverter activation on one side, and the supervision of the whole system (lamp and battery state, battery charging, external communications, correct operation of the power stage, etc.) on the other.
The system has great flexibility; for instance, several devices could communicate with a master computer, which would know the state of each device at all times.
A new emergency lighting system based on an intelligent module has been developed. Using the microcontroller as a control and supervision device increases the security of the installation and saves on maintenance costs.
Another important advantage is the cost saving in mass production, especially when a microcontroller with the program in ROM memory is used.
Advances in photonics
The advances achieved in photonics are already transforming society just as electronics revolutionized the world in recent decades and it will continue to contribute more in the future. From the statistics, North America's optoelectronics market grew to more than $20 billion in 2003. The LED (light-emitting diode) market is expected to reach $5 billion in 2007, and the solid-state lighting market is predicted to be $50 billion in 15–20 years, as stated by E. Fred Schubert, Wellfleet Senior Distinguished Professor of the Future Chips Constellation at Rensselaer.
Notable inventors
Alexander Nikolayevich Lodygin – carbon-rod filament incandescent lamp (1874)
Joseph Swan – carbonized-thread filament incandescent lamp (1878)
Thomas Edison – long-lasting incandescent lamp with high-resistance filament (1880)
John Richardson Wigham – electric lighthouse illumination (1885)
Nick Holonyak – light-emitting diode (1962)
Howard Borden, Gerald Pighini, Mohamed Atalla, Monsanto – LED lamp (1968)
Shuji Nakamura, Isamu Akasaki, Hiroshi Amano – blue LED (1992)
See also
Banning of incandescent light bulbs
Dimmer
Home automation
Lutron
Light fixture
Light in school buildings
Light pollution
Lighting for the elderly
Lighting control console
Luminous efficacy
Over-illumination
Passive infrared sensor
Seasonal affective disorder
Stage lighting
Street lighting
Sustainable lighting
Three-point lighting, technique used in both still photography and in film
Ultrasonic sensors
Lists
List of light sources
List of lighting design applications
Timeline of lighting technology
References
Lighting
Energy-saving lighting
Internet of things
Home automation
Building automation
Building engineering
Environmental technology | Lighting control system | Technology,Engineering | 3,143 |
22,505,779 | https://en.wikipedia.org/wiki/Source%20measure%20unit | A source measure unit (SMU) is a type of electronic test equipment which can source voltage and current and measure them as it does so.
Overview
The source measure unit (SMU), or source-measurement unit, is an electronic instrument that is capable of both sourcing and measuring at the same time. It can precisely force voltage or current and simultaneously measure precise voltage and/or current.
SMUs are used for test applications requiring high accuracy, high resolution and measurement flexibility. Such applications include I-V characterization and testing of semiconductors and other non-linear devices and materials, where the sourced voltage and current must span both positive and negative values. To accomplish this, SMUs have four-quadrant outputs. For characterization purposes SMUs are bench instruments similar to a curve tracer. They are also commonly used in automatic test equipment and are usually equipped with an interface such as GPIB or USB to enable connection to a computer.
History
Semiconductor characterization led to the development of source measure units. The HP4145A semiconductor parameter analyzer introduced in 1982 was capable of a complete DC characterization of semiconductor devices and materials. It consisted of four independently controlled source monitor units (the precursor to source measure units) enclosed in a mainframe.
The Keithley 236 introduced in 1989 was the first stand-alone SMU and allowed system builders to integrate one or more SMUs with a separate PC control. Over time stand-alone SMUs have evolved to offer a broader range of current, voltage, power level and price points for applications beyond semiconductor characterization. Smaller form factors made possible through the use of modern computing technologies have allowed system builders to integrate SMUs into rack and stack systems for larger scale production test applications.
Operation
An SMU integrates a highly stable DC power source, operating as either a constant-current or a constant-voltage source, with a high-precision multimeter.
It typically has four terminals: two for sourcing and measurement, and two more for the Kelvin, or remote-sense, connection. Power is sourced (positive) or sunk (negative) at a pair of terminals while the current through or voltage across those terminals is simultaneously measured.
SMU vs. power supply
A power supply is mainly intended to provide appropriate power for a particular application. Because of this, the majority of power supplies are one-quadrant devices (source only, with fixed polarity), and in most cases operate in constant-voltage mode. Bench power supplies may add constant-current operation as well as limited measurement capabilities, but these are in many cases still one-quadrant only and have margins of error acceptable only for coarse lab work.
Some high-end laboratory power supplies offer two- or four-quadrant operation (source and sink, with fixed or dual polarity), which is an essential feature of an SMU. However, many of these still focus primarily on providing power to an application, with any measurement capability as a secondary priority. They may have advanced capabilities for controlling the power output, but can lack specialized test modes or monitoring options tailored for precise and easy power characterization. This class of power supplies can be regarded as the predecessor of the SMU; the SMU differs in that it adds features aimed particularly at characterization.
SMU vs. DMM
The built-in sourcing capabilities of an SMU work with the instrument's measurement capabilities to reduce measurement uncertainty and support low current and more flexible resistance measurements. In voltage measurements system-level leakage can be suppressed more easily than with separate instruments. In current measurements, the SMU's design reduces voltage burden. For resistance measurements, SMUs provide programmable source values, useful for protecting the device being tested.
Significant features
Notable features of SMUs include the following:
I and V sweeping—Sweep capabilities offer a way to test devices under a range of conditions with different source, delay and measure characteristics. These can include fixed-level, linear/log and pulsed sweeps (a minimal remote-control sketch appears after this list).
On-board processor—Some SMUs further improve instrument integration, communication and test time by adding an on-board script processor. User-defined on-board script execution offers capabilities for controlling test sequencing/flow, decision making, and instrument autonomy.
Contact check—SMUs can verify good connections to the device under test before the test begins. Some of the problems this function can detect include contact fatigue, breakage, contamination, corrosion, loose or broken connections and relay failures.
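A minimal sketch of a remotely controlled I-V sweep is shown below. It assumes a Python environment with PyVISA and an attached instrument; the VISA address and the SCPI-style command strings are placeholders rather than any particular vendor's command set, so consult the instrument manual before adapting it.

```python
import time
import numpy as np
import pyvisa

# Illustrative sketch only: real SMUs (Keithley, Keysight, Rohde & Schwarz, ...)
# each define their own command set, and running this requires an attached
# instrument plus a VISA backend.
rm = pyvisa.ResourceManager()
smu = rm.open_resource("GPIB0::24::INSTR")        # hypothetical instrument address

def iv_sweep(v_start=0.0, v_stop=1.0, points=11, delay_s=0.05):
    """Source a linear staircase of voltages and measure current at each step."""
    smu.write("OUTP ON")                          # enable the output (illustrative)
    results = []
    for v in np.linspace(v_start, v_stop, points):
        smu.write(f"SOUR:VOLT {v:.6f}")           # set the source level (illustrative)
        time.sleep(delay_s)                       # source-delay-measure settling time
        current = float(smu.query("MEAS:CURR?"))  # read back the current (illustrative)
        results.append((v, current))
    smu.write("OUTP OFF")
    return results

for v, i in iv_sweep():
    print(f"{v:8.3f} V  {i:12.4e} A")
```

In practice the sweep, delay and measurement are often configured on the instrument itself, or in an on-board script, rather than stepped from the host, which reduces test time.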
See also
Semiconductor curve tracer
Voltage source
Current source
Digital multimeter
Power supply
Electrometer
Electronic load
References
External links
Video of Keithley 2450 SMU Review and Experiments
Source Measurement Unit solutions from National Instruments
SMMU07 Source Measurement Multiplex Unit from FRANK Germany
Source Measure Unit
GS610 Single channel Source Measure Unit from Yokogawa
GS820 Two channel Source Measure Unit from Yokogawa
Aim-TTi SMU4000 series PowerFlex SMU
Electronic test equipment | Source measure unit | Technology,Engineering | 991 |
844,920 | https://en.wikipedia.org/wiki/Astronomy%20Picture%20of%20the%20Day | Astronomy Picture of the Day (APOD) is a website provided by NASA and Michigan Technological University (MTU). According to the website, "Each day a different image or photograph of our universe is featured, along with a brief explanation written by a professional astronomer."
The photograph does not necessarily correspond to a celestial event on the exact day that it is displayed, and images are sometimes repeated.
These often relate to current events in astronomy and space exploration. The text has several hyperlinks to more pictures and websites for more information. The images are either visible spectrum photographs, images taken at non-visible wavelengths and displayed in false color, video footage, animations, artist's conceptions, or micrographs that relate to space or cosmology.
Past images are stored in the APOD Archive, with the first image appearing on June 16, 1995. This initiative has received support from NASA, the National Science Foundation, and MTU. The images are sometimes authored by people or organizations outside NASA, and therefore APOD images are often copyrighted, unlike many other NASA image galleries.
When the APOD website was created, it received a total of 14 page views on its first day. The website has since received over a billion image views throughout its lifetime. APOD is also translated into 21 languages daily.
APOD was presented at a meeting of the American Astronomical Society in 1996. Its practice of using hypertext was analyzed in a paper in 2000. It received a Scientific American Sci/Tech Web Award in 2001. In 2002, the website was featured in an interview with Nemiroff on CNN Saturday Morning News. In 2003, the two authors published a book titled The Universe: 365 Days from Harry N. Abrams, which is a collection of the best images from APOD as a hardcover "coffee table" style book. APOD was the Featured Collection in the November 2004 issue of D-Lib Magazine.
During the United States federal government shutdown of 2013, APOD continued its service on mirror sites.
Robert J. Nemiroff and Jerry T. Bonnell were awarded the 2015 Klumpke-Roberts Award by the Astronomical Society of the Pacific "for outstanding contributions to public understanding and appreciation of astronomy" for their work on APOD. The site was awarded the International Astronomical Union's 2022 Astronomy Outreach Prize.
Pictures
References
External links
APOD Archive
About APOD – includes a list of mirror websites
Astronomy Picture of the Day RSS Feed – Official RSS feed
Official list of alternative (mirror) sites for when the NASA APOD site is down
Observatorio – Spanish official translation, with web2.0 features
Starship Asterisk* – APOD and General Astronomy Discussion Forum
Astronomy Picture of the Day (APOD) – Official Facebook Page
Official APOD Telegram channel - Official channel in Telegram Messenger
Astronomy Picture of the Day App – Official iOS mirror
APOD email service
List of APOD Mirrors and Social Sites
NASA online
Michigan Technological University
Astronomy education works
American science websites
Internet properties established in 1995 | Astronomy Picture of the Day | Astronomy | 612 |
407,354 | https://en.wikipedia.org/wiki/101%20%28number%29 | 101 (one hundred [and] one) is the natural number following 100 and preceding 102.
It is variously pronounced "one hundred and one" / "a hundred and one", "one hundred one" / "a hundred one", and "one oh one". As an ordinal number, 101st (one hundred [and] first), rather than 101th, is the correct form.
In mathematics
101 is:
the 26th prime number and the smallest prime greater than 100.
a palindromic number in decimal, and so a palindromic prime.
a Chen prime since 103 is also prime, with which it makes a twin prime pair.
a sexy prime since 107 and 113 are also prime, with which it makes a sexy prime triplet.
a unique prime because the period length of its reciprocal is unique among primes.
an Eisenstein prime with no imaginary part and real part of the form 3n − 1.
the fifth alternating factorial.
a centered decagonal number.
the only existing prime with alternating 1s and 0s in decimal and the largest known prime of the form 10^n + 1.
the number of compositions of 12 into distinct parts.
the smallest number that can be expressed as the sum of three distinct nonzero squares in more than two ways: 101 = 1² + 6² + 8² = 2² + 4² + 9² = 4² + 6² + 7².
Given 101, the Mertens function returns 0. It is the second prime to have this property after 2.
As a three-digit number in decimal, 101 has a relatively simple divisibility test. The candidate number is split into groups of four digits, starting with the rightmost four, and the groups are added up to produce a number of at most four digits (repeating the step if necessary). If this number is of the form abab (where a and b are digits from 0 to 9), such as 3232 or 9797, or of the form a0a, such as 707 and 808, then the original number is divisible by 101.
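A short script can confirm that this test agrees with direct computation of the remainder; the following is an illustrative sketch of the procedure described above.

```python
def groups_of_four_sum(n: int) -> int:
    """Split |n| into four-digit groups from the right and add them."""
    n = abs(n)
    total = 0
    while n:
        total += n % 10_000
        n //= 10_000
    return total

def divisible_by_101(n: int) -> bool:
    # 10**4 leaves remainder 1 when divided by 101, so the group sum has the
    # same remainder mod 101 as the original number.
    total = groups_of_four_sum(n)
    while total >= 10_000:               # re-reduce very long numbers
        total = groups_of_four_sum(total)
    # A multiple of 101 with at most four digits is 101*k, i.e. it shows the
    # digit pattern 'abab' (k = 10..99) or 'a0a' (k = 1..9), or is 0 itself.
    return total % 101 == 0

for n in (32323232, 979700, 70707, 12345):
    print(n, divisible_by_101(n), n % 101 == 0)   # the two answers always agree
```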
On the seven-segment display of a calculator, 101 is both a strobogrammatic prime and a dihedral prime.
In books
According to Books in Print, more books are now published with a title that begins with '101' than '100'. They usually describe or discuss a list of items, such as 101 Ways to... or 101 Questions and Answers About... . This marketing tool is used to imply that the customer is given a little extra information beyond books that include only 100 items. Some books have taken this marketing scheme even further with titles that begin with '102', '103', or '1001'. The number is used in this context as a slang term: "a 101 document" is a basic survey or overview of some topic.
In education
In American university course numbering systems, the number 101 is often used for an introductory course at a beginner's level in a department's subject area. This common numbering system was designed to make transfer between colleges easier. It can also indicate a course for students not intending to major in the subject; e.g. a student intending to major in English would take English 111 not English 101.
In theory, any numbered course in one academic institution should bring a student to the same standard as a similarly numbered course at other institutions. One of the earliest such usages, perhaps the first, was by the University of Buffalo in 1929.
Based on this usage, the term "101" (pronounced ) has gained a slang sense referring to basic knowledge of a topic or a collection of introductory materials to a topic, as in the sentence, "Boiling potatoes is Cooking 101". The Oxford English Dictionary records the usage of "101" in this slang sense from 1986.
In other fields
In public life:
In Hinduism, 101 is a lucky number.
101st kilometre, a condition of release from the Gulag in the Soviet Union.
101 is the main Police Emergency Number in Belgium.
101 is the Single Non-Emergency Number (SNEN) in some parts of the UK, a telephone number used to call emergency services that are urgent but not emergencies. 101 is now available across all areas of England and Wales.
In technology:
An HTTP status code indicating that the server is switching protocols, as requested by the client (see the example below).
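For illustration, the sketch below attempts a WebSocket-style upgrade with Python's standard http.client module and checks for status 101. The host name and path are placeholders, and an ordinary web server will normally refuse the upgrade and answer with a different status.

```python
import base64
import os
import http.client

# Sketch of an HTTP/1.1 protocol-upgrade handshake (WebSocket style, RFC 6455).
# "example.org" and "/chat" are placeholders only.
conn = http.client.HTTPConnection("example.org", 80, timeout=5)
conn.request("GET", "/chat", headers={
    "Upgrade": "websocket",
    "Connection": "Upgrade",
    "Sec-WebSocket-Key": base64.b64encode(os.urandom(16)).decode(),
    "Sec-WebSocket-Version": "13",
})
resp = conn.getresponse()
if resp.status == 101:                 # 101 Switching Protocols
    print("Server agreed to switch protocols:", resp.getheader("Upgrade"))
else:
    print("Upgrade refused, got status", resp.status, resp.reason)
```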
References
Wells, D. The Penguin Dictionary of Curious and Interesting Numbers London: Penguin Group. (1987): page 133.
Integers
Academic slang | 101 (number) | Mathematics | 893 |
61,903,668 | https://en.wikipedia.org/wiki/Alice%20Christine%20Stickland | Alice Christine Stickland (16 March 1906 – 16 April 1987) was an applied mathematician and astrophysics engineer with interests in radar and radiowave propagation.
Early life
Alice Christine Stickland was born in Camberwell, London, on 16 March 1906. Her father was a publisher's clerk.
Education
Stickland studied mathematics at King's College, London, and graduated with a BSc in 1927. She then studied privately while working at the Radio Research Station, Ditton Park, receiving an MSc in mathematical physics in 1929 and a PhD in mathematical physics from the University of London in 1943. Her dissertation was titled ‘The Propagation of the Magnetic Field of the Electron Magnetic Wave along the Ground and in the Lower Atmosphere’.
Career
Stickland worked as a scientific civil servant at the Radio Research Station between 1928 and 1947. She worked with radar pioneer, Robert Watson-Watt, on long-wave propagation, Reginald Smith-Rose on short-wave propagation, and Edward Appleton on the properties of the ionosphere.
Stickland, along with Smith-Rose, read a paper entitled 'Ultra-Short Wave Propagation - Comparison Between Theory and Experimental data' at the Institution of Electrical Engineers. The paper described the results of field intensity measurements obtained between 1937 and 1939 using the Post Office radio-telephone link between Guernsey and Chaldon.
She officially retired in 1968 but continued to work as General Editor of the Annals of the International Years of the Quiet Sun (1964-65), and with the International Council for Science’s Committee on Space Research (COSPAR). She was heavily involved in the Girl Guides’ Association.
Selected publications
Ultra-Short Wave Propagation - Comparison Between Theory and Experimental data - Dr. R. L. Smith-Rose, Miss A. C. Stickland
References
1906 births
1987 deaths
British mathematicians
Applied mathematicians
British electrical engineers
Alumni of King's College London
British women engineers
People from Camberwell
British women mathematicians | Alice Christine Stickland | Mathematics | 391 |
57,329,518 | https://en.wikipedia.org/wiki/Tyromyces%20toatoa | Tyromyces toatoa is a species of poroid fungus found in New Zealand. It was described as a new species by G. H. Cunningham in 1965. The type collections were made by Joan Dingley, who found the fungus in Taupō, Mount Ruapehu, near Whakapapa Stream. She found it fruiting on the bark of dead branches and trunks of Phyllocladus alpinus, at an elevation of . The specific epithet toatoa evokes the Māori name of the host plant.
Description
The fungus is characterized by its dark surface and thin cuticle of the small, effused-reflexed caps. The spores of T. toatoa are more or less sausage-shaped (suballantoid), measuring 4–5 by 1.5–2 μm.
References
toatoa
Fungi of New Zealand
Fungi described in 1965
Fungus species | Tyromyces toatoa | Biology | 187 |
1,908,395 | https://en.wikipedia.org/wiki/Artificial%20brain | An artificial brain (or artificial mind) is software and hardware with cognitive abilities similar to those of the animal or human brain.
Research investigating "artificial brains" and brain emulation plays three important roles in science:
An ongoing attempt by neuroscientists to understand how the human brain works, known as cognitive neuroscience.
A thought experiment in the philosophy of artificial intelligence, demonstrating that it is possible, at least in theory, to create a machine that has all the capabilities of a human being.
A long-term project to create machines exhibiting behavior comparable to those of animals with complex central nervous system such as mammals and most particularly humans. The ultimate goal of creating a machine exhibiting human-like behavior or intelligence is sometimes called strong AI.
An example of the first objective is the project reported by Aston University in Birmingham, England where researchers are using biological cells to create "neurospheres" (small clusters of neurons) in order to develop new treatments for diseases including Alzheimer's, motor neurone and Parkinson's disease.
The second objective is a reply to arguments such as John Searle's Chinese room argument, Hubert Dreyfus's critique of AI or Roger Penrose's argument in The Emperor's New Mind. These critics argued that there are aspects of human consciousness or expertise that can not be simulated by machines. One reply to their arguments is that the biological processes inside the brain can be simulated to any degree of accuracy. This reply was made as early as 1950, by Alan Turing in his classic paper "Computing Machinery and Intelligence".
The third objective is generally called artificial general intelligence by researchers. However, Ray Kurzweil prefers the term "strong AI". In his book The Singularity is Near, he focuses on whole brain emulation using conventional computing machines as an approach to implementing artificial brains, and claims (on grounds of computer power continuing an exponential growth trend) that this could be done by 2025. Henry Markram, director of the Blue Brain project (which is attempting brain emulation), made a similar claim (2020) at the Oxford TED conference in 2009.
Approaches to brain simulation
Although direct human brain emulation using artificial neural networks on a high-performance computing engine is a commonly discussed approach, there are other approaches. An alternative artificial brain implementation could be based on Holographic Neural Technology (HNeT) non linear phase coherence/decoherence principles. The analogy has been made to quantum processes through the core synaptic algorithm which has strong similarities to the quantum mechanical wave equation.
EvBrain is a form of evolutionary software that can evolve "brainlike" neural networks, such as the network immediately behind the retina.
In November 2008, IBM received a US$4.9 million grant from the Pentagon for research into creating intelligent computers. The Blue Brain project is being conducted with the assistance of IBM in Lausanne. The project is based on the premise that it is possible to artificially link the neurons "in the computer" by placing thirty million synapses in their proper three-dimensional position.
Some proponents of strong AI speculated in 2009 that computers in connection with Blue Brain and Soul Catcher may exceed human intellectual capacity by around 2015, and that it is likely that we will be able to download the human brain at some time around 2050.
While Blue Brain is able to represent complex neural connections on the large scale, the project does not achieve the link between brain activity and behaviors executed by the brain. In 2012, project Spaun (Semantic Pointer Architecture Unified Network) attempted to model multiple parts of the human brain through large-scale representations of neural connections that generate complex behaviors in addition to mapping.
Spaun's design recreates elements of human brain anatomy. The model, consisting of approximately 2.5 million neurons, includes features of the visual and motor cortices, GABAergic and dopaminergic connections, the ventral tegmental area (VTA), substantia nigra, and others. The design allows for several functions in response to eight tasks, using visual inputs of typed or handwritten characters and outputs carried out by a mechanical arm. Spaun's functions include copying a drawing, recognizing images, and counting.
There are good reasons to believe that, regardless of implementation strategy, the predictions of realising artificial brains in the near future are optimistic. In particular, brains (including the human brain) and cognition are not currently well understood, and the scale of computation required is unknown. Another near-term limitation is that all current approaches for brain simulation require orders of magnitude more power than a human brain. The human brain consumes about 20 W of power, whereas current supercomputers may use as much as 1 MW, a factor of roughly 50,000 more.
Artificial brain thought experiment
Some critics of brain simulation believe that it is simpler to create general intelligent action directly without imitating nature. Some commentators have used the analogy that early attempts to construct flying machines modeled them after birds, but that modern aircraft do not look like birds.
See also
AI takeover
Animat
Artificial consciousness
Artificial intelligence
Artificial intelligence and elections
Artificial Intelligence System
Artificial life
Philosophy of artificial intelligence
Biological neural networks
Blue Brain
CoDi
Cognitive architecture
Effective altruism
Existential risk from advanced artificial intelligence
Future of Humanity Institute
Human Brain Project
Multi-agent system
Neuromorphic computing
Never-Ending Language Learning
Nick Bostrom
Outline of artificial intelligence
OpenWorm
Robotics
Simulated reality
Superintelligence
Turing's Wager
Notes
References
External links
Artificial Brains – the quest to build sentient machines
Computational neuroscience
Brain
Thought experiments in philosophy of mind
Robotics engineering
Artificial intelligence | Artificial brain | Technology,Engineering | 1,137 |
5,230,467 | https://en.wikipedia.org/wiki/Systems%20for%20Nuclear%20Auxiliary%20Power | The Systems Nuclear Auxiliary POWER (SNAP) program was a program of experimental radioisotope thermoelectric generators (RTGs) and space nuclear reactors flown during the 1960s by NASA.
The SNAP program developed as a result of Project Feedback, a Rand Corporation study of reconnaissance satellites completed in 1954. As some of the proposed satellites had high power demands, some as high as a few kilowatts, the U.S. Atomic Energy Commission (AEC) requested a series of nuclear power-plant studies from industry in 1951. Completed in 1952, these studies determined that nuclear power plants were technically feasible for use on satellites.
In 1955, the AEC began two parallel SNAP nuclear power projects. One, contracted with The Martin Company, used radio-isotopic decay as the power source for its generators. These plants were given odd-numbered SNAP designations beginning with SNAP-1. The other project used nuclear reactors to generate energy, and was developed by the Atomics International Division of North American Aviation. Their systems were given even-numbered SNAP designations, the first being SNAP-2.
Most of the systems development and reactor testing was conducted at the Santa Susana Field Laboratory, Ventura County, California using a number of specialized facilities.
Odd-numbered SNAPs: radioisotope thermoelectric generators
Radioisotope thermoelectric generators use the heat of radioactive decay to produce electricity.
SNAP-1
SNAP-1 was a test platform that was never deployed, using cerium-144 in a Rankine cycle with mercury as the heat-transfer fluid. It operated successfully for 2,500 hours.
SNAP-3
SNAP-3 was the first RTG used in a space mission (1961). Launched aboard U.S. Navy Transit 4A and 4B navigation satellites. The electrical output of this RTG was 2.5 watts.
SNAP-7
SNAP-7A, D, and F were designed for marine applications such as lighthouses and buoys; at least six units were deployed in the mid-1960s, with names SNAP-7A through SNAP-7F. SNAP-7D produced thirty watts of electricity using (about four kilograms) of strontium-90 as SrTiO3. These were very large units, weighing between .
SNAP-9
After SNAP-3 on Transit 4A/B, SNAP-9A units served aboard many of the Transit satellite series. In April 1964 a SNAP-9A failed to achieve orbit and disintegrated, dispersing roughly of plutonium-238 over all continents. Most of the plutonium fell in the southern hemisphere. An estimated 630 TBq of radioactivity was released.
SNAP-11
SNAP-11 was an experimental RTG intended to power the Surveyor probes during the lunar night. The curium-242 RTGs would have produced 25 watts of electricity using 900 watts of thermal energy for 130 days. The hot junction temperature was , the cold junction temperature was . They had a liquid NaK thermal control system and a movable shutter to dump excess heat. They were not used on the Surveyor missions.
In general, the SNAP 11 fuel block is a cylindrical multi-material unit which occupies the internal volume of the generator. A TZM (molybdenum alloy) fuel capsule, fueled with curium-242 (Cm2O3 in an iridium matrix), is located in the center of the fuel block. The capsule is surrounded by a platinum sphere, approximately inches in diameter, which provides shielding and acts as an energy absorber for impact considerations. This assembly is enclosed in graphite and beryllium sub-assemblies to provide the proper thermal distribution and ablative protection.
SNAP-19
SNAP-19(B) was developed for the Nimbus-B satellite by the Nuclear Division of the Martin-Marietta Company (now Teledyne Energy Systems). Fueled with plutonium-238, two parallel lead telluride thermocouple generators produced an initial maximum of approximately 30 watts of electricity. Nimbus 3 used a SNAP-19B with the recovered fuel from the Nimbus-B1 attempt.
SNAP-19s powered the Pioneer 10 and Pioneer 11 missions. They used n-type 2N-PbTe and p-type TAGS-85 thermoelectric elements.
Modified SNAP-19Bs were used for the Viking 1 and Viking 2 landers.
A SNAP-19C was used to power a telemetry array at Nanda Devi in Uttarakhand for a CIA operation to track Chinese missile launches.
SNAP-21 & 23
SNAP-21 and SNAP-23 were designed for underwater use and used strontium-90 as the radioactive source, encapsulated as either strontium oxide or strontium titanate. They produced about ten watts of electricity.
SNAP-27
Five SNAP-27 units provided electric power for the Apollo Lunar Surface Experiments Packages (ALSEP) left on the Moon by Apollo 12, 14, 15, 16, and 17. The SNAP-27 power supply weighed about 20 kilograms, was 46 cm long and 40.6 cm in diameter. It consisted of a central fuel capsule surrounded by concentric rings of thermocouples. Outside of the thermocouples was a set of fins to provide for heat rejection from the cold side of the thermocouple. Each of the SNAP devices produced approximately 75 W of electrical power at 30 VDC. The energy source for each device was a rod of plutonium-238 providing a thermal power of approximately 1250 W. This fuel capsule, containing of plutonium-238 in oxide form (44,500 Ci or 1.65 PBq), was carried to the Moon in a separate fuel cask attached to the side of the Lunar Module. The fuel cask provided thermal insulation and added structural support to the fuel capsule. On the Moon, the Lunar Module pilot removed the fuel capsule from the cask and inserted it in the RTG.
These stations transmitted information about moonquakes and meteor impacts, lunar magnetic and gravitational fields, the Moon's internal temperature, and the Moon's atmosphere for several years after the missions. After ten years, a SNAP-27 still produced more than 90% of its initial output of 75 watts.
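That longevity is broadly consistent with simple decay arithmetic. The sketch below assumes the quoted 75 W initial electrical output, the 87.7-year half-life of plutonium-238, and no degradation of the thermocouples, so it gives only a rough upper bound on the remaining output.

```python
# Back-of-the-envelope check of the ">90% after ten years" figure.
HALF_LIFE_YEARS = 87.7        # half-life of plutonium-238
P0_ELECTRICAL_W = 75.0        # quoted initial electrical output of a SNAP-27

def remaining_fraction(t_years: float) -> float:
    """Fraction of the original heat output left after t_years of decay."""
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

print(f"{remaining_fraction(10):.3f}")                       # about 0.924
print(f"{P0_ELECTRICAL_W * remaining_fraction(10):.1f} W")   # about 69 W
```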
The fuel cask from the SNAP-27 unit carried by the Apollo 13 mission currently lies in of water at the bottom of the Tonga Trench in the Pacific Ocean. This mission failed to land on the moon, and the lunar module carrying its generator burnt up during re-entry into the Earth's atmosphere, with the trajectory arranged so that the cask would land in the trench. The cask survived re-entry, as it was designed to do, and no release of plutonium has been detected. The corrosion resistant materials of the capsule are expected to contain it for 10 half-lives (870 years).
Even-numbered SNAPs: compact nuclear reactors
A series of compact nuclear reactors intended for space use, the even numbered SNAPs were developed for the U.S. government by the Atomics International division of North American Aviation.
SNAP Experimental Reactor (SER)
The SNAP Experimental Reactor (SER) was the first reactor to be built by the specifications established for space satellite applications. The SER used uranium zirconium hydride as the fuel and eutectic sodium-potassium alloy (NaK) as the coolant and operated at approximately 50 kW thermal. The system did not have a power conversion but used a secondary heat air blast system to dissipate the heat to the atmosphere. The SER used a similar reactor reflector moderator device as the SNAP-10A but with only one reflector. Criticality was achieved in September 1959 with final shutdown completed in December 1961. The project was considered a success. It gave continued confidence in the development of the SNAP Program and it also led to in depth research and component development.
SNAP-2
The SNAP-2 Developmental Reactor was the second SNAP reactor built. This device used Uranium-zirconium hydride fuel and had a design reactor power of 55 kWt. It was the first model to use a flight control assembly and was tested from April 1961 to December 1962. The basic concept was that nuclear power would be a long term source of energy for crewed space capsules. However, the crew capsule had to be shielded from deadly radiation streaming from the nuclear reactor. Surrounding the reactor with a radiation shield was out of the question. It would be far too heavy to launch with the rockets available at that time. To protect the "crew" and "payload", the SNAP-2 system used a "shadow shield". The shield was a truncated cone containing lithium hydride. The reactor was at the small end and the crew capsule/payload was in the shadow of the large end.
Studies were performed on the reactor, individual components and the support system. Atomics International, a division of North American Aviation did the development and testing work. The SNAP-2 Shield Development unit was responsible for developing the radiation shield. Creating the shield meant melting lithium hydride and casting it into the form required. The form was a big truncated cone. Molten lithium hydride had to be poured into the casting mold a little at a time otherwise it would crack as it cooled and solidified. Cracks in the shield material would be fatal to any space crew or payload depending on it because it would allow radiation to stream through to the crew/payload compartment. As the material cooled, it would form kind of a hollowed vortex in the middle. The development engineers had to create ways to fill the vortex while maintaining the shield's integrity. And, in doing all this they had to keep in mind that they were working with a material that could be explosively unstable in a moist oxygen rich environment. Analysis also revealed that under thermal and radiation gradients, the lithium hydride could disassociate and hydrogen ions could migrate through the shield. This would produce variations of shielding efficacy and could subject the payloads to intense radiation. Efforts were made to mitigate these effects.
The SNAP 2DR used a similar reactor reflector moderator device as the SNAP-10A but with two movable and internal fixed reflectors. The system was designed so that the reactor could be integrated with a mercury Rankine cycle to generate 3.5 kW of electricity.
SNAP-8
The SNAP-8 reactors were designed, constructed and operated by Atomics International under contract with the National Aeronautics and Space Administration. Two SNAP-8 reactors were produced: The SNAP 8 Experimental Reactor and the SNAP 8 Developmental Reactor. Both SNAP 8 reactors used the same highly enriched uranium zirconium hydride fuel as the SNAP 2 and SNAP 10A reactors. The SNAP 8 design included primary and secondary NaK loops to transfer heat to the mercury rankine power conversion system. The electrical generating system for the SNAP 8 reactors was supplied by Aerojet General.
The SNAP 8 Experimental Reactor was a 600 kWt reactor that was tested from 1963 to 1965.
The SNAP 8 Developmental Reactor had a reactor core measuring , contained a total of of fuel, had a power rating of 1 MWt. The reactor was tested in 1969 at the Santa Susana Field Laboratory.
SNAP-10A
The SNAP-10A was a space-qualified nuclear reactor power system launched into space in 1965 under the SNAPSHOT program. It was built as a research project for the Air Force, to demonstrate the capability to generate higher power than RTGs. The reactor employed two moveable beryllium reflectors for control, and generated 35 kWt at beginning of life. The system generated electricity by circulating NaK around lead tellurium thermocouples. To mitigate launch hazards, the reactor was not started until it reached a safe orbit.
SNAP-10A was launched into Earth orbit in April 1965, and used to power an Agena-D research satellite, built by Lockheed/Martin. The system produced 500W of electrical power during an abbreviated 43-day flight test. The reactor was prematurely shut down by a faulty command receiver. It is predicted to remain in orbit for 4,000 years.
See also
List of nuclear power systems in space
Nuclear power in space
Citations
General sources
"Nuclear Power in Space". U.S. Department of Energy, Office of Nuclear Energy, Science & Technology
External links
SNAP-8 Electrical Generating System Development Program, Final Report
SNAP-19, Phase 3. Quarterly Progress Report, 1 January – 31 March 1966
SNAP 19, Phase 3. Quarterly Progress Report, 1 Apr. – 30 Jun. 1966
Analysis of the need for Agena command destruct and/or generator eject systems on the Nimbus B/SNAP-19 mission
SNAP-19/Nimbus B integration experience
SNAP-27, Volume 1. Quarterly Report, 1 Jul. – 30 Sep. 1966
SNAP-27, Volume 2. Quarterly Report, 1 Jan. – 31 Mar. 1966
"Space Nuclear Power: Opening the Final Frontier" by G. L. Bennett (2006)
"Space Nuclear Power Sources" (tables)
Atomics International
Electrical generators
NASA programs
North American Aviation
Nuclear power in space
Nuclear technology
United States Atomic Energy Commission | Systems for Nuclear Auxiliary Power | Physics,Technology | 2,671 |
24,470,664 | https://en.wikipedia.org/wiki/C30H26O12 | {{DISPLAYTITLE:C30H26O12}}
The molecular formula C30H26O12 may refer to:
Several B type proanthocyanidins dimers:
Procyanidin B1 or epicatechin-(4β→8)-catechin
Procyanidin B2 or (-)-epicatechin-(4β→8)-(-)-epicatechin
Procyanidin B3 or catechin-(4β→8)-catechin
Procyanidin B4 or catechin-(4α→8)-epicatechin
Procyanidin B5 or epicatechin-(4β→6)-epicatechin
Procyanidin B6 or catechin-(4α→6)-catechin
Procyanidin B8 or catechin-(4α→6)-epicatechin
37,136,779 | https://en.wikipedia.org/wiki/Experience%20architecture | Experience architecture (XA) is the art of articulating a clear user story or journey, through information architecture, interaction design and experience design, that an end user navigates across the products and services offered by a client or intended by the designer. This visual representation is intended not only to highlight the systems that the end user will touch and interact with, but also the key interactions that the user will have with the internal systems or back-end structure of an application. It provides a holistic view of the experience, vertical knowledge of the industry, the systems, documentation, and analysis of the points that should be focused on when delivering a holistic experience. The experience architecture provides an overall direction for user experience activities across projects.
Experience architect
An experience architect (also known as an XA) is a designer who authors, plans, and designs the experience architecture deliverables. An XA draws on a variety of interaction and digital design skills, including knowledge of human behaviour, user-centered design (UCD) and interaction design. This person is also responsible for connecting human emotions with the end product, ensuring that the experience meets or exceeds the needs and objectives of the intended users. The XA integrates the results into actionable requirements, and is responsible for conceptualising and delivering the design deliverables that meet business and usability objectives by identifying the modules, templates, and structure necessary for end-product integration.
Experience architect deliverables
Experience architects are responsible for documenting and delivering a series of project manuals, guidelines and specifications. These include all or some of the below practices.
Persona
Scenario
User story
User journey
Process flow diagram
Information Architecture
Wireframe
Ranging from High-Fidelity to Low-Fidelity
Content strategy
Prototype
Functional specification
Inclusive experience
A great design experience must be self-explanatory and emphasize the user journey from step to step in a minimalistic manner. In broader terms, it is a branch of inclusive design and universal design. The purpose of inclusion in the context of experience architecture is to create technology and user interfaces accessible to wider audiences, inclusive of the full range of human diversity with respect to ability, gender, age and other forms of human difference.
This methodology aims to achieve an independent experience and accessibility for users who are aging, able-bodied, disabled or otherwise impaired, whether from birth or through later events. It requires creating interfaces, or prototyping lower-level designs for the physical world, that make actions and steps more self-explanatory, thereby removing layers of prerequisite requirements for accessing a digital system. Inclusive experience is an emerging and developing set of skills and standard elements in applications that are mass-produced for consumers, government and other public domains.
Education
The first Experience Architecture program began at Michigan State University. Developed by Liza Potts, Rebecca Tegtmeyer, Bill Hart-Davidson, this program launched in 2014.
Bachelor programs
BA Experience Architecture – Michigan State University
Related areas
Application architecture
Business analyst
Card sorting
Content strategy
Contextual inquiry
Data architecture
Data management
Design thinking
Experiential interior design
Human factors
Information architecture
Information design
Information system
Interaction design
Participatory design
Semantic Web
Service design
Taxonomy
Usability testing
User-centered design
User experience design
Design | Experience architecture | Engineering | 645 |
23,618,627 | https://en.wikipedia.org/wiki/Minor%20Use%20Animal%20Drug%20Program | The Minor Use Animal Drug Program (or National Research Support Project 7) is the counterpart for animals of the IR-4 Minor Crop Pest Management Program. The program targets development of therapeutic drugs for minor species, such as small ruminants and aquatic species, plus support for drugs for minor use within major species. It is carried out in partnership with the Food and Drug Administration’s (FDA) Center for Veterinary Medicine.
References
External links
Program website
Food and Drug Administration
Drug discovery | Minor Use Animal Drug Program | Chemistry,Biology | 97 |
11,007,310 | https://en.wikipedia.org/wiki/Pennington%20Biomedical%20Research%20Center | The Pennington Biomedical Research Center is a health science-focused research center in Baton Rouge, Louisiana. It is part of the Louisiana State University System and conducts clinical, basic, and population science research. It is the largest academically-based nutrition research center in the world, with the greatest number of obesity researchers on faculty. The center's over 500 employees occupy several buildings on the campus. The center was designed by the Baton Rouge architect John Desmond.
History
In 1980, Baton Rouge oilman and philanthropist C. B. "Doc" Pennington and his wife, Irene, provided $125 million to fund construction of the nutritional research center. With a U.S. Department of Defense contract and funding from the Louisiana Public Facilities Authority, Governor Buddy Roemer proclaimed the official opening of the Center in 1988. Dr George A. Bray, a renowned obesity researcher, was recruited to be the first executive director of the center and under his leadership the center reached its present status in the scientific world.
Today, the Pennington Biomedical Research Center houses almost 600 employees, 14 research laboratories, 17 core service laboratories, an inpatient and outpatient clinic, two metabolic chambers, a research kitchen, an administrative area, more than $20 million in technologically advanced equipment, and a team of over 80 scientists and physicians with specialties such as molecular biology, genomics and proteomics, neuroanatomy, exercise physiology, biochemistry, psychology, endocrinology, biostatistics and electrophysiology.
One of the former employees was the late state legislator Leonard J. Chabert from Terrebonne Parish, the namesake of the Leonard J. Chabert Medical Center in Houma.
Research programs and labs
The comprehensive research program at the Pennington Biomedical Research Center focuses on ten specific research program areas as outlined below. Researchers in these divisions rely on the latest molecular, physiological, clinical, behavioral, and bioinformatics technologies with the ultimate goal of preventing common diseases such as heart disease, diabetes, hypertension, and cancer.
Cancer: Clinical Oncology & Metabolism, Cancer Energetics
Diabetes: Antioxidant and Gene Regulation, John S McIlhenny Skeletal Muscle Physiology, John S. McIlhenny Botanical Research, Joint Program on Diabetes, Endocrinology and Metabolism, Oxidative Stress and Disease
Epidemiology and Prevention: Chronic Disease Epidemiology, Contextual Risk Factors, Nutritional Epidemiology, Physical Activity and Obesity Epidemiology
Genomics & Molecular Genetics: Gene-Nutrient Interactions, Genetics of Eating Behavior, Human Genomics, Regulation of Gene Expression
Neurobiology: Autonomic Neuroscience, Leptin Signaling in the Brain, Neurobiology & Nutrition, Neurobiology of Metabolic Dysfunction Lab, Neurosignaling, Nutrition & Neural Signaling,
Neurodegeneration: Aging and Neurodegeneration, Blood Brain Barrier I, Blood Brain Barrier II, Inflammation and Neurodegeneration, Nutritional Neuroscience and Aging
Nutrient Sensing & Signaling: Nutrient Sensing and Adipocyte Signaling
Obesity: Behavior Modification Clinical Trials, Behavior Technology Laboratory: Eating Disorders and Obesity, Behavioral Medicine, Infection and Obesity, Ingestive Behavior Laboratory, Pediatric Obesity and Health Behavior, Pharmacology-based Clinical Trials, Reproductive Endocrinology & Women's Health, Women's Health, Eating Behavior, & Smoking Cessation Program
Physical Activity & Health: Exercise Biology, Human Physiology, Inactivity Physiology, Physical Activity & Ethnic Minority Health, Preventive Medicine, Walking Behavior
Stem Cell & Developmental Biology: Developmental Biology, Epigenetics & Nuclear Reprogramming, Ubiquitin Biology
Core services
Pennington Biomedical Research Center provides core services in three specific areas (i.e., Basic Science, Clinical Science, and Population Science) to support researchers and increase the efficiency and accuracy of investigative procedures.
The Basic Science Core allows researchers to use cutting edge technology in the following areas: comparative biology, animal behavior, animal metabolism, cell and tissue imaging and microscopy, cell culture facilities, genomics, transgenics, proteomics and metabolomics.
The Clinical Science Core provides researchers access to clinical research study protocol development tools, Internal Review Board (IRB) submission, budgeting assistance, and contract support. The Center assists with study participant recruitment, specimen collection, processing and analysis, dietary assessment, exercise testing, psychological review, and phlebotomy. The Core also provides meal preparation using the Metabolic Kitchen and provides support for data collection and storage.
The Population Science Core provides researchers with statistical support for studies, data management assistance, and access to the Library and Information Center which provides bibliographic instruction, interlibrary loan processing, and other services.
Centers of excellence
The National Institutes of Health (NIH) awards center grants to institutions with groups of established researchers working in a variety of scientific research fields. There are three NIH Centers of Excellence at Pennington Biomedical Research Center. the Center for Research on Botanicals and Metabolic Syndrome (BRC), the Center of Biomedical Research Excellence (COBRE), and the Nutrition and Obesity Research Center (NORC)
References
External links
Official website
Louisiana State University System
Biochemistry research institutes
Medical research institutes in the United States
Buildings and structures in Baton Rouge, Louisiana
Educational institutions established in 1981
1981 establishments in Louisiana
Neuroscience research centers in the United States
Research institutes in Louisiana | Pennington Biomedical Research Center | Chemistry | 1,089 |
2,858,710 | https://en.wikipedia.org/wiki/Phi%20Aquarii | Phi Aquarii, Latinized from φ Aquarii, is the Bayer designation for a binary star system in the equatorial constellation of Aquarius. It is visible to the naked eye with a combined apparent visual magnitude of +4.223. Parallax measurements indicate its distance from Earth is roughly , and it is drifting further away with a radial velocity of +2.5 km/s. It is 1.05 degrees south of the ecliptic so it is subject to lunar occultations.
This is a spectroscopic binary star system with an estimated period of 2,500 days. The primary component is a red giant star with a stellar classification of M1.5 III. The outer envelope of this evolved star has expanded to 35 times the size of the Sun. The star has the same mass as the Sun. It is radiating 208 times the luminosity of the Sun at an effective temperature of 3,715 K, giving it the reddish hue of an M-type star.
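The quoted luminosity can be cross-checked with the Stefan–Boltzmann relation L/L_sun = (R/R_sun)^2 (T_eff/T_sun)^4, assuming a solar effective temperature of about 5,772 K; the small sketch below reproduces the published value to within rounding.

```python
# Consistency check of the quoted luminosity from the quoted radius and T_eff.
T_SUN_K = 5772.0          # assumed solar effective temperature
radius_solar = 35.0       # quoted radius of the primary, in solar radii
t_eff_k = 3715.0          # quoted effective temperature

luminosity_solar = radius_solar**2 * (t_eff_k / T_SUN_K)**4
print(f"{luminosity_solar:.0f} L_sun")   # about 210, close to the quoted 208
```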
References
External links
Image Phi Aquarii
M-type giants
Spectroscopic binaries
Aquarius (constellation)
Aquarii, Phi
BD-06 6170
Aquarii, 090
219215
114724
8834 | Phi Aquarii | Astronomy | 250 |
27,356,583 | https://en.wikipedia.org/wiki/ERIKA%20Enterprise | ERIKA Enterprise is a real-time operating system (RTOS) kernel for embedded systems, which is OSEK/VDX certified. It is free and open source software released under a GNU General Public License (GPL). The RTOS also includes RT-Druid, an integrated development environment (IDE) based on Eclipse.
ERIKA Enterprise implements various conformance classes, including the standard OSEK/VDX conformance classes BCC1, BCC2, ECC1, ECC2, CCCA, and CCCB. Also, ERIKA provides other custom conformance classes named FP (fixed priority), EDF (earliest deadline first scheduling), and FRSH (an implementation of resource reservation protocols).
Due to the collaboration with the Tool & Methodologies team of Magneti Marelli Powertrain & Electronics, the automotive kernel (BCC1, BCC2, ECC1, ECC2, multicore, memory protection, and kernel fixed priority with Diab 5.5.1 compiler) is MISRA C 2004 compliant using FlexeLint 9.00h under the configuration suggested by Magneti Marelli.
In August 2012 ERIKA Enterprise officially received the OSEK/VDX certification; see below.
History
ERIKA Enterprise began in the year 2000 with the aim to support multicore devices for the automotive markets.
The main milestones are:
2000: support for STMicroelectronics ST10
2001: support for ARM7
2002: support for Janus, a prototype dual ARM7 system for the automotive market
2004: support for Hitachi H8
2005: support for Altera Nios II, with support for partitioning on multicore designs; availability of the RT-Druid code generator
2006: support for Microchip dsPIC
2007: support for Atmel AVR Micaz
2009: announced ERIKA website on TuxFamily
2010: support for TriCore, Freescale S12XS, Freescale PowerPC 5000 PPC MPC5674F Mamba, Microchip PIC24, Microchip PIC32, Lattice MICO32, eSi-RISC
2011: support for Texas Instruments MSP430, Renesas R2xx, Freescale S12G, Freescale PowerPC 5000 PPC MPC5668G Fado
2012: support for ARM Cortex-M, Atmel AVR (Arduino), TI Stellaris Cortex M4, Freescale PowerPC 5000 PPC MPC5643L Leopard, NXP LPCXpresso. ERIKA Enterprise received OSEK/VDX certification.
2013: ERIKA Enterprise is supported by E4Coder automatic code generation tool.
2014: OSEK/VDX certification for Tricore AURIX
2017: RTOS was rewritten from scratch; new version (3) has proper support for multicore platforms (i.e., one binary for multiple cores), better support for memory protection, and an easier build system. The source code is now maintained on a GitHub repository.
2017: ERIKA v2.8.0 is released in November 2017.
2018: Multicore and AUTOSAR Scalability Class 1 added to ERIKA3. Graphical editor now available for the OIL file.
2019: On May 24, ERIKA released version RH65. As of April 2, 2024, the official ERIKA website had not been updated since August 27, 2019.
Licensing
Version 2 of the RTOS was released under GPL linking exception. Version 3 of the RTOS (also called ERIKA3) is released under plain GNU General Public License (GPL), with the linking exception sold on request.
Industrial usage
In 2010, Cobra Automotive Technology announced support for ERIKA Enterprise
In 2010, EnSilica and Pebble Bay consultancy ported ERIKA Enterprise to a family of configurable soft processor cores for automotive systems
In 2010, Magneti Marelli Powertrain announced support for ERIKA Enterprise.
In 2011, FAAM Spa announced support for ERIKA Enterprise.
In 2011, Aprilia Racing announced support for ERIKA Enterprise.
Hardware support
The ERIKA Enterprise kernel directly supports:
FLEX Boards.
Easy lab boards
Nvidia Jetson TX1 and TX2
Other evaluation boards are supported.
References
Embedded operating systems
Operating system technology
Real-time operating systems
ARM operating systems
Software using the GPL linking exception | ERIKA Enterprise | Technology | 892 |
1,522,428 | https://en.wikipedia.org/wiki/Theta%20Centauri | Theta Centauri or θ Centauri, officially named Menkent (), is a single star in the southern constellation of Centaurus, the centaur. With an apparent visual magnitude of +2.06, it is the fourth-brightest member of the constellation. Based on parallax measurements obtained during the Hipparcos mission, it is about distant. It has a relatively high proper motion, traversing the celestial sphere at the rate of . This suggests that Menkent may have originated in the outer disk of the Milky Way and is merely passing through the solar neighborhood.
Nomenclature
θ Centauri, Latinised to Theta Centauri, is the star's Bayer designation.
It bore the traditional name of Menkent derived from the Arabic word مَنْكِب (mankib) for "shoulder" (of the Centaur), apparently blended with a shortened form of "kentaurus" (centaur). In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN approved the name Menkent for this star on 21 August 2016 and it is now so included in the List of IAU-approved Star Names.
In Chinese, (), meaning arsenal, refers to an asterism consisting of Theta Centauri, Zeta Centauri, Eta Centauri, 2 Centauri, HD 117440, Xi¹ Centauri, Gamma Centauri, Tau Centauri, D Centauri and Sigma Centauri. Consequently, the Chinese name for Theta Centauri itself is (, ).
Properties
This is an evolved giant star with a stellar classification of K0 III and 1.27 times the mass of the Sun. It is believed to be fusing helium into carbon and heavier elements within its core, qualifying it as a red clump star. It is a southern analog to Pollux, the brightest star in Gemini and the closest giant to the Sun. It is over ten times larger than the Sun and 60 times more luminous. The outer envelope has an effective temperature of 4,853 K, giving it the orange-hued glow of a cool, K-type star. Soft X-ray emission has been detected from this star, which has an estimated X-ray luminosity of .
See also
List of nearest giant stars
Notes
References
K-type giants
Centaurus
Centauri, Theta
Durchmusterung objects
Centauri, 5
0539
123139
068933
5288
Menkent | Theta Centauri | Astronomy | 530 |
62,735,157 | https://en.wikipedia.org/wiki/Murat%20Aitkhozhin | Murat Abenovich Aitkhozhin (, ) (29 June 1939 – 19 December 1987) was a Kazakh Soviet molecular biologist: the founder of molecular biology in Kazakhstan. He was the President of the Kazakhstan Academy of Sciences (1986–87), Deputy of The Supreme Soviet of the USSR, member (and Chairman of the Kazakh branch) of the Soviet Peace Fund. He was the founder and first Director of the Kazakh Institute of Molecular Biology and Biochemistry as well as being an Academian of the Academy of Sciences of the Kazakh SSR (1983), Doctor of Sciences (1977), Professor (1980) and Lenin Prize laureate (1976).
Biography
Early career
Murat Aitkhozhin was born on 29 June 1939 in Petropavlovsk in the Kazakh SSR in a large family. He was educated at the Kazakh State University (KazGU) (graduated in 1962) and Moscow State University (graduated in 1965).
In 1966, Aitkhozhin defended his thesis on “Ribonucleic Acids in the Early Embryogenesis of Loach Misgurnus fossilis” and between 1965 and 1967 he worked as a junior researcher. From 1967 to 1969 he was a senior research fellow at the Institute of Botany of the Academy of Sciences of the Kazakh SSR, and became head of the laboratory in 1969.
In 1976, he defended a doctoral dissertation at Moscow State University on the topic: "Ribonucleoprotein particles of higher plants." He was awarded the diploma No. 1 of a doctor of sciences in the specialty of molecular biology. In the same year he was awarded the most important prize of the USSR, the Lenin Prize for the discovery of a special class of ribonucleoprotein particles, informosomes.
Institute Director
In 1978, he became the director of the Institute of Botany of the Academy of Sciences of the Kazakh SSR, and was elected as a Corresponding Member of the Academy the following year (1979).
In 1983, he founded the Institute of Molecular Biology and Biochemistry of the Academy of Sciences of the Kazakh SSR. and was elected to the position of Academian of the Academy of Sciences of the Kazakh SSR. The international recognition of the successes of molecular biology in Kazakhstan was the holding in 1984 of the international symposium "Prospects for bioorganic chemistry and molecular biology" at the institute. The symposium was attended by leading scientists in the field of molecular biology and bioorganic chemistry, dozens of Nobel Prize winners – including Linus Pauling and Dorothy Hodgkin. At this international symposium Aitkhozhin made a large plenary report.
President of the Academy of Sciences
On 22 April 1986, at a turning point for the country and the republic, Aitkhozhin was elected as President of the Academy of Sciences of the Kazakh SSR, a post in which he served until his death in 1987.
On 5 June 1986, a session of the General Meeting of the Academy of Sciences of the Kazakh SSR was held. It was opened by the President of the Academy of Sciences of the Kazakh SSR, Academician M. A. Aitkhozhin, with the report "Tasks of scientists of the Academy of Sciences of the Kazakh SSR on the implementation of decisions of the XXVII Congress of the CPSU and XVI Congress of the Communist Party of Kazakhstan." At the session, Dinmukhamed Kunaev, First Secretary of the Communist Party of Kazakhstan and member of the Politburo of the USSR, also spoke.
As President of the Academy of Sciences of the Kazakh SSR, Murat Aitkhozhin carried out a tremendous amount of organizational work: during his leadership, the coordination of scientific research in Kazakhstan improved significantly and cooperation with leading research centers expanded, bringing academic science in the republic to the modern frontiers of scientific and technological progress.
As president of the Academy of Sciences, Aitkhozhin paid great attention to the training of young scientific personnel. As a professor at Kazakh State University, he taught for many years a course in molecular biology that he had developed himself, as well as special courses on biochemistry. He created the only dissertation council in the Central Asian region for the defense of candidate dissertations in molecular biology and biochemistry. Additional funds were allocated to expand the range of leading foreign journals, across the relevant scientific fields, received by the Central Scientific Library of the Academy of Sciences.
Contribution to Science
Murat Aitkhozhin was the founder of molecular biology and biotechnology in Kazakhstan. Working in the group of Academician A. S. Spirin, Aitkhozhin searched for informosomes in plant cells and studied their physicochemical properties. The group discovered several classes of plant informosomes – free cytoplasmic, polysome-bound, and nuclear – together with their RNA-binding proteins. For this work, they were awarded the Lenin Prize in 1976.
In 1983, Aitkhozhin founded the Institute of Molecular Biology and Biochemistry of the Academy of Sciences of the Kazakh SSR. In 1987, he organized the Kazakh Agricultural Biotechnology Center, where work on cell and genetic engineering of plants is carried out. Aitkhozhin was the first to introduce a course in molecular biology, together with a number of special courses, for students of the biological faculty of Kazakh State University.
Under the leadership of Aitkhozhin, a set of instruments for the automation of molecular biological experiments was developed, which was protected by 15 copyright certificates and 16 patents in leading countries.
Contribution to Public life
Along with science, Murat Aitkhozhin also took part in the party life of the country. Murat Aitkhozhin was elected a deputy of the Supreme Soviet of the USSR of the 11th convocation (Soviet of the Union) from the Chapaevsky constituency of the Ural region and a delegate to the XXVII Congress of the Communist Party of the Soviet Union.
Murat Aitkhozhin was a member of the Soviet Peace Fund, and was appointed Chairman of the Republican branch of the Soviet Peace Fund in 1981. Furthermore, he was appointed to the Presidium of the Academy of Sciences of the Kazakh SSR (in 1981). Along with future President of Kazakhstan, Nursultan Nazarbayev, Aitkhozhin was in the Kazakh delegation to Latvia in 1977. He was awarded the Gold Medal of the Soviet Peace Fund in 1987 as well as the Order of the Friendship of Peoples.
He was one of the signatories of the article "We Are Bitter" in the newspaper Evening Alma-Ata of 27 December 1986, in which he condemned the actions of the Zheltoqsan protesters: "..What right were these hooligans, most of whom were under the influence of alcohol and drugs ..."
From 1983 to 1987 Murat Aitkhozhin was a member of the Lenin Komsomol Prize Award Committee in the field of science and technology, and from 1986 to 1987 he was chairman of the Committee on State Prizes in Science and Technology under the Council of Ministers of the Kazakh SSR. He chaired the scientific council on physical and chemical biology at the Presidium of the Academy of Sciences of the Kazakh SSR, was a member of the editorial boards of the all-Union journals Molecular Biology and Biopolymers and Cell, and was editor-in-chief of the journal Bulletin of the Kazakh SSR.
Awards
Jubilee Medal "For Valiant labour - In Commemoration of the 100th Anniversary of the Birth of Vladimir Ilyich Lenin" (1970).
Entered in the Golden Book of Honour of the Kazakh SSR (1974).
Lenin Prize (1976) - for the series of works "The discovery of informosomes - a new class of intracellular particles."
Certificate of Honour of the Supreme Soviet of the Kazakh SSR (1981).
Gold Medal of the Soviet Peace Fund (1987).
Order of the Friendship of Peoples (1987).
Legacy
Named after Academician Murat Aitkhozhin:
Secondary school number 1 in Petropavlovsk.
Institute of Molecular Biology and Biochemistry of the Academy of Sciences of Kazakhstan.
2 scholarships for graduate students of the Academy of Sciences of Kazakhstan in the field of biology.
Prize for young scientists of the Academy of Sciences of Kazakhstan in the field of biological sciences.
A commemorative plaque adorns the building of the house where Murat Aitkhozhin lived.
On the occasion of the 60th anniversary of Murat Aitkhozhin, a memorial museum was created at the Institute of Molecular Biology and Biochemistry to commemorate his life and works.
In 2016, Murat Aitkhozhin was chosen as one of the nominees in the "Science" category of the national project «El Tulgasy» (Name of the Motherland). The idea of the project was to select the most significant and famous citizens of Kazakhstan whose names are now associated with the achievements of the country. More than 350,000 people voted in this project, and Aitkhozhin was voted into 5th place in his category.
Scientific Work
Aitkhozhin M. A. Molecular mechanisms of plant protein biosynthesis: selected works. Alma-Ata: Science of the Kazakh SSR, 1989. 287 p.
Aitkhozhin M. A. Ribonucleoprotein particles of higher plants: abstract of Dr. Biol. Sci. dissertation. Moscow, 1976. 44 p.
Aitkhozhin M. A., Azimuratova R. Zh., Kim T. N., Darkanbaev T. B. Isolation and characterization of fast-flowing fractions of Aspergillus niger cytoplasm RNA // Biochemistry. 1972. Vol. 37, No. 6. pp. 1276–1281.
Aitkhozhin M. A., Beklemishev A. V., Nazarova L. M., Filimonov N. G. Functional activity of heterologous ribosomes // Dokl. USSR Academy of Sciences. 1972. Vol. 203, No. 6. pp. 1403–1404.
Aitkhozhin M. A., Belitsina N. V., Spirin A. S. Nucleic acids in the early stages of embryo development in fish (on the example of the loach Misgurnus fossilis) // Biochemistry. 1964. Vol. 29, No. 1. pp. 169–175.
Aitkhozhin M. A., Iskakov B. K. Plant Informosomes. Alma-Ata: Science, 1982. 182 p.
Belitsina N. V., Aitkhozhin M. A., Gavrilova L. P., Spirin A. S. Information ribonucleic acids of differentiating animal cells // Biochemistry. 1964. Vol. 29, No. 2. pp. 363–374.
Doshchanov Kh. I., Pushkarev V. M., Polimbetova N. S., Aitkhozhin M. A. Liberation of informosomes from isolated nuclei of wheat germ in the in vitro system // Molec. biology. 1981. Vol. 15, No. 1. pp. 72–78.
Iskakov B. K., Aitkhozhin M. A. Proteins of informosomes associated with polyribosomes from germinating wheat germ // Molec. biology. 1979. Vol. 13, No. 5. pp. 1124–1129.
Methods of molecular biology, biochemistry and plant biotechnology: collection of articles / ed. M. A. Aitkhozhin, H. I. Doshchanov. Alma-Ata: Science of the Kazakh SSR, 1988. 165 p.
Spirin A. S., Belitsina N. V., Aitkhozhin M. A. Information RNA in early embryogenesis // Zh. obshch. biol. 1964. Vol. 25, No. 5. pp. 321–338.
Spirin A. S., Belitsina N. V., Aitkhozhin M. A. Messenger RNA in early embryogenesis (in English) // Fed. Proc. Transl. Suppl. 1965. Vol. 24, No. 5. pp. 907–915.
Filimonov N. G., Aitkhozhin M. A. Poly-A-protein complexes in the polyribosomes of germinating wheat germ // Biochemistry. 1978. Vol. 43, No. 6. pp. 1062–1066.
Filimonov N. G., Aitkhozhin M. A., Gazaryan K. G. Poly(A)-containing RNA from germinating wheat germ // Molec. biology. 1978. Vol. 12, No. 3. pp. 522–526.
Filimonov N. G., Martakova N. A., Popov L. S., Tarantul V. Z., Aitkhozhin M. A. Organization of nucleotide sequences of nuclear DNA of wheat germ // Biochemistry. 1982. Vol. 47, No. 7. pp. 1198–1207.
Shakulov R. S., Aitkhozhin M. A., Spirin A. S. // Biochemistry. 1962. Vol. 27, No. 4. pp. 744–751.
References
1939 births
1987 deaths
Kazakhstani scientists
Soviet scientists
Soviet biochemists
Soviet biologists
Molecular biologists
Recipients of the Lenin Prize
Recipients of the Order of Friendship of Peoples
Communist Party of the Soviet Union members
Moscow State University alumni
Al-Farabi Kazakh National University alumni
Eleventh convocation members of the Soviet of the Union | Murat Aitkhozhin | Chemistry | 2,939 |
41,547,608 | https://en.wikipedia.org/wiki/PAS754 | BS PAS 754:2014 is a British Standards Institution (BSI) Publicly Available Specification on software trustworthiness, published in May 2014. BS PAS 754:2014 was withdrawn following the publication of BS 10754-1:2018 in February 2018.
The PAS defines the overall principles for effective software trustworthiness, and includes technical, physical, cultural and behavioral measures alongside effective leadership and governance. It also identifies the necessary tools, techniques and processes and addresses safety, reliability, availability, security and resilience issues.
Structure of the standard
The official title of the standard is "Software Trustworthiness – Governance and management – Specification".
PAS 754:2014 has seven main clauses, plus three annexes, which cover:
0. Introduction
1. Scope
2. Normative References
3. Terms, definitions and acronyms
4. Approach
5. Concepts
6. Principles
Annex A. System Lifecycle
Annex B. Techniques
Bibliography
Development
The development of PAS 754 was led by the Trustworthy Software Initiative, a UK government-sponsored public-good activity aimed at "making software better".
The following organizations were involved in the development of this specification: Atkins Group; BIS; CPNI; Certification Europe; De Montfort University; Group 5 Training; IET; Microsoft (UK); MISRA; Nexor; Oxford Brookes University; QinetiQ; TechUK and University of Warwick.
References
British Standards
Information assurance standards
Information technology in the United Kingdom | PAS754 | Technology,Engineering | 290 |
15,194,142 | https://en.wikipedia.org/wiki/Wrch1 | RhoU (or Wrch1 or Chp2) is a small (~21 kDa) signaling G protein (more specifically a GTPase), and is a member of the Rho family of GTPases.
Wrch1 was identified in 2001 as the product of a gene induced by non-canonical Wnt signalling.
Together with RhoV/Chp, RhoU/Wrch delineates a Rho subclass related to Rac and Cdc42 that emerged in early multicellular organisms during evolution.
References
G proteins
Genes mutated in mice | Wrch1 | Chemistry | 116 |
4,603,176 | https://en.wikipedia.org/wiki/Covariant%20formulation%20of%20classical%20electromagnetism | The covariant formulation of classical electromagnetism refers to ways of writing the laws of classical electromagnetism (in particular, Maxwell's equations and the Lorentz force) in a form that is manifestly invariant under Lorentz transformations, in the formalism of special relativity using rectilinear inertial coordinate systems. These expressions both make it simple to prove that the laws of classical electromagnetism take the same form in any inertial coordinate system, and also provide a way to translate the fields and forces from one frame to another. However, this is not as general as Maxwell's equations in curved spacetime or non-rectilinear coordinate systems.
Covariant objects
Preliminary four-vectors
Lorentz tensors of the following kinds may be used in this article to describe bodies or particles:
four-displacement:
Four-velocity: where γ(u) is the Lorentz factor at the 3-velocity u.
Four-momentum: where is 3-momentum, is the total energy, and is rest mass.
Four-gradient:
The d'Alembertian operator is denoted □.
The signs in the following tensor analysis depend on the convention used for the metric tensor. The convention used here is (+ − − −), corresponding to the Minkowski metric tensor:
Electromagnetic tensor
The electromagnetic tensor is the combination of the electric and magnetic fields into a covariant antisymmetric tensor whose entries are B-field quantities.
and the result of raising its indices is
where E is the electric field, B the magnetic field, and c the speed of light.
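For reference, a minimal sketch of the standard component form of these matrices in SI units, under the (+ − − −) metric convention used in this article (readers should check the signs against their own preferred conventions):

```latex
% Field tensor from the four-potential, and its covariant/contravariant components
\[
  F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu =
  \begin{pmatrix}
    0       &  E_x/c &  E_y/c &  E_z/c \\
    -E_x/c  &  0     & -B_z   &  B_y   \\
    -E_y/c  &  B_z   &  0     & -B_x   \\
    -E_z/c  & -B_y   &  B_x   &  0
  \end{pmatrix},
  \qquad
  F^{\mu\nu} = \eta^{\mu\alpha}\eta^{\nu\beta}F_{\alpha\beta} =
  \begin{pmatrix}
    0       & -E_x/c & -E_y/c & -E_z/c \\
    E_x/c   &  0     & -B_z   &  B_y   \\
    E_y/c   &  B_z   &  0     & -B_x   \\
    E_z/c   & -B_y   &  B_x   &  0
  \end{pmatrix}.
\]
```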
Four-current
The four-current is the contravariant four-vector which combines electric charge density ρ and electric current density j:
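In component form this is the standard expression (a sketch in SI units):

```latex
% Four-current: charge density and current density combined
\[
  J^{\alpha} = \left(c\rho,\; j_x,\; j_y,\; j_z\right) = \left(c\rho,\; \mathbf{j}\right).
\]
```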
Four-potential
The electromagnetic four-potential is a covariant four-vector containing the electric potential (also called the scalar potential) ϕ and magnetic vector potential (or vector potential) A, as follows:
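A minimal sketch of the standard component form, with the covariant components obtained by lowering the index with the (+ − − −) metric:

```latex
% Electromagnetic four-potential in contravariant and covariant components
\[
  A^{\alpha} = \left(\frac{\phi}{c},\; \mathbf{A}\right),
  \qquad
  A_{\alpha} = \eta_{\alpha\beta}A^{\beta} = \left(\frac{\phi}{c},\; -\mathbf{A}\right).
\]
```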
The differential of the electromagnetic potential is
In the language of differential forms, which provides the generalisation to curved spacetimes, these are the components of a 1-form and a 2-form respectively. Here, d is the exterior derivative and ∧ the wedge product.
Electromagnetic stress–energy tensor
The electromagnetic stress–energy tensor can be interpreted as the flux density of the momentum four-vector, and is a contravariant symmetric tensor that is the contribution of the electromagnetic fields to the overall stress–energy tensor:
where ε0 is the electric permittivity of vacuum, μ0 is the magnetic permeability of vacuum, the Poynting vector is
and the Maxwell stress tensor is given by
The electromagnetic field tensor F constructs the electromagnetic stress–energy tensor T by the equation:
where η is the Minkowski metric tensor (with signature (+ − − −)). Notice that we use the fact that
which is predicted by Maxwell's equations.
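One standard way of writing this construction (a sketch consistent with the (+ − − −) convention and the field-tensor components given above; other references arrange the signs differently):

```latex
% Electromagnetic stress-energy tensor built from the field tensor (SI units)
\[
  T^{\mu\nu} = \frac{1}{\mu_0}\left(
      F^{\mu}{}_{\alpha}\,F^{\alpha\nu}
      + \frac{1}{4}\,\eta^{\mu\nu}\,F_{\alpha\beta}F^{\alpha\beta}
  \right),
  \qquad
  T^{00} = \frac{\varepsilon_0}{2}E^{2} + \frac{1}{2\mu_0}B^{2},
  \qquad
  T^{0i} = \frac{S_i}{c}.
\]
```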
Maxwell's equations in vacuum
In vacuum (or for the microscopic equations, not including macroscopic material descriptions), Maxwell's equations can be written as two tensor equations.
The two inhomogeneous Maxwell's equations, Gauss's Law and Ampère's law (with Maxwell's correction) combine into (with metric):
The homogeneous equations – Faraday's law of induction and Gauss's law for magnetism combine to form , which may be written using Levi-Civita duality as:
where Fαβ is the electromagnetic tensor, Jα is the four-current, εαβγδ is the Levi-Civita symbol, and the indices behave according to the Einstein summation convention.
Each of these tensor equations corresponds to four scalar equations, one for each value of β.
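For concreteness, the two tensor equations described above take the following standard form in SI units (the first is the Gauss–Ampère law, the second the Gauss–Faraday law written both with cyclic indices and with the Levi-Civita dual):

```latex
% Covariant Maxwell equations in vacuum (SI units)
\[
  \partial_\alpha F^{\alpha\beta} = \mu_0 J^{\beta},
  \qquad
  \partial_\gamma F_{\alpha\beta} + \partial_\alpha F_{\beta\gamma} + \partial_\beta F_{\gamma\alpha} = 0
  \;\;\Longleftrightarrow\;\;
  \epsilon^{\alpha\beta\gamma\delta}\,\partial_\beta F_{\gamma\delta} = 0 .
\]
```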
Using the antisymmetric tensor notation and comma notation for the partial derivative (see Ricci calculus), the second equation can also be written more compactly as:
In the absence of sources, Maxwell's equations reduce to:
which is an electromagnetic wave equation in the field strength tensor.
Maxwell's equations in the Lorenz gauge
The Lorenz gauge condition is a Lorentz-invariant gauge condition. (This can be contrasted with other gauge conditions such as the Coulomb gauge, which if it holds in one inertial frame will generally not hold in any other.) It is expressed in terms of the four-potential as follows:
In the Lorenz gauge, the microscopic Maxwell's equations can be written as:
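A minimal sketch of the standard form of both the gauge condition and the resulting wave equations (SI units):

```latex
% Lorenz gauge condition and the gauged Maxwell equations
\[
  \partial_\alpha A^{\alpha} = 0,
  \qquad
  \Box A^{\beta} \equiv \partial_\alpha \partial^{\alpha} A^{\beta} = \mu_0 J^{\beta}.
\]
```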
Lorentz force
Charged particle
Electromagnetic (EM) fields affect the motion of electrically charged matter: due to the Lorentz force. In this way, EM fields can be detected (with applications in particle physics, and natural occurrences such as in aurorae). In relativistic form, the Lorentz force uses the field strength tensor as follows.
Expressed in terms of coordinate time t, it is:
where pα is the four-momentum, q is the charge, and xβ is the position.
Expressed in frame-independent form, we have the four-force
where uβ is the four-velocity, and τ is the particle's proper time, which is related to coordinate time by .
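In the standard notation these two statements read (a sketch, SI units, with q the charge and u_β the covariant four-velocity):

```latex
% Relativistic Lorentz force in proper time and in coordinate time
\[
  \frac{dp^{\alpha}}{d\tau} = q\,F^{\alpha\beta}u_{\beta},
  \qquad
  \frac{dp^{\alpha}}{dt} = \frac{q}{\gamma(u)}\,F^{\alpha\beta}u_{\beta}.
\]
```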
Charge continuum
The density of force due to electromagnetism, whose spatial part is the Lorentz force, is given by
and is related to the electromagnetic stress–energy tensor by
Conservation laws
Electric charge
The continuity equation:
expresses charge conservation.
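In components this is the familiar statement (using the four-current J^α = (cρ, j) given earlier):

```latex
% Continuity equation: covariant and three-dimensional forms
\[
  \partial_\alpha J^{\alpha}
  = \frac{\partial \rho}{\partial t} + \nabla\cdot\mathbf{j}
  = 0 .
\]
```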
Electromagnetic energy–momentum
Using the Maxwell equations, one can see that the electromagnetic stress–energy tensor (defined above) satisfies the following differential equation, relating it to the electromagnetic tensor and the current four-vector
or
which expresses the conservation of linear momentum and energy by electromagnetic interactions.
Covariant objects in matter
Free and bound four-currents
In order to solve the equations of electromagnetism given here, it is necessary to add information about how to calculate the electric current, Jν. Frequently, it is convenient to separate the current into two parts, the free current and the bound current, which are modeled by different equations;
where
Maxwell's macroscopic equations have been used, in addition the definitions of the electric displacement D and the magnetic intensity H:
where M is the magnetization and P the electric polarization.
Magnetization–polarization tensor
The bound current is derived from the P and M fields which form an antisymmetric contravariant magnetization-polarization tensor
which determines the bound current
Electric displacement tensor
If this is combined with Fμν we get the antisymmetric contravariant electromagnetic displacement tensor which combines the D and H fields as follows:
The three field tensors are related by:
which is equivalent to the definitions of the D and H fields given above.
Maxwell's equations in matter
The result is that Ampère's law,
and Gauss's law,
combine into one equation:
The bound current and free current as defined above are automatically and separately conserved
Constitutive equations
Vacuum
In vacuum, the constitutive relations between the field tensor and displacement tensor are:
Antisymmetry reduces these 16 equations to just six independent equations. Because it is usual to define Fμν by
the constitutive equations may, in vacuum, be combined with the Gauss–Ampère law to get:
The electromagnetic stress–energy tensor in terms of the displacement is:
where δαπ is the Kronecker delta. When the upper index is lowered with η, it becomes symmetric and is part of the source of the gravitational field.
Linear, nondispersive matter
Thus we have reduced the problem of modeling the current, Jν to two (hopefully) easier problems — modeling the free current, Jνfree and modeling the magnetization and polarization, . For example, in the simplest materials at low frequencies, one has
where one is in the instantaneously comoving inertial frame of the material, σ is its electrical conductivity, χe is its electric susceptibility, and χm is its magnetic susceptibility.
The constitutive relations between the and F tensors, proposed by Minkowski for a linear materials (that is, E is proportional to D and B proportional to H), are:
where u is the four-velocity of material, ε and μ are respectively the proper permittivity and permeability of the material (i.e. in rest frame of material), and denotes the Hodge star operator.
Lagrangian for classical electrodynamics
Vacuum
The Lagrangian density for classical electrodynamics is composed by two components: a field component and a source component:
In the interaction term, the four-current should be understood as an abbreviation of many terms expressing the electric currents of other charged fields in terms of their variables; the four-current is not itself a fundamental field.
The Lagrange equations for the electromagnetic lagrangian density can be stated as follows:
Noting
the expression inside the square bracket is
The second term is
Therefore, the electromagnetic field's equations of motion are
which is the Gauss–Ampère equation above.
Matter
Separating the free currents from the bound currents, another way to write the Lagrangian density is as follows:
Using Lagrange equation, the equations of motion for can be derived.
The equivalent expression in vector notation is:
See also
Covariant classical field theory
Electromagnetic tensor
Electromagnetic wave equation
Liénard–Wiechert potential for a charge in arbitrary motion
Moving magnet and conductor problem
Inhomogeneous electromagnetic wave equation
Proca action
Quantum electrodynamics
Relativistic electromagnetism
Stueckelberg action
Wheeler–Feynman absorber theory
Notes
References
Further reading
The Feynman Lectures on Physics Vol. II Ch. 25: Electrodynamics in Relativistic Notation
Concepts in physics
Electromagnetism
Special relativity | Covariant formulation of classical electromagnetism | Physics | 1,989 |
618,077 | https://en.wikipedia.org/wiki/Power%20engineering | Power engineering, also called power systems engineering, is a subfield of electrical engineering that deals with the generation, transmission, distribution, and utilization of electric power, and the electrical apparatus connected to such systems. Although much of the field is concerned with the problems of three-phase AC power – the standard for large-scale power transmission and distribution across the modern world – a significant fraction of the field is concerned with the conversion between AC and DC power and the development of specialized power systems such as those used in aircraft or for electric railway networks. Power engineering draws the majority of its theoretical base from electrical engineering and mechanical engineering.
History
Pioneering years
Electricity became a subject of scientific interest in the late 17th century. Over the next two centuries a number of important discoveries were made including the incandescent light bulb and the voltaic pile. Probably the greatest discovery with respect to power engineering came from Michael Faraday who in 1831 discovered that a change in magnetic flux induces an electromotive force in a loop of wire—a principle known as electromagnetic induction that helps explain how generators and transformers work.
In 1881 two electricians built the world's first power station at Godalming in England. The station employed two waterwheels to produce an alternating current that was used to supply seven Siemens arc lamps at 250 volts and thirty-four incandescent lamps at 40 volts. However supply was intermittent and in 1882 Thomas Edison and his company, The Edison Electric Light Company, developed the first steam-powered electric power station on Pearl Street in New York City. The Pearl Street Station consisted of several generators and initially powered around 3,000 lamps for 59 customers. The power station used direct current and operated at a single voltage. Since the direct current power could not be easily transformed to the higher voltages necessary to minimise power loss during transmission, the possible distance between the generators and load was limited to around half-a-mile (800 m).
That same year in London Lucien Gaulard and John Dixon Gibbs demonstrated the first transformer suitable for use in a real power system. The practical value of Gaulard and Gibbs' transformer was demonstrated in 1884 at Turin where the transformer was used to light up forty kilometres (25 miles) of railway from a single alternating current generator. Despite the success of the system, the pair made some fundamental mistakes. Perhaps the most serious was connecting the primaries of the transformers in series so that switching one lamp on or off would affect other lamps further down the line. Following the demonstration George Westinghouse, an American entrepreneur, imported a number of the transformers along with a Siemens generator and set his engineers to experimenting with them in the hopes of improving them for use in a commercial power system.
One of Westinghouse's engineers, William Stanley, recognised the problem with connecting transformers in series as opposed to parallel and also realised that making the iron core of a transformer a fully enclosed loop would improve the voltage regulation of the secondary winding. Using this knowledge he built the world's first practical transformer based alternating current power system at Great Barrington, Massachusetts in 1886. In 1885 the Italian physicist and electrical engineer Galileo Ferraris demonstrated an induction motor and in 1887 and 1888 the Serbian-American engineer Nikola Tesla filed a range of patents related to power systems including one for a practical two-phase induction motor which Westinghouse licensed for his AC system.
By 1890 the power industry had flourished and power companies had built thousands of power systems (both direct and alternating current) in the United States and Europe – these networks were effectively dedicated to providing electric lighting. During this time a fierce rivalry in the US known as the "war of the currents" emerged between Edison and Westinghouse over which form of transmission (direct or alternating current) was superior. In 1891, Westinghouse installed the first major power system that was designed to drive an electric motor and not just provide electric lighting. The installation powered a synchronous motor at Telluride, Colorado with the motor being started by a Tesla induction motor. On the other side of the Atlantic, Oskar von Miller built a 20 kV 176 km three-phase transmission line from Lauffen am Neckar to Frankfurt am Main for the Electrical Engineering Exhibition in Frankfurt. In 1895, after a protracted decision-making process, the Adams No. 1 generating station at Niagara Falls began transmitting three-phase alternating current power to Buffalo at 11 kV. Following completion of the Niagara Falls project, new power systems increasingly chose alternating current as opposed to direct current for electrical transmission.
Twentieth century
Power engineering and Bolshevism
The generation of electricity was regarded as particularly important following the Bolshevik seizure of power. Lenin stated "Communism is Soviet power plus the electrification of the whole country." He was subsequently featured on many Soviet posters, stamps etc. presenting this view. The GOELRO plan was initiated in 1920 as the first Bolshevik experiment in industrial planning and in which Lenin became personally involved. Gleb Krzhizhanovsky was another key figure involved, having been involved in the construction of a power station in Moscow in 1910. He had also known Lenin since 1897 when they were both in the St. Petersburg chapter of the Union of Struggle for the Liberation of the Working Class.
Power engineering in the USA
In 1936 the first commercial high-voltage direct current (HVDC) line using mercury-arc valves was built between Schenectady and Mechanicville, New York. HVDC had previously been achieved by installing direct current generators in series (a system known as the Thury system) although this suffered from serious reliability issues. In 1957 Siemens demonstrated the first solid-state rectifier (solid-state rectifiers are now the standard for HVDC systems) however it was not until the early 1970s that this technology was used in commercial power systems. In 1959 Westinghouse demonstrated the first circuit breaker that used SF6 as the interrupting medium. SF6 is a far superior dielectric to air and, in recent times, its use has been extended to produce far more compact switching equipment (known as switchgear) and transformers. Many important developments also came from extending innovations in the ICT field to the power engineering field. For example, the development of computers meant load flow studies could be run more efficiently allowing for much better planning of power systems. Advances in information technology and telecommunication also allowed for much better remote control of the power system's switchgear and generators.
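As a rough illustration of what a computerized load flow study does (a minimal sketch only, not any specific industry tool; the three-bus network, line reactances and injections below are invented for the example), a linearized "DC" power flow can be solved with a few lines of linear algebra:

```python
# Minimal DC (linearized) power flow sketch -- illustrative only.
# Bus 0 is the slack bus; reactances and injections are in per-unit.
import numpy as np

# Hypothetical 3-bus network: (from_bus, to_bus, reactance x)
lines = [(0, 1, 0.10), (1, 2, 0.20), (0, 2, 0.25)]
P = np.array([0.0, -0.6, -0.4])   # net injections; loads at buses 1 and 2
n = 3

# Build the bus susceptance matrix B from the line reactances.
B = np.zeros((n, n))
for i, j, x in lines:
    b = 1.0 / x
    B[i, i] += b
    B[j, j] += b
    B[i, j] -= b
    B[j, i] -= b

# Remove the slack bus row/column and solve B_red * theta = P_red for angles.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

# Line flows follow from angle differences: P_ij = (theta_i - theta_j) / x.
for i, j, x in lines:
    print(f"flow {i}->{j}: {(theta[i] - theta[j]) / x:+.3f} p.u.")
```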
Power
Power Engineering deals with the generation, transmission, distribution and utilization of electricity as well as the design of a range of related devices. These include transformers, electric generators, electric motors and power electronics.
Power engineers may also work on systems that do not connect to the grid. These systems are called off-grid power systems and may be used in preference to on-grid systems for a variety of reasons. For example, in remote locations it may be cheaper for a mine to generate its own power rather than pay for connection to the grid and in most mobile applications connection to the grid is simply not practical.
Fields
Electricity generation covers the selection, design and construction of facilities that convert energy from primary forms to electric power.
Electric power transmission requires the engineering of high voltage transmission lines and substation facilities to interface to generation and distribution systems. High voltage direct current systems are one of the elements of an electric power grid.
Electric power distribution engineering covers those elements of a power system from a substation to the end customer.
Power system protection is the study of the ways an electrical power system can fail, and the methods to detect and mitigate such failures.
In most projects, a power engineer must coordinate with many other disciplines such as civil and mechanical engineers, environmental experts, and legal and financial personnel. Major power system projects such as a large generating station may require scores of design professionals in addition to the power system engineers. At most levels of professional power system engineering practice, the engineer will require as much in the way of administrative and organizational skills as electrical engineering knowledge.
Professional societies and international standards organizations
In both the UK and the US, professional societies had long existed for civil and mechanical engineers. The Institution of Electrical Engineers (IEE) was founded in the UK in 1871, and the AIEE in the United States in 1884. These societies contributed to the exchange of electrical knowledge and the development of electrical engineering education.
On an international level, the International Electrotechnical Commission (IEC), which was founded in 1906, prepares standards for power engineering, with 20,000 electrotechnical experts from 172 countries developing global specifications based on consensus.
See also
Energy economics
Industrial ecology
Power electronics
Power system simulation
Power engineering software
References
External links
IEEE Power Engineering Society
Jadavpur University, Department of Power Engineering
Power Engineering International Magazine Articles
Power Engineering Magazine Articles
American Society of Power Engineers, Inc.
National Institute for the Uniform Licensing of Power Engineer Inc.
Worcester Polytechnic Institute Power Systems Engineering
P
P | Power engineering | Physics,Engineering | 1,795 |
48,712,402 | https://en.wikipedia.org/wiki/Parthanatos | Parthanatos (derived from the Greek Θάνατος, "Death") is a form of programmed cell death that is distinct from other cell death processes such as necrosis and apoptosis. While necrosis is caused by acute cell injury resulting in traumatic cell death and apoptosis is a highly controlled process signalled by apoptotic intracellular signals, parthanatos is caused by the accumulation of Poly(ADP ribose) (PAR) and the nuclear translocation of apoptosis-inducing factor (AIF) from mitochondria. Parthanatos is also known as PARP-1 dependent cell death. PARP-1 mediates parthanatos when it is over-activated in response to extreme genomic stress and synthesizes PAR which causes nuclear translocation of AIF. Parthanatos is involved in diseases that afflict hundreds of millions of people worldwide. Well known diseases involving parthanatos include Parkinson's disease, stroke, heart attack, and diabetes. It also has potential use as a treatment for ameliorating disease and various medical conditions such as diabetes and obesity.
History
Name
The term parthanatos was not coined until a review in 2009. The word parthanatos is derived from Thanatos, the personification of death in Greek mythology.
Discovery
Parthanatos was first discovered in a 2006 paper by Yu et al. studying the increased production of mitochondrial reactive oxygen species (ROS) by hyperglycemia. This phenomenon is linked with negative effects arising from clinical complications of diabetes and obesity.
Researchers noticed that high glucose concentrations led to overproduction of reactive oxygen species and rapid fragmentation of mitochondria. Inhibition of mitochondrial pyruvate uptake blocked the increase of ROS, but did not prevent mitochondrial fragmentation. After incubating cells with the non-metabolizable stereoisomer L-glucose, neither reactive oxygen species increase nor mitochondrial fragmentation were observed. Ultimately, the researchers found that mitochondrial fragmentation mediated by the fission process is a necessary component for high glucose-induced respiration increase and ROS overproduction.
Extended exposure to high glucose conditions is similar to untreated diabetic conditions, and so the effects mirror each other. Under these conditions, the exposure creates a periodic and prolonged increase in ROS production along with changes in mitochondrial morphology. If mitochondrial fission was inhibited, the periodic fluctuation of ROS production in a high glucose environment was prevented. This research shows that when ROS-induced cell damage is too great, PARP-1 will initiate cell death.
Morphology
Structure of PARP-1
Poly(ADP-ribose) polymerase-1 (PARP-1) is a nuclear enzyme that is found universally in all eukaryotes and is encoded by the PARP-1 gene. It belongs to the PARP family, a group of catalysts that transfer ADP-ribose units from NAD+ (nicotinamide adenine dinucleotide) to protein targets, thus creating branched or linear polymers. The major domains of PARP-1 impart the ability to fulfill its functions. These protein sections include the DNA-binding domain on the N-terminus (which allows PARP-1 to detect DNA breaks), the automodification domain (which has a BRCA1 C-terminus motif that is key for protein–protein interactions), and a catalytic site with the NAD+-fold (characteristic of mono-ADP-ribosylating toxins).
Role of PARP-1
Normally, PARP-1 is involved in a variety of functions that are important for cell homeostasis such as mitosis. Another of these roles is DNA repair, including the repair of base lesions and single-strand breaks. PARP-1 interacts with a wide variety of substrates including histones, DNA helicases, high mobility group proteins, topoisomerases I and II, single-strand break repair factors, base-excision repair factors, and several transcription factors.
Role of PAR
PARP-1 accomplishes many of its roles through regulating poly(ADP-ribose) (PAR). PAR is a polymer that varies in length and can be either linear or branched. It is negatively charged which allows it to alter the function of the proteins it binds to either covalently or non-covalently. PAR binding affinity is strongest for branched polymers, weaker for long linear polymers and weakest for short linear polymers. PAR also binds selectively with differing strengths to the different histones. It is suspected that PARP-1 modulates processes (such as DNA repair, DNA transcription, and mitosis) through the binding of PAR to its target proteins.
Pathway
The parthanatos pathway is activated by DNA damage caused by genotoxic stress or excitotoxicity. This damage is recognized by the PARP-1 enzyme, which causes an upregulation of PAR. PAR causes translocation of apoptosis-inducing factor (AIF) from the mitochondria to the nucleus, where it induces DNA fragmentation and ultimately cell death. This general pathway has now been outlined for almost a decade. While considerable progress has been made in understanding the molecular events in parthanatos, efforts are still ongoing to completely identify all of the major players within the pathway, as well as how spatial and temporal relationships between mediators affect them.
Pathway activation
Extreme DNA damage causing breaks and changes in chromatin structure has been shown to induce the parthanatos pathway. The stimuli that cause this DNA damage can come from a variety of sources. Methylnitronitrosoguanidine, an alkylating agent, has been widely used in several studies to induce the parthanatos pathway. A number of other stimuli or toxic conditions have also been used to cause DNA damage, such as H2O2, NO, and ONOO− generation (e.g., during oxygen–glucose deprivation).
The magnitude, length of exposure, type of cell used, and purity of the culture, are all factors that can influence the activation of the pathway. The damage must be extreme enough for the chromatin structure to be altered. This change in structure is recognized by the N-terminal zinc-finger domain on the PARP-1 protein. The protein can recognize both single and double strand DNA breaks.
Cell death initiation
Once the PARP-1 protein recognizes the DNA damage, it catalyzes the synthesis of PAR. PAR is formed either as a branched or a linear molecule; branched and long-chain polymers are more toxic to the cell than simple short polymers. The more extreme the DNA damage, the more PAR accumulates in the nucleus. Once enough PAR has accumulated, it translocates from the nucleus into the cytosol. One study has suggested that PAR can translocate as a free polymer; however, translocation of protein-conjugated PAR cannot be ruled out and is in fact a topic of active research. PAR moves through the cytosol and enters the mitochondria through depolarization. Within the mitochondria, PAR binds directly to AIF, which has a PAR polymer binding site, causing AIF to dissociate from the mitochondria. AIF is then translocated to the nucleus, where it induces chromatin condensation and large-scale (50 kb) DNA fragmentation. How AIF induces these effects is still unknown. It is thought that a currently unidentified AIF-associated nuclease (PAAN) may be involved. Human AIF has a DNA-binding site, which would indicate that AIF binds directly to the DNA in the nucleus and causes these changes directly. However, as mouse AIF does not have this binding domain yet mouse cells are still able to undergo parthanatos, it is evident that another mechanism must also be involved.
PARG
PAR, which is responsible for the activation of AIF, is regulated in the cell by the enzyme poly(ADP-ribose) glycohydrolase (PARG). After PAR is synthesized by PARP-1, it is degraded through a process catalyzed by PARG. PARG has been found to protect against PAR-mediated cell death while its deletion has increased toxicity through the accumulation of PAR.
Other proposed mechanisms
Before the discovery of the PAR and AIF pathway, it was thought that the overactivation of PARP-1 led to overconsumption of NAD+. As a result of NAD+ depletion, a decrease in ATP production would occur, and the resulting loss of energy would kill the cell. However, it is now known that this loss of energy would not be enough to account for cell death: in cells lacking PARG, activation of PARP-1 leads to cell death in the presence of ample NAD+.
Differences between cell death pathways
Parthanatos is defined as a cell death pathway distinct from apoptosis for a few key reasons. Primarily, apoptosis depends on the caspase pathway activated by cytochrome c release, while the parthanatos pathway is able to act independently of caspases. Furthermore, unlike apoptosis, parthanatos causes large-scale DNA fragmentation (apoptosis only produces small-scale fragmentation) and does not form apoptotic bodies.
While parthanatos shares similarities with necrosis, it also has several differences. Necrosis is not a regulated pathway and does not involve controlled nuclear fragmentation. And while parthanatos does involve loss of cell membrane integrity like necrosis, it is not accompanied by cell swelling.
Comparison of cell death types
Pathology and treatment
Neurotoxicity
The PARP enzyme was originally connected to neural degradation pathways in 1993. Elevated levels of nitric oxide (NO) have been shown to cause neurotoxicity in samples of rat hippocampal neurons. A deeper look into the effects of NO on neurons showed that nitric oxide causes damage to DNA strands; the damage in turn elicits PARP activity that leads to further degradation and neuronal death. Blockers of PAR synthesis halted the cell death mechanisms in the presence of elevated NO levels.
PARP activity has also been linked to the neurodegenerative properties of toxin-induced Parkinsonism. 1-Methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) is a neurotoxin that has been linked to neurodegeneration and the development of Parkinson disease-like symptoms in patients since 1983. The MPTP toxin's effects were discovered after four people intravenously injected the toxin, which they had inadvertently produced while attempting a street synthesis of the meperidine analog MPPP. The link between MPTP and PARP was found later, when research showed that the effects of MPTP on neurons were reduced in mutated cells lacking the PARP gene. The same research also showed highly increased PARP activation in dopamine-producing cells in the presence of MPTP.
Alpha-synuclein is a protein that binds to DNA and modulates DNA repair. A key feature of Parkinson's disease is the pathologic accumulation and aggregation of alpha-synuclein. In the neurons of individuals with Parkinson's disease, alpha-synuclein is deposited as fibrils in intracytoplasmic structures referred to as Lewy bodies. Formation of pathologic alpha-synuclein is associated with activation of PARP1, increased poly(ADP) ribose generation and further acceleration of pathologic alpha-synuclein formation. This process can lead to cell death by parthanatos.
Multisystem involvement
Parthanatos, as a cell death pathway, is being increasingly linked to several syndromes connected with specific tissue damage outside of the nervous system. This is highlighted in the mechanism of streptozotocin (STZ)-induced diabetes. STZ is a naturally occurring chemical produced by the soil bacterium Streptomyces achromogenes. In high doses, STZ has been shown to produce diabetic symptoms by damaging insulin-producing pancreatic β cells. The degradation of β cells by STZ was linked to PARP in 1980, when studies showed that a PAR synthesis inhibitor reduced STZ's effects on insulin synthesis. Inhibition of PARP allows pancreatic tissue to sustain insulin synthesis levels and reduces β cell degradation even at elevated STZ levels.
PARP activation has also been preliminarily connected with arthritis, colitis, and liver toxicity.
Therapy
The multi-step nature of the parthanatos pathway allows for chemical manipulation of its activation and inhibition for use in therapy. This rapidly developing field is currently focused on the use of PARP blockers as treatments for chronic degenerative illnesses, and has culminated in third-generation inhibitors such as midazoquinolinone and isoquinolindione currently entering clinical trials.
Another path for treatments is to recruit the parthanatos pathway to kill cancer cells; however, no treatments have passed the theoretical stage.
See also
Apoptosis inducing factor
Programmed cell death
PARP1
References
Cellular processes
Programmed cell death
Medical aspects of death | Parthanatos | Chemistry,Biology | 2,701 |
74,336,773 | https://en.wikipedia.org/wiki/Aldehyde-stabilized%20cryopreservation | Aldehyde-stabilized cryopreservation is a new technique for cryopreservation first demonstrated in 2016 by Robert L. McIntyre and Gregory Fahy at the cryobiology research company 21st Century Medicine, Inc. This technique uses a particular implementation of fixation and vitrification that can successfully preserve a rabbit brain in "near perfect" condition at −135 °C, with the cell membranes, synapses, and intracellular structures intact in electron micrographs. In 2016, McIntyre and Fahy were awarded the first portion of the Brain Preservation Technology Prize, the Small Animal Brain Preservation Prize, by the Brain Preservation Foundation for the successful cryopreservation of a whole mouse brain. The cryopreserved brain was rewarmed and no serious degradation was found to have occurred; the brain structure under electron microscopic evaluation after rewarming remained well-preserved. Although this technique has not yet led to a successful revival of a cryopreserved brain, some researchers see it as providing promising directions for future research.
See also
Cryonics
References
Cryonics | Aldehyde-stabilized cryopreservation | Biology | 223 |
1,337,152 | https://en.wikipedia.org/wiki/Zoo%20hypothesis | The zoo hypothesis speculates on the assumed behavior and existence of technologically advanced extraterrestrial life and the reasons they refrain from contacting Earth. It is one of many theoretical explanations for the Fermi paradox. The hypothesis states that extraterrestrial life intentionally avoids communication with Earth to allow for natural evolution and sociocultural development, and avoiding interplanetary contamination, similar to people observing animals at a zoo. The hypothesis seeks to explain the apparent absence of extraterrestrial life despite its generally accepted plausibility and hence the reasonable expectation of its existence.
Extraterrestrial life forms might, for example, choose to allow contact once the human species has passed certain technological, political, and/or ethical standards. Alternatively, they may withhold contact until humans force contact upon them, possibly by sending a spacecraft to an extraterrestrial-inhabited planet. In this regard, reluctance to initiate contact could reflect a sensible desire to minimize risk. An extraterrestrial society with advanced remote-sensing technologies may conclude that direct contact with neighbors confers added risks to itself without an added benefit. A variant on the zoo hypothesis suggested by former MIT Haystack Observatory scientist John Allen Ball is the "laboratory" hypothesis, in which humanity is being subjected to experiments, with Earth serving as a giant laboratory. Ball describes this hypothesis as "morbid" and "grotesque", simultaneously overlooking the possibility that such experiments may be altruistic, i.e., designed to accelerate the pace of civilization to overcome a tendency for intelligent life to destroy itself, until a species is sufficiently developed to establish contact.
Assumptions
The zoo hypothesis assumes, first, that whenever the conditions are such that life can exist and evolve, it will, and secondly, there are many places where life can exist and a large number of extraterrestrial cultures in existence. It also assumes that these extraterrestrials have great reverence for independent, natural evolution and development. In particular, assuming that intelligence is a physical process that acts to maximize the diversity of a system's accessible futures, a fundamental motivation for the zoo hypothesis would be that premature contact would "unintelligently" reduce the overall diversity of paths the universe itself could take.
These ideas are perhaps most plausible if there is a relatively universal cultural or legal policy among a plurality of extraterrestrial civilizations necessitating isolation with respect to civilizations at Earth-like stages of development. In a universe without a hegemonic power, random single civilizations with independent principles would make contact. This makes a crowded universe with clearly defined rules seem more plausible.
If there is a plurality of extraterrestrial cultures, however, this theory may break down under the uniformity of motive concept because it would take just a single extraterrestrial civilization, or simply a small group within any given civilisation, to decide to act contrary to the imperative within human range of detection for it to be undone, and the probability of such a violation of hegemony increases with the number of civilizations. This idea, however, becomes more plausible if all civilizations tend to evolve similar cultural standards and values with regard to contact much like convergent evolution on Earth has independently evolved eyes on numerous occasions, or all civilizations follow the lead of some particularly distinguished civilization, such as the first civilization among them.
In this hypothesis, the problem of universal ethical homogeneity is solved because the acquisition of a persistent advanced level of civilization requires overcoming many problems, such as self-destruction, war, overpopulation, pollution, and scarcity. Managing to solve these problems could guide a civilization to adopt a responsible and wise behavior, otherwise they would disappear (involving other solutions to the Fermi paradox). In the zoo hypothesis, no contact would be possible until humanity had acquired a certain level of civilization and maturity (responsibility and wisdom), otherwise it would become a potential threat.
One estimate for when humanity might be able to test the zoo hypothesis, essentially by eliminating ways technological extraterrestrials within the Galaxy may be able to hide, is some time within the next half century.
Fermi paradox
A modified zoo hypothesis is a possible solution to the Fermi paradox. The time between the emergence of the first civilization within the Milky Way and all subsequent civilizations could be enormous. Monte Carlo simulation shows the first few inter-arrival times between emergent civilizations would be similar in length to geologic epochs on Earth. The zoo hypothesis assumes a civilization may have a ten-million, one-hundred-million, or half-billion-year head start on humanity, i.e., it may have the capability to completely negate our best attempts to detect it.
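A minimal sketch of the kind of Monte Carlo experiment described above (the civilization count and emergence window below are arbitrary assumptions chosen only to illustrate the inter-arrival-time idea, not values from any cited study):

```python
# Illustrative Monte Carlo: spread N civilization "emergence" times uniformly
# over a multi-billion-year window and look at the head start of the first one.
import numpy as np

rng = np.random.default_rng(0)
N_CIV = 100            # assumed number of civilizations (arbitrary)
WINDOW_YR = 5e9        # assumed emergence window in years (arbitrary)
TRIALS = 10_000

first_gaps = []
for _ in range(TRIALS):
    times = np.sort(rng.uniform(0.0, WINDOW_YR, size=N_CIV))
    first_gaps.append(times[1] - times[0])   # gap between first and second civilization

print(f"median head start of the first civilization: {np.median(first_gaps):.3e} years")
# With these assumptions the median gap is on the order of tens of millions of
# years, comparable in length to geologic epochs, which is the point made above.
```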
The zoo hypothesis relies in part on applying the concept of hegemonic power to the Fermi paradox. Even if a first hegemonic non-interventionist grand civilization (first civilization) is long gone, their initial legacy could persist in the form of a passed-down tradition, or perhaps in an artificial lifeform (artificial superintelligence) dedicated to a non-interventionist hegemonic goal without the risk of death. Thus, the hegemonic power does not even have to be the first civilization, but simply the first to spread its non-interventionist doctrine and control over a large volume of the galaxy. If just one civilization acquired hegemony in the distant past, it could form an unbroken chain of taboo against rapacious colonization in favour of non-interference in any civilizations that follow. The uniformity of motive concept previously mentioned would become moot in such a situation. The main problem would be how a galaxy-wide civilization would block Earth from receiving all intentional or unintentional communications.
Nonetheless, if the oldest civilization still present in the Milky Way has, for example, a 100-million-year time advantage over the next oldest civilization, then it is conceivable that they could be in the singular position of being able to control, monitor, influence or isolate the emergence of every civilization that follows within their sphere of influence. This is analogous to what happens on Earth within our own civilization on a daily basis, in that everyone born on this planet is born into a pre-existing system of familial associations, customs, traditions and laws that were already long established before our birth and which we have little or no control over.
METI (Messaging Extraterrestrial Intelligence)
Overcoming the zoo hypothesis is one of the goals of METI, an organization created in 2015 to communicate with extraterrestrials, an active form of the search for extraterrestrials (SETI). METI, however, has been criticized for not representing humanity's collective will and for potentially endangering humanity.
Criticism
Some critics of the hypothesis say that only a single dissident group in an extraterrestrial civilization, or alternatively the existence of galactic cliques instead of a unified galactic club, would be enough to break the pact of no contact. To Stephen Webb and others, it seems unlikely, taking humans and human intercivilizational politics as reference, that such a prohibition would be in effect for millions of years, or at least for the whole of human existence, without a single breach. Others say that the zoo hypothesis, along with its planetarium variation, is highly speculative and more aligned with theological theories. One possible counterargument to the dissident (rogue) group argument is that extraterrestrial artificial superintelligences dominate space, including space occupied by biological intelligences; moreover, separate artificial superintelligences are assumed to tend towards a network of merged superintelligences, thereby dissuading rogue behaviour.
Appearance in fiction
The zoo hypothesis is a common theme in science fiction.
1930s
1937: In Olaf Stapledon's 1937 novel Star Maker, great care is taken by the Symbiont race to keep its existence hidden from "pre-utopian" primitives, "lest they should lose their independence of mind.” It is only when such worlds become utopian-level space travellers that the Symbionts make contact and bring the young utopia to an equal footing.
1950s
1951: Arthur C. Clarke's The Sentinel (first published in 1951) and its later novel adaptation 2001: A Space Odyssey (1968) feature a beacon which is activated when the human race discovers it on the moon.
1953: In Childhood's End, a novel by Arthur C. Clarke published in 1953, the alien cultures had been observing and registering Earth's evolution and human history for thousands (perhaps millions) of years. At the beginning of the book, when mankind is about to achieve spaceflight, the aliens reveal their existence and quickly end the arms race, colonialism, racial segregation and the Cold War.
1960s
In Star Trek, the Federation (including humans) has a strict Prime Directive policy of nonintervention with less technologically advanced cultures which the Federation encounters. The threshold of inclusion is the independent technological development of faster-than-light propulsion. In the show's canon, the Vulcan race limited their encounters to observation until Humans made their first warp flight, after which they initiated first contact, indicating the practice predated the Human race's advance of this threshold. Additionally, in the episode "The Chase (TNG)", a message from a first (or early) civilization is discovered, hidden in the DNA of sentient species spread across many worlds, something that could only have been fully discovered after a race had become sufficiently advanced.
In Hard to Be a God by Arkady and Boris Strugatsky, the (unnamed) medieval-esque planet where the novel takes action is protected by the advanced civilization of Earth, and the observers from Earth present on the planet are forbidden to intervene and make overt contact. One of the major themes of the novel is the ethical dilemma presented by such a stance to the observers.
1980s
1986: In Speaker for the Dead by Orson Scott Card, the human xenobiologists and xenologers, biologists and anthropologists observing alien life are forbidden from giving the native species, the Pequeninos, any technology or information. When one of the xenobiologists is killed in an alien ceremony, they are forbidden to mention it. This happens again until Ender Wiggin, the main character of Ender's Game, explains to the Pequeninos that humans cannot partake in the ceremony because it kills them. While this is not exactly an example of the zoo hypothesis, since humanity makes contact, it is very similar and the humans seek to keep the Pequeninos ignorant of technology.
1987: In Julian May's 1987 novel Intervention, the five alien races of the Galactic Milieu keep Earth under surveillance, but do not intervene until humans demonstrate mental and ethical maturity through a paranormal prayer of peace.
1989: Iain M. Banks' The State of the Art depicts the Culture secretly visiting Earth and then deciding to leave it uncontacted, watching its development as a control group, to confirm whether their manipulations of other civilizations are ultimately for the best (the laboratory hypothesis). Other works by Banks depict the Culture (or a Culture equivalent) routinely manipulating less advanced civilizations, including pre-industrial ones (e.g., Inversions), both covertly and overtly, for philosophical or foreign policy purposes.
1989: Bill Watterson's Calvin and Hobbes comic strip for 8 November 1989 alludes to the possibility of an ethical threshold for first contact (or at least for the prudence of first contact) in Calvin's remark "Sometimes I think the surest sign that intelligent life exists elsewhere in the universe is that none of it has tried to contact us."
2000s
2000: In Robert J. Sawyer's SF novel Calculating God (2000), Hollus, a scientist from an advanced alien civilization, denies that her government is operating under the prime directive.
2003: In "Cancelled", the inaugural episode of South Park's seventh season, aliens refrain from contacting Earth because the planet is the subject and setting of a reality television show. Unlike most variations of the zoo hypothesis, where contact is not initiated in order to allow organic socioeconomic, cultural, and technological development, the aliens in this episode refrain from contact for the sole purpose of entertainment. In essence, the aliens treat all of Earth like the titular character in The Truman Show in order to maintain the show's integrity.
2008: In the video game Spore, which simulates the evolution and life of species in a fictional galaxy, intelligent species in the "Space Stage" cannot contact those in earlier stages, which have not yet unified their planets or developed spaceflight. However, they are allowed to abduct their citizens/members, to create crop circles in their terrain, and to place on their planets a tool called the "monolith", which accelerates their technological evolution.
2010s
2012: In the sci-fi video game Star Citizen, the zoo hypothesis is vaguely referenced in a lore point and is referred to as the Fair Chance Act. In the document, humans are generally forbidden from terraforming, mining, and inhabiting planets if the world is found to harbor lifeforms capable of developing intelligence. As the game's active development continues, multiple planetary systems featuring planets protected under the Fair Chance Act are planned for implementation.
2016: In the video game Stellaris, players control an interstellar empire that can encounter less technologically advanced, non-spacefaring civilizations. Depending on player choices and their empire's organization, they can observe such "pre-faster-than-light" planets in a manner similar to the zoo hypothesis, using science stations with missions that can include passive observation, technological enlightenment, covert infiltration and indoctrination. Players can also discover "pre-sapient" species, which can be uplifted to sentience using scientific research projects.
References
Extraterrestrial life
Hypotheses
Fermi paradox
Biological hypotheses
Search for extraterrestrial intelligence | Zoo hypothesis | Astronomy,Biology | 2,850 |
13,933,319 | https://en.wikipedia.org/wiki/Atlantic%20Terra%20Cotta%20Company | The Atlantic Terra Cotta Company was established in 1879 as the Perth Amboy Terra Cotta Company in Perth Amboy, New Jersey, due to rich regional supplies of clay. It was one of the first successful glazed architectural terra-cotta companies in the United States.
History
Perth Amboy Terra Cotta Company
Alfred Hall had previously owned a company that produced porcelain and household wares but was inspired to begin production of architectural terra cotta after receiving advice from his nephew. Hall attempted to dominate the market for architectural terra cotta, but his success led to the formation of multiple regional competitors in the 1880s, such as the New Jersey Terra Cotta Company, the Standard Terra Cotta Company, and the Excelsior Terra Cotta Company.
The demand for architectural terra cotta grew dramatically in the last two decades of the 1800s, with total annual industry profits rising from one million dollars in 1890 to eight million in 1900.
Atlantic Terra Cotta Company
Between 1906 and 1907 the Perth Amboy Terra Cotta Company, the Excelsior Terra Cotta Company, the Standard Terra Cotta Company, and the Atlantic Terra Cotta Company of Staten Island merged together, with the newly formed corporation named after the latter group. The sheer size of the new group allowed it to become the leading manufacturer on the East Coast and secure contracts producing terra cotta for much of the steel-frame construction in the Northeast.
At the time of the merger the company had four plants, in Perth Amboy and Rocky Hill, New Jersey, Staten Island, New York, and Eastpoint, Georgia.
In 1921 the company was charged with violating the Sherman Anti-Trust Act and colluding with competitors by sharing pricing information with other manufacturers of terra cotta. The company weathered that difficulty and subsequent fines, but was hit hard by the Great Depression, when construction of skyscrapers paused and terra cotta ornamentation suddenly seemed unjustifiably expensive.
Prevailing architectural attitudes favored materials such as glass, metal, and concrete and the company's work diminished over the next decade. The company ceased operations in 1943.
Notable projects
Some of the company's most notable projects include the Flatiron Building (1901), the Woolworth Building (1910), the Philadelphia Museum of Art (1928), and the United States Supreme Court (1932).
Additionally, the Atlantic Terra Cotta Company and its predecessors contributed significantly to the architecture of Perth Amboy, which features a total of 111 structures with terra cotta detailing or facades.
Gallery
See also
Architectural terracotta
Glazed architectural terra-cotta
References
External links
Atlantic Terra Cotta Company Records at the University of Texas
Terracotta
Ceramics manufacturers of the United States
Perth Amboy, New Jersey
Companies based in Middlesex County, New Jersey
Companies established in 1846
1846 establishments in New Jersey
American companies disestablished in 1943
Defunct manufacturing companies based in New Jersey
Manufacturer of architectural terracotta | Atlantic Terra Cotta Company | Engineering | 581 |
10,962,250 | https://en.wikipedia.org/wiki/Howard%20Jerome%20Keisler | Howard Jerome Keisler (born 3 December 1936) is an American mathematician, currently professor emeritus at University of Wisconsin–Madison. His research has included model theory and non-standard analysis.
His Ph.D. advisor was Alfred Tarski at Berkeley; his dissertation is Ultraproducts and Elementary Classes (1961).
Abraham Robinson's work resolved what had long been thought to be inherent logical contradictions in the literal interpretation of Leibniz's notation that Leibniz himself had proposed, namely interpreting "dx" as representing an infinitesimally small quantity. Building on this, Keisler published Elementary Calculus: An Infinitesimal Approach, a first-year calculus textbook conceptually centered on the use of infinitesimals, rather than the epsilon-delta approach, for developing the calculus.
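In outline, the book's central device is the standard part function st, which rounds a finite hyperreal number to its nearest real number; as a sketch of the approach (not a quotation from the text), the derivative is defined by

f'(x) = \operatorname{st}\!\left( \frac{f(x + \Delta x) - f(x)}{\Delta x} \right),

where \Delta x is any nonzero infinitesimal.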
He is also known for extending the Henkin construction (of Leon Henkin) to what are now called Henkin–Keisler models. He is also known for the Rudin–Keisler ordering along with Mary Ellen Rudin.
He held the named chair of Vilas Professor of Mathematics at Wisconsin.
Among Keisler's graduate students, several have made notable mathematical contributions, including Frederick Rowbottom, who discovered Rowbottom cardinals. Several others have gone on to careers in computer science research and product development, including: Michael Benedikt, a professor of computer science at the University of Oxford; Kevin J. Compton, a professor of computer science at the University of Michigan; Curtis Tuckey, a developer of software-based collaboration environments; Joseph Sgro, a neurologist and developer of vision processor hardware and software; and Edward L. Wimmers, a database researcher at IBM Almaden Research Center.
In 2012 he became a fellow of the American Mathematical Society.
His son Jeffrey Keisler is a Fulbright Distinguished Chair at the University of Massachusetts, Boston, College of Management.
Publications
Chang, C. C.; Keisler, H. J. Continuous Model Theory. Annals of Mathematical Studies, 58, Princeton University Press, 1966. xii+165 pp.
Model Theory for Infinitary Logic, North-Holland, 1971
Chang, C. C.; Keisler, H. J. Model theory. Third edition. Studies in Logic and the Foundations of Mathematics, 73. North-Holland Publishing Co., Amsterdam, 1990. xvi+650 pp. ; 1st edition 1973; 2nd edition 1977
Elementary Calculus: An Infinitesimal Approach. Prindle, Weber & Schmidt, 1976/1986. Available online.
An Infinitesimal Approach to Stochastic Analysis, American Mathematical Society Memoirs, 1984
Keisler, H. J.; Robbin, Joel. Mathematical Logic and Computability, McGraw-Hill, 1996
Fajardo, Sergio; Keisler, H. J. Model Theory of Stochastic Processes, Lecture Notes in Logic, Association for Symbolic Logic. 2002
See also
Criticism of non-standard analysis
Non-standard calculus
Elementary Calculus: An Infinitesimal Approach
Influence of non-standard analysis
References
External links
Keisler's home page
20th-century American mathematicians
21st-century American mathematicians
Living people
Model theorists
University of Wisconsin–Madison faculty
Fellows of the American Mathematical Society
1936 births | Howard Jerome Keisler | Mathematics | 653 |
619,739 | https://en.wikipedia.org/wiki/Nitrile | In organic chemistry, a nitrile is any organic compound that has a −C≡N functional group. The name of the compound is composed of a base, which includes the carbon of the −C≡N, suffixed with "nitrile", so for example CH3CH2C≡N is called "propionitrile" (or propanenitrile). The prefix cyano- is used interchangeably with the term nitrile in industrial literature. Nitriles are found in many useful compounds, including methyl cyanoacrylate, used in super glue, and nitrile rubber, a nitrile-containing polymer used in latex-free laboratory and medical gloves. Nitrile rubber is also widely used as automotive and other seals since it is resistant to fuels and oils. Organic compounds containing multiple nitrile groups are known as cyanocarbons.
Inorganic compounds containing the −C≡N group are not called nitriles, but cyanides instead. Though both nitriles and cyanides can be derived from cyanide salts, most nitriles are not nearly as toxic.
Structure and basic properties
The N≡C−C geometry is linear in nitriles, reflecting the sp hybridization of the triply bonded carbon. The C−N distance is short at 1.16 Å, consistent with a triple bond. Nitriles are polar, as indicated by high dipole moments. As liquids, they have high relative permittivities, often in the 30s.
History
The first compound of the homologous series of nitriles, hydrogen cyanide (the nitrile of formic acid), was first synthesized by C. W. Scheele in 1782. In 1811 J. L. Gay-Lussac was able to prepare the very toxic and volatile pure acid.
Around 1832 benzonitrile, the nitrile of benzoic acid, was prepared by Friedrich Wöhler and Justus von Liebig, but due to minimal yield of the synthesis neither physical nor chemical properties were determined nor a structure suggested. In 1834 Théophile-Jules Pelouze synthesized propionitrile, suggesting it to be an ether of propionic alcohol and hydrocyanic acid.
The synthesis of benzonitrile by Hermann Fehling in 1844 by heating ammonium benzoate was the first method yielding enough of the substance for chemical research.
Fehling determined the structure by comparing his results to the already known synthesis of hydrogen cyanide by heating ammonium formate. He coined the name "nitrile" for the newfound substance, which became the name for this group of compounds.
Synthesis
Industrially, the main methods for producing nitriles are ammoxidation and hydrocyanation. Both routes are green in the sense that they do not generate stoichiometric amounts of salts.
Ammoxidation
In ammoxidation, a hydrocarbon is partially oxidized in the presence of ammonia. This conversion is practiced on a large scale for acrylonitrile:
CH3CH=CH2 + 3/2 O2 + NH3 → CH2=CHC≡N + 3 H2O
In the production of acrylonitrile, a side product is acetonitrile. On an industrial scale, several derivatives of benzonitrile, phthalonitrile, as well as isobutyronitrile, are prepared by ammoxidation. The process is catalysed by metal oxides and is assumed to proceed via the imine.
Hydrocyanation
Hydrocyanation is an industrial method for producing nitriles from hydrogen cyanide and alkenes. The process requires homogeneous catalysts. An example of hydrocyanation is the production of adiponitrile, a precursor to nylon-6,6 from 1,3-butadiene:
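Omitting the nickel catalysts and regioselectivity details, the overall transformation can be summarized as:

CH2=CH−CH=CH2 + 2 HCN → NC(CH2)4CN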
From organic halides and cyanide salts
Two salt metathesis reactions are popular for laboratory scale reactions. In the Kolbe nitrile synthesis, alkyl halides undergo nucleophilic aliphatic substitution with alkali metal cyanides. Aryl nitriles are prepared in the Rosenmund-von Braun synthesis.
In general, metal cyanides combine with alkyl halides to give a mixture of the nitrile and the isonitrile, although appropriate choice of counterion and temperature can minimize the latter. An alkyl sulfate obviates the problem entirely, particularly in nonaqueous conditions (the Pelouze synthesis).
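A schematic example of the Kolbe route, shown here with an alkyl bromide as a representative substrate, is:

R−Br + NaCN → R−C≡N + NaBr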
Cyanohydrins
The cyanohydrins are a special class of nitriles. Classically they result from the addition of alkali metal cyanides to aldehydes in the cyanohydrin reaction. Because of the polarity of the organic carbonyl, this reaction requires no catalyst, unlike the hydrocyanation of alkenes. O-Silyl cyanohydrins are generated by the addition of trimethylsilyl cyanide in the presence of a catalyst (silylcyanation). Cyanohydrins are also prepared by transcyanohydrin reactions starting, for example, with acetone cyanohydrin as a source of HCN.
Dehydration of amides
Nitriles can be prepared by the dehydration of primary amides. Common reagents for this include phosphorus pentoxide (P2O5) and thionyl chloride (SOCl2). In a related dehydration, secondary amides give nitriles by the von Braun amide degradation. In this case, one C−N bond is cleaved.
Oxidation of amines
Numerous traditional methods exist for nitrile preparation by amine oxidation. In addition, several selective methods have been developed in the last decades for electrochemical processes.
From aldehydes and oximes
The conversion of aldehydes to nitriles via aldoximes is a popular laboratory route. Aldehydes react readily with hydroxylamine salts, sometimes at temperatures as low as ambient, to give aldoximes. These can be dehydrated to nitriles by simple heating, although a wide range of reagents may assist with this, including triethylamine/sulfur dioxide, zeolites, or sulfuryl chloride. The related hydroxylamine-O-sulfonic acid reacts similarly.
In specialised cases the Van Leusen reaction can be used. Biocatalysts such as aliphatic aldoxime dehydratase are also effective.
Sandmeyer reaction
Aromatic nitriles are often prepared in the laboratory from the aniline via diazonium compounds. This is the Sandmeyer reaction. It requires transition metal cyanides.
Other methods
A commercial source for the cyanide group is diethylaluminum cyanide which can be prepared from triethylaluminium and HCN. It has been used in nucleophilic addition to ketones. For an example of its use see: Kuwajima Taxol total synthesis
Cyanide ions facilitate the coupling of dibromides. Reaction of α,α′-dibromoadipic acid with sodium cyanide in ethanol yields the cyano cyclobutane.
Aromatic nitriles can be prepared from base hydrolysis of trichloromethyl aryl ketimines (ArC(CCl3)=NH) in the Houben-Fischer synthesis.
Nitriles can be obtained from primary amines via oxidation. Common methods include the use of potassium persulfate, trichloroisocyanuric acid, or anodic electrosynthesis.
α-Amino acids form nitriles and carbon dioxide via various means of oxidative decarboxylation. Henry Drysdale Dakin discovered this oxidation in 1916.
From aryl carboxylic acids (Letts nitrile synthesis)
Reactions
Nitrile groups in organic compounds can undergo a variety of reactions depending on the reactants or conditions. A nitrile group can be hydrolyzed, reduced, or ejected from a molecule as a cyanide ion.
Hydrolysis
The hydrolysis of nitriles RCN proceeds in distinct steps under acid or base treatment to give first carboxamides RC(=O)NH2 and then carboxylic acids RCOOH. The hydrolysis of nitriles to carboxylic acids is efficient. In acid or base, the balanced equations are as follows:
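Schematically, with hydrochloric acid and sodium hydroxide as representative acid and base:

RCN + 2 H2O + HCl → RCOOH + NH4Cl

RCN + H2O + NaOH → RCOONa + NH3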
Strictly speaking, these reactions are mediated (as opposed to catalyzed) by acid or base, since one equivalent of the acid or base is consumed to form the ammonium or carboxylate salt, respectively.
Kinetic studies show that the second-order rate constant for hydroxide-ion catalyzed hydrolysis of acetonitrile to acetamide is 1.6 M−1 s−1, which is slower than the hydrolysis of the amide to the carboxylate (7.4 M−1 s−1). Thus, the base hydrolysis route will afford the carboxylate (or the amide contaminated with the carboxylate). On the other hand, the acid-catalyzed reaction requires careful control of the temperature and of the ratio of reagents in order to avoid the formation of polymers, which is promoted by the exothermic character of the hydrolysis. The classical procedure to convert a nitrile to the corresponding primary amide calls for adding the nitrile to cold concentrated sulfuric acid. The further conversion to the carboxylic acid is disfavored by the low temperature and low concentration of water.
Two families of enzymes catalyze the hydrolysis of nitriles. Nitrilases hydrolyze nitriles to carboxylic acids:
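In outline, the overall enzymatic conversion is:

RCN + 2 H2O → RCOOH + NH3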
Nitrile hydratases are metalloenzymes that hydrolyze nitriles to amides.
These enzymes are used commercially to produce acrylamide.
The "anhydrous hydration" of nitriles to amides has been demonstrated using an oxime as water source:
Reduction
Nitriles are susceptible to hydrogenation over diverse metal catalysts. The reaction can afford either the primary amine () or the tertiary amine (), depending on conditions. In conventional organic reductions, nitrile is reduced by treatment with lithium aluminium hydride to the amine. Reduction to the imine followed by hydrolysis to the aldehyde takes place in the Stephen aldehyde synthesis, which uses stannous chloride in acid.
Deprotonation
Alkyl nitriles are sufficiently acidic to undergo deprotonation of the C−H bond adjacent to the nitrile group. Strong bases are required, such as lithium diisopropylamide and butyl lithium. The product is referred to as a nitrile anion. These carbanions alkylate a wide variety of electrophiles. Key to the exceptional nucleophilicity is the small steric demand of the C≡N unit combined with its inductive stabilization. These features make nitriles ideal for creating new carbon-carbon bonds in sterically demanding environments.
Nucleophiles
The carbon center of a nitrile is electrophilic, hence it is susceptible to nucleophilic addition reactions:
with an organozinc compound in the Blaise reaction
with alcohols in the Pinner reaction.
with amines, e.g. the reaction of the amine sarcosine with cyanamide yields creatine
with arenes to form ketones in the Houben–Hoesch reaction via an imine intermediate.
with Grignard reagents to form primary ketimines in the Moureau-Mignonac ketimine synthesis. While not a classical Grignard reaction, it may be considered one under broader modern definitions.
Miscellaneous methods and compounds
In reductive decyanation the nitrile group is replaced by a proton. Decyanations can be accomplished by dissolving metal reduction (e.g. HMPA and potassium metal in tert-butanol) or by fusion of a nitrile in KOH. Similarly, α-aminonitriles can be decyanated with other reducing agents such as lithium aluminium hydride.
In the so-called Franchimont Reaction (developed by the Belgian doctoral student Antoine Paul Nicolas Franchimont (1844-1919) in 1872), an α-cyanocarboxylic acid heated in acid hydrolyzes and decarboxylates to a dimer.
Nitriles self-react in the presence of base in the Thorpe reaction, a nucleophilic addition.
In organometallic chemistry, nitriles are known to add to alkynes in carbocyanation.
Complexation
Nitriles are precursors to transition metal nitrile complexes, which are reagents and catalysts. Examples include tetrakis(acetonitrile)copper(I) hexafluorophosphate ([Cu(CH3CN)4]PF6) and bis(benzonitrile)palladium dichloride (PdCl2(C6H5CN)2).
Nitrile derivatives
Organic cyanamides
Cyanamides are N-cyano compounds with the general structure R2N−C≡N and are related to the parent cyanamide, H2N−CN.
Nitrile oxides
Nitrile oxides have the chemical formula RCNO. Their general structure is R−C≡N+−O−. The R stands for any group: typically organyl (e.g., acetonitrile oxide, CH3CNO), hydrogen in the case of fulminic acid (HCNO), or halogen (e.g., chlorine fulminate).
Nitrile oxides are quite different from nitriles: they are highly reactive 1,3-dipoles, and cannot be synthesized from the direct oxidation of nitriles. Instead, they can be synthesised by nitroalkane dehydration, oxime dehydrogenation, or halooxime elimination in base. They are used in 1,3-dipolar cycloadditions, such as to isoxazoles. They undergo type 1 dyotropic rearrangement to isocyanates.
The heavier nitrile sulfides are extremely reactive and rare, but temporarily form during the thermolysis of oxathiazolones. They react similarly to nitrile oxides.
Occurrence and applications
Nitriles occur naturally in a diverse set of plant and animal sources. Over 120 naturally occurring nitriles have been isolated from terrestrial and marine sources. Nitriles are commonly encountered in fruit pits, especially almonds, and during cooking of Brassica crops (such as cabbage, Brussels sprouts, and cauliflower), which release nitriles through hydrolysis. Mandelonitrile, a cyanohydrin produced by ingesting almonds or some fruit pits, releases hydrogen cyanide and is responsible for the toxicity of cyanogenic glycosides.
Over 30 nitrile-containing pharmaceuticals are currently marketed for a diverse variety of medicinal indications with more than 20 additional nitrile-containing leads in clinical development. The types of pharmaceuticals containing nitriles are diverse, from vildagliptin, an antidiabetic drug, to anastrozole, which is the gold standard in treating breast cancer. In many instances the nitrile mimics functionality present in substrates for enzymes, whereas in other cases the nitrile increases water solubility or decreases susceptibility to oxidative metabolism in the liver. The nitrile functional group is found in several drugs.
See also
Protonated nitriles: Nitrilium
Deprotonated nitriles: Nitrile anion
Cyanocarbon
Nitrile ylide
References
External links
Functional groups | Nitrile | Chemistry | 3,241 |
9,995,954 | https://en.wikipedia.org/wiki/Gastric%20inhibitory%20polypeptide%20receptor | The gastric inhibitory polypeptide receptor (GIP-R), also known as the glucose-dependent insulinotropic polypeptide receptor, is a protein that in humans is encoded by the GIPR gene.
GIP-R is a member of the class B family of G protein coupled receptors. GIP-R is found on beta-cells in the pancreas where it serves as the receptor for the hormone Gastric inhibitory polypeptide (GIP).
Function
Gastric inhibitory polypeptide, also called glucose-dependent insulinotropic polypeptide, is a 42-amino acid polypeptide synthesized by K cells of the duodenum and small intestine. It was originally identified as an activity in gut extracts that inhibited gastric acid secretion and gastrin release, but subsequently was demonstrated to stimulate insulin release potently in the presence of elevated glucose. The insulinotropic effect on pancreatic islet beta-cells was then recognized to be the principal physiologic action of GIP. Together with glucagon-like peptide-1, GIP is largely responsible for the secretion of insulin after eating. It is involved in several other facets of the anabolic response.
References
Further reading
External links
G protein-coupled receptors | Gastric inhibitory polypeptide receptor | Chemistry | 277 |
14,509,284 | https://en.wikipedia.org/wiki/Kitchen%20exhaust%20cleaning | Kitchen exhaust cleaning (often referred to as hood cleaning) is the process of removing grease that has accumulated inside the ducts, hoods, fans and vents of exhaust systems of commercial kitchens. Left uncleaned, kitchen exhaust systems eventually accumulate enough grease to become a fire hazard.
Exhaust systems must be inspected regularly, at intervals consistent with usage, to determine whether cleaning is needed before a dangerous amount of grease has accumulated.
Cleaning
National Fire Protection Association Standard 96, Standard for Ventilation Control and Fire Protection of Commercial Cooking Operations, provides cleaning requirements. The cleaning frequency depends on the type of food being cooked and the volume of grease-laden vapors drawn up through the hood plenum.
Caustic chemicals
Caustic chemicals can be applied to break down the grease. After that, hot water can be used to rinse away the residue.
In extreme situations, where grease buildup is too heavy for a chemical application and a rinse, scrapers may be used to remove excess buildup from the contaminated surfaces, before chemicals are applied.
References
Cleaning
Indoor air pollution | Kitchen exhaust cleaning | Chemistry | 213 |
7,462,048 | https://en.wikipedia.org/wiki/Microgametogenesis | Microgametogenesis is the process in plant reproduction where a microgametophyte develops in a pollen grain to the three-celled stage of its development. In flowering plants it occurs with a microspore mother cell inside the anther of the plant.
When the microgametophyte is first formed inside the pollen grain four sets of fertile cells called sporogenous cells are apparent. These cells are surrounded by a wall of sterile cells called the tapetum, which supplies food to the cell and eventually becomes the cell wall for the pollen grain. These sets of sporogenous cells eventually develop into diploid microspore mother cells. These microspore mother cells, also called microsporocytes, then undergo meiosis and become four microspore haploid cells. These new microspore cells then undergo mitosis and form a tube cell and a generative cell. The generative cell then undergoes mitosis one more time to form two male gametes, also called sperm.
See also
Gametogenesis
References
Raven, Peter H., Evert, Ray F., Eichhorn, Susan E.(2005). "Biology of Plants, 7th Edition". W. H. Freeman Chapter 19: 442–449.
Plant reproduction | Microgametogenesis | Biology | 261 |
78,586,725 | https://en.wikipedia.org/wiki/Semi-Dirac%20fermion | In condensed matter physics, semi-Dirac fermions are a class of quasiparticles that are fermionic with the unusual property that their energy dispersion relation changes from quadratic to linear depending on their direction of motion. Their theoretical properties have been studied for some time.
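A commonly used model dispersion for semi-Dirac fermions, written here as an illustrative sketch with m an effective mass along one axis and v a Dirac velocity along the perpendicular axis, is

E(\mathbf{p}) = \sqrt{\left(\frac{p_x^2}{2m}\right)^2 + (v\, p_y)^2},

so that the energy grows quadratically with momentum along one direction and linearly along the other.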
Their first observation in a solid was in zirconium silicon sulfide (ZrSiS), a topological semi-metal, and was published in 2024.
See also
Dirac fermion
References
Fermions
Quasiparticles
External links
David Nield: Physicists Find Particle That Only Has Mass When Moving in One Direction. ScienceAlert, 14 December 2024. | Semi-Dirac fermion | Physics,Materials_science | 139 |
5,625,220 | https://en.wikipedia.org/wiki/Metakaolin | Metakaolin is the anhydrous calcined form of the clay mineral kaolinite. Rocks that are rich in kaolinite are known as china clay or kaolin, traditionally used in the manufacture of porcelain. The particle size of metakaolin is smaller than cement particles, but not as fine as silica fume.
Kaolinite sources
The quality and reactivity of metakaolin is strongly dependent on the characteristics of the raw material used. Metakaolin can be produced from a variety of primary and secondary sources containing kaolinite:
High purity kaolin deposits
Kaolinite deposits or tropical soils of lower purity
Paper sludge waste (if containing kaolinite)
Oil sand tailings (if containing kaolinite)
Forming metakaolin
The T-O clay mineral kaolinite does not contain interlayer cations or interlayer water. The temperature of dehydroxylation depends on the structural layer stacking order. Disordered kaolinite dehydroxylates between 530 and 570 °C, ordered kaolinite between 570 and 630 °C. Dehydroxylated disordered kaolinite shows higher pozzolanic activity than ordered. The dehydroxylation of kaolin to metakaolin is an endothermic process due to the large amount of energy required to remove the chemically bonded hydroxyl ions. Above the temperature range of dehydroxylation, kaolinite transforms into metakaolin, a complex amorphous structure which retains some long-range order due to layer stacking. Much of the aluminum of the octahedral layer becomes tetrahedrally and pentahedrally coordinated.
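The overall dehydroxylation can be written as:

Al2Si2O5(OH)4 → Al2Si2O7 + 2 H2O

(kaolinite → metakaolin + water)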
In order to produce a pozzolan (supplementary cementitious material) nearly complete dehydroxylation must be reached without overheating, i.e., thoroughly roasted but not burnt. This produces an amorphous, highly pozzolanic state, whereas overheating can cause sintering, to form a dead burnt, nonreactive refractory, containing mullite and a defect Al-Si spinel. Reported optimum activation temperatures vary between 550 and 850 °C for varying durations, however the range 650-750 °C is most commonly quoted.
In comparison with other clay minerals kaolinite shows a broad temperature interval between dehydroxylation and recrystallization, much favoring the formation of metakaolin and the use of thermally activated kaolin clays as pozzolans. Also, because the octahedral layer is directly exposed to the interlayer (in comparison to for instance T-O-T clay minerals such as smectites), structural disorder is attained more easily upon heating.
High-reactivity metakaolin
High-reactivity metakaolin (HRM) is a highly processed reactive aluminosilicate pozzolan, a finely-divided material that reacts with slaked lime at ordinary temperature and in the presence of moisture to form a strong slow-hardening cement. It is formed by calcining purified kaolinite, generally between 650 and 700 °C in an externally fired rotary kiln. It is also reported that HRM is responsible for acceleration in the hydration of ordinary portland cement (OPC), and its major impact is seen within 24 hours. It also reduces the deterioration of concrete by Alkali Silica Reaction (ASR), particularly useful when using recycled crushed glass or glass fines as aggregate. The amount of slaked lime that can be bound by metakaolin is measured by the modified Chapelle test.
Adsorption properties
The adsorption surface properties of the metakaolins can be characterized by inverse gas chromatography analysis.
Concrete admixture
Considered to have twice the reactivity of most other pozzolans, metakaolin is a valuable admixture for concrete/cement applications. Replacing portland cement with 8–20 wt.% (% by weight) metakaolin produces a concrete mix that exhibits favorable engineering properties, including: the filler effect, the acceleration of OPC hydration, and the pozzolanic reaction. The filler effect is immediate, while the effect of pozzolanic reaction occurs between 3 and 14 days.
In the mid-2010s, Limestone Calcined Clay Cement mixture incorporating even more than 20% metakaolin was developed as a lower-carbon cement substitute. The technology is on the commercialization stage in the 2020s.
Advantages
Increased compressive and flexural strengths
Reduced permeability (including chloride permeability)
Reduced potential for efflorescence, which occurs when calcium is transported by water to the surface where it combines with carbon dioxide from the atmosphere to make calcium carbonate, which precipitates on the surface as a white residue.
Increased resistance to chemical attack
Increased durability
Reduced effects of alkali-silica reactivity (ASR)
Enhanced workability and finishing of concrete
Reduced shrinkage, due to "particle packing" making concrete denser
Improved color by lightening the color of concrete, making it possible to tint with lighter integral colors.
Higher thermal resistance due to increased temperature levels
Uses
High performance, high strength, and lightweight concrete
Precast and poured-mold concrete
Fibercement and ferrocement products
Glass fiber reinforced concrete
Countertops, art sculptures (see for example the free-standing sculptures of Albert Vrana)
Mortar and stucco
See also
Concrete
Engineered cementitious composite
Fly ash
Kaolinite
Portland cement
Pozzolan
Rice husk ash (also very rich in SiO2)
Silica fume
References
Concrete
Cement
Silicate minerals | Metakaolin | Engineering | 1,224 |
48,975,171 | https://en.wikipedia.org/wiki/Resource%20exhaustion%20attack | Resource exhaustion attacks are computer security exploits that crash, hang, or otherwise interfere with the targeted program or system. They are a form of denial-of-service attack but are different from distributed denial-of-service attacks, which involve overwhelming a network host such as a web server with requests from many locations.
Attack vectors
Resource exhaustion attacks generally exploit a software bug or design deficiency. In software with manual memory management (most commonly written in C or C++), memory leaks are a very common bug exploited for resource exhaustion. Even if a garbage collected programming language is used, resource exhaustion attacks are possible if the program uses memory inefficiently and does not impose limits on the amount of state used when necessary.
File descriptor leaks are another common vector. Most general-purpose programming languages require the programmer to explicitly close file descriptors, so even particularly high-level languages allow the programmer to make such mistakes.
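As a minimal illustrative sketch (in Python, with a hypothetical file path), the following loop leaks one descriptor per iteration because the opened file objects are kept alive and never closed; a long-running service that repeats such a pattern on an attacker-reachable code path will eventually hit its descriptor limit and fail with "Too many open files":

leaked = []
while True:
    # Each iteration opens the file and stores the handle without closing it,
    # so the process's open-descriptor count grows until open() raises OSError.
    leaked.append(open("/tmp/example.log", "a"))

The usual defense is to release handles deterministically, for example with a context manager (with open(...) as f: ...), so that descriptors are closed even on error paths, and to impose explicit limits on per-client resource usage.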
Types and examples
Billion laughs
Fork bomb
Infinite loop
Local Area Network Denial (LAND)
Pentium F00F bug
Ping of death
Regular expression denial of service (ReDoS)
References
External links
OWASP's wiki article on resource exhaustion
Daniel J. Bernstein on resource exhaustion
Denial-of-service attacks | Resource exhaustion attack | Technology | 250 |
1,107,345 | https://en.wikipedia.org/wiki/Repression%20of%20science%20in%20the%20Soviet%20Union | Many fields of scientific research in the Soviet Union were banned or suppressed with various justifications. All humanities and social sciences were tested for strict accordance with dialectical materialism. These tests served as a cover for political suppression of scientists who engaged in research labeled as "idealistic" or "bourgeois". Many scientists were fired, others were arrested and sent to Gulags. The suppression of scientific research began during the Stalin era and continued after his death.
The ideologically motivated persecution damaged many fields of Soviet science.
Examples
Biology
In the mid-1930s, the agronomist Trofim Lysenko started a campaign against genetics and was supported by Stalin. Mendelian genetics was suppressed as "bourgeois science", and hostility toward it was reinforced both by the field's perceived connection to Nazi racial doctrines and by its association with the priest Gregor Mendel, which clashed with the Soviet policy of state atheism.
In 1950, the Soviet government organized the Joint Scientific Session of the USSR Academy of Sciences and the USSR Academy of Medical Sciences, the "Pavlovian session". Several prominent Soviet physiologists (L.A. Orbeli, P.K. Anokhin, , Ivane Beritashvili) were accused of deviating from Pavlov's teaching. As a consequence of the Pavlovian session, Soviet physiologists were forced to accept a dogmatic ideology; the quality of physiological research deteriorated and Soviet physiology excluded itself from the international scientific community. Later Soviet biologists heavily criticised Lysenko's theories and pseudo-scientific methods.
Cybernetics
Cybernetics was also outlawed as bourgeois pseudoscience during Stalin's reign. Norbert Wiener's 1948 book Cybernetics was condemned and translated only in 1958. A 1954 edition of the Brief Philosophical Dictionary condemned cybernetics for "mechanistically equating processes in live nature, society and in technical systems, and thus standing against materialistic dialectics and modern scientific physiology developed by Ivan Pavlov". (However this article was removed from the 1955 reprint of the dictionary.) After an initial period of doubts, Soviet cybernetics took root, but this early attitude hampered the development of computing in the Soviet Union.
History
Soviet historiography (the way in which history was and is written by scholars of the Soviet Union) was significantly influenced by the strict control by the authorities aimed at propaganda of communist ideology and Soviet power.
Since the late 1930s, Soviet historiography treated the party line and reality as one and the same. As such, if it was a science, it was a science in service of a specific political and ideological agenda, commonly employing historical negationist methods. In the 1930s, historic archives were closed and original research was severely restricted. Historians were required to pepper their works with references – appropriate or not – to Stalin and other "Marxist-Leninist classics", and to pass judgment – as prescribed by the Party – on pre-revolution historic Russian figures.
Many works of Western historians were forbidden or censored, many areas of history were also forbidden for research as, officially, they never happened. Translations of foreign historiography were often produced in a truncated form, accompanied with extensive censorship and corrective footnotes. For example, in the Russian 1976 translation of Basil Liddell Hart's History of the Second World War pre-war purges of Red Army officers, the secret protocol to the Molotov–Ribbentrop Pact, many details of the Winter War, the occupation of the Baltic states, the Soviet occupation of Bessarabia and Northern Bukovina, Western Allied assistance to the Soviet Union during the war, many other Western Allies' efforts, the Soviet leadership's mistakes and failures, criticism of the Soviet Union and other content were censored out.
Notably, the theory of the Varangian origin of Kievan Rus was banned for ideological reasons.
Linguistics
At the beginning of Stalin's rule, the dominant figure in Soviet linguistics was Nikolai Yakovlevich Marr, who argued that language is a class construction and that language structure is determined by the economic structure of society. Stalin, who had previously written about language policy as People's Commissar for Nationalities, read a letter by Arnold Chikobava criticizing the theory. He "summoned Chikobava to a dinner that lasted from 9 p.m. to 7 a.m. taking notes diligently." In this way he grasped enough of the underlying issues to oppose this simplistic Marxist formalism, ending Marr's ideological dominance over Soviet linguistics. Stalin's principal work in the field was a small essay, "Marxism and Linguistic Questions."
The term "semiotics" was banned, and the researchers used the obfuscated term "secondary modeling systems" () coined by Juri Lotman and Vladimir Uspensky in 1964; see Tartu–Moscow Semiotic School.
Pedology
Pedology was a popular area of research, building on the numerous orphanages created after the Russian Civil War. Soviet pedology was a combination of pedagogy and the psychology of human development that relied heavily on various tests. It was officially banned by a special decree of the Central Committee of the Communist Party of the Soviet Union, "On Pedological Perversions in the Narkompros System", on July 4, 1936.
Physics
In the late 1940s, some areas of physics were also criticized on grounds of "idealism".
In quantum mechanics, the Soviet physicists Dmitry Blokhintsev, Yaakov Terletsky and K. V. Nikolsky developed a version of the statistical interpretation of quantum mechanics, which was seen as adhering more closely to the principles of dialectical materialism.
Special and general relativity were a matter of controversy among Soviet scientists from the 1920s onward. Some argued that the theory was grounded in Machism (acutely criticized by Vladimir Lenin in his Materialism and Empiriocriticism); others belonged to a group of so-called "mechanists"; later, "Young Stalinists" joined the ranks of the theory's critics. At the same time a considerable number of prominent Soviet physicists defended the relativity theory. The attacks on the relativity theory intensified in 1949 under the auspices of the struggle against "physical idealism" in the work of Leonid Mandelstam. Initially Sergey Vavilov, President of the Academy of Sciences of the Soviet Union, managed to defend Mandelstam, but in 1952 the political attacks on "reactionary Einsteinianism" intensified further. This pseudoscientific campaign fizzled out after the death of Stalin.
Although initially planned, the process of "ideological cleansing" in physics did not go as far as defining an "ideologically correct" version of physics and purging those scientists who refused to conform to it, because this was recognized as potentially too harmful to the Soviet nuclear program. During 1949–1951 there was an "antiresonance campaign" against the theory of resonance, during which scientists who supported it were accused of "cosmopolitan" sympathies and repressed. As Anna Krylov writes on the perils of ideological intrusion into science, "Stalin rolled back the planned campaign against physics and instructed Beria to give physicists some space; this led to significant advances and accomplishments by Soviet scientists in several domains. However, neither Stalin nor the subsequent Soviet leaders were able to let go of the controls completely. Government control over science turned out to be a grand failure, and the attempt to patch the widening gap between the West and the East by espionage did not help. Today Russia is hopelessly behind the West in both technology and quality of life."
Sociology
After the Russian Revolution, sociology was gradually "politicized, Bolshevisized and eventually, Stalinized". In the 1920s a position had formed in the Soviet Union that historical materialism is in fact Marxist sociology, and the major discussion was whether to use the terms "sociology" and "historical materialism" synonymously or to abandon the term "sociology" altogether and consider it to be an anti-Marxist bourgeois science. From the 1930s to the 1950s, the independent discipline of sociology virtually ceased to exist in the Soviet Union. Even in the era when it was allowed to be practiced, and not replaced by Marxist philosophy, it was always dominated by Marxist thought; hence sociology in the Soviet Union and the entire Eastern Bloc represented, to a significant extent, only one branch of sociology: Marxist sociology. With the death of Joseph Stalin and the 20th Party Congress in 1956, restrictions on sociological research were somewhat eased, and finally, after the 23rd Party Congress in 1966, sociology in the Soviet Union was once again officially recognized as an acceptable branch of science.
Reliability of data
The quality (accuracy and reliability) of data published in the Soviet Union and used in historical research is another issue raised by various Sovietologists. The Marxist theoreticians of the Party considered statistics as a social science; hence many applications of statistical mathematics were curtailed, particularly during the Stalin era. Under central planning, nothing could occur by accident. The law of large numbers and the idea of random deviation were decreed as "false theories". Statistical journals and university departments were closed; world-renowned statisticians like Andrey Kolmogorov and Eugen Slutsky abandoned statistical research.
As with all Soviet historiography, reliability of Soviet statistical data varied from period to period. The first revolutionary decade and the period of Stalin's dictatorship both appear highly problematic with regards to statistical reliability; very little statistical data was published from 1936 to 1956 (see Soviet Census (1937)). The reliability of data improved after 1956 when some missing data was published and Soviet experts themselves published some adjusted data for Stalin's era; however the quality of documentation deteriorated.
While on occasion statistical data useful in historical research might have been completely invented by the Soviet authorities, there is little evidence that most statistics were significantly affected by falsification or insertion of false data with the intent to confound the West. Data was however falsified both during collection – by local authorities who would be judged by the central authorities based on whether their figures reflected the central economy prescriptions – and by internal propaganda, with its goal to portray the Soviet state in most positive light to its very citizens. Nonetheless, the policy of not publishing, or simply not collecting, data that was deemed unsuitable for various reasons was much more common than simple falsification; hence there are many gaps in Soviet statistical data. Inadequate or lacking documentation for much of Soviet statistical data is also a significant problem.
Theme in literature
Vladimir Dudintsev, White Garments (1987), a fictionalized story about Soviet geneticists working during the Lysenkoism era
See also
Academic freedom
Antiscience
Anti-intellectualism
Bourgeois pseudoscience
Censorship in the Soviet Union
Deutsche Physik
First Department
Historical negationism
Political correctness
Politicization of science
Science and technology in the Soviet Union
Soviet historiography
Alexander Veselovsky, a case of suppressed literary research
Stalin and the Scientists
References
Я. В. Васильков, М. Ю. Сорокина (eds.), Люди и судьбы. Биобиблиографический словарь востоковедов жертв политического террора в советский период (1917–1991) ("People and Destiny. Bio-Bibliographic Dictionary of Orientalists – Victims of the political terror during the Soviet period (1917–1991)"), Петербургское Востоковедение (2003). online edition
20th century in science
History of science
Censorship in the Soviet Union
Science and technology in the Soviet Union
Research in the Soviet Union
Politics of science
Anti-intellectualism
Denialism
Soviet cover-ups
Persecution of intellectuals in the Soviet Union | Repression of science in the Soviet Union | Technology | 2,498 |
2,680,701 | https://en.wikipedia.org/wiki/List%20of%20compounds%20with%20carbon%20number%207 | This is a partial list of molecules that contain 7 carbon atoms.
See also
Carbon number
List of compounds with carbon number 6
List of compounds with carbon number 8
C07 | List of compounds with carbon number 7 | Chemistry | 36 |
541,808 | https://en.wikipedia.org/wiki/Paddy%20field | A paddy field is a flooded field of arable land used for growing semiaquatic crops, most notably rice and taro. It originates from the Neolithic rice-farming cultures of the Yangtze River basin in southern China, associated with pre-Austronesian and Hmong-Mien cultures. It was spread in prehistoric times by the expansion of Austronesian peoples to Island Southeast Asia, Madagascar, Melanesia, Micronesia, and Polynesia. The technology was also acquired by other cultures in mainland Asia for rice farming, spreading to East Asia, Mainland Southeast Asia, and South Asia.
Fields can be built into steep hillsides as terraces or adjacent to depressed or steeply sloped features such as rivers or marshes. They require a great deal of labor and materials to create and need large quantities of water for irrigation. Oxen and water buffalo, adapted for life in wetlands, are important working animals used extensively in paddy field farming.
Paddy field farming remains the dominant form of growing rice in modern times. It is practiced extensively in Bangladesh, Cambodia, China, India, Indonesia, northern Iran, Japan, Laos, Malaysia, Mongolia, Myanmar, Nepal, North Korea, Pakistan, the Philippines, South Korea, Sri Lanka, Taiwan, Thailand, and Vietnam. It has also been introduced elsewhere since the colonial era, notably in northern Italy, the Camargue in France, and in Spain, particularly in the Albufera de València wetlands in the Valencian Community, the Ebro Delta in Catalonia and the Guadalquivir wetlands in Andalusia, as well as along the eastern coast of Brazil, the Artibonite Valley in Haiti, Sacramento Valley in California, and West Lothian in Scotland among other places.
Paddy cultivation should not be confused with cultivation of deepwater rice, which is grown in flooded conditions with water more than 50 cm (20 in) deep for at least a month. Global paddies' emissions account for at least 10% of global methane emissions. Drip irrigation systems have been proposed as a possible environmental and commercial solution.
Etymology
The word "paddy" is derived from the Malay/Indonesian word padi, meaning "rice plant", which is itself derived from Proto-Austronesian *pajay ("rice in the field", "rice plant"). Cognates include Amis panay; Tagalog pálay; Kadazan Dusun paai; Javanese pari; and Chamorro fai, among others.
History
Neolithic southern China
Genetic evidence shows that all forms of paddy rice, including both indica and japonica, spring from a domestication of the wild rice Oryza rufipogon by cultures associated with pre-Austronesian and Hmong-Mien-speakers. This occurred 13,500 to 8,200 years ago south of the Yangtze River in present-day China.
There are two likely centers of domestication for rice as well as the development of the wet-field technology. The first is in the lower Yangtze River, believed to be the homelands of the pre-Austronesians and possibly also the Kra-Dai, and associated with the Kuahuqiao, Hemudu, Majiabang, Songze, Liangzhu, and Maquiao cultures. The second is in the middle Yangtze River, believed to be the homelands of the early Hmong-Mien speakers and associated with the Pengtoushan, Nanmuyuan, Liulinxi, Daxi, Qujialing, and Shijiahe cultures. Both of these regions were heavily populated and had regular trade contacts with each other, as well as with early Austroasiatic speakers to the west, and early Kra-Dai speakers to the south, facilitating the spread of rice cultivation throughout southern China.
The earliest paddy field found dates to 4330 BC, based on carbon dating of grains of rice and soil organic matter found at the Chaodun site in Kunshan. At Caoxieshan, a site of the Neolithic Majiabang culture, archaeologists excavated paddy fields. Some archaeologists claim that Caoxieshan may date to 4000–3000 BC. There is archaeological evidence that unhusked rice was stored for the military and for burial with the deceased from the Neolithic period to the Han dynasty in China.
By the late Neolithic (3500 to 2500 BC), population in the rice cultivating centers had increased rapidly, centered around the Qujialing-Shijiahe and Liangzhu cultures. There was also evidence of intensive rice cultivation in paddy fields as well as increasingly sophisticated material cultures in these two regions. The number of settlements among the Yangtze cultures and their sizes increased, leading some archeologists to characterize them as true states, with clearly advanced socio-political structures. However, it is unknown if they had centralized control.
In the terminal Neolithic (2500 to 2000 BC), Shijiahe shrank in size, and Liangzhu disappeared altogether. This is largely believed to be the result of the southward expansion of the early Sino-Tibetan Longshan culture. Fortifications like walls (as well as extensive moats in Liangzhu cities) are common features in settlements during this period, indicating widespread conflict. This period also coincides with the southward movement of rice-farming cultures to the Lingnan and Fujian regions, as well as the southward migrations of the Austronesian, Kra-Dai, and Austroasiatic-speaking peoples to Mainland Southeast Asia and Island Southeast Asia.
Austronesian expansion
The spread of japonica rice cultivation and paddy field agriculture to Southeast Asia started with the migrations of the Austronesian Dapenkeng culture into Taiwan between 3500 and 2000 BC. The Nanguanli site in Taiwan, dated to ca. 2800 BC, has yielded numerous carbonized remains of both rice and millet in waterlogged conditions, indicating intensive wetland rice cultivation and dryland millet cultivation.
From about 2000 to 1500 BC, the Austronesian expansion began, with settlers from Taiwan moving south to migrate to Luzon in the Philippines, bringing rice cultivation technologies with them. From Luzon, Austronesians rapidly colonized the rest of Maritime Southeast Asia, moving westwards to Borneo, the Malay Peninsula and Sumatra; and southwards to Sulawesi and Java. By 500 BC, there is evidence of intensive wetland rice agriculture already established in Java and Bali, especially near very fertile volcanic islands.
Rice did not survive the Austronesian voyages into Micronesia and Polynesia; however, wet-field agriculture was transferred to the cultivation of other crops, most notably for taro cultivation. The Austronesian Lapita culture also came into contact with the non-Austronesian (Papuan) early agriculturists of New Guinea and introduced wetland farming techniques to them. In turn, they assimilated their range of indigenous cultivated fruits and tubers before spreading further eastward to Island Melanesia and Polynesia. In Hawaii, the conditions of available taro pondfields (loʻi) as worked by native Hawaiians later proved feasible for rice cultivation by Chinese and Japanese migrant farmers in the late 19th to early 20th century; rice plots were often enlarged by dismantling bunds (kuāuna) that separated smaller established loʻi.
Rice and wet-field agriculture were also introduced to Madagascar, the Comoros, and the coast of East Africa around the 1st millennium AD by Austronesian settlers from the Greater Sunda Islands.
Korea
There are ten archaeologically excavated rice paddy fields in Korea. The two oldest are the Okhyun and Yaumdong sites, found in Ulsan, dating to the early Mumun pottery period.
Paddy field farming goes back thousands of years in Korea. A pit-house at the Daecheon-ni site yielded carbonized rice grains and radiocarbon dates, indicating that rice cultivation in dry-fields may have begun as early as the Middle Jeulmun pottery period (c. 3500–2000 BC) in the Korean Peninsula. Ancient paddy fields have been carefully unearthed in Korea by institutes such as Kyungnam University Museum (KUM) of Masan. They excavated paddy field features at the Geumcheon-ni Site near Miryang, South Gyeongsang Province. The paddy field feature was found next to a pit-house that is dated to the latter part of the Early Mumun pottery period (c. 1100–850 BC). KUM has conducted excavations, that have revealed similarly dated paddy field features, at Yaeum-dong and Okhyeon, in modern-day Ulsan.
The earliest Mumun features were usually located in low-lying narrow gullies, that were naturally swampy and fed by the local stream system. Some Mumun paddy fields in flat areas were made of a series of squares and rectangles, separated by bunds approximately 10 cm in height, while terraced paddy fields consisted of long irregular shapes that followed natural contours of the land at various levels.
Mumun Period rice farmers used all of the elements that are present in today's paddy fields, such as terracing, bunds, canals, and small reservoirs. Some paddy-field farming techniques of the Middle Mumun (c. 850–550 BC) can be inferred from the well-preserved wooden tools excavated from archaeological rice fields at the Majeon-ni Site. However, iron tools for paddy-field farming were not introduced until sometime after 200 BC. The spatial scale of paddy-fields increased, with the regular use of iron tools, in the Three Kingdoms of Korea Period (c. AD 300/400-668).
Japan
The first paddy fields in Japan date to the Early Yayoi period (300 BC – 250 AD). The Early Yayoi has been re-dated, and based on studies of early Japanese paddy formations in Kyushu it appears that wet-field rice agriculture in Japan was directly adopted from the Lower Yangtze river basin in Eastern China.
Culture
China
Although China's agricultural output is the largest in the world, only about 15% of its total land area can be cultivated. About 75% of the cultivated area is used for food crops. Rice is China's most important crop, raised on about 25% of the cultivated area. Most rice is grown south of the Huai River, in the Yangtze valley, the Zhu Jiang delta, and in Yunnan, Guizhou, and Sichuan provinces.
Rice appears to have been used by the Early Neolithic populations of Lijiacun and Yunchanyan in China. Evidence of possible rice cultivation from ca. 11,500 BC has been found, however it is still questioned whether the rice was indeed being cultivated, or instead being gathered as wild rice. Bruce Smith, an archaeologist at the Smithsonian Institution in Washington, D.C., who has written on the origins of agriculture, says that evidence has been mounting that the Yangtze was probably the site of the earliest rice cultivation. In 1998, Crawford & Shen reported that the earliest of 14 AMS or radiocarbon dates on rice from at least nine Early to Middle Neolithic sites is no older than 7000 BC, that rice from the Hemudu and Luojiajiao sites indicates that rice domestication likely began before 5000 BC, but that most sites in China from which rice remains have been recovered are younger than 5000 BC.
During the Spring and Autumn period (722–481 BC), two revolutionary improvements in farming technology took place. One was the use of cast iron tools and beasts of burden to pull plows, and the other was the large-scale harnessing of rivers and development of water conservation projects. Sunshu Ao of the 6th century BC and Ximen Bao of the 5th century BC are two of the earliest hydraulic engineers from China, and their works were focused upon improving irrigation systems. These developments were widely spread during the ensuing Warring States period (403–221 BC), culminating in the enormous Du Jiang Yan Irrigation System engineered by Li Bing by 256 BC for the State of Qin in ancient Sichuan. During the Eastern Jin (317–420) and the Northern and Southern Dynasties (420–589), land-use became more intensive and efficient, rice was grown twice a year and cattle began to be used for plowing and fertilization.
By about 750, 75% of China's population lived north of the Yangtze, but by 1250, 75% of China's population lived south of it. Such large-scale internal migration was possible due to introduction of quick-ripening strains of rice from Vietnam suitable for multi-cropping.
Famous rice paddies in China include the Longsheng Rice Terraces and the fields of Yuanyang County, Yunnan.
India
India has the largest paddy output in the world and is also the largest exporter of rice in the world as of 2020. In India, West Bengal is the largest rice producing state. Paddy fields are a common sight throughout India, both in the northern Gangetic Plains and the southern peninsular plateaus. Paddy is cultivated at least twice a year in most parts of India, the two seasons being known as Rabi and Kharif respectively. The former cultivation is dependent on irrigation, while the latter depends on the Monsoon. The paddy cultivation plays a major role in socio-cultural life of rural India. Many regional festivals celebrate the harvest, such as Onam, Bihu, Thai Pongal, Makar Sankranti, and Nabanna. The Kaveri delta region of Thanjavur is historically known as the rice bowl of Tamil Nadu, and Kuttanadu is called the rice bowl of Kerala. Gangavathi is known as the rice bowl of Karnataka.
Indonesia
Prime Javanese paddies yield roughly 6 metric tons of unmilled rice (2.5 metric tons of milled rice) per hectare. When irrigation is available, rice farmers typically plant Green Revolution rice varieties allowing three growing seasons per year. Since fertilizer and pesticide are relatively expensive inputs, farmers typically plant seeds in a very small plot. Three weeks following germination, the 15-20 centimetre (6–8 in) stalks are picked and replanted at greater separation, in a backbreaking manual procedure.
Rice harvesting in Central Java is often performed not by owners or sharecroppers of paddies, but rather by itinerant middlemen, whose small firms specialize in the harvest, transport, milling, and distribution of rice.
The fertile volcanic soil of much of the Indonesian archipelago—particularly the islands of Java and Bali—has made rice a central dietary staple. Steep terrain on Bali resulted in complex irrigation systems, locally called subak, to manage water storage and drainage for rice terraces.
Italy
Rice is grown in Northern Italy, especially in the valley of the Po River. The paddy fields are irrigated by fast-flowing streams descending from the Alps. In the 19th century and much of the 20th century, the paddy fields were farmed by the mondine, a subculture of seasonal rice paddy workers composed mostly of poor women.
Japan
The acidic soil conditions common in Japan due to volcanic eruptions have made the paddy field the most productive farming method. Paddy fields are represented by the kanji 田 (commonly read as ta or as den), which has had a strong influence on Japanese culture. In fact, the character 田, which originally meant 'field' in general, is used in Japan exclusively to refer to paddy fields. One of the oldest samples of writing in Japan is widely credited to the kanji 田 found on pottery at the archaeological site of Matsutaka in Mie Prefecture that dates to the late 2nd century.
Ta (田) is used as a part of many place names as well as in many family names. Most of these places are somehow related to the paddy field and, in many cases, are based on the history of a particular location. For example, where a river runs through a village, the place east of the river may be called Higashida (東田), literally "east paddy field." A place with a newly irrigated paddy field, especially one made during or after the Edo period, may be called Nitta or Shinden (both 新田), "new paddy field." In some places, lakes and marshes were likened to a paddy field and were named with ta, like Hakkōda (八甲田).
Today, many family names have ta as a component, a practice which can be largely attributed to a government edict in the early Meiji Period which required all citizens to have a family name. Many chose a name based on some geographical feature associated with their residence or occupation, and as nearly three-fourths of the population were farmers, many made family names using ta. Some common examples are Tanaka (田中), literally meaning "in the paddy field;" Nakata (中田), "middle paddy field;" Kawada (川田), "river paddy field;" and Furuta (古田), "old paddy field."
In recent years, rice consumption in Japan has fallen and many rice farmers are increasingly elderly. The government has subsidized rice production since the 1970s, and favors protectionist policies regarding cheaper imported rice.
Korea
Arable land in the small alluvial flats of most rural river valleys in South Korea is dedicated to paddy-field farming. Farmers assess paddy fields for any necessary repairs in February. Fields may be rebuilt, and bund breaches are repaired. This work is carried out until mid-March, when warmer spring weather allows the farmer to buy or grow rice seedlings. The seedlings are transplanted (usually by rice transplanter) from indoors into freshly flooded paddy fields in May. Farmers tend and weed their paddy fields through the summer until around the time of Chuseok, a traditional holiday held on the 15th day of the 8th month of the lunar calendar (circa mid-September on the solar calendar). The harvest begins in October. Coordinating the harvest can be challenging because many Korean farmers have small paddy fields in a number of locations around their villages, and modern harvesting machines are sometimes shared between extended family members. Farmers usually dry the harvested grains in the sun before bringing them to market.
The Hanja character for 'field', jeon (田), is found in some place names, especially small farming townships and villages. However, the specific Korean term for 'paddy' is a purely Korean word, "non" (논).
Madagascar
In Madagascar, the average annual consumption of rice is 130 kg per person, one of the largest in the world.
According to a 1999 study by UPDRS/FAO:
The majority of rice is grown under irrigation (1,054,381 ha). Performance is determined largely by the variety grown and by the quality of water control.
Tavy is the traditional cultivation of upland rice on burned clearings of natural rain forest (135,966 ha). Criticized as a cause of deforestation, tavy is still widely practiced by farmers in Madagascar, who find in it a workable compromise between climate risks, availability of labour, and food security.
By extension, tanety, which literally means "hill", also refers to upland rice cultivation carried out on grassy slopes that have been deforested for the production of charcoal (139,337 ha).
Among its many varieties, the rice of Madagascar includes: Vary lava, a translucent, long, large-grained rice considered a luxury rice; Vary Makalioka, a translucent, long, thin-grained rice; Vary Rojofotsy, a half-long grain rice; and Vary mena, or red rice, exclusive to Madagascar.
Malaysia
Paddy fields can be found in most states on the Malaysian Peninsula, with most of the fields being located in the northern states such as Kedah, Perlis, Perak, and Penang. Paddy fields can also be found on Malaysia's east coast region, in Kelantan and Terengganu. The central state of Selangor also has its fair share of paddy fields, especially in the districts of Kuala Selangor and Sabak Bernam.
Before Malaysia became heavily reliant on its industrial output, people were mainly involved in agriculture, especially in the production of rice. It was for that reason that people usually built their houses next to paddy fields. The very spicy chili pepper that is often eaten in Malaysia, the bird's eye chili, is locally called cili padi, literally "paddy chili". Some research pertaining to rainfed lowland rice in Sarawak has been reported.
Myanmar
Rice is grown in Myanmar primarily in three areas – the Irrawaddy Delta, the area along the Kaladan River and its delta, and the central plains around Mandalay – though there has been an increase in rice farming in Shan State and Kachin State in recent years. Up until the late 1960s, Myanmar was the main exporter of rice and was termed the rice basket of Southeast Asia. Much of the rice grown in Myanmar does not rely on fertilizers and pesticides; thus, although "organic" in a sense, production has been unable to keep pace with population growth or to compete with rice economies that use fertilizers.
Rice is now grown in all three seasons of Myanmar, though primarily in the monsoon season – from June to October. Rice grown in the delta areas relies heavily on the river water and sedimented minerals from the northern mountains, whilst the rice grown in the central regions requires irrigation from the Irrawaddy River.
The fields are tilled when the first rains arrive – traditionally measured at 40 days after Thingyan, the Burmese New Year – around the beginning of June. In modern times, tractors are used, but traditionally, buffalos were employed. The rice plants are planted in nurseries and then transplanted by hand into the prepared fields. The rice is then harvested in late November – "when the rice bends with age". Most of the rice planting and harvesting is done by hand. The rice is then threshed and stored, ready for the mills.
Nepal
In Nepal, rice (Nepali: धान, Dhaan) is grown in the Terai and hilly regions. It is mainly grown during the summer monsoon in Nepal.
Philippines
Paddy fields are a common sight in the Philippines. Several vast paddy fields exist in the provinces of Ifugao, Nueva Ecija, Isabela, Cagayan, Bulacan, Quezon, and other provinces. Nueva Ecija is considered the main rice growing province of the Philippines.
The Banaue Rice Terraces are an example of paddy fields in the country. They are located in Banaue in Northern Luzon, Philippines and were built by the Ifugaos 2,000 years ago. Streams and springs found in the mountains were tapped and channeled into irrigation canals that run downhill through the rice terraces. Other notable Philippine paddy fields are the Batad Rice Terraces, the Bangaan Rice Terraces, the Mayoyao Rice Terraces and the Hapao Rice Terraces.
Located at Barangay Batad in Banaue, the Batad Rice Terraces are shaped like an amphitheatre, and can be reached by a 12-kilometer ride from Banaue Hotel and a 2-hour hike uphill through mountain trails. The Bangaan Rice Terraces portray the typical Ifugao community, where the livelihood activities are within the village and its surroundings. The Bangaan Rice Terraces are accessible by a one-hour ride from Poblacion, Banaue, then a 20-minute trek down to the village. It can be viewed best from the road to Mayoyao. The Mayoyao Rice Terraces are located at Mayoyao, 44 kilometers away from Poblacion, Banaue. The town of Mayoyao lies in the midst of these rice terraces. All dikes are tiered with flat stones. The Hapao Rice Terraces are within 55 kilometers from the capital town of Lagawe. Other Ifugao stone-walled rice terraces are located in the municipality of Hungduan.
Sri Lanka
Sri Lanka's history of paddy cultivation dates back more than 2,000 years. Historical reports say that Sri Lanka was regarded as the "paddy store of the east" because it produced a surplus of rice. Paddy cultivation can be found all over the island, and a considerable amount of land is allocated to it. Paddy is cultivated in both up-country and low-country wetlands. The majority of paddy land is in the dry zone, where special irrigation systems are used for cultivation. Water-storage tanks called "wewa" supply water to the paddy lands during the cultivation period. Agriculture in Sri Lanka depends mainly on rice production, and Sri Lanka sometimes exports rice to its neighbouring countries. Around 1.5 million hectares of land were cultivated for paddy in the 2008/2009 maha season: 64% was cultivated during the dry season and 35% during the wet season. Around 879,000 farming families are engaged in paddy cultivation in Sri Lanka, making up 20% of the country's population and 32% of its employment.
Thailand
Rice production in Thailand represents a significant portion of the Thai economy. It uses over half of the farmable land area and labor force in Thailand.
Thailand has a strong tradition of rice production. It has the fifth-largest amount of land used for rice cultivation in the world and is the world's largest exporter of rice. Thailand has plans to further increase its land available for rice production, with a goal of adding 500,000 hectares to the 9.2 million hectares of rice-growing areas already cultivated. The Thai Ministry of Agriculture expected rice production to yield around 30 million tons of rice for 2008. The most produced strain of rice in Thailand is jasmine rice, which has a significantly lower yield rate than other types of rice, but also normally fetches more than double the price of other strains in a global market.
Vietnam
Rice fields in Vietnam (ruộng or cánh đồng in Vietnamese) are the predominant land use in the valley of the Red River and the Mekong Delta. In the Red River Delta of northern Vietnam, control of seasonal river flooding is achieved by an extensive network of dykes which over the centuries total some 3000 km. In the Mekong Delta of southern Vietnam, there is an interlacing drainage and irrigation canal system that has become the symbol of this area. The canals additionally serve as transportation routes, allowing farmers to bring their produce to market. In Northwestern Vietnam, Thai people built their "valley culture" based on the cultivation of glutinous rice planted in upland fields, requiring terracing of the slopes.
The primary festival related to the agrarian cycle is "lễ hạ điền" (literally "descent into the fields") held as the start of the planting season in hope of a bountiful harvest. Traditionally, the event was officiated with much pomp. The monarch carried out the ritual plowing of the first furrow while local dignitaries and farmers followed suit. Thổ địa (deities of the earth), thành hoàng làng (the village patron spirit), Thần Nông (god of agriculture), and thần lúa (god of rice plants) were all venerated with prayers and offerings.
In colloquial Vietnamese, wealth is frequently associated with the vastness of the individual's land holdings. Paddy fields so large as for "storks to fly with their wings out-stretched" ("đồng lúa thẳng cánh cò bay") can be heard as a common metaphor. Wind-blown undulating rice plants across a paddy field in literary Vietnamese is termed figuratively "waves of rice plants" ("sóng lúa").
Ecology
Paddy fields are a major source of atmospheric methane, which contributes to global warming; they have been estimated to contribute in the range of 50 to 100 million tonnes of the gas per annum. Studies have shown that these emissions can be significantly reduced, while also boosting crop yield, by draining the paddies to allow the soil to aerate and interrupt methane production. Studies have also shown variability in the assessment of methane emissions using local, regional, and global factors, and have called for better inventorying based on micro-level data.
Rice paddies are responsible for 10% of global methane emissions, roughly equal to the emissions of the aviation industry. Drip irrigation systems developed by Netafim and N-Drip have been introduced in several countries and, according to The Times of Israel, can reduce these emissions by up to 85%.
Gallery
See also
Kuk Swamp
Rice-fish system
References
Bibliography
Bale, Martin T. Archaeology of Early Agriculture in Korea: An Update on Recent Developments. Bulletin of the Indo-Pacific Prehistory Association 21(5):77–84, 2001.
Barnes, Gina L. Paddy Soils Now and Then. World Archaeology 22(1):1–17, 1990.
Crawford, Gary W. and Gyoung-Ah Lee. Agricultural Origins in the Korean Peninsula. Antiquity 77(295):87–95, 2003.
Kwak, Jong-chul. Urinara-eui Seonsa – Godae Non Bat Yugu [Dry- and Wet-field Agricultural Features of the Korean Prehistoric]. In Hanguk Nonggyeong Munhwa-eui Hyeongseong [The Formation of Agrarian Societies in Korea]: 21–73. Papers of the 25th National Meetings of the Korean Archaeological Society, Busan, 2001.
External links
How a paddy-field works
Paddy cultivation
Chinese inventions
Crops
Land management
Rice
Riparian zone
Sustainable agriculture
Water and the environment | Paddy field | Environmental_science | 5,972 |
62,182,176 | https://en.wikipedia.org/wiki/Mitragynine | Mitragynine is an indole-based alkaloid and is one of the main psychoactive constituents in the Southeast Asian plant Mitragyna speciosa, commonly known as kratom. It is an opioid that is typically consumed as a part of kratom for its pain-relieving and euphoric effects. It has also been researched for its use to potentially manage symptoms of opioid withdrawal.
Mitragynine is the most abundant active alkaloid in kratom. In Thai varieties of kratom, mitragynine is the most abundant component (up to 66% of total alkaloids), while 7-hydroxymitragynine (7-OH) is a minor constituent (up to 2% of total alkaloid content). In Malaysian kratom varieties, mitragynine is present at lower concentration (12% of total alkaloids). Total alkaloid concentration in dried leaves ranges from 0.5 to 1.5%. Such preparations are orally consumed and typically involve dried kratom leaves which are brewed into tea or ground and placed into capsules.
Uses
Medical
The US Food and Drug Administration (FDA) has stated that there are no approved clinical uses for kratom, and that there is no evidence that kratom is safe or effective for treating any condition. This reiterated the conclusion of an earlier report by the European Monitoring Centre for Drugs and Drug Addiction (EMCDDA) that mitragynine had not been approved for any medical use. The FDA has noted, in particular, that there have been no clinical trials to study the safety and efficacy of kratom in the treatment of opioid addiction.
Pain
Mitragynine-containing kratom extracts, with their accompanying array of alkaloids and other natural products, have been used for their perceived pain-mitigation properties for at least a century. In Southeast Asia, the consumption of mitragynine from whole leaf kratom preparations is common among laborers who report utilizing kratom's mild stimulant and perceived analgesic properties to increase endurance and ease pain while working. In one laboratory study in a rat model in 2016, alkaloid-containing extracts of kratom gave evidence of inducing naloxone-reversible antinociceptive effects in hotplate and tail-flick tests to a level comparable to oxycodone.
Chronic pain
Kratom is commonly used in the United States as self-medication for pain. A 2019 review of existing literature suggested the potential of kratom as substitution therapy for chronic pain.
Opioid withdrawal
As early as the 19th century, kratom was in use for the treatment of opioid addiction and withdrawal. A review of mental health aspects of kratom use mentioned opioid replacement and withdrawal as primary motivations for kratom use: almost 50% of the approximately 8,000 kratom users surveyed indicated kratom use that resulted in reduced or discontinued use of opioids. Some animal models of opioid withdrawal suggest mitragynine can suppress and ameliorate withdrawal from other opioid agonists (e.g., after chronic administration of morphine in zebrafish).
Recreational
Mitragynine and its metabolite 7-hydroxymitragynine (7-OH) are thought to underlie the effects of kratom. Consumption of dried kratom leaves yields different responses depending on the dose consumed. At low doses, kratom is reported to induce a mild stimulating effect, while larger doses are reported to produce sedation and analgesia typical of opioids. The concentration of mitragynine and other alkaloids in kratom has been found to vary between particular "strains" of the plant, thus indicating "strain-specific" effects from consumption as well. Kratom extracts are often mixed with other easily attainable psychoactive compounds—such as those found in over-the-counter cough medicines—to potentiate the effects of the concentrated levels of mitragynine. Effects of mitragynine-containing preparations from M. speciosa include analgesic, anti-inflammatory, antidepressant, and muscle-relaxant properties; adverse effects include a negative impact on cognition. In animal studies, the potential for misuse has been found, including through the use of the conditioned place preference (CPP) test, which indicated a distinct reward effect for 7-hydroxymitragynine.
Adverse effects
Dependence and withdrawal
Due at least in part to the activity on opioid receptors, mitragynine can result in dependence and lead to withdrawal symptoms when discontinued. Regular users reported withdrawal symptoms after discontinuing kratom such as pain, muscle spasms, insomnia, nausea, diarrhea, restlessness, anxiety, and anger, all of which are characteristic of opioid withdrawal. In one study, symptoms of withdrawal lasted less than three days for most subjects. In an animal study, mitragynine withdrawal symptoms were observed following 14 days of mitragynine intraperitoneal injections in mice and included displays of anxiety, teeth chattering, and piloerection, all of which are characteristic signs of opioid withdrawal in mice and are comparable to morphine withdrawal symptoms.
Chemistry
Solubility
The solubility of mitragynine from kratom in neutral-pH and alkaline water is very low (0.0187 mg/ml at pH 9). The solubility of mitragynine in acidic water is higher (3.5 mg/ml at pH 4); however, the alkaloid can become unstable under these conditions, so certain products, such as low-pH beverages, have a very short shelf life. Many vendors offer concentrated kratom products with claims of improved mitragynine solubility; however, those products are often formulated with solvents such as propylene glycol, which can make the products unpleasant.
Pharmacology
Pharmacodynamics
Mitragynine acts on a variety of receptors in the central nervous system (CNS), most notably the mu, delta, and kappa opioid receptors. The nature of mitragynine's interaction with opioid receptors has yet to be fully classified, with some reports suggesting partial agonist activity at the mu-opioid receptor and others suggesting full agonist activity. Additionally, mitragynine is known to interact with delta and kappa opioid receptors as well, but these interactions remain ambiguous, with some reports indicating mitragynine as a delta and kappa opioid receptor competitive antagonist and others as a full agonist of these receptors. In either case, mitragynine is reported to have lower affinity to delta and kappa opioid receptors compared to mu opioid receptors. Mitragynine is also known to interact with dopamine D2, adenosine, serotonin, and alpha-2 adrenergic receptors, though the significance of these interactions is not fully understood. Additionally, several reports of mitragynine pharmacology indicate potential biased agonism activity favoring G protein signaling pathways independent of beta arrestin recruitment, which was originally thought to be a primary component in reducing opioid-induced respiratory depression. However, recent evidence suggests that low intrinsic efficacy at the mu-opioid receptor is responsible for the improved side effect profile of mitragynine, as opposed to G protein bias.
Pharmacokinetics
Pharmacokinetic analyses have largely taken place in live rodents as well as in rodent and human microsomes. Owing to the heterogeneity of the analyses and the paucity of human experiments conducted thus far, the pharmacokinetic profile of mitragynine is not complete. However, initial pharmacokinetic studies in humans have yielded preliminary information. In a study of 10 healthy volunteers taking orally administered mitragynine from whole-leaf preparations, mitragynine appeared to have a much longer half-life than typical opioid agonists (7–39 hours) and reached peak plasma concentration (Tmax) within 1 hour of administration. However, another study involving a kratom tea preparation reported a much shorter half-life of 3 hours. Mitragynine is estimated to have a bioavailability of 21%.
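To illustrate how parameters such as Tmax and half-life shape a plasma concentration–time curve, the following Python sketch evaluates a standard one-compartment oral-absorption (Bateman) model. It is not a validated mitragynine model: the dose, volume of distribution, and rate constants are assumed values chosen only to be roughly consistent with the figures quoted above (Tmax near 1 hour, a half-life within the reported 7–39 hour range, and 21% bioavailability).

import numpy as np

# Illustrative one-compartment oral-absorption model; all parameters are assumptions,
# not measured mitragynine values.
dose_mg = 50.0            # assumed oral dose
F = 0.21                  # oral bioavailability quoted in the text
V_L = 100.0               # assumed volume of distribution, litres
ka = 5.0                  # assumed absorption rate constant, 1/h (gives Tmax near 1 h)
ke = np.log(2) / 24.0     # elimination rate constant for an assumed 24 h half-life

t = np.linspace(0, 72, 721)   # time grid in hours
conc = (F * dose_mg / V_L) * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))
print(f"predicted Tmax = {t[conc.argmax()]:.1f} h")   # about 1 hour with these assumptions

With these assumed values the model reproduces a peak at roughly one hour followed by a slow decline governed by the long elimination half-life.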
Metabolism
Mitragynine is primarily metabolized in the liver, producing many metabolites during both phase I and phase II.
Phase I
During phase I metabolism, mitragynine undergoes hydrolysis of the methylester group on C16 as well as o-demethylation of both methoxy groups on positions 9 and 17. Following this step, oxidation and reduction reactions convert aldehyde intermediates into alcohols and carboxylic acids. P450 metabolic enzymes are known to facilitate the phase I metabolism of mitragynine which reportedly has an inhibitory effect on multiple P450 enzymes, raising the possibility of adverse drug interactions.
Phase II
During phase II metabolism, phase I metabolites undergo glucuronidation and sulfation to form multiple glucuronide and sulfate conjugates, which are then excreted via urine.
History
Mitragynine consumption for medicinal and recreational purposes dates back centuries, although early use was primarily limited to Southeast Asian countries such as Indonesia and Thailand, where the plant grows indigenously. Recently, mitragynine use has spread throughout Europe and the Americas as both a recreational and medicinal drug. While research into the effects of kratom have begun to emerge, investigations on the active compound mitragynine are less common.
Legality
In the United States, kratom and its active ingredients are not scheduled under DEA guidelines. Despite the current legal status of the plant and its constituents, the legality of kratom has been turbulent in recent years. In August 2016, the DEA issued a report of intent stating that mitragynine and 7-hydroxymitragynine would undergo emergency scheduling and be placed under Schedule I classification until further notice, making kratom strictly illegal and thus hindering research on its active constituents. Following this report, the DEA faced significant public and administrative opposition in the form of a White House petition signed by 140,000 citizens and a letter to the DEA administrator backed by 51 members of the House of Representatives resisting the proposed scheduling. This opposition led the DEA to withdraw its report of intent in October 2016, allowing for unencumbered research into the potential benefits and health risks associated with mitragynine and other alkaloids in the kratom plant. Kratom and its active constituents are unscheduled and legally sold in stores and online in the United States except for a small number of states. As of June 2019, the FDA continues to warn consumers not to use kratom, while advocating for more research for a better understanding of kratom's safety profile.
Research
Research limitations
Inconsistencies in dosing, purity, and concomitant drug use makes evaluating the effects of mitragynine in humans difficult. Conversely, animal studies control for such variability, but offer limited translatable information relevant to humans. Experimental limitations aside, mitragynine has been found to interact with a variety of receptors, although the nature and extent of receptor interactions has yet to be fully characterized. Additionally, the toxicity of mitragynine and associated kratom alkaloids have yet to be fully determined in humans, nor has the risk of overdose. More studies are necessary to assess safety and potential therapeutic utility.
Toxicology
Mitragynine toxicity in humans is largely unknown, as animal studies show significant species-specific differences in mitragynine tolerance. Mitragynine toxicity in humans is rarely reported although specific examples of seizures and liver toxicity in kratom consumers have been reported. Due to Cytochrome P450 enzyme inhibition, the combination of mitragynine with other drugs poses concern for adverse reactions to mitragynine. Fatalities involving mitragynine tend to include its use in combination with opioids and some cough suppressants. Post-mortem toxicology screens indicate a wide range of mitragynine blood concentrations ranging from 10 μg/L to 4800 μg/L, making it difficult to calculate what constitutes a toxic dose in humans. These variations are suggested to result from differences in the toxicology assays used, and how long after death the assays were conducted.
See also
Mitragyna speciosa (Kratom)
7-Hydroxymitragynine (7-OH)
References
Indoloquinolizines
Tryptamine alkaloids
Methoxy compounds
Methyl esters
Mu-opioid receptor agonists
Delta-opioid receptor agonists | Mitragynine | Chemistry | 2,659 |
60,066,606 | https://en.wikipedia.org/wiki/CoRoT-15 | CoRoT-15 is an eclipsing binary star system about away in the constellation Monoceros, discovered by the CoRoT space telescope in 2010. It consists of an F7V star and an orbiting brown dwarf companion, which was one of the first transiting brown dwarfs to be discovered.
References
Monoceros
Eclipsing binaries
F-type main-sequence stars
Brown dwarfs
Astronomical objects discovered in 2010
15b
J06282781+0611105 | CoRoT-15 | Astronomy | 99 |
956,906 | https://en.wikipedia.org/wiki/Postmaster%20%28computing%29 | In computers and technology, a postmaster is the administrator of a mail server. Nearly every domain should have the e-mail address postmaster@example.com where errors in e-mail processing are directed. Error e-mails automatically generated by mail servers' MTAs usually appear to have been sent to the postmaster address.
Every domain that supports the SMTP protocol for electronic mail is required by and, as early as 1982, by , to have the postmaster address. The rfc-ignorant.org website used to maintain a list of domains that do not comply with the RFC based on this requirement, but was shut down in November 2012. The website RFC2 Realtime List expanded to include rfc-ignorant's lists after they shut down.
Quoting from the RFC:
Any system that includes an SMTP server supporting mail relaying or delivery MUST support the reserved mailbox "postmaster" as a case-insensitive local name. This postmaster address is not strictly necessary if the server always returns 554 on connection opening (as described in section 3.1). The requirement to accept mail for postmaster implies that RCPT commands which specify a mailbox for postmaster at any of the domains for which the SMTP server provides mail service, as well as the special case of "RCPT TO:<Postmaster>" (with no domain specification), MUST be supported.
SMTP systems are expected to make every reasonable effort to accept mail directed to Postmaster from any other system on the Internet. In extreme cases (such as to contain a denial of service attack or other breach of security) an SMTP server may block mail directed to Postmaster. However, such arrangements SHOULD be narrowly tailored so as to avoid blocking messages which are not part of such attacks.
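The requirement can be exercised from a script. The following Python sketch issues an SMTP RCPT command for the postmaster mailbox against a mail server; the host name and domain shown are placeholders, and a real check would first resolve the domain's MX records and handle time-outs and transient (4xx) replies.

import smtplib

MX_HOST = "mail.example.com"   # placeholder: the domain's mail exchanger
DOMAIN = "example.com"         # placeholder domain

with smtplib.SMTP(MX_HOST, 25, timeout=10) as smtp:
    smtp.ehlo()
    smtp.mail("<>")            # null reverse-path, as used by bounce messages
    code, reply = smtp.rcpt(f"postmaster@{DOMAIN}")
    # A 250 or 251 reply means the postmaster mailbox is accepted; 550 means it is refused.
    print(code, reply.decode(errors="replace"))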
Since most domains have a postmaster address, it is commonly targeted by spamming operations. Even if not directly spammed, a postmaster address may be sent bounced spam from other servers that mistakenly trust fake return-paths commonly used in spam.
References
External links
: The SMTP Protocol
Email | Postmaster (computing) | Technology | 411 |
53,464,494 | https://en.wikipedia.org/wiki/Chronocinematograph | Chronocinematograph is an astronomical instrument consisting of a film camera, chronometer and chronograph. The device records images on a precise time base for observing an eclipse. It was invented in 1927 by the Polish astronomer, mathematician and geodesist Tadeusz Banachiewicz for observing total solar eclipses. During the same year, Banachiewicz used his device for solar observations in Lapland (Sweden), then in the USA (1932) and in Greece, Japan and Siberia (1936).
The invention improved the precision of determining the timing of an eclipse, thanks to more precisely timed photographs of Baily's beads, and of quantifying the duration of totality, which previously could not be observed as closely because of the brightness of the Sun.
References
Astronomical instruments | Chronocinematograph | Astronomy | 166 |
27,271,977 | https://en.wikipedia.org/wiki/Wireless%20security%20camera | Wireless security cameras are closed-circuit television (CCTV) cameras that transmit a video and audio signal to a wireless receiver through a radio band. Many wireless security cameras require at least one cable or wire for power; "wireless" refers to the transmission of video/audio. However, some wireless security cameras are battery-powered, making the cameras truly wireless from top to bottom.
Wireless cameras are proving very popular among modern security consumers due to their low installation costs (there is no need to run expensive video extension cables) and flexible mounting options; wireless cameras can be mounted/installed in locations previously unavailable to standard wired cameras. In addition to ease of use and convenience of access, wireless security cameras allow users to leverage broadband wireless internet to provide seamless video streaming over the internet.
Types
Analog wireless
Analog wireless is the transmission of audio and video signals using radio frequencies. Typically, analog wireless has a transmission range of around in open space; walls, doors, and furniture will reduce this range.
Analog wireless is found in three frequencies: 900 MHz, 2.4 GHz, and 5.8 GHz. Currently, the majority of wireless security cameras operate on the 2.4 GHz frequency. Most household routers, cordless phones, video game controllers, and microwaves operate on the 2.4 GHz frequency and may cause interference with a wireless security camera. The main difference between 2.4 and 5 GHz frequencies is range. 900 MHz is known for its ability to penetrate through barriers like walls and vegetation.
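One way to see why lower frequencies tend to reach farther for the same transmit power is to compare ideal free-space path loss, which grows with frequency. The short Python sketch below evaluates the standard free-space formula at an assumed distance of 100 m; it ignores antennas, transmit power, walls, and multipath, so it is only a rough illustration rather than a prediction of real camera range.

import math

def fspl_db(distance_m, freq_hz):
    # Free-space path loss in dB: 20*log10(4*pi*d*f/c)
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

for label, f in [("900 MHz", 900e6), ("2.4 GHz", 2.4e9), ("5.8 GHz", 5.8e9)]:
    print(f"{label}: {fspl_db(100, f):.1f} dB")   # roughly 72, 80 and 88 dB at 100 m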
Advantages
Cost effective: the cost of individual cameras is low
Multiple receivers per camera: the signal from one camera can be picked up by any receiver; you can have multiple receivers in various locations to create your wireless surveillance network
Disadvantages
Susceptible to interference from other household devices, such as microwaves, cordless phones, video game controllers, and routers.
No signal strength indicator: there is no visual alert (like the bars on a cellular phone) indicating the strength of your signal.
Susceptible to interception: because analog wireless uses a consistent frequency, it is possible for the signals to be picked up by other receivers.
One-way communication only: it is not possible for the receiver to send signals back to the camera.
Digital wireless cameras
Digital wireless is the transmission of audio and video analog signals encoded as digital packets over high-bandwidth radio frequencies.
Advantages
Wide transmission range—usually close to 450 feet (open space, clear line of sight between camera and receiver)
High quality video and audio
Two-way communication between the camera and the receiver
Digital signal means you can transmit commands and functions, such as turning lights on and off
You can connect multiple receivers to one recording device, such as security DVR
Uses and applications
Home security systems
Wireless security cameras are becoming more and more popular in the consumer market, being a cost-effective way to have a comprehensive surveillance system installed in a home or business. Wireless cameras are also ideal for people renting homes or apartments. Since there is no need to run video extension cables through walls or ceilings (from the camera to the receiver or recording device), one does not need the approval of a landlord to install a wireless security camera system. Additionally, the lack of wiring allows for less mess, avoiding damage to the look of a building.
A wireless security camera is also a great option for seasonal monitoring and surveillance. For example, one can observe a pool or patio.
Barn Cameras
Wireless cameras are also very useful for monitoring outbuildings, as wireless signals can be sent from one building to another where it is not possible to run wires due to roads or other obstructions. One common use is watching animals in a barn from a house located on the same property; one of the first such "BarnCam" setups was profiled in The New York Times.
Law enforcement
Wireless security cameras are also used by law enforcement agencies to deter crime. The cameras can be installed in many remote locations, and the video data is transmitted through a government-only wireless network. An example of this application is the deployment of hundreds of wireless security cameras by the New York City Police Department on lamp posts along many streets throughout the city.
Wireless range
Wireless security cameras function best when there is a clear line of sight between the camera(s) and the receiver. If digital wireless cameras are outdoors and have a clear line of sight, they typically have a range between 250 and 450 feet. If located indoors, the range can be limited to 100 to 150 feet. Cubical walls, drywall, glass, and windows generally do not degrade wireless signal strength. Brick, concrete floors, and walls degrade signal strength. Trees that are in the line of sight of the wireless camera and receiver may also impact signal strength.
The signal range also depends on whether there are competing signals using the same frequency as the camera. For example, signals from cordless phones or routers may affect signal strength. When this happens, the camera image may freeze, or appear "choppy". Typically, the solution is to lock the channel that the wireless router operates on.
See also
Dashcam
Spy Camera
References
Video
Wireless | Wireless security camera | Engineering | 1,037 |
9,615,240 | https://en.wikipedia.org/wiki/Nitazoxanide | Nitazoxanide, sold under the brand name Alinia among others, is a broad-spectrum antiparasitic and broad-spectrum antiviral medication that is used in medicine for the treatment of various helminthic, protozoal, and viral infections. It is indicated for the treatment of infection by Cryptosporidium parvum and Giardia lamblia in immunocompetent individuals and has been repurposed for the treatment of influenza. Nitazoxanide has also been shown to have in vitro antiparasitic activity and clinical treatment efficacy for infections caused by other protozoa and helminths; evidence suggested that it possesses efficacy in treating a number of viral infections as well.
Chemically, nitazoxanide is the prototype member of the thiazolides, a class of drugs which are synthetic nitrothiazolyl-salicylamide derivatives with antiparasitic and antiviral activity. Tizoxanide, an active metabolite of nitazoxanide in humans, is also an antiparasitic drug of the thiazolide class.
Nitazoxanide tablets were approved as a generic medication in the United States in 2020.
Uses
Nitazoxanide is an effective first-line treatment for infection by Blastocystis species and is indicated for the treatment of infection by Cryptosporidium parvum or Giardia lamblia in immunocompetent adults and children. It is also an effective treatment option for infections caused by other protozoa and helminths (e.g., Entamoeba histolytica, Hymenolepis nana, Ascaris lumbricoides, and Cyclospora cayetanensis).
Chronic hepatitis B
Nitazoxanide alone has shown preliminary evidence of efficacy in the treatment of chronic hepatitis B over a one-year course of therapy. Nitazoxanide 500 mg twice daily resulted in a decrease in serum HBV DNA in all of 4 HBeAg-positive patients, with undetectable HBV DNA in 2 of 4 patients, loss of HBeAg in 3 patients, and loss of HBsAg in one patient. Seven of 8 HBeAg-negative patients treated with nitazoxanide 500 mg twice daily had undetectable HBV DNA and 2 had loss of HBsAg. Additionally, nitazoxanide monotherapy in one case and nitazoxanide plus adefovir in another case resulted in undetectable HBV DNA, loss of HBeAg and loss of HBsAg. These preliminary studies showed a higher rate of HBsAg loss than any currently licensed therapy for chronic hepatitis B. The similar mechanism of action of interferon and nitazoxanide suggest that stand-alone nitazoxanide therapy or nitazoxanide in concert with nucleos(t)ide analogs have the potential to increase loss of HBsAg, which is the ultimate end-point of therapy. A formal phase 2 study is being planned for 2009.
Chronic hepatitis C
Romark initially decided to focus on the possibility of treating chronic hepatitis C with nitazoxanide. The drug garnered interest from the hepatology community after three phase II clinical trials involving the treatment of hepatitis C with nitazoxanide produced positive results for treatment efficacy and similar tolerability to placebo without any signs of toxicity. A meta-analysis from 2014 concluded that the previously conducted trials were of low quality and carried a risk of bias. The authors concluded that more randomized trials with low risk of bias are needed to determine whether nitazoxanide can be used as an effective treatment for patients with chronic hepatitis C.
Contraindications
Nitazoxanide is contraindicated only in individuals who have experienced a hypersensitivity reaction to nitazoxanide or the inactive ingredients of a nitazoxanide formulation.
Adverse effects
The side effects of nitazoxanide do not significantly differ from a placebo treatment for giardiasis; these symptoms include stomach pain, headache, upset stomach, vomiting, discolored urine, excessive urinating, skin rash, itching, fever, flu syndrome, and others. Nitazoxanide does not appear to cause any significant adverse effects when taken by healthy adults.
Overdose
Information on nitazoxanide overdose is limited. Oral doses of 4 grams in healthy adults do not appear to cause any significant adverse effects. In various animals, the oral LD50 is higher than 10 .
Interactions
Due to the exceptionally high plasma protein binding (>99.9%) of nitazoxanide's metabolite, tizoxanide, the concurrent use of nitazoxanide with other highly plasma protein-bound drugs with narrow therapeutic indices (e.g., warfarin) increases the risk of drug toxicity. In vitro evidence suggests that nitazoxanide does not affect the CYP450 system.
Pharmacology
Pharmacodynamics
The anti-protozoal activity of nitazoxanide is believed to be due to interference with the pyruvate:ferredoxin oxidoreductase (PFOR) enzyme-dependent electron-transfer reaction that is essential to anaerobic energy metabolism. PFOR inhibition may also contribute to its activity against anaerobic bacteria.
It has also been shown to have activity against influenza A virus in vitro. The mechanism appears to be by selectively blocking the maturation of the viral hemagglutinin at a stage preceding resistance to endoglycosidase H digestion. This impairs hemagglutinin intracellular trafficking and insertion of the protein into the host plasma membrane.
Nitazoxanide modulates a variety of other pathways in vitro, including glutathione-S-transferase and glutamate-gated chloride ion channels in nematodes, respiration and other pathways in bacteria and cancer cells, and viral and host transcriptional factors.
Pharmacokinetics
Following oral administration, nitazoxanide is rapidly hydrolyzed to the pharmacologically active metabolite, tizoxanide, which is 99% protein bound. Tizoxanide is then glucuronide conjugated into the active metabolite, tizoxanide glucuronide. Peak plasma concentrations of the metabolites tizoxanide and tizoxanide glucuronide are observed 1–4 hours after oral administration of nitazoxanide, whereas nitazoxanide itself is not detected in blood plasma.
Roughly two-thirds of an oral dose of nitazoxanide is excreted as its metabolites in feces, while the remainder of the dose is excreted in urine. Tizoxanide is excreted in the urine, bile and feces. Tizoxanide glucuronide is excreted in urine and bile.
Chemistry
Acetic acid [2-[(5-nitro-2-thiazolyl)amino]-oxomethyl]phenyl ester is a carboxylic ester and a member of benzamides. It is functionally related to a salicylamide.
Nitazoxanide is the prototype member of the thiazolides, a drug class of structurally related broad-spectrum antiparasitic compounds. It is a broad-spectrum anti-infective drug that significantly modulates the survival, growth, and proliferation of a range of extracellular and intracellular protozoa, helminths, and anaerobic and microaerophilic bacteria, in addition to viruses. Nitazoxanide is a light yellow crystalline powder. It is poorly soluble in ethanol and practically insoluble in water.
The molecular formula of nitazoxanide is C12H9N3O5S and its molecular weight is 307.28 g/mol.
Tizoxanide, an active metabolite of nitazoxanide in humans, is also an antiparasitic drug of the thiazolide class.
IUPAC Name: [2-[(5-nitro-1,3-thiazol-2-yl)carbamoyl]phenyl] acetate
Canonical SMILES: CC(=O)OC1=CC=CC=C1C(=O)NC2=NC=C(S2)[N+](=O)[O-] (parsed in the example below)
MeSH Synonyms:
1) 2-(acetyloxy)-N-(5-nitro-2-thiazolyl)benzamide
2) Alinia
3) Colufase
4) Cryptaz
5) Daxon
6) Heliton
7) Ntz
8) Taenitaz
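The identifiers above can be cross-checked programmatically. The Python sketch below assumes the open-source RDKit cheminformatics library is installed; it parses the canonical SMILES given above and recomputes the molecular formula and molecular weight.

from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

smiles = "CC(=O)OC1=CC=CC=C1C(=O)NC2=NC=C(S2)[N+](=O)[O-]"   # canonical SMILES for nitazoxanide
mol = Chem.MolFromSmiles(smiles)

print(rdMolDescriptors.CalcMolFormula(mol))    # C12H9N3O5S
print(round(Descriptors.MolWt(mol), 2))        # approximately 307.28 g/mol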
History
Nitazoxanide was originally discovered in the 1980s by Jean-François Rossignol at the Pasteur Institute. Initial studies demonstrated activity versus tapeworms. In vitro studies demonstrated much broader activity. Dr. Rossignol co-founded Romark Laboratories, with the goal of bringing nitazoxanide to market as an anti-parasitic drug. Initial studies in the USA were conducted in collaboration with Unimed Pharmaceuticals, Inc. (Marietta, GA) and focused on development of the drug for treatment of cryptosporidiosis in AIDS. Controlled trials began shortly after the advent of effective anti-retroviral therapies. The trials were abandoned due to poor enrollment and the FDA rejected an application based on uncontrolled studies.
Subsequently, Romark launched a series of controlled trials. A placebo-controlled study of nitazoxanide in cryptosporidiosis demonstrated significant clinical improvement in adults and children with mild illness. Among malnourished children in Zambia with chronic cryptosporidiosis, a three-day course of therapy led to clinical and parasitologic improvement and improved survival. In Zambia and in a study conducted in Mexico, nitazoxanide was not successful in the treatment of cryptosporidiosis in advanced infection with human immunodeficiency virus at the doses used. However, it was effective in patients with higher CD4 counts. In treatment of giardiasis, nitazoxanide was superior to placebo and comparable to metronidazole. Nitazoxanide was successful in the treatment of metronidazole-resistant giardiasis. Studies have suggested efficacy in the treatment of cyclosporiasis, isosporiasis, and amebiasis. Recent studies have also found it to be effective against beef tapeworm (Taenia saginata).
Pharmaceutical products
Dosage forms
Nitazoxanide is currently available in two oral dosage forms: a tablet (500 mg) and an oral suspension (100 mg per 5 ml when reconstituted).
An extended release tablet (675 mg) has been used in clinical trials for chronic hepatitis C; however, this form is not currently marketed or available for prescription.
Brand names
Nitazoxanide is sold under the brand names Adonid, Alinia, Allpar, Annita, Celectan, Colufase, Daxon, Dexidex, Diatazox, Kidonax, Mitafar, Nanazoxid, Parazoxanide, Netazox, Niazid, Nitamax, Nitax, Nitaxide, Nitaz, Nizonide, Pacovanton, Paramix, Toza, and Zox.
Research
Nitazoxanide has been studied in phase 3 clinical trials for the treatment of influenza, owing to its inhibitory effect on a broad range of influenza virus subtypes and its efficacy against influenza viruses that are resistant to neuraminidase inhibitors such as oseltamivir. Nitazoxanide is also being researched as a potential treatment for COVID-19, chronic hepatitis B, chronic hepatitis C, and rotavirus and norovirus gastroenteritis.
References
Further reading
Acetate esters
Antiparasitic agents
Antiviral drugs
Nitrothiazoles
Salicylamide ethers | Nitazoxanide | Biology | 2,621 |
5,693,489 | https://en.wikipedia.org/wiki/Kenneth%20Kellermann | Kenneth Irwin Kellermann (born July 1, 1937) is an American astronomer at the National Radio Astronomy Observatory. He is best known for his work on quasars. He won the Helen B. Warner Prize for Astronomy of the American Astronomical Society in 1971, and the Bruce Medal of the Astronomical Society of the Pacific in 2014.
Kellermann is a member of the National Academy of Sciences, the American Academy of Arts and Sciences, and the American Philosophical Society.
Kellermann was born in New York City to Alexander Kellermann and Rae Kellermann (née Goodstein). His paternal grandparents emigrated from Hungary and his maternal grandparents from Romania.
Publications
Direct Link
References
Living people
1937 births
Members of the Eurasian Astronomical Society
Foreign members of the Russian Academy of Sciences
Scientists from New York City
Jewish American scientists
American people of Hungarian-Jewish descent
American people of Romanian-Jewish descent
21st-century American Jews
Members of the American Philosophical Society | Kenneth Kellermann | Astronomy | 184 |
67,134,422 | https://en.wikipedia.org/wiki/The%20Fiel%20contraste | The Fiel contraste is a sculptural group created by the Spanish sculptor Ramón Conde, located in Pontevedra, Spain. It stands in Alhóndiga street, behind the Pontevedra City Hall, and was inaugurated on 30 April 2010.
History
The sculptural group is located on the site where the Alhóndiga, or municipal grain market, stood in the Middle Ages. The central statue recalls the medieval civil servant (hired by the town hall) who, at the entrance to the walls of Pontevedra near the Bastida Tower, faithfully checked with his scales the weights and measures of the goods that were to be sold in the city.
Until the 16th century, the Alhóndiga was located where the Pontevedra City Hall is today. At the entrance to the Alhóndiga, the Fiel Contraste was responsible for checking the weights and measures of all the goods that arrived there to be sold. The taxes on the market depended on the verification of the weight of bread or cereals or the measures of wine. The disappearance of this profession occurred with the unification of weights and measures brought about by the Bourbon administration, with the appearance of the metric system and, finally, with the inauguration of the current City Hall in 1880.
The commercial and fishing boom in Pontevedra had boosted the holding of markets in the city, notably the Feira Franca granted to Pontevedra in 1467 by King Henry IV of Castile, when the city was the main port in Galicia.
Description
The sculptural group consists of five pieces. The central bronze piece is the Faithful Contrast, which represents a Herculean, timeless man (characteristic of Ramón Conde's work) with his left arm extended, holding a pair of scales in his hand. The statue is high and weighs . His strong features denote the power and authority of his role in resolving conflicts about the exact weight of products in the city's fairs and markets.
Around this central statue are four two-dimensional pieces of Corten Steel in the form of silhouettes or shadows depicting popular characters from a medieval city market, such as shopkeepers with their baskets in front of them or merchants on any given day in a city market.
The sculptural group is valued at 100,000 euros.
Gallery
References
See also
Related articles
Pontevedra City Hall
External links
https://www.outono.net/elentir/2016/01/26/el-fiel-contraste-un-monumento-al-almotacen-de-pontevedra/
http://esculturayarte.com/022739/Fiel-Contraste-1-en-Pontevedra.html#.X8aQe86g82w
Pontevedra
Colossal statues
Bronze sculptures in Spain
Outdoor sculptures in Pontevedra
Sculptures in Spain
21st-century sculptures
Sculptures of men in Spain
Tourist attractions in Galicia (Spain)
Monuments and memorials in Pontevedra
Monuments and memorials in Galicia (Spain)
Sculptures in Pontevedra
History of Pontevedra | The Fiel contraste | Physics,Mathematics | 633 |
48,827,098 | https://en.wikipedia.org/wiki/Iodopindolol | Iodopindolol is a beta-adrenergic selective antagonist tagged with radioactive iodine-125. It has been used to map beta receptors in cellular experiments.
See also
Pindolol
References
Beta blockers
Radiopharmaceuticals
Organoiodides
Isopropylamino compounds
N-isopropyl-phenoxypropanolamines
Indoles | Iodopindolol | Chemistry | 82 |
10,939 | https://en.wikipedia.org/wiki/Formal%20language | In logic, mathematics, computer science, and linguistics, a formal language consists of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules called a formal grammar.
The alphabet of a formal language consists of symbols, letters, or tokens that concatenate into strings called words. Words that belong to a particular formal language are sometimes called well-formed words or well-formed formulas. A formal language is often defined by means of a formal grammar such as a regular grammar or context-free grammar, which consists of its formation rules.
In computer science, formal languages are used, among others, as the basis for defining the grammar of programming languages and formalized versions of subsets of natural languages, in which the words of the language represent concepts that are associated with meanings or semantics. In computational complexity theory, decision problems are typically defined as formal languages, and complexity classes are defined as the sets of the formal languages that can be parsed by machines with limited computational power. In logic and the foundations of mathematics, formal languages are used to represent the syntax of axiomatic systems, and mathematical formalism is the philosophy that all of mathematics can be reduced to the syntactic manipulation of formal languages in this way.
The field of formal language theory studies primarily the purely syntactic aspects of such languages—that is, their internal structural patterns. Formal language theory sprang out of linguistics, as a way of understanding the syntactic regularities of natural languages.
History
In the 17th century, Gottfried Leibniz imagined and described the characteristica universalis, a universal and formal language which utilised pictographs. Later, Carl Friedrich Gauss investigated the problem of Gauss codes.
Gottlob Frege attempted to realize Leibniz's ideas, through a notational system first outlined in Begriffsschrift (1879) and more fully developed in his 2-volume Grundgesetze der Arithmetik (1893/1903). This described a "formal language of pure thought."
In the first half of the 20th century, several developments were made with relevance to formal languages. Axel Thue published four papers relating to words and language between 1906 and 1914. The last of these introduced what Emil Post later termed 'Thue Systems', and gave an early example of an undecidable problem. Post would later use this paper as the basis for a 1947 proof "that the word problem for semigroups was recursively insoluble", and he also devised the canonical system for the creation of formal languages.
In 1907, Leonardo Torres Quevedo introduced a formal language for the description of mechanical drawings (mechanical devices), in Vienna. He published "Sobre un sistema de notaciones y símbolos destinados a facilitar la descripción de las máquinas" ("On a system of notations and symbols intended to facilitate the description of machines"). Heinz Zemanek rated it as an equivalent to a programming language for the numerical control of machine tools.
Noam Chomsky devised an abstract representation of formal and natural languages, known as the Chomsky hierarchy. In 1959 John Backus developed the Backus-Naur form to describe the syntax of a high level programming language, following his work in the creation of FORTRAN. Peter Naur was the secretary/editor for the ALGOL60 Report in which he used Backus–Naur form to describe the Formal part of ALGOL60.
Words over an alphabet
An alphabet, in the context of formal languages, can be any set; its elements are called letters. An alphabet may contain an infinite number of elements; however, most definitions in formal language theory specify alphabets with a finite number of elements, and many results apply only to them. It often makes sense to use an alphabet in the usual sense of the word, or more generally any finite character encoding such as ASCII or Unicode.
A word over an alphabet can be any finite sequence (i.e., string) of letters. The set of all words over an alphabet Σ is usually denoted by Σ* (using the Kleene star). The length of a word is the number of letters it is composed of. For any alphabet, there is only one word of length 0, the empty word, which is often denoted by e, ε, λ or even Λ. By concatenation one can combine two words to form a new word, whose length is the sum of the lengths of the original words. The result of concatenating a word with the empty word is the original word.
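To make these definitions concrete, here is a minimal Python sketch (added for illustration; the function name words_up_to and the choice of alphabet are this example's own, not from any reference) that enumerates a finite prefix of Σ* and checks the basic properties of concatenation and the empty word.

<syntaxhighlight lang="python">
from itertools import product

def words_up_to(alphabet, max_len):
    """Enumerate all words over `alphabet` of length 0..max_len (a finite prefix of Σ*)."""
    for n in range(max_len + 1):
        for letters in product(sorted(alphabet), repeat=n):
            yield "".join(letters)   # the single word of length 0 is the empty word ""

sigma = {"a", "b"}
print(list(words_up_to(sigma, 2)))   # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']

# Concatenation combines two words; the length of the result is the sum of the lengths.
v, w = "ab", "ba"
assert len(v + w) == len(v) + len(w)

# Concatenating a word with the empty word yields the original word.
assert v + "" == v
</syntaxhighlight>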
In some applications, especially in logic, the alphabet is also known as the vocabulary and words are known as formulas or sentences; this breaks the letter/word metaphor and replaces it by a word/sentence metaphor.
Definition
A formal language L over an alphabet Σ is a subset of Σ*, that is, a set of words over that alphabet. Sometimes the sets of words are grouped into expressions, whereas rules and constraints may be formulated for the creation of 'well-formed expressions'.
In computer science and mathematics, which do not usually deal with natural languages, the adjective "formal" is often omitted as redundant.
While formal language theory usually concerns itself with formal languages that are described by some syntactic rules, the actual definition of the concept "formal language" is only as above: a (possibly infinite) set of finite-length strings composed from a given alphabet, no more and no less. In practice, there are many languages that can be described by rules, such as regular languages or context-free languages. The notion of a formal grammar may be closer to the intuitive concept of a "language", one described by syntactic rules. By an abuse of the definition, a particular formal language is often thought of as being accompanied with a formal grammar that describes it.
Examples
The following rules describe a formal language L over the alphabet Σ = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, =}:
Every nonempty string that does not contain "+" or "=" and does not start with "0" is in L.
The string "0" is in L.
A string containing "=" is in L if and only if there is exactly one "=", and it separates two valid strings of L.
A string containing "+" but not "=" is in L if and only if every "+" in the string separates two valid strings of L.
No string is in L other than those implied by the previous rules.
Under these rules, the string "23+4=555" is in L, but the string "=234=+" is not. This formal language expresses natural numbers, well-formed additions, and well-formed addition equalities, but it expresses only what they look like (their syntax), not what they mean (semantics). For instance, nowhere in these rules is there any indication that "0" means the number zero, "+" means addition, "23+4=555" is false, etc.
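These rules can be checked mechanically. The following Python sketch (an illustration added here, not a reference implementation; the helper names is_number, is_sum and in_L are invented for this example) decides membership in L for the example language above.

<syntaxhighlight lang="python">
def is_number(s):
    """A nonempty digit string that does not start with "0", or the string "0" itself."""
    return s == "0" or (s.isdigit() and not s.startswith("0"))

def is_sum(s):
    """A number, or numbers joined by "+" (no "="), so every "+" separates valid strings of L."""
    return "=" not in s and all(is_number(part) for part in s.split("+"))

def in_L(s):
    """Membership test for the example language over {0..9, +, =}."""
    if "=" in s:
        left, _, right = s.partition("=")
        # exactly one "=", separating two valid strings of L
        return "=" not in right and is_sum(left) and is_sum(right)
    return is_sum(s)

assert in_L("23+4=555")
assert not in_L("=234=+")
</syntaxhighlight>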
Constructions
For finite languages, one can explicitly enumerate all well-formed words. For example, we can describe a language L as just L = {a, b, ab, cba}. The degenerate case of this construction is the empty language, which contains no words at all (L = ∅).
However, even over a finite (non-empty) alphabet such as Σ = {a, b} there are an infinite number of finite-length words that can potentially be expressed: "a", "abb", "ababba", "aaababbbbaab", .... Therefore, formal languages are typically infinite, and describing an infinite formal language is not as simple as writing L = {a, b, ab, cba}. Here are some examples of formal languages:
L = Σ*, the set of all words over Σ;
L = {a}* = {a^n}, where n ranges over the natural numbers and "a^n" means "a" repeated n times (this is the set of words consisting only of the symbol "a");
the set of syntactically correct programs in a given programming language (the syntax of which is usually defined by a context-free grammar);
the set of inputs upon which a certain Turing machine halts; or
the set of maximal strings of alphanumeric ASCII characters on this line, i.e., the set {the, set, of, maximal, strings, alphanumeric, ASCII, characters, on, this, line, i, e}.
Language-specification formalisms
Formal languages are used as tools in multiple disciplines. However, formal language theory rarely concerns itself with particular languages (except as examples), but is mainly concerned with the study of various types of formalisms to describe languages. For instance, a language can be given as
those strings generated by some formal grammar;
those strings described or matched by a particular regular expression;
those strings accepted by some automaton, such as a Turing machine or finite-state automaton;
those strings for which some decision procedure (an algorithm that asks a sequence of related YES/NO questions) produces the answer YES.
Typical questions asked about such formalisms include:
What is their expressive power? (Can formalism X describe every language that formalism Y can describe? Can it describe other languages?)
What is their recognizability? (How difficult is it to decide whether a given word belongs to a language described by formalism X?)
What is their comparability? (How difficult is it to decide whether two languages, one described in formalism X and one in formalism Y, or in X again, are actually the same language?).
Surprisingly often, the answer to these decision problems is "it cannot be done at all", or "it is extremely expensive" (with a characterization of how expensive). Therefore, formal language theory is a major application area of computability theory and complexity theory. Formal languages may be classified in the Chomsky hierarchy based on the expressive power of their generative grammar as well as the complexity of their recognizing automaton. Context-free grammars and regular grammars provide a good compromise between expressivity and ease of parsing, and are widely used in practical applications.
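As a small illustration of the automaton-based style of specification mentioned above, the following Python sketch (the helper make_dfa and the particular language are invented for this example) implements a deterministic finite automaton recognizing the regular language of words over {a, b} containing an even number of "a"s.

<syntaxhighlight lang="python">
def make_dfa(alphabet, transition, start, accepting):
    """Return a recognizer function for the language accepted by the DFA."""
    def accepts(word):
        state = start
        for letter in word:
            if letter not in alphabet:
                return False           # the word is not even over the right alphabet
            state = transition[(state, letter)]
        return state in accepting
    return accepts

# DFA for: words over {a, b} with an even number of 'a's.
even_as = make_dfa(
    alphabet={"a", "b"},
    transition={
        ("even", "a"): "odd",  ("even", "b"): "even",
        ("odd",  "a"): "even", ("odd",  "b"): "odd",
    },
    start="even",
    accepting={"even"},
)

assert even_as("") and even_as("baab") and not even_as("ab")
</syntaxhighlight>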
Operations on languages
Certain operations on languages are common. This includes the standard set operations, such as union, intersection, and complement. Another class of operation is the element-wise application of string operations.
Examples: suppose L1 and L2 are languages over some common alphabet Σ (a short code sketch of these operations on finite languages follows this list).
The concatenation L1L2 consists of all strings of the form vw where v is a string from L1 and w is a string from L2.
The intersection L1 ∩ L2 of L1 and L2 consists of all strings that are contained in both languages.
The complement ¬L1 of L1 with respect to Σ consists of all strings over Σ that are not in L1.
The Kleene star: the language consisting of all words that are concatenations of zero or more words in the original language;
Reversal:
Let ε be the empty word; then ε^R = ε, and
for each non-empty word w = σ1⋯σn (where σ1, …, σn are elements of some alphabet), let w^R = σn⋯σ1;
then for a formal language L, L^R = {w^R : w ∈ L}.
String homomorphism
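For finite languages represented as Python sets of strings, the element-wise operations above are easy to state. The sketch below (added for illustration; the function names and the example homomorphism h are this example's own) shows concatenation, union, intersection, reversal, and a string homomorphism.

<syntaxhighlight lang="python">
def concat(L1, L2):
    """Concatenation: every word vw with v in L1 and w in L2."""
    return {v + w for v in L1 for w in L2}

def reverse(L):
    """Reversal: reverse every word in L (the empty word reverses to itself)."""
    return {w[::-1] for w in L}

def image_under_homomorphism(L, h):
    """String homomorphism: replace every letter of every word according to the map h."""
    return {"".join(h[letter] for letter in word) for word in L}

L1 = {"a", "ab"}
L2 = {"", "b"}

assert concat(L1, L2) == {"a", "ab", "abb"}
assert L1 | L2 == {"a", "ab", "", "b"}     # union
assert L1 & L2 == set()                    # intersection
assert reverse(L1) == {"a", "ba"}
assert image_under_homomorphism({"ab", "b"}, {"a": "0", "b": "10"}) == {"010", "10"}
</syntaxhighlight>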
Such string operations are used to investigate closure properties of classes of languages. A class of languages is closed under a particular operation when the operation, applied to languages in the class, always produces a language in the same class again. For instance, the context-free languages are known to be closed under union, concatenation, and intersection with regular languages, but not closed under intersection or complement. The theory of trios and abstract families of languages studies the most common closure properties of language families in their own right.
{| class="wikitable"
|+ align="top"|Closure properties of language families ( Op where both and are in the language family given by the column). After Hopcroft and Ullman.
|-
! Operation
!
! Regular
! DCFL
! CFL
! IND
! CSL
! recursive
! RE
|-
|Union
|
|
|
|
|
|
|
|
|-
|Intersection
|
|
|
|
|
|
|
|
|-
|Complement
|
|
|
|
|
|
|
|
|-
|Concatenation
|
|
|
|
|
|
|
|
|-
|Kleene star
|
|
|
|
|
|
|
|
|-
|(String) homomorphism
|
|
|
|
|
|
|
|
|-
|ε-free (string) homomorphism
|
|
|
|
|
|
|
|
|-
|Substitution
|
|
|
|
|
|
|
|
|-
|Inverse homomorphism
|
|
|
|
|
|
|
|
|-
|Reverse
|
|
|
|
|
|
|
|
|-
|Intersection with a regular language
|
|
|
|
|
|
|
|
|}
Applications
Programming languages
A compiler usually has two distinct components. A lexical analyzer, sometimes generated by a tool like lex, identifies the tokens of the programming language grammar, e.g. identifiers or keywords, numeric and string literals, punctuation and operator symbols, which are themselves specified by a simpler formal language, usually by means of regular expressions. At the most basic conceptual level, a parser, sometimes generated by a parser generator like yacc, attempts to decide if the source program is syntactically valid, that is if it is well formed with respect to the programming language grammar for which the compiler was built.
Of course, compilers do more than just parse the source code – they usually translate it into some executable format. Because of this, a parser usually outputs more than a yes/no answer, typically an abstract syntax tree. This is used by subsequent stages of the compiler to eventually generate an executable containing machine code that runs directly on the hardware, or some intermediate code that requires a virtual machine to execute.
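As a small illustration of the lexical-analysis stage described above, the following Python sketch (an invented toy token set, not the grammar of any particular language) uses regular expressions to split an arithmetic expression into tokens.

<syntaxhighlight lang="python">
import re

# Each token class of this toy language is described by a regular expression.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(source):
    """Yield (kind, text) pairs; raise on characters outside the toy language's alphabet."""
    pos = 0
    while pos < len(source):
        match = TOKEN_RE.match(source, pos)
        if not match:
            raise SyntaxError(f"unexpected character {source[pos]!r} at position {pos}")
        if match.lastgroup != "SKIP":
            yield match.lastgroup, match.group()
        pos = match.end()

print(list(tokenize("x1 = 2 * (y + 40)")))
# [('IDENT', 'x1'), ('OP', '='), ('NUMBER', '2'), ('OP', '*'), ('LPAREN', '('), ...]
</syntaxhighlight>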
Formal theories, systems, and proofs
In mathematical logic, a formal theory is a set of sentences expressed in a formal language.
A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation rules, which may be interpreted as valid rules of inference, or a set of axioms, or have both. A formal system is used to derive one expression from one or more other expressions. Although a formal language can be identified with its formulas, a formal system cannot be likewise identified by its theorems. Two formal systems FS and FS′ may have all the same theorems and yet differ in some significant proof-theoretic way (a formula A may be a syntactic consequence of a formula B in one but not another, for instance).
A formal proof or derivation is a finite sequence of well-formed formulas (which may be interpreted as sentences, or propositions) each of which is an axiom or follows from the preceding formulas in the sequence by a rule of inference. The last sentence in the sequence is a theorem of a formal system. Formal proofs are useful because their theorems can be interpreted as true propositions.
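To make the notion of a derivation concrete, here is a deliberately tiny Python sketch (an illustrative toy, not a standard proof assistant; the tuple encoding of formulas and the function names are this example's own). It checks a sequence of propositional formulas in which every line is either an axiom or follows from earlier lines by modus ponens.

<syntaxhighlight lang="python">
# A formula is either a string (an atom) or a tuple ("->", antecedent, consequent).

def follows_by_modus_ponens(formula, earlier):
    """True if some earlier lines A and ("->", A, formula) both occur."""
    return any(("->", a, formula) in earlier for a in earlier)

def is_formal_proof(lines, axioms):
    """Check that every line is an axiom or follows from the preceding lines by modus ponens."""
    for i, formula in enumerate(lines):
        earlier = lines[:i]
        if formula not in axioms and not follows_by_modus_ponens(formula, earlier):
            return False
    return True

axioms = {"p", ("->", "p", "q"), ("->", "q", "r")}
proof = ["p", ("->", "p", "q"), "q", ("->", "q", "r"), "r"]   # its last line, "r", is a theorem

assert is_formal_proof(proof, axioms)
assert not is_formal_proof(["q"], axioms)
</syntaxhighlight>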
Interpretations and models
Formal languages are entirely syntactic in nature, but may be given semantics that give meaning to the elements of the language. For instance, in mathematical logic, the set of possible formulas of a particular logic is a formal language, and an interpretation assigns a meaning to each of the formulas—usually, a truth value.
The study of interpretations of formal languages is called formal semantics. In mathematical logic, this is often done in terms of model theory. In model theory, the terms that occur in a formula are interpreted as objects within mathematical structures, and fixed compositional interpretation rules determine how the truth value of the formula can be derived from the interpretation of its terms; a model for a formula is an interpretation of terms such that the formula becomes true.
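For a minimal illustration of an interpretation assigning truth values, the sketch below (added for illustration; it reuses the toy tuple representation from the proof-checking example above and is not a standard library) evaluates a propositional formula under an assignment of truth values to its atoms; a model of the formula is then simply an assignment under which it evaluates to True.

<syntaxhighlight lang="python">
def evaluate(formula, interpretation):
    """Truth value of a propositional formula under an interpretation (atom -> bool)."""
    if isinstance(formula, str):                # an atom gets its value from the interpretation
        return interpretation[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], interpretation)
    if op == "and":
        return evaluate(args[0], interpretation) and evaluate(args[1], interpretation)
    if op == "->":
        return (not evaluate(args[0], interpretation)) or evaluate(args[1], interpretation)
    raise ValueError(f"unknown connective: {op}")

formula = ("->", ("and", "p", "q"), "p")        # (p and q) -> p
assert evaluate(formula, {"p": True,  "q": False})
assert evaluate(formula, {"p": False, "q": True})   # true under every interpretation (a tautology)
</syntaxhighlight>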
See also
Combinatorics on words
Formal method
Free monoid
Grammar framework
Mathematical notation
String (computer science)
Notes
References
Citations
Sources
Works cited
General references
A. G. Hamilton, Logic for Mathematicians, Cambridge University Press, 1978, .
Seymour Ginsburg, Algebraic and automata theoretic properties of formal languages, North-Holland, 1975, .
Michael A. Harrison, Introduction to Formal Language Theory, Addison-Wesley, 1978.
Grzegorz Rozenberg, Arto Salomaa, Handbook of Formal Languages: Volume I-III, Springer, 1997, .
Patrick Suppes, Introduction to Logic, D. Van Nostrand, 1957, .
External links
University of Maryland, Formal Language Definitions
James Power, "Notes on Formal Language Theory and Parsing" , 29 November 2002.
Drafts of some chapters in the "Handbook of Formal Language Theory", Vol. 1–3, G. Rozenberg and A. Salomaa (eds.), Springer Verlag, (1997):
Alexandru Mateescu and Arto Salomaa, "Preface" in Vol.1, pp. v–viii, and "Formal Languages: An Introduction and a Synopsis", Chapter 1 in Vol. 1, pp. 1–39
Sheng Yu, "Regular Languages", Chapter 2 in Vol. 1
Jean-Michel Autebert, Jean Berstel, Luc Boasson, "Context-Free Languages and Push-Down Automata", Chapter 3 in Vol. 1
Christian Choffrut and Juhani Karhumäki, "Combinatorics of Words", Chapter 6 in Vol. 1
Tero Harju and Juhani Karhumäki, "Morphisms", Chapter 7 in Vol. 1, pp. 439–510
Jean-Eric Pin, "Syntactic semigroups", Chapter 10 in Vol. 1, pp. 679–746
M. Crochemore and C. Hancart, "Automata for matching patterns", Chapter 9 in Vol. 2
Dora Giammarresi, Antonio Restivo, "Two-dimensional Languages", Chapter 4 in Vol. 3, pp. 215–267
Theoretical computer science
Combinatorics on words | Formal language | Mathematics | 3,721 |
24,098,129 | https://en.wikipedia.org/wiki/C16H12O4 | {{DISPLAYTITLE:C16H12O4}}
The molecular formula C16H12O4 (molar mass: 268.26 g/mol, exact mass: 268.073559 u) may refer to:
3-Hydroxy-4'-methoxyflavone, a flavonol
Formononetin, an isoflavone
Isoformononetin (4'-hydroxy-7-methoxyisoflavone), an isoflavone
Pratol, a flavone
Techtochrysin, a flavone
Molecular formulas | C16H12O4 | Physics,Chemistry | 130 |
55,677,839 | https://en.wikipedia.org/wiki/Matrix%20variate%20Dirichlet%20distribution | In statistics, the matrix variate Dirichlet distribution is a generalization of the matrix variate beta distribution and of the Dirichlet distribution.
Suppose U_1, …, U_r are p × p positive definite matrices with I_p − (U_1 + ⋯ + U_r) also positive-definite, where I_p is the p × p identity matrix. Then we say that the U_i have a matrix variate Dirichlet distribution, (U_1, …, U_r) ~ D_p(a_1, …, a_r; a_{r+1}), if their joint probability density function is
f(U_1, …, U_r) = [β_p(a_1, …, a_r, a_{r+1})]^{−1} ∏_{i=1}^{r} det(U_i)^{a_i − (p+1)/2} · det(I_p − U_1 − ⋯ − U_r)^{a_{r+1} − (p+1)/2},
where a_i > (p − 1)/2 and β_p(a_1, …, a_{r+1}) is the multivariate beta function.
If we write U_{r+1} = I_p − (U_1 + ⋯ + U_r), then the PDF takes the simpler form
f(U_1, …, U_r) = [β_p(a_1, …, a_{r+1})]^{−1} ∏_{i=1}^{r+1} det(U_i)^{a_i − (p+1)/2},
on the understanding that U_1 + ⋯ + U_{r+1} = I_p.
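For intuition, a brief sketch of the scalar case (not spelled out in the article; it relies on the density reconstructed above, with the same symbols a_i and u_i standing for the now one-dimensional U_i): when p = 1 the matrices are numbers u_i in (0, 1) summing to less than one, the determinants are the numbers themselves, and the density reduces to that of an ordinary Dirichlet distribution,
<math display="block">
f(u_1, \ldots, u_r) = \frac{\Gamma(a_1 + \cdots + a_{r+1})}{\Gamma(a_1)\cdots\Gamma(a_{r+1})}\; u_{r+1}^{\,a_{r+1}-1} \prod_{i=1}^{r} u_i^{\,a_i - 1},
\qquad u_{r+1} = 1 - u_1 - \cdots - u_r .
</math>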
Theorems
Generalization of the chi square–Dirichlet result
Suppose S_1, …, S_{r+1} are independently distributed Wishart p × p positive definite matrices, S_i ~ W_p(n_i, Σ). Then, defining U_i = C^{−1} S_i (C^{−1})^T (where S = S_1 + ⋯ + S_{r+1} is the sum of the matrices and C is any reasonable factorization of S, in the sense that S = C C^T), we have
(U_1, …, U_r) ~ D_p(n_1/2, …, n_r/2; n_{r+1}/2).
Marginal distribution
If , and if , then:
Conditional distribution
Also, with the same notation as above, the density of is given by
where we write .
partitioned distribution
Suppose and suppose that is a partition of (that is, and if ). Then, writing and (with ), we have:
partitions
Suppose . Define
where is and is . Writing the Schur complement we have
and
See also
Inverse Dirichlet distribution
References
A. K. Gupta and D. K. Nagar 1999. "Matrix variate distributions". Chapman and Hall.
Probability distributions | Matrix variate Dirichlet distribution | Mathematics | 253 |
2,717,842 | https://en.wikipedia.org/wiki/Gerolsteiner%20Brunnen | Gerolsteiner Brunnen GmbH & Co. KG (Gerolsteiner) is a leading German mineral water firm with its seat in Gerolstein in the Eifel mountains. The firm is well known for its Gerolsteiner Sprudel brand, a bottled, naturally carbonated mineral water. Gerolsteiner was also the chief sponsor of Team Gerolsteiner a cycling team.
History
On 1 January 1888 the mine manager, Wilhelm Castendyck, founded the firm, Gerolsteiner Sprudel, as a Gesellschaft mit beschränkter Haftung (GmbH) in Gerolstein. Its first well was drilled in the same year. By November, the water from the well had become a sort of 'official' water of the city. It was popular because of its high amount of natural carbonic acid. In 1889, its star-and-lion symbol was trademarked. By 1895, the water was being exported to Australia.
Gerolsteiner Brunnen supplied table water to Buckingham Palace during the reign of Queen Victoria.
The first exports of Gerolsteiner to the United States started in 1890, primarily to Chicago, known for its high concentration of German emigrants. Having been interrupted by World War I, U.S. shipments resumed in 1928.
The Gerolsteiner factory was completely destroyed in a bombing raid during Christmas 1944. The filling machines were repaired first, and by 1948 both the full building and the installation equipment had been rebuilt.
In 1986, Gerolsteiner introduced a brand with a lower amount of carbonic acid to meet changing tastes.
In 1998, the company introduced Germany's first PET reusable deposit carrying mineral water bottle to a chorus of criticism from environmental groups. The use of returnable, deposit-bearing glass bottles for water, beer, and other mainstream drinks has long been normal in Germany and other European countries.
Gerolsteiner Brunnen's majority shareholder is .
See also
Mineral water
Staatl. Fachingen
Apollinaris (water)
Badoit
Evian
Farris
Perrier
Panna
Rosbacher
S.Pellegrino
Ramlösa
Spa
References
111 Jahre Gerolsteiner Brunnen, HG: Gerolsteiner Brunnen GmbH & CO
External links
Official website
Bottled water brands
Drink companies of Germany
German brands
Mineral water
Companies based in Rhineland-Palatinate | Gerolsteiner Brunnen | Chemistry | 487 |