239,454
https://en.wikipedia.org/wiki/Extended%20Display%20Identification%20Data
Extended Display Identification Data (EDID) and Enhanced EDID (E-EDID) are metadata formats for display devices to describe their capabilities to a video source (e.g., a graphics card or set-top box). The data format is defined by a standard published by the Video Electronics Standards Association (VESA). The EDID data structure includes the manufacturer name and serial number, product type, phosphor or filter type (as chromaticity data), timings supported by the display, display size, luminance data and (for digital displays only) pixel mapping data. DisplayID is a VESA standard intended to replace EDID and the E-EDID extensions with a uniform format suited for both PC monitors and consumer electronics devices.

Background
EDID structure (base block) versions range from v1.0 to v1.4; all of these define upwards-compatible 128-byte structures. Version 2.0 defined a new 256-byte structure, but it has been deprecated and replaced by E-EDID, which supports multiple extension blocks. HDMI versions 1.0–1.3c use E-EDID v1.3. Before Display Data Channel (DDC) and EDID were defined, there was no standard way for a graphics card to know what kind of display device it was connected to. Some VGA connectors in personal computers provided a basic form of identification by connecting one, two or three pins to ground, but this coding was not standardized. EDID and DDC solve this problem by enabling the display to send information to the graphics card it is connected to. The transmission of EDID information usually uses the Display Data Channel protocol, specifically DDC2B, which is based on the I²C bus (DDC1 used a different serial format which never gained popularity). The data is transmitted via the cable connecting the display and the graphics card; VGA, DVI, DisplayPort and HDMI are supported. The EDID is typically stored in the monitor in a serial EEPROM (electrically erasable programmable read-only memory) chip and is accessible via the I²C bus at address 0x50. The EDID PROM can often be read by the host PC even if the display itself is turned off. Many software packages can read and display the EDID information, such as read-edid for Linux and DOS, PowerStrip for Microsoft Windows and the X.Org Server for Linux and BSD Unix. Mac OS X natively reads EDID information, and programs such as SwitchResX or DisplayConfigX can display the information as well as use it to define custom resolutions. E-EDID was introduced at the same time as E-DDC; it supports multiple extension blocks and deprecated the EDID version 2.0 structure (which can instead be incorporated into E-EDID as an optional extension block). Data fields for preferred timing, range limits, and monitor name are required in E-EDID. E-EDID also adds support for the dual GTF curve concept and partially changes the encoding of the aspect ratio within the standard timings. With the use of extensions, the E-EDID structure can be extended up to 32 KiB, because E-DDC added the capability to address multiple (up to 128) 256-byte segments.
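The base EDID block can be inspected directly once its 128 bytes have been obtained, for example from the per-connector files that the Linux DRM subsystem exposes under /sys/class/drm/. The sketch below is a minimal illustration rather than a complete parser: it checks the fixed 8-byte header and the block checksum and decodes the packed manufacturer ID; the sysfs path shown is only an example and will differ from system to system.

```python
# Minimal EDID base-block inspection sketch (not a full parser).
# Assumes a 128-byte base block, e.g. read from a Linux DRM sysfs node
# such as /sys/class/drm/card0-HDMI-A-1/edid (path varies per system).

EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def decode_manufacturer(edid: bytes) -> str:
    """Bytes 8-9 pack three letters, 5 bits each ('A' = 1), big-endian."""
    word = (edid[8] << 8) | edid[9]
    return "".join(chr(((word >> shift) & 0x1F) + ord("A") - 1)
                   for shift in (10, 5, 0))

def inspect(edid: bytes) -> None:
    if len(edid) < 128 or edid[:8] != EDID_HEADER:
        raise ValueError("not a valid EDID base block")
    if sum(edid[:128]) % 256 != 0:
        raise ValueError("checksum mismatch (all 128 bytes must sum to 0 mod 256)")
    print("manufacturer:", decode_manufacturer(edid))
    print("EDID version: %d.%d" % (edid[18], edid[19]))
    print("extension blocks:", edid[126])

if __name__ == "__main__":
    # Example path only; adjust to the connector present on your system.
    with open("/sys/class/drm/card0-HDMI-A-1/edid", "rb") as f:
        inspect(f.read())
```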
EDID Extensions assigned by VESA
Timing Extension ()
Additional Timing Data Block (CTA EDID Timing Extension) ()
Video Timing Block Extension (VTB-EXT) ()
EDID 2.0 Extension ()
Display Information Extension (DI-EXT) ()
Localized String Extension (LS-EXT) ()
Microdisplay Interface Extension (MI-EXT) ()
Display ID Extension ()
Display Transfer Characteristics Data Block (DTCDB) (, , )
Block Map ()
Display Device Data Block (DDDB) (): contains information such as subpixel layout
Extension defined by monitor manufacturer (): According to LS-EXT, the actual contents vary from manufacturer to manufacturer; however, this tag value is also used by the DDDB.

Revision history
August 1994, DDC standard version 1 – introduced EDID v1.0.
April 1996, EDID standard version 2 – introduced EDID v1.1.
November 1997, EDID standard version 3 – introduced EDID v1.2 and EDID v2.0.
September 1999, E-EDID Standard Release A – introduced EDID v1.3 and E-EDID v1.0, which supports multiple extension blocks.
February 2000, E-EDID Standard Release A – introduced E-EDID v1.3 (used in HDMI), based on EDID v1.3. EDID v2.0 deprecated.
September 2006, E-EDID Standard Release A – introduced E-EDID v1.4, based on EDID v1.4.

Limitations
Some graphics card drivers have historically coped poorly with the EDID, using only its standard timing descriptors rather than its Detailed Timing Descriptors (DTDs). Even where the DTDs were read, such drivers were often still limited by the standard timing descriptor's requirement that the horizontal and vertical resolutions be evenly divisible by 8. As a result, many graphics cards cannot express the native resolutions of the most common widescreen flat-panel displays and liquid-crystal display TVs. The number of vertical pixels is calculated from the horizontal resolution and the selected aspect ratio, so to be fully expressible the size of a widescreen display must be a multiple of 16×9 pixels. For 1366×768 pixel Wide XGA panels, the nearest resolution expressible in the EDID standard timing descriptor syntax is 1360×765 pixels, typically leading to 3-pixel-thin black bars; specifying 1368 pixels as the screen width would yield an unnatural screen height of 769.5 pixels. Many Wide XGA panels do not advertise their native resolution in the standard timing descriptors, instead offering only a resolution of 1280×768. Some panels advertise a resolution only slightly smaller than the native one, such as 1360×765. For these panels to show a pixel-perfect image, the display driver must either ignore the EDID data or correctly interpret the DTD and be able to resolve resolutions whose size is not divisible by 8. Special programs are available to override the standard timing descriptors from EDID data. Even this is not always possible, as some vendors' graphics drivers (notably those of Intel) require specific registry hacks to implement custom resolutions, which can make it very difficult to use the screen's native resolution.

EDID 1.4 data format
Structure, version 1.4
Detailed Timing Descriptor
When used for another descriptor, the pixel clock and some other bytes are set to 0.
Monitor Descriptors
Currently defined descriptor types are: : Monitor serial number (ASCII text) : Unspecified text (ASCII text) : Monitor range limits. 6- or 13-byte (with additional timing) binary descriptor. : Monitor name (ASCII text), for example "PHL 223V5". : Additional white point data. 2× 5-byte descriptors, padded with 0A 20 20. : Additional standard timing identifiers.
6× 2-byte descriptors, padded with 0A. : Display Color Management (DCM). : CVT 3-Byte Timing Codes. : Additional standard timing 3. : Dummy identifier. : Manufacturer reserved descriptors.
Display Range Limits Descriptor
With GTF secondary curve
With CVT support
Additional white point descriptor
Color management data descriptor
CVT 3-byte timing codes descriptor
Additional standard timings

CTA EDID Timing Extension Block
The CTA EDID Extension was first introduced in EIA/CEA-861.

CTA-861 Standard
The ANSI/CTA-861 industry standard, which according to CTA is now their "Most Popular Standard", has since been updated several times, most notably with the 861-B revision (published in May 2002, which added version 3 of the extension, adding Short Video Descriptors and advanced audio capability/configuration information), 861-D (published in July 2006 and containing updates to the audio segments), 861-E in March 2008, 861-F, which was published on June 4, 2013, 861-H in December 2020, and, most recently, 861-I, which was published in February 2023. Coinciding with the publication of CEA-861-F in 2013, Brian Markwalter, senior vice president, research and standards, stated: "The new edition includes a number of noteworthy enhancements, including support for several new Ultra HD and widescreen video formats and additional colorimetry schemes." Version CTA-861-G, originally published in November 2016, was made available for free in November 2017, along with updated versions -E and -F, after some necessary changes due to a trademark complaint. All CTA standards have been free to everyone since May 2018. The most recent full version is CTA-861-I, published in February 2023, available for free after registration. It combines the previous version, CTA-861-H, from January 2021 with an amendment, CTA-861.6, published in February 2022, and includes a new formula to calculate Video Timing Formats, OVT. Other changes include a new annex to elaborate on the audio speaker room configuration system that was introduced with the 861.2 amendment, and some general clarifications and formatting cleanup. An amendment to CTA-861-I, CTA-861.7, was published in June 2024. It contains updates to CTA 3D Audio, and clarifications on Content Type Indication and on 4:2:0 support for VTDBs and VFDBs. It also introduces a new Product ID Data Block, to replace the Manufacturer PNP ID in the first block of the EDID, since the UEFI is phasing out assigning new PNP IDs.

CTA Extension Block
Version 1 of the extension block (as defined in CEA-861) allowed the specification of video timings only through the use of 18-byte Detailed Timing Descriptors (DTD) (as detailed in EDID 1.3 data format above). DTD timings are listed in order of preference in the CEA EDID Timing Extension. Version 2 (as defined in 861-A) added the capability to designate a number of DTDs as "native" (i.e., matching the resolution of the display) and also included some "basic discovery" functionality for whether the display device contains support for "basic audio", YCBCR pixel formats, and underscan. Version 3 (from the 861-B spec onward) allows two different ways to specify digital video timing formats: as in Versions 1 and 2 by the use of 18-byte DTDs, or by the use of the Short Video Descriptor (SVD) (see below). HDMI 1.0–1.3c uses this version. Version 3 also defines a format for a collection of data blocks, which in turn can contain a number of individual descriptors.
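Before turning to those data blocks, the sketch below illustrates how one of the 18-byte DTDs used by versions 1 and 2 (and still present in version 3) encodes a timing. It assumes the standard DTD byte layout and decodes only the pixel clock and the active/blanking counts from the first eight bytes, so it is not a complete decoder.

```python
# Sketch: decode resolution and pixel clock from an 18-byte EDID
# Detailed Timing Descriptor (only the first 8 bytes are used here).
# In the base block, the first DTD starts at byte 54, e.g. decode_dtd(edid[54:72]).

def decode_dtd(dtd: bytes) -> dict:
    pixel_clock_10khz = dtd[0] | (dtd[1] << 8)   # little-endian, units of 10 kHz
    if pixel_clock_10khz == 0:
        raise ValueError("pixel clock is 0: this is a Monitor Descriptor, not a DTD")
    h_active = dtd[2] | ((dtd[4] & 0xF0) << 4)   # byte 4 carries the upper bits
    h_blank  = dtd[3] | ((dtd[4] & 0x0F) << 8)
    v_active = dtd[5] | ((dtd[7] & 0xF0) << 4)
    v_blank  = dtd[6] | ((dtd[7] & 0x0F) << 8)
    return {
        "pixel_clock_mhz": pixel_clock_10khz / 100,
        "resolution": (h_active, v_active),
        "blanking": (h_blank, v_blank),
    }
```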
The Data Block Collection (DBC) defined by Version 3 initially had four types of Data Blocks (DBs): Video Data Blocks containing the aforementioned Short Video Descriptor (SVD), Audio Data Blocks containing Short Audio Descriptors (SAD), Speaker Allocation Data Blocks containing information about the speaker configuration of the display device, and Vendor Specific Data Blocks which can contain information specific to a given vendor's use. Subsequent versions of CTA-861 defined additional data blocks.

CTA Extension data format
The Data Block Collection contains one or more data blocks detailing video, audio, and speaker placement information about the display. The blocks can be placed in any order, and the initial byte of each block defines both its type and its length. If the Tag code is 7, an Extended Tag Code is present in the first payload byte of the data block, and the second payload byte represents the first payload byte of the extended data block. Once one data block has ended, the next byte is assumed to be the beginning of the next data block. This holds until the byte (designated in byte 2, above) where the DTDs are known to begin.

CTA Data Blocks
As noted, several data blocks are defined by the extension.

Video Data Blocks
A Video Data Block contains one or more 1-byte Short Video Descriptors (SVDs).

EIA/CEA-861 predefined standard resolutions and timings
Notes:
Parentheses indicate instances where pixels are repeated to meet the minimum speed requirements of the interface. For example, in the 720x240p case, the pixels on each line are double-clocked. In the (2880)x480i case, the number of pixels on each line, and thus the number of times that they are repeated, is variable, and is sent to the DTV monitor by the source device. Increased Hactive expressions such as "2x" and "4x" indicate two and four times the reference resolution, respectively.
Video modes with a vertical refresh frequency that is a multiple of 6 Hz (i.e. 24, 30, 60, 120, and 240 Hz) are considered the same timing as equivalent NTSC modes, in which the vertical refresh is adjusted by a factor of 1000/1001. As VESA DMT specifies a 0.5% pixel clock tolerance, which is 5 times more than the required change, pixel clocks can be adjusted to maintain NTSC compatibility; typically, 240p, 480p, and 480i modes are adjusted, while 576p, 576i and HDTV formats are not.
The EIA/CEA-861 and 861-A standards included only numbers 1–7 and numbers 17–22 (only in -A) above (but not as short video descriptors, which were introduced in EIA/CEA-861-B) and are considered primary video format timings. The EIA/CEA-861-B standard has the first 34 short video descriptors above. It is used by HDMI 1.0–1.2a. The EIA/CEA-861-C and -D standards have the first 59 short video descriptors above. EIA/CEA-861-D is used by HDMI 1.3–1.3c. The EIA/CEA-861-E standard has the first 64 short video descriptors above. It is used by HDMI 1.4–1.4b. The CTA-861-F standard has the first 107 short video descriptors above. It is used by HDMI 2.0–2.0b. The CTA-861-G standard has the full list of 154 (1–127, 193–219) short video descriptors above. It is used by HDMI 2.1.

Audio Data Blocks
The Audio Data Blocks contain one or more 3-byte Short Audio Descriptors (SADs). Each SAD details the audio format, channel number, and bitrate/resolution capabilities of the display.
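A minimal sketch of how such a CTA extension block can be walked is shown below. It assumes the layout summarised above (tag code in the top three bits of the header byte, payload length in the low five bits, DTD offset in byte 2) and only prints SVDs and SAD format codes; block tag names and field meanings are simplified, and it is not a complete parser.

```python
# Sketch: walk the Data Block Collection of a 128-byte CTA-861 extension
# block (block tag 0x02). Simplified; real blocks carry more fields.

TAGS = {1: "audio", 2: "video", 3: "vendor-specific",
        4: "speaker allocation", 7: "extended tag"}

def walk_cta_extension(ext: bytes) -> None:
    if ext[0] != 0x02:
        raise ValueError("not a CTA-861 extension block")
    dtd_offset = ext[2]                  # where the 18-byte DTDs begin (0 = none)
    end = dtd_offset if dtd_offset >= 4 else 4
    i = 4                                # data block collection starts at byte 4
    while i < end:
        tag = ext[i] >> 5                # top 3 bits: block type
        length = ext[i] & 0x1F           # low 5 bits: payload length in bytes
        payload = ext[i + 1:i + 1 + length]
        print(TAGS.get(tag, "other/reserved"), "block,", length, "bytes")
        if tag == 2:                     # Video Data Block: one SVD per byte
            for svd in payload:
                native = bool(svd & 0x80)   # native flag applies to low VIC codes
                print("  VIC", svd & 0x7F, "(native)" if native else "")
        elif tag == 1:                   # Audio Data Block: 3-byte SADs
            for j in range(0, length, 3):
                fmt = (payload[j] >> 3) & 0x0F
                channels = (payload[j] & 0x07) + 1
                print("  audio format code", fmt, "- up to", channels, "channels")
        i += 1 + length
```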
Vendor Specific Data Block
A Vendor Specific Data Block (if any) contains as its first three bytes the vendor's IEEE 24-bit registration number, least significant byte first. The remainder of the Vendor Specific Data Block is the "data payload", which can be anything the vendor considers worthy of inclusion in this EDID extension block. For example, the registration number of "HDMI Licensing, LLC" marks a data block containing HDMI 1.4 information, that of the "HDMI Forum" marks a block containing HDMI 2.0 information, that of "DOLBY LABORATORIES, INC." marks a block containing Dolby Vision information, and that of "HDR10+ Technologies, LLC" marks a block containing HDR10+ information (as part of the HDMI 2.1 Amendment A1 standard). In the HDMI Licensing, LLC block, the data payload starts with a two-byte source physical address, least significant byte first; the source physical address provides the CEC physical address for upstream CEC devices. HDMI 1.3a specifies some requirements for the data payload.

Speaker Allocation Data Block
If a Speaker Allocation Data Block is present, it will consist of three bytes. The first and second bytes contain information about which speakers (or speaker pairs) are present in the display device. Some speaker flags have been deprecated in the SADB, but are still available in the RCDB's SPM. These speakers could not be indicated with a CA value in the Audio InfoFrame, and can only be used with Delivery According to the Speaker Mask, which corresponds to the RCDB only.

Room Configuration Data Block
The Room Configuration Data Block and Speaker Location Data Block describe the speaker setup using room coordinates.

References External links edid-decode utility edidreader.com – Web Based EDID Parser Edid Repository – EDID files repository Display Industry Standards Archive UEFI Forum PNP IDs Display technology VESA
Extended Display Identification Data
Engineering
3,607
56,351,395
https://en.wikipedia.org/wiki/Regulation%20of%20pesticides%20in%20the%20European%20Union
A pesticide, also called Plant Protection Product (PPP), which is a term used in regulatory documents, consists of several different components. The active ingredient in a pesticide is called “active substance” and these active substances either consist of chemicals or micro-organisms. The aims of these active substances are to specifically take action against organisms that are harmful to plants (Art. 2(2), Regulation (EC) No 1107/2009). In other words, active substances are the active components against pests and plant diseases. In the Regulation (EC) No 1107/2009, a pesticide is defined based on how it is used. Thus, pesticides have to fulfill certain criteria in order to be called pesticides. Among others, the criteria include that they either protect plants against harmful organisms - by killing or in other ways preventing the organism from performing harm, that they enhance the natural ability of plants to defend themselves against these harmful organisms, or that they kill off competing plants such as weeds. Within the European Union a 2-tiered approach is used for the approval and authorisation of pesticides. Firstly, before an actual pesticide can be developed and put on the European market, the active substance of the pesticide needs to be approved for the European Union. Only after approval of an active substance, a procedure of approval of the Plant Protection Product (PPP) can begin in the individual Member States. In case of approval, there is a monitoring programme to make sure the pesticide residues in food are below the limits set by the European Food Safety Authority (EFSA). The use of PPPs (i.e. pesticides) in the European Union (EU) is regulated by the Regulation No 1107/2009 on Plant Protection Products in cooperation with other EU Regulations and Directives (e.g. the regulation on maximum residue levels in food (MRL); Regulation (EC) No 396/2005, and the Directive on sustainable use of pesticides; Directive 2009/128/EC). These regulatory documents are set to ensure safe use of pesticides in the EU regarding human health and environmental sustainability. The responsible authorities within the EU working with pesticide regulation are the European Commission, European Food Safety Authority (EFSA), European Chemical Agency (ECHA); working in cooperation with the EU Member States. Additionally, important stakeholders are the chemical producing companies, which develop PPPs and active substances that are to be evaluated by the regulatory authorities mentioned above. Conservative Agriculture Spokesman Anthea McIntyre MEP and colleague Daniel Dalton MEP were appointed to the European Parliament's special committee on pesticides on 16 March 2018. Sitting for nine months, the committee will examine the scientific evaluation of glyphosate, the world's most commonly used weed killer which was relicensed for five years by the EU in December after months of uncertainty. They will also consider wider issues around the authorisation of pesticides. Procedure of active substance approval In the EU, there is a detailed procedure (Regulation (EC) No 1107/2009) to evaluate whether an active substance is regarded as safe for human health and the environment. The procedure of approving new substances follows the steps listed below. 
Submission of the application and dossier The first step requires that an applicant (a company or association of producers) should submit a dossier to a Member State (called the Rapporteur Member State) in order to ask for the permission before putting an active substance on the market. The application must contain supporting scientific data and studies (i.e. toxicological and ecotoxicological relevance of metabolites, acceptable operator exposure level (AOEL), acceptable daily intake (ADI), genotoxicity testing etc. (Art. 4 and Annex II of Regulation (EC) No 1107/2009.) and Regulation (EC) No 283/2013) Evaluation by the rapporteur member state The Rapporteur Member State evaluates the application and shall within 45 days communicate (Art. 9(1) Regulation (EC) No 1107/2009) to the applicant that submitted the dossier. Furthermore, they will check whether the dossier is complete. If elements are missing, the applicant has 3 months to complete the dossier, otherwise the application is not considered admissible. If the dossier is considered admissible, the Rapporteur Member State will notify the applicant and the competent authorities (other Member States, EFSA and the European Commission) and start evaluating the active substance. The applicant will then send the dossier to the three mentioned authorities. Moreover, EFSA will create a summary of the dossier and make it available for the public. Draft Assessment Report from Rapporteur Member State Within 12 months after the notification of admissibility, the Rapporteur Member State produces a Draft Assessment Report. This report aims to check if the active substance satisfies the criteria for approval listed in the Regulation. This report is submitted to the European Commission and EFSA. If additional information is needed, the Rapporteur Member State will set a period of maximum 6 months for the submission of the revised application. In addition, the European Commission and EFSA shall be informed (Art. 11 Regulation (EC) No 1107/2009). Peer review by European Food Safety Authority and conclusion European Food Safety Authority’s Pesticides Unit is responsible for the peer reviewing of the risk assessments on active substances. The EFSA is required to provide a conclusion on whether the active substance satisfies the criteria for approval. When the Draft Assessment Report from the Rapporteur Member State has been received by EFSA, the report will be shared among other Member States and the applicant within 30 days after it has been received, and it shall be made available for the public (Art. 12 Regulation (EC) No 1107/2009). The applicant, the Member State and the public have 60 days to provide comments. After that, EFSA has 120 days to submit a conclusion and forward it to the applicant, the Member States, and the European Commission. The EFSA will also make the conclusion available to the public (Art. 12 Regulation (EC) No 1107/2009). After the conclusion of EFSA, the European Commission presents a review report to the Standing Committee for Food Chain and Animal Health. This standing committee votes on approval or non-approval of the active substance. Publication Based on the review report, a Regulation will be adopted according to the final decision (i.e. whether the substance is approved, not approved, or the application should be modified). All approved active substances are included in the Official Journal of the European Union, which contains the list of active substances that have been already approved (Annex I of Directive 98/8/EC). 
The European Commission subsequently has the task of managing and updating the list of approved active substances, which is available online to the public. Renewing the approval of active substances Active substances may be approved for a maximum of 15 years. This approval period is proportional to the risks posed by the use of these substances. However, when an active substance is considered necessary by the European Commission, it can be approved for a maximum of 5 years, even if not all approval criteria of Regulation (EC) No 1107/2009 are met. At renewal, new knowledge regarding the active substance is taken into consideration. Procedure of Plant Protection Product approval The procedure of applying for authorisation of a PPP begins with the applicant who wishes to produce a PPP. Authorisation for the product must be sought from every Member State in which the applicant wants to sell the product. The procedure and requirements for authorising a PPP are explained below. Requirements and content The authorisation of a PPP, its use and its placing on the market are handled by the Member States. For that, a PPP has to meet specific requirements: scientific and technical knowledge of its active substances, synergists, safeners and co-formulants; scientific knowledge regarding toxicological, ecotoxicological and environmental aspects (for instance, the ecotoxicological data required include, among other things, acute toxicity to fish and aquatic invertebrates, effects on aquatic algae and macrophytes, and studies on earthworms and other terrestrial species; Art. 29(1) and (2) of Regulation (EC) No 1107/2009 and Regulation (EC) No 284/2013); and technical knowledge, including production, use, storage and residue handling. Moreover, the authorisation has to define where and on what the PPP may be used. This includes, among other things, non-agricultural areas, plant products or plants, and their purpose. Other information that can be included covers the maximum dosage per hectare in each individual application, the period of time between the most recent application and harvest, and the maximum number of applications each year. The authorisation of a PPP shall not exceed one year, counting from the expiry date of the approval for the active substances, synergists and safeners contained in the PPP. Re-evaluation of similar PPPs containing candidates for substitution may be granted for comparative assessment. The authorisation procedure of a Plant Protection Product (pesticide) The application for authorisation itself contains many parts, and it should first and foremost clearly state where and how the PPP is to be applied. Secondly, the applicants themselves should specify which Member State they wish to carry out the evaluation of the PPP. If the PPP has previously been evaluated in another Member State, a copy of the conclusions from that evaluation should be attached. Moreover, the application should be accompanied by several dossiers containing, among other things, ecotoxicological data (see section “Requirements and content” above); one dossier is required for the PPP itself, and one for each active ingredient in the PPP. The applicant should also provide a draft of the product label clearly showing the hazard labels necessary for the specific product. An application should include several other elements; these are described more thoroughly in Art. 33-35 of Regulation (EC) No 1107/2009.
The Member State assessing the PPP needs to perform an objective evaluation and allow other Member States to express their opinions. The evaluation results in an authorisation or a rejection of the PPP. This assessment takes many things into consideration. Among other things, the Member State specifically looks at all the ingredients in the PPP and assesses whether or not they are approved for this type of use. It further looks at whether the risks associated with the PPP can be limited without compromising the function of the product. If a PPP is given authorisation, it often carries certain restrictions regarding distribution and use, as mentioned in “Requirements and content” above, in order to protect human health and the environment. If the PPP is shown to pose an unacceptable risk to humans or nature, it is not authorised. Whatever the Member State decides, it has to justify the outcome of the evaluation in a document and provide it to both the applicant seeking authorisation and the European Commission (Art. 36-38 of Regulation (EC) No 1107/2009). Mutual recognition A company or organization in possession of a valid authorisation for a PPP can apply for mutual recognition and obtain approval for such products with the same use(s) under similar agricultural conditions. Requirements, contents and procedures for the recognition are stated in Articles 40-42 of Regulation (EU) 1107/2009. Mutual recognition can only be applied for if there is an existing authorisation for the PPP in another Member State. Applications can be made through the Plant Protection Products Application Management System for products that have been authorised via the system. Some parts of the application procedure are managed and handled outside of the Plant Protection Products Application Management System by manual or electronic processes in the Member States. The Plant Protection Products Application Management System is an online tool intended to enable industry users to create applications for PPPs and submit these to Member States for evaluation and authorisation. The objectives of the system are to help harmonise the formal requirements for applications among Member States, to facilitate mutual recognition of authorisations between Member States in order to speed up time to market, and to improve the management of the evaluation of the authorisation process as well as to provide correct information to stakeholders on time. Renewal, withdrawal and amendment After an active substance has been re-approved for use, all PPPs containing this active substance also have to be re-approved within three months. If the applicants do not hand in a re-application, their authorisation for the product will expire in accordance with Art. 32. If expired, the PPP is allowed to stay on the market for sale for up to six months, and may be stored and disposed of for up to a year. To re-apply for authorisation the applicant has to provide a Renewal Assessment Report, which shall contain any newly submitted data supporting the re-approval, as well as the original data if still relevant. The Member States also conduct this evaluation, but in the future this will be done through the Plant Protection Products Application Management System. Any holder of an authorisation can choose to withdraw or amend its application at any time, though the reason should be stated. If there are acute concerns for human, animal and/or environmental health, the PPPs should be immediately withdrawn from the market.
A withdrawal can also be made on the basis of false or misleading information, and/or of improvements in scientific and technological knowledge. All information about renewal, amendment and withdrawal can be found in Art. 43-46 of Regulation 1107/2009. Special cases When all active substances in a product are considered low-risk active substances, the PPP will be approved as a low-risk PPP, unless specific risk mitigation measures are required. The applicant for a PPP must demonstrate that all criteria for a low-risk PPP are met (Art. 47 of Regulation (EC) No 1107/2009). Among other things, the criteria for low-risk active substances include not being classified as mutagenic, carcinogenic, toxic to reproduction, very toxic or toxic. Furthermore, they must not be persistent, must not be deemed endocrine disruptors, and must not have neurotoxic or immunotoxic properties (Art. 47 and Annex II of Regulation (EC) No 1107/2009). Applicants are encouraged to make use of this special case through a prolonged approval duration of 15 years and the possibility of a fast-track authorisation in 120 days instead of one year, to facilitate the placement of such PPPs on the market. Plant Protection Products comprising a genetically modified organism will also be examined in accordance with Directive 2001/18/EC on the deliberate release of genetically modified organisms into the environment. An authorisation will only be granted when a written consent referring to this Directive is approved (Art. 48 of Regulation (EC) No 1107/2009). The use and placing on the market of PPP-treated seeds are regulated through Art. 49 of Regulation (EC) No 1107/2009 and will not be prohibited when authorisation has been granted by at least one other Member State. However, when there is substantial concern that PPPs from treated seeds pose a serious risk to animals, humans or the environment, and no adequate mitigation measures are available, measures restricting or prohibiting the use of the respective PPP will be taken immediately. For PPPs containing candidates for substitution, a comparative assessment will be conducted. An authorisation will not be granted where the assessment of risks and benefits concludes that – among other things – a substitution of the PPP would be significantly safer for the environment, humans and animals, and not economically or practically disadvantageous (Art. 50 of Regulation (EC) No 1107/2009). An applicant may ask for an extension to minor uses of PPPs (Art. 51 of Regulation (EC) No 1107/2009). This procedure is a simplified route to authorisation of a PPP. Lists of what counts as a minor use in a specific Member State are provided by the European Minor Uses Database (EUMUDA). A PPP already approved in one Member State may be permitted for parallel trade, that is, its introduction, placement on the market, and use in another Member State. Following Art. 52 of Regulation (EC) No 1107/2009, the application will be authorised provided the applicant demonstrates that the PPP meets the requirements to be identical to the already authorised one. Derogation By way of derogation from Article 28, which states that a PPP may not be marketed or used in a Member State without authorisation, such a PPP can be used under limited and controlled conditions where it appears necessary. A Member State authorising such a product will inform the other Member States, with detailed information on what led to the decision. Such use may, for example, be for purposes of research and development (Art. 53, 54 Regulation (EC) No 1107/2009).
Use and information To ensure that PPPs are handled properly, a considerable amount of information is to be provided by the holder of an authorisation for such a product (Art. 56 Regulation (EC) No 1107/2009). New information on potential harmful effects on human or animal health concerning the PPP itself, its active substances, any associated metabolites, safeners or co-formulants have to be reported immediately to the Member State(s) that granted its authorisation. In such a case, it is up to the first Member State in a zone that granted the product's authorisation to evaluate and assess this information and come to a decision whether the product should be withdrawn or its conditions for use should be amended. The same Member State is also responsible for communicating this information to other Member States that might be concerned. Information on PPPs authorised for use or that have been withdrawn shall also be available to the public in electronic form and updated every three months. This information shall at the very least include the business name of the holder of the product's authorisation, the trade name of the product, its type of preparation, its composition, its authorised uses (including minor uses) and its safety classifications. As well as this, information on withdrawal of a product's authorisation should be provided to the public if it is related to safety concerns. Monitoring of Plant Protection Products Monitoring of pesticide residues in food products In order to protect human health and the environment, monitoring of PPP residues in food is a crucial step. With this process the EU can check the prediction of the safe use of respective PPPs. In September 2008, the European Union issued new and revised Maximum Residue Limits (MRLs) in plants for the roughly 1,100 pesticides ever used in the world. The revision was intended to simplify the previous system, under which certain pesticide residues were regulated by the Commission, others were regulated by Member States, and others were not regulated at all. How Maximum Residue Levels are monitored The monitoring of the determined Maximum Residue Levels (MRLs) of pesticides in food is the duty of the responsible authorities of the Member States. In addition to the national monitoring programmes, all reporting countries are requested to monitor and analyse food products and processed cereal-based baby food according to the Regulation ((EU) No 400/2014) for the European monitoring programme. Annually, the EFSA is modelling and assessing the risk of residues of pesticides in food. In this process, short-term (acute) exposure and long-term (chronic) exposure scenarios are analysed. A risk assessment based on a short-term exposure includes mainly the comparison of the estimated uptake and/or exposure of pesticide residues via food in a short time period (one meal or within 24-hours). The chronic risk assessment is the estimated uptake and/or exposure levels of pesticide residues via food for a long-term period (predicted lifetime of a human). The evaluated data from the calculation models are compared to the experimental data (ecotoxicological reference data) for acute and chronic toxicity, to establish a safe level for human health. There is not a high probability of health risk for consumers if the modelled values are equal or lower than the reference data. The modelling starts with a conservative approach (e.g. 
consumers do not wash and/or cook the products), which may result in an overestimation of the actual exposure to the respective pesticide. Results of recent (2015) Maximum Residue Level monitoring Recent results of the European monitoring programme were presented by EFSA (The 2015 European Union report on pesticide residues in food). The data were collected from the responsible authorities of the Member States and subsequently assessed and analysed by EFSA. When looking at the pesticide residue levels, EFSA compared the analysed samples with the maximum residue level values previously established by the European Commission. The samples were taken from 11 different food products (aubergines, bananas, broccoli, virgin olive oil, orange juice, peas without pods, sweet peppers, table grapes, wheat, butter, and eggs). The samples were collected from food products produced both within and outside the EU, and the analysed pesticides included compounds that are banned in the EU (e.g. dichlorvos). In 97.2% of the samples, residue levels did not exceed the determined Maximum Residue Limit: no detectable residues were found in 53.3% of the samples, while 43.9% contained residues that did not exceed the MRL, and 2.8% contained residues that exceeded the MRL. The countries reporting data to EFSA analysed a total of 84,341 samples for 774 different pesticides, with variation between countries and test sites. EFSA concluded that there was a negligible risk from short-term (acute) exposure to pesticide residues in food, and overall a low risk of human health consequences. The long-term (chronic) exposure assessment showed that the estimated exposure did not exceed the acceptable daily intake values (ADI) for any of the tested pesticides except for dichlorvos, which is not allowed to be used as a pesticide in the EU. Monitoring of pesticides in the environment The concentrations in surface waters of pesticides, alongside other chemical substances that pose a significant risk to the environment or to human health, are limited in the European Union by Environmental Quality Standards. These are defined in the Directive on Environmental Quality Standards in the Field of Water Policy. This Directive covers a total of 45 priority substances as defined by the Water Framework Directive and 8 additional pollutants. Environmental Quality Standards set limits on the average annual concentration as well as on the maximum allowable concentration for short-term exposure, and differentiate between inland surface waters (rivers and lakes) and other surface waters (transitional, coastal, and territorial waters). It is the responsibility of the Member States to establish monitoring programmes for Environmental Quality Standards and to incorporate these into river basin management plans. The Directive on Environmental Quality Standards was amended by Directive 2013/39/EU to establish a watch list of up to 10 substances for monitoring and future prioritisation, which includes three pharmaceutical substances (diclofenac, 17-beta-estradiol (E2) and 17-alpha-ethinylestradiol (EE2)). Pesticides included in the Directive on Environmental Quality Standards are to a large extent banned and not used in Europe. Reported data are sparse due to the geographic extent and the large quantity of different chemical substances used as pesticides in agriculture. However, there is a variety of pesticide monitoring programmes within the European Union, independent of EU regulatory frameworks.
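The dietary risk assessments described above amount to comparing an estimated intake of a pesticide with a reference value such as the ADI. A minimal illustration of that comparison is sketched below; all food items, quantities, residue levels and the ADI value are hypothetical and are not taken from the EFSA report.

```python
# Illustrative chronic dietary exposure check: compare an estimated
# daily intake of a pesticide with its Acceptable Daily Intake (ADI).
# All numbers below are hypothetical and for illustration only.

ADI_MG_PER_KG_BW = 0.01          # hypothetical ADI, mg per kg body weight per day
BODY_WEIGHT_KG = 70.0            # assumed adult body weight

# (daily consumption in kg, measured residue in mg/kg) per food item
diet = {
    "apples":  (0.150, 0.05),
    "wheat":   (0.250, 0.02),
    "lettuce": (0.050, 0.10),
}

intake_mg = sum(consumption * residue for consumption, residue in diet.values())
exposure = intake_mg / BODY_WEIGHT_KG        # mg per kg body weight per day

print(f"estimated exposure: {exposure:.5f} mg/kg bw/day")
print(f"ADI utilisation:    {100 * exposure / ADI_MG_PER_KG_BW:.1f} %")
```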
A 2014 review of organic chemicals (including pesticides) in surface waters in Europe concluded that there was concern for acute lethal effects in 14% and for chronic long-term effects in 42% of the 4,000 monitoring sites. References Further reading SAPEA. (2018) Improving Authorisation Processes for Plant Protection Products in Europe: A scientific perspective on the assessment of potential risks to human health. doi:10.26356/plantprotectionproducts Big sales, no carrots: Assessment of pesticide policy in Spain. Pablo Alonso González, Eva Parga-Dans & Octavio Pérez Luzardo. Crop Protection, 105428. https://doi.org/10.1016/j.cropro.2020.105428 European Union and the environment Pesticides by region European Union regulations Pesticide regulation
Regulation of pesticides in the European Union
Chemistry
5,030
47,769,600
https://en.wikipedia.org/wiki/Phellodon%20niger
Phellodon niger, commonly known as the black tooth, is a species of tooth fungus in the family Bankeraceae, and the type species of the genus Phellodon. It was originally described by Elias Magnus Fries in 1815 as a species of Hydnum. Petter Karsten included it as one of the original three species when he circumscribed Phellodon in 1881. The fungus is found in Europe and North America, although molecular studies suggest that the North American populations represent a similar but genetically distinct species. Taxonomy Phellodon niger was originally described by Swedish mycologist Elias Fries in 1815 as a species of Hydnum. The genus Phellodon was circumscribed in 1881 by Finnish mycologist Petter Karsten to contain white-toothed fungi. Karsten included three species: P. cyathiformis, P. melaleucus, and the type, P. niger (originally published with the epithet "nigrum"). The variety Phellodon niger var. alboniger, published by Kenneth Harrison in 1961, is considered synonymous with Phellodon melaleucus. Lucien Quélet's 1886 Calodon niger is a synonym of Phellodon niger. Taxonomic synonyms (i.e., based on a different type) include: Hydnum olidum (Berkeley, 1877); Hydnum cuneatum (Lloyd 1925); and Hydnum confluens (Peck 1874). The DNA sequences of the internal transcribed spacer regions of collections from the United Kingdom were compared with collections made in the Southern United States. They showed a 92–93% similarity, suggesting that the North American populations are a different species with very similar morphological characteristics. Phellodon niger is commonly known as the "black scented spine fungus", and the "black tooth". Description Fruitbodies of Phellodon niger have a cap and a stipe, and so fall into the general class of "stipitate hydnoid fungi". Individual caps are up to in diameter, but caps of neighboring fruitbodies often fuse together to create larger compound growths. Caps are flat to depressed to somewhat funnel-shaped, with a felt-like texture at first before developing concentric pits, wrinkles, and ridges. Initially whitish (sometimes with purplish tints), the cap later darkens in the center to grey, grey-brown, or black. The stipe, measuring up to long, is roughly the same color as the cap. On the underside of the caps are grey spines, up to 4 mm long. The outer covering of the stipe is a thick felty layer of mycelium that absorbs water like a sponge. In conditions of high humidity, P. niger can form striking drops of black liquid on the actively growing caps. The flesh has an odor of fenugreek when it is dry. The mushroom tissue turns bluish-green when tested with a solution of potassium hydroxide. The ellipsoid, hyaline (translucent) spores measure 3.5–5 by 3–4 μm. The basidia (spore-bearing cells) are club-shaped, four-spored, and measure 25–40 by 5–7 μm. Phellodon niger has a monomitic hyphal system, producing generative hyphae with a diameter of 2.5–5 μm. This fungus is considered inedible. Habitat and distribution The ectomycorrhizae that P. niger forms with Norway spruce (Picea abies) has been comprehensively described. It is distinguished from the ectomycorrhizae of other Thelephorales species by the unique shape of its chlamydospores. Stable isotope ratio analysis of the abundance of the stable isotope carbon-13 shows that P. niger has a metabolic signature close to that of saprotrophic fungi, indicating that it may be able to obtain carbon from sources other than a tree host. 
Phellodon niger is found in continental Europe, where it has a widespread distribution, and in North America. In a preliminary assessment for a red list of threatened British fungi, P. niger is considered rare. In Switzerland, it is considered a vulnerable species. Phellodon niger was included in a Scottish study to develop species-specific PCR primers that can be used to detect the mycelia of stipitate hydnoids in soil. Collections labelled as P. niger from the United Kingdom that were DNA tested, revealed additional cryptic species. Analysis using PCR can determine the presence of a Phellodon species up to four years after the appearance of fruitbodies, allowing a more accurate determination of their possible decline and threat of extinction. Chemistry Phellodon niger has been a source for several bioactive compounds: the cyathane-type diterpenoids, nigernin A and B; a terphenyl derivative called phellodonin (2',3'-diacetoxy-3,4,5',6',4''-pentahydroxy-p-terphenyl); grifolin; and 4-O-methylgrifolic acid. Additional nigernins (C through F) were reported in 2011. Fruitbodies are used to make a gray-blue or green dye. References External links Fungi described in 1815 Fungi of Europe Fungi of North America Inedible fungi niger Taxa named by Elias Magnus Fries Fungus species
Phellodon niger
Biology
1,136
4,876,423
https://en.wikipedia.org/wiki/Ultrafast%20molecular%20process
An ultrafast molecular process is any technology that relies on properties of molecules that are only extant for a very short period of time (less than 1e-9 seconds). Such processes are very important in areas such as combustion chemistry and in the study of proteins. References Ultrafast molecular processes from Sandia National Laboratories Chemical reactions
Ultrafast molecular process
Chemistry
67
9,888,705
https://en.wikipedia.org/wiki/Uniform%20Resource%20Characteristic
In IETF specifications, a Uniform Resource Characteristic (URC) is a string of characters representing the metadata of a Uniform Resource Identifier (URI), a string identifying a Web resource. URC metadata was envisioned to include sufficient information to support persistent identifiers, such as mapping a Uniform Resource Name (URN) to a current Uniform Resource Locator (URL). URCs were proposed as a specification in the mid-1990s, but were never adopted. The use of a URC would allow the location of a Web resource to be obtained from its standard name, via the use of a resolving service. It was also to be possible to obtain a URC from a URN by the use of a resolving service. The design goals of URCs were that they should be simple to use, easy to extend, and compatible with a wide range of technological systems. The URC syntax was intended to be easily understood by both humans and software. History The term "URC" was first coined as Uniform Resource Citation in 1992 by John Kunze within the IETF URI working group as a small package of metadata elements (which became the ERC) to accompany a hypertext link and meant to help users decide if the link might be interesting. The working group later changed the acronym expansion to Uniform Resource Characteristic, intended to provide a standardized representation of document properties, such as owner, encoding, access restrictions or cost. The group discussed URCs around 1994/1995, but it never produced a final standard and URCs were never widely adopted in practice. Even so, the concepts on which URCs were based influenced subsequent technologies such as the Dublin Core and Resource Description Framework. References External links IETF URC working group charter History of the Internet Technical specifications URI schemes
Uniform Resource Characteristic
Technology
363
49,926,944
https://en.wikipedia.org/wiki/FCMR
Fc fragment of IgM receptor is a protein that in humans is encoded by the FCMR gene. Function Fc receptors specifically bind to the Fc region of immunoglobulins (Igs) to mediate the unique functions of each Ig class. FAIM3 encodes an Fc receptor for IgM (see MIM 147020) (Kubagawa et al., 2009 [PubMed 19858324]; Shima et al., 2010 [PubMed 20042454]). References Further reading Human proteins
FCMR
Chemistry
113
15,898,570
https://en.wikipedia.org/wiki/19%20Cephei
19 Cephei is a supergiant star in the northern circumpolar constellation of Cepheus. It has a spectral class of O9 and is a member of Cep OB2, an OB association of massive stars located about from the Sun. The spectrum of 19 Cephei shows line profile variability on an hourly and daily timescale. This is thought to be due to the changes in the stellar wind. Double star catalogues list several companions for 19 Cephei. The Washington Double Star Catalog describes four companions: 11th magnitude stars 20" and 56" away, and two 15th magnitude stars 4-5" away. The Catalog of Components of Double and Multiple Stars gives only the two 11th magnitude stars. A scattered cluster of faint stars has been detected associated with 19 Cephei. The brightest likely members apart from 19 Cep itself are 10th magnitude stars. References External links Cepheus (constellation) Cephei, 19 109017 O-type supergiants 209975 08428 BD+61 2246
19 Cephei
Astronomy
213
26,107,685
https://en.wikipedia.org/wiki/Spongiforma%20thailandica
Spongiforma thailandica is a species of fungus in the family Boletaceae, genus Spongiforma. The stemless sponge-like species, first described in 2009, was found in Khao Yai National Park in central Thailand, where it grows in soil in old-growth forests. The rubbery fruit body, which has a strong odor of coal-tar similar to Tricholoma sulphureum, consists of numerous internal cavities lined with spore-producing tissue. Phylogenetic analysis suggests the species is closely related to the Boletaceae genera Porphyrellus and Strobilomyces. Taxonomy and phylogeny The species was first described scientifically in 2009 by E. Horak, T. Flegel and D.E. Desjardin, based on specimens collected in July 2002 in Khao Yai National Park, central Thailand, and roughly three years later in the same location. Prior to this, the species had been mentioned in a 2001 Thai publication as an unidentified species of Hymenogaster. Phylogenetic analysis of ribosomal DNA sequences shows that Spongiforma is sister (sharing a common ancestor) to the genus Porphyrellus. The next most closely related genus is Strobilomyces. All three genera are members of the family Boletaceae, and in the Boletineae, one of several lineages of Boletales recognized taxonomically at the level of suborder. The genus name Spongiforma refers to the sponge-like nature of the fruit body, while the specific epithet thailandica denotes the country in which the species is found. Description The fruit body of Spongiforma thailandica is relatively large, up to in diameter by tall, and pale brownish gray to brown or reddish brown. It is sponge-like and rubbery—if water is squeezed out it will assume its original shape. The surface has irregular, relatively large cavities (locules), in diameter, lined with fertile (spore-producing) tissue. The mushrooms do not have a stem, but rather a columella—a small internal structure at the base of the fruit body, resembling a column, extending up into the fruit body. The columella has dimensions of 10–15 mm tall by 8–10 mm diameter (at the apex) by 3–4 mm (at the base), and it is attached to copious, fine white rhizomorphs. Fruit bodies have a strong odor of coal tar or burnt rubber (likened to Tricholoma sulphureum). The mushroom tissue turns purple when a drop of 3–10% potassium hydroxide is applied. In mass, the spores appear to be brown to reddish-brown in color. Viewed with a microscope, they are amygdaliform (almond-shaped), and typically measure 10–11.5 by 5.5–7 μm. The basidia (spore-bearing cells) are cylindrical to roughly club-shaped, four-spored, with dimensions of 25–32 by 6.5–9.5 μm. They have straight sterigmata (slender extensions that attach to the spores) up to 9.5 μm long. The cystidia (large, sterile cells in the hymenium) are cylindrical to roughly club-shaped, thin-walled, and measure 25–48 by 5–10 μm. They are inamyloid, meaning they will not absorb iodine when stained with Melzer's reagent. The cystidia are plentiful on the edges of the locules, and occasional among the basidia. The hymenophore is made of interwoven branched hyphae that are arranged in a roughly parallel fashion. These thin-walled cylindrical hyphae have inflated septa (intracellular partitions), and are gelatinous, hyaline (translucent) and inamyloid. The subhymenium (the tissue layer immediately under the hymenium) is made of inflated hyphae that are hyaline, inamyloid, thin-walled, and non-gelatinous, measuring 9–20 by 9–14 μm. 
The fruit bodies vaguely resemble those of the species Gymnopaxillus nudus, found in Australia growing in association with Eucalyptus. However, Gymnopaxillus fruit bodies grow underground, lack a strong odor, do not stain purple with potassium hydroxide, and have longer spores, typically 11–16 μm. Habitat and distribution Spongiforma thailandica was found growing on the ground in an old growth forest in Khao Yai National Park (Nakhon Nayok Province, Thailand), at an elevation of about . The fungus is thought to grow in a mycorrhizal association with Shorea henryana and Dipterocarpus gracilis. References External links Boletaceae Fungi of Asia Fungi described in 2009 Fungus species
Spongiforma thailandica
Biology
1,022
67,785,300
https://en.wikipedia.org/wiki/Mucoromycota
Mucoromycota is a division within the kingdom Fungi. It includes a diverse group of molds, including the common bread molds Mucor and Rhizopus. It is a sister phylum to Dikarya. Informally known as zygomycetes I, Mucoromycota includes Mucoromycotina, Mortierellomycotina, and Glomeromycotina, and consists mainly of mycorrhizal fungi, root endophytes, and plant decomposers. Mucoromycotina and Glomeromycotina can form mycorrhiza-like relationships with nonvascular plants. Mucoromycota contain multiple mycorrhizal lineages, root endophytes, and decomposers of plant-based carbon sources. Mucoromycotina species known as mycoparasites, or as putative parasites of arthropods, behave much like saprobes. When Mucoromycota infect animals, they are regarded as opportunistic pathogens. Mucoromycotina are fast-growing fungi and early colonizers of carbon-rich substrates. Mortierellomycotina are common soil fungi that occur as root endophytes of woody plants and are isolated as saprobes. Glomeromycotina live in soil, forming a network of hyphae, but depend on organic carbon from host plants. In exchange, the arbuscular mycorrhizal fungi provide nutrients to the plant. Reproduction Known reproductive states of Mucoromycota are zygospore production and asexual reproduction. Zygospores can have decorations on their surface and range up to several millimeters in diameter. Asexual reproduction typically involves the production of sporangiospores or chlamydospores. Multicellular sporocarps are present within Mucoromycotina and Mortierellomycotina, and as aggregations of spore-producing structures in species of Glomeromycotina. As shown in the Mucorales, sexual reproduction is under the control of the mating type genes sexP and sexM, which regulate the production of pheromones required for the maturation of hyphae into gametangia. The sexP gene is expressed during vegetative growth and mating, while the sexM gene is expressed during mating. Sexual reproduction in Glomeromycotina is unknown, although its occurrence is inferred from genomic studies. However, specialized hyphae produce chlamydospore-like spores asexually; these may be borne at terminal (apical) or lateral positions on the hyphae, or intercalary (formed within the hypha, between sub-apical cells). Species of Glomeromycotina produce coenocytic hyphae that can have bacterial endosymbionts. Mortierellomycotina reproduce asexually by sporangia that either lack or have a reduced columella, which supports the sporangium. Species of Mortierellomycotina only form microscopic colonies, but some make multicellular sporocarps. Mucoromycotina sexual reproduction is by prototypical zygospore formation, and asexual reproduction involves the large-scale production of sporangia. Morphology Mucoromycotina contain discoidal hemispherical spindle pole bodies. Although spindle pole bodies function as microtubule organizing centers, they lack remnants of the centrioles' characteristic 9+2 microtubule arrangement. Species of Mucoromycotina and Mortierellomycotina produce large-diameter, coenocytic hyphae. Glomeromycotina also form coenocytic hyphae, with highly branched, narrow hyphal arbuscules in host cells. When septations occur in Mucoromycota, they are formed at the base of reproductive structures. Production of lipids, polyphosphates, and carotenoids Mucoromycota can utilize many substrates from various nitrogen and phosphorus resources to produce lipids, chitin, polyphosphates, and carotenoids.
They have been found to co-produce metabolites such as polyphosphates and lipids in a single fermentation process. The overproduction of chitin by Mucoromycota fungi can be accomplished by limiting inorganic phosphorus. Mucoromycota are capable of accumulating high amounts of lipids in their cell biomass, which allows the fungi to produce polyunsaturated fatty acids and carotenoids. Crude total lipid extracts from these fungi have also shown antimicrobial activity. The high lipid production of Mucoromycota has potential for use in biodiesel production. Gallery See also Mucor circinelloides References External links Zygomycota Fungus phyla Fungi by classification
Mucoromycota
Biology
1,027
826,220
https://en.wikipedia.org/wiki/%CE%91-Ethyltryptamine
α-Ethyltryptamine (αET, AET), also known as etryptamine, is an entactogen and stimulant drug of the tryptamine family. It was originally developed and marketed as an antidepressant under the brand name Monase by Upjohn in the 1960s before being withdrawn due to toxicity. Side effects of αET include facial flushing, headache, gastrointestinal distress, insomnia, irritability, appetite loss, and sedation, among others. A rare side effect of αET is agranulocytosis. αET acts as a releasing agent of serotonin, norepinephrine, and dopamine, as a weak serotonin receptor agonist, and as a weak monoamine oxidase inhibitor. It may also produce serotonergic neurotoxicity. αET is a substituted tryptamine and is closely related to α-methyltryptamine (αMT) and other α-alkylated tryptamines. αET was first described in 1947. It was used as an antidepressant for about a year around 1961. The drug started being used recreationally in the 1980s and several deaths have been reported. αET is a controlled substance in various countries, including the United States and United Kingdom. There has been renewed interest in αET, for instance as an alternative to MDMA, with the development of psychedelics and entactogens as medicines in the 2020s. Medical uses αET was previously used medically as an antidepressant and "psychic energizer" to treat people with depression. It was used for this indication under the brand name Monase. Available forms αET was available pharmaceutically as the acetate salt under the brand name Monase in the form of 15mg oral tablets. Effects αET is reported to have entactogen and weak psychostimulant effects. Euphoria, increased energy, openness, and empathy have been specifically reported. Unlike αMT and other tryptamines, αET is not reported to have psychedelic or hallucinogenic effects. The drug is described as less stimulating and intense than MDMA ("ecstasy") but as otherwise having entactogenic effects resembling those of MDMA. The dose of αET used recreationally has been reported to be 100 to 160mg, its onset of action has been reported to be 0.5 to 1.5hours, and its duration of action at the preceding doses is described as 6 to 8hours. Rapid tolerance to repeated administration of αET has been described. Side effects Side effects of αET at antidepressant doses have included facial flushing, headache, gastrointestinal distress, insomnia, irritability, and sedation. Additional side effects of αET at recreational doses have included appetite loss and feelings of intoxication. Feelings of lethargy and sedation can occur once the drug wears off. As with many other serotonin releasing agents, toxicity, such as serotonin syndrome, can occur when excessive doses are taken or when combined with certain drugs such as monoamine oxidase inhibitors (MAOIs). Several deaths have been associated with recreational use of αET. Rarely, agranulocytosis has occurred with prolonged administration of αET at antidepressant doses and has been said to have resulted in several cases and/or deaths. Overdose αET has been administered in clinical studies at doses of up to 300mg per day. An approximate but unconfirmed 700mg dose resulted in fatal hyperthermia and agitated delirium in one case. LD50 doses of αET for various species have been studied and described. Treatment of αET intoxication or overdose is supportive. Severe and potentially life-threatening hyperthermia may occur. 
Serotonergic toxicity associated with serotonergic agents like αET can be managed with benzodiazepines and with the serotonin receptor antagonist cyproheptadine. Pharmacology Pharmacodynamics Similarly to αMT, αET is a releasing agent of serotonin, norepinephrine and dopamine, with serotonin being the primary neurotransmitter affected. It is about 10-fold more potent in inducing serotonin release than in inducing dopamine release and about 28-fold more potent in inducing serotonin release than in inducing norepinephrine release. The (+)-enantiomer of αET, (+)-αET, is a serotonin–dopamine releasing agent (SDRA) and is one of the few such agents known. It is about 1.7-fold more potent in inducing serotonin release than in inducing dopamine release, about 17-fold more potent in inducing serotonin release than in inducing norepinephrine release, and is about 10-fold more potent in inducing dopamine release than in inducing norepinephrine release. In addition to acting as a monoamine releasing agent, αET acts as a serotonin receptor agonist. It is known to act as a weak partial agonist of the serotonin 5-HT2A receptor ( > 10,000nM; Emax = 21%). (–)-αET is inactive as a 5-HT2A receptor agonist at concentrations of up to 10μM, whereas (+)-αET is a 5-HT2A receptor agonist with an EC50 value of 1,250nM and an Emax value of 61%. αET has also been found to have weak affinity for the 5-HT1, 5-HT1E, 5-HT1F, and 5-HT2B receptors. αET is a weak monoamine oxidase inhibitor (MAOI). It is specifically a selective and reversible inhibitor of monoamine oxidase A (MAO-A). An value of 260μM in vitro and 80 to 100% inhibition of MAO-A at a dose of 10mg/kg in rats in vivo have been reported. αET is described as slightly more potent as an MAOI than dextroamphetamine. Both enantiomers of αET have similar activity as MAOIs, whereas αET's major metabolite 6-hydroxy-αET is inactive. The relatively weak MAOI actions of αET have been considered unlikely to be involved in its stimulant, antidepressant, and other psychoactive effects by certain sources. The stimulant effects of αET have been said to lie primarily in (–)-αET, whereas hallucinogenic effects have been said to be present in (+)-αET. However, these claims appear to be based on animal drug discrimination studies and are not necessarily in accordance with functional studies. Generalization to may have been anomalous and due to the serotonin-releasing actions of αET rather than due to serotonin 5-HT2A receptor activation and associated psychedelic effects. Accordingly, αET does not produce the head-twitch response in rodents, unlike known psychedelics. In addition, clear hallucinogenic effects of αET have never been documented in humans even at high doses, although the individual enantiomers of αET have never been studied in humans. αET has been found to produce serotonergic neurotoxicity similar to that of MDMA and para-chloroamphetamine (PCA) in rats. This has included long-lasting reductions in serotonin levels, 5-hydroxyindoleacetic acid (5-HIAA) levels, and serotonin uptake sites in the frontal cortex and hippocampus. The dosage of αET employed was 8doses of 30mg/kg by subcutaneous injection with doses spaced by 12-hour intervals. There are prominent species differences in the neurotoxicity of monoamine releasing agents. Primates appear to be more susceptible to the damage caused by serotonergic neurotoxins like MDMA than rodents. Pharmacokinetics The absorption of αET appears to be rapid. It has a relatively large volume of distribution. 
The drug undergoes hydroxylation to form the major metabolite 6-hydroxy-αET (3-(2-aminobutyl)-6-hydroxyindole). This metabolite is inactive. αET is eliminated primarily in urine and a majority of a dose is excreted in urine within 12 to 24hours. Its elimination half-life is approximately 8hours. Chemistry αET, also known as 3-(2-aminobutyl)indole, is a substituted tryptamine and α-alkyltryptamine derivative. Analogues of αET include α-methyltryptamine (αMT) and other substituted α-alkylated tryptamines like 5-MeO-αET, 5-chloro-αMT (PAL-542), and 5-fluoro-αET (PAL-545). History αET was first described in the scientific literature in 1947. The enantiomers of αET were first individually described in 1970. Originally believed to exert its effects predominantly via monoamine oxidase inhibition, αET was developed during the 1960s as an antidepressant by Upjohn chemical company in the United States under the generic name etryptamine and the brand name Monase, but was withdrawn from potential commercial use due to incidence of idiosyncratic agranulocytosis in several patients. It was on the market for about a year, around 1961, and was given to more than 5,000patients, before being withdrawn. αET was usually used as an antidepressant at doses of 30 to 40mg/day (but up to 75mg/day), which are lower than the doses that have been used recreationally. αET gained limited recreational popularity as a designer drug with MDMA-like effects in the 1980s. Subsequently, in the United States it was added to the Schedule I list of illegal substances in 1993 or 1994. Society and culture Names Etryptamine is the formal generic name of the drug and its and . In the case of the acetate salt, its generic name is etryptamine acetate and this is its . Etryptamine was used pharmaceutically as etryptamine acetate. Etryptamine is much more well-known as alpha-ethyltryptamine or α-ethyltryptamine (abbreviated as αET, α-ET, or AET). Other synonyms of αET and/or its acetate salt include 3-(2-aminobutyl)indole, 3-indolylbutylamine, PAL-125, U-17312E, Ro 3-1932, NSC-63963, and NSC-88061, as well as its former brand name Monase. Recreational use αET has been used as a recreational drug since the 1980s. Purported street names include Trip, ET, Love Pearls, and Love Pills. Legal status αET is a Schedule I controlled substance in the United States and a Class A controlled substance in the United Kingdom. Research Besides depression, αET has been studied in people with schizophrenia and other conditions. References External links #11. α-ET - TiHKAL - Erowid #11 α-ET - TiHKAL - isomer design AET - Erowid Alpha-Alkyltryptamines Antidepressants Designer drugs Entactogens and empathogens Monoamine oxidase inhibitors Monoaminergic neurotoxins Non-hallucinogenic 5-HT2A receptor agonists Serotonin-norepinephrine-dopamine releasing agents Serotonin receptor agonists Stimulants Withdrawn drugs
Α-Ethyltryptamine
Chemistry
2,505
9,734,557
https://en.wikipedia.org/wiki/MKS%20units
The metre, kilogram, second system of units, also known more briefly as MKS units or the MKS system, is a physical system of measurement based on the metre, kilogram, and second (MKS) as base units. Distances are described in terms of metres, mass in terms of kilograms and time in seconds. Derived units are defined using the appropriate combinations, such as velocity in metres per second. Some units have their own names, such as the newton unit of force which is the combination kilogram metre per second squared. The modern International System of Units (SI), from the French Système international d'unités, was originally created as a formalization of the MKS system. The SI has been redefined several times since then and is now based entirely on fundamental physical constants, but still closely approximates the original MKS units for most practical purposes. History By the mid-19th century, there was a demand by scientists to define a coherent system of units. A coherent system of units is one where all units are directly derived from a set of base units, without the need of any conversion factors. The United States customary units are an example of a non-coherent set of units. In 1874, the British Association for the Advancement of Science (BAAS) introduced the CGS system, a coherent system based on the centimetre, gram and second. These units were inconvenient for electromagnetic applications, since electromagnetic units derived from these did not correspond to the commonly used practical units, such as the volt, ampere and ohm. After the Metre Convention of 1875, work started on international prototypes for the kilogram and the metre, which were formally sanctioned by the General Conference on Weights and Measures (CGPM) in 1889, thus formalizing the MKS system by using the kilogram and metre as base units. In 1901, Giovanni Giorgi proposed to the Associazione elettrotecnica italiana (AEI) that the MKS system, extended with a fourth unit to be taken from the practical units of electromagnetism, such as the volt, ohm or ampere, be used to create a coherent system using practical units. This system was strongly promoted by electrical engineer George A. Campbell. The CGS and MKS systems were both widely used in the 20th century, with the MKS system being primarily used in practical areas, such as commerce and engineering. The International Electrotechnical Commission (IEC) adopted Giorgi's proposal as the M.K.S. System of Giorgi in 1935 without specifying which electromagnetic unit would be the fourth base unit. In 1939, the Consultative Committee for Electricity (CCE) recommended the adoption of Giorgi's proposal, using the ampere as the fourth base unit. This was subsequently approved by the CGPM in 1954. The rmks system (rationalized metre–kilogram–second) combines MKS with rationalization of electromagnetic equations. The MKS units with the ampere as a fourth base unit is sometimes referred to as the MKSA system. This system was extended by adding the kelvin and candela as base units in 1960, thus forming the International System of Units. The mole was added as a seventh base unit in 1971. Derived units Mechanical units Electromagnetic units See also Centimetre–gram–second system of units (CGS) Foot–pound–second system (FPS) List of metric units Metre–tonne–second system of units (MTS) Vacuum permeability § Systems of units and historical origin of value of μ0 References External links Description of the MKS system Metric system Systems of units
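To illustrate how derived units are built from the three MKS base units, the short sketch below represents each quantity as exponents of the metre, kilogram and second and composes them, recovering the newton as kg·m·s⁻². This is an illustrative example only; the dictionary-based representation and names are assumptions for demonstration, not part of any standard.

```python
# Represent an MKS quantity's dimensions as exponents of (metre, kilogram, second).
# Purely an illustrative sketch of how derived units combine the base units.
from collections import Counter

def combine(*dims):
    """Multiply dimensions together by adding their exponents."""
    total = Counter()
    for d in dims:
        total.update(d)
    return dict(total)

metre    = {"m": 1}
kilogram = {"kg": 1}
second   = {"s": 1}

velocity     = combine(metre, {"s": -1})        # metres per second
acceleration = combine(velocity, {"s": -1})     # metres per second squared
newton       = combine(kilogram, acceleration)  # kilogram metre per second squared

print(newton)  # {'kg': 1, 'm': 1, 's': -2} -> the newton expressed in MKS base units
```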
MKS units
Mathematics
751
2,183,554
https://en.wikipedia.org/wiki/DEC%20RADIX%2050
RADIX 50 or RAD50 (also referred to as RADIX50, RADIX-50 or RAD-50), is an uppercase-only character encoding created by Digital Equipment Corporation (DEC) for use on their DECsystem, PDP, and VAX computers. RADIX 50's 40-character repertoire (050 in octal) can encode six characters plus four additional bits into one 36-bit machine word (PDP-6, PDP-10/DECsystem-10, DECSYSTEM-20), three characters plus two additional bits into one 18-bit word (PDP-9, PDP-15), or three characters into one 16-bit word (PDP-11, VAX). The actual encoding differs between the 36-bit and 16-bit systems. 36-bit systems In 36-bit DEC systems RADIX 50 was commonly used in symbol tables for assemblers or compilers which supported six-character symbol names from a 40-character alphabet. This left four bits to encode properties of the symbol. For its similarities to the SQUOZE character encoding scheme used in IBM's SHARE Operating System for representing object code symbols, DEC's variant was also sometimes called DEC Squoze; however, IBM SQUOZE packed six characters of a 50-character alphabet plus two additional flag bits into one 36-bit word. RADIX 50 was not normally used in 36-bit systems for encoding ordinary character strings; file names were normally encoded as six six-bit characters, and full ASCII strings as five seven-bit characters and one unused bit per 36-bit word. 18-bit systems RADIX 50 (also called Radix 50₈ format) was used in Digital's 18-bit PDP-9 and PDP-15 computers to store symbols in symbol tables, leaving two extra bits per 18-bit word ("symbol classification bits"). 16-bit systems Some strings in DEC's 16-bit systems were encoded as 8-bit bytes, while others used RADIX 50 (then also called MOD40). In RADIX 50, strings were encoded in successive words as needed, with the first character within each word located in the most significant position. For example, using the PDP-11 encoding, the string "ABCDEF", with character values 1, 2, 3, 4, 5, and 6, would be encoded as a word containing the value 1×40² + 2×40¹ + 3×40⁰ = 1683, followed by a second word containing the value 4×40² + 5×40¹ + 6×40⁰ = 6606. Thus, 16-bit words encoded values ranging from 0 (three spaces) to 63,999 ("999"). When there were fewer than three characters in a word, the last word for the string was padded with trailing spaces. There were several minor variations of this encoding with differing interpretations of the 27, 28, 29 code points. Where RADIX 50 was used for filenames stored on media, the code points represent the , , characters, and will be shown as such when listing the directory with utilities such as DIR. When encoding strings in the PDP-11 assembler and other PDP-11 programming languages the code points represent the , , characters, and are encoded as such with the default RAD50 macro in the global macros file, and this encoding was used in the symbol tables. Some early documentation for the RT-11 operating system considered the code point 29 to be undefined. The use of RADIX 50 was the source of the filename size conventions used by Digital Equipment Corporation PDP-11 operating systems. Using RADIX 50 encoding, six characters of a filename could be stored in two 16-bit words, while three more extension (file type) characters could be stored in a third 16-bit word. Similarly, a three-character device name such as "DL1" could also be stored in a 16-bit word. 
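The packing arithmetic above can be made concrete with a short sketch. The following Python example is an illustration rather than DEC code: it packs and unpacks three characters per 16-bit word in the PDP-11 style, and its alphabet string assumes '$', '.', '%' for code points 27–29, one common variant of the interpretation that, as noted above, differed between DEC environments.

```python
# Sketch of PDP-11-style RADIX 50 packing: three characters per 16-bit word.
# Code points 27-29 are assumed to be '$', '.', '%' here; DEC software varied on this.
RAD50_CHARS = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"

def rad50_encode(text):
    """Pack an uppercase string into 16-bit words, three characters per word."""
    text = text.upper().ljust(-(-len(text) // 3) * 3)   # pad with spaces to a multiple of 3
    words = []
    for i in range(0, len(text), 3):
        a, b, c = (RAD50_CHARS.index(ch) for ch in text[i:i + 3])  # ValueError if not in the alphabet
        words.append(a * 1600 + b * 40 + c)              # a*40^2 + b*40^1 + c*40^0
    return words

def rad50_decode(words):
    """Unpack a sequence of 16-bit words back into a string."""
    out = []
    for w in words:
        out.append(RAD50_CHARS[w // 1600] + RAD50_CHARS[(w // 40) % 40] + RAD50_CHARS[w % 40])
    return "".join(out)

print(rad50_encode("ABCDEF"))      # [1683, 6606], matching the worked example above
print(rad50_decode([1683, 6606]))  # 'ABCDEF'
```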
The period that separated the filename and its extension, and the colon separating a device name from a filename, were implied (i.e., they were not stored and were always assumed to be present). See also Base 40 Base conversion Chen–Ho encoding Densely packed decimal (DPD) Hertz encoding Packed BCD Six-bit character code Split octal References Further reading External links https://github.com/turbo/ptt-its/blob/master/doc/info/midas.25 Character encoding Character sets Digital Equipment Corporation
DEC RADIX 50
Technology
956
56,214,308
https://en.wikipedia.org/wiki/C/2017%20T1%20%28Heinze%29
C/2017 T1 (Heinze) is a hyperbolic comet that passed closest to Earth on 4 January 2018 at a distance of . Discovery and observations It was discovered on 2 October 2017 by Aren N. Heinze of the University of Hawaiʻi, using the 0.5-m Schmidt telescope at the Mauna Loa Observatory used for the Asteroid Terrestrial-impact Last Alert System (ATLAS). Perihelion was reached on 21 February 2018, and the comet was expected to peak at about magnitude 8.8. However, this intrinsically faint comet began to disintegrate around this time. It was last observed as a dim 16th-magnitude object on 23 April 2018. Observation path References External links Non-periodic comets Hyperbolic comets Near-Earth comets Destroyed comets 20171002 Comets in 2017 Comets in 2018
C/2017 T1 (Heinze)
Astronomy
166
61,495,693
https://en.wikipedia.org/wiki/C12H18N4O4
{{DISPLAYTITLE:C12H18N4O4}} The molecular formula C12H18N4O4 (molar mass: 282.30 g/mol, exact mass: 282.1328 u) may refer to: Dupracetam ICRF 193
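As a quick check of the molar mass quoted above, the following sketch sums standard atomic weights for the formula C12H18N4O4. It is an illustrative calculation only; the atomic weights used are standard values rounded to three decimals.

```python
# Recompute the molar mass of C12H18N4O4 from standard atomic weights (g/mol).
atomic_weight = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
formula = {"C": 12, "H": 18, "N": 4, "O": 4}

molar_mass = sum(count * atomic_weight[element] for element, count in formula.items())
print(round(molar_mass, 2))  # 282.3, matching the molar mass quoted above
```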
C12H18N4O4
Chemistry
62
13,791,717
https://en.wikipedia.org/wiki/8%20Cygni
8 Cygni is a single star in the northern constellation of Cygnus. Based upon its parallax of 3.79 mas, it is approximately 860 light-years (260 parsecs) away from Earth. It is visible to the naked eye as a faint, bluish-white hued star with an apparent visual magnitude of about 4.7. The star is moving closer to the Earth with a heliocentric radial velocity of −21 km/s. This is an aging subgiant star, as indicated by its spectral type of B3IV. Its effective temperature of 16,100 K fits into the normal range of B-type stars: 11,000 to 25,000 K. 8 Cygni is nearly three times as hot as the Sun (whose effective temperature is about 5,800 K), and it is six times larger and many times brighter in comparison. The elemental abundances are near solar. References B-type subgiants Cygnus (constellation) Durchmusterung objects Cygni, 08 184171 096052 7426
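The distance quoted above follows directly from the parallax. The snippet below is an illustrative calculation, not taken from the source, converting a parallax in milliarcseconds to parsecs and light-years.

```python
# Distance from trigonometric parallax: d [pc] = 1000 / parallax [mas].
parallax_mas = 3.79
distance_pc = 1000.0 / parallax_mas            # roughly 264 parsecs
distance_ly = distance_pc * 3.2616             # 1 pc is about 3.2616 light-years
print(round(distance_pc), round(distance_ly))  # ~260 pc, ~860 ly as stated above
```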
8 Cygni
Astronomy
211
31,683,471
https://en.wikipedia.org/wiki/Cortinarius%20caninus
Cortinarius caninus is a basidiomycete mushroom in the family Cortinariaceae. General Cortinarius species are characterized by their cortina (a very fine, web-like veil). Cortinarius is one of the largest genera of mushroom-forming fungi, with species numbering in the thousands. Description Cortinarius caninus has a creamy brown cap measuring up to 9 cm in diameter. The stipe (stem) is fibrous and bulbous, measuring 5–11 cm in height with a diameter of 0.8 to 1.4 cm. It fruits in autumn in forests, especially coniferous ones. The species is inedible. Gallery References https://web.archive.org/web/20110722071902/http://www.cegep-sept-iles.qc.ca/raymondboyer/champignons/Cortinaires_S.html caninus Fungi described in 1821 Inedible fungi Fungus species
Cortinarius caninus
Biology
209
45,297,852
https://en.wikipedia.org/wiki/Animation%20department
Animation departments (or animation production departments) are the teams within a film studio that work on various aspects of animation such as storyboarding or 3D modeling. It can refer to a single department that handles animation as a whole or to multiple departments that handle specific tasks. It can also refer to a college department. Departments of animation Retake department - looks for mistakes in animation and has it redone. An animator will check all frames one by one in order to ensure they flow smoothly. Compositing department - handles special effects such as chroma keying and other aspects of compositing. Inbetweening department - creates in-betweens, the frames that go between key frames (the main points of action in a scene) that make up the bulk of an animation. Editing department - compiles and edits the animation (either in part or in its entirety) so that it is consistent. Background department - draws the background art for scenes. Storyboard department - plans out the animation using sketches of its main points (a storyboard). Scanning department - converts traditionally-drawn media to digital and ensures frames aren't lost in the process. Sound effects and musical scoring department - creates soundtracks and sound effects, such as with choirs, instruments, and Foley. Layout department - stages scenes and creates plans for how a scene should look. See also Graphics Cinematography Computer Technology References Animation Design occupations Arts occupations
Animation department
Engineering
288
37,020,373
https://en.wikipedia.org/wiki/Cheng%20rotation%20vane
A fluid flow conditioning device, the Cheng rotation vane (CRV) is a stationary vane fabricated within a pipe piece as a single unit and welded directly upstream of an elbow before a pump inlet, flow meter, compressor, or other downstream equipment. The Cheng rotation vane is used to eliminate elbow-induced turbulence, cavitation, erosion, and vibration, which affect pump performance, seal life, and impeller life, and which can lead to bearing failure, flow-meter inaccuracy, pipe bursts, and other common piping problems. References Fluid mechanics
Cheng rotation vane
Engineering
105
1,989,091
https://en.wikipedia.org/wiki/Black%20tar%20heroin
Black tar heroin, also known as black dragon, is a form of heroin that is sticky like tar or hard like coal. Its dark color is the result of crude processing methods that leave behind impurities. Despite its name, black tar heroin can also be dark orange or dark brown in appearance. Black tar heroin is impure diacetylmorphine. Other forms of heroin require additional steps of purification post acetylation. With black tar, the product's processing stops immediately after acetylation. Its unique consistency however is due to acetylation without a reflux apparatus. As in homebake heroin in Australia and New Zealand the crude acetylation results in a gelatinous mass. Black tar as a type holds a variable admixture of morphine derivatives—predominantly 6-MAM (6-monoacetylmorphine), which is another result of crude acetylation. The lack of proper reflux during acetylation fails to remove much of the moisture retained in the acetylating agent, acetic anhydride. The acetic anhydride reacts with the moisture to produce the milder acetylating agent glacial acetic acid which is unable to acetylate the 3 position of the morphine molecule. Black tar heroin is often produced in Latin America, and is most commonly found in the western and southern parts of the United States, while also being occasionally found in Western Africa. It has a varying consistency depending on manufacturing methods, cutting agents, and moisture levels, from tarry goo in the unrefined form to a uniform, light-brown powder when further processed and cut with a variety of agents. One of the more notable compounds added to heroin is lactose. Composition Pure morphine and heroin are both fine white powders. Black tar heroin's unique appearance and texture are due to its acetylation without the benefit of the usual reflux apparatus. The assumption that tar has fewer adulterants and diluents is a misconception. The most common adulterant is lactose, which is added to tar via dissolving of both substances in a liquid medium, reheating and filtering, and then recrystallizing. This process is very simple and can be accomplished in any kitchen with no level of expertise needed. The price per kilogram of black tar heroin has increased from one-tenth that of South American powder heroin in the mid-1990s to between one-half and three-quarters in 2003 due to increased distributional acumen combined with increased demand in black tar's traditional realm of distribution. Black tar heroin distribution has steadily risen in recent years, while that of U.S. East Coast powder varieties has dropped; heroin production in Colombia decreased from the late 1990s into the early 2000s. Adverse effects People who intravenously inject black tar heroin are at higher risk of venous sclerosis than those injecting powder heroin. In this condition, the veins narrow and harden which makes repeated injection there nearly impossible. The presence of 6-monoacetylcodeine found in tar heroin has not been tested in humans but has been shown to be toxic alone and more toxic when mixed with mono- or di- acetyl morphine, potentially making tar more toxic than refined diamorphine. Black tar heroin injectors can be at increased risk of life-threatening bacterial infections, in particular necrotizing soft tissue infection. The practice of "skin-popping" or subcutaneous injection predisposes to necrotizing fasciitis or necrotizing cellulitis from Clostridium perfringens, while deep intramuscular injection predisposes to necrotizing myositis. 
Tar heroin injection can also be associated with Clostridium botulinum infection. Since the final stage of black tar heroin production would kill any spores (a combination of high temperature and strong acid), contamination is likely due to choice of cutting agent. Almost all cases occur in users who inject intramuscularly or subcutaneously, rather than injecting intravenously. Black tar heroin users can also be at increased risk of bone and joint infections that stem from hematogenous seeding or local extension of the skin and soft tissue infections. Any joint can be infected, though previous studies have shown that the knee and hip are most commonly affected in heroin injectors. Associated bone infections can include septic bursitis, septic tenosynovitis, and osteomyelitis. Septic arthritis and skin and soft tissue infections often present visible and/or systematic symptoms, while osteomyelitis usually presents localized pain. Alternative routes of administration In some parts of the United States, black tar may be the only form of heroin that is available. Many users do not inject. Grinding into a powder form: This is one of the more popular ways of consuming black tar for those who do not wish to use needles. The black tar heroin is put into some sort of blender and mixed in with lactose. This creates a fine black powder product that can be easily snorted. Water looping: Water looping is when a user places the heroin in an empty eye dropper bottle, or a syringe with the needle removed. The user allows the heroin to completely dissolve into water and the solution is dropped into the nose. This at times can be wasteful if a user allows too much of the solution to go down the throat. Vaporizing (Chasing the dragon): A user puts the heroin on a piece of foil and heats the foil with a lighter underneath it. The user uses a straw or similar apparatus and inhales the vapor. Drinking: This is done similar to the water looping method. Instead of being delivered through the nose, the solution is swallowed. Suppository: The most effective route of administration which does not require a needle, is accomplished by delivering a solution (via syringe) or lubricated mass of the narcotic deep into the rectum or vagina. See also Black cocaine References External links National Drug Threat Assessment 2005, National Drug Intelligence Center. Accessed 30 December 2019. . Accessed 15 December 2005. Heroin Adulteration Slang
Black tar heroin
Chemistry
1,272
38,663,146
https://en.wikipedia.org/wiki/NDR%20kinase
NDR (nuclear dbf2-related) kinases are an ancient and highly conserved subclass of AGC protein kinases that control diverse processes related to cell morphogenesis, proliferation, and mitotic events. Function and medical relevance Like most AGC kinases, the NDR kinase subclass is activated by phosphorylation of a conserved serine or threonine in an activation region C-terminal to the kinase catalytic domain. The NDR kinases are distinguished by an apparently functionally essential binding of MOB co-activator proteins that are also widely present in eukaryotes. Most NDR kinase catalytic domains also contain an extended insert region that may function as an auto-inhibitory element. The NDR kinase family can be further divided into two subgroups, the Ndr family and the Wts/Lats family. Humans have four NDR kinases: Ndr1 (or STK38), Ndr2 (or STK38L), Lats1 (large tumor suppressor-1) and Lats2. In animals these kinases have reported roles in the regulation of diverse processes, including cell proliferation control, activity of proto-oncogenic proteins, apoptosis, centrosome duplication, and organization of neuronal dendrites. In unicellular eukaryotes, Ndr kinases play important roles in the control of the cell cycle and morphogenesis. In the fission yeast Schizosaccharomyces pombe, an organism amenable to the study of cell morphogenesis, the Ndr kinase Orb6 has a role in cell polarity and morphogenesis control, in part through regulation of the small Rho-type GTPase Cdc42. Specifically, Orb6 kinase spatially restricts Cdc42 activation to the polarized tips of a cell, causing the Cdc42-dependent formin For3 (an F-actin cable polymerization factor) to also be activated at the cell tips, ensuring proper cell growth and polarization. Upon loss of Orb6 kinase function, cells fail to maintain a polarized cell shape and become round. References EC 2.7.1 Cell cycle Cell movement
NDR kinase
Biology
463
2,647,057
https://en.wikipedia.org/wiki/Trinomial
In elementary algebra, a trinomial is a polynomial consisting of three terms or monomials. Examples of trinomial expressions include sums of three monomials in one or several variables, such as the quadratic polynomial in standard form, ax^2 + bx + c, in the variable x, and the general form Ax^a + Bx^b + Cx^c, where x is the variable, the exponents a, b and c are nonnegative integers and A, B and C are any constants. Trinomial equation A trinomial equation is a polynomial equation involving three terms. An example is the equation x = q + x^m studied by Johann Heinrich Lambert in the 18th century. Some notable trinomials The quadratic trinomial in standard form (as above): ax^2 + bx + c. The sum or difference of two cubes: a^3 ± b^3. A special type of trinomial can be factored in a manner similar to quadratics since it can be viewed as a quadratic in a new variable (x^m below). This form is factored as: x^(2m) + rx^m + s = (x^m + a_1)(x^m + a_2), where a_1 + a_2 = r and a_1·a_2 = s. For instance, a polynomial of this type with m = 2 is x^4 + rx^2 + s; solving the above system for a_1 and a_2 gives its trinomial factorization. The same result can be provided by Ruffini's rule, but with a more complex and time-consuming process. See also Trinomial expansion Monomial Binomial Multinomial Simple expression Compound expression Sparse polynomial Notes References Elementary algebra Polynomials
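As a worked illustration of the quadratic-in-x^m factoring described above, take m = 2, r = 5, s = 6; these specific coefficients are chosen here for demonstration and are not taken from the source.

```latex
% Worked example of factoring x^{2m} + r x^m + s as a quadratic in x^m (here m = 2).
\[
x^{4} + 5x^{2} + 6 = (x^{2} + a_1)(x^{2} + a_2)
  \quad\text{with}\quad a_1 + a_2 = 5,\qquad a_1 a_2 = 6,
\]
\[
\text{so } a_1 = 2,\ a_2 = 3
  \quad\Longrightarrow\quad
  x^{4} + 5x^{2} + 6 = (x^{2} + 2)(x^{2} + 3).
\]
```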
Trinomial
Mathematics
272
29,005,334
https://en.wikipedia.org/wiki/Davis%20%26%20Shirtliff
The Davis & Shirtliff Group is the leading supplier of water and energy-related equipment in the East African region. Founded in Kenya in 1946, the company specializes in eight principal sectors: Water Pumps: Offering a wide range of high-quality pumps, including leading brands such as Dayliff, Pedrollo, Grundfos, Davey, DAB, Rovatti Pompe, and Flowserve. Solar Solutions: Providing solar panels, support structures, water heaters, inverters, backup systems, solar pumps, controls, accessories, and energy storage systems. General Machinery: Supplying diesel and petrol generators, welding generators, engines, mowers, trimmers, compressors, pressure washers, outboard engines, and agricultural machinery such as hammer mills. Swimming Pools: Specializing in pool filters, pumps, chemicals, chlorinators, accessories, spas, saunas, and fountain nozzles. Water Treatment: Offering solutions for domestic and industrial applications, including reverse osmosis systems, UV systems, water treatment plants, filters, softeners, chemical dosage systems, and treatment media. Chemicals: Providing water treatment chemicals, laboratory chemicals, and associated equipment. Irrigation & Water Supply Accessories: Supplying irrigation kits, Hunter accessories, pressure tanks, water meters, and other related products. Controllers & Digital Solutions: Including pump controllers, control panels, and meters for enhanced system management. Headquartered in Kenya, Davis & Shirtliff operates under an extensive network of branches across 12 countries namely Uganda, Tanzania, Zambia, Rwanda, South Sudan, Democratic Republic of the Congo (DRC), Zimbabwe, Somalia and Burundi, alongside a partnership in Ethiopia. Recently, the company has expanded its operations to include Senegal. This represents 109 branches across the various countries. The company is renowned for its innovative, sustainable, and efficient solutions that cater to residential, commercial, and industrial needs in the Water & Energy sectors. History The Davis & Shirtliff Group was founded in 1946 as a partnership between EC 'Eddie' Davis and FR 'Dick' Shirtliff after Dick Shirtliff purchased 50% of RH Paige & Co., a small plumbing and water engineering firm founded in 1926 and which was bought into by Eddie Davis in 1945. In 1947, it became a founding member of the Kenya Association of Building and Civil Engineering Contractors (KABCEC) and, in 1955, after purchasing two new plots of land and constructing new offices and workshops, moved its operations to its new site. In 1965 it took delivery of the first consignment of pumps from Grundfos in Denmark and, in 1968, imported a consignment of Davey pumps from Australia. In 1982 the range of Grundfos products was expanded to include solar pumps. 1985 saw the appointment of Butech Limited as the Mombasa distributor of their products and the commencement of fibreglass filter production. In 1992, the group & Shirtliff purchased a 20% shareholding in Butech Limited; which was finally bought out in 2000, becoming the Mombasa Branch of Davis & Shirtliff. In 1993 they imported their first Linz pumps from Pedrollo, Italy. In 1995 they introduced the 'Pump Centre' and the establishment of a countrywide range of dealers. 1995 also saw the opening of a Branch in Eldoret. Subsidiaries opened in Kampala and Dar-es-Salaam in 1996 and 1998 respectively. The business has recently expanded. 
Throughout the first decade of the millennium, Davis & Shirtliff opened branches and subsidiaries in Lusaka; Kigali; Nakuru and Arusha; Zanzibar and Kitwe; Malindi; Addis Ababa, Mwanza and Diani; Juba; and Mbeya in 2001, 2004, 2005, 2006, 2007, 2008, 2010 and 2021 respectively, thereby expanding the distribution of its products and its consumer base. In 2001, Davis & Shirtliff was appointed the regional distributor for Pedrollo Pumps. In 2003, the firm's solar division was established as a regional distributor for Shell Solar. Also, in 2004, it became a Certikin distributor and, in 2005, was appointed as a Lister Petter engine generator distributor for the region. Community work The company assists needy institutions to obtain water supplies. Funding for community activities is largely provided through an annual contribution from the company and its staff. Business partners are also involved and Grundfos has been supportive of the group’s activities through the donation of money, equipment and supplies. Davis & Shirtliff has also partnered with Kenya Airways on a water project in Runana. References External links Official Website Dayliff Website Bioliff Website Companies based in Nairobi Water industry
Davis & Shirtliff
Environmental_science
959
34,995,809
https://en.wikipedia.org/wiki/PCLake
PCLake is a dynamic, mathematical model used to study eutrophication effects in shallow lakes and ponds. PCLake models explicitly the most important biotic groups and their interrelations, within the general framework of nutrient cycles. PCLake is used both by scientist and water managers. PCLake is in 2019 extended to PCLake+, which can be applied to stratifying lakes. Background Typically, shallow lakes are in one of two contrasting alternative stable states: a clear state with submerged macrophytes and piscivorous fish, or a turbid state dominated by phytoplankton and benthivorous fish. A switch from one state to the other is largely driven by the input of nutrients (phosphorus and nitrogen) to the ecosystem. If the nutrient loading exceeds a critical value, eutrophication causes a switch from the clear to the turbid state. As a result of urban water pollution and/or intensive agriculture in catchment areas, many of the world’s shallow lakes and ponds are in a eutrophic state with turbid waters and poor ecological quality. In this turbid state, the lake also becomes subject to algal blooms of toxic cyanobacteria (also called blue-green algae). Recovery of the clear state however is difficult as the critical nutrient loading for the switch back is often found to be lower than the critical loading towards the turbid state. Lowering the nutrient input thus does not automatically lead to a switch back to the clear water phase. Hence, the system shows hysteresis. Application PCLake is designed to study the effects of eutrophication on shallow lakes and ponds. On one hand, the model is used by scientists to study the general behavior of these ecosystems. For example, PCLake is used to understand the phenomena of alternative stable states and hysteresis, and in that light, the relative importance of lake features such as water depth or fetch length. Also the potential effects of climate warming for shallow lakes have been studied. On the other hand, PCLake is applied by lake water resource managers that consider the turbid state as undesirable. They can use the model to define the critical loadings for their specific lakes and evaluate the effectiveness of restoration measures. For this purpose also a meta-model has been developed. The meta-model can be used by water managers to derive an estimate of the critical loading values for a certain lake based on only a few important parameters, without the need of running the full dynamical model. Model content Mathematically, PCLake is composed of a set of coupled differential equations. With a large number of state variables (>100) and parameters (>300), the model may be characterized as relatively complex. The main biotic variables are phytoplankton and submerged aquatic vegetation, describing primary production. A simplified food web is made up of zooplankton, zoobenthos, young and adult whitefish and piscivorous fish. The main abiotic factors are transparency and the nutrients phosphorus (P), nitrogen (N) and silica (Si). At the base of the model are the water and nutrient budgets (in- and outflow). The model describes a completely mixed water body and comprises both the water column and the upper sediment layer. The overall nutrient cycles for N, P and Si are described as completely closed (except for in- and outflow and denitrification). Inputs to the model are: lake hydrology, nutrient loading, dimensions and sediment characteristics. 
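To illustrate what a dynamic lake model of this general kind looks like in code, the sketch below integrates a deliberately tiny two-variable nutrient–phytoplankton system with Euler steps and shows biomass increasing with nutrient loading. This is a toy illustration only: the equations, parameter names, and values are invented for demonstration and are not PCLake's actual formulation, which, as noted above, has over 100 state variables and 300 parameters.

```python
# Toy nutrient (N) - phytoplankton (P) model integrated with forward Euler steps.
# Purely illustrative: these equations and parameters are invented for demonstration
# and do not represent PCLake's actual, much larger set of coupled equations.
def simulate(loading, n0=1.0, p0=0.1, dt=0.01, days=365):
    n, p = n0, p0
    for _ in range(int(days / dt)):
        uptake = 0.5 * n / (n + 1.0) * p      # Monod-type nutrient uptake by phytoplankton
        dn = loading - uptake - 0.05 * n      # external loading, uptake, flushing loss
        dp = uptake - 0.1 * p                 # growth minus mortality and grazing
        n, p = n + dn * dt, p + dp * dt
    return p                                  # phytoplankton biomass after one year

for load in (0.05, 0.2, 0.8):
    print(load, round(simulate(load), 3))     # biomass rises with nutrient loading
```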
The model calculates chlorophyll-a, transparency, cyanobacteria, vegetation cover and fish biomass, as well as the concentrations and fluxes of nutrients N, P and Si, and oxygen. Optionally, a wetland zone with marsh vegetation and water exchange with the lake can be included. PCLake is calibrated against nutrient, transparency, chlorophyll and vegetation data on more than 40 European (but mainly Dutch) lakes, and systematic sensitivity and uncertainty analysis have been performed. Although PCLake is primarily used for Dutch lakes, it is likely that the model is also applicable to comparable non-stratifying lakes in other regions, if parameters are adjusted or some small changes to the model are made. Model development The first version of PCLake (by then called PCLoos) was built in the early 1990s at the Netherlands National Institute for Public Health and the Environment (RIVM), within the framework of a research and restoration project on Lake Loosdrecht. It has been extended and improved since then. Parallel to PCLake, PCDitch was created, which is an ecosystem model for ditches and other linear water bodies. The models were further developed by dr. Jan H. Janse and colleagues at the Netherlands Environmental Assessment Agency (PBL), formerly part of the RIVM. Since 2009, the model is jointly owned by PBL and the Netherlands Institute of Ecology, where further development and application of PCLake is taking place, related to aquatic-ecological research. See also Ecosystem model Water quality modelling Ecopath References Mathematical modeling Environmental chemistry
PCLake
Chemistry,Mathematics,Environmental_science
1,052
35,556,771
https://en.wikipedia.org/wiki/C5H7N3O
{{DISPLAYTITLE:C5H7N3O}} C5H7N3O may refer to: Methylcytosine 5-Methylcytosine 1-Methylcytosine, a nucleic acid in Hachimoji DNA N(4)-Methylcytosine 6-Methylcytosine Methylisocytosine 1-Methylisocytosine 3-Methylisocytosine 4-Methylisocytosine 5-Methylisocytosine 6-Methylisocytosine () See also Cytosine Isocytosine Nucleic acid analogue
C5H7N3O
Chemistry
125
77,309,027
https://en.wikipedia.org/wiki/Jun12682
Jun12682 is an experimental antiviral medication being studied as a potential treatment for COVID-19. It is believed to work by inhibiting SARS-CoV-2 papain-like protease (PLpro), a crucial enzyme for viral replication. Mechanism of action The SARS-CoV-2 virus utilizes several proteases to assist in creating proteins that are essential for viral replication. Among these, the papain-like protease (PLpro) is responsible for cleaving specific sites in the viral polyproteins, facilitating the production of functional viral proteins. By binding to both the BL2 groove and Val70Ub site of PLpro protease, Jun12682 is believed to interfere with the virus's ability to produce new viral proteins, thereby inhibiting the viral replication process. In a study involving mice infected with SARS-CoV-2, mice orally administered Jun12682 experienced reduced viral loads in their lungs, decreased lung lesions, reduced weight loss, and improved survival when compared to those in the control group. The protease targeted by Jun12682 (PLpro) is distinct from the protease targeted by some other antiviral medications, such as nirmatrelvir/ritonavir, which specifically inhibit the SARS-CoV-2 main protease (Mpro). Laboratory studies have indicated that Jun12682 may retain efficacy against certain strains of SARS-CoV-2 that have developed resistance to other antiviral agents, including nirmatrelvir. This characteristic may position Jun12682 as an option in the treatment of COVID-19 in cases where viral resistance to existing therapies is a concern. References COVID-19 drug development Experimental antiviral drugs Dimethylamino compounds Pyrazoles Ethanolamines Benzamides
Jun12682
Chemistry
397
26,176,522
https://en.wikipedia.org/wiki/Mycoforestry
Mycoforestry is an ecological forest management system implemented to enhance forest ecosystems and plant communities by introducing mycorrhizal and saprotrophic fungi. Mycoforestry is considered a type of permaculture and can be implemented as a beneficial component of an agroforestry system. It can enhance the yields of tree crops and produce edible mushrooms, an economically valuable product. By integrating plant-fungal associations into a forestry management system, native forests can be preserved, wood waste can be recycled back into the ecosystem, carbon sequestration can be increased, planted restoration sites are enhanced, and the sustainability of forest ecosystems is improved. Mycoforestry is an alternative to the practice of clearcutting, which removes dead wood from forests, thereby diminishing nutrient availability and reducing soil depth. Selection of fungal species According to Paul Stamets, the first principle for the creation of a mycoforestry system is to utilize native fungal species. Implementing a mycoforestry system provides the potential of improving restoration efforts and the possibility of economic gain through mushroom cropping and harvesting. However, to utilize the native fungal flora, the relationships between the fungal species present, their growth substrates, and their habitat first need to be studied. A simple way to introduce a mycoforestry system and enhance out-plantings for crops and forest restoration sites is to "use mycorrhizal spore inoculum when replanting forest lands." For this process it is best to match native trees with native mycorrhizal fungi. This method maintains and promotes the functioning of the native ecosystem and native biodiversity. It is assumed that in a functioning forest ecosystem an underground mycelial network persists even if no fruiting bodies are visible. A period of disappearance of mushrooms from an area should not cause alarm. In order to trigger the formation of fruiting bodies, many fungal species require specific environmental conditions. Most species of fungi do not fruit year round. Mycoforestry is an emergent scientific field and practice. Until broadly standardized protocols are created and perfected, the collection of both current and historical ecological site conditions will improve the success of the project. Therefore, a survey of fungal relations at the site under both prime and poor conditions is beneficial to the implementation of a mycoforestry system. Saprotrophic fungi The second principle is to promote saprotrophic fungi in the environment. Saprotrophic fungi are crucial to mycoforestry systems because they are the primary decomposers breaking down wood and returning nutrients to the soil for use by the rest of the forest ecosystem. This can be accomplished through inoculation of wood debris on site. Spored oils (biodegradable oils containing fungal spores) can be used in chainsaws when problematic or invasive hardwood requires felling. This method is a simple means to inoculate a tree. Additionally, plug spawn can be injected into wood mass, again prompting colonization by the selected fungus. Eventually, repeated colonization efforts should not be necessary, as many fungi are resilient and will spread and sustain themselves in the soil on their own. In management of the mycoforestry system, it is important that dead wood be in contact with the ground. This allows fungus to reach up from the soil and decompose fallen wood, releasing nutrients at a much quicker rate than if the wood is left standing. 
Additionally, it is important to leave dead wood on site for decomposition back into the soil. This philosophy is likewise based on the fact that clearcutting of a forest reduces soil nutrients and soil thickness. Beneficial fungal interactions The third principle is to implement species known to benefit plant species. These are commonly mycorrhizal fungi that form long-term associations with plants, often extending inside plant roots, acting as an additional root system, and improving absorption of nutrients and water. Utilizing mushroom species that attract insects could provide a useful source of fish food. This practice makes the mycoforestry a larger system. Unlike most agriculture systems, it helps the environment in a number of ways. It ties all biological aspects of the environment together, creating sustainable living and food production as well as sustainable fisheries, similar to the ancient Hawaiian Ahupua'a, which sustainably utilized all portions of the land for environmental and food security. Additionally, fungal species can be implemented that compete with disease-causing agents such as Armillaria root rots, to provide long-term protection of the forestry system. Additionally, the implementation of a mycoforestry system performs mycoremediation and mycofiltration activities, cleaning up toxins and restoring the environment. See also Mycorestoration References External links Spinosa, Ron. Fungi and Sustainability. Fungi magazine. Spring 2008. Stamets, Paul. Mycotechnology. Fungi Perfecti. Forestry Mycology Agroforestry Sustainable forest management Habitat management equipment and methods Habitat Permaculture
Mycoforestry
Biology
997
42,729,306
https://en.wikipedia.org/wiki/AXELOS
AXELOS is a joint venture set up in 2014 by the Government of the United Kingdom and Capita, to develop, manage and operate qualifications in best practice, in methodologies formerly owned by the Office of Government Commerce (OGC). PeopleCert, an examination institute that was responsible for delivering AXELOS exams, acquired AXELOS in 2021. Portfolio AXELOS manages: ITIL (Information Technology Infrastructure Library) – IT Service Management published in 1989 (updated 2000, 2007, 2011 & 2019/20) PRINCE2 (Projects IN Controlled Environments) – Project Management published in 1996 (updated 1998, 2002, 2005, 2009 & 2017) MSP (Managing Successful Programmes) – Program Management published in 1999 (updated 2003, 2007, 2011 & 2020) M_o_R (Management of Risk) – Risk Management published in 2002 (updated 2007 & 2010) P3M3 or Portfolio, Programme and Project Management Maturity Model published in 2005 (updated 2008 & 2015) P3O (Portfolio, Programme and Project Offices) published in 2008 (updated 2013) MoV (Management of Value) – Value Management published in 2010 MoP (Management of Portfolios) – Portfolio Management published in 2011 RESILIApublished in 2015 PRINCE2 Agile – Agile Project Managementpublished in 2015 AgileSHIFT published in 2018 There are third-party training providers, but Axelos manages certification. In April 2014, AXELOS announced that it was also launching a cyber-resilience qualification; this would complement guidance available from CESG. PeopleCert have been chosen by AXELOS as the sole EI (Examination Institute) for the delivery of Accreditation and Examination services worldwide, starting 1 January 2018. Background The portfolio was originally developed for UK government, and is valuable; the government periodically requests tenders for private-sector partners to manage it. Historically, this had been APMG. However, in April 2013 Capita won the contract, under a new arrangement which required them to invest in a joint venture. Capita hold a 51% majority stake, the Cabinet Office the remaining 49%. This joint venture, AXELOS, was formed in July 2013, and it took over from APMG on 1 January 2014. References External links OGC (archived) Cabinet Office (United Kingdom) Government procurement in the United Kingdom Information technology management Information technology organisations based in the United Kingdom
AXELOS
Technology
475
1,955,806
https://en.wikipedia.org/wiki/New%20Valley%20Project
The New Valley Project or Toshka Project consists of building a system of canals to carry water from Lake Nasser to irrigate part of the sandy wastes of the Western Desert of Egypt, which is part of the Sahara Desert. History In 1997, the Egyptian government decided to develop a new valley (as opposed to the existing Nile Valley) where agricultural and industrial communities would develop. It has been an ambitious project which was meant to help Egypt cope with its rapidly growing population. Project The canal inlet starts from a site 8 km to the north of Toshka Bay (Khor) on Lake Nasser. The canal is meant to continue westwards until it reaches the Darb el-Arbe'ien route, then northwards along the Darb el- Arbe'ien to the Baris Oasis, covering a distance of 310 km. But as of April 2012, the canal is still 60 km short of the Baris Oasis. The Mubarak Pumping Station in Toshka is the centerpiece of the project and was inaugurated in March 2005. It pumps water from Lake Nasser to be transported by way of a canal through the valley, with the idea of transforming 2340 km2 (588,000 acres) of desert into agricultural land. The Toshka Project has now been revived by President Abdel Fattah el-Sisi. Half of the land will be given to college graduates, 1 acre each, funded by the Long Live Egypt Fund. The essential problem is that the Western Desert's high saline levels and the presence of underground aquifers in the area act as a major obstacle to any irrigation project. As the land is irrigated, the salt would mix with the aquifers and would reduce access to potable water. There is also the difficulty that the clay minerals found in the soil are posing technical problems to the big wheeled structures moving around autonomously to irrigate the land. Often their wheels get stuck in a little bowl created by wet clay that dried, and the irrigation machines come to a standstill. The only objective met up to April 2012 is the diversion of water from Lake Nasser into what little of the Sheikh Zayed Canal has been built. The Toshka Lakes are a by-product of the rising level of Lake Nasser and lie in the same general region as much of the New Valley Project. See also New Valley Governorate Baris Oasis Kharga Oasis Dakhla Oasis Farafra Oasis Bahariya Oasis Siwa Oasis External links South Valley Development Project in Toshka, Egyptian Ministry of Water Resources and Irrigation Egypt's new Nile Valley grand plan gone bad, The National, 22 April 2012 On Toshka New Valley's mega-failure Toshka Project - Mubarak Pumping Station / Sheikh Zayed Canal, Egypt Photographs Gallery New Valley Governorate Geography of Egypt Agriculture in Egypt Irrigation in Egypt Interbasin transfer Western Desert (Egypt)
New Valley Project
Environmental_science
593
2,996,488
https://en.wikipedia.org/wiki/Choked%20flow
Choked flow is a compressible flow effect. The parameter that becomes "choked" or "limited" is the fluid velocity. Choked flow is a fluid dynamic condition associated with the Venturi effect. When a flowing fluid at a given pressure and temperature passes through a constriction (such as the throat of a convergent-divergent nozzle or a valve in a pipe) into a lower pressure environment the fluid velocity increases. At initially subsonic upstream conditions, the conservation of energy principle requires the fluid velocity to increase as it flows through the smaller cross-sectional area of the constriction. At the same time, the venturi effect causes the static pressure, and therefore the density, to decrease at the constriction. Choked flow is a limiting condition where the mass flow cannot increase with a further decrease in the downstream pressure environment for a fixed upstream pressure and temperature. For homogeneous fluids, the physical point at which the choking occurs for adiabatic conditions is when the exit plane velocity is at sonic conditions; i.e., at a Mach number of 1. At choked flow, the mass flow rate can be increased only by increasing the upstream density of the substance. The choked flow of gases is useful in many engineering applications because the mass flow rate is independent of the downstream pressure, and depends only on the temperature and pressure and hence the density of the gas on the upstream side of the restriction. Under choked conditions, valves and calibrated orifice plates can be used to produce a desired mass flow rate. Choked flow in liquids If the fluid is a liquid, a different type of limiting condition (also known as choked flow) occurs when the venturi effect acting on the liquid flow through the restriction causes a decrease of the liquid pressure beyond the restriction to below that of the liquid's vapor pressure at the prevailing liquid temperature. At that point, the liquid partially flashes into bubbles of vapor and the subsequent collapse of the bubbles causes cavitation. Cavitation is quite noisy and can be sufficiently violent to physically damage valves, pipes and associated equipment. In effect, the vapor bubble formation in the restriction prevents the flow from increasing any further. Mass flow rate of a gas at choked conditions All gases flow from higher pressure to lower pressure. Choked flow can occur at the change of the cross section in a de Laval nozzle or through an orifice plate. The choked velocity is observed upstream of an orifice or nozzle. The upstream volumetric flow rate is lower than the downstream condition because of the higher upstream density. The choked velocity is a function of the upstream pressure but not the downstream. Although the velocity is constant, the mass flow rate is dependent on the density of the upstream gas, which is a function of the upstream pressure. Flow velocity reaches the speed of sound in the orifice, and it may be termed a . Choking in change of cross section flow Assuming ideal gas behavior, steady-state choked flow occurs when the downstream pressure falls below a critical value . That critical value can be calculated from the dimensionless critical pressure ratio equation , where is the heat capacity ratio of the gas and where is the total (stagnation) upstream pressure. For air with a heat capacity ratio , then ; other gases have in the range 1.09 (e.g. 
butane) to 1.67 (monatomic gases), so the critical pressure ratio varies in the range 0.487–0.587, which means that, depending on the gas, choked flow usually occurs when the downstream static pressure drops to below 0.487 to 0.587 times the absolute pressure in the stagnant upstream source vessel. When the gas velocity is choked, one can obtain the mass flow rate as a function of the upstream pressure. For isentropic flow Bernoulli's equation should hold: v²/2 + h = constant, where h = c_p·T is the enthalpy of the gas, c_p = γR/(γ − 1) is the molar specific heat at constant pressure, with R being the universal gas constant and T the absolute temperature. If we neglect the initial gas velocity upstream, we can obtain the ultimate gas velocity as follows: v = √(2·c_p·(T0 − T)). In a choked flow this velocity happens to coincide exactly with the sonic velocity at the critical cross-section: v = √(γ·P*/ρ*), where ρ* is the density at the critical cross-section. We can now obtain the critical pressure as P* = P0·(2/(γ + 1))^(γ/(γ − 1)), taking into account that the expansion is isentropic, so that P/ρ^γ remains constant. Now remember that we have neglected the gas velocity upstream; that is, the conditions just upstream of the constriction must be essentially the same as, or close to, the stagnation conditions P0 and T0. Finally we obtain an approximate equation for the mass flow rate. The more precise equation for the choked mass flow rate is ṁ = C·A·√(γ·ρ0·P0·(2/(γ + 1))^((γ + 1)/(γ − 1))), where C is the discharge coefficient and A is the cross-sectional area of the throat. The mass flow rate is primarily dependent on the cross-sectional area A of the nozzle throat and the upstream pressure P0, and only weakly dependent on the temperature T0. The rate does not depend on the downstream pressure at all. All other terms are constants that depend only on the composition of the material in the flow. Although the gas velocity reaches a maximum and becomes choked, the mass flow rate is not choked. The mass flow rate can still be increased if the upstream pressure is increased as this increases the density of the gas entering the orifice. The value of the upstream density ρ0 can be calculated, for an ideal gas, from ρ0 = M·P0/(R·T0), where M is the molecular weight of the gas. The above equations calculate the steady state mass flow rate for the pressure and temperature existing in the upstream pressure source. If the gas is being released from a closed high-pressure vessel, the above steady state equations may be used to approximate the initial mass flow rate. Subsequently, the mass flow rate decreases during the discharge as the source vessel empties and the pressure in the vessel decreases. Calculating the flow rate versus time since the initiation of the discharge is much more complicated, but more accurate. The technical literature can be confusing because many authors fail to explain whether they are using the universal gas law constant R, which applies to any ideal gas, or whether they are using the gas law constant Rs, which only applies to a specific individual gas. The relationship between the two constants is Rs = R / M, where M is the molecular weight of the gas. Real gas effects If the upstream conditions are such that the gas cannot be treated as ideal, there is no closed form equation for evaluating the choked mass flow. Instead, the gas expansion should be calculated by reference to real gas property tables, where the expansion takes place at constant enthalpy. Minimum pressure ratio required for choked flow to occur The minimum pressure ratios required for choked conditions to occur (when some typical industrial gases are flowing) are presented in Table 1. The ratios were obtained using the criterion that choked flow occurs when the ratio of the absolute upstream pressure to the absolute downstream pressure is equal to or greater than ((γ + 1)/2)^(γ/(γ − 1)), where γ is the specific heat ratio of the gas. 
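The relations quoted above can be turned into a quick numerical check. The following is a minimal sketch, not taken from the article, that evaluates the standard ideal-gas critical pressure ratio and choked mass flow formula; the helper functions and the gas properties, orifice size and discharge coefficient used in the example are illustrative assumptions rather than data from the text.

```python
import math

def critical_pressure_ratio(gamma):
    """Dimensionless critical pressure ratio P*/P0 for an ideal gas with
    heat capacity ratio gamma (standard isentropic relation)."""
    return (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))

def choked_mass_flow(C, A, gamma, rho0, P0):
    """Choked (maximum) mass flow rate through a throat of area A [m^2],
    discharge coefficient C, upstream stagnation density rho0 [kg/m^3]
    and pressure P0 [Pa], using the standard ideal-gas choked-flow formula."""
    return C * A * math.sqrt(
        gamma * rho0 * P0 * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (gamma - 1.0))
    )

# Illustrative numbers (assumptions, not from the article): air at 10 bar and
# 293 K flowing through a 5 mm orifice with discharge coefficient 0.72.
gamma, R, M = 1.40, 8.314, 0.02896        # heat capacity ratio, J/(mol K), kg/mol
P0, T0 = 10e5, 293.0                      # upstream stagnation pressure [Pa] and temperature [K]
rho0 = M * P0 / (R * T0)                  # ideal-gas upstream density [kg/m^3]
A = math.pi * (0.005 / 2) ** 2            # throat area [m^2]

print(critical_pressure_ratio(gamma))     # ~0.528 for air, matching the ratio quoted above
print(choked_mass_flow(0.72, A, gamma, rho0, P0))  # choked mass flow rate in kg/s
```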
The minimum pressure ratio may be understood as the ratio between the upstream pressure and the pressure at the nozzle throat when the gas is traveling at Mach 1; if the upstream pressure is too low compared to the downstream pressure, sonic flow cannot occur at the throat. Notes: Pu, absolute upstream gas pressure Pd, absolute downstream gas pressure Venturi nozzles with pressure recovery The flow through a venturi nozzle achieves a much lower nozzle pressure than downstream pressure. Therefore, the pressure ratio is the comparison between the upstream and nozzle pressure. Therefore, flow through a venturi can reach Mach 1 with a much lower upstream to downstream ratio. Thin-plate orifices The flow of real gases through thin-plate orifices never becomes fully choked. The mass flow rate through the orifice continues to increase as the downstream pressure is lowered to a perfect vacuum, though the mass flow rate increases slowly as the downstream pressure is reduced below the critical pressure. Cunningham (1951) first drew attention to the fact that choked flow does not occur across a standard, thin, square-edged orifice. Vacuum conditions In the case of upstream air pressure at atmospheric pressure and vacuum conditions downstream of an orifice, both the air velocity and the mass flow rate become choked or limited when sonic velocity is reached through the orifice. The flow pattern Figure 1a shows the flow through the nozzle when it is completely subsonic (i.e. the nozzle is not choked). The flow in the chamber accelerates as it converges toward the throat, where it reaches its maximum (subsonic) speed at the throat. The flow then decelerates through the diverging section and exhausts into the ambient as a subsonic jet. In this state, lowering the back pressure increases the flow speed everywhere in the nozzle. When the back pressure, pb, is lowered enough, the flow speed is Mach 1 at the throat, as in figure 1b. The flow pattern is exactly the same as in subsonic flow, except that the flow speed at the throat has just reached Mach 1. Flow through the nozzle is now choked since further reductions in the back pressure can't move the point of M=1 away from the throat. However, the flow pattern in the diverging section does change as you lower the back pressure further. As pb is lowered below that needed to just choke the flow, a region of supersonic flow forms just downstream of the throat. Unlike in subsonic flow, the supersonic flow accelerates as it moves away from the throat. This region of supersonic acceleration is terminated by a normal shock wave. The shock wave produces a near-instantaneous deceleration of the flow to subsonic speed. This subsonic flow then decelerates through the remainder of the diverging section and exhausts as a subsonic jet. In this regime if you lower or raise the back pressure you move the shock wave away from (increase the length of supersonic flow in the diverging section before the shock wave) the throat. If the pb is lowered enough, the shock wave sits at the nozzle exit (figure 1d). Due to the long region of acceleration (the entire nozzle length) the flow speed reaches its maximum just before the shock front. However, after the shock the flow in the jet is subsonic. Lowering the back pressure further causes the shock to bend out into the jet (figure 1e), and a complex pattern of shocks and reflections is set up in the jet that create a mixture of subsonic and supersonic flow, or (if the back pressure is low enough) just supersonic flow. 
Because the shock is no longer perpendicular to the flow near the nozzle walls, it deflects the flow inward as it leaves the exit producing an initially contracting jet. This is referred as overexpanded flow because in this case the pressure at the nozzle exit is lower than that in the ambient (the back pressure)- i.e. the flow has been expanded by the nozzle too much. A further lowering of the back pressure changes and weakens the wave pattern in the jet. Eventually the back pressure becomes low enough so that it is now equal to the pressure at the nozzle exit. In this case, the waves in the jet disappear altogether (figure 1f), and the jet becomes uniformly supersonic. This situation, since it is often desirable, is referred to as the 'design condition'. Finally, lowering the back-pressure even further creates a new imbalance between the exit and back pressures (exit pressure greater than back pressure), figure 1g. In this situation (called 'underexpanded') expansion waves (that produce gradual turning perpendicular to the axial flow and acceleration in the jet) form at the nozzle exit, initially turning the flow at the jet edges outward in a plume and setting up a different type of complex wave pattern. See also Accidental release source terms includes mass flow rate equations for non-choked gas flows as well. Orifice plate includes derivation of non-choked gas flow equation. de Laval nozzles are venturi tubes that produce supersonic gas velocities as the tube and the gas are first constricted and then the tube and gas are expanded beyond the choke plane. Rocket engine nozzles discusses how to calculate the exit velocity from nozzles used in rocket engines. Hydraulic jump High pressure jet References External links Choked flow of gases Development of source emission models Restriction orifice sizing control Perform orifice plate, restriction orifice sizing calculation for a single phase flow. Flow regimes Aerodynamics Gas technologies
Choked flow
Chemistry,Engineering
2,490
24,776
https://en.wikipedia.org/wiki/Piston
A piston is a component of reciprocating engines, reciprocating pumps, gas compressors, hydraulic cylinders and pneumatic cylinders, among other similar mechanisms. It is the moving component that is contained by a cylinder and is made gas-tight by piston rings. In an engine, its purpose is to transfer force from expanding gas in the cylinder to the crankshaft via a piston rod and/or connecting rod. In a pump, the function is reversed and force is transferred from the crankshaft to the piston for the purpose of compressing or ejecting the fluid in the cylinder. In some engines, the piston also acts as a valve by covering and uncovering ports in the cylinder. Piston engines Internal combustion engines An internal combustion engine is acted upon by the pressure of the expanding combustion gases in the combustion chamber space at the top of the cylinder. This force then acts downwards through the connecting rod and onto the crankshaft. The connecting rod is attached to the piston by a swivelling gudgeon pin (US: wrist pin). This pin is mounted within the piston: unlike the steam engine, there is no piston rod or crosshead (except big two stroke engines). The typical piston design is on the picture. This type of piston is widely used in car diesel engines. According to purpose, supercharging level and working conditions of engines the shape and proportions can be changed. High-power diesel engines work in difficult conditions. Maximum pressure in the combustion chamber can reach 20 MPa and the maximum temperature of some piston surfaces can exceed 450 °C. It is possible to improve piston cooling by creating a special cooling cavity. Injector supplies this cooling cavity «A» with oil through oil supply channel «B». For better temperature reduction construction should be carefully calculated and analysed. Oil flow in the cooling cavity should be not less than 80% of the oil flow through the injector. The pin itself is of hardened steel and is fixed in the piston, but free to move in the connecting rod. A few designs use a 'fully floating' design that is loose in both components. All pins must be prevented from moving sideways and the ends of the pin digging into the cylinder wall, usually by circlips. Gas sealing is achieved by the use of piston rings. These are a number of narrow iron rings, fitted loosely into grooves in the piston, just below the crown. The rings are split at a point in the rim, allowing them to press against the cylinder with a light spring pressure. Two types of ring are used: the upper rings have solid faces and provide gas sealing; lower rings have narrow edges and a U-shaped profile, to act as oil scrapers. There are many proprietary and detail design features associated with piston rings. Pistons are usually cast or forged from aluminium alloys. For better strength and fatigue life, some racing pistons may be forged instead. Billet pistons are also used in racing engines because they do not rely on the size and architecture of available forgings, allowing for last-minute design changes. Although not commonly visible to the naked eye, pistons themselves are designed with a certain level of ovality and profile taper, meaning they are not perfectly round, and their diameter is larger near the bottom of the skirt than at the crown. Early pistons were of cast iron, but there were obvious benefits for engine balancing if a lighter alloy could be used. 
To produce pistons that could survive engine combustion temperatures, it was necessary to develop new alloys such as Y alloy and Hiduminium, specifically for use as pistons. A few early gas engines had double-acting cylinders, but otherwise effectively all internal combustion engine pistons are single-acting. During World War II, the US submarine Pompano was fitted with a prototype of the infamously unreliable H.O.R. double-acting two-stroke diesel engine. Although compact, for use in a cramped submarine, this design of engine was not repeated. Trunk pistons Trunk pistons are long relative to their diameter. They act both as a piston and cylindrical crosshead. As the connecting rod is angled for much of its rotation, there is also a side force that reacts along the side of the piston against the cylinder wall. A longer piston helps to support this. Trunk pistons have been a common design of piston since the early days of the reciprocating internal combustion engine. They were used for both petrol and diesel engines, although high speed engines have now adopted the lighter weight slipper piston. A characteristic of most trunk pistons, particularly for diesel engines, is that they have a groove for an oil ring below the gudgeon pin, in addition to the rings between the gudgeon pin and crown. The name 'trunk piston' derives from the 'trunk engine', an early design of marine steam engine. To make these more compact, they avoided the steam engine's usual piston rod with separate crosshead and were instead the first engine design to place the gudgeon pin directly within the piston. Otherwise these trunk engine pistons bore little resemblance to the trunk piston; they were extremely large diameter and double-acting. Their 'trunk' was a narrow cylinder mounted in the centre of the piston. Crosshead pistons Large slow-speed Diesel engines may require additional support for the side forces on the piston. These engines typically use crosshead pistons. The main piston has a large piston rod extending downwards from the piston to what is effectively a second smaller-diameter piston. The main piston is responsible for gas sealing and carries the piston rings. The smaller piston is purely a mechanical guide. It runs within a small cylinder as a trunk guide and also carries the gudgeon pin. Lubrication of the crosshead has advantages over the trunk piston as its lubricating oil is not subject to the heat of combustion: the oil is not contaminated by combustion soot particles, it does not break down owing to the heat and a thinner, less viscous oil may be used. The friction of both piston and crosshead may be only half of that for a trunk piston. Because of the additional weight of these pistons, they are not used for high-speed engines. Slipper pistons A slipper piston is a piston for a petrol engine that has been reduced in size and weight as much as possible. In the extreme case, they are reduced to the piston crown, support for the piston rings, and just enough of the piston skirt remaining to leave two lands so as to stop the piston rocking in the bore. The sides of the piston skirt around the gudgeon pin are reduced away from the cylinder wall. The purpose is mostly to reduce the reciprocating mass, thus making it easier to balance the engine and so permit high speeds. In racing applications, slipper piston skirts can be configured to yield extremely light weight while maintaining the rigidity and strength of a full skirt. 
Reduced inertia also improves mechanical efficiency of the engine: the forces required to accelerate and decelerate the reciprocating parts cause more piston friction with the cylinder wall than the fluid pressure on the piston head. A secondary benefit may be some reduction in friction with the cylinder wall, since the area of the skirt, which slides up and down in the cylinder is reduced by half. However, most friction is due to the piston rings, which are the parts which actually fit the tightest in the bore and the bearing surfaces of the wrist pin, and thus the benefit is reduced. Deflector pistons Deflector pistons are used in two-stroke engines with crankcase compression, where the gas flow within the cylinder must be carefully directed in order to provide efficient scavenging. With cross scavenging, the transfer (inlet to the cylinder) and exhaust ports are on directly facing sides of the cylinder wall. To prevent the incoming mixture passing straight across from one port to the other, the piston has a raised rib on its crown. This is intended to deflect the incoming mixture upwards, around the combustion chamber. Much effort, and many different designs of piston crown, went into developing improved scavenging. The crowns developed from a simple rib to a large asymmetric bulge, usually with a steep face on the inlet side and a gentle curve on the exhaust. Despite this, cross scavenging was never as effective as hoped. Most engines today use Schnuerle porting instead. This places a pair of transfer ports in the sides of the cylinder and encourages gas flow to rotate around a vertical axis, rather than a horizontal axis. Racing pistons In racing engines, piston strength and stiffness is typically much higher than that of a passenger car engine, while the weight is much less, to achieve the high engine RPM necessary in racing. Hydraulic cylinders Hydraulic cylinders can be both single-acting or double-acting. A hydraulic actuator controls the movement of the piston back and/or forth. Guide rings guides the piston and rod and absorb the radial forces that act perpendicularly to the cylinder and prevent contact between sliding the metal parts. Steam engines Steam engines are usually double-acting (i.e. steam pressure acts alternately on each side of the piston) and the admission and release of steam is controlled by slide valves, piston valves or poppet valves. Consequently, steam engine pistons are nearly always comparatively thin discs: their diameter is several times their thickness. (One exception is the trunk engine piston, shaped more like those in a modern internal-combustion engine.) Another factor is that since almost all steam engines use crossheads to translate the force to the drive rod, there are few lateral forces acting to try and "rock" the piston, so a cylinder-shaped piston skirt isn't necessary. Pumps Piston pumps can be used to move liquids or compress gases. For liquids For gases Air cannons There are two special type of pistons used in air cannons: close tolerance pistons and double pistons. In close tolerance pistons O-rings serve as a valve, but O-rings are not used in double piston types. 
See also Air gun Fire piston Fruit press Gas-operated reloading, using a gas piston Hydraulic cylinder List of auto parts Piston motion equations Shock absorber Slide whistle Steam locomotive components Syringe Wankel engine, an internal combustion engine design with a rotor instead of pistons Notes References Bibliography External links Piston Engines Essay How Stuff Works – Basic Engine Parts Piston Motion Equations Engine technology
Piston
Technology
2,098
57,773,393
https://en.wikipedia.org/wiki/Magic%20string%20%28therapeutic%20aid%29
Magic string is a psychological therapeutic aid used to make radiotherapy treatment for children less stressful. Without its use, many children need to have general anaesthetic in order to receive their treatment. Background Patients receiving radiotherapy have to be alone inside a lead-lined room, since only they can be exposed to the radiation, and also have to stay still during the treatment, with the necessary immobility being achieved through the use of a radiotherapy mask that covers the face and shoulders and is fastened to the treatment bed. Adult patients often find this claustrophobic and it can be particularly distressing for young children. Consequently, many young patients have required sedation with a general anaesthetic in order to meet the requirements of their radiotherapy. Implementation The use of magic string, simply a multi-coloured ball of twine, was learned about in 2007 by Lobke Marsden, a play specialist at the Bexley Wing oncology unit of St James's University Hospital in Leeds, as a low-cost solution to the problem of children's difficulties with radiotherapy. One end of the string is held by the patient and the other end by the parent. In 2017, Marsden told Ellen Wallwork of The Huffington Post, "String is perfect for children that really need that connection with their parents. They often give it a little tug, and the parents tug it back from the other room to let the child know they are right there with them", adding that, "It has proven to be the cheapest and one of the best pieces of 'equipment' we own". Writing in The Guardian in June 2018, Rachel Clarke said, "Cheap as chips and priceless, magic string was created not for profit or personal gain – but simply because someone cared". References Play (activity)
Magic string (therapeutic aid)
Biology
365
58,432,047
https://en.wikipedia.org/wiki/Particle%20chauvinism
Particle chauvinism is the term used by British astrophysicist Martin Rees to describe the (allegedly erroneous) assumption that what we think of as normal matter – atoms, quarks, electrons, etc. (excluding dark matter or other matter) – is the basis of matter in the universe, rather than a rare phenomenon. Dominance of dark matter With the growing recognition in the late 20th century of the presence of dark matter in the universe, ordinary baryonic matter has come to be seen as something of a cosmic afterthought. As J.D. Barrow put it: "This would be the final Copernican twist in our status in the material universe. Not only are we not at the center of the universe: We are not even made of the predominant form of matter." The 21st century saw the share of baryonic matter in the total mass-energy of the universe downgraded further, to perhaps as low as 1%, further extending what has been called the demise of particle-chauvinism, before being revised up to some 5% of the contents of the universe. See also Anthropic principle Carbon chauvinism Mediocrity principle References External links Astronomical hypotheses Chauvinism Exceptionalism Dark matter
Particle chauvinism
Physics,Astronomy
265
49,265,835
https://en.wikipedia.org/wiki/Gas%20vesicle
Gas vesicles, also known as gas vacuoles, are nanocompartments in certain prokaryotic organisms, which help in buoyancy. Gas vesicles are composed entirely of protein; no lipids or carbohydrates have been detected. Function Gas vesicles occur primarily in aquatic organisms as they are used to modulate the cell's buoyancy and modify the cell's position in the water column so it can be optimally located for photosynthesis or move to locations with more or less oxygen. Organisms that can float to the air–liquid interface outcompete other aerobes that cannot rise in the water column, by using up the oxygen in the top layer. In addition, gas vesicles can be used to maintain optimum salinity by positioning the organism in specific locations in a stratified body of water to prevent osmotic shock. High concentrations of solute will cause water to be drawn out of the cell by osmosis, causing cell lysis. The ability to synthesize gas vesicles is one of many strategies that allow halophilic organisms to tolerate environments with high salt content. Evolution Gas vesicles are likely one of the earliest mechanisms of motility among microscopic organisms, as they are the most widespread form of motility conserved within the genomes of prokaryotes, some of which evolved about 3 billion years ago. Modes of active motility such as flagellar movement require a mechanism that can convert chemical energy into mechanical energy, and are thus much more complex and would have evolved later. Functions of the gas vesicles are also largely conserved among species, although the mode of regulation might differ, suggesting the importance of gas vesicles as a form of motility. In certain organisms, such as the enterobacterium Serratia sp., flagella-based motility and gas vesicle production are regulated oppositely by a single RNA-binding protein, RsmA, suggesting alternate modes of environmental adaptation which would have developed into different taxa through regulation of the development between motility and flotation. Although there is evidence suggesting the early evolution of gas vesicles, plasmid transfer serves as an alternate explanation of the widespread and conserved nature of the organelle. Cleavage of a plasmid in Halobacterium halobium resulted in the loss of the ability to biosynthesize gas vesicles, indicating the possibility of horizontal gene transfer, which could result in a transfer of the ability to produce gas vesicles among different strains of bacteria. Structure Gas vesicles are generally lemon-shaped or cylindrical, hollow tubes of protein with conical caps on both ends. The vesicles vary most in their diameter. Larger vesicles can hold more air and use less protein, making them the most economical in terms of resource use; however, the larger a vesicle is, the structurally weaker it is under pressure and the less pressure is required before the vesicle collapses. Organisms have evolved to be the most efficient with protein use and use the largest maximum vesicle diameter that will withstand the pressure the organism could be exposed to. In order for natural selection to have affected gas vesicles, the vesicles' diameter must be controlled by genetics. Although genes encoding gas vesicles are found in many species of haloarchaea, only a few species produce them. The first haloarchaeal gas vesicle gene, GvpA, was cloned from Halobacterium sp. NRC-1. Fourteen genes are involved in forming gas vesicles in haloarchaea. The first gas vesicle gene, GvpA, was identified in Calothrix. 
There are at least two proteins that compose a cyanobacterium's gas vesicle: GvpA and GvpC. GvpA forms ribs and much of the mass (up to 90%) of the main structure. GvpA is strongly hydrophobic and may be one of the most hydrophobic proteins known. GvpC is hydrophilic and helps to stabilize the structure by periodic inclusions into the GvpA ribs. GvpC can be washed out of the vesicle, with a consequent decrease in the vesicle's strength. The thickness of the vesicle's wall may range from 1.8 to 2.8 nm. The ribbed structure of the vesicle is evident on both inner and outer surfaces, with a spacing of 4–5 nm between ribs. Vesicles may be 100–1400 nm long and 45–120 nm in diameter. Within a species, gas vesicle sizes are relatively uniform, with a standard deviation of ±4%. Growth It appears that gas vesicles begin their existence as small biconical (two cones with the flat bases joined) structures which enlarge to their specific diameter and then grow and expand in length. It is unknown exactly what controls the diameter, but it may be a molecule that interferes with GvpA, or the shape of GvpA may change. Regulation Formation of gas vesicles is regulated by two Gvp proteins: GvpD, which represses the expression of GvpA and GvpC proteins, and GvpE, which induces expression. Extracellular environmental factors also affect vesicle formation, either by regulating Gvp protein production or by directly disturbing the vesicle structure. Light intensity Light intensity has been found to affect gas vesicle production and maintenance differently in different bacteria and archaea. For Anabaena flos-aquae, higher light intensities lead to vesicle collapse from an increase in turgor pressure and greater accumulation of photosynthetic products. In cyanobacteria, vesicle production decreases at high light intensity due to exposure of the bacterial surface to UV radiation, which can damage the bacterial genome. Carbohydrates Accumulation of glucose, maltose, or sucrose in Haloferax mediterranei and Haloferax volcanii was found to inhibit the expression of GvpA proteins and, therefore, to decrease gas vesicle production. However, this only occurred during the cell's early exponential growth phase. Vesicle formation could also be induced by decreasing extracellular glucose concentrations. Oxygen A lack of oxygen was found to negatively affect gas vesicle formation in halophilic archaea. Halobacterium salinarum produces few or no vesicles under anaerobic conditions due to reduced synthesis of mRNA transcripts encoding Gvp proteins. H. mediterranei and H. volcanii do not produce any vesicles under anoxic conditions due to a decrease in synthesized transcripts encoding GvpA and truncated transcripts expressing GvpD. pH Increased extracellular pH levels have been found to increase vesicle formation in Microcystis species. Under increased pH, levels of gvpA and gvpC transcripts increase, allowing more exposure to ribosomes for expression and leading to upregulation of Gvp proteins. This may be attributed to greater transcription of these genes, decreased decay of the synthesized transcripts, or higher stability of the mRNA. Ultrasonic irradiation Ultrasonic irradiation, at certain frequencies, was found to collapse gas vesicles in the cyanobacterium Spirulina platensis, preventing it from blooming. Quorum sensing In the enterobacterium Serratia sp. strain ATCC39006, gas vesicles are produced only when there is a sufficient concentration of a signalling molecule, N-acyl homoserine lactone. 
In this case, the quorum sensing molecule, N-acyl homoserine lactone acts as a morphogen initiating organelle development. This is advantageous to the organism as resources for gas vesicle production are utilized only when there is oxygen limitation caused by an increase in bacterial population. Role in vaccine development Gas vesicle gene gvpC from Halobacterium sp. is used as delivery system for vaccine studies. Several characteristics of the protein encoded by the gas vesicle gene gvpC allow it to be used as carrier and adjuvant for antigens: it is stable, resistant to biological degradation, tolerates relatively high temperatures (up to 50 °C), and non-pathogenic to humans. Several antigens from various human pathogens have been recombined into the gvpC gene to create subunit vaccines with long-lasting immunologic responses. Different genomic segments encoding for several Chlamydia trachomatis pathogen's proteins, including MOMP, OmcB, and PompD, are joined to the gvpC gene of Halobacteria. In vitro assessments of cells show expression of the Chlamydia genes on cell surfaces through imaging techniques and show characteristic immunologic responses such as TLRs activities and pro-inflammatory cytokines production. Gas vesicle gene can be exploited as a delivery vehicle to generate a potential vaccine for Chlamydia. Limitations of this method include the need to minimize the damage of the GvpC protein itself while including as much of the vaccine target gene into the gvpC gene segment. A similar experiment uses the same gas vesicle gene and Salmonella enterica pathogen's secreted inosine phosphate effector protein SopB4 and SopB5 to generate a potential vaccine vector. Immunized mice secrete pro-inflammatory cytokines IFN-γ, IL-2, and IL-9. Antibody IgG is also detected. After an infection challenge, none or significantly less amount of bacteria were found in the harvested organs such as the spleen and the liver. Potential vaccines using gas vesicle as an antigen display can be given via the mucosal route as an alternative administration pathway, increasing its accessibility to more people and eliciting a wider range of immune responses within the body. Role as contrast agents and reporter genes Gas vesicles have several physical properties that make them visible on various medical imaging modalities. The ability of gas vesicle to scatter light has been used for decades for estimating their concentration and measuring their collapse pressure . The optical contrast of gas vesicles also enables them to serve as contrast agents in optical coherence tomography, with applications in ophthalmology. The difference in acoustic impedance between the gas in their cores and the surrounding fluid gives gas vesicles robust acoustic contrast. Moreover, the ability of some gas vesicle shells to buckle generates harmonic ultrasound echoes that improves the contrast to tissue ratio. Finally, gas vesicles can be used as contrast agents for magnetic resonance imaging (MRI), relying on the difference between the magnetic susceptibility of air and water. The ability to non-invasively collapse gas vesicles using pressure waves provides a mechanism for erasing their signal and improving their contrast. Subtracting the images before and after acoustic collapse can eliminate background signals enhancing the detection of gas vesicles. Heterologous expression of gas vesicles in bacterial and mammalian cells enabled their use as the first family of acoustic reporter genes. 
While fluorescent reporter genes like green fluorescent protein (GFP) have widespread use in biology, their in vivo applications are limited by the penetration depth of light in tissue, typically a few mm. Luminescence can be detected deeper within the tissue, but has low spatial resolution. Acoustic reporter genes provide sub-millimeter spatial resolution and a penetration depth of several centimeters, enabling the in vivo study of biological processes deep within the tissue. References Bacteria Prokaryotic cell anatomy Vesicles
Gas vesicle
Biology
2,409
291,472
https://en.wikipedia.org/wiki/Lattice%20model%20%28physics%29
In mathematical physics, a lattice model is a mathematical model of a physical system that is defined on a lattice, as opposed to a continuum, such as the continuum of space or spacetime. Lattice models originally occurred in the context of condensed matter physics, where the atoms of a crystal automatically form a lattice. Currently, lattice models are quite popular in theoretical physics, for many reasons. Some models are exactly solvable, and thus offer insight into physics beyond what can be learned from perturbation theory. Lattice models are also ideal for study by the methods of computational physics, as the discretization of any continuum model automatically turns it into a lattice model. The exact solution to many of these models (when they are solvable) includes the presence of solitons. Techniques for solving these include the inverse scattering transform and the method of Lax pairs, the Yang–Baxter equation and quantum groups. The solution of these models has given insights into the nature of phase transitions, magnetization and scaling behaviour, as well as insights into the nature of quantum field theory. Physical lattice models frequently occur as an approximation to a continuum theory, either to give an ultraviolet cutoff to the theory to prevent divergences or to perform numerical computations. An example of a continuum theory that is widely studied through a lattice model is quantum chromodynamics, whose discretization is lattice QCD. However, digital physics considers nature fundamentally discrete at the Planck scale, which imposes an upper limit on the density of information, as expressed by the holographic principle. More generally, lattice gauge theory and lattice field theory are areas of study. Lattice models are also used to simulate the structure and dynamics of polymers. Mathematical description A number of lattice models can be described by the following data: A lattice Λ, often taken to be a lattice in d-dimensional Euclidean space or the d-dimensional torus if the lattice is periodic. Concretely, Λ is often the cubic lattice Z^d. If two points on the lattice are considered 'nearest neighbours', then they can be connected by an edge, turning the lattice into a lattice graph. The vertices of Λ are sometimes referred to as sites. A spin-variable space S. The configuration space of possible system states is then the space of functions σ : Λ → S. For some models, we might instead consider the space of functions σ : E → S, where E is the edge set of the graph defined above. An energy functional, assigning a real number to each configuration, which might depend on a set of additional parameters or 'coupling constants'. Examples The Ising model is given by the usual cubic lattice graph G = (Λ, E), where Λ is an infinite cubic lattice in R^d or a periodic cubic lattice in the d-dimensional torus, and E is the edge set of nearest neighbours (the same letter is used for the energy functional, but the different usages are distinguishable based on context). The spin-variable space is S = {+1, −1}. The energy functional is E(σ) = −Σ_{⟨i,j⟩} σ(i)σ(j) − h Σ_i σ(i), where the first sum runs over pairs of nearest neighbours ⟨i,j⟩ and h is an external magnetic field. The spin-variable space can often be described as a coset. For example, for the Potts model we have S = Z_q. In the limit q → ∞, we obtain the XY model, which has S = SO(2). Generalising the XY model to higher dimensions gives the n-vector model, which has S = S^(n−1), the (n − 1)-dimensional sphere. Solvable models We specialise to a lattice with a finite number of points, and a finite spin-variable space. This can be achieved by making the lattice periodic, with period N in d dimensions. Then the configuration space is also finite. 
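As a concrete illustration of this finiteness, the following minimal sketch (not part of the original article; the chain length and other values are illustrative assumptions) enumerates every configuration of a short periodic one-dimensional Ising chain and evaluates the energy functional given above for each one. The partition function introduced next is then simply a finite sum over these enumerated configurations.

```python
from itertools import product

def ising_energy(spins, h=0.0):
    """Energy of a periodic 1D Ising chain: E = -sum_i s_i * s_{i+1} - h * sum_i s_i."""
    N = len(spins)
    nn = sum(spins[i] * spins[(i + 1) % N] for i in range(N))  # nearest-neighbour term, periodic boundary
    return -nn - h * sum(spins)

N = 4  # illustrative chain length; the configuration space has 2**N = 16 states
configs = list(product([+1, -1], repeat=N))       # enumerate the whole (finite) configuration space
energies = [ising_energy(c) for c in configs]

print(len(configs))                  # 16 -- the configuration space is finite
print(min(energies), max(energies))  # -4 and +4 for N = 4 with h = 0
```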
We can define the partition function and there are no issues of convergence (like those which emerge in field theory) since the sum is finite. In theory, this sum can be computed to obtain an expression which is dependent only on the parameters and . In practice, this is often difficult due to non-linear interactions between sites. Models with a closed-form expression for the partition function are known as exactly solvable. Examples of exactly solvable models are the periodic 1D Ising model, and the periodic 2D Ising model with vanishing external magnetic field, but for dimension , the Ising model remains unsolved. Mean field theory Due to the difficulty of deriving exact solutions, in order to obtain analytic results we often must resort to mean field theory. This mean field may be spatially varying, or global. Global mean field The configuration space of functions is replaced by the convex hull of the spin space , when has a realisation in terms of a subset of . We'll denote this by . This arises as in going to the mean value of the field, we have . As the number of lattice sites , the possible values of fill out the convex hull of . By making a suitable approximation, the energy functional becomes a function of the mean field, that is, The partition function then becomes As , that is, in the thermodynamic limit, the saddle point approximation tells us the integral is asymptotically dominated by the value at which is minimised: where is the argument minimising . A simpler, but less mathematically rigorous approach which nevertheless sometimes gives correct results comes from linearising the theory about the mean field . Writing configurations as , truncating terms of then summing over configurations allows computation of the partition function. Such an approach to the periodic Ising model in dimensions provides insight into phase transitions. Spatially varying mean field Suppose the continuum limit of the lattice is . Instead of averaging over all of , we average over neighbourhoods of . This gives a spatially varying mean field . We relabel with to bring the notation closer to field theory. This allows the partition function to be written as a path integral where the free energy is a Wick rotated version of the action in quantum field theory. Examples Condensed matter physics Ising model ANNNI model Potts model Chiral Potts model XY model Classical Heisenberg model n-vector model Vertex model Toda lattice cellular automata Polymer physics Bond fluctuation model 2nd model High energy physics QCD lattice model See also Crystal structure Scaling limit QCD matter Lattice gas References
Lattice model (physics)
Physics,Materials_science
1,183
2,457,060
https://en.wikipedia.org/wiki/De%20sphaera%20mundi
De sphaera mundi (Latin title meaning On the Sphere of the World, sometimes rendered The Sphere of the Cosmos; the Latin title is also given as Tractatus de sphaera, Textus de sphaera, or simply De sphaera) is a medieval introduction to the basic elements of astronomy written by Johannes de Sacrobosco (John of Holywood) c. 1230. Based heavily on Ptolemy's Almagest, and drawing additional ideas from Islamic astronomy, it was one of the most influential works of pre-Copernican astronomy in Europe. Reception Sacrobosco's De sphaera mundi was the most successful of several competing thirteenth-century textbooks on this topic. It was used in universities for hundreds of years and the manuscript copied many times before the invention of the printing press; hundreds of manuscript copies have survived. The first printed edition appeared in 1472 in Ferrara, and at least 84 editions were printed in the next two hundred years. The work was frequently supplemented with commentaries on the original text. The number of copies and commentaries reflects its importance as a university text. Content The 'sphere of the world' is not the earth but the heavens, and Sacrobosco quotes Theodosius saying it is a solid body. It is divided into nine parts: the "first moved" (primum mobile), the sphere of the fixed stars (the firmament), and the seven planets, Saturn, Jupiter, Mars, the sun, Venus, Mercury and the moon. There is a 'right' sphere and an oblique sphere: the right sphere is only observed by those at the equator (if there are such people), everyone else sees the oblique sphere. There are two movements: one of the heavens from east to west on its axis through the Arctic and Antarctic poles, the other of the inferior spheres at 23° in the opposite direction on their own axes. The world, or universe, is divided into two parts: the elementary and the ethereal. The elementary consists of four parts: the earth, about which is water, then air, then fire, reaching up to the moon. Above this is the ethereal which is immutable and called the 'fifth essence' by the philosophers. All are mobile except heavy earth which is the center of the world. The universe as a machine Sacrobosco spoke of the universe as the machina mundi, the machine of the world, suggesting that the reported eclipse of the Sun at the crucifixion of Jesus was a disturbance of the order of that machine. This concept is similar to the clockwork universe analogy that became very popular centuries later, during the Enlightenment. Spherical Earth Though principally about the universe, De sphaera 1230 A.D. contains a clear description of the Earth as a sphere which agrees with widespread opinion in Europe during the higher Middle Ages, in contrast to statements of some 19th- and 20th-century historians that medieval scholars thought the Earth was flat. As evidence for the Earth being a sphere, in Chapter One he cites the observation that stars rise and set sooner for those in the east ("Orientals"), and lunar eclipses happen earlier; that stars near the North Pole are visible to those further north and those in the south can see different ones; that at sea one can see further by climbing up the mast; and that water seeks its natural shape which is round, as a drop. 
See also Armillary sphere Orrery References Sources External links Summary of the contents of each chapter (Adam Mosley, Department of History and Philosophy of Science, University of Cambridge (1999)) Sacrobosco's De Sphaera – complete treatise in English translation Book, The Sphere of Sacrobosco and its Commentators, by Lynn Thorndike, year 1949. Text in Latin, English translation, and commentary. Selected images from Sphaera mundi From The College of Physicians of Philadelphia Digital Library Digitised 1564 copy of Sphaera mundi from The University of Sydney Library 1230s books Astronomy books Astrological texts 13th-century books in Latin Treatises pt:Johannes de Sacrobosco
De sphaera mundi
Astronomy
847
231,079
https://en.wikipedia.org/wiki/Demographic%20transition
In demography, demographic transition is a phenomenon and theory in the social sciences referring to the historical shift from high birth rates and high death rates to low birth rates and low death rates as societies attain more technology, education (especially of women), and economic development. The demographic transition has occurred in most of the world over the past two centuries, bringing the unprecedented population growth of the post-Malthusian period, then reducing birth rates and population growth significantly in all regions of the world. The demographic transition strengthens economic growth process through three changes: a reduced dilution of capital and land stock, an increased investment in human capital, and an increased size of the labour force relative to the total population and changed age population distribution. Although this shift has occurred in many industrialized countries, the theory and model are frequently imprecise when applied to individual countries due to specific social, political, and economic factors affecting particular populations. However, the existence of some kind of demographic transition is widely accepted because of the well-established historical correlation linking dropping fertility to social and economic development. Scholars debate whether industrialization and higher incomes lead to lower population or whether lower populations lead to industrialization and higher incomes. Scholars also debate to what extent various proposed and sometimes interrelated factors such as higher per capita income, lower mortality, old-age security, and rise of demand for human capital are involved. Human capital gradually increased in the second stage of the industrial revolution, which coincided with the demographic transition. The increasing role of human capital in the production process led to the investment of human capital in children by families, which may be the beginning of the demographic transition. History The theory is based on an interpretation of demographic history developed in 1930 by the American demographer Warren Thompson (1887–1973). Adolphe Landry of France made similar observations on demographic patterns and population growth potential around 1934. In the 1940s and 1950s Frank W. Notestein developed a more formal theory of demographic transition. In the 2000s Oded Galor researched the "various mechanisms that have been proposed as possible triggers for the demographic transition, assessing their empirical validity, and their potential role in the transition from stagnation to growth." In 2011, the unified growth theory was completed, the demographic transition becomes an important part in unified growth theory. By 2009, the existence of a negative correlation between fertility and industrial development had become one of the most widely accepted findings in social science. The Jews of Bohemia and Moravia were among the first populations to experience a demographic transition, in the 18th century, prior to changes in mortality or fertility in other European Jews or in Christians living in the Czech lands. John Caldwell (demographer) explained fertility rates in the third world are not dependent on the spread of industrialization or even on economic development and also illustrates fertility decline is more likely to precede industrialization and to help bring it about than to follow it. Summary The transition involves four stages, or possibly five. 
In stage one, pre-industrial society, death rates and birth rates are high and roughly in balance. All human populations are believed to have had this balance until the late 18th century when this balance ended in Western Europe. In fact, growth rates were less than 0.05% at least since the Agricultural Revolution over 10,000 years ago. Population growth is typically very slow in this stage because the society is constrained by the available food supply; therefore, unless the society develops new technologies to increase food production (e.g. discovers new sources of food or achieves higher crop yields), any fluctuations in birth rates are soon matched by death rates. In stage two, that of a developing country, the death rates drop quickly due to improvements in food supply and sanitation, which increase life expectancy and reduce disease. The improvements specific to food supply typically include selective breeding and crop rotation and farming techniques. Numerous improvements in public health reduce mortality, especially childhood mortality. Prior to the mid-20th century, these improvements in public health were primarily in the areas of food handling, water supply, sewage, and personal hygiene. One of the variables often cited is the increase in female literacy combined with public health education programs which emerged in the late 19th and early 20th centuries. In Europe, the death rate decline started in the late 18th century in northwestern Europe and spread to the south and east over approximately the next 100 years. Without a corresponding fall in birth rates this produces an imbalance, and the countries in this stage experience a large increase in population. In stage three, birth rates fall due to various fertility factors such as access to contraception, increases in wages, urbanization, a reduction in subsistence agriculture, an increase in the status and education of women, a reduction in the value of children's work, an increase in parental investment in the education of children, and other social changes. Population growth begins to level off. The birth rate decline in developed countries started in the late 19th century in northern Europe. While improvements in contraception do play a role in birth rate decline, contraceptives were not generally available nor widely used in the 19th century and as a result likely did not play a significant role in the decline then. It is important to note that birth rate decline is caused also by a transition in values, not just because of the availability of contraceptives. In stage four, there are low birth rates and low death rates. Birth rates may drop to well below replacement level, as has happened in countries like Germany, Italy, and Japan, leading to a shrinking population, a threat to many industries that rely on population growth. As the large group born during stage two ages, it creates an economic burden on the shrinking working population. Death rates may remain consistently low or increase slightly due to increases in lifestyle diseases due to low exercise levels and high obesity rates and an aging population in developed countries. By the late 20th century, birth rates and death rates in developed countries leveled off at lower rates. Some scholars break out, from stage four, a "stage five" of below-replacement fertility levels. Others hypothesize a different "stage five" involving an increase in fertility. As with all models, this is an idealized picture of population change in these countries. 
The model is a generalization that applies to these countries as a group and may not accurately describe all individual cases. The extent to which it applies to less-developed societies today remains to be seen. Many countries such as China, Brazil and Thailand have passed through the Demographic Transition Model (DTM) very quickly due to fast social and economic change. Some countries, particularly African countries, appear to be stalled in the second stage due to stagnant development and the effects of under-invested and under-researched tropical diseases such as malaria and AIDS to a limited extent. Stages Stage one In pre-industrial society, death rates and birth rates were both high, fluctuating rapidly according to natural events, such as drought and disease, to produce a relatively constant and young population. Family planning and contraception were virtually nonexistent; therefore, birth rates were essentially only limited by the ability of women to bear children. Emigration depressed death rates in some special cases (for example, Europe and particularly the Eastern United States during the 19th century), but, overall, death rates tended to match birth rates, often exceeding 40 per 1000 per year. Children contributed to the economy of the household from an early age by carrying water, firewood, and messages, caring for younger siblings, sweeping, washing dishes, preparing food, and working in the fields. Raising a child cost little more than feeding him or her; there were no education or entertainment expenses. Thus, the total cost of raising children barely exceeded their contribution to the household. In addition, as they became adults they became a major input to the family business, mainly farming, and were the primary form of insurance for adults in old age. In India, an adult son was all that prevented a widow from falling into destitution. While death rates remained high there was no question as to the need for children, even if the means to prevent them had existed. During this stage, the society evolves in accordance with Malthusian paradigm, with population essentially determined by the food supply. Any fluctuations in food supply (either positive, for example, due to technology improvements, or negative, due to droughts and pest invasions) tend to translate directly into population fluctuations. Famines resulting in significant mortality are frequent. Overall, population dynamics during stage one are comparable to those of animals living in the wild. This is the earlier stage of demographic transition in the world and also characterized by primary activities such as small fishing activities, farming practices, pastoralism, and petty businesses. Stage two This stage leads to a fall in death rates and an increase in population. The changes leading to this stage in Europe were initiated in the Agricultural Revolution of the eighteenth century and were initially quite slow. In the twentieth century, the falls in death rates in developing countries tended to be substantially faster. Countries in this stage include Yemen, Afghanistan, and Iraq and much of Sub-Saharan Africa (but this does not include South Africa, Botswana, Eswatini, Lesotho, Namibia, Gabon and Ghana, which have begun to move into stage 3). The decline in the death rate is due initially to two factors: First, improvements in the food supply brought about by higher yields in agricultural practices and better transportation reduce death due to starvation and lack of water. 
Agricultural improvements included crop rotation, selective breeding, and seed drill technology. Second, significant improvements in public health reduce mortality, particularly in childhood. These are not so much medical breakthroughs (Europe passed through stage two before the advances of the mid-twentieth century, although there was significant medical progress in the nineteenth century, such as the development of vaccination) as they are improvements in water supply, sewerage, food handling, and general personal hygiene following from growing scientific knowledge of the causes of disease and the improved education and social status of mothers. A consequence of the decline in mortality in Stage Two is increasingly rapid population growth (a.k.a. "population explosion") as the gap between deaths and births grows wider and wider. Note that this growth is not due to an increase in fertility (or birth rates) but to a decline in deaths. This change in population occurred in north-western Europe during the nineteenth century due to the Industrial Revolution. During the second half of the twentieth century less-developed countries entered Stage Two, creating the rapid worldwide growth in the number of living people that has demographers concerned today. In this stage of the demographic transition, countries are vulnerable to becoming failed states in the absence of progressive governments. Another characteristic of Stage Two of the demographic transition is a change in the age structure of the population. In Stage One, the majority of deaths are concentrated in the first 5–10 years of life. Therefore, the decline in death rates in Stage Two entails the increasing survival of children and a growing population. Hence, the age structure of the population becomes increasingly youthful; more of these children enter the reproductive cycle of their lives while maintaining the high fertility rates of their parents and starting to have big families of their own. The bottom of the "age pyramid", where infants, children and teenagers are found, widens first, accelerating the population growth rate. The age structure of such a population is illustrated by using an example from the Third World today. Stage three In Stage 3 of the Demographic Transition Model (DTM), death rates are low and birth rates diminish, as a rule as a result of enhanced economic conditions, an expansion in women's status and education, and access to contraception. The decrease in birth rate fluctuates from nation to nation, as does the time span in which it is experienced. Stage Three moves the population towards stability through a decline in the birth rate. Several fertility factors contribute to this eventual decline, and are generally similar to those associated with sub-replacement fertility, although some are speculative: In rural areas, the continued decline in childhood death meant that at some point parents realized that they did not need as many children to ensure a comfortable old age. As childhood death continues to fall and incomes increase, parents can become increasingly confident that fewer children will suffice to help in the family business and care for them in old age. Increasing urbanization changes the traditional values placed upon fertility and the value of children in rural society. Urban living also raises the cost of dependent children to a family. A recent theory suggests that urbanization also contributes to reducing the birth rate because it disrupts optimal mating patterns. 
A 2008 study in Iceland found that the most fecund marriages are between distant cousins. Genetic incompatibilities inherent in more distant out breeding makes reproduction harder. In both rural and urban areas, the cost of children to parents is exacerbated by the introduction of compulsory education acts and the increased need to educate children so they can take up a respected position in society. Children are increasingly prohibited under law from working outside the household and make an increasingly limited contribution to the household, as school children are increasingly exempted from the expectation of making a significant contribution to domestic work. Even in equatorial Africa, children (under the age of 5) are now required to have clothes and shoes, and may even need school uniforms. Parents begin to consider it a duty to buy children's books and toys. Partly due to education and access to family planning, people begin to reassess their need for children and their ability to raise them. Increasing literacy and employment lowers the uncritical acceptance of childbearing and motherhood as measures of the status of women. Working women have less time to raise children; this is particularly an issue where fathers traditionally make little or no contribution to child-raising, such as southern Europe or Japan. Valuation of women beyond childbearing and motherhood becomes important. Improvements in contraceptive technology are now a major factor in fertility decline. Changes in values regarding children and gender play as significant a role as the availability of contraceptives and knowledge of how to use them. The resulting changes in the age structure of the population include a decline in the youth dependency ratio and eventually population aging. The population structure becomes less triangular and more like an elongated balloon. During the period between the decline in youth dependency and rise in old age dependency there is a demographic window of opportunity that can potentially produce economic growth through an increase in the ratio of working age to dependent population; the demographic dividend. However, unless factors such as those listed above are allowed to work, a society's birth rates may not drop to a low level in due time, which means that the society cannot proceed to stage three and is locked in what is called a demographic trap. Countries that have witnessed a fertility decline of over 50% from their pre-transition levels include: Costa Rica, El Salvador, Panama, Jamaica, Mexico, Colombia, Ecuador, Guyana, Philippines, Indonesia, Malaysia, Sri Lanka, Turkey, Azerbaijan, Turkmenistan, Uzbekistan, Tunisia, Algeria, Morocco, Lebanon, South Africa, India, Saudi Arabia, and many Pacific islands. Countries that have experienced a fertility decline of 25–50% include: Guatemala, Tajikistan, Egypt and Zimbabwe. Countries that have experienced a fertility decline of less than 25% include: Sudan, Niger, Afghanistan. Stage four This occurs where birth and death rates are both low, leading to total population stability. Death rates are low for a number of reasons, primarily due to lower rates of diseases and increased food production. The birth rate is low because people have more opportunities to choose if they want children. This is made possible by improvements in contraception or women gaining more independence and work opportunities. The DTM (Demographic Transition model) is only a suggestion about the future population levels of a country, not a prediction. 
Countries that were at this stage (total fertility rate between 2.0 and 2.5) in 2015 include: Antigua and Barbuda, Argentina, Bahrain, Bangladesh, Bhutan, Cabo Verde, El Salvador, Faroe Islands, Grenada, Guam, India, Indonesia, Kosovo, Libya, Malaysia, Maldives, Mexico, Myanmar, Nepal, New Caledonia, Nicaragua, Palau, Peru, Seychelles, Sri Lanka, Suriname, Tunisia, Turkey, and Venezuela. Stage five The original Demographic Transition model has just four stages, but additional stages have been proposed. Both more-fertile and less-fertile futures have been claimed as a Stage Five. Some countries have sub-replacement fertility (that is, below 2.1–2.2 children per woman). Replacement fertility is generally slightly higher than 2 (the level which replaces the two parents, achieving equilibrium) both because boys are born more often than girls (about 1.05–1.1 to 1), and to compensate for deaths prior to full reproduction. Many European and East Asian countries now have higher death rates than birth rates. Population aging and population decline may eventually occur, assuming that the fertility rate does not change and sustained mass immigration does not occur. Using data through 2005, researchers have suggested that the negative relationship between development, as measured by the Human Development Index (HDI), and birth rates had reversed at very high levels of development. In many countries with very high levels of development, fertility rates were approaching two children per woman in the early 2000s. However, fertility rates declined significantly in many very high development countries between 2010 and 2018, including in countries with high levels of gender parity. The global data no longer support the suggestion that fertility rates tend to broadly rise at very high levels of national development. From the point of view of evolutionary biology, wealthier people having fewer children is unexpected, as natural selection would be expected to favor individuals who are willing and able to convert plentiful resources into plentiful fertile descendants. This may be the result of a departure from the environment of evolutionary adaptedness. Most models posit that the birth rate will stabilize at a low level indefinitely. Some dissenting scholars note that the modern environment is exerting evolutionary pressure for higher fertility, and that eventually due to individual natural selection or cultural selection, birth rates may rise again. Part of the "cultural selection" hypothesis is that the variance in birth rate between cultures is significant; for example, some religious cultures have a higher birth rate that is not accounted for by differences in income. In his book Shall the Religious Inherit the Earth?, Eric Kaufmann argues that demographic trends point to religious fundamentalists greatly increasing as a share of the population over the next century. Jane Falkingham of Southampton University has noted that "We've actually got population projections wrong consistently over the last 50 years... we've underestimated the improvements in mortality... but also we've not been very good at spotting the trends in fertility." 
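A rough worked illustration of why replacement-level fertility sits slightly above two children per woman (the survival figure used here is an illustrative assumption, not a number taken from the article): with a sex ratio at birth of about 1.05 boys per girl, an average woman must bear about
1 + 1.05 = 2.05
children for the cohort to produce one daughter per woman; if, say, only about 97% of girls survive to childbearing age, the requirement rises to about
\frac{2.05}{0.97} \approx 2.11,
close to the commonly quoted figure of 2.1.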
In 2004 a United Nations office published its guesses for global population in the year 2300; estimates ranged from a "low estimate" of 2.3 billion (tending to −0.32% per year) to a "high estimate" of 36.4 billion (tending to +0.54% per year), which were contrasted with a deliberately "unrealistic" illustrative "constant fertility" scenario of 134 trillion (obtained if 1995–2000 fertility rates stay constant into the far future). Effects on age structure The decline in death rate and birth rate that occurs during the demographic transition may transform the age structure. When the death rate declines during the second stage of the transition, the result is primarily an increase in the younger population. This is because when the death rate is high (stage one), the infant mortality rate is very high, often above 200 deaths per 1000 children born. As the death rate falls or improves, this may lead to a lower infant mortality rate and increased child survival. Over time, as individuals with increased survival rates age, there may also be an increase in the number of older children, teenagers, and young adults. This implies that there is an increase in the fertile population proportion which, with constant fertility rates, may lead to an increase in the number of children born. This will further increase the growth of the child population. The second stage of the demographic transition, therefore, implies a rise in child dependency and creates a youth bulge in the population structure. As a population continues to move through the demographic transition into the third stage, fertility declines and the youth bulge prior to the decline ages out of child dependency into the working ages. This stage of the transition is often referred to as the golden age, and is typically when populations see the greatest advancements in living standards and economic development. However, further declines in both mortality and fertility will eventually result in an aging population, and a rise in the aged dependency ratio. An increase of the aged dependency ratio often indicates that a population has reached below replacement levels of fertility, and as result does not have enough people in the working ages to support the economy, and the growing dependent population. Historical studies Britain Between 1750 and 1975 England experienced the transition from high to low levels of both mortality and fertility. A major factor was the sharp decline in the death rate due to infectious diseases, which has fallen from about 11 per 1,000 to less than 1 per 1,000. By contrast, the death rate from other causes was 12 per 1,000 in 1850 and has not declined markedly. Scientific discoveries and medical breakthroughs did not, in general, contribute importantly to the early major decline in infectious disease mortality. Ireland In the 1980s and early 1990s, the Irish demographic status converged to the European norm. Mortality rose above the European Community average, and in 1991 Irish fertility fell to replacement level. The peculiarities of Ireland's past demography and its recent rapid changes challenge established theory. The recent changes have mirrored inward changes in Irish society, with respect to family planning, women in the work force, the sharply declining power of the Catholic Church, and the emigration factor. France France displays real divergences from the standard model of Western demographic evolution. 
The uniqueness of the French case arises from its specific demographic history, its historic cultural values, and its internal regional dynamics. France's demographic transition was unusual in that the mortality and the natality decreased at the same time, thus there was no demographic boom in the 19th century. France's demographic profile is similar to its European neighbors and to developed countries in general, yet it seems to be staving off the population decline of Western countries. With 62.9 million inhabitants in 2006, it was the second most populous country in the European Union, and it displayed a certain demographic dynamism, with a growth rate of 2.4% between 2000 and 2005, above the European average. More than two-thirds of that growth can be ascribed to a natural increase resulting from high fertility and birth rates. In contrast, France is one of the developed nations whose migratory balance is rather weak, which is an original feature at the European level. Several interrelated reasons account for such singularities, in particular the impact of pro-family policies accompanied by greater unmarried households and out-of-wedlock births. These general demographic trends parallel equally important changes in regional demographics. Since 1982, the same significant tendencies have occurred throughout mainland France: demographic stagnation in the least-populated rural regions and industrial regions in the northeast, with strong growth in the southwest and along the Atlantic coast, plus dynamism in metropolitan areas. Shifts in population between regions account for most of the differences in growth. The varying demographic evolution regions can be analyzed though the filter of several parameters, including residential facilities, economic growth, and urban dynamism, which yield several distinct regional profiles. The distribution of the French population therefore seems increasingly defined not only by interregional mobility but also by the residential preferences of individual households. These challenges, linked to configurations of population and the dynamics of distribution, inevitably raise the issue of town and country planning. The most recent census figures show that an outpouring of the urban population means that fewer rural areas are continuing to register a negative migratory flow – two-thirds of rural communities have shown some since 2000. The spatial demographic expansion of large cities amplifies the process of peri-urbanization yet is also accompanied by movement of selective residential flow, social selection, and sociospatial segregation based on income. Asia McNicoll (2006) examines the common features behind the striking changes in health and fertility in East and Southeast Asia in the 1960s–1990s, focusing on seven countries: Taiwan and South Korea ("tiger" economies), Thailand, Malaysia, and Indonesia ("second wave" countries), and China and Vietnam ("market-Leninist" economies). Demographic change can be seen as a by-product of social and economic development and, in some cases, accompanied by strong government pressure. An effective, often authoritarian, local administrative system can provide a framework for promotion and services in health, education, and family planning. Economic liberalization increased economic opportunities and risks for individuals, while also increasing the price and often reducing the quality of these services, all affecting demographic trends. 
India Goli and Arokiasamy (2013) indicate that India has experienced a sustained demographic transition since the mid-1960s and a fertility transition since about 1965. As of 2013, India is in the latter half of the third stage of the demographic transition, with a population of 1.23 billion. It is nearly 40 years behind in the demographic transition process compared to EU countries, Japan, etc. The present demographic transition stage of India, along with its higher population base, will yield a rich demographic dividend in future decades. Korea Cha (2007) analyzes a panel data set to explore how the industrial revolution, demographic transition, and human capital accumulation interacted in Korea from 1916 to 1938. Income growth and public investment in health caused mortality to fall, which suppressed fertility and promoted education. Industrialization, the skill premium, and the closing gender wage gap further induced parents to opt for child quality. Expanding demand for education was accommodated by an active public school building program. The interwar agricultural depression aggravated traditional income inequality, raising fertility and impeding the spread of mass schooling. Landlordism collapsed in the wake of de-colonization, and the consequent reduction in inequality accelerated human and physical capital accumulation, hence leading to growth in South Korea. China China experienced a demographic transition with a high death rate and a low fertility rate from 1959 to 1961 due to the Great Famine. However, as a result of economic improvement, the birth rate increased and the mortality rate declined in China before the early 1970s. In the 1970s, China's birth rate fell at an unprecedented rate, which had not been experienced by any other population in a comparable time span. The birth rate fell from 6.6 births per woman before 1970 to 2.2 births per woman in 1980. The rapid fertility decline in China was driven by government policy: in particular, the "later, longer, fewer" policy of the early 1970s and the one-child policy enacted in the late 1970s, both of which strongly shaped China's demographic transition. As the demographic dividend gradually disappeared, the government began relaxing the one-child policy in 2011 and fully implemented the two-child policy from 2015. The two-child policy had some positive effect on fertility, which rose until 2018. However, fertility began to decline after 2018, while mortality has shown no significant change over the past 30 years. Madagascar Campbell has studied the demography of 19th-century Madagascar in the light of demographic transition theory. Both supporters and critics of the theory hold to an intrinsic opposition between human and "natural" factors, such as climate, famine, and disease, influencing demography. They also suppose a sharp chronological divide between the precolonial and colonial eras, arguing that whereas "natural" demographic influences were of greater importance in the former period, human factors predominated thereafter. Campbell argues that in 19th-century Madagascar the human factor, in the form of the Merina state, was the predominant demographic influence. However, the impact of the state was felt through natural forces, and it varied over time. In the late 18th and early 19th centuries Merina state policies stimulated agricultural production, which helped to create a larger and healthier population and laid the foundation for Merina military and economic expansion within Madagascar. 
From 1820, the cost of such expansionism led the state to increase its exploitation of forced labor at the expense of agricultural production and thus transformed it into a negative demographic force. Infertility and infant mortality, which were probably more significant influences on overall population levels than the adult mortality rate, increased from 1820 due to disease, malnutrition, and stress, all of which stemmed from state forced labor policies. Available estimates indicate little if any population growth for Madagascar between 1820 and 1895. The demographic "crisis" in Africa, ascribed by critics of the demographic transition theory to the colonial era, stemmed in Madagascar from the policies of the imperial Merina regime, which in this sense formed a link to the French regime of the colonial era. Campbell thus questions the underlying assumptions governing the debate about historical demography in Africa and suggests that the demographic impact of political forces be reevaluated in terms of their changing interaction with "natural" demographic influences. Russia Russia entered stage two of the transition in the 18th century, simultaneously with the rest of Europe, though the effect of transition remained limited to a modest decline in death rates and steady population growth. The population of Russia nearly quadrupled during the 19th century, from 30 million to 133 million, and continued to grow until the First World War and the turmoil that followed. Russia then quickly transitioned through stage three. Though fertility rates rebounded initially and almost reached 7 children/woman in the mid-1920s, they were depressed by the 1931–33 famine, crashed due to the Second World War in 1941, and only rebounded to a sustained level of 3 children/woman after the war. By 1970 Russia was firmly in stage four, with crude birth rates and crude death rates on the order of 15/1000 and 9/1000 respectively. Bizarrely, however, the birth rate entered a state of constant flux, repeatedly surpassing the 20/1000 as well as falling below 12/1000. In the 1980s and 1990s, Russia underwent a unique demographic transition; observers call it a "demographic catastrophe": the number of deaths exceeded the number of births, life expectancy fell sharply (especially for males) and the number of suicides increased. From 1992 through 2011, the number of deaths exceeded the number of births; from 2011 onwards, the opposite has been the case. United States Greenwood and Seshadri (2002) show that from 1800 to 1940 there was a demographic shift from a mostly rural US population with high fertility, with an average of seven children born per white woman, to a minority (43%) rural population with low fertility, with an average of two births per white woman. This shift resulted from technological progress. A sixfold increase in real wages made children more expensive in terms of forgone opportunities to work and increases in agricultural productivity reduced rural demand for labor, a substantial portion of which traditionally had been performed by children in farm families. A simplification of the DTM theory proposes an initial decline in mortality followed by a later drop in fertility. The changing demographics of the U.S. in the last two centuries did not parallel this model. Beginning around 1800, there was a sharp fertility decline; at this time, an average woman usually produced seven births per lifetime, but by 1900 this number had dropped to nearly four. A mortality decline was not observed in the U.S. 
until almost 1900—a hundred years after the drop in fertility. However, this late decline occurred from a very low initial level. During the 17th and 18th centuries, crude death rates in much of colonial North America ranged from 15 to 25 deaths per 1000 residents per year (levels of up to 40 per 1000 being typical during stages one and two). Life expectancy at birth was on the order of 40 and, in some places, reached 50, and a resident of 18th century Philadelphia who reached age 20 could have expected, on average, additional 40 years of life. This phenomenon is explained by the pattern of colonization of the United States. Sparsely populated interior of the country allowed ample room to accommodate all the "excess" people, counteracting mechanisms (spread of communicable diseases due to overcrowding, low real wages and insufficient calories per capita due to the limited amount of available agricultural land) which led to high mortality in the Old World. With low mortality but stage 1 birth rates, the United States necessarily experienced exponential population growth (from less than 4 million people in 1790, to 23 million in 1850, to 76 million in 1900). The only area where this pattern did not hold was the American South. High prevalence of deadly endemic diseases such as malaria kept mortality as high as 45–50 per 1000 residents per year in 18th century North Carolina. In New Orleans, mortality remained so high (mainly due to yellow fever) that the city was characterized as the "death capital of the United States" – at the level of 50 per 1000 population or higher – well into the second half of the 19th century. Today, the U.S. is recognized as having both low fertility and mortality rates. Specifically, birth rates stand at 14 per 1000 per year and death rates at 8 per 1000 per year. Critical evaluation Because the DTM is only a model, it cannot necessarily predict the future, but it does suggest an underdeveloped country's future birth and death rates, together with the total population size. Most particularly, of course, the DTM makes no comment on change in population due to migration. It is not necessarily applicable at very high levels of development. DTM does not account for recent phenomena such as AIDS; in these areas HIV has become the leading source of mortality. Some trends in waterborne bacterial infant mortality are also disturbing in countries like Malawi, Sudan and Nigeria; for example, progress in the DTM clearly arrested and reversed between 1975 and 2005. DTM assumes that population changes are induced by industrial changes and increased wealth, without taking into account the role of social change in determining birth rates, e.g., the education of women. In recent decades more work has been done on developing the social mechanisms behind it. DTM assumes that the birth rate is independent of the death rate. Nevertheless, demographers maintain that there is no historical evidence for society-wide fertility rates rising significantly after high mortality events. Notably, some historic populations have taken many years to replace lives after events such as the Black Death. Some have claimed that DTM does not explain the early fertility declines in much of Asia in the second half of the 20th century or the delays in fertility decline in parts of the Middle East. 
Nevertheless, the demographer John C. Caldwell has suggested that the more rapid decline in fertility in some developing countries, compared with Western Europe, the United States, Canada, Australia and New Zealand, is mainly due to government programs and a massive investment in education both by governments and parents. The DTM does not explain well the impact of government policies on the birth rate. In some developing countries, governments often implement policies to control the growth of the fertility rate. China, for example, underwent a fertility transition beginning in 1970, and the Chinese experience was largely influenced by government policy. In particular, the "later, longer, fewer" policy of 1970 and the one-child policy enacted in 1979 both encouraged people to have fewer children, later in life. The fertility transition indeed stimulated economic growth and influenced the demographic transition in China. Second demographic transition The Second Demographic Transition (SDT) is a conceptual framework first formulated in 1986 by Ron Lesthaeghe and Dirk van de Kaa. SDT addressed the changes in the patterns of sexual and reproductive behavior which occurred in North America and Western Europe in the period from about 1963, when the birth control pill and other cheap effective contraceptive methods such as the IUD were adopted by the general population, to the present. Combined with the sexual revolution and the increased role of women in society and the workforce, the resulting changes have profoundly affected the demographics of industrialized countries, resulting in a sub-replacement fertility level. The changes (increased numbers of women choosing not to marry or have children, increased cohabitation outside marriage, increased childbearing by single mothers, increased participation by women in higher education and professional careers, and others) are associated with increased individualism and autonomy, particularly of women. Motivations have changed from traditional and economic ones to those of self-realization. In 2015, Nicholas Eberstadt, political economist at the American Enterprise Institute in Washington, described the Second Demographic Transition as one in which "long, stable marriages are out, and divorce or separation are in, along with serial cohabitation and increasingly contingent liaisons." S. Philip Morgan has suggested a direction for the future development of SDT theory: social demographers should explore a theory that is not based on stages and does not set out a single developmental path toward some final stage (in the case of SDT, a final stage resembling the advanced Western countries that most embrace postmodern values). However, the Second Demographic Transition (SDT) theory has not proposed a single line or teleological evolution based on phases, as was the case for the theories of the First Demographic Transition (FDT). Instead, and this is strikingly in evidence in Lesthaeghe's empirical studies, major attention is being paid to historical path dependency, heterogeneity in the SDT patterns of development, forms of family and lineage organisation, and economic and especially ideational developments. For instance, the European pattern of almost simultaneous manifestation of all SDT demographic characteristics is not being replicated elsewhere. The Latin American countries experienced a major growth in pre-marital cohabitation, in which the upper social classes were catching up with pre-existing higher levels among the less educated and some ethnic groups. 
But so far, the other major SDT indicator, namely fertility postponement is largely absent. The opposite holds for Asian patriarchal societies which have traditionally strong rules of arranged endogamous marriage and male dominance. In industrialised East Asian societies a major postponement of union formation and parenthood took place, leading to an expansion of numbers of singles and to very low levels of sub-replacement fertility. In such historically patriarchal societies, free partner choice is to be avoided, and hence there is a strong stigma against pre-marital cohabitation. However, after the turn of the century it was noted that cohabitation did develop in Japan, China, Taiwan and the Philippines. The proportions are still moderate, and pregnancies in cohabiting unions are typically followed by shot-gun marriages or abortions. Parenthood among cohabitants is still very rare. Finally, Hindu and Muslim countries can reach replacement level fertility, but no significant fertility postponement or take off of pre-marital cohabitation have occurred. Hence they are completing the FDT and are not in any type of initiation phase of the SDT. Sub-Saharan African populations exhibit yet another sui generis pattern. These societies have exogamous union formation and weaker marriage institutions. Under these conditions cohabitation seems to grow both among poorer and wealthier population segments alike. Among the former cohabitation reflects the "Pattern of Disadvantage" and among the latter cohabitation is a means of avoiding inflated bride price. However, Sub-Saharan African populations have not yet completed the FDT fertility transition, and several West-African ones have barely started it. Hence, there is a striking disconnection between evolutions of fertility and of partnership formation. The conclusion is that the unfolding of the SDT is characterised by just as much pattern heterogeneity as was the by now historical FDT. See also Birth dearth Demographic dividend Demographic economics Demographic trap Demographic window Epidemiological transition Mathematical model of self-limiting growth Neolithic demographic transition Migration transition model Population pyramid Rate of natural increase Self-limiting growth in biological population at carrying capacity Transition economy Waithood World population milestones r/K life history theory Russian cross Footnotes References Carrying capacity Chesnais, Jean-Claude. The Demographic Transition: Stages, Patterns, and Economic Implications: A Longitudinal Study of Sixty-Seven Countries Covering the Period 1720–1984. Oxford U. Press, 1993. 633 pp. Coale, Ansley J. 1973. "The demographic transition," IUSSP Liege International Population Conference. Liege: IUSSP. Volume 1: 53–72. . . . Classic article that introduced concept of transition. Davis, Kingsley. 1963. "The theory of change and response in modern demographic history." Population Index 29(October): 345–66. Kunisch, Sven; Boehm, Stephan A.; Boppel, Michael (eds): From Grey to Silver: Managing the Demographic Change Successfully, Springer-Verlag, Berlin Heidelberg 2011, , full text in Ebsco. . Gillis, John R., Louise A. Tilly, and David Levine, eds. The European Experience of Declining Fertility, 1850–1970: The Quiet Revolution. 1992. Landry, Adolphe, 1982 [1934], La révolution démographique – Études et essais sur les problèmes de la population, Paris, INED-Presses Universitaires de France Mercer, Alexander (2014), Infections, Chronic Disease, and the Epidemiological Transition. 
Rochester, NY: University of Rochester Press/Rochester Studies in Medical History. Notestein, Frank W. 1945. "Population — The Long View," in Theodore W. Schultz, ed., Food for the World. Chicago: University of Chicago Press. Soares, Rodrigo R., and Bruno L. S. Falcão. "The Demographic Transition and the Sexual Division of Labor," Journal of Political Economy, Vol. 116, No. 6 (Dec. 2008), pp. 1058–104, full text in Project Muse and Ebsco. World Bank, Fertility Rate Demographic economics Human geography Population geography Economic systems
Demographic transition
Environmental_science
8,564
2,832,170
https://en.wikipedia.org/wiki/Bit%20manipulation
Bit manipulation is the act of algorithmically manipulating bits or other pieces of data shorter than a word. Computer programming tasks that require bit manipulation include low-level device control, error detection and correction algorithms, data compression, encryption algorithms, and optimization. For most other tasks, modern programming languages allow the programmer to work directly with abstractions instead of bits that represent those abstractions. Source code that does bit manipulation makes use of the bitwise operations: AND, OR, XOR, NOT, and possibly other operations analogous to the boolean operators; there are also bit shifts and operations to count ones and zeros, find high and low one or zero, set, reset and test bits, extract and insert fields, mask and zero fields, gather and scatter bits to and from specified bit positions or fields. Integer arithmetic operators can also effect bit-operations in conjunction with the other operators. Bit manipulation, in some cases, can obviate or reduce the need to loop over a data structure and can give manyfold speed-ups, as bit manipulations are processed in parallel. Terminology Bit twiddling, bit fiddling, bit bashing, and bit gymnastics are often used interchangeably with bit manipulation, but sometimes exclusively refer to clever or non-obvious ways or uses of bit manipulation, or tedious or challenging low-level device control data manipulation tasks. The term bit twiddling dates from early computing hardware, where computer operators would make adjustments by tweaking or twiddling computer controls. As computer programming languages evolved, programmers adopted the term to mean any handling of data that involved bit-level computation. Bitwise operation A bitwise operation operates on one or more bit patterns or binary numerals at the level of their individual bits. It is a fast, primitive action directly supported by the central processing unit (CPU), and is used to manipulate values for comparisons and calculations. On most processors, the majority of bitwise operations are single cycle - substantially faster than division and multiplication and branches. While modern processors usually perform some arithmetic and logical operations just as fast as bitwise operations due to their longer instruction pipelines and other architectural design choices, bitwise operations do commonly use less power because of the reduced use of resources. Example of bit manipulation To determine if a number is a power of two, conceptually we may repeatedly do integer divide by two until the number won't divide by 2 evenly; if the only factor left is 1, the original number was a power of 2. Using bit and logical operators, there is a simple expression which will return true (1) or false (0): bool isPowerOfTwo = (x != 0) && ((x & (x - 1)) == 0); The second half uses the fact that powers of two have one and only one bit set in their binary representation: x == 0...010...0 x-1 == 0...001...1 x & (x-1) == 0...000...0 If the number is neither zero nor a power of two, it will have '1' in more than one place: x == 0...1...010...0 x-1 == 0...1...001...1 x & (x-1) == 0...1...000...0 If inline assembly language code is used, then an instruction (popcnt) that counts the number of 1's or 0's in the operand might be available; an operand with exactly one '1' bit is a power of 2. However, such an instruction may have greater latency than the bitwise method above. Bit manipulation operations Processors typically provide only a subset of the useful bit operators. 
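As a concrete, self-contained illustration of the power-of-two test and the bit-count approach described above, the following minimal C sketch may help; the function names is_power_of_two and popcount32 are illustrative choices rather than names used by the article, and the __builtin_popcount mentioned in a comment is a GCC/Clang extension, not standard C.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Portable population count: repeatedly clears the lowest set bit
   (x & (x - 1)) and counts how many times that is possible. */
static unsigned popcount32(uint32_t x) {
    unsigned count = 0;
    while (x != 0) {
        x &= x - 1;   /* clear the lowest set bit */
        ++count;
    }
    return count;
}

/* A nonzero power of two has exactly one set bit. */
static bool is_power_of_two(uint32_t x) {
    return (x != 0) && ((x & (x - 1)) == 0);
    /* Equivalently: return popcount32(x) == 1;
       compilers such as GCC/Clang also expose a hardware
       population count via __builtin_popcount. */
}

int main(void) {
    printf("%d %d %d\n", is_power_of_two(64), is_power_of_two(96), is_power_of_two(0));
    printf("%u\n", popcount32(0xF0F0u));  /* prints 8 */
    return 0;
}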
Programming languages don't directly support most bit operations, so idioms must be used to code them. The 'C' programming language, for example provides only bit-wise AND(&), OR(|), XOR(^) and NOT(~). Fortran provides AND(.and.), OR (.or.), XOR (.neqv.) and EQV(.eqv.). Algol provides syntactic bitfield extract and insert. When languages provide bit operations that don't directly map to hardware instructions, compilers must synthesize the operation from available operators. An especially useful bit operation is count leading zeros used to find the high set bit of a machine word, though it may have different names on various architectures. There's no simple programming language idiom, so it must be provided by a compiler intrinsic or system library routine. Without that operator, it is very expensive (see Find first set#CLZ) to do any operations with regard to the high bit of a word, due to the asymmetric carry-propagate of arithmetic operations. Fortunately, most cpu architectures have provided that since the middle 1980s. An accompanying operation count ones, also called POPCOUNT, which counts the number of set bits in a machine word, is also usually provided as a hardware operator. Simpler bit operations like bit set, reset, test and toggle are often provided as hardware operators, but are easily simulated if they aren't - for example (SET R0, 1; LSHFT R0, i; OR x, R0) sets bit i in operand x. Some of the more useful and complex bit operations that must be coded as idioms in the programming language and synthesized by compilers include: clear from specified bit position up (leave lower part of word) clear from specified bit position down (leave upper part of word) mask from low bit down (clear lower word) mask from high bit up (clear lower word) bitfield extract bitfield insert bitfield scatter/gather operations which distribute contiguous portions of a bitfield over a machine word, or gather disparate bitfields in the word into a contiguous portion of a bitfield (see recent Intel PEXT/PDEP operators). Used by cryptography and video encoding. matrix inversion Some arithmetic operations can be reduced to simpler operations and bit operations: reduce multiply by constant to sequence of shift-add Multiply by 9 for example, is copy operand, shift up by 3 (multiply by 8), and add to original operand. reduce division by constant to sequence of shift-subtract Masking A mask is data that is used for bitwise operations, particularly in a bit field. Using a mask, multiple bits in a Byte, nibble, word (etc.) can be set either on, off or inverted from on to off (or vice versa) in a single bitwise operation. More comprehensive applications of masking, when applied conditionally to operations, are termed predication. See also Bit array Bit banding Bit banging Bit field Bit manipulation instruction set — bit manipulation extensions for the x86 instruction set. BIT predicate Bit specification (disambiguation) Bit twiddler (disambiguation) Nibble — unit of data consisting of 4 bits, or half a byte Predication (computer architecture) where bit "masks" are used in Vector processors Single-event upset References Further reading (Draft of Fascicle 1a available for download) External links Bit Manipulation Tricks with full explanations and source code Intel Intrinsics Guide xchg rax, rax: x86_64 riddles and hacks The Aggregate Magic Algorithms from University of Kentucky Binary arithmetic Computer arithmetic
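The following short C sketch illustrates a few of the idioms listed in the "Bit manipulation operations" and "Masking" sections above (single-bit set/clear/toggle/test, a low-order mask, and the multiply-by-9 shift-add reduction); the helper names are illustrative assumptions rather than any standard API.

#include <stdint.h>
#include <stdio.h>

/* Single-bit idioms built from shifts and the basic bitwise operators. */
static uint32_t set_bit(uint32_t x, unsigned i)    { return x |  (UINT32_C(1) << i); }
static uint32_t clear_bit(uint32_t x, unsigned i)  { return x & ~(UINT32_C(1) << i); }
static uint32_t toggle_bit(uint32_t x, unsigned i) { return x ^  (UINT32_C(1) << i); }
static unsigned test_bit(uint32_t x, unsigned i)   { return (x >> i) & 1u; }

/* Mask that keeps only the low i bits (valid for i < 32). */
static uint32_t low_mask(unsigned i) { return (UINT32_C(1) << i) - 1u; }

/* Multiply by a constant reduced to shift-add: 9*x = 8*x + x. */
static uint32_t times9(uint32_t x) { return (x << 3) + x; }

int main(void) {
    uint32_t x = 0xF0u;
    printf("%#x %#x %#x %u\n", (unsigned)set_bit(x, 0), (unsigned)clear_bit(x, 4),
           (unsigned)toggle_bit(x, 8), test_bit(x, 5));
    printf("%#x %u\n", (unsigned)low_mask(12), (unsigned)times9(7));  /* 0xfff and 63 */
    return 0;
}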
Bit manipulation
Mathematics
1,567
46,661,811
https://en.wikipedia.org/wiki/Iodine%20%28131%20I%29%20derlotuximab%20biotin
{{DISPLAYTITLE:Iodine (131 I) derlotuximab biotin}} Iodine (131 I) derlotuximab biotin is a monoclonal antibody designed for the treatment of recurrent glioblastoma multiforme. This drug was developed by Peregrine Pharmaceuticals, Inc. References Experimental cancer drugs Monoclonal antibodies for tumors Antibody-drug conjugates Iodine compounds Radiopharmaceuticals
Iodine (131 I) derlotuximab biotin
Chemistry,Biology
97
1,006,658
https://en.wikipedia.org/wiki/Ephraim%20Katzir
Ephraim Katzir (; – 30 May 2009) was an Israeli biophysicist and Labor Party politician. He was the fourth President of Israel from 1973 until 1978. Biography Efraim Katchalski (later Katzir) was the son of Yudel-Gersh (Yehuda) and Tzilya Katchalski, in Kiev, in the Russian Empire (today in Ukraine). In 1925 (several publications cite 1922), he immigrated to Mandatory Palestine with his family and settled in Jerusalem. In 1932, he graduated from Gymnasia Rehavia. A fellow classmate, Shulamit Laskov, remembers him as the "shining star" of the grade level. He was “an especially tall young man, a little pudgy, whose goodness of heart was splashed across his smiling face.” He excelled in all areas, “even in drawing and in gymnastics, where he was no slouch. He was the first in the class in arithmetic, and later on in mathematics. No one came close to him.” Like his elder brother, Aharon, Katzir was interested in science. He studied botany, zoology, chemistry and bacteriology at the Hebrew University of Jerusalem. In 1938 he received an MSc, and in 1941 he received a PhD degree. In 1939, he graduated from the first Haganah officers' course, and became commander of the student unit in the field forces (Hish). He and his brother worked on the development of new methods of warfare. In late 1947, after the outbreak of the 1948 Palestine war, and in anticipation of the War for Israel’s Independence, Katzir met the biochemist David Rittenberg, then working at Columbia University, stating: ‘I need germs and poisons for the [impending/ongoing Israeli] war of independence,’ Rittenberg referred the matter to Chaim Weizmann. Weizmann initially dismissed the request, branding Katzir a ‘savage’ and requested his dismissal from the Sieff Scientific Institute in Rehovot, but weeks later he relented, and his dismissal was rescinded. Shortly afterwards, in March 1948, his brother Aharon, who decades later was one of the victims of the Lod Airport Massacre, was appointed director of a research unit, HEMED, in Mandatory Palestine involving biological warfare. A decision to use such material against Palestinians was then taken in early April. In May Ben-Gurion appointed Ephraim to replace his brother as director of the HEMED research unit, given his success abroad in procuring biological warfare materials and equipment to produce them. Katzir was married to Nina (née Gottlieb), born in Poland, who died in 1986. As an English teacher, Nina developed a unique method for teaching language. As the president's wife, she introduced the custom of inviting the authors of children's books and their young readers to the President's Residence. She established the Nurit Katzir Jerusalem Theater Center in 1978 in memory of their deceased daughter, Nurit, who died from accidental carbon monoxide exposure. Another daughter, Irit, killed herself. They had a son, Meir, and three grandchildren. Katzir died on 30 May 2009 at his home in Rehovot. Scientific career After continuing his studies at the Polytechnic Institute of Brooklyn, Columbia University and Harvard University, he returned to Israel and became head of the Department of Biophysics at the Weizmann Institute of Science in Rehovot, an institution he helped to found. In 1966–1968, Katzir was Chief Scientist of the Israel Defense Forces. His initial research centered on simple synthetic protein models, but he also developed a method for binding enzymes, which helped lay the groundwork for what is now called enzyme engineering. 
Presidency In 1973, Golda Meir contacted Katzir at Harvard University, asking him to accept the presidency. He hebraicized his family name to Katzir, which means 'harvest'. On 10 March 1973, Katzir was elected by the Knesset to serve as the fourth President of Israel. He received 66 votes to 41 cast in favour of his opponent Ephraim Urbach and he assumed office on 24 May 1973. During his appointment, UN approved resolution 3379 which condemned "Zionism as Racism". He was involved in the dispute between Mexico (where the resolution was initially promoted during the World Conference on Women, 1975) and the US Jewish community because of a touristic boycott directed from the latter to that country. In November 1977, he hosted President Anwar Sadat of Egypt in the first ever official visit of an Arab head of state. In 1978, he declined to stand for a second term due to his wife's illness, and was succeeded by Yitzhak Navon. After stepping down as President, he returned to his scientific work. Awards and recognition In 1959, Katzir was awarded the Israel Prize in life sciences. In 1966, he was elected to the American Philosophical Society In 1966, he was elected to the United States National Academy of Sciences In 1972, he was awarded the Sir Hans Krebs Medal of the Federation of European Biochemical Societies In 1976, he was elected to the American Philosophical Society In 1977, he was elected a Foreign Member of the Royal Society (ForMemRS) In 1985, he was awarded the Japan Prize. In 2000, the Rashi Foundation established the Katzir Scholarship Program in honor of Katzir, one of the first members of its board of directors. He is also a recipient of the Tchernichovsky Prize for exemplary translation. He also received honorary degrees from various scientific societies and universities worldwide. The Department of Biotechnology Engineering at the ORT Braude Academic College of Engineering in Karmiel was named after him during his lifetime. See also List of Israel Prize recipients References External links My Contributions to Science and Society, Ephraim Katchalski-Katzir Ephraim Katzir Israel Ministry of Foreign Affairs PM Netanyahu eulogizes former President Ephraim Katzir Ephraim Katzir (Katchelsky) (1916–2009) Ehud Gazit, A vision of a scientific superpower, Ha'aretz, 8 June 2009 1916 births 2009 deaths Israeli Ashkenazi Jews Columbia University alumni Members of the French Academy of Sciences Foreign members of the Royal Society Foreign associates of the National Academy of Sciences Harvard University alumni Israel Prize in life sciences recipients who were biophysicists Israel Prize in life sciences recipients Israeli biophysicists Israeli Labor Party politicians Jewish scientists Members of the Israel Academy of Sciences and Humanities People from Kiev Governorate Jews from the Russian Empire People who emigrated to escape Bolshevism Presidents of Israel Soviet emigrants to Mandatory Palestine Ukrainian Jews People from Rehovot Academic staff of Weizmann Institute of Science Polytechnic Institute of New York University alumni Hebrew University of Jerusalem alumni 20th-century Israeli biologists Members of the American Philosophical Society Weizmann Prize recipients People related to biological warfare
Ephraim Katzir
Biology
1,419
3,072,290
https://en.wikipedia.org/wiki/Photon%20sphere
A photon sphere or photon circle arises in a neighbourhood of the event horizon of a black hole where gravity is so strong that emitted photons will not just bend around the black hole but also return to the point where they were emitted from and consequently display boomerang-like properties. As the source emitting photons falls into the gravitational field towards the event horizon the shape of the trajectory of each boomerang photon changes, tending to a more circular form. At a critical value of the radial distance from the singularity the trajectory of a boomerang photon will take the form of a non-stable circular orbit, thus forming a photon circle and hence in aggregation a photon sphere. The circular photon orbit is said to be the last photon orbit. The radius of the photon sphere, which is also the lower bound for any stable orbit, is, for a Schwarzschild black hole, where is the gravitational constant, is the mass of the black hole, is the speed of light in vacuum, and is the Schwarzschild radius (the radius of the event horizon); see below for a derivation of this result. This equation entails that photon spheres can only exist in the space surrounding an extremely compact object (a black hole or possibly an "ultracompact" neutron star). The photon sphere is located farther from the center of a black hole than the event horizon. Within a photon sphere, it is possible to imagine a photon that is emitted (or reflected) from the back of one's head and, following an orbit of the black hole, is then intercepted by the person's eye, allowing one to see the back of the head, see e.g. For non-rotating black holes, the photon sphere is a sphere of radius 3/2 rs. There are no stable free-fall orbits that exist within or cross the photon sphere. Any free-fall orbit that crosses it from the outside spirals into the black hole. Any orbit that crosses it from the inside escapes to infinity or falls back in and spirals into the black hole. No unaccelerated orbit with a semi-major axis less than this distance is possible, but within the photon sphere, a constant acceleration will allow a spacecraft or probe to hover above the event horizon. Another property of the photon sphere is centrifugal force (note: not centripetal) reversal. Outside the photon sphere, the faster one orbits, the greater the outward force one feels. Centrifugal force falls to zero at the photon sphere, including non-freefall orbits at any speed, i.e. an object weighs the same no matter how fast it orbits, and becomes negative inside it. Inside the photon sphere, faster orbiting leads to greater weight or inward force. This has serious ramifications for the fluid dynamics of inward fluid flow. A rotating black hole has two photon spheres. As a black hole rotates, it drags space with it. The photon sphere that is closer to the black hole is moving in the same direction as the rotation, whereas the photon sphere further away is moving against it. The greater the angular velocity of the rotation of a black hole, the greater the distance between the two photon spheres. Since the black hole has an axis of rotation, this only holds true if approaching the black hole in the direction of the equator. In a polar orbit, there is only one photon sphere. This is because when approaching at this angle, the possibility of traveling with or against the rotation does not exist. The rotation will instead cause the orbit to precess. 
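The formula referred to in the sentence above appears to have been lost during text extraction. The standard expression for a Schwarzschild black hole, consistent with the "3/2 rs" radius quoted later in this article, is
r_{\mathrm{ph}} = \frac{3GM}{c^{2}} = \frac{3}{2}\, r_s ,
where G is the gravitational constant, M is the mass of the black hole, c is the speed of light in vacuum, and r_s = 2GM/c^2 is the Schwarzschild radius.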
Derivation for a Schwarzschild black hole Since a Schwarzschild black hole has spherical symmetry, all possible axes for a circular photon orbit are equivalent, and all circular orbits have the same radius. This derivation involves using the Schwarzschild metric, given by ds^2 = \left(1-\frac{r_s}{r}\right)c^2\,dt^2 - \left(1-\frac{r_s}{r}\right)^{-1}dr^2 - r^2\left(d\theta^2 + \sin^2\theta\,d\varphi^2\right). For a photon traveling at a constant radius r (i.e. in the φ-coordinate direction), dr = 0. Since it is a photon, ds = 0 (a "light-like interval"). We can always rotate the coordinate system such that θ is constant, dθ = 0 (e.g., θ = π/2). Setting ds, dr and dθ to zero, we have 0 = \left(1-\frac{r_s}{r}\right)c^2\,dt^2 - r^2\,d\varphi^2. Re-arranging gives \frac{d\varphi}{dt} = \frac{c}{r}\sqrt{1-\frac{r_s}{r}}. To proceed, we need an independent expression for the relation dφ/dt. To find it, we use the radial geodesic equation \frac{d^2 r}{d\lambda^2} + \Gamma^{r}_{\mu\nu}\,\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda} = 0. Non-vanishing r-connection coefficients are \Gamma^{r}_{tt} = \tfrac{1}{2}c^2 B B', \quad \Gamma^{r}_{rr} = -\frac{B'}{2B}, \quad \Gamma^{r}_{\theta\theta} = -rB, \quad \Gamma^{r}_{\varphi\varphi} = -rB\sin^2\theta, where B = 1 - \tfrac{r_s}{r} and B' = \tfrac{dB}{dr} = \tfrac{r_s}{r^2}. We treat photon radial geodesics with constant r and θ, therefore \tfrac{dr}{d\lambda} = \tfrac{d\theta}{d\lambda} = 0. Substituting it all into the radial geodesic equation (the geodesic equation with the radial coordinate as the dependent variable), we obtain \left(\frac{d\varphi}{dt}\right)^{2} = \frac{c^{2} r_s}{2 r^{3}\sin^{2}\theta}. Comparing it with what was obtained previously, we have \frac{c^{2} r_s}{2 r^{3}} = \frac{c^{2}}{r^{2}}\left(1-\frac{r_s}{r}\right), where we have inserted θ = π/2 radians (imagine that the central mass, about which the photon is orbiting, is located at the centre of the coordinate axes. Then, as the photon is travelling along the φ-coordinate line, for the mass to be located directly in the centre of the photon's orbit, we must have θ = π/2 radians). Hence, rearranging this final expression gives r = \frac{3}{2}\,r_s = \frac{3GM}{c^{2}}, which is the result we set out to prove. Photon orbits around a Kerr black hole In contrast to a Schwarzschild black hole, a Kerr (spinning) black hole does not have spherical symmetry, but only an axis of symmetry, which has profound consequences for the photon orbits, see e.g. Cramer for details and simulations of photon orbits and photon circles. There are two circular photon orbits in the equatorial plane (prograde and retrograde), with different Boyer–Lindquist radii: r_{\mathrm{ph}}^{\mp} = 2M\left\{1 + \cos\left[\tfrac{2}{3}\arccos\left(\mp\tfrac{a}{M}\right)\right]\right\} (written in geometrized units G = c = 1, with the upper sign giving the prograde orbit and the lower sign the retrograde orbit), where a = J/M is the angular momentum per unit mass of the black hole. There exist other constant-radius orbits, but they have more complicated paths which oscillate in latitude about the equator. References External links Step by Step into a Black Hole Virtual Trips to Black Holes and Neutron Stars Guide to Black Holes Spherical Photon Orbits Around a Kerr Black Hole General relativity Black holes
Photon sphere
Physics,Astronomy
1,191
255,573
https://en.wikipedia.org/wiki/V%C3%A4in%C3%A4m%C3%B6inen
Väinämöinen () is a demigod, hero and the central character in Finnish folklore and the main character in the national epic Kalevala by Elias Lönnrot. Väinämöinen was described as an old and wise man, and he possessed a potent, magical singing voice. In Finnish mythology The first extant mention of Väinämöinen in literature is in a list of Tavastian gods by Mikael Agricola in 1551, where it says: "Aeinemöinen wirdhet tacoi." () He and other writers described Väinämöinen as the god of chants, songs and poetry; in many stories Väinämöinen was the central figure at the birth of the world. The Karelian and Finnish national epic, the Kalevala, tells of his birth in the course of a creation story in its opening sections. This myth has elements of creation from chaos and from a cosmic egg, as well as of earth diver creation. At first there were only primal waters and Sky. But Sky also had a daughter named Ilmatar. One day, Ilmatar descended to the waters and became pregnant. She gestated for a very long time in the waters not being able to give birth. One day a goldeneye was seeking a resting place and flew to the knee of Ilmatar, where it laid its eggs. As the bird incubated its eggs Ilmatar's knee grew warmer and warmer. Eventually she was burned by the heat and responded by moving her leg, dislodging the eggs that then fell and shattered in the waters. Land was formed from the lower part of one of the eggshells, while sky formed from the top. The egg whites turned into the moon and stars, and the yolk became the sun. Ilmatar continued to float in the waters. Her footprints became pools for fish, and by pointing she created contours in the land. In this way she made all that is. Then one day she gave birth to Väinämöinen, the first man. Väinämöinen swam until he found land, but the land was barren. With Sampsa Pellervoinen he spread life over the land. In the eighteenth century folk tale collected by Cristfried Ganander, Väinämöinen is said to be son of Kaleva and thus brother of Ilmarinen. His name is believed to come from the Finnish word väinä, meaning stream pool. In the Kalevala In the nineteenth century, some folklorists, most notably Elias Lönnrot, the writer of Kalevala, disputed Väinämöinen's mythological background, claiming that he was an ancient hero, or an influential shaman who lived perhaps in the ninth century. Stripping Väinämöinen from his direct godlike characteristics, Lönnrot turned Väinämöinen into the son of the primal goddess Ilmatar, whom Lönnrot had invented himself. In this story, it was she who was floating in the sea when a duck laid eggs on her knee. He possessed the wisdom of the ages from birth, for he was in his mother's womb for seven hundred and thirty years, while she was floating in the sea and while the earth was formed. It is after praying to the sun, the moon, and the great bear (the stars, referring to Ursa Major) that he is able to leave his mother's womb and dive into the sea. Väinämöinen is presented as the 'eternal bard', who exerts order over chaos and established the land of Kaleva, and around whom revolve so many of the events in Kalevala. His search for a wife brings the land of Kaleva into, at first friendly, but later hostile contact with its dark and threatening neighbour in the north, Pohjola. This conflict culminates in the creation and theft of the Sampo, a magical artifact made by Ilmarinen, the subsequent mission to recapture it, and a battle which ends up splintering the Sampo and dispersing its parts around the world to parts unknown. 
Väinämöinen also demonstrated his magical voice by sinking the impetuous Joukahainen into a bog by singing. Väinämöinen also slays a great pike and makes a magical kantele from its jawbones. Väinämöinen's end is a hubristic one. The 50th and final poem of the Kalevala tells the story of the maiden Marjatta, who becomes pregnant after eating a berry, giving birth to a baby boy. This child is brought to Väinämöinen to examine and judge. His verdict is that such a strangely born infant needs to be put to death. In reply, the newborn child, mere two weeks old, chides the old sage for his sins and transgressions, such as allowing the maiden Aino, sister of Joukahainen, to drown herself. Following this, the baby is baptized and named king of Kalevala. Defeated, Väinämöinen goes to the shores of the sea, where he sings for himself a boat of copper, with which he sails away from the mortal realms. In his final words, he promises that there shall be a time when he shall return, when his crafts and might shall once again be needed. Thematically, the 50th poem thus echoes the arrival of Christianity to Finland and the subsequent fading into history of the old pagan beliefs. This is a common theme among epics, for in the tale of King Arthur, Arthur declares a similar promise before departing for Avalon. In the original 1888 translation of Kalevala into English by John Martin Crawford, Väinämöinen's name was anglicised as Wainamoinen. In other cultures In the Estonian national epic Kalevipoeg, a similar hero is called Vanemuine. In neighbouring Scandinavia, Odin shares many attributes with Väinämöinen, such as connections to magic and poetry. Popular culture The Kalevala has been translated into English and many other languages, in both verse and prose, in complete and abridged forms. For more details see list of Kalevala translations. J. R. R. Tolkien Väinämöinen has been identified as a source for Gandalf, the wizard in J. R. R. Tolkien's novel The Lord of the Rings. Another Tolkienian character with great similarities to Väinämöinen is Tom Bombadil. Like Väinämöinen, he is one of the most powerful beings in his world, and both are ancient and natural beings in their setting. Both Tom Bombadil and Väinämöinen rely on the power of song and lore. Likewise, Treebeard and the Ents in general have been compared to Väinämöinen. Akseli Gallen-Kallela In art (such as the accompanying picture by Akseli Gallen-Kallela), Väinämöinen is described as an old man with a long white beard, which is also a popular appearance for wizards in fantasy literature. Music In music, Finnish folk metal band Ensiferum wrote three songs based on/about Väinämöinen, called "Old Man", "Little Dreamer" and "Cold Northland". There is also a direct reference to him in their song "One More Magic Potion", where they have written "Who can shape a kantele from a pike's jaw, like the great One once did?". The band's mascot, who appears on all their albums, also bears a similarity to traditional depictions of Väinämöinen. Another Finnish metal band named Amorphis released their tenth album The Beginning of Times in 2011. It is a concept album based on the myths and stories of Väinämöinen. Yet another well-known Finnish metal band, Korpiklaani has released a song about the death of Väinämöinen, Tuonelan Tuvilla, as well as an English version named "At The Huts of the Underworld". A song on the album Archipelago by Scottish electronic jazz collective Hidden Orchestra is also named "Vainamoinen". 
Philadelphia based Black metal band Nihilistinen Barbaarisuus released a song about Väinämöinen simply called "Väinämöinen" on their second studio album The Child Must Die in 2015. In classical music, Väinämöinen appears as the main character in the first movement of Jean Sibelius' original music for the "Days of the Press" celebrations of 1899. The first tableaux in this music known as Väinämöinen's Song later became the first movement of Sibelius' 1911 orchestral suite Scènes Historiques. Väinämöinen is also the theme of a composition for choir and harp by Zoltán Kodály, "Wainamoinen makes music", premiered by David Watkins. Science fiction and fantasy Joan D. Vinge's The Summer Queen contains characters named Vanamoinen, Ilmarinen, and Kullervo. They are not the characters from the legend though but may have been inspired by them. That book is the sequel to her Hugo Award-winning novel The Snow Queen. Väinämöinen is also a major character in The Iron Druid Chronicles novel, Hammered by Kevin Hearne. The series follows the Tempe, Arizona-based 2,100 year-old Irish Druid, Atticus O'Sullivan. This book's main plot is the ingress of several characters - the Slavic thunder god Perun, O'Sullivan, a werewolf, a vampire, Finnish folk legend Väinämöinen, and Taoist fangshi Zhang Guolao - into Asgard to kill Norse thunder god Thor, all for their own varied reasons. Comic books There is a Finnish comic strip called "Väinämöisen paluu" (The Return of Väinämöinen) by Petri Hiltunen, where Väinämöinen returns from thousand-year exile to modern Finland to comment on the modern lifestyle with humor. In the storyline "Love her to Death" of the web-comic Nukees, Gav, having died, arrives to an afterlife populated by gods. Among them is Väinämöinen, who, among other things, complains that one only gets women by playing the electric kantele. In the Uncle Scrooge comic "The Quest for Kalevala", drawn by Don Rosa, Väinämöinen helps Scrooge and company to reassemble the Sampo (mythical mill that could produce gold from thin air) and then leaves with it back to Kalevala, but not before giving Scrooge its handle as a souvenir. In the webcomic "Axis Powers Hetalia", the character of Finland was given the human name Tino Väinämöinen. References External links Arts gods Characters in the Kalevala Creation myths Finnish gods Heroes in mythology and legend Magic gods Music and singing gods Demigods
Väinämöinen
Astronomy
2,208
24,762,111
https://en.wikipedia.org/wiki/National%20Outbreak%20Reporting%20System
The National Outbreak Reporting System (NORS) is a web-based application managed by the Centers for Disease Control and Prevention (CDC) used primarily for reporting outbreaks of enteric diseases. History NORS was launched in 2009 for use by staff working within public health departments in individual states, territories, and the Freely Associated States (composed of the Republic of the Marshall Islands, the Federated States of Micronesia and the Republic of Palau; formerly parts of the U.S.-administered Trust Territories of the Pacific Islands). Health departments are responsible for determining which staff members have access to NORS. NORS replaced the electronic Foodborne Outbreak Reporting System (eFORS), which was the primary tool for reporting foodborne disease outbreaks to the U.S. Centers for Disease Control and Prevention (CDC) since 2001. NORS also replaced the paper-based reporting system used during 1971–2008 to report waterborne disease outbreaks to the Waterborne Disease and Outbreak Reporting System (WBDOSS). The transition to electronic waterborne disease outbreak reporting is, in large part, a response to the Council of State and Territorial Epidemiologists (CSTE) position statement titled "Improving Detection, Investigation, and Reporting of Waterborne Disease Outbreaks." Separate sections in NORS for enteric person-to-person and animal-to-person disease outbreak reports are intended to enhance the information available to quantify, describe, and understand these types of outbreaks at a national level. Functionality Only authorized users of state, local, and territorial public health agencies or other organizations are granted access to use NORS. When an outbreak occurs, these agencies begin an investigation by collecting and testing specimens. The health departments then report their findings through NORS, where the data is aggregated and analyzed by the CDC. Detailed information on how to use NORS is available on the CDC website. These training materials explain the process for creating reports, uploading laboratory and outbreak data, and addressing entry issues. See also Waterborne Disease and Outbreak Reporting System (WBDOSS) References External links Council of State and Territorial Epidemiologists National Outbreak Reporting System (NORS) - Provides information on NORS, including forms and video training on using the NORS system. OutbreakNet Team at the United States Centers for Disease Control and Prevention. Epidemiology Centers for Disease Control and Prevention
National Outbreak Reporting System
Environmental_science
483
9,531,733
https://en.wikipedia.org/wiki/Atlas%20wild%20ass
The Atlas wild ass (Equus africanus atlanticus), also known as Algerian wild ass, is a purported extinct subspecies of the African wild ass that was once found across North Africa and parts of the Sahara. It was last represented in a villa mural ca. 300 AD in Bona, Algeria, and may have become extinct as a result of Roman sport hunting. Taxonomy Purported bones have been found in a number of rock shelters across Morocco and Algeria by paleontologists including Alfred Romer (1928, 1935) and Camille Arambourg (1931). While the existence of numerous prehistoric rock art depictions, and Roman mosaics leave no doubt about the former existence of African wild asses in North Africa, it has been claimed that the original bones that were used to describe the subspecies atlanticus actually belonged to a fossil zebra. Therefore, the name E. a. atlanticus would be "unavailable" to the Atlas wild ass. It was also hypothesized that the appearance of Nubian and Somali wild asses were clinal and that they appeared different as an artifact of the recent extinction of intermediate-looking populations. This would make the living African wild ass a monotypic species with no subspecies, and at least question the existence of extinct subspecies like the Atlas wild ass. However, genetic studies have shown since that Nubian and Somali wild asses are different enough to warrant subspecies status. Additionally, domestic donkeys carry two different haplotypes, one shared with the Nubian wild ass, and another of unknown origin that is not found in the Somali wild ass. The presence of the extinct Atlas wild ass in the Ancient Mediterranean makes it a plausible source for the second haplotype. Description Ancient art consistently depicts the African wild asses of North Africa as similar to, but darker colored than, the Nubian and Somali wild ass subspecies. The general color was gray, with marked black and white stripes on the legs, and a black shoulder cross (sometimes doubled). In comparison, the Nubian wild ass is gray with shoulder cross but no stripes, and the Somali wild ass is sandy with black stripes, but no shoulder cross. One or both features appear occasionally in domestic donkeys. Wild and primitive domestic asses are indistinguishable from their bones, which complicates their identification in archaeological sites. Range and ecology The Atlas wild ass was found in the region around the Atlas Mountains, across modern day Algeria, Tunisia and Morocco. It might also have occurred in rocky areas of the Saharan Desert, but not in sands which are avoided by wild asses. However, the 20th century reports of wild asses from northern Chad and the Hoggar Massif in the central Sahara are doubtful. References Harper, F. (1944.5). Extinct and Vanishing Mammals of the Old World, QL707.H37, p. 352 Ziswiler, V. (1967). Extinct and Vanishing Animals, QL88.Z513, p. 113 African wild ass Extinct mammals of Africa Holocene extinctions Species made extinct by human activities Controversial mammal taxa Mammals described in 1884
Atlas wild ass
Biology
634
36,132,824
https://en.wikipedia.org/wiki/Kosmos%20903
Kosmos 903 ( meaning Cosmos 903) was a Soviet US-K missile early warning satellite which was launched in 1977 as part of the Soviet military's Oko programme. The satellite was designed to identify missile launches using optical telescopes and infrared sensors. Kosmos 903 was launched from Site 43/3 at Plesetsk Cosmodrome in the Russian SSR. A Molniya-M carrier rocket with a 2BL upper stage was used to perform the launch, which took place at 01:38 UTC on 11 April 1977. The launch successfully placed the satellite into a molniya orbit. It subsequently received its Kosmos designation, and the international designator 1977-027A. The United States Space Command assigned it the Satellite Catalog Number 9911. It was reported in History and the Current Status of the Russian Early-Warning System, that it self-destructed. The primary portion of it re-entered on August 4, 2014, but several pieces of its debris still remain in orbit. See also List of Kosmos satellites (751–1000) List of R-7 launches (1975–1979) 1977 in spaceflight List of Oko satellites References Kosmos satellites 1977 in spaceflight Oko Spacecraft launched by Molniya-M rockets Spacecraft launched in 1977 Spacecraft which reentered in 2014 Spacecraft that broke apart in space
Kosmos 903
Technology
281
6,197,910
https://en.wikipedia.org/wiki/RecipeML
Recipe Markup Language, formerly known as DESSERT (Document Encoding and Structuring Specification for Electronic Recipe Transfer), is an XML-based format for marking up recipes. The format was created in 2000 by the company FormatData. The format provides detailed markup for defining ingredients, which facilitates automated conversions from one type of measurement to another. The markup language also provides for step-based instructions. Metadata can be added to a RecipeML document through the Dublin Core. Software programs that read and write the RecipeML format include Largo Recipes. References External links XML-based standards XML markup languages
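To make the markup concrete, the following sketch builds a minimal RecipeML-style document using only Python's standard library. The element names used here (recipeml, recipe, head, title, ingredients, ing, amt, qty, unit, item, directions, step) follow the general structure described above, but they are illustrative assumptions that should be checked against the published DTD rather than taken as an authoritative rendering of the format.

import xml.etree.ElementTree as ET

def build_recipe():
    # Element names follow the general RecipeML shape described above;
    # verify them against the official DTD before relying on this sketch.
    root = ET.Element("recipeml", version="0.5")
    recipe = ET.SubElement(root, "recipe")
    head = ET.SubElement(recipe, "head")
    ET.SubElement(head, "title").text = "Plain Pancakes"
    ingredients = ET.SubElement(recipe, "ingredients")
    for qty, unit, item in [("2", "cup", "flour"), ("1", "cup", "milk"), ("1", "", "egg")]:
        ing = ET.SubElement(ingredients, "ing")
        amt = ET.SubElement(ing, "amt")
        ET.SubElement(amt, "qty").text = qty
        if unit:
            # Keeping quantity and unit in separate elements is what makes
            # automated unit conversion straightforward for consumers.
            ET.SubElement(amt, "unit").text = unit
        ET.SubElement(ing, "item").text = item
    directions = ET.SubElement(recipe, "directions")
    for step_text in ("Whisk the dry ingredients.",
                      "Add the milk and egg, then mix until smooth.",
                      "Fry on a hot griddle until golden."):
        ET.SubElement(directions, "step").text = step_text
    return root

if __name__ == "__main__":
    root = build_recipe()
    ET.indent(root)  # pretty-printing; requires Python 3.9+
    print(ET.tostring(root, encoding="unicode"))

Because quantity, unit and item live in separate elements, a consuming application can rescale or convert measurements (for example cups to millilitres) without parsing free text, which is what makes the automated conversions mentioned above practical.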
RecipeML
Technology
123
1,728,613
https://en.wikipedia.org/wiki/Guillaume%20de%20l%27H%C3%B4pital
Guillaume François Antoine, Marquis de l'Hôpital (; sometimes spelled L'Hospital; 1661 – 2 February 1704) was a French mathematician. His name is firmly associated with l'Hôpital's rule for calculating limits involving indeterminate forms 0/0 and ∞/∞. Although the rule did not originate with l'Hôpital, it appeared in print for the first time in his 1696 treatise on the infinitesimal calculus, entitled Analyse des Infiniment Petits pour l'Intelligence des Lignes Courbes. This book was a first systematic exposition of differential calculus. Several editions and translations to other languages were published and it became a model for subsequent treatments of calculus. Biography L'Hôpital was born into a military family. His father was Anne-Alexandre de l'Hôpital, a Lieutenant-General of the King's army, Comte de Saint-Mesme and the first squire of Gaston, Duke of Orléans. His mother was Elisabeth Gobelin, a daughter of Claude Gobelin, Intendant in the King's Army and Councilor of the State. L'Hôpital abandoned a military career due to poor eyesight and pursued his interest in mathematics, which was apparent since his childhood. For a while, he was a member of Nicolas Malebranche's circle in Paris and it was there that in 1691 he met young Johann Bernoulli, who was visiting France and agreed to supplement his Paris talks on infinitesimal calculus with private lectures to l'Hôpital at his estate at Oucques. In 1693, l'Hôpital was elected to the French academy of sciences and even served twice as its vice-president. Among his accomplishments were the determination of the arc length of the logarithmic graph, one of the solutions to the brachistochrone problem, and the discovery of a turning point singularity on the involute of a plane curve near an inflection point. L'Hôpital exchanged ideas with Pierre Varignon and corresponded with Gottfried Leibniz, Christiaan Huygens, and Jacob and Johann Bernoulli. His Traité analytique des sections coniques et de leur usage pour la résolution des équations dans les problêmes tant déterminés qu'indéterminés ("Analytic treatise on conic sections") was published posthumously in Paris in 1707. Calculus textbook In 1696 l'Hôpital published his book Analyse des Infiniment Petits pour l'Intelligence des Lignes Courbes ("Infinitesimal calculus with applications to curved lines"). This was the first textbook on infinitesimal calculus and it presented the ideas of differential calculus and their applications to differential geometry of curves in a lucid form and with numerous figures; however, it did not consider integration. The history leading to the book's publication became a subject of a protracted controversy. In a letter from 17 March 1694, l'Hôpital made the following proposal to Johann Bernoulli: in exchange for an annual payment of 300 Francs, Bernoulli would inform l'Hôpital of his latest mathematical discoveries, withholding them from correspondence with others, including Varignon. Bernoulli's immediate response has not been preserved, but he must have agreed soon, as the subsequent letters show. L'Hôpital may have felt fully justified in describing these results in his book, after acknowledging his debt to Leibniz and the Bernoulli brothers, "especially the younger one" (Johann). Johann Bernoulli grew increasingly unhappy with the accolades bestowed on l'Hôpital's work and complained in private correspondence about being sidelined. 
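For reference, since the rule is only named, not stated, in the text above: the treatise predates modern limit notation, so what follows is the standard textbook formulation rather than a quotation from the Analyse, together with a routine example.

% Standard modern statement of l'Hôpital's rule (a textbook formulation, not a quotation
% from the 1696 treatise). Fragment only; \text requires the amsmath package.
% Assume f and g are differentiable near a, with g'(x) \neq 0 near a (except possibly at a),
% and that f(x) and g(x) both tend to 0, or both tend to \pm\infty, as x \to a. Then
\[
  \lim_{x \to a} \frac{f(x)}{g(x)} \;=\; \lim_{x \to a} \frac{f'(x)}{g'(x)},
  \qquad \text{provided the right-hand limit exists or is } \pm\infty .
\]
% A routine example of the 0/0 form:
\[
  \lim_{x \to 0} \frac{\sin x}{x} \;=\; \lim_{x \to 0} \frac{\cos x}{1} \;=\; 1 .
\]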
After l'Hôpital's death, he publicly revealed their agreement and claimed credit for the statements and portions of the text of Analyse, which were supplied to l'Hôpital in letters. Over a period of many years, Bernoulli made progressively stronger allegations about his role in the writing of Analyse, culminating in the publication of his old work on integral calculus in 1742: he remarked that this is a continuation of his old lectures on differential calculus, which he discarded since l'Hôpital had already included them in his famous book. For a long time, these claims were not regarded as credible by many historians of mathematics, because l'Hôpital's mathematical talent was not in doubt, while Bernoulli was involved in several other priority disputes. For example, both H. G. Zeuthen and Moritz Cantor, writing at the cusp of the 20th century, dismissed Bernoulli's claims on these grounds. However, in 1921 Paul Schafheitlin discovered a manuscript of Bernoulli's lectures on differential calculus from 1691 to 1692 in the Basel University library. The text showed remarkable similarities to l'Hôpital's writing, substantiating Bernoulli's account of the book's origin. Personal life L'Hôpital married Marie-Charlotte de Romilley de La Chesnelaye, also a mathematician and a member of the nobility, and inheritor of large estates in Brittany. Together, they had one son and three daughters. L'Hôpital passed away at the age of 42. The exact cause of his death is not widely recorded, and historical sources do not provide specific details regarding the circumstances of his passing. Notes References Bibliography G. L'Hôpital, E. Stone, The Method of Fluxions, both direct and inverse; the former being a translation from de l'Hospital's "Analyse des infinements petits," and the latter, supplied by the translator, Edmund Stone, London, 1730 G. L'Hôpital, Analyse des Infiniment Petits pour l'Intelligence des Lignes Courbes, Paris, 1696 G. L'Hôpital, Analyse des infinement petits, Paris 1715 William Fox, Guillaume-François-Antoine de L'Hôpital, Catholic Encyclopedia, vol 7, New York, Robert Appleton Company, 1910 C. Truesdell The New Bernoulli Edition Isis, Vol. 49, No. 1. (Mar., 1958), pp. 54–62, discusses the strange agreement between Bernoulli and de l'Hôpital on pages 59–62. A.P. Yushkevich (ed), History of mathematics from the most ancient times to the beginning of the 19th century, vol 2, Mathematics of the 17th century (in Russian). Moscow, Nauka, 1970 External links 1661 births 1704 deaths 17th-century French mathematicians 18th-century French mathematicians History of calculus French mathematical analysts Officers of the French Academy of Sciences
Guillaume de l'Hôpital
Mathematics
1,368
70,660,219
https://en.wikipedia.org/wiki/Precrastination
Precrastination, defined as the act of completing tasks immediately, often at the expense of increased effort or diminished quality of outcomes, is a phenomenon observed in certain individuals. This approach is often adopted to avoid the anxiety and stress associated with last-minute work and procrastination. Precrastination is considered an unhealthy behavior pattern and is accompanied by symptoms such as conscientiousness, eagerness to please, and high energy. People who precrastinate may try to find shortcuts to be more efficient and productive, but this may result in the application of non-effective energy management and cause the person to fulfill their tasks to an incomplete or insufficient degree. Precrastinators may be more likely to act impulsively instead of carefully planning ahead. Etymology The word Precrastination is a blend of pre- (before) and -crastinus (until next day). It is most likely a play on the word “procrastination”, which has an opposite definition. Research Rosenbaum et al. coined the term "precrastination" in 2014. David A. Rosenbaum is a Professor in the Department of Psychology at the University of California, Riverside. The study found evidence of human precrastination and coined the term precrastination. The Bucket Experiments The paper titled "Pre-crastination: Hastening Subgoal Completion at the Expense of Extra Physical Effort" consisted of nine different experiments that can be termed The Bucket Experiments. The goal of the research was “to get a better understanding of the evaluation of different kinds of costs in action planning”. While confirming the belief that individuals would rather carry a load weight a shorter distance rather than an equally heavy load of weight a longer distance they found contradictory results. Most university students irrationally carried the closer bucket over a long distance, rather than the further bucket over a short distance to the endpoint. When interviewed, the participants answered that they wanted to "get the task done as quickly as possible". These surprising results from the first three experiments changed the goal of research to "describing and providing a theoretical interpretation of this astonishing phenomenon". The first three experiments The Bucket Experiments were conducted using Pennsylvania State University students as participants and started with a set of three experiments that involved having a participant walk down an alleyway and from there pick up one of two weighted buckets and carry it a distance further until a stop line. Contrary to the experimenters’ expectations, participants would rather pick up and carry the bucket that was closer to them than pick up the bucket closer to the stop line, meaning that the participant would expend more energy to complete the task than necessary. When participants were questioned about their choice, they expressed the sentiment of wanting to get the task done as soon as possible. Testing the new phenomenon The fourth experiment was used to confirm the finding of the previous three while reducing statistical variability by moving the starting position of the buckets six feet further from the participants. Previous results were upheld and the statistical variability for the conclusion of the approach distance being the primary factor for picking up the bucket from the left or right was also upheld. 
The fifth and sixth experiments were used to remove the aspect of foot-hand coordination from the task, this was done by placing the participants in wheelchairs while they completed the task. The results from this experiment were in line with the rest of the experiments. In experiment seven, the researcher investigated “the possibility that participants actually preferred long carrying distances to short carrying distances”. This was done by altering the total distance that a left or right bucket would need to be carried, rather than just altering the approach and carry distances of the buckets. The result of this experiment was that the theory that participants preferred long carrying distances wasn't supported and that the students still cared about the total distance of the task (approach + carrying distance). Suggesting that when deciding task methodology, both approach and total distance are considered. For the eighth experiment, the experimenters test the hypothesis that “participants may have been attracted to the nearer object, grabbing it without considering the farther object because their attention was grabbed by the object that was closer at hand”. This attention hypothesis was tested by placing a screen at the end of the alleyway that told the participants to wait before starting the task, this could be for a two or four-second interval. After this an okay message would appear and the participants could complete the task at their leisure. The researchers argue that the screen “was to get participants to look down the alley so they would see the far as well as the near bucket.” and that their rationale for showing the wait message “was to discourage participants from impulsively initiating their excursion into the alley”. Ultimately there were no significant deviations from the results of the previous experiments, meaning that the close-object preference was replicated. Finally, the ninth experiment was to confirm that there was a noticeable difference in physical exertion between the two options for carrying distances and that the more labor-intensive option would be avoided. This was done by having light and heavy bucket options for the task while repeating the setup of experiments one through three. The results from experiment nine confirmed a general preference for the lighter bucket. Thereby confirming in the researchers’ mind that exertion did matter to participants, and despite this would engage in close-object preference when performing the other experiments. Even when it would result in a longer carrying distance, and thereby more exertion. Further research Sequencing preferences In April 2018 Lisa R. Fournier, Alexandra M. Stubblefield, Brian P. Dyre, and David A. Rosenbaum published a research article titled ‘Starting or finishing sooner? Sequencing preferences in object transfer tasks. This article examined task sequencing in regard to object transferring tasks in which either of two possible tasks could easily or logically come before the other task. This was done in hopes of whether precrastination would generalize a consistent logic of which of the two tasks to do first. For the experiment, they tested various conditions that differed in regard to “which task can be started sooner, finished sooner, or bring one’s current state closer to the goal state”. The testing of the experiment was to have participants transfer ping pong balls from two original buckets, one bucket at a time, into a bowl that was placed at the end of a corridor. 
The number of balls in the buckets, as well as the bucket distances along the corridor, differed between trials. An important factor for this experiment is the fact that the bucket selection didn't affect the total time to complete both of the transport tasks together. Also, participants walked, in grand total, an equal total distance and carried the buckets the same total distance regardless of which bucket was originally chosen. The conclusions from the experiments can be summed up with this statement extracted from the publication “We found that the chosen task order was strongly affected by which task or sub-goal could be started sooner." and thus replicating the tendency to engage in precrastination that was first observed in 2014 Rosenbaum et al. The end conclusion of the article differed from the 2014 Rosenbaum study by stating that, instead of precrastination being fueled by the wish to complete sub-goals as quickly as possible, precrastination is more likely the result of the desire to initiate the sub-goals as soon as possible. Precrastination and cognitive load On November 30 of 2018 Lisa R. Fournier, Emily Coder, Clark Kogan, Nisha Raghunath, Ezana Taddese, and David A. Rosenbaum published a research article titled ‘Which task will we choose first? Precrastination and cognitive load in task ordering’. The article “examined the generality of this recently discovered phenomenon by extending the methods used to study it, mainly to test the hypothesis that precrastination is motivated by cognitive load reduction.” The researchers experimented using the basic experimental setup from The Bucket Experiments, with a few alterations. One, instead of participants having to pick up and carry one of two possible buckets along a corridor, participants in this set of experiments were instructed to pick up both buckets that were placed in the workspace. Two, the participants had to carry both buckets at the same time in order to complete the task. Due to the task requiring the participants to pass by the location of the first bucket twice, once while walking to the second bucket and once more while walking back toward the end goal, it was determined that the action of picking up the first bucket on the first walk constituted a display of precrastination. Three, the addition of having half the participants memorize digit lists on top of performing the base physical action task mentioned previously. Four, the experiment was broken into two separate experiments. In the first experiment, the objects to be carried were buckets with golf balls, with possible variations of the number of golf balls in the near bucket versus the far bucket. In the second experiment, the objects to be carried were cups that were either full or half full of water. The conclusion of the research is split into two parts. For experiment one the researchers had the following statement: “The results of Experiment 1 showed that precrastination generalizes to the ordering of tasks (or task subgoals)." This result in another finding that replicates and extends the original findings of Rosenbaum et al. from 2014. The extension is seen with the findings that participants with a memory load showed a high probability of selecting the near-bucket-first something that wasn't the case for participants without a memory load. This means that while previous research is upheld, there is additional support to the claim that precrastination is the result of cognitive load reduction. 
The following quote highlights the researchers' interpretation of their results for experiment two: "Participants in the water cup transport task largely abandoned the near-object-first preference". They believe this was the result of the participants having to "pay a great deal of extra attention to the task if they started with the near cup". This interpretation supports the hypothesis that precrastination is the result of a reduction of cognitive load: when precrastinating would itself increase the cognitive load, precrastination does not occur. End-state comfort and precrastination In January 2019, David A. Rosenbaum and Kyle S. Sauerberger published the research paper 'End-state comfort meets pre-crastination', with the goal of resolving an apparent conflict between the observed phenomena of precrastination and end-state comfort. End-state comfort can generally be defined as the tendency to act in physically awkward ways for the sake of comfortable or easy-to-control final postures. Through the course of the paper, the researchers determine that the two phenomena are in fact not contradictory and can interact to determine the final course of action. Studying precrastination in animals Precrastination has also been observed in animal behavior, as evidenced by Edward A. Wasserman and Stephen J. Brzykcy's 2014 paper 'Pre-crastination in the pigeon'. In order to test the possibility of pigeons displaying precrastination, the researchers devised a basic two-alternative forced-choice task, where the pigeons had to at one time or another switch from pecking the center of three horizontally aligned buttons to pecking one of two side buttons. For the differing trials, left or right was cued by red or green colors. The pigeons' center-to-side button switch could be completed sooner (in the second step) or later (in the third step) in the sequence, with the difference between these two sequences having no effect on the effort required to complete the task, the distance traveled over the course of the task, or the reward received from the task. As a result of this experiment, the researchers determined that pigeons displayed precrastination: despite the lack of any difference in the reward given for doing so, all of the pigeons "quickly came to peck the side location in Step 2 on which the upcoming star would next be presented in Step 3". Explanations Evolutionary perspective Precrastination often involves hasty and sudden behavior as well as split-second decision-making. Evolution played an important role in making precrastination the "default response option", as animals' quick reactions and decisions might have been significant to their survival. The brain evolved to anticipate certain situations and to approach them in an appropriate way. Subsequently, the adapted brain's ability to adjust and adapt to different contexts in a rapid manner and within a short timeframe allowed certain animals to have an advantage over others. A study from 2014, conducted by Wasserman and Brzykcy, showed further evidence that precrastination can be explained by evolution. The findings indicated that pigeons precrastinated. As the evolutionary ancestors of pigeons and humans went their separate ways roughly 300 million years ago, this supports the idea that the tendency to precrastinate may have emerged from a common ancestor. The suggested reasons why common ancestors would precrastinate are likewise linked to survival. 
Grabbing the lowest fruit from the tree, eating the grain while it is still nearby and getting food while it is still available, can be crucial for survival. Working memory off-loading perspective An alternative explanation for precrastination is related to decreasing the load on one's working memory. When a task is done immediately, it can be mentally checked off and does not need to be stored in one's working memory anymore. In contrast, when waiting to do a task the task-related information needs to be continuously remembered which can be taxing for one's working memory. Working memory is defined as the active manipulation and maintenance of short-term memory. In contrast to long-term memory, it has a limited capacity of about five to nine units and is readily accessible. In accordance to the limited capacity, Dr. Rosenbaum hypothesized that in order to off-load items stored within the working memory, people will try and complete certain tasks, even if this is at the expense of more physical effort. Further evidence for this perspective is shown in an experiment done by Lisa Fournier and co-authors in 2018. A similar task to The Bucket Experiments was given to participants, however, half of the participants were given an extra mental load to maintain while completing the task. This extra mental load was in the form of remembering a list of numbers that they had to recall after the task was completed. Recalling the list of numbers specifically targeted increases the mental load of the working memory of the participants. Dr. Fournier found that the participants with the extra mental load were ninety percent more likely to precrastinate during the task. It shows that people tend to structure their behavior in ways that decrease personal cognitive effort, which supports the idea of a trade-off between physical effort and cognitive load. In order to reduce the cognitive load, people are willing to put in extra physical effort. This suboptimal behavior is found within The Bucket Experiments. Individuals often chose to carry the closest bucket first, despite the fact that such action results in an increased physical effort required for competition of the task. One critique to this theory is that lifting a bucket does not tax the working memory that much, so it is not clear if this is the only thing that is contributing to precrastination. CLEAR hypothesis The working memory off-loading perspective was expanded on by the cognitive-load-reduction (CLEAR) hypothesis created by Rachel VonderHaar and coauthors in 2019. This hypothesis is aimed at explaining how the completion of tasks is ordered mentally. According to the CLEAR hypothesis, there is a strong drive to reduce the cognitive load, therefore tasks that are most efficient in doing so will be prioritized. This hypothesis takes a broader view of the cognitive load of a task and suggests that people will do what they can to free up cognitive resources. The study by VonderHaar et al. in 2019 specifically showed that when given the choice in what order to perform two different tasks, one being a physical task and the other being a cognitive task, participants were more likely to choose to undertake the cognitive task before the physical one. The CLEAR hypothesis includes off-loading working memory as well as freeing up other cognitive resources such as attention. This is illustrated by a study conducted by Raghunath et al. about precrastination and attention. 
The setup was similar to that of the bucket experiment, but then with half-full or full cups of water. This cup experiment showed that precrastination is dependent on the cognitive demands of the tasks. When the full cup of water was placed earlier and the half-cup later on, precrastination decreased as the full cup required more attention. This showed that when an earlier task requires more attention than a later one, the prevalence of precrastination decreases. The same was not seen when the weight of the buckets was adjusted. While precrastinating would increase the physical effort even more, participants still choose to do so. People who always precrastinated in the cup experiment even when the cognitive load was higher, mentioned that they did so out of habit. Reward perspective A simpler perspective is related to the rewards associated with completing a task. The reward center in the brain is activated when a task is completed that required less effort. The perspective suggests that individuals will perform tasks earlier as it is rewarded in the brain. Tasks that are completed earlier are more attractive, as this results in an instant reward. However, when a task requires a delay, this also delays the reward. Consequences As of 2022, not a lot is known about the positive and negative effects of precrastination. A possible positive effect that we derive from precrastinating is that we may relieve our working memory by getting a task done as soon as possible, thus making cognitive space for more important decisions. Another positive consequence may be that we are able to gain a lot of information as quickly as possible about the costs and benefits of task-related behaviors. Completing a task right away also enables us to feel an instant sense of accomplishment. However, precrastination can also lead to negative consequences due to behaving in an overall less efficient or irrational way. Making a rushed decision can also lead to completing a task incorrectly. Correlates Precrastination has been associated with different character traits. Impulsivity was expected to be among them because it has been evidenced to be a strong correlate of the contrary concept of procrastination. Results provided by the research of Sauerberger et al. showed that this trait might be unrelated to precrastination. In his published dissertation, further correlates were listed that have been found throughout studies of his research team. A negative correlation has been established between the trait of clumsiness and precrastination. Three of the big five personality traits have been investigated in more detail– conscientiousness, neuroticism, and agreeableness. Conscientiousness, neuroticism and agreeableness Conscientiousness has been shown to be positively associated with precrastination. Organization, responsibility, and productiveness are characteristics of it according to the big five inventory. Ego resilience, which resembles conscientiousness in many aspects, is also positively correlated to precrastination. Another trait is neuroticism for which the results showed no significant correlation. Depression, which is one of its features in the big five inventory next to anxiety and emotionality, has the strongest correlation with precrastination. Sauerberger stated that no observable correlation with neuroticism might be due to other factors, such as the participants' awareness of no consequence following their action and no feeling of uncertainty involved in the experiments as seen in real-life situations. 
He commented that there might nonetheless be a correlation. A correlation with agreeableness has been demonstrated. The relationship may be either positive or negative depending on the context; for example, precrastinators exhibit less prosocial behavior on planes but more prosocial behavior on the freeway. According to the big five inventory, the main facets of this trait are compassion, respect, and trust. References Habits Anxiety Motivation Psychological stress Time management Waste of resources
Precrastination
Physics,Biology
4,168
53,747,734
https://en.wikipedia.org/wiki/Pose%20tracking
In virtual reality (VR) and augmented reality (AR), a pose tracking system detects the precise pose of head-mounted displays, controllers, other objects or body parts within Euclidean space. Pose tracking is often referred to as 6DOF tracking, for the six degrees of freedom in which the pose is tracked. Pose tracking is sometimes referred to as positional tracking, but the two are distinct: pose tracking includes orientation, whereas positional tracking does not. In some consumer GPS systems, orientation data is added using magnetometers, which give partial orientation information, but not the full orientation that pose tracking provides. In VR, it is paramount that pose tracking is both accurate and precise so as not to break the illusion of being in a virtual world. Several methods of tracking the position and orientation (pitch, yaw and roll) of the display and any associated objects or devices have been developed to achieve this. Many methods utilize sensors which repeatedly record signals from transmitters on or near the tracked object(s), and then send that data to the computer in order to maintain an approximation of their physical locations. A popular tracking method is Lighthouse tracking. By and large, these physical locations are identified and defined using one or more of three coordinate systems: the Cartesian rectilinear system, the spherical polar system, and the cylindrical system. Many interfaces have also been designed to monitor and control one's movement within and interaction with the virtual 3D space; such interfaces must work closely with positional tracking systems to provide a seamless user experience. Another type of pose tracking used more often in newer systems is referred to as inside-out tracking, including Simultaneous localization and mapping (SLAM) or Visual-inertial odometry (VIO). One example of a device that uses inside-out pose tracking is the Oculus Quest 2. Wireless tracking Wireless tracking uses a set of anchors that are placed around the perimeter of the tracking space and one or more tags that are tracked. This system is similar in concept to GPS, but works both indoors and outdoors; it is sometimes referred to as indoor GPS. The tags triangulate their 3D position using the anchors placed around the perimeter (a minimal trilateration sketch is given below). A wireless technology called Ultra Wideband has enabled the position tracking to reach a precision of under 100 mm. By using sensor fusion and high-speed algorithms, the tracking precision can reach the 5 mm level with update rates of 200 Hz, or 5 ms latency. Pros: User experiences unconstrained movement Allows wider range of motion Provides absolute location instead of just relative location Cons: Low sampling rate can decrease accuracy Low latency rate relative to other sensors Optical tracking Optical tracking uses cameras placed on or around the headset to determine position and orientation based on computer vision algorithms. This method is based on the same principle as stereoscopic human vision. When a person looks at an object using binocular vision, they are able to judge approximately how far away the object is, owing to the difference in perspective between the two eyes. In optical tracking, cameras are calibrated to determine the distance to the object and its position in space. Optical systems are reliable and relatively inexpensive, but they can be difficult to calibrate. 
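The anchor-based triangulation described under wireless tracking above reduces to a small least-squares problem once the anchor positions are known. The following Python sketch is a minimal illustration: the anchor layout, the use of four anchors, and the linearisation trick are illustrative assumptions rather than a description of any particular UWB product, and real systems must first convert radio time-of-flight into distances and filter out measurement noise.

import numpy as np

def trilaterate(anchors, dists):
    """Estimate a 3D tag position from distances to four or more fixed anchors.

    anchors: (N, 3) array of known anchor coordinates (metres)
    dists:   (N,)  array of measured tag-to-anchor distances (metres)
    Returns the least-squares position estimate.
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    a0, d0 = anchors[0], dists[0]
    # Subtracting the first range equation from the others linearises the problem:
    #   2 (a_i - a_0) . p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: four anchors in a 4 m x 4 m room (not coplanar), tag at (1.0, 2.0, 1.5).
anchors = [(0, 0, 2.5), (4, 0, 2.5), (4, 4, 2.5), (0, 4, 0.5)]
true_pos = np.array([1.0, 2.0, 1.5])
dists = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print(trilaterate(anchors, dists))   # approximately [1.0, 2.0, 1.5]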
Furthermore, the system requires a direct line of light without occlusions, otherwise it will receive wrong data. Optical tracking can be done either with or without markers. Tracking with markers involves targets with known patterns to serve as reference points, and cameras constantly seek these markers and then use various algorithms (for example, POSIT algorithm) to extract the position of the object. Markers can be visible, such as printed QR codes, but many use infrared (IR) light that can only be picked up by cameras. Active implementations feature markers with built-in IR LED lights which can turn on and off to sync with the camera, making it easier to block out other IR lights in the tracking area. Passive implementations are retroreflectors which reflect the IR light back towards the source with little scattering. Markerless tracking does not require any pre-placed targets, instead using the natural features of the surrounding environment to determine position and orientation. Outside-in tracking In this method, cameras are placed in stationary locations in the environment to track the position of markers on the tracked device, such as a head mounted display or controllers. Having multiple cameras allows for different views of the same markers, and this overlap allows for accurate readings of the device position. The original Oculus Rift utilizes this technique, placing a constellation of IR LEDs on its headset and controllers to allow external cameras in the environment to read their positions. This method is the most mature, having applications not only in VR but also in motion capture technology for film. However, this solution is space-limited, needing external sensors in constant view of the device. Pros: More accurate readings, can be improved by adding more cameras Lower latency than inside-out tracking Cons: Occlusion, cameras need direct line of sight or else tracking will not work Necessity of outside sensors means limited play space area Inside-out tracking In this method, the camera is placed on the tracked device and looks outward to determine its location in the environment. Headsets that use this tech have multiple cameras facing different directions to get views of its entire surroundings. This method can work with or without markers. The Lighthouse system used by the HTC Vive is an example of active markers. Each external Lighthouse module contains IR LEDs as well as a laser array that sweeps in horizontal and vertical directions, and sensors on the headset and controllers can detect these sweeps and use the timings to determine position. Markerless tracking, such as on the Oculus Quest, does not require anything mounted in the outside environment. It uses cameras on the headset for a process called SLAM, or simultaneous localization and mapping, where a 3D map of the environment is generated in real time. Machine learning algorithms then determine where the headset is positioned within that 3D map, using feature detection to reconstruct and analyze its surroundings. This tech allows high-end headsets like the Microsoft HoloLens to be self-contained, but it also opens the door for cheaper mobile headsets without the need of tethering to external computers or sensors. Pros: Enables larger play spaces, can expand to fit room Adaptable to new environments Cons: More on-board processing required Latency can be higher Inertial tracking Inertial tracking use data from accelerometers and gyroscopes, and sometimes magnetometers. Accelerometers measure linear acceleration. 
Since the derivative of position with respect to time is velocity and the derivative of velocity is acceleration, the output of the accelerometer could be integrated to find the velocity and then integrated again to find the position relative to some initial point. Gyroscopes measure angular velocity. Angular velocity can be integrated as well to determine angular position relatively to the initial point. Magnetometers measure magnetic fields and magnetic dipole moments. The direction of Earth's magnetic field can be integrated to have an absolute orientation reference and to compensate for gyroscopic drifts. Modern inertial measurement units systems (IMU) are based on MEMS technology allows to track the orientation (roll, pitch, yaw) in space with high update rates and minimal latency. Gyroscopes are always used for rotational tracking, but different techniques are used for positional tracking based on factors like cost, ease of setup, and tracking volume. Dead reckoning is used to track positional data, which alters the virtual environment by updating motion changes of the user. The dead reckoning update rate and prediction algorithm used in a virtual reality system affect the user experience, but there is no consensus on best practices as many different techniques have been used. It is hard to rely only on inertial tracking to determine the precise position because dead reckoning leads to drift, so this type of tracking is not used in isolation in virtual reality. A lag between the user's movement and virtual reality display of more than 100ms has been found to cause nausea. Inertial sensors are not only capable of tracking rotational movement (roll, pitch, yaw), but also translational movement. These two types of movement together are known as the Six degrees of freedom. Many applications of virtual reality need to not only track the users’ head rotations, but also how their bodies move with them (left/right, back/forth, up/down). Six degrees of freedom capability is not necessary for all virtual reality experiences, but it is useful when the user needs to move things other than their head. Pros: Can track fast movements well relative to other sensors, and especially well when combined with other sensors Capable of high update rates Cons: Prone to errors, which accumulate quickly, due to dead reckoning Any delay or miscalculations when determining position can lead to symptoms in the user such as nausea or headaches May not be able to keep up with a user who is moving too fast Inertial sensors can typically only be used in indoor and laboratory environments, so outdoor applications are limited Sensor fusion Sensor fusion combines data from several tracking algorithms and can yield better outputs than only one technology. One of the variants of sensor fusion is to merge inertial and optical tracking. These two techniques are often used together because while inertial sensors are optimal for tracking fast movements they also accumulate errors quickly, and optical sensors offer absolute references to compensate for inertial weaknesses. Further, inertial tracking can offset some shortfalls of optical tracking. For example, optical tracking can be the main tracking method, but when an occlusion occurs inertial tracking estimates the position until the objects are visible to the optical camera again. Inertial tracking could also generate position data in-between optical tracking position data because inertial tracking has higher update rate. 
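The surrounding passages describe both dead-reckoning integration and the blending of inertial and optical data, so a compact sketch of the combined idea is given here. The 200 Hz sample period, the correction gain, and the intermittent "optical fix" are illustrative assumptions; production systems typically use a Kalman-style filter rather than this simple complementary blend.

import numpy as np

DT = 0.005       # 200 Hz IMU sample period (assumed)
ALPHA = 0.02     # gain pulling the inertial estimate toward the optical fix (assumed)

def fuse_step(pos, vel, accel, optical_pos=None):
    # Dead reckoning: integrate acceleration to velocity, then velocity to position.
    vel = vel + np.asarray(accel) * DT
    pos = pos + vel * DT
    # When the optical tracker supplies an absolute fix (i.e. no occlusion),
    # blend it in to cancel the drift that pure integration accumulates.
    if optical_pos is not None:
        pos = (1.0 - ALPHA) * pos + ALPHA * np.asarray(optical_pos)
    return pos, vel

pos, vel = np.zeros(3), np.zeros(3)
for i in range(1, 1001):
    t = i * DT
    accel = np.array([0.1, 0.0, 0.0])   # fake IMU data: constant forward acceleration
    optical = np.array([0.05 * t * t, 0.0, 0.0]) if i % 10 == 0 else None  # 20 Hz optical fixes
    pos, vel = fuse_step(pos, vel, accel, optical)
print(pos)   # stays close to the ideal 0.5 * a * t^2 trajectory

Between optical fixes the estimate drifts, exactly as described above, and each fix pulls it back toward the absolute reference.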
Optical tracking also helps to cope with a drift of inertial tracking. Combining optical and inertial tracking has shown to reduce misalignment errors that commonly occur when a user moves their head too fast. Microelectrical magnetic systems advancements have made magnetic/electric tracking more common due to their small size and low cost. Acoustic tracking Acoustic tracking systems use techniques for identifying an object or device's position similar to those found naturally in animals that use echolocation. Analogous to bats locating objects using differences in soundwave return times to their two ears, acoustic tracking systems in VR may use sets of at least three ultrasonic sensors and at least three ultrasonic transmitters on devices in order to calculate the position and orientation of an object (e.g. a handheld controller). There are two ways to determine the position of the object: to measure time-of-flight of the sound wave from the transmitter to the receivers or the phase coherence of the sinusoidal sound wave by receiving the transfer. Time-of-flight methods Given a set of three noncollinear sensors (or receivers) with distances between them d1 and d2, as well as the travel times of an ultrasonic soundwave (a wave with frequency greater than 20 kHz) from a transmitter to those three receivers, the relative Cartesian position of the transmitter can be calculated as follows:Here, each li represents the distance from the transmitter to each of the three receivers, calculated based on the travel time of the ultrasonic wave using the equation l = ctus. The constant c denotes the speed of sound, which is equal to 343.2 m/s in dry air at temperature 20°C. Because at least three receivers are required, these calculations are commonly known as triangulation. Beyond its position, determining a device's orientation (i.e. its degree of rotation in all directions) requires at least three noncollinear points on the tracked object to be known, mandating the number of ultrasonic transmitters to be at least three per device tracked in addition to the three aforementioned receivers. The transmitters emit ultrasonic waves in sequence toward the three receivers, which can then be used to derive spatial data on the three transmitters using the methods described above. The device's orientation can then be derived based on the known positioning of the transmitters upon the device and their spatial locations relative to one another. Phase-coherent methods As opposed to TOF methods, phase-coherent (PC) tracking methods have also been used to locate object acoustically. PC tracking involves comparing the phase of the current soundwave received by sensors to that of a prior reference signal, such that one can determine the relative change in position of transmitters from the last measurement. Because this method operates only on observed changes in position values, and not on absolute measurements, any errors in measurement tend to compound over more observations. Consequently, this method has lost popularity with developers over time. Pros: Accurate measurement of coordinates and angles Sensors are small and light, allowing more flexibility in how they are incorporated into design. Devices are cheap and simple to produce. No electromagnetic interference Cons: Variability of the speed of sound based on the temperature, atmospheric pressure, and humidity of one's environment can cause error in distance calculations. 
Range is limited, and requires a direct line of sight between emitters and receivers Compared to other methods, the largest possible sampling frequency is somewhat small (approximately a few dozen Hz) due to the relatively low speed of sound in air. This can create measurement delays as large as a few dozen milliseconds, unless sensor fusion is used to augment the ultrasound measurements Acoustic interference (i.e. other sounds in the surrounding environment) can hinder readings. In summary, implementation of acoustic tracking is optimal in cases where one has total control over the ambient environment that the VR or AR system resides in, such as a flight simulator. Magnetic tracking Magnetic tracking relies on measuring the intensity of inhomogenous magnetic fields with electromagnetic sensors. A base station, often referred to as the system's transmitter or field generator, generates an alternating or a static electromagnetic field, depending on the system's architecture. To cover all directions in the three dimensional space, three magnetic fields are generated sequentially. The magnetic fields are generated by three electromagnetic coils which are perpendicular to each other. These coils should be put in a small housing mounted on a moving target which position is necessary to track. Current, sequentially passing through the coils, turns them into electromagnets, which allows them to determine their position and orientation in space. Because magnetic tracking does not require a head-mounted display, which are frequently used in virtual reality, it is often the tracking system used in fully immersive virtual reality displays. Conventional equipment like head-mounted displays are obtrusive to the user in fully enclosed virtual reality experiences, so alternative equipment such as that used in magnetic tracking is favored. Magnetic tracking has been implemented by Polhemus and in Razer Hydra by Sixense. The system works poorly near any electrically conductive material, such as metal objects and devices, that can affect an electromagnetic field. Magnetic tracking worsens as the user moves away from the base emitter, and scalable area is limited and can't be bigger than 5 meters. Pros: Uses unobtrusive equipment that does not need to be worn by user, and does not interfere with the virtual reality experience Suitable for fully immersive virtual reality displays Cons: User needs to be close to base emitter Tracking worsens near metals or objects that interfere with the electromagnetic field Tend to have a lot of error and jitter due to frequent calibration requirements See also 3D pose estimation Head-mounted display Indoor positioning system Finger tracking FreeTrack Motion capture Simultaneous localization and mapping Tracking system References Bibliography Virtual reality Applications of computer vision Tracking Metaverse
Pose tracking
Technology
3,224
1,555,604
https://en.wikipedia.org/wiki/Gynandromorphism
A gynandromorph is an organism that contains both male and female characteristics. The term comes from the Greek γυνή (gynē) 'female', ἀνήρ (anēr) 'male', and μορφή (morphē) 'form', and is used mainly in the field of entomology. Gynandromorphism is most frequently recognized in organisms that have strong sexual dimorphism such as certain butterflies, spiders, and birds, but has been recognized in numerous other types of organisms. Occurrence Gynandromorphism has been noted in Lepidoptera (butterflies and moths) since the 1700s. It has also been observed in crustaceans, such as lobsters and crabs, in spiders, ticks, flies, locusts, crickets, dragonflies, ants, termites, bees, lizards, snakes, rodents, and birds. It is generally rare but reporting depends on ease of detecting it (whether a species is strongly sexually dimorphic) and how well-studied a region or organism is. For example, up until 2023 gynandromorphism had been reported in more than 40 bird species, but the vast majority of these are from the Palearctic and Nearctic, indicating that it likely is underreported in parts of the world that are not as biologically well-studied. Pattern of distribution of male and female tissues in a single organism A gynandromorph can have bilateral symmetry—one side female and one side male. Alternatively, the distribution of male and female tissue can be more haphazard. Bilateral gynandromorphy arises very early in development, typically when the organism has between 8 and 64 cells. Later stages produce a more random pattern. A notable example in birds is the zebra finch. These birds have lateralised brain structures in the face of a common steroid signal, providing strong evidence for a non-hormonal primary sex mechanism regulating brain differentiation. Causes The cause of this phenomenon is typically (but not always) an event in mitosis during early development. While the organism contains only a few cells, one of the dividing cells does not split its sex chromosomes typically. This leads to one of the two cells having sex chromosomes that cause male development and the other cell having chromosomes that cause female development. For example, an XY cell undergoing mitosis duplicates its chromosomes, becoming XXYY. Usually this cell would divide into two XY cells, but in rare occasions the cell may divide into an X cell and an XYY cell. If this happens early in development, then a large portion of the cells are X and a large portion are XYY. Since X and XYY dictate different sexes, the organism has tissue that is female and tissue that is male. A developmental network theory of how gynandromorphs develop from a single cell based on a working paper links between parental allelic chromosomes was proposed in 2012. The major types of gynandromorphs, bilateral, polar and oblique are computationally modeled. Many other possible gynandromorph combinations are computationally modeled, including predicted morphologies yet to be discovered. The article relates gynandromorph developmental control networks to how species may form. The models are based on a computational model of bilateral symmetry. As a research tool Gynandromorphs occasionally afford a powerful tool in genetic, developmental, and behavioral analyses. 
In Drosophila melanogaster, for instance, they provided evidence that male courtship behavior originates in the brain, that males can distinguish conspecific females from males by the scent or some other characteristic of the posterior, dorsal integument of females, that the germ cells originate in the posterior-most region of the blastoderm, and that somatic components of the gonads originate in the mesodermal region of the fourth and fifth abdominal segments. See also Mosaicism Androgyny Chimerism Gynomorph Half-sider budgerigar Hermaphrodite References External links "Stunning Dual-Sex Animals" at Live Science Aayushi Pratap: This rare bird is male on one side and female on the other; on: Sciencenews; October 6, 2020; about a gynandromorph rose-breasted grosbeak. Insect physiology Sexual dimorphism
Gynandromorphism
Physics,Biology
916
66,628,662
https://en.wikipedia.org/wiki/Porodaedalea%20chrysoloma
Porodaedalea chrysoloma is a species of fungus belonging to the family Hymenochaetaceae. It is distributed across central Europe and is also found in the south of Sweden, Norway and Finland. P. chrysoloma can be found parasitising Norway spruce, typically on the branches. It is considered a key species of old-growth boreal forests. In Sweden, P. chrysoloma is classified as near threatened on the Swedish Red List due to the loss of its habitat. Porodaedalea abietis (also known as Porodaedalea laricis) is a sister species of Porodaedalea chrysoloma. Their main morphological difference is in the hymenium pores: P. chrysoloma has elongated, daedaleoid to labyrinthine irregular pores, while P. abietis has more regular, cylindrical and some elongated pores. References Hymenochaetaceae Fungus species
Porodaedalea chrysoloma
Biology
205
71,380,095
https://en.wikipedia.org/wiki/Twin-width
The twin-width of an undirected graph is a natural number associated with the graph, used to study the parameterized complexity of graph algorithms. Intuitively, it measures how similar the graph is to a cograph, a type of graph that can be reduced to a single vertex by repeatedly merging together twins, vertices that have the same neighbors. The twin-width is defined from a sequence of repeated mergers where the vertices are not required to be twins, but have nearly equal sets of neighbors. Definition Twin-width is defined for finite simple undirected graphs. These have a finite set of vertices, and a set of edges that are unordered pairs of vertices. The open neighborhood of any vertex is the set of other vertices that it is paired with in edges of the graph; the closed neighborhood is formed from the open neighborhood by including the vertex itself. Two vertices are true twins when they have the same closed neighborhood, and false twins when they have the same open neighborhood; more generally, both true twins and false twins can be called twins, without qualification. The cographs have many equivalent definitions, but one of them is that these are the graphs that can be reduced to a single vertex by a process of repeatedly finding any two twin vertices and merging them into a single vertex. For a cograph, this reduction process will always succeed, no matter which choice of twins to merge is made at each step. For a graph that is not a cograph, it will always get stuck in a subgraph with more than two vertices that has no twins. The definition of twin-width mimics this reduction process. A contraction sequence, in this context, is a sequence of steps, beginning with the given graph, in which each step replaces a pair of vertices by a single vertex. This produces a sequence of graphs, with edges colored red and black; in the given graph, all edges are assumed to be black. When two vertices are replaced by a single vertex, the neighborhood of the new vertex is the union of the neighborhoods of the replaced vertices. In this new neighborhood, an edge that comes from black edges in the neighborhoods of both vertices remains black; all other edges are colored red. A contraction sequence is called a d-sequence if, throughout the sequence, every vertex touches at most d red edges. The twin-width of a graph is the smallest value of d for which it has a d-sequence. A dense graph may still have bounded twin-width; for instance, the cographs include all complete graphs. A variation of twin-width, sparse twin-width, applies to families of graphs rather than to individual graphs. For a family of graphs that is closed under taking induced subgraphs and has bounded twin-width, the following properties are equivalent: The graphs in the family are sparse, meaning that they have a number of edges bounded by a linear function of their number of vertices. The graphs in the family exclude some fixed complete bipartite graph as a subgraph. The family of all subgraphs of graphs in the given family has bounded twin-width. The family has bounded expansion, meaning that all its shallow minors are sparse. Such a family is said to have bounded sparse twin-width. The concept of twin-width can be generalized from graphs to various totally ordered structures (including graphs equipped with a total ordering on their vertices), and is in many ways simpler for ordered structures than for unordered graphs.
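Because the definition above is operational, a small amount of code can make the contraction step and the red-degree bound concrete. The Python sketch below is a minimal illustration under simple assumptions (dict-of-sets adjacency, a fresh name for each merged vertex), not an efficient implementation; all identifiers are made up for the example. It replays the 1-sequence for a path graph that is discussed further below.

```python
# Minimal sketch of a contraction step on a trigraph (black and red edges),
# following the recoloring rule described above. Illustrative only.

def contract(black, red, u, v, merged):
    """Replace vertices u and v by a single new vertex named `merged`.

    black, red: dicts mapping each vertex to the set of its black / red
    neighbors (kept symmetric). An edge to the merged vertex stays black
    only if it was a black edge of both u and v; every other inherited
    edge becomes red.
    """
    stays_black = (black[u] & black[v]) - {u, v}
    inherited = (black[u] | black[v] | red[u] | red[v]) - {u, v}
    for adjacency in (black, red):           # remove u and v everywhere
        for old in (u, v):
            for other in adjacency.pop(old):
                adjacency[other].discard(old)
    black[merged] = set(stays_black)
    red[merged] = inherited - stays_black
    for other in black[merged]:
        black[other].add(merged)
    for other in red[merged]:
        red[other].add(merged)

def max_red_degree(red):
    return max((len(neighbors) for neighbors in red.values()), default=0)

# Example: the path a-b-c-d. Repeatedly merging the last two vertices keeps
# the red degree at 1, witnessing that longer paths have twin-width one.
black = {x: set() for x in "abcd"}
red = {x: set() for x in "abcd"}
for x, y in zip("abc", "bcd"):
    black[x].add(y)
    black[y].add(x)
contract(black, red, "c", "d", "cd")    # creates one red edge b-cd
print(max_red_degree(red))              # 1
contract(black, red, "b", "cd", "bcd")  # red edge a-bcd replaces it
print(max_red_degree(red))              # 1
```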
It is also possible to formulate equivalent definitions for other notions of graph width using contraction sequences with different requirements than having bounded red degree. Graphs of bounded twin-width Cographs have twin-width zero. In the reduction process for cographs, there will be no red edges: when two vertices are merged, their neighborhoods are equal, so there are no edges coming from only one of the two neighborhoods to be colored red. In any other graph, any contraction sequence will produce some red edges, and the twin-width will be greater than zero. The path graphs with at most three vertices are cographs, but every larger path graph has twin-width one. For a contraction sequence that repeatedly merges the last two vertices of the path, only the edge incident to the single merged vertex will be red, so this is a 1-sequence. Trees have twin-width at most two, and for some trees this is tight. A 2-contraction sequence for any tree may be found by choosing a root, and then repeatedly merging two leaves that have the same parent or, if this is not possible, merging the deepest leaf into its parent. The only red edges connect leaves to their parents, and when there are two at the same parent they can be merged, keeping the red degree at most two. More generally, the following classes of graphs have bounded twin-width, and a contraction sequence of bounded width can be found for them in polynomial time: Every graph of bounded clique-width, or of bounded rank-width, also has bounded twin-width. The twin-width is at most exponential in the clique-width, and at most doubly exponential in the rank-width. These graphs include, for instance, the distance-hereditary graphs, the k-leaf powers for bounded values of k, and the graphs of bounded treewidth. Indifference graphs (equivalently, unit interval graphs or proper interval graphs) have twin-width at most two. Unit disk graphs defined from sets of unit disks that cover each point of the plane a bounded number of times have bounded twin-width. The same is true for unit ball graphs in higher dimensions. The permutation graphs coming from permutations with a forbidden permutation pattern have bounded twin-width. This allows twin-width to be applied to algorithmic problems on permutations with forbidden patterns. Every family of graphs defined by forbidden minors has bounded twin-width. For instance, by Wagner's theorem, the forbidden minors for planar graphs are the two graphs K5 and K3,3, so the planar graphs have bounded twin-width. Every graph of bounded stack number or bounded queue number also has bounded twin-width. There exist families of graphs of bounded sparse twin-width that do not have bounded stack number, but the corresponding question for queue number remains open. The strong product of any two graphs of bounded twin-width, one of which has bounded degree, again has bounded twin-width. This can be used to prove the bounded twin-width of classes of graphs that have decompositions into strong products of paths and bounded-treewidth graphs, such as the k-planar graphs. For the lexicographic product of graphs, the twin-width is exactly the maximum of the widths of the two factor graphs. Twin-width also behaves well under several other standard graph products, but not the modular product of graphs.
In every hereditary family of graphs of bounded twin-width, it is possible to find a family of total orders for the vertices of its graphs so that the inherited ordering on an induced subgraph is also an ordering in the family, and so that the family is small with respect to these orders. This means that, for a total order on n vertices, the number of graphs in the family consistent with that order is at most singly exponential in n. Conversely, every hereditary family of ordered graphs that is small in this sense has bounded twin-width. It was originally conjectured that every hereditary family of labeled graphs that is small, in the sense that the number of graphs is at most a singly exponential factor times n!, has bounded twin-width. However, this conjecture was disproved using a family of induced subgraphs of an infinite Cayley graph that are small as labeled graphs but do not have bounded twin-width. There exist graphs of unbounded twin-width within the following families of graphs: Graphs of bounded degree. Interval graphs. Unit disk graphs. In each of these cases, the result follows by a counting argument: there are more graphs of the given type than there can be graphs of bounded twin-width. Properties If a graph has bounded twin-width, then it is possible to find a versatile tree of contractions. This is a large family of contraction sequences, all of some (larger) bounded width, so that at each step in each sequence there are linearly many disjoint pairs of vertices each of which could be contracted at the next step in the sequence. It follows from this that the number of graphs of bounded twin-width on any set of n given vertices is larger than n! by only a singly exponential factor, that the graphs of bounded twin-width have an adjacency labelling scheme with only a logarithmic number of bits per vertex, and that they have universal graphs of polynomial size in which each n-vertex graph of bounded twin-width can be found as an induced subgraph. Algorithms The graphs of twin-width at most one can be recognized in polynomial time. However, it is NP-complete to determine whether a given graph has twin-width at most four, and NP-hard to approximate the twin-width with an approximation ratio better than 5/4. Under the exponential time hypothesis, computing the twin-width of n-vertex graphs requires time at least exponential in n/log n. In practice, it is possible to compute the twin-width of graphs of moderate size using SAT solvers. For most of the known families of graphs of bounded twin-width, it is possible to construct a contraction sequence of bounded width in polynomial time. Once a contraction sequence has been given or constructed, many different algorithmic problems can be solved using it, in many cases more efficiently than is possible for graphs that do not have bounded twin-width. As detailed below, these include exact parameterized algorithms and approximation algorithms for NP-hard problems, as well as some problems that have classical polynomial time algorithms but can nevertheless be sped up using the assumption of bounded twin-width. Parameterized algorithms An algorithmic problem on graphs having an associated parameter k is called fixed-parameter tractable if it has an algorithm that, on graphs with n vertices and parameter value k, runs in time f(k)·n^c for some constant c and computable function f. For instance, a running time of O(2^k·n^2) would be fixed-parameter tractable in this sense.
This style of analysis is generally applied to problems that do not have a known polynomial-time algorithm, because otherwise fixed-parameter tractability would be trivial. Many such problems have been shown to be fixed-parameter tractable with twin-width as a parameter, when a contraction sequence of bounded width is given as part of the input. This applies, in particular, to the graph families of bounded twin-width listed above, for which a contraction sequence can be constructed efficiently. However, it is not known how to find a good contraction sequence for an arbitrary graph of low twin-width, when no other structure in the graph is known. The fixed-parameter tractable problems for graphs of bounded twin-width with given contraction sequences include: Testing whether the given graph models any given property in the first-order logic of graphs. Here, both the twin-width and the description length of the property are parameters of the analysis. Problems of this type include subgraph isomorphism for subgraphs of bounded size, and the vertex cover and dominating set problems for covers or dominating sets of bounded size. The dependence of these general methods on the length of the logical formula describing the property is tetrational, but for independent set, dominating set, and related problems it can be reduced to exponential in the size of the independent or dominating set, and for subgraph isomorphism it can be reduced to factorial in the number of vertices of the subgraph. For instance, the time to find a k-vertex independent set, for an n-vertex graph with a given d-sequence, is 2^{O(dk)}·n, by a dynamic programming algorithm that considers small connected subgraphs of the red graphs in the forward direction of the contraction sequence. These time bounds are optimal, up to logarithmic factors in the exponent, under the exponential time hypothesis. For an extension of the first-order logic of graphs to graphs with totally ordered vertices, and logical predicates that can test this ordering, model checking is still fixed-parameter tractable for hereditary graph families of bounded twin-width, but not (under standard complexity-theoretic assumptions) for hereditary families of unbounded twin-width. Coloring graphs of bounded twin-width, using a number of colors that is bounded by a function of their twin-width and of the size of their largest clique. For instance, triangle-free graphs of twin-width d can be (d + 2)-colored by a greedy coloring algorithm that colors vertices in the reverse of the order they were contracted away. This result shows that the graphs of bounded twin-width are χ-bounded. For graph families of bounded sparse twin-width, the generalized coloring numbers are bounded. Here, the generalized coloring number col_r is at most k if the vertices can be linearly ordered in such a way that each vertex can reach at most k earlier vertices in the ordering, through paths of length at most r through later vertices in the ordering. Speedups of classical algorithms In graphs of bounded twin-width, it is possible to perform a breadth-first search, on a graph with n vertices, in time nearly linear in n, even when the graph is dense and has more edges than this time bound. Approximation algorithms Twin-width has also been applied in approximation algorithms. In particular, in the graphs of bounded twin-width, it is possible to find an approximation to the minimum dominating set with bounded approximation ratio.
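Before the approximation guarantees continue below, the greedy coloring mentioned a few paragraphs above is simple enough to sketch. The Python fragment below is illustrative only: it assumes the contraction order is supplied as a list of the original vertices in the order they were contracted away, and it applies the ordinary first-fit rule in the reverse of that order; the quality guarantee depends on that order coming from a width-d contraction sequence, which this sketch does not construct.

```python
def greedy_color_reverse(adjacency, removal_order):
    """First-fit coloring in the reverse of a contraction (removal) order.

    adjacency: dict mapping each original vertex to the set of its
    neighbors in the original graph. removal_order: the original vertices
    listed in the order they were contracted away, ending with the last
    survivor. Returns a dict mapping each vertex to a color 0, 1, 2, ...
    """
    color = {}
    for v in reversed(removal_order):
        taken = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# Example: a 4-cycle a-b-c-d-a, contracted away in the order d, c, b, a.
adjacency = {"a": {"b", "d"}, "b": {"a", "c"},
             "c": {"b", "d"}, "d": {"a", "c"}}
print(greedy_color_reverse(adjacency, ["d", "c", "b", "a"]))  # a proper 2-coloring
```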
This is in contrast to more general graphs, for which it is NP-hard to obtain an approximation ratio that is better than logarithmic. The maximum independent set and graph coloring problems can be approximated to within an approximation ratio of n^ε, for every ε > 0, in polynomial time on graphs of bounded twin-width. In contrast, without the assumption of bounded twin-width, it is NP-hard to achieve any approximation ratio of this form with ε < 1. References Further reading Graph invariants
Twin-width
Mathematics
2,844
24,116,257
https://en.wikipedia.org/wiki/C15H14O7
The molecular formula C15H14O7 may refer to: Epigallocatechin Fusarubin, a naphthoquinone antibiotic Gallocatechol Leucocyanidin, a leucoanthocyanidin Melacacidin, a leucoanthocyanidin Molecular formulas
C15H14O7
Physics,Chemistry
83
5,171,566
https://en.wikipedia.org/wiki/ISA-88
S88, shorthand for ANSI/ISA88, is a standard addressing batch process control. It is a design philosophy for describing equipment and procedures. It is not a standard for software and is equally applicable to manual processes. It was approved by the ISA in 1995 and updated in 2010. Its original version was adopted by the IEC in 1997 as IEC 61512-1. The current parts of the S88 standard include: Models and terminology Data structures and guidelines for languages General and site recipe models and representation Batch Production Records Machine and Unit States: An Implementation Example of ISA-88 S88 provides a consistent set of standards and terminology for batch control and defines the physical model, procedures, and recipes. The standard sought to address the following problems: lack of a universal model for batch control, difficulty in communicating user requirements, difficulty of integration among batch automation suppliers, and difficulty in batch-control configuration. The standard defines a process model in which a process consists of an ordered set of process stages, each of which consists of an ordered set of process operations, which in turn consist of an ordered set of process actions. The physical model begins with the enterprise, which may contain a site, which may contain areas, which may contain process cells, which must contain a unit, which may contain equipment modules, which may contain control modules. Some of these levels may be excluded, but not the Unit. The procedural control model consists of recipe procedures, which consist of an ordered set of unit procedures, which consist of an ordered set of operations, which consist of an ordered set of phases. Some of these levels may be excluded. Recipes can have the following types: general, site, master, and control. The contents of a recipe include: header, formula, equipment requirements, procedure, and other information required to make the recipe. Implemented in other standards As in PackML, the Machine and Unit States described by this standard are implemented in other standards. References Quality control American National Standards Institute standards
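The nested physical and procedural models described above map naturally onto a containment hierarchy in code. The Python sketch below is only an illustration of that nesting, not an implementation of the standard: class and field names are chosen for readability, the enterprise/site/area levels are omitted for brevity, and the check that a process cell contains at least one unit is a simplified reading of the rule stated above.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative containment hierarchy loosely following the S88 physical and
# procedural models described above (enterprise, site, and area omitted).

@dataclass
class ControlModule:
    name: str

@dataclass
class EquipmentModule:
    name: str
    control_modules: List[ControlModule] = field(default_factory=list)

@dataclass
class Unit:
    name: str
    equipment_modules: List[EquipmentModule] = field(default_factory=list)

@dataclass
class ProcessCell:
    name: str
    units: List[Unit]  # simplified reading: a cell must contain a unit

    def __post_init__(self):
        if not self.units:
            raise ValueError("a process cell must contain at least one unit")

@dataclass
class Phase:
    name: str

@dataclass
class Operation:
    name: str
    phases: List[Phase] = field(default_factory=list)

@dataclass
class UnitProcedure:
    name: str
    operations: List[Operation] = field(default_factory=list)

@dataclass
class RecipeProcedure:
    name: str
    unit_procedures: List[UnitProcedure] = field(default_factory=list)

# Example: one reactor unit inside a cell, and a recipe with a two-phase
# mixing operation carried out by a single unit procedure.
cell = ProcessCell("Cell-1", units=[Unit("Reactor-1")])
recipe = RecipeProcedure(
    "Make batch",
    unit_procedures=[UnitProcedure(
        "React",
        operations=[Operation("Mix", phases=[Phase("Charge"), Phase("Agitate")])],
    )],
)
```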
ISA-88
Technology
398
16,953,462
https://en.wikipedia.org/wiki/Blank%20%28solution%29
A blank solution is a solution containing little to no analyte of interest, usually used to calibrate instruments such as a colorimeter. According to the EPA, the "primary purpose of blanks is to trace sources of artificially introduced contamination." Different types of blanks are used to identify the source of contamination in the sample. The types of blanks include equipment blank, field blank, trip blank, method blank, and instrument blank. References Analytical chemistry
Blank (solution)
Chemistry
95
76,600,617
https://en.wikipedia.org/wiki/Xiaoying%20Zhuang
Xiaoying Zhuang (born 1983) is a researcher in computational mechanics, including continuum mechanics, peridynamics, and the analysis of vibration and fracture mechanics. She has applied these methods in the design of composite materials and nanostructures, including materials for the aerospace industry and nano-machines for harvesting vibrational energy. Originally from China, and educated in China and England, she has worked in Norway, China, and Germany, where she is Heisenberg Professor and Chair of Computational Science and Simulation Technology of Leibniz University Hannover. Education and career Zhuang was born in Shanghai in 1983. She was a student at Tongji University in Shanghai, graduating in 2007, and then traveled to Durham University in England for graduate study. She completed her doctoral dissertation, Meshless methods: theory and application in 3D fracture modelling with level sets, in 2010 in the Durham School of Engineering and Computing Sciences, supervised by Charles Augarde. She became a postdoctoral researcher at the Norwegian University of Science and Technology, and then from 2011 to 2014 she returned to Tongji University as a lecturer and later associate professor. She moved to Germany in 2014. After a year at Bauhaus University, Weimar, she became a research group leader within the Institute of Continuum Mechanics at Leibniz University Hannover in Germany in 2015. She became a full professor at the university in 2021; there, she is a Heisenberg Professor and Chair of Computational Science and Simulation Technology in the Institute for Photonics and Faculty of Mathematics and Physics. Books Zhuang is a coauthor of books including: Extended Finite Element and Meshfree Methods (with Timon Rabczuk, Jeong-Hoon Song, and Cosmin Anitescu, Academic Press, 2020) Computational Methods Based on Peridynamics and Nonlocal Operators: Theory and Applications (with Timon Rabczuk and Huilong Ren, Springer, 2023). Recognition The Association of Computational Mechanics in Engineering – UK (ACME-UK) gave Zhuang their 2010 Zienkiewicz Prize for the best annual doctoral dissertation in computational mechanics. Zhuang's move to Bauhaus University, Weimar was funded by a Marie Curie International Incoming Fellowship, funded by the European Commission. She was a 2015 recipient of the Sofia Kovalevskaya Award, funding her work as a group leader in the Hannover Institute of Continuum Mechanics. Zhuang was a 2018 recipient of the Heinz Maier-Leibnitz-Preis, given "for her research into lightweight materials for aviation". The International Chinese Association of Computational Mechanics gave her their Fellow Award in 2018, and she was also a 2018 recipient of the Science Prize of the State of Lower Saxony for Young Scientists. She was a 2019 recipient of the German Curious Mind Researcher Award, in the "materials and active ingredients" category, recognizing her research on the simulation of nano-scale mechanical-energy harvesters "at the intersection of mechanical engineering and materials science". She was named as a Heisenberg Professor in 2020. References External links 1983 births Living people Engineers from Shanghai Chinese mechanical engineers German mechanical engineers Women mechanical engineers Chinese materials scientists German materials scientists Women materials scientists and engineers Chinese nanotechnologists German nanotechnologists Tongji University alumni Alumni of Durham University Academic staff of Tongji University Academic staff of the University of Hanover
Xiaoying Zhuang
Materials_science,Technology
675
10,598,441
https://en.wikipedia.org/wiki/Frederick%20Rossini
Frederick Dominic Rossini (July 18, 1899 – October 12, 1990) was an American thermodynamicist noted for his work in chemical thermodynamics. In 1920, at the age of twenty-one, Rossini entered Carnegie-Mellon University in Pittsburgh, and soon was awarded a full-time teaching scholarship. He graduated with a B.S. in chemical engineering in 1925, followed by an M.S. degree in science in physical chemistry in 1926. As a result of reading Lewis and Randall's classical 1923 textbook Thermodynamics and the Free Energy of Chemical Substances he wrote to Gilbert N. Lewis and as a result he was offered a teaching fellowship at the University of California at Berkeley. Among his teachers were Gilbert Lewis and William Giauque. Rossini's doctoral dissertation on the heat capacities of strong electrolytes in aqueous solution was supervised by Merle Randall. His Ph.D. degree was awarded in 1928, after only 21 months of graduate work, even though he continued to serve as a teaching fellow throughout this entire period. He worked at the National Bureau of Standards (Washington, DC) from 1928 to 1950. In 1932, Frederick Rossini, Edward W. Washburn, and Mikkel Frandsen authored "The Calorimetric Determination of the Intrinsic Energy of Gases as a Function of the Pressure." This experiment resulted in the development of the Washburn Correction for bomb calorimetry, a decrease or correction of the results of a calorimetric procedure to normal states. In 1950, he published his popular textbook Chemical Thermodynamics. In that year he also moved to the Carnegie Institute of Technology (Pittsburgh), where he remained until 1960. He served as dean of the Notre Dame College of Science from 1960 to 1967. In 1973, Dr. Rossini spent the spring academic quarter at Baldwin-Wallace College, in Berea Ohio, as the first distinguished professor to occupy the Charles J. Strosacker Chair of Science. The Baldwin-Wallace College student union was named after "the late Dr. strosacker, who was vice president of The Dow Chemical Company, [and] was a B-W trustee for 17 years. The college union was named in his honor in 1963." Awards In 1965 he became the recipient of the Laetare Medal. In 1965 he received the John Price Wetherill Medal. In 1966 he received the William H. Nichols Medal. In 1971 he received the Priestley Medal. In 1977 he received the National Medal of Science for his "contributions to basic reference knowledge in chemical thermodynamics." References External links Encyclopedia of Baldwin Wallace University History: Dr. Frederick Rossini 1899 births 1990 deaths Thermodynamicists American physical chemists Carnegie Mellon University College of Engineering alumni National Medal of Science laureates Laetare Medal recipients University of Notre Dame faculty 20th-century American chemists
Frederick Rossini
Physics,Chemistry
594
59,652,617
https://en.wikipedia.org/wiki/K-outerplanar%20graph
In graph theory, a k-outerplanar graph is a planar graph that has a planar embedding in which the vertices belong to at most k concentric layers. The outerplanarity index of a planar graph is the minimum value of k for which it is k-outerplanar. Definition An outerplanar graph (or 1-outerplanar graph) has all of its vertices on the unbounded (outside) face of the graph. A 2-outerplanar graph is a planar graph with the property that, when the vertices on the unbounded face are removed, the remaining vertices all lie on the newly formed unbounded face. And so on. More formally, a graph is k-outerplanar if it has a planar embedding such that, for every vertex, there is an alternating sequence of at most k faces and k vertices of the embedding, starting with the unbounded face and ending with the vertex, in which each consecutive face and vertex are incident to each other. Properties and applications The k-outerplanar graphs have treewidth at most 3k − 1. However, some bounded-treewidth planar graphs such as the nested triangles graph may be k-outerplanar only for very large k, linear in the number of vertices. Baker's technique covers a planar graph with a constant number of k-outerplanar graphs and uses their low treewidth in order to quickly approximate several hard graph optimization problems. In connection with the GNRS conjecture on metric embedding of minor-closed graph families, the k-outerplanar graphs are one of the most general classes of graphs for which the conjecture has been proved. A conjectured converse of Courcelle's theorem, according to which every graph property recognizable on graphs of bounded treewidth by finite state tree automata is definable in the monadic second-order logic of graphs, has been proven for the k-outerplanar graphs. Recognition The smallest value of k for which a given graph is k-outerplanar (its outerplanarity index) can be computed in quadratic time. References Planar graphs
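The layered definition above suggests a direct way to compute the outerplanarity index once a planar embedding is fixed: repeatedly delete the vertices on the current unbounded face and count the number of rounds needed to empty the graph. The Python sketch below illustrates only that peeling loop; the routine that identifies the outer face of the remaining embedding is left abstract (it is passed in as a callable and, in the toy example, faked by tagging each vertex with its known layer). This is an illustration of the definition, not the quadratic-time recognition algorithm referenced above.

```python
def outerplanarity_index(vertices, outer_face_vertices):
    """Count how many times the outer face must be peeled to empty the graph.

    vertices: vertex set of an embedded planar graph.
    outer_face_vertices: callable that, for the current set of remaining
    vertices, returns the vertices on the unbounded face of the induced
    embedding. This callable is the embedding-specific part that the
    sketch deliberately leaves abstract.
    """
    remaining = set(vertices)
    layers = 0
    while remaining:
        boundary = set(outer_face_vertices(remaining))
        if not boundary:
            raise ValueError("a nonempty embedded graph has a nonempty outer face")
        remaining -= boundary
        layers += 1
    return layers

# Toy example: three nested triangles. Vertices are tagged with their layer,
# so a stand-in outer-face routine for this particular embedding is trivial.
verts = {(layer, i) for layer in (1, 2, 3) for i in range(3)}
outermost = lambda remaining: {v for v in remaining
                               if v[0] == min(u[0] for u in remaining)}
print(outerplanarity_index(verts, outermost))  # 3: the graph is 3-outerplanar
```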
K-outerplanar graph
Mathematics
454
75,452,313
https://en.wikipedia.org/wiki/Janet%20Perna
Janet Perna is a computer scientist known for coordinating IBM's work in the field of databases. Education and career Perna grew up in Poughkeepsie, New York. She graduated with a degree in mathematics from SUNY Oneonta in 1970, and started teaching mathematics. In 1974 she moved to California and got a job at IBM as a programmer. She worked first in San Jose, and then moved to IBM's Santa Teresa Laboratory. She later moved to the data management division, and then the information management group. Projects she worked on included preparing IBM Db2 for public release, and encouraging IBM's 2001 purchase of the database company Informix Corporation. Perna was recognized as an industry leader for her contributions to IBM's data management business. She played a crucial role in expanding IBM's data management into new, lucrative areas and setting industry standards. By 2001 she was the most senior female executive at IBM. After 31 years at IBM, she decided to retire in 2006. Honors and Awards During her time at IBM, Perna was inducted into the Women In Technology International Hall of Fame, was recognized by Information Week as one of the nation's "Top 10 Women in IT," and was included among the thinkers and innovators on Sm@rt Partner's list of "50 Smartest People". Perna received an honorary degree from the State University of New York at Oneonta in 2012. In 2018, SUNY Oneonta renamed a building the "Janet R. Perna Science Building". In addition to these honors, Perna has been recognized for her leadership in database management at IBM. She was named one of the "Top 50 Women to Watch" by Women in Technology International and received the "Leadership Award" from Computerworld. Additionally, eWeek recognized her among its "Top 100 Most Influential People in IT". Her contributions were pivotal in IBM's acquisition of Informix Corporation, further cementing her legacy in the tech industry. References External links Living people Computer scientists IBM employees State University of New York at Oneonta alumni Business executives Year of birth missing (living people)
Janet Perna
Technology
439
76,456,164
https://en.wikipedia.org/wiki/Conocybe%20crispella
Conocybe crispella is a species of mushroom-producing fungus in the family Bolbitiaceae. Taxonomy It was described in 1942 by the American mycologist William Alphonso Murrill who classified it as Galerula crispella. It was reclassified as Conocybe crispella in 1950 by the German mycologist Rolf Singer. Description Cap: 1–2.5 cm wide and conical. The hygrophanous surface is pale cinnamon to clay brown with an ochre tint with most of the striations transparent and a minute pruinose surface visible with a lens. It is dry and very brittle. Stem: 4.5–9 cm long and 1.5-2mm thick with a bulbous base of up to 8mm thick. The surface is off white with a pale pink tint and minute white hairs over the entire length but absent striations. It is dry, hollow and extremely brittle. Gills: Free, crowded and pale rusty ochre. Spore print: Rusty ochre brown. Spores: 12–14.5 x 7-8 μm. Broadly amygdaliform in side view and ellipsoid in front view. Smooth with a thick wall of up to 1 μm and a distinct germ pore. Rusty ochre brown in colour. Basidia: 20-25 x 12-14 μm. 4 spored, clavate. Habitat and distribution The specimens studied by Singer were found amongst grass in shaded lawns and woods, on the soil or on dung from January to June in Argentina and in August in Florida. They have also been found on soil in gardens and in lawns in the Cook Islands, Mauritius, Seychelles and Réunion. References Fungi described in 1942 Fungus species Bolbitiaceae Taxa named by William Alphonso Murrill Fungi of Florida Fungi of South America Fungi of Oceania Fungi of Réunion
Conocybe crispella
Biology
391
4,816,797
https://en.wikipedia.org/wiki/Race%20and%20society
Social interpretations of race regard the common categorizations of people into different races. Race is often culturally understood to consist of rigid categories (Black, White, Pasifika, Asian, etc.) into which people can be classified based on biological markers or physical traits such as skin colour or facial features. This rigid definition of race is no longer accepted by scientific communities. Instead, the concept of 'race' is viewed as a social construct. This means, in simple terms, that it is a human invention and not a biological fact. The concept of 'race' has developed over time in order to accommodate different societies' need to organise themselves as separate from the 'other' (globalization and colonization have caused conceptions of race to be generally consolidated). The 'other' was usually viewed as inferior and, as such, was assigned worse qualities. The current idea of race was developed primarily during the Enlightenment, when scientists attempted to define racial boundaries, but their cultural biases ultimately impacted their findings and reproduced the prejudices that still exist in society today. Social interpretation of physical variation Incongruities of racial classifications The biological anthropologist Jonathan Marks (1995) argued that even as the idea of "race" was becoming a powerful organizing principle in many societies, the shortcomings of the concept were apparent. In the Old World, the gradual transition in appearances from one racial group to adjacent racial groups emphasized that "one variety of mankind does so sensibly pass into the other, that you cannot mark out the limits between them," as Blumenbach observed in his writings on human variation. In parts of the Americas, the situation was somewhat different. The immigrants to the New World came largely from widely separated regions of the Old World—western and northern Europe, western Africa, and, later, eastern Asia and southern and eastern Europe. In the Americas, the immigrant populations began to mix among themselves and with the indigenous inhabitants of the continent. In the United States, for example, most people who self-identify as African American have some European ancestors—in one analysis of genetic markers that have differing frequencies between continents, European ancestry ranged from an estimated 7% for a sample of Jamaicans to ~23% for a sample of African Americans from New Orleans. In a survey of college students who self-identified as white at a northeastern U.S. university, the West African and Native American genetic contributions were 0.7% and 3.2%, respectively. In the United States, social and legal conventions developed over time that forced individuals of mixed ancestry into simplified racial categories. An example is the "one-drop rule" implemented in some state laws that treated anyone with a single known African American ancestor as black. The decennial censuses conducted since 1790 in the United States also created an incentive to establish racial categories and fit people into those categories. In other countries in the Americas, where mixing among groups was more extensive, social rather than strictly racial categories have tended to be more numerous and fluid, with people moving into or out of categories on the basis of a combination of socioeconomic status, social class, and ancestry. Efforts to sort the increasingly mixed population of the United States into discrete racial categories generated many difficulties.
Additionally, efforts to track mixing between census racial groups led to a proliferation of categories (such as mulatto and octoroon) and "blood quantum" distinctions that became increasingly untethered from self-reported ancestry. A person's racial identity can change over time. One study found differences between self-ascribed race and Veterans Affairs administrative data. Race as a social construct and populationism The notion of a biological basis for race originally emerged through speculations surrounding the "blood purity" of Jews during the Spanish Inquisition, eventually translating into a general association of a person's biology with their social and personal characteristics. In the 19th century, this recurring ideology was intensified in the development of the racial sciences, eugenics and ethnology, which were meant to further categorize groups of humans in terms of biological superiority or inferiority. While the field of racial science, also known as scientific racism, has long since been discredited, these antiquated conceptions of race have persisted into the 21st century. (See also: Historical origins of racial classification) Contrary to the popular belief that the division of the human species based on physical variations is natural, there exist no clear, reliable distinctions that bind people to such groupings. According to the American Anthropological Association, "Evidence from the analysis of genetics (e.g., DNA) indicates that most physical variation, about 94%, lies within so-called racial groups. Conventional geographic "racial" groupings differ from one another only in about 6% of their genes." While there is a biological basis for differences in human phenotypes, most notably in skin color, the genetic variability of humans is found not amongst, but rather within racial groups – meaning the perceived level of dissimilarity amongst the species has virtually no biological basis. Genetic diversity has characterized human survival, rendering the idea of a "pure" ancestry obsolete. Under this interpretation, race is conceptualized through a lens of artificiality, rather than through the skeleton of a scientific discovery. As a result, scholars have begun to broaden discourses of race by defining it as a social construct and exploring the historical contexts that led to its inception and persistence in contemporary society. A significant number of historians, anthropologists, and sociologists, such as David Roediger, Jennifer K. Wagner, Tanya Golash-Boza and Ann Morning, describe human races as a social construct. They argue that it would be more accurate to use the terms 'population' or 'ancestry', which can be given a clear operational definition. However, it is common for people who reject the formal concept of race to continue using the word 'race' in day-to-day language. This continuation could be credited to semantics, or to the underlying cultural significance of race within societies where racism is commonplace. Whilst the concept of race is challenged, it would be useful in medical contexts to have a practical level of categorisation between 'individual' and 'species', because in the absence of affordable and widespread genetic tests, various race-linked gene mutations (see Cystic fibrosis, Lactose intolerance, Tay–Sachs disease and Sickle cell anemia) are difficult to address. As genetic tests for such conditions become cheaper, and as detailed haplotype maps and SNP databases become available, the use of race as an identifier should diminish.
Also, increasing interracial marriage is reducing the predictive power of race. For example, babies born with Tay–Sachs disease in North America are not only or primarily Ashkenazi Jews, despite stereotypes to contrary; French Canadians, Louisiana Cajuns, and Irish-Americans also see high rates of the disease. Michael Brooks, the author of “The Race Delusion” suggests that race is not determined biographically or genetically, but that it is socially constructed. He explains that nearly all scientists in the field of race, nationality, and ethnicity will confirm that race is a social construct. It has more to do with how people identify rather than genetics. He then goes on to explain how “black” and “white” have different meanings in other cultures. People in the United States tend to label themselves black if they have ancestors that are from Africa, but when you are in Brazil, you are not black if you have European ancestry. DNA shows that the human population is a result of populations that have moved across the world, splitting up and interbreeding. Even with this science to back up this concept, society has yet to believe and accept it. No one is born with the knowledge of race, the split between races and the decision to treat others differently based on skin color is completely learned and accepted by society. Experts in the fields of genetics, law, and sociology have offered their opinions on the subject. Audrey Smedley and Brian D. Smedley of Virginia Commonwealth University Institute of Medicine discuss the anthropological and historical perspectives on ethnicity, culture, and race. They define culture as the habits acquired by a society. Smedley states "Ethnicity and culture are related phenomena and bear no intrinsic connection to human biological variations or race" (Smedley 17). The authors state using physical characteristics to define an ethnic identity is inaccurate. The variation of humans has actually decreased over time since, as the author states, "Immigration, intermating, intermarriage, and reproduction have led to increasing physical heterogeneity of peoples in many areas of the world" (Smedley 18). They referred to other experts and their research, pointing out that humans are 99% alike. That one percent is caused by natural genetic variation, and has nothing to do with the ethnic group of the subject. Racial classification in the United States started in the 1700s with three ethnically distinct groups. These groups were the white Europeans, Native Americans, and Africans. The concept of race was skewed around these times because of the social implications of belonging to one group or another. The view that one race is biologically different from another rose out of society's grasp for power and authority over other ethnic groups. This did not only happen in the United States but around the world as well. Society created race to create hierarchies in which the majority would prosper most. Another group of experts in sociology has written on this topic. Guang Guo, Yilan Fu, Yi Li, Kathleen Mullan Harris of the University of North Carolina department of sociology as well as Hedwig Lee (University of Washington Seattle), Tianji Cai (University of Macau) comment on remarks made by one expert. The debate is over DNA differences, or lack thereof, between different races. The research in the original article they are referring to uses different methods of DNA testing between distinct ethnic groups and compares them to other groups. 
Small differences were found, but those were not based on race. They were from biological differences caused from the region in which the people live. They describe that the small differences cannot be fully explained because the understanding of migration, intermarriage, and ancestry is unreliable at the individual level. Race cannot be related to ancestry based on the research on which they are commenting. They conclude that the idea of "races as biologically distinct peoples with differential abilities and behaviors has long been discredited by the scientific community" (2338). One more expert in the field has given her opinion. Ann Morning of the New York University Department of Sociology, and member of the American Sociological Association, discusses the role of biology in the social construction of race. She examines the relationship between genes and race and the social construction of social race clusters. Morning states that everyone is assigned to a racial group because of their physical characteristics. She identifies through her research the existence of DNA population clusters. She states that society would want to characterize these clusters as races. Society characterizes race as a set of physical characteristics. The clusters though have an overlap in physical characteristics and thus cannot be counted as a race by society or by science. Morning concludes that "Not only can constructivist theory accommodate or explain the occasional alignment of social classifications and genetic estimates that Shiao et al.'s model hypothesizes, but empirical research on human genetics is far from claiming—let alone demonstrating—that statistically inferred clusters are the equivalent of races" (Morning 203). Only using ethnic groups to map a genome is entirely inaccurate, instead every individual must be viewed as having their own wholly unique genome (unique in the 1%, not the 99% all humans share). Ian Haney López, the John H. Boalt Professor of Law at the University of California, Berkeley explains ways race is a social construct. He uses examples from history of how race was socially constructed and interpreted. One such example was of the Hudgins v. Wright case. A slave woman sued for her freedom and the freedom of her two children on the basis that her grandmother was Native American. The race of the Wright had to be socially proven, and neither side could present enough evidence. Since the slave owner Hudgins bore the burden of proof, Wright and her children gained their freedom. López uses this example to show the power of race in society. Human fate, he argues, still depends upon ancestry and appearance. Race is a powerful force in everyday life. These races are not determined by biology though, they are created by society to keep power with the majority. He describes that there are not any genetic characteristics that all blacks have that non-whites do not possess and vice versa. He uses the example of Mexican. It truly is a nationality, yet it has become a catch-all for all Hispanic nationalities. This simplification is wrong, López argues, for it is not only inaccurate but it tends to treat all "Mexicans" as below fervent Americans. He describes that "More recently, genetic testing has made it clear the close connections all humans share, as well as the futility of explaining those differences that do exist in terms of racially relevant gene codes" (Lopez 199–200). Those differences clearly have no basis in ethnicity, so race is completely socially constructed. 
Some argue it is preferable when considering biological relations to think in terms of populations, and when considering cultural relations to think in terms of ethnicity, rather than of race. These developments had important consequences. For example, some scientists developed the notion of "population" to take the place of race. It is argued that this substitution is not simply a matter of exchanging one word for another. This view does not deny that there are physical differences among peoples; it simply claims that the historical conceptions of "race" are not particularly useful in accounting for these differences scientifically. In particular, it is claimed that: knowing someone's "race" does not provide comprehensive predictive information about biological characteristics, and only absolutely predicts those traits that have been selected to define the racial categories, e.g. knowing a person's skin color, which is generally acknowledged to be one of the markers of race (or taken as a defining characteristic of race), does not allow good predictions of a person's blood type to be made. in general, the worldwide distribution of human phenotypes exhibits gradual trends of difference across geographic zones, not the categorical differences of race; in particular, there are many peoples (like the San of S. W. Africa, or the people of northern India) who have phenotypes that do not neatly fit into the standard race categories. focusing on race has historically led not only to seemingly insoluble disputes about classification (e.g. are the Japanese a distinct race, a mixture of races, or part of the East Asian race? and what about the Ainu?) but has also exposed disagreement about the criteria for making decisions—the selection of phenotypic traits seemed arbitrary. Neven Sesardic has argued that such arguments are unsupported by empirical evidence and politically motivated. Arguing that races are not completely discrete biologically is a straw man argument. He argues "racial recognition is not actually based on a single trait (like skin color) but rather on a number of characteristics that are to a certain extent concordant and that jointly make the classification not only possible but fairly reliable as well". Forensic anthropologists can classify a person's race with an accuracy close to 100% using only skeletal remains if they take into consideration several characteristics at the same time. A.W.F. Edwards has argued similarly regarding genetic differences in "Human genetic diversity: Lewontin's fallacy". Race in biomedicine There is an active debate among biomedical researchers about the meaning and importance of race in their research. The primary impetus for considering race in biomedical research is the possibility of improving the prevention and treatment of diseases by predicting hard-to-ascertain factors on the basis of more easily ascertained characteristics. The most well-known examples of genetically determined disorders that vary in incidence between ethnic groups would be sickle cell disease and thalassemia among black and Mediterranean populations respectively and Tay–Sachs disease among people of Ashkenazi Jewish descent. Some fear that the use of racial labels in biomedical research runs the risk of unintentionally exacerbating health disparities, so they suggest alternatives to the use of racial taxonomies. 
Case studies in the social construction of race Race in the United States In the United States since its early history, Native Americans, African-Americans and European-Americans were classified as belonging to different races. For nearly three centuries, the criteria for membership in these groups were similar, comprising a person's appearance, his fraction of known non-White ancestry, and his social circle. But the criteria for membership in these races diverged in the late 19th century. During Reconstruction, increasing numbers of Americans began to consider anyone with "one drop" of "Black blood" to be Black. By the early 20th century, this notion of invisible blackness was made statutory in many states and widely adopted nationwide. In contrast, Amerindians continue to be defined by a certain percentage of "Indian blood" (called blood quantum) due in large part to American slavery ethics. Race definitions in the United States The concept of race as used by the Census Bureau reflects self-identification by people according to the race or races with which they most closely identify. These categories are sociopolitical constructs and should not be interpreted as being scientific or anthropological in nature. They change from one census to another, and the racial categories include both racial and national-origin groups. Race in Brazil Compared to 19th-century United States, 20th-century Brazil was characterized by a relative absence of sharply defined racial groups. This pattern reflects a different history and different social relations. Basically, race in Brazil was recognized as the difference between ancestry (which determines genotype) and phenotypic differences. Racial identity was not governed by a rigid descent rule. A Brazilian child was never automatically identified with the racial type of one or both parents, nor were there only two categories to choose from. Over a dozen racial categories are recognized in conformity with the combinations of hair color, hair texture, eye color, and skin color. These types grade into each other like the colors of the spectrum, and no one category stands significantly isolated from the rest. That is, race referred to appearance, not heredity. Through this system of racial identification, parents and children and even brothers and sisters were frequently accepted as representatives of opposite racial types. In a fishing village in the state of Bahia, an investigator showed 100 people pictures of three sisters and they were asked to identify the races of each. In only six responses were the sisters identified by the same racial term. Fourteen responses used a different term for each sister. In another experiment nine portraits were shown to a hundred people. Forty different racial types were elicited. It was found, in addition, that a given Brazilian might be called by as many as thirteen different terms by other members of the community. These terms are spread out across practically the entire spectrum of theoretical racial types. A further consequence of the absence of a descent rule was that Brazilians apparently not only disagreed about the racial identity of specific individuals, but they also seemed to be in disagreement about the abstract meaning of the racial terms as defined by words and phrases. For example, 40% of a sample ranked moreno claro as a lighter type than mulato claro, while 60% reversed this order. 
A further note of confusion is that one person might employ different racial terms to describe the same person over a short time span. The choice of which racial description to use may vary according to both the personal relationships and moods of the individuals involved. The Brazilian census lists one's race according to the preference of the person being interviewed. As a consequence, hundreds of races appeared in the census results, ranging from blue (which is blacker than the usual black) to pink (which is whiter than the usual white). However, Brazilians are not so naïve to ignore one's racial origins just because of his (or her) better social status. An interesting example of this phenomenon has occurred recently, when the famous football (soccer) player Ronaldo declared publicly that he considered himself as White, thus linking racism to a form or another of class conflict. This caused a series of ironic notes on newspapers, which pointed out that he should have been proud of his African origin (which is obviously noticeable), a fact that must have made life for him (and for his ancestors) more difficult, so, being a successful personality was, in spite of that, a victory for him. What occurs in Brazil that differentiates it largely from the US or South Africa, for example, is that black or mixed-race people are, in fact, more accepted in social circles if they have more education, or have a successful life (a euphemism for "having a better salary"). As a consequence, inter-racial marriages are more common, and more accepted, among highly educated Afro-Brazilians than lower-educated ones. So, although the identification of a person by race is far more fluid and flexible in Brazil than in the U.S., there still are racial stereotypes and prejudices. African features have been considered less desirable; Blacks have been considered socially inferior, and Whites superior. These white supremacist values were a legacy of European colonization and the slave-based plantation system. The complexity of racial classifications in Brazil is reflective of the extent of miscegenation in Brazilian society, which remains highly, but not strictly, stratified along color lines. Henceforth, Brazil's desired image as a perfect "post-racist" country, composed of the "cosmic race" celebrated in 1925 by José Vasconcelos, must be met with caution, as sociologist Gilberto Freyre demonstrated in 1933 in Casa Grande e Senzala. Race in politics and ethics Michel Foucault argued the popular historical and political use of a non-essentialist notion of "race" used in the "race struggle" discourse during the 1688 Glorious Revolution and under Louis XIV's end of reign. In Foucault's view, this discourse developed in two different directions: Marxism, which seized the notion and transformed it into "class struggle" discourse, and racists, biologists and eugenicists, who paved the way for 20th century "state racism". During the Enlightenment, racial classifications were used to justify enslavement of those deemed to be of "inferior", non-White races, and thus supposedly best fitted for lives of toil under White supervision. These classifications made the distance between races seem nearly as broad as that between species, easing unsettling questions about the appropriateness of such treatment of humans. The practice was at the time generally accepted by both scientific and lay communities. 
Arthur Gobineau's An Essay on the Inequality of the Human Races (1853–1855) was one of the milestones in the new racist discourse, along with Vacher de Lapouge's "anthroposociology" and Johann Gottfried Herder (1744–1803), who applied race to nationalist theory to develop militant ethnic nationalism. They posited the historical existence of national races such as German and French, branching from basal races supposed to have existed for millennia, such as the Aryan race, and believed political boundaries should mirror these supposed racial ones. Later, one of Hitler's favorite sayings was, "Politics is applied biology". Hitler's ideas of racial purity led to unprecedented atrocities in Europe. Since then, ethnic cleansing has occurred in Cambodia, the Balkans, Sudan, and Rwanda. In one sense, ethnic cleansing is another name for the tribal warfare and mass murder that has afflicted human society for ages. Racial inequality has been a concern of United States politicians and legislators since the country's founding. In the 19th century most White Americans (including abolitionists) explained racial inequality as an inevitable consequence of biological differences. Since the mid-20th century, political and civic leaders as well as scientists have debated to what extent racial inequality is cultural in origin. Some argue that current inequalities between Blacks and Whites are primarily cultural and historical, the result of past and present racism, slavery and segregation, and could be redressed through such programs as affirmative action and Head Start. Others work to reduce tax funding of remedial programs for minorities. They have based their advocacy on aptitude test data that, according to them, shows that racial ability differences are biological in origin and cannot be leveled even by intensive educational efforts. In electoral politics, many more ethnic minorities have won important offices in Western nations than in earlier times, although the highest offices tend to remain in the hands of Whites. In his famous Letter from Birmingham Jail'', Martin Luther King Jr. observed: History is the long and tragic story of the fact that privileged groups seldom give up their privileges voluntarily. Individuals may see the moral light and voluntarily give up their unjust posture; but as Reinhold Niebuhr has reminded us, groups are more immoral than individuals. King's hope, expressed in his I Have a Dream speech, was that the civil rights struggle would one day produce a society where people were not "judged by the color of their skin, but by the content of their character". Because of the identification of the concept of race with political oppression, many natural and social scientists today are wary of using the word "race" to refer to human variation, but instead use less emotive words such as "population" and "ethnicity". Some, however, argue that the concept of race, whatever the term used, is nevertheless of continuing utility and validity in scientific research. Race in law enforcement In an attempt to provide general descriptions that may facilitate the job of law enforcement officers seeking to apprehend suspects, the United States FBI employs the term "race" to summarize the general appearance (skin color, hair texture, eye shape, and other such easily noticed characteristics) of individuals whom they are attempting to apprehend. 
From the perspective of law enforcement officers, a description needs to capture the features that stand out most clearly in the perception within the given society. Thus, in the UK, Scotland Yard use a classification based on the ethnic composition of British society: W1 (White British), W2 (White Irish), W9 (Other White); M1 (White and black Caribbean), M2 (White and black African), M3 (White and Asian), M9 (Any other mixed background); A1 (Asian-Indian), A2 (Asian-Pakistani), A3 (Asian-Bangladeshi), A9 (Any other Asian background); B1 (Black Caribbean), B2 (Black African), B3 (Any other black background); O1 (Chinese), O9 (Any other). In the United States, the practice of racial profiling has been ruled both unconstitutional and a violation of civil rights. There is also an ongoing debate on the relationship between race and crime regarding the disproportionate representation of certain minorities in all stages of the criminal justice system. Many studies have documented the reality of racial profiling. A large study published in May 2020, covering 95 million traffic stops between 2011 and 2018, showed that black people were more likely than white people to be pulled over and searched after a stop, even though white people were more likely to be found with illicit drugs. Another study found that in Travis County, Texas, black people made up about 30 percent of police arrests for possession of less than a gram of illicit drugs despite comprising only around 9 percent of the population, even though surveys consistently show that black and white people use illicit drugs at the same rate. Despite statistics showing that black people do not actually possess drugs more often than white people, they are still targeted more by the police, which is attributed largely to the social construction of race. Studies in racial taxonomy based on DNA cluster analysis have led law enforcement to pursue suspects based on a racial classification derived from DNA evidence left at the crime scene. DNA analysis has been successful in helping police determine the race of both victims and perpetrators. This classification is called "biogeographical ancestry". 
“A large-scale analysis of racial disparities in police stops across the United States.” Nat Hum Behav 4, 736–745 (2020). “Ending the War on Drugs in Travis County”. February 2020. Retrieved 28 November 2020. Kinship and descent Social constructionism Social inequality
Race and society
Biology
6,135
11,340,066
https://en.wikipedia.org/wiki/Toxbot
Toxbot (a.k.a. Codbot) is a computer worm that targeted Microsoft Windows XP, Windows 2000, and Windows Server 2003 and was primarily active in 2005. On infected computers, it opened up a backdoor to allow command and control over the IRC network, thus creating a botnet that at its peak comprised about 1.5 million computers. The two unidentified makers of the botnet were arrested in October 2005 and received jail sentences of 24 and 18 months and fines from a Dutch court. References External links W32.Toxbot at Symantec Security Response Win32/Toxbot at CA Toxbot at F-Secure Computer worms
Toxbot
Technology
138
7,214,571
https://en.wikipedia.org/wiki/Common%20Criteria%20Testing%20Laboratory
The Common Criteria model provides for the separation of the roles of evaluator and certifier. Product certificates are awarded by national schemes on the basis of evaluations carried out by independent testing laboratories. A Common Criteria testing laboratory is a third-party commercial security testing facility that is accredited to conduct security evaluations for conformance to the Common Criteria international standard. Such a facility must be accredited according to ISO/IEC 17025 by its national certification body. Examples List of laboratory designations by country: In the US they are called Common Criteria Testing Laboratory (CCTL) In Canada they are called Common Criteria Evaluation Facility (CCEF) In the UK they are called Commercial Evaluation Facilities (CLEF) In France they are called Centres d’Evaluation de la Sécurité des Technologies de l’Information (CESTI) In Germany they are called IT Security Evaluation Facility (ITSEF) Common Criteria Recognition Arrangement Common Criteria Recognition Arrangement (CCRA) or Common Criteria Mutual Recognition Arrangement (MRA) is an international agreement that recognizes evaluations against the Common Criteria standard performed in all participating countries. There are some limitations to this agreement; in the past, only evaluations up to EAL4+ were recognized. With the ongoing transition away from EAL levels and the introduction of the NDPP, evaluations that “map” to assurance components up to EAL4 continue to be recognized. United States In the United States the National Institute of Standards and Technology (NIST) National Voluntary Laboratory Accreditation Program (NVLAP) accredits CCTLs to meet National Information Assurance Partnership (NIAP) Common Criteria Evaluation and Validation Scheme requirements and conduct IT security evaluations for conformance to the Common Criteria. CCTL requirements These laboratories must meet the following requirements: NIST Handbook 150, NVLAP Procedures and General Requirements NIST Handbook 150-20, NVLAP Information Technology Security Testing — Common Criteria NIAP specific criteria for IT security evaluations and other NIAP defined requirements CCTLs enter into contractual agreements with sponsors to conduct security evaluations of IT products and Protection Profiles which use the CCEVS, other NIAP approved test methods derived from the Common Criteria, Common Methodology and other technology based sources. CCTLs must observe the highest standards of impartiality, integrity and commercial confidentiality. CCTLs must operate within the guidelines established by the CCEVS. To become a CCTL, a testing laboratory must go through a series of steps that involve both the NIAP Validation Body and NVLAP. NVLAP accreditation is the primary requirement for achieving CCTL status. Some scheme requirements that cannot be satisfied by NVLAP accreditation are addressed by the NIAP Validation Body. At present, there are only three scheme-specific requirements imposed by the Validation Body. NIAP approved CCTLs must agree to the following: Be located in the U.S. and be a legal entity, duly organized and incorporated, validly existing and in good standing under the laws of the state where the laboratory intends to do business Accept U.S. Government technical oversight and validation of evaluation-related activities in accordance with the policies and procedures established by the CCEVS Accept U.S. Government participants in selected Common Criteria evaluations. 
CCTL accreditation A testing laboratory becomes a CCTL when the laboratory is approved by the NIAP Validation Body and is listed on the Approved Laboratories List. To avoid unnecessary expense and delay in becoming a NIAP-approved testing laboratory, it is strongly recommended that prospective CCTLs ensure that they are able to satisfy the scheme-specific requirements prior to seeking accreditation from NVLAP. This can be accomplished by sending a letter of intent to the NIAP prior to entering the NVLAP process. Additional laboratory-related information can be found in CCEVS publications: #1 Common Criteria Evaluation and Validation Scheme for Information Technology Security — Organization, Management, and Concept of Operations and Scheme Publication #4 Common Criteria Evaluation and Validation Scheme for Information Technology Security — Guidance to Common Criteria Testing Laboratories Canada In Canada the Communications Security Establishment Canada (CSEC) Canadian Common Criteria Scheme (CCCS) oversees Common Criteria Evaluation Facilities (CCEF). Accreditation is performed by Standards Council of Canada (SCC) under its Program for the Accreditation of Laboratories – Canada (PALCAN) according to CAN-P-1591, the SCC’s adaptation of ISO/IEC 17025-2005 for ITSET Laboratories. Approval is performed by the CCS Certification Body, a body within the CSEC, and is the verification of the applicant's ability to perform competent Common Criteria evaluations. Notes External links US: Common Criteria Evaluation and Validation Scheme US: Common Criteria Testing Laboratories Canada: Common Criteria Scheme Canada: Common Criteria Evaluation Facilities Common Criteria Recognition Agreement List of Common Criteria evaluated products ISO/IEC 15408 — available free as a public standard Computer security procedures Tests
Common Criteria Testing Laboratory
Engineering
989
12,194,882
https://en.wikipedia.org/wiki/C3H7NO
The molecular formula C3H7NO may refer to: Acetone oxime (acetoxime) 1-Amino-2-propanone Dimethylformamide Isoxazolidine N-Methylacetamide Oxazolidine Propionamide
C3H7NO
Chemistry
71
8,330,129
https://en.wikipedia.org/wiki/Royal%20Opera%20House%2C%20Valletta
The Royal Opera House, also known as the Royal Theatre, was an opera house and performing arts venue in Valletta, Malta. It was designed by the English architect Edward Middleton Barry and was erected in 1866. In 1873 its interior was extensively damaged by fire but was eventually restored by 1877. The theatre received a direct hit from aerial bombing in 1942 during World War II. Prior to its destruction, it was one of the most beautiful and iconic buildings in Valletta. After several abandoned plans to rebuild the theatre, the ruins were redesigned by the Italian architect Renzo Piano and in 2013 it once again started functioning as a performance venue, called Pjazza Teatru Rjal. History The design of the building was entrusted to Edward Middleton Barry, the architect of Covent Garden Theatre, and the classical design plan was completed by 1861. The original plans had to be altered because the sloping streets on the sides of the theatre had not been taken into consideration. This resulted in a terrace being added on the side of Strada Reale (nowadays Republic Street) designed by Maltese architects. Building of the site started in 1862, after the demolition of what had been the Casa della Giornata. After four years, the Opera House, with a seating capacity of 1,095 and 200 standing, was ready for the official opening on 9 October 1866. The theatre was not to last long; on 25 May 1873, a mere six years after its opening, it was brought to a premature end by a fire. The exterior of the theatre was undamaged but the interior stonework was calcified by the intense heat. It was decided to rebuild the theatre, and after tenders for the work were issued and much argument over whether the front should be changed, the theatre was ready. On 11 October 1877, nearly four and a half years after the fire, the theatre reopened with a performance of Verdi's Aida. Some 65 years later, tragedy struck the Royal Opera House again when it was destroyed by aerial bombing in 1942; the remaining structures were razed in the late 1950s as a safety precaution. There is a claim that German prisoners-of-war in Malta offered to rebuild the theatre in 1946 with the Government declining due to Union pressure; the more likely reason might be that few of these prisoners of war could be expected to be qualified masons. All that remained of the Opera House were the terrace and parts of the columns. Site ruins and reconstruction plans Although the bombed site was cleared of much of the rubble and all of the remaining decorative sculpture, rebuilding was repeatedly postponed by successive post-War governments, in favour of reconstruction projects that were deemed to be more pressing. In 1953 six renowned architects submitted designs for the new theatre. The Committee chose Zavellani-Rossi's project and recommended its acceptance by Government subject to certain alterations. The project ground to a halt on Labour's re-election, with the new government contending that it was not in a position to spend so much money on a theatre when so many other projects needed attention. Although a provision of £280,000 for the reconstruction of the theatre had been made in the 1955-56 budget, these funds were never used. By 1957 the project had been shelved and after 1961 all references to the theatre in the country's development plans were omitted. In the 1980s contact was made with the architect Renzo Piano to design a building to be constructed on site and to rehabilitate the entrance of the city. Piano submitted the plans, which were approved by the Government in 1990, but work never started. 
In 1996 the incoming Labour Government announced that reconstruction of the site into a commercial and cultural complex together with an underground car park would be Malta's millennium project. In the late 1990s the Maltese architect Richard England was also commissioned to come up with plans for a cultural centre. Each time, controversies killed off all initiatives. Pjazza Teatru Rjal In 2006 the government announced a proposal to redevelop the site for a dedicated House of Parliament, which by then was located in the former Armoury of the Grandmaster's Palace in Valletta. The proposal was not well received since it had always been assumed that the site would eventually be developed into something that would house a cultural institution; however, Renzo Piano was again approached and started to work on new designs. The proposal was ostensibly shelved until after the general elections of 2008 and, on 1 December 2008, Prime Minister Lawrence Gonzi revived the proposal with a budget of €80 million. Piano dissuaded the Government from building a Parliament on the site of the Opera House, instead planning a House of Parliament on present-day Freedom Square and a re-modelling of the city gate. Piano proposed an open-air theatre for the site. Piano's redevelopment of the theatre was highly controversial at the time, but the government went ahead with the plans and the open-air theatre was officially inaugurated on 8 August 2013. The theatre was named Pjazza Teatru Rjal after the original structure. The name translates to Royal Theatre Square, but the venue is always referred to by its Maltese name, even when written about in English. Further reading References External links A brief history of the Opera House The Renzo Piano Valletta City Gate Project Press Article Archive Pjazza Teatru Rjal (official website) Opera houses in Malta Valletta, Royal Opera House Neoclassical architecture in Malta Buildings and structures in Valletta Theatres completed in 1866 Music venues completed in 1866 1866 establishments in Malta Burned theatres 19th-century fires in Europe 1873 fires Buildings and structures in Malta destroyed during World War II Buildings and structures demolished in 1942 Controversies in Malta 21st-century controversies Architectural controversies Defunct police stations in Malta 1942 disestablishments in Malta Ruins in Malta Edward Middleton Barry buildings
Royal Opera House, Valletta
Engineering
1,167
24,616,899
https://en.wikipedia.org/wiki/CC-Link%20Open%20Automation%20Networks
The CC-Link Open Automation Networks Family are a group of open industrial networks that enable devices from numerous manufacturers to communicate. They are used in a wide variety of industrial automation applications at the machine, cell and line levels. History The CC-Link Partner Association (CLPA) offers a family of open-architecture networks. These originated with the CC-Link (Control & Communication) fieldbus in 1996, developed by Mitsubishi Electric Corporation. In 2000, this was released as an “Open” network so that independent automation equipment manufacturers could incorporate CLPA network compatibility into their products. In the same year, the CC-Link Partner Association (CLPA) was formed to manage and oversee the network technology and support manufacturer members. In 2007, the CLPA was the first organisation to introduce open gigabit Ethernet for automation with CC-Link IE (Industrial Ethernet). In 2018, the CLPA was the first organisation to combine open gigabit Ethernet with Time-Sensitive Networking (TSN) as CC-Link IE TSN. As of May 2020, over 2,100 CLPA compatible products from more than 340 automation manufacturers were available. CLPA offers a variety of open automation network technologies. These are the CC-Link fieldbus, CC-Link Safety fieldbus, CC-Link IE and CC-Link IE TSN. Compatible products include industrial PCs, PLCs, robots, servos, drives, valve manifolds, digital & analogue I/O modules, temperature controllers, mass flow controllers and others. As of May 2020, there was approximately 30 million devices installed worldwide. Structure The CLPA is a global organisation with branches in 11 locations worldwide (Japan, Taiwan, Singapore, Thailand, China, South Korea, India, Turkey, Germany, USA and Mexico). The headquarters are in Nagoya, Japan. Some branches offer conformance testing facilities (see below). The CLPA is controlled by a board of ten companies, who are 3M, Analog Devices, Balluff, Cisco, Cognex Corporation, IDEC Corporation, Mitsubishi Electric, Molex, NEC and Pro-face. The board controls the strategic direction of the organisation and oversees its operations, including the activities of the technical and marketing task forces and the global branches. Industry cooperation The CLPA has been involved in strategic cooperation with other open technology associations in the industrial automation space. These include PROFIBUS & PROFINET International (PI), the OPC Foundation and AutomationML. The cooperation with PI resulted in a standard for interoperability between CC-Link IE and PROFINET. The OPC Foundation activity created an OPC UA companion specification for the CLPA's "CSP+ (Control & Communication System Profile) For Machine" technology. Cooperation with AutomationML involved the signing of a Memorandum of Understanding to incorporate the "CSP+" and "CSP+ For Machine" device profile technologies into AutomationML models. 
Standardization CLPA has obtained the following certifications for its open network technologies: ISO standards: ISO15745-5 (CC-Link, January 2007) IEC standards: IEC61158 (CC-Link IE, August 2014), IEC61784 (CC-Link & CC-Link IE, August 2014), IEC61784-3-8 (CC-Link Safety, August 2016) SEMI standards: SEMI E54.12 (CC-Link, December 2001), SEMI E54.23-0513 (CC-Link IE Field, May 2013) Chinese National Standards: GB/Z 19760-2005 (CC-Link, December 2005), GB/T 20229.4-6 (CC-Link, December 2006), GB/Z 19760-2008 (CC-Link, June 2009), GB/Z 29496.1.2.3-2013, GB/T 33537.1~3-2017 (CC-Link IE, April 2017), GB/Z 37085-2018 (CC-Link IE Safety, December 2018) Japanese Industrial Standards: JIS TR B0031 (CC-Link Safety, certified May 2013) Korean National Standards: KBS ISO 15745-5 (CC-Link, March 2008) Taiwan Standard: CNS 15252X6068 (CC-Link, May 2009) Conformance testing All certification testing for CLPA networks is carried out by the CLPA and is compulsory in order to ensure that devices manufactured by suppliers meet the strict technical performance standards. These include noise resistance and correct communication functionality. To declare a product as CLPA certified, a vendor needs to successfully test their product at one of the CLPA test laboratories situated in the US, China, Korea, Japan or Germany. References External links CLPA Global Site CLPA Europe Site SEMI E54.12-0701E (Reapproved 1106) - Specification for Sensor/Actuator Network Communications for CC-Link Industrial automation Industrial computing
CC-Link Open Automation Networks
Technology,Engineering
1,006
78,428,610
https://en.wikipedia.org/wiki/Boosteroid
Boosteroid is a cloud gaming service that allows users to play on demand video games on a variety of web devices without the need for high-end or dedicated gaming hardware. Boosteroid games can be played on low-powered PCs, laptops, Chromebooks, smartphones, and Smart TVs. As of October 2024, they had 5.8 million users. Boosteroid is headquartered in Texas, USA, with its main research and development office in Kyiv, Ukraine. History Boosteroid was founded by Ivan Shvaichenko in Ukraine in 2016. It officially launched its cloud gaming service in 2019, initially focusing on the European market. Boosteroid began its expansion into the United States market in 2021. In 2022, the company partnered with ASUS for their GPU servers, acquiring hardware for their cloud gaming infrastructure. In March 2023, Boosteroid secured a 10-year partnership with Microsoft, bringing Xbox PC games to its platform, including popular blockbuster series like Call of Duty, following Microsoft's acquisition of Activision Blizzard. In April 2023, LG Electronics announced the addition of Boosteroid to their TVs in over 60 countries. In November 2023, Boosteroid partnered with Samsung to integrate its cloud gaming service into Samsung Gaming Hub on Samsung Smart TVs and monitors. In August 2024, Boosteroid partnered with Mercedes-Benz to integrate its cloud gaming service into the MBUX entertainment system of Mercedes-Benz vehicles. Boosteroid's primary revenue stream is generated through user subscriptions. The company offers a simple subscription model with a single tier providing access to its entire game library and features. References Cloud gaming services Video game platforms Online video game services Technology companies 2016 establishments 2016 establishments in Ukraine
Boosteroid
Technology
345
8,327,133
https://en.wikipedia.org/wiki/SDET
SDET is a benchmark used in the systems software research community for measuring the throughput of a multi-user computer operating system. Its name stands for SPEC Software Development Environment Throughput, and it is packaged along with Kenbus in the SPEC SDM91 benchmark. (The abbreviation SDET also stands for software development engineer in test, a type of software engineer.) A more modern benchmark that is related to SDET is the reaim package, which is itself an up-to-date implementation of the venerable AIM Multiuser Benchmark. Sources and external links SDM91 Perspectives on the SPEC SDET Benchmark Computer performance Evaluation of computers
SDET
Technology
136
2,893,300
https://en.wikipedia.org/wiki/Horizontal%20scan%20rate
Horizontal scan rate, or horizontal frequency, usually expressed in kilohertz, is the number of times per second that a raster-scan video system transmits or displays a complete horizontal line, as opposed to vertical scan rate, the number of times per second that an entire screenful of image data is transmitted or displayed. Cathode ray tubes Within a cathode-ray tube (CRT), the horizontal scan rate is how many times in a second that the electron beam moves from the left side of the display to the right and back. The number of horizontal lines in each displayed frame can be roughly derived by dividing this number by the vertical scan rate. Some of a CRT's horizontal scans occur during the vertical blanking interval, so the horizontal scan rate does not directly correlate to visible display lines unless the number of unseen lines is also known. The horizontal scan rate is one of the primary figures determining the resolution capability of a CRT, since it is determined by how quickly the electromagnetic deflection system can reverse the current flowing in the deflection coil in order to move the electron beam from one side of the display to the other. Reversing the current more quickly requires higher voltages, which require more expensive electrical components. In analog television systems, the horizontal frequency is between 15.625 kHz and 15.750 kHz. Other technologies While other display technologies such as liquid-crystal displays do not have the specific electrical characteristics that constrain horizontal scan rates on CRTs, there is still a horizontal scan rate characteristic in the signals that drive these displays. References Television technology
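As a rough illustration of the arithmetic described above — a minimal sketch, not taken from any standard, with the line count and refresh rate chosen purely as assumptions — the horizontal scan rate equals the total number of lines per frame (visible plus blanking) multiplied by the vertical refresh rate:

# Illustrative sketch (assumed values, not from the article): relating horizontal
# scan rate, vertical refresh rate, and line counts.

def horizontal_scan_rate_hz(total_lines_per_frame: int, vertical_refresh_hz: float) -> float:
    """Horizontal scan rate = total lines per frame (visible + blanking) x refresh rate."""
    return total_lines_per_frame * vertical_refresh_hz

def lines_per_frame(horizontal_rate_hz: float, vertical_refresh_hz: float) -> float:
    """Total lines per frame recovered by dividing the horizontal rate by the refresh rate."""
    return horizontal_rate_hz / vertical_refresh_hz

if __name__ == "__main__":
    # Hypothetical mode: 806 total lines (768 visible + 38 blanking) at 60 Hz.
    h_rate = horizontal_scan_rate_hz(806, 60.0)
    print(f"Horizontal scan rate: {h_rate / 1000:.2f} kHz")          # ~48.36 kHz
    print(f"Lines per frame: {lines_per_frame(h_rate, 60.0):.0f}")   # 806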
Horizontal scan rate
Technology
330
25,699,501
https://en.wikipedia.org/wiki/Baltic%20Sea%20hypoxia
Baltic Sea hypoxia refers to low levels of oxygen in bottom waters, also known as hypoxia, occurring regularly in the Baltic Sea. The total area of bottom covered with hypoxic waters with oxygen concentrations less than 2 mg/L in the Baltic Sea has averaged 49,000 km2 over the last 40 years. The ultimate cause of hypoxia is excess nutrient loading from human activities causing algal blooms. The blooms sink to the bottom and consume oxygen as they decompose at a rate faster than it can be added back into the system through the physical processes of mixing. The lack of oxygen (anoxia) kills bottom-living organisms and creates dead zones. Causes The rapid increase in hypoxia in coastal areas around the world is due to the excessive inputs of plant nutrients, such as nitrogen and phosphorus, by human activities. The sources of these nutrients include agriculture, sewage, and atmospheric deposition of nitrogen-containing compounds from the burning of fossil fuels. The nutrients stimulate the growth of algae causing problems with eutrophication. The algae sink to the bottom and use the oxygen when they decompose. If mixing of the bottom waters is slow, such that oxygen stocks are not renewed, hypoxia can occur. Description The total area of bottom covered with hypoxic waters with oxygen concentrations less than 2 mg/L in the Baltic Sea has averaged 49,000 km2 over the last 40 years. In the Baltic Sea, the input of salt water from the North Sea through the Danish Straits is important in determining the area of hypoxia each year. Denser, saltier water comes into the Baltic Sea and flows along the bottom. Although large salt water inputs help to renew the bottom waters and increase oxygen concentrations, the new oxygen added with the salt water inflow is rapidly used to decompose organic matter that is in the sediments. The denser salt water also reduces mixing of oxygen-poor bottom waters with more brackish, lighter surface waters. Thus, large areas of hypoxia occur when more salt water comes into the Baltic Sea. Geological perspective Geological archives in sediments, primarily the appearance of laminated sediments that occur only when hypoxic conditions are present, are used to determine the historical time frame of oxygen conditions. Hypoxic conditions were common during the development of the early Baltic Sea stages known as the Mastogloia Sea and Littorina Sea, from around 8,000 calendar years Before Present (BP) until 4,000 BP. Hypoxia disappeared for a period of nearly 2,000 years, appearing a second time from around 1 AD until 1200 AD, just before and during the Medieval Warm Period. The Baltic Sea became hypoxic again around 1900 AD and has remained hypoxic for the last 100 years. The causes of the various periods of hypoxia are still scientifically debated, but they are believed to result from high surface salinity, climate and human impacts. Impacts The deficiency of oxygen in bottom waters changes the types of organisms that live on the bottom. The species change from long-living, deep-burrowing, slow-growing animals to species that live on the sediment surface. They are small and fast-growing, and can tolerate low concentrations of oxygen. When oxygen concentrations become so low that only bacteria and fungi can survive, dead zones form. In the Baltic Sea, low oxygen concentrations also reduce the ability of cod to spawn in bottom waters. Cod spawning requires both high salinity and high oxygen concentrations for cod fry to develop, conditions that are rare in the Baltic Sea today. 
The lack of oxygen also increases the release of phosphorus from bottom sediments. Excess phosphorus in surface waters and the lack of nitrogen stimulate the growth of cyanobacteria. When the cyanobacteria die and sink to the bottom, they consume oxygen, leading to further hypoxia, and more phosphorus is released from bottom sediments. This process creates a vicious circle of eutrophication that helps to sustain itself. Solutions The countries surrounding the Baltic Sea have established the HELCOM Baltic Marine Environment Protection Commission to protect and improve the environmental health of the Baltic Sea. In 2007, the Member States accepted the Baltic Sea Action Plan to reduce nutrient inputs. Because the public and media have been frustrated by the lack of progress in improving the environmental status of the Baltic Sea, there have been calls for large-scale engineering solutions to add oxygen back into bottom waters and bring life back to the dead zones. An international committee evaluated different ideas and came to the conclusion that large-scale engineering approaches are not able to add oxygen to the extremely large dead zones in the Baltic Sea without completely changing the Baltic Sea ecosystem. The best long-term solution is to implement policies and measures to reduce the load of nutrients to the Baltic Sea. References External links HELCOM Baltic Sea Action Plan HYPER Project BONUS Baltic Nest Institute 2011 Baltic Sea 2020 Baltic Sea Algal blooms
Baltic Sea hypoxia
Chemistry,Biology,Environmental_science
994
16,739,998
https://en.wikipedia.org/wiki/Q-machine
A Q-machine is a device that is used in experimental plasma physics. The name Q-machine stems from the original intention of creating a quiescent plasma that is free from the fluctuations that are present in plasmas created in electric discharges. The Q-machine was first described in a publication by Rynn and D'Angelo. The Q-machine plasma is created at a plate that has been heated to about 2000 K and hence is called the hot plate. Electrons are emitted by the hot plate through thermionic emission, and ions are created through contact ionization of atoms of alkali metals that have low ionization potentials. The hot plate is made of a metal that has a large work function and can withstand high temperatures, e.g. tungsten or rhenium. The alkali metal is boiled in an oven that is designed to direct a beam of alkaline metal vapor onto the hot plate. A high value of the hot plate work function and a low ionization potential of the metal makes for a low potential barrier for an electron in the alkaline metal to overcome, thus making the ionization process more efficient. Sometimes barium is used instead of an alkaline metal due to its excellent spectroscopic properties. The fractional ionization of a Q-machine plasma can approach unity, which can be orders of magnitude greater than that predicted by the Saha ionization equation. The temperature of the Q-machine plasma is close to the temperature of the hot plate, and the ion and electron temperatures are similar. Although this temperature (about 2000 K) is high compared to room temperature, it is much lower than electron temperatures that are usually found in discharge plasma. The low temperature makes it possible to create a plasma column that is several ion gyro radii across. Since the alkaline metals are solids at room temperature they will stick to the walls of the machine on impact, and therefore the neutral pressure can be kept so low that for all practical purposes the plasma is fully ionised. Plasma research that has been performed using Q-machines includes current driven ion cyclotron waves, Kelvin-Helmholtz waves, and electron phase space holes. Today, Q-machines can be found at West Virginia University and at the University of Iowa in the USA, at Tohoku University in Sendai in Japan, and at the University of Innsbruck in Austria. References Plasma technology and applications
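Contact ionization of this kind is commonly described by the Saha–Langmuir equation, which relates the ratio of ions to neutrals leaving the hot plate to the difference between the plate's work function and the atom's ionization energy. The following is a minimal sketch of that relation; the work function, ionization energy and temperature values are assumptions chosen only for illustration:

# Rough illustration (assumed values, not from the article): the Saha-Langmuir equation
# for surface (contact) ionization,  n+/n0 = (g+/g0) * exp((W - IP) / kT),
# where W is the surface work function and IP the ionization energy of the atom.
import math

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

def ion_to_neutral_ratio(work_function_ev, ionization_energy_ev, temperature_k, g_ratio=0.5):
    """Saha-Langmuir ratio n+/n0; g_ratio is the statistical-weight ratio g+/g0 (1/2 for alkalis)."""
    return g_ratio * math.exp((work_function_ev - ionization_energy_ev) / (K_B_EV * temperature_k))

if __name__ == "__main__":
    # Assumed example: tungsten work function ~4.5 eV, cesium ionization energy ~3.9 eV,
    # hot plate at ~2000 K (the temperature quoted in the article).
    ratio = ion_to_neutral_ratio(4.5, 3.9, 2000.0)
    fraction_ionized = ratio / (1.0 + ratio)
    print(f"n+/n0 = {ratio:.1f}, ionized fraction = {fraction_ionized:.2f}")  # fraction near unity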
Q-machine
Physics
493
38,310,487
https://en.wikipedia.org/wiki/Pedelta%20Structural%20Engineers
PEDELTA is an independent multinational consulting firm headquartered in Barcelona, Spain, which provides worldwide bridge and structural engineering services. The company is present in Canada, Colombia, Panama, Peru, Spain and the USA. The firm is internationally recognized for the introduction of advanced materials on bridges, such as the Cala Galdana Bridge (the first duplex stainless steel bridge) and the GFRP Lleida Pedestrian Bridge, and for its innovative bridge aesthetics, as in the Abetxuko Bridge and other cable-supported structures. History The company was founded in Barcelona, Spain, by Juan Sobrino in 1994, focusing on bridge and structural design. The firm expanded to Colombia in 2001, the US in 2006, Canada in 2012 and Peru in 2013. In 2017 they started their geotechnical department in Colombia. Introduction of advanced materials PEDELTA is known worldwide for the introduction of GFRP and stainless steel as main structural materials on bridges and building structures. The firm has developed the first hybrid structures combining stainless-steel and GFRP profiles and panels. The first applications were the Zumaia pedestrian bridge (2008) and the Vilafant pedestrian bridges (2011). International Awards PEDELTA has received various awards including the following: 2012 E. Figg Medal Award for The triplets bridges. International Bridge Conference. Pittsburgh, PA. 2012 National Engineering Award. Tunnel of Cune, Colombia. 2005 Footbridge Awards to the GFRP Lleida Pedestrian Bridge, Venice. 2004 Envigado bridge National Engineering Award of the Colombian Society of Engineering. 2003 Juan Sobrino recipient of the IABSE Award. 2001 Juan Sobrino recipient of the Award of the Association of Young Entrepreneurs of Catalonia. Signature bridge projects Road Bridges Abetxuko Bridge, Vitoria, Spain. Envigado bridge, Envigado, Colombia. Cala Galdana Bridge, Minorca, Spain. The triplets bridges, La Paz, Bolivia. High-Speed Rail Bridges Sant Boi Bridge, over LLobregat River. High Speed Railway Bridge over AP7, Llinars del Valles Pedestrian Bridges GFRP Lleida Pedestrian Bridge, Lleida, Spain Stainless Steel Pedestrian Bridge, Sant Fruitós, Spain. Hybrid Stainless Steel-GFRP Pedestrian Bridge, Zumaia, Spain. Pedestrian Bridge over Segre River, Lleida, Spain Pedestrian Bridge over Oria River, Andoain, Spain Vilafant Bridge, Spain Fort York Bridge, Toronto, Canada References External links International engineering consulting firms Construction and civil engineering companies of the United States Companies established in 1994
Pedelta Structural Engineers
Engineering
522
21,788,630
https://en.wikipedia.org/wiki/Lazy%20user%20model
The lazy user model of solution selection (LUM) is a model in information systems proposed by Tétard and Collan that tries to explain how an individual selects a solution to fulfill a need from a set of possible solution alternatives. LUM expects that a solution is selected from a set of available solutions based on the amount of effort the solutions require from the user – the user is supposed to select the solution that carries the least effort. The model is applicable to a number of different types of situations, but it can be said to be closely related to technology acceptance models. The model draws from earlier works on how least effort affects human behaviour in information seeking and in scaling of language. Earlier research within the discipline of information systems especially within the topic of technology acceptance and technology adoption is closely related to the lazy user model. The model structure The model starts from the observation that there is a "user need", i.e. it is expected that there is a "clearly definable, fully satisfiable want" that the user wants satisfied (it can also be said that the user has a problem that he/she wants solved). So there is a place for a solution, product, or service. The user need defines the set of possible solutions (products, services etc.) that fulfill the user need. The basic model considers for simplicity needs that are 100% satisfiable and services that 100% satisfy the needs. This means that only the solutions that solve the problem are relevant. This logically means that the need defines the possible satisfying solutions – a set of solutions (many different products/services) that all can fulfill the user need. LUM is not limited to looking at one solution separately. All of the solutions in the set that fulfill the need have their own characteristics; some are good and suitable for the user, others unsuitable and unacceptable – for example, if the user is in a train and wants to know what the result from a tennis match is right now, he/she may only use the types of solutions to the problem that are available to him/her. The "user state" determines the set of available/suitable solutions for the user and thus limits the (available) set of possible solutions to fulfill the user need. The user state is a very wide concept, it is the user characteristics at the time of the need. The user state includes, e.g., age, wealth, location ... everything that determines the state of the user in relation to the solutions in the set of the possible solutions to fulfill the user need. The model supposes that after the user need has defined the set of possible solutions that fulfill the user need and the user state has limited the set to the available plausible solutions that fulfill the user need the user will "select" a solution from the set to fulfill the need. Obviously if the set is empty the user does not have a way to fulfill the need. The lazy user model assumes that the user will make the selection from the limited set based on the lowest level of effort. Effort is understood as the combination of monetary cost + time needed + physical/mental effort needed. Considerations The lazy user theory has implications when thinking about the effect of learning in technology adoption (for example in the adoption of new information systems). See also Diffusion of innovations Technology acceptance model Technology adoption lifecycle Theory of planned behavior Unified theory of acceptance and use of technology References External links Behavior
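A minimal sketch of the selection rule the model describes is given below; the candidate solutions, their attributes, and the equal weighting of the effort components are all assumptions made for illustration and are not part of the published model:

# Minimal sketch (illustrative assumptions only) of the lazy user model's selection rule:
# filter the possible solutions by availability in the current user state, then pick the
# one with the least total effort (monetary cost + time + physical/mental effort).
from dataclasses import dataclass

@dataclass
class Solution:
    name: str
    cost: float          # monetary cost
    time_minutes: float  # time needed
    exertion: float      # physical/mental effort on an arbitrary 0-10 scale
    available: bool      # does the user state permit this solution right now?

def total_effort(s: Solution, w_cost=1.0, w_time=1.0, w_exertion=1.0) -> float:
    """Combine the effort components; the weights are an assumption, not part of the model."""
    return w_cost * s.cost + w_time * s.time_minutes + w_exertion * s.exertion

def select_solution(solutions):
    candidates = [s for s in solutions if s.available]   # the user state limits the set
    if not candidates:
        return None                                      # empty set: the need cannot be fulfilled
    return min(candidates, key=total_effort)             # the least-effort solution is chosen

# Example: checking a tennis score while on a train (hypothetical options).
options = [
    Solution("ask a fellow passenger", cost=0.0, time_minutes=2, exertion=3, available=True),
    Solution("check phone browser", cost=0.1, time_minutes=1, exertion=1, available=True),
    Solution("watch it on TV at home", cost=0.0, time_minutes=60, exertion=2, available=False),
]
print(select_solution(options).name)   # -> "check phone browser"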
Lazy user model
Biology
697
13,390,863
https://en.wikipedia.org/wiki/HD%20190647
HD 190647 is a yellow-hued star with an exoplanetary companion, located in the southern constellation of Sagittarius. It has an apparent visual magnitude of 7.78, making this an 8th magnitude star that is much too faint to be readily visible to the naked eye. The star is located at a distance of 178 light years from the Sun based on parallax measurements, but is drifting closer with a radial velocity of −40 km/s. It is also called HIP 99115. The stellar classification of this star is G5V, matching a G-type main-sequence star. However, the low gravity and high luminosity of this star may indicate it is slightly evolved. It is chromospherically inactive with a slow rotation, having a projected rotational velocity of 1.6 km/s. The star's metallicity is high, with nearly 1.5 times the abundance of iron compared to the Sun. In 2007, a Jovian planet was found to be orbiting the star. It was detected using the radial velocity method with the HARPS spectrograph in Chile. The object is orbiting at a distance of from the host star with a period of and an eccentricity (ovalness) of 0.18. As the inclination of the orbital plane is unknown, only a lower bound on the planetary mass can be determined. It has a minimum mass 1.9 times the mass of Jupiter. See also HD 100777 HD 221287 List of extrasolar planets References G-type main-sequence stars G-type subgiants Planetary systems with one confirmed planet Sagittarius (constellation) Durchmusterung objects 190647 099115
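Because radial-velocity measurements constrain only the product of the planet's mass and the sine of the orbital inclination, the true mass follows from the minimum mass divided by sin(i). A brief illustrative sketch, in which the inclination values are arbitrary assumptions:

# Illustrative sketch: radial-velocity detections constrain only m*sin(i), so the true
# planetary mass is the minimum mass divided by sin(inclination).
import math

M_SIN_I_JUPITERS = 1.9   # minimum mass reported for the companion, in Jupiter masses

for inclination_deg in (90, 60, 30, 10):   # assumed inclinations, for illustration only
    true_mass = M_SIN_I_JUPITERS / math.sin(math.radians(inclination_deg))
    print(f"i = {inclination_deg:2d} deg -> true mass ~ {true_mass:.1f} Jupiter masses")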
HD 190647
Astronomy
351
5,221,389
https://en.wikipedia.org/wiki/Cell%20damage
Cell damage (also known as cell injury) is a variety of changes of stress that a cell suffers due to external as well as internal environmental changes. Amongst other causes, this can be due to physical, chemical, infectious, biological, nutritional or immunological factors. Cell damage can be reversible or irreversible. Depending on the extent of injury, the cellular response may be adaptive and where possible, homeostasis is restored. Cell death occurs when the severity of the injury exceeds the cell's ability to repair itself. Cell death is relative to both the length of exposure to a harmful stimulus and the severity of the damage caused. Cell death may occur by necrosis or apoptosis. Causes Physical agents such as heat or radiation can damage a cell by literally cooking or coagulating their contents. Impaired nutrient supply, such as lack of oxygen or glucose, or impaired production of adenosine triphosphate (ATP) may deprive the cell of essential materials needed to survive. Metabolic: Hypoxia and ischemia Chemical agents Microbial agents: Viruses & bacteria Immunologic agents: Allergies and autoimmune diseases, such as Parkinson's and Alzheimer's disease. Genetic factors, such as Down's syndrome and sickle cell anemia Targets The most notable components of the cell that are targets of cell damage are the DNA and the cell membrane. DNA damage: In human cells, both normal metabolic activities and environmental factors such as ultraviolet light and other radiations can cause DNA damage, resulting in as many as one million individual molecular lesions per cell per day. Membrane damage: Damage to the cell membrane disturbs the state of cell electrolytes, e.g. calcium, which when constantly increased, induces apoptosis. Mitochondrial damage: May occur due to ATP decrease or change in mitochondrial permeability. Ribosome damage: Damage to ribosomal and cellular proteins such as protein misfolding, leading to apoptotic enzyme activation. Types of damage Some cell damage can be reversed once the stress is removed or if compensatory cellular changes occur. Full function may return to cells but in some cases, a degree of injury will remain. Reversible Cellular swelling Cellular swelling (or cloudy swelling) may occur due to cellular hypoxia, which damages the sodium-potassium membrane pump; it is reversible when the cause is eliminated. Cellular swelling is the first manifestation of almost all forms of injury to cells. When it affects many cells in an organ, it causes some pallor, increased turgor, and increase in weight of the organ. On microscopic examination, small clear vacuoles may be seen within the cytoplasm; these represent distended and pinched-off segments of the endoplasmic reticulum. This pattern of non-lethal injury is sometimes called hydropic change or vacuolar degeneration. Hydropic degeneration is a severe form of cloudy swelling. It occurs with hypokalemia due to vomiting or diarrhea. The ultrastructural changes of reversible cell injury include: Blebbing Blunting distortion of microvilli loosening of intercellular attachments mitochondrial changes dilation of the endoplasmic reticulum Fatty change In fatty change, the cell has been damaged and is unable to adequately metabolize fat. Small vacuoles of fat accumulate and become dispersed within cytoplasm. Mild fatty change may have no effect on cell function; however, more severe fatty change can impair cellular function. 
In the liver, the enlargement of hepatocytes due to fatty change may compress adjacent bile canaliculi, leading to cholestasis. Depending on the cause and severity of the lipid accumulation, fatty change is generally reversible. Fatty change is also known as fatty degeneration, fatty metamorphosis, or fatty steatosis. Irreversible Necrosis Necrosis is characterised by cytoplasmic swelling, irreversible damage to the plasma membrane, and organelle breakdown leading to cell death. The stages of cellular necrosis include pyknosis, the clumping of chromatin and shrinking of the nucleus of the cell; karyorrhexis, the fragmentation of the nucleus and break-up of the chromatin into unstructured granules; and karyolysis, the dissolution of the cell nucleus. Cytosolic components that leak through the damaged plasma membrane into the extracellular space can incur an inflammatory response. There are six types of necrosis: Coagulative necrosis Liquefactive necrosis Caseous necrosis Fat necrosis Fibroid necrosis Gangrenous necrosis Apoptosis Apoptosis is the programmed cell death of superfluous or potentially harmful cells in the body. It is an energy-dependent process mediated by proteolytic enzymes called caspases, which trigger cell death through the cleaving of specific proteins in the cytoplasm and nucleus. The dying cells shrink and condense into apoptotic bodies. The cell surface is altered so as to display properties that lead to rapid phagocytosis by macrophages or neighbouring cells. Unlike necrotic cell death, neighbouring cells are not damaged by apoptosis as cytosolic products are safely isolated by membranes prior to undergoing phagocytosis. Apoptosis is considered an important component of various bioprocesses, including cell turnover, hormone-dependent atrophy, and the proper development and functioning of the immune system and the embryo; it also plays a role in chemically induced cell death, which is genetically mediated. There is some evidence that certain symptoms of "apoptosis" such as endonuclease activation can be spuriously induced without engaging a genetic cascade. It is also becoming clear that mitosis and apoptosis are toggled or linked in some way and that the balance achieved depends on signals received from appropriate growth or survival factors. Research is being conducted to elucidate and analyse the cell cycle machinery and signaling pathways that control cell cycle arrest and apoptosis. In the average adult, between 50 and 70 billion cells die each day due to apoptosis. Inhibition of apoptosis can result in a number of cancers, autoimmune diseases, inflammatory diseases, and viral infections. Hyperactive apoptosis can lead to neurodegenerative diseases, hematologic diseases, and tissue damage. Repair When a cell is damaged, the body will try to repair or replace the cell to continue normal functions. If a cell dies, the body will remove it and replace it with another functioning cell, or fill the gap with connective tissue to provide structural support for the remaining cells. The aim of the repair process is to fill the gap left by the damaged cells and regain structural continuity. Normal cells try to regenerate the damaged cells but this is not always possible. Regeneration Regeneration is the replacement of the parenchymal cells, or functional cells, of an organism. The body can make more cells to replace the damaged cells, keeping the organ or tissue intact and fully functional. 
Replacement When a cell cannot be regenerated, the body will replace it with stromal connective tissue to maintain tissue or organ function. Stromal cells are the cells that support the parenchymal cells in any organ. Fibroblasts, immune cells, pericytes, and inflammatory cells are the most common types of stromal cells. Biochemical changes in cellular injury ATP (adenosine triphosphate) depletion is a common biological alteration that occurs with cellular injury. This change can occur regardless of the inciting agent of the cell damage. A reduction in intracellular ATP can have a number of functional and morphologic consequences during cell injury. These effects include: Failure of the ATP-dependent ion pumps (the sodium–potassium pump and the calcium pump), resulting in a net influx of sodium and calcium ions and osmotic swelling. ATP-depleted cells begin to undertake anaerobic metabolism to derive energy from glycogen, a process known as glycogenolysis. A consequent decrease in the intracellular pH of the cell arises, which mediates harmful enzymatic processes. Early clumping of nuclear chromatin then occurs, known as pyknosis, and leads to eventual cell death. DNA damage and repair DNA damage DNA damage (or RNA damage in the case of some virus genomes) appears to be a fundamental problem for life. As noted by Haynes, the subunits of DNA are not endowed with any peculiar kind of quantum mechanical stability, and thus DNA is vulnerable to all the "chemical horrors" that might befall any such molecule in a warm aqueous medium. These chemical horrors are DNA damages that include various types of modification of the DNA bases, single- and double-strand breaks, and inter-strand cross-links (see DNA damage (naturally occurring)). DNA damages are distinct from mutations although both are errors in the DNA. Whereas DNA damages are abnormal chemical and structural alterations, mutations ordinarily involve the normal four bases in new arrangements. Mutations can be replicated, and thus inherited when the DNA replicates. In contrast, DNA damages are altered structures that cannot, themselves, be replicated. Several different repair processes can remove DNA damages (see chart in DNA repair). However, those DNA damages that remain un-repaired can have detrimental consequences. DNA damages may block replication or gene transcription. These blockages can lead to cell death. In multicellular organisms, cell death in response to DNA damage may occur by a programmed process, apoptosis. Alternatively, when DNA polymerase replicates a template strand containing a damaged site, it may inaccurately bypass the damage and, as a consequence, introduce an incorrect base leading to a mutation. Experimentally, mutation rates increase substantially in cells defective in DNA mismatch repair or in homologous recombinational repair (HRR). In both prokaryotes and eukaryotes, DNA genomes are vulnerable to attack by reactive chemicals naturally produced in the intracellular environment and by agents from external sources. An important internal source of DNA damage in both prokaryotes and eukaryotes is reactive oxygen species (ROS) formed as byproducts of normal aerobic metabolism. For eukaryotes, oxidative reactions are a major source of DNA damage (see DNA damage (naturally occurring) and Sedelnikova et al.). In humans, about 10,000 oxidative DNA damages occur per cell per day. In the rat, which has a higher metabolic rate than humans, about 100,000 oxidative DNA damages occur per cell per day. 
In aerobically growing bacteria, ROS appear to be a major source of DNA damage, as indicated by the observation that 89% of spontaneously occurring base substitution mutations are caused by introduction of ROS-induced single-strand damages followed by error-prone replication past these damages. Oxidative DNA damages usually involve only one of the DNA strands at any damaged site, but about 1–2% of damages involve both strands. The double-strand damages include double-strand breaks (DSBs) and inter-strand crosslinks. For humans, the estimated average number of endogenous DNA DSBs per cell occurring at each cell generation is about 50. This level of formation of DSBs likely reflects the natural level of damages caused, in large part, by ROS produced by active metabolism. Repair of DNA damages Five major pathways are employed in repairing different types of DNA damages. These five pathways are nucleotide excision repair, base excision repair, mismatch repair, non-homologous end-joining and homologous recombinational repair (HRR) (see chart in DNA repair) and reference. Only HRR can accurately repair double-strand damages, such as DSBs. The HRR pathway requires that a second homologous chromosome be available to allow recovery of the information lost by the first chromosome due to the double-strand damage. DNA damage appears to play a key role in mammalian aging, and an adequate level of DNA repair promotes longevity (see DNA damage theory of aging and reference.). In addition, an increased incidence of DNA damage and/or reduced DNA repair cause an increased risk of cancer (see Cancer, Carcinogenesis and Neoplasm) and reference). Furthermore, the ability of HRR to accurately and efficiently repair double-strand DNA damages likely played a key role in the evolution of sexual reproduction (see Evolution of sexual reproduction and reference). In extant eukaryotes, HRR during meiosis provides the major benefit of maintaining fertility. See also Cellular adaptation References Cell biology Cellular senescence
Cell damage
Biology
2,612
34,998,420
https://en.wikipedia.org/wiki/Content%20protection%20network
A content protection network (also called content protection system or web content protection) is a term for anti-web scraping services provided through a cloud infrastructure. A content protection network is claimed to be a technology that protects websites from unwanted web scraping, web harvesting, blog scraping, data harvesting, and other forms of access to data published through the World Wide Web. A good content protection network will use various algorithms, checks, and validations to distinguish between desirable search engine web crawlers and human beings on the one hand, and Internet bots and automated agents that perform unwanted access on the other hand. A few web application firewalls have begun to implement limited bot detection capabilities. History The protection of copyrighted content has a long tradition, but technical tricks and mechanisms are more recent developments. For example, maps have sometimes been drawn with deliberate mistakes to protect the authors' copyright if someone else copies the map without permission. In 1998, a system called SiteShield eased the fears of theft and illicit re-use expressed by content providers who publish copyright-protected images on their websites. A research report published in November 2000 by IBM was one of the first to document a working system for web content protection, called WebGuard. Around 2002, several companies in the music recording industry had been issuing non-standard compact discs with deliberate errors burned into them, as copy protection measures. Google also notably installed an automated system to help detect and block YouTube video uploads with content that entail copyright infringement. However, as individuals and enterprises engaged in computer crime have become more skilled and sophisticated, they erode the effectiveness of established perimeter-based security controls. The response is more pervasive use of data encryption technologies. Forrester Research asserted in 2011 that there is an industry-wide "drive toward consolidated content security platforms", and they predict in 2012 that "proliferating malware threats will require better threat intelligence". Forrester also asserts that content protection networks (especially in the form of software as a service, or SaaS) enable companies to protect against both e-mail and web-borne theft of content. In some web applications, security is defined by URL patterns that identify protected content. For example, using the web.xml security-constraint element, content could be assigned values of NONE, INTEGRAL, and CONFIDENTIAL to describe the necessary transport guarantees. See also Digital rights management Internet security Web harvesting Web scraping References Internet terminology Web technology
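As an illustration of the web.xml mechanism mentioned above, a Java Servlet deployment descriptor can declare a transport guarantee for a URL pattern roughly as follows; this is a hedged sketch, and the resource name and URL pattern are assumptions rather than values taken from any particular product:

<!-- Hypothetical fragment of a Java Servlet web.xml deployment descriptor. -->
<!-- The url-pattern and resource name are assumptions made for illustration. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Protected content</web-resource-name>
    <url-pattern>/protected/*</url-pattern>
  </web-resource-collection>
  <user-data-constraint>
    <!-- NONE, INTEGRAL or CONFIDENTIAL, as described in the text above -->
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>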
Content protection network
Technology
491
31,474,404
https://en.wikipedia.org/wiki/Field%20strength%20meter
In telecommunications, a field strength meter is an instrument that measures the electric field strength emanating from a transmitter. The relation between the electric field and the transmitted power In ideal free space, the electric field strength produced by a transmitter with an isotropic radiator is readily calculated as E = √(30·P) / d, where E is the electric field strength in volts per meter, P is the transmitter power output in watts, and d is the distance from the radiator in meters. The factor 30 is an approximation of Z0/(4π), where Z0 ≈ 377 Ω is the impedance of free space (Ω is the symbol for ohms). It is clear that electric field strength is inversely proportional to the distance between the transmitter and the receiver. However, this relation is impractical for calculating the field strength produced by terrestrial transmitters, where reflections and attenuation caused by objects around the transmitter or receiver may affect the electrical field strength considerably. Field strength meter A field strength meter is actually a simple receiver. The RF signal is detected and fed to a microammeter, which is scaled in dBμ. The frequency range of the tuner is usually within the terrestrial broadcasting bands. Some FS meters can also receive satellite (TVRO and RRO) frequencies. Most modern FS meters have AF and VF circuits and can be used as standard receivers. Some FS meters are also equipped with printers to record received field strength. Antennas When measuring with a field strength meter it is important to use a calibrated antenna such as the standard antenna supplied with the meter. For precision measurements the antenna must be at a standard height; a fixed standard height is specified for VHF and UHF measurements. Gain correction tables that take into account the change of antenna gain with frequency may be provided with the meter. Minimum field strength criteria The CCIR defines minimum field strength values for satisfactory reception in each broadcasting band. (Band II is reserved for FM radio broadcasting and the other bands are reserved for TV broadcasting.) References Electromagnetic radiation meters Broadcast engineering
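A minimal sketch applying the free-space relation above, with the transmitter power and distance chosen as assumptions; the conversion to dBμ (decibels relative to 1 μV/m) matches the meter scaling mentioned in the text:

# Sketch applying the free-space relation E = sqrt(30 * P) / d for an isotropic radiator.
# The transmitter power and distance below are assumed example values.
import math

def field_strength_v_per_m(power_w: float, distance_m: float) -> float:
    """Free-space electric field strength of an isotropic radiator, in V/m."""
    return math.sqrt(30.0 * power_w) / distance_m

def v_per_m_to_dbu(e_v_per_m: float) -> float:
    """Convert V/m to dB relative to 1 microvolt per meter."""
    return 20.0 * math.log10(e_v_per_m / 1e-6)

if __name__ == "__main__":
    e = field_strength_v_per_m(power_w=1000.0, distance_m=10_000.0)  # 1 kW at 10 km
    print(f"E = {e * 1e3:.2f} mV/m = {v_per_m_to_dbu(e):.1f} dBu")   # ~17.32 mV/m, ~84.8 dBu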
Field strength meter
Physics,Technology,Engineering
396
8,537,169
https://en.wikipedia.org/wiki/Bis%28triphenylphosphine%29iminium%20chloride
Bis(triphenylphosphine)iminium chloride is the chemical compound with the formula , often abbreviated , where Ph is phenyl , or even abbreviated [PPN]Cl or [PNP]Cl or PPNCl or PNPCl, where PPN or PNP stands for . This colorless salt is a source of the cation (abbreviated or ), which is used as an unreactive and weakly coordinating cation to isolate reactive anions. is a phosphazene. Synthesis and structure is prepared in two steps from triphenylphosphine : This triphenylphosphine dichloride is related to phosphorus pentachloride . Treatment of this species with hydroxylamine in the presence of results in replacement of the two single P–Cl bonds in by one double P=N bond: Triphenylphosphine oxide is a by-product. Bis(triphenylphosphine)iminium chloride is described as . The structure of the bis(triphenylphosphine)iminium cation is . The P=N=P angle in the cation is flexible, ranging from ~130 to 180° depending on the salt. Bent and linear forms of the P=N=P connections have been observed in the same unit cell. The same shallow potential well for bending is observed in the isoelectronic species bis(triphenylphosphoranylidene)methane, , as well as the more distantly related molecule carbon suboxide, . For the solvent-free chloride salt , the P=N=P bond angle was determined to be 133°. The two P=N bonds are equivalent, and their length is 1.597(2) Å. Use as reagent In the laboratory, is the main precursor to salts. Using salt metathesis reactions, nitrite, azide, and other small inorganic anions can be obtained with cations. The resulting salts , , etc. are soluble in polar organic solvents. forms crystalline salts with a range of anions that are otherwise difficult to crystallize. Its effectiveness is partially attributable to its rigidity, reflecting the presence of six phenyl rings. Often forms salts that are more air-stable than salts with smaller cations such as those containing quaternary ammonium cation , or alkali metal cations. This effect is attributed to the steric shielding provided by this voluminous cation. Illustrative salts of reactive anions include , , (M = Cr, Mo, W), and . The role of ion pairing in chemical reactions is often clarified by examination of the related salt derived from . Related cations A phosphazenium cation related to is . References Organophosphorus compounds Chlorides Phosphazenes Phenyl compounds
Bis(triphenylphosphine)iminium chloride
Chemistry
590
5,504,424
https://en.wikipedia.org/wiki/Number%20Forms
Number Forms is a Unicode block containing Unicode compatibility characters that have specific meaning as numbers, but are constructed from other characters. They consist primarily of vulgar fractions and Roman numerals. In addition to the characters in the Number Forms block, three fractions (¼, ½, and ¾) were inherited from ISO-8859-1, which was incorporated whole as the Latin-1 Supplement block. List of characters Block History The following Unicode-related documents record the purpose and process of defining specific characters in the Number Forms block: See also Latin script in Unicode Unicode symbols References Symbols Unicode Unicode blocks
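The Number Forms block occupies code points U+2150 through U+218F. As a small standard-library sketch, the snippet below lists the characters assigned in that range and their numeric values where defined; the exact set of assigned code points depends on the Unicode version bundled with the Python interpreter.

```python
import unicodedata

BLOCK_START, BLOCK_END = 0x2150, 0x218F  # Number Forms block

for cp in range(BLOCK_START, BLOCK_END + 1):
    ch = chr(cp)
    name = unicodedata.name(ch, None)
    if name is None:                      # unassigned in this Unicode version
        continue
    value = unicodedata.numeric(ch, None)  # e.g. 0.333... for U+2153, 12.0 for U+216B
    print(f"U+{cp:04X} {ch} {name} -> {value}")

# The three fractions inherited from ISO-8859-1 live in the Latin-1 Supplement block:
for ch in "¼½¾":
    print(ch, unicodedata.numeric(ch))
```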
Number Forms
Mathematics
122
5,401,424
https://en.wikipedia.org/wiki/Cadmium%20nitrate
Cadmium nitrate describes any of the related members of a family of inorganic compounds with the general formula Cd(NO3)2·xH2O. The most commonly encountered form is the tetrahydrate. The anhydrous form is volatile, but the others are colourless crystalline solids that are deliquescent, tending to absorb enough moisture from the air to form an aqueous solution. Like other cadmium compounds, cadmium nitrate is known to be carcinogenic. According to X-ray crystallography, the tetrahydrate features octahedral Cd2+ centers bound to six oxygen ligands. Uses Cadmium nitrate is used for coloring glass and porcelain and as a flash powder in photography. Preparation Cadmium nitrate is prepared by dissolving cadmium metal or its oxide, hydroxide, or carbonate, in nitric acid followed by crystallization. Reactions Thermal dissociation at elevated temperatures produces cadmium oxide and oxides of nitrogen. When hydrogen sulfide is passed through an acidified solution of cadmium nitrate, yellow cadmium sulfide is formed. A red modification of the sulfide is formed under boiling conditions. When treated with sodium hydroxide, solutions of cadmium nitrate yield a solid precipitate of cadmium hydroxide. Many insoluble cadmium salts are obtained by such precipitation reactions. References External links Cadmium compounds Nitrates Deliquescent materials IARC Group 1 carcinogens
Cadmium nitrate
Chemistry
287
5,599,330
https://en.wikipedia.org/wiki/Sensitivity%20and%20specificity
In medicine and statistics, sensitivity and specificity mathematically describe the accuracy of a test that reports the presence or absence of a medical condition. If individuals who have the condition are considered "positive" and those who do not are considered "negative", then sensitivity is a measure of how well a test can identify true positives and specificity is a measure of how well a test can identify true negatives: Sensitivity (true positive rate) is the probability of a positive test result, conditioned on the individual truly being positive. Specificity (true negative rate) is the probability of a negative test result, conditioned on the individual truly being negative. If the true status of the condition cannot be known, sensitivity and specificity can be defined relative to a "gold standard test" which is assumed correct. For all testing, both diagnoses and screening, there is usually a trade-off between sensitivity and specificity, such that higher sensitivities will mean lower specificities and vice versa. A test which reliably detects the presence of a condition, resulting in a high number of true positives and low number of false negatives, will have a high sensitivity. This is especially important when the consequence of failing to treat the condition is serious and/or the treatment is very effective and has minimal side effects. A test which reliably excludes individuals who do not have the condition, resulting in a high number of true negatives and low number of false positives, will have a high specificity. This is especially important when people who are identified as having a condition may be subjected to more testing, expense, stigma, anxiety, etc. The terms "sensitivity" and "specificity" were introduced by American biostatistician Jacob Yerushalmy in 1947. There are different definitions within laboratory quality control, wherein "analytical sensitivity" is defined as the smallest amount of substance in a sample that can accurately be measured by an assay (synonymously to detection limit), and "analytical specificity" is defined as the ability of an assay to measure one particular organism or substance, rather than others. However, this article deals with diagnostic sensitivity and specificity as defined at top. Application to screening study Imagine a study evaluating a test that screens people for a disease. Each person taking the test either has or does not have the disease. The test outcome can be positive (classifying the person as having the disease) or negative (classifying the person as not having the disease). The test results for each subject may or may not match the subject's actual status. In that setting: True positive: Sick people correctly identified as sick False positive: Healthy people incorrectly identified as sick True negative: Healthy people correctly identified as healthy False negative: Sick people incorrectly identified as healthy After getting the numbers of true positives, false positives, true negatives, and false negatives, the sensitivity and specificity for the test can be calculated. If it turns out that the sensitivity is high then any person who has the disease is likely to be classified as positive by the test. On the other hand, if the specificity is high, any person who does not have the disease is likely to be classified as negative by the test. An NIH web site has a discussion of how these ratios are calculated. Definition Sensitivity Consider the example of a medical test for diagnosing a condition. 
Sensitivity (sometimes also named the detection rate in a clinical setting) refers to the test's ability to correctly detect ill patients out of those who do have the condition. Mathematically, this can be expressed as: sensitivity = TP / (TP + FN), the number of true positives divided by the total number of individuals who actually have the condition. A negative result in a test with high sensitivity can be useful for "ruling out" disease, since it rarely misdiagnoses those who do have the disease. A test with 100% sensitivity will recognize all patients with the disease by testing positive. In this case, a negative test result would definitively rule out the presence of the disease in a patient. However, a positive result in a test with high sensitivity is not necessarily useful for "ruling in" disease. Suppose a 'bogus' test kit is designed to always give a positive reading. When used on diseased patients, all patients test positive, giving the test 100% sensitivity. However, sensitivity does not take into account false positives. The bogus test also returns positive on all healthy patients, giving it a false positive rate of 100%, rendering it useless for detecting or "ruling in" the disease. The calculation of sensitivity does not take into account indeterminate test results. If a test cannot be repeated, indeterminate samples either should be excluded from the analysis (the number of exclusions should be stated when quoting sensitivity) or can be treated as false negatives (which gives the worst-case value for sensitivity and may therefore underestimate it). A test with a higher sensitivity has a lower type II error rate. Specificity Consider the example of a medical test for diagnosing a disease. Specificity refers to the test's ability to correctly reject healthy patients without a condition. Mathematically, this can be written as: specificity = TN / (TN + FP), the number of true negatives divided by the total number of individuals who do not have the condition. A positive result in a test with high specificity can be useful for "ruling in" disease, since the test rarely gives positive results in healthy patients. A test with 100% specificity will recognize all patients without the disease by testing negative, so a positive test result would definitively rule in the presence of the disease. However, a negative result from a test with high specificity is not necessarily useful for "ruling out" disease. For example, a test that always returns a negative test result will have a specificity of 100% because specificity does not consider false negatives. A test like that would return negative for patients with the disease, making it useless for "ruling out" the disease. A test with a higher specificity has a lower type I error rate. Graphical illustration The above graphical illustration is meant to show the relationship between sensitivity and specificity. The black, dotted line in the center of the graph is where the sensitivity and specificity are the same. As one moves to the left of the black dotted line, the sensitivity increases, reaching its maximum value of 100% at line A, and the specificity decreases. The sensitivity at line A is 100% because at that point there are zero false negatives, meaning that all the negative test results are true negatives. When moving to the right, the opposite applies: the specificity increases until it reaches the B line and becomes 100%, and the sensitivity decreases. The specificity at line B is 100% because the number of false positives is zero at that line, meaning all the positive test results are true positives. The middle solid line in both figures above, which shows the level of sensitivity and specificity, is the test cut-off point.
As previously described, moving this line results in a trade-off between the level of sensitivity and specificity. The left-hand side of this line contains the data points that test below the cut-off point and are considered negative (the blue dots indicate the False Negatives (FN), the white dots True Negatives (TN)). The right-hand side of the line shows the data points that test above the cut-off point and are considered positive (red dots indicate False Positives (FP)). Each side contains 40 data points. For the figure that shows high sensitivity and low specificity, there are 3 FN and 8 FP. Using the fact that positive results = true positives (TP) + FP, we get TP = positive results - FP, or TP = 40 - 8 = 32. The number of sick people in the data set is equal to TP + FN, or 32 + 3 = 35. The sensitivity is therefore 32 / 35 = 91.4%. Using the same method, we get TN = 40 - 3 = 37, and the number of healthy people is 37 + 8 = 45, which results in a specificity of 37 / 45 = 82.2%. For the figure that shows low sensitivity and high specificity, there are 8 FN and 3 FP. Using the same method as the previous figure, we get TP = 40 - 3 = 37. The number of sick people is 37 + 8 = 45, which gives a sensitivity of 37 / 45 = 82.2%. There are 40 - 8 = 32 TN. The specificity therefore comes out to 32 / 35 = 91.4%. The red dot indicates the patient with the medical condition. The red background indicates the area where the test predicts the data point to be positive. There are 6 true positives in this figure and 0 false negatives (because every positive condition is correctly predicted as positive). Therefore, the sensitivity is 100% (from TP / (TP + FN) = 6 / 6). This situation is also illustrated in the previous figure where the dotted line is at position A (the left-hand side is predicted as negative by the model, the right-hand side is predicted as positive by the model). When the dotted line, the test cut-off line, is at position A, the test correctly predicts all the population of the true positive class, but it will fail to correctly identify the data points from the true negative class. Similar to the previously explained figure, the red dot indicates the patient with the medical condition. However, in this case, the green background indicates that the test predicts that all patients are free of the medical condition. The number of true negatives is then 26, and the number of false positives is 0. This results in 100% specificity (from TN / (TN + FP) = 26 / 26). Therefore, sensitivity or specificity alone cannot be used to measure the performance of the test. Medical usage In medical diagnosis, test sensitivity is the ability of a test to correctly identify those with the disease (true positive rate), whereas test specificity is the ability of the test to correctly identify those without the disease (true negative rate). If 100 patients known to have a disease were tested, and 43 test positive, then the test has 43% sensitivity. If 100 with no disease are tested and 96 return a completely negative result, then the test has 96% specificity. Sensitivity and specificity are prevalence-independent test characteristics, as their values are intrinsic to the test and do not depend on the disease prevalence in the population of interest. Positive and negative predictive values, but not sensitivity or specificity, are values influenced by the prevalence of disease in the population that is being tested.
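The arithmetic of the worked example above (40 data points on each side of the cut-off, with 3 false negatives and 8 false positives in the first figure and the counts reversed in the second) can be replayed with a small Python helper; the function itself is just the standard definitions, not anything specific to these figures.

```python
def sensitivity_specificity(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Return (sensitivity, specificity) from the four confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity, specificity

# First figure: 40 points test positive and 40 test negative, with 3 FN and 8 FP.
test_pos, test_neg = 40, 40
fn, fp = 3, 8
tp, tn = test_pos - fp, test_neg - fn        # 32 and 37
print(sensitivity_specificity(tp, fp, tn, fn))   # (0.914..., 0.822...)

# Second figure: the error counts are swapped, 8 FN and 3 FP.
fn, fp = 8, 3
tp, tn = test_pos - fp, test_neg - fn        # 37 and 32
print(sensitivity_specificity(tp, fp, tn, fn))   # (0.822..., 0.914...)
```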
These concepts are illustrated graphically in this applet Bayesian clinical diagnostic model, which shows the positive and negative predictive values as a function of the prevalence, sensitivity and specificity. Misconceptions It is often claimed that a highly specific test is effective at ruling in a disease when positive, while a highly sensitive test is deemed effective at ruling out a disease when negative. This has led to the widely used mnemonics SPPIN and SNNOUT, according to which a highly specific test, when positive, rules in disease (SP-P-IN), and a highly sensitive test, when negative, rules out disease (SN-N-OUT). Both rules of thumb are, however, inferentially misleading, as the diagnostic power of any test is determined by the prevalence of the condition being tested, the test's sensitivity and its specificity. The SNNOUT mnemonic has some validity when the prevalence of the condition in question is extremely low in the tested sample. The tradeoff between specificity and sensitivity is explored in ROC analysis as a trade off between TPR and FPR (that is, recall and fallout). Giving them equal weight optimizes informedness = specificity + sensitivity − 1 = TPR − FPR, the magnitude of which gives the probability of an informed decision between the two classes (> 0 represents appropriate use of information, 0 represents chance-level performance, < 0 represents perverse use of information). Sensitivity index The sensitivity index or d′ (pronounced "dee-prime") is a statistic used in signal detection theory. It provides the separation between the means of the signal and the noise distributions, compared against the standard deviation of the noise distribution. For normally distributed signal and noise with means and standard deviations μS, σS and μN, σN, respectively, d′ is defined as d′ = (μS − μN) / σN. An estimate of d′ can also be found from measurements of the hit rate and false-alarm rate. It is calculated as: d′ = Z(hit rate) − Z(false alarm rate), where function Z(p), p ∈ [0, 1], is the inverse of the cumulative Gaussian distribution. d′ is a dimensionless statistic. A higher d′ indicates that the signal can be more readily detected. Confusion matrix The relationship between sensitivity, specificity, and similar terms can be understood using the following table. Consider a group with P positive instances and N negative instances of some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as well as derivations of several metrics using the four outcomes, as follows: Estimation of errors in quoted sensitivity or specificity Sensitivity and specificity values alone may be highly misleading. The 'worst-case' sensitivity or specificity must be calculated in order to avoid reliance on experiments with few results. For example, a particular test may easily show 100% sensitivity if tested against the gold standard four times, but a single additional test against the gold standard that gave a poor result would imply a sensitivity of only 80%. A common way to do this is to state the binomial proportion confidence interval, often calculated using a Wilson score interval. Confidence intervals for sensitivity and specificity can be calculated, giving the range of values within which the correct value lies at a given confidence level (e.g., 95%). Terminology in information retrieval In information retrieval, the positive predictive value is called precision, and sensitivity is called recall.
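As a small illustration of the hit-rate and false-alarm-rate estimate of d′ quoted above, the inverse cumulative Gaussian Z can be taken from the standard library's statistics.NormalDist; the example rates are invented for the demonstration.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """d' = Z(hit rate) - Z(false alarm rate), Z being the inverse cumulative Gaussian."""
    z = NormalDist().inv_cdf  # standard normal quantile function
    return z(hit_rate) - z(false_alarm_rate)

# Example: 84% hits and 16% false alarms give a d' of roughly 2.
print(d_prime(0.84, 0.16))   # ~1.99
```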
Unlike the specificity vs sensitivity tradeoff, these measures are both independent of the number of true negatives, which is generally unknown and much larger than the actual numbers of relevant and retrieved documents. This assumption of very large numbers of true negatives versus positives is rare in other applications. The F-score can be used as a single measure of performance of the test for the positive class. The F-score is the harmonic mean of precision and recall: F = 2 × (precision × recall) / (precision + recall). In the traditional language of statistical hypothesis testing, the sensitivity of a test is called the statistical power of the test, although the word power in that context has a more general usage that is not applicable in the present context. A sensitive test will have fewer Type II errors. Terminology in genome analysis Similarly to the domain of information retrieval, in the research area of gene prediction, the number of true negatives (non-genes) in genomic sequences is generally unknown and much larger than the actual number of genes (true positives). The convenient and intuitively understood term specificity in this research area has been frequently used with the mathematical formula for precision and recall as defined in biostatistics. The pair of thus defined specificity (as positive predictive value) and sensitivity (true positive rate) represents the major parameters characterizing the accuracy of gene prediction algorithms. Conversely, the term specificity in a sense of true negative rate would have little, if any, application in the genome analysis research area. See also Notes References Further reading External links UIC Calculator Vassar College's Sensitivity/Specificity Calculator MedCalc Free Online Calculator Bayesian clinical diagnostic model applet Accuracy and precision Bioinformatics Biostatistics Cheminformatics Medical statistics Statistical ratios Statistical classification
Sensitivity and specificity
Chemistry,Engineering,Biology
3,214
82,576
https://en.wikipedia.org/wiki/Hilaeira
In Greek mythology, Hilaera (Ancient Greek: Ἱλάειρα; also Ilaeira) was a Messenian princess. Family Hilaera was a daughter of Leucippus and Philodice, daughter of Inachus. She and her sister Phoebe are commonly referred to as Leucippides (that is, "daughters of Leucippus"). In another account, they were the daughters of Apollo. Hilaera married Castor and bore him a son, named either Anogon or Anaxis. Mythology Hilaera and Phoebe were priestesses of Artemis and Athena, and betrothed to Lynceus and Idas, the sons of Aphareus. Castor and Pollux were charmed by their beauty and carried them off. When Idas and Lynceus tried to rescue their brides-to-be they were both slain, but Castor himself fell. Pollux persuaded Zeus to allow him to share his immortality with his brother. Cultural depictions Hilaera and Phoebe are both portrayed in the painting The Rape of the Daughters of Leucippus. Notes References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project. Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. . Online version at the Perseus Digital Library Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library. Publius Ovidius Naso, Fasti translated by James G. Frazer. Online version at the Topos Text Project. Publius Ovidius Naso, Fasti. Sir James George Frazer. London; Cambridge, MA. William Heinemann Ltd.; Harvard University Press. 1933. Latin text available at the Perseus Digital Library. Sextus Propertius, Elegies from Charm. Vincent Katz. trans. Los Angeles. Sun & Moon Press. 1995. Online version at the Perseus Digital Library. Latin text available at the same website. Theocritus, Idylls from The Greek Bucolic Poets translated by Edmonds, J M. Loeb Classical Library Volume 28. Cambridge, MA. Harvard Univserity Press. 1912. Online version at theoi.com Theocritus, Idylls edited by R. J. Cholmeley, M.A. London. George Bell & Sons. 1901. Greek text available at the Perseus Digital Library. External links Princesses in Greek mythology Children of Apollo Mythological rape victims Mythological Messenians Messenian mythology Castor and Pollux Greek mythological priestesses
Hilaeira
Astronomy
708
28,293,188
https://en.wikipedia.org/wiki/Bid%E2%80%93ask%20matrix
The bid–ask matrix is a matrix with elements corresponding to exchange rates between the assets. These rates are in physical units (e.g. number of stocks) and not with respect to any numeraire. The element π_ij of the matrix is the number of units of asset i which can be exchanged for 1 unit of asset j. Mathematical definition A d × d matrix Π = (π_ij) is a bid–ask matrix if: 1. π_ij > 0 for 1 ≤ i, j ≤ d. Any trade has a positive exchange rate. 2. π_ii = 1 for 1 ≤ i ≤ d. One unit of an asset can always be traded for one unit of itself. 3. π_ij ≤ π_ik π_kj for 1 ≤ i, j, k ≤ d. A direct exchange is always at most as expensive as a chain of exchanges. Example Assume a market with 2 assets (A and B), such that x units of A can be exchanged for 1 unit of B, and y units of B can be exchanged for 1 unit of A. Then the bid–ask matrix is Π = [[1, x], [y, 1]], and it is required that xy ≥ 1 by rule 3. With 3 assets, let π_ij be the number of units of asset i traded for 1 unit of asset j; the bid–ask matrix is the 3 × 3 matrix (π_ij), and rule 3 applies the inequalities π_ij ≤ π_ik π_kj for every choice of indices i, j, k. For higher numbers of assets, note that rule 3 already covers longer chains of exchanges, since any multi-step chain can be reduced by repeatedly applying the three-asset inequality. Relation to solvency cone Given a bid–ask matrix Π for d assets, some of which can be "discarded" in any non-negative quantity (traditionally all d of them), the solvency cone K(Π) is the convex cone spanned by the unit vectors e_i of the discardable assets and the vectors π_ij e_i − e_j. Similarly, given a (constant) solvency cone it is possible to extract the bid–ask matrix from the bounding vectors. Notes The bid–ask spread for the pair (i, j) measures the gap between π_ij and 1/π_ji. If π_ij π_ji = 1 then that pair is frictionless. If every pair in a subset of the assets is frictionless, then that subset is frictionless. Arbitrage in bid-ask matrices Arbitrage is where a profit is guaranteed. If Rule 3 from above is true, then a bid-ask matrix (BAM) is arbitrage-free; otherwise arbitrage is present via buying from a middle vendor and then selling back to the source. Iterative computation A method to determine if a BAM is arbitrage-free is as follows. Consider n assets, with a BAM and a portfolio . Then where the i-th entry of is the value of in terms of asset i. Then the tensor product defined by should resemble . References Mathematical finance
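A minimal Python sketch of checking the three rules stated above; the example matrices and the function name are illustrative only, and a small tolerance is used for floating-point comparisons.

```python
def is_bid_ask_matrix(pi: list[list[float]]) -> bool:
    """Check the three bid-ask matrix rules: positivity, unit diagonal,
    and 'direct trade no more expensive than a two-step chain' (rule 3).
    A matrix passing rule 3 is arbitrage-free in the sense described above."""
    d = len(pi)
    for i in range(d):
        if pi[i][i] != 1:                                    # rule 2
            return False
        for j in range(d):
            if pi[i][j] <= 0:                                # rule 1
                return False
            for k in range(d):
                if pi[i][j] > pi[i][k] * pi[k][j] + 1e-12:   # rule 3
                    return False
    return True

# Two assets: 1 unit of B costs 1.05 units of A; 1 unit of A costs 0.97 units of B.
ok_matrix = [[1.0, 1.05],
             [0.97, 1.0]]
print(is_bid_ask_matrix(ok_matrix))    # True, since 1.05 * 0.97 >= 1

# Here a round trip A -> B -> A ends with more A than you started with: arbitrage.
bad_matrix = [[1.0, 1.05],
              [0.90, 1.0]]
print(is_bid_ask_matrix(bad_matrix))   # False (violates rule 3: 1 > 1.05 * 0.90)
```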
Bid–ask matrix
Mathematics
452
46,783,212
https://en.wikipedia.org/wiki/NGC%2093
NGC 93 is an interacting spiral galaxy estimated to be about 260 million light-years away in the constellation of Andromeda. It was discovered by R. J. Mitchell in 1854. The galaxy is currently interacting with NGC 90 and shows signs of this interaction. NGC 93 and NGC 90 form the interacting galaxy pair Arp 65. References External links 0093 Andromeda (constellation) +04-02-012 00209 001412 18541026 Spiral galaxies Interacting galaxies
NGC 93
Astronomy
102
27,484,109
https://en.wikipedia.org/wiki/Defence%20Materials%20and%20Stores%20Research%20and%20Development%20Establishment
Defence Materials and Stores Research and Development Establishment (DMSRDE) is a laboratory of the Indian Defence Research and Development Organisation (DRDO) at Kanpur. It is responsible for the research and development of materials for the Indian military services, including various types of protective clothing and equipment. History The Defence Materials & Stores Research & Development Establishment (DMSRDE) was formed by renaming the Inspectorate of General Stores in the Harness & Saddlery Factory in Cawnpore (present day Ordnance Equipment Factory, Kanpur) of the Ordnance Factory Board in 1929. Projects and products DMSRDE has played an important role in the development of high-tech non-metallic materials for the Indian Armed Forces. DMSRDE has developed the Nuclear Shielding Pad, Boot Anti Mine, Blast Protection Suit, Bullet Proof Jackets, etc. "The Defence Material and Stores Research Development Establishment in Kanpur has developed a new NBC suit that would be proved effective against any kind of dangerous weapons or chemicals and protect soldiers from any sort of attack", DMSRDE Director Arvind Kumar Saxena was quoted as saying. 40,000 pieces of NBC suits costing about ₹30,000 had been requested by the Indian Army. "The further progress on the other two suits are going on," he was further quoted as saying. In 2021, DMSRDE developed a new medium-sized, lightweight 9 kg bulletproof vest for the Indian Army that can protect against hard steel bullet cores in counter-insurgency operations. The vest conforms to Bureau of Indian Standards (BIS) requirements and was validated by the Terminal Ballistics Research Laboratory (TBRL). In 2024, DMSRDE developed India's lightest bulletproof jacket. References External links Defence Research and Development Organisation laboratories Research institutes in Uttar Pradesh Materials science institutes Research and development in India 1929 establishments in India
Defence Materials and Stores Research and Development Establishment
Materials_science
363
36,834,470
https://en.wikipedia.org/wiki/Igor%20Rivin
Igor Rivin (born 1961 in Moscow, USSR) is a Russian-Canadian mathematician, working in various fields of pure and applied mathematics, computer science, and materials science. He was the Regius Professor of Mathematics at the University of St. Andrews from 2015 to 2017, and was the chief research officer at Cryptos Fund until 2019. He was the principal of a couple of small hedge funds, and later did research for Edgestream LP, in addition to his academic work. Career He received his B.Sc. (Hon) in mathematics from the University of Toronto in 1981, and his Ph.D. in 1986 from Princeton University under the direction of William Thurston. Following his doctorate, Rivin directed development of QLISP and the Mathematica kernel, before returning to academia in 1992, where he held positions at the Institut des Hautes Études Scientifiques, the Institute for Advanced Study, the University of Melbourne, Warwick, and Caltech. Since 1999, Rivin has been professor of mathematics at Temple University. Between 2015 and 2017 he was Regius Professor of Mathematics at the University of St. Andrews. Major accomplishments Rivin's PhD thesis and a series of extensions characterized hyperbolic 3-dimensional polyhedra in terms of their dihedral angles, resolving a long-standing open question of Jakob Steiner on the inscribable combinatorial types. These, and some related results in convex geometry, have been used in 3-manifold topology, theoretical physics, computational geometry, and the recently developed field of discrete differential geometry. Rivin has also made advances in counting geodesics on surfaces, the study of generic elements of discrete subgroups of Lie groups, and in the theory of dynamical systems. Rivin is also active in applied areas, having written large parts of the Mathematica 2.0 kernel, and he developed a database of hypothetical zeolites in collaboration with M. M. J. Treacy. Rivin is a frequent contributor to MathOverflow. Igor Rivin is the co-creator, with economist Carlo Scevola, of Cryptocurrencies Index 30 (CCi30), an index of the top 30 cryptocurrencies weighted by market capitalization. CCi30 is sometimes used by academic economists as a market index when comparing the cryptocurrency trading market as a whole with individual currencies. Honors First prize, Canadian Mathematical Olympiad, 1977 Whitehead prize of the London Mathematical Society, 1998 Advanced Research Fellowship of the EPSRC, 1998 Lady Davis Fellowship at the Hebrew University, 2006 Berlin Mathematical School professorship, 2011. Fellow of the American Mathematical Society, 2014. References External links Igor Rivin's author profile at MathSciNet Igor Rivin's Google Scholar profile Igor Rivin at Math Overflow Canadian mathematicians Jewish American scientists University of Toronto alumni Geometers 20th-century American mathematicians 21st-century American mathematicians Living people 1961 births Fellows of the American Mathematical Society 21st-century American Jews
Igor Rivin
Mathematics
607
58,431,337
https://en.wikipedia.org/wiki/HAT%20transposon
hAT transposons are a superfamily of DNA transposons, or Class II transposable elements, that are common in the genomes of plants, animals, and fungi. Nomenclature and classification Superfamilies are identified by shared DNA sequence and ability to respond to the same transposase. Common features of hAT transposons include a size of 2.5-5 kilobases with short terminal inverted repeats and short flanking target site duplications generated during the transposition process. The hAT superfamily's name derives from three of its members: the hobo element from Drosophila melanogaster, the Activator or Ac element from Zea mays, and the Tam3 element from Antirrhinum majus. The superfamily has been divided based on bioinformatics analysis into at least two clusters defined by their phylogenetic relationships: the Ac family and the Buster family. More recently, a third group called Tip has been described. Family members The hAT transposon superfamily includes the first transposon discovered, Ac from Zea mays (maize), first reported by Barbara McClintock. McClintock was awarded the Nobel Prize in Physiology or Medicine in 1983 for this discovery. The family also includes a subgroup known as space invaders or SPIN elements, which have very high copy numbers in some genomes and which are among the most efficient known transposons. Although no extant active example is known, laboratory-generated consensus sequences of active SPIN elements are able to generate high copy numbers when introduced to cells from a wide range of species. Distribution hAT transposons are widely distributed across eukaryotic genomes, but are not active in all organisms. Inactive hAT transposon sequences are present in mammal genomes, including the human genome; they are among the transposon families believed to have been present in the ancestral vertebrate genome. Among mammals, the genome of the little brown bat Myotis lucifugus is notable for its relatively high and recently acquired number of inactive hAT transposons. The distribution of SPIN elements is patchy and does not relate well to known phylogenetic relationships, prompting suggestions that these elements may have spread through horizontal gene transfer. Domestication Transposons are said to be exapted or "domesticated" when they have acquired functional roles in the host genome. Several sequences evolutionarily related to the hAT family have been exapted in diverse organisms, including Homo sapiens. An example is the ZBED gene family, which encode a group of zinc finger-containing regulatory proteins. References Mobile genetic elements
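Terminal inverted repeats of the kind mentioned above can be spotted with a very simple sequence check: the start of an element should match the reverse complement of its end. The snippet below is a toy illustration in Python, not a real transposon-annotation tool, and the example sequence and repeat length are invented.

```python
def reverse_complement(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def has_terminal_inverted_repeat(seq: str, tir_length: int = 8) -> bool:
    """True if the first tir_length bases equal the reverse complement of the
    last tir_length bases, a hallmark of hAT-like DNA transposons."""
    return seq[:tir_length] == reverse_complement(seq[-tir_length:])

# Invented example: 8 bp inverted repeats flanking some filler sequence.
element = "CAGTGCTA" + "ATGCGTACGTTAGC" + "TAGCACTG"
print(has_terminal_inverted_repeat(element))   # True
```

Real annotation pipelines additionally allow mismatches in the repeats and look for the short target site duplications flanking the element.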
HAT transposon
Biology
527
244,601
https://en.wikipedia.org/wiki/Effects%20of%20nuclear%20explosions
The effects of a nuclear explosion on its immediate vicinity are typically much more destructive and multifaceted than those caused by conventional explosives. In most cases, the energy released from a nuclear weapon detonated within the lower atmosphere can be approximately divided into four basic categories:
the blast and shock wave: 50% of total energy
thermal radiation: 35% of total energy
ionizing radiation: 5% of total energy (more in a neutron bomb)
residual radiation: 5–10% of total energy with the mass of the explosion
Depending on the design of the weapon and the location in which it is detonated, the energy distributed to any one of these categories may be significantly higher or lower. The physical blast effect is created by the coupling of immense amounts of energy, spanning the electromagnetic spectrum, with the surroundings. The environment of the explosion (e.g. submarine, ground burst, air burst, or exo-atmospheric) determines how much energy is distributed to the blast and how much to radiation. In general, surrounding a bomb with denser media, such as water, absorbs more energy and creates more powerful shock waves while at the same time limiting the area of its effect. When a nuclear weapon is surrounded only by air, lethal blast and thermal effects proportionally scale much more rapidly than lethal radiation effects as explosive yield increases. The expanding blast front moves faster than the speed of sound. The physical damage mechanisms of a nuclear weapon (blast and thermal radiation) are identical to those of conventional explosives, but the energy produced by a nuclear explosion is usually millions of times more powerful per unit mass, and temperatures may briefly reach the tens of millions of degrees. Energy from a nuclear explosion is initially released in several forms of penetrating radiation. When there is surrounding material such as air, rock, or water, this radiation interacts with and rapidly heats the material to an equilibrium temperature (i.e. so that the matter is at the same temperature as the fuel powering the explosion). This causes vaporization of the surrounding material, resulting in its rapid expansion. Kinetic energy created by this expansion contributes to the formation of a shock wave which expands spherically from the center. Intense thermal radiation at the hypocenter forms a nuclear fireball which, if the explosion is low enough in altitude, is often associated with a mushroom cloud. In a high-altitude burst where the density of the atmosphere is low, more energy is released as ionizing gamma radiation and X-rays than as an atmosphere-displacing shockwave. Direct effects Blast damage The high temperatures and radiation cause gas to move outward radially in a thin, dense shell called "the hydrodynamic front". The front acts like a piston that pushes against and compresses the surrounding medium to make a spherically expanding shock wave. At first, this shock wave is inside the surface of the developing fireball, which is created in a volume of air heated by the explosion's "soft" X-rays. Within a fraction of a second, the dense shock front obscures the fireball and continues to move past it, expanding outwards and free from the fireball, causing a reduction of light emanating from a nuclear detonation. Eventually the shock wave dissipates to the point where the light becomes visible again, giving rise to the characteristic double flash caused by the shock wave–fireball interaction.
It is this unique feature of nuclear explosions that is exploited when verifying that an atmospheric nuclear explosion has occurred and not simply a large conventional explosion, with radiometer instruments known as Bhangmeters capable of determining the nature of explosions. For air bursts at or near sea level, 50–60% of the explosion's energy goes into the blast wave, depending on the size and the yield of the bomb. As a general rule, the blast fraction is higher for low yield weapons. Furthermore, it decreases at high altitudes because there is less air mass to absorb radiation energy and convert it into a blast. This effect is most important for altitudes above 30  km, corresponding to less than 1 percent of sea-level air density. The effects of a moderate rain storm during an Operation Castle nuclear explosion were found to dampen, or reduce, peak pressure levels by approximately 15% at all ranges. Much of the destruction caused by a nuclear explosion is from blast effects. Most buildings, except reinforced or blast-resistant structures, will suffer moderate damage when subjected to overpressures of only 35.5 kilopascals (kPa) (5.15 pounds-force per square inch or 0.35 atm). Data obtained from Japanese surveys following the atomic bombings of Hiroshima and Nagasaki found that was sufficient to destroy all wooden and brick residential structures. This can reasonably be defined as the pressure capable of producing severe damage. The blast wind at sea level may exceed 1,000 km/h, or ~300 m/s, approaching the speed of sound in air. The range for blast effects increases with the explosive yield of the weapon and also depends on the burst altitude. Contrary to what might be expected from geometry, the blast range is not maximal for surface or low altitude blasts but increases with altitude up to an "optimum burst altitude" and then decreases rapidly for higher altitudes. This is caused by the nonlinear behavior of shock waves. When the blast wave from an air burst reaches the ground it is reflected. Below a certain reflection angle, the reflected wave and the direct wave merge and form a reinforced horizontal wave, known as the '"Mach stem" and is a form of constructive interference. This phenomenon is responsible for the bumps or 'knees' in the above overpressure range graph. For each goal overpressure, there is a certain optimum burst height at which the blast range is maximized over ground targets. In a typical air burst, where the blast range is maximized to produce the greatest range of severe damage, i.e. the greatest range that ~ of pressure is extended over, is a GR/ground range of 0.4 km for 1 kiloton (kt) of TNT yield; 1.9 km for 100 kt; and 8.6 km for 10 megatons (Mt) of TNT. The optimum height of burst to maximize this desired severe ground range destruction for a 1 kt bomb is 0.22  km; for 100 kt, 1  km; and for 10 Mt, 4.7  km. Two distinct, simultaneous phenomena are associated with the blast wave in the air: Static overpressure, i.e., the sharp increase in pressure exerted by the shock wave. The overpressure at any given point is directly proportional to the density of the air in the wave. Dynamic pressures, i.e., drag exerted by the blast winds required to form the blast wave. These winds push, tumble and tear objects. Most of the material damage caused by a nuclear air burst is caused by a combination of the high static overpressures and the blast winds. The long compression of the blast wave weakens structures, which are then torn apart by the blast winds. 
The compression, vacuum and drag phases together may last several seconds or longer, and exert forces many times greater than the strongest hurricane. Acting on the human body, the shock waves cause pressure waves through the tissues. These waves mostly damage junctions between tissues of different densities (bone and muscle) or the interface between tissue and air. Lungs and the abdominal cavity, which contain air, are particularly injured. The damage causes severe hemorrhaging or air embolisms, either of which can be rapidly fatal. The overpressure estimated to damage lungs is about 70 kPa. Some eardrums would probably rupture around 22 kPa (0.2 atm) and half would rupture between 90 and 130 kPa (0.9 to 1.2 atm). Thermal radiation Nuclear weapons emit large amounts of thermal radiation as visible, infrared, and ultraviolet light, to which the atmosphere is largely transparent. This is known as "flash". The chief hazards are burns and eye injuries. On clear days, these injuries can occur well beyond blast ranges, depending on weapon yield. Fires may also be started by the initial thermal radiation, but the following high winds due to the blast wave may put out almost all such fires, unless the yield is very high where the range of thermal effects vastly outranges blast effects, as observed from explosions in the multi-megaton range. This is because the intensity of the blast effects drops off with the third power of distance from the explosion, while the intensity of radiation effects drops off with the second power of distance. This results in the range of thermal effects increasing markedly more than blast range as higher and higher device yields are detonated. Thermal radiation accounts for between 35 and 45% of the energy released in the explosion, depending on the yield of the device. In urban areas, the extinguishing of fires ignited by thermal radiation may matter little, as in a surprise attack fires may also be started by blast-effect-induced electrical shorts, gas pilot lights, overturned stoves, and other ignition sources, as was the case in the breakfast-time bombing of Hiroshima. Whether or not these secondary fires will in turn be snuffed out as modern noncombustible brick and concrete buildings collapse in on themselves from the same blast wave is uncertain, not least of which, because of the masking effect of modern city landscapes on thermal and blast transmission are continually examined. When combustible frame buildings were blown down in Hiroshima and Nagasaki, they did not burn as rapidly as they would have done had they remained standing. The noncombustible debris produced by the blast frequently covered and prevented the burning of combustible material. Fire experts suggest that unlike Hiroshima, due to the nature of modern U.S. city design and construction, a firestorm in modern times is unlikely after a nuclear detonation. This does not exclude fires from being started but means that these fires will not form into a firestorm, due largely to the differences between modern building materials and those used in World War II-era Hiroshima. There are two types of eye injuries from thermal radiation: flash blindness and retinal burn. Flash blindness is caused by the initial brilliant flash of light produced by the nuclear detonation. More light energy is received on the retina than can be tolerated but less than is required for irreversible injury. 
The retina is particularly susceptible to visible and short wavelength infrared light since this part of the electromagnetic spectrum is focused by the lens on the retina. The result is bleaching of the visual pigments and temporary blindness for up to 40 minutes. A retinal burn resulting in permanent damage from scarring is also caused by the concentration of direct thermal energy on the retina by the lens. It will occur only when the fireball is actually in the individual's field of vision and would be a relatively uncommon injury. Retinal burns may be sustained at considerable distances from the explosion. The height of burst and apparent size of the fireball, a function of yield and range will determine the degree and extent of retinal scarring. A scar in the central visual field would be more debilitating. Generally, a limited visual field defect, which will be barely noticeable, is all that is likely to occur. When thermal radiation strikes an object, part will be reflected, part transmitted, and the rest absorbed. The fraction that is absorbed depends on the nature and color of the material. A thin material may transmit most of the radiation. A light-colored object may reflect much of the incident radiation and thus escape damage, like anti-flash white paint. The absorbed thermal radiation raises the temperature of the surface and results in scorching, charring, and burning of wood, paper, fabrics, etc. If the material is a poor thermal conductor, the heat is confined to the surface of the material. The actual ignition of materials depends on how long the thermal pulse lasts and the thickness and moisture content of the target. Near ground zero where the energy flux exceeds 125 J/cm2, what can burn, will. Farther away, only the most easily ignited materials will flame. Incendiary effects are compounded by secondary fires started by the blast wave effects such as from upset stoves and furnaces. In Hiroshima on 6 August 1945, a tremendous firestorm developed within 20 minutes after detonation and destroyed many more buildings and homes, built out of predominantly 'flimsy' wooden materials. A firestorm has gale-force winds blowing in towards the center of the fire from all directions. It is not peculiar to nuclear explosions, having been observed frequently in large forest fires and following incendiary raids during World War II. Despite fires destroying a large area of Nagasaki, no true firestorm occurred in the city even though a higher yielding weapon was used. Many factors explain this seeming contradiction, including a different time of bombing than Hiroshima, terrain, and crucially, a lower fuel loading/fuel density than that of Hiroshima. As thermal radiation travels more or less in a straight line from the fireball (unless scattered), any opaque object will produce a protective shadow that provides protection from the flash burn. Depending on the properties of the underlying surface material, the exposed area outside the protective shadow will be either burnt to a darker color, such as charring wood, or a brighter color, such as asphalt. If such a weather phenomenon as fog or haze is present at the point of the nuclear explosion, it scatters the flash, with radiant energy then reaching burn-sensitive substances from all directions. Under these conditions, opaque objects are therefore less effective than they would otherwise be without scattering, as they demonstrate maximum shadowing effect in an environment of perfect visibility and therefore zero scatterings. 
Similar to a foggy or overcast day, although there are few if any, shadows produced by the sun on such a day, the solar energy that reaches the ground from the sun's infrared rays is nevertheless considerably diminished, due to it being absorbed by the water of the clouds and the energy also being scattered back into space. Analogously, so too is the intensity at a range of burning flash energy attenuated, in units of J/cm2, along with the slant/horizontal range of a nuclear explosion, during fog or haze conditions. So despite any object that casts a shadow being rendered ineffective as a shield from the flash by fog or haze, due to scattering, the fog fills the same protective role, but generally only at the ranges that survival in the open is just a matter of being protected from the explosion's flash energy. The thermal pulse also is responsible for warming the atmospheric nitrogen close to the bomb and causing the creation of atmospheric NOx smog components. This, as part of the mushroom cloud, is shot into the stratosphere where it is responsible for dissociating ozone there, in the same way combustion NOx compounds do. The amount created depends on the yield of the explosion and the blast's environment. Studies done on the total effect of nuclear blasts on the ozone layer have been at least tentatively exonerating after initial discouraging findings. Indirect effects Electromagnetic pulse Gamma rays from a nuclear explosion produce high energy electrons through Compton scattering. For high altitude nuclear explosions, these electrons are captured in the Earth's magnetic field at altitudes between 20 and 40 kilometers where they interact with the Earth's magnetic field to produce a coherent nuclear electromagnetic pulse (NEMP) which lasts about one millisecond. Secondary effects may last for more than a second. The pulse is powerful enough to cause moderately long metal objects (such as cables) to act as antennas and generate high voltages due to interactions with the electromagnetic pulse. These voltages can destroy unshielded electronics. There are no known biological effects of EMP. The ionized air also disrupts radio traffic that would normally bounce off the ionosphere. Electronics can be shielded by wrapping them completely in conductive material such as metal foil; the effectiveness of the shielding may be less than perfect. Proper shielding is a complex subject due to the large number of variables involved. Semiconductors, especially integrated circuits, are extremely susceptible to the effects of EMP due to the close proximity of their p–n junctions, but this is not the case with thermionic tubes (or valves) which are relatively immune to EMP. A Faraday cage does not offer protection from the effects of EMP unless the mesh is designed to have holes no bigger than the smallest wavelength emitted from a nuclear explosion. Large nuclear weapons detonated at high altitudes also cause geomagnetically induced current in very long electrical conductors. The mechanism by which these geomagnetically induced currents are generated is entirely different from the gamma-ray induced pulse produced by Compton electrons. Radar blackout The heat of the explosion causes air in the vicinity to become ionized, creating the fireball. The free electrons in the fireball affect radio waves, especially at lower frequencies. This causes a large area of the sky to become opaque to radar, especially those operating in the VHF and UHF frequencies, which is common for long-range early warning radars. 
The effect is less for higher frequencies in the microwave region, as well as lasting a shorter time – the effect falls off both in strength and the affected frequencies as the fireball cools and the electrons begin to re-form onto free nuclei. A second blackout effect is caused by the emission of beta particles from the fission products. These can travel long distances, following the Earth's magnetic field lines. When they reach the upper atmosphere they cause ionization similar to the fireball but over a wider area. Calculations demonstrate that one megaton of fission, typical of a two-megaton H-bomb, will create enough beta radiation to blackout an area across for five minutes. Careful selection of the burst altitudes and locations can produce an extremely effective radar-blanking effect. The physical effects giving rise to blackouts also cause EMP, which can also cause power blackouts. The two effects are otherwise unrelated, and the similar naming can be confusing. Ionizing radiation About 5% of the energy released in a nuclear air burst is in the form of ionizing radiation: neutrons, gamma rays, alpha particles and electrons moving at speeds up to the speed of light. Gamma rays are high-energy electromagnetic radiation; the others are particles that move slower than light. The neutrons result almost exclusively from the fission and fusion reactions, while the initial gamma radiation includes that arising from these reactions as well as that resulting from the decay of short-lived fission products. The intensity of initial nuclear radiation decreases rapidly with distance from the point of burst because the radiation spreads over a larger area as it travels away from the explosion (the inverse-square law). It is also reduced by atmospheric absorption and scattering. The character of the radiation received at a given location also varies with the distance from the explosion. Near the point of the explosion, the neutron intensity is greater than the gamma intensity, but with increasing distance the neutron-gamma ratio decreases. Ultimately, the neutron component of the initial radiation becomes negligible in comparison with the gamma component. The range for significant levels of initial radiation does not increase markedly with weapon yield and, as a result, the initial radiation becomes less of a hazard with increasing yield. With larger weapons, above 50 kt (200 TJ), blast and thermal effects are so much greater in importance that prompt radiation effects can be ignored. The neutron radiation serves to transmute the surrounding matter, often rendering it radioactive. When added to the dust of radioactive material released by the bomb, a large amount of radioactive material is released into the environment. This form of radioactive contamination is known as nuclear fallout and poses the primary risk of exposure to ionizing radiation for a large nuclear weapon. Details of nuclear weapon design also affect neutron emission: the gun-type assembly Little Boy leaked far more neutrons than the implosion-type 21 kt Fat Man because the light hydrogen nuclei (protons) predominating in the exploded TNT molecules (surrounding the core of Fat Man) slowed down neutrons very efficiently while the heavier iron atoms in the steel nose forging of Little Boy scattered neutrons without absorbing much neutron energy. It was found in early experimentation that normally most of the neutrons released in the cascading chain reaction of the fission bomb are absorbed by the bomb case. 
Building a bomb case of materials which transmitted rather than absorbed the neutrons could make the bomb more intensely lethal to humans from prompt neutron radiation. This is one of the features used in the development of the neutron bomb. Earthquake The seismic pressure waves created from an explosion may release energy within nearby plates or otherwise cause an earthquake event. An underground explosion concentrates this pressure wave, and a localized earthquake event is more probable. The first and fastest wave, equivalent to a normal earthquake's P wave, can inform the location of the test; the S wave and the Rayleigh wave follow. These can all be measured in most circumstances by seismic stations across the globe, and comparisons with actual earthquakes can be used to help determine estimated yield via differential analysis, by the modelling of the high-frequency (>4 Hz) teleseismic P wave amplitudes. However, theory does not suggest that a nuclear explosion of current yields could trigger fault rupture and cause a major quake at distances beyond a few tens of kilometers from the shot point. Summary of the effects The following table summarizes the most important effects of single nuclear explosions under ideal, clear skies, weather conditions. Tables like these are calculated from nuclear weapons effects scaling laws. Advanced computer modelling of real-world conditions and how they impact on the damage to modern urban areas has found that most scaling laws are too simplistic and tend to overestimate nuclear explosion effects. The scaling laws that were used to produce the table below assume (among other things) a perfectly level target area, no attenuating effects from urban terrain masking (e.g. skyscraper shadowing), and no enhancement effects from reflections and tunneling by city streets. As a point of comparison in the chart below, the most likely nuclear weapons to be used against countervalue city targets in a global nuclear war are in the sub-megaton range. Weapons of yields from 100 to 475 kilotons have become the most numerous in the US and Russian nuclear arsenals; for example, the warheads equipping the Russian Bulava submarine-launched ballistic missile (SLBM) have a yield of 150 kilotons. US examples are the W76 and W88 warheads, with the lower yield W76 being over twice as numerous as the W88 in the US nuclear arsenal. 1 For the direct radiation effects the slant range instead of the ground range is shown here because some effects are not given even at ground zero for some burst heights. If the effect occurs at ground zero the ground range can be derived from slant range and burst altitude (Pythagorean theorem). 2 "Acute radiation syndrome" corresponds here to a total dose of one gray, "lethal" to ten grays. This is only a rough estimate since biological conditions are neglected here. Further complicating matters, under global nuclear war scenarios with conditions similar to that during the Cold War, major strategically important cities like Moscow and Washington are likely to be hit numerous times from sub-megaton multiple independently targetable re-entry vehicles, in a cluster bomb or "cookie-cutter" configuration. It has been reported that during the height of the Cold War in the 1970s Moscow was targeted by up to 60 warheads. 
The reason that the cluster bomb concept is preferable in the targeting of cities is twofold: the first is that large singular warheads are much easier to neutralize as both tracking and successful interception by anti-ballistic missile systems than it is when several smaller incoming warheads are approaching. This strength in numbers advantage to lower yield warheads is further compounded by such warheads tending to move at higher incoming speeds, due to their smaller, more slender physics package size, assuming both nuclear weapon designs are the same (a design exception being the advanced W88). The second reason for this cluster bomb, or 'layering' (using repeated hits by accurate low yield weapons) is that this tactic along with limiting the risk of failure reduces individual bomb yields, and therefore reduces the possibility of any serious collateral damage to non-targeted nearby civilian areas, including that of neighboring countries. This concept was pioneered by Philip J. Dolan and others. Other phenomena Gamma rays from the nuclear processes preceding the true explosion may be partially responsible for the following fireball, as they may superheat nearby air and/or other material. The vast majority of the energy that goes on to form the fireball is in the soft X-ray region of the electromagnetic spectrum, with these X-rays being produced by the inelastic collisions of the high-speed fission and fusion products. It is these reaction products and not the gamma rays which contain most of the energy of the nuclear reactions in the form of kinetic energy. This kinetic energy of the fission and fusion fragments is converted into internal and then radiation energy by approximately following the process of blackbody radiation emitting in the soft X-ray region. As a result of numerous inelastic collisions, part of the kinetic energy of the fission fragments is converted into internal and radiation energy. Some of the electrons are removed entirely from the atoms, thus causing ionization. Others are raised to higher energy (or excited) states while still remaining attached to the nuclei. Within an extremely short time, perhaps a hundredth of a microsecond or so, the weapon residues consist essentially of completely and partially stripped (ionized) atoms, many of the latter being in excited states, together with the corresponding free electrons. The system then immediately emits electromagnetic (thermal) radiation, the nature of which is determined by the temperature. Since this is of the order of 107 degrees, most of the energy emitted within a microsecond or so is in the soft X-ray region. Because temperature depends on the average internal energy/heat of the particles in a certain volume, internal energy or heat is from kinetic energy. For an explosion in the atmosphere, the fireball quickly expands to maximum size and then begins to cool as it rises like a balloon through buoyancy in the surrounding air. As it does so, it takes on the flow pattern of a vortex ring with incandescent material in the vortex core as seen in certain photographs. This effect is known as a mushroom cloud. Sand will fuse into glass if it is close enough to the nuclear fireball to be drawn into it, and is thus heated to the necessary temperatures to do so; this is known as trinitite. At the explosion of nuclear bombs lightning discharges sometimes occur. Smoke trails are often seen in photographs of nuclear explosions. 
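The statement above that a fireball at a temperature of the order of 10^7 degrees radiates mostly in the soft X-ray region can be checked with Wien's displacement law; the following is a back-of-the-envelope illustration.

```python
# Wien's displacement law: peak wavelength of a blackbody at temperature T.
b = 2.898e-3             # Wien's constant, in m*K
T = 1.0e7                # fireball temperature of order 10^7 K, as stated above
peak_wavelength_m = b / T
print(peak_wavelength_m)  # ~ 2.9e-10 m, i.e. about 0.3 nm: soft X-rays
```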
These are not from the explosion; they are left by sounding rockets launched just prior to detonation. These trails allow observation of the blast's normally invisible shock wave in the moments following the explosion. The heat and airborne debris created by a nuclear explosion can cause rain; the debris is thought to do this by acting as cloud condensation nuclei. During the city firestorm which followed the Hiroshima explosion, drops of water were recorded to have been about the size of marbles. This was termed black rain and has served as the source of a book and film by the same name. Black rain is not unusual following large fires and is commonly produced by pyrocumulus clouds during large forest fires. The rain directly over Hiroshima on that day is said to have begun around 9 a.m. with it covering a wide area from the hypocenter to the northwest, raining heavily for one hour or more in some areas. The rain directly over the city may have carried neutron activated building material combustion products, but it did not carry any appreciable nuclear weapon debris or fallout, although this is generally to the contrary to what other less technical sources state. The "oily" black soot particles, are a characteristic of incomplete combustion in the city firestorm. The element einsteinium was discovered when analyzing nuclear fallout. A side-effect of the Pascal-B nuclear test during Operation Plumbbob may have resulted in the first man-made object launched on an Earth escape trajectory. The so-called "thunder well" effect from the underground explosion may have launched a metal cover plate into space at six times Earth's escape velocity, although the evidence remains subject to debate, due to aerodynamic heating likely disintegrating it before it could exit the atmosphere. Ignition of fusion in the environment Atmospheric ignition In 1942, there was speculation among the scientists developing the first nuclear weapons in the Manhattan Project that a sufficiently large nuclear explosion might ignite fusion reactions the Earth's atmosphere. Since the proposal of the CNO cycle in 1937, it was known that not only the hydrogen in water vapor, but the carbon, nitrogen, and oxygen nuclei in the atmosphere undergo exothermic fusion reactions to heavier nuclei; at stellar temperatures they behave as a fuel. The fear was that similar temperatures in the bomb's initial fireball might trigger the exothermic reactions ^{14}N \ + \ ^{1}H \rightarrow ^{15}O \ + \ \gamma or ^{14}N \ + \ ^{14}N \rightarrow ^{24}Mg \ + \ \alpha , sustaining itself until all the world's atmospheric nitrogen was consumed. Hans Bethe was assigned to study this hypothesis from the project's earliest days, and he eventually concluded that such a reaction could not sustain itself on a large scale due to cooling of the nuclear fireball through an inverse Compton effect. Richard Hamming was asked to make a similar calculation just before the first nuclear test, and he reached the same conclusion. Nevertheless, the notion has persisted as a rumor for many years and was the source of apocalyptic gallows humor at the Trinity test where Enrico Fermi took side bets on atmospheric ignition. Subsequent analysis shows that besides the cooling effect, the latter reaction, with a Gamow energy of 16.46 GeV, was unlikely to have occurred in even a single instance during the Trinity test, as the fireball core reached 1.01 × 1011 K, equivalent to the far lower thermal energy of 8.7 MeV. 
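The equivalence quoted above between a fireball core temperature of 1.01 × 10^11 K and a thermal energy of 8.7 MeV is simply the conversion E = k_B T, as the following one-line check shows.

```python
k_B = 8.617e-5        # Boltzmann constant, in eV per kelvin
T = 1.01e11           # fireball core temperature in K, as quoted above
print(k_B * T / 1e6)  # ~ 8.7 MeV, matching the figure in the text
```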
However, the possibility of fusion of hydrogen nuclei during the test, whose Gamow energies are on the order of 1 MeV, is not known. The first artificial initiation of a thermonuclear reaction is accepted to be the 1951 American nuclear test Greenhouse George. Oceanic ignition Fears of igniting the ocean's higher density of hydrogen, deuterium, or oxygen nuclei during American testing in the Pacific, remained a serious concern, especially as yields increased by orders of magnitude. These were raised from the first air burst over water and submerged tests in Operation Crossroads at Bikini Atoll, and continuing with the first full thermonuclear and megaton-level test of Ivy Mike. Survivability Survivability is highly dependent on factors such as if one is indoors or out, the size of the explosion, the proximity to the explosion, and to a lesser degree the direction of the wind carrying fallout. Death is highly likely and radiation poisoning is almost certain if one is caught in the open with no terrain or building masking effects within a radius of from a 1 megaton airburst, and the 50% chance of death from the blast extends out to ~ from the same 1 megaton atmospheric explosion. An example that highlights the variability in the real world and the effect of being indoors is Akiko Takakura. Despite the lethal radiation and blast zone extending well past her position at Hiroshima, Takakura survived the effects of a 16 kt atomic bomb at a distance of from the hypocenter, with only minor injuries, due mainly to her position in the lobby of the Bank of Japan, a reinforced concrete building, at the time. In contrast, the unknown person sitting outside, fully exposed, on the steps of the Sumitomo Bank, next door to the Bank of Japan, received lethal third-degree burns and was then likely killed by the blast, in that order, within two seconds. With medical attention, radiation exposure is survivable to 200 rems of acute dose exposure. If a group of people is exposed to a 50 to 59 rems acute (within 24 hours) radiation dose, none will get radiation sickness. If the group is exposed to 60 to 180 rems, 50% will become sick with radiation poisoning. If medically treated, all of the 60–180 rems group will survive. If the group is exposed to 200 to 450 rems, most if not all of the group will become sick; 50% will die within two to four weeks, even with medical attention. If the group is exposed to 460 to 600 rems, 100% of the group will get radiation poisoning, and 50% will die within one to three weeks. If the group is exposed to 600 to 1000 rems, 50% will die in one to three weeks. If the group is exposed to 1,000 to 5,000 rems, 100% of the group will die within 2 weeks. At 5,000 rems, 100% of the group will die within 2 days. Nuclear explosion impact on humans indoors Researchers from the University of Nicosia simulated, using high-order computational fluid dynamics, an atomic bomb explosion from a typical intercontinental ballistic missile and the resulting blast wave to see how it would affect people sheltering indoors. They found that the blast wave was enough in the moderate damage zone to topple some buildings and injure people caught outdoors. However, sturdier buildings, such as concrete structures, can remain standing. The team used advanced computer modelling to study how a nuclear blast wave speeds through a standing structure. 
Their simulated structure featured rooms, windows, doorways, and corridors and allowed them to calculate the speed of the air following the blast wave and determine the best and worst places to be. The study showed that high airspeeds remain a considerable hazard and can still result in severe injuries or even fatalities. Furthermore, simply being in a sturdy building is not enough to avoid risk. The tight spaces can increase airspeed, and the involvement of the blast wave causes air to reflect off walls and bend around corners. In the worst cases, this can produce a force equivalent to multiple times a human's body weight. The most dangerous critical indoor locations to avoid are windows, corridors, and doors. The study received considerable interest from the international press. See also Bomb pulse Effects of nuclear explosions on human health Lists of nuclear disasters and radioactive incidents List of nuclear weapons tests Nuclear warfare Nuclear holocaust Nuclear terrorism Peaceful nuclear explosion Rope trick effect Underwater explosion Visual depictions of nuclear explosions in fiction References External links Nuclear Weapon Testing Effects – Comprehensive video archive Underground Bomb Shelters The Federation of American Scientists provide solid information on weapons of mass destruction, including nuclear weapons and their effects The Nuclear War Survival Skills is a public domain text and is an excellent source on how to survive a nuclear attack. Ground Zero: A Javascript simulation of the effects of a nuclear explosion in a city Oklahoma Geological Survey Nuclear Explosion Catalog lists 2,199 explosions with their date, country, location, yield, etc. Australian Government database of all nuclear explosions Nuclear Weapon Archive from Carey Sublette (NWA) is a reliable source of information and has links to other sources. NWA repository of blast models mainly used for the effects table (especially DOS programs BLAST and WE) HYDESim: High-Yield Detonation Effects Simulator – Mashup of Google Maps and Javascript to calculate blast effects. NUKEMAP – Google Maps/Javascript effects mapper, which includes fireball size, blast pressure, ionizing radiation, and thermal radiation as well as qualitative descriptions. Nuclear Weapons Frequently Asked Questions Atomic Forum Samuel Glasstone and Philip J. Dolan, The Effects of Nuclear Weapons, Third Edition, United States Department of Defense & Energy Research and Development Administration Available Online Nuclear Emergency and Radiation Resources Outrider believes in the power of an informed, engaged public. Nuclear weapons Nuclear physics Articles containing video clips sv:Kärnexplosion
Effects of nuclear explosions
Physics
7,410
14,715,254
https://en.wikipedia.org/wiki/Thayer%E2%80%93Martin%20agar
Thayer–Martin agar (or Thayer–Martin medium, or VPN agar) is a Mueller–Hinton agar with 5% chocolate sheep blood and antibiotics. It is used for culturing and primarily isolating pathogenic Neisseria bacteria, including Neisseria gonorrhoeae and Neisseria meningitidis, as the medium inhibits the growth of most other microorganisms. When growing Neisseria meningitidis, one usually starts with a normally sterile body fluid (blood or CSF), so a plain chocolate agar is used. Thayer–Martin agar was initially developed in 1964, with an improved formulation published in 1966. Components It usually contains the following combination of antibiotics, which make up the VPN acronym: Vancomycin, which is able to kill most Gram-positive organisms, although some Gram-positive organisms such as Lactobacillus and Pediococcus are intrinsically resistant Polymyxin, also known as colistin, which is added to kill most Gram-negative organisms except Neisseria, although some other Gram-negative organisms such as Legionella are also resistant Nystatin, which can kill most fungi Trimethoprim inhibits swarming of Proteus spp Clinical implications A negative culture on Thayer–Martin in a patient exhibiting symptoms of pelvic inflammatory disease most likely indicates an infection with Chlamydia trachomatis. References Microbiological media
Thayer–Martin agar
Biology
299
45,625,841
https://en.wikipedia.org/wiki/W%20Coronae%20Borealis
W Coronae Borealis (W CrB) is a Mira-type long period variable star in the constellation Corona Borealis. Its apparent magnitude varies between 7.8 and 14.3 over a period of 238 days. References Corona Borealis Mira variables Coronae Borealis, W M-type giants 146560 Emission-line stars
W Coronae Borealis
Astronomy
70
51,809,009
https://en.wikipedia.org/wiki/17%CE%B2-Dihydroequilin
17β-Dihydroequilin is a naturally occurring estrogen sex hormone found in horses as well as a medication. As the C3 sulfate ester sodium salt, it is a minor constituent (1.7%) of conjugated estrogens (CEEs; brand name Premarin). However, as equilin, with equilin sulfate being a major component of CEEs, is transformed into 17β-dihydroequilin in the body, analogously to the conversion of estrone into estradiol, 17β-dihydroequilin is, along with estradiol, the most important estrogen responsible for the effects of CEEs. Pharmacology Pharmacodynamics 17β-Dihydroequilin is an estrogen, or an agonist of the estrogen receptors (ERs), the ERα and ERβ. In terms of relative binding affinity for the ERs, 17β-dihydroequilin has about 113% and 108% of that of estradiol for the ERα and ERβ, respectively. 17β-Dihydroequilin has about 83% of the relative potency of CEEs in the vagina and 200% of the relative potency of CEEs in the uterus. Of the equine estrogens, it shows the highest estrogenic activity and greatest estrogenic potency. Like CEEs as a whole, 17β-dihydroequilin has disproportionate effects in certain tissues such as the liver and uterus. Equilin, the second major component of conjugated estrogens after estrone, is reversibly transformed into 17β-dihydroequilin analogously to the transformation of estrone into estradiol. However, whereas the balance of mutual interconversion of estrone and estradiol is largely shifted in the direction of estrone, it is nearly equal in the case of equilin and 17β-dihydroequilin. As such, although 17β-dihydroequilin is only a minor constituent of CEEs, it is, along with estradiol, the most important estrogen relevant to the estrogenic activity of the medication. Pharmacokinetics 17β-Dihydroequilin has about 30% of the relative binding affinity of testosterone for sex hormone-binding globulin (SHBG), relative to 50% for estradiol. The metabolic clearance rate of 17β-dihydroequilin is 1,250 L/day/m2, relative to 580 L/day/m2 for estradiol. Chemistry 17β-Dihydroequilin, or simply β-dihydroequilin, also known as δ7-17β-estradiol or as 7-dehydro-17β-estradiol, as well as estra-1,3,5(10),7-tetraen-3,17β-diol, is a naturally occurring estrane steroid and an analogue of estradiol. In terms of chemical structure and pharmacology, equilin (δ7-estrone) is to 17β-dihydroequilin as estrone is to estradiol. References Secondary alcohols Estranes Estrogens Human drug metabolites
17β-Dihydroequilin
Chemistry
715
429,296
https://en.wikipedia.org/wiki/Hausdorff%20distance
In mathematics, the Hausdorff distance, or Hausdorff metric, also called Pompeiu–Hausdorff distance, measures how far two subsets of a metric space are from each other. It turns the set of non-empty compact subsets of a metric space into a metric space in its own right. It is named after Felix Hausdorff and Dimitrie Pompeiu. Informally, two sets are close in the Hausdorff distance if every point of either set is close to some point of the other set. The Hausdorff distance is the longest distance someone can be forced to travel by an adversary who chooses a point in one of the two sets, from where they then must travel to the other set. In other words, it is the greatest of all the distances from a point in one set to the closest point in the other set. This distance was first introduced by Hausdorff in his book Grundzüge der Mengenlehre, first published in 1914, although a very close relative appeared in the doctoral thesis of Maurice Fréchet in 1906, in his study of the space of all continuous curves from . Definition Let be a metric space. For each pair of non-empty subsets and , the Hausdorff distance between and is defined as where represents the supremum operator, the infimum operator, and where quantifies the distance from a point to the subset . An equivalent definition is as follows. For each set let which is the set of all points within of the set (sometimes called the -fattening of or a generalized ball of radius around ). Then, the Hausdorff distance between and is defined as Equivalently, where is the smallest distance from the point to the set . Remark It is not true for arbitrary subsets that implies For instance, consider the metric space of the real numbers with the usual metric induced by the absolute value, Take Then . However because , but . But it is true that and ; in particular it is true if are closed. Properties In general, may be infinite. If both X and Y are bounded, then is guaranteed to be finite. if and only if X and Y have the same closure. For every point x of M and any non-empty sets Y, Z of M: d(x,Y) ≤ d(x,Z) + dH(Y,Z), where d(x,Y) is the distance between the point x and the closest point in the set Y. |diameter(Y)-diameter(X)| ≤ 2 dH(X,Y). If the intersection X ∩ Y has a non-empty interior, then there exists a constant r > 0, such that every set X′ whose Hausdorff distance from X is less than r also intersects Y. On the set of all subsets of M, dH yields an extended pseudometric. On the set F(M) of all non-empty compact subsets of M, dH is a metric. If M is complete, then so is F(M). If M is compact, then so is F(M). The topology of F(M) depends only on the topology of M, not on the metric d. Motivation The definition of the Hausdorff distance can be derived by a series of natural extensions of the distance function in the underlying metric space M, as follows: Define a distance function between any point x of M and any non-empty set Y of M by: For example, d(1, {3,6}) = 2 and d(7, {3,6}) = 1. Define a (not-necessarily-symmetric) "distance" function between any two non-empty sets X and Y of M by: For example, If X and Y are compact then d(X,Y) will be finite; d(X,X)=0; and d inherits the triangle inequality property from the distance function in M. As it stands, d(X,Y) is not a metric because d(X,Y) is not always symmetric, and does not imply that (It does imply that ). For example, , but . 
However, we can create a metric by defining the Hausdorff distance to be: Applications In computer vision, the Hausdorff distance can be used to find a given template in an arbitrary target image. The template and image are often pre-processed via an edge detector giving a binary image. Next, each 1 (activated) point in the binary image of the template is treated as a point in a set, the "shape" of the template. Similarly, an area of the binary target image is treated as a set of points. The algorithm then tries to minimize the Hausdorff distance between the template and some area of the target image. The area in the target image with the minimal Hausdorff distance to the template, can be considered the best candidate for locating the template in the target. In computer graphics the Hausdorff distance is used to measure the difference between two different representations of the same 3D object particularly when generating level of detail for efficient display of complex 3D models. If is the surface of Earth, and is the land-surface of Earth, then by finding the point Nemo, we see is around 2,704.8 km. Related concepts A measure for the dissimilarity of two shapes is given by Hausdorff distance up to isometry, denoted DH. Namely, let X and Y be two compact figures in a metric space M (usually a Euclidean space); then DH(X,Y) is the infimum of dH(I(X),Y) among all isometries I of the metric space M to itself. This distance measures how far the shapes X and Y are from being isometric. The Gromov–Hausdorff convergence is a related idea: measuring the distance of two metric spaces M and N by taking the infimum of among all isometric embeddings and into some common metric space L. See also Wijsman convergence Kuratowski convergence Hemicontinuity Fréchet distance Hypertopology References External links Hausdorff distance between convex polygons. Using MeshLab to measure difference between two surfaces A short tutorial on how to compute and visualize the Hausdorff distance between two triangulated 3D surfaces using the open source tool MeshLab. MATLAB code for Hausdorff distance: Distance Metric geometry
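A minimal sketch of the directed and symmetric Hausdorff distances for finite point sets on the real line, following the sup–inf definition given above (for finite sets the suprema and infima become max and min); it reproduces the point-to-set values d(1, {3,6}) = 2 and d(7, {3,6}) = 1 from the Motivation section.

```python
def d_point_set(x, Y):
    """Distance from a point x to a finite set Y: min over y of |x - y|."""
    return min(abs(x - y) for y in Y)

def d_directed(X, Y):
    """Directed (non-symmetric) distance: max over x in X of d(x, Y)."""
    return max(d_point_set(x, Y) for x in X)

def hausdorff(X, Y):
    """Hausdorff distance: the larger of the two directed distances."""
    return max(d_directed(X, Y), d_directed(Y, X))

print(d_point_set(1, {3, 6}))   # 2, as in the example above
print(d_point_set(7, {3, 6}))   # 1, as in the example above
print(d_directed({1}, {3, 6}), d_directed({3, 6}, {1}))  # 2 5 (not symmetric)
print(hausdorff({1}, {3, 6}))                            # 5
```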
Hausdorff distance
Physics,Mathematics
1,329
51,488,917
https://en.wikipedia.org/wiki/Glycoside-pentoside-hexuronide%3Acation%20symporter%20family
The Glycoside-Pentoside-Hexuronide (GPH):Cation Symporter Family is part of the major facilitator superfamily and catalyzes uptake of sugars (mostly, but not exclusively, glycosides) in symport with a monovalent cation (H+ or Na+). The various members of the family have been reported to use Na+, H+ or Li+, Na+ or Li+, or all three cations as the symported cation. Structure Proteins of the GPH family are generally about 500 amino acids in length, although the Gram-positive bacterial lactose permeases are larger, due to a C-terminal hydrophilic domain that is involved in regulation by the phosphotransferase system. All of these proteins possess twelve putative transmembrane α-helical spanners. Homology Homologues are found mainly in bacteria, but the family also includes the distantly related sucrose:H+ symporters of plants and a yeast maltose/sucrose:H+ symporter of Schizosaccharomyces pombe. This yeast protein is about 24% identical to the plant sucrose:H+ symporters and is more distantly related to the bacterial members of the GPH family. Limited sequence similarity of some of these proteins with members of the major facilitator superfamily has been observed, and their 3D structures are clearly similar. Transport Reaction The generalized transport reaction catalyzed by the GPH:cation symporter family is: Sugar (out) + [H+ or Na+] (out) → Sugar (in) + [H+ or Na+] (in) References Protein families Membrane proteins Transmembrane proteins Transmembrane transporters Transport proteins Integral membrane proteins
Glycoside-pentoside-hexuronide:cation symporter family
Biology
389
24,633,221
https://en.wikipedia.org/wiki/Fomes%20hemitephrus
Fomes hemitephrus is a bracket fungus in the family Polyporaceae. First named Polyporus hemitephrus by English naturalist Miles Joseph Berkeley in 1855, it was given its current name by the English mycologist Mordecai Cubitt Cooke in 1885. The species is found in Australia and New Zealand, and is one of the most common polypores in those countries, causing a white rot on several tree species. Historically, Fomes hemitephrus has been placed in several different genera, including Fomitopsis, Heterobasidion, and Trametes. References Polyporaceae Fungi described in 1855 Fungi of Australia Fungi of New Zealand Taxa named by Miles Joseph Berkeley Fungus species
Fomes hemitephrus
Biology
150
24,488,539
https://en.wikipedia.org/wiki/Aviation%20communication
Aviation communication refers to the conversing of two or more aircraft. Aircraft are constructed in such a way that make it very difficult to see beyond what is directly in front of them. As safety is a primary focus in aviation, communication methods such as wireless radio are an effective way for aircraft to communicate with the necessary personnel. Aviation is an international industry and as a result involves multiple languages. The International Civil Aviation Organization (ICAO) deemed English the official language of aviation. The industry considers that some pilots may not be fluent English speakers and as a result pilots are obligated to participate in an English proficiency test. Background Aviation communication is the means by which aircraft crews connect with other aircraft and people on the ground to relay information. Aviation communication is a crucial component pertaining to the successful functionality of aircraft movement both on the ground and in the air. Increased communication reduces the risk of an accident. During the early stages of aviation, it was assumed that skies were too big and empty that it was impossible that two planes would collide. In 1956 two planes famously crashed over the Grand Canyon, which sparked the creation of the Federal Aviation Administration (FAA). Aviation was roaring during the Jet Age and as a result, communication technologies needed to be developed. This was initially seen as a very difficult task: ground controls used visual aids to provide signals to pilots in the air. With the advent of portable radios small enough to be placed in planes, pilots were able to communicate with people on the ground. With later developments, pilots were then able to converse air-to-ground and air-to-air. Today, aviation communication relies heavily on the use of many systems. Planes are outfitted with the newest radio and GPS systems, as well as Internet and video capabilities. English is the main language used by the aviation industry; the use of aviation English is regulated by the International Civil Aviation Organization (ICAO). Early systems Flight was considered a foreign concept until the Wright Brothers successfully completed the world's first human flight in 1903. The industry grew rapidly and ground crews initially relied on coloured paddles, signal flares, hand signs, and other visual aids to communicate with incoming and outgoing aircraft. Although these methods were effective for ground crews, they offered no way for pilots to communicate back. As wireless telegraphy technologies developed alongside the growth of aviation during the first decade of the twentieth century, wireless telegraph systems were used to send messages in Morse code, first from ground-to-air and later air-to-ground. With this technology, planes were able to call in accurate artillery fire and act as forward observers in warfare. In 1911, wireless telegraphy was put into operational use in the Italo-Turkish War. In 1912, the Royal Flying Corps had begun experimenting with "wireless telegraphy" in aircraft. Lieutenant B.T James was a leading pioneer of wireless radio in aircraft. In the spring of 1913, James had begun to experiment with radios in a B.E.2A. James managed to successfully increase the efficiency of wireless radio before he was shot down and killed by anti-aircraft fire on July 13, 1915. Nonetheless, wireless communication systems in aircraft remained experimental and would take years to successfully develop a practical prototype. 
The early radios were heavy in weight and were unreliable; additionally, ground forces rarely used radio because signals were easily intercepted and targeted by opposing forces. At the beginning of World War I, aircraft were not typically equipped with wireless equipment. Instead, soldiers used large panel cut outs to distinguish friendly forces. These cut outs could also be used as a directional device to help pilots navigate back to friendly and familiar airfields. In April 1915, Captain J.M. Furnival was the first person to hear a voice from the ground from Major Prince who said, "If you can hear me now, it will be the first time speech has ever been communicated to an aeroplane in flight." In June 1915, the world's first air-to-ground voice transmission took place at Brooklands, United Kingdom, over about 20 miles. Ground-to-air was initially by Morse code, but it is believed 2-way voice communications were available and installed by July 1915. By early 1916, the Marconi Company in Britain started production of air-to-ground radio transmitters/receivers which were used in the war over France. In 1917, AT&T invented the first American air-to-ground radio transmitter. They tested this device at Langley Field in Virginia and found it was a viable technology. In May 1917, General George Squier of the U.S. Army Signal Corps contacted AT&T to develop an air-to-ground radio with a range of 2,000 yards. By July 4 of that same year, AT&T technicians achieved two-way communication between pilots and ground personnel. This allowed ground personnel to communicate directly with pilots using their voices instead of Morse code. Though few of these devices saw service in the war, they proved this was a viable and valuable technology worthy of refinement and advancement. Interwar Following World War I new technology was developed to increase the range and performance of the radios being used to communicate with planes in the air. In December 1919 a year after the end of World War I, Hugh Trenchard, 1st Viscount Trenchard, a senior officer in the Royal Flying Corps (RFC) later Royal Air Force (RAF), produced a report on the permanent organisation and operations of the RAF in peacetime in which he argued that if the air force officer was not to be a chauffeur, and nothing more, then navigation, meteorology, photography and wireless were necessities. It was not until 1930 that airborne radios were reliable enough and had enough power to make them effective; and it was this year that the International Commission for Aerial Navigation agreed that all aircraft carrying 10 or more passengers should carry wireless equipment. Prior to this, only military aircraft designated for scout missions required radios. The operating distance of radios increased much slower than the distance planes were able to travel. After an original two mile range for the two-way radio systems tested by 1917 had extended to ranges of an average of 20 miles, which remained a practical limit for medium sized aircraft. In terms of air traffic control, this resulted in a plane's messages having to bounce from airfield to airfield in order to get to its intended recipient. As the speed of planes increased, this resulted in a plane reaching its destination before the message announcing its departure. On 15 November 1938, the Army Airways Communications System (AACS) was established. 
This was a point-to-point communications system used by the US Army Air Corps, that allowed army air fields to remain in contact with planes throughout their entire flight. It could also be used to disseminate weather reports and orders to military aircraft and act as an air traffic control for arrivals and departures at military airfields. As technology increased, systems such as the AACS expanded and spread across the globe as other militaries and civilian services developed their own systems of air control. World War II The development of radar in the mid-1930s proved a great advance in air-to-ground communication. Radar could be used to track planes in the air and determine distance, direction, speed and even type of aircraft. This allowed for better air traffic control as well as navigation aides for pilots. Radar also proved to be a valuable tool in targeting for bombers. Radar stations on the coast of Britain could aim two radar beams from separate locations on the coast towards Germany. By aligning the two radar beams to intersect over the desired target, a town or factory for example, an aircraft could then follow one radar signal until it intersected with the other where it would then know to drop bombs. The Royal Air Force used the R1155/T1154 receiver/transmitter combination in most of its larger aircraft, particularly the Avro Lancaster and Short Sunderland. Single seat aircraft such as the Spitfire and Hurricane were equipped mostly with the TR1143 set. Other systems employed were Eureka and the S-Phone, which enabled Special Operations Executive agents working behind enemy lines to communicate with friendly aircraft and coordinate landings and the dropping of agents and supplies. Error Communication error can occur between pilots and between pilots and air traffic controllers due to inadequate information, unclear pronunciation or comprehensive misunderstanding. The more information needing transfer, the more chance for error. Unclear pronunciation could happen with non-English speakers. Sometimes lack of self-confidence and motivation affects expression in communication. Misunderstanding happens with both native speakers and non-native speakers through communication, so a standard aviation language is important to improve this situation. Sources of communication error come from: phonology (speech rate, stress, intonation, pauses), syntax (language word patterns, sentence structure), semantics, and pragmatics (language in context). Even though English is the international aviation language, native English speakers still play a role in misunderstanding and situational awareness. Both the ICAO and the Federal Aviation Administration use alternative phrases, which is confusing to both native and non-native English speakers. The biggest problem regarding non-native English speakers' transmissions is speech rate. In order to understand alternative and unfamiliar accents, people's rate of comprehension and response slows down. Accents also affect transmissions because of the different pronunciations across languages. Some of the earlier miscommunication issues included the limitation of language-based warning systems in aircraft and insufficient English proficiency. According to US department of transportation's report, errors between pilots and controllers include: Read-back/hear-back errors - the pilot reads back the clearance incorrectly and the controller fails to correct the error - accounted for 47% of the errors found in this analysis. No pilot read-back. 
A lack of a pilot read-back contributed to 25% of the errors found in this analysis. Hear-back Errors Type H - the controller fails to notice his or her own error in the pilot's correct read-back or fails to correct critical erroneous information in a pilot's statement of intent - accounted for 18% of the errors found in this analysis. Generally, miscommunication is caused by mis-hearing by the pilots for 28%, pilot not responding for 20%, controller mis-hearing for 15% and 10% that controllers do not respond. Also, a professional research shows that 30% of the information will be lost during the miscommunication. Moreover, miscommunication exists in personnel with different background of linguistics is shown to be one of the major problem in miscommunication to cause aviation accidents. Avoiding or minimizing miscommunication could be achieved by standardized debriefing or an interview process, and following a checklist to supplement written data. English The International Civil Aviation Organization established English as the international aviation language in 1951 to improve consistency, accuracy, and effectiveness of pilot - air traffic control communication. It requires that all pilots on international flights and air traffic controllers serving international airports and routes must be able to communicate in English effectively, as well as in their native language. The goal was to achieve standards that would eliminate communication error, language, and comprehension difficulties, all of which have been a major cause of operational airspace incidents. Miscommunication between pilots and air traffic control is a prominent factor in fatal airplane crashes, airspace incidents, runway incursion, and mid-air collisions. Aviation English is the highly specialized language and sequences used by pilots, air traffic control, and other aviation personnel and it focuses on a particular pronunciation, vocabulary, grammatical structure, and discourse styles that are used in specific aviation-related contexts. The language used by pilots and air traffic controllers during radiotelephony communication can be categorized into two types: standard phraseology, and plain language repertoire. Standard phraseology is the specialized phrasing commonly used by the aviation community to effectively communicate, and plain language is a more normal language used in everyday life. Many non-native English speaking pilots and air traffic controllers learn English during their flight training and use it in a highly practical level while safely operating an aircraft and maintaining the safety of airspace, which can be highly stressful. Language proficiency requirements ICAO also established the Language Proficiency Requirements to try to rectify multiple issues regarding accents, terminology, and interpretation in communication. The intention of the LPRs is to "ensure that the language proficiency of pilots and air traffic controllers is sufficient to reduce miscommunication as much as possible and to allow pilots and controllers to recognize and solve potential miscommunication when it does occur" and "that all speakers have sufficient language proficiency to handle non-routine situations." The structure of the LPR has six levels, pronunciation, structure, vocabulary, fluency, comprehension, and interactions. The implemented universal aviation English proficiency scale ranged from Level 1 to Level 6. 
Beginning in March 2008, ICAO set out the requirement that all pilots flying international routes and air traffic control serving international airports and routes must be a Level 4 or above and will be continually reassessed every three years. The criteria to achieve Level 4 are as follows: Pronunciation: A dialect and/or accent intelligible to aeronautical community. Structure: Relevant grammatical structures and sentence patterns determined by language functions appropriate to the task. Vocabulary: Vocabulary range and accuracy used sufficiently to communicate effectively. Fluency: Produces stretches of language at an appropriate tempo. Comprehension: Comprehension accurate in common, concrete, and work-related topics and the accent used is sufficiently intelligible for the international community. Interactions: Responses are immediate, appropriate, and informative. Non-English speakers English is the aviation language used by ICAO. Usually, human factors that affect communications include two aspects: direct, meaning the error caused by the language itself, which is the problem for non English speakers, and also indirect, with the gender, age, and experience impacting the communication in aviation. Accent and dialect are significant problems in aviation communication. These may cause misunderstandings and result in the wrong information being conveyed. Command of speech structures like grammar and vocabulary can also cause problems. During the communication through English for non-English speakers, gender and race may affect ability to communicate with the second language which is an indirect impact on communication. Intonation due to signal limitations, lack of function words, standard phraseology and rapid speech rate also plague many non English speakers. As a result, both pilots and ATCs need to have enough English ability to accomplish their tasks. Through education to help improve aviation English, participants need not only focus on the textbook, but need experience in an actual environment such as lab experience to help speakers to improve their English fluency and avoid misunderstanding which helps non-English speakers to communicate normally. See also Air navigation Signal square Ground Air Emergency Code Index of aviation articles References Telecommunications Wireless
Aviation communication
Technology,Engineering
2,984
11,250,389
https://en.wikipedia.org/wiki/NGC%203344
NGC 3344 is a relatively isolated barred spiral galaxy around half the size of the Milky Way located 22.5 million light years away in the constellation Leo Minor. This galaxy belongs to the group known as the Leo spur, which is a branch of the Virgo Supercluster. NGC 3344 has the morphological classification (R)SAB(r)bc, which indicates it is a weakly barred spiral galaxy that exhibits rings and moderate to loosely wound spiral arms. There is both an inner and outer ring, with the prominent arms radiating outward from the inner ring and the slightly elliptical bar being situated inside. At the center of the bar is an HII nucleus with an angular diameter of about 3″. NGC 3344 hosted supernova SN 2012fh, which was shown to likely be a Type Ib or Type Ic. References External links Galex image of NGC 3344 Intermediate spiral galaxies Leo Minor 3344 05840 31968 17850406
NGC 3344
Astronomy
196
5,837,036
https://en.wikipedia.org/wiki/Potential%20vorticity
In fluid mechanics, potential vorticity (PV) is a quantity which is proportional to the dot product of vorticity and stratification. This quantity, following a parcel of air or water, can only be changed by diabatic or frictional processes. It is a useful concept for understanding the generation of vorticity in cyclogenesis (the birth and development of a cyclone), especially along the polar front, and in analyzing flow in the ocean. Potential vorticity (PV) is seen as one of the important theoretical successes of modern meteorology. It is a simplified approach for understanding fluid motions in a rotating system such as the Earth's atmosphere and ocean. Its development traces back to the circulation theorem by Bjerknes in 1898, which is a specialized form of Kelvin's circulation theorem. Starting from Hoskins et al., 1985, PV has been more commonly used in operational weather diagnosis such as tracing dynamics of air parcels and inverting for the full flow field. Even after detailed numerical weather forecasts on finer scales were made possible by increases in computational power, the PV view is still used in academia and routine weather forecasts, shedding light on the synoptic scale features for forecasters and researchers. Baroclinic instability requires the presence of a potential vorticity gradient along which waves amplify during cyclogenesis. Bjerknes circulation theorem Vilhelm Bjerknes generalized Helmholtz's vorticity equation (1858) and Kelvin's circulation theorem (1869) to inviscid, geostrophic, and baroclinic fluids, i.e., fluids of varying density in a rotational frame which has a constant angular speed. If we define circulation as the integral of the tangent component of velocity around a closed fluid loop and take the integral of a closed chain of fluid parcels, we obtain (1) where is the time derivative in the rotational frame (not inertial frame), is the relative circulation, is projection of the area surrounded by the fluid loop on the equatorial plane, is density, is pressure, and is the frame's angular speed. With Stokes' theorem, the first term on the right-hand-side can be rewritten as (2) which states that the rate of the change of the circulation is governed by the variation of density in pressure coordinates and the equatorial projection of its area, corresponding to the first and second terms on the right hand side. The first term is also called the "solenoid term". Under the condition of a barotropic fluid with a constant projection area , the Bjerknes circulation theorem reduces to Kelvin's theorem. However, in the context of atmospheric dynamics, such conditions are not a good approximation: if the fluid circuit moves from the equatorial region to the extratropics, is not conserved. Furthermore, the complex geometry of the material circuit approach is not ideal for making an argument about fluid motions. Rossby's shallow water PV Carl Rossby proposed in 1939 that, instead of the full three-dimensional vorticity vector, the local vertical component of the absolute vorticity is the most important component for large-scale atmospheric flow, and that the large-scale structure of a two-dimensional non-divergent barotropic flow can be modeled by assuming that is conserved. His later paper in 1940 relaxed this theory from 2D flow to quasi-2D shallow water equations on a beta plane. 
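In conventional textbook notation, the circulation theorem referred to as equation (1), and the Stokes-theorem rewriting of its solenoid term (equation (2)), read as follows; this is the standard form and may differ slightly in notation from the original equations. \frac{dC}{dt} = -\oint \frac{dp}{\rho} - 2\Omega \frac{dA_e}{dt}, \qquad -\oint \frac{dp}{\rho} = \iint \frac{\nabla \rho \times \nabla p}{\rho^{2}} \cdot d\mathbf{A} Here C is the relative circulation, A_e the projection of the enclosed area onto the equatorial plane, \rho the density, p the pressure and \Omega the angular speed of the rotating frame.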
In this system, the atmosphere is separated into several incompressible layers stacked upon each other, and the vertical velocity can be deduced from integrating the convergence of horizontal flow. For a one-layer shallow water system without external forces or diabatic heating, Rossby showed that , (3) where is the relative vorticity, is the layer depth, and is the Coriolis parameter. The conserved quantity, in parenthesis in equation (3), was later named the shallow water potential vorticity. For an atmosphere with multiple layers, with each layer having constant potential temperature, the above equation takes the form (4) in which is the relative vorticity on an isentropic surface—a surface of constant potential temperature, and is a measure of the weight of unit cross-section of an individual air column inside the layer. Interpretation Equation (3) is the atmospheric equivalent to the conservation of angular momentum. For example, a spinning ice skater with her arms spread out laterally can accelerate her rate of spin by contracting her arms. Similarly, when a vortex of air is broadened, it in turn spins more slowly. When the air converges horizontally, the air speed increases to maintain potential vorticity, and the vertical extent increases to conserve mass. On the other hand, divergence causes the vortex to spread, slowing down the rate of spin. Ertel's potential vorticity Hans Ertel generalized Rossby's work via an independent paper published in 1942. By identifying a conserved quantity following the motion of an air parcel, it can be proved that a certain quantity called the Ertel potential vorticity is also conserved for an idealized continuous fluid. We look at the momentum equation and the mass continuity equation of an idealized compressible fluid in Cartesian coordinates: (5) (6) where is the geopotential height. Writing the absolute vorticity as , as , and then take the curl of the full momentum equation (5), we have (7) Consider to be a hydrodynamical invariant, that is, equals to zero following the fluid motion in question. Scalar multiplication of equation (7) by , and note that , we have (8) The second term on the left-hand side of equation (8) is equal to , in which the second term is zero. From the triple vector product formula, we have (9) where the second row is due to the fact that is conserved following the motion, . Substituting equation (9) into equation (8) above, (10) Combining the first, second, and fourth term in equation (10) can yield . Dividing by and using a variant form of mass continuity equation,, equation (10) gives (11) If the invariant is only a function of pressure and density , then its gradient is perpendicular to the cross product of and , which means that the right-hand side of equation (11) is equal to zero. Specifically for the atmosphere, potential temperature is chosen as the invariant for frictionless and adiabatic motions. Therefore, the conservation law of Ertel's potential vorticity is given by (12) the potential vorticity is defined as (13) where is the fluid density, is the absolute vorticity and is the gradient of potential temperature. It can be shown through a combination of the first law of thermodynamics and momentum conservation that the potential vorticity can only be changed by diabatic heating (such as latent heat released from condensation) or frictional processes. If the atmosphere is stably stratified so that the potential temperature increases monotonically with height, can be used as a vertical coordinate instead of . 
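For concreteness, the conserved quantities described above take the following standard forms: the shallow-water potential vorticity of equation (3), and Ertel's conservation law and potential vorticity definition of equations (12) and (13). \frac{D}{Dt}\left(\frac{\zeta + f}{h}\right) = 0, \qquad \frac{D}{Dt}\left(\frac{\boldsymbol{\omega}_a \cdot \nabla \theta}{\rho}\right) = 0, \qquad \mathrm{PV} \equiv \frac{\boldsymbol{\omega}_a \cdot \nabla \theta}{\rho} Here \zeta is the relative vorticity, f the Coriolis parameter, h the layer depth, \boldsymbol{\omega}_a the absolute vorticity vector, \theta the potential temperature and \rho the density.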
In the coordinate system, "density" is defined as . Then, if we start the derivation from the horizontal momentum equation in isentropic coordinates, Ertel PV takes a much simpler form (14) where is the local vertical vector of unit length and is the 3-dimensional gradient operator in isentropic coordinates. It can be seen that this form of potential vorticity is just the continuous form of Rossby's isentropic multi-layer PV in equation (4). Interpretation The Ertel PV conservation theorem, equation (12), states that for a dry atmosphere, if an air parcel conserves its potential temperature, its potential vorticity is also conserved following its full three-dimensional motions. In other words, in adiabatic motion, air parcels conserve Ertel PV on an isentropic surface. Remarkably, this quantity can serve as a Lagrangian tracer that links the wind and temperature fields. Using the Ertel PV conservation theorem has led to various advances in understanding the general circulation. One of them was "tropopause folding" process described in Reed et al., (1950). For the upper-troposphere and stratosphere, air parcels follow adiabatic movements during a synoptic period of time. In the extratropical region, isentropic surfaces in the stratosphere can penetrate into the tropopause, and thus air parcels can move between stratosphere and troposphere, although the strong gradient of PV near the tropopause usually prevents this motion. However, in frontal region near jet streaks, which is a concentrated region within a jet stream where the wind speeds are the strongest, the PV contour can extend substantially downward into the troposphere, which is similar to the isentropic surfaces. Therefore, stratospheric air can be advected, following both constant PV and isentropic surfaces, downwards deep into the troposphere. The use of PV maps was also proved to be accurate in distinguishing air parcels of recent stratospheric origin even under sub-synoptic-scale disturbances. (An illustration can be found in Holton, 2004, figure 6.4) The Ertel PV also acts as a flow tracer in the ocean, and can be used to explain how a range of mountains, such as the Andes, can make the upper westerly winds swerve towards the equator and back. Maps depicting Ertel PV are usually used In meteorological analysis in which the potential vorticity unit (PVU) defined as . Quasi-geostrophic PV One of the simplest but nevertheless insightful balancing conditions is in the form of quasi-geostrophic equations. This approximation basically states that for three-dimensional atmospheric motions that are nearly hydrostatic and geostrophic, their geostrophic part can be determined approximately by the pressure field, whereas the ageostrophic part governs the evolution of the geostrophic flow. The potential vorticity in the quasi-geostrophic limit (QGPV) was first formulated by Charney and Stern in 1960. Similar to Chapter 6.3 in Holton 2004, we start from horizontal momentum (15), mass continuity (16), hydrostatic (17), and thermodynamic (18) equations on a beta plane, while assuming that the flow is inviscid and hydrostatic, (15) (16) (17) (18) where represents the geostrophic evolution, , is the diabatic heating term in , is the geopotential height, is the geostrophic component of horizontal velocity, is the ageostrophic velocity, is horizontal gradient operator in (x, y, p) coordinates. 
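In standard form, the isentropic-coordinate expression of Ertel PV referred to as equation (14), and the conventional potential vorticity unit, are P = \frac{f + \mathbf{k} \cdot \nabla_{\theta} \times \mathbf{V}}{\sigma}, \qquad \sigma \equiv -\frac{1}{g}\frac{\partial p}{\partial \theta}, \qquad 1\ \mathrm{PVU} = 10^{-6}\ \mathrm{K\,m^{2}\,kg^{-1}\,s^{-1}} where \mathbf{k} is the local vertical unit vector and \nabla_{\theta} the gradient operator in isentropic coordinates; this is the usual textbook notation and may differ slightly from the source's symbols.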
With some manipulation (see Quasi-geostrophic equations or Holton 2004, Chapter 6 for details), one can arrive at a conservation law (19) where is the spatially averaged dry static stability. Assuming that the flow is adiabatic, which means , we have the conservation of QGPV. The conserved quantity takes the form (20) which is the QGPV, and it is also known as the pseudo-potential-vorticity. Apart from the diabatic heating term on the right-hand-side of equation(19), it can also be shown that QGPV can be changed by frictional forces. The Ertel PV reduces to the QGPV if one expand the Ertel PV to the leading order, and assume that the evolution equation is quasi-geostrophic, i.e., . Because of this factor, one should also note that the Ertel PV conserves following air parcel on an isentropic surface and is therefore a good Lagrangian tracer, whereas the QGPV is conserved following large-scale geostrophic flow. QGPV has been widely used in depicting large-scale atmospheric flow structures, as discussed in the section PV invertibility principle; PV invertibility principle Apart from being a Lagrangian tracer, the potential vorticity also gives dynamical implications via the invertibility principle. For a 2-dimensional ideal fluid, the vorticity distribution controls the stream function by a Laplace operator, (21) where is the relative vorticity, and is the streamfunction. Hence from the knowledge of vorticity field, the operator can be inverted and the stream function can be calculated. In this particular case (equation 21), vorticity gives all the information needed to deduce motions, or streamfunction, thus one can think in terms of vorticity to understand the dynamics of the fluid. A similar principle was originally introduced for the potential vorticity in three-dimensional fluid in the 1940s by Kleinschmit, and was developed by Charney and Stern in their quasi-geostrophic theory. Despite theoretical elegance of Ertel's potential vorticity, early applications of Ertel PV are limited to tracer studies using special isentropic maps. It is generally insufficient to deduce other variables from the knowledge of Ertel PV fields only, since it is a product of wind () and temperature fields ( and ). However, large-scale atmospheric motions are inherently quasi-static; wind and mass fields are adjusted and balanced against each other (e.g., gradient balance, geostrophic balance). Therefore, other assumptions can be made to form a closure and deduce the complete structure of the flow in question:(1) introduce balancing conditions of certain form. These conditions must be physically realizable and stable without instabilities such as static instability. Also, the space and time scales of the motion must be compatible with the assumed balance;(2) specify a certain reference state, such as distribution of temperature, potential temperature, or geopotential height;(3) assert proper boundary conditions and invert the PV field globally.The first and second assumptions are expressed explicitly in the derivation of quasi-geostrophic PV. Leading-order geostrophic balance is used as the balancing condition. The second-order terms such as ageostrophic winds, perturbations of potential temperature and perturbations of geostrophic height should have consistent magnitude, i.e., of the order of Rossby number. The reference state is zonally averaged potential temperature and geopotential height. 
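In Holton-style notation, the quasi-geostrophic potential vorticity of equation (20) and its conservation for adiabatic, frictionless flow can be written as follows; this is a conventional reconstruction, with \sigma the reference-state static stability and \Phi the geopotential. q = \frac{1}{f_0}\nabla^{2}\Phi + f + \frac{\partial}{\partial p}\left(\frac{f_0}{\sigma}\frac{\partial \Phi}{\partial p}\right), \qquad \frac{D_g q}{Dt} = 0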
The third assumption is apparent even for 2-dimensional vorticity inversion because inverting the Laplace operator in equation (21), which is a second-order elliptic operator, requires knowledge of the boundary conditions. For example, in equation (20), invertibility implies that given the knowledge of , the Laplace-like operator can be inverted to yield geopotential height . is also proportional to the QG streamfunction under the quasi-geostrophic assumption. The geostrophic wind field can then be readily deduced from . Lastly, the temperature field is given by substituting into the hydrostatic equation (17). See also Vorticity Circulation (fluid dynamics) References Further reading External links AMS Glossary entry Encyclopedia article on Potential Vorticity by Michael E. McIntyre Oceanography Atmospheric dynamics
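The two-dimensional invertibility relation cited as equation (21) is the Poisson equation \nabla^{2}\psi = \zeta : given the vorticity field \zeta and suitable boundary conditions, the streamfunction \psi, and hence the flow, can be recovered by inverting the Laplacian.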
Potential vorticity
Physics,Chemistry,Environmental_science
3,180
38,040,895
https://en.wikipedia.org/wiki/Stenella%20gynoxidicola
Stenella gynoxidicola, formerly Cladosporium gynoxidicola, is a species of anamorphic fungus. Description Belonging to the genus Stenella, this species is a Cercospora-like fungus with a superficial secondary mycelium, solitary conidiophores, conidiogenous cells with thickened and darkened conidiogenous loci, and catenate or single conidia with dark, slightly thickened hila. See also Stenella iteae Stenella africana Stenella constricta Stenella uniformis Stenella vermiculata Stenella capparidicola References Further reading External links gynoxidicola Fungi described in 1982 Fungus species
Stenella gynoxidicola
Biology
152
1,211,905
https://en.wikipedia.org/wiki/HD%2074156
HD 74156 is a yellow dwarf star (spectral type G0V) in the constellation of Hydra, 187 light years from the Solar System. It is known to be orbited by two giant planets. Star This star is 24% more massive and 64% larger than the Sun. Its total luminosity is 2.96 times that of the Sun, and its temperature is 5960 K. The age of the star is estimated at 3.7 billion years, with a metallicity 1.35 times that of the Sun based on its abundance of iron. Planetary system In April 2001, two giant planets were announced orbiting the star. The first planet, HD 74156 b, orbits the star at a distance closer than Mercury is to the Sun, in an extremely eccentric orbit. The second planet, HD 74156 c, is a long-period, massive planet (at least 8 times the mass of Jupiter) that orbits the star in an elliptical orbit with a semimajor axis of 3.90 astronomical units. In 2022, the inclination and true mass of HD 74156 c were measured via astrometry. Claims of a third planet Given the two-planet configuration of the system, and assuming that the orbits are coplanar and that the planets have masses equal to their minimum masses, an additional Saturn-mass planet would be stable in a region between 0.9 and 1.4 AU, between the orbits of the two known planets. Under the "packed planetary systems" hypothesis, which predicts that planetary systems form in such a way that the system could not support additional planets between the orbits of the existing ones, this gap would be expected to host a planet. In September 2007, a third planet with a mass of at least 0.396 Jupiter masses was announced to be orbiting between planets b and c in an eccentric orbit. The planet, orbiting in a region of the planetary system previously known to be stable for additional planets, was seen as a confirmation of the "packed planetary systems" hypothesis. However, Roman V. Baluev has cast doubt on this discovery, suggesting that the observed variations may be due to annual errors in the data. A subsequent search using the Hobby-Eberly Telescope also failed to confirm the planet, and further data obtained using the HIRES instrument strongly contradict its existence. See also List of extrasolar planets HD 37124 Upsilon Andromedae References External links Extrasolar Planet Interactions by Rory Barnes & Richard Greenberg, Lunar and Planetary Lab, University of Arizona Hydra (constellation) 074156 042327 G-type main-sequence stars Planetary systems with two confirmed planets BD+05 2035 J08422511+0434411
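As a rough illustration of how the quoted numbers fit together, a back-of-the-envelope Kepler's-third-law estimate of the orbital period of HD 74156 c can be made from the semi-major axis and stellar mass given above. The short script below is only an illustrative sketch: the variable names are arbitrary, and the result is an estimate derived from the figures in the text, not a measured value.

import math

# Values taken from the article text above; treated as exact for this estimate.
stellar_mass_msun = 1.24   # the star is described as 24% more massive than the Sun
a_au = 3.90                # semi-major axis of HD 74156 c in astronomical units

# Kepler's third law with the star dominating the system mass:
# P[yr]^2 = a[AU]^3 / M[Msun]
period_years = math.sqrt(a_au**3 / stellar_mass_msun)
print(f"Estimated period of HD 74156 c: {period_years:.1f} yr "
      f"(~{period_years * 365.25:.0f} days)")

This gives roughly 7 years, consistent with the planet's description as long-period.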
HD 74156
Astronomy
544
25,713,656
https://en.wikipedia.org/wiki/Experimental%20architecture
Experimental Architecture is a visionary branch of architecture and research practice that aims to bring about change and to develop forms of architecture never seen before. The common concept behind experimental architecture is the challenging of conventional methods of architecture in order to change the way in which we relate to the natural world, while meeting the needs of all peoples. Rather than using architecture to control the environment, experimental architecture seeks to utilize the natural environment in its design, by searching for new ways in which we can inhabit our ecosystem. Experimental architecture considers the contribution of non-humans to our living space. There is also a large emphasis, within experimental architecture, on the inclusivity of all peoples, the disadvantaged included, as it addresses the realities of diverse bodies and abilities. Combating climate change and reducing wastage and pollution is another main focus behind the concept of experimental architecture. Methodology Experimental architecture seeks to break out of typical architectural conventions by questioning the limitations of architecture and experimenting with shapes, materials, technology, construction methods, and social structures. Thus, experimental architecture utilizes a transdisciplinary approach in order to address various issues in its design. Many experimental architectural concepts arise from developing efficient structures by examining nature's forms and processes. For example, experimental architecture considers the use of construction materials such as super-strong renewable wood, fungus-based self-healing concrete, nanocellulose made of waste materials, 3D-printed sandstone and cement that absorbs carbon dioxide out of the atmosphere. The use of such materials demonstrates how experimental architecture plans for the entire life cycle of a structure, considering how components of the structure can be reused and/or recycled. The materials considered in experimental architecture are different from the materials typically used in architectural designs, which include steel, concrete, wood, stone and brick. Furthermore, current construction, underpinned by Modern Architecture, is responsible for a large amount of global energy use, greenhouse gas emissions, water use, wood harvests, raw material extraction and waste. Thus, experimental architecture seeks to resist the mainstream practices of modern architecture. Following Lebbeus Woods' scholarship, experimental architecture applies a scientific approach to research, requiring that developments of tools and methodologies can be recorded, evaluated and discussed among a community of peers. The contextualization in scientific tradition derives, for example, from Woods' interest in Isaac Newton's cause-and-effect determinism; his critique of Descartes; and his dedication to deploying design practices for exploring alternatives to Cartesian space. History The concept of experimental architecture has been around since the late 20th century. It is seen as having emerged predominantly as a reaction against the functional and standardized architectural design of the post-war years. Architects reacted by taking inspiration from certain art forms, which eventually culminated in the emergence of the concepts of experimental architecture. Experimental architecture also emerged with the increased inventive use of technology and was conceived amid advances in innovative materials, computers, communication, transportation and plastics. 
Experimental architecture sought to apply these advances to produce more radical and empowering architecture. Experimental architecture was often considered to be a form of paper architecture, referring to architects making utopian, dystopian or fantasy projects that were never meant to be built. The concept of experimental architecture was first conceived of by the architect Peter Cook in his 1970 book "Experimental Architecture." Peter Cook was also part of the architecture firm Archigram, formed in the 1960s, which embraced the ideology of experimental architecture. However, while the term "experimental architecture" was first coined in Cook's book, the practice of experimental architecture predates 1970, as there are many examples before this time of architecture that could be considered experimental. Lebbeus Woods is another prominent figure in the conceptualization of experimental architecture; he wrote about the topic in a variety of his published books, and in particular his book "Radical Reconstruction" explores the practice and ideas of experimental architecture. Woods played an integral part in researching and conceptualizing experimental architecture. He established the Research Institute for Experimental Architecture in 1988, which many architectural organizations have since followed. He also used experimental architectural concepts in several of his own designs and was heavily involved in designing experimental, alternative ways of living. An example of Woods' ideas of experimental architecture is his Underground Berlin design. During the time of the Berlin Wall, Woods came up with an experimental design that involved living underground. This design sought to overthrow the current system of values and social control through means of experimental architecture. It may be considered paper architecture, as it was merely a concept and was never made into reality. This topic was further explored by the architect Rachel Armstrong in her 2019 book "Experimental Architecture: Designing the Unknown," which is predominantly concerned with theorizing experimental architecture. Armstrong's work investigates a new approach to building materials called 'living architecture,' which explores the idea of buildings sharing some of the properties of living systems. Armstrong describes experimental architecture as challenging the practice of upholding previous principles of architecture that emerged in the industrial age, in order to take steps towards more ecologically engaged approaches. Organizations A multitude of experimental architecture organizations have emerged since the late 20th century. One example is the Research Institute for Experimental Architecture (RIEA), founded in 1988 by Lebbeus Woods in Switzerland. Woods describes the institute as having an epistemological approach to architecture. The purpose of RIEA is to advance experimentation and research in the architectural field; it promotes and provides training for experimental design and the implementation of experimental projects. The Institute for Experimental Architecture at Innsbruck University, founded in 2000 by Volker Giencke, is another organization committed to exploring the concept of experimental architecture. 
The Experimental Architecture Group (EAG), founded by Rachel Armstrong, researches and adopts experimental architecture practices, taking an ethical and ecological approach in its exploration of architectural design. The group uses prototypes to explore new pathways of architecture that acknowledge the diversity of humans and nonhumans. EAG's work has been exhibited and performed at the Venice Art Biennale, Trondheim Biennale, Allenheads Contemporary Arts, Culture Lab and the Tallinn Architecture Biennale. The EAG sets out to enable the transition from an industrial era towards an ecological era by developing new architectural processes. The architecture firm Archigram, formed in the 1960s, practises experimental architecture by opposing the conventions of modern architecture, in which the architect is the designer of fixed forms, and by using adaptive architecture and integrating new technologies into its designs. ARCH5 is an international research consortium that specializes in the design of experimental architecture. This group focuses on developing building techniques that integrate plant technologies and variegated roof systems. An example of its work is its roof landscapes that allow water to percolate through various soil substrates that emulate meadow- or fen-like artificial habitats. Experimental Architecture in China The practice of experimental architecture has been predominant in China since the end of the 20th century. The concept of experimental architecture began to emerge in China with the appearance of the 1985 Art Movement, experimental novels and avant-garde drama. Experimental architecture was able to emerge in the post-Mao era as authorities gradually permitted private architecture design firms to operate, enabling freedom for the practice of experimental architecture. With fewer of the architectural restraints that were experienced in the Mao era, architects were able to explore architecture through innovation and experimentation. Experimental architecture sought to challenge the restrained architecture that resulted from the restrictions of the Mao era. Experimental architecture emerged in China in two different ways, one being the exploration of ancient Chinese Confucian architecture, which was a symbolic expression of the "new great China" after Mao. The other way in which experimental architecture manifested was in attempts to follow the young international avant-garde and international modern approaches to architecture, such as deconstructivism. Some Chinese experimental architects have attempted to move away from traditional architectural concepts, theories and forms to create brand new experimental architecture, and some have applied the concepts of experimental architecture to traditional Chinese architecture. A variety of experimental architecture firms in China explore the different practices of experimental architecture, some of which explore the deeper layers of Chinese identity and some of which take a more transnational approach. Some architectural firms in China that practise experimental architecture seek to combine traditional ideas of Chinese architecture with new experimental forms of architecture. For example, Atelier FCJZ, an architecture office located in China, designed an experimental house called Concrete Vessel. 
The design of this house built upon the concept of a traditional courtyard house in Beijing, while bringing to the design a connectedness with the natural environment through the use of experimental materials. The material used externally and internally throughout the house was a 3 mm-thin glass-fiber-reinforced concrete made from recycled construction debris. This thin material was lightweight, and its porosity created a living environment that breathes and filters the air while allowing light to come through. This material is an example of how experimental architecture seeks to connect life and nature. Examples Architects practicing experimental architecture conceptualize new ideas of architecture. Types of experimental architecture vary, with some types demonstrating the application of ideas and approaches in full scale, and some types using small-scale models. Some experimental architecture is considered to be like paper architecture, which illustrates utopian or dystopian visions that may not necessarily be intended for realization. However, there are a variety of examples of experimental architecture that have been implemented in the real world. Experimental architecture may focus on incorporating the properties of living systems into its design, it may focus on interconnectedness between humans and non-humans, it may focus on the reuse and reusability of designs, or it may focus on the ecology of design. Many experimental architecture designs are a combination of these factors. Singapore's solar-powered Supertrees are considered to be a form of experimental architecture. The Supertrees are a mechanical forest of vertical gardens, rainwater collection systems and conservatories. This architecture is an example of ecologically focused structures that seek to replicate some of the properties of living systems, such as rainforests. It is a form of biomimicry, a practice common in experimental architecture that learns from and mimics the strategies found in nature to solve human design challenges. Another example of experimental architecture that attempts to encapsulate some of the properties of living systems is the University of Stuttgart's carbon fiber pavilions. This structure is inspired by the lightweight shell that encases the wings and abdomen of a beetle, and thus this design is another example of biomimicry in experimental architecture. Furthermore, the practice of biomimicry is evident in the development of fiber composites, using 3D printing, based on the behavior of spiders and silkworms. 3D printing is a common tool utilized in experimental architecture. Another experimental architecture concept is the zero-water desert garden design. This design is an ecologically focused architecture project that explores how the design of urban cities can implement plants that are not dependent on water. Another example of experimental architecture is the prototype called Co-Occupancy. This design aimed to develop interconnectedness between human and non-human species. The design involved delineating the zones between humans and non-humans, for example designing roofs and foundations so that they could be utilized by animals. 
Another example of experimental architecture is Minnesota's Experimental City, which was a concept design for a self-sustaining city. The design encompassed ideas of recycling, circularity and reversible design. An example of experimental architecture that considers the entire life cycle of the structure is the Cellophane House. This structure is pre-fabricated and designed for disassembly and reuse of its materials. Lightweight materials were chosen that were reusable within existing recycling streams. The house was also designed so that it could adapt to different sites and climatic factors, enabling it to be reused in different areas, and so that there would be no waste from the disassembly process. An example of experimental architecture that focuses on a user-centred design is the Soar Design Studio's residence, which was converted to a communal space for local students and designed to increase social interaction through connected and open spaces. References External links Lebbeus Woods' website Research Institute for Experimental Architecture Minnesota's Experimental City of the Future that Never Got Built Experimental Prototype Architecture Exhibited in French Park Experimental Architecture: Testing New Ideas in Living Laboratories - WebUrbanist Concrete Vessel / Atelier FCJZ Experimental Architecture: Prototyping Possibilities for an Ecological Era Architectural design
Experimental architecture
Engineering
2,583
1,978,796
https://en.wikipedia.org/wiki/Epitaxial%20wafer
An epitaxial wafer (also called epi wafer, epi-wafer, or epiwafer) is a wafer of semiconducting material made by epitaxial growth (epitaxy) for use in photonics, microelectronics, spintronics, or photovoltaics. The epi layer may be the same material as the substrate, typically monocrystalline silicon, or it may be a silicon-on-insulator (SOI) layer or a more exotic material with specific desirable qualities. The purpose of epitaxy is to perfect the crystal structure over the bare substrate below and improve the wafer surface's electrical characteristics, making it suitable for highly complex microprocessors and memory devices. History Silicon epi wafers were first developed around 1966 and achieved commercial acceptance by the early 1980s. Methods for growing the epitaxial layer on monocrystalline silicon or other wafers include various types of chemical vapor deposition (CVD), classified as atmospheric-pressure CVD (APCVD) or metal-organic chemical vapor deposition (MOCVD), as well as molecular beam epitaxy (MBE). Two "kerfless" methods (without abrasive sawing) for separating the epitaxial layer from the substrate are called "implant-cleave" and "stress liftoff". A method applicable when the epi layer and substrate are the same material employs ion implantation to deposit a thin layer of crystal impurity atoms, and the resulting mechanical stress, at the precise depth of the intended epi layer thickness. The induced localized stress provides a controlled path for crack propagation in the following cleavage step. In the dry stress lift-off process, applicable when the epi layer and substrate are suitably different materials, a controlled crack is driven by a temperature change at the epi/wafer interface purely by the thermal stresses due to the mismatch in thermal expansion between the epi layer and substrate, without the necessity for any external mechanical force or tool to aid crack propagation. It was reported that this process yields single-atomic-plane cleavage, reducing the need for post-lift-off polishing and allowing the substrate to be reused up to 10 times. Types The epitaxial layers may consist of compounds with particular desirable features such as gallium nitride (GaN), gallium arsenide (GaAs), or some combination of the elements gallium, indium, aluminum, nitrogen, phosphorus or arsenic. Photovoltaic research and development Solar cells, or photovoltaic (PV) cells, for producing electric power from sunlight can be grown as thick epi wafers on a monocrystalline silicon "seed" wafer by chemical vapor deposition (CVD), and then detached as self-supporting wafers of some standard thickness (e.g., 250 μm) that can be manipulated by hand and directly substituted for wafer cells cut from monocrystalline silicon ingots. Solar cells made with this technique can have efficiencies approaching those of wafer-cut cells, but at appreciably lower cost if the CVD can be done at atmospheric pressure in a high-throughput inline process. In September 2015, the Fraunhofer Institute for Solar Energy Systems (Fraunhofer ISE) announced the achievement of efficiency above 20% for such cells. Optimizing the production chain was done in collaboration with NexWafe GmbH, a company spun off from Fraunhofer ISE to commercialize production. The surface of epitaxial wafers may be textured to enhance light absorption. 
In April 2016, the company Crystal Solar of Santa Clara, California, in collaboration with the European research institute IMEC, announced that they had achieved a 22.5% cell efficiency for an epitaxial silicon cell with an nPERT (n-type passivated emitter, rear totally-diffused) structure grown on 6-inch (150 mm) wafers. In September 2015, Hanwha Q Cells reported an independently confirmed conversion efficiency of 21.4% for screen-printed solar cells made with Crystal Solar epitaxial wafers. In June 2015, it was reported that heterojunction solar cells grown epitaxially on n-type monocrystalline silicon wafers had reached an efficiency of 22.5% over a total cell area of 243.4 cm². In 2016, a new approach was described for producing hybrid photovoltaic wafers combining the high efficiency of III-V multi-junction solar cells with the economies and wealth of experience associated with silicon. The technical complications involved in growing the III-V material on silicon at the required high temperatures, a subject of study for some 30 years, are avoided by epitaxial growth of silicon on GaAs at low temperature by plasma-enhanced chemical vapor deposition (PECVD). References Swinger, Patricia. Building on the Past, Ready for the Future: A Fiftieth Anniversary Celebration of MEMC Electronic Materials, Inc. The Donning Company, 2009. Notes Semiconductor device fabrication
Epitaxial wafer
Materials_science
1,053