Off-the-record Messaging ( OTR ) is a cryptographic protocol that provides encryption for instant messaging conversations. OTR uses a combination of the AES symmetric-key algorithm with 128-bit key length, the Diffie–Hellman key exchange with 1536-bit group size, and the SHA-1 hash function. In addition to authentication and encryption , OTR provides forward secrecy and malleable encryption .
The primary motivation behind the protocol was providing deniable authentication for the conversation participants while keeping conversations confidential, like a private conversation in real life, or off the record in journalism sourcing . This is in contrast with cryptography tools that produce output which can be later used as a verifiable record of the communication event and the identities of the participants. The initial introductory paper was named "Off-the-Record Communication, or, Why Not To Use PGP ". [ 1 ]
The OTR protocol was designed by cryptographers Ian Goldberg and Nikita Borisov and released on 26 October 2004. [ 2 ] They provide a client library to facilitate support for instant messaging client developers who want to implement the protocol. A Pidgin and Kopete plugin exists that allows OTR to be used over any IM protocol supported by Pidgin or Kopete, offering an auto-detection feature that starts the OTR session with the buddies that have it enabled, without interfering with regular, unencrypted conversations. Version 4 of the protocol [ 3 ] has been in development since 2017 [ 4 ] by a team led by Sofía Celi, and reviewed by Nik Unger and Ian Goldberg. This version aims to provide online and offline deniability, to update the cryptographic primitives, and to support out-of-order delivery and asynchronous communication.
OTR was presented in 2004 by Nikita Borisov, Ian Avrum Goldberg , and Eric A. Brewer as an improvement over OpenPGP and the S/MIME system at the "Workshop on Privacy in the Electronic Society" (WPES). [ 1 ] The first version 0.8.0 of the reference implementation was published on 21 November 2004. In 2005 an analysis was presented by Mario Di Raimondo, Rosario Gennaro, and Hugo Krawczyk that called attention to several vulnerabilities, most notably a flaw in the key exchange, and proposed appropriate fixes. [ 5 ] As a result, version 2 of the OTR protocol was published in 2005 which implements a variation of the proposed modification that additionally hides the public keys. Moreover, the possibility to fragment OTR messages was introduced in order to deal with chat systems that have a limited message size, and a simpler method of verification against man-in-the-middle attacks was implemented. [ 6 ]
In 2007 Olivier Goffart published mod_otr [ 7 ] for ejabberd , making it possible to perform man-in-the-middle attacks on OTR users who don't check key fingerprints. OTR developers countered this attack by introducing a socialist millionaire protocol implementation in libotr. Instead of comparing key checksums, knowledge of an arbitrary shared secret, for which relatively low entropy can be tolerated, is used. [ 8 ]
Version 3 of the protocol was published in 2012. As a measure against the repeated reestablishment of a session in case of several competing chat clients being signed on to the same user address at the same time, more precise identification labels for sending and receiving client instances were introduced in version 3. Moreover, an additional key is negotiated which can be used for another data channel. [ 9 ]
Several solutions have been proposed for supporting conversations with multiple participants. A method proposed in 2007 by Jiang Bian, Remzi Seker, and Umit Topaloglu uses the system of one participant as a "virtual server". [ 10 ] The method called "Multi-party Off-the-Record Messaging" (mpOTR), published in 2009, works without a central management host and was introduced in Cryptocat by Ian Goldberg et al. [ 11 ]
In 2013, the Signal Protocol was introduced, which is based on OTR Messaging and the Silent Circle Instant Messaging Protocol (SCIMP). It brought about support for asynchronous communication ("offline messages") as its major new feature, as well as better resilience to messages arriving out of order and simpler support for conversations with multiple participants. [ 12 ] OMEMO , introduced in an Android XMPP client called Conversations in 2015, integrates the Double Ratchet Algorithm used in Signal into the instant messaging protocol XMPP ("Jabber") and also enables encryption of file transfers. In the autumn of 2015 it was submitted to the XMPP Standards Foundation for standardisation. [ 13 ] [ 14 ]
Version 4 of the protocol has since been designed; it was presented by Sofía Celi and Ola Bini at PETS 2018. [ 15 ]
In addition to providing encryption and authentication — features also provided by typical public-key cryptography suites, such as PGP , GnuPG , and X.509 ( S/MIME ) — OTR also offers some less common features:
As of OTR 3.1, the protocol supports mutual authentication of users using a shared secret through the socialist millionaire protocol. This feature makes it possible for users to verify the identity of the remote party and avoid a man-in-the-middle attack without the inconvenience of manually comparing public key fingerprints through an outside channel.
Due to limitations of the protocol, OTR does not support multi-user group chat as of 2009 [ 16 ] but it may be implemented in the future. As of version 3 [ 9 ] of the protocol specification, an extra symmetric key is derived during authenticated key exchanges that can be used for secure communication (e.g., encrypted file transfers ) over a different channel. Support for encrypted audio or video is not planned. ( SRTP with ZRTP exists for that purpose.) A project to produce a protocol for multi-party off-the-record messaging (mpOTR) has been organized by Cryptocat , eQualitie , and other contributors including Ian Goldberg. [ 11 ] [ 17 ]
Since OTR protocol v3 (libotr 4.0.0) the plugin supports multiple OTR conversations with the same buddy who is logged in at multiple locations. [ 18 ]
These clients support Off-the-Record Messaging out of the box (incomplete list).
The following clients require a plug-in to use Off-the-Record Messaging.
Although Gmail's Google Talk uses the term "off the record", the feature has no connection to the Off-the-Record Messaging protocol described in this article; its chats are not encrypted in the way described above and could be logged internally by Google even if not accessible by end-users. [ 32 ] [ 33 ] | https://en.wikipedia.org/wiki/Off-the-record_messaging |
The Office Open XML file formats are a set of file formats that can be used to represent electronic office documents. There are formats for word processing documents, spreadsheets and presentations as well as specific formats for material such as mathematical formulas, graphics, bibliographies etc.
The formats were developed by Microsoft and first appeared in Microsoft Office 2007 . They were standardized between December 2006 and November 2008, first by the Ecma International consortium, where they became ECMA-376, and subsequently, after a contentious standardization process , by the ISO/IEC's Joint Technical Committee 1, where they became ISO/IEC 29500:2008.
Office Open XML documents are stored in Open Packaging Conventions (OPC) packages, which are ZIP files containing XML and other data files, along with a specification of the relationships between them. [ 2 ] Depending on the type of the document, the packages have different internal directory structures and names. An application will use the relationships files to locate individual sections (files), with each having accompanying metadata, in particular MIME metadata.
A basic package contains an XML file called [Content_Types].xml at the root, along with three directories: _rels , docProps , and a directory specific for the document type (for example, in a .docx word processing package, there would be a word directory). The word directory contains the document.xml file which is the core content of the document.
An example relationship file ( word/_rels/document.xml.rels ) is:
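(A minimal illustrative sketch follows; the relationship IDs, types and targets are representative rather than taken from any particular document, and the type URIs use the standardized schemas.openxmlformats.org form, while earlier drafts of the format used schemas.microsoft.com URIs such as the image type cited in the following paragraph.)

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
  <!-- illustrative entries; Id values are arbitrary but must be unique within the part -->
  <Relationship Id="rId1"
      Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/styles"
      Target="styles.xml"/>
  <Relationship Id="rId2"
      Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/hyperlink"
      Target="http://www.example.com/" TargetMode="External"/>
  <Relationship Id="rId3"
      Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/image"
      Target="media/image1.png"/>
</Relationships>
```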
As such, images referenced in the document can be found in the relationship file by looking for all relationships that are of type http://schemas.microsoft.com/office/2006/relationships/image . To change the used image, edit the relationship.
The following code shows an example of inline markup for a hyperlink :
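A sketch of how such a hyperlink can appear in WordprocessingML (the namespace declarations, normally placed on the document's root element, are repeated inline here so the fragment stands alone; the link text is illustrative):

```xml
<w:hyperlink r:id="rId2"
    xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main"
    xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships">
  <!-- the r:id attribute points at a Relationship entry in document.xml.rels -->
  <w:r>
    <w:t>Example hyperlink text</w:t>
  </w:r>
</w:hyperlink>
```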
In this example, the Uniform Resource Locator (URL) is in the Target attribute of the Relationship referenced through the relationship Id, "rId2" in this case. Linked images, templates, and other items are referenced in the same way.
Pictures can be embedded or linked using a tag:
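In DrawingML-based markup, for example, the reference is carried by the blip element; a minimal sketch, with an illustrative relationship ID:

```xml
<!-- r:embed references an image stored inside the package;
     r:link would instead reference an externally linked image -->
<a:blip r:embed="rId3"
    xmlns:a="http://schemas.openxmlformats.org/drawingml/2006/main"
    xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships"/>
```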
This is the reference to the image file. All references are managed via relationships. For example, a document.xml has a relationship to the image. There is a _rels directory in the same directory as document.xml; inside _rels is a file called document.xml.rels. In this file there will be a relationship definition that contains type, ID and location. The ID is the referenced ID used in the XML document. The type will be a reference schema definition for the media type and the location will be an internal location within the ZIP package or an external location defined with a URL.
Office Open XML uses the Dublin Core Metadata Element Set and DCMI Metadata Terms to store document properties. Dublin Core is a standard for cross-domain information resource description and is defined in ISO 15836:2003 .
An example document properties file ( docProps/core.xml ) that uses Dublin Core metadata is:
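(A minimal illustrative sketch; the property values are invented for the example.)

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cp:coreProperties
    xmlns:cp="http://schemas.openxmlformats.org/package/2006/metadata/core-properties"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <dc:title>Example document</dc:title>
  <dc:creator>A. N. Author</dc:creator>   <!-- Dublin Core "creator" element -->
  <dc:subject>Example subject</dc:subject>
  <dcterms:created xsi:type="dcterms:W3CDTF">2008-06-19T20:00:00Z</dcterms:created>
  <dcterms:modified xsi:type="dcterms:W3CDTF">2008-06-19T20:30:00Z</dcterms:modified>
</cp:coreProperties>
```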
An Office Open XML file may contain several documents encoded in specialized markup languages corresponding to applications within the Microsoft Office product line. Office Open XML defines multiple vocabularies using 27 namespaces and 89 schema modules.
The primary markup languages are WordprocessingML (for word-processing documents), SpreadsheetML (for spreadsheets) and PresentationML (for presentations).
Shared markup language materials include DrawingML and Office Math Markup Language (OMML), among others.
In addition to the above markup languages custom XML schemas can be used to extend Office Open XML.
Patrick Durusau, the editor of ODF , has viewed the markup style of OOXML and ODF as representing two sides of a debate: the "element side" and the "attribute side". He notes that OOXML represents "the element side of this approach" and singles out the KeepNext element as an example:
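In WordprocessingML, that setting takes the form of an empty child element inside the paragraph properties, roughly as sketched here:

```xml
<w:pPr xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
  <!-- keep this paragraph on the same page as the following one -->
  <w:keepNext/>
</w:pPr>
```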
In contrast, he notes ODF would use the single attribute fo:keep-next , rather than an element, for the same semantic. [ 3 ]
The XML Schema of Office Open XML emphasizes reducing load time and improving parsing speed. [ 4 ] In a test with applications current in April 2007, XML-based office documents were slower to load than binary formats. [ 5 ] To enhance performance, Office Open XML uses very short element names for common elements, and spreadsheets save dates as index numbers (starting from 1900 or from 1904). [ 6 ] In order to be systematic and generic, Office Open XML typically uses separate child elements for data and metadata (element names ending in Pr for properties ) rather than using multiple attributes, which allows structured properties. Office Open XML does not use mixed content but uses elements to put a series of text runs (element name r ) into paragraphs (element name p ). The result is terse [ citation needed ] and highly nested, in contrast to HTML , for example, which is fairly flat, designed for humans to write in text editors, and more congenial for humans to read.
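A minimal WordprocessingML fragment illustrating these conventions (terse element names, a properties child with the Pr suffix, and text runs inside a paragraph); the content is illustrative:

```xml
<w:p xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
  <w:pPr>                  <!-- paragraph properties -->
    <w:jc w:val="center"/> <!-- centered justification -->
  </w:pPr>
  <w:r>                    <!-- a run of text -->
    <w:rPr>                <!-- run properties -->
      <w:b/>               <!-- bold -->
    </w:rPr>
    <w:t>Hello, world</w:t>
  </w:r>
</w:p>
```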
The naming of elements and attributes within the text has attracted some criticism. There are three different syntaxes in OOXML (ECMA-376) for specifying the color and alignment of text depending on whether the document is a text, spreadsheet, or presentation. Rob Weir (an IBM employee and co-chair of the OASIS OpenDocument Format TC) asks "What is the engineering justification for this horror?". He contrasts with OpenDocument : "ODF uses the W3C's XSL-FO vocabulary for text styling, and uses this vocabulary consistently". [ 7 ]
Some have argued the design is based too closely on Microsoft applications.
In August 2007, the Linux Foundation published a blog post calling upon ISO National Bodies to vote "No, with comments" during the International Standardization of OOXML. It said, "OOXML is a direct port of a single vendor's binary document formats. It avoids the re-use of relevant existing international standards (e.g. several cryptographic algorithms, VML, etc.). There are literally hundreds of technical flaws that should be addressed before standardizing OOXML including continued use of binary code tied to platform specific features, propagating bugs in MS-Office into the standard, proprietary units, references to proprietary/confidential tags, unclear IP and patent rights, and much more". [ 8 ]
The version of the standard submitted to JTC 1 was 6546 pages long. The need for and appropriateness of such length have been questioned. [ 9 ] [ 10 ] Google stated that "the ODF standard, which achieves the same goal, is only 867 pages". [ 9 ]
Word processing documents use the XML vocabulary known as WordprocessingML normatively defined by the schema wml.xsd which accompanies the standard. This vocabulary is defined in clause 11 of Part 1. [ 11 ]
Spreadsheet documents use the XML vocabulary known as SpreadsheetML normatively defined by the schema sml.xsd which accompanies the standard. This vocabulary is described in clause 12 of Part 1. [ 11 ]
Each worksheet in a spreadsheet is represented by an XML document whose root element is named worksheet , in the http://schemas.openxmlformats.org/spreadsheetml/2006/main namespace.
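A minimal sketch of such a worksheet part (the cell contents are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<worksheet xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">
  <sheetData>
    <row r="1">
      <c r="A1"><v>42</v></c>      <!-- a numeric cell -->
      <c r="B1" t="s"><v>0</v></c> <!-- a string cell: index 0 into the shared strings part -->
    </row>
  </sheetData>
</worksheet>
```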
The representation of date and time values in SpreadsheetML has attracted some criticism. ECMA-376 1st edition does not conform to ISO 8601:2004 "Representation of Dates and Times". It requires that implementations replicate a Lotus 1-2-3 [ 12 ] bug that erroneously treats 1900 as a leap year. Products complying with ECMA-376 would therefore, when using the WEEKDAY() spreadsheet function, assign incorrect dates to some days of the week, and also miscalculate the number of days between certain dates. [ 13 ] ECMA-376 2nd edition (ISO/IEC 29500) allows the use of ISO 8601:2004 "Representation of Dates and Times" in addition to the Lotus 1-2-3 bug-compatible form. [ 14 ] [ 15 ]
Office Math Markup Language is a mathematical markup language which can be embedded in WordprocessingML, with intrinsic support for including word processing markup like revision markings, [ 16 ] footnotes, comments, images and elaborate formatting and styles. [ 17 ] The OMML format is different from the World Wide Web Consortium (W3C) MathML recommendation, which does not support those office features, but is partially compatible [ 18 ] through XSL Transformations ; tools are provided with the office suite and are automatically used via clipboard transformations. [ 19 ]
The following Office MathML example defines the fraction π/2:
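(A sketch of OMML markup for this fraction, using the officeDocument/2006/math namespace.)

```xml
<m:oMath xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math">
  <m:f>                                     <!-- fraction -->
    <m:num><m:r><m:t>π</m:t></m:r></m:num>  <!-- numerator -->
    <m:den><m:r><m:t>2</m:t></m:r></m:den>  <!-- denominator -->
  </m:f>
</m:oMath>
```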
Some have queried the need for Office MathML (OMML) instead advocating the use of MathML , a W3C recommendation for the "inclusion of mathematical expressions in Web pages" and "machine to machine communication". [ 20 ] Murray Sargent has answered some of these issues in a blog post, which details some of the philosophical differences between the two formats. [ 21 ]
DrawingML is the vector graphics markup language used in Office Open XML documents. Its major features are the graphics rendering of text elements, graphical vector-based shape elements, graphical tables and charts.
The DrawingML table is the third table model in Office Open XML (next to the table models in WordprocessingML and SpreadsheetML) and is optimized for graphical effects; its main use is in presentations created with PresentationML markup.
DrawingML provides graphics effects (like shadows and reflection) that can be applied to the graphical elements used in DrawingML.
In DrawingML you can also create 3D effects, for instance to show the different graphical elements through a flexible camera viewpoint.
It is possible to create separate DrawingML theme parts in an Office Open XML package. These themes can then be applied to graphical elements throughout the Office Open XML package. [ 22 ]
DrawingML is unrelated to other vector graphics formats such as SVG ; these can be converted to DrawingML for native inclusion in an Office Open XML document. This is a different approach from that of the OpenDocument format, which uses a subset of SVG and includes vector graphics as separate files.
A DrawingML graphic's dimensions are specified in English Metric Units (EMUs), so called because they allow an exact common representation of dimensions originally given in either English or metric units: one EMU is defined as 1/360,000 of a centimeter , so there are 914,400 EMUs per inch and 12,700 EMUs per point , which prevents round-off in calculations. Rick Jelliffe favors EMUs as a rational solution to a particular set of design criteria. [ 23 ]
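As a quick check of those figures: 1 inch = 2.54 cm = 2.54 × 360,000 EMUs = 914,400 EMUs, and 1 point = 1/72 inch = 914,400 / 72 EMUs = 12,700 EMUs.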
Some have criticised the use of DrawingML (and the transitional-use-only VML ) instead of W3C recommendation SVG . [ 24 ] VML did not become a W3C recommendation. [ 25 ]
OOXML documents are typically composed of other resources in addition to XML content (graphics, video, etc.).
Some have criticised the choice of permitted format for such resources: ECMA-376 1st edition specifies "Embedded Object Alternate Image Requests Types" and "Clipboard Format Types", which refer to Windows Metafiles or Enhanced Metafiles – each of which is a proprietary format with hard-coded dependencies on Windows itself. The critics state the standard should instead have referenced the platform-neutral standard ISO/IEC 8632 " Computer Graphics Metafile ". [ 13 ]
The Standard provides three mechanisms to allow foreign markup to be embedded within content for editing purposes:
These are defined in clause 17.5 of Part 1.
Versions of Office Open XML contain what are termed "compatibility settings". These are contained in Part 4 ("Markup Language Reference") of ECMA-376 1st Edition, but during standardization were moved to become a new part (also called Part 4) of ISO/IEC 29500:2008 ("Transitional Migration Features").
These settings (including elements with names such as autoSpaceLikeWord95 , footnoteLayoutLikeWW8 , lineWrapLikeWord6 , mwSmallCaps , shapeLayoutLikeWW8 , suppressTopSpacingWP , truncateFontHeightsLikeWP6 , uiCompat97To2003 , useWord2002TableStyleRules , useWord97LineBreakRules , wpJustification and wpSpaceWidth ) were the focus of some controversy during the standardisation of DIS 29500. [ 26 ] As a result, new text was added to ISO/IEC 29500 to document them. [ 27 ]
An article in Free Software Magazine has criticized the markup used for these settings. Office Open XML uses distinctly named elements for each compatibility setting, each of which is declared in the schema. The repertoire of settings is thus limited — for new compatibility settings to be added, new elements may need to be declared, "potentially creating thousands of them, each having nothing to do with interoperability". [ 28 ]
The standard provides two types of extensibility mechanism, Markup Compatibility and Extensibility (MCE) defined in Part 3 (ISO/IEC 29500-3:2008) and Extension Lists defined in clause 18.2.10 of Part 1. | https://en.wikipedia.org/wiki/Office_MathML |
The Office of Astronomy for Development ( OAD ) is an office of the International Astronomical Union (IAU) established in 2011 to further the use of astronomy as a tool for development. [ 1 ] [ 2 ]
The OAD is jointly funded by the International Astronomical Union and the National Research Foundation of South Africa . [ 3 ]
The OAD has eleven regional offices, located in Armenia, China, Colombia, Ethiopia, Jordan, Nigeria, Portugal, Thailand, the Netherlands, the United States, and Zambia, which have similar objectives to the OAD but with a regional focus. [ 4 ]
The OAD annually issues a call for proposals to fund projects which use Astronomy as a tool to address an issue related to sustainable development. The mission of the OAD is to help further the use of astronomy as a tool for development by mobilizing the human and financial resources necessary in order to realize the field's scientific, technological and cultural benefits to society.
The OAD was established on the 16th of April 2011 at the South African Astronomical Observatory (SAAO) in Cape Town. [ 5 ] It was launched by the South African Minister of Science and Technology Naledi Pandor . [ 6 ] [ 7 ]
In 2016, the IAU together with the OAD director Kevin Govender was awarded the Edinburgh Medal for " furthering education and technological capacity worldwide through the inspirational science of astronomy ". [ 8 ] [ 9 ] [ 10 ]
As of 2023, the OAD had administered a total of €1.1 Million in IAU grant funds. These funds have been awarded to 215 projects that reached over 100 countries across the world. [ 11 ] | https://en.wikipedia.org/wiki/Office_of_Astronomy_for_Development |
The Office of Atoms for Peace (OAP) of Thailand (สำนักงานปรมาณูเพื่อสันติ) in Chatuchak district , Bangkok, Thailand, was established in 1961 as the Office of Atomic Energy for Peace. The OAP serves as the main authority for nuclear research in Thailand. The OAP employs approximately 400 people. The research topics and services provided at the OAP include radioisotope production, gamma radiography, neutron activation analysis, neutron radiography, and gemstone irradiation .
The OAP operated a 2-megawatt nuclear research reactor , Thai Research Reactor 1/Modification 1 (TRR-1/M1). The TRR-1/M1 is of the type TRIGA Mark III, built by General Atomics , and began operation in 1962 after being commissioned in 1961 as a 1 MW reactor. The TRR-1/M1 underwent its modification from 1975 to 1977, at which point it began operation as a 2 MW reactor. TRR-1/M1 is the only nuclear reactor in Thailand.
In 2006, OAP was divided into two separate entities: the original OAP, which will oversee nuclear and radiation regulations nationally, and the new Thailand Institute of Nuclear Technology (TINT), which will conduct peaceful nuclear research and offer services to the public. [ 2 ] | https://en.wikipedia.org/wiki/Office_of_Atoms_for_Peace |
The Office of Technology Assessment ( OTA ) was an office of the United States Congress that operated from 1974 to 1995. OTA's purpose was to provide congressional members and committees with objective and authoritative analysis of the complex scientific and technical issues of the late 20th century, i.e. technology assessment . It was a leader in practicing and encouraging delivery of public services in innovative and inexpensive ways, including early involvement in the distribution of government documents through electronic publishing . Its model was widely copied around the world.
The OTA was authorized in 1972 and received its first funding in fiscal year 1974. [ 1 ] It was defunded at the end of 1995, following the 1994 mid-term elections which led to Republican control of the Senate and the House. House Republican legislators characterized the OTA as wasteful and hostile to GOP interests.
Princeton University hosts The OTA Legacy site, which holds "the complete collection of OTA publications along with additional materials that illuminate the history and impact of the agency." [ 2 ] On July 23, 2008 the Federation of American Scientists launched a similar archive that includes interviews and additional documents about OTA. [ 3 ]
Congress established the Office of Technology Assessment with the Technology Assessment Act of 1972. [ 4 ] It was governed by a twelve-member board, comprising six members of Congress from each party—half from the Senate and half from the House of Representatives. During its twenty-four-year life it produced about 750 studies on a wide range of topics, including acid rain , health care , global climate change , and polygraphs .
Criticism of the agency was fueled by Fat City , a 1980 book by Washington Times journalist Donald Lambro that was regarded favorably by the Reagan administration; it called OTA an "unnecessary agency" that duplicated government work done elsewhere. OTA was abolished (technically "de-funded") in the " Contract with America " period of Newt Gingrich 's Republican ascendancy in Congress. According to Science magazine, "some Republican lawmakers came to view [the OTA] as duplicative, wasteful, and biased against their party." [ 5 ]
When the 104th Congress withdrew funding for OTA, it had a statutory limit of 143 full-time staff (augmented by various project-based contractors) and an annual budget of $21.9 million ("Legislative Branch Appropriations Act, FY1995", U.S. Congress, July 24, 1994). The closure of OTA was criticized at the time, including by Republican representative Amo Houghton , who commented at the time of OTA's defunding that "we are cutting off one of the most important arms of Congress when we cut off unbiased knowledge about science and technology." [ 6 ]
Critics of the closure saw it as an example of politics overriding science, and a variety of scientists have called for the agency's reinstatement. [ 7 ] Law professor and legal scholar David L. Faigman also made a strong case supporting the role OTA had played, also calling for its reinstatement. [ 8 ]
While the OTA was closed down, the idea of technology assessment survived, in particular in Europe. The European Parliamentary Technology Assessment (EPTA) network coordinates members of technology assessment units working for various European governments. The US Government Accountability Office has meanwhile established a TA unit, taking on former duties of the OTA.
While campaigning in the 2008 US presidential election , Hillary Clinton pledged to work to restore the OTA if elected President . [ 9 ] [ 10 ] On April 29, 2009, House of Representatives member Rush Holt of New Jersey wrote an op-ed piece articulating the argument for restoring the OTA. [ 11 ]
In April 2010 The Woodrow Wilson International Center for Scholars released a report entitled "Reinventing Technology Assessment" that emphasized citizen engagement and called for performing the functions of the OTA by creating a nationwide network of non-partisan policy research organizations, universities, and science museums: the Expert & Citizen Assessment of Science & Technology (ECAST) network. ECAST would conduct both expert and participatory technology assessments for Congress and other clients. The author of the report was Dr. Richard Sclove of the Loka Institute. The report states that the drive to modernize OTA was initiated by Darlene Cavalier , a popular citizen science advocate and author of the Science Cheerleader blog. [ 12 ] Cavalier outlined the idea of the citizen network in a guest blog post for Discover magazine 's The Intersection. [ 13 ] She introduced the concept in an article in Science Progress in July 2008. [ 14 ] Andrew Yang became the first 2020 presidential candidate on April 4, 2019 to push for the idea to reestablish the OTA. [ 15 ] He did so with a detailed proposal that includes refusing to sign any budget that did not include the OTA. [ 16 ]
In January 2019 the Government Accountability Office established the Science, Technology Assessment, and Analytics (STAA) team [ 17 ] to take on the technology assessment mission of the former OTA. STAA developed out of a small technology assessment pilot program at GAO created in 2002, [ 18 ] which was elevated to GAO's 15th mission team. It launched with 49 full-time equivalent staff and has since grown to over 100. [ 19 ] In October 2019, a congressionally directed report by the National Academy of Public Administration recommended increased investment in GAO and CRS to build Congress's policy capacity in science and technology. [ 20 ]
In 2022, Jamie Susskind advocated for bringing back the office in order to have expertise on pressing issues like artificial intelligence and online privacy . [ 21 ] Bruce Schneier also called for a similar National Cyber Office. [ 21 ] | https://en.wikipedia.org/wiki/Office_of_Technology_Assessment |
Official Medicines Control Laboratory (OMCL) is the term coined in Europe for a public institute in charge of controlling the quality of medicines and, depending on the country, other similar products (for example, medical devices). They are part of or report to national competent authorities (NCAs).
By testing medicines independently of manufacturers (that is, without any conflict of interest and with guaranteed impartiality), OMCLs play a fundamental role in ensuring the quality and contributing to the safety and efficacy of medicines, whether already on the market or not, for human and veterinary use.
OMCLs assess human and veterinary medicines to determine whether they meet the relevant requirements for content, purity, etc., as specified in the marketing authorisation dossier or an official pharmacopoeia. They can also check whether packaging and labelling comply with legal requirements, and provide support during quality assessment, good manufacturing practice (GMP) inspections and investigations of quality defects and pharmacovigilance . Investigations may also be carried out on products suspected of being falsified, in support of police, customs, health or judicial authorities. OMCLs also actively contribute to the development and verification of pharmacopoeial methods.
To take into account the cross-border and global dimension of medicines markets, OMCLs co-operate actively at the European level and beyond. They do so through the General European OMCL Network (GEON), which was set up jointly by the Council of Europe and the European Commission (EC) in 1995. A number of non-European OMCLs have joined the network as associate members.
The GEON, which comprises over 70 OMCLs from over 40 different countries, is co-ordinated by the Strasbourg-based European Directorate for the Quality of Medicines & HealthCare (EDQM) of the Council of Europe , an international organisation upholding human rights, democracy and the rule of law in Europe. A list of network members [ 1 ] is publicly available on the EDQM homepage.
The network supports laboratories across Europe in making the best use of their expertise, technical capacity and financial resources, in order to ensure the appropriate control of medicines in Europe. This is done by organising co-ordinated testing programmes, meetings, training, audits and tailored Proficiency Testing Schemes (PTSs) and by providing the necessary (IT) infrastructure. The activities of the GEON are co-funded by the Council of Europe and the European Union .
OMCLs play an essential role in the Official Control Authority Batch Release (OCABR) [ 2 ] procedure, which is foreseen in EU legislation. [ 3 ] [ 4 ] Under this procedure, each batch of vaccine for human use, medicinal product derived from human blood or plasma (e.g. clotting factors, human albumin) or immunological veterinary medicinal product (e.g. veterinary vaccine) [ 5 ] undergoes independent quality control, including testing, by an OMCL after release by the manufacturer and before it reaches the patient. The legislation requires mutual recognition of test results among the member states (EU/EEA), so the OMCLs involved work together as a network to ensure that any batch is tested in only one OMCL, under agreed conditions, for the benefit of all. | https://en.wikipedia.org/wiki/Official_Medicines_Control_Laboratory |
Officinalis , officinale , or occasionally officinarum is a Medieval Latin epithet denoting organisms —mainly plants—with uses in medicine, herbalism , manufacturing, and cookery. It commonly occurs as a specific epithet , the second term of a two-part botanical name. Officinalis is used to modify masculine and feminine nouns, while officinale is used for neuter nouns.
The word officinalis literally means 'of or belonging to an officīna ', the storeroom of a monastery, where medicines and other necessaries were kept. [ 1 ] Officīna was a contraction of opificīna , from opifex ( gen. opificis ) 'worker, maker, doer' (from opus 'work') + -fex , -ficis , 'one who does', from facere 'do, perform'. [ 2 ] When Linnaeus invented the binomial system of nomenclature , he gave the specific name officinalis , in the first edition (1735) of his Systema Naturae , to plants (and sometimes animals or fungi) with an established medicinal, culinary, or other use. [ 3 ] | https://en.wikipedia.org/wiki/Officinalis |
Offline mobile learning is the ability to access learning materials on a mobile device without requiring an Internet connection.
Generally, the functionality of web-based applications depends on the ability to access the Web . While there are many practical reasons for an application to access data on a server , not every feature of an application necessarily needs such access. Offline access to such features can enhance the user experience and increase access where networks are unavailable or unaffordable, in rural areas, and for users with limited data plans. [ 1 ]
Various technologies used to develop education technology, such as Progressive Web Apps [ 2 ] and mobile apps , [ 3 ] support offline functionality.
The developed world’s emphasis on highly sophisticated technological devices is a futuristic dream for most developing countries . [ 4 ] Nevertheless, these countries realise that M-Learning is more than just using a mobile device for E-Learning , and that it requires an entirely different approach. In order to utilize M-Learning efficiently in these developing countries, there is a need to understand this approach, as technology becomes available.
These mobile technologies have successfully enabled learning opportunities and support for learners in developing countries who are situated far from educational facilities and do not have the infrastructure to support access.
Users in developing countries have the same need for M-Learning to be mobile, accessible and affordable, as those in developed countries do. The very significance of M-Learning is its ability to make learning mobile, away from the classroom or workplace . These Wireless and mobile technologies enable learning opportunities to learners who do not have direct access to learning in these places. Many learners in developing countries have trouble accessing the internet, or experience difficulty in affording technology that enables learning in an E-Learning environment. Mobile devices are a cheaper alternative compared to traditional E-Learning equipment such as PCs and Laptops .
However, to fully utilize this potential it is imperative to explore the factors that determine mobile telecommunications development in the developing world. [ 5 ] Delivering mobile services on open hardware and open software not only makes practical sense but can also lower costs and thus increase the possibility of offering sustainable services in the future. While the benefits of open-source software are proven, it is important to conduct a broader study to investigate the potential role of the relatively new copyleft approach for custom hardware, as supporting mobile learners in their own socio-cultural contexts in developing countries is a significant challenge. [ 6 ]
A range of devices exists, from mobile devices such as smartphones and tablets to single-purpose devices such as e-book readers. Different combinations of hardware and software can be used to make learner experiences that normally require an Internet connection work offline.
Some Learning Management Systems provide offline functionality via Mobile apps [ 7 ] [ 8 ] and/or offline web technologies. They can use data synchronization to make learning content available offline and submit user data (e.g. assignments) when a connection is available.
Web scraping and web archiving can be used to download web content for use offline. It can be complex to scrape dynamic content in such a way that it works offline the same as it works online (e.g. where the content relies on communication with a server). When standard HTML is rendered the archive will work as expected; however, results with dynamic content can vary. [ 9 ] Dynamic websites can use service workers to automatically preload all resources required for a site to work offline. [ 10 ]
Projects such as World Possible and Internet-in-a-Box make large collections of scraped websites available offline.
Some modern mobile devices have the capability to store thousands of documents and therefore have the potential to be used as powerful offline learning tools .
Typically, the optimal solution is to use the local store as much as possible, since it is usually faster than a remote connection. However, the more work an application does locally, the more code you need to write to implement the feature locally and to synchronize the corresponding data. There is a cost/benefit tradeoff to consider, and some features may not be worthwhile to support locally. [ 13 ] | https://en.wikipedia.org/wiki/Offline_mobile_learning |
In botany and horticulture , an offset (also called a pup , mainly in the US, [ 1 ] ) is a small, virtually complete daughter plant that has been naturally and asexually produced on the mother plant . They are clones , meaning that they are genetically identical to the mother plant. They can divide mitotically . In the plant nursery business and gardens, they are detached and grown in order to produce new plants. This is a cheap and simple process for those plants that readily produce offsets as it does not usually require specialist materials and equipment.
An offset or 'pup' may also be used as a broad term to refer to any short shoot originating from the ground at the base of another shoot. [ 2 ] The term 'sucker' has also been used, especially for bromeliads , which can be short-lived plants: when the parent plant has flowered, it signals the root nodes to form new plants. [ 1 ]
Offsets form when meristem regions of plants, such as axillary buds or homologous structures, differentiate into a new plant with the ability to become self-sustaining. This is particularly common in species that develop underground storage organs, such as bulbs , corms and tubers . Tulips and lilies are examples of plants that display offset characteristics by forming cormlets around the original mother corm. In the UK, the term 'bulbils' is used for lilies. It can take up to 3 years for a bulbil to store enough energy to produce a flower stem, [ 3 ] although larger bulbs (such as Cardiocrinum giganteum ) may take 5 to 7 years before flowering. [ 4 ]
It is a means of plant propagation. When propagating plants to increase the stock of a cultivar , and thus seeking identical copies of the parent plant, various cloning techniques ( asexual reproduction ) are used. Offsets are a natural means by which plants may be cloned.
In contrast, when propagating plants to create new cultivars, sexual reproduction through pollination is used to create seeds . The recombination of genes gives rise to offspring plants with genomes similar to, but distinct from, the parent's.
| https://en.wikipedia.org/wiki/Offset_(botany) |
Offset binary , [ 1 ] also referred to as excess-K , [ 1 ] excess- N , excess-e , [ 2 ] [ 3 ] excess code or biased representation , is a method for signed number representation where a signed number n is represented by the bit pattern corresponding to the unsigned number n + K , K being the biasing value or offset . There is no standard for offset binary, but most often the K for an n -bit binary word is K = 2^( n −1) (for example, the offset for a four-digit binary number would be 2^3 = 8). This has the consequence that the minimal negative value is represented by all-zeros, the "zero" value is represented by a 1 in the most significant bit and zero in all other bits, and the maximal positive value is represented by all-ones (conveniently, this is the same as using two's complement but with the most significant bit inverted). It also has the consequence that in a logical comparison operation, one gets the same result as with a true form numerical comparison operation, whereas, in two's complement notation a logical comparison will agree with true form numerical comparison operation if and only if the numbers being compared have the same sign. Otherwise the sense of the comparison will be inverted, with all negative values being taken as being larger than all positive values.
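For example, with n = 4 and K = 2^3 = 8 (excess-8): the value −3 is stored as −3 + 8 = 5 = 0101, zero is stored as 8 = 1000, and +5 is stored as 13 = 1101; the representable range runs from −8 (0000) to +7 (1111).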
The 5-bit Baudot code used in early synchronous multiplexing telegraphs can be seen as an offset-1 ( excess-1 ) reflected binary (Gray) code .
One historically prominent example of offset-64 ( excess-64 ) notation was in the floating point (exponential) notation in the IBM System/360 and System/370 generations of computers. The "characteristic" (exponent) took the form of a seven-bit excess-64 number (The high-order bit of the same byte contained the sign of the significand ). [ 4 ]
The 8-bit exponent in Microsoft Binary Format , a floating point format used in various programming languages (in particular BASIC ) in the 1970s and 1980s, was encoded using an offset-129 notation ( excess-129 ).
The IEEE Standard for Floating-Point Arithmetic (IEEE 754) uses offset notation for the exponent part in each of its various formats of precision . Unusually however, instead of using "excess 2^( n −1)" it uses "excess 2^( n −1) − 1" (i.e. excess-15 , excess-127 , excess-1023 , excess-16383 ) which means that inverting the leading (high-order) bit of the exponent will not convert the exponent to correct two's complement notation.
Offset binary is often used in digital signal processing (DSP). Most analog to digital (A/D) and digital to analog (D/A) chips are unipolar, which means that they cannot handle bipolar signals (signals with both positive and negative values). A simple solution to this is to bias the analog signals with a DC offset equal to half of the A/D and D/A converter's range. The resulting digital data then ends up being in offset binary format. [ 5 ]
Most standard computer CPU chips cannot handle the offset binary format directly [ citation needed ] . CPU chips typically can only handle signed and unsigned integers, and floating point value formats. Offset binary values can be handled in several ways by these CPU chips. The data may just be treated as unsigned integers, requiring the programmer to deal with the zero offset in software. The data may also be converted to signed integer format (which the CPU can handle natively) by simply subtracting the zero offset. As a consequence of the most common offset for an n -bit word being 2^( n −1), which implies that the first bit is inverted relative to two's complement, there is no need for a separate subtraction step, but one simply can invert the first bit. This sometimes is a useful simplification in hardware, and can be convenient in software as well.
Table of offset binary for four bits, with two's complement for comparison: [ 6 ]
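The values below follow directly from the definitions above (offset binary shown for K = 8, i.e. excess-8):

```text
Decimal   Offset binary (excess-8)   Two's complement
   7              1111                    0111
   6              1110                    0110
   5              1101                    0101
   4              1100                    0100
   3              1011                    0011
   2              1010                    0010
   1              1001                    0001
   0              1000                    0000
  −1              0111                    1111
  −2              0110                    1110
  −3              0101                    1101
  −4              0100                    1100
  −5              0011                    1011
  −6              0010                    1010
  −7              0001                    1001
  −8              0000                    1000
```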
Offset binary may be converted into two's complement by inverting the most significant bit. For example, with 8-bit values, the offset binary value may be XORed with 0x80 in order to convert to two's complement. In specialised hardware it may be simpler to accept the bit as it stands, but to apply its value in inverted significance. | https://en.wikipedia.org/wiki/Offset_binary |
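As a worked example with 8-bit values (K = 128): the offset-binary byte 0x95 (149) represents 149 − 128 = +21, and inverting its most significant bit gives 0x95 XOR 0x80 = 0x15, which is +21 in two's complement; similarly, 0x60 (96) represents 96 − 128 = −32, and 0x60 XOR 0x80 = 0xE0, the two's-complement encoding of −32.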
The offset filtration (also called the "union-of-balls" [ 1 ] or "union-of-disks" [ 2 ] filtration ) is a growing sequence of metric balls used to detect the size and scale of topological features of a data set. The offset filtration commonly arises in persistent homology and the field of topological data analysis . Utilizing a union of balls to approximate the shape of geometric objects was first suggested by Frosini in 1992 in the context of submanifolds of Euclidean space . [ 3 ] The construction was independently explored by Robins in 1998, and expanded to considering the collection of offsets indexed over a series of increasing scale parameters (i.e., a growing sequence of balls), in order to observe the stability of topological features with respect to attractors . [ 4 ] Homological persistence as introduced in these papers by Frosini and Robins was subsequently formalized by Edelsbrunner et al. in their seminal 2002 paper Topological Persistence and Simplification. [ 5 ] Since then, the offset filtration has become a primary example in the study of computational topology and data analysis.
Let X be a finite set in a metric space ( M , d ), and for any x ∈ X let B ( x , ε ) = { y ∈ M ∣ d ( x , y ) ≤ ε } be the closed ball of radius ε centered at x . Then the union X^(ε) := ⋃_{x ∈ X} B ( x , ε ) is known as the offset of X with respect to the parameter ε (or simply the ε -offset of X ).
By considering the collection of offsets over all ε ∈ [0, ∞) we get a family of spaces O ( X ) := { X^(ε) ∣ ε ∈ [0, ∞) } where X^(ε) ⊆ X^(ε′) whenever ε ≤ ε′. So O ( X ) is a family of nested topological spaces indexed over ε , which defines a filtration known as the offset filtration on X . [ 6 ]
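As a simple example, if X consists of two points at Euclidean distance 1, then X^(ε) has two connected components for ε < 1/2 and a single component for ε ≥ 1/2, so a 0-dimensional homology class dies at ε = 1/2; persistent homology records exactly this kind of scale information about the filtration.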
Note that it is also possible to view the offset filtration as a functor O ( X ) : [0, ∞) → Top from the poset category of non-negative real numbers to the category of topological spaces and continuous maps. [ 7 ] [ 8 ] There are some advantages to the categorical viewpoint, as explored by Bubenik and others. [ 9 ]
A standard application of the nerve theorem shows that the union of balls has the same homotopy type as its nerve, since closed balls are convex and the intersection of convex sets is convex. [ 10 ] The nerve of the union of balls is also known as the Čech complex , [ 11 ] which is a subcomplex of the Vietoris-Rips complex. [ 12 ] Therefore the offset filtration is weakly equivalent to the Čech filtration (defined as the nerve of each offset across all scale parameters), so their homology groups are isomorphic . [ 13 ]
Although the Vietoris-Rips filtration is not identical to the Čech filtration in general, it is an approximation in a sense. In particular, for a set X ⊂ ℝ^d we have a chain of inclusions Rips_ε( X ) ⊂ Čech_ε′( X ) ⊂ Rips_ε′( X ) between the Rips and Čech complexes on X whenever ε′/ε ≥ √(2 d /( d + 1)). [ 14 ] In general metric spaces, we have that Čech_ε( X ) ⊂ Rips_2ε( X ) ⊂ Čech_2ε( X ) for all ε > 0, implying that the Rips and Čech filtrations are 2-interleaved with respect to the interleaving distance as introduced by Chazal et al. in 2009. [ 15 ] [ 16 ]
It is a well-known result of Niyogi, Smale, and Weinberger that given a sufficiently dense random point cloud sample of a smooth submanifold in Euclidean space, the union of balls of a certain radius recovers the homology of the object via a deformation retraction of the Čech complex. [ 17 ]
The offset filtration is also known to be stable with respect to perturbations of the underlying data set. This follows from the fact that the offset filtration can be viewed as a sublevel-set filtration with respect to the distance function of the metric space. The stability of sublevel-set filtrations can be stated as follows: given any two real-valued functions γ, κ on a topological space T such that for all i ≥ 0 the i -th dimensional homology modules of the sublevel-set filtrations with respect to γ and κ are pointwise finite-dimensional, we have d_B( B_i (γ), B_i (κ)) ≤ d_∞(γ, κ), where d_B and d_∞ denote the bottleneck and sup-norm distances, respectively, and B_i (−) denotes the i -th dimensional persistent homology barcode. [ 18 ] While first stated in 2005, this sublevel stability result also follows directly from an algebraic stability property sometimes known as the "Isometry Theorem," [ 9 ] which was proved in one direction in 2009, [ 16 ] and the other direction in 2011. [ 19 ] [ 20 ]
A multiparameter extension of the offset filtration defined by considering points covered by multiple balls is given by the multicover bifiltration , and has also been an object of interest in persistent homology and computational geometry . [ 21 ] [ 22 ] | https://en.wikipedia.org/wiki/Offset_filtration |
Offshore Group Newcastle or OGN Group was a British company that fabricated steel in North East England , often for oil platforms . At one time it was Tyneside 's largest manufacturing yard.
On 5 February 2016 it appeared in the episode Sea Cities Tyneside of BBC Two series Sea Cities . [ 1 ] Also appearing in the programme were the Shields Ferry and the Port of Tyne . [ 2 ] It also visited South Shields Marine School , part of South Tyneside College and the oldest marine school in the world, Target of Leif Höegh & Co from Norway, the Great North Run , MS Marina of Oceania Cruises , and the Old Low Light .
The company donated over £100,000 to the Conservative Party during the 2019 United Kingdom general election. [ 3 ]
It is an offshore fabrication yard on the north bank of the River Tyne in Wallsend , near Point Pleasant , opposite the former site of Hebburn Colliery . | https://en.wikipedia.org/wiki/Offshore_Group_Newcastle |
Offshore Technology Conference (OTC) is a series of conferences and exhibitions [ 1 ] [ 2 ] [ 3 ] focused on the exchange of technical knowledge relevant to the development of offshore energy resources, primarily oil and natural gas. It was founded in 1969. [ 4 ] [ 5 ] There are four events organized by OTC. The flagship Offshore Technology Conference has been held annually during early May in Houston, Texas, USA since 1969. [ 6 ] [ 7 ] In 2011, OTC organized OTC Brasil and the Arctic Technology Conference to offer versions of the event focused on development in Brazil and the Arctic region. [ 8 ] In 2014, OTC Asia was created. OTC Brasil and OTC Asia are held every other year.
OTC is sponsored by 13 industry associations, [ 9 ] who work cooperatively to develop the technical program each year. [ 10 ] Sponsoring organizations include the American Association of Petroleum Geologists , the American Institute of Chemical Engineers , the American Society of Civil Engineers , the ASME International Petroleum Technology Institute, the Institute of Electrical and Electronics Engineers , the Marine Technology Society , the Society of Exploration Geophysicists , the Society for Mining, Metallurgy, and Exploration , the Society of Naval Architects and Marine Engineers , the Society of Petroleum Engineers , and the Minerals, Metals and Materials Society . [ 11 ]
OTC uses proceeds from the events to reinvest into the oil and gas industry through the 13 sponsoring organizations. [ 12 ] All technical papers presented at Offshore Technology Conference events are available on OnePetro.org .
Offshore Technology Conference, the flagship OTC event, is the largest oil and gas sector trade show in the world. [ 13 ] [ 14 ] The first OTC was held in Houston, Texas in 1969. [ 15 ] In 2017, OTC was ranked the 25th event on the Trade Show News Network 2017 Top US Trade Shows List. In 2018, the 50th edition of OTC was held and many outlets reflected on the impact of OTC on Houston and the industry over the years. [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] To celebrate the 50th edition of the event, [ 22 ] OTC created a special sign and commissioned Houstonian artist Mario E. Figueroa Jr., also known as GONZO247, [ 23 ] to create a one-of-a-kind plexiglass cube painted [ 24 ] live at OTC. [ 25 ] [ 26 ] The 2019 OTC was the 50th anniversary of OTC.
It ranks among the largest 200 tradeshows held annually in the United States and is among the 10 largest meetings in terms of attendance. [ citation needed ] Attendance consistently exceeds 50,000, and more than 2,000 companies participate in the exhibition. [ 27 ] OTC includes attendees from around the globe, with more than 120 countries represented at recent conferences. [ 28 ] In 2009, 2,500 companies from 38 countries participated and 66,820 attendance, an 8.5% drop from previous year's 75,092, was reported despite a global economic recession and initial concerns about swine flu. [ 29 ] [ 30 ] Attendance in 2014 reached a 46-year high of 108,300, the highest in show history and up 3.3% from the previous year. [ 31 ]
The cost to attend the Offshore Technology Conference event is generally lower than prices for other oil and gas conferences. In 2018, to attend the full conference as a professional who was a member of one of the 13 sponsoring organizations the registration fee was $180 which included full access to the technical sessions and massive exhibition.
The annual Spotlight on New Technology Awards showcases the latest and most advanced hardware and software technologies that are leading the industry into the future. Technologies are awarded based on the following criteria: new and innovative, proven, broad interest for the industry and significant impact beyond existing technologies. [ 32 ]
OTC Brasil has been held in 2011, 2013, and 2017. [ 33 ] In 2017, OTC Brasil was held in Rio de Janeiro and 150 technical papers were presented in 29 technical sessions.
OTC Asia was held in 2014, 2016, 2018 and 2024. [ 34 ] [ 35 ] The 2018 OTC Asia was held in March in Kuala Lumpur. More than 290 technical papers were presented at OTC Asia in 2018. [ 36 ] The event also includes the OTC Asia Spotlight on Technology Awards. [ 37 ] [ 38 ]
The Arctic Technology Conference has been held in 2011, 2012, 2014, 2015, 2016, [ 39 ] and 2018. [ 40 ] In 2011-2012 and 2014, the conference was held in Houston. The 2015 edition was held in Copenhagen, Denmark. The 2016 edition was held in St. John's, Canada and 135 technical papers were presented. [ 41 ] The 2018 edition was held in Houston. [ 42 ]
The Arctic Technology Conference is an OTC event that covers the technical challenges that come with exploration and production of petroleum and natural gas in the Arctic. The conference technical program discusses the logistical and technical aspects involved in recovering oil and gas in the harsh, icy Arctic environment, as well as the environmental and regulatory requirements of that unique environment. [ 43 ] | https://en.wikipedia.org/wiki/Offshore_Technology_Conference |
Offshore concrete structures , or concrete offshore structures , are structures built from reinforced concrete for use in the offshore marine environment. They serve the same purpose as their steel counterparts in oil and gas production and storage. The first concrete oil platform was installed in the North Sea in the Ekofisk oil field in 1973 by Phillips Petroleum , and they have become a significant part of the marine construction industry. Since then at least 47 major concrete offshore structures have been built.
Concrete offshore structures are mostly used in the petroleum industry as drilling, extraction or storage units for crude oil or natural gas. These large structures house machinery and equipment used to drill for, or extract, oil and gas. [ 1 ] Concrete offshore structures are not limited to applications within the oil and gas industry, several conceptual studies have shown that concrete support structures for offshore wind turbines can be competitive compared to the more common steel structures, especially for greater water depths.
Depending on the circumstances, platforms may be attached to the ocean floor, consist of an artificial island, or float. Generally, offshore concrete structures are classified into fixed and floating structures. Fixed structures are mostly built as concrete gravity-based structures (CGS, also termed caisson type), where the loads bear down directly on the uppermost soil layers as soil pressure. The caisson provides buoyancy during construction and towing and also acts as a foundation structure in the operation phase. Furthermore, the caisson can be used as storage volume for oil or other liquids. [ 1 ] Floating units may be held in position by anchored wires or chains in a spread mooring pattern. Because of the low stiffness of those systems, the natural frequency is low and the structure can move in all six degrees of freedom, as sketched below. Floating units serve as production units, as storage and offloading units (FSO) for crude oil, or as terminals for liquefied natural gas (LNG). A more recent development is concrete sub-sea structures. [ 1 ]
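To illustrate why a softly moored floating unit has a low natural frequency, the following minimal sketch treats a single degree of freedom (surge) as a mass-spring system; the mass and mooring stiffness values are illustrative assumptions, not data for any particular platform.

```python
# Minimal sketch: natural period of a softly moored floater treated as a
# single-degree-of-freedom mass-spring system, T_n = 2*pi*sqrt(m/k).
# Hydrodynamic added mass and damping are neglected; all values are assumed.

import math

def natural_period(mass_kg: float, stiffness_N_per_m: float) -> float:
    """Undamped natural period (s) of a mass on a linear spring."""
    return 2.0 * math.pi * math.sqrt(mass_kg / stiffness_N_per_m)

displacement_mass = 2.0e8   # kg, roughly a 200,000 t floating unit (assumed)
mooring_stiffness = 1.0e5   # N/m, a soft spread-mooring stiffness (assumed)

T_n = natural_period(displacement_mass, mooring_stiffness)
print(f"Surge natural period ~ {T_n:.0f} s, far above typical wave periods of 5-20 s")
```

Because the resulting natural period is far longer than typical wave periods, a compliant moored unit tends to ride out wave loads rather than resonate with them.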
Concrete offshore structures are highly durable, constructed of low-maintenance material, suitable for harsh and/or Arctic environments (such as ice-covered and seismic regions), [ 1 ] can carry heavy topsides, may be designed to provide storage capacity, can be suitable for soft ground, and are economical for water depths greater than 150 m. Most gravity-type platforms need no additional fixing because of their large foundation dimensions and extremely high weight. [ 1 ]
Since the 1970s, several fixed concrete platform designs have been developed. Most of the designs have in common a base caisson (normally for storage of oil) and shafts penetrating the water surface to carry the topside. The shafts normally house utility systems for offloading, drilling, drawdown and ballast. [ citation needed ]
Concrete offshore platforms of the gravity-base type are almost always constructed in their vertical attitude. This allows the inshore installation of deck girders and equipment and the later transport of the whole structure to the installation site.
The most common concrete designs are: [ citation needed ]
Condeep refers to a make of gravity base structure for oil platforms invented by Olav Mo [ 2 ] and fabricated by Norwegian Contractors in Norway. A Condeep usually consists of a base of concrete oil storage tanks from which one, three or four concrete shafts rise. The original Condeep always rests on the sea floor, and the shafts rise to about 30 m above sea level. The platform deck itself is not a part of the construction.
The Condeep platforms Brent B (1975) and Brent D (1976) were designed for a water depth of 142 m in the Brent oilfield operated by Shell . Their main mass is the storage tank (ca. 100 m in diameter and 56 m high, consisting of 19 cylindrical compartments of 20 m diameter). Three of the cells are extended into shafts that taper off at the surface and carry a steel deck. The tanks serve as crude oil storage in the operation phase; during installation they were used as ballast compartments.
Among the largest Condeep-type platforms are the Troll A platform and Gullfaks C . Troll A was built within four years and deployed in 1995 to produce gas from the Troll field , which was developed by Norske Shell and has been operated by Statoil since 1996. [ 3 ] A detailed overview of Condeep platforms is given in a separate article.
Concrete gravity base structures (CGBS) are a further development of the first-generation Condeep drilling/production platforms installed in the North Sea between the late 1970s and the mid-1990s. CGBS have no oil storage facilities, and the topsides are installed in the field by a float-over mating method. Current [ when? ] or most recent projects are: [ citation needed ]
The first concrete gravity platform in the North Sea was a C G Doris platform, the Ekofisk Tank, in Norwegian waters.
The structure has a shape not unlike a marine island and is surrounded by a perforated breakwater wall (Jarlan patent).
The original proposal of the French group C G DORIS (Compagnie General pour les Developments Operationelles des Richesses Sous-Marines) for a prestressed, post-tensioned concrete "island" structure was adopted on cost and operational grounds. DORIS was the general contractor responsible for the structural design: the concrete design was prepared and supervised on behalf of DORIS by Europe-Etudes. Further examples of the C G DORIS designs are the Frigg platforms, the Ninian Central Platform and the Schwedeneck platforms. [ citation needed ] The design typically consists of a large-volume caisson resting on the sea floor and merging into a monolithic structure that forms the base for the deck. The single main leg is surrounded by an outer breaker wall perforated with so-called Jarlan holes. This wall is intended to break up waves, thus reducing their forces.
This design is quite similar to the Condeep type. [ citation needed ]
To achieve its goal of extracting oil within five years of discovering the Brent reservoir, Shell divided the construction of four offshore platforms among several contractors: Redpath Dorman Long at Methil in Fife, Scotland was to build Brent A; the two concrete Condeeps, B and D, were to be built in Norway by Norwegian Contractors (NC) of Stavanger; and C (also concrete) was to be built by McAlpine at Ardyne Point on the Clyde, to what is known as the ANDOC (Anglo Dutch Offshore Concrete) design. The ANDOC design can be considered the British construction industry's attempt to compete with Norway in this sector. McAlpine constructed three concrete platforms for the North Sea oil industry at Ardyne Point. The ANDOC type is very similar to the Sea Tank design, but the four concrete legs terminate partway up and steel legs take over to support the deck.
The Arup dry-build Concrete Gravity Substructure (CGS) concept was originally developed by Arup in 1989 for Hamilton Brothers' Ravenspurn North. The Arup CGS are designed to be simple to install and are fully removable. Simplicity and repetition of concrete structural elements, low reinforcement and pre-stress densities, as well as the use of normal-density concrete lead to economical construction. Typical for the Arup CGS is the inclined installation technique, which helps to maximise economy and provides a robust offshore emplacement methodology. Further projects include the Malampaya project in the Philippines and the Wandoo Full Field Development on the North West Shelf of Western Australia.
Since concrete is quite resistant to corrosion from salt water and keeps maintenance costs low, floating concrete structures have become increasingly attractive to the oil and gas industry in the last two decades. Temporary floating structures such as the Condeep platforms float during construction but are towed out and finally ballasted until they sit on the sea floor. Permanent floating concrete structures have various uses, including in the exploration for oil and gas deposits, in oil and gas production, as storage and offloading units, and in heavy-lift systems.
Common designs for floating concrete structures are the barge or ship design, the platform design (semi-submersible, TLP) as well as the floating terminals e.g. for LNG.
Floating production, storage, and offloading systems (FPSOs) receive crude oil from deep-water wells and store it in their hull tanks until the crude is transferred into tank ships or transport barges. In addition to FPSOs, a number of ship-shaped floating storage and offloading (FSO) systems (vessels with no production processing equipment) have been used in the same areas to support oil and gas developments. An FSO is typically used as a storage unit in remote locations far from pipelines or other infrastructure.
Semi-submersible marine structures are typically only movable by towing. Semi-submersible platforms have the principal characteristic of remaining in a substantially stable position, presenting small movements when they experience environmental forces such as wind, waves and currents. Semi-submersible platforms have pontoons and columns, typically two parallel, spaced-apart pontoons with buoyant columns upstanding from those pontoons to support a deck. Some semi-submersible vessels have only a single caisson, or column, usually denoted a buoy, while others utilize three or more columns extending upward from buoyant pontoons. For activities which require a stable offshore platform, the vessel is ballasted down so that the pontoons are submerged and only the buoyant columns pierce the water surface, thus giving the vessel substantial buoyancy with a small water-plane area. The only concrete semi-submersible in existence [ when? ] is Troll B. [ citation needed ]
A Tension Leg Platform (TLP) is a buoyant platform which is held in place by a mooring system. TLP mooring differs from conventional chained or wire mooring systems: the platform is held in place by large steel tendons fastened to the sea floor, and those tendons are held in tension by the buoyancy of the hull. Statoil's Heidrun TLP is the only one with a concrete hull; all other TLPs have steel hulls.
FPSO or FSO systems are typically barge/ship-shaped and store crude oil in tanks located in the hull of the vessel. Their turret structures are designed to anchor the vessel, allow “weathervaning” of the units to accommodate environmental conditions, permit the constant flow of oil and production fluids from vessel to undersea field, all while being a structure capable of quick disconnect in the event of emergency.
The first prestressed concrete barge was designed in the early 1970s as an LPG ( liquefied petroleum gas ) storage barge for the Ardjuna Field (Indonesia). The barge is built of reinforced and prestressed concrete and contains cylindrical tanks, each of whose cross-sections perpendicular to its longitudinal axis comprises a circular curved portion forming the bottom.
The following table summarizes the major [ according to whom? ] [ clarification needed ] existing offshore concrete structures. | https://en.wikipedia.org/wiki/Offshore_concrete_structure
Offshore construction is the installation of structures and facilities in a marine environment, usually for the production and transmission of electricity, oil, gas and other resources. It is also called maritime engineering .
Construction and pre-commissioning is typically performed as much as possible onshore. To optimize the costs and risks of installing large offshore platforms, different construction strategies have been developed. [ 1 ]
One strategy is to fully construct the offshore facility onshore and tow the installation to site, floating on its own buoyancy. Bottom-founded structures are lowered to the seabed by de-ballasting (see for instance Condeep or Cranefree ), whilst floating structures are held in position with substantial mooring systems. [ 1 ]
The size of offshore lifts can be reduced by making the construction modular, with each module being constructed onshore and then lifted using a crane vessel into place onto the platform. [ 1 ] A number of very large crane vessels were built in the 1970s which allow very large single modules weighing up to 14,000 tonnes to be fabricated and then lifted into place. [ citation needed ]
Specialist floating hotel vessels known as flotels or accommodation rigs are used to accommodate workers during the construction and hook-up phases. This is a high cost activity due to the limited space and access to materials. [ clarification needed ]
Oil platforms are key fixed installations from which drilling and production activity is carried out. Drilling rigs are either floating vessels for deeper water or jack-up designs which are a barge with liftable legs. [ 2 ] Both of these types of vessel are constructed in marine yards but are often involved during the construction phase to pre-drill some production wells.
Other key factors in offshore construction are the weather windows which define periods of relatively light weather during which continuous construction or other offshore activity can take place. Safety of personnel is another key construction parameter, an obvious hazard being a fall into the sea from which speedy recovery in cold waters is essential. Environmental issues are also often a major concern, and environmental impact assessment may be required during planning.
The main types of vessels used for pipe laying are the " derrick barge (DB)", the " pipelay barge (LB)" and the "derrick/lay barge (DLB)" combination. Closed diving bells in offshore construction are mainly used for saturation diving in water depths greater than 120 feet (40 m); at lesser depths, surface-oriented divers are transported through the water in a wet bell or diving stage (basket), a suspended platform deployed from a launch and recovery system (LARS, or "A" frame) on the deck of the rig or a diving support vessel . The basket is lowered to the working depth and recovered at a controlled rate for decompression. Closed bells can go to 1,500 feet (460 m), but are normally used at 400 to 800 feet (120 to 240 m). [ 3 ]
Offshore construction includes foundations engineering , structural design, construction, and/or repair of offshore structures, both commercial and military. [ 1 ] | https://en.wikipedia.org/wiki/Offshore_construction |
Offshore drilling is a mechanical process where a wellbore is drilled below the seabed. It is typically carried out in order to explore for and subsequently extract petroleum that lies in rock formations beneath the seabed. Most commonly, the term is used to describe drilling activities on the continental shelf , though the term can also be applied to drilling in lakes , inshore waters and inland seas .
Offshore drilling presents environmental challenges, both offshore and onshore, from the produced hydrocarbons and the materials used during the drilling operation. Controversies include the ongoing US offshore drilling debate . [ 1 ]
There are many different types of facilities from which offshore drilling operations take place. These include bottom founded drilling rigs ( jackup barges and swamp barges ), combined drilling and production facilities either bottom founded or floating platforms, and deepwater mobile offshore drilling units (MODU) including semi-submersibles or drillships . These are capable of operating in water depths up to 3,000 metres (9,800 ft). In shallower waters the mobile units are anchored to the seabed; however, in water deeper than 1,500 metres (4,900 ft), the semi-submersibles and drillships are maintained at the required drilling location using dynamic positioning .
Around 1891, the first submerged oil wells were drilled from platforms built on piles in the fresh waters of the Grand Lake St. Marys in Ohio . The wells were developed by small local companies such as Bryson, Riley Oil, German-American and Banker's Oil. [ 2 ]
Around 1896, the first submerged oil wells in salt water were drilled in the portion of the Summerland field extending under the Santa Barbara Channel in California . The wells were drilled from piers extending from land out into the channel. [ 3 ] [ 4 ]
Other notable early submerged drilling activities occurred on the Canadian side of Lake Erie in the 1900s and Caddo Lake in Louisiana in the 1910s. Shortly thereafter wells were drilled in tidal zones along the Texas and Louisiana gulf coast . The Goose Creek Oil Field near Baytown, Texas is one such example. In the 1920s drilling activities occurred from concrete platforms in Venezuela 's Lake Maracaibo . [ 5 ]
One of the oldest subsea wells is the Bibi Eibat well, which came on stream in 1923 in Azerbaijan . [ 6 ] [ dubious – discuss ] The well was located on an artificial island in a shallow portion of the Caspian Sea . In the early 1930s, the Texas Company developed the first mobile steel barges for drilling in the brackish coastal areas of the Gulf of Mexico .
In 1937, Pure Oil and its partner Superior Oil used a fixed platform to develop a field 1 mile (1.6 km) offshore of Calcasieu Parish, Louisiana in 14 feet (4.3 m) of water.
In 1938, Humble Oil built a mile-long wooden trestle with railway tracks into the sea at McFadden Beach on the Gulf of Mexico, placing a derrick at its end – this was later destroyed by a hurricane. [ 7 ]
In 1945, concern for American control of its offshore oil reserves caused President Harry Truman to issue an Executive Order unilaterally extending American territory to the edge of its continental shelf, an act that effectively ended the 3-mile limit " freedom of the seas " regime. [ 8 ]
In 1946, Magnolia drilled at a site 18 miles (29 km) off the coast, erecting a platform in 18 feet (5.5 m) of water off St. Mary Parish, Louisiana . [ 9 ]
In early 1947, Superior Oil erected a drilling and production platform in 20 feet (6.1 m) of water some 18 miles (29 km) off Vermilion Parish, La. But it was Kerr-Magee , as operator for partners Phillips Petroleum and Stanolind Oil & Gas that completed its historic Ship Shoal Block 32 well in October 1947, months before Superior actually drilled a discovery from their Vermilion platform farther offshore. In any case, that made Kerr-McGee's well the first oil discovery drilled out of sight of land. [ 10 ]
When offshore drilling moved into deeper waters of up to 30 metres (98 ft), fixed platform rigs were built. When drilling equipment was needed for the 100 feet (30 m) to 120 metres (390 ft) depths of the Gulf of Mexico, the first jack-up rigs began appearing from specialized offshore drilling contractors. [ 11 ]
The first semi-submersible resulted from an unexpected observation in 1961. [ 12 ] Blue Water Drilling Company owned and operated the four-column submersible Blue Water Rig No.1 in the Gulf of Mexico for Shell Oil Company . As the pontoons were not sufficiently buoyant to support the weight of the rig and its consumables, it was towed between locations at a draught midway between the top of the pontoons and the underside of the deck.
It was noticed that the motions at this draught were very small, and Blue Water Drilling and Shell jointly decided to try operating the rig in the floating mode. The concept of an anchored, stable floating deep-sea platform had been designed and tested back in the 1920s by Edward Robert Armstrong for the purpose of operating aircraft, with an invention known as the 'seadrome'. The first purpose-built drilling semi-submersible, Ocean Driller , was launched in 1963 by ODECO . Since then, many semi-submersibles have been purpose-designed for the drilling industry's mobile offshore fleet.
The first offshore drillship was the CUSS 1 developed for the Mohole project to drill into the Earth's crust. [ 13 ]
As of June 2010, there were over 620 mobile offshore drilling rigs (jackups, semisubs, drillships, barges, etc.) available for service in the worldwide offshore rig fleet. [ 14 ]
One of the world's deepest hubs is currently the Perdido in the Gulf of Mexico, floating in 2,438 meters (7,999 ft) of water. It is operated by Royal Dutch Shell and was built at a cost of $3 billion. [ 15 ] The deepest operational platform is the Petrobras America Cascade FPSO in the Walker Ridge 249 field in 2,600 meters (8,500 ft) of water. [ 16 ]
Offshore drilling is usually done from platforms generically known as mobile offshore drilling units (MODU), which can be of one of several formats, depending on the water depth:
Notable offshore fields include:
Offshore oil and gas production is more challenging than land-based installations due to the remote and harsher environment. Much of the innovation in the offshore petroleum sector concerns overcoming these challenges, including the need to provide very large production facilities. Production and drilling facilities may be very large and represent a large investment, such as the Troll A platform, standing in a water depth of 300 meters (980 ft). [ 20 ]
Another type of offshore platform may float with a mooring system to maintain it on location. While a floating system may be lower cost in deeper waters than a fixed platform, the dynamic nature of the platforms introduces many challenges for the drilling and production facilities.
The ocean can add several thousand meters or more to the fluid column. The addition increases the equivalent circulating density and downhole pressures in drilling wells, as well as the energy needed to lift produced fluids for separation on the platform.
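As a rough illustration of this effect, the sketch below computes the hydrostatic pressure of an assumed mud-filled column spanning both the ocean and the well; the depths and densities are illustrative assumptions, not values from any particular field.

```python
# Minimal sketch: hydrostatic pressure p = rho * g * h of the fluid column
# in an offshore well. The extra ocean column raises bottom-hole pressure
# and the head that produced fluids must be lifted against.
# All depths and densities below are illustrative assumptions.

G = 9.81                # gravitational acceleration, m/s^2
RHO_SEAWATER = 1025.0   # typical seawater density, kg/m^3
RHO_MUD = 1200.0        # assumed drilling-fluid density, kg/m^3

def hydrostatic_pressure(density_kg_m3: float, column_height_m: float) -> float:
    """Hydrostatic pressure (Pa) exerted by a static fluid column."""
    return density_kg_m3 * G * column_height_m

water_depth = 2000.0    # m of ocean above the seabed (assumed)
well_depth = 3000.0     # m of well below the seabed (assumed)

# Bottom-hole pressure with riser and well full of drilling mud:
p_bottom = hydrostatic_pressure(RHO_MUD, water_depth + well_depth)
# Contribution of the ocean column alone, if it were plain seawater:
p_ocean = hydrostatic_pressure(RHO_SEAWATER, water_depth)

print(f"Mud-column pressure at bottom hole: {p_bottom / 1e6:.1f} MPa")
print(f"Seawater contribution of the ocean column: {p_ocean / 1e6:.1f} MPa")
```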
The trend today is to conduct more of the production operations subsea , by separating water from oil and re-injecting it rather than pumping it up to a platform, or by flowing the production to onshore facilities, with no installations visible above the sea. Subsea installations help to exploit resources in progressively deeper waters (locations which had previously been inaccessible) and overcome challenges posed by sea ice such as in the Barents Sea . One such challenge in shallower environments is seabed gouging by drifting ice features (means of protecting offshore installations against ice action include burial in the seabed).
Offshore manned facilities also present logistics and human resources challenges. An offshore oil platform is a small community in itself, with a cafeteria, sleeping quarters, management and other support functions. In the North Sea, staff members are transported by helicopter for a two-week shift. They usually receive higher salaries than onshore workers do. Supplies and waste are transported by ship, and the supply deliveries need to be carefully planned because storage space on the platform is limited. Today, much effort goes into relocating as many of the personnel as possible onshore, where management and technical experts are in touch with the platform by video conferencing. An onshore job is also more attractive for the aging workforce in the petroleum industry , at least in the western world. These efforts, among others, are contained in the established term integrated operations . The increased use of subsea facilities helps achieve the objective of keeping more workers onshore. Subsea facilities are also easier to expand, with new separators or different modules for different oil types, and are not limited by the fixed floor space of an above-water installation.
Offshore oil production involves environmental risks, most notably oil spills from oil tankers or pipelines transporting oil from the platform to onshore facilities, and from leaks and accidents on the platform (e.g. Deepwater Horizon oil spill and Ixtoc I oil spill ). [ 21 ] Produced water is also generated, which is water brought to the surface along with the oil and gas; it is usually highly saline and may include dissolved or unseparated hydrocarbons. | https://en.wikipedia.org/wiki/Offshore_drilling |
Offshore embedded anchors are anchors intended for offshore use that derive their holding capacity from the frictional, or bearing, resistance of the surrounding soil, as opposed to gravity anchors, which derive their holding capacity largely from their weight. As offshore developments move into deeper waters, gravity-based structures become less economical due to the large size needed and the consequent cost of transportation.
Each of several embedded-anchor types presents its own advantages for anchoring offshore structures. The choice of anchoring solution depends on multiple factors, such as the type of offshore facility that requires mooring, its location, economic viability, the lifetime of its use, soil conditions, and resources available.
Examples of facilities that may need mooring offshore are floating production storage and offloading (FPSO) units, mobile offshore drilling units , offshore oil production platforms , wave power and other renewable energy converters, and floating liquefied natural gas facilities.
Drag-embedment anchors (DEAs) derive their holding capacity from being buried, or embedded, deep within the seabed, with their anchoring capacity being directly related to embedment depth. DEAs are installed by dragging, using a mooring chain or wire; this relatively simple means of installation makes the DEA a cost-effective option for anchoring offshore structures. DEAs are commonly used for temporary moorings of offshore oil and gas structures, e.g. mobile offshore drilling units . Their use in only temporary mooring situations may be largely attributed to uncertainty involving the anchor's embedding trajectory and placement in the soil, which results in uncertainty with regard to the anchor's holding capacity. [ 2 ]
Under ideal conditions, DEAs are one of the most efficient types of anchors, with holding capacities ranging from 33 to greater than 50 times their weight; [ 3 ] and such efficiency gives DEAs an inherent advantage over other anchoring solutions such as caissons and piles, since the mass of a DEA is concentrated deep within the seabed where soil resistance and, hence, holding capacity, is greatest. [ 2 ] Anchor efficiency is defined as the ratio between the ultimate holding capacity and the dry weight of the anchor, with DEAs often possessing significantly higher efficiency ratios compared to other anchoring solutions.
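As a minimal illustration of the efficiency ratio defined above, the sketch below estimates the dry anchor weight implied by an assumed holding capacity at the quoted efficiency range; the target capacity is an assumption for illustration only.

```python
# Minimal sketch of the anchor-efficiency relationship:
#     efficiency = ultimate holding capacity / dry weight of the anchor.
# The target capacity below is an assumed value, not design data.

def required_dry_weight_kN(holding_capacity_kN: float, efficiency: float) -> float:
    """Dry anchor weight (kN) implied by a holding capacity and an efficiency ratio."""
    return holding_capacity_kN / efficiency

target_capacity_kN = 5000.0  # mooring-line load the anchor must resist (assumed)

for eff in (33.0, 50.0):     # efficiency range quoted for drag-embedment anchors
    w_kN = required_dry_weight_kN(target_capacity_kN, eff)
    print(f"efficiency {eff:>4.0f}: dry weight ~ {w_kN:6.1f} kN (~{w_kN / 9.81:4.1f} t)")
```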
A catenary configuration consists of "slack" mooring lines that form a catenary shape under their own weight. Since the catenary mooring lines lie flat along the seabed, they exert only horizontal forces on their anchors. Taut mooring lines arriving at an angle to the seabed exert both horizontal and vertical forces on their anchors. [ 1 ] Since DEAs are designed to resist horizontal forces only, these anchors should only be used in a catenary-moored configuration. Applying a significant vertical load to a DEA will result in its failure, as the vertical force applied to the padeye will result in anchor retrieval. However, this does facilitate anchor retrieval, which contributes to the cost effectiveness of this anchoring solution.
The three main components of a DEA are the fluke, shank, and padeye. For a DEA, the angle between the fluke and the shank is fixed at approximately 30 degrees for stiff clays and sand, and 50 degrees for soft clays. [ 1 ]
The fluke of a plate anchor is a bearing plate that provides the large majority of the anchor's holding capacity at its ultimate embedment depth. As well as contributing to anchor capacity, the fluke may contribute to anchor stability during embedment. Adopting a wider fluke can help provide rolling stability, which allows for deeper embedment and better holding capacity. [ 4 ] There are industry guidelines pertaining to the appropriate width, length, and thickness of anchor flukes, where width refers to the dimension perpendicular to the direction of embedment. Commercial anchors typically have a fluke width-to-length ratio of 2:1 and a fluke length-to-thickness ratio between 5 and 30. [ 4 ]
Since DEAs derive their strength from their embedment depth, the shank should be designed such that soil resistance perpendicular to the anchor's embedment trajectory is minimised.
Frictional soil resistance against the parallel component of the shank, however, is less significant. Thus, the area of the shank in line with the direction of the embedment trajectory is often relatively large to provide anchor stability against rolling during embedment.
The padeye is the connection between the anchor and mooring line. Padeye eccentricity, often measured as the padeye offset ratio, is the relationship between the horizontal and vertical distance of the padeye position in relation to the fluke–shank connection of an anchor. The evaluation of the optimal padeye eccentricity for DEAs and vertically loaded anchors (VLAs) is limited to the appropriate choice of shank length given a fixed fluke–shank angle during embedment. A study conducted to investigate appropriate shank lengths considered a range of shank-length to fluke-length ratios between 1 and 2. [ 5 ] It was determined that the shorter shank lengths (closer to ratios of 1) produced deeper anchor embedment. [ 5 ]
Although the mooring line is not an anchor component unique to the DEA, its design significantly influences the behaviour of the anchor. A thicker mooring line makes for more resistance to anchor embedment. The properties of chain, versus wire, mooring lines have been investigated, with chain mooring lines causing reductions in anchor capacity of up to 70%. [ 6 ] Thus, where appropriate and cost-efficient, wire mooring lines should be used. The embedded section of a mooring line contributes to the anchor's holding capacity against horizontal movement. It is, therefore, appropriate to analyse the contribution of the anchor's mooring line with respect to both the embedment process of the anchor and its contribution to the final anchor holding capacity.
Vertically-loaded anchors (VLAs) are essentially DEAs that are free to rotate about the fluke-shank connection, which allows the anchor to withstand both vertical and horizontal loading; thus, unlike DEAs, their mooring lines may be in either a catenary or taut-moored configuration. VLAs are embedded as DEAs are, over a specified drag length. As a result, many of the design considerations for DEAs are applicable to VLAs. Following the drag-length insertion, the fluke is "released" and allowed to rotate freely about its connection with the shank. This new anchor configuration results in the mooring line load being essentially normal to the fluke of the VLA. [ 2 ]
Suction caissons (also known as suction buckets, suction piles, or suction anchors) are a new class of embedded anchors that have a number of economic advantages over other methods. They are essentially upturned buckets that are embedded into the soil and use suction, by pumping out the water to create a vacuum, to anchor offshore floating facilities. They present a number of economic benefits, including quick installation and removal during decommissioning, as well as a reduction in material costs. [ 7 ] The caisson consists of a large-diameter cylinder (typically in the 3-to-8-metre (10 to 26 ft) range), open at the bottom and closed at the top, with a length-to-diameter ratio in the range of 3 to 6. [ 8 ] This anchoring solution is used extensively in large offshore structures, offshore drilling and accommodation platforms. Since the rise in the demand for renewable energy, such anchors are now used for offshore wind turbines, typically in a tripod configuration.
In 1997, the suction-embedded plate anchor (SEPLA) was introduced as a combination of two proven anchoring concepts—suction piles and plate anchors—to increase efficiency and reduce costs. [ 10 ]
Today, SEPLA anchors are used in the Gulf of Mexico , off the coast of West Africa , and in many other locations. The SEPLA uses a suction "follower" , an initially water-filled, open-bottom caisson, to embed a plate anchor into soil. The suction follower is lowered to the seabed where it begins to penetrate under its own weight. Water is then pumped from the interior of the caisson to create a vacuum that pushes the plate anchor underneath to the desired depth (Step 1). The plate anchor mooring line is then disengaged from the caisson, which is retrieved by water being forced into the caisson, causing it to move upwards whilst leaving the plate anchor embedded (Step 2). Tension is then applied to the mooring line (Step 3), causing the plate anchor to rotate (a process also known as "keying") to be perpendicular to the direction of loading (Step 4). [ 9 ] This is done so that the maximum surface area is facing the direction of loading, maximising the resistance of the anchor.
As a suction-caisson follower is used, SEPLA anchors can be classified as direct-embedment anchors; and thus the location and depth of the anchor are known. Because of their geotechnical efficiency, SEPLA plate anchors are significantly smaller and lighter than the equivalent suction anchors, thus reducing costs.
The increased cost of installing anchors in deep water has led to the inception of dynamically penetrating anchors that embed themselves into the seabed by free-fall . These anchors typically consist of a thick-walled, steel, tubular shaft filled with scrap metal or concrete and fitted with a conical tip. Steel flukes are often attached to the shaft to improve its hydrodynamic stability and to provide additional frictional resistance against uplift after installation. [ 1 ]
The main advantage of dynamically installed anchors is that their use is not restricted by water depth. Costs are reduced, as no additional mechanical interaction is required during installation. The simple anchor design keeps fabrication and handling costs to a minimum. Additionally, the ultimate holding capacity of dynamic anchors is less dependent on the geotechnical assessment of the location, as lower shear-strengths permit greater penetration which increases the holding capacity. [ 11 ] Despite these advantages, this anchor type's major disadvantage is the degree of uncertainty in predicting embedment depth and orientation and the resultant uncertain holding capacity.
Several different forms of dynamically installed anchors have been designed since their first commercial development in the 1990s. The deep-penetrating anchor (DPA) and the torpedo anchor have seen widespread adoption in offshore South American and Norwegian waters. [ 11 ] Their designs are shown in the figure with two other forms of dynamically installed anchors, namely the Omni-Max and the dynamically embedded plate anchor (DEPLA).
Deep-penetrating and torpedo anchors are designed to reach maximum velocities of 25–35 metres per second (82–115 ft/s) at the seabed, allowing for tip penetration of two to three times the anchor length, and holding capacities in the range of three to six times the weight of the anchor after soil consolidation . [ 1 ]
The dynamically-embedded plate anchor (DEPLA) is a direct-embedment, vertically-loaded anchor that consists of a plate embedded in the seabed by the kinetic energy obtained by freefall in water. This new anchor concept has only been recently developed but has been tested both in the lab and field. The different components of the DEPLA can be seen in the labeled diagram in the figure.
The Omni-Max anchor pictured is a gravity-installed anchor that is capable of being loaded in any direction due to its 360-degree swivel feature. [ 12 ] The anchor is manufactured from high-strength steel and possesses adjustable fluke fins that can be adapted to specific soil conditions. [ 12 ]
A torpedo anchor features a tubular steel shaft, with or without vertical steel fins, which is fitted with a conical tip and filled with scrap metal or concrete. [ 13 ] Up to 150 metres (490 ft) long, the anchor becomes completely buried within the seabed by free-fall.
Full-scale field tests were performed in water at depths of up to 1,000 metres (3,300 ft) using a 12-metre (39 ft) long, 762-millimetre (30.0 in) diameter, torpedo anchor with a dry weight of 400 kilonewtons (90,000 lb f ). The torpedo anchor dropped from a height of 30 metres (98 ft) above sea level achieved 29-metre (95 ft) penetration in normally consolidated clay. [ 13 ]
Subsequent tests with a torpedo anchor, with a dry weight of 240 kilonewtons (54,000 lb f ) and an average tip embedment of 20 metres (66 ft), resulted in holding capacities of approximately 4 times the anchor's dry weight immediately following installation, which approximately doubled after 10 days of soil consolidation. [ 13 ]
While the efficiencies are lower than what would be obtained with other sorts of anchor, such as a drag embedment anchor, this is compensated by the low cost of fabrication and ease of installation. Therefore, a series of torpedo anchors can be deployed for station-keeping of risers and other floating structures. [ 1 ]
A deep-penetrating anchor (DPA) is conceptually similar to a torpedo anchor: it features a dart -shaped, thick-walled, steel cylinder with flukes attached to the upper section of the anchor. A full-scale DPA is approximately 15 metres (49 ft) in length, 1.2 metres (4 ft) in diameter, and weighs on the order of 50–100 tonnes (49–98 long tons; 55–110 short tons). Its installation method is no different from that of the torpedo anchor: it is lowered to a predetermined height above the seabed and then released in free-fall to embed itself into the seabed. [ 1 ]
Embedded anchor piles (driven or drilled) are used where a large holding capacity is required. The design of anchor piles allows for three types of mooring configurations (vertical tethers, catenary moorings, and semi-taut/taut moorings), which are used for the mooring of offshore structures such as offshore wind turbines , floating production storage and offloading (FPSO) vessels, floating liquefied natural gas (FLNG) facilities, etc. An industrial example is the Ursa tension-leg platform (TLP), which has been held on-station by 16 anchor piles, each of which is 147 metres (482 ft) long, 2.4 metres (7 ft 10 in) in diameter, and weighs 380 tonnes (370 long tons; 420 short tons). [ 1 ]
Anchor piles are hollow steel pipes that are either driven, or inserted into a hole drilled into the seabed and then grouted, similar to pile foundations commonly used in offshore fixed structures . The figure shows the different installation methods, where in the "driven" method, the steel tube is driven mechanically by a hammer, whilst in the "drilled" method a cast in-situ pile is inserted into an oversized borehole constructed with a rotary drill and then grouted with cement. Employment of a particular method depends on the geophysical and geotechnical properties of the seabed.
Anchor piles are typically designed to resist both horizontal and vertical loads. The axial holding capacity of the anchor pile is due to the friction along the pile-soil interface, while the lateral capacity of the pile is generated by lateral soil resistance, where the anchor's orientation is critical to optimising this resistance. As a result, the location of the padeye is placed such that the force from the catenary or taut mooring will result in a moment equilibrium about the point of rotation, to achieve the optimal lateral soil resistance. [ 1 ]
Due to the slender nature of anchor piles, there are three installation issues pertaining to driven piles. [ 14 ] The first is the driveability of the piles at the location, where excessive soil resistance may prevent penetration to the desired depth. The second is deformation of the piles, where tip collapse or buckling occurs due to excessive resistance and a deviation of the pile trajectory. The third is the geotechnical properties of the soil: insufficient lateral resistance by the soil may lead to toppling of the anchor, and rocks and boulders along the penetration trajectory may lead to refusal and tip collapse.
Installation issues pertaining to drilled-and-grouted piles include borehole stability, unwanted soft cuttings at the base of the hole, hydrofracture of the soil leading to loss of grout, and thermal expansion effects. [ 14 ] | https://en.wikipedia.org/wiki/Offshore_embedded_anchors |
Offshore freshened groundwater (OFG) is water that contains a Total Dissolved Solids (TDS) concentration lower than that of sea water, and which is hosted in porous sediments and rocks located in the sub-seafloor. OFG systems have been documented all around the world and have an estimated global volume of around 1 × 10⁶ km³. [ 1 ] Their study is important because they may represent an unconventional source of potable water for human populations living near the coast, especially in areas where groundwater resources are scarce or facing stress. [ 2 ]
OFG usually presents salinity values below 33 Practical Salinity Units (PSU). OFG bodies are located at water depths of less than 100 m and within 55 km of the coast, in both siliciclastic and carbonate aquifers along active and passive margins . OFG systems are usually composed of multiple OFG bodies which are altogether less than 2 km thick ( Fig.1 ).
The principal emplacement mechanisms for OFG systems are (from the most common to the least common):
The geological setting has a major control on OFG development: the majority of OFG systems are hosted in coarser siliciclastic materials, with porosity values around 30% to 60%, constrained by a permeability contrast (predominantly sand to clay). [ 19 ] Topographic gradients have a major impact on OFG emplacement, [ 1 ] as topography-driven flow is one of the most important mechanisms controlling discharge of freshwater offshore.
Different methods can be used to characterize and assess OFG occurrences:
OFG systems are receiving increasing attention as they may be used as an unconventional source of potable water in coastal areas, where groundwater resources are being rapidly depleted or contaminated. [ 2 ] 60% of the global population lives in areas of water stress, [ 26 ] defined as the ratio of total water withdrawals to available renewable surface and groundwater supplies ( Fig.1 ). Climate change, rapid population growth, and urbanization worsen water stress, especially in coastal communities. [ 27 ] Therefore, OFG has been proposed as an alternative source of freshwater to mitigate water scarcity and groundwater depletion in areas of water stress. [ 26 ] | https://en.wikipedia.org/wiki/Offshore_freshened_groundwater
Offshore geotechnical engineering is a sub-field of geotechnical engineering . It is concerned with foundation design, construction, maintenance and decommissioning for human-made structures in the sea . [ 1 ] Oil platforms , artificial islands and submarine pipelines are examples of such structures. The seabed has to be able to withstand the weight of these structures and the applied loads. Geohazards must also be taken into account. The need for offshore developments stems from a gradual depletion of hydrocarbon reserves onshore or near the coastlines, as new fields are being developed at greater distances offshore and in deeper water, [ 2 ] with a corresponding adaptation of the offshore site investigations. [ 3 ] Today, there are more than 7,000 offshore platforms operating at a water depth up to and exceeding 2000 m. [ 2 ] A typical field development extends over tens of square kilometers, and may comprise several fixed structures, infield flowlines with an export pipeline either to the shoreline or connected to a regional trunkline. [ 4 ]
An offshore environment has several implications for geotechnical engineering. These include the following: [ 1 ] [ 4 ]
Offshore structures are exposed to various environmental loads: wind , waves , currents and, in cold oceans, sea ice and icebergs . [ 6 ] [ 7 ] Environmental loads act primarily in the horizontal direction, but also have a vertical component. Some of these loads get transmitted to the foundation (the seabed). Wind, wave and current regimes can be estimated from meteorological and oceanographic data, which are collectively referred to as metocean data . Earthquake -induced loading can also occur; it proceeds in the opposite direction, from the foundation to the structure. Depending on location, other geohazards may also be an issue. All of these phenomena may affect the integrity or the serviceability of the structure and its foundation during its operational lifespan, and they need to be taken into account in offshore design.
Following are some of the features characterizing the soil in an offshore environment: [ 8 ]
Wave forces induce motion of floating structures in all six degrees of freedom – they are a major design criterion for offshore structures. [ 9 ] [ note 1 ] When a wave's orbital motion reaches the seabed, it induces sediment transport. This only occurs to a water depth of about 200 metres (660 ft), which is the commonly adopted boundary between shallow water and deep water . The reason is that the orbital motion only extends to a water depth that is half the wavelength, and the maximum possible wavelength is generally considered to be 400 metres (1,300 ft). [ 7 ] In shallow water, waves may generate pore pressure build-up in the soil, which may lead to flow slide, and repeated impact on a platform may cause liquefaction , and loss of support. [ 7 ]
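The depth to which wave-induced orbital motion is felt can be illustrated with the deep-water dispersion relation of linear (Airy) wave theory; the sketch below uses assumed wave periods and the common rule of thumb that the motion dies out below about half a wavelength.

```python
# Minimal sketch, assuming linear (Airy) wave theory in deep water:
# wavelength L0 = g * T^2 / (2 * pi), and wave-induced orbital motion is
# commonly taken to be negligible below a depth of about L0 / 2.
# The wave periods below are assumed storm values, not site data.

import math

G = 9.81  # gravitational acceleration, m/s^2

def deep_water_wavelength(period_s: float) -> float:
    """Deep-water wavelength (m) from the linear dispersion relation."""
    return G * period_s ** 2 / (2.0 * math.pi)

for T in (8.0, 12.0, 16.0):  # assumed wave periods in seconds
    L0 = deep_water_wavelength(T)
    print(f"T = {T:4.1f} s -> L0 = {L0:6.1f} m, wave base ~ L0/2 = {L0/2:6.1f} m")
```

With a period of about 16 s the wavelength approaches the 400 m figure quoted above, giving a wave base near 200 m, consistent with the commonly adopted boundary between shallow and deep water.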
Currents are a source of horizontal loading for offshore structures. Because of the Bernoulli effect , they may also exert upward or downward forces on structural surfaces and can induce the vibration of wire lines and pipelines. [ 7 ] Currents are responsible for eddies around a structure, which cause scouring and erosion of the soil. [ 7 ] There are various types of currents: oceanic circulation , geostrophic , tidal , wind-driven, and density currents . [ 7 ]
Geohazards are associated with geological activity, geotechnical features and environmental conditions. Shallow geohazards are those occurring at less than 400 metres (1,300 ft) below the seafloor. [ 10 ] Information on the potential risks associated with these phenomena is acquired through studies of the geomorphology, geological setting and tectonic framework in the area of interest, as well as with geophysical and geotechnical surveys of the seafloor. [ 5 ] Examples of potential threats include tsunamis , landslides , active faults , mud diapirs and the nature of the soil layering (presence of karst , gas hydrates , carbonates). [ 10 ] [ 11 ] [ 12 ] In cold regions, gouging ice features are a threat to subsea installations, such as pipelines. [ 13 ] [ 14 ] [ 5 ] The risk associated with a particular type of geohazard is a function of how exposed the structure is to the event, how severe this event is and how often it occurs (for episodic events). Any threat has to be monitored and mitigated or removed. [ 15 ] [ 16 ]
Offshore site investigations are not unlike those conducted onshore (see Geotechnical investigation ). They may be divided into three phases: [ 17 ]
In this phase, which may take place over a period of several months (depending on project size), information is gathered from various sources, including reports, scientific literature (journal articles, conference proceedings) and databases, with the purpose of evaluating risks, assessing design options and planning the subsequent phases. Bathymetry , regional geology, potential geohazards, seabed obstacles and metocean data [ 17 ] [ 18 ] are among the types of information sought during this phase.
Geophysical surveys can be used for various purposes. One is to study the bathymetry in the location of interest and to produce an image of the seafloor (irregularities, objects on the seabed, lateral variability, ice gouges , ...). Seismic refraction surveys can be done to obtain information on shallow seabed stratigraphy – they can also be used to locate material such as sand and gravel deposits for use in the construction of artificial islands . [ 19 ] Geophysical surveys are conducted from a research vessel equipped with sonar devices and related equipment, such as single-beam and multibeam echosounders , side-scan sonars , ‘towfish’ and remotely operated vehicles (ROVs) . [ 20 ] [ 21 ] For the sub-bottom stratigraphy, the tools used include boomers, sparkers, pingers and chirp. [ 22 ] Geophysical surveys are normally required before conducting the geotechnical surveys; in larger projects, these phases may be interwoven. [ 22 ]
Geotechnical surveys involve a combination of sampling, drilling, in situ testing as well as laboratory soil testing that is conducted offshore and, with samples, onshore. They serve to ground truth the results of the geophysical investigations; they also provide a detailed account of the seabed stratigraphy and soil engineering properties. [ 23 ] Depending on water depth and metocean conditions, geotechnical surveys may be conducted from a dedicated geotechnical drillship , a semi-submersible , a jackup rig , a large hovercraft or other means. [ 24 ] They are done at a series of specific locations, while the vessel maintains a constant position. Dynamic positioning and mooring with four-point anchoring systems are used for that purpose.
Shallow penetration geotechnical surveys may include soil sampling of the seabed surface or in situ mechanical testing. They are used to generate information on the physical and mechanical properties of the seabed. [ 25 ] They extend to the first few meters below the mudline. Surveys done to these depths, which may be conducted at the same time as the shallow geophysical survey, may suffice if the structure to be deployed at that location is relatively light. These surveys are also useful for planning subsea pipeline routes.
The purpose of deep penetration geotechnical surveys is to collect information on the seabed stratigraphy to depths of up to a few hundred meters below the mudline. [ 10 ] [ 26 ] These surveys are done when larger structures are planned at these locations. Deep drill holes require a few days, during which the drilling unit has to remain exactly in the same position (see dynamic positioning ).
Seabed surface sampling can be done with a grab sampler and with a box corer . [ 27 ] The latter provides undisturbed specimens, on which testing can be conducted, for instance, to determine the soil's relative density , water content and mechanical properties . Sampling can also be achieved with a tube corer, either gravity-driven, or that can be pushed into the seabed by a piston or by means of a vibration system (a device called a vibrocorer). [ 28 ]
Drilling is another means of sampling the seabed. It is used to obtain a record of the seabed stratigraphy or the rock formations below it. The set-up used to sample an offshore structure's foundation is similar to that used by the oil industry to reach and delineate hydrocarbon reservoirs, with some differences in the types of testing. [ 29 ] The drill string consists of a series of pipe segments 5 inches (13 cm) in diameter screwed end to end, with a drillbit assembly at the bottom. [ 28 ] As the dragbit (teeth extending downward from the drillbit) cuts into the soil, soil cuttings are produced. Viscous drilling mud flowing down the drillpipe collects these cuttings and carries them up outside the drillpipe. As is the case for onshore geotechnical surveys , different tools can be used for sampling the soil from a drill hole, notably "Shelby tubes", "piston samplers" and "split spoon samplers".
Information on the mechanical strength of the soil can be obtained in situ (from the seabed itself as opposed to in a laboratory from a soil sample). The advantage of this approach is that the data are obtained from soil that has not suffered any disturbance as a result of its relocation. Two of the most commonly used instruments used for that purpose are the cone penetrometer (CPT) and the shear vane . [ 30 ] [ 31 ]
The CPT is a rod-shaped tool whose end has the shape of a cone with a known apex angle ( e.g. 60 degrees). [ 32 ] As it is pushed into the soil, the resistance to penetration is measured, thereby providing an indication of soil strength. [ 33 ] A sleeve behind the cone allows the independent determination of the frictional resistance. Some cones are also able to measure pore water pressure . The shear vane test is used to determine the undrained shear strength of soft to medium cohesive soils . [ 34 ] [ 35 ] This instrument usually consists of four plates welded at 90 degrees from each other at the end of a rod. The rod is then inserted into the soil and a torque is applied to it so as to achieve a constant rotation rate. The torque resistance is measured and an equation is then used to determine the undrained shear strength (and the residual strength), which takes into account the vane's size and geometry. [ 35 ]
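The torque-to-strength conversion mentioned above can be sketched with one widely used reduction formula, which assumes full and uniform mobilization of shear stress along the cylindrical side surface and on both ends of a rectangular vane of height H and diameter D; the vane dimensions and torque reading below are illustrative assumptions.

```python
# Minimal sketch of a common vane-shear reduction, assuming uniform shear
# mobilization on the side surface and both ends of a rectangular vane:
#     T = s_u * pi * (D^2 * H / 2 + D^3 / 6)   =>   s_u = T / (pi * (...))
# The torque value is an assumed reading, not measured data.

import math

def undrained_shear_strength(torque_Nm: float, D_m: float, H_m: float) -> float:
    """Undrained shear strength s_u (Pa) back-calculated from peak vane torque."""
    return torque_Nm / (math.pi * (D_m ** 2 * H_m / 2.0 + D_m ** 3 / 6.0))

D, H = 0.065, 0.130   # a 65 mm x 130 mm vane (H = 2D), a typical field geometry
T_peak = 40.0         # N*m, assumed peak torque reading

s_u = undrained_shear_strength(T_peak, D, H)
print(f"s_u ~ {s_u / 1000.0:.1f} kPa")  # ~ 40 kPa, a soft-to-medium clay
```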
Offshore structures are mainly represented by platforms , notably jackup rigs , steel jacket structures and gravity-based structures . [ 36 ] The nature of the seabed has to be taken into account when planning these developments. For instance, a gravity-based structure typically has a very large footprint and is relatively buoyant (because it encloses a large open volume). [ 37 ] Under these circumstances, vertical loading of the foundation may not be as significant as the horizontal loads exerted by wave actions and transferred to the seabed. In that scenario, sliding could be the dominant mode of failure. A more specific example is that of the Woodside "North Rankin A" steel jacket structure offshore Australia. [ 38 ] The shaft capacity for the piles making up each of the structure's legs was estimated on the basis of conventional design methods, notably when driven into siliceous sands. But the soil at that site was a lower capacity calcareous sand. Costly remediation measures were required to correct this oversight.
Proper seabed characterization is also required for mooring systems . For instance, the design and installation of suction piles has to take into account the soil properties, notably its undrained shear strength. [ 39 ] The same is true for the installation and capacity assessment of plate anchors . [ 40 ]
Submarine pipelines are another common type of man-made structure in the offshore environment. [ 41 ] These structures either rest on the seabed or are placed inside a trench to protect them from fishing trawlers , dragging anchors or fatigue due to current-induced oscillations. [ 42 ] Trenching is also used to protect pipelines from gouging by ice keels . [ 13 ] [ 14 ] In both cases, planning of the pipeline involves geotechnical considerations. Pipelines resting on the seabed require geotechnical data along the proposed pipeline route to evaluate potential stability issues, such as passive failure of the soil below (the pipeline drops) due to insufficient bearing capacity , or sliding failure (the pipeline shifts sideways) due to low sliding resistance. [ 43 ] [ 44 ] The process of trenching, when required, needs to take into account soil properties and how they would affect ploughing duration. [ 45 ] Buckling potential induced by the axial and transverse response of the buried pipeline during its operational lifespan needs to be assessed at the planning phase, and this will depend on the resistance of the enclosing soil. [ 44 ]
Offshore embedded anchors are anchors that derive their capacity from the frictional and/or bearing resistance of the surrounding soil. This is in contrast to gravity anchors, which derive their capacity from their weight. As offshore developments move into deeper waters, gravity-based structures become less economical due to the large required size and the cost of transportation, which makes embedded anchors an attractive alternative. | https://en.wikipedia.org/wiki/Offshore_geotechnical_engineering
In biology , offspring are the young creation of living organisms , produced either by sexual or asexual reproduction . Collective offspring may be known as a brood or progeny . This can refer to a set of simultaneous offspring, such as the chicks hatched from one clutch of eggs , or to all offspring produced over time, as with the honeybee . Offspring can occur after mating , artificial insemination , or as a result of cloning .
Human offspring ( descendants ) are referred to as children ; male children are sons and female children are daughters (see Kinship ).
Offspring of a sexually reproducing species, also known as a child or the F1 generation, carry genes from both the father and the mother, who together are known as the parent generation. [ 1 ] Each offspring contains numerous genes which code for specific tasks and properties. Males and females both contribute equally to the genotypes of their offspring, which form when gametes fuse. An important aspect of the formation of offspring is the chromosome , a structure of DNA which contains many genes. [ 1 ]
One pattern of inheritance relevant to the formation of the F1 generation is sex linkage , [ 1 ] in which a gene is located on a sex chromosome and the pattern of inheritance differs between males and females. That offspring carry genes from both parents is demonstrated by crossing over , in which genetic material is exchanged between the paternal and maternal chromosomes during meiosis , before the chromosomes are split evenly between gametes. [ 2 ] The sex of the offspring depends on which sex chromosomes are inherited: the female always contributes an X chromosome , whereas the male contributes either an X chromosome or a Y chromosome . A male offspring carries an X and a Y chromosome, and a female offspring carries two X chromosomes. [ 2 ]
Cloning is the production of an offspring whose genes are identical to those of its parent. Reproductive cloning begins with the removal of the nucleus from an egg, which holds the genetic material. [ 3 ] In order to clone an organ, a stem cell is produced and then used to grow that specific organ. [ 4 ] A common misconception is that cloning produces an exact copy of the parent being cloned. Cloning copies the DNA of the parent and creates a genetic duplicate, but the clone will not be an identical copy, because it grows up in different surroundings from the parent and may encounter different opportunities and experiences that can lead to epigenetic changes. Cloning also faces setbacks in terms of ethics and human health. Although cell division and DNA replication are vital parts of survival, the many steps involved mean that mutations can occur, producing permanent changes in an organism's and its offspring's DNA. [ 5 ] Some mutations can be beneficial, contributing to the evolution of the species, but most are harmful because they can change the genotypes of offspring in ways that harm the species. | https://en.wikipedia.org/wiki/Offspring |
In the theory of formal languages , Ogden's lemma (named after William F. Ogden) [ 1 ] is a generalization of the pumping lemma for context-free languages .
Despite Ogden's lemma being a strengthening of the pumping lemma, it is insufficient to fully characterize the class of context-free languages. [ 2 ] This is in contrast to the Myhill–Nerode theorem , which unlike the pumping lemma for regular languages is a necessary and sufficient condition for regularity.
Ogden's lemma — If a language L {\displaystyle L} is generated by a context-free grammar , then there exists some p ≥ 1 {\displaystyle p\geq 1} such that ∀ w ∈ L {\displaystyle \forall w\in L} with length ≥ p {\displaystyle \geq p} , and any way of marking ≥ p {\displaystyle \geq p} positions of w {\displaystyle w} as "marked", there exists a nonterminal A {\displaystyle A} and a way to split w {\displaystyle w} into 5 segments u x z y v {\displaystyle uxzyv} , such that
S ⇒ ∗ u A v ⇒ ∗ u x A y v ⇒ ∗ u x z y v {\displaystyle S\Rightarrow ^{*}uAv\Rightarrow ^{*}uxAyv\Rightarrow ^{*}uxzyv}
z {\displaystyle z} contains at least one marked position.
x z y {\displaystyle xzy} contains at most p {\displaystyle p} marked positions.
u , x {\displaystyle u,x} both contain marked positions, or y , v {\displaystyle y,v} both contain marked positions.
We will use underlines to indicate "marked" positions.
Ogden's lemma is often stated in the following form, which can be obtained by "forgetting about" the grammar, and concentrating on the language itself:
If a language L is context-free, then there exists some number p ≥ 1 {\displaystyle p\geq 1} (where p may or may not be a pumping length) such that for any string s of length at least p in L and every way of "marking" p or more of the positions in s , s can be written as
s = u v w x y {\displaystyle s=uvwxy}
with strings u, v, w, x, and y , such that: vwx has at most p marked positions; vx has at least one marked position; and u v n w x n y ∈ L {\displaystyle uv^{n}wx^{n}y\in L} for every n ≥ 0 {\displaystyle n\geq 0} .
In the special case where every position is marked, Ogden's lemma is equivalent to the pumping lemma for context-free languages. Ogden's lemma can be used to show that certain languages are not context-free in cases where the pumping lemma is not sufficient. An example is the language { a i b j c k d l : i = 0 or j = k = l } {\displaystyle \{a^{i}b^{j}c^{k}d^{l}:i=0{\text{ or }}j=k=l\}} .
The special case of Ogden's lemma is often sufficient to prove some languages are not context-free. For example, { a m b n c m d n | m , n ≥ 1 } {\displaystyle \{a^{m}b^{n}c^{m}d^{n}|m,n\geq 1\}} is a standard example of a non-context-free language. [ 3 ]
Suppose the language is generated by a context-free grammar, let p {\displaystyle p} be the constant required in Ogden's lemma, and consider the word a p b p c p _ d p {\displaystyle a^{p}{\underline {b^{p}c^{p}}}d^{p}} in the language. The three conditions implied by Ogden's lemma cannot then all be satisfied.
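The combinatorial argument behind this can be checked mechanically for one small, fixed value of p. The following sketch is a brute-force illustration rather than a proof (Ogden's lemma only asserts that some p exists): for p = 3 it enumerates every decomposition of a^p b^p c^p d^p with the b- and c-blocks marked, and confirms that no decomposition satisfying the marking conditions can be pumped while staying in the language. The function names and the pumping bound are arbitrary choices made for this illustration.

```python
import re

def in_lang(s):
    """Membership test for { a^m b^n c^m d^n : m, n >= 1 }."""
    m = re.fullmatch(r'(a+)(b+)(c+)(d+)', s)
    return bool(m) and len(m.group(1)) == len(m.group(3)) \
                   and len(m.group(2)) == len(m.group(4))

def no_valid_decomposition(word, marked, p, max_pump=3):
    """True if no split word = u v w x y satisfies all of Ogden's conditions:
    vwx has at most p marked positions, vx has at least one marked position,
    and u v^r w x^r y stays in the language for every r checked."""
    n = len(word)
    for i in range(n + 1):
        for j in range(i, n + 1):
            for k in range(j, n + 1):
                for l in range(k, n + 1):
                    u, v, w, x, y = word[:i], word[i:j], word[j:k], word[k:l], word[l:]
                    marks_vwx = sum(1 for t in range(i, l) if t in marked)
                    marks_vx = marks_vwx - sum(1 for t in range(j, k) if t in marked)
                    if marks_vwx > p or marks_vx < 1:
                        continue  # split does not meet the marking conditions
                    if all(in_lang(u + v * r + w + x * r + y) for r in range(max_pump + 1)):
                        return False  # this split pumps successfully
    return True

p = 3
word = 'a' * p + 'b' * p + 'c' * p + 'd' * p
marked = set(range(p, 3 * p))  # mark the b^p c^p block, as in the sketch above
print(no_valid_decomposition(word, marked, p))  # expected: True
```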
Similarly, one can prove the "copy twice" language L = { w 2 | w ∈ { a , b } ∗ } {\displaystyle L=\{w^{2}|w\in \{a,b\}^{*}\}} is not context-free, by using Ogden's lemma on a 2 p b 2 p _ a 2 p b 2 p {\displaystyle a^{2p}{\underline {b^{2p}}}a^{2p}b^{2p}} .
And the given example last section { a i b j c k d l : i = 0 or j = k = l } {\displaystyle \{a^{i}b^{j}c^{k}d^{l}:i=0{\text{ or }}j=k=l\}} is not context-free by using Ogden's lemma on a b 2 p c 2 p _ d 2 p {\displaystyle ab^{2p}{\underline {c^{2p}}}d^{2p}} .
Ogden's lemma can be used to prove the inherent ambiguity of some languages, which is implied by the title of Ogden's paper.
Example : Let L 0 = { a n b m c m | m , n ≥ 1 } , L 1 = { a m b m c n | m , n ≥ 1 } {\displaystyle L_{0}=\{a^{n}b^{m}c^{m}|m,n\geq 1\},L_{1}=\{a^{m}b^{m}c^{n}|m,n\geq 1\}} . The language L = L 0 ∪ L 1 {\displaystyle L=L_{0}\cup L_{1}} is inherently ambiguous. (Example from page 3 of Ogden's paper.)
Let p {\displaystyle p} be the pumping length needed for Ogden's lemma, and apply it to the sentence a p ! + p b p c p _ {\displaystyle a^{p!+p}{\underline {b^{p}c^{p}}}} .
By routine checking of the conditions of Ogden's lemma, we find that the derivation is
S ⇒ ∗ u A v ⇒ ∗ u x A y v ⇒ ∗ u x z y v {\displaystyle S\Rightarrow ^{*}uAv\Rightarrow ^{*}uxAyv\Rightarrow ^{*}uxzyv} where u = a p ! + p b p − s − k , x = b k , z = b s c s ′ , y = c k , v = c p − s ′ − k {\displaystyle u=a^{p!+p}b^{p-s-k},x=b^{k},z=b^{s}c^{s'},y=c^{k},v=c^{p-s'-k}} , satisfying s + s ′ ≥ 1 {\displaystyle s+s'\geq 1} and k ≥ 1 {\displaystyle k\geq 1} and p ≥ s + s ′ + 2 k {\displaystyle p\geq s+s'+2k} .
Thus, we obtain a derivation of a p ! + p b p ! + p c p ! + p {\displaystyle a^{p!+p}b^{p!+p}c^{p!+p}} by interpolating the derivation with p ! / k {\displaystyle p!/k} copies of A ⇒ ∗ x A y {\displaystyle A\Rightarrow ^{*}xAy} . According to this derivation, an entire sub-sentence x p ! / k + 1 z y p ! / k + 1 = b p ! + k + s c p ! + k + s ′ {\displaystyle x^{p!/k+1}zy^{p!/k+1}=b^{p!+k+s}c^{p!+k+s'}} is the descendent of one node A {\displaystyle A} in the derivation tree.
Symmetrically, we can obtain another derivation of a p ! + p b p ! + p c p ! + p {\displaystyle a^{p!+p}b^{p!+p}c^{p!+p}} , according to which there is an entire sub-sentence a p ! + k ″ + s ″ b p ! + k ″ + s ‴ {\displaystyle a^{p!+k''+s''}b^{p!+k''+s'''}} being the descendent of one node in the derivation tree.
Since ( p ! + k + s ) + ( p ! + k ″ + s ‴ ) > 2 p ! + 1 > p ! + p {\displaystyle (p!+k+s)+(p!+k''+s''')>2p!+1>p!+p} , the two sub-sentences have nonempty intersection, and since neither contains the other, the two derivation trees are different.
Similarly, L ∗ {\displaystyle L^{*}} is inherently ambiguous, and for any CFG of the language, letting p {\displaystyle p} be the constant for Ogden's lemma, we find that ( a p ! + p b p ! + p c p ! + p ) n {\displaystyle (a^{p!+p}b^{p!+p}c^{p!+p})^{n}} has at least 2 n {\displaystyle 2^{n}} different parses. Thus L ∗ {\displaystyle L^{*}} has an unbounded degree of inherent ambiguity.
The proof can be extended to show that deciding whether the language of a CFG is inherently ambiguous is undecidable, by reduction from the Post correspondence problem . It can also show that deciding whether a CFG has an unbounded degree of inherent ambiguity is undecidable. (page 4 of Ogden's paper)
Given any Post correspondence problem over binary strings, we reduce it to a decision problem over a CFG.
Given any two lists of binary strings ξ 1 , . . . , ξ n {\displaystyle \xi _{1},...,\xi _{n}} and η 1 , . . . , η n {\displaystyle \eta _{1},...,\eta _{n}} , rewrite the binary alphabet to { d , e } {\displaystyle \{d,e\}} .
Let L ( ξ 1 , ⋯ , ξ n ) {\displaystyle L\left(\xi _{1},\cdots ,\xi _{n}\right)} be the language over alphabet { d , e , f } {\displaystyle \{d,e,f\}} , generated by the CFG with rules S → ξ i S d i e | f {\displaystyle S\to \xi _{i}Sd^{i}e|f} for every i = 1 , . . . , n {\displaystyle i=1,...,n} . Similarly define L ( η 1 , ⋯ , η n ) {\displaystyle L\left(\eta _{1},\cdots ,\eta _{n}\right)} .
Now, by the same argument as above, the language ( L 0 ⋅ L ( ξ 1 , ⋯ , ξ n ) ) ∪ ( L 1 ⋅ L ( η 1 , ⋯ , η n ) ) {\displaystyle (L_{0}\cdot L\left(\xi _{1},\cdots ,\xi _{n}\right))\cup (L_{1}\cdot L\left(\eta _{1},\cdots ,\eta _{n}\right))} is inherently ambiguous iff the Post correspondence problem has a solution.
And the language ( ( L 0 ⋅ L ( ξ 1 , ⋯ , ξ n ) ) ∪ ( L 1 ⋅ L ( η 1 , ⋯ , η n ) ) ) ∗ {\displaystyle ((L_{0}\cdot L\left(\xi _{1},\cdots ,\xi _{n}\right))\cup (L_{1}\cdot L\left(\eta _{1},\cdots ,\eta _{n}\right)))^{*}} has an unbounded degree of inherent ambiguity iff the Post correspondence problem has a solution.
Bader and Moura have generalized the lemma [ 4 ] to allow marking some positions that are not to be included in vx . The dependence on the parameters was later improved by Dömösi and Kudlek. [ 5 ] If we denote the number of such excluded positions by e , then the number d of marked positions of which we want to include some in vx must satisfy d ≥ p ( e + 1 ) {\displaystyle d\geq p(e+1)} , where p is some constant that depends only on the language. The statement becomes that every s can be written as
with strings u, v, w, x, and y , such that
Moreover, either each of u,v,w has a marked position, or each of w , x , y {\displaystyle w,x,y} has a marked position. | https://en.wikipedia.org/wiki/Ogden's_lemma |
Ogo is a handheld electronic device that enables communication via instant messaging services, email , MMS and SMS text messages. The device works through GSM cellular networks and allows unlimited usage for a flat monthly fee. It supports AOL Instant Messenger , Yahoo! Messenger , and MSN Messenger . It was released in 2004.
Ogo uses the IXI-Connect OS. It features a clamshell design with a 12- bit depth color screen on the top half and a full QWERTY keyboard on the lower half. Navigation through the menus is accomplished primarily through the use of a directional pad located on the lower right hand of the device and alternately through buttons that directly access each of the devices features.
The Ogo is part of a family of devices produced by its overseas manufacturer, IXI, which showcase the "personal mobile gateway" concept, wherein the Ogo acts as a wireless gateway for other Bluetooth enabled devices to access the Internet . Other devices in the family include pens and cameras. With support for AOL Instant Messenger, Yahoo! Messenger, and MSN Messenger, the device is as well equipped for instant messaging as contemporary phones; for e-mail it supports only POP3 . The Ogo is not a phone and has no voice features. [ 1 ]
AT&T deliberately omitted the wireless gateway capabilities of the Ogo in all domestic advertising, possibly in a bid to keep the device from being used as a flat-rate wireless modem.
After the acquisition of AT&T Wireless by Cingular , the Ogo was no longer offered. Cingular discontinued its Ogo service on October 10, 2006.
The device is also marketed in Germany by 1&1, where the Ogo is called the Pocket Web. The German model can browse the web, send e-mail, sync with Outlook, and use instant messaging like the US-based Ogo, but cannot play MP3s. It is also available in Austria through A1 and in Switzerland through the carrier Swisscom.
Size: 11.5 cm × 7.5 cm × 2.5 cm
Weight: 162 g
Display: 240×160 pixels (1/8 VGA) with 4096 colors
Battery life: 120 hours standby, 2.5 hours in full use; charges over a standard USB mini cable (5 V)
Ports: Mini USB; headphone connector (on the CT-17 and CT-12 versions)
Optical highlights: backlighting for the display and keyboard; two-color LED for new messages and for charging
Speaker: mono speaker, 0.8 watt at 8 Ω
Processor: Texas Instruments OMAP P330B at 200 MHz
Memory: 16 MB RAM and 32 MB NAND ROM flash memory (Samsung)
Wireless connections: quad-band chip with ROM; dual-band 900/1800 (CT-17/CT-12) or 850/1900 (CT-15/CT-10) GSM with GPRS data relay capabilities; SAR 0.596 W/kg
Keyboard: QWERTZ keyboard with navigation button
Software base:
Operating system: IXI-Connect OS, a proprietary system written in C; the kernel is based on NucleosOS
Browser: Obigo browser with Gecko engine (comparable to Mozilla 4.0)
Protocols: IMAP , HTTP , WSP , SyncML , FOTA , RSS
The Ogo is indeed capable of being used as a fully functional GPRS Bluetooth modem. Connectivity to Windows and Apple computers is still possible, provided a still-activated Ogo is available; the device shows up as a standard Bluetooth device.
The European version connects to the web and email push system via the Vodafone GPRS network. | https://en.wikipedia.org/wiki/Ogo_(handheld_device) |
Ohio Public Interest Research Group ( Ohio PIRG ) is a non-profit organization that is part of the state PIRG organizations. It works on a variety of political activities.
In the United States, Public Interest Research Groups (PIRGs) are non-profit organizations that employ grassroots organizing, direct advocacy, investigative journalism , and litigation to affect public policy. [ 1 ]
Ohio PIRG's mission is to deliver persistent, result-oriented public interest activism that protects our environment, encourages a fair, sustainable economy, and fosters responsive, democratic government. [ 2 ]
The PIRGs emerged in the early 1970s on U.S. college campuses. The PIRG model was proposed in the book Action for a Change by Ralph Nader and Donald Ross . [ 3 ] Among other early accomplishments, the PIRGs were responsible for much of the container deposit legislation in the United States , also known as "bottle bills." [ 4 ] [ 5 ] | https://en.wikipedia.org/wiki/Ohio_Public_Interest_Research_Group |
Electrical resistivity (also called volume resistivity or specific electrical resistance ) is a fundamental specific property of a material that measures its electrical resistance or how strongly it resists electric current . A low resistivity indicates a material that readily allows electric current. Resistivity is commonly represented by the Greek letter ρ ( rho ). The SI unit of electrical resistivity is the ohm - metre (Ω⋅m). [ 1 ] [ 2 ] [ 3 ] For example, if a 1 m³ solid cube of material has sheet contacts on two opposite faces, and the resistance between these contacts is 1 Ω , then the resistivity of the material is 1 Ω⋅m .
Electrical conductivity (or specific conductance ) is the reciprocal of electrical resistivity. It represents a material's ability to conduct electric current. It is commonly signified by the Greek letter σ ( sigma ), but κ ( kappa ) (especially in electrical engineering) [ citation needed ] and γ ( gamma ) [ citation needed ] are sometimes used. The SI unit of electrical conductivity is siemens per metre (S/m). Resistivity and conductivity are intensive properties of materials, giving the opposition of a standard cube of material to current. Electrical resistance and conductance are corresponding extensive properties that give the opposition of a specific object to electric current.
In an ideal case, cross-section and physical composition of the examined material are uniform across the sample, and the electric field and current density are both parallel and constant everywhere. Many resistors and conductors do in fact have a uniform cross section with a uniform flow of electric current, and are made of a single material, so that this is a good model. (See the adjacent diagram.) When this is the case, the resistance of the conductor is directly proportional to its length and inversely proportional to its cross-sectional area, where the electrical resistivity ρ (Greek: rho ) is the constant of proportionality. This is written as:
R ∝ ℓ A {\displaystyle R\propto {\frac {\ell }{A}}} R = ρ ℓ A ⇔ ρ = R A ℓ , {\displaystyle {\begin{aligned}R&=\rho {\frac {\ell }{A}}\\[3pt]{}\Leftrightarrow \rho &=R{\frac {A}{\ell }},\end{aligned}}}
where
The resistivity can be expressed using the SI unit ohm metre (Ω⋅m)—i.e. ohms multiplied by square metres (for the cross-sectional area) then divided by metres (for the length).
Both resistance and resistivity describe how difficult it is to make electrical current flow through a material, but unlike resistance, resistivity is an intrinsic property and does not depend on geometric properties of a material. This means that all pure copper (Cu) wires (which have not been subjected to distortion of their crystalline structure etc.), irrespective of their shape and size, have the same resistivity , but a long, thin copper wire has a much larger resistance than a thick, short copper wire. Every material has its own characteristic resistivity. For example, rubber has a far larger resistivity than copper.
In a hydraulic analogy , passing current through a high-resistivity material is like pushing water through a pipe full of sand - while passing current through a low-resistivity material is like pushing water through an empty pipe. If the pipes are the same size and shape, the pipe full of sand has higher resistance to flow. Resistance, however, is not solely determined by the presence or absence of sand. It also depends on the length and width of the pipe: short or wide pipes have lower resistance than narrow or long pipes.
The above equation can be transposed to get Pouillet's law (named after Claude Pouillet ):
R = ρ ℓ A . {\displaystyle R=\rho {\frac {\ell }{A}}.} The resistance of a given element is proportional to the length, but inversely proportional to the cross-sectional area. For example, if A = 1 m 2 , ℓ {\displaystyle \ell } = 1 m (forming a cube with perfectly conductive contacts on opposite faces), then the resistance of this element in ohms is numerically equal to the resistivity of the material it is made of in Ω⋅m.
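As a quick numerical illustration of Pouillet's law, the following sketch computes the resistance of a length of copper wire; the resistivity value and wire dimensions are typical textbook-style figures chosen for the example, not authoritative data.

```python
def resistance(resistivity, length, area):
    """Pouillet's law: R = rho * l / A (SI units: Ohm*m, m, m^2)."""
    return resistivity * length / area

rho_copper = 1.68e-8   # approximate resistivity of copper at 20 degC, Ohm*m
length = 100.0         # wire length, m
area = 1.5e-6          # cross-section of a 1.5 mm^2 conductor, m^2

print(resistance(rho_copper, length, area))  # ~1.12 Ohm
```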
Conductivity, σ , is the inverse of resistivity:
σ = 1 ρ . {\displaystyle \sigma ={\frac {1}{\rho }}.}
Conductivity has SI units of siemens per metre (S/m).
Conductivity, σ {\displaystyle \sigma } , is directly proportional to n μ n + p μ p {\displaystyle n\mu _{n}+p\mu _{p}}
σ = q ( n μ n + p μ p ) {\displaystyle \sigma =q(n\mu _{n}+p\mu _{p})}
where: n {\displaystyle n} = electron concentration, p {\displaystyle p} = hole concentration, μ n {\displaystyle \mu _{n}} = electron mobility, μ p {\displaystyle \mu _{p}} = hole mobility.
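For instance, the carrier-based expression above can be evaluated for roughly intrinsic silicon at room temperature; the carrier densities and mobilities below are approximate textbook values used only to show the orders of magnitude involved.

```python
q = 1.602e-19   # elementary charge, C
n = 1.0e16      # electron concentration, m^-3 (rough intrinsic Si value at ~300 K)
p = 1.0e16      # hole concentration, m^-3 (equal to n in the intrinsic case)
mu_n = 0.135    # electron mobility, m^2/(V*s) (approximate)
mu_p = 0.048    # hole mobility, m^2/(V*s) (approximate)

sigma = q * (n * mu_n + p * mu_p)   # conductivity, S/m
print(sigma, 1.0 / sigma)           # ~3e-4 S/m, i.e. a resistivity of a few thousand Ohm*m
```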
If the geometry is more complicated, or if the resistivity varies from point to point within the material, the current and electric field will be functions of position. Then it is necessary to use a more general expression in which the resistivity at a particular point is defined as the ratio of the electric field to the density of the current it creates at that point:
ρ ( x ) = E ( x ) J ( x ) , {\displaystyle \rho (x)={\frac {E(x)}{J(x)}},}
where
The current density is parallel to the electric field by necessity.
Conductivity is the inverse (reciprocal) of resistivity. Here, it is given by:
σ ( x ) = 1 ρ ( x ) = J ( x ) E ( x ) . {\displaystyle \sigma (x)={\frac {1}{\rho (x)}}={\frac {J(x)}{E(x)}}.}
For example, rubber is a material with large ρ and small σ — because even a very large electric field in rubber makes almost no current flow through it. On the other hand, copper is a material with small ρ and large σ — because even a small electric field pulls a lot of current through it.
This expression simplifies to the formula given above under "ideal case" when the resistivity is constant in the material and the geometry has a uniform cross-section. In this case, the electric field and current density are constant and parallel.
Assume the geometry has a uniform cross-section and the resistivity is constant in the material. Then the electric field and current density are constant and parallel, and by the general definition of resistivity, we obtain
ρ = E J , {\displaystyle \rho ={\frac {E}{J}},}
Since the electric field is constant, it is given by the total voltage V across the conductor divided by the length ℓ of the conductor:
E = V ℓ . {\displaystyle E={\frac {V}{\ell }}.}
Since the current density is constant, it is equal to the total current divided by the cross sectional area:
J = I A . {\displaystyle J={\frac {I}{A}}.}
Plugging in the values of E and J into the first expression, we obtain:
ρ = V A I ℓ . {\displaystyle \rho ={\frac {VA}{I\ell }}.}
Finally, we apply Ohm's law, V / I = R :
ρ = R A ℓ . {\displaystyle \rho =R{\frac {A}{\ell }}.}
When the resistivity of a material has a directional component, the most general definition of resistivity must be used. It starts from the tensor-vector form of Ohm's law , which relates the electric field inside a material to the electric current flow. This equation is completely general, meaning it is valid in all cases, including those mentioned above. However, this definition is the most complicated, so it is only directly used in anisotropic cases, where the more simple definitions cannot be applied. If the material is not anisotropic, it is safe to ignore the tensor-vector definition, and use a simpler expression instead.
Here, anisotropic means that the material has different properties in different directions. For example, a crystal of graphite consists microscopically of a stack of sheets, and current flows very easily through each sheet, but much less easily from one sheet to the adjacent one. [ 4 ] In such cases, the current does not flow in exactly the same direction as the electric field. Thus, the appropriate equations are generalized to the three-dimensional tensor form: [ 5 ] [ 6 ]
J = σ E ⇌ E = ρ J , {\displaystyle \mathbf {J} ={\boldsymbol {\sigma }}\mathbf {E} \,\,\rightleftharpoons \,\,\mathbf {E} ={\boldsymbol {\rho }}\mathbf {J} ,}
where the conductivity σ and resistivity ρ are rank-2 tensors , and electric field E and current density J are vectors. These tensors can be represented by 3×3 matrices, the vectors with 3×1 matrices, with matrix multiplication used on the right side of these equations. In matrix form, the resistivity relation is given by:
[ E x E y E z ] = [ ρ x x ρ x y ρ x z ρ y x ρ y y ρ y z ρ z x ρ z y ρ z z ] [ J x J y J z ] , {\displaystyle {\begin{bmatrix}E_{x}\\E_{y}\\E_{z}\end{bmatrix}}={\begin{bmatrix}\rho _{xx}&\rho _{xy}&\rho _{xz}\\\rho _{yx}&\rho _{yy}&\rho _{yz}\\\rho _{zx}&\rho _{zy}&\rho _{zz}\end{bmatrix}}{\begin{bmatrix}J_{x}\\J_{y}\\J_{z}\end{bmatrix}},}
where
Equivalently, resistivity can be given in the more compact Einstein notation :
E i = ρ i j J j . {\displaystyle \mathbf {E} _{i}={\boldsymbol {\rho }}_{ij}\mathbf {J} _{j}~.}
In either case, the resulting expression for each electric field component is:
E x = ρ x x J x + ρ x y J y + ρ x z J z , E y = ρ y x J x + ρ y y J y + ρ y z J z , E z = ρ z x J x + ρ z y J y + ρ z z J z . {\displaystyle {\begin{aligned}E_{x}&=\rho _{xx}J_{x}+\rho _{xy}J_{y}+\rho _{xz}J_{z},\\E_{y}&=\rho _{yx}J_{x}+\rho _{yy}J_{y}+\rho _{yz}J_{z},\\E_{z}&=\rho _{zx}J_{x}+\rho _{zy}J_{y}+\rho _{zz}J_{z}.\end{aligned}}}
Since the choice of the coordinate system is free, the usual convention is to simplify the expression by choosing an x -axis parallel to the current direction, so J y = J z = 0 . This leaves:
ρ x x = E x J x , ρ y x = E y J x , and ρ z x = E z J x . {\displaystyle \rho _{xx}={\frac {E_{x}}{J_{x}}},\quad \rho _{yx}={\frac {E_{y}}{J_{x}}},{\text{ and }}\rho _{zx}={\frac {E_{z}}{J_{x}}}.}
Conductivity is defined similarly: [ 7 ]
[ J x J y J z ] = [ σ x x σ x y σ x z σ y x σ y y σ y z σ z x σ z y σ z z ] [ E x E y E z ] {\displaystyle {\begin{bmatrix}J_{x}\\J_{y}\\J_{z}\end{bmatrix}}={\begin{bmatrix}\sigma _{xx}&\sigma _{xy}&\sigma _{xz}\\\sigma _{yx}&\sigma _{yy}&\sigma _{yz}\\\sigma _{zx}&\sigma _{zy}&\sigma _{zz}\end{bmatrix}}{\begin{bmatrix}E_{x}\\E_{y}\\E_{z}\end{bmatrix}}}
or
J i = σ i j E j , {\displaystyle \mathbf {J} _{i}={\boldsymbol {\sigma }}_{ij}\mathbf {E} _{j},}
both resulting in:
J x = σ x x E x + σ x y E y + σ x z E z J y = σ y x E x + σ y y E y + σ y z E z J z = σ z x E x + σ z y E y + σ z z E z . {\displaystyle {\begin{aligned}J_{x}&=\sigma _{xx}E_{x}+\sigma _{xy}E_{y}+\sigma _{xz}E_{z}\\J_{y}&=\sigma _{yx}E_{x}+\sigma _{yy}E_{y}+\sigma _{yz}E_{z}\\J_{z}&=\sigma _{zx}E_{x}+\sigma _{zy}E_{y}+\sigma _{zz}E_{z}\end{aligned}}.}
Looking at the two expressions, ρ {\displaystyle {\boldsymbol {\rho }}} and σ {\displaystyle {\boldsymbol {\sigma }}} are the matrix inverse of each other. However, in the most general case, the individual matrix elements are not necessarily reciprocals of one another; for example, σ xx may not be equal to 1/ ρ xx . This can be seen in the Hall effect , where ρ x y {\displaystyle \rho _{xy}} is nonzero. In the Hall effect, due to rotational invariance about the z -axis, ρ y y = ρ x x {\displaystyle \rho _{yy}=\rho _{xx}} and ρ y x = − ρ x y {\displaystyle \rho _{yx}=-\rho _{xy}} , so the relation between resistivity and conductivity simplifies to: [ 8 ]
σ x x = ρ x x ρ x x 2 + ρ x y 2 , σ x y = − ρ x y ρ x x 2 + ρ x y 2 . {\displaystyle \sigma _{xx}={\frac {\rho _{xx}}{\rho _{xx}^{2}+\rho _{xy}^{2}}},\quad \sigma _{xy}={\frac {-\rho _{xy}}{\rho _{xx}^{2}+\rho _{xy}^{2}}}.}
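The relation between the tensor components can be verified numerically by inverting a 2×2 resistivity matrix of the Hall-effect form; the numbers below are arbitrary illustrative values.

```python
import numpy as np

rho_xx, rho_xy = 2.0e-8, 5.0e-9      # illustrative values only, Ohm*m
rho = np.array([[rho_xx,  rho_xy],
                [-rho_xy, rho_xx]])   # Hall form: rho_yy = rho_xx, rho_yx = -rho_xy

sigma = np.linalg.inv(rho)            # the conductivity tensor is the matrix inverse

print(sigma[0, 0],  rho_xx / (rho_xx**2 + rho_xy**2))   # these two agree
print(sigma[0, 1], -rho_xy / (rho_xx**2 + rho_xy**2))   # and so do these
```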
If the electric field is parallel to the applied current, ρ x y {\displaystyle \rho _{xy}} and ρ x z {\displaystyle \rho _{xz}} are zero. When they are zero, one number, ρ x x {\displaystyle \rho _{xx}} , is enough to describe the electrical resistivity. It is then written as simply ρ {\displaystyle \rho } , and this reduces to the simpler expression.
Electric current is the ordered movement of electric charges . [ 2 ]
The relation between current density and the drift velocity of the charge carriers is governed by the equation
J → = q n v → d {\displaystyle {\vec {J}}=qn{\vec {v}}_{d}} ,
where
J → {\displaystyle {\vec {J}}} = current density (A/m²),
q {\displaystyle q} = charge of the carrier (C) — e.g., q = − 1.6 × 10 − 19 {\displaystyle q=-1.6\times 10^{-19}} C for electrons,
n {\displaystyle n} = number of charge carriers per unit volume (1/m³),
v → d {\displaystyle {\vec {v}}_{d}} = drift velocity (m/s) — the average velocity of charge carriers in the direction of the electric field.
This can be rearranged to show that, at constant current density, the drift velocity is inversely proportional to the density of charge carriers:
v → d = J → q n {\displaystyle {\vec {v}}_{d}={\frac {\vec {J}}{qn}}}
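A short calculation makes the scale of the drift velocity concrete; the copper carrier density and the current and cross-section below are assumed, order-of-magnitude values. Despite the tiny drift velocity, the huge carrier density gives a sizeable current density, consistent with the discussion of metals below.

```python
q = 1.602e-19   # elementary charge, C
n = 8.5e28      # free-electron density of copper, m^-3 (approximate)
I = 1.0         # assumed current, A
A = 1.0e-6      # assumed cross-section (1 mm^2), m^2

J = I / A             # current density, A/m^2
v_d = J / (q * n)     # drift velocity, m/s
print(J, v_d)         # ~1e6 A/m^2, yet only ~7e-5 m/s
```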
According to elementary quantum mechanics , an electron in an atom or crystal can only have certain precise energy levels; energies between these levels are impossible. When a large number of such allowed levels have close-spaced energy values—i.e. have energies that differ only minutely—those close energy levels in combination are called an "energy band". There can be many such energy bands in a material, depending on the atomic number of the constituent atoms [ a ] and their distribution within the crystal. [ b ]
The material's electrons seek to minimize the total energy in the material by settling into low energy states; however, the Pauli exclusion principle means that only one can exist in each such state. So the electrons "fill up" the band structure starting from the bottom. The characteristic energy level up to which the electrons have filled is called the Fermi level . The position of the Fermi level with respect to the band structure is very important for electrical conduction: Only electrons in energy levels near or above the Fermi level are free to move within the broader material structure, since the electrons can easily jump among the partially occupied states in that region. In contrast, the low energy states are completely filled with a fixed limit on the number of electrons at all times, and the high energy states are empty of electrons at all times.
Electric current consists of a flow of electrons. In metals there are many electron energy levels near the Fermi level, so there are many electrons available to move. This is what causes the high electronic conductivity of metals.
An important part of band theory is that there may be forbidden bands of energy: energy intervals that contain no energy levels. In insulators and semiconductors, the number of electrons is just the right amount to fill a certain integer number of low energy bands, exactly to the boundary. In this case, the Fermi level falls within a band gap. Since there are no available states near the Fermi level, and the electrons are not freely movable, the electronic conductivity is very low.
A metal consists of a lattice of atoms , each with an outer shell of electrons that freely dissociate from their parent atoms and travel through the lattice. This is also known as a positive ionic lattice. [ 9 ] This 'sea' of dissociable electrons allows the metal to conduct electric current. When an electrical potential difference (a voltage ) is applied across the metal, the resulting electric field causes electrons to drift towards the positive terminal. The actual drift velocity of electrons is typically small, on the order of magnitude of metres per hour. However, due to the sheer number of moving electrons, even a slow drift velocity results in a large current density . [ 10 ] The mechanism is similar to transfer of momentum of balls in a Newton's cradle [ 11 ] but the rapid propagation of an electric energy along a wire is not due to the mechanical forces, but the propagation of an energy-carrying electromagnetic field guided by the wire.
Most metals have electrical resistance. In simpler models (non quantum mechanical models) this can be explained by replacing electrons and the crystal lattice by a wave-like structure. When the electron wave travels through the lattice, the waves interfere , which causes resistance. The more regular the lattice is, the less disturbance happens and thus the less resistance. The amount of resistance is thus mainly caused by two factors. First, it is caused by the temperature and thus amount of vibration of the crystal lattice. Higher temperatures cause bigger vibrations, which act as irregularities in the lattice. Second, the purity of the metal is relevant as a mixture of different ions is also an irregularity. [ 12 ] [ 13 ] The small decrease in conductivity on melting of pure metals is due to the loss of long range crystalline order. The short range order remains and strong correlation between positions of ions results in coherence between waves diffracted by adjacent ions. [ 14 ]
In metals, the Fermi level lies in the conduction band (see Band Theory, above) giving rise to free conduction electrons. However, in semiconductors the position of the Fermi level is within the band gap, about halfway between the conduction band minimum (the bottom of the first band of unfilled electron energy levels) and the valence band maximum (the top of the band below the conduction band, of filled electron energy levels). That applies for intrinsic (undoped) semiconductors. This means that at absolute zero temperature, there would be no free conduction electrons, and the resistance is infinite. However, the resistance decreases as the charge carrier density (i.e., without introducing further complications, the density of electrons) in the conduction band increases. In extrinsic (doped) semiconductors, dopant atoms increase the majority charge carrier concentration by donating electrons to the conduction band or producing holes in the valence band. (A "hole" is a position where an electron is missing; such holes can behave in a similar way to electrons.) For both types of donor or acceptor atoms, increasing dopant density reduces resistance. Hence, highly doped semiconductors behave metallically. At very high temperatures, the contribution of thermally generated carriers dominates over the contribution from dopant atoms, and the resistance decreases exponentially with temperature.
In electrolytes , electrical conduction happens not by band electrons or holes, but by full atomic species ( ions ) traveling, each carrying an electrical charge. The resistivity of ionic solutions (electrolytes) varies tremendously with concentration – while distilled water is almost an insulator, salt water is a reasonable electrical conductor. Conduction in ionic liquids is also controlled by the movement of ions, but here we are talking about molten salts rather than solvated ions. In biological membranes , currents are carried by ionic salts. Small holes in cell membranes, called ion channels , are selective to specific ions and determine the membrane resistance.
The concentration of ions in a liquid (e.g., in an aqueous solution) depends on the degree of dissociation of the dissolved substance, characterized by a dissociation coefficient α {\displaystyle \alpha } , which is the ratio of the concentration of ions N {\displaystyle N} to the concentration of molecules of the dissolved substance N 0 {\displaystyle N_{0}} :
N = α N 0 . {\displaystyle N=\alpha N_{0}~.}
The specific electrical conductivity ( σ {\displaystyle \sigma } ) of a solution is equal to:
σ = q ( b + + b − ) α N 0 , {\displaystyle \sigma =q\left(b^{+}+b^{-}\right)\alpha N_{0}~,}
where q {\displaystyle q} : module of the ion charge, b + {\displaystyle b^{+}} and b − {\displaystyle b^{-}} : mobility of positively and negatively charged ions, N 0 {\displaystyle N_{0}} : concentration of molecules of the dissolved substance, α {\displaystyle \alpha } : the coefficient of dissociation.
The electrical resistivity of a metallic conductor decreases gradually as temperature is lowered. In normal (that is, non-superconducting) conductors, such as copper or silver , this decrease is limited by impurities and other defects. Even near absolute zero , a real sample of a normal conductor shows some resistance. In a superconductor, the resistance drops abruptly to zero when the material is cooled below its critical temperature. In a normal conductor, the current is driven by a voltage gradient, whereas in a superconductor, there is no voltage gradient and the current is instead related to the phase gradient of the superconducting order parameter. [ 15 ] A consequence of this is that an electric current flowing in a loop of superconducting wire can persist indefinitely with no power source. [ 16 ]
In a class of superconductors known as type II superconductors , including all known high-temperature superconductors , an extremely low but nonzero resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. The resistance due to this effect is tiny compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen so that the resistance of the material becomes truly zero.
Plasmas are very good conductors and electric potentials play an important role.
The potential as it exists on average in the space between charged particles, independent of the question of how it can be measured, is called the plasma potential , or space potential . If an electrode is inserted into a plasma, its potential generally lies considerably below the plasma potential, due to what is termed a Debye sheath . The good electrical conductivity of plasmas makes their electric fields very small. This results in the important concept of quasineutrality , which says the density of negative charges is approximately equal to the density of positive charges over large volumes of the plasma ( n e = ⟨Z⟩ n i ), but on the scale of the Debye length there can be charge imbalance. In the special case that double layers are formed, the charge separation can extend some tens of Debye lengths.
The magnitude of the potentials and electric fields must be determined by means other than simply finding the net charge density . A common example is to assume that the electrons satisfy the Boltzmann relation : n e ∝ exp ( e Φ / k B T e ) . {\displaystyle n_{\text{e}}\propto \exp \left(e\Phi /k_{\text{B}}T_{\text{e}}\right).}
Differentiating this relation provides a means to calculate the electric field from the density: E = − k B T e e ∇ n e n e . {\displaystyle \mathbf {E} =-{\frac {k_{\text{B}}T_{\text{e}}}{e}}{\frac {\nabla n_{\text{e}}}{n_{\text{e}}}}.}
(∇ is the vector gradient operator; see nabla symbol and gradient for more information.)
It is possible to produce a plasma that is not quasineutral. An electron beam, for example, has only negative charges. The density of a non-neutral plasma must generally be very low, or the plasma must be very small. Otherwise, the repulsive electrostatic force dissipates it.
In astrophysical plasmas, Debye screening prevents electric fields from directly affecting the plasma over large distances, i.e., greater than the Debye length . However, the existence of charged particles causes the plasma to generate, and be affected by, magnetic fields . This can and does cause extremely complex behavior, such as the generation of plasma double layers, an object that separates charge over a few tens of Debye lengths . The dynamics of plasmas interacting with external and self-generated magnetic fields are studied in the academic discipline of magnetohydrodynamics .
Plasma is often called the fourth state of matter after solids, liquids and gases. [ 18 ] [ 19 ] It is distinct from these and other lower-energy states of matter . Although it is closely related to the gas phase in that it also has no definite form or volume, it differs from the gas phase in a number of ways.
The degree of semiconductor doping makes a large difference in conductivity. To a point, more doping leads to higher conductivity. The conductivity of a water / aqueous solution is highly dependent on its concentration of dissolved salts and other chemical species that ionize in the solution. Electrical conductivity of water samples is used as an indicator of how salt-free, ion-free, or impurity-free the sample is; the purer the water, the lower the conductivity (the higher the resistivity). Conductivity measurements in water are often reported as specific conductance , relative to the conductivity of pure water at 25 °C . An EC meter is normally used to measure conductivity in a solution.
This table shows the resistivity ( ρ ), conductivity and temperature coefficient of various materials at 20 °C (68 °F; 293 K).
The effective temperature coefficient varies with temperature and purity level of the material. The 20 °C value is only an approximation when used at other temperatures. For example, the coefficient becomes lower at higher temperatures for copper, and the value 0.00427 is commonly specified at 0 °C . [ 53 ]
The extremely low resistivity (high conductivity) of silver is characteristic of metals. George Gamow tidily summed up the nature of the metals' dealings with electrons in his popular science book One, Two, Three...Infinity (1947):
The metallic substances differ from all other materials by the fact that the outer shells of their atoms are bound rather loosely, and often let one of their electrons go free. Thus the interior of a metal is filled up with a large number of unattached electrons that travel aimlessly around like a crowd of displaced persons. When a metal wire is subjected to electric force applied on its opposite ends, these free electrons rush in the direction of the force, thus forming what we call an electric current.
More technically, the free electron model gives a basic description of electron flow in metals.
Wood is widely regarded as an extremely good insulator, but its resistivity is sensitively dependent on moisture content, with damp wood being a factor of at least 10¹⁰ worse insulator than oven-dry. [ 46 ] In any case, a sufficiently high voltage – such as that in lightning strikes or some high-tension power lines – can lead to insulation breakdown and electrocution risk even with apparently dry wood. [ citation needed ]
The electrical resistivity of most materials changes with temperature. If the temperature T does not vary too much, a linear approximation is typically used: ρ ( T ) = ρ 0 [ 1 + α ( T − T 0 ) ] , {\displaystyle \rho (T)=\rho _{0}[1+\alpha (T-T_{0})],}
where α {\displaystyle \alpha } is called the temperature coefficient of resistivity , T 0 {\displaystyle T_{0}} is a fixed reference temperature (usually room temperature), and ρ 0 {\displaystyle \rho _{0}} is the resistivity at temperature T 0 {\displaystyle T_{0}} . The parameter α {\displaystyle \alpha } is an empirical parameter fitted from measurement data , equal to 1/ κ {\displaystyle \kappa } [ clarify ] . Because the linear approximation is only an approximation, α {\displaystyle \alpha } is different for different reference temperatures. For this reason it is usual to specify the temperature that α {\displaystyle \alpha } was measured at with a suffix, such as α 15 {\displaystyle \alpha _{15}} , and the relationship only holds in a range of temperatures around the reference. [ 54 ] When the temperature varies over a large temperature range, the linear approximation is inadequate and a more detailed analysis and understanding should be used.
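As a worked example of the linear approximation, the sketch below extrapolates the resistivity of copper from 20 °C to 100 °C; the reference resistivity and the coefficient α are approximate handbook-style values, not exact constants.

```python
def rho_linear(rho_0, alpha, T, T_0=20.0):
    """Linear model: rho(T) = rho_0 * (1 + alpha * (T - T_0)), temperatures in degC."""
    return rho_0 * (1.0 + alpha * (T - T_0))

rho_20 = 1.68e-8   # approximate resistivity of copper at 20 degC, Ohm*m
alpha = 3.9e-3     # approximate temperature coefficient referenced to 20 degC, 1/K

print(rho_linear(rho_20, alpha, 100.0))   # ~2.2e-8 Ohm*m at 100 degC
```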
In general, electrical resistivity of metals increases with temperature. Electron– phonon interactions can play a key role. At high temperatures, the resistance of a metal increases linearly with temperature. As the temperature of a metal is reduced, the temperature dependence of resistivity follows a power law function of temperature. Mathematically the temperature dependence of the resistivity ρ of a metal can be approximated through the Bloch–Grüneisen formula: [ 55 ]
ρ ( T ) = ρ ( 0 ) + A ( T Θ R ) n ∫ 0 Θ R / T x n ( e x − 1 ) ( 1 − e − x ) d x , {\displaystyle \rho (T)=\rho (0)+A\left({\frac {T}{\Theta _{R}}}\right)^{n}\int _{0}^{\Theta _{R}/T}{\frac {x^{n}}{(e^{x}-1)(1-e^{-x})}}\,dx,}
where ρ ( 0 ) {\displaystyle \rho (0)} is the residual resistivity due to defect scattering, A is a constant that depends on the velocity of electrons at the Fermi surface , the Debye radius and the number density of electrons in the metal. Θ R {\displaystyle \Theta _{R}} is the Debye temperature as obtained from resistivity measurements and matches very closely with the values of Debye temperature obtained from specific heat measurements. n is an integer that depends upon the nature of the interaction: n = 5 implies that the resistance is due to scattering of electrons by phonons (as it is for simple metals); n = 3 implies that the resistance is due to s–d electron scattering (as is the case for transition metals); n = 2 implies that the resistance is due to electron–electron interaction.
The Bloch–Grüneisen formula is an approximation obtained assuming that the studied metal has spherical Fermi surface inscribed within the first Brillouin zone and a Debye phonon spectrum . [ 56 ]
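The Bloch–Grüneisen integral has no closed form, but it is straightforward to evaluate numerically. The sketch below does so with purely illustrative parameters (the residual resistivity, the prefactor A and the Debye temperature are not fitted to any particular metal), using n = 5, the value appropriate to ordinary phonon scattering.

```python
import numpy as np
from scipy.integrate import quad

def bloch_gruneisen(T, rho0, A, theta_R, n=5):
    """rho(T) = rho0 + A*(T/theta_R)^n * integral_0^{theta_R/T} x^n / ((e^x - 1)(1 - e^-x)) dx."""
    integrand = lambda x: x**n / ((np.exp(x) - 1.0) * (1.0 - np.exp(-x)))
    integral, _ = quad(integrand, 0.0, theta_R / T)
    return rho0 + A * (T / theta_R)**n * integral

# Illustrative parameters only; theta_R ~ 320 K is in the range typical of many metals.
for T in (50.0, 100.0, 300.0):
    print(T, bloch_gruneisen(T, rho0=1.0e-10, A=1.0e-7, theta_R=320.0))
```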
If more than one source of scattering is simultaneously present, Matthiessen's rule (first formulated by Augustus Matthiessen in the 1860s) [ 57 ] [ 58 ] states that the total resistance can be approximated by adding up several different terms, each with the appropriate value of n .
As the temperature of the metal is sufficiently reduced (so as to 'freeze' all the phonons), the resistivity usually reaches a constant value, known as the residual resistivity . This value depends not only on the type of metal, but on its purity and thermal history. The value of the residual resistivity of a metal is decided by its impurity concentration. Some materials lose all electrical resistivity at sufficiently low temperatures, due to an effect known as superconductivity .
An investigation of the low-temperature resistivity of metals was the motivation to Heike Kamerlingh Onnes 's experiments that led in 1911 to discovery of superconductivity . For details see History of superconductivity .
The Wiedemann–Franz law states that for materials where heat and charge transport is dominated by electrons, the ratio of thermal to electrical conductivity is proportional to the temperature:
κ σ = π 2 3 ( k e ) 2 T , {\displaystyle {\kappa \over \sigma }={\pi ^{2} \over 3}\left({\frac {k}{e}}\right)^{2}T,}
where κ {\displaystyle \kappa } is the thermal conductivity , k {\displaystyle k} is the Boltzmann constant , e {\displaystyle e} is the electron charge, T {\displaystyle T} is temperature, and σ {\displaystyle \sigma } is the electric conductivity . The ratio on the rhs is called the Lorenz number.
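A small calculation gives the theoretical value of the Lorenz number and a rough cross-check against copper; the conductivity figure used for copper is an approximate room-temperature value included only for illustration.

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
e = 1.602176634e-19    # elementary charge, C

L = (math.pi**2 / 3.0) * (k_B / e)**2
print(L)               # ~2.44e-8 W*Ohm/K^2, the theoretical Lorenz number

sigma_cu = 5.96e7      # approximate electrical conductivity of copper, S/m
T = 300.0              # temperature, K
print(L * sigma_cu * T)  # predicted thermal conductivity ~440 W/(m*K), close to the measured ~400
```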
In general, intrinsic semiconductor resistivity decreases with increasing temperature. The electrons are bumped to the conduction energy band by thermal energy, where they flow freely, and in doing so leave behind holes in the valence band , which also flow freely. The electric resistance of a typical intrinsic (non doped) semiconductor decreases exponentially with temperature following an Arrhenius model :
ρ = ρ 0 e E A k B T . {\displaystyle \rho =\rho _{0}e^{\frac {E_{A}}{k_{B}T}}.}
An even better approximation of the temperature dependence of the resistivity of a semiconductor is given by the Steinhart–Hart equation :
1 T = A + B ln ρ + C ( ln ρ ) 3 , {\displaystyle {\frac {1}{T}}=A+B\ln \rho +C(\ln \rho )^{3},}
where A , B and C are the so-called Steinhart–Hart coefficients .
This equation is used to calibrate thermistors .
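In practice the calibration amounts to solving a small linear system: three (temperature, resistance) pairs determine A , B and C . The sketch below does this for a hypothetical 10 kΩ NTC thermistor (the calibration points are made up but plausible); note that, as is usual for thermistors, it works with resistance readings rather than resistivity.

```python
import numpy as np

def fit_steinhart_hart(points):
    """Solve 1/T = A + B*ln(R) + C*ln(R)^3 for A, B, C from three (T_kelvin, R_ohm) pairs."""
    T = np.array([t for t, _ in points])
    lnR = np.log([r for _, r in points])
    M = np.column_stack([np.ones(3), lnR, lnR**3])
    return np.linalg.solve(M, 1.0 / T)

# Hypothetical calibration data for a 10 kOhm NTC thermistor.
points = [(273.15, 32650.0), (298.15, 10000.0), (323.15, 3603.0)]
A, B, C = fit_steinhart_hart(points)

R = 5000.0
T = 1.0 / (A + B * np.log(R) + C * np.log(R)**3)
print(A, B, C, T)   # T comes out around 314-315 K for this made-up part
```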
Extrinsic (doped) semiconductors have a far more complicated temperature profile. As temperature increases starting from absolute zero they first decrease steeply in resistance as the carriers leave the donors or acceptors. After most of the donors or acceptors have lost their carriers, the resistance starts to increase again slightly due to the reducing mobility of carriers (much as in a metal). At higher temperatures, they behave like intrinsic semiconductors as the carriers from the donors/acceptors become insignificant compared to the thermally generated carriers. [ 59 ]
In non-crystalline semiconductors, conduction can occur by charges quantum tunnelling from one localised site to another. This is known as variable range hopping and has the characteristic form of ρ = A exp ( T − 1 / n ) , {\displaystyle \rho =A\exp \left(T^{-1/n}\right),}
where n = 2, 3, 4, depending on the dimensionality of the system.
Kondo insulators are materials where the resistivity follows the formula
ρ ( T ) = ρ 0 + a T 2 + b T 5 + c m ln ⁡ μ T {\displaystyle \rho (T)=\rho _{0}+aT^{2}+bT^{5}+c_{m}\ln {\frac {\mu }{T}}}
where a {\displaystyle a} , b {\displaystyle b} , c m {\displaystyle c_{m}} and μ {\displaystyle \mu } are constant parameters, ρ 0 {\displaystyle \rho _{0}} the residual resistivity, T 2 {\displaystyle T^{2}} the Fermi liquid contribution, T 5 {\displaystyle T^{5}} a lattice vibrations term and ln 1 T {\displaystyle \ln {\frac {1}{T}}} the Kondo effect .
When analyzing the response of materials to alternating electric fields ( dielectric spectroscopy ), [ 60 ] in applications such as electrical impedance tomography , [ 61 ] it is convenient to replace resistivity with a complex quantity called impedivity (in analogy to electrical impedance ). Impedivity is the sum of a real component, the resistivity, and an imaginary component, the reactivity (in analogy to reactance ). The magnitude of impedivity is the square root of sum of squares of magnitudes of resistivity and reactivity.
Conversely, in such cases the conductivity must be expressed as a complex number (or even as a matrix of complex numbers, in the case of anisotropic materials) called the admittivity . Admittivity is the sum of a real component called the conductivity and an imaginary component called the susceptivity .
An alternative description of the response to alternating currents uses a real (but frequency-dependent) conductivity, along with a real permittivity . The larger the conductivity is, the more quickly the alternating-current signal is absorbed by the material (i.e., the more opaque the material is). For details, see Mathematical descriptions of opacity .
Even if the material's resistivity is known, calculating the resistance of something made from it may, in some cases, be much more complicated than the formula R = ρ ℓ / A {\displaystyle R=\rho \ell /A} above. One example is spreading resistance profiling , where the material is inhomogeneous (different resistivity in different places), and the exact paths of current flow are not obvious.
In cases like this, the formulas J = σ E ⇌ E = ρ J {\displaystyle J=\sigma E\,\,\rightleftharpoons \,\,E=\rho J}
must be replaced with J ( r ) = σ ( r ) E ( r ) ⇌ E ( r ) = ρ ( r ) J ( r ) , {\displaystyle \mathbf {J} (\mathbf {r} )=\sigma (\mathbf {r} )\mathbf {E} (\mathbf {r} )\,\,\rightleftharpoons \,\,\mathbf {E} (\mathbf {r} )=\rho (\mathbf {r} )\mathbf {J} (\mathbf {r} ),}
where E and J are now vector fields . This equation, along with the continuity equation for J and the Poisson's equation for E , form a set of partial differential equations . In special cases, an exact or approximate solution to these equations can be worked out by hand, but for very accurate answers in complex cases, computer methods like finite element analysis may be required.
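In the simplest inhomogeneous case, where the resistivity varies only along the direction of current flow and the cross-section stays uniform, the general formulation reduces to integrating ρ(x)/A along the length, i.e. summing infinitesimal resistances in series. A minimal numerical sketch, with an assumed (made-up) resistivity profile:

```python
import numpy as np

def resistance_1d(rho_of_x, length, area, n=100_000):
    """Approximate R = integral_0^L rho(x)/A dx by a midpoint Riemann sum."""
    dx = length / n
    x = (np.arange(n) + 0.5) * dx
    return float(np.sum(rho_of_x(x)) * dx / area)

# Hypothetical graded bar: resistivity ramps linearly from 1e-8 to 5e-8 Ohm*m over 0.1 m.
rho = lambda x: 1.0e-8 + 4.0e-8 * (x / 0.1)
print(resistance_1d(rho, length=0.1, area=1.0e-6))   # ~3e-3 Ohm (the average rho is 3e-8)
```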
In some applications where the weight of an item is very important, the product of resistivity and density is more important than absolute low resistivity – it is often possible to make the conductor thicker to make up for a higher resistivity; and then a low-resistivity-density-product material (or equivalently a high conductivity-to-density ratio) is desirable. For example, for long-distance overhead power lines , aluminium is frequently used rather than copper (Cu) because it is lighter for the same conductance.
Silver, although it is the least resistive metal known, has a high density and performs similarly to copper by this measure, but is much more expensive. Calcium and the alkali metals have the best resistivity-density products, but are rarely used for conductors due to their high reactivity with water and oxygen (and lack of physical strength). Aluminium is far more stable. Toxicity excludes the choice of beryllium. [ 62 ] (Pure beryllium is also brittle.) Thus, aluminium is usually the metal of choice when the weight or cost of a conductor is the driving consideration.
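The comparison can be made explicit with approximate room-temperature resistivities and densities; the figures below are rounded handbook-style values used only to illustrate the ranking. With these figures aluminium's product is roughly half of copper's, while silver's is slightly worse than copper's, in line with the discussion above.

```python
# (resistivity in Ohm*m, density in kg/m^3); approximate values for illustration only.
materials = {
    'copper':    (1.68e-8, 8960.0),
    'aluminium': (2.65e-8, 2700.0),
    'silver':    (1.59e-8, 10490.0),
}

for name, (rho, density) in materials.items():
    # A lower product means a lighter conductor for the same resistance and length.
    print(f'{name:10s} {rho * density:.2e}')
```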
In a 1774 letter to Dutch-born British scientist Jan Ingenhousz , Benjamin Franklin relates an experiment by another British scientist, John Walsh , that purportedly showed this astonishing fact: Although rarified air conducts electricity better than common air, a vacuum does not conduct electricity at all. [ 63 ]
Mr. Walsh ... has just made a curious Discovery in Electricity. You know we find that in rarify’d Air it would pass more freely, and leap thro’ greater Spaces than in dense Air; and thence it was concluded that in a perfect Vacuum it would pass any distance without the least Obstruction. But having made a perfect Vacuum by means of boil’d Mercury in a long Torricellian bent Tube, its Ends immers’d in Cups full of Mercury, he finds that the Vacuum will not conduct at all, but resists the Passage of the Electric Fluid absolutely.
However, to this statement a note (based on modern knowledge) was added by the editors—at the American Philosophical Society and Yale University—of the webpage hosting the letter: [ 63 ]
We can only assume that something was wrong with Walsh’s findings. ... Although the conductivity of a gas, as it approaches a vacuum, increases up to a point and then decreases, that point is far beyond what the technique described might have been expected to reach. Boiling replaced the air with mercury vapor, which as it cooled created a vacuum that could scarcely have been complete enough to decrease, let alone eliminate, the vapor’s conductivity. | https://en.wikipedia.org/wiki/Ohm_metre |
The Ohmer fare register was, in various models, a mechanical device for registering and recording the fares of passengers on streetcars , buses and taxis in the early 20th century. It was invented and improved by members and employees of the Ohmer family of Dayton, Ohio, especially John F. Ohmer who founded the Ohmer Fare Register Company in 1898, [ 1 ] and his brother Wilfred I. Ohmer of the Recording and Computing Machines Company of Dayton, Ohio . This latter company employed up to 9,000 people at one time and was a major manufacturer of precision equipment during World War I . [ 2 ] It was subsequently renamed the Ohmer Corporation and in 1949, acquired by Rockwell Manufacturing Company . [ 3 ]
Fare registers on city buses were replaced by fare boxes by the middle of the 20th century, and today by ticket or card machines. Ohmer fare registers can be found in use and on display at trolley museums throughout the U.S.
A station on the Sacramento Northern line through Concord, California , was called "Ohmer", named for the Ohmer company and its fare register. [ 4 ] The site is now occupied by the North Concord/Martinez Station of the Bay Area Rapid Transit system. [ 5 ] [ 6 ]
Media related to Ohmer fare register at Wikimedia Commons | https://en.wikipedia.org/wiki/Ohmer_fare_register |
An ohmic contact is a non- rectifying electrical junction : a junction between two conductors that has a linear current–voltage (I–V) curve as with Ohm's law . Low-resistance ohmic contacts are used to allow charge to flow easily in both directions between the two conductors, without blocking due to rectification or excess power dissipation due to voltage thresholds.
By contrast, a junction or contact that does not demonstrate a linear I–V curve is called non-ohmic. Non-ohmic contacts come in a number of forms, such as p–n junction , Schottky barrier , rectifying heterojunction , or breakdown junction.
Generally the term "ohmic contact" implicitly refers to an ohmic contact of a metal to a semiconductor, where achieving ohmic contact resistance is possible but requires careful technique. Metal–metal ohmic contacts are relatively simpler to make, by ensuring direct contact between the metals without intervening layers of insulating contamination, excessive roughness or oxidation ; various techniques are used to create ohmic metal–metal junctions ( soldering , welding , crimping , deposition , electroplating , etc.). This article focuses on metal–semiconductor ohmic contacts.
Stable contacts at semiconductor interfaces, with low contact resistance and linear I–V behavior, are critical for the performance and reliability of semiconductor devices , and their preparation and characterization are major efforts in circuit fabrication. Poorly prepared junctions to semiconductors can easily show rectifying behaviour by causing depletion of the semiconductor near the junction, rendering the device useless by blocking the flow of charge between those devices and the external circuitry. Ohmic contacts to semiconductors are typically constructed by depositing thin metal films of a carefully chosen composition, possibly followed by annealing to alter the semiconductor–metal bond.
Both ohmic contacts and Schottky barriers are dependent on the Schottky barrier height, which sets the threshold for the excess energy an electron requires to pass from the semiconductor to the metal. For the junction to admit electrons easily in both directions (ohmic contact), the barrier height must be small in at least some parts of the junction surface. To form an excellent ohmic contact (low resistance), the barrier height should be small everywhere and furthermore the interface should not reflect electrons.
The Schottky barrier height between a metal and semiconductor is naively predicted by the Schottky–Mott rule to be proportional to the difference of the metal-vacuum work function and the semiconductor-vacuum electron affinity . In practice, most metal–semiconductor interfaces do not follow this rule to the predicted degree. Instead, the chemical termination of the semiconductor crystal against a metal creates electron states within its band gap . The nature of these metal-induced gap states and their occupation by electrons tends to pin the center of the band gap to the Fermi level, an effect known as Fermi level pinning . Thus, the heights of the Schottky barriers in metal–semiconductor contacts often show little dependence on the value of the semiconductor or metal work functions, in stark contrast to the Schottky–Mott rule. [ 1 ] Different semiconductors exhibit this Fermi level pinning to different degrees, but a technological consequence is that high quality (low resistance) ohmic contacts are usually difficult to form in important semiconductors such as silicon and gallium arsenide .
The Schottky–Mott rule is not entirely incorrect since, in practice, metals with high work functions form the best contacts to p-type semiconductors, while those with low work functions form the best contacts to n-type semiconductors. Unfortunately experiments have shown that the predictive power of the model doesn't extend much beyond this statement. Under realistic conditions, contact metals may react with semiconductor surfaces to form a compound with new electronic properties. A contamination layer at the interface may effectively widen the barrier. The surface of the semiconductor may reconstruct leading to a new electronic state. The dependence of contact resistance on the details of the interfacial chemistry is what makes the reproducible fabrication of ohmic contacts such a manufacturing challenge.
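To give a sense of the numbers, the sketch below applies the naive Schottky–Mott estimate (barrier height ≈ metal work function minus semiconductor electron affinity, for an n-type semiconductor) using commonly quoted approximate values; the point is that measured barriers on silicon cluster far more tightly than these predictions because of the Fermi-level pinning discussed above.

```python
def schottky_mott_barrier(work_function_eV, electron_affinity_eV):
    """Naive Schottky-Mott barrier height for a metal on an n-type semiconductor, in eV."""
    return work_function_eV - electron_affinity_eV

chi_si = 4.05   # approximate electron affinity of silicon, eV

# Approximate metal work functions, eV (values vary with surface and measurement method).
for metal, wf in (('Al', 4.1), ('Ti', 4.3), ('Au', 5.1)):
    print(metal, round(schottky_mott_barrier(wf, chi_si), 2))
# Predicted barriers span roughly 0.05-1.05 eV, yet measured barriers on n-type Si are far
# less spread out, illustrating how pinning overrides the simple rule.
```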
The fabrication of the ohmic contacts is a much-studied part of materials engineering that nonetheless remains something of an art. The reproducible, reliable fabrication of contacts relies on extreme cleanliness of the semiconductor surface. Since a native oxide rapidly forms on the surface of silicon , for example, the performance of a contact can depend sensitively on the details of preparation.
Often the contact region is heavily doped to ensure the type of contact wanted. As a rule, ohmic contacts on semiconductors form more easily when the semiconductor is highly doped near the junction; high doping narrows the depletion region at the interface and allows electrons to flow easily in both directions at any bias by tunneling through the barrier.
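To make the effect of doping concrete, here is a minimal sketch using the standard abrupt-junction depletion-width formula W = sqrt(2·ε·V_bi / (q·N_D)); the 0.7 V built-in potential is an assumed illustrative value, not a figure from the text.

```python
from math import sqrt

# Abrupt-junction depletion width W = sqrt(2*eps*V_bi / (q*N_d)).
# Heavier doping gives a thinner barrier that carriers can tunnel through.
Q = 1.602e-19               # elementary charge, C
EPS_SI = 11.7 * 8.854e-12   # permittivity of silicon, F/m

def depletion_width_m(n_d_per_cm3, v_bi_volts=0.7):
    n_d = n_d_per_cm3 * 1e6          # convert cm^-3 to m^-3
    return sqrt(2 * EPS_SI * v_bi_volts / (Q * n_d))

for doping in (1e16, 1e18, 1e20):    # donor concentrations in cm^-3
    print(f"N_d = {doping:.0e} cm^-3 -> W = {depletion_width_m(doping)*1e9:.1f} nm")
# Roughly 300 nm, 30 nm and 3 nm: only the heavily doped case is thin
# enough for efficient tunneling, i.e. ohmic behaviour.
```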
The fundamental steps in contact fabrication are semiconductor surface cleaning, contact metal deposition, patterning and annealing. Surface cleaning may be performed by sputter-etching, chemical etching, reactive gas etching or ion milling. For example, the native oxide of silicon may be removed with a hydrofluoric acid dip, while GaAs is more typically cleaned by a bromine-methanol dip. After cleaning, metals are deposited via sputter deposition , evaporation or chemical vapor deposition (CVD). Sputtering is a faster and more convenient method of metal deposition than evaporation but the ion bombardment from the plasma may induce surface states or even invert the charge carrier type at the surface. For this reason the gentler but still rapid CVD may be used. Post-deposition annealing of contacts is useful for relieving stress as well as for inducing any desirable reactions between the metal and the semiconductor.
Because deposited metals can themselves react in ambient conditions, to the detriment of the contacts' electrical properties, it is common to form ohmic contacts with layered structures, with the bottom layer, in contact with the semiconductor, chosen for its ability to induce ohmic behaviour. A diffusion barrier-layer may be used to prevent the layers from mixing during any annealing process.
The measurement of contact resistance is most simply performed using a four-point probe , although for a more accurate determination use of the transmission line method is typical.
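A sketch of how the transmission line method mentioned above might be evaluated: the total resistance between contact pads is fitted linearly against pad spacing, the intercept giving twice the contact resistance and the slope giving the sheet resistance. The pad width and resistance readings below are invented for illustration, not measured data.

```python
import numpy as np

# Transmission line method (TLM): R_total(d) = 2*R_c + R_sheet * d / W,
# so a linear fit of resistance versus pad spacing d gives
# slope = R_sheet / W and intercept = 2 * R_c.

W = 100e-6                                            # pad width in metres (assumed)
spacings = np.array([5, 10, 20, 40, 80]) * 1e-6       # pad gaps in metres (assumed)
resistances = np.array([2.1, 3.0, 4.9, 8.8, 16.5])    # measured ohms (illustrative)

slope, intercept = np.polyfit(spacings, resistances, 1)
r_contact = intercept / 2
r_sheet = slope * W
print(f"Contact resistance per pad: {r_contact:.2f} ohm")
print(f"Sheet resistance: {r_sheet:.1f} ohm/square")
```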
Aluminum was originally the most important contact metal for silicon and was used with both n-type and p-type semiconductors. As with other reactive metals, Al contributes to contact formation by consuming oxygen from native silicon-dioxide residue. Pure aluminum, however, reacted with the silicon, so it was replaced by silicon-doped aluminum and eventually by silicides, which are less prone to diffuse during subsequent high-temperature processing.
Modern ohmic contacts to silicon such as titanium-tungsten disilicide are usually silicides made by CVD. Contacts are often made by depositing the transition metal and forming the silicide by annealing with the result that the silicide may be non-stoichiometric . Silicide contacts can also be deposited by direct sputtering of the compound or by ion implantation of the transition metal followed by annealing.
Formation of contacts to compound semiconductors is considerably more difficult than with silicon. For example, GaAs surfaces tend to lose arsenic and the trend towards As loss can be considerably exacerbated by the deposition of metal. In addition, the volatility of As limits the amount of post-deposition annealing that GaAs devices will tolerate. One solution for GaAs and other compound semiconductors is to deposit a low-bandgap alloy contact layer as opposed to a heavily doped layer. For example, GaAs itself has a smaller bandgap than AlGaAs and so a layer of GaAs near its surface can promote ohmic behavior. In general the technology of ohmic contacts for III-V and II-VI semiconductors is much less developed than for Si.
Transparent or semi-transparent contacts are necessary for active matrix LCD displays , optoelectronic devices such as laser diodes , and photovoltaics . The most popular choice is indium tin oxide (ITO), a transparent conducting oxide that is formed by reactive sputtering of an In-Sn target in an oxidizing atmosphere.
The RC time constant associated with contact resistance can limit the frequency response of devices. Charging and discharging through the lead resistance is a major cause of power dissipation in high-clock-rate digital electronics. Contact resistance causes power dissipation by Joule heating in low-frequency and analog circuits (for example, solar cells ) made from less common semiconductors. The establishment of a contact fabrication methodology is a critical part of the technological development of any new semiconductor. Electromigration and delamination at contacts also limit the lifetime of electronic devices. | https://en.wikipedia.org/wiki/Ohmic_contact |
The Ohnesorge number ( Oh ) is a dimensionless number that relates the viscous forces to inertial and surface tension forces. The number was defined by Wolfgang von Ohnesorge in his 1936 doctoral thesis. [ 1 ] [ 2 ]
It is defined as:
Oh = μ / √(ρ σ L) = √We / Re
where μ is the dynamic viscosity of the liquid, ρ is the density of the liquid, σ is the surface tension, L is the characteristic length scale (typically the drop diameter), We is the Weber number and Re is the Reynolds number.
The Ohnesorge number for a 3 mm diameter rain drop is typically ~0.002. Larger Ohnesorge numbers indicate a greater influence of the viscosity.
The Ohnesorge number is often used in free surface fluid dynamics, such as the dispersion of liquids in gases and in spray technology. [ 3 ] [ 4 ]
In inkjet printing , liquids whose Ohnesorge number is in the range 0.1 < Oh < 1.0 are jettable (1 < Z < 10, where Z is the reciprocal of the Ohnesorge number). [ 1 ] [ 5 ]
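A short worked check of the figures quoted above, using the definition Oh = μ/√(ρσL). The fluid properties (water for the rain drop, and a made-up ink with an assumed 30 µm nozzle) are illustrative assumptions, not measured values.

```python
from math import sqrt

def ohnesorge(mu, rho, sigma, L):
    """Oh = mu / sqrt(rho * sigma * L), all quantities in SI units."""
    return mu / sqrt(rho * sigma * L)

# A 3 mm water drop (room-temperature water properties, assumed):
oh_rain = ohnesorge(mu=1.0e-3, rho=1000.0, sigma=0.072, L=3e-3)
print(f"Rain drop: Oh ~ {oh_rain:.4f}")          # ~0.002, as quoted above

# Inkjet jettability check: 0.1 < Oh < 1.0, i.e. 1 < Z < 10 with Z = 1/Oh.
oh_ink = ohnesorge(mu=4e-3, rho=1050.0, sigma=0.035, L=30e-6)  # assumed ink, 30 um nozzle
Z = 1 / oh_ink
print(f"Ink: Oh = {oh_ink:.2f}, Z = {Z:.1f}, jettable = {0.1 < oh_ink < 1.0}")
```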
| https://en.wikipedia.org/wiki/Ohnesorge_number |
Ohno's law, proposed by the Japanese-American biologist Susumu Ohno , states that the gene content of the mammalian X chromosome has been conserved across species, not only in its DNA content but also in the genes themselves. That is, nearly all mammalian species have retained the X chromosome inherited from the primordial X chromosome of their common ancestor. [ 1 ]
Mammalian X chromosomes in various species, including human and mouse , have nearly the same size, with the content of about 5% of the genome. Additionally, for individual gene loci , a number of X-linked genes are common through mammalian species. Examples include glucose-6-phosphate dehydrogenase (G6PD), and the genes for Factor VIII and Factor IX . Moreover, no instances were found where an X-linked gene in one species was located on an autosome in the other species. [ 1 ]
The content of a chromosome is normally changed mainly by mutation following chromosome duplication and by translocation with other chromosomes. In mammals, however, because the chromosomal sex-determination mechanism was established early in their evolution , polyploidy would not have occurred, being incompatible with that sex-determining mechanism. Moreover, X-autosome translocations would have been prohibited because they might have had detrimental effects on the survival of the organism. Thus in mammals the content of the X chromosome has been conserved since the typical two rounds of duplication events at early ancestral stages of evolution, at the fish or amphibian level ( 2R hypothesis ). [ 1 ] [ 2 ]
Genes on the long arm of the human X are contained in the monotreme X, and genes on the short arm of the human X are distributed on the autosomes of marsupials. [ 3 ] Ohno commented on this result that monotremes and marsupials are not considered to be ancestors of true mammals, but diverged very early from the main mammalian line. [ 4 ] The chloride channel gene ( CLCN4 ) was mapped to the human X but to chromosome 7 of C57BL/6 mice, a strain of Mus musculus , though the gene is located on the X of Mus spretus and of the rat . [ 5 ] | https://en.wikipedia.org/wiki/Ohno's_law |
The Oil, Chemical and Atomic Workers Union ( OCAW ) was a trade union in the United States which existed between 1917 and 1999. At the time of its dissolution and merger, the International represented 80,000 workers and was affiliated with the AFL–CIO .
The union was originally established as the International Association of Oil Field, Gas Well, and Refinery Workers of America in 1918, after a major workers' strike in the Texas oil fields in late 1917. [ 2 ] It affiliated with the American Federation of Labor (AFL), which chartered local unions of oil workers at a convention held in El Paso, Texas , and officially set up the international union for oil workers in 1918. [ 3 ] Beginning with only 25 members, the newly established union had considerable success in its first few years. Within a few years it was organizing and negotiating well-thought-out contracts that affected thousands of oil workers in three states: California , Texas , and Oklahoma .
Its membership grew to 30,000 as the oil industry expanded rapidly in the United States, reaching a first peak in 1921, but the Great Depression reduced its ranks to just 350 by the beginning of 1933. Of the several local unions that had been established, only one local – LB local 128 – managed not to miss a single meeting. [ 4 ] The union began to grow in size and activity again once the NRA was passed in 1933. [ 3 ] The NRA, part of the New Deal , guaranteed the right of workers to organize. By the end of 1933, even amid the depression, several thousand oil workers had joined or rejoined the union, spread across several dozen locals. At this point union membership had become an important feature of the oil industry.
In 1937, the union changed its name to the Oil Workers International Union (OWIU). [ 3 ] The union was one of the first that affiliated with the Committee for Industrial Organization in early 1938, and AFL President William Green revoked the union's AFL charter.
The CIO helped the union grow significantly between 1940 and 1946. Membership grew as large strategic groups were brought into the union, but it began to decline slowly after 1946. [ 4 ]
As its membership and activities expanded, the OWIU extended into Canada in 1948 [ 5 ] [ 6 ] with the aim of improving wages and working conditions there. After 1948, Canadian workers reaped the benefits of these changes and soon began receiving wages close behind those in the US; their gains also far surpassed wages in other Canadian industries. [ 5 ]
Like the OWIU, the UGCCWA had its origins in an older union, the United Mine Workers of America (UMW). Within the UMW, the aim was to unite workers in industries related to coke and artificial gas production, which used coke as a fuel.
Unhappy with the service the AFL was providing, the UMW eventually broke away from the AFL and created its own separate organization called "District 50". District 50 became a branch of the UMW whose main purpose was to cover "gas, coke and chemical products" made from coal. [ 4 ]
Eventually John L. Lewis , president of both the UMW and the CIO, resigned as president of the CIO and thereby withdrew the UMW from the CIO as well. [ 4 ] Once the UMW was no longer part of either the AFL or the CIO, Lewis strengthened District 50, transforming it into a catch-all branch of the Mine Workers in which all miscellaneous groups related to gas, coke, and chemicals were gathered, making gas, coke, and chemical workers simply a small division of District 50. [ 7 ]
Because of the impact this had on workers in these industries, several of the division leaders from District 50 met with the CIO executive board in June 1942; they wanted to break away from District 50, rejoin the CIO, and discuss the possibility of creating an international union for their industry alone. The group broke away from the United Mine Workers of America in September 1942 and won a charter from the Congress of Industrial Organizations (CIO). [ 7 ] When the charter was finally granted, the union officially took the name United Gas, Coke and Chemical Workers of America.
The international union under the CIO got off to a slow start; its first meeting represented only around 5,000 workers. [ 7 ] However, in just a few months the union grew in size as numerous other groups left District 50 and joined the UGCCWA.
In 1948, Lee Pressman of New York and Joseph Forer of Washington, DC, represented Charles A. Doyle of the Gas, Coke and Chemical Workers Union, along with Gerhard Eisler (publicly thought to be the top Soviet spy in America); Irving Potash , vice president of the Fur and Leather Workers Union ; Ferdinand C. Smith , secretary of the National Maritime Union ; and John Williamson, labor secretary of the CPUSA . On May 5, 1946, Pressman and Forer received a preliminary injunction so that their defendants might have hearings with examiners unconnected with the investigations and prosecutions by examiners of the Immigration and Naturalization Service . [ 8 ]
Over the next several years membership increased slowly but steadily, growing rapidly to a peak around 1950. By the time the UGCCWA merged with the OWIU, it represented almost 100,000 workers in the gas, coke, and chemical industries. [ 4 ]
Oil Workers International Union (OWIU) and the United Gas, Coke, and Chemical Workers of America (UGCCWA) merged on March 4, 1955, to form the Oil, Chemical, and Atomic Workers Union (OCAW). [ 4 ] When the AFL and CIO merged in 1955, so did the two oil workers' unions. [ 6 ] [ 9 ] [ 10 ] In 1956, after only one year of the merger, OCAW represented approximately 210,000 workers. During this time, it represented more workers than any other union in the oil and chemical field.
The OCAW had one main objective: the improvement of living conditions for those who worked in oil, chemical and related industries. It pursued this through collective bargaining and through participation in community activities, political action, and educational work. [ 4 ] Collective bargaining focused on seeking better wages and better working conditions for the wages earned; following set procedures, the union bargained with employers over how to improve these conditions. By participating in community activities, political action, and educational work, the union also aimed to gain first-hand experience and to develop ways to improve government, schools, housing, recreational facilities, and other aspects of the community as a whole. [ 4 ]
In the 1970s, OCAW's Canadian locals broke off to form their own union. [ 6 ] OCAW tried to absorb the United Rubber Workers several times in the 1970s and 1980s, but the talks collapsed due to internal union politics within the Rubber Workers and no merger ever occurred. [ 11 ]
OCAW lost approximately 50 percent of its membership between 1980 and 1995, primarily because oil companies closed nearly half the refineries in the US. [ 12 ] [ 13 ] OCAW sought a merger with larger unions in an attempt to survive. A planned merger with the United Mine Workers of America was rejected on February 24, 1988, just two hours before the unions planned to announce the merger agreement. [ 14 ] OCAW finally merged with the 250,000-member United Paperworkers International Union on January 4, 1999, to form the Paper, Allied-Industrial, Chemical and Energy Workers International Union (PACE). [ 1 ] [ 15 ]
OCAW gained a final victory as an independent union seven months after the merger, when the federal government acknowledged for the first time that nuclear weapons production during the Cold War likely caused the illness and even deaths of thousands of atomic mining, refining, and production workers. The government agreed to seek legislation to compensate workers and their survivors for their medical care and lost wages. The admission of complicity and legislative relief had long been sought by OCAW. [ 16 ]
PACE merged with the United Steelworkers in 2005 to form the United Steel, Paper and Forestry, Rubber, Manufacturing, Energy, Allied-Industrial and Service Workers International Union (although the merged union is still more commonly known as the United Steelworkers). [ 17 ] OCAW members are scattered throughout several "bargaining conferences", the industry divisions internal to the United Steelworkers. These include the Chemical Industry, Energy and Utilities, Manufacturing, Mining, and Pharmacies and Pharmaceuticals conferences. Robert Wages, president of OCAW from 1991 to 1999, is currently retired. Kip Phillips, a former vice president, is now a vice president at large with the USW. | https://en.wikipedia.org/wiki/Oil,_Chemical_and_Atomic_Workers_International_Union |
Urban runoff is surface runoff of rainwater, landscape irrigation, and car washing [ 1 ] created by urbanization . Impervious surfaces ( roads , parking lots and sidewalks ) are constructed during land development . During rain , storms, and other precipitation events, these surfaces (built from materials such as asphalt and concrete ), along with rooftops , carry polluted stormwater to storm drains , instead of allowing the water to percolate through soil . [ 2 ]
This causes a lowering of the water table (because groundwater recharge is lessened) and flooding, since more water remains on the surface. [ 3 ] [ 4 ] Most municipal storm sewer systems discharge untreated stormwater to streams , rivers , and bays . This excess water can also make its way into people's properties through basement backups and seepage through building walls and floors.
Urban runoff can be a major source of urban flooding and water pollution in urban communities worldwide.
Water running off impervious surfaces in urban areas tends to pick up gasoline , motor oil , heavy metals , trash , and other pollutants from roadways and parking lots, as well as fertilizers and pesticides from lawns. Roads and parking lots are major sources of polycyclic aromatic hydrocarbons (PAHs), which are created as the byproducts of the combustion of gasoline and other fossil fuels , as well as of the heavy metals nickel , copper , zinc , cadmium , and lead . Roof runoff contributes high levels of synthetic organic compounds and zinc (from galvanized gutters). Fertilizer use on residential lawns, parks and golf courses is a measurable source of nitrates and phosphorus in urban runoff when fertilizer is improperly applied or when turf is over-fertilized. [ 3 ] [ 5 ]
Eroding soils or poorly maintained construction sites can often lead to increased sediment in runoff. Sediment often settles to the bottom of water bodies and can directly affect water quality. Excessive levels of sediment in water bodies can increase the risk of infection and disease through the high levels of nutrients present in the soil. These high levels of nutrients can reduce oxygen and boost algae growth while limiting native vegetation growth, which can disrupt aquatic ecosystems . Excessive levels of sediment and suspended solids also have the potential to damage existing infrastructure. Sedimentation can increase surface runoff by plugging underground injection systems, and increased sediment levels can reduce the storage capacity behind reservoirs . This reduction of reservoir capacity can lead to increased expenses for public land agencies while also impacting the quality of water recreation areas. [ 6 ]
Runoff can also induce bioaccumulation and biomagnification of toxins in ocean life. Small amounts of heavy metals are carried by runoff into the oceans, which can accumulate within aquatic animals to cause metal poisoning . This heavy metal poisoning can also affect humans, since ingesting a poisoned animal increases the risk of heavy metal poisoning. [ 7 ] [ 8 ]
As stormwater is channeled into storm drains and surface waters, the natural sediment load discharged to receiving waters decreases, but the water flow and velocity increases. In fact, the impervious cover in a typical city creates five times the runoff of a typical woodland of the same size. [ 9 ] [ clarification needed ]
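As a rough illustration of the five-fold figure quoted above, runoff volume can be estimated as rainfall depth × area × runoff coefficient. The coefficients used below (about 0.1 for woodland and 0.5 for dense urban cover) are generic textbook-style assumptions, not values taken from the cited source.

```python
def runoff_volume_m3(rain_mm, area_m2, runoff_coefficient):
    """Estimate runoff volume as rainfall depth x area x runoff coefficient."""
    return (rain_mm / 1000.0) * area_m2 * runoff_coefficient

AREA = 1_000_000          # one square kilometre, in m^2
RAIN = 25                 # a 25 mm storm (illustrative)

woodland = runoff_volume_m3(RAIN, AREA, 0.10)   # mostly infiltrates (assumed coefficient)
city = runoff_volume_m3(RAIN, AREA, 0.50)       # mostly impervious (assumed coefficient)
print(f"woodland: {woodland:,.0f} m^3, city: {city:,.0f} m^3, ratio: {city / woodland:.1f}x")
```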
Overwatering through irrigation by sprinkler may produce runoff reaching receiving waters during low flow conditions. [ 10 ] Runoff carries accumulated pollutants to streams with unusually low dilution ratios causing higher pollutant concentrations than would be found during regional precipitation events. [ 11 ]
Urban runoff is a major cause of urban flooding , the inundation of land or property in a built-up environment caused by rainfall overwhelming the capacity of drainage systems , such as storm sewers . [ 12 ] Triggered by events such as flash flooding , storm surges , overbank flooding, or snow melts , urban flooding is characterized by its repetitive, costly, and systemic impacts on communities, even when not within floodplains or near any body of water. [ 13 ]
There are several ways in which stormwater enters properties : backup through sewer pipes, toilets and sinks into buildings; seepage through building walls and floors; the accumulation of water on the property and in public rights-of-way; and the overflow of water from water bodies such as rivers and lakes. Where properties are built with basements, urban flooding is the primary cause of basement flooding. [ citation needed ]
Urban runoff contributes to water quality problems. In 2009 the US National Research Council published a comprehensive report on the effects of urban stormwater and stated that it continues to be a major contamination source in many watersheds throughout the United States. [ 14 ] : vii The report explained that "...further declines in water quality remain likely if the land-use changes that typify more diffuse sources of pollution are not addressed... These include land-disturbing agricultural, silvicultural, urban, industrial, and construction activities from which hard-to-monitor pollutants emerge during wet-weather events. Pollution from these landscapes has been almost universally acknowledged as the most pressing challenge to the restoration of waterbodies and aquatic ecosystems nationwide." [ 14 ] : 24
The runoff also increases temperatures in streams, harming fish and other organisms. (A sudden burst of runoff from a rainstorm can cause a fish-killing shock of hot water.) Also, road salt used to melt snow on sidewalks and roadways can contaminate streams and groundwater aquifers . [ 15 ]
One of the most pronounced effects of urban runoff is on watercourses that historically contained little or no water during dry weather periods (often called ephemeral streams ). When an area around such a stream is urbanized , the resultant runoff creates an unnatural year-round streamflow that hurts the vegetation, wildlife and stream bed of the waterway. Containing little or no sediment relative to the historic ratio of sediment to water, urban runoff rushes down the stream channel, ruining natural features such as meanders and sandbars , and creates severe erosion—increasing sediment loads at the mouth while severely carving the stream bed upstream. As an example, on many Southern California beaches at the mouth of a waterway, urban runoff carries trash, pollutants, excessive silt, and other wastes, and can pose moderate to severe health hazards.
Because of fertilizer and organic waste that urban runoff often carries, eutrophication often occurs in waterways affected by this type of runoff. After heavy rains, organic matter in the waterway is relatively high compared with natural levels, spurring growth of algae blooms that soon consume most of the oxygen . Once the naturally occurring oxygen in the water is depleted, the algae blooms die, and their decomposition causes further eutrophication. These algae blooms mostly occur in areas with still water, such as stream pools and the pools behind dams , weirs , and some drop structures . Eutrophication usually comes with deadly consequences for fish and other aquatic organisms.
Excessive stream bank erosion may cause flooding and property damage. For many years governments have often responded to urban stream erosion problems by modifying the streams through construction of hardened embankments and similar control structures using concrete and masonry materials. Use of these hard materials destroys habitat for fish and other animals. [ 16 ] Such a project may stabilize the immediate area where flood damage occurred, but often it simply shifts the problem to an upstream or downstream segment of the stream. [ 17 ] See River engineering .
There are many different ways that polluted urban runoff could harm humans, such as by contaminating drinking water, disrupting food sources and even causing parts of beaches to be closed off due to a risk of illness. After heavy rainfall events that cause stormwater overflows, contaminated water can impact waterways in which people recreate or fish, causing the beaches or water-based activities to be closed. This is because the runoff has likely caused a spike in harmful bacterial growth or inorganic chemical pollution in the water. [ citation needed ] The contaminants most often thought of as the most damaging are gasoline and oil spillage, but the impact of fertilizers and insecticides is often overlooked. When lawns are watered and fields irrigated, the chemicals with which lawns and crops have been treated can be washed into the water table. The environments into which these chemicals are introduced suffer from their presence, as they kill native vegetation, invertebrates, and vertebrates. [ citation needed ]
Effective control of urban runoff involves reducing the velocity and flow of stormwater, as well as reducing pollutant discharges. Local governments use a variety of stormwater management techniques to reduce the effects of urban runoff. These techniques, called best management practices for water pollution (BMPs) in some countries, may focus on water quantity control, while others focus on improving water quality, and some perform both functions. [ 18 ]
Pollution prevention practices include low impact development (LID) or green infrastructure techniques - known as Sustainable Drainage Systems (SuDS) in the UK, and Water-Sensitive Urban Design (WSUD) in Australia and the Middle East - such as the installation of green roofs and improved chemical handling (e.g. management of motor fuels & oil, fertilizers, pesticides and roadway deicers ). [ 9 ] [ 19 ] Runoff mitigation systems include infiltration basins , bioretention systems, constructed wetlands , retention basins , and similar devices. [ 20 ] [ 21 ]
Providing effective urban runoff solutions often requires proper city programs that take into account the needs and differences of the community. Factors such as a city's mean temperature, precipitation levels, geographical location, and airborne pollutant levels can all affect rates of pollution in urban runoff and present unique challenges for management. Human factors such as urbanization rates, land use trends, and chosen building materials for impervious surfaces often exacerbate these issues.
The implementation of citywide maintenance strategies such as street sweeping programs can also be an effective method in improving the quality of urban runoff. Street sweeping vacuums collect particles of dust and suspended solids often found in public parking lots and roads that often end up in runoff. [ 22 ]
Educational programs can also be an effective tool for managing urban runoff. Local businesses and individuals can have an integral role in reducing pollution in urban runoff simply through their practices, but often are unaware of regulations. Creating a productive discussion on urban runoff and the importance of effective disposal of household items can help to encourage environmentally friendly practices at a reduced cost to the city and local economy. [ 23 ]
Thermal pollution from runoff can be controlled by stormwater management facilities that absorb the runoff or direct it into groundwater , such as bioretention systems and infiltration basins. Bioretention basins tend to be less effective at reducing temperature, as the water may be heated by the sun before being discharged to a receiving stream. [ 18 ] : p. 5–58
Stormwater harvesting deals with the collection of runoff from creeks, gullies, ephemeral streams, and other ground conveyances. Stormwater harvesting projects often have multiple objectives, such as reducing contaminated runoff to sensitive waters, promoting groundwater recharge, and non-potable applications such as toilet flushing and irrigation . [ 24 ] | https://en.wikipedia.org/wiki/Oil-grit_separator |
Oil Mines Regulations-1984 [ 1 ] [ 2 ] (OMR 1984) replaced the Oil Mines Regulations-1933 with effect from October 1984, and deals with the prevention of possible dangers in oil mines in India .
OMR 1984 was published in 1986 by the Directorate General of Mines Safety , Ministry of Labour, in Dhanbad , Jharkhand .
Short Title; Extent; Application and; Definitions. (Reg 1–2)
Returns, Notices and Plans. (Reg 3–9)
Qualifications; Appointment; General Management and; Duties of Persons Employed in Mines for various functions. (Reg 10–23)
Reg 24- Derricks; 25- Derrick platforms and floors; 26- Ladders; 27- Safety belts and life lines; 28- Emergency escape device; 29- Weight indicator; 30- Escape exits; 31- Guardrails, handrails and covers; 32- Draw-works; 33- Cathead and cat line; 34- Tongs; 35- Safety chains or wire lines; 36- Casing lines; 37- Rigging equipment for material handling; 38- Storage of materials; 39- Construction and loading of pipe-racks; 40- Rigging-up and rig dismantling; 41- Mud tanks and mud pumps; 42- Blowout preventer assembly; 43- Control system for blowout preventers; 44- Testing of blowout preventer assembly; 45- Precautions against blowout; 46- Precautions after a blowout has occurred; 47- Drilling workover and other operations; 48- Precautions during drill stem test.
Well completion, Testing and Activation (Reg 49–50)
Group Gathering Station and Emergency Plan (Reg 51-51A)
Precautions during acidizing operations; fracturing operations and; loading and unloading of petroleum tankers. (Reg 52–54)
Storage Tank; Well servicing operations; Artificial lifting of oil; Temporary closure of producing well and; Plugging requirements of abandoned wells (Reg 55–59)
Application (Reg-60)
Approval and design of the route and design of pipeline, their laying and, Emergency procedure. (Reg 61–64)
Storage and use of flammable material; Precaution against noxious, flammable gases and, fire; Fire Fighting Equipment and; Contingency plan. (Reg 65–72)
Use of certain machinery and equipment; Classification of Hazardous Area; Use of electrical equipment in hazardous area; General Provisions about construction and maintenance of machinery; Internal combustion Engines; Apparatus under pressure; Precautions regarding moving parts of machinery; Engine rooms and their exits and; Working and examination of machinery. (Reg 73–81)
Housekeeping; General/Emergency lighting; Supply and use of protective equipments; Protection against noise, toxic dusts, gases and ionising radiation; Communication; Protection against pollution of environment; Fencings and; General Safety. (Reg 82–98)
Safety and health education, instructions and, inspections; Returns, notices, correspondence and Appeals. (Reg 99-106)
Oil spill | https://en.wikipedia.org/wiki/Oil_Mines_Regulations-1984 |
Oil additives are chemical compounds that improve the lubricant performance of base oil (or oil "base stock"). The manufacturer of many oils can use the same base stock for each formulation and can choose different additives for each use. Additives comprise up to 5% by weight of some oils. [ 1 ]
Nearly all commercial motor oils contain additives, whether the oils are synthetic or petroleum based. Essentially, only the American Petroleum Institute (API) Service SA motor oils have no additives, and they are therefore incapable of protecting modern engines . [ 2 ] The choice of additives is determined by the use, e.g. the oil for a diesel engine with direct injection in a pickup truck (API Service CJ-4) has different additives than the oil used in a small gasoline -powered outboard motor on a boat (2-cycle engine oil).
Oil additives are vital for the proper lubrication and prolonged use of motor oil in modern internal combustion engines . Without many of these, the oil would become contaminated, break down, leak out, or not properly protect engine parts at all operating temperatures . Just as important are additives for oils used inside gearboxes , automatic transmissions , and bearings . Some of the most important additives include those used for viscosity and lubricity , contaminant control, for the control of chemical breakdown, and for seal conditioning. Some additives permit lubricants to perform better under severe conditions, such as extreme pressures and temperatures and high levels of contamination.
Motor oil is manufactured with numerous additives, and there are also aftermarket additives. A glaring inconsistency of mass-marketed aftermarket oil additives is that they often use additives which are foreign to motor oil. On the other hand, commercial additives are also sold that are designed for extended drain intervals (to replace depleted additives in used oil) or for formulating oils in situ (to make a custom motor oil from base stock). Such commercial additives are identical to the additives found in off-the-shelf motor oil, while mass-marketed additives contain a mixture of familiar and foreign components.
Although PTFE, a solid, was used in some aftermarket oil additives, some users said that the PTFE clumped together, clogging filters. Certain people [ who? ] in the 1990s reported that this was corroborated by NASA [ 12 ] and U.S. universities. [ 13 ] However, if the PTFE particles are smaller than those apparently used in the 1980s and 1990s, then PTFE can be an effective lubricant in suspension. [ 14 ] The size of the particle and many other interrelated components of a lubricant make it difficult to make blanket statements about whether PTFE is useful or harmful. Although PTFE has been called " the slickest substance known to man ", [ 15 ] [ 16 ] it would hardly do any good if it remains in the oil filter .
Some mass-market engine oil additives, notably the ones containing PTFE / Teflon (e.g. Slick 50 ) [ 17 ] and chlorinated paraffins (e.g. Dura Lube ), [ 18 ] caused a major backlash by consumers; the U.S. Federal Trade Commission investigated many mass-marketed engine oil additives in the late 1990s.
Although there is no reason to say that all oil additives used in packaged engine oil are good and all aftermarket oil additives are bad, there has been a tendency in the aftermarket industry to make unfounded claims regarding the efficacy of its oil additives. These unsubstantiated claims have lured consumers into adding a bottle of chemicals to their engines which do not lower emissions, improve wear resistance, lower temperatures, improve efficiency, or extend engine life more than the (much cheaper) oil would have. Many consumers are convinced that aftermarket oil additives work, while many others are convinced that they do not work and are in fact detrimental to the engine. The topic is hotly debated on the Internet . | https://en.wikipedia.org/wiki/Oil_additive |
Oil analysis (OA) is the laboratory analysis of a lubricant 's properties, suspended contaminants, and wear debris. OA is performed during routine predictive maintenance to provide meaningful and accurate information on lubricant and machine condition. By tracking oil analysis sample results over the life of a particular machine, trends can be established which can help eliminate costly repairs. The study of wear in machinery is called tribology . Tribologists often perform or interpret oil analysis data.
OA can be divided into three categories:
Oil sampling is a procedure for collecting a volume of fluid from lubricated or hydraulic machinery for the purpose of oil analysis. Much like collecting forensic evidence at a crime scene, when collecting an oil sample, it is important to ensure that procedures are used to minimize disturbance of the sample during and after the sampling process. Oil samples are typically drawn into a small, clean bottle which is sealed and sent to a laboratory for analysis.
OA was first used after World War II by the US railroad industry to monitor the health of locomotives. In 1946 the Denver and Rio Grande Railroad 's research laboratory successfully detected diesel engine problems through wear metal analysis of used oils. A key factor in their success was the development of the spectrograph , an instrument which replaced several wet chemical methods for detecting and measuring individual chemical element such as iron or copper . This practice was soon accepted and used extensively throughout the railroad industry.
By 1955 OA had matured to the point that the United States Bureau of Naval Weapons began a major research program to adopt wear metal analysis for use in aircraft component failure prediction. These studies formed the basis for a Joint Oil Analysis Program (JOAP) involving all branches of the U.S. Armed Forces. The JOAP results proved conclusively that increases in component wear could be confirmed by detecting corresponding increases in the wear metal content of the lubricating oil. In 1958 Pacific Intermountain Express (P.I.E.) was the first trucking company to set up an in-house used oil analysis laboratory to control vehicle maintenance costs which was managed by Bob Herguth. In 1960 the first independent commercial oil analysis laboratory was started by Edward Forgeron in Oakland, CA .
In addition to monitoring oil contamination and wear metals, modern usage of OA includes the analysis of the additives in oils to determine if an extended drain interval may be used. Maintenance costs can be reduced using OA to determine the remaining useful life of additives in the oil. By comparing the OA results of new and used oil, a tribologist can determine when an oil must be replaced. Careful analysis might even allow the oil to be "sweetened" to its original additive levels by either adding fresh oil or replenishing additives that were depleted.
Oil analysis professionals and analysts can get certified in compliance with ISO standards by passing exams administered by the International Council for Machinery Lubrication (ICML).
For purposes of Oil Analysis Program (OAP) trend analysis, replacement, replenishment or drain and flush of lubricating fluids in excess of half an engine’s oil capacity (2.5 gallons or more) will be considered an Oil Change and the engine will be placed in code Charlie (C) for three flights to establish a new working trend. Oil-Wetted Maintenance (OWM) is any replacement of engine components within an oil-lubricated system (bearings, gearbox, pumps, etc.). OWM actions shall be documented on DD Form 2026 and submitted to OAP lab for update of Oil Analysis database.
(a) Special samples can be requested by the laboratory whenever it deems them necessary.
(b) Whenever directed by the unit maintenance activity to investigate suspected deficiencies.
The NDI/JOAP laboratory will set the standards and intervals of oil analysis.
A typical predictive maintenance technique is ferrography , which analyses iron in oil. | https://en.wikipedia.org/wiki/Oil_analysis |
An oil bath is a type of heated bath used in a laboratory , most commonly used to heat up chemical reactions. It is a container of oil that is heated by a hot plate or (in rare cases) a Bunsen burner .
These baths are commonly used to heat reaction mixtures more evenly than would be possible with a hot plate alone, as the entire outside of the reaction flask is heated. Generally, silicone oil is used in modern oil baths, although mineral oil , cottonseed oil and even phosphoric acid have been used in the past. [ 1 ]
Overheating the oil bath can result in a fire hazard, especially if mineral oil is being used. Generally, the maximum safe operating temperature of a mineral oil bath is approximately 160 °C (320 °F), the oil's flash point. Mineral oil cannot be used above 310 °C (590 °F) due to the compound's boiling point. If higher temperatures are needed, a silicone oil or a sand bath may be used instead. [ 2 ] [ better source needed ] Silicone oil baths are effective in the 25 °C (77 °F) - 230 °C (446 °F) range. Sand baths are effective from 25 °C (77 °F) to above 500 °C (932 °F). [ 3 ]
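A small sketch that simply encodes the operating ranges quoted above as a look-up helper; the cut-off values restate the figures in the text and are not additional guidance.

```python
def suggest_bath_medium(target_temp_c):
    """Pick a heated-bath medium from the operating ranges quoted above."""
    if target_temp_c <= 160:
        return "mineral oil"   # keep below the ~160 C flash point
    if target_temp_c <= 230:
        return "silicone oil"  # effective in roughly the 25-230 C range
    if target_temp_c <= 500:
        return "sand bath"     # effective from 25 C to above 500 C
    return "sand bath or another high-temperature method (beyond the quoted ranges)"

for t in (150, 200, 350):
    print(t, "C ->", suggest_bath_medium(t))
```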
Another use of an oil bath is to filter particulates out of air, by leading the air stream through an unheated oil bath. This type of air filter was used in car and tractor engines, but has been replaced by modern paper air filters; some small engines continue to use this system. In some cases oil baths are used to heat bearings so they expand before installing them on shafts of aircraft engines and tractors.
| https://en.wikipedia.org/wiki/Oil_bath |
An oil burner engine is a steam engine that uses oil as its fuel. The term is usually applied to a locomotive or ship engine that burns oil to heat water, to produce the steam which drives the pistons , or turbines , from which the power is derived.
This is mechanically very different from diesel engines , which use internal combustion , although they are sometimes colloquially referred to as oil burners. [ 1 ]
A variety of experimental oil powered steam boilers were patented in the 1860s. Most of the early patents used steam to spray atomized oil into the steam boiler's furnace. Attempts to burn oil from a free surface were unsuccessful due to the inherently low rates of combustion from the available surface area. [ 2 ] [ 3 ]
On 21 April 1868, the steam yacht Henrietta made a voyage down the river Clyde powered by an oil fired boiler designed and patented by a Mr Donald of George Miller & Co. [ 2 ] [ 4 ] Donald's design used a jet of dry steam to spray oil into a furnace lined with fireproof bricks. Prior to the Henrietta ’s oil burner conversion, George Miller & Co was recorded as having used oil to power their works in Glasgow for a “considerable time”. [ 4 ]
During the late 19th century numerous burner designs were patented using combinations of steam, compressed air and injection pumps to spray oil into boiler furnaces. Most of the early oil burner designs were commercial failures due to the high cost of oil (relative to coal) rather than any technical issues with the burners themselves. [ 2 ]
During the early 20th century, marine and large oil burning steam engines generally used electric motor or steam driven injection pumps. Oil would be drawn from a storage tank through suction strainers and across viscosity-reducing oil heaters. The oil would then be pumped through discharge strainers before entering the burners as a whirling mist. Combustion air was introduced through special furnace-fronts, which were fitted with dampers to regulate the supply. Smaller land-based oil burning steam engines typically used steam jets fed from the main boiler to blast atomized oil into the burner nozzles. [ 5 ]
In the 1870s, Caspian steamships began using mazut , a residual fuel oil which at that time was produced as a waste stream by the many oil refineries located in the Absheron peninsula . During the late 19th century Mazut remained cheap and plentiful in the Caspian region. [ 3 ]
In 1870, either the SS Iran [ 3 ] or SS Constantine [ 6 ] [ 7 ] (depending on source) became the first ship to convert to burning fuel oil, both were Caspian based merchant steamships . [ 3 ] [ 6 ]
During the 1870s, the Imperial Russian Navy converted the ships of the Caspian fleet to oil burners starting with the Khivenets in 1874. [ 3 ]
In 1894, the oil tanker SS Baku Standard became the first oil burning vessel to cross the Atlantic Ocean . In 1903, the Red Star Liner SS Kensington became the first passenger liner to make the Atlantic crossing with boilers fired by fuel oil. [ 7 ]
Fuel oil has a higher energy density than coal, and oil powered ships did not need to employ stokers; however, coal remained the dominant power source for marine boilers throughout the 19th century, primarily due to the relatively high cost of fuel oil. [ 3 ] Oil was used in marine boilers to a greater extent during the early 20th century. By 1939, about half the world's ships burned fuel oil; of these, about half had steam engines and the other half used diesel engines. [ 6 ]
Oil burners designed by Thomas Urquhart were fitted to the locomotives of the Gryazi - Tsaritsyn railway in southern Russia . [ 2 ] Thomas Urquhart, who was employed as a Locomotive Superintendent by the Gryazi-Tsaritsyn Railway Company, began his experiments in 1874. By 1885 all the locomotives of the Gryazi-Tsaritsyn Railway had been converted to run on fuel oil. [ 8 ]
In Great Britain , an early pioneer of oil burning railway locomotives was James Holden , [ 9 ] [ 2 ] of the Great Eastern Railway . In James Holden's system, steam was raised by burning coal before the oil fuel was turned on. [ 10 ] Holden's first oil burning locomotive, Petrolea, was a class T19 2-4-0 . Built in 1893, Petrolea burned waste oil that the railway had previously been discharging into the River Lea . [ 11 ] Due to the relatively low cost of coal, oil was rarely used on Britain's steam trains, and in most cases only where there was a shortage of coal. [ 12 ]
In the United States , the first oil burning steam locomotive was in service on the Southern Pacific railroad by 1900. [ 13 ] By 1915 there were 4,259 oil burning steam locomotives in the United States, which represented 6.5% of all the locomotives then in service. [ 13 ] Most oil burners were operated in areas west of the Mississippi where oil was abundant. [ 13 ] American usage of oil burning steam locomotives peaked in 1945 when they were responsible for around 20% of all the fuel consumed (measured by energy content) during rail freight operations. [ 14 ] After WW2, both oil and coal burning steam locomotives were replaced by more efficient diesel engines and had been almost entirely phased out of service by 1960. [ 14 ]
('*' symbol indicates locomotive was converted or is being converted from coal-burning to oil-burning in either revenue service or excursion service) | https://en.wikipedia.org/wiki/Oil_burner_(engine) |
The term crude oil constant ( Erdölkonstante in German) has been used as an inside joke and pun in the German petroleum industry, pointing out that the reserves-to-production ratio has been observed as roughly constant in the past decades, whereas oil constant ( Ölkonstante in German) is a term describing various material properties of (vegetable and mineral) oils. [ 1 ]
The so-called crude oil constant refers to the approximately constant estimate of the ratio of available petroleum reserves to production, R/P . The estimated duration until the available petroleum reserves are depleted at current production has remained around 40 years since the late 1980s. [ 2 ] : 15 Prewar and immediately postwar estimates were sometimes lower (as low as 9 years in 1919 for the USA, and around 20 years in 1948 for the world) and rose to about 35 years by the 1970s. [ 3 ] Since then, however, the static range T = R/P has remained rather constant for decades despite rising oil consumption.
One factor contributing to the apparent constancy of the R/P ratio is a neglect or misunderstanding of the fact that the term " proven reserves " does not refer to some absolute quantity of remaining oil that is thought to exist, but rather to the quantity of oil that can be economically extracted given the current price of oil and current oil-extraction technologies. Thus, either an increase in the price of oil or improvements in oil-extraction technologies can lead to an increase in the estimate of "proven reserves" since more-expensive-to-mine deposits such as tight oil become economically viable at a higher oil price, and because newer or more expensive enhanced oil recovery processes such as gas injection , steam injection , and hydraulic fracturing allow continued extraction of oil from fields that would have been considered worth abandoning at a lower price or using older technologies. Thus, it is possible for the "proven reserves of oil" (i.e., economically extractable reserves of oil) to keep pace with or even pull ahead of oil consumption at the current rate. [ 4 ]
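A toy numerical illustration of how reserve additions can hold the R/P ratio near 40 years even while production grows; all figures here are invented for illustration and are not actual reserve statistics.

```python
# Toy model: production grows a little each year, but price- and
# technology-driven reserve additions keep the static range T = R / P steady.
reserves = 1000.0      # arbitrary units
production = 25.0      # per year  -> initial R/P = 40 years

for year in range(1, 31):
    additions = 1.8 * production    # assumed bookings: revisions, upgrades, discoveries
    reserves += additions - production
    production *= 1.02              # consumption keeps growing by 2% per year
    if year % 10 == 0:
        print(f"year {year:2d}: R/P = {reserves / production:.1f} years")
# With additions of roughly 1.8 units booked per unit produced, R/P stays at 40;
# if additions fall short of that, the ratio drifts downward instead.
```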
On the other hand, the reserves to production ratio is only one mathematical indicator for the geological inventory. More important than the size of the tank is the rate at which oil can be produced (the size of the spigot rather than the size of the barrel), and with many capital-intensive technologies for extracting oil from non-conventional sources the achievable flow rate becomes smaller. A large expansion of global reserves took place in the 2000s, when the Athabasca oil sands (Canada) and the heavy oil of the Orinoco Belt (Venezuela) were reclassified from (physically in place) resources to (producible) reserves . While these oil reserves are sizeable and in the same range as the reserves of Saudi Arabia, [ 2 ] : 15 oil production is growing only slowly in Canada [ 5 ] and declining in Venezuela. [ 6 ]
Another factor contributing to the steady R/P ratio is the large expansion of OPEC reserves that was booked in the years around 1988. The OPEC quota system had been amended so that each member's permitted production was related to its reported reserves. Within a few years, OPEC members raised their reserves on paper without reporting any major new discoveries. [ 7 ]
Oil companies listed on US stock exchanges or elsewhere are obliged to report their reserves according to the principle of prudence. As a result, a new discovery was first reported at its lowest estimate (P90 = high confidence). [ 8 ] : 2 Later, during production, when the reservoir data became more detailed, the most likely estimate (P50) was reported, but without backdating this reserve expansion to the year of the discovery. Enhanced oil recovery techniques made it possible to produce the P10 value (10% probability), but again the backdating was omitted and it seemed as if new discoveries had been made.
A similar pun has been used about the feasibility of fusion power : Since the 1950s, feasible technological means of using fusion for electricity production have constantly been predicted as being 30–40 years ahead, so the "fusion constant" exhibits a similar range to the "oil constant". [ 9 ] | https://en.wikipedia.org/wiki/Oil_constant |
An oil content meter (OCM) is an integral part of all oily water separator (OWS) systems. Oil content meters are also sometimes referred to as oil content monitors, bilge alarms, or bilge monitors. [ 1 ]
The OCM continuously monitors how much oil is in the water that is pumped out through the discharge line of the OWS system. [ 2 ] The OCM will not allow the oil concentration of the exiting water to be above the Marpol standard of 15 ppm. This standard was first adopted in 1977 with Resolution A.393(X), which was published by IMO . [ 3 ] These standards have been updated various times; the most current resolution is MEPC 108(49). The oil content meter will sound an alarm if the liquid leaving the system has an unsatisfactory amount of oil in the mixture. If the mixture is still above that standard, the bilge water will be fed back into the system until it meets the required criteria. The OCM uses light beams to determine how oily the water in the system is [ 4 ] and gauges the oil concentration with a light intensity meter. Modern oil content meters also have a data logging system that can store oil concentration measurements for more than 18 months. [ 2 ]
If the OCM persistently reads far too much oil, it may be fouled and need to be flushed out. [ 1 ] Running clean water through the OCM sensor cell is one way it can be cleaned; scrubbing the sensor area with a bottle brush is another effective method. [ 1 ] The new MEPC 107(49) regulations set out stringent requirements: the OCM must be tamper proof and must have an alarm that sounds whenever the OCM is being cleaned. [ 1 ] When the alarm goes off, the OCM's functionality will be checked by crew members. [ 1 ]
An OCM is a small part of what is called the oil discharge monitoring and control system . The first part is the oil content meter. The second is a flow meter which measures the flow rate of the water at the discharge pipe. [ 5 ] Third, is a computing unit which calculates how much oil has actually been discharged along with the day and time of the discharge. [ 5 ] And lastly is the overboard valve control system which is essentially just a valve that can stop the discharge from flowing out at the appropriate time. [ 5 ]
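A simplified sketch of the control logic described above: each reading is compared with the 15 ppm limit, an alarm is raised and the water is recirculated if the limit is exceeded, and otherwise the estimated oil flow is derived from concentration times flow rate. The function name and the sample readings are assumptions for illustration, not part of any real OCM firmware.

```python
PPM_LIMIT = 15  # MARPOL limit for the oil content of discharged bilge water

def check_sample(oil_ppm, flow_m3_per_h):
    """Return the action for one oil-content reading, as described above."""
    if oil_ppm > PPM_LIMIT:
        # Sound the alarm, close the overboard valve, recirculate the water.
        return "alarm: close overboard valve and recirculate"
    # Permitted discharge; the computing unit can also log the oil flow rate.
    oil_litres_per_h = oil_ppm * 1e-6 * flow_m3_per_h * 1000.0
    return f"discharge permitted ({oil_litres_per_h:.3f} L of oil per hour)"

for ppm, flow in [(4, 2.0), (12, 2.0), (22, 1.5), (9, 2.0)]:  # illustrative readings
    print(f"{ppm:2d} ppm -> {check_sample(ppm, flow)}")
```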
Oil content meters measure how effective the oily water separators on a ship are functioning. [ 6 ] If the OCM computes that the oily discharge is above the 15 ppm standard, the oily water separator needs to be checked by the crew.
There are three types of oil that the oil content meter needs to check for and they are fuel oil , diesel , and emulsions . [ 6 ] | https://en.wikipedia.org/wiki/Oil_content_meter |
An oil dispersant is a mixture of emulsifiers and solvents that helps break oil into small droplets following an oil spill . Small droplets are easier to disperse throughout a water volume, and small droplets may be more readily biodegraded by microbes in the water. Dispersant use involves a trade-off between exposing coastal life to surface oil and exposing aquatic life to dispersed oil. While submerging the oil with dispersant may lessen exposure to marine life on the surface, it increases exposure for animals dwelling underwater, who may be harmed by toxicity of both dispersed oil and dispersant. [ 1 ] [ 2 ] [ 3 ] Although dispersant reduces the amount of oil that lands ashore, it may allow faster, deeper penetration of oil into coastal terrain, where it is not easily biodegraded. [ 4 ]
In 1967, the supertanker Torrey Canyon leaked oil onto the English coastline. [ 5 ] Alkylphenol surfactants were primarily used to break up the oil, but proved very toxic in the marine environment; all types of marine life were killed. This led to a reformulation of dispersants to be more environmentally sensitive. [ when? ] After the Torrey Canyon spill, new boat-spraying systems were developed. [ 5 ] Later reformulations allowed more dispersant to be contained (at a higher concentration) to be aerosolized .
Alaska had fewer than 4,000 gallons of dispersants available at the time of the Exxon Valdez oil spill, and no aircraft with which to dispense them. The dispersants introduced were relatively ineffective due to insufficient wave action to mix the oil and water, and their use was shortly abandoned. [ 6 ]
A report by David Kirby for TakePart found that the main component of the Corexit 9527 formulation used during Exxon Valdez cleanup, 2-butoxyethanol , was identified as "one of the agents that caused liver, kidney, lung, nervous system, and blood disorders among cleanup crews in Alaska following the 1989 Exxon Valdez spill." [ 7 ]
Dispersants were applied to a number of oil spills between the years 1967 and 1989. [ 8 ]
During the Deepwater Horizon oil spill, an estimated 1.84 million gallons of Corexit were used in an attempt to decrease the amount of surface oil and mitigate the damage to coastal habitat. BP purchased all of the world's supply of Corexit soon after the spill began. [ 10 ] Nearly half (771,000 gallons) of the dispersants were applied directly at the wellhead. [ 11 ] The primary dispersants used were Corexit 9527 and 9500 , which were controversial due to their toxicity .
In 2012, a study found that Corexit made the oil up to 52 times more toxic than oil alone, [ 12 ] and that the dispersant's emulsifying effect makes oil droplets more bio-available to plankton . [ 13 ] The Georgia Institute of Technology found that "Mixing oil with dispersant increased toxicity to ecosystems " and made the gulf oil spill worse. [ 14 ]
In 2013, in response to the growing body of laboratory-derived toxicity data, some researchers address the scrutiny that should be used when evaluating laboratory test results that have been extrapolated using procedures that are not fully reliable for environmental assessments. [ 15 ] [ 16 ] Since then, guidance has been published that improves the comparability and relevance of oil toxicity tests. [ 17 ]
Maritime New Zealand used the oil dispersant Corexit 9500 to help in its cleanup efforts. [ 18 ] The dispersant was applied for only a week and was abandoned after results proved inconclusive. [ 19 ]
Surfactants reduce oil-water interfacial tension , which helps waves break oil into small droplets. A mixture of oil and water is normally unstable, but can be stabilized with the addition of surfactants; these surfactants can prevent coalescence of dispersed oil droplets. The effectiveness of the dispersant depends on the weathering of the oil, sea energy (waves), salinity of the water, temperature and the type of oil. [ 20 ] Dispersion is unlikely to occur if the oil spreads into a thin layer, because the dispersant requires a particular thickness to work; otherwise, the dispersant will interact with both the water and the oil. More dispersant may be required if the sea energy is low. The salinity of the water is more important for ionic-surfactant dispersants, as salt screens electrostatic interactions between molecules. The viscosity of the oil is another important factor; viscosity can retard dispersant migration to the oil-water interface and also increase the energy required to shear a drop from the slick. Viscosities below 2,000 centipoise are optimal for dispersants. If the viscosity is above 10,000 centipoise, no dispersion is possible. [ 21 ]
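As a rough illustration of the viscosity thresholds quoted above, a small helper could classify a slick's suitability for chemical dispersion; the middle category label is an interpretation, since the text gives only the two limits:

```python
# Sketch of the viscosity rule of thumb stated above; thresholds come from the text,
# while the function name and category labels are illustrative only.
def dispersibility_by_viscosity(viscosity_cp: float) -> str:
    if viscosity_cp < 2_000:
        return "optimal for dispersants"
    elif viscosity_cp <= 10_000:
        return "dispersion possible but increasingly difficult"  # interpolated category
    else:
        return "no dispersion possible"

for v in (500, 5_000, 20_000):
    print(v, "->", dispersibility_by_viscosity(v))
```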
There are five requirements for surfactants to successfully disperse oil: [ 5 ]
The effectiveness of a dispersant may be analyzed with the following equations. [ 22 ] The Area refers to the area under the absorbance/wavelength curve, which is determined using the trapezoidal rule. The absorbances are measured at 340, 370, and 400 nm.
Area = 30(Abs_340 + Abs_370)/2 + 30(Abs_370 + Abs_400)/2 (1)
The dispersant effectiveness may then be calculated using the equation below.
Effectiveness (%) = total oil dispersed × 100 / (ρ_oil V_oil)
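A short sketch of these two calculations is given below, assuming absorbance readings at 340, 370 and 400 nm; the calibration from the absorbance area to "total oil dispersed" is laboratory specific and not shown, and all numeric values are illustrative:

```python
# Minimal sketch of the effectiveness equations above. How "total oil dispersed" is
# obtained from the absorbance area is laboratory specific and is not shown; the
# example values are purely illustrative.

def absorbance_area(abs_340: float, abs_370: float, abs_400: float) -> float:
    """Trapezoidal-rule area under the absorbance curve between 340 and 400 nm (Eq. 1)."""
    return 30 * (abs_340 + abs_370) / 2 + 30 * (abs_370 + abs_400) / 2

def effectiveness_percent(total_oil_dispersed_g: float,
                          oil_density_g_per_ml: float,
                          oil_volume_ml: float) -> float:
    """Effectiveness (%) = total oil dispersed x 100 / (rho_oil * V_oil)."""
    return total_oil_dispersed_g * 100 / (oil_density_g_per_ml * oil_volume_ml)

area = absorbance_area(0.42, 0.31, 0.18)
print(round(area, 2))                              # area under the curve (arbitrary units)
print(round(effectiveness_percent(0.65, 0.95, 1.0), 1))  # ~68% for 0.65 g dispersed of 0.95 g added
```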
Developing well-constructed models (accounting for variables such as oil type, salinity and surfactant) is necessary to select the appropriate dispersant in a given situation. Two models exist which integrate the use of dispersants: Mackay's model and Johansen's model. [ 23 ] There are several parameters which must be considered when creating a dispersion model, including oil-slick thickness, advection , resurfacing and wave action. [ 23 ] A general problem in modeling dispersants is that they change several of these parameters; surfactants lower the thickness of the film, increase the amount of diffusion into the water column and increase the amount of breakup caused by wave action. This causes the oil slick's behavior to be more dominated by vertical diffusion than horizontal advection. [ 23 ]
One equation for the modeling of oil spills is: [ 24 ]
$$ \frac{\partial h}{\partial t} + \vec{\nabla}\left(h\left(\vec{U} + \frac{\vec{\tau}}{f}\right)\right) - \vec{\nabla}\left(E\,\vec{\nabla}h\right) = R $$
where
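A very rough, one-dimensional, explicit finite-difference sketch of this advection-diffusion balance follows. Because the variable list is not reproduced here, the symbol meanings assumed in the sketch (h as slick thickness, a single drift velocity u standing in for the current and wind-stress terms, E as a diffusion coefficient, R as a source term) are only a plausible reading, and the grid, time step and values are illustrative:

```python
import numpy as np

# Rough 1-D explicit sketch of dh/dt + d(h*u)/dx - E * d2h/dx2 = R.
# Symbol meanings are assumed (h: slick thickness, u: drift velocity, E: diffusivity,
# R: source term); grid sizes, time step and values are illustrative only, and
# stability (CFL) and boundary handling are ignored for brevity.
nx, dx, dt = 200, 50.0, 1.0          # grid points, spacing [m], time step [s]
E, u, R = 5.0, 0.3, 0.0              # diffusivity [m^2/s], drift [m/s], source term

h = np.zeros(nx)
h[90:110] = 0.01                     # initial 1 cm thick slick in the middle of the grid

for _ in range(500):
    flux = h * u                                     # advective flux h*u
    dflux_dx = np.gradient(flux, dx)                 # d(h*u)/dx
    d2h_dx2 = np.gradient(np.gradient(h, dx), dx)    # d2h/dx2
    h = h + dt * (-dflux_dx + E * d2h_dx2 + R)
    h = np.clip(h, 0.0, None)                        # thickness cannot be negative

print(round(h.max(), 5), round(h.sum() * dx, 3))     # peak thickness and slick "volume" per unit width
```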
Mackay's model predicts an increasing dispersion rate as the slick becomes thinner in one dimension. The model predicts that thin slicks will disperse faster than thick slicks for several reasons. Thin slicks are less effective at dampening waves and other sources of turbulence. Additionally, droplets formed upon dispersion are expected to be smaller in a thin slick and thus easier to disperse in water.
The model also includes: [ 23 ]
The model is lacking in several areas: it does not account for evaporation, the topography of the ocean floor or the geography of the spill zone. [ 23 ]
Johansen's model is more complex than Mackay's model. It considers particles to be in one of three states: at the surface, entrained in the water column or evaporated. The empirically based model uses probabilistic variables to determine where the dispersant will move and where it will go after it breaks up oil slicks. The drift of each particle is determined by the state of that particle; this means that a particle in the vapor state will travel much further than a particle on the surface (or under the surface) of the ocean. [ 23 ] This model improves on Mackay's model in several key areas, including terms for: [ 23 ]
Oil dispersants are modeled by Johansen using a different set of entrainment and resurfacing parameters for treated versus untreated oil. This allows areas of the oil slick to be modeled differently, to better understand how oil spreads along the water's surface.
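The particle-state idea can be illustrated with a toy Monte Carlo sketch in which each particle is on the surface, entrained in the water column, or evaporated, and the entrainment and resurfacing probabilities differ for treated and untreated oil; all probabilities here are invented for illustration and are not Johansen's published parameters:

```python
import random

# Toy particle-state sketch of a Johansen-style model. The three states and the idea
# that treated oil entrains more readily follow the text; all probabilities are
# invented for illustration.
STATES = ("surface", "entrained", "evaporated")

def step(state: str, treated: bool) -> str:
    r = random.random()
    if state == "surface":
        p_entrain = 0.20 if treated else 0.05   # dispersant-treated oil entrains more readily
        if r < p_entrain:
            return "entrained"
        if r < p_entrain + 0.02:
            return "evaporated"
        return "surface"
    if state == "entrained":
        p_resurface = 0.05 if treated else 0.15  # treated droplets resurface less often
        return "surface" if r < p_resurface else "entrained"
    return "evaporated"                          # evaporated particles stay evaporated

def simulate(n_particles: int = 10_000, n_steps: int = 50, treated: bool = True) -> dict:
    counts = {s: 0 for s in STATES}
    for _ in range(n_particles):
        state = "surface"
        for _ in range(n_steps):
            state = step(state, treated)
        counts[state] += 1
    return counts

print("treated:  ", simulate(treated=True))
print("untreated:", simulate(treated=False))
```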
Surfactants are classified into four main types, each with different properties and applications: anionic , cationic, nonionic and zwitterionic (or amphoteric). Anionic surfactants are compounds that contain an anionic polar group. Examples of anionic surfactants include sodium dodecyl sulfate and dioctyl sodium sulfosuccinate . [ 25 ] Included in this class of surfactants are sodium alkylcarboxylates (soaps). [ 26 ] Cationic surfactants are similar in nature to anionic surfactants, except that the surfactant molecules carry a positive charge at the hydrophilic portion. Many of these compounds are quaternary ammonium salts , such as cetrimonium bromide (CTAB). [ 26 ] Non-ionic surfactants are uncharged and, together with anionic surfactants, make up the majority of oil-dispersant formulations. [ 25 ] The hydrophilic portion of the surfactant contains polar functional groups , such as -OH or -NH. [ 26 ] Zwitterionic surfactants are the most expensive and are used for specific applications. [ 26 ] These compounds have both positively and negatively charged components. An example of a zwitterionic compound is phosphatidylcholine , which as a lipid is largely insoluble in water. [ 26 ]
Surfactant behavior is highly dependent on the hydrophilic-lipophilic balance (HLB) value. The HLB is a scale from 0 to 20 for non-ionic surfactants that takes into account the chemical structure of the surfactant molecule. A value of zero corresponds to the most lipophilic and a value of 20 to the most hydrophilic non-ionic surfactant. [ 5 ] In general, compounds with an HLB between one and four will not mix with water. Compounds with an HLB value above 13 will form a clear solution in water. [ 25 ] Oil dispersants usually have HLB values of 8–18. [ 25 ]
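As shorthand for the HLB ranges quoted above, a small helper might be written as follows; the category labels are informal and the numeric ranges are those given in the paragraph:

```python
# Illustrative mapping of the HLB ranges quoted above; labels are informal.
def hlb_behaviour(hlb: float) -> str:
    if not 0 <= hlb <= 20:
        raise ValueError("HLB for non-ionic surfactants is defined on a 0-20 scale")
    notes = []
    if 1 <= hlb <= 4:
        notes.append("will not mix with water")
    if hlb > 13:
        notes.append("forms a clear solution in water")
    if 8 <= hlb <= 18:
        notes.append("typical range for oil-dispersant formulations")
    return "; ".join(notes) or "intermediate behaviour"

for value in (3, 10, 15):
    print(value, "->", hlb_behaviour(value))
```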
Two formulations of different dispersing agents for oil spills, Dispersit and Omni-Clean, are shown below. A key difference between the two is that Omni-Clean uses ionic surfactants while Dispersit uses entirely non-ionic surfactants. Omni-Clean was formulated for little or no toxicity toward the environment. Dispersit, however, was designed as a competitor to Corexit. Dispersit's non-ionic surfactants include both primarily oil-soluble and primarily water-soluble components. The partitioning of surfactants between the phases allows for effective dispersion.
Concerns regarding the environmental persistence of oil dispersants and their toxicity to various flora and fauna date back to their early use in the 1960s and 1970s. [ 33 ] Both the degradation and the toxicity of dispersants depend on the chemicals chosen for the formulation. Compounds which interact too harshly with oil dispersants should be tested to ensure that they meet three criteria: [ 34 ]
Dispersants can be delivered in aerosolized form by aircraft or boat. Sufficient dispersant, with droplets of the proper size, is necessary; this can be achieved with an appropriate pumping rate. Droplets larger than 1,000 μm are preferred, to ensure that they are not blown away by the wind. The ratio of dispersant to oil is typically 1:20. [ 20 ] | https://en.wikipedia.org/wiki/Oil_dispersant |
An oil field engine is a type of internal combustion engine used as a power source in the production of crude petroleum . Most commonly, the term refers to a class of reciprocating engines built in the mid-to-late 19th and early 20th centuries.
With the drilling of the first commercially successful oil well by Edwin Drake in 1859, a new industry was born, one that would rapidly demand new technologies for extraction. Initially, the steam engine used for drilling the well was kept in place to lift the oil to the surface. As the well's production dropped off, the economic feasibility of firing a boiler and maintaining a steam operation at each individual well fell far short of the income from the well's production (often only a fraction of a barrel per day). The ideal solution was to install a new gas engine at each well, thus eliminating the hours of preparatory work and large quantities of fuel required to steam a boiler for only a few hours of production. High initial cost prevented this in most cases, so it became much more feasible to convert the abundance of existing steam engines to gas engines. The idea of a converted engine is most commonly credited to Dr. Edwin J. Fithian, a Portersville, PA physician with a great interest in mechanics. His 1897 prototype for a 10-horsepower conversion cylinder was turned down by the Oil Well Supply Company of Oil City, PA , so in 1898 Dr. Fithian partnered with John Carruthers and formed the Carruthers-Fithian Clutch Company, with headquarters in Grove City, PA . The "half-breed" concept (so called because these engines were half steam engine, half gas engine) was an immediate success, with an oil producer able to convert a steam engine with a 10 HP gas cylinder and clutch for $120.00. Soon after, multiple companies capitalized on the market for conversion cylinders, most producing a simple two-stroke gas cylinder to bolt onto a steam bedplate, thus avoiding patent infringement claims from Carruthers-Fithian (which by 1899 had formed the Bessemer Gas Engine Company in Grove City). [ 1 ] Other manufacturers produced engines that were purely of internal-combustion design, and examples of both exist to this day.
Whether two- or four-stroke in design, all oilfield engines share some common parts. A heavy cast iron bedplate secures the engine to its base, usually of concrete. The cylinder is attached to one end of the bedplate, and the crankshaft bearings are at the other. The crankshaft rests in these bearings, with either one or two flywheels and a clutch fastened to it. A great number of oil field engines used a crosshead to connect the piston rod to the connecting rod; this slides back and forth between the bedplate and crosshead guides. Ignition in the combustion chamber is either by hot tube or spark plug .
There were a number of builders of oilfield engines, most located in the region producing Pennsylvania-Grade crude petroleum. Some of the best known include: | https://en.wikipedia.org/wiki/Oil_field_engine |
An oil filter is a filter designed to remove contaminants from engine oil , transmission oil , lubricating oil , or hydraulic oil . Their chief use is in internal-combustion engines for motor vehicles (both on- and off-road ), powered aircraft , railway locomotives, ships and boats, and static engines such as generators and pumps. Other vehicle hydraulic systems, such as those in automatic transmissions and power steering , are often equipped with an oil filter. Gas turbine engines, such as those on jet aircraft, also require the use of oil filters. Oil filters are used in many different types of hydraulic machinery . The oil industry itself employs filters for oil production, oil pumping, and oil recycling. Modern engine oil filters tend to be "full-flow" (inline) or "bypass".
Early automobile engines did not have oil filters, having only a rudimentary mesh sieve placed at the oil pump intake. Consequently, along with the generally low quality of oil available, very frequent oil changes were required. The Purolator oil filter was the first oil filter for the automobile; it revolutionized the filtration industry, and is still in production today. [ 1 ] The Purolator was a bypass filter, whereby most of the oil was pumped from the oil sump directly to the engine's working parts, while a smaller proportion of the oil was sent through the filter via a second flow path, filtering the oil over time. [ 2 ]
A full-flow system will have a pump which sends pressurised oil through a filter to the engine bearings, after which the oil returns by gravity to the sump . In the case of a dry sump engine, the oil that reaches the sump is evacuated by a second pump to a remote oil tank. The function of the full-flow filter is to protect the engine from wear through abrasion.
Modern bypass oil filter systems are secondary systems whereby a bleed from the main oil pump supplies oil to the bypass filter; the oil then passes not to the engine but back to the sump or oil tank. The purpose of the bypass is to provide a secondary filtration system that keeps the oil in good condition, free of dirt, soot and water, achieving much smaller particle retention than is practical for full-flow filtration. The full-flow filter is still used to prevent any excessively large particles from causing substantial abrasion or acute blockage in the engine. Originally used on commercial and industrial diesel engines with large oil capacities, where the cost of oil analysis testing and extra filtration to extend oil change intervals makes economic sense, bypass oil filters are becoming more common in private consumer applications. [ 3 ] [ 4 ] [ 5 ] (It is essential that the bypass does not compromise the pressurised oil feed within the full-flow system; one way to avoid such compromise is to make the bypass system completely independent.)
Most pressurized lubrication systems incorporate an overpressure relief valve to allow oil to bypass the filter if its flow restriction is excessive, to protect the engine from oil starvation. Filter bypass may occur if the filter is clogged or the oil is thickened by cold weather. The overpressure relief valve is frequently incorporated into the oil filter.
Filters mounted such that oil tends to drain from them usually incorporate an anti-drainback valve to hold oil in the filter after the engine (or other lubrication system) is shut down. This is done to avoid a delay in oil pressure buildup once the system is restarted; without an anti-drainback valve, pressurized oil would have to fill the filter before travelling onward to the engine's working parts. This situation can cause premature wear of moving parts due to initial lack of oil.
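As a rough illustration of the overpressure relief behaviour described above, the sketch below opens a bypass when the pressure drop across the filter element exceeds a set differential; the threshold value and names are hypothetical rather than taken from any particular filter specification:

```python
# Hypothetical sketch of an overpressure relief (bypass) valve decision: if the
# pressure drop across a clogged or cold-thickened filter exceeds the relief setting,
# oil bypasses the element rather than starving the engine. Values are illustrative.
RELIEF_SETTING_KPA = 100.0   # assumed bypass-valve opening differential

def filter_bypass_open(pressure_upstream_kpa: float, pressure_downstream_kpa: float) -> bool:
    delta_p = pressure_upstream_kpa - pressure_downstream_kpa
    return delta_p > RELIEF_SETTING_KPA

print(filter_bypass_open(350.0, 300.0))   # 50 kPa drop -> False, oil flows through the element
print(filter_bypass_open(420.0, 280.0))   # 140 kPa drop -> True, valve opens to protect the engine
```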
Mechanical designs employ an element made of bulk material (such as cotton waste) or pleated filter paper to entrap and sequester suspended contaminants. As material builds up on (or in) the filtration medium, oil flow is progressively restricted. This requires periodic replacement of the filter element (or the entire filter, if the element is not separately replaceable).
Early engine oil filters were of cartridge (or replaceable element ) construction, in which a permanent housing contains a replaceable filter element or cartridge. The housing is mounted either directly on the engine or remotely with supply and return pipes connecting it to the engine. In the mid-1950s, the spin-on oil filter design was introduced: a self-contained housing and element assembly which was to be unscrewed from its mount, discarded, and replaced with a new one. This made filter changes more convenient and potentially less messy, and quickly came to be the dominant type of oil filter installed by the world's automakers. Conversion kits were offered for vehicles originally equipped with cartridge-type filters. [ 6 ] In the 1990s, European and Asian automakers in particular began to shift back in favor of replaceable-element filter construction, because it generates less waste with each filter change. American automakers have likewise begun to shift to replaceable-cartridge filters, and retrofit kits to convert from spin-on to cartridge-type filters are offered for popular applications. [ 7 ] Commercially available automotive oil filters vary in their design, materials, and construction details. Filters made from completely synthetic material, excepting the metal drain cylinders contained within, are far superior to and longer lasting than the traditional cardboard/cellulose/paper types that still predominate. These variables affect the efficacy, durability, and cost of the filter. [ 8 ]
Magnetic filters use a permanent magnet or an electromagnet to capture ferromagnetic particles. An advantage of magnetic filtration is that maintaining the filter simply requires cleaning the particles from the surface of the magnet. Automatic transmissions in vehicles frequently have a magnet in the fluid pan to sequester magnetic particles and prolong the life of the media-type fluid filter. Some companies are manufacturing magnets that attach to the outside of an oil filter or magnetic drain plugs—first invented and offered for cars and motorcycles in the mid-1930s [ 9 ] —to aid in capturing these metallic particles, though there is ongoing debate as to the effectiveness of such devices. [ 10 ]
A sedimentation or gravity bed filter allows contaminants heavier than oil to settle to the bottom of a container under the influence of gravity .
A centrifuge oil cleaner is a rotary sedimentation device using centrifugal force rather than gravity to separate contaminants from the oil, in the same manner as any other centrifuge . Pressurized oil enters the center of the housing and passes into a drum rotor free to spin on a bearing and seal . The rotor has two jet nozzles arranged to direct a stream of oil at the inner housing to rotate the drum. The oil then slides to the bottom of the housing wall, leaving particulate oil contaminants stuck to the housing walls. The housing must periodically be cleaned, or the particles will accumulate to such a thickness as to stop the drum rotating. In this condition, unfiltered oil will be recirculated. Advantages of the centrifuge are: (i) that the cleaned oil may separate from any water which, being heavier than oil, settles at the bottom and can be drained off (provided any water has not emulsified with the oil); and (ii) they are much less likely to become blocked than a conventional filter. If the oil pressure is insufficient to spin the centrifuge, it may instead be driven mechanically or electrically.
Note: some spin-on filters [ 11 ] are described as centrifugal, but they are not true centrifuges; rather, the oil is directed in such a way that there is a centrifugal swirl that helps contaminants stick to the outside of the filter.
High efficiency oil filters are a type of bypass filter that are claimed to allow extended oil drain intervals. [ 5 ] HE oil filters typically have pore sizes of 3 micrometres , which studies have shown reduce engine wear. [ 12 ] Some fleets have been able to increase their drain intervals up to 5-10 times. [ 13 ]
Deciding how clean the oil needs to be is important as cost increases rapidly with cleanliness. Having determined the optimum target cleanliness level for a contamination control programme, many engineers are then challenged by the process of optimizing the location of the filter. To ensure effective solid particle ingression balance, the engineer must consider various elements such as whether the filter will be for protection or for contamination control, ease of access for maintenance, and the performance of the unit being considered to meet the challenges of the target set. [ 14 ] | https://en.wikipedia.org/wiki/Oil_filter |
In light microscopy , oil immersion is a technique used to increase the resolving power of a microscope . This is achieved by immersing both the objective lens and the specimen in a transparent oil of high refractive index , thereby increasing the numerical aperture of the objective lens.
Without oil, light waves reflect off the slide specimen through the glass cover slip, through the air, and into the microscope lens (see the colored figure to the right). Unless a wave comes out at a 90-degree angle, it bends when it hits a new substance, the amount of bend depending on the angle. This distorts the image. Air has a very different index of refraction from glass, making for a larger bend compared to oil, which has an index more similar to glass. Specially manufactured oil can have nearly exactly the same refractive index as glass, making an oil-immersed lens nearly as effective as having glass entirely around the sample (which would be impractical).
Immersion oils are transparent oils that have specific optical and viscosity characteristics necessary for use in microscopy. Typical oils used have an index of refraction of around 1.515. [ 1 ] An oil-immersion objective is an objective lens specially designed to be used in this way. Many condensers also give optimal resolution when the condenser lens is immersed in oil.
Lenses reconstruct the light scattered by an object. To successfully achieve this end, ideally, all the diffraction orders have to be collected. This is related to the opening angle of the lens and its refractive index. The resolution of a microscope is defined as the minimum separation needed between two objects under examination in order for the microscope to discern them as separate objects. This minimum distance is labelled δ. If two objects are separated by a distance shorter than δ, then they will appear as a single object in the microscope.
A measure of the resolving power, R.P., of a lens is given by its numerical aperture , NA:
δ = λ / (2 NA)
where λ is the wavelength of light. From this it is clear that a good resolution (small δ) is connected with a high numerical aperture.
The numerical aperture of a lens is defined as
NA = n sin α 0
where α 0 is half the angle spanned by the objective lens seen from the sample, and n is the refractive index of the medium between the lens and specimen (≈1 for air).
State-of-the-art objectives can have numerical apertures of up to 0.95. Because sin α 0 ≤ 1, the numerical aperture can never be greater than unity for an objective lens in air. If the space between the objective lens and the specimen is filled with oil, however, the numerical aperture can obtain values greater than 1. This is because oil has a refractive index greater than 1.
From the above it is understood that oil between the specimen and the objective lens improves the resolving power by a factor 1/ n . Objectives specifically designed for this purpose are known as oil-immersion objectives.
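Reading the relations above as NA = n sin α0 and δ = λ/(2 NA), a quick numerical comparison of air versus oil immersion might look like this; the wavelength and acceptance half-angle are illustrative values rather than the specification of any particular objective:

```python
import math

# Sketch comparing resolution in air vs. oil immersion, using NA = n*sin(alpha0)
# and delta = lambda / (2*NA); the 67-degree half-angle and 550 nm wavelength are
# illustrative values, not taken from a specific objective.
def resolution_nm(wavelength_nm: float, n_medium: float, half_angle_deg: float) -> float:
    na = n_medium * math.sin(math.radians(half_angle_deg))
    return wavelength_nm / (2 * na)

wavelength = 550.0      # green light, nm
half_angle = 67.0       # half-angle of the light cone accepted by the objective

print("air (n=1.000):", round(resolution_nm(wavelength, 1.000, half_angle), 1), "nm")
print("oil (n=1.515):", round(resolution_nm(wavelength, 1.515, half_angle), 1), "nm")
```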
Oil-immersion objectives are used only at very large magnifications that require high resolving power. Objectives with high-power magnification have short focal lengths , facilitating the use of oil. The oil is applied to the specimen (conventional microscope), and the stage is raised, immersing the objective in oil. (In inverted microscopes the oil is applied to the objective).
The refractive indices of the oil and of the glass in the first lens element are nearly the same, which means that the refraction of light will be small upon entering the lens (the oil and glass are optically very similar). The correct immersion oil for an objective lens has to be used to ensure that the refractive indices match closely. Use of an oil-immersion lens with the incorrect immersion oil, or without immersion oil altogether, will suffer from spherical aberration. The strength of this effect depends on the size of the refractive index mismatch.
Oil immersion can generally only be used on rigidly mounted specimens; otherwise, the surface tension of the oil can move the coverslip and so move the sample underneath. This can also happen on inverted microscopes because the coverslip is below the slide.
Before the development of synthetic immersion oils in the 1940s, cedar tree oil was widely used. Cedar oil has an index of refraction of approximately 1.516. The numerical aperture of cedar tree oil objectives is generally around 1.3. Cedar oil has a number of disadvantages: it absorbs blue and ultraviolet light, yellows with age, has sufficient acidity to potentially damage objectives with repeated use (by attacking the cement used to join lenses ), and can change viscosity upon dilution with solvent (and thereby change its refraction index and dispersion ). Cedar oil must be removed from the objective immediately after use before it can harden, since removing hardened cedar oil can damage the lens. [ 2 ]
In modern microscopy, synthetic immersion oils are more commonly used, as they eliminate most of these problems. [ 2 ] NA values of 1.6 can be achieved with different oils. Unlike natural oils, synthetic ones do not harden on the lens and can typically be left on the objective for months at a time, although to best maintain a microscope it is best to remove the oil daily. Over time, oil can enter the front lens of the objective or seep into the barrel of the objective and damage the objective.
There are different types of immersion oils with different properties based on the type of microscopy. Types A and B are both general-purpose immersion oils with different viscosities. Type F immersion oil is best used for fluorescent imaging at room temperature (23 °C), while type N oil is made to be used at body temperature (37 °C) for live-cell imaging applications. All have a n D of 1.515, quite similar to the original cedar oil. [ 3 ] | https://en.wikipedia.org/wiki/Oil_immersion |
Oil mist refers to oil droplets suspended in the air in the size range of 1 to 10 μm.
Oil mist may form when high-pressure fuel oil , lubricating oil , hydraulic oil , or other oil is sprayed through a narrow crack, or when leaked oil contacts a high-temperature surface, vaporizes, and then condenses on contact with cooler air.
This happens while the fluids interact with the moving parts during machining . [ 1 ]
Oil droplets smaller than the mist range are difficult to generate under normal circumstances.
Oil droplets larger than the mist range remain in spray form, which has the advantage of a higher ignition temperature; the spray also settles easily, reducing the fire hazard. Oil mist inside the crankcase can cause a bigger problem.
When the concentration of oil mist increases and reaches the lower explosion limit (LEL; 50 mg/ℓ, as defined by the IACS), an explosion may occur if the mist contacts a surface hotter than 200 °C (392 °F) or a spark.
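A minimal sketch of this two-condition risk check (concentration at or above the LEL plus a hot surface or spark) is shown below; the function and parameter names are hypothetical:

```python
# Illustrative check of the two conditions mentioned above: mist concentration at or
# above the IACS lower explosion limit (50 mg/L) and contact with a surface hotter
# than 200 degrees C (or a spark). Function and parameter names are hypothetical.
LEL_MG_PER_L = 50.0
IGNITION_SURFACE_C = 200.0

def oil_mist_explosion_risk(concentration_mg_per_l: float,
                            hottest_surface_c: float,
                            spark_present: bool = False) -> bool:
    reaches_lel = concentration_mg_per_l >= LEL_MG_PER_L
    ignition_source = hottest_surface_c > IGNITION_SURFACE_C or spark_present
    return reaches_lel and ignition_source

print(oil_mist_explosion_risk(55.0, 230.0))   # True: concentration above LEL and hot surface present
print(oil_mist_explosion_risk(20.0, 230.0))   # False: below the LEL
```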
The International Association of Classification Societies (IACS) mandates that all ships with a cylinder diameter greater than 300mm or engine power over 2,250 kW must be equipped with either bearing temperature detectors or oil mist detectors. [ 2 ]
With regard to occupational exposures, the Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health have set occupational exposure limits at 5 mg/m³ over an eight-hour time-weighted average, with a short-term exposure limit of 10 mg/m³. [ 3 ]
1. International Maritime Organization (IMO)
2. International Association of Classification Societies Ltd, Unified Requirements Concerning Machinery Installations
3. Oil Companies International Marine Forum, Ship Inspection Report (SIRE) Programme / Vessel Inspection Questionnaires for Oil Tankers, Combination Carriers, Shuttle Tankers, Chemical Tankers and Gas Carriers | https://en.wikipedia.org/wiki/Oil_mist |
Oil of brick , called by apothecaries Oleum de Lateribus and by alchemists Oil of Philosophers , was an empyreumatic oil obtained by subjecting a brick soaked in oil, such as olive oil , to distillation at a high temperature.
The process initially started with pieces of brick, which were heated red hot in live coals, and extinguished in an earth half-saturated with olive oil. Being then separated and pounded grossly, the brick absorbs the oil. It was then put in a retort , and placed in a reverberatory furnace , where the oil was drawn out by fire. [ 1 ]
Oil of brick was used in pre-modern medicine as a treatment for tumors , in the spleen , in palsies , and epilepsies . [ 1 ] It was used by lapidaries as a vehicle for the emery by which stones and gems were sawn or cut.
| https://en.wikipedia.org/wiki/Oil_of_brick |
An oil platform (also called an oil rig , offshore platform , oil production platform , etc.) is a large structure with facilities to extract and process petroleum and natural gas that lie in rock formations beneath the seabed . Many oil platforms will also have facilities to accommodate the workers, although it is also common to have a separate accommodation platform linked by bridge to the production platform. Most commonly, oil platforms engage in activities on the continental shelf , though they can also be used in lakes, inshore waters, and inland seas. Depending on the circumstances, the platform may be fixed to the ocean floor, consist of an artificial island , or float . [ 1 ] In some arrangements the main facility may have storage facilities for the processed oil. Remote subsea wells may also be connected to a platform by flow lines and by umbilical connections. These sub-sea facilities may include one or more subsea wells or manifold centres for multiple wells.
Offshore drilling presents environmental challenges, both from the produced hydrocarbons and the materials used during the drilling operation. Controversies include the ongoing US offshore drilling debate . [ 2 ]
There are many different types of facilities from which offshore drilling operations take place. These include bottom-founded drilling rigs ( jackup barges and swamp barges), combined drilling and production facilities, either bottom-founded or floating platforms, and deepwater mobile offshore drilling units (MODU), including semi-submersibles and drillships. These are capable of operating in water depths up to 3,000 metres (9,800 ft). In shallower waters, the mobile units are anchored to the seabed. However, in deeper water (more than 1,500 metres (4,900 ft)), the semisubmersibles or drillships are maintained at the required drilling location using dynamic positioning .
Jan Józef Ignacy Łukasiewicz [ 3 ] ( Polish pronunciation: [iɡˈnatsɨ wukaˈɕɛvitʂ] ; 8 March 1822 – 7 January 1882) was a Polish pharmacist, engineer, businessman, inventor, and philanthropist. He was one of the most prominent philanthropists in the Kingdom of Galicia and Lodomeria, crown land of Austria-Hungary. He was a pioneer who in 1856 built the world's first modern oil refinery.
Around 1891, the first submerged oil wells were drilled from platforms built on piles in the fresh waters of the Grand Lake St. Marys (a.k.a. Mercer County Reservoir) in Ohio . The wide but shallow reservoir was built from 1837 to 1845 to provide water to the Miami and Erie Canal .
Around 1896, the first submerged oil wells in salt water were drilled in the portion of the Summerland field extending under the Santa Barbara Channel in California . The wells were drilled from piers extending from land out into the channel.
Other notable early submerged drilling activities occurred on the Canadian side of Lake Erie since 1913 and Caddo Lake in Louisiana in the 1910s. Shortly thereafter, wells were drilled in tidal zones along the Gulf Coast of Texas and Louisiana. The Goose Creek field near Baytown, Texas , is one such example. In the 1920s, drilling was done from concrete platforms in Lake Maracaibo , Venezuela .
The oldest offshore well recorded in Infield's offshore database is the Bibi Eibat well which came on stream in 1923 in Azerbaijan . [ 4 ] Landfill was used to raise shallow portions of the Caspian Sea .
In the early 1930s, the Texas Company developed the first mobile steel barges for drilling in the brackish coastal areas of the gulf.
In 1937, Pure Oil Company (now Chevron Corporation ) and its partner Superior Oil Company (now part of ExxonMobil Corporation ) used a fixed platform to develop a field in 14 feet (4.3 m) of water, one mile (1.6 km) offshore of Calcasieu Parish, Louisiana .
In 1938, Humble Oil built a mile-long wooden trestle with railway tracks into the sea at McFadden Beach on the Gulf of Mexico, placing a derrick at its end – this was later destroyed by a hurricane. [ 5 ]
In 1945, concern for American control of its offshore oil reserves caused President Harry Truman to issue an Executive Order unilaterally extending American territory to the edge of its continental shelf, an act that effectively ended the 3-mile limit " freedom of the seas " regime.
In 1946, Magnolia Petroleum (now ExxonMobil ) drilled at a site 18 miles (29 km) off the coast, erecting a platform in 18 feet (5.5 m) of water off St. Mary Parish, Louisiana .
In early 1947, Superior Oil erected a drilling/production platform in 20 ft (6.1 m) of water some 18 miles [ vague ] off Vermilion Parish, Louisiana . But it was Kerr-McGee Oil Industries (now part of Occidental Petroleum ), as operator for partners Phillips Petroleum ( ConocoPhillips ) and Stanolind Oil & Gas ( BP ), that completed its historic Ship Shoal Block 32 well in October 1947, months before Superior actually drilled a discovery from their Vermilion platform farther offshore. In any case, that made Kerr-McGee's well the first oil discovery drilled out of sight of land. [ 6 ] [ 7 ]
The British Maunsell Forts constructed during World War II are considered the direct predecessors of modern offshore platforms. Having been pre-constructed in a very short time, they were then floated to their location and placed on the shallow bottom of the Thames and the Mersey estuary. [ 7 ] [ 8 ]
In 1954, the first jackup oil rig was ordered by Zapata Oil . It was designed by R. G. LeTourneau and featured three electro-mechanically operated lattice-type legs. Built on the shores of the Mississippi River by the LeTourneau Company, it was launched in December 1955, and christened "Scorpion". The Scorpion was put into operation in May 1956 off Port Aransas , Texas. It was lost in 1969. [ 9 ] [ 10 ] [ 11 ]
When offshore drilling moved into deeper waters of up to 30 metres (98 ft), fixed platform rigs were built. When drilling equipment was needed at depths of 30 metres (98 ft) to 120 metres (390 ft) in the Gulf of Mexico, the first jack-up rigs began appearing from specialized offshore drilling contractors such as the forerunners of ENSCO International.
The first semi-submersible resulted from an unexpected observation in 1961. Blue Water Drilling Company owned and operated the four-column submersible Blue Water Rig No.1 in the Gulf of Mexico for Shell Oil Company . As the pontoons were not sufficiently buoyant to support the weight of the rig and its consumables, it was towed between locations at a draught midway between the top of the pontoons and the underside of the deck. It was noticed that the motions at this draught were very small, and Blue Water Drilling and Shell jointly decided to try operating the rig in its floating mode. The concept of an anchored, stable floating deep-sea platform had been designed and tested back in the 1920s by Edward Robert Armstrong for the purpose of operating aircraft with an invention known as the "seadrome". The first purpose-built drilling semi-submersible Ocean Driller was launched in 1963. Since then, many semi-submersibles have been purpose-designed for the drilling industry mobile offshore fleet.
The first offshore drillship was the CUSS 1 developed for the Mohole project to drill into the Earth's crust.
As of June, 2010, there were over 620 mobile offshore drilling rigs (Jackups, semisubs, drillships, barges) available for service in the competitive rig fleet. [ 12 ]
One of the world's deepest hubs is currently the Perdido in the Gulf of Mexico, floating in 2,438 meters of water. It is operated by Shell plc and was built at a cost of $3 billion. [ 13 ] The deepest operational platform is the Petrobras America Cascade FPSO in the Walker Ridge 249 field in 2,600 meters of water.
Notable offshore basins include:
(Image caption: Larger lake- and sea-based offshore oil platforms and drilling rigs. Jack-up drilling rigs, drillships, and gravity-based structures are not pictured.)
These platforms are built on concrete or steel legs, or both, anchored directly onto the seabed, supporting the deck with space for drilling rigs, production facilities and crew quarters. Such platforms are, by virtue of their immobility, designed for very long term use (for instance the Hibernia platform ). Various types of structure are used: steel jacket, concrete caisson , floating steel, and even floating concrete . Steel jackets are structural sections made of tubular steel members, and are usually piled into the seabed. For more details regarding the design, construction and installation of such platforms, see [ 18 ] and [ 19 ].
Concrete caisson structures , pioneered by the Condeep concept, often have in-built oil storage in tanks below the sea surface and these tanks were often used as a flotation capability, allowing them to be built close to shore ( Norwegian fjords and Scottish firths are popular because they are sheltered and deep enough) and then floated to their final position where they are sunk to the seabed. Fixed platforms are economically feasible for installation in water depths up to about 520 m (1,710 ft).
These platforms consist of slender, flexible towers and a pile foundation supporting a conventional deck for drilling and production operations. Compliant towers are designed to sustain significant lateral deflections and forces, and are typically used in water depths ranging from 370 to 910 metres (1,210 to 2,990 ft).
TLPs are floating platforms tethered to the seabed in a manner that eliminates most vertical movement of the structure. TLPs are used in water depths up to about 2,000 meters (6,600 feet). The "conventional" TLP is a 4-column design that looks similar to a semisubmersible. Proprietary versions include the Seastar and MOSES mini TLPs; they are relatively low cost, used in water depths between 180 and 1,300 metres (590 and 4,270 ft). Mini TLPs can also be used as utility, satellite or early production platforms for larger deepwater discoveries.
Spars are moored to the seabed like TLPs, but whereas a TLP has vertical tension tethers, a spar has more conventional mooring lines. Spars have to-date been designed in three configurations: the "conventional" one-piece cylindrical hull; the "truss spar", in which the midsection is composed of truss elements connecting the upper buoyant hull (called a hard tank) with the bottom soft tank containing permanent ballast; and the "cell spar", which is built from multiple vertical cylinders. The spar has more inherent stability than a TLP since it has a large counterweight at the bottom and does not depend on the mooring to hold it upright. It also has the ability, by adjusting the mooring line tensions (using chain-jacks attached to the mooring lines), to move horizontally and to position itself over wells at some distance from the main platform location. The first production spar [ when? ] was Kerr-McGee's Neptune, anchored in 590 m (1,940 ft) in the Gulf of Mexico; however, spars (such as Brent Spar ) were previously used [ when? ] as FSOs.
Eni 's Devil's Tower located in 1,710 m (5,610 ft) of water in the Gulf of Mexico, was the world's deepest spar until 2010. The world's deepest platform as of 2011 was the Perdido spar in the Gulf of Mexico, floating in 2,438 metres of water. It is operated by Royal Dutch Shell and was built at a cost of $3 billion. [ 13 ] [ 20 ] [ 21 ]
The first truss spars [ when? ] were Kerr-McGee's Boomvang and Nansen. [ citation needed ] The first (and, as of 2010, only) cell spar [ when? ] is Kerr-McGee's Red Hawk. [ 22 ]
These platforms have hulls (columns and pontoons) of sufficient buoyancy to cause the structure to float, but of weight sufficient to keep the structure upright. Semi-submersible platforms can be moved from place to place and can be ballasted up or down by altering the amount of flooding in buoyancy tanks. They are generally anchored by combinations of chain, wire rope or polyester rope, or both, during drilling and/or production operations, though they can also be kept in place by the use of dynamic positioning . Semi-submersibles can be used in water depths from 60 to 6,000 metres (200 to 20,000 ft).
The main types of floating production systems are FPSOs (floating production, storage, and offloading systems) . FPSOs consist of large monohull structures, generally (but not always) ship-shaped, equipped with processing facilities. These platforms are moored to a location for extended periods, and do not actually drill for oil or gas. Some variants of these applications, called FSOs (floating storage and offloading systems) or FSUs (floating storage units), are used exclusively for storage purposes and host very little process equipment. FPSOs are among the most widely used floating production systems.
The world's first floating liquefied natural gas (FLNG) facility is in production. See the section on particularly large examples below.
Jack-up Mobile Drilling Units (or jack-ups), as the name suggests, are rigs that can be jacked up above the sea using legs that can be lowered, much like jacks . These MODUs (Mobile Offshore Drilling Units) are typically used in water depths up to 120 metres (390 ft), although some designs can go to 170 m (560 ft) depth. They are designed to move from place to place, and then anchor themselves by deploying their legs to the ocean bottom using a rack and pinion gear system on each leg.
A drillship is a maritime vessel that has been fitted with drilling apparatus. It is most often used for exploratory drilling of new oil or gas wells in deep water but can also be used for scientific drilling. Early versions were built on a modified tanker hull, but purpose-built designs are used today. Most drillships are outfitted with a dynamic positioning system to maintain position over the well. They can drill in water depths up to 3,700 m (12,100 ft). [ 23 ]
A GBS can either be steel or concrete and is usually anchored directly onto the seabed. Steel GBSs are predominantly used when there is no or limited availability of crane barges to install a conventional fixed offshore platform, for example in the Caspian Sea. There are several steel GBSs in the world today (e.g. in offshore Turkmenistan waters (Caspian Sea) and offshore New Zealand). Steel GBSs do not usually provide hydrocarbon storage capability. A GBS is mainly installed by pulling it off the yard, by wet-tow and/or dry-tow, and self-installing by controlled ballasting of the compartments with sea water. To position the GBS during installation, it may be connected to a transportation barge or any other barge (provided it is large enough to support the GBS) using strand jacks. The jacks are released gradually while the GBS is ballasted to ensure that it does not sway too far from the target location.
These installations, sometimes called toadstools, are small platforms, consisting of little more than a well bay , helipad and emergency shelter. They are designed to be operated remotely under normal conditions, only to be visited occasionally for routine maintenance or well work .
These installations, also known as satellite platforms , are small unmanned platforms consisting of little more than a well bay and a small process plant . They are designed to operate in conjunction with a static production platform which is connected to the platform by flow lines or by umbilical cable , or both.
This is a list of oil wells based on the depth of the water in which they were drilled. It does not include the depth drilled below the seabed, which in some cases is over 10,000 metres.
Other deep compliant towers and fixed platforms, by water depth:
The Hibernia platform in Canada is the world's heaviest offshore platform, located on the Jeanne D'Arc Basin , in the Atlantic Ocean off the coast of Newfoundland . This gravity base structure (GBS), which sits on the ocean floor, is 111 metres (364 ft) high and has storage capacity for 1.3 million barrels (210,000 m 3 ) of crude oil in its 85-metre (279 ft) high caisson. The platform acts as a small concrete island with serrated outer edges designed to withstand the impact of an iceberg . The GBS contains production storage tanks and the remainder of the void space is filled with ballast with the entire structure weighing in at 1.2 million tons .
Royal Dutch Shell has developed the first Floating Liquefied Natural Gas (FLNG) facility, which is situated approximately 200 km off the coast of Western Australia . It is the largest floating offshore facility. It is approximately 488m long and 74m wide with displacement of around 600,000t when fully ballasted. [ 24 ]
A typical oil production platform is self-sufficient in energy and water needs, housing electrical generation, water desalinators and all of the equipment necessary to process oil and gas such that it can be either delivered directly onshore by pipeline or to a floating platform or tanker loading facility, or both. Elements in the oil/gas production process include wellhead , production manifold , production separator , glycol process to dry gas, gas compressors , water injection pumps , oil/gas export metering and main oil line pumps.
Larger platforms are assisted by smaller ESVs (emergency support vessels) like the British Iolair that are summoned when something has gone wrong, e.g. when a search and rescue operation is required. During normal operations, PSVs (platform supply vessels) keep the platforms provisioned and supplied, and AHTS vessels can also supply them, as well as tow them to location and serve as standby rescue and firefighting vessels.
Not all of the following personnel are present on every platform. On smaller platforms, one worker can perform a number of different jobs. The following also are not names officially recognized in the industry:
Drill crew will be on board if the installation is performing drilling operations. A drill crew will normally comprise:
Well services crew will be on board for well work . The crew will normally comprise:
The nature of their operation—extraction of volatile substances sometimes under extreme pressure in a hostile environment—means risk; accidents and tragedies occur regularly. The U.S. Minerals Management Service reported 69 offshore deaths, 1,349 injuries, and 858 fires and explosions on offshore rigs in the Gulf of Mexico from 2001 to 2010. [ 30 ] On July 6, 1988, 167 people died when Occidental Petroleum 's Piper Alpha offshore production platform, on the Piper field in the UK sector of the North Sea , exploded after a gas leak. The resulting investigation conducted by Lord Cullen and publicized in the first Cullen Report was highly critical of a number of areas, including, but not limited to, management within the company, the design of the structure, and the Permit to Work System. The report was commissioned in 1988, and was delivered in November 1990. [ 31 ] The accident greatly accelerated the practice of providing living accommodations on separate platforms, away from those used for extraction.
The offshore can be in itself a hazardous environment. In March 1980, the ' flotel ' (floating hotel) platform Alexander L. Kielland capsized in a storm in the North Sea with the loss of 123 lives. [ 32 ]
In 2001, Petrobras 36 in Brazil exploded and sank five days later, killing 11 people.
Given the number of grievances and conspiracy theories that involve the oil business, and the importance of gas/oil platforms to the economy, platforms in the United States are believed to be potential terrorist targets. [ 33 ] Agencies and military units responsible for maritime counter-terrorism in the US ( Coast Guard , Navy SEALs , Marine Recon ) often train for platform raids. [ 34 ]
On April 20, 2010, the Deepwater Horizon platform, 52 miles offshore of Venice, Louisiana (property of Transocean and leased to BP ), exploded , killing 11 people, and sank two days later. The resulting undersea gusher, conservatively estimated to exceed 20 million US gallons (76,000 m 3 ) as of early June 2010, became the worst oil spill in US history, eclipsing the Exxon Valdez oil spill .
In British waters, the cost of removing all platform rig structures entirely was estimated in 2013 at £30 billion. [ 35 ]
Aquatic organisms invariably attach themselves to the undersea portions of oil platforms, turning them into artificial reefs. In the Gulf of Mexico and offshore California, the waters around oil platforms are popular destinations for sports and commercial fishermen, because of the greater numbers of fish near the platforms. The United States and Brunei have active Rigs-to-Reefs programs, in which former oil platforms are left in the sea, either in place or towed to new locations, as permanent artificial reefs. In the US Gulf of Mexico , as of September 2012, 420 former oil platforms, about 10 percent of decommissioned platforms, have been converted to permanent reefs. [ 36 ]
On the US Pacific coast, marine biologist Milton Love has proposed that oil platforms off California be retained as artificial reefs , instead of being dismantled (at great cost), because he has found them to be havens for many of the species of fish which are otherwise declining in the region, in the course of 11 years of research. [ 37 ] [ 38 ] Love is funded mainly by government agencies, but also in small part by the California Artificial Reef Enhancement Program . Divers have been used to assess the fish populations surrounding the platforms. [ 39 ]
Offshore oil production involves environmental risks, most notably oil spills from oil tankers or pipelines transporting oil from the platform to onshore facilities, and from leaks and accidents on the platform. [ 40 ] Produced water is also generated, which is water brought to the surface along with the oil and gas; it is usually highly saline and may include dissolved or unseparated hydrocarbons.
Offshore rigs are shut down during hurricanes. [ 41 ] In the Gulf of Mexico the number of hurricanes is increasing because of the increasing number of oil platforms that heat the surrounding air with methane. It is estimated that oil and gas facilities in the Gulf of Mexico emit approximately 500,000 tons of methane each year, corresponding to a 2.9% loss of produced gas. The increasing number of oil rigs also increases the number and movement of oil tankers, resulting in increasing CO 2 levels which directly warm the water in the zone. Warm waters are a key factor for hurricanes to form. [ 42 ]
To reduce the amount of carbon emissions otherwise released into the atmosphere, methane pyrolysis of natural gas pumped up by oil platforms is a possible alternative to flaring for consideration. Methane pyrolysis produces non-polluting hydrogen in high volume from this natural gas at low cost. This process operates at around 1000 °C and removes carbon in a solid form from the methane, producing hydrogen. [ 43 ] [ 44 ] [ 45 ] The carbon can then be pumped underground and is not released into the atmosphere.
It is being evaluated in research laboratories such as the Karlsruhe Liquid-metal Laboratory (KALLA) [ 46 ] and by the chemical engineering team at the University of California, Santa Barbara. [ 47 ]
If not decommissioned , [ 48 ] old platforms can be repurposed to pump CO 2 into rocks below the seabed. [ 49 ] [ 50 ] Others have been converted to launch rockets into space , and more are being redesigned for use with heavy-lift launch vehicles. [ 51 ]
In Saudi Arabia , there are plans to repurpose decommissioned oil rigs into a theme park . [ 52 ]
Offshore oil and gas production is more challenging than land-based installations due to the remote and harsher environment. Much of the innovation in the offshore petroleum sector concerns overcoming these challenges, including the need to provide very large production facilities. Production and drilling facilities may be very large and represent a major investment, such as the Troll A platform , which stands in a water depth of 300 metres.
Another type of offshore platform may float with a mooring system to maintain it on location. While a floating system may be lower cost in deeper waters than a fixed platform, the dynamic nature of the platforms introduces many challenges for the drilling and production facilities.
The ocean can add several thousand meters or more to the fluid column . The addition increases the equivalent circulating density and downhole pressures in drilling wells, as well as the energy needed to lift produced fluids for separation on the platform.
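As a back-of-the-envelope illustration of the added fluid column, the sketch below computes the hydrostatic pressure contributed by a column of seawater or drilling fluid, P = ρ g h; the densities and water depth are illustrative only:

```python
# Back-of-the-envelope sketch: hydrostatic pressure added by a fluid column,
# P = rho * g * h. Densities and water depth are illustrative values only.
G = 9.81  # gravitational acceleration, m/s^2

def hydrostatic_pressure_mpa(density_kg_m3: float, column_height_m: float) -> float:
    return density_kg_m3 * G * column_height_m / 1e6

water_depth = 2000.0     # metres of ocean above the wellhead
print("seawater column:", round(hydrostatic_pressure_mpa(1025.0, water_depth), 1), "MPa")
print("drilling fluid column:", round(hydrostatic_pressure_mpa(1200.0, water_depth), 1), "MPa")
```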
The trend today is to conduct more of the production operations subsea , by separating water from oil and re-injecting it rather than pumping it up to a platform, or by flowing to onshore, with no installations visible above the sea. Subsea installations help to exploit resources at progressively deeper waters—locations that had been inaccessible—and overcome challenges posed by sea ice such as in the Barents Sea . One such challenge in shallower environments is seabed gouging by drifting ice features (means of protecting offshore installations against ice action includes burial in the seabed).
Offshore manned facilities also present logistics and human resources challenges. An offshore oil platform is a small community in itself with cafeteria, sleeping quarters, management and other support functions. In the North Sea, staff members are transported by helicopter for a two-week shift. They usually receive higher salaries than onshore workers do. Supplies and waste are transported by ship, and the supply deliveries need to be carefully planned because storage space on the platform is limited. Today, much effort goes into relocating as many of the personnel as possible onshore, where management and technical experts are in touch with the platform by video conferencing. An onshore job is also more attractive for the aging workforce in the petroleum industry , at least in the western world. These efforts among others are contained in the established term integrated operations . The increased use of subsea facilities helps achieve the objective of keeping more workers onshore. Subsea facilities are also easier to expand, with new separators or different modules for different oil types, and are not limited by the fixed floor space of an above-water installation. | https://en.wikipedia.org/wiki/Oil_platform |
The oil print process is a photographic printmaking process that dates to the mid-19th century. Oil prints are made on paper on which a thick gelatin layer has been sensitized to light using dichromate salts. After the paper is exposed to light through a negative, the gelatin emulsion is treated in such a way that highly exposed areas take up an oil-based paint, forming the photographic image.
A significant drawback to the oil print process is that it requires the negative to be the same size as the final print because the medium is not sensitive enough to light to make use of an enlarger . A subtype of the oil print process, the bromoil process , was developed in the early 20th century to solve this problem.
The oil print and bromoil processes create soft images reminiscent of paint or pastels but with the distinctive indexicality of a photograph. For this reason, they were popular with the Pictorialists during the first half of the 20th century. The painterly qualities of the prints continue to appeal to artists and have recently led some contemporary art photographers to take up these processes again.
The origins of the oil print process go back to experiments by Alphonse Louis Poitevin with bichromated gelatin in the 1850s. [ 1 ]
To make an oil print, a piece of paper is coated with a thick gelatin layer containing dichromate salts that sensitize it to light. A contact print is made by laying a negative over the paper and exposing it to light, which leads to hardening of the dichromated gelatin in proportion to the amount of light that reaches the paper. After exposure, the print is soaked in water and the non-hardened areas absorb more water than the hardened parts. The sponge-dried but still moist paper is then inked with an oil-based ink, which sticks preferentially to the hardened (drier) areas. The result is a positive image in the color of the ink. As with other forms of printmaking, the ink application requires considerable skill, and no two prints are identical.
Multicolor oil prints are possible through local inking of the print, and it is also possible to create reverse prints by contact-printing the wet oil print to a piece of plain paper. Artists have also sometimes created variations by applying extra paint using brushes. In the later 19th century, it was possible to buy commercially prepared gelatin-coated paper. [ 1 ]
The bromoil process is a variation on the oil print process that allows for enlargements. [ 2 ] In 1907, E. J. Wall described how it should theoretically be possible to place a negative in an enlarger to produce a larger silver bromide positive, which would then be bleached, hardened, and inked following the oil print process. [ 1 ] That same year C. Welborne Piper worked out the practical details. [ 1 ] Much as Wall envisioned it, the bromoil process starts with a normally developed print exposed onto a silver-bromide paper that is then chemically bleached, hardened, and fixed. When the still-moist print is inked, the hardest (driest) areas take up the most ink while the wettest areas become the highlights.
An issue with the bromoil process is that inadequate rinsing of the chrome salts can lead to discoloration of the prints when exposed to light over long periods of time. In addition, irregularities in the thickness of the gelatin layer can, under unfavorable conditions, lead to stresses that damage the pictorial (ink) layer.
A version of the bromoil process was developed to produce full-color prints in the 1930s before commercial color film was developed. This technique requires three matching negatives of the subject, each made on Ilford Hypersensitive Panchromatic plates and shot through a blue, green, and red filter. The developed plates are enlarged and printed onto separate pieces of bromide-silver photographic paper, which are then bleached and hardened in the usual manner. The three prints are then inked with a firm bromoil ink, yellow on the blue-filtered print, red on the green-filtered print, and blue on the red-filtered print. The three inked prints are then treated as printing plates and passed through an etching press that will transfer the ink to a new piece of paper or cloth, reversing the image in the process. Care must be taken to maintain exact registration of the three plates. [ 3 ] | https://en.wikipedia.org/wiki/Oil_print_process |
Oil purification ( transformer , turbine, industrial, etc.) removes oil contaminants in order to prolong oil service life.
Contaminants and various impurities get into industrial oils during storage and operation. The most common contaminants are: [ 1 ]
Industrial oils are purified through sedimentation, filtration, centrifugation, vacuum treatment and adsorption purification. [ citation needed ]
Sedimentation is the settling of solid particles and water to the bottom of oil tanks under gravity. The main drawback of this process is the long time it takes. [ citation needed ]
Filtration is the partial removal of solid particles by passing the oil through a filter medium. Oil filtration systems generally use multistage filtration with coarse and fine filters. [ 2 ]
Centrifugation is the separation of oil from water, or oil from solid particles, by centrifugal force.
Vacuum treatment degasses and dehydrates industrial oil. This method is well suited for removing dispersed and dissolved water, as well as dissolved gases. [ 3 ]
Adsorption purification , in contrast to the methods mentioned above, does not remove solid particles and gases, but it shows good results at removing water, oil sludge and aging products. This process uses adsorbents of natural or artificial origin: bleaching clays, synthetic aluminosilicates, silica gels, zeolites, etc. [ citation needed ]
The terms "oil purification" and " oil regeneration " are often used synonymously, although they are not the same. Oil purification removes contaminants from oil; it can be used on its own or as part of oil regeneration. Oil regeneration additionally removes aging products (with the help of adsorbents) and stabilizes the oil with additives, so that regenerated oil is free of carcinogenic aging products and stabilized against further degradation. | https://en.wikipedia.org/wiki/Oil_purification
An oil refinery or petroleum refinery is an industrial process plant where petroleum (crude oil) is transformed and refined into products such as gasoline (petrol), diesel fuel , asphalt base , fuel oils , heating oil , kerosene , liquefied petroleum gas and petroleum naphtha . [ 1 ] [ 2 ] [ 3 ] Petrochemical feedstock like ethylene and propylene can also be produced directly by cracking crude oil without the need of using refined products of crude oil such as naphtha. [ 4 ] [ 5 ] The crude oil feedstock has typically been processed by an oil production plant . There is usually an oil depot at or near an oil refinery for the storage of incoming crude oil feedstock as well as bulk liquid products. In 2020, the total capacity of global refineries for crude oil was about 101.2 million barrels per day. [ 6 ]
Oil refineries are typically large, sprawling industrial complexes with extensive piping running throughout, carrying streams of fluids between large chemical processing units, such as distillation columns. Oil refineries use many different technologies and can be thought of as a type of chemical plant . Since December 2008, the world's largest oil refinery has been the Jamnagar Refinery owned by Reliance Industries , located in Gujarat , India, with a processing capacity of 1.24 million barrels (197,000 m 3 ) per day.
Oil refineries are an essential part of the petroleum industry's downstream sector. [ 7 ]
The Chinese were among the first civilizations to refine oil. [ 8 ] As early as the first century, the Chinese were refining crude oil for use as an energy source. [ 9 ] [ 8 ] Between 512 and 518, in the late Northern Wei dynasty , the Chinese geographer, writer and politician Li Daoyuan introduced the process of refining oil into various lubricants in his famous work Commentary on the Water Classic . [ 10 ] [ 9 ] [ 8 ]
Crude oil was often distilled by Persian chemists , with clear descriptions given in handbooks such as those of Muhammad ibn Zakarīya Rāzi ( c. 865–925 ). [ 11 ] The streets of Baghdad were paved with tar , derived from petroleum that became accessible from natural fields in the region. In the 9th century, oil fields were exploited in the area around modern Baku , Azerbaijan. These fields were described by the Arab geographer Abu al-Hasan 'Alī al-Mas'ūdī in the 10th century, and by Marco Polo in the 13th century, who described the output of those wells as hundreds of shiploads. [ 12 ] Arab and Persian chemists also distilled crude oil in order to produce flammable products for military purposes. Through Islamic Spain , distillation became available in Western Europe by the 12th century. [ 13 ]
In the Northern Song dynasty (960–1127), a workshop called the "Fierce Oil Workshop", was established in the city of Kaifeng to produce refined oil for the Song military as a weapon. The troops would then fill iron cans with refined oil and throw them toward the enemy troops, causing a fire – effectively the world's first " fire bomb ". The workshop was one of the world's earliest oil refining factories where thousands of people worked to produce Chinese oil-powered weaponry. [ 14 ]
Prior to the nineteenth century, petroleum was known and utilized in various fashions in Babylon , Egypt , China , the Philippines , Rome and Azerbaijan . However, the modern history of the petroleum industry is said to have begun in 1846 when Abraham Gessner of Nova Scotia , Canada devised a process to produce kerosene from coal. Shortly thereafter, in 1854, Ignacy Łukasiewicz began producing kerosene from hand-dug oil wells near the town of Krosno , Poland .
Romania was the first country to be recorded in world oil production statistics, according to the Academy Of World Records . [ 15 ] [ 16 ]
In North America, the first oil well was drilled in 1858 by James Miller Williams in Oil Springs, Ontario , Canada. [ 17 ] In the United States, the petroleum industry began in 1859 when Edwin Drake found oil near Titusville , Pennsylvania . [ 18 ] The industry grew slowly in the 1800s, primarily producing kerosene for oil lamps. In the early twentieth century, the introduction of the internal combustion engine and its use in automobiles created a market for gasoline that was the impetus for fairly rapid growth of the petroleum industry. The early finds of petroleum like those in Ontario and Pennsylvania were soon outstripped by large oil "booms" in Oklahoma , Texas and California . [ 19 ]
Samuel Kier established America's first oil refinery in Pittsburgh on Seventh Avenue near Grant Street, in 1853. [ 20 ] Polish pharmacist and inventor Ignacy Łukasiewicz established an oil refinery in Jasło , then part of the Austro-Hungarian Empire (now in Poland ) in 1854.
The first large refinery opened at Ploiești , Romania, in 1856–1857. [ 15 ] It was in Ploiești that, 51 years later, in 1908, Lazăr Edeleanu , a Romanian chemist of Jewish origin who had earned his PhD in 1887 with the discovery of amphetamine , invented, patented and tested at industrial scale the first modern liquid-extraction method for refining crude oil, the Edeleanu process . This increased refining efficiency compared with pure fractional distillation and allowed a massive expansion of refining plants. The process was subsequently implemented in France, Germany and the U.S., and within a few decades it had spread worldwide. In 1910 Edeleanu founded "Allgemeine Gesellschaft für Chemische Industrie" in Germany, which, given the success of the name, was renamed Edeleanu GmbH in 1930. During the Nazi era, the company was bought by Deutsche Erdöl-AG and Edeleanu, being of Jewish origin, moved back to Romania. After the war, the trademark was used by the successor company EDELEANU Gesellschaft mbH Alzenau (RWE) for many petroleum products, while the company itself was later integrated as EDL into the Pörner Group .
The Ploiești refineries, after being taken over by Nazi Germany , were bombed in the 1943 Operation Tidal Wave by the Allies , during the Oil Campaign of World War II .
Another close contender for the title of hosting the world's oldest oil refinery is Salzbergen in Lower Saxony , Germany. Salzbergen's refinery was opened in 1860.
At one point, the refinery in Ras Tanura , Saudi Arabia, owned by Saudi Aramco , was claimed to be the largest oil refinery in the world. For most of the 20th century, the largest refinery was the Abadan Refinery in Iran . This refinery suffered extensive damage during the Iran–Iraq War . Since 25 December 2008, the world's largest refinery complex has been the Jamnagar Refinery Complex, consisting of two refineries side by side operated by Reliance Industries Limited in Jamnagar, India, with a combined production capacity of 1,240,000 barrels per day (197,000 m 3 /d). PDVSA 's Paraguaná Refinery Complex on the Paraguaná Peninsula , Venezuela , with a capacity of 940,000 bbl/d (149,000 m 3 /d) (although effective run rates have been dramatically lower due to the impact of 20 years of sanctions [ citation needed ] ), and SK Energy 's Ulsan refinery in South Korea, with 840,000 bbl/d (134,000 m 3 /d), are the second and third largest, respectively.
Prior to World War II in the early 1940s, most petroleum refineries in the United States consisted simply of crude oil distillation units (often referred to as atmospheric crude oil distillation units). Some refineries also had vacuum distillation units as well as thermal cracking units such as visbreakers (viscosity breakers, units to lower the viscosity of the oil). All of the many other refining processes discussed below were developed during the war or within a few years after the war. They became commercially available within 5 to 10 years after the war ended and the worldwide petroleum industry experienced very rapid growth. The driving force for that growth in technology and in the number and size of refineries worldwide was the growing demand for automotive gasoline and aircraft fuel.
In the United States, for various complex economic and political reasons, the construction of new refineries came to a virtual stop in about the 1980s. However, many of the existing refineries in the United States have revamped many of their units and/or constructed add-on units in order to increase their crude oil processing capacity, increase the octane rating of their product gasoline, lower the sulfur content of their diesel fuel and home heating fuels to comply with environmental regulations, and comply with air pollution and water pollution requirements.
In the 19th century, refineries in the U.S. processed crude oil primarily to recover the kerosene . There was no market for the more volatile fraction, including gasoline, which was considered waste and was often dumped directly into the nearest river. The invention of the automobile shifted demand to gasoline and diesel , which remain the primary refined products today. [ 22 ]
Today, national and state legislation require refineries to meet stringent air and water cleanliness standards. In fact, oil companies in the U.S. perceive obtaining a permit to build a modern refinery to be so difficult and costly that no new refineries were built (though many have been expanded) in the U.S. from 1976 until 2014 when the small Dakota Prairie Refinery in North Dakota began operation. [ 23 ] More than half the refineries that existed in 1981 are now closed due to low utilization rates and accelerating mergers. [ 24 ] As a result of these closures, total US refinery capacity fell between 1981 and 1995, though the operating capacity stayed fairly constant in that time period at around 15,000,000 barrels per day (2,400,000 m 3 /d). [ 25 ] Increases in facility size and improvements in efficiencies have offset much of the lost physical capacity of the industry. In 1982 (the earliest data provided), the United States operated 301 refineries with a combined capacity of 17.9 million barrels (2,850,000 m 3 ) of crude oil each calendar day. In 2010, there were 149 operable U.S. refineries with a combined capacity of 17.6 million barrels (2,800,000 m 3 ) per calendar day. [ 26 ] By 2014 the number of refineries had fallen to 140, but total capacity had increased to 18.02 million barrels (2,865,000 m 3 ) per calendar day. In order to reduce operating costs and depreciation, refining is thus concentrated at fewer sites of larger capacity.
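A quick back-of-the-envelope calculation using only the refinery counts and capacities quoted above illustrates this consolidation; the figures come from the paragraph, and the per-site averaging is the only step added here (a sketch, not official statistics):

# Average crude capacity per operating U.S. refinery, using the figures cited above
data = {
    1982: (301, 17.9e6),   # (operable refineries, barrels per calendar day)
    2010: (149, 17.6e6),
    2014: (140, 18.02e6),
}
for year, (count, capacity) in data.items():
    print(year, round(capacity / count), "bbl/day per refinery")
# Prints roughly 59,000 (1982), 118,000 (2010) and 129,000 (2014) bbl/day per site,
# i.e. average site capacity has more than doubled while the number of sites halved.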
In 2009 through 2010, as revenue streams in the oil business dried up and profitability of oil refineries fell due to lower demand for product and high reserves of supply preceding the economic recession , oil companies began to close or sell the less profitable refineries. [ 27 ]
Raw or unprocessed crude oil is not generally useful in industrial applications, although "light, sweet" (low viscosity, low sulfur ) crude oil has been used directly as a burner fuel to produce steam for the propulsion of seagoing vessels. The lighter elements, however, form explosive vapors in the fuel tanks and are therefore hazardous, especially in warships . Instead, the hundreds of different hydrocarbon molecules in crude oil are separated in a refinery into components that can be used as fuels , lubricants , and feedstocks in petrochemical processes that manufacture such products as plastics , detergents , solvents , elastomers , and fibers such as nylon and polyesters .
Petroleum fossil fuels are burned in internal combustion engines to provide power for ships , automobiles , aircraft engines , lawn mowers , dirt bikes , and other machines. Different boiling points allow the hydrocarbons to be separated by distillation . Since the lighter liquid products are in great demand for use in internal combustion engines, a modern refinery will convert heavy hydrocarbons and lighter gaseous elements into these higher-value products. [ 28 ]
Oil can be used in a variety of ways because it contains hydrocarbons of varying molecular masses , forms and lengths such as paraffins , aromatics , naphthenes (or cycloalkanes ), alkenes , dienes , and alkynes . [ 29 ] While the molecules in crude oil include different atoms such as sulfur and nitrogen, the hydrocarbons are the most common form of molecule: chains of varying length and complexity made of hydrogen and carbon atoms , along with a small number of oxygen atoms. The differences in the structure of these molecules account for their varying physical and chemical properties , and it is this variety that makes crude oil useful in a broad range of applications.
Once separated and purified of any contaminants and impurities, the fuel or lubricant can be sold without further processing. Smaller molecules such as isobutane and propylene or butylenes can be recombined to meet specific octane requirements by processes such as alkylation , or more commonly, dimerization . The octane grade of gasoline can also be improved by catalytic reforming , which involves removing hydrogen from hydrocarbons, producing compounds with higher octane ratings such as aromatics . Intermediate products such as gasoils can even be reprocessed to break a heavy, long-chained oil into a lighter short-chained one, by various forms of cracking such as fluid catalytic cracking , thermal cracking , and hydrocracking . The final step in gasoline production is the blending of fuels with different octane ratings, vapor pressures , and other properties to meet product specifications. Another method for reprocessing and upgrading these intermediate products (residual oils) uses a devolatilization process to separate usable oil from the waste asphaltene material. Certain cracked streams are particularly suitable for producing petrochemicals, including polypropylene, heavier polymers, and block polymers, depending on the molecular weight and characteristics of the olefin species cracked from the source feedstock. [ 30 ]
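To make the final blending step concrete, octane is often first approximated as a volume-weighted average of the component streams before refiners correct it with nonlinear blend indices; the component fractions and octane numbers below are hypothetical, for illustration only:

# Simplified linear-by-volume octane blending estimate (real blending uses
# nonlinear blend indices; all component values here are illustrative)
components = [
    (0.60, 84.0),   # (volume fraction, research octane number) - hypothetical reformate
    (0.25, 93.0),   # hypothetical alkylate
    (0.15, 97.0),   # hypothetical high-octane stream
]
assert abs(sum(frac for frac, _ in components) - 1.0) < 1e-9
blend_ron = sum(frac * ron for frac, ron in components)
print(round(blend_ron, 1))   # ~88.2 for this illustrative blend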
Oil refineries are large-scale plants, processing about a hundred thousand to several hundred thousand barrels of crude oil a day. Because of the high capacity, many of the units operate continuously , as opposed to processing in batches , at steady state or nearly steady state for months to years. The high capacity also makes process optimization and advanced process control very desirable.
Petroleum products are materials derived from crude oil ( petroleum ) as it is processed in oil refineries . The majority of petroleum is converted to petroleum products, which includes several classes of fuels. [ 32 ]
Oil refineries also produce various intermediate products such as hydrogen , light hydrocarbons, reformate and pyrolysis gasoline . These are not usually transported but instead are blended or processed further on-site. Chemical plants are thus often adjacent to oil refineries, or a number of further chemical processes are integrated into the refinery itself. For example, light hydrocarbons are steam-cracked in an ethylene plant, and the produced ethylene is polymerized to produce polyethene .
To ensure both proper separation and environmental protection, a very low sulfur content is necessary in all but the heaviest products. The crude sulfur contaminant is transformed to hydrogen sulfide via catalytic hydrodesulfurization and removed from the product stream via amine gas treating . Using the Claus process , hydrogen sulfide is afterwards transformed to elemental sulfur to be sold to the chemical industry. The rather large amount of heat released by this process is used directly in other parts of the refinery. Often an electrical power plant is integrated into the whole refinery process to take up the excess heat.
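For reference, the Claus chemistry mentioned above is usually summarized as a thermal stage followed by a catalytic stage; the simplified textbook reactions are:

\begin{aligned}
2\,\mathrm{H_2S} + 3\,\mathrm{O_2} &\rightarrow 2\,\mathrm{SO_2} + 2\,\mathrm{H_2O} && \text{(thermal stage)}\\
2\,\mathrm{H_2S} + \mathrm{SO_2} &\rightarrow 3\,\mathrm{S} + 2\,\mathrm{H_2O} && \text{(catalytic stage)}\\
2\,\mathrm{H_2S} + \mathrm{O_2} &\rightarrow 2\,\mathrm{S} + 2\,\mathrm{H_2O} && \text{(overall, simplified)}
\end{aligned}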
According to the composition of the crude oil and depending on the demands of the market, refineries can produce different shares of petroleum products. The largest share of oil products is used as "energy carriers", i.e. various grades of fuel oil and gasoline . These fuels include or can be blended to give gasoline, jet fuel , diesel fuel , heating oil , and heavier fuel oils. Heavier (less volatile ) fractions can also be used to produce asphalt , tar , paraffin wax , lubricating and other heavy oils. Refineries also produce other chemicals , some of which are used in chemical processes to produce plastics and other useful materials. Since petroleum often contains a few percent sulfur -containing molecules, elemental sulfur is also often produced as a petroleum product. Carbon , in the form of petroleum coke , and hydrogen may also be produced as petroleum products. The hydrogen produced is often used as an intermediate product for other oil refinery processes such as hydrocracking and hydrodesulfurization . [ 33 ]
Petroleum products are usually grouped into four categories: light distillates (LPG, gasoline, naphtha), middle distillates (kerosene, jet fuel, diesel), heavy distillates, and residuum (heavy fuel oil, lubricating oils, wax, asphalt). These require blending various feedstocks, mixing appropriate additives, providing short-term storage, and preparation for bulk loading to trucks, barges, product ships, and railcars. This classification is based on the way crude oil is distilled and separated into fractions. [ 2 ]
Over 6,000 items are made from petroleum waste by-products, including fertilizer , floor coverings, perfume , insecticide , petroleum jelly , soap , and vitamin capsules. [ 34 ]
The image below is a schematic flow diagram of a typical oil refinery that depicts the various unit processes and the flow of intermediate product streams that occurs between the inlet crude oil feedstock and the final end products. The diagram depicts only one of the literally hundreds of different oil refinery configurations. The diagram also does not include any of the usual refinery facilities providing utilities such as steam, cooling water, and electric power as well as storage tanks for crude oil feedstock and for intermediate products and end products. [ 1 ] [ 56 ] [ 57 ] [ 58 ]
There are many process configurations other than that depicted above. For example, the vacuum distillation unit may also produce fractions that can be refined into end products such as spindle oil used in the textile industry, light machine oil, motor oil, and various waxes.
The crude oil distillation unit (CDU) is the first processing unit in virtually all petroleum refineries. The CDU distills the incoming crude oil into various fractions of different boiling ranges, each of which is then processed further in the other refinery processing units. The CDU is often referred to as the atmospheric distillation unit because it operates at slightly above atmospheric pressure. [ 1 ] [ 2 ] [ 41 ] Below is a schematic flow diagram of a typical crude oil distillation unit. The incoming crude oil is preheated by exchanging heat with some of the hot, distilled fractions and other streams. It is then desalted to remove inorganic salts (primarily sodium chloride).
Following the desalter, the crude oil is further heated by exchanging heat with some of the hot, distilled fractions and other streams. It is then heated in a fuel-fired furnace (fired heater) to a temperature of about 398 °C and routed into the bottom of the distillation unit.
The cooling and condensing of the distillation tower overhead is provided partially by exchanging heat with the incoming crude oil and partially by either an air-cooled or water-cooled condenser. Additional heat is removed from the distillation column by a pumparound system as shown in the diagram below.
As shown in the flow diagram, the overhead distillate fraction from the distillation column is naphtha. The fractions removed from the side of the distillation column at various points between the column top and bottom are called sidecuts . Each of the sidecuts (i.e., the kerosene, light gas oil, and heavy gas oil) is cooled by exchanging heat with the incoming crude oil. All of the fractions (i.e., the overhead naphtha, the sidecuts, and the bottom residue) are sent to intermediate storage tanks before being processed further.
A party searching for a site to construct a refinery or a chemical plant needs to consider the following issues:
Factors affecting site selection for an oil refinery:
Refineries that use a large amount of steam and cooling water need to have an abundant source of water. Oil refineries, therefore, are often located near navigable rivers or on a seashore, near a port. Such locations also give access to transportation by river or by sea. The advantages of transporting crude oil by pipeline are evident, and oil companies often transport a large volume of fuel to distribution terminals by pipeline. A pipeline may not be practical for products with small output, in which case railcars, road tankers, and barges are used.
Petrochemical plants and solvent manufacturing (fine fractionating) plants need spaces for further processing of a large volume of refinery products, or to mix chemical additives with a product at source rather than at blending terminals.
The refining process releases a number of different chemicals into the atmosphere (see AP 42 Compilation of Air Pollutant Emission Factors ) and a notable odor normally accompanies the presence of a refinery. Aside from air pollution impacts there are also wastewater concerns, [ 55 ] risks of industrial accidents such as fire and explosion, and noise health effects due to industrial noise . [ 59 ]
Many governments worldwide have mandated restrictions on contaminants that refineries release, and most refineries have installed the equipment needed to comply with the requirements of the pertinent environmental protection regulatory agencies. In the United States, there is strong pressure to prevent the development of new refineries, and no major refinery has been built in the country since Marathon's Garyville, Louisiana facility in 1976. However, many existing refineries have been expanded during that time. Environmental restrictions and pressure to prevent the construction of new refineries may have also contributed to rising fuel prices in the United States. [ 60 ] Additionally, many refineries (more than 100 since the 1980s) have closed due to obsolescence and/or merger activity within the industry itself. [ 61 ]
Environmental and safety concerns mean that oil refineries are sometimes located some distance away from major urban areas. Nevertheless, there are many instances where refinery operations are close to populated areas and pose health risks. [ 62 ] [ 63 ] In California's Contra Costa County and Solano County , a shoreline necklace of refineries, built in the early 20th century before this area was populated, and associated chemical plants are adjacent to urban areas in Richmond , Martinez , Pacheco , Concord , Pittsburg , Vallejo and Benicia , with occasional accidental events that require " shelter in place " orders to the adjacent populations. A number of refineries are located in Sherwood Park, Alberta , directly adjacent to the City of Edmonton , which has a population of over 1,000,000 residents. [ 64 ]
NIOSH criteria for occupational exposure to refined petroleum solvents have been available since 1977. [ 65 ]
Modern petroleum refining involves a complicated system of interrelated chemical reactions that produce a wide variety of petroleum-based products. [ 66 ] [ 67 ] Many of these reactions require precise temperature and pressure parameters. [ 68 ] The equipment and monitoring required to ensure the proper progression of these processes is complex, and has evolved through the advancement of the scientific field of petroleum engineering . [ 69 ] [ 70 ]
The wide array of high pressure and/or high temperature reactions, along with the necessary chemical additives or extracted contaminants, produces an astonishing number of potential health hazards to the oil refinery worker. [ 71 ] [ 72 ] Through the advancement of technical chemical and petroleum engineering, the vast majority of these processes are automated and enclosed, thus greatly reducing the potential health impact to workers. [ 73 ] However, depending on the specific process in which a worker is engaged, as well as the particular method employed by the refinery in which they work, significant health hazards remain. [ 74 ]
Although occupational injuries in the United States were not routinely tracked and reported at the time, reports of the health impacts of working in an oil refinery can be found as early as the 1800s. For instance, an explosion in a Chicago refinery killed 20 workers in 1890. [ 75 ] Since then, numerous fires, explosions, and other significant events have from time to time drawn the public's attention to the health of oil refinery workers. [ 76 ] Such events continue in the 21st century, with explosions reported in refineries in Wisconsin and Germany in 2018. [ 77 ]
However, there are many less visible hazards that endanger oil refinery workers.
Given the highly automated and technically advanced nature of modern petroleum refineries, nearly all processes are contained within engineering controls and represent a substantially decreased risk of exposure to workers compared to earlier times. [ 73 ] However, certain situations or work tasks may subvert these safety mechanisms, and expose workers to a number of chemical (see table above) or physical (described below) hazards. [ 78 ] [ 79 ] Examples of these scenarios include:
A 2021 systematic review associated working in the petrochemical industry with increased risk of various cancers, such as mesothelioma . It also found reduced risks of other cancers, such as stomach and rectal cancers. The review did note that several of the associations were not due to factors directly related to the petroleum industry, but rather to lifestyle factors such as smoking . Evidence for adverse health effects among nearby residents was also weak, with the evidence primarily centering on neighborhoods in developed countries . [ 82 ]
BTX stands for benzene, toluene and xylene . This is a group of common volatile organic compounds (VOCs) found in the oil refinery environment that serves as a paradigm for a more in-depth discussion of occupational exposure limits, chemical exposure and surveillance among refinery workers. [ 83 ] [ 84 ]
The most important route of exposure for BTX chemicals is inhalation due to the low boiling point of these chemicals. The majority of the gaseous production of BTX occurs during tank cleaning and fuel transfer, which causes offgassing of these chemicals into the air. [ 85 ] Exposure can also occur through ingestion via contaminated water, but this is unlikely in an occupational setting. [ 86 ] Dermal exposure and absorption is also possible, but is again less likely in an occupational setting where appropriate personal protective equipment is in place. [ 86 ]
In the United States, the Occupational Safety and Health Administration (OSHA), National Institute for Occupational Safety and Health (NIOSH), and American Conference of Governmental Industrial Hygienists (ACGIH) have all established occupational exposure limits (OELs) for many of the chemicals above that workers may be exposed to in petroleum refineries. [ 87 ] [ 88 ] [ 89 ]
Benzene, in particular, has multiple biomarkers that can be measured to determine exposure. Benzene itself can be measured in the breath, blood, and urine, and metabolites such as phenol , t , t -muconic acid ( t , t MA) and S-phenylmercapturic acid ( s PMA) can be measured in urine. [ 94 ] In addition to monitoring the exposure levels via these biomarkers, employers are required by OSHA to perform regular blood tests on workers to test for early signs of some of the feared hematologic outcomes, of which the most widely recognized is leukemia. Required testing includes complete blood count with cell differentials and peripheral blood smear "on a regular basis". [ 95 ] The utility of these tests is supported by formal scientific studies. [ 96 ]
Workers are at risk of physical injuries due to the large number of high-powered machines operating in relatively close proximity within an oil refinery. The high pressure required for many of the chemical reactions also presents the possibility of localized system failures resulting in blunt or penetrating trauma from exploding system components. [ 111 ]
Heat is also a hazard. The temperature required for the proper progression of certain reactions in the refining process can reach 1,600 °F (870 °C). [ 73 ] As with chemicals, the operating system is designed to safely contain this hazard without injury to the worker. However, in system failures, this is a potent threat to workers' health. Concerns include both direct injury through a heat illness or injury , as well as the potential for devastating burns should the worker come in contact with super-heated reagents/equipment. [ 73 ]
Noise is another hazard. Refineries can be very loud environments, and have previously been shown to be associated with hearing loss among workers. [ 112 ] The interior environment of an oil refinery can reach levels in excess of 90 dB . [ 113 ] [ 59 ] In the United States, an average of 90 dB is the permissible exposure limit (PEL) for an 8-hour work-day. [ 114 ] Noise exposures that average greater than 85 dB over an 8-hour shift require a hearing conservation program to regularly evaluate workers' hearing and to promote its protection. [ 115 ] Regular evaluation of workers' auditory capacity and faithful use of properly vetted hearing protection are essential parts of such programs. [ 116 ]
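For context, the OSHA limits cited above follow a 5 dB exchange rate: each 5 dB above the 90 dB criterion halves the allowable exposure time (29 CFR 1910.95). A minimal sketch of that relationship, with illustrative sound levels:

# OSHA allowable exposure time at a given A-weighted sound level (5 dB exchange rate)
def allowed_hours(level_dba):
    # T = 8 / 2**((L - 90) / 5); at the 90 dBA PEL this gives a full 8-hour shift
    return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

for level in (85, 90, 95, 100):
    print(level, "dBA ->", allowed_hours(level), "hours")   # 16.0, 8.0, 4.0, 2.0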
While not specific to the industry, oil refinery workers may also be at risk for hazards such as vehicle-related accidents , machinery-associated injuries, work in a confined space, explosions/fires, ergonomic hazards , shift-work related sleep disorders , and falls. [ 117 ]
The theory of hierarchy of controls can be applied to petroleum refineries and their efforts to ensure worker safety.
Elimination and substitution are unlikely in petroleum refineries, as many of the raw materials, waste products, and finished products are hazardous in one form or another (e.g. flammable, carcinogenic). [ 97 ] [ 118 ]
Examples of engineering controls include a fire detection/extinguishing system , pressure/chemical sensors to detect/predict loss of structural integrity, [ 119 ] and adequate maintenance of piping to prevent hydrocarbon-induced corrosion (leading to structural failure). [ 80 ] [ 81 ] [ 120 ] [ 121 ] Other examples employed in petroleum refineries include the post-construction protection of steel components with vermiculite to improve heat/fire resistance. [ 122 ] Compartmentalization can help to prevent a fire or other systems failure from spreading to affect other areas of the structure, and may help prevent dangerous reactions by keeping different chemicals separate from one another until they can be safely combined in the proper environment. [ 119 ]
Administrative controls include careful planning and oversight of the refinery cleaning, maintenance, and turnaround processes. These occur when many of the engineering controls are shut down or suppressed and may be especially dangerous to workers. Detailed coordination is necessary to ensure that maintenance of one part of the facility will not cause dangerous exposures to those performing the maintenance, or to workers in other areas of the plant. Due to the highly flammable nature of many of the involved chemicals, smoking areas are tightly controlled and carefully placed. [ 78 ]
Personal protective equipment (PPE) may be necessary depending on the specific chemical being processed or produced. Particular care is needed during sampling of the partially-completed product, tank cleaning, and other high-risk tasks as mentioned above. Such activities may require the use of impervious outerwear, acid hood, disposable coveralls, etc. [ 78 ] More generally, all personnel in operating areas should use appropriate hearing and vision protection , avoid clothes made of flammable material ( nylon , Dacron , acrylic , or blends), and wear full-length pants and sleeves. [ 78 ]
Worker health and safety in oil refineries is closely monitored at a national level by both the Occupational Safety and Health Administration (OSHA) and National Institute for Occupational Safety and Health (NIOSH). [ 123 ] [ 124 ] In addition to federal monitoring, California 's CalOSHA has been particularly active in protecting worker health in the industry, and adopted a policy in 2017 that requires petroleum refineries to perform a "Hierarchy of Hazard Controls Analysis" (see above "Hazard controls" section) for each process safety hazard. [ 125 ] Safety regulations have resulted in a below-average injury rate for refining industry workers. In a 2018 report by the US Bureau of Labor Statistics , they indicate that petroleum refinery workers have a significantly lower rate of occupational injury (0.4 OSHA-recordable cases per 100 full-time workers) than all industries (3.1 cases), oil and gas extraction (0.8 cases), and petroleum manufacturing in general (1.3 cases). [ 126 ]
Below is a list of the most common regulations referenced in petroleum refinery safety citations issued by OSHA: [ 127 ]
Corrosion of metallic components is a major factor of inefficiency in the refining process. Because it leads to equipment failure, it is a primary driver for the refinery maintenance schedule. Corrosion-related direct costs in the U.S. petroleum industry as of 1996 were estimated at US$3.7 billion. [ 121 ] [ 128 ]
Corrosion occurs in various forms in the refining process, such as pitting corrosion from water droplets, embrittlement from hydrogen, and stress corrosion cracking from sulfide attack. [ 129 ] From a materials standpoint, carbon steel is used for upwards of 80 percent of refinery components, which is beneficial due to its low cost. Carbon steel is resistant to the most common forms of corrosion, particularly from hydrocarbon impurities at temperatures below 205 °C, but other corrosive chemicals and environments prevent its use everywhere. Common replacement materials are low alloy steels containing chromium and molybdenum , with stainless steels containing more chromium dealing with more corrosive environments. More expensive materials commonly used are nickel , titanium , and copper alloys. These are primarily saved for the most problematic areas where extremely high temperatures and/or very corrosive chemicals are present. [ 130 ]
Corrosion is fought by a complex system of monitoring, preventative repairs, and careful use of materials. Monitoring methods include both offline checks taken during maintenance and online monitoring. Offline checks measure corrosion after it has occurred, telling the engineer when equipment must be replaced based on the historical information they have collected. This is referred to as preventative management.
Online systems are a more modern development and are revolutionizing the way corrosion is approached. There are several types of online corrosion monitoring technologies, such as linear polarization resistance, electrochemical noise and electrical resistance. Online monitoring has generally had slow reporting rates in the past (minutes or hours) and been limited by process conditions and sources of error, but newer technologies can report rates up to twice per minute with much higher accuracy (referred to as real-time monitoring). This allows process engineers to treat corrosion as another process variable that can be optimized in the system. Immediate responses to process changes allow the control of corrosion mechanisms, so they can be minimized while also maximizing production output. [ 120 ] In an ideal situation, accurate and real-time on-line corrosion information allows conditions that cause high corrosion rates to be identified and reduced. This is known as predictive management.
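The idea of treating corrosion as just another process variable can be illustrated with a minimal real-time monitoring loop; the sampling interval, smoothing factor, alarm threshold and readings below are assumptions for illustration, not any particular vendor's system:

# Minimal sketch: smooth real-time corrosion-rate readings and flag excursions
# (hypothetical threshold and readings; a real system would hand alarms to the
# plant's process control layer so operating conditions can be adjusted)
def corrosion_alerts(readings_mpy, threshold_mpy=10.0, alpha=0.5):
    smoothed = None
    for i, rate in enumerate(readings_mpy):               # e.g. one reading every 30 s
        smoothed = rate if smoothed is None else alpha * rate + (1 - alpha) * smoothed
        if smoothed > threshold_mpy:
            yield i, smoothed

readings = [2.1, 2.3, 2.2, 12.5, 18.8, 19.2, 3.0]          # corrosion rate in mils/year, illustrative
for i, value in corrosion_alerts(readings):
    print(f"sample {i}: smoothed corrosion rate {value:.1f} mpy exceeds threshold")
# Flags samples 4 and 5, where the smoothed rate climbs above 10 mpy.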
Materials methods include selecting the proper material for the application. In areas of minimal corrosion, cheap materials are preferable, but when bad corrosion can occur, more expensive but longer-lasting materials should be used. Other materials methods come in the form of protective barriers between corrosive substances and the equipment metals. These can be either a lining of refractory material such as standard Portland cement or other special acid-resistant cement that is shot onto the inner surface of the vessel. Also available are thin overlays of more expensive metals that protect cheaper metal against corrosion without requiring much material. [ 131 ] | https://en.wikipedia.org/wiki/Oil_refinery |
Oil regeneration is the extraction of contaminants from oil in order to restore its original properties so that it can be used on a par with fresh oil. [ 1 ]
Aging is a result of physical and chemical processes that change oil during storage and use in machines and mechanisms.
The main cause of aging is exposure to high temperatures and contact with air, which leads to oxidation, decomposition, polymerization and condensation of hydrocarbons. Another cause of aging is contamination with metal particles, water and dust . Their accumulation leads to a buildup of slurries, resinous and asphaltic compounds, coke, soot, and various salts and acids in the oil. [ 2 ] Oil in which the aging process has occurred cannot fully perform its functions. Therefore, it is either replaced with new oil or regenerated.
Physical methods of regeneration do not change the chemical properties of oil. They remove only mechanical impurities (metal particles, sand, dust, as well as tar, asphalt and coke-like substances, water).
Regeneration by physical methods include:
Physicochemical methods are based on the use of coagulants and adsorbents . [ citation needed ] Coagulants promote the coarsening and precipitation of fine-dispersed asphalt-resinous substances in oil. Adsorbents selectively absorb organic and inorganic compounds. These methods remove asphalt and resinous compounds, as well as emulsified and dissolved water, from oil. Adsorptive treatment with bleaching clays neutralizes free acid in acid-treated oil and removes unstable oxidized and sulphurized products as well as traces of sulphonic acid. In addition, clay treatment leads to higher resistance to oxidation at high temperatures and increased colour stability. This process is used in clay polishing plants for waste oil re-refining and in transformer oil regeneration systems for the reclamation of old transformer oil to as-new condition. [ 6 ]
Chemical methods of regeneration remove asphalt, silicic, acidic, some hetero-organic compounds and water from oils. These methods are based on the interaction of contaminating substances in oil with special reagents introduced into them. [ citation needed ] The compounds formed as a result of these chemical reactions are then easily removed from oil. Chemical methods include acid and alkaline refining, drying with calcium sulphate or reduction with metal hydrides.
In practice, it is difficult to achieve complete regeneration of oil using only one method. Therefore, a combination of different approaches is often used. [ 7 ] | https://en.wikipedia.org/wiki/Oil_regeneration
An oil rig is any kind of apparatus constructed for oil drilling .
Kinds of oil rig include: | https://en.wikipedia.org/wiki/Oil_rig |
An oil skimmer is a device that is designed to remove oil floating on a liquid surface. They are commonly used to recover oil from oil spills in water, or in industrial situations where water is contaminated with oil. Oil skimmers are designed to remove free floating oil and are not water treatment devices.
The effectiveness of a skimmer deployed in open water or oil spill recovery is highly dependent on the roughness of the surrounding water that it is working on; the choppier the surrounding wake and water, the more water the oil skimmer will take in along with the oil, rather than take in oil alone. Oil spill skimmers can be self-propelled, used from shore, or operated from vessels, with the best choice being dependent on the specifics for the job at hand. [ 1 ]
Oil skimmers were used to great effect to assist in the remediation of the Exxon Valdez oil spill in 1989.
Oil skimmers are also used in a large number of applications other than oil spills. Examples include oil removal from vehicle wash water and at fuel storage sites and workshops. Industries that make extensive use of oil skimmers include the manufacturing, mining, oil and gas, refining, petrochemical, solvent extraction and food industries. Selecting the correct type to use depends on the nature of the intended application and the nature of the oil and water. Oil skimmers are frequently one component of oily water treatment systems.
Oil skimmers are different from swimming pool sanitation skimmers, which are designed for a similar but unrelated purpose.
There are many different types of oil skimmer. Each type has different design features and therefore results in different applications and use. It is important to understand the design features and fluid properties before employing a particular skimmer type.
Some factors to consider are:
The use of skimmers in industrial applications is often required to remove oils, grease and fats prior to further treatment for environmental discharge compliance. By removing the top layer of oils, water stagnation, smell and unsightly surface scum can be reduced. Placed before an oily water treatment system an oil skimmer may give greater overall oil separation efficiency for improved discharge wastewater quality. All oil skimmers will pick up a percentage of water with the oil which will need to be decanted to obtain concentrated oil.
There are three types of oil skimmers: [ 2 ] weir, oleophilic and suction. Oleophilic skimmers include disc, belt, tube, brush, mop, grooved disc, smooth drum, and grooved drum designs. The material chosen for an oleophilic skimmer affects the collection rate based on the material's affinity for the particular type of oil that is skimmed.
Oleophilic skimmers function by using an element such as a drum, disc, belt, rope or mop to which the oil adheres as the element is moved through the oil/water surface. Adhesion is the tendency of dissimilar particles or surfaces to cling to one another (cohesion refers to the tendency of similar or identical particles/surfaces to cling to one another). Any adhering oil is then wiped or scraped from the oleophilic surface and collected in a tank. As the oil adheres to the collection surface, the amount of water collected is limited. When there is no oil left, some water will be collected; the amount depends on the material properties of the oleophilic element and its affinity to water. Oleophilic skimmers can remove many kinds of oil, including machine oil, kerosene, diesel oil, lubricating oil, plant oil and other liquids with a specific gravity less than that of water.
Small oleophilic oil skimmers can be reliable and economical. Larger oleophilic skimmers require larger drive motors with moving mechanical parts and need maintenance. Oleophilic skimmers are not affected by the thickness of the oil layer.
Recovery rates are lower than for other types of skimmer. They depend on the surface area of the oleophilic material, the surface speed and the material's affinity to oil, as well as other factors such as temperature, the specific oil's makeup, debris in the water and other chemicals that may be present. Surfactants such as detergents, cleaners and caustics, as well as fine suspended solids, can impair the ability of oil to adhere to the oleophilic material. Simple tests are available to determine the impairment caused by these chemicals.
Belt oil skimmers use a continuous loop belt that enters and exits the oil/water surface. As the belt exits the liquid surface, oil clings to both sides. Wiper blades remove the oil from the rotating belt, depositing it into a collection trough from which it is moved to a storage location either by gravity or by a pumping system. Belts are generally wide, thin and flexible.
Drum skimmers operate by using one or more drums made from oleophilic material. As the drums rotate oil adheres to the surface, separating it from the water. Wiper blades remove the oil from the drums depositing it into the collection trough where it is pumped to a storage location. Drum skimmers are lightweight and may have a high oil recovery rate depending on the size and number of drums used. The drums can be either smooth or grooved. These types of skimmers are generally used in oil spill response and various industrial operations.
Disc skimmers are oleophilic skimmers that use a disc constructed from PVC, steel or aluminum, which can be either smooth or grooved. They are capable of recovering high volumes of oil with little water. They can be equipped with either a single disc or multiple discs, driven by hydraulic, electric, diesel or air motors. DISCOIL technology, patented by OCS in 1970, is claimed to recover most of the hydrocarbons on the water surface, collecting about 98% oil with only 2% water. It is hydraulically driven and can also operate in classified hazardous areas such as ATEX Zones 0, 1 and 2.
Non-oleophilic skimmers are distinguished by the component used to collect the oil. A metal disc, belt or drum is used in applications where a polymeric material is inappropriate, such as in a hot alkaline aqueous parts washer. [ 3 ] The skimmer is generally turned off whenever there is no oil to skim, thus minimizing the amount of water collected. Metal skimming elements are nearly as efficient as oleophilic skimmers when oil is present.
Weir skimmers function by allowing the oil floating on the surface of the water to flow over a weir . There are two main types of weir skimmer, those that require the weir height to be manually adjusted and those where the weir height is automatic or self-adjusting. Whilst manually adjusted weir skimmer types can have a lower initial cost, the requirement for regular manual adjustment makes self-adjusting weir types more popular in most applications.
Weir skimmers will collect some water if operated when oil is no longer present. To overcome this limitation most weir-type skimmers contain an automatic water drain on the oil collection tank. Large debris (20 mm and larger) must be prevented from entering a weir skimmer. This is usually done with simple screens added to the skimmer or, in the case of pit operation, by screening debris at the entrance to collection pits.
Weir skimmers can remove oil at a greater rate than other types of skimmer. Oil removal rates of over 25 cubic metres per hour (6,600 US gal/h) are available. [ citation needed ] They can also pull in oil from a greater radius on a surface than other skimmers. This makes weir skimmers popular if high oil recovery rates and large coverage areas are required.
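As a quick sanity check on the unit conversion quoted above (only standard conversion factors are used; no skimmer-specific data is assumed):

# 25 cubic metres per hour expressed in US gallons per hour
m3_per_hour = 25
litres_per_hour = m3_per_hour * 1000               # 1 m^3 = 1,000 L
us_gal_per_hour = litres_per_hour / 3.785411784    # 1 US gallon = 3.785411784 L
print(round(us_gal_per_hour))                      # ~6,604, consistent with the ~6,600 US gal/h figure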
Weir type skimmers do not rely on oil adherence or coalescence and therefore are not affected by detergents, chemicals and other surfactants in the water. They are not affected by fine suspended solids in the water which can impede adherence and therefore the operation of other types of skimmer. | https://en.wikipedia.org/wiki/Oil_skimmer |
Oil sludge or black sludge is a gel -like or semi-solid deposit inside an internal combustion engine , that can create a catastrophic buildup. It is often the result of contaminated engine oil and occurs when moisture and/or high heat is introduced to engine oil.
Oil sludge may occur due to a variety of different factors. Some of the most common causes are:
Oil sludge is generally preventable through frequent oil changes at manufacturer-specified intervals; however, although it is uncommon, some engines do have a tendency to build up more sludge than others. | https://en.wikipedia.org/wiki/Oil_sludge
An oil spill is the release of a liquid petroleum hydrocarbon into the environment, especially the marine ecosystem , due to human activity, and is a form of pollution . The term is usually given to marine oil spills, where oil is released into the ocean or coastal waters , but spills may also occur on land. Oil spills can result from the release of crude oil from tankers , offshore platforms , drilling rigs , and wells . They may also involve spills of refined petroleum products , such as gasoline and diesel fuel , as well as their by-products. Additionally, heavier fuels used by large ships, such as bunker fuel , or spills of any oily refuse or waste oil , contribute to such incidents. These spills can have severe environmental and economic consequences.
Oil spills penetrate into the structure of the plumage of birds and the fur of mammals, reducing its insulating ability, and making them more vulnerable to temperature fluctuations and much less buoyant in the water. Cleanup and recovery from an oil spill is difficult and depends upon many factors, including the type of oil spilled, the temperature of the water (affecting evaporation and biodegradation), and the types of shorelines and beaches involved. [ 1 ] Spills may take weeks, months or even years to clean up. [ 2 ]
Oil spills can have disastrous consequences for society; economically, environmentally, and socially. As a result, oil spill accidents have initiated intense media attention and political uproar, bringing many together in a political struggle concerning government response to oil spills and what actions can best prevent them from happening. [ 3 ]
An oil spill creates an immediate risk of negative effects on human health, including respiratory and reproductive problems as well as liver and immune-system damage. Oil spills also affect the everyday lives of humans through secondary consequences such as increased fire hazards and the potential closure of beaches, parks, and fisheries. The Kuwaiti oil fires produced air pollution that caused respiratory distress. [ 4 ] The Deepwater Horizon explosion killed eleven oil rig workers. [ 5 ] The fire resulting from the Lac-Mégantic derailment killed 47 and destroyed half of the town's centre. [ 6 ]
Spilled oil can also contaminate drinking water supplies. For example, in 2013 two different oil spills contaminated water supplies for 300,000 people in Miri , Malaysia , [ 7 ] and for 80,000 people in Coca, Ecuador . [ 8 ] In 2000, springs were contaminated by an oil spill in Clark County, Kentucky . [ 9 ]
Contamination can have an economic impact on tourism and marine resource extraction industries. For example, the Deepwater Horizon oil spill impacted beach tourism and fishing along the Gulf Coast , and the responsible parties were required to compensate economic victims.
The threat posed to birds, fish, shellfish and crustaceans from spilled oil was known in England in the 1920s, largely through observations made in Yorkshire . [ 10 ] The subject was also explored in a scientific paper produced by the National Academy of Sciences in the US in 1974 which considered impacts to fish, crustaceans and molluscs. The paper was limited to 100 copies and was described as a draft document, not to be cited. [ 11 ]
In general, spilled oil can affect animals and plants in two ways: directly from the oil itself and from the response or cleanup process. [ 12 ] [ 13 ] [ 14 ] [ 15 ] Oil penetrates into the structure of the plumage of birds and the fur of mammals, reducing their insulating ability, and making them more vulnerable to temperature fluctuations and much less buoyant in the water.
Animals that rely on scent to find their young or mothers cannot do so because the strong scent of the oil masks it. This can cause the young to be rejected and abandoned, leaving them to starve and eventually die. Oil can impair a bird's ability to fly, preventing it from foraging or escaping from predators. As they preen , birds may ingest the oil coating their feathers, irritating the digestive tract , altering liver function, and causing kidney damage. Together with their diminished foraging capacity, this can rapidly result in dehydration and metabolic imbalance. Some birds exposed to petroleum also experience changes in their hormonal balance, including changes in their luteinizing protein. [ 16 ] The majority of birds affected by oil spills die from complications without human intervention. [ 17 ] [ 18 ] Some studies have suggested that less than one percent of oil-soaked birds survive, even after cleaning, [ 19 ] although the survival rate can also exceed ninety percent, as in the case of the MV Treasure oil spill . [ 20 ] Oil spills and oil dumping events have been impacting sea birds since at least the 1920s [ 21 ] [ 22 ] [ 23 ] and were understood to be a global problem in the 1930s. [ 24 ]
Heavily furred marine mammals exposed to oil spills are affected in similar ways. Oil coats the fur of sea otters and seals , reducing its insulating effect, and leading to fluctuations in body temperature and hypothermia . Oil can also blind an animal, leaving it defenseless. The ingestion of oil causes dehydration and impairs the digestive process. Animals can be poisoned, and may die from oil entering the lungs or liver.
In addition, oil spills can also harm air quality. [ 25 ] The chemicals in crude oil are mostly hydrocarbons and include toxic compounds such as benzene , toluene , polyaromatic hydrocarbons and oxygenated polycyclic aromatic hydrocarbons . These chemicals can cause adverse health effects when inhaled. In addition, once they evaporate into the atmosphere, they can be oxidized by atmospheric oxidants to form fine particulate matter. [ 26 ] These particulates can penetrate the lungs and carry toxic chemicals into the human body.
Burning surface oil can also be a source of pollution, such as soot particles. The cleanup and recovery process also generates air pollutants, such as nitrogen oxides and ozone from ships. Lastly, bubble bursting can be another generation pathway for particulate matter during an oil spill. [ 27 ] During the Deepwater Horizon oil spill , significant air quality issues were found on the Gulf Coast, which lies downwind of the spill site. Air quality monitoring data showed that criteria pollutants had exceeded health-based standards in the coastal regions. [ 28 ]
The majority of spilled oil remains in the environment, and the consequences of a spill differ depending on whether it occurs in the ocean, on tundra, or in a wetland. Wetlands are considered one of the most sensitive habitats to oil spills and the most difficult to clean. [ 29 ]
Oil spills can be caused by human error, natural disasters, technical failures or deliberate releases. [ 30 ] [ 31 ] It is estimated that 30–50% of all oil spills are directly or indirectly caused by human error, with approximately 20–40% of oil spills attributed to equipment failure or malfunction. [ 32 ] Causes of oil spills are further distinguished between deliberate releases, such as operational discharges or acts of war, and accidental releases. Accidental oil spills are the focus of most of the literature, although some of the largest oil spills ever recorded, the Gulf War Oil Spill (sea based) and Kuwaiti Oil Fires (land based), were deliberate acts of war. [ 33 ] The academic study of the sources and causes of oil spills identifies vulnerable points in oil transportation infrastructure and calculates the likelihood of oil spills happening. This can then guide prevention efforts and regulation policies. [ 34 ]
Around 40–50% of all oil released into the oceans stems from natural seeps from seafloor rocks. This corresponds to approximately 600,000 tons annually on a global level. While natural seeps are the single largest source of oil spills, they are considered less problematic because ecosystems have adapted to such regular releases. For instance, on sites of natural oil seeps, ocean bacteria have evolved to digest oil molecules. [ 35 ] [ 36 ] [ 33 ]
Vessels can be the source of oil spills either through operational releases of oil or in the case of oil tanker accidents. As of 2007, operational discharges from vessels were estimated to account for 21% of oil releases from vessels. [ 36 ] They occur as a consequence of failure to comply with regulations or of arbitrary discharges of waste oil and water containing oil residues. [ 37 ] Such operational discharges are regulated through the MARPOL convention . [ 38 ] Operational releases are frequent but small in the amount of oil spilled per release, and they often receive little attention in discussions of oil spills. [ 36 ] There has been a steady decline in operational discharges of oil, including a decrease of around 50% since the 1990s. [ 33 ]
As of 2007, accidental oil tank vessel spills accounted for approximately 8–13% of all oil spilled into the oceans. [ 36 ] [ 39 ] The main causes of oil tank vessel spills were collision (29%), grounding (22%), mishandling (14%) and sinking (12%), among others. [ 36 ] [ 40 ] Oil tanker spills are considered a major ecological threat due to the large amount of oil spilled per accident and the fact that major sea traffic routes are close to Large Marine Ecosystems . [ 36 ] Around 90% of the world's oil transportation is through oil tankers, and the absolute amount of seaborne oil trade is steadily increasing. [ 39 ] However, there has been a reduction of the number of spills from oil tankers and of the amount of oil released per oil tanker spill. [ 39 ] [ 33 ] In 1992, MARPOL was amended and made it mandatory for large tankers (5,000 dwt and more) to be fitted with double hulls . [ 41 ] This is considered to be a major reason for the reduction of oil tanker spills, alongside other innovations such as GPS , sectioning of vessels and sea lanes in narrow straits. [ 33 ] [ 36 ]
In 2023, the International Tanker Owners Pollution Federation (ITOPF) documented a significant oil spill incident of over 700 tonnes and nine medium spills ranging between 7 and 700 tonnes. The major spill occurred in Asia involving heavy fuel oil, and the medium spills were scattered across Asia, Africa, Europe, and America, involving various oil types. [ 42 ]
The total volume of oil released from these spills in 2023 was approximately 2,000 tonnes. This contributes to a trend of decreased oil spill volumes and frequencies over the decades. Comparatively, the 1970s averaged 79 significant spills per year, which drastically reduced to an average of about 6.3 per year in the 2010s, and has maintained a similar level in the current decade. [ 42 ]
The reduction in oil spill volume has also been substantial over the years. For instance, the 1990s recorded 1,134,000 tonnes lost, mainly from 10 major spills. This figure decreased to 196,000 tonnes in the 2000s and 164,000 tonnes in the 2010s. In the early 2020s, approximately 28,000 tonnes have been lost, predominantly from major incidents. [ 42 ]
Accidental spills from oil platforms nowadays account for approximately 3% of oil spills in the oceans. [ 36 ] Prominent offshore oil platform spills typically occurred as a result of a blowout . They can go on for months until relief wells have been drilled, resulting in enormous amounts of oil leaked. [ 33 ] Notable examples of such oil spills are Deepwater Horizon and Ixtoc I . While technologies for drilling in deep water have significantly improved in the past 30–40 years, oil companies are moving to drilling sites in ever more difficult places. These opposing developments mean there is no clear trend in the frequency of offshore oil platform spills. [ 33 ]
As of 2010, pipeline oil spills had increased substantially over the preceding four decades. [ 33 ] Prominent examples include oil spills from pipelines in the Niger Delta . Pipeline oil spills can be caused by trawling of fishing boats, natural disasters, pipe corrosion, construction defects, sabotage, or an attack, [ 37 ] as with the Caño Limón-Coveñas pipeline in Colombia.
Pipelines as sources of oil spills are estimated to contribute 1% of oil pollution to the oceans. [ 36 ] Reasons for this low figure include underreporting and the fact that many oil pipeline leaks occur on land, with only a fraction of that oil reaching the oceans.
Recreational boats can spill oil into the ocean because of operational or human error and unpreparedness. The amounts are however small, and such oil spills are hard to track due to underreporting. [ 35 ]
Oil can reach the oceans as oil and fuel from land-based sources. [ 32 ] It is estimated that runoff oil and oil from rivers are responsible for 11% of oil pollution to the oceans. [ 36 ] Such pollution can also be oil on roads from land vehicles, which is then flushed into the oceans during rainstorms. [ 35 ] Purely land-based oil spills are different from maritime oil spills in that oil on land does not spread as quickly as in water, and effects thus remain local. [ 32 ]
Cleanup and recovery from an oil spill is difficult and depends upon many factors, including the type of oil spilled, the temperature of the water (affecting evaporation and biodegradation), and the types of shorelines and beaches involved. [ 1 ] Physical cleanups of oil spills are also very expensive. Until the 1960s, the best method for remediation consisted of putting straw on the spill and retrieving the oil-soaked straw manually. [ 43 ] Chemical remediation is the norm as of the early 21st century, using compounds that can herd and thicken oil for physical recovery, disperse oil in the water, or facilitate burning the oil off. [ 43 ] The future of oil cleanup technology likely lies in the use of microorganisms such as Fusobacteriota (formerly Fusobacteria), which show potential for oil spill cleanup because of their ability to colonize and degrade oil slicks on the sea surface. [ 43 ] [ 44 ]
There are three kinds of oil-consuming bacteria. Sulfate-reducing bacteria (SRB) and acid-producing bacteria are anaerobic , while general aerobic bacteria (GAB) are aerobic . These bacteria occur naturally and will act to remove oil from an ecosystem, and their biomass will tend to replace other populations in the food chain. The chemicals from the oil which dissolve in water, and hence are available to bacteria, are those in the water-associated fraction of the oil. Oil Spill Eater II (OSE II) is a widely used first-response product that has been applied in over 89,000 cleanups and is claimed to permanently remove up to 99% of the oil from large spills of over 120,000 gallons. OSE II was used on the Exxon Valdez spill, as well as the BP Macondo spill in the Gulf of Mexico, and it has been used globally since 1989. It is also claimed not to have the limitations of other oil spill products and processes.
Methods for cleaning up include: [ 45 ]
Equipment used includes: [ 51 ]
Spill response procedures should include elements such as:
Environmental Sensitivity Indexes (ESI) are tools used to create Environmental Sensitivity Maps (ESM). ESMs are pre-planning tools used to identify sensitive areas and resources prior to an oil spill event, in order to set priorities for protection and plan clean-up strategies. [ 68 ] [ 69 ] The ESI is to date the most commonly used mapping tool for plotting sensitive areas. [ 70 ] It has three components: a shoreline type ranking system, a biological resources section, and a human-use resource category. [ 71 ]
ESI is the most frequently used sensitivity mapping tool to date. It was first applied in 1979 in response to an oil spill near Texas in the Gulf of Mexico. [ 70 ] At that time, ESI maps were prepared only days in advance of responders' arrival at an oil spill location. ESMs used to be atlases consisting of thousands of pages that could only be used for spills in the oceans. Over the past three decades, this product has been transformed into a versatile online tool. This conversion has made sensitivity indexing more adaptable, and in 1995 the US National Oceanic and Atmospheric Administration (NOAA) extended the tool so that ESI maps could cover lake, river, and estuarine shoreline types. [ 71 ] ESI maps have since become integral to collecting, synthesizing, and producing data that had previously never been accessible in digital formats. Especially in the United States, the tool has enabled significant advances in developing tidal bay protection strategies, collecting seasonal information and, more generally, in the modelling of sensitive areas. [ 70 ] ESI is integrated with Geographic Information System (GIS) mapping techniques to geographically reference the three different types of resources. [ 72 ]
The ESI depicts environmental stability, coastal resilience to maritime catastrophes, and the stress-response relationships among maritime resources. [ 73 ] Created to support ecological decision-making, ESMs can accurately identify sensitive areas and habitats and inform clean-up responses, response measures, and monitoring strategies for oil spills. [ 74 ] The maps allow experts from varying fields to come together and work efficiently during fast-paced response operations.
The process of making an ESI atlas involves GIS technology. The first step is to zone the area that is to be mapped; the second is a meeting with local and regional experts on the area and its resources. [ 75 ] Next, all the shoreline types and biological and human-use resources are identified and their locations pinpointed. Once all this information is gathered, it is digitized. In the digital format, classifications are set in place, tables are produced, and local experts refine the product before it is released.
The most common current use of ESI is in contingency planning. After the maps are calculated and produced, the most sensitive areas are picked out and authenticated. These areas then go through a scrutiny process in which methods of protection and resource assessments are obtained. [ 75 ] This in-depth research is then fed back into the ESMs to improve their accuracy and to allow tactical information to be stored in them as well. The finished maps are then used in drills and training exercises to improve clean-up efficiency. [ 75 ] Such exercises also often help to update the maps and correct flaws that might have arisen in the previous steps.
Shoreline type is classified by rank depending on how easy the target site would be to clean up, how long the oil would persist, and how sensitive the shoreline is. [ 76 ] The ranking system works on a 10-point scale where the higher the rank, the more sensitive a habitat or shore is. The coding system usually works in colour, where warm colours are used for the more sensitive types and cooler colours are used for robust shores. [ 75 ] For each navigable body of water, there is a feature classifying its sensitivity to oil. Shoreline type mapping codes a large range of ecological settings including estuarine , lacustrine , and riverine environments. [ 70 ] Floating oil slicks put the shoreline at particular risk when they eventually come ashore, covering the substrate with oil. The differing substrates between shoreline types vary in their response to oiling, and influence the type of cleanup that will be required to effectively decontaminate the shoreline. Hence ESI shoreline ranking helps committees identify which clean-up techniques are approved and which are detrimental to the natural environment. The exposure the shoreline has to wave energy and tides, substrate type, and slope of the shoreline are also taken into account, in addition to biological productivity and sensitivity. [ 77 ] Mangroves and marshes tend to have higher ESI rankings due to the potentially long-lasting and damaging effects of both oil contamination and cleanup actions. Impermeable and exposed surfaces with high wave action are ranked lower because the reflecting waves keep oil from coming onshore and natural processes remove the oil quickly.
Within the biological resources, the ESI maps protected areas as well as those of biodiversity importance. These are usually identified through the UNEP-WCMC Integrated Biodiversity Assessment Tool. There are varying types of coastal habitats and ecosystems, and thus also many endangered species, that need to be considered when looking at affected areas after oil spills. The habitats of plants and animals that may be at risk from oil spills are referred to as "elements" and are divided by functional group. Further classification divides each element into species groups with similar life histories and behaviors relative to their vulnerability to oil spills. There are eight element groups: birds, reptiles, amphibians, fish, invertebrates, habitats and plants, wetlands, and marine mammals and terrestrial mammals. Element groups are further divided into sub-groups; for example, the 'marine mammals' element group is divided into dolphins , manatees, pinnipeds (seals, sea lions & walruses), polar bears , sea otters and whales . [ 71 ] [ 77 ] When ranking and selecting species, their vulnerability to the oil spills themselves must be considered. This includes not only their reactions to such events but also their fragility, the scale of large clusters of animals, whether special life stages occur ashore, and whether any present species is threatened, endangered or rare. [ 78 ] Biological resources are mapped using symbols representing the species, with polygons and lines marking out the spatial extent of the species. [ 79 ] The symbols can also identify the most vulnerable of a species' life stages, such as molting , nesting, hatching or migration periods. This allows for more accurate response plans during those given periods. There is also a division for sub-tidal habitats, which are equally important to coastal biodiversity, including kelp, coral reefs and sea beds, which are not commonly mapped within the shoreline ESI type. [ 79 ]
Human-use resources are also often referred to as socio-economic features; these are inanimate resources that have the potential to be directly impacted by oil pollution. The human-use resources mapped within the ESI are those for which an oil spill would have socio-economic repercussions. These resources are divided into four major classifications: sites of archaeological importance or cultural resources, high-use recreational areas or shoreline access points, important protected management areas, and resource origins. [ 71 ] [ 78 ] Some examples include airports, diving sites, popular beach sites, marinas, hotels, factories, natural reserves and marine sanctuaries. When mapped, the human-use resources that need protecting must be certified by a local or regional policy maker. [ 75 ] These resources are often highly vulnerable to seasonal changes due to, for example, fishing and tourism. For this category there is also a set of symbols available to demonstrate their importance on ESMs.
By observing the thickness of the film of oil and its appearance on the surface of the water, it is possible to estimate the quantity of oil spilled. If the surface area of the spill is also known, the total volume of the oil can be calculated. [ 80 ]
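As a rough illustration of this estimate (not taken from the cited source), the calculation is simply the slick area multiplied by the assumed average film thickness; all figures below are hypothetical.

    # Illustrative only: estimating spilled oil volume from slick area and
    # an assumed average film thickness.
    def spill_volume_m3(area_m2: float, thickness_m: float) -> float:
        # Volume of oil = surface area of the slick x average film thickness.
        return area_m2 * thickness_m

    area = 2_000_000.0   # a 2 square-kilometre slick, in m^2
    thickness = 1e-6     # assumed average film thickness of 1 micrometre, in m
    volume = spill_volume_m3(area, thickness)   # 2.0 m^3, roughly 2,000 litres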
Oil spill model systems are used by industry and government to assist in planning and emergency decision making. Of critical importance for the skill of the oil spill model prediction is the adequate description of the wind and current fields. There is a worldwide oil spill modelling (WOSM) program. [ 81 ] Tracking the scope of an oil spill may also involve verifying that hydrocarbons collected during an ongoing spill are derived from the active spill or some other source. This can involve sophisticated analytical chemistry focused on fingerprinting an oil source based on the complex mixture of substances present. Largely, these will be various hydrocarbons, among the most useful being polyaromatic hydrocarbons . In addition, both oxygen- and nitrogen-containing heterocyclic compounds, such as parent and alkyl homologues of carbazole , quinoline , and pyridine , are present in many crude oils. As a result, these compounds have great potential to supplement the existing suite of hydrocarbon targets to fine-tune source tracking of petroleum spills. Such analysis can also be used to follow the weathering and degradation of crude spills. [ 82 ]
Crude oil and refined fuel spills from tanker ship accidents have damaged vulnerable ecosystems in Alaska , the Gulf of Mexico , the Galapagos Islands , France , the Sundarbans and many other places. The quantity of oil spilled during accidents has ranged from a few hundred tons to several hundred thousand tons (e.g., Deepwater Horizon oil spill , Atlantic Empress , Amoco Cadiz ), [ 83 ] but volume is a limited measure of damage or impact. Smaller spills, such as the Exxon Valdez oil spill , have already proven to have a great impact on ecosystems because of the remoteness of the site or the difficulty of mounting an emergency environmental response. [ 84 ]
Oil spills in the Niger Delta are among the worst on the planet and are often cited as an example of ecocide . [ 85 ] [ 86 ] [ 87 ] [ 88 ] [ 89 ] Between 1970 and 2000, there were over 7,000 spills. Between 1956 and 2006, up to 1.5 million tons of oil were spilled in the Niger Delta . [ 89 ]
Oil spills at sea are generally much more damaging than those on land, since they can spread for hundreds of nautical miles in a thin oil slick which can cover beaches with a thin coating of oil. [ citation needed ] These can kill seabirds, mammals, shellfish and other organisms they coat. Oil spills on land are more readily containable if a makeshift earth dam can be rapidly bulldozed around the spill site before most of the oil escapes, and land animals can avoid the oil more easily. [ citation needed ]
Oil spills can have devastating environmental impacts; however, they often have equally detrimental economic consequences. [ 117 ] These disasters not only pose immediate threats to marine ecosystems but also leave lasting impacts on local and regional economies. This section explores the multifaceted economic repercussions of oil spills, specifically considering the decline in tourism, the reduction in fishing, and the impact on port activity.
In the short term, an oil spill will prevent tourists from partaking in usual recreational activities such as swimming, boating, diving, and angling. [ 118 ] As such, the area will witness a decline in tourism. This will negatively impact several industries. Firstly, the hotels, restaurants, and bars in the immediate vicinity will have significantly fewer customers. Local car park owners and shopkeepers will be affected too. Then, this decline in tourists will cause further damage to travel agencies, tour guides, and transport companies. [ 119 ] The beaches will likely stay shut for several days whilst clean-up operations take place, and there may be disruption caused by an increase in clean-up vehicles. [ 118 ] Overall, several businesses will be negatively impacted by the spill in the short term, which can lead to further long-term damage should companies be forced to reduce staff or shut down entirely.
Tourism in Ibiza was severely impacted in 2007. Just 20 tonnes of oil were spilled from the Don Pedro in July 2007, a relatively limited volume compared with other spills. Whilst this caused only a small amount of environmental damage, the economic damage was disproportionately large. Most beaches were reopened within a week, just a dozen seabirds were affected, and there were no reports of injured sea mammals. Nonetheless, 27 percent of hotels in Ibiza were negatively affected, with two thirds of these being seafront hotels. In total, 32 claims were made by tourist firms, equating to approximately 1.5 million euros of compensation. [ 120 ] This provides a clear example of an oil spill causing economic damage out of proportion to its environmental impact. Furthermore, following the largest accidental marine oil spill, the Deepwater Horizon oil spill in 2010, [ 121 ] the U.S. Travel Association estimated 23 billion dollars' worth of associated costs for affected tourist infrastructure. [ 122 ]
After the Deepwater Horizon crisis, [ 121 ] the Gulf of Mexico suffered an estimated 1.9-billion-dollar loss in revenue from fishing. Fishing closures were imposed due to fears about the safety of seafood, [ 123 ] and there was also a decline in demand, as seafood restaurants and markets suffered such severe losses that many were forced to shut. [ 118 ] Usually, the Gulf sees an average of 106,703 fishing trips per day, [ 124 ] equating to 1 million metric tonnes of annual fishery landings. [ 125 ] Therefore, the necessary fishing ban following the disaster was highly damaging. Similarly, following the sinking of the Prestige oil tanker near Galicia, Spain, in November 2002, 77,000 tonnes of crude oil were spilled into the ocean. This disaster had severe economic consequences, alongside the environmental damage. Large zones were cordoned off in which fishing was banned, with these bans lasting for more than eight months. This affected several groups, including fishermen, ship owners, and the companies who bought and sold the fish. Several compensatory actions were introduced, including tax benefits and aid. This resulted in expenses of approximately 113 million euros in an attempt to compensate for the halt in fishing activity. [ 126 ] The examples of the Deepwater Horizon and the Prestige clearly illustrate the severe economic consequences when oil spills prevent commercial fishing.
Water pollution due to oil spills can be severe, often resulting in the death or injury of many sea creatures, including birds, sea mammals, fish, algae, and coral. [ 127 ] The impact on fish caught in the spill is both immediate and longer-term. Immediately, the fish are tainted with oil and cannot be sold commercially for safety reasons. The oil can then spread and sink below the water's surface; if fish swallow the oil, they too are unfit for consumption because of the health risk posed to humans. [ 127 ] Therefore, massive economic damage is caused to the fishing industry following an oil spill, as the stock is vastly reduced. Furthermore, the oil can damage the equipment and boats of fishermen. Clean-up operations can also interrupt usual fishing routes, and sometimes fishing bans are imposed. [ 119 ] This further illustrates the damaging economic effects of oil spills on commercial fishing, which is particularly detrimental for regions whose economies rely heavily on fishing.
Ports are major hubs for economic activity; thus, an oil spill in or near a port can have significant consequences. [ 128 ] During and following a spill, all boats entering or leaving the port must be closely managed in order to prevent further spread. Furthermore, specialist cleaning contractors must be hired to effectively clean the various port structures. [ 118 ] Oil spills are relatively regular occurrences in ports, as small spills often happen due to the large volume of boats, and these are not as well documented in the media as larger events are. [ 129 ] However, these spills must still be dealt with, and they can still have damaging economic repercussions. [ 130 ] Both the incident and the response require expensive and time-consuming management which is disruptive to port activity. [ 130 ] Furthermore, special care must be taken during clean-up operations to ensure that the oil does not get stuck under the quayside, as this could act as a continual source of oil contamination. [ 130 ] This can also be seen with sea defenses; should the oil penetrate deep into the structures, they may become a source of secondary pollution. [ 118 ] Therefore, it is crucial for ports to manage and mitigate any oil spills, in order to limit the damage to ships and shipping operations. Otherwise, should large disruption occur, the economic damage can be extensive due to costly clean-up processes and delayed shipments.
The economic impact of oil spills on tourism, fishing, and ports is substantial and important to assess. Coordinated efforts are necessary to mitigate these impacts, including effective clean-up measures, public relations campaigns to restore the image of affected areas, and support for businesses and communities that must bear the economic downturn. | https://en.wikipedia.org/wiki/Oil_spill |
An oil war is a conflict about petroleum resources , or their transportation, consumption, or regulation. The term may also refer generally to any conflict in a region that contains oil reserves or is geographically positioned in a location where an entity has or may wish to develop production or transportation infrastructure for petroleum products . It is also used to refer to any of a number of specific oil wars.
Research by Emily Meierding has characterized oil wars as largely a myth. [ 1 ] She argues that proponents of oil wars underestimate the ability to seize and exploit foreign oil fields , and thus exaggerate the value of oil wars. She has examined four cases commonly described as oil wars ( Japan 's attack on the Dutch East Indies in World War II , Iraq 's invasion of Kuwait , the Iran-Iraq War , and the Chaco War between Bolivia and Paraguay ), finding that control of additional oil resources was not the main cause of aggression in the conflicts. [ 2 ]
A 2024 study found that the presence of oil in contested territory can make states less likely to seek to acquire the territory. [ 3 ]
This military-related article is a stub . | https://en.wikipedia.org/wiki/Oil_war
Oil waxing occurs when heating oil begins to gel. Before the oil becomes too viscous to flow at all through the heating system's oil piping, wax particles (wax platelets, small spheres of wax, or, in some accounts, alkane "wax crystals") have already begun to form in the fuel. The wax platelets form first, from the long hydrocarbon chains that are a component of the heating oil (or diesel fuel). It is these waxy particles that can clog an oil line or even an oil-fired heating boiler, furnace, or water heater.
This article related to natural gas, petroleum or the petroleum industry is a stub . | https://en.wikipedia.org/wiki/Oil_waxing
Oil well control is the management of the dangerous effects caused by the unexpected release of formation fluid , such as natural gas and/or crude oil , upon surface equipment of oil or gas drilling rigs and escaping into the atmosphere. Technically, oil well control involves preventing the formation gas or fluid (hydrocarbons), usually referred to as kick , from entering into the wellbore during drilling or well interventions.
Formation fluid can enter the wellbore if the pressure exerted by the column of drilling fluid is not great enough to overcome the pressure exerted by the fluids in the formation being drilled (pore pressure). [ 1 ] [ 2 ] Oil well control also includes monitoring a well for signs of impending influx of formation fluid into the wellbore during drilling and procedures, to stop the well from flowing when it happens by taking proper remedial actions. [ 3 ]
Failure to manage and control these pressure effects can cause serious equipment damage and injury, or loss of life. Improperly managed well control situations can cause blowouts , which are uncontrolled and explosive expulsions of formation hydrocarbons from the well, potentially resulting in a fire. [ 4 ]
Oil well control is one of the most important aspects of drilling operations. Improper handling of kicks in oil well control can result in blowouts with very grave consequences, including the loss of valuable resources and also lives of field personnel. Even though the cost of a blowout (as a result of improper/no oil well control) can easily reach several millions of US dollars, the monetary loss is not as serious as the other damages that can occur: irreparable damage to the environment, waste of valuable resources, ruined equipment, and most importantly, the safety and lives of personnel on the drilling rig. [ 5 ] [ 6 ]
In order to avert the consequences of blowout, the utmost attention must be given to oil well control. That is why oil well control procedures should be in place prior to the start of an abnormal situation noticed within the wellbore, and ideally when a new rig position is sited. In other words, this includes the time the new location is picked, all drilling, completion , workover , snubbing and any other drilling-related operations that should be executed with proper oil well control in mind. [ 6 ] This type of preparation involves widespread training of personnel, the development of strict operational guidelines and the design of drilling programs – maximizing the probability of successfully regaining hydrostatic control of a well after a significant influx of formation fluid has taken place. [ 6 ] [ 7 ]
Pressure is a very important concept in the oil and gas industry. Pressure can be defined as the force exerted per unit area. Its SI unit is the newton per square metre, or pascal . Another unit, the bar , is also widely used as a measure of pressure, with 1 bar equal to 100 kilopascals. In the U.S. petroleum industry, pressure is normally measured in pounds of force per square inch of area, or psi; 1,000 psi equals 6,894.76 kilopascals.
Hydrostatic pressure (HSP) is the pressure due to a column of fluid that is not moving: a column of fluid that is static, or at rest, exerts pressure due to the local force of gravity acting on the column of fluid. [ 8 ]
The formula for calculating hydrostatic pressure in SI units ( N / m 2 ) is HSP = ρ × g × h, where ρ is the density of the fluid (kg/m 3 ), g is the acceleration due to gravity (about 9.81 m/s 2 ), and h is the true vertical height of the fluid column (m).
All fluids in a wellbore exert hydrostatic pressure, which is a function of the density and vertical height of the fluid column. In US oil field units, hydrostatic pressure can be expressed as HSP (psi) = 0.052 × mud weight (ppg) × true vertical depth (ft).
The 0.052 is the conversion factor needed to express HSP in psi when the fluid density is in pounds per gallon and the depth is in feet. [ 10 ] [ 11 ]
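A minimal sketch of this field-unit calculation, with hypothetical input values, is shown below; the 6.89476 factor converts psi to kPa.

    # Sketch of the field-unit hydrostatic pressure formula given above.
    def hydrostatic_pressure_psi(mud_weight_ppg: float, tvd_ft: float) -> float:
        # 0.052 converts pounds per gallon and feet into pounds per square inch.
        return 0.052 * mud_weight_ppg * tvd_ft

    def psi_to_kpa(pressure_psi: float) -> float:
        return pressure_psi * 6.89476

    # Hypothetical example: a 10.0 ppg mud at 8,000 ft true vertical depth.
    hsp_psi = hydrostatic_pressure_psi(10.0, 8_000.0)   # 4,160 psi
    hsp_kpa = psi_to_kpa(hsp_psi)                       # about 28,700 kPa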
To convert these units to SI units, one can use 1 psi = 6.895 kPa and 1 ppg (lbm/US gal) ≈ 119.83 kg/m 3 .
The pressure gradient is the pressure per unit length of vertical depth. Often in oil well control, the pressure exerted by a fluid is expressed in terms of its pressure gradient. The SI unit is pascals per metre. In field units the hydrostatic pressure gradient can be written as gradient (psi/ft) = 0.052 × fluid density (ppg); in SI units it is ρ × g (Pa/m).
Formation pressure is the pressure exerted by the formation fluids , which are the liquids and gases contained in the geologic formations encountered while drilling for oil or gas. It can also be described as the pressure contained within the pores of the formation or reservoir being drilled. Formation pressure results from the hydrostatic pressure of the formation fluids above the depth of interest, together with any pressure trapped in the formation. Formation pressure falls into three levels:
normally pressured formation,
abnormal formation pressure, or
subnormal formation pressure.
Normally pressured formation has a formation pressure that is the same with the hydrostatic pressure of the fluids above it. As the fluids above the formation are usually some form of water, this pressure can be defined as the pressure exerted by a column of water from the formation's depth to sea level.
The normal hydrostatic pressure gradient for freshwater is 0.433 pounds per square inch per foot (psi/ft), or 9.792 kilopascals per metre (kPa/m), and 0.465 psi/ft (10.516 kPa/m) for water with dissolved solids, such as Gulf Coast waters. The density of formation water in saline or marine environments, such as along the Gulf Coast, is about 9.0 ppg (1,078.43 kg/m 3 ). Since the hydrostatic gradient of a 9.0 ppg fluid slightly exceeds both the freshwater and the Gulf Coast water gradients, a normally pressured formation can be controlled with a 9.0 ppg mud.
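The short sketch below, assuming a hypothetical 10,000 ft well, illustrates why a 9.0 ppg mud slightly overbalances a normally pressured Gulf Coast formation.

    # Hypothetical 10,000 ft well in a normally pressured Gulf Coast formation.
    depth_ft = 10_000.0
    normal_fp_psi = 0.465 * depth_ft          # about 4,650 psi of formation pressure
    mud_hsp_psi = 0.052 * 9.0 * depth_ft      # about 4,680 psi from a 9.0 ppg mud
    overbalance_psi = mud_hsp_psi - normal_fp_psi   # about 30 psi, so the mud controls the formation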
Sometimes the weight of the overburden, which refers to the rocks and fluids above the formation, will tend to compact the formation, resulting in pressure built-up within the formation if the fluids are trapped in place. The formation in this case will retain its normal pressure only if there is a communication with the surface. Otherwise, an abnormal formation pressure will result.
As discussed above, once the fluids are trapped within the formation and not allowed to escape, pressure builds up, leading to abnormally high formation pressures. Such a formation will generally require a mud weight of greater than 9.0 ppg to control. Excess pressure, called "overpressure" or "geopressure", can cause a well to blow out or become uncontrollable during drilling.
Subnormal formation pressure is a formation pressure that is less than the normal pressure for the given depth. It is common in formations that had undergone production of original hydrocarbon or formation fluid in them. [ 12 ] [ 13 ] [ 14 ] [ 15 ]
Overburden pressure is the pressure exerted by the weight of the rocks and contained fluids above the zone of interest. Overburden pressure varies in different regions and formations. It is the force that tends to compact a formation vertically. The density of the overlying rocks usually ranges from about 18 to 22 ppg (2,157 to 2,636 kg/m 3 ). This range of densities generates an overburden pressure gradient of about 1 psi/ft (22.7 kPa/m). The 1 psi/ft figure is usually not applicable to shallow marine sediments or massive salt. Offshore, moreover, part of the column is the lighter sea water, and the column of rock does not extend all the way to the surface; a lower overburden pressure is therefore usually found at a given offshore depth than would be found at the same depth on land.
Mathematically, overburden pressure can be expressed as P ob = ρ b × g × D, where P ob is the overburden pressure, ρ b is the average bulk density of the overlying rock and fluid column, g is the acceleration due to gravity, and D is the vertical thickness of the overburden (the depth of interest).
The bulk density of the sediment is a function of the rock matrix density, the porosity, and the density of the fluid within the pore spaces. This can be expressed as ρ b = φ ρ f + (1 − φ) ρ m , where φ is the porosity, ρ f is the pore fluid density, and ρ m is the rock matrix density.
Fracture pressure can be defined as pressure required to cause a formation to fail or split. As the name implies, it is the pressure that causes the formation to fracture and the circulating fluid to be lost. Fracture pressure is usually expressed as a gradient, with the common units being psi/ft (kPa/m) or ppg (kg/m 3 ).
To fracture a formation, three things are generally needed, which are:
Pump pressure , which is also referred to as system pressure loss , is the sum total of all the pressure losses from the oil well surface equipment, the drill pipe , the drill collar , the drill bit , and annular friction losses around the drill collar and drill pipe. It measures the system pressure loss at the start of the circulating system and measures the total friction pressure. [ 20 ]
Slow pump pressure is the circulating pressure (pressure used to pump fluid through the whole active fluid system, including the borehole and all the surface tanks that constitute the primary system during drilling) at a reduced rate. SPP is very important during a well kill operation in which circulation (a process in which drilling fluid is circulated out of the suction pit, down the drill pipe and drill collars, out the bit, up the annulus, and back to the pits while drilling proceeds) is done at a reduced rate to allow better control of circulating pressures and to enable the mud properties (density and viscosity) to be kept at desired values. The slow pump pressure can also be referred to as "kill rate pressure" or "slow circulating pressure" or "kill speed pressure" and so on. [ 21 ] [ 22 ] [ 23 ]
Shut-in drill pipe pressure (SIDPP), which is recorded when a well is shut in on a kick, is a measure of the difference between the pressure at the bottom of the hole and the hydrostatic pressure (HSP) of the fluid in the drillpipe. During a well shut-in, the pressure in the wellbore stabilizes, and the formation pressure equals the pressure at the bottom of the hole. The drillpipe at this time should be full of fluid of known density, so the formation pressure can be easily calculated from the SIDPP. This means that the SIDPP gives a direct indication of formation pressure during a kick.
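A minimal sketch of this relationship, using hypothetical shut-in values, is:

    # With the well shut in, formation pressure equals the shut-in drillpipe
    # pressure plus the hydrostatic pressure of the known-density mud in the
    # drillpipe. All values are hypothetical.
    def formation_pressure_psi(sidpp_psi: float, mud_weight_ppg: float, tvd_ft: float) -> float:
        return sidpp_psi + 0.052 * mud_weight_ppg * tvd_ft

    fp = formation_pressure_psi(sidpp_psi=350.0, mud_weight_ppg=10.0, tvd_ft=9_000.0)
    # 350 + 0.052 * 10.0 * 9,000 = 5,030 psi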
The shut-in casing pressure (SICP) is a measure of the difference between the formation pressure and the HSP in the annulus when a kick occurs.
The pressures encountered in the annulus can be estimated from the balance: formation pressure = SICP + hydrostatic pressure of the mud in the annulus + hydrostatic pressure of the influx (kick fluid), where the hydrostatic terms are calculated from the densities and vertical heights of the respective fluids in the annulus.
Bottom-hole pressure (BHP) is the pressure at the bottom of a well, usually measured at the bottom of the hole. In a static, fluid-filled wellbore this pressure may be calculated with the equation BHP (psi) = 0.052 × fluid density (ppg) × true vertical depth (ft), where the fluid density is that of the mud column in the hole (in SI units, BHP = ρ × g × h).
In Canada the equivalent formula is hydrostatic pressure (kPa) = depth (m) × fluid density (kg/m 3 ) × 0.00981 (the gravity constant); with the pumps off, this hydrostatic pressure (HP) equals the bottom-hole pressure (HP = BHP).
The bottom-hole pressure is dependent on the following:
Therefore, BHP can be said to be the sum of all pressures acting at the bottom of the hole: the hydrostatic pressure of the fluid column plus any applied surface pressure, with annular friction pressure added while circulating.
There are some basic calculations that need to be carried out during oil well control. A few of these essential calculations are discussed below. Most of the units here are US oil field units, but they can be converted to their SI equivalents by using this Conversion of units link.
The capacity of drill string is an essential issue in oil well control. The capacity of drillpipe, drill collars or hole is the volume of fluid that can be contained within them.
The capacity formula is: capacity (bbl/ft) = ID 2 / 1029.4, where ID is the inside diameter of the pipe (or the diameter of the hole) in inches.
Also, the total pipe or hole volume is given by: total volume (bbl) = capacity (bbl/ft) × length (ft).
Feet of pipe occupied by a given volume is given by: length (ft) = volume (bbl) / capacity (bbl/ft).
Capacity calculation is important in oil well control due to the following:
This is the volume contained between the inside diameter of the hole and the outside diameter of the pipe.
Annular capacity is given by: annular capacity (bbl/ft) = (D h 2 − D p 2 ) / 1029.4, where D h is the inside diameter of the hole or casing in inches and D p is the outside diameter of the pipe in inches. Similarly, the total annular volume is the annular capacity multiplied by the length of the annulus, and the length of annulus occupied by a given volume is that volume divided by the annular capacity.
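A short sketch of these standard field-unit capacity formulas follows; the pipe sizes are hypothetical examples, and the 1029.4 divisor assumes diameters in inches and capacities in barrels per foot.

    # Sketch of the standard capacity formulas (diameters in inches,
    # capacities in barrels per foot).
    def pipe_capacity_bbl_per_ft(inside_diameter_in: float) -> float:
        return inside_diameter_in ** 2 / 1029.4

    def annular_capacity_bbl_per_ft(hole_id_in: float, pipe_od_in: float) -> float:
        return (hole_id_in ** 2 - pipe_od_in ** 2) / 1029.4

    # Hypothetical example: 5 in drillpipe (4.276 in inside diameter) in an 8.5 in hole.
    pipe_cap = pipe_capacity_bbl_per_ft(4.276)        # about 0.0178 bbl/ft
    ann_cap = annular_capacity_bbl_per_ft(8.5, 5.0)   # about 0.0459 bbl/ft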
Fluid level drop is the distance the mud level will drop when a dry string (a string whose bit is not plugged) is pulled from the wellbore without the hole being filled. It can be estimated as the volume of steel removed (the pipe displacement, in barrels) divided by the capacity of the casing or hole in bbl/ft, or, more precisely, by the casing capacity less the pipe displacement per foot. The resulting loss of HSP is given by: HSP loss (psi) = 0.052 × mud weight (ppg) × fluid level drop (ft), where the mud weight is that of the fluid in the hole.
When pulling a wet string (the bit is plugged), the fluid inside the drillpipe is not returned to the hole, so the internal capacity of the pipe is added to its displacement when calculating the fluid level drop; the drop, and the associated loss of hydrostatic pressure, is therefore larger than for a dry string.
Kill mud weight is the density of the mud required to balance formation pressure during a kill operation. The kill weight mud (KWM) can be calculated by: KWM (ppg) = current mud weight (ppg) + SIDPP (psi) / (0.052 × TVD (ft)), where SIDPP is the shut-in drill pipe pressure and TVD is the true vertical depth of the well.
But when the formation pressure can be determined from data sources such as bottom hole pressure measurements, KWM can be calculated as: KWM (ppg) = FP (psi) / (0.052 × TVD (ft)), where FP = formation pressure. [ 28 ]
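A minimal sketch of the two kill-weight-mud calculations above, with hypothetical well data, is:

    # Sketch of the two kill-weight-mud formulas above (hypothetical well data).
    def kwm_from_sidpp(current_mw_ppg: float, sidpp_psi: float, tvd_ft: float) -> float:
        return current_mw_ppg + sidpp_psi / (0.052 * tvd_ft)

    def kwm_from_formation_pressure(fp_psi: float, tvd_ft: float) -> float:
        return fp_psi / (0.052 * tvd_ft)

    kwm_a = kwm_from_sidpp(10.0, 350.0, 9_000.0)            # about 10.75 ppg
    kwm_b = kwm_from_formation_pressure(5_030.0, 9_000.0)   # about 10.75 ppg for the same well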
A kick is the entry of formation fluid into the wellbore during drilling operations. It occurs because the pressure exerted by the column of drilling fluid is not great enough to overcome the pressure exerted by the fluids in the formation being drilled. The whole essence of oil well control is to prevent a kick from occurring and, if one does happen, to prevent it from developing into a blowout . An uncontrolled kick usually results from not deploying the proper equipment, using poor practices, or a lack of training of the rig crews. Loss of oil well control may lead to a blowout, which represents one of the most severe threats associated with the exploration of petroleum resources, involving risk to lives as well as environmental and economic consequences. [ 29 ] [ 30 ]
A kick will occur when the bottom hole pressure (BHP) of a well falls below the formation pressure and the formation fluid flows into the wellbore. There are several common causes of kicks, some of which are:
Tripping is the complete operation of removing the drillstring from the wellbore and running it back in the hole. This operation is typically undertaken when the bit (which is the tool used to crush or cut rock during drilling) becomes dull or broken, and no longer drills the rock efficiently. A typical drilling operation of deep oil or gas wells may require up to 8 or more trips of the drill string to replace a dull rotary bit for one well.
Tripping out of the hole means that the entire volume of steel (the drillstring) is being removed, or has been removed, from the well. This removal of the drill string leaves behind a volume of space that must be replaced with an equal volume of mud . If the replacement is not done, the fluid level in the wellbore will drop, resulting in a loss of hydrostatic pressure (HSP) and bottom hole pressure (BHP). If this reduction takes the bottom hole pressure below the formation pressure , a kick will occur.
Swabbing occurs when bottom hole pressure is reduced due to the effects of pulling the drill string upward in the bored hole. During tripping out of the hole, the space vacated by the drillpipe , drill collar , or tubing being removed must be replaced by something, usually mud . If the rate of tripping out is greater than the rate at which mud is being pumped into the void space (created by the removal of the drill string), then swabbing will occur. If the reduction in bottom hole pressure caused by swabbing takes it below formation pressure , then a kick will occur.
Lost circulation usually occurs when the hydrostatic pressure fractures an open formation. When this occurs, there is loss in circulation, and the height of the fluid column decreases, leading to lower HSP in the wellbore . A kick can occur if steps are not taken to keep the hole full. Lost circulation can be caused by:
If the density of the drilling fluid or mud in the well bore is not sufficient to keep the formation pressure in check, then a kick can occur. Insufficient density of the drilling fluid can be a result of the following:
Another cause of kicks is drilling accidentally into abnormally-pressured permeable zones. The increased formation pressure may be greater than the bottom hole pressure, resulting in a kick.
Drilling into an adjacent well is a potential problem, particularly in offshore drilling where a large number of directional wells are drilled from the same platform . If the drilling well penetrates the production string of a previously completed well, the formation fluid from the completed well will enter the wellbore of the drilling well, causing a kick. If this occurs at a shallow depth, it is an extremely dangerous situation and could easily result in an uncontrolled blowout with little to no warning of the event.
A drill-stem test is performed by setting a packer above the formation to be tested, and allowing the formation to flow. During the course of the test, the bore hole or casing below the packer, and at least a portion of the drill pipe or tubing, is filled with formation fluid. At the conclusion of the test, this fluid must be removed by proper well control techniques to return the well to a safe condition. Failure to follow the correct procedures to kill the well could lead to a blowout. [ 31 ] [ 32 ] [ 33 ]
Improper fill on trip occurs when the volume of drilling fluid needed to keep the hole full on a trip (the complete operation of removing the drillstring from the wellbore and running it back in the hole) is less than the calculated volume or less than the trip book record. This condition is usually caused by formation fluid entering the wellbore due to the swabbing action of the drill string and, if action is not taken soon, the well will enter a kick state. [ 34 ] [ 35 ] [ 36 ]
In oil well control, a kick should be detected promptly, and once detected, proper kick prevention operations must be taken immediately to avoid a blowout. There are various tell-tale signs that warn an alert crew that a kick is about to start. Knowing these signs helps keep a kicking oil well under control and avoid a blowout:
A sudden increase in penetration rate (drilling break) is usually caused by a change in the type of formation being drilled. However, it may also signal an increase in formation pore pressure, which may indicate a possible kick.
If the rate at which the pumps are running is held constant, then the flow from the annulus should be constant. If the annulus flow increases without a corresponding change in pumping rate, the additional flow is caused by formation fluid(s) feeding into the well bore or gas expansion. This will indicate an impending kick.
If there is an unexplained increase in the volume of surface mud in the pit (a large tank that holds drilling fluid on the rig), it could signify an impending kick. This is because as the formation fluid feeds into the wellbore, it causes more drilling fluid to flow from the annulus than is pumped down the drill string , thus the volume of fluid in the pit(s) increases.
A decrease in pump pressure or increase in pump speed can happen as a result of a decrease in hydrostatic pressure of the annulus as the formation fluids enters the wellbore. As the lighter formation fluid flows into the wellbore, the hydrostatic pressure exerted by the annular column of fluid decreases, and the drilling fluid in the drill pipe tends to U-tube into the annulus. When this occurs, the pump pressure will drop, and the pump speed will increase. The lower pump pressure and increase in pump speed symptoms can also be indicative of a hole in the drill string, commonly referred to as a washout. Until a confirmation can be made whether a washout or a well kick has occurred, a kick should be assumed.
There are basically three types of oil well control which are:
primary oil well control,
secondary oil well control, and
tertiary oil well control. Those types are explained below.
Primary oil well control is the process which maintains a hydrostatic pressure in the wellbore greater than the pressure of the fluids in the formation being drilled, but less than the formation fracture pressure. It uses the mud weight to provide sufficient pressure to prevent an influx of formation fluid into the wellbore. If hydrostatic pressure is less than formation pressure, then formation fluids will enter the wellbore. If the hydrostatic pressure of the fluid in the wellbore exceeds the fracture pressure of the formation, then the fluid in the well could be lost into the formation. In an extreme case of lost circulation, the formation pressure may exceed hydrostatic pressure, allowing formation fluids to enter into the well.
Secondary oil well control is applied after primary oil well control has failed to prevent formation fluids from entering the wellbore. This process uses a blowout preventer (BOP) to prevent the escape of wellbore fluids from the well. With the rams and choke of the BOP closed, a pressure build-up test is carried out, and a kill mud weight is calculated and pumped into the well to kill the kick and circulate it out.
Tertiary oil well control describes the third line of defense, where the formation cannot be controlled by primary or secondary well control (hydrostatic and equipment). This happens in underground blowout situations. The following are examples of tertiary well control:
Using shut-in procedures is one of the oil-well-control measures to curtail kicks and prevent a blowout from occurring. Shut-in procedures are specific procedures for closing a well in case of a kick. When any positive indication of a kick is observed, such as a sudden increase in flow, or an increase in pit level, then the well should be shut-in immediately. If a well shut-in is not done promptly, a blowout is likely to happen.
Shut-in procedures are usually developed and practiced for every rig activity, such as drilling, tripping, logging, running tubular, performing a drill stem test, and so on. The primary purpose of a specific shut-in procedure is to minimize kick volume entering into a wellbore when a kick occurs, regardless of what phase of rig activity is occurring. However, a shut-in procedure is a company-specific procedure, and the policy of a company will dictate how a well should be shut-in.
There are generally two types of shut-in procedures: the soft shut-in and the hard shut-in. Of these two methods, the hard shut-in is the faster way to shut in the well and therefore minimizes the volume of kick allowed into the wellbore. [ 41 ]
A well kill procedure is an oil well control method. [ 42 ] Once the well has been shut in on a kick , proper kill procedures must be carried out immediately. The general idea in a well kill procedure is to circulate out any formation fluid already in the wellbore from the kick, and then to circulate a mud of sufficient density, called kill weight mud (KWM), into the well without allowing further fluid into the hole. If this can be done, then once the kill mud has been fully circulated around the well, it is possible to open up the well and restart normal operations. Generally, a kill weight mud that provides just hydrostatic balance for the formation pressure is circulated. This keeps the bottom hole pressure approximately constant and slightly greater than formation pressure as the kill circulation proceeds, because of the additional small circulating friction pressure loss. After circulation, the well is opened up again.
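The sketch below illustrates two circulating-pressure figures that are commonly worked out before a kill, the initial and final circulating pressures; these standard relationships are not written out in the text above, and all values are hypothetical.

    # Circulating pressures planned before a kill (hypothetical values).
    def initial_circulating_pressure(slow_pump_pressure_psi: float, sidpp_psi: float) -> float:
        # Pump pressure when kill circulation starts with the original mud still in the string.
        return slow_pump_pressure_psi + sidpp_psi

    def final_circulating_pressure(slow_pump_pressure_psi: float,
                                   kill_mud_ppg: float,
                                   original_mud_ppg: float) -> float:
        # Pump pressure once kill-weight mud has reached the bit.
        return slow_pump_pressure_psi * kill_mud_ppg / original_mud_ppg

    icp = initial_circulating_pressure(800.0, 350.0)       # 1,150 psi
    fcp = final_circulating_pressure(800.0, 10.75, 10.0)   # about 860 psi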
The major well kill procedures used in oil well control are listed below:
There will always be potential oil well control problems as long as there are drilling operations anywhere in the world. Most of these well control problems result from errors and can be eliminated, even though some are unavoidable. Since the consequences of failed well control are severe, efforts should be made to prevent the human errors that are the root causes of these incidents. These causes include:
An effective oil-well-control culture can be established within a company by requiring well control training of all rig workers, by assessing well control competence at the rigsite, and by supporting qualified personnel in carrying out safe well control practices during the drilling process. Such a culture also requires personnel involved in oil well control to commit to following the right procedures at the right time. Clearly communicated policies and procedures, credible training, competence assurance, and management support can minimize and mitigate well control incidents. An effective well control culture is built upon technically competent personnel who are also trained and skilled in crew resource management (a discipline within human factors), which comprises situation awareness, decision-making (problem-solving), communication, teamwork, and leadership. Training programs are developed and accredited by organizations such as the International Association of Drilling Contractors (IADC) and International Well Control Forum (IWCF).
IADC , headquartered in Houston, TX, is a nonprofit industry association that accredits well control training through a program called WellSharp, which is aimed at providing the necessary knowledge and practical skills critical to successful well control. This training comprises drilling and well servicing activities, as well as course levels applicable to everyone involved in supporting or conducting drilling operations—from the office support staff to the floorhands and drillers and up to the most-experienced supervisory personnel. Training such as those included in the WellSharp program and the courses offered by IWCF contribute to the competence of personnel, but true competence can be assessed only at the jobsite during operations. Therefore, IADC also accredits industry competence assurance programs to help ensure quality and consistency of the competence assurance process for drilling operations. IADC has regional offices all over the world and accredits companies worldwide. IWCF is an NGO , headquartered in Europe, whose main aim is to develop and administer well-control certification programs for personnel employed in oil-well drilling and for workover and well-intervention operations. [ 46 ] [ 47 ] [ 48 ] | https://en.wikipedia.org/wiki/Oil_well_control |
Oilgear Company is an American manufacturer of fluid power and hydraulic equipment, including pumps , valves , motors , meters and other components, as well as integrated systems, headquartered in Traverse City, Michigan . It was founded in 1921 in Milwaukee , Wisconsin , as an offshoot of hydraulic power work being done for Bucyrus-Erie , and initially manufactured a line of hydraulic presses. The company successfully weathered the Great Depression and gradually expanded its product line, becoming one of the first companies to use microprocessors with hydraulics, and in the early 1980s it expanded its research and development budgets to build complete computer-controlled manufacturing systems, buying in only the memory chips . [ 1 ]
In 2006, with manufacturing and service facilities in about 15 countries around the world, it was taken private by Mason Wells at a cost of over $30 million. [ 2 ] As of 2011, it had roughly 750 employees globally, with units in Mexico , France , Italy , Great Britain , and Germany . [ 3 ] Acquisitions since founding include Petrodyne , Towler Hydraulics , Olmsted Products and Clover Industries . [ 4 ] Competitors include Flowserve and Parker Hannifin . [ 5 ]
In May 2015, it was announced that Oilgear would close down its Milwaukee factory, moving the production formerly done there to plants in Traverse City, Michigan , and Fremont, Nebraska and eliminating 45 jobs. This would leave 85 jobs in the Milwaukee area, consisting of engineering and regional support staff, who would be moved to a smaller facility in the Milwaukee area. [ 6 ] The shuttered facility was sold in 2016 to Global Power Components , which announced plans to move its manufacturing operations from three existing facilities in nearby West Allis into the 250,000 square foot former Oilgear factory. [ 7 ]
This United States manufacturing company–related article is a stub . | https://en.wikipedia.org/wiki/Oilgear
Oja's learning rule , or simply Oja's rule , named after Finnish computer scientist Erkki Oja ( Finnish pronunciation: [ˈojɑ] , AW-yuh ), is a model of how neurons in the brain or in artificial neural networks change connection strength, or learn, over time. It is a modification of the standard Hebb's Rule that, through multiplicative normalization, solves all stability problems and generates an algorithm for principal components analysis . This is a computational form of an effect which is believed to happen in biological neurons.
Oja's rule requires a number of simplifications to derive, but in its final form it is demonstrably stable, unlike Hebb's rule. It is a single-neuron special case of the Generalized Hebbian Algorithm . However, Oja's rule can also be generalized in other ways to varying degrees of stability and success.
Consider a simplified model of a neuron y {\displaystyle y} that returns a linear combination of its inputs x using presynaptic weights w :
y ( x ) = ∑ j = 1 m x j w j {\displaystyle \,y(\mathbf {x} )~=~\sum _{j=1}^{m}x_{j}w_{j}}
Oja's rule defines the change in presynaptic weights w given the output response y {\displaystyle y} of a neuron to its inputs x to be Δ w = w n + 1 − w n = η y n ( x n − y n w n ) {\displaystyle \,\Delta \mathbf {w} ~=~\mathbf {w} _{n+1}-\mathbf {w} _{n}~=~\eta \,y_{n}(\mathbf {x} _{n}-y_{n}\mathbf {w} _{n})}
where η is the learning rate, which can also change with time. Note that the bold symbols are vectors and n defines a discrete time iteration. The rule can also be written for continuous iterations as d w d t = η y ( t ) ( x ( t ) − y ( t ) w ( t ) ) {\displaystyle \,{\frac {d\mathbf {w} }{dt}}~=~\eta \,y(t)\,(\mathbf {x} (t)-y(t)\mathbf {w} (t))}
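A minimal numerical sketch of the rule for a single linear neuron follows; the data, random seed and learning rate are arbitrary assumptions, and the weight vector should converge in direction to the first principal component of the inputs.

    import numpy as np

    # Minimal sketch of Oja's rule for a single linear neuron.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 2)) @ np.array([[1.0, 0.9], [0.9, 1.0]])  # correlated inputs

    w = rng.normal(size=2)
    eta = 0.01   # learning rate

    for x in X:
        y = w @ x                    # linear neuron output y = w . x
        w += eta * y * (x - y * w)   # Oja's rule update

    w /= np.linalg.norm(w)   # direction approximates the first principal component (about (1, 1)/sqrt(2))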
The simplest learning rule known is Hebb's rule, which states in conceptual terms that neurons that fire together, wire together . In component form as a difference equation, it is written Δ w = η y ( x n ) x n {\displaystyle \,\Delta \mathbf {w} ~=~\eta \,y(\mathbf {x} _{n})\mathbf {x} _{n}}
or in scalar form with implicit n -dependence,
where y ( x n ) is again the output, this time explicitly dependent on its input vector x .
Hebb's rule has synaptic weights approaching infinity with a positive learning rate. We can stop this by normalizing the weights so that each weight's magnitude is restricted between 0, corresponding to no weight, and 1, corresponding to being the only input neuron with any weight. We do this by normalizing the weight vector to be of length one:
$\mathbf{w}_{n+1} = \frac{\mathbf{w}_{n} + \eta\, y(\mathbf{x}_{n})\, \mathbf{x}_{n}}{\left( \sum_{i=1}^{m} \left| w_{i} + \eta\, y(\mathbf{x}_{n})\, x_{i} \right|^{p} \right)^{1/p}}.$
Note that in Oja's original paper, [ 1 ] p =2 , corresponding to quadrature (root sum of squares), which is the familiar Cartesian normalization rule. However, any type of normalization, even linear, will give the same result without loss of generality .
For a small learning rate $|\eta| \ll 1$ the equation can be expanded as a power series in $\eta$. [ 1 ]
For small η , our higher-order terms O ( η 2 ) go to zero. We again make the specification of a linear neuron, that is, the output of the neuron is equal to the sum of the product of each input and its synaptic weight to the power of p-1 , which in the case of p =2 is synaptic weight itself, or
We also specify that our weights normalize to 1, which will be a necessary condition for stability, so
$|\mathbf{w}| = \left( \sum_{i=1}^{m} w_{i}^{p} \right)^{1/p} = 1,$
which, when substituted into our expansion, gives Oja's rule, or
$\mathbf{w}_{n+1} = \mathbf{w}_{n} + \eta\, y(\mathbf{x}_{n}) \left( \mathbf{x}_{n} - y(\mathbf{x}_{n})\, \mathbf{w}_{n} \right).$
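The first-order character of this expansion is easy to verify numerically. The following Python sketch (illustrative only; the dimension, data, and step sizes are arbitrary choices, and the $p = 2$ norm is assumed) compares the explicitly normalized Hebbian update with the Oja update for a unit-length weight vector:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)
w /= np.linalg.norm(w)              # weight vector normalized to length one
x = rng.normal(size=5)              # an arbitrary input vector
y = x @ w                           # linear neuron output

for eta in (1e-1, 1e-2, 1e-3):
    hebb = w + eta * y * x                      # unnormalized Hebbian step
    normalized = hebb / np.linalg.norm(hebb)    # explicit p = 2 normalization
    oja = w + eta * y * (x - y * w)             # first-order (Oja) update
    err = np.linalg.norm(normalized - oja)
    print(f"eta = {eta:.0e}   difference = {err:.2e}   difference / eta^2 = {err / eta**2:.3f}")
```

Shrinking $\eta$ by a factor of ten shrinks the discrepancy by roughly a factor of one hundred, i.e. the two updates agree up to the $O(\eta^2)$ terms dropped in the expansion.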
In analyzing the convergence of a single neuron evolving by Oja's rule, one extracts the first principal component , or feature, of a data set. Furthermore, with extensions using the Generalized Hebbian Algorithm , one can create a multi-Oja neural network that can extract as many features as desired, allowing for principal components analysis .
A principal component $a_j$ is extracted from a dataset $\mathbf{x}$ through some associated vector $\mathbf{q}_j$, or $a_j = \mathbf{q}_j \cdot \mathbf{x}$, and we can restore our original dataset by taking
$\mathbf{x} = \sum_{j} a_{j} \mathbf{q}_{j}.$
In the case of a single neuron trained by Oja's rule, we find the weight vector converges to $\mathbf{q}_1$, or the first principal component, as time or number of iterations approaches infinity. We can also define, given a set of input vectors $X_i$, that its correlation matrix $R_{ij} = \langle X_i X_j \rangle$ has an associated eigenvector given by $\mathbf{q}_j$ with eigenvalue $\lambda_j$. The variance of outputs of our Oja neuron $\sigma^2(n) = \langle y^2(n) \rangle$ then converges with time iterations to the principal eigenvalue, or
$\lim_{n \to \infty} \sigma^2(n) = \lambda_{1}.$
These results are derived using Lyapunov function analysis, and they show that Oja's neuron necessarily converges on strictly the first principal component if certain conditions are met in our original learning rule. Most importantly, our learning rate $\eta$ is allowed to vary with time, but only such that its sum is divergent but its power sum is convergent, that is
$\sum_{n=1}^{\infty} \eta(n) = \infty, \qquad \sum_{n=1}^{\infty} \eta(n)^{p} < \infty, \quad p > 1.$
Our output activation function y ( x ( n )) is also allowed to be nonlinear and nonstatic, but it must be continuously differentiable in both x and w and have derivatives bounded in time. [ 2 ]
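These convergence claims can be reproduced with a short simulation. The sketch below is an illustration under assumed data; the covariance matrix, learning-rate schedule, and sample size are arbitrary choices. It trains a single Oja neuron on zero-mean Gaussian inputs and compares the learned weights with the first principal component of the correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(42)

# Zero-mean 2-D data with one dominant direction of variance.
cov = np.array([[3.0, 1.2],
                [1.2, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=20000)

w = rng.normal(size=2)
w /= np.linalg.norm(w)

for n, x in enumerate(X, start=1):
    eta = 1.0 / (100.0 + n)          # sum diverges, sum of squares converges
    y = x @ w
    w += eta * y * (x - y * w)       # Oja's rule

# First principal component from the correlation matrix R = <x x^T>.
R = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(R)
q1 = eigvecs[:, -1]                  # eigenvector of the largest eigenvalue

print("learned weights      :", np.round(w, 4))
print("first principal comp.:", np.round(q1, 4))
print("|cosine similarity|  :", abs(w @ q1) / np.linalg.norm(w))
print("largest eigenvalue   :", round(eigvals[-1], 4))
print("output variance <y^2>:", round(np.mean((X @ w) ** 2), 4))
```

Up to an arbitrary sign, the learned weight vector aligns with $\mathbf{q}_1$, and the variance of the neuron's output approaches the principal eigenvalue $\lambda_1$, as stated above.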
Oja's rule was originally described in Oja's 1982 paper, [ 1 ] but the principle of self-organization to which it is applied is first attributed to Alan Turing in 1952. [ 2 ] PCA has also had a long history of use before Oja's rule formalized its use in network computation in 1989. The model can thus be applied to any problem of self-organizing mapping , in particular those in which feature extraction is of primary interest. Therefore, Oja's rule has an important place in image and speech processing. It is also useful as it expands easily to higher dimensions of processing, thus being able to integrate multiple outputs quickly. A canonical example is its use in binocular vision . [ 3 ]
There is clear evidence for both long-term potentiation and long-term depression in biological neural networks, along with a normalization effect in both input weights and neuron outputs. However, while there is no direct experimental evidence yet of Oja's rule active in a biological neural network, a biophysical derivation of a generalization of the rule is possible. Such a derivation requires retrograde signalling from the postsynaptic neuron, which is biologically plausible (see neural backpropagation ), and takes the form of
where as before $w_{ij}$ is the synaptic weight between the $i$th input and $j$th output neurons, $x$ is the input, $y$ is the postsynaptic output, and we define $\varepsilon$ to be a constant analogous to the learning rate, and $c_{\text{pre}}$ and $c_{\text{post}}$ are presynaptic and postsynaptic functions that model the weakening of signals over time. Note that the angle brackets denote the average and the $\ast$ operator is a convolution . By taking the pre- and post-synaptic functions into frequency space and combining integration terms with the convolution, we find that this gives an arbitrary-dimensional generalization of Oja's rule known as Oja's Subspace , [ 4 ] namely | https://en.wikipedia.org/wiki/Oja's_rule
Okazaki fragments are short sequences of DNA nucleotides (approximately 150 to 200 base pairs long in eukaryotes ) which are synthesized discontinuously and later linked together by the enzyme DNA ligase to create the lagging strand during DNA replication . [ 1 ] They were discovered in the 1960s by the Japanese molecular biologists Reiji and Tsuneko Okazaki , along with the help of some of their colleagues.
During DNA replication, the double helix is unwound and the complementary strands are separated by the enzyme DNA helicase , creating what is known as the DNA replication fork . Following this fork, DNA primase and DNA polymerase begin to act in order to create a new complementary strand. Because these enzymes can only work in the 5’ to 3’ direction, the two unwound template strands are replicated in different ways. [ 2 ] One strand, the leading strand , undergoes a continuous replication process since its template strand has 3’ to 5’ directionality, allowing the polymerase assembling the leading strand to follow the replication fork without interruption. The lagging strand, however, cannot be created in a continuous fashion because its template strand has 5’ to 3’ directionality, which means the polymerase must work backwards from the replication fork. This causes periodic breaks in the process of creating the lagging strand. The primase and polymerase move in the opposite direction of the fork, so the enzymes must repeatedly stop and start again while the DNA helicase breaks the strands apart. Once the fragments are made, DNA ligase connects them into a single, continuous strand. [ 3 ] The entire replication process is considered "semi-discontinuous" since one of the new strands is formed continuously and the other is not. [ 4 ]
During the 1960s, Reiji and Tsuneko Okazaki conducted experiments involving DNA replication in the bacterium Escherichia coli . [ 2 ] Before this time, it was commonly thought that replication was a continuous process for both strands, but the discoveries involving E. coli led to a new model of replication. The scientists found there was a discontinuous replication process by pulse-labeling DNA and observing changes that pointed to non-contiguous replication.
The work of Kiwako Sakabe, Reiji Okazaki and Tsuneko Okazaki provided experimental evidence supporting the hypothesis that DNA replication is a discontinuous process. Previously, it was commonly accepted that replication was continuous in both the 3' to 5' and 5' to 3' directions. 3' and 5' are specifically numbered carbons on the deoxyribose ring in nucleic acids, and refer to the orientation or directionality of a strand. In 1967, Tsuneko Okazaki and Toru Ogawa suggested that there is no found mechanism that showed continuous replication in the 3' to 5' direction, only 5' to 3' using DNA polymerase , a replication enzyme. The team hypothesized that if discontinuous replication was used, short strands of DNA , synthesized at the replicating point, could be attached in the 5' to 3' direction to the older strand. [ 5 ]
To distinguish the method of replication used by DNA experimentally, the team pulse-labeled newly replicated areas of Escherichia coli chromosomes, denatured, and extracted the DNA. A large number of radioactive short units meant that the replication method was likely discontinuous. The hypothesis was further supported by the discovery of polynucleotide ligase , an enzyme that links short DNA strands together. [ 6 ]
In 1968, Reiji and Tsuneko Okazaki gathered additional evidence of nascent DNA strands. They hypothesized that if discontinuous replication, involving short DNA chains linked together by polynucleotide ligase, is the mechanism used in DNA synthesis, then "newly synthesized short DNA chains would accumulate in the cell under conditions where the function of ligase is temporarily impaired." E. coli were infected with bacteriophage T4 that produce temperature-sensitive polynucleotide ligase. The cells infected with the T4 phages accumulated a large number of short, newly synthesized DNA chains, as predicted in the hypothesis, when exposed to high temperatures. This experiment further supported the Okazakis' hypothesis of discontinuous replication and linkage by polynucleotide ligase. It disproved the notion that short chains were produced during the extraction process as well. [ 7 ]
The Okazakis' experiments provided extensive information on the replication process of DNA and the existence of short, newly synthesized DNA chains that later became known as Okazaki fragments.
Two pathways have been proposed to process Okazaki fragments: the short flap pathway and the long flap pathway.
In the short flap pathway in eukaryotes, the lagging strand of DNA is primed in short intervals. In the short flap pathway, only the nuclease FEN1 is involved. Pol δ frequently encounters the downstream primed Okazaki fragment and displaces the RNA/DNA initiator primer into a 5′ flap. The FEN1 5′–3′ endonuclease recognizes the displaced 5′ flap and cleaves it, creating a substrate for ligation. In this method the Pol α-synthesized primer is removed. Studies of FEN1 [ which? ] suggest a 'tracking' model in which the nuclease moves from the 5′ flap to its base to perform cleavage. [ 8 ] Pol δ does not possess nuclease activity to cleave the displaced flap; FEN1 cleaves the short flaps immediately after they form. Cleavage is inhibited when the 5′ end of the DNA flap is blocked either with a complementary primer or a biotin-conjugated streptavidin moiety. DNA ligase seals the nick made by FEN1, creating a functional, continuous double strand of DNA. PCNA stimulates the enzymatic functions of both FEN1 and DNA ligase, and this interaction is crucial for proper ligation of the lagging DNA strand. Sequential strand displacement and cleavage by Pol δ and FEN1, respectively, help to remove the entire initiator RNA before ligation; many rounds of displacement and cleavage are required to remove the initiator primer. The flap that is created is then processed and matured by the short flap pathway.
In some cases, FEN1 engages for only a short period of time and disengages from the replication complex. This delays cleavage, and the flaps displaced by Pol δ become long. When a flap reaches a sufficient length, the single-stranded DNA-binding protein RPA can bind it stably, and RPA-bound flaps are refractory to FEN1 cleavage; they require another nuclease for processing, which has been identified as the alternate nuclease DNA2. Defects of DNA2 show a genetic interaction with FEN1 overexpression, and DNA2 has been shown to work with FEN1 to process long flaps. DNA2 can dissociate RPA from a long flap using a mechanism similar to that of FEN1: it binds the flap and threads onto its 5′ end. The nuclease then cleaves the flap, making it too short to bind RPA; being too short, the flap becomes available for FEN1 cleavage and ligation. This is known as the long flap pathway. DNA2 can act as a backup for FEN1 nuclease activity, but this is not an efficient process.
Until recently, there were only two known pathways to process Okazaki fragments. However, current investigations have concluded that a new pathway for Okazaki fragmentation and DNA replication exists. This alternate pathway involves the enzymes Pol δ with Pif1 which perform the same flap removal process as Pol δ and FEN1. [ 9 ]
Primase adds RNA primers onto the lagging strand, which allows synthesis of Okazaki fragments from 5' to 3'. However, primase creates RNA primers at a much lower rate than that at which DNA polymerase synthesizes DNA on the leading strand. DNA polymerase on the lagging strand also has to be continually recycled to construct Okazaki fragments following RNA primers. This makes the speed of lagging strand synthesis much lower than that of the leading strand. To solve this, primase acts as a temporary stop signal, briefly halting the progression of the replication fork during DNA replication. This molecular process prevents the leading strand from overtaking the lagging strand. [ 10 ]
New DNA is made during this phase by enzymes that synthesize DNA in the 5′ to 3′ direction. DNA polymerase is essential for both the leading strand, which is made as a continuous strand, and the lagging strand, which is made in small pieces during DNA synthesis. This process extends the newly synthesized fragment and expels the RNA and DNA segment of the primer. Synthesis occurs in three phases with two different polymerases, DNA polymerase α-primase and DNA polymerase δ. The process starts with polymerase α-primase being displaced from the RNA/DNA primer by the clamp loader replication factor C, which loads the sliding clamp onto the DNA. DNA polymerase δ then assumes its holoenzyme form and synthesis begins. Synthesis continues until the 5′ end of the previous Okazaki fragment is reached, at which point Okazaki fragment processing proceeds to join the newly synthesized fragment to the lagging strand. The last function of DNA polymerase δ is to serve as a supplement to FEN1/RAD27 5′ flap endonuclease activity. The rad27-p allele is lethal in most combinations but was viable with the rad27-p polymerase and exo1 combinations; both show strong synergistic increases in CAN1 duplication mutations. The only reason this mutation is viable is the double-strand break repair genes RAD50, RAD51 and RAD52. RAD27/FEN1 creates ligatable nicks between adjacent Okazaki fragments by minimizing the amount of strand displacement in the lagging strand.
During lagging strand synthesis, DNA ligase I connects the Okazaki fragments, following replacement of the RNA primers with DNA nucleotides by DNA polymerase δ. Okazaki fragments that are not ligated could cause double-strand-breaks, which cleaves the DNA. [ 11 ] Since only a small number of double-strand breaks are tolerated, and only a small number can be repaired, enough ligation failures could be lethal to the cell.
Further research implicates the supplementary role of proliferating cell nuclear antigen (PCNA) to DNA ligase I's function of joining Okazaki fragments. When the PCNA binding site on DNA ligase I is inactive, DNA ligase I's ability to connect Okazaki fragments is severely impaired. Thus, a proposed mechanism follows: after a PCNA-DNA polymerase δ complex synthesizes Okazaki fragments, the DNA polymerase δ is released. Then, DNA ligase I binds to the PCNA, which is clamped to the nicks of the lagging strand, and catalyzes the formation of phosphodiester bonds. [ 12 ] [ 13 ] [ 14 ]
Flap endonuclease 1 ( FEN1 ) is responsible for processing Okazaki fragments. It works with DNA polymerase to remove the RNA primer of an Okazaki fragment and can remove the 5' ribonucleotide and 5' flaps when DNA polymerase displaces the strands during lagging strand synthesis. The removal of these flaps involves a process called nick translation and creates a nick for ligation. Thus, FEN1's function is necessary to Okazaki fragment maturation in forming a long continuous DNA strand. Likewise, during DNA base repair, the damaged nucleotide is displaced into a flap and subsequently removed by FEN1. [ 15 ] [ 16 ]
Dna2 endonuclease does not have a specific structure and its properties are not well characterized, but it acts on single-stranded DNA with free ends (ssDNA). Dna2 endonuclease is essential for cleaving long DNA flaps that escape FEN1 during Okazaki fragment processing, and it is responsible for the removal of the initiator RNA segment of Okazaki fragments. Dna2 also has a pivotal role in processing intermediates created during diverse DNA metabolic pathways and is functional in telomere maintenance. [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ]
Dna2 endonuclease becomes active when a terminal RNA segment attaches at the 5′ end, because it translocates in the 5′ to 3′ direction. In the presence of the single-stranded DNA-binding protein RPA, DNA 5′ flaps can become too long, and the nicks no longer fit as substrate for FEN1; this prevents FEN1 from removing the 5′ flaps. Dna2's role is thus to trim these long flaps, making it possible for FEN1 to cut them and making Okazaki fragment maturation more efficient. During Okazaki fragment processing, the Dna2 helicase and endonuclease activities are inseparable. Dna2 endonuclease does not depend on a 5′-tailed fork structure for its activity. Unproductive binding is known to create blocks to FEN1 cleavage and tracking. ATP reduces its activity but promotes release of the 3′-end label. Studies have suggested a new model in which Dna2 endonuclease and FEN1 are jointly responsible for Okazaki fragment maturation. [ 20 ] [ 18 ] [ 17 ] [ 22 ]
Newly synthesized DNA, in the form of Okazaki fragments, is joined by DNA ligase to form a new strand of DNA. Two strands are created when DNA is synthesized. The leading strand is continuously synthesized and is elongated during this process to expose the template that is used for the lagging strand (Okazaki fragments). During the process of DNA replication, DNA and RNA primers are removed from the lagging strand of DNA to allow the Okazaki fragments to be joined. Since this process is so common, Okazaki maturation takes place around a million times during one completion of DNA replication. For Okazaki maturation to occur, RNA primers must create segments on the fragments to be ligated. This is used as a building block for the synthesis of DNA in the lagging strand. On the template strand, polymerase synthesizes in the opposite direction from the replication fork. Once the template becomes discontinuous, an Okazaki fragment is created. Defects in the maturation of Okazaki fragments can potentially cause strands of DNA to break and cause different forms of chromosome abnormality. These mutations in the chromosomes can affect the appearance, the number of sets, or the number of individual chromosomes. Since chromosomes are fixed for each specific species, such changes can also alter the DNA and cause defects in the gene pool of that species.
Okazaki fragments are present in both prokaryotes and eukaryotes . [ 23 ] DNA molecules in eukaryotes differ from the circular molecules of prokaryotes in that they are larger and usually have multiple origins of replication. This means that each eukaryotic chromosome is composed of many replicating units of DNA with multiple origins of replication. In comparison, prokaryotic DNA has only a single origin of replication. In eukaryotes, these replicating forks, which are numerous all along the DNA, form "bubbles" in the DNA during replication. Replication forks form at specific points called autonomously replicating sequences (ARS). Eukaryotes have a clamp loader complex and a six-unit clamp called the proliferating cell nuclear antigen. [ 24 ] The efficient movement of the replication fork also relies critically on the rapid placement of sliding clamps at newly primed sites on the lagging DNA strand by ATP-dependent clamp loader complexes. This means that the piecewise generation of Okazaki fragments can keep up with the continuous synthesis of DNA on the leading strand. These clamp loader complexes are characteristic of all eukaryotes and separate some of the minor differences in the synthesis of Okazaki fragments in prokaryotes and eukaryotes. [ 25 ] The lengths of Okazaki fragments in prokaryotes and eukaryotes are different as well. Prokaryotes have Okazaki fragments that are considerably longer than those of eukaryotes. Eukaryotes typically have Okazaki fragments that are 100 to 200 nucleotides long, whereas fragments in prokaryotic E. coli can be 2,000 nucleotides long. The reason for this discrepancy is unknown.
Each eukaryotic chromosome is composed of many replicating units of DNA with multiple origins of replication. In comparison, the prokaryotic E. coli chromosome has only a single origin of replication. Replication in prokaryotes occurs in the cytoplasm, and begins with the formation of a replication bubble of about 100 to 200 or more nucleotides. Eukaryotic DNA molecules have a significantly larger number of replicons , about 50,000 or more; however, replication does not occur at the same time on all of the replicons. In eukaryotes, DNA replication takes place in the nucleus. Many replication forks form on a single replicating DNA molecule, and the start of DNA replication is controlled by multi-subunit proteins. Eukaryotic replication is slow, with only about 100 nucleotides added per second.
We can take from this that prokaryotic cells are simpler in structure: they have no nucleus or organelles, and very little DNA, in the form of a single chromosome. Eukaryotic cells have a nucleus with multiple organelles and more DNA, arranged in linear chromosomes. Size is another difference between prokaryotic and eukaryotic cells: the average eukaryotic cell has about 25 times more DNA than a prokaryotic cell does. Replication occurs much faster in prokaryotic cells than in eukaryotic cells; bacteria sometimes take only 40 minutes, while animal cells can take up to 400 hours. Eukaryotes also have a distinct operation for replicating the telomeres at the ends of their chromosomes. Prokaryotes have circular chromosomes, so there are no ends to synthesize. Prokaryotes have a short replication process that occurs continuously; eukaryotic cells, on the other hand, only undertake DNA replication during the S-phase of the cell cycle .
Prokaryotic and eukaryotic replication also share the same basic steps. In both prokaryotes and eukaryotes, replication is accomplished by unwinding the DNA with an enzyme called DNA helicase, and new strands are created by enzymes called DNA polymerases. Both follow a similar pattern, called semi-conservative replication, in which the individual strands of DNA are produced in different directions, making a leading and a lagging strand. The lagging strands are synthesized through the production of Okazaki fragments that are soon joined together. Both types of organism also begin new DNA strands with short strands of RNA.
Although cells undergo multiple steps in order to ensure there are no mutations in the genetic sequence, sometimes specific deletions and other genetic changes during Okazaki fragment maturation go unnoticed. Because Okazaki fragments are the set of nucleotides for the lagging strand, any alteration including deletions, insertions, or duplications from the original strand can cause a mutation if it is not detected and fixed. Other causes of mutations include problems with the proteins that aid in DNA replication. For example, a mutation related to primase affects RNA primer removal and can make the DNA strand more fragile and susceptible to breaks. Another mutation concerns polymerase α, which impairs the editing of the Okazaki fragment sequence and incorporation of the protein into the genetic material. Both alterations can lead to chromosomal aberrations, unintentional genetic rearrangement, and a variety of cancers later in life. [ 26 ]
In order to test the effects of the protein mutations on living organisms, researchers genetically altered lab mice to be homozygous for mutations in a protein related to DNA replication, flap endonuclease 1 , or FEN1. The results varied based on the specific gene alterations. The homozygous knockout mutant mice experienced a "failure of cell proliferation" and "early embryonic lethality" (27). The mice with the mutation F343A and F344A (also known as FFAA) died directly after birth due to complications in birth including pancytopenia and pulmonary hypoplasia . This is because the FFAA mutation prevents the FEN1 from interacting with PCNA (proliferating cell nuclear antigen), consequently not allowing it to complete its purpose during Okazaki fragment maturation. The interaction with this protein is considered to be the key molecular function in FEN1's biological role. The FFAA mutation causes defects in RNA primer removal and long-base-pair repair, which cause many breaks in the DNA. Under careful observation, cells homozygous for FFAA FEN1 mutations seem to display only partial defects in maturation, meaning mice heterozygous for the mutation would be able to survive into adulthood, despite sustaining multiple small nicks in their genomes. Inevitably, however, these nicks prevent future DNA replication because the break causes the replication fork to collapse and causes double-strand breaks in the actual DNA sequence. In time, these nicks also cause full chromosome breaks, which could lead to severe mutations and cancers. Other mutations have been studied with altered versions of polymerase α, leading to similar results. [ 26 ] | https://en.wikipedia.org/wiki/Okazaki_fragments
Okenane , the diagenetic end product of okenone, is a biomarker for Chromatiaceae , the purple sulfur bacteria . [ 1 ] These anoxygenic phototrophs use light for energy and sulfide as their electron donor and sulfur source. Discovery of okenane in marine sediments implies a past euxinic environment, where water columns were anoxic and sulfidic. This is potentially tremendously important for reconstructing past oceanic conditions, but so far okenane has only been identified in one Paleoproterozoic (1.6 billion years old) rock sample from Northern Australia. [ 2 ] [ 3 ]
Okenone is a carotenoid , [ 4 ] a class of pigments ubiquitous across photosynthetic organisms. These conjugated molecules act as accessories in the light harvesting complex . Over 600 carotenoids are known, each with a variety of functional groups that alter their absorption spectrum . Okenone appears to be best adapted to the yellow-green transition (520 nm) of the visible spectrum , capturing light below marine plankton in the ocean. This depth varies based on the community structure of the water column. A survey of microbial blooms found Chromatiaceae anywhere between 1.5m and 24m depth, but more than 75% occurred above 12 meters. [ 5 ] Further planktonic sulfur bacteria occupy other niches: green sulfur bacteria , the Chlorobiaceae , that produce the carotenoid chlorobactene were found in greatest abundance above 6m while green sulfur bacteria that produce isorenieratene were predominantly identified above 17m. Finding any of these carotenoids in ancient rocks could constrain the depth of the oxic to anoxic transition as well as confine past ecology . Okenane and chlorobactane discovered in Australian Paleoproterozoic samples allowed conclusions of a temporarily shallow anoxic transition, likely between 12 and 25m. [ 2 ]
Okenone is synthesized in 12 species of Chromatiaceae, spanning eight genera . Other purple sulfur bacteria have acyclic carotenoid pigments like lycopene and rhodopin . However, geochemists largely study okenone because it is structurally unique. It is the only pigment with a 2,3,4 trimethyl aryl substitution pattern. In contrast, the green sulfur bacteria produce 2,3,6 trimethylaryl isoprenoids . [ 6 ] The synthesis of these structures produces biological specificity that can distinguish the ecology of past environments. Okenone, chlorobactene, and isorenieratene are produced by sulfur bacteria through modification of lycopene . In okenone, the end group of lycopene produces a χ-ring, while chlorobactene has a φ-ring. [ 7 ] The first step in biosynthesis of these two pigments is similar: formation of a β-ring by a β-cyclase enzyme . Then the syntheses diverge, with a carotene desaturase/methyltransferase enzyme transforming the β-ring end group into a χ-ring. Other reactions complete the synthesis to okenone: elongating the conjugation, adding a methoxy group , and inserting a ketone . However, only the first synthetic steps are well characterized biologically.
Pigments and other biomarkers produced by organisms can evade microbial and chemical degradation and persist in sedimentary rocks . [ 8 ] Under conditions of preservation, the environment is often anoxic and reducing, leading to chemical loss of functional groups like double bonds and hydroxyl groups . The exact reactions during diagenesis are poorly understood, although some have proposed reductive desulphurization as a mechanism for saturation of okenone to okenane. [ 9 ] [ 10 ] There is always the possibility that okenane is created by abiotic reactions, possibly from methyl shifts in β-carotene . [ 11 ] If this reaction were occurring, okenane would have multiple precursors and the biological specificity of the biomarker would be diminished. However, it is unlikely that isomer-specific rearrangements of two methyl groups are occurring without enzymatic activity. The majority of studies conclude that okenane is a true biomarker of purple sulfur bacteria. However, other biological arguments against this interpretation hold merit. [ 12 ] Past organisms that synthesized okenone may not have been analogues of modern purple sulfur bacteria. There may also be other okenone-producing photosynthesizers in today's ocean that are uncharacterized. A further complication is horizontal gene transfer . [ 13 ] If Chromatiaceae gained the ability to create okenone more recently than the Paleoproterozoic, then okenane does not track purple sulfur bacteria, but rather the original gene donor. These ambiguities indicate that interpretation of biomarkers in billion-year-old rocks will be limited by understanding of ancient metabolisms .
Prior to analysis, organic matter is extracted from the sedimentary rocks. Typically, less than one percent is extractable due to the thermal maturity of the source rock. The organic content is often separated into saturates , aromatics , and polars . Gas chromatography can be coupled to mass spectrometry to analyze the extracted aromatic fraction. Compounds elute from the chromatographic column and are detected by the mass spectrometer according to their mass-to-charge ratio (m/z), displayed by relative intensity. Peaks are assigned to compounds based on library searches, standards, and relative retention times . Some molecules have characteristic peaks that allow easy searches at particular mass-to-charge ratios. For the trimethylaryl isoprenoid okenane this characteristic peak occurs at m/z 134.
Carbon isotope ratios of purple and green sulfur bacteria are significantly different from those of other photosynthesizing organisms. The biomass of the purple sulfur bacteria, Chromatiaceae, is often depleted in δ13C compared to typical oxygenic phototrophs, while the green sulfur bacteria, Chlorobiaceae, are often enriched. [ 14 ] This offers an additional way to discriminate between ecological communities preserved in sedimentary rocks. For the biomarker okenane, the δ13C could be determined by an Isotope Ratio Mass Spectrometer .
In modern environments, purple sulfur bacteria thrive in meromictic (permanently stratified) lakes [ 15 ] and silled fjords and are seen in few marine ecosystems. Stratified, sulfidic basins like the Black Sea are exceptions. [ 16 ] However, billions of years ago, when the oceans were anoxic and sulfidic, phototrophic sulfur bacteria had more habitable space. Researchers at the Australian National University and the Massachusetts Institute of Technology investigated 1.6-billion-year-old rocks to examine the chemical conditions of the Paleoproterozoic ocean. Many believe that this time had deeply penetrating oxic water columns because of the disappearance of banded iron formations roughly 1.8 billion years ago. Others, spearheaded by Donald Canfield 's 1998 Nature paper, believe that waters were euxinic. Examining rocks from the time uncovered biomarkers of both purple and green sulfur bacteria, adding evidence to support the Canfield Ocean hypothesis. The sedimentary outcrop analyzed was the Barney Creek Formation from the McArthur group in northern Australia. Sample analysis identified both the 2,3,6 trimethylaryl isoprenoids (chlorobactane) of Chlorobiaceae and the 2,3,4 trimethylaryl isoprenoids (okenane) of Chromatiaceae. Both chlorobactane and okenane indicate a euxinic ocean, with sulfidic and anoxic surface conditions below 12–25 m. The authors concluded that although oxygen was in the atmosphere, the Paleoproterozoic oceans were not completely oxygenated. [ 2 ] | https://en.wikipedia.org/wiki/Okenane
The Okinawa diet describes the traditional dietary practices of indigenous people of the Ryukyu Islands (belonging to Japan), which were claimed to have contributed to their relative longevity over a period of study in the 20th century. [ 1 ]
As assessed over 1949 to 1998, people from the Ryukyu Islands (of which Okinawa is the largest) had a life expectancy among the highest in the world (83.8 years vs. 78.9 years in the United States), [ 2 ] although the male life expectancy rank among Japanese prefectures plummeted in the 21st century. [ 3 ] [ 4 ]
Okinawa had the longest life expectancy of all prefectures of Japan for almost 30 years prior to 2000. [ 5 ] The relative life expectancy of Okinawans has since declined, due to many factors including Westernization. [ 3 ] [ 4 ] In 2000, Okinawa dropped in its ranking for longevity advantage for men to 26th out of 47 within the prefectures of Japan . [ 3 ] In 2015, Japan had the highest life expectancy of any country: 90 years for women and 84 years for men. [ 6 ]
Although myriad factors could account for differences in life expectancy, calorie restriction and regular physical activity may have contributed. [ 2 ] People have promoted the "Okinawa diet", although diet alone is unlikely to explain the high life expectancy among seniors on Okinawa in the 20th century. [ 7 ]
The traditional diet of the islanders contained sweet potato , green-leafy or root vegetables, and soy foods, such as miso soup , tofu or other soy preparations, occasionally served with small amounts of fish, noodles, or lean meats, all cooked with herbs, spices, and oil. [ 8 ] Although the traditional Japanese diet usually includes large quantities of rice, the traditional Okinawa diet consisted of smaller quantities of rice; instead the staple was sweet potato. [ 2 ] [ 8 ] The Okinawa diet had only 30% of the sugar and 15% of the grains of the average Japanese dietary intake. [ 2 ]
Okinawan cuisine consists of smaller meal portions of green and yellow vegetables, soy and other legumes, relatively small amounts of rice compared to mainland Japan, as well as occasional fish and pork. The center of the Okinawan cuisine is the sweet potato. Not only is the sweet potato tuber used but so are the leaves from the plant. The leaves are used often in miso soup. In Okinawa, the bitter melon is called goya and is served in the national dish, gōyā chanpurū . [ 6 ]
The dietary intake of Okinawans compared to other Japanese circa 1950 shows that Okinawans consumed: fewer total calories (1785 vs. 2068), less polyunsaturated fat (4.8% of calories vs. 8%), less rice (154g vs. 328g), significantly less wheat, barley and other grains (38g vs. 153g), less sugars (3g vs. 8g), more legumes (71g vs. 55g), significantly less fish (15g vs. 62g), significantly less meat and poultry (3g vs. 11g), less eggs (1g vs. 7g), less dairy (<1g vs. 8g), much more sweet potatoes (849g vs. 66g), less other potatoes (2g vs. 47g), less fruit (<1g vs. 44g), and no pickled vegetables (0g vs. 42g). [ 2 ] As proportions of total caloric intake, foods in the traditional Okinawa diet included sweet potato (69%), rice (12%), other grains (7%), legumes including soy (6%), green and yellow vegetables (3%), refined oils (2%), fish (1%) and seaweed, meat (mostly pork), refined sugars, potato, egg, nuts and seeds, dairy and fruit (all <1%). [ 2 ] Specifically, the Okinawans circa 1950 ate sweet potatoes for 849 grams of the total 1262 grams of food that they consumed, which constituted 69% of their total daily calories. [ 2 ]
The traditional Okinawan diet as described above was widely practiced on the islands until about the 1960s. [ 2 ] Since then, dietary practices shifted towards Western and mainland Japanese patterns, with fat intake rising from about 6% to 27% of total caloric intake and the sweet potato being supplanted with rice and bread. [ 9 ]
Another low-calorie staple in Okinawa was seaweed, particularly, konbu or kombu . [ 6 ] This plant, like much of the greenery from the island, is rich in protein, amino acids and minerals such as iodine . Another seaweed commonly eaten was wakame , which is rich in minerals like iodine, magnesium and calcium. Seaweed and tofu in one form or other were eaten on a daily basis. [ 10 ]
Okinawans ate three grams total of meat – including pork and poultry – per day, substantially less than the 11-gram average of Japanese as a whole in 1950. [ 2 ] The pig's feet, ears, and stomach were considered as everyday foodstuffs. [ 11 ] In 1979 after many years of Westernization, the quantity of pork consumption per person a year in Okinawa was 7.9 kg (17 lb), exceeding by about 50% that of the Japanese national average. [ 12 ]
In addition to their relative longevity identified in the mid-20th century, islanders were noted for their low mortality from cardiovascular disease and certain types of cancers. One study compared age-adjusted mortality of Okinawans versus Americans and found that, during 1995, an average Okinawan was 8 times less likely to die from coronary artery disease , 7 times less likely to die from prostate cancer , 6.5 times less likely to die from breast cancer, and 2.5 times less likely to die from colon cancer than an average American of the same age, [ 2 ] though more than 10% of the Okinawans suffered from cheilosis from a low consumption of vitamin B2 . [ 2 ] Delayed menstruation and deficient lactation were also relatively frequent at 9% and almost 18% due to low caloric intake and/or low body fat levels in women. [ 2 ] In the 21st century, the shifting dietary trend coincided with a decrease in longevity, where Okinawans actually developed a lower life expectancy than the Japanese average. [ 3 ]
Overall, the traditional Okinawa diet led to little weight gain with age, low body mass index throughout life, and low risk from age-related disease. [ 2 ] No ingredients or foods of any kind have been scientifically shown to possess antiaging properties. [ 13 ]
In the 1972 Japan National Nutrition Survey, it was determined that Okinawan adults consumed 83% of what Japanese adults did and that Okinawan children consumed 62% of what Japanese children consumed. [ 2 ] Since the early 2000s, the difference in life expectancy between Okinawan and mainland Japanese decreased, possibly due to Westernization and erosion of the traditional diet. [ 3 ] [ 4 ] The spread of primarily American fast-food chains was linked with an increase in cardiovascular diseases, much like the ones noted in Japanese migrants to the United States. [ 3 ] [ 4 ]
Okinawa and Japan have food-centered cultures. Festivities often include food or are food-based. Moreover, the food tends to be seasonal, fresh and raw. Portion sizes are small and meals are brought out in stages, starting with appetizers, followed by many main courses including sashimi (raw fish) and suimono (soup), and ending with sweets and tea. [ 14 ] The food culture and presentation are preserved, with low-calorie food passed from generation to generation. [ 10 ] | https://en.wikipedia.org/wiki/Okinawa_diet
Okishio's theorem is a theorem formulated by Japanese economist Nobuo Okishio . It has had a major impact on debates about Marx 's theory of value . Intuitively, it can be understood as saying that if one capitalist raises his profits by introducing a new technique that cuts his costs, the collective or general rate of profit in society goes up for all capitalists. In 1961, Okishio established this theorem under the assumption that the real wage remains constant. Thus, the theorem isolates the effect of pure innovation from any consequent changes in the wage.
For this reason the theorem, first proposed in 1961, excited great interest and controversy because, according to Okishio, it contradicts Marx's law of the tendency of the rate of profit to fall . [ citation needed ] Marx had claimed that the new general rate of profit, after a new technique has spread throughout the branch where it has been introduced, would be lower than before. In modern words, the capitalists would be caught in a rationality trap or prisoner's dilemma : that which is rational from the point of view of a single capitalist, turns out to be irrational for the system as a whole, for the collective of all capitalists. This result was widely understood, including by Marx himself, as establishing that capitalism contained inherent limits to its own success. Okishio's theorem was therefore received in the West as establishing that Marx's proof of this fundamental result was inconsistent .
More precisely, the theorem says that the general rate of profit in the economy as a whole will be higher if a new technique of production is introduced in which, at the prices prevailing at the time that the change is introduced, the unit cost of output in one industry is less than the pre-change unit cost. The theorem, as Okishio (1961:88) points out, does not apply to non-basic branches of industry.
The proof of the theorem may be most easily understood as an application of the Perron–Frobenius theorem . This latter theorem comes from a branch of linear algebra known as the theory of nonnegative matrices . A good source text for the basic theory is Seneta (1973). The statement of Okishio's theorem, and the controversies surrounding it, may however be understood intuitively without reference to, or in-depth knowledge of, the Perron–Frobenius theorem or the general theory of nonnegative matrices.
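The fact from that theory which matters here can be illustrated in a few lines: a nonnegative input matrix has a real dominant eigenvalue (the Perron root) with a nonnegative eigenvector, and in price terms this root fixes the uniform rate of profit. The Python sketch below is only an illustration; the matrix entries are hypothetical, chosen to be consistent with the two-department example developed below (the consumption-goods column in particular is an assumption, since those coefficients are not stated explicitly in the text).

```python
import numpy as np

# Augmented input matrix A: entry (i, j) is the input of good i
# (with the wage basket counted as consumption-good input) needed
# per unit of good j.  Hypothetical, illustrative numbers.
A = np.array([[0.80, 0.40],   # investment goods per unit of output
              [0.20, 0.20]])  # wage (consumption) goods per unit of output

# Price equations p = (1 + r) A^T p  imply  A^T p = p / (1 + r),
# so 1 / (1 + r) is the Perron root of A.
eigvals, eigvecs = np.linalg.eig(A.T)
i = int(np.argmax(eigvals.real))         # dominant (Perron) eigenvalue
lam = eigvals[i].real
p = eigvecs[:, i].real
p = p / p[1]                             # numeraire: consumption good price = 1

print("Perron root          :", round(lam, 4))            # ~ 0.9123
print("uniform profit rate r:", round(1 / lam - 1, 4))     # ~ 0.0961
print("relative prices p    :", np.round(p, 4))            # p1 ~ 1.78, p2 = 1
```

A viable cost-reducing technique lowers the unit cost at prevailing prices, and monotonicity properties of the Perron root then imply a lower root and hence a higher uniform profit rate; this is the intuition the theorem makes precise.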
The argument of Nobuo Okishio, a Japanese economist, is based on a Sraffa -model. The economy consists of two departments I and II, where I is the investment goods department (means of production) and II is the consumption goods department, where the consumption goods for workers are produced. The coefficients of production tell how much of the several inputs is necessary to produce one unit of output of a given commodity ("production of commodities by means of commodities"). In the model below two outputs exist: $x_1$, the quantity of investment goods , and $x_2$, the quantity of consumption goods.
The coefficients of production are defined as:
$a_{11}$: the quantity of investment goods required to produce one unit of investment goods;
$a_{21}$: the hours of labour required to produce one unit of investment goods;
$a_{12}$: the quantity of investment goods required to produce one unit of consumption goods;
$a_{22}$: the hours of labour required to produce one unit of consumption goods.
The worker receives a wage at a certain wage rate w (per unit of labour), which is defined by a certain quantity of consumption goods.
Thus:
This table describes the economy:
This is equivalent to the following equations:
$(1 + r)\,(a_{11} x_1 p_1 + a_{21} x_1 w p_2) = x_1 p_1$
$(1 + r)\,(a_{12} x_2 p_1 + a_{22} x_2 w p_2) = x_2 p_2$
where $p_1$ and $p_2$ are the prices of the two goods and $r$ is the general rate of profit.
In department I expenses for investment goods or for constant capital are:
$a_{11}\, x_1\, p_1 .$
In Department II expenses for constant capital are:
$a_{12}\, x_2\, p_1$
and for variable capital :
$a_{22}\, x_2\, w\, p_2 .$
(The constant and variable capital of the economy as a whole is a weighted sum of these capitals of the two departments. See below for the relative magnitudes of the two departments which serve as weights for summing up constant and variable capitals.)
Now the following assumptions are made:
Okishio, following some Marxist tradition, assumes a constant real wage rate equal to the value of labour power, that is, the wage rate must allow workers to buy a basket of consumption goods necessary to reproduce their labour power. So, in this example it is assumed that workers get two units of consumption goods per hour of labour in order to reproduce their labour power.
A technique of production is defined, following Sraffa, by its coefficients of production. A technique might, for example, be numerically specified by the following coefficients of production:
From this an equilibrium growth path can be computed. The price for the investment goods is computed as (not shown here): $p_1 = 1.78$, and the profit rate is $r = 0.0961 = 9.61\%$. The equilibrium system of equations then is:
A single firm of department I is supposed to use the same technique of production as the department as a whole. So, the technique of production of this firm is described by the following:
Now this firm introduces technical progress by adopting a technique in which fewer working hours are needed to produce one unit of output; the respective production coefficient is reduced, say, by half from $a_{21} = 0.1$ to $a_{21} = 0.05$. This already increases the technical composition of capital , because to produce one unit of output (investment goods) only half as many working hours are needed, while as much of the investment goods is needed as before. In addition to this, it is assumed that the labour-saving technique goes hand in hand with a higher productive consumption of investment goods, so that the respective production coefficient is increased from, say, $a_{11} = 0.8$ to $a_{11} = 0.85$.
This firm, after having adopted the new technique of production is now described by the following equation, keeping in mind that at first prices and the wage rate remain the same as long as only this one firm has changed its technique of production:
So this firm has increased its rate of profit from $r = 9.61\%$ to $10.36\%$. This accords with Marx's argument that firms introduce new techniques only if this raises the rate of profit. [ 1 ]
Marx expected, however, that if the new technique will have spread through the whole branch, that if it has been adopted by the other firms of the branch, the new equilibrium rate of profit not only for the pioneering firm will be again somewhat lower, but for the branch and the economy as a whole. The traditional reasoning is that only "living labour" can produce value, whereas constant capital, the expenses for investment goods, do not create value. The value of constant capital is only transferred to the final products. Because the new technique is labour-saving on the one hand, outlays for investment goods have been increased on the other, the rate of profit must finally be lower.
Let us assume the new technique spreads through all of department I. Computing the new equilibrium rate of growth and the new price $p_1$ gives, under the assumption that a new general rate of profit is established:
If the new technique is generally adopted inside department I, the new equilibrium general rate of profit is somewhat lower than the profit rate the pioneering firm enjoyed at the beginning ($10.36\%$), but it is still higher than the old prevailing general rate of profit: $10.30\%$ is larger than $9.61\%$.
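The profit rates quoted in this example can be reproduced with a short script. One caveat: the coefficients of department II are not stated explicitly above, so the values a12 = 0.4 and a22 = 0.1 used below are an assumption, chosen because they are consistent with the quoted figures (p1 = 1.78, r = 9.61%, 10.36%, 10.30%); the sketch is an illustration, not the article's own calculation.

```python
import numpy as np

W = 2.0   # real wage: units of consumption goods per hour of labour

def equilibrium(a11, a21, a12, a22, w=W):
    """Uniform profit rate r and price p1 (with p2 = 1 as numeraire) from
    p1 = (1 + r)(a11*p1 + a21*w)  and  1 = (1 + r)(a12*p1 + a22*w)."""
    # Eliminating p1 gives a quadratic in R = 1 + r:
    #   (a12*a21 - a11*a22)*w*R^2 + (a11 + a22*w)*R - 1 = 0
    roots = np.roots([(a12 * a21 - a11 * a22) * w, a11 + a22 * w, -1.0])
    R = min(x.real for x in roots if abs(x.imag) < 1e-12 and x.real > 1.0)
    return R - 1.0, (1.0 / R - a22 * w) / a12   # smaller root gives positive prices

# Old technique (department I: a11 = 0.8, a21 = 0.1; department II assumed).
r_old, p1_old = equilibrium(0.80, 0.10, 0.40, 0.10)
print(f"old general rate of profit   : {r_old:.2%}, p1 = {p1_old:.2f}")   # 9.61%, 1.78

# The pioneering firm: new coefficients, but still the old prices and wage.
unit_cost = 0.85 * p1_old + 0.05 * W
print(f"pioneering firm's profit rate: {p1_old / unit_cost - 1:.2%}")     # 10.36%

# New technique generalized throughout department I: new equilibrium.
r_new, _ = equilibrium(0.85, 0.05, 0.40, 0.10)
print(f"new general rate of profit   : {r_new:.2%}")                      # 10.30%
```

Under these assumptions the script reproduces the sequence described above: the innovating firm's rate rises to about 10.36% at the old prices, and once the technique is general the new equilibrium rate settles at about 10.30%, still above the original 9.61%.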
Nobuo Okishio proved this generally, which can be interpreted as a refutation of Marx's law of the tendency of the rate of profit to fall. This proof has also been confirmed if the model is extended to include not only circulating capital but also fixed capital. Mechanisation, defined as increased inputs of machinery per unit of output combined with the same or reduced amount of labour-input, necessarily lowers the maximum rate of profit. [ 2 ]
Some Marxists simply dropped the law of the tendency of the rate of profit to fall, claiming that there are enough other reasons to criticise capitalism, that the tendency for crises can be established without the law, so that it is not an essential feature of Marx's economic theory. [ citation needed ] Others would say that the law helps to explain the recurrent cycle of crises, but cannot be used as a tool to explain the long term developments of the capitalist economy.
Others argued that Marx's law holds if one assumes a constant wage share instead of a constant real wage rate . Then, the prisoner's dilemma works like this: the first firm to introduce technical progress by increasing its outlay for constant capital achieves an extra profit. But as soon as this new technique has spread through the branch and all firms have increased their outlays for constant capital also, workers adjust wages in proportion to the higher productivity of labour. The outlays for constant capital having increased, and wages having now been increased as well, the rate of profit is lower for all firms. [ citation needed ]
However, Marx did not know the law of a constant wage share. Mathematically, the rate of profit could always be stabilised by decreasing the wage share. In our example, for instance, the rise of the rate of profit goes hand in hand with a decrease of the wage share from $58.6\%$ to $41.9\%$ (see computations below). However, a reduction in the wage share is not possible in neoclassical models due to the assumption that wages equal the marginal product of labour.
A third response is to reject the whole framework of the Sraffa-models, especially the comparative static method. [ 3 ] In a capitalist economy entrepreneurs do not wait until the economy has reached a new equilibrium path but the introduction of new production techniques is an ongoing process. Marx's law could be valid if an ever-larger portion of production is invested per working place instead of in new additional working places. Such an ongoing process cannot be described by the comparative static method of the Sraffa models. [ citation needed ]
According to Alfred Müller [ 4 ] the Okishio theorem could be true, if there was a coordination amongst capitalists for the whole economy, a centrally planned capitalist economy, which is a contradiction in itself. In a capitalist economy, in which means of production are private property, economy-wide planning is not possible. The individual capitalists follow their individual interests and do not cooperate to achieve a general high rate of growth or rate of profit.
Up to now it was sufficient to describe only monetary variables. In order to expand the analysis to compute, for instance, the value of constant capital $c$, variable capital $v$ and surplus value (or profit) $s$ for the economy as a whole, or to compute the ratios between these magnitudes like the rate of surplus value $s/v$ or the value composition of capital , it is necessary to know the relative size of one department with respect to the other. If both departments I (investment goods) and II (consumption goods) are to grow continuously in equilibrium, there must be a certain proportion of size between these two departments. This proportion can be found by modelling continuous growth on the physical (or material) level in opposition to the monetary level.
In the equations above, a general rate of profit, equal for all branches, was computed given the coefficients of production and the real wage rate,
whereby a price had to be arbitrarily determined as numéraire. In this case the price $p_2$ for the consumption good $x_2$ was set equal to 1 (numéraire) and the price for the investment good $x_1$ was then computed. Thus, in money terms, the conditions for steady growth were established.
To establish this steady growth also in terms of the material level, the following must hold:
Thus, an additional magnitude K must be determined, which describes the relative size of the two branches I and II whereby I has a weight of 1 and department II has the weight of K .
If it is assumed that total profits are used for investment in order to produce more in the next period of production on the given technical level, then the rate of profit r is equal to the rate of growth g .
In the first numerical example with rate of profit $r = 9.61\%$ we have:
The weight of department II is $K = 0.2808$.
For the second numerical example with rate of profit $r = 10.30\%$ we get:
Now, the weight of department II is $K = 0.14154$. The rates of growth $g$ are equal to the rates of profit $r$, respectively.
For the two numerical examples, respectively, in the first equation on the left-hand side is the input of $x_1$ and in the second equation on the left-hand side is the amount of input of $x_2$. On the right-hand side of the first equations of the two numerical examples, respectively, is the output of one unit of $x_1$, and in the second equation of each example is the output of $K$ units of $x_2$.
The input of $x_1$ multiplied by the price $p_1$ gives the monetary value of constant capital $c$. Multiplication of the input $x_2$ by the set price $p_2 = 1$ gives the monetary value of variable capital $v$. One unit of output $x_1$ and $K$ units of output $x_2$ multiplied by their prices $p_1$ and $p_2$, respectively, gives the total sales of the economy $c + v + s$.
Subtracting from total sales the value of constant capital plus variable capital $(c + v)$ gives profits $s$.
Now the value composition of capital $c/v$, the rate of surplus value $s/v$, and the " wage share " $v/(s + v)$ can be computed.
With the first example the wage share is $58.6\%$ and with the second example $41.9\%$. The rates of surplus value are, respectively, 0.706 and 1.389. The value composition of capital $c/v$ is 6.34 in the first example and 12.49 in the second. According to the formula
$r = \frac{s}{c + v} = \frac{s/v}{c/v + 1},$
for the two numerical examples rates of profit can be computed, giving $9.61\%$ and $10.30\%$, respectively. These are the same rates of profit as were computed directly in monetary terms.
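These value magnitudes and ratios can be checked with the same kind of sketch used above for the profit rates (again, the department II coefficients a12 = 0.4 and a22 = 0.1 are an assumption consistent with the quoted results, and the function below simply restates the model in code):

```python
import numpy as np

W = 2.0   # real wage in consumption goods per hour of labour

def value_magnitudes(a11, a21, a12, a22, w=W):
    # Equilibrium R = 1 + r from (a12*a21 - a11*a22)*w*R^2 + (a11 + a22*w)*R - 1 = 0.
    roots = np.roots([(a12 * a21 - a11 * a22) * w, a11 + a22 * w, -1.0])
    R = min(x.real for x in roots if abs(x.imag) < 1e-12 and x.real > 1.0)
    p1 = (1.0 / R - a22 * w) / a12        # price of the investment good (p2 = 1)
    K = (1.0 / R - a11) / a12             # weight of department II: (1 + g)(a11 + a12*K) = 1
    c = p1 * (a11 + a12 * K)              # constant capital
    v = w * (a21 + a22 * K)               # variable capital (p2 = 1)
    s = (p1 + K) - c - v                  # surplus value = total sales - (c + v)
    return {"r": R - 1, "K": K, "c/v": c / v, "s/v": s / v,
            "wage share": v / (s + v), "s/(c+v)": s / (c + v)}

for name, coeffs in (("old technique", (0.80, 0.10, 0.40, 0.10)),
                     ("new technique", (0.85, 0.05, 0.40, 0.10))):
    print(name, {k: round(val, 4) for k, val in value_magnitudes(*coeffs).items()})
```

With these assumptions the printout matches the figures above: K ≈ 0.2808 and 0.1415, wage shares ≈ 58.6% and 41.9%, rates of surplus value ≈ 0.706 and 1.389, value compositions ≈ 6.34 and 12.5, and s/(c + v) reproduces the monetary profit rates 9.61% and 10.30%.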
The problem with these examples is that they are based on comparative statics . The comparison is between different economies each on an equilibrium growth path. Models of dis-equilibrium lead to other results. If capitalists raise the technical composition of capital because thereby the rate of profit is raised, this might lead to an ongoing process in which the economy has not enough time to reach a new equilibrium growth path. There is a continuing process of increasing the technical composition of capital to the detriment of job creation resulting at least on the labour market in stagnation. The law of the tendency of the rate of profit to fall nowadays usually is interpreted in terms of disequilibrium analysis, not the least in reaction to the Okishio critique.
Between 1999 and 2004, David Laibman , a Marxist economist, published at least nine pieces dealing with the Temporal single-system interpretation (TSSI) of Marx's value theory. [ 5 ] His "The Okishio Theorem and Its Critics" was the first published response to the temporalist critique of Okishio's theorem. The theorem was widely thought to have disproved Karl Marx 's law of the tendential fall in the rate of profit , but proponents of the TSSI claim that the Okishio theorem is false and that their work refutes it. Laibman argued that the theorem is true and that TSSI research does not refute it.
In his lead paper in a symposium carried in Research in Political Economy in 1999, [ 6 ] Laibman's key argument was that the falling rate of profit exhibited in Kliman (1996) [ 7 ] depended crucially on the paper's assumption that there is fixed capital which lasts forever. Laibman claimed that if there is any depreciation or premature scrapping of old, less productive, fixed capital: (1) productivity will increase, which will cause the temporally determined value rate of profit to rise; (2) this value rate of profit will therefore "converge toward" Okishio's material rate of profit; and thus (3) this value rate "is governed by" the material rate of profit.
These and other arguments were answered in Alan Freeman and Andrew Kliman 's (2000) lead paper in a second symposium, [ 8 ] published the following year in the same journal. In his response, Laibman chose not to defend claims (1) through (3). He instead put forward a "Temporal-Value Profit-Rate Tracking Theorem" that he described as "propos[ing] that [the temporally determined value rate of profit] must eventually follow the trend of [Okishio's material rate of profit]" [ 9 ] The "Tracking Theorem" states, in part: "If the material rate [of profit] rises to an asymptote, the value rate either falls to an asymptote, or first falls and then rises to an asymptote permanently below the material rate" [ 10 ] Kliman argues that this statement "contradicts claims (1) through (3) as well as Laibman's characterization of the 'Tracking Theorem.' If the physical [i.e. material] rate of profit rises forever, while the value rate of profit falls forever, the value rate is certainly not following the trend of the physical [i.e. material] rate, not even eventually." [ 11 ]
In the same paper, Laibman claimed that Okishio's theorem was true, even though the path of the temporally determined value rate of profit can diverge forever from the path of Okishio's material rate of profit. He wrote, " If a viable technical change is made, and the real wage rate is constant, the new MATERIAL rate of profit must be higher than the old one. That is all that Okishio, or Roemer, or Foley, or I, or anyone else has ever claimed! " [ 12 ] In other words, proponents of the Okishio theorem have always been talking about how the rate of profit would behave only in the case in which input and output prices happened to be equal. Kliman and Freeman suggested that this statement of Laibman's was simply "an effort to absolve the physicalist tradition of error." [ 13 ] Okishio's theorem, they argued, has always been understood as a disproof of Marx's law of the tendential fall in the rate of profit, and Marx's law does not pertain to an imaginary special case in which input and output prices happen for some reason to be equal. | https://en.wikipedia.org/wiki/Okishio's_theorem |
The Okorokov effect ( Russian : эффект Окорокова ) or resonant coherent excitation , occurs when heavy ions move in crystals under channeling conditions. V. Okorokov predicted this effect in 1965 [ 1 ] and it was first observed by Sheldon Datz in 1978.
This physical chemistry-related article is a stub. | https://en.wikipedia.org/wiki/Okorokov_effect |
Oksana Chubykalo-Fesenko is a Spanish-Ukrainian physicist and materials scientist whose research interests include spintronics and the dynamics and multiscale modeling of magnetic materials and magnetic nanoparticles. She works in Spain as a senior scientist at the Materials Science Institute of Madrid, a research institute of the Spanish National Research Council . [ 1 ]
Chubykalo-Fesenko was a student at the National University of Kharkiv (then called Kharkiv State University), where she received a master's degree in 1986 and a Ph.D. in 1990 with the dissertation Soliton scattering by impurities in one-dimensional nonlinear systems .
She worked as a researcher at the Clarendon Laboratory of the University of Oxford , at the Complutense University of Madrid , at the University of Milan , at the University of the Basque Country , and at the IBM Almaden Research Center in California, before taking her present position at the Materials Science Institute of Madrid. [ 1 ] She joined the institute in 2001 as a Ramon y Cajal Fellow, obtained a tenured scientist position in 2002, and has been senior scientist since 2004. [ 2 ]
Chubykalo-Fesenko was named to the 2025 class of IEEE Fellows , "for development of multiscale methods for modeling of thermal magnetization dynamics." [ 3 ] | https://en.wikipedia.org/wiki/Oksana_Chubykalo-Fesenko |
Oktay Sinanoğlu (February 25, 1935 – April 19, 2015) was a Turkish physical chemist and molecular biophysicist who made contributions to the theory of electron correlation in molecules, the statistical mechanics of clathrate hydrates , quantum chemistry , and the theory of solvation .
Sinanoğlu was born in Bari, Italy on February 25, 1935. His parents were Rüveyde (Karacabey) Sinanoğlu and Nüzhet Haşim. His father Nüzhet Haşim was a writer and a consular official in the Bari consulate of Turkey . [ 1 ] Following his father's recall to Turkey in July 1938, the family returned to Turkey before the start of World War II . [ 2 ] [ 3 ] He had a sister, Esin Afşar (1936-2011), who became a well-known singer and actress. [ 4 ]
Sinanoğlu graduated from TED Ankara Koleji in 1951. He went to the United States in 1953 and studied at the University of California, Berkeley, graduating with a BSc degree in 1956. The following year, he completed an MSc degree at MIT (1957) and was awarded a Sloan Research Fellowship . He completed a predoctoral fellowship (1958-1959) and earned his PhD in physical chemistry (1959-1960) at the University of California, Berkeley, advised by Kenneth Pitzer . [ 3 ] [ 4 ] [ 5 ]
In 1960, Sinanoğlu joined the chemistry department at Yale University . He was appointed full professor of chemistry in 1963. [ 6 ] At age 28, he became the youngest full professor in Yale’s 20th-century history. It has been claimed that he was also the third-youngest full professor in the 300-plus year history of Yale University. [ 3 ]
During his tenure at Yale he wrote a number of papers in various subfields of theoretical chemistry, the most widely cited of which was his 1961 paper on electron correlation. [ 7 ] This work anticipated the widely used coupled cluster method for describing electrons in molecules with greater accuracy than is possible via the Hartree-Fock method . He also published important papers on the statistical mechanics of clathrate hydrates, [ 8 ] [ 9 ] solvation, [ 10 ] [ 11 ] and surface tension. [ 12 ] His final projects were focused on the development of his valency interaction formula (VIF) theory, a method for predicting energy level patterns for compounds from the manipulation of graphs. [ 13 ] He intended for chemists to be able to use the VIF method to predict the ways in which complex chemical reactions would proceed, using only a chalkboard or pencil and paper; however, he apparently never presented it at conferences and it was not widely adopted by other chemists. [ 14 ] He continued to develop the VIF method, which he sometimes referred to as "Sinanoğlu Made Simple," and other problems related to graph theory and quantum mechanics for the rest of his career. [ 15 ] [ 16 ] [ 17 ] [ 18 ] After 37 years on the Yale faculty, Sinanoğlu retired in 1997. [ 19 ]
During his time at Yale, Sinanoğlu served as a consultant to Turkish universities, the Scientific and Technological Research Council of Turkey (TÜBİTAK), and the Japan Society for the Promotion of Science (JSPS). [ 3 ] In 1962, the Board of Trustees of Middle East Technical University in Ankara granted him the title of "consulting professor." [ 4 ]
After his retirement from Yale, Sinanoğlu was appointed to the chemistry department of Yıldız Technical University in Istanbul, serving until 2002. [ 20 ]
Sinanoğlu was the author or co-author of over 200 scientific articles and books. He also authored books on contemporary affairs in Turkey and Turkish language , such as "Target Turkey" and "Bye Bye Turkish" (2005). [ 3 ] [ 20 ] In "Bye Bye Turkish", he propounded the idea of cognation between Turkish and Japanese based on the alleged similarity of a number of words.
A 2001 best-seller book about his life and works, edited by Turkish writer Emine Çaykara , referred to him as "The Turkish Einstein, Oktay Sinanoğlu" ( Turkish : Türk Aynştaynı Oktay Sinanoğlu Kitabı ). [ 3 ] [ 21 ]
He received the "TÜBİTAK Science Award" for chemistry in 1966, [ 22 ] the Alexander von Humboldt Research Award in chemistry in 1973, and the "International Outstanding Scientist Award of Japan" in 1975. It has been reported in Turkish media that Sinanoğlu was a two-time nominee for the Nobel Prize in Chemistry , [ 20 ] but this claim is not supported by actual data from the Nobel Foundation. [ 23 ]
On December 21, 1963, Oktay Sinanoğlu married Paula Armbruster, [ 24 ] who was doing graduate work at Yale University. The wedding ceremony took place in the Branford College Chapel of Yale. [ 5 ] They had three children. After their later divorce, he married Dilek Sinanoğlu and from this marriage he became the father of twins. The family resided in the Emerald Lakes neighborhood of Fort Lauderdale , Florida , and in Istanbul , Turkey. [ 3 ]
Dilek Sinanoğlu made public on April 10, 2015, that Oktay Sinanoğlu was hospitalized in Miami , Florida , and was in a coma in the intensive care unit. [ 25 ] He died at age 80 on April 19, 2015. No medical statement was released about the cause of death. [ 26 ] His body was transferred to Turkey, where he was buried in Karacaahmet Cemetery , Üsküdar following the religious funeral service at Şakirin Mosque . [ 27 ] | https://en.wikipedia.org/wiki/Oktay_Sinanoğlu |
In fluid mechanics , the Okubo–Weiss parameter (normally denoted "W") is a measure of the relative importance of deformation and rotation at a given point. It is calculated as the sum of the squares of the normal and shear strain minus the square of the relative vorticity . It is widely used in studies of fluid flows, particularly in identifying and describing oceanic eddies .
For a horizontally non-divergent flow in the ocean, the parameter is given by:

{\displaystyle W=s_{n}^{2}+s_{s}^{2}-\omega ^{2}}

where:

{\displaystyle s_{n}={\frac {\partial u}{\partial x}}-{\frac {\partial v}{\partial y}}} is the normal component of strain, {\displaystyle s_{s}={\frac {\partial v}{\partial x}}+{\frac {\partial u}{\partial y}}} is the shear component of strain, and {\displaystyle \omega ={\frac {\partial v}{\partial x}}-{\frac {\partial u}{\partial y}}} is the relative vorticity, with u and v the horizontal velocity components.
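As a rough illustration of how W can be evaluated on gridded velocity data, the sketch below uses numpy finite differences; the example vortex and grid spacing are made-up inputs, not taken from any cited study.

```python
# Hedged sketch: Okubo-Weiss parameter W on a gridded 2-D velocity field (u, v).
import numpy as np

def okubo_weiss(u, v, dx, dy):
    # np.gradient returns derivatives along axis 0 (y) and axis 1 (x), in that order.
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    s_n = dudx - dvdy          # normal strain
    s_s = dvdx + dudy          # shear strain
    omega = dvdx - dudy        # relative vorticity
    return s_n**2 + s_s**2 - omega**2

# Example: a solid-body vortex, whose core should be rotation-dominated (W < 0).
x = np.linspace(-1.0, 1.0, 101)
y = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, y)
u, v = -Y, X                   # counter-clockwise rotation
W = okubo_weiss(u, v, x[1] - x[0], y[1] - y[0])
print(W[50, 50] < 0)           # True: negative W marks the eddy core
```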
This fluid dynamics–related article is a stub. | https://en.wikipedia.org/wiki/Okubo–Weiss_parameter |
In economics , Okun's law is an empirically observed relationship between unemployment and losses in a country's production. It is named after Arthur Melvin Okun , who first proposed the relationship in 1962. [ 1 ] The "gap version" states that for every 1% increase in the unemployment rate , a country's GDP will be roughly an additional 2% lower than its potential GDP . The "difference version" [ 2 ] describes the relationship between quarterly changes in unemployment and quarterly changes in real GDP . The stability and usefulness of the law has been disputed. [ 3 ]
Okun's law is an empirical relationship. In Okun's original statement of his law, a 3% increase in output corresponds to a 1% decline in the rate of cyclical unemployment; a 0.5% increase in labor force participation; a 0.5% increase in hours worked per employee; and a 1% increase in output per hour worked ( labor productivity ). [ 4 ]
Okun's law states that a one-point increase in the cyclical unemployment rate is associated with two percentage points of negative growth in real GDP. The relationship varies depending on the country and time period under consideration.
The relationship has been tested by regressing GDP or GNP growth on change in the unemployment rate. Martin Prachowny estimated about a 3% decrease in output for every 1% increase in the unemployment rate. [ 5 ] However, he argued that the majority of this change in output is actually due to changes in factors other than unemployment, such as capacity utilization and hours worked. Holding these other factors constant reduces the association between unemployment and GDP to around 0.7% for every 1% change in the unemployment rate. The magnitude of the decrease seems to be declining over time in the United States. According to Andrew Abel and Ben Bernanke , estimates based on data from more recent years give about a 2% decrease in output for every 1% increase in unemployment. [ 6 ]
There are several reasons why GDP may increase or decrease more rapidly than unemployment decreases or increases:
As unemployment increases,
One implication of Okun's law is that an increase in labor productivity or an increase in the size of the labor force can mean that real net output grows without net unemployment rates falling (the phenomenon of " jobless growth ").
Okun's Law is sometimes confused with Lucas wedge .
The gap version of Okun's law may be written (Abel & Bernanke 2005) as:

{\displaystyle {\frac {{\overline {Y}}-Y}{\overline {Y}}}=c\,(u-{\overline {u}})}

where {\displaystyle {\overline {Y}}} is potential (full-employment) output, Y is actual output, {\displaystyle {\overline {u}}} is the natural rate of unemployment, u is the actual unemployment rate, and c is the factor relating changes in unemployment to changes in output.
In the United States since 1955 or so, the value of c has typically been around 2 or 3, as explained above.
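A minimal numerical illustration of the gap form, assuming c = 2 (within the typical range just mentioned) and made-up unemployment rates:

```python
# Illustrative use of the gap form; the unemployment figures are invented inputs.
c = 2.0
u, u_natural = 0.08, 0.05            # actual and natural unemployment rates
output_gap = c * (u - u_natural)     # (Ybar - Y) / Ybar
print(f"Output is about {output_gap:.0%} below potential GDP")  # about 6%
```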
The gap version of Okun's law, as shown above, is difficult to use in practice because {\displaystyle {\overline {Y}}} and {\displaystyle {\overline {u}}} can only be estimated, not measured. A more commonly used form of Okun's law, known as the difference or growth rate form of Okun's law, relates changes in output to changes in unemployment:

{\displaystyle {\frac {\Delta Y}{Y}}=k-c\,\Delta u}

where Y, u and c are as above, {\displaystyle \Delta Y} is the change in actual output from one year to the next, {\displaystyle \Delta u} is the change in the actual unemployment rate from one year to the next, and k is the average annual growth rate of full-employment output.
At the present time in the United States, k is about 3% and c is about 2, so the equation may be written

{\displaystyle {\frac {\Delta Y}{Y}}=0.03-2\,\Delta u}
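The growth-rate form lends itself to a one-line calculation; in the sketch below the change in unemployment is an invented input and the parameter values are the US figures quoted above.

```python
# Growth-rate (difference) form of Okun's law with k ~ 3% and c ~ 2.
k, c = 0.03, 2.0
delta_u = 0.01                       # unemployment rises by one percentage point
gdp_growth = k - c * delta_u         # change in Y over Y = k - c * delta_u
print(f"Predicted real GDP growth: {gdp_growth:.0%}")  # about 1%
```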
The graph at the top of this article illustrates the growth rate form of Okun's law, measured quarterly rather than annually.
We start with the first form of Okun's law:

{\displaystyle {\frac {{\overline {Y}}-Y}{\overline {Y}}}=c\,(u-{\overline {u}})\qquad \Longleftrightarrow \qquad {\frac {Y}{\overline {Y}}}=1-c\,(u-{\overline {u}})}

Taking annual differences on both sides, we obtain

{\displaystyle {\frac {Y+\Delta Y}{{\overline {Y}}+\Delta {\overline {Y}}}}-{\frac {Y}{\overline {Y}}}=-c\,(\Delta u-\Delta {\overline {u}})}

Putting both numerators over a common denominator, we obtain

{\displaystyle {\frac {{\overline {Y}}(Y+\Delta Y)-Y({\overline {Y}}+\Delta {\overline {Y}})}{{\overline {Y}}({\overline {Y}}+\Delta {\overline {Y}})}}=-c\,(\Delta u-\Delta {\overline {u}})}

Multiplying the left hand side by {\displaystyle {\frac {{\overline {Y}}+\Delta {\overline {Y}}}{Y}}} , which is approximately equal to 1, we obtain

{\displaystyle {\frac {\Delta Y}{Y}}-{\frac {\Delta {\overline {Y}}}{\overline {Y}}}\approx -c\,(\Delta u-\Delta {\overline {u}})}

We assume that {\displaystyle \Delta {\overline {u}}} , the change in the natural rate of unemployment, is approximately equal to 0. We also assume that {\displaystyle {\frac {\Delta {\overline {Y}}}{\overline {Y}}}} , the growth rate of full-employment output, is approximately equal to its average value, k. So we finally obtain

{\displaystyle {\frac {\Delta Y}{Y}}\approx k-c\,\Delta u}
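The algebraic step in which the left-hand side reduces to the difference of the two growth rates can be checked symbolically; the sympy sketch below uses symbol names of our own choosing.

```python
# Symbolic check that multiplying the differenced left-hand side by (Ybar + dYbar)/Y
# reduces it exactly to dY/Y - dYbar/Ybar.
import sympy as sp

Y, Yb, dY, dYb = sp.symbols('Y Ybar dY dYbar', positive=True)

lhs = (Y + dY) / (Yb + dYb) - Y / Yb        # annual difference of Y/Ybar
scaled = sp.simplify(lhs * (Yb + dYb) / Y)  # the factor treated as roughly 1 in the text
assert sp.simplify(scaled - (dY / Y - dYb / Yb)) == 0
print(scaled)
# With dYbar/Ybar approximated by k and the change in the natural rate by 0,
# the relation dY/Y - k = -c*du rearranges to the growth-rate form dY/Y = k - c*du.
```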
Through comparisons between actual data and theoretical forecasts, Okun's law proves to be a useful tool for predicting trends between unemployment and real GDP. However, numbers predicted with Okun's law often turn out to be inaccurate when compared with real-world figures, owing to variance in Okun's coefficient. Many, including the Reserve Bank of Australia, conclude that the information provided by Okun's law is acceptable only to a certain degree. [ 7 ] Also, some findings [ which? ] have concluded that Okun's law tends to be more accurate for short-run predictions than for long-run predictions. Forecasters [ who? ] attribute this to unforeseen market conditions that may affect Okun's coefficient.
As such, Okun's law is generally accepted by forecasters as a tool for short-run trend analysis between unemployment and real GDP, rather than for long-run analysis or precise numerical calculation.
The San Francisco Federal Reserve Bank determined, using empirical data from past recessions in the 1970s, 1990s, and 2000s, that Okun's law was a useful theory. All of these recessions showed a common pattern: a counterclockwise loop [ clarification needed ] for both real-time and revised data. The recoveries of the 1990s and 2000s did have smaller and tighter loops. [ 8 ] | https://en.wikipedia.org/wiki/Okun's_law |
Olavius algarvensis is a species of gutless oligochaete worm in the family Tubificidae which depends on symbiotic bacteria for its nutrition.
Olavius algarvensis lives in coastal sediments in the Mediterranean. It was first described from the Algarve Coast of Portugal, [ 2 ] but has also been found elsewhere, e.g. off the Italian island Elba , where it co-occurs with another species, O. ilvae . [ 3 ] [ 4 ] It was the first species of Olavius described from the East Atlantic coast; previously the genus was only known from the Caribbean. [ 2 ]
Olavius algarvensis is 12–25 mm long, about 0.25 mm wide, and has between 100 and 150 segments. Like all other species in the genus Olavius , this species has no digestive tract. Instead, the body cavity contains the ventral nerve cord (inside a muscular sheath) and two blood vessels which are surrounded by a "fluffy" layer of chloragocytic cells. They are distinguished from other species of Olavius by having round, flap-like external male papillae that cover the two ventral invaginations of the body wall which contain the male pores (in segment XI), and having small atria that are perpendicular rather than parallel to the body axis. [ 2 ] The symbiotic bacteria are located between the cuticle and epidermis, and also in vacuoles within epidermal cells, which often show signs of lysis. The bacteria are absent from the anterior part of the worm and the pygidium , but are found from segment VII or VIII onwards. [ 3 ] Cholesterol makes up part of the sterols in their cell membranes, but sitosterol , generally a plant sterol, predominates. [ 5 ]
Oligochaete worms without any mouth , gut, or nephridial excretory system were first discovered in the 1970s-1980s near Bermuda . [ 6 ] They were later found to contain symbiotic chemosynthetic bacteria which serve as their primary food source. O. algarvensis is the species where this symbiosis has been studied in the most detail.
There are five different species of bacterial symbionts in O. algarvensis , which are located under the cuticle of the worm: two sulfide-oxidizing Gammaproteobacteria , two sulfate-reducing Deltaproteobacteria , and one spirochaete . The sulfide-oxidizers gain energy from oxidation of hydrogen sulfide , and fix carbon dioxide via the Calvin cycle . The sulfate-reducers are anaerobes that can reduce sulfate into sulfide, which is consumed by the sulfide-oxidizers. The metabolism of the spirochaete is unknown. [ 7 ] Other species of Olavius are also known to have similar symbioses with both sulfide-oxidizing and sulfate-reducing bacteria in the same worm. [ 4 ] [ 8 ]
The primary sulfur-oxidizing symbiont, known as "Gamma1", is closely related to the primary symbionts of other species of gutless oligochaetes in the Phallodrilinae , and also to the symbionts of nematodes in the subfamily Stilbonematinae . [ 9 ]
In addition to hydrogen sulfide, the symbiotic bacteria also allow the worm to use hydrogen and carbon monoxide as energy sources, and to metabolise organic compounds like malate and acetate . These abilities were first discovered by sequencing the genomes and proteomes of the bacteria. [ 10 ] [ 11 ]
The symbiotic bacteria which live with O. algarvensis have other unique properties. One of the Deltaproteobacteria symbionts, called "Delta-1", is able to produce numerous seleno- and pyrroproteins, which contain the amino acids selenocysteine and pyrrolysine that are sometimes called the 21st and 22nd proteinogenic amino acids . This bacterium has the largest known proteome that has seleno- and pyrroproteins. [ 12 ] The symbionts also express the most transposases of any known bacteria. [ 13 ] | https://en.wikipedia.org/wiki/Olavius_algarvensis |
Olbers's paradox , also known as the dark night paradox or Olbers and Cheseaux's paradox , is an argument in astrophysics and physical cosmology that says the darkness of the night sky conflicts with the assumption of an infinite and eternal static universe . In the hypothetical case that the universe is static, homogeneous at a large scale, and populated by an infinite number of stars , any line of sight from Earth must end at the surface of a star and hence the night sky should be completely illuminated and very bright. This contradicts the observed darkness and non-uniformity of the night sky. [ 1 ]
The darkness of the night sky is one piece of evidence for a dynamic universe, such as the Big Bang model . That model explains the observed darkness by invoking expansion of the universe , which increases the wavelength of visible light originating from the Big Bang to microwave scale via a process known as redshift . The resulting microwave radiation background has wavelengths much longer (millimeters instead of nanometers), which appear dark to the naked eye. Although he was not the first to describe it, the paradox is popularly named after the German astronomer Heinrich Wilhelm Olbers (1758–1840).
Edward Robert Harrison 's Darkness at Night: A Riddle of the Universe [ 2 ] (1987) gives an account of the dark night sky paradox, seen as a problem in the history of science. According to Harrison, the first to conceive of anything like the paradox was Thomas Digges , who was also the first to expound the Copernican system in English and also postulated an infinite universe with infinitely many stars. [ 3 ] Kepler also posed the problem in 1610, and the paradox took its mature form in the 18th-century work of Halley and Cheseaux . [ 4 ] The paradox is commonly attributed to the German amateur astronomer Heinrich Wilhelm Olbers , who described it in 1823, but Harrison points out that Olbers was far from the first to pose the problem, nor was his thinking about it particularly valuable. Harrison argues that the first to set out a satisfactory resolution of the paradox was Lord Kelvin , in a little known 1901 paper, [ 2 ] : 227 and that Edgar Allan Poe 's essay Eureka (1848) curiously anticipated some qualitative aspects of Kelvin's argument: [ 1 ]
Were the succession of stars endless, then the background of the sky would present us a uniform luminosity, like that displayed by the Galaxy – since there could be absolutely no point, in all that background, at which would not exist a star. The only mode, therefore, in which, under such a state of affairs, we could comprehend the voids which our telescopes find in innumerable directions, would be by supposing the distance of the invisible background so immense that no ray from it has yet been able to reach us at all. [ 5 ]
The paradox is that a static, infinitely old universe with an infinite number of stars distributed in an infinitely large space would be bright rather than dark. [ 1 ] The paradox comes in two forms: flux within the universe and the brightness along any line of sight. The two forms have different resolutions. [ 6 ] : 354
The flux form can be shown by dividing the universe into a series of concentric shells, 1 light year thick. A certain number of stars will be in the shell, say, 1,000,000,000 to 1,000,000,001 light years away. If the universe is homogeneous at a large scale, then there would be four times as many stars in a second shell between 2,000,000,000 and 2,000,000,001 light years away. However, the second shell is twice as far away, so each star in it would appear one quarter as bright as the stars in the first shell. Thus the total light received from the second shell is the same as the total light received from the first shell. Thus each shell of a given thickness will produce the same net amount of light regardless of how far away it is. That is, the light of each shell adds to the total amount. Thus the more shells, the more light; and with infinitely many shells, there would be an infinitely bright night sky. [ 7 ]
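The shell argument can be made concrete with a short numerical sketch; the star density, luminosity, and shell radii below are arbitrary illustrative values.

```python
# Flux at the observer from concentric shells of equal thickness: the 1/r^2 dimming
# of each star is exactly offset by the r^2 growth in the number of stars per shell.
import math

n_star = 1.0          # stars per cubic light year (arbitrary units)
L = 1.0               # luminosity per star (arbitrary units)

def shell_flux(r, thickness=1.0):
    """Total flux received from a thin shell of radius r (in light years)."""
    stars_in_shell = n_star * 4 * math.pi * r**2 * thickness
    return stars_in_shell * L / (4 * math.pi * r**2)

for r in [1e3, 1e6, 1e9]:
    print(shell_flux(r))   # the same value for every shell, so infinitely many shells
                           # would give an unbounded total
```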
If intervening gas is added to this infinite model, the light from distant stars will be absorbed. However, that absorption will heat the gas, and over time the gas itself will begin to radiate. With this added feature, the sky would not be infinitely bright, but every point in the sky would still be like the surface of a star. [ 8 ]
The flux form is resolved by the finite age of the universe: the number of concentric shells in the model above is finite, limiting the total energy arriving on Earth. [ 6 ] : 355
Another way to describe the flux version is to suppose that the universe were not expanding and always had the same stellar density; then the temperature of the universe would continually increase as the stars put out more radiation. After something like 10^23 years, the universe would reach the average surface temperature of a star. However, the universe is only 13.8 billion (1.38 × 10^10) years old, eliminating the paradox. [ 4 ] : 486
The line-of-sight version of the paradox starts by imagining a line in any direction in an infinite Euclidean universe. In such a universe, the line would terminate on a star, and thus all of the night sky should be filled with light. This reasoning is correct for a static Euclidean universe, but the result is different in our expanding universe governed by general relativity. The termination point is on the surface of last scattering , where light from the Big Bang first emerged. This light is dramatically redshifted, from an energy similar to that of star surfaces down to 2.73 K, and is therefore invisible to human observers on Earth. [ 6 ] : 355
Recent observations suggesting that the estimated number of galaxies based on direct observations is too low by a factor of ten do not materially alter the resolution; rather, they suggest that the full explanation involves a combination of the finite age of the universe, redshifts, and absorption of UV light by hydrogen followed by re-emission at near-IR wavelengths. [ 9 ] | https://en.wikipedia.org/wiki/Olbers's_paradox |
The Old Nassau reaction or Halloween reaction is a chemical clock reaction in which a clear solution turns orange and then black. This reaction was discovered by two undergraduate students at Princeton University researching the inhibition of the iodine clock reaction (or Landolt reaction) by Hg²⁺ , resulting in the formation of orange HgI₂ . Orange and black are the school colors of Princeton University, and " Old Nassau " is a nickname for Princeton, named for its historic administration building, Nassau Hall.
The reactions involved are as follows: | https://en.wikipedia.org/wiki/Old_Nassau_reaction |
Old age is the range of ages for people nearing and surpassing life expectancy . People who are of old age are also referred to as: old people , elderly , elders , senior citizens , seniors or older adults . [ 1 ] Old age is not a definite biological stage: the chronological age denoted as "old age" varies culturally and historically. [ 2 ] Some disciplines and domains focus on the aging and the aged, such as the organic processes of aging ( senescence ), [ 3 ] medical studies of the aging process ( gerontology ), [ 4 ] diseases that afflict older adults ( geriatrics ), [ 5 ] technology to support the aging society ( gerontechnology ), and leisure and sport activities adapted to older people (such as senior sport ).
Older people often have limited regenerative abilities and are more susceptible to illness and injury than younger adults. They face social problems related to retirement , loneliness , and ageism . [ 6 ] [ 7 ]
In 2011, the United Nations proposed a human-rights convention to protect old people. [ 8 ]
The history of old age in the History of Europe has been characterized by several prominent features across the last 3000 years: [ 9 ] [ 10 ] [ 11 ] [ 12 ]
Current definitions of old age include official definitions, sub-group definitions, and four dimensions as follows.
Most developed Western countries set the retirement age around the age of 65; this is also generally considered to mark the transition from middle to old age. Reaching this age is commonly a requirement to become eligible for senior social programs. [ 13 ]
Old age cannot be universally defined because it is context-sensitive. The United Nations, for example, considers old age to be 60 years or older. [ 14 ] In contrast, a 2001 joint report by the U.S. National Institute on Aging and the World Health Organization [WHO] Regional Office for Africa set the beginning of old age in Sub-Saharan Africa at 50. [ 15 ] This lower threshold stems primarily from a different way of thinking about old age in developing nations. Unlike in the developed world, where chronological age determines retirement, societies in developing countries determine old age according to a person's ability to make active contributions to society. [ 16 ] This number is also significantly affected by lower life expectancy throughout the developing world. Dating back to the Middle Ages and prior, what certain scholars thought of as old age varied depending on the context, but the state of being elderly was often thought as being 60 years of age or older in many respects. [ 17 ]
Gerontologists have recognized that people experience very different conditions as they approach old age. In developed countries, many people in their later 60s and 70s (frequently called "early old age") are still fit, active, and able to care for themselves. [ 18 ] : 607 However, after age 80, they generally become increasingly frail , a condition marked by serious mental and physical debilitation. [ 19 ]
Therefore, rather than lumping together all people who have been defined as old, some gerontologists have recognized the diversity of old age by defining sub-groups. One study distinguishes the young-old (60 to 69), the middle-old (70 to 79), and the very old (80+). [ 20 ] Another study's sub-grouping is young-old (65 to 74), middle-old (75 to 84), and oldest-old (85+). [ 21 ] A third sub-grouping is young-old (65 to 74), old (74 to 84), and old-old (85+). [ 22 ] Describing sub-groups in the 65+ population enables a more accurate portrayal of significant life changes. [ 23 ] : 4
Two British scholars, Paul Higgs and Chris Gilleard, have added a "fourth age" sub-group. In British English, the "third age" is "the period in life of active retirement, following middle age". [ 24 ] Higgs and Gilleard describe the fourth age as "an arena of inactive, unhealthy, unproductive, and ultimately unsuccessful ageing". [ 25 ]
Key Concepts in Social Gerontology lists four dimensions: chronological, biological, psychological, and social. [ 26 ] : 12–3 Wattis and Curran add a fifth dimension: developmental. [ 27 ] Chronological age may differ considerably from a person's functional age. The distinguishing marks of old age normally occur in all five senses at different times and at different rates for different people. [ 28 ] In addition to chronological age, people can be considered old because of the other dimensions of old age. For example, people may be considered old when they become grandparents or when they begin to do less or different work in retirement. [ 29 ]
Senior citizen is a common euphemism for an old person used in American English , and sometimes in British English . It implies that the person being referred to is retired. [ 30 ] [ 31 ] [ 32 ] [ 33 ] This in turn usually implies that the person is over the retirement age , which varies according to country. Synonyms include old age pensioner or pensioner in British English, and retiree and senior in American English. Some dictionaries describe widespread use of "senior citizen" for people over the age of 65. [ 34 ]
When defined in a legal context, senior citizen is often used for legal or policy-related reasons in determining who is eligible for certain benefits available to the age group.
It is used in general usage instead of traditional terms such as "old person", "old-age pensioner", or "elderly" as a courtesy and to signify continuing relevance of and respect for this population group as " citizens " of society, of senior "rank". [ 35 ]
The term was apparently coined in 1938 during a political campaign. [ 36 ] Famed caricaturist Al Hirschfeld claimed on several occasions that his father Isaac Hirschfeld invented the term "senior citizen". [ 37 ] [ 38 ] [ 39 ] It has come into widespread use in recent decades in legislation, commerce, and common speech. Especially in less formal contexts, it is often abbreviated as "senior(s)", which is also used as an adjective .
The age of 65 has long been considered the benchmark for senior citizenship in numerous countries. This convention originated from Chancellor Otto von Bismarck's introduction of the pension system in Germany during the late 19th century. Bismarck's legislation set the retirement age at 70, with 65 as the age at which individuals could start receiving a pension. This age standard gradually gained acceptance in other nations and has since become deeply entrenched in public consciousness. [ 40 ]
The age which qualifies for senior citizen status varies widely. In governmental contexts, it is usually associated with an age at which pensions or medical benefits for the elderly become available. In commercial contexts, where it may serve as a marketing device to attract customers, the age is often significantly lower. [ 41 ]
In commerce, some businesses offer customers of a certain age a " senior discount ". The age at which these discounts are available varies from 55, 60, 62 or 65 upwards, and other criteria may also apply. Sometimes a special " senior discount card " or other proof of age needs to be produced to show entitlement.
In the United States , the standard retirement age is currently 66 (gradually increasing to 67). [ 42 ] The AARP allows couples in which one spouse has reached the age of 50 to join, regardless of the age of the other spouse.
In Canada , the Old Age Security (OAS) pension is available at 65 (the Conservative government of Stephen Harper had planned to gradually increase the age of eligibility to 67, starting in the years 2023–2029, although the Liberal government of Justin Trudeau is considering leaving it at 65), [ 43 ] and the Canada Pension Plan (CPP) as early as age 60.
The distinguishing characteristics of old age are both physical and mental. [ 44 ] The marks of old age are so unlike the marks of middle age that legal scholar Richard Posner suggests that, as an individual transitions into old age, that person can be thought of as different people "time-sharing" the same identity. [ 45 ] : 86–7
These marks do not occur at the same chronological age for everyone. Also, they occur at different rates and order for different people. [ 28 ] Marks of old age can easily vary between people of the same chronological age. [ 46 ]
A basic mark of old age that affects both body and mind is "slowness of behavior". [ 47 ] The term describes a correlation between advancing age and slowness of reaction and physical and mental task performance. [ 48 ] However, studies from Buffalo University and Northwestern University have shown that the elderly are a happier age group than their younger counterparts. [ 49 ]
Physical marks of old age include the following:
Mental marks of old age include the following:
A study of professional and master tenpin bowlers found that average scores declined less than 10% from age 20 to age 70. [ 97 ] This decline in a sport focusing on skill and technique is considerably smaller than that of events dominated by muscular strength, cardiovascular endurance or agility—which are known to decrease about 10% per decade . [ 97 ]
Many books written by authors in middle adulthood depict a few common perceptions on old age. [ 98 ] One writer notices the change in his parents: They move slowly, they have less strength, they repeat stories, their minds wander, and they fret. [ 99 ] Another writer sees her aged parents and is bewildered: They refuse to follow her advice, they are obsessed with the past, they avoid risk, and they live at a "glacial pace". [ 100 ]
In her The Denial of Aging , Dr. Muriel R. Gillick, a baby boomer , accuses her contemporaries of believing that by proper exercise and diet they can avoid the scourges of old age and proceed from middle age to death. [ 101 ] Studies find that many people in the 65–84 range can postpone morbidity by practicing healthy lifestyles. However, at about age 85, most people experience similar morbidity. [ 102 ] Even with healthy lifestyles, most 85+ people will undergo extended "frailty and disability". [ 93 ]
Early old age can be a pleasant time; children are grown, work is over, and there is time to pursue other interests. [ 18 ] : 603 Many old people are also willing to get involved in community and activist organizations to promote their well-being. In contrast, perceptions of old age by writers 80+ years old tend to be negative. [ 103 ]
Georges Minois [ Wikidata ] writes that the first man known to talk about his old age was an Egyptian scribe who lived 4,500 years ago. The scribe addressed God with a prayer of lament: [ 104 ] : 14
O Sovereign my Lord! Oldness has come; old age has descended. Feebleness has arrived; dotage is here anew. The heart sleeps wearily every day. The eyes are weak, the ears are deaf, the strength is disappearing because of weariness of the heart and the mouth is silent and cannot speak. The heart is forgetful and cannot recall yesterday. The bone suffers old age. Good is become evil. All taste is gone. What old age does to men is evil in every respect. [ 104 ] : 14–5
Minois comments that the scribe's "cry shows that nothing has changed in the drama of decrepitude between the age of the Pharaoh and the atomic age" and "expresses all the anguish of old people in the past and the present". [ 104 ] : 14
Lillian Rubin , active in her 80s as an author, sociologist, and psychotherapist, opens her book 60 on Up: The Truth about Aging in America with "getting old sucks. It always has, it always will." Dr. Rubin contrasts the "real old age" with the "rosy pictures" painted by middle-age writers. [ 105 ]
Writing at the age of 87, Mary C. Morrison describes the "heroism" required by old age: to live through the disintegration of one's own body or that of someone you love. Morrison concludes, "old age is not for the fainthearted". [ 106 ] In the book Life Beyond 85 Years , the 150 interviewees had to cope with physical and mental debilitation and with losses of loved ones. One interviewee described living in old age as "pure hell". [ 107 ] : 7–8, 208
Research has shown that in high-income countries, on average, one in four people over 60 and one in three over 75 feels lonely. [ 108 ]
Johnson and Barer did a pioneering study of Life Beyond 85 Years by interviews over a six-year period. In talking with 85-year-olds and older, they found some popular conceptions about old age to be erroneous. Such erroneous conceptions include (1) people in old age have at least one family member for support, (2) old age well-being requires social activity, and (3) "successful adaptation" to age-related changes demands a continuity of self-concept. In their interviews, Johnson and Barer found that 24% of the 85+ had no face-to-face family relationships; many have outlived their families. Second, contrary to popular notions, the interviews revealed that the reduced activity and socializing of the over-85s does not harm their well-being; they "welcome increased detachment". Third, rather than a continuity of self-concept, as the interviewees faced new situations they changed their "cognitive and emotional processes" and reconstituted their "self–representation". [ 107 ] : 5–6
Based on his survey of old age in history, Georges Minois [ fr ] concludes that "it is clear that always and everywhere youth has been preferred to old age". In Western thought, "old age is an evil, an infirmity and a dreary time of preparation for death". Furthermore, death is often preferred over "decrepitude, because death means deliverance". [ 104 ] : 303
"The problem of the ambiguity of old age has ... been with us since the stage of primitive society ; it was both the source of wisdom and of infirmity, experience and decrepitude, of prestige and suffering." [ 104 ] : 11
In the Classical period of Greek and Roman cultures, old age was denigrated as a time of "decline and decrepitude". [ 109 ] : 6–7 "Beauty and strength" were esteemed and old age was viewed as defiling and ugly. Old age was reckoned as one of the unanswerable "great mysteries" along with evil, pain, and suffering. "Decrepitude, which shrivels heroes, seemed worse than death." [ 104 ] : 43
Historical periods reveal a mixed picture of the "position and status" of old people, but there has never been a "golden age of aging". [ 109 ] : 6 Studies have challenged the popular belief that in the past old people were venerated by society and cared for by their families. [ 110 ] : 1 Veneration for and antagonism toward the aged have coexisted in complex relationships throughout history. [ 111 ] "Old people were respected or despised, honoured or put to death according to circumstance." [ 104 ] : 11
In ancient times, those who were frail were seen as a burden and ignored or, in extreme cases, killed. [ 109 ] : 6 [ 112 ] People were defined as "old" because of their inability to perform useful tasks rather than their years. [ 110 ] : 6
Although he was skeptical of the gods, Aristotle concurred in the dislike of old people. In his Ethics , he wrote that "old people are miserly; they do not acknowledge disinterested friendship; only seeking for what can satisfy their selfish needs". [ 104 ] : 60
The Medieval and Renaissance periods depicted old age as "cruel or weak". [ 109 ] : 7
The 16th-century Utopians Thomas More and Antonio de Guevara allowed no decrepit old people in their fictional lands. [ 104 ] : 277–8, 280
For Thomas More, on the island of Utopia , when people are so old as to have "out-lived themselves" and are terminally ill, in pain, and a burden to everyone, the priests exhort them about choosing to die. The priests assure them that "they shall be happy after death". If they choose to die, they end their lives by starvation or by taking opium. [ 113 ]
Antonio de Guevara 's utopian nation "had a custom, not to live longer than sixty five years". At that age, they practiced self-immolation. Rather than condemn the practice, Bishop Guevara called it a "golden world" in which people "have overcome the natural appetite to desire to live". [ 114 ]
In the modern period, the cultural status of old people has declined in many cultures. [ 109 ] : 7 Joan Erikson observed that "aged individuals are often ostracized, neglected, and overlooked; elders are seen no longer as bearers of wisdom but as embodiments of shame". [ 115 ] : 114
Attitudes toward old age well-being vary somewhat between cultures. For example, in the United States, being healthy, physically, and socially active are signs of a good old age. On the other hand, Africans focus more on food and material security and a helpful family when describing old age well-being. [ 116 ] Additionally, Koreans are more anxious about aging and more scared of old people than Americans are. [ 117 ]
Research on age-related attitudes consistently finds that negative attitudes exceed positive attitudes toward old people because of their looks and behavior. [ 118 ] In his study Aging and Old Age , Posner discovers "resentment and disdain of older people" in American society. [ 45 ] : 320 Harvard University's implicit-association test measures implicit "attitudes and beliefs" about "Young vis a vis Old". [ 119 ] Blind Spot: Hidden Biases of Good People , a book about the test, reports that 80% of Americans have an "automatic preference for the young over old" and that attitude is true worldwide. The young are "consistent in their negative attitude" toward the old. [ 120 ] Ageism documents that Americans generally have "little tolerance for older persons and very few reservations about harboring negative attitudes" about them. [ 121 ]
Despite its prevalence, ageism is seldom the subject of public discourse. [ 26 ] : 23
Simone de Beauvoir wrote that "there is one form of experience that belongs only to those that are old – that of old age itself". [ 122 ] Nevertheless, simulations of old age attempt to help younger people gain some understanding.
Texas A&M University offers a plan for an "Aging Simulation" workshop. [ 123 ] The workshop is adapted from Sensitizing People to the Processes of Aging . [ 124 ] Some of the simulations include:
The Macklin Intergenerational Institute conducts Xtreme Aging workshops, as depicted in The New York Times . [ 125 ] A condensed version was presented on NBC's Today Show and is available online. [ 126 ] One exercise was to lay out 3 sets of 5 slips of paper. On set #1, write your 5 most enjoyed activities; on set #2, write your 5 most valued possessions; on set #3, write your 5 most loved people. Then "lose" them one by one, trying to feel each loss, until you have lost them all, as happens in old age.
Most people in the age range of 65–79 (the years of retirement and early old age) enjoy rich possibilities for a full life, but the condition of frailty , distinguished by "bodily failure" and greater dependence, becomes increasingly common from around age 80. [ 103 ] In the United States, hospital discharge data from 2003 to 2011 shows that injury was the most common reason for hospitalization among patients aged 65+. [ 128 ]
Gerontologists note the lack of research regarding and the difficulty in defining frailty. However, they add that physicians recognize frailty when they see it. [ 129 ] : xxi, 4, 6
A group of geriatricians proposed a general definition of frailty as "a physical state of increased vulnerability to stressors [ 130 ] that results from decreased reserves and disregulation [ 131 ] in multiple physiological systems". [ 129 ] : 20
Frailty is a common condition in later old age but different definitions of frailty produce diverse assessments of prevalence. One study placed the incidence of frailty for ages 65+ at 10.7%. [ 132 ] Another study placed the incidence of frailty in age 65+ population at 22% for women and 15% for men. [ 133 ] : 106 A Canadian study illustrated how frailty increases with age and calculated the prevalence for 65+ as 22.4% and for 85+ as 43.7%. [ 134 ]
A worldwide study of "patterns of frailty" based on data from 20 nations found (a) a consistent correlation between frailty and age, (b) a higher frequency among women, and (c) more frailty in wealthier nations where greater support and medical care increases longevity. [ 135 ]
In Norway, a 20-year longitudinal study of 400 people found that bodily failure and greater dependence became prevalent in the 80+ years. The study calls these years the "fourth age" or "old age in the real meaning of the term". Similarly, the "Berlin Aging Study" rated overall functionality on four levels: good, medium, poor, and very poor. People in their 70s were mostly rated good. In the 80–90 year range, the four levels of functionality were divided equally. By the 90–100 year range, 60% would be considered frail because of very poor functionality and only 5% still possessed good functionality. [ 103 ]
Three unique markers of frailty have been proposed: (a) loss of any notion of invincibility, (b) loss of ability to do things essential to one's care, and (c) loss of possibility for a subsequent life stage. [ 136 ]
Old age survivors on average deteriorate from agility in their early retirement years (65–79) to a period of frailty preceding death. This deterioration is gradual for some and precipitous for others. Frailty is marked by an array of chronic physical and mental problems which means that frailty is not treatable as a specific disease. These problems, coupled with increased dependency in the basic activities of daily living (ADLs) required for personal care, add emotional problems: depression and anxiety. [ 137 ] In sum, frailty has been depicted as a group of "complex issues", distinct but "causally interconnected", that often include "comorbid diseases", [ 138 ] progressive weakness, stress, exhaustion, and depression. [ 129 ] : 25–6
After age 50, healthy humans accumulate endogenous single- and double-strand breaks in their cellular DNA in a linear fashion. [ 139 ] Other forms of DNA damage also increase with age. [ 139 ] After age 50, a decline in DNA repair capability also occurs. [ 139 ] These findings are in accord with the theory that DNA damage is a fundamental aspect of aging in older people. [ 140 ]
Frail people require a high level of care. Medical advances have made it possible to extend life, or "postpone death", at old age for years. This added time costs many frail people "prolonged sickness, dependence, pain, and suffering". [ 129 ] : 9
According to a study by the Agency for Healthcare Research and Quality (AHRQ), the rate of emergency department visits was consistently highest among patients ages 85 years and older in 2006–2011 in the United States. [ 141 ] Additionally, patients aged 65+ had the highest percentage of hospital stays for adults with multiple chronic conditions but the second highest percentage of hospital costs in 2003–2014. [ 142 ]
These final years are also costly in economic terms. [ 143 ] : 17–8, 92 One out of every four Medicare dollars is spent on the frail in their last year of life, in attempts to postpone death. [ 144 ]
Medical treatments in the final days are not only economically costly, but they are often unnecessary or even harmful. [ 144 ] Nortin Hadler, M.D. warns against the tendency to medicalize and overtreat the frail. [ 145 ] In her Choosing Medical Care in Old Age , Muriel R. Gillick, M.D. argues that appropriate medical treatment for the frail is not the same as for the robust. The frail are vulnerable to "being tipped over" by any physical stress put on the system such as medical interventions. [ 133 ] : 116, 189
In addition to everyday care, frail elderly people and others with disabilities are particularly vulnerable during natural disasters. [ 146 ] They may be unable or unwilling to evacuate to avoid a hurricane or wildfire. [ 146 ]
Old age, death, and frailty are closely linked, with approximately half the deaths in old age preceded by months or years of frailty. [ 129 ] : 3, 19
Older Adults' Views on Death is based on interviews with 109 people in the 70–90 age range, with a mean age of 80.7. Almost 20% of the people wanted to use whatever treatment might postpone death. About the same number said that, given a terminal illness, they would choose assisted suicide . Roughly half chose to do nothing except live day by day until death comes naturally, without medical or other intervention designed to prolong life. This choice was coupled with a desire to receive palliative care if needed. [ 23 ] : 6–7, 9, 12, 32
About half of older adults have multimorbidity , that is, they have three or more chronic conditions. [ 147 ] Medical advances have made it possible to "postpone death", but in many cases this postponement adds "prolonged sickness, dependence, pain, and suffering", a time that is costly in social, psychological, and economic terms. [ 143 ] : 18, 72
The longitudinal interviews of 150 age 85+ people summarized in Life Beyond 85 Years found "progressive terminal decline" in the year prior to death: constant fatigue, much sleep, detachment from people, things, and activities, simplified lives. Most of the interviewees did not fear death; some would welcome it. One person said, "Living this long is pure hell." However, nearly everyone feared a long process of dying. Some wanted to die in their sleep; others wanted to die "on their feet". [ 107 ] : 202–7
The study of Older Adults' Views on Death found that the more frail people were, the more "pain, suffering, and struggles" they were enduring, the more likely they were to "accept and welcome" death as a release from their misery. Their fear about the process of dying was that it would prolong their distress. Besides being a release from misery, some saw death as a way to reunite with deceased loved ones. Others saw death as a way to free their caretakers from the burden of their care. [ 23 ] : 55, 270, 276
Generally speaking, old people have always been more religious than young people. [ 148 ] At the same time, wide cultural variations exist. [ 18 ] : 608
In the United States, 90% of old age Hispanics view themselves as very, quite, or somewhat religious. [ 149 ] : 125 The Pew Research Center's study of black and white old people found that 62% of those in ages 65–74 and 70% in ages 75+ asserted that religion was "very important" to them. For all 65+ people, more women (76%) than men (53%) and more blacks (87%) than whites (63%) consider religion "very important" to them. This compares to 54% in the 30–49 age range. [ 150 ]
In a British 20-year longitudinal study, less than half of the old people surveyed said that religion was "very important" to them, and a quarter said they had become less religious in old age. [ 18 ] : 608 The late-life rise in religiosity is stronger in Japan than in the United States, but in the Netherlands it is minimal. [ 18 ] : 608
In the practice of religion, a study of 60+ people found that 25% read the Bible every day and over 40% watch religious television. [ 149 ] : 12 Pew Research found that in the age 65+ range, 75% of whites and 87% of blacks pray daily. [ 150 ] When comparing religiosity, the individual practice may be a more accurate measure than participation in organized religion. With organized religion, participation may often be hindered due to transportation or health problems. [ 149 ] : 125
In the industrialized countries, life expectancy and, thus, the old age population have increased consistently over the last decades. [ 151 ] In the United States the proportion of people aged 65 or older increased from 4% in 1900 to about 12% in 2000. [ 152 ] In 1900, only about 3 million of the nation's citizens were 65 or older (out of 76 million total American citizens). By 2000, the number of senior citizens had increased to about 35 million (of 280 million US citizens). Population experts estimate that more than 50 million Americans—about 17 percent of the population—will be 65 or older in 2020. [ 153 ] By 2050, it is projected that at least 400,000 Americans will be 100 or older. [ 154 ]
The number of old people is growing around the world chiefly because of the post–World War II baby boom and increases in the provision and standards of health care. [ 155 ] By 2050, 33% of the developed world's population and almost 20% of the less developed world's population will be over 60 years old. [ 156 ]
The growing number of people living to their 80s and 90s in the developed world has strained public welfare systems and has also resulted in increased incidence of diseases like cancer and dementia that were rarely seen in premodern times. When the United States Social Security program was created, people older than 65 numbered only around 5% of the population and the average life expectancy of a 65-year-old in 1936 was approximately 5 years, while in 2011 it could often range from 10 to 20 years. Other issues that can arise from an increasing population are growing demands for health care and an increase in demand for different types of services. [ 157 ]
Of the roughly 150,000 people who die each day across the globe, about two thirds—100,000 per day—die of age-related causes. [ 158 ] In industrialized nations, the proportion is much higher, reaching 90%. [ 158 ]
According to Erik Erikson 's "Stages of Psychosocial Development" , the human personality is developed in a series of eight stages that take place from the time of birth and continue on throughout an individual's complete life. He characterises old age as a period of "Integrity vs. Despair", during which people focus on reflecting back on their lives. Those who are unsuccessful during this phase will feel that their life has been wasted and will experience many regrets. The individual will be left with feelings of bitterness and despair. Those who feel proud of their accomplishments will feel a sense of integrity. Successfully completing this phase means looking back with few regrets and a general feeling of satisfaction. These individuals will attain wisdom, even when confronting death. [ 159 ] [ 160 ] [ 161 ] Coping is a very important skill needed in the aging process to move forward with life and not be 'stuck' in the past. The way people adapt and cope, reflects their aging process on a psycho-social level. [ 162 ]
For people in their 80s and 90s, Joan Erikson added a ninth stage in The Life Cycle Completed: Extended Version . [ 115 ] As she wrote, she added the ninth stage because the Integrity of the eighth stage imposes "a serious demand on the senses of elders" and the Wisdom of the eighth stage requires capacities that ninth stage elders "do not usually have". [ 115 ] : 112–3
Newman & Newman also proposed a ninth stage of life, Elderhood. Elderhood refers to those individuals who live past the life expectancy of their birth cohorts. They described two different types of people in this stage of life. The "young old" are the healthy individuals who can function on their own without assistance and can complete their daily tasks independently, while the "old old" are those who depend on specific services due to declining health or diseases. [ 163 ]
Social theories, or concepts, [ 164 ] propose explanations for the distinctive relationships between old people and their societies.
One theory, proposed in 1961, is the disengagement theory , which proposes that, in old age, a mutual disengagement between people and their society occurs in anticipation of death. By becoming disengaged from work and family responsibilities, according to this concept, people are enabled to enjoy their old age without stress. This theory has been subjected to the criticism that old age disengagement is neither natural, inevitable, nor beneficial. [ 165 ] Furthermore, disengaging from social ties in old age is not across the board: unsatisfactory ties are dropped and satisfying ones kept. [ 18 ] : 613
In opposition to the disengagement theory, the activity theory of old age argues that disengagement in old age occurs not by desire, but by the barriers to social engagement imposed by society. This theory has been faulted for not factoring in psychological changes that occur in old age as shown by reduced activity, even when available. It has also been found that happiness in old age is not proportional to activity. [ 18 ] : 614
According to the continuity theory , in spite of the inevitable differences imposed by their old age, most people try to maintain continuity in personhood, activities, and relationships with their younger days. [ 18 ] : 614
Socioemotional selectivity theory also depicts how people maintain continuity in old age. The focus of this theory is continuity sustained by social networks, albeit networks narrowed by choice and by circumstances. The choice is for more harmonious relationships. The circumstances are loss of relationships by death and distance. [ 18 ] : 614–5
Life expectancy by nation at birth in the year 2011 ranged from 48 years to 82 years. Low values were caused by high death rates for infants and children. [ 166 ]
In almost all countries, women, on average, live longer than men. The disparities range from 12 years in Russia to no difference, or a higher life expectancy for men, in countries such as Zimbabwe and Uganda. [ 167 ]
The number of elderly people worldwide began to surge in the second half of the 20th century. Before then, in developed countries, five percent or less of the population was over 65. Few lived longer than their 70s, and people who attained advanced age (i.e. their 80s) were rare enough to be a novelty and were revered as wise sages. The worldwide over-65 population in 1960 was one-third of the under-5 population. By 2013, the over-65 population had grown to equal the under-5 population and is projected to double the under-5 population by 2050. [ 168 ]
Before the surge in the over-65 population, accidents and disease claimed many people before they could attain old age, and health problems in those over 65 meant a quick death in most cases. If a person lived to an advanced age, it was generally due to genetic factors and/or a relatively easy lifestyle, since diseases of old age could not be treated before the 20th century. [ 169 ]
In October 2016, a group of scientists identified the maximum human lifespan at an average age of 115, with an absolute upper limit of 125 years. [ 170 ] However, the concept of a maximum lifespan of humans is still widely debated among the scientific community. [ 171 ]
German chancellor Otto von Bismarck created the world's first comprehensive government social safety net in the 1880s, providing for old age pensions. It was a political solution by conservatives to weaken the socialist movement. [ 172 ]
In the United States of America and the United Kingdom , 65 (60 for women in the UK) was traditionally the age of retirement with full old age benefits. [ 173 ] [ 174 ]
In 2003, the age at which a United States citizen became eligible for full Social Security benefits began to increase gradually, and will continue to do so until it reaches 67 in 2027. Full retirement age for Social Security benefits for people retiring in 2012 is age 66. [ 175 ] In the United Kingdom, the state pension age for men and women will rise to 66 in 2020 with further increases scheduled after that.
Originally, the purpose of old age pensions was to open up jobs for younger unemployed people and to prevent elderly people from being reduced to beggary, which is still common in some underdeveloped countries, but growing life expectancies and older populations have brought into question the model under which pension systems were designed. [ 176 ] Some complained that "powerful" and "greedy" old people were getting more than their share of the nation's resources. [ 177 ] In 2011, using a Supplemental Poverty Measure (SPM), the old age American poverty rate was measured as 15.9%. [ 52 ]
In the United States in 2008, 11 million people aged 65+ lived alone: 5 million or 22% of ages 65–74, 4 million or 34% of ages 75–84, and 2 million or 41% of ages 85+. The 2007 gender breakdown for all people 65+ was men 19% and women 39%. [ 178 ]
Many new assistive devices made especially for the home have enabled more old people to care for their own activities of daily living (ADL). Some examples of devices are a medical alert and safety system, shower seat (making it so the person does not get tired in the shower and fall), a bed cane (offering support to those with unsteadiness getting in and out of bed) and an ADL cuff (used with eating utensils for people with paralysis or hand weakness). [ 179 ]
A Swedish study found that at age 76, 46% of the subjects used assistive devices. When they reached age 86, 69% used them. The subjects were ambivalent regarding the use of the assistive devices: as "enablers" or as "disablers". [ 180 ] People who view assistive devices as enabling greater independence accept and use them, whereas those who see them as symbols of disability reject them. [ 181 ] However, organizations like Love for the Elderly aim to combat such age-related prejudice by educating the public about the importance of appreciating growing older, while also providing services of kindness to elders in senior homes. [ 182 ]
Even with assistive devices, as of 2006, 8.5 million Americans needed personal assistance because of impaired basic activities of daily living required for personal care or impaired instrumental activities of daily living (IADL) required for independent living. Projections place this number at 21 million by 2030, when 40% of Americans over 70 will need assistance. [ 129 ] : 17 There are many options for such long-term care available to those who require it. [ 183 ] [ 184 ] [ 185 ] There is home care, in which a family member, volunteer, or trained professional aids the person in need and helps with daily activities. Another option is community services, which can provide the person with transportation, meal plans, or activities in senior centers . A third option is assisted living, where round-the-clock supervision is given along with aid in eating, bathing, dressing, etc. A final option is a nursing home, which provides professional nursing care. [ 186 ]
In 2014, a documentary film called The Age of Love used humor and the poignant adventures of 30 seniors who attend a speed dating event for 70 to 90-year-olds and discover how the search for romance changes—or does not change—from childhood to old age. [ 187 ]
Scholarly literature has emerged, especially in Britain, showing historical trends in the visual depiction of old age. [ 188 ] [ 189 ] [ 190 ] [ 191 ] | https://en.wikipedia.org/wiki/Old_age |
Old person smell is the characteristic odor of elderly humans. [ 1 ] Like many other animal species, human odor undergoes distinct stages based on chemical changes initiated through the aging process. Research suggests that this enables humans to determine the suitability of potential partners based on age, in addition to other factors. [ 2 ]
One study suggested that old person smell may be the result of 2-nonenal , an unsaturated aldehyde which is associated with human body odor alterations during aging . [ 3 ] Another study failed to detect 2-nonenal at all, but found significantly increased concentrations of benzothiazole , dimethylsulphone , and nonanal on older subjects. [ 4 ] There are also other hypotheses, [ 5 ] such as change of the monounsaturated fatty acid composition of skin surface lipids and the increase of lipid peroxides associated with aging. [ 6 ]
In 2012, the Monell Chemical Senses Center published a press release claiming that the human ability to identify information such as age, illness, and genetic suitability from odor is responsible for the distinctive "old man smell". Sensory neuroscientist Johan Lundström stated, "Elderly people have a discernible underarm odor that younger people consider to be fairly neutral and not very unpleasant." [ 7 ]
Old person smell is known as kareishū ( 加齢臭 ) in Japan, where much social value is placed on personal grooming , and specific upmarket odor-eliminating soaps are targeted at more elderly consumers. [ 8 ] | https://en.wikipedia.org/wiki/Old_person_smell |
An oleaginous microorganism is a type of microbe that accumulates lipid as a normal part of its metabolism . Oleaginous microbes may accumulate an array of different lipid compounds, including polyhydroxyalkanoates , triacylglycerols , and wax esters . Various microorganisms, including bacteria , fungi , and yeast , are known to accumulate lipids. These organisms are often researched for their potential use in producing fuels from waste products.
For a typical bacterium, polar lipids such as phospholipids are synthesized to maintain the cell membrane. However, in oleaginous organisms, lipids can be synthesized and accumulated within the cell to act as energy storage in nutrient-deprived conditions. Lipid accumulation can also serve secondary purposes, such as acting as a water source in water-stressed conditions and preventing oxidative stress from the formation of reactive oxygen species as a result of ultraviolet radiation. [ 1 ]
Lipid accumulation occurs as a storage of energy and nutrients, and appears to be triggered by inadequate environmental conditions. Bacteria such as Methylobacterium rhodesianum strain MB126 have been observed to accumulate poly-β-hydroxybutyrate when grown under phosphorus-, nitrogen-, and carbon-deficient conditions. [ 2 ] Similarly, other organisms, such as oleaginous Rhodococcus species like R. opacus , are known to accumulate triacylglycerols instead, [ 3 ] with the fatty acid content of these compounds varying by organism and environmental conditions. [ 1 ] Lipid accumulation is proposed to be advantageous to oleaginous microbes as it provides a source of energy and nutrients when they are absent from the environment. It allows the organisms to survive through 'feast and famine' conditions, preventing die-offs before a new source of energy and nutrients becomes available to the population. [ 2 ]
The specific conditions causing triacylglycerol synthesis and accumulation have been studied in order to develop processes where its intracellular content is maximized. The carbon to nitrogen ratio has been identified as being particularly important for the accumulation of lipids. Conditions with low nitrogen and excess carbon content have been observed to cause increased lipogenesis in bacteria in the genus Rhodococcus . [ 3 ]
Lipid accumulation may also provide benefits to organisms in water-stressed conditions. The metabolism of triacylglycerols and wax esters has the potential to produce 1.4 molecules of water for each molecule of lipid processed, while poly-β-hydroxybutyrate returns 0.5 molecules of water for each molecule of lipid metabolized. [ 1 ] Another analysis determined that 107.1 g of water can be harvested from 100 g of microbial lipids, which is higher than for the same amount of carbohydrate storage molecules. [ 4 ] This means that oleaginous organisms can rely on lipid storage to reduce water stress and prevent desiccation in arid environments.
Another way that lipid accumulation supports the survival of oleaginous microbes is by tempering the oxidative stress caused by reactive oxygen species. In environments where ultraviolet radiation is intense, such as deserts or polar regions, microbes can be damaged by the formation of reactive oxygen species from water molecules, which interact with the cell's DNA, cellular components, or metabolic processes. Osmolytes such as glycerol – a component of triacylglycerol – stabilize the intracellular water and prevent the formation of reactive oxygen species, preventing cell damage and lysis as a result of ultraviolet radiation. [ 1 ]
The genetic component of triacylglycerol biosynthesis has been investigated. Its biosynthesis is catalyzed by the wax ester/acyl-CoA:diacylglycerol acyltransferase enzymes, which are associated with the atf genes. A study investigating the atf genes determined that, out of the 10 atf genes identified in R. opacus PD630, only atf1 and atf2 had significant impacts on the activity of the enzymes and the resulting synthesis and accumulation of triacylglycerol. [ 5 ]
Oleaginous microbes have attracted attention for their potential as sources for biofuels such as biodiesel . Instead of relying on nonrenewable fuel sources such as petroleum and natural gas, biodiesel can be produced from other forms of waste, such as agricultural waste or wastewater. Oleaginous microbes can degrade various materials into polyhydroxyalkanoates, which may have the potential to form bioplastics , and triacylglycerols, which may have the potential to form biodiesel. Microbes can be integrated into existing processes or waste streams, such as wastewater treatment, to harvest resources from the waste. [ 6 ] Currently, many biofuels are made using plant oils, but products such as single cell oil utilizing microbial lipids require less land and have shorter time constraints compared to the processing of plant oils. [ 7 ] Considering the use of waste as a substrate, the integration into established processes, and the low resource investment, using oleaginous microbes as a source of biofuel is attractive to many researchers. [ 3 ] | https://en.wikipedia.org/wiki/Oleaginous_microorganism
Oleanane is a natural triterpenoid . It is commonly found in woody angiosperms and as a result is often used as an indicator of these plants in the fossil record. It is a member of the oleanoid series, which consists of pentacyclic triterpenoids (such as beta- amyrin and taraxerol [ verification needed ] ) where all rings are six-membered.
Oleanane is a pentacyclic triterpenoid, a class of molecules made up of six connected isoprene units. The naming of both the ring structures and the individual carbon atoms in oleanane is the same as in steroids . As such, it consists of an A, B, C, D, and E ring, all of which are six-membered rings. [ 2 ] [ failed verification ]
The structure of oleanane contains a number of different methyl groups that vary in orientation between different oleananes. For example, 18-alpha-oleanane contains a downward-facing methyl group at the 18th carbon atom, while 18-beta-oleanane contains an upward-facing methyl group at the same position.
The A and B rings of the oleanane structure are identical to those of hopane . As a result, both molecules produce a fragment of m/z 191. Because this fragment is often used to identify hopanes, oleanane can be misidentified in hopane analysis.
Like other triterpenoids, oleananes are formed from six combined isoprene units. [ 2 ] These isoprene units can be combined via a number of different pathways. In eukaryotes (including plants), this pathway is the mevalonate (MVA) pathway. For the formation of steroids and other triterpenoids, the isoprenoids are combined into a precursor known as squalene, which then undergoes enzymatic cyclization to produce the various different triterpenoids, including oleanane. [ 2 ]
Once the oleananes have been transported into rocks or sediments they will undergo further alteration before they are measured.
Oleananes can be identified in extracts from rock samples (or plants) using GC/MS. A GC/MS is a gas chromatograph coupled with a mass spectrometer. The sample is first injected into the system, then run through a chromatographic column. How fast a material moves through the column depends on how long it spends in each of its two phases: compounds that partition more into the mobile phase move faster, while compounds that partition more into the stationary phase move more slowly. The result is a separation of different organic molecules based on their retention time in the GC.
After being separated by the GC, the compounds can then be analyzed by a mass spectrometer. Each compound produces a characteristic mass spectrum, based on the fragments it splits into during ionization in the mass spectrometer. This means that the GC/MS can not only separate different types of molecules, it can also identify them.
As mentioned above, oleananes have a characteristic mass fragment at m/z = 191, and thus will appear in the same selected ion chromatogram (SIC) as hopanes. This can help one identify them in GC/MS datasets.
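To make the idea of a selected ion chromatogram concrete, the short Python sketch below pulls out the intensity recorded near m/z = 191 at each retention time from hypothetical scan data. The data layout, tolerance value, and numbers are illustrative assumptions only, not the output format of any particular instrument or the exact workflow used in the studies cited here.

# Hypothetical GC/MS scans: (retention time in seconds, list of (m/z, intensity) pairs)
scans = [
    (600.0, [(191.1, 5.0e4), (205.2, 1.0e4)]),
    (601.0, [(191.2, 7.5e4), (412.4, 2.0e3)]),
    (602.0, [(177.1, 3.0e4)]),
]

def selected_ion_chromatogram(scans, target_mz=191.0, tol=0.5):
    """Sum the intensities within +/- tol of target_mz for each scan."""
    sic = []
    for retention_time, peaks in scans:
        intensity = sum(i for mz, i in peaks if abs(mz - target_mz) <= tol)
        sic.append((retention_time, intensity))
    return sic

print(selected_ion_chromatogram(scans))
# [(600.0, 50000.0), (601.0, 75000.0), (602.0, 0)]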
Oleanane has been identified as a compound in modern day angiosperms. [ 3 ]
Because of this, its presence in the fossil record has also been used to trace angiosperms through geologic time. For example, the ratio of 18-alpha-oleanane + 18-beta-oleanane to 17-alpha-hopane in rock extracts (and associated petroleums/oils) has been found to correlate (at least broadly) with the presence of angiosperms in the fossil record. [ 4 ] In this study, the combination of alpha- and beta-oleanane was used as an indicator for the presence of angiosperms. They are normalized to hopanes, which are broadly present in almost all rock extracts coming from petroleum. Furthermore, because of the structural similarities between hopanes and oleananes, it is assumed that they will react similarly to the various weathering processes that degrade the biomarkers present. As such, the ratio of hopanes to oleananes should be similar to the initial ratio, and unaffected by processes occurring in the rock after fossilization.
There is some delay between the accepted increase in taxonomic diversification of angiosperms (which occurred during the mid-Cretaceous period) and the increase of oleanane concentrations in the fossil record (which occurred in the late Cretaceous or even after). This could be due to a number of factors, one being that the early angiosperms were more herbaceous than woody and that woody angiosperms only appeared after further taxonomic diversification. [ 4 ]
Lastly, the study introduced the idea of an "oleanane parameter," which could be used in assessing angiosperm input to petroleum sources. This, in turn, gives some idea of the age of said petroleum sources. [ 4 ]
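As a rough illustration of such a parameter, the Python sketch below computes a ratio of oleanane to hopane peak areas along the lines described above. The peak-area values are hypothetical, and the normalization used here (simply 18-alpha plus 18-beta oleanane divided by 17-alpha-hopane) is an assumption inferred from the wording of this article, not necessarily the precise formula of the cited study.

def oleanane_ratio(area_18a_oleanane, area_18b_oleanane, area_17a_hopane):
    """Ratio of combined 18-alpha and 18-beta oleanane peak areas to the 17-alpha-hopane peak area."""
    return (area_18a_oleanane + area_18b_oleanane) / area_17a_hopane

# Hypothetical integrated peak areas taken from m/z 191 ion chromatograms
print(oleanane_ratio(area_18a_oleanane=1.2e5,
                     area_18b_oleanane=0.8e5,
                     area_17a_hopane=9.5e5))  # ~0.21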
That being said, the presence of angiosperms may not be the only thing affecting the oleanane content of sediments, rock extracts and petroleum. For example, there is evidence that contact with seawater during early sedimentation processes can increase the concentration of oleananes in the mature sediment. [ 5 ] This evidence comes from various indicators of marine influence (C27/C29 sterane ratios, changes in elemental composition in the downstream direction that are indicative of the infiltration of water into the system, and the homohopane index). Despite this, it is still unclear how marine influence enhances the expression of oleananes (thus increasing observed concentration). Some ideas include the changes in pH, Eh and the microbial environment that come with the interaction with seawater. [ 5 ] | https://en.wikipedia.org/wiki/Oleanane
In dynamical systems theory , the Olech theorem establishes sufficient conditions for global asymptotic stability of a two-equation system of non-linear differential equations . The result was established by Czesław Olech in 1963, [ 1 ] based on joint work with Philip Hartman . [ 2 ]
The system of differential equations $\dot{\mathbf{x}} = f(\mathbf{x})$, with $\mathbf{x} = [x_{1}\ x_{2}]^{\mathsf{T}} \in \mathbb{R}^{2}$ and $f(\mathbf{x}) = [f^{1}(\mathbf{x})\ f^{2}(\mathbf{x})]^{\mathsf{T}}$, for which $\mathbf{x}^{\ast} = \mathbf{0}$ is an equilibrium point , is uniformly globally asymptotically stable if: | https://en.wikipedia.org/wiki/Olech_theorem
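The conditions themselves are usually stated in terms of the partial derivatives of f. The following is a sketch of the commonly cited form of Olech's conditions, reconstructed from the standard literature rather than taken from the excerpt above, and should be checked against the original paper: for all $\mathbf{x} \in \mathbb{R}^{2}$,
\[
\begin{aligned}
&\text{(i)}\quad \frac{\partial f^{1}}{\partial x_{1}} + \frac{\partial f^{2}}{\partial x_{2}} < 0,\\
&\text{(ii)}\quad \frac{\partial f^{1}}{\partial x_{1}}\,\frac{\partial f^{2}}{\partial x_{2}} - \frac{\partial f^{1}}{\partial x_{2}}\,\frac{\partial f^{2}}{\partial x_{1}} > 0,\\
&\text{(iii)}\quad \frac{\partial f^{1}}{\partial x_{1}}\,\frac{\partial f^{2}}{\partial x_{2}} \neq 0 \ \text{everywhere, or}\ \ \frac{\partial f^{1}}{\partial x_{2}}\,\frac{\partial f^{2}}{\partial x_{1}} \neq 0 \ \text{everywhere.}
\end{aligned}
\]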
In organic chemistry , olefin metathesis is an organic reaction that entails the redistribution of fragments of alkenes (olefins) by the scission and regeneration of carbon-carbon double bonds . [ 1 ] [ 2 ] Because of the relative simplicity of olefin metathesis, it often creates fewer undesired by-products and hazardous wastes than alternative organic reactions. For their elucidation of the reaction mechanism and their discovery of a variety of highly active catalysts , Yves Chauvin , Robert H. Grubbs , and Richard R. Schrock were collectively awarded the 2005 Nobel Prize in Chemistry . [ 3 ]
The reaction requires metal catalysts . Most commercially important processes employ heterogeneous catalysts . The heterogeneous catalysts are often prepared by in-situ activation of a metal halide (MCl x ) using organoaluminium or organotin compounds , e.g. combining MCl x –EtAlCl 2 . A typical catalyst support is alumina . Commercial catalysts are often based on molybdenum and ruthenium. Well-defined organometallic compounds have mainly been investigated for small-scale reactions or in academic research. The homogeneous catalysts are often classified as Schrock catalysts and Grubbs catalysts . Schrock catalysts feature molybdenum(VI)- and tungsten(VI)-based centers supported by alkoxide and imido ligands. [ 4 ]
Grubbs catalysts, on the other hand, are ruthenium(II) carbenoid complexes. [ 5 ] Many variations of Grubbs catalysts are known. Some have been modified with a chelating isopropoxybenzylidene ligand to form the related Hoveyda–Grubbs catalyst .
Olefin metathesis has several industrial applications. Almost all commercial applications employ heterogeneous catalysts using catalysts developed well before the Nobel-Prize winning work on homogeneous complexes. [ 6 ] Representative processes include: [ 1 ]
Molecular catalysts have been explored for the preparation of a variety of potential applications, [ 8 ] such as the synthesis of pharmaceutical drugs , [ 9 ] the manufacturing of high-strength materials, the preparation of cancer-targeting nanoparticles , [ 10 ] and the conversion of renewable plant-based feedstocks into hair and skin care products. [ 11 ]
Some classes of olefin metathesis include:
Hérisson and Chauvin first proposed the widely accepted mechanism of transition metal alkene metathesis. [ 12 ] The direct [2+2] cycloaddition of two alkenes is formally symmetry forbidden and thus has a high activation energy . The Chauvin mechanism involves the [2+2] cycloaddition of an alkene double bond to a transition metal alkylidene to form a metallacyclobutane intermediate. The metallacyclobutane produced can then cycloeliminate to give either the original species or a new alkene and alkylidene. Interaction with the d-orbitals on the metal catalyst lowers the activation energy enough that the reaction can proceed rapidly at modest temperatures.
Olefin metathesis involves little change in enthalpy for unstrained alkenes. Product distributions are determined instead by le Chatelier's Principle , i.e. entropy .
Cross metathesis and ring-closing metathesis are driven by the entropically favored evolution of ethylene or propylene , which can be removed from the system because they are gases. Because of this, CM and RCM reactions often use alpha-olefins . The reverse reaction of CM of two alpha-olefins, ethenolysis , can be favored but requires high pressures of ethylene to increase ethylene concentration in solution. The reverse reaction of RCM, ring-opening metathesis, can likewise be favored by a large excess of an alpha-olefin, often styrene . Ring-opening metathesis usually involves a strained alkene (often a norbornene ) and the release of ring strain drives the reaction. Ring-closing metathesis, conversely, usually involves the formation of a five- or six-membered ring, which is enthalpically favorable, although these reactions also tend to evolve ethylene, as previously discussed. RCM has been used to close larger macrocycles, in which case the reaction may be kinetically controlled by running the reaction at high dilution. [ 13 ] The same substrates that undergo RCM can undergo acyclic diene metathesis, with ADMET favored at high concentrations. The Thorpe–Ingold effect may also be exploited to improve both reaction rates and product selectivity.
Cross-metathesis is synthetically equivalent to (and has replaced) a procedure of ozonolysis of an alkene to two ketone fragments followed by the reaction of one of them with a Wittig reagent .
"Olefin metathesis is a child of industry and, as with many catalytic processes, it was discovered by accident." [ 1 ] As part of ongoing work in what would later become known as Ziegler–Natta catalysis Karl Ziegler discovered the conversion of ethylene into 1-butene instead of a saturated long-chain hydrocarbon (see nickel effect ). [ 14 ]
In 1960 a Du Pont research group polymerized norbornene to polynorbornene using lithium aluminum tetraheptyl and titanium tetrachloride [ 15 ] (a patent by this company on this topic dates back to 1955 [ 16 ] ),
a reaction then classified as a so-called coordination polymerization . According to the then-proposed reaction mechanism, an RTiX titanium intermediate first coordinates to the double bond in a pi complex . The second step then is a concerted SNi reaction breaking a C–C bond and forming a new alkylidene–titanium bond; the process then repeats itself with a second monomer:
Only much later would polynorbornene be produced through ring-opening metathesis polymerisation . The DuPont work was led by Herbert S. Eleuterio . Giulio Natta in 1964 also observed the formation of an unsaturated polymer when polymerizing cyclopentene with tungsten and molybdenum halides. [ 17 ]
In a third development leading up to olefin metathesis, researchers at Phillips Petroleum Company in 1964 [ 18 ] described olefin disproportionation with catalysts molybdenum hexacarbonyl , tungsten hexacarbonyl , and molybdenum oxide supported on alumina for example converting propylene to an equal mixture of ethylene and 2-butene for which they proposed a reaction mechanism involving a cyclobutane (they called it a quasicyclobutane) – metal complex:
This particular mechanism is symmetry forbidden based on the Woodward–Hoffmann rules first formulated two years earlier. Cyclobutanes have also never been identified in metathesis reactions, which is another reason why it was quickly abandoned.
Then in 1967 researchers led by Nissim Calderon at the Goodyear Tire and Rubber Company described a novel catalyst system for the metathesis of 2-pentene based on tungsten hexachloride , ethanol , and the organoaluminum compound EtAlMe 2 . The researchers proposed a name for this reaction type: olefin metathesis. [ 19 ] Formerly the reaction had been called "olefin disproportionation."
In this reaction 2-pentene forms a rapid (a matter of seconds) chemical equilibrium with 2-butene and 3-hexene . No double bond migrations are observed; the reaction can be started with the butene and hexene as well and the reaction can be stopped by addition of methanol .
The Goodyear group demonstrated that the reaction of regular 2-butene with its all- deuterated isotopologue yielded C 4 H 4 D 4 with deuterium evenly distributed. [ 20 ] In this way they were able to differentiate between a transalkylidenation mechanism and a transalkylation mechanism (ruled out):
In 1971 Chauvin proposed a four-membered metallacycle intermediate to explain the statistical distribution of products found in certain metathesis reactions. [ 21 ] This mechanism is today considered the actual mechanism taking place in olefin metathesis.
Chauvin's experimental evidence was based on the reaction of cyclopentene and 2-pentene with the homogeneous catalyst tungsten(VI) oxytetrachloride and tetrabutyltin :
The three principal products C9, C10 and C11 are found in a 1:2:1 ratio regardless of conversion. The same ratio is found with the higher oligomers. Chauvin also explained how the carbene forms in the first place: by alpha-hydride elimination from a carbon–metal single bond. For example, propylene (C3) forms in a reaction of 2-butene (C4) with tungsten hexachloride and tetramethyltin (C1).
In the same year, Pettit, who had synthesised cyclobutadiene a few years earlier, independently came up with a competing mechanism. [ 22 ] It consisted of a tetramethylene intermediate with sp³-hybridized carbon atoms linked to a central metal atom with multiple three-center two-electron bonds .
Experimental support offered by Pettit for this mechanism was based on an observed reaction inhibition by carbon monoxide in certain metathesis reactions of 4-nonene with a tungsten metal carbonyl [ 23 ]
Robert H. Grubbs got involved in metathesis in 1972 and also proposed a metallacycle intermediate but one with four carbon atoms in the ring. [ 24 ] The group he worked in reacted 1,4-dilithiobutane with tungsten hexachloride in an attempt to directly produce a cyclomethylenemetallacycle producing an intermediate, which yielded products identical with those produced by the intermediate in the olefin metathesis reaction. This mechanism is pairwise:
In 1973 Grubbs found further evidence for this mechanism by isolating one such metallacycle not with tungsten but with platinum by reaction of the dilithiobutane with cis-bis(triphenylphosphine)dichloroplatinum(II) [ 25 ]
In 1975 Katz also arrived at a metallacyclobutane intermediate consistent with the one proposed by Chauvin. [ 26 ] He reacted a mixture of cyclooctene , 2-butene and 4-octene with a molybdenum catalyst and observed that the unsymmetrical C14 hydrocarbon reaction product is present right from the start at low conversion.
In any of the pairwise mechanisms with olefin pairing as rate-determining step this compound, a secondary reaction product of C12 with C6, would form well after formation of the two primary reaction products C12 and C16.
In 1974 Casey was the first to implement carbenes into the metathesis reaction mechanism: [ 27 ]
Grubbs in 1976 provided evidence against his own updated pairwise mechanism:
with a 5-membered cycle in another round of isotope labeling studies in favor of the 4-membered cycle Chauvin mechanism: [ 28 ] [ 29 ]
In this reaction the ethylene product distribution ( d 4 , d 2 , d 0 ) {\displaystyle (d_{4},d_{2},d_{0})} at low conversion was found to be consistent with the carbene mechanism. On the other hand, Grubbs did not rule out the possibility of a tetramethylene intermediate.
The first practical metathesis system was introduced in 1978 by Tebbe based on the (what later became known as the) Tebbe reagent . [ 30 ] In a model reaction isotopically labeled carbon atoms in isobutene and methylenecyclohexane switched places:
The Grubbs group then isolated the proposed metallacyclobutane intermediate in 1980 also with this reagent together with 3-methyl-1-butene: [ 31 ]
They isolated a similar compound in the total synthesis of capnellene in 1986: [ 32 ]
In that same year the Grubbs group proved that metathesis polymerization of norbornene by Tebbe's reagent is a living polymerization system, [ 33 ] and a year later Grubbs and Schrock co-published an article describing living polymerization with a tungsten carbene complex. [ 34 ] While Schrock focused his research on tungsten and molybdenum catalysts for olefin metathesis, Grubbs started the development of catalysts based on ruthenium, which proved to be less sensitive to oxygen and water and therefore more functional-group tolerant.
In the 1960s and 1970s various groups reported the ring-opening polymerization of norbornene catalyzed by hydrated trichlorides of ruthenium and other late transition metals in polar, protic solvents. [ 35 ] [ 36 ] [ 37 ] This prompted Robert H. Grubbs and coworkers to search for well-defined, functional group tolerant catalysts based on ruthenium. The Grubbs group successfully polymerized the 7-oxo norbornene derivative using ruthenium trichloride , osmium trichloride as well as tungsten alkylidenes. [ 38 ] They identified a Ru(II) carbene as an effective metal center and in 1992 published the first well-defined, ruthenium-based olefin metathesis catalyst, (PPh 3 ) 2 Cl 2 Ru=CHCH=CPh 2 : [ 39 ]
The corresponding tricyclohexylphosphine complex (PCy 3 ) 2 Cl 2 Ru=CHCH=CPh 2 was also shown to be active. [ 40 ] This work culminated in the now commercially available 1st generation Grubbs catalyst . [ 41 ] [ 42 ]
Schrock entered the olefin metathesis field in 1979 as an extension of work on tantalum alkylidenes. [ 43 ] The initial result was disappointing as reaction of CpTa(=CH−t−Bu)Cl 2 with ethylene yielded only a metallacyclopentane, not metathesis products: [ 44 ]
But by tweaking this structure to a PR 3 Ta(CHt−bu)(Ot−bu) 2 Cl (replacing chloride by t-butoxide and a cyclopentadienyl by an organophosphine ), metathesis was established with cis-2-pentene. [ 45 ] In another development, certain tungsten oxo complexes of the type W(O)(CHt−Bu)(Cl) 2 (PEt) 3 were also found to be effective. [ 46 ]
Schrock alkylidenes for olefin metathesis of the type Mo(NAr)(CHC(CH 3 ) 2 R){OC(CH 3 )(CF 3 ) 2 } 2 were commercialized starting in 1990. [ 47 ] [ 48 ]
The first asymmetric catalyst followed in 1993, [ 49 ] with a Schrock catalyst modified with a BINOL ligand in a norbornadiene ROMP leading to a highly stereoregular cis, isotactic polymer. | https://en.wikipedia.org/wiki/Olefin_metathesis
Oleochemistry is the study of vegetable oils and animal oils and fats , and oleochemicals derived from these fats and oils. The resulting products are called oleochemicals (from Latin: oleum "olive oil"). The major product of this industry is soap, approximately 8.9×10⁶ tons of which were produced in 1990. Other major oleochemicals include fatty acids , fatty acid methyl esters , fatty alcohols and fatty amines . Glycerol is a side product of all of these processes. [ 1 ] Intermediate chemical substances produced from these basic oleochemical substances include alcohol ethoxylates , alcohol sulfates, alcohol ether sulfates, quaternary ammonium salts , monoacylglycerols (MAG), diacylglycerols (DAG), structured triacylglycerols (TAG), sugar esters, and other oleochemical products.
As the price of crude oil rose in the late 1970s, [ 2 ] manufacturers switched from petrochemicals to oleochemicals [ 3 ] because plant-based lauric oils processed from palm kernel oil were cheaper. Since then, palm kernel oil is predominantly used in the production of laundry detergent and personal care items like toothpaste, soap bars, shower cream and shampoo. [ 4 ]
Important processes in oleochemical manufacturing include hydrolysis and transesterification, among others. [ 1 ]
The splitting (or hydrolysis) of triglycerides produces fatty acids and glycerol, following this equation:
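For illustration, the hydrolysis can be written in generic form, with R denoting the fatty acid chains (this generic equation is supplied here as an illustration rather than reproduced from the source):
C3H5(OOCR)3 (triglyceride) + 3 H2O → C3H5(OH)3 (glycerol) + 3 RCOOH (fatty acids)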
To this end, hydrolysis is conducted in water at 250 °C. The cleavage of triglycerides with base proceeds more quickly than hydrolysis; that process is saponification . Saponification, however, produces soap, whereas the desired products of hydrolysis are the fatty acids.
In transesterification , fats react with alcohols (R'OH) instead of with water as in hydrolysis. Glycerol is produced together with the fatty acid esters . Most typically, the reaction uses methanol (MeOH) to give fatty acid methyl esters :
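In generic form, again written here for illustration with R denoting the fatty acid chains, the methanolysis is:
C3H5(OOCR)3 (triglyceride) + 3 CH3OH (methanol) → C3H5(OH)3 (glycerol) + 3 RCOOCH3 (fatty acid methyl esters)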
FAMEs are less viscous than the precursor fats and can be purified to give the individual fatty acid esters, e.g. methyl oleate vs methyl palmitate.
The fatty acids or fatty esters are susceptible to hydrogenation , which converts unsaturated fatty acids into saturated fatty acids . [ 1 ] The acids or esters can also be reduced to fatty alcohols . For some applications, fatty acids are converted to fatty nitriles. Hydrogenation of these nitriles gives fatty amines , which have a variety of applications. [ 5 ]
Liquid oil can also be immobilized in a 3D-network provided by various molecules called oleogelators. [ 6 ]
The largest application for oleochemicals, about 30% of market share for fatty acids and 55% for fatty alcohols, is for making soaps and detergents . [ 7 ] : 21 Lauric acid is used to produce sodium lauryl sulfate and related compounds, which are used to make soaps and other personal care products.
Other applications of oleochemicals include the production of lubricants , solvents , biodiesel and bioplastics . Due to the use of methyl esters in biodiesel production, they represent the fastest growing sub-sector of oleochemical production in recent years. [ 7 ] : 15
Through the 1996 creation of Novance and the 2008 acquisition of Oleon , Avril Group has dominated the European market of oleochemistry. [ 8 ]
Southeast Asian countries' rapid production growth of palm oil and palm kernel oil in the 1980s spurred the oleochemical industry in Malaysia, Indonesia, and Thailand, and many oleochemical plants were built. Though a nascent and small industry when pitted against the big detergent giants in the US and Europe, oleochemical companies in southeast Asia had a competitive edge in cheap ingredients. [ 9 ] The US fatty chemical industry found it difficult to consistently maintain acceptable levels of profits. Competition was intense, with market shares divided among many companies, and neither imports nor exports played a significant role. [ 10 ] By the late 1990s, giants like Henkel , Unilever , and Petrofina sold their oleochemical factories to focus on higher-profit activities like retail of consumer goods. Since the European outbreak of 'mad cow disease' (or bovine spongiform encephalopathy ) in 2000, tallow has been replaced for many uses by vegetable oleic fatty acids, such as palm kernel and coconut oils. [ 7 ] : 24 | https://en.wikipedia.org/wiki/Oleochemistry
An olfactory receptor neuron (ORN), also called an olfactory sensory neuron (OSN), is a sensory neuron within the olfactory system . [ 2 ]
Humans have between 10 and 20 million olfactory receptor neurons (ORNs). [ 3 ] In vertebrates , ORNs are bipolar neurons with dendrites facing the external surface of the cribriform plate and axons that pass through the cribriform foramina to terminate in the olfactory bulbs . The ORNs are located in the olfactory epithelium in the nasal cavity. The cell bodies of the ORNs are distributed among the stratified layers of the olfactory epithelium. [ 4 ]
Many tiny hair-like non-motile cilia protrude from the olfactory receptor cell's dendrites . The dendrites extend to the olfactory epithelial surface and each ends in a dendritic knob from which around 20 to 35 cilia protrude. The cilia have a length of up to 100 micrometres and with the cilia from other dendrites form a meshwork in the olfactory mucus . [ 5 ] The surface of the cilia is covered with olfactory receptors , a type of G protein-coupled receptor . Each olfactory receptor cell expresses only one type of olfactory receptor (OR), but many separate olfactory receptor cells express ORs which bind the same set of odors. The axons of olfactory receptor cells which express the same OR converge to form glomeruli in the olfactory bulb . [ 6 ]
ORs, which are located on the membranes of the cilia, have been classified as a complex type of ligand-gated metabotropic channels. [ 7 ] There are approximately 1000 different genes that code for the ORs, making them the largest gene family. An odorant will dissolve into the mucus of the olfactory epithelium and then bind to an OR. ORs can bind to a variety of odor molecules, with varying affinities. The difference in affinities causes differences in activation patterns, resulting in unique odorant profiles. [ 8 ] [ 9 ] The activated OR in turn activates the intracellular G-protein GOLF ( GNAL ), which stimulates adenylate cyclase ; the resulting production of cyclic AMP (cAMP) opens ion channels in the cell membrane , resulting in an influx of sodium and calcium ions into the cell and an efflux of chloride ions. This influx of positive ions and efflux of negative ions causes the neuron to depolarize, generating an action potential .
The olfactory receptor neuron has a fast working negative feedback response upon depolarization . When the neuron is depolarizing, the CNG ion channel is open allowing sodium and calcium to rush into the cell. The influx of calcium begins a cascade of events within the cell. Calcium first binds to calmodulin to form CaM . CaM will then bind to the CNG channel and close it, stopping the sodium and calcium influx. [ 10 ] CaMKII will be activated by the presence of CaM, which will phosphorylate ACIII and reduce cAMP production. [ 11 ] CaMKII will also activate phosphodiesterase , which will then hydrolyze cAMP. [ 12 ] The effect of this negative feedback response inhibits the neuron from further activation when another odor molecule is introduced.
A widely publicized study suggested that humans can detect more than one trillion different odors. [ 13 ] This finding has been disputed. Critics argued that the methodology used for the estimation was fundamentally flawed, showing that applying the same argument to better-understood sensory modalities, such as vision or audition, leads to wrong conclusions. [ 14 ] Other researchers have also shown that the result is extremely sensitive to the precise details of the calculation, with small variations changing the result over dozens of orders of magnitude, possibly going as low as a few thousand. [ 15 ] The authors of the original study have argued that their estimate holds as long as it is assumed that odor space is sufficiently high-dimensional. [ 16 ] | https://en.wikipedia.org/wiki/Olfactory_receptor_neuron
Olga García Mancheño is an organic chemistry professor at the University of Münster in Germany. [ 1 ] [ 2 ] García Mancheño directs an organic chemistry research group at University of Münster that focuses on development of new catalytic methods with the goal of developing sustainable synthetic routes to accomplish carbon-hydrogen functionalization, organic chemical rearrangements, and photocatalyzed chemical reactions. [ 3 ]
García Mancheño earned her bachelor's degree in 2001 from the Faculty of Sciences of the Autonomous University of Madrid in Madrid, Spain. She continued at the Autonomous University of Madrid to earn her Ph.D. in 2005 under the mentorship of Juan Carlos Carretero. [ 4 ] She continued her training in organic chemistry as a postdoctoral researcher in the lab of Carsten Bolm at RWTH Aachen University in Aachen, Germany. [ 4 ] She completed her habilitation at the University of Münster mentored by Frank Glorius , and then worked in a temporary professorship at the University of Göttingen in the city of Göttingen, Germany before acquiring her first permanent faculty position. [ 4 ] She was an assistant professor of organic chemistry at the University of Regensburg in Bavaria, Germany from 2013 to 2017. In 2017, García Mancheño became a professor of organic chemistry at the University of Münster , in Münster, North Rhine-Westphalia, Germany. [ 1 ] [ 2 ]
García Mancheño is head of a research group at the University of Münster that focuses on developing new catalysts to accomplish organic chemical transformations. [ 3 ] She has authored several review articles in peer-reviewed journals on topics in organocatalytic chemistry, [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] and is the editor of a textbook on anion-binding catalysts. [ 12 ]
García Mancheño was successful in acquiring funding from the European Research Council in 2017 to start her research program at the University of Münster . [ 13 ] She has been a speaker at several training events that help other early-career scientists in Germany acquire funding for their research programs. In 2018 she was a speaker at the Interactive Information Event: ERC Consolidator Grant at the University of Münster to share advice about applying for that specific grant opportunity. [ 14 ] She was invited by the German Fulbright Association and Research Corporation for Science Advancement to speak at workshops aimed at preparing university professors in Germany to be successful. She spoke at the Fulbright-Cottrell Junior Faculty Professional Development Workshops in 2018 (Berlin) [ 15 ] and in 2019 (Göttingen). [ 16 ]
García Mancheño has received the following honors and awards during her career: | https://en.wikipedia.org/wiki/Olga_García_Mancheño |
Oligoclonal antibodies are an emerging immunological treatment relying on the combined use of several monoclonal antibodies (mAb) in one single drug. [ 1 ] The composition can be made of mAbs targeting different epitopes of the same protein (homo-combination) or mAbs targeting different proteins (hetero-combination). It mimics the natural polyclonal humoral immune response in order to achieve better treatment efficacy. This strategy is most useful in infections and in cancer treatment, as it helps overcome acquired resistance by pathogens and the plasticity of cancers. [ 2 ]
Oligoclonal antibody treatment is a part of the serotherapy strategy (or antiserum).
19th century: Serotherapy was initiated thanks to Shibasaburo Kitasato and Emil von Behring in Germany, and Emile Roux in France. It is the administration of animal or human serum that was previously exposed to a pathogen and thus contains antibodies against it and will help the patient to fight infection. [ 2 ]
1975 and 1986: The first mAb was produced by the hybridoma technique and then fully licensed. This was a great advance, since it allows targeting of a specific epitope that can be shared among several diseases. [ 3 ]
1982: Combination of two antibodies to enhance the immune response against viruses.
2000s: Several research teams came up with the idea of combining antibodies against different epitopes of the same receptor in cancer treatment, particularly in anti- EGFR , anti- HER2 or anti- cMET combinations.
2010: Combination of two antibodies against immune checkpoints to enhance the cytotoxic T lymphocyte response and inhibit the suppressive effect of regulatory T lymphocytes on the immune response. [ 2 ]
2012: First oligoclonal antibody combination was approved for use. It is composed of trastuzumab and pertuzumab both targeting HER-2 in breast cancer. [ 2 ]
Numerous studies on animal models or in clinical trials are currently ongoing for treatment of infections and cancers. [ 2 ] [ 4 ]
In infection, oligoclonal treatment may be used to directly target the pathogen (e.g. surface markers on viruses [ 5 ] or bacteria) or to neutralize toxins (e.g. botulinum neurotoxins , [ 6 ] Clostridioides difficile toxins [ 7 ] ).
Many pathogens show increasing resistance to currently available drugs, especially antibiotics. This is particularly true for bacteria, but they harbor many membrane surface markers that can be targeted by antibodies. Oligoclonal treatment is recognized as having the potential to address this issue by targeting multiple surface proteins; the antibodies can still bind after a protein mutates, even if the affinity is lowered. However, most of these treatments are still in the stage of clinical trials. [ 1 ]
In cancer treatment, several targets and strategies can be used: [ 4 ]
Today, more than 300 antibody combinations are undergoing phase II or phase III clinical trials for various targets and cancer types (both solid and liquid). Most of them target immune checkpoints (CTLA-4, PD1/PD-L1, ...). [ 4 ] The only oligoclonal antibody treatment against immune checkpoints currently approved is the cocktail of nivolumab (anti-PD1 antibody) and ipilimumab (anti-CTLA-4 antibody). It is used to treat melanomas, [ 8 ] low-risk renal cancer and colorectal cancer. [ 4 ] This combination is also in a phase III clinical trial for the treatment of non-small cell lung cancer, where it shows good efficacy. [ 9 ]
Treatments of non-small cell lung cancer have shown higher efficacy in patients whose tumors have a heavy mutational background. This underlines the potential of oligoclonal treatments to tackle cancer plasticity. [ 10 ] | https://en.wikipedia.org/wiki/Oligoclonal_antibody
Oligoclonal bands (OCBs) are bands of immunoglobulins that are seen when a patient's blood serum , or cerebrospinal fluid (CSF) is analyzed. They are used in the diagnosis of various neurological and blood diseases. Oligoclonal bands are present in the CSF of more than 95% of patients with clinically definite multiple sclerosis . [ 1 ]
Two methods of analysis are possible: (a) protein electrophoresis , a method of analyzing the composition of fluids, also known as "SDS-PAGE (sodium dodecyl sulphate polyacrylamide gel electrophoresis)/ Coomassie blue staining", and (b) the combination of isoelectric focusing /silver staining. The latter is more sensitive. [ 2 ]
For the analysis of cerebrospinal fluid, a sample is first collected via lumbar puncture (LP). Normally it is assumed that all the proteins that appear in the CSF, but are not present in the serum, are produced intrathecally (inside the central nervous system). Therefore, it is normal to subtract bands in serum from bands in CSF when investigating CNS diseases. A sample of blood serum is usually obtained from a clotted blood sample taken around the time of the LP.
OCBs are especially important for multiple sclerosis (MS). In MS, normally only OCBs made of immunoglobulin G antibodies are considered, though sometimes other proteins can be taken into account, like lipid-specific immunoglobulin M . [ 3 ] [ 4 ] The presence of these IgM OCBs is associated with a more severe course. [ 5 ]
Typically for an OCB analysis, the CSF is concentrated and the serum is diluted. After this dilution/concentration, prealbumin appears higher in the CSF column. Albumin is typically the dominant band in both fluids. Transferrin is another prominent protein in the CSF column because its small molecular size allows it to filter easily into the CSF. CSF has a relatively higher concentration of prealbumin than does serum. As expected, large proteins are absent from the CSF column. After all these bands are localized, OCBs should be assessed in the γ region, which normally hosts a small group of polyclonal immunoglobulins. [ 6 ]
New techniques like "capillary isoelectric focusing immunoassay" are able to detect IgG OCBs in more than 95% of multiple sclerosis patients. [ 7 ]
Even more than 12 OCBs can appear in MS. [ 8 ] Each one of them represents antibody proteins (or protein fragments) secreted by plasma cells , although why exactly these bands are present, and which proteins these bands represent, has not yet been fully elucidated. The target antigens for these antibodies are not easy to find because this requires isolating a single kind of protein in each band, though new techniques are able to do so. [ 9 ]
In 40% of MS patients with OCBs, antibodies specific to the viruses HHV-6 and EBV have been found. [ 10 ]
HHV-6 specific OCBs have also been found in other demyelinating diseases. [ 11 ] [ 12 ] A lytic protein of HHV-6A virus was identified as the target of HHV-6 specific oligoclonal bands. [ 13 ]
Though early theories assumed that the OCBs were somehow pathogenic autoantigens, recent research has shown that the IgG present in the OCBs are antibodies against debris, and therefore, OCBs seem to be just a secondary effect of MS. [ 14 ] Nevertheless, OCBs remain useful as a biomarker.
Oligoclonal bands are an important indicator in the diagnosis of multiple sclerosis . Up to 95% of all patients with multiple sclerosis have observable oligoclonal bands, [ 15 ] at least among those with European ancestry. [ 16 ] The last available reports in 2017 pointed to a sensitivity of 98% and a specificity of 87% for differential diagnosis versus MS mimickers (the specificity with respect to the unselected population should be equal or higher). [ 17 ]
Another application of OCBs is as a tool to classify patients. It has long been known that OCB-negative MS patients have a slower disease evolution. Some reports indicate that the underlying condition that causes the MS lesions in these patients is different. There are four pathological patterns of damage, and in the majority of patients with pattern II and III brain lesions oligoclonal bands are absent or only transiently present. [ 18 ]
It has been reported that oligoclonal bands are nearly absent in patients with pattern II and pattern III lesion types. [ 19 ]
Six groups of patients are usually separated, based on OCBs: [ 20 ]
Types 2 and 3 indicate intrathecal synthesis; the rest are considered negative results (no MS).
The main importance of oligoclonal bands has been to demonstrate the intrathecal production of immunoglobulins (IgGs) for establishing an MS diagnosis. Currently, alternative methods for detecting this intrathecal synthesis have been published, and therefore OCB testing has lost some of its importance in this area.
An especially interesting method is free light chains (FLC), especially the kappa-FLCs (kFLCs). Several authors have reported that nephelometric and ELISA FLC determination is comparable with OCBs as a marker of IgG synthesis, and kFLCs behave even better than oligoclonal bands. [ 21 ]
Another alternative to oligoclonal bands for MS diagnosis is the MRZ reaction (MRZR), a polyspecific antiviral immune response against the measles , rubella and zoster viruses, first described in 1992. [ 22 ]
In some reports the MRZR showed a lower sensitivity than OCB (70% vs. 100%), but a higher specificity (92% vs. 69%) for MS. [ 22 ]
The presence of one band (a monoclonal band) may be considered serious, such as lymphoproliferative disease , or may simply be normal—it must be interpreted in the context of each specific patient. More bands may reflect the presence of a disease.
Oligoclonal bands may be found in: | https://en.wikipedia.org/wiki/Oligoclonal_band |
An oligocrystalline material has a microstructure consisting of a few coarse grains, often columnar and parallel to the longitudinal ingot axis. This microstructure can be found in ingots produced by electron beam melting (EBM). [ 1 ]
| https://en.wikipedia.org/wiki/Oligocrystalline_material
The oligodynamic effect (from Greek oligos , "few", and dynamis , "force") is a biocidal effect of metals , especially heavy metals , that occurs even in low concentrations. This effect is attributed to the antibacterial behavior of metal ions, which are absorbed by bacteria upon contact and damage their cell membranes . [ 1 ]
In modern times, the effect was observed by Carl Nägeli , although he did not identify the cause. [ 2 ] Brass doorknobs , brass handrails , and silverware all exhibit this effect to an extent.
The metals react with thiol (-SH) or amine (-NH (1,2,3) ) groups of proteins, a mode of action to which microorganisms may develop resistance . Such resistance may be transmitted by plasmids . [ 3 ]
Aluminium has been found to compete with iron and magnesium and bind to DNA, membranes, or cell walls, leading to its toxic effect on microbes, such as cyanobacteria, soil bacteria and mycorrhizal fungi. [ 4 ]
Aluminium triacetate ( Burow's solution ) is used as an astringent mild antiseptic . [ 5 ]
Orthoesters of diarylstibinic acids are fungicides and bactericides , used in paints , plastics , and fibers . [ 6 ] Trivalent organic antimony was used in therapy for schistosomiasis . [ 7 ]
For many decades, arsenic was used medicinally to treat syphilis . It is still used in sheep dips , rat poisons , wood preservatives , weed killers , and other pesticides . Arsenic is poisonous if it enters the human body. [ 8 ]
Barium polysulfide is a fungicide and acaricide used in fruit and grape growing. [ 9 ]
Bismuth compounds have been used because of their astringent , antiphlogistic , bacteriostatic , and disinfecting actions. In dermatology bismuth subgallate is still used in vulnerary salves and powders as well as in antimycotics. [ 10 ] In the past, bismuth has also been used to treat syphilis and malaria . [ 11 ]
Boric acid esters derived from glycols (example, organo-borate formulation, Biobor JF ) are being used for the control of microorganisms in fuel systems containing water. [ 12 ]
Brass vessels release a small amount of copper ions into stored water, killing fecal bacteria present at counts as high as 1 million bacteria per milliliter. [ 13 ]
Copper sulfate mixed with lime ( Bordeaux mixture ) is used as a fungicide and antihelminthic . [ 14 ] Copper sulfate is used chiefly to destroy green algae ( algicide ) that grow in reservoirs, stock ponds, swimming pools, and fish tanks. Copper 8-hydroxyquinoline is sometimes included in paint to prevent mildew . [ 15 ]
Paint containing copper is used on boat bottoms to prevent barnacle growth ( biofouling ).
Copper also has the ability to destroy viruses, such as influenza viruses, noroviruses or human immunodeficiency virus (HIV). [ 16 ]
Gold is used in dental inlays and inhibits the growth of bacteria. [ 17 ]
Physicians prescribed various forms of lead to heal ailments ranging from constipation to infectious diseases such as the plague . Lead was also used to preserve or sweeten wine. [ 18 ] Lead arsenate is used in insecticides and herbicides. [ 19 ] Some organic lead compounds are used as industrial biocides: thiomethyl triphenyllead is used as an antifungal agent, cotton preservative, and lubricant additive; thiopropyl triphenyllead as a rodent repellant; tributyllead acetate as a wood and cotton preservative; tributyllead imidazole as a lubricant additive and cotton preservative. [ 20 ]
Phenylmercuric borate and acetate were used for disinfecting mucous membranes at an effective concentration of 0.07% in aqueous solutions. Due to toxicological and ecotoxicological reasons phenylmercury salts are no longer in use. However, some surgeons use mercurochrome despite toxicological objections. [ 3 ] Mercurochrome is still available to purchase in Australia to use on minor wounds. Dental amalgam used in fillings inhibits bacterial reproduction. [ 13 ]
Organic mercury compounds have been used as topical disinfectants ( thimerosal , nitromersol , and merbromin ) and preservatives in medical preparations ( thimerosal ) and grain products (both methyl and ethyl mercurials ). Mercury was used in the treatment of syphilis . Calomel was commonly used in infant teething powders in the 1930s and 1940s. Mercurials are also used agriculturally as insecticides and fungicides . [ 21 ]
The toxicity of nickel to bacteria, yeasts, and fungi differs considerably. [ 22 ]
The metabolism of bacteria is adversely affected by silver ions at concentrations of 0.01–0.1 mg/L. Therefore, even sparingly soluble silver compounds, such as silver chloride , act as bactericides or germicides, whereas the far less soluble silver sulfide does not. In the presence of atmospheric oxygen, metallic silver also has a bactericidal effect, owing to the formation of silver oxide , which is soluble enough to release an effective concentration of silver ions. Even objects with a solid silver surface (e.g., table silver, silver coins, or silver foil) have a bactericidal effect. Silver drinking vessels were carried by military commanders on expeditions for protection against disease. It was once common to place silver foil or even silver coins on wounds for the same reason. [ 23 ]
Silver sulfadiazine is used as an antiseptic ointment for extensive burns. An equilibrium dispersion of colloidal silver with dissolved silver ions can be used to purify drinking water at sea. [ 3 ] Silver is incorporated into medical implants and devices such as catheters . Surfacine ( silver iodide ) is a relatively new antimicrobial for application to surfaces. Silver-impregnated wound dressings have proven especially useful against antibiotic-resistant bacteria. Silver nitrate is used as a hemostatic, antiseptic and astringent. At one time, many states [ clarification needed ] required that the eyes of newborns be treated with a few drops of silver nitrate to guard against an infection of the eyes called gonorrheal neonatal ophthalmia , which the infants might have contracted as they passed through the birth canal. Silver ions are increasingly incorporated into many hard surfaces, such as plastics and steel, as a way to control microbial growth on items such as toilet seats, stethoscopes, and even refrigerator doors. Among the newer products being sold are plastic food containers infused with silver nanoparticles , which are intended to keep food fresher, and silver-infused athletic shirts and socks, which claim to minimize odors. [ 15 ] [ 17 ]
Thallium compounds such as thallium sulfate have been used for impregnating wood and leather to kill fungal spores and bacteria, and for the protection of textiles from attack by moths. [ 24 ] Thallium sulfate has been used as a depilatory and in the treatment of venereal disease, skin fungal infections, and tuberculosis. [ 25 ]
Tetrabutyltin is used as an antifouling paint for ships, for the prevention of slimes in industrial recirculating water systems, for combating freshwater snails that cause bilharzia , as a wood and textile preservative, and as a disinfectant. Tricyclohexyltin hydroxide is used as an acaricide. Triphenyltin hydroxide and triphenyltin acetate are used as fungicides. [ 26 ]
Zinc oxide is used as a weak antiseptic and in paints as a white pigment and mold-growth inhibitor. [ 27 ] Zinc chloride is a common ingredient in mouthwashes and deodorants, and zinc pyrithione is an ingredient in antidandruff shampoos. Galvanized (zinc-coated) fittings on roofs impede the growth of algae. Copper- and zinc-treated shingles are available. [ 15 ] Zinc iodide and zinc sulfate are used as topical antiseptics. [ 28 ]
Besides the individual toxic effects of each metal, a wide range of metals are nephrotoxic in humans and/or in animals. [ 29 ] Some metals and their compounds are carcinogenic to humans. [ citation needed ] A few metals, such as lead and mercury, can cross the placental barrier and adversely affect fetal development . [ 30 ] Several (cadmium, zinc, copper, and mercury) can induce special protein complexes called metallothioneins . [ 31 ] | https://en.wikipedia.org/wiki/Oligodynamic_effect |
An oligoester is an ester oligomer chain containing a small number of repeating ester units ( monomers ). [ 1 ] Oligoesters are short analogs of polymeric polyesters .
An example is oligo-( R )-3-hydroxybutyrate. [ 2 ]
This organic chemistry article is a stub . | https://en.wikipedia.org/wiki/Oligoester
The term oligolecty is used in pollination ecology to refer to bees that exhibit a narrow, specialized preference for pollen sources, typically limited to a single family or genus of flowering plants. The preference may occasionally extend broadly to multiple genera within a single plant family, or be as narrow as a single plant species. When the choice is very narrow, the term monolecty is sometimes used; it originally meant restriction to a single plant species but has recently been broadened to include examples where the host plants are related members of a single genus. [ 1 ] The opposite term, polylectic , refers to species that collect pollen from a wide range of species. The most familiar example of a polylectic species is the domestic honey bee .
Oligolectic pollinators are often called oligoleges or simply specialist pollinators , and this behavior is especially common in the bee families Andrenidae and Halictidae , though there are thousands of species in hundreds of genera, in essentially all known bee families; in certain areas of the world, such as deserts, oligoleges may represent half or more of all the resident bee species. [ 2 ] Attempts have been made to determine whether a narrow host preference is due to an inability of the bee larvae to digest and develop on a variety of pollen types, or a limitation of the adult bee's learning and perception (i.e., they simply do not recognize other flowers as potential food sources), and most of the available evidence suggests the latter. However, a few plants whose pollen contains toxic substances (e.g., Toxicoscordion and related genera in the Melanthieae ) are visited by oligolectic bees, and these may fall into the former category. The evidence from large-scale phylogenetic analyses of bee evolution suggests that, for most groups of bees, oligolecty is the ancestral condition and polylectic lineages arose from among those ancestral specialists. [ 2 ]
There are some cases where oligoleges collect their host plant's pollen as larval food but, for various reasons, rarely or never actually pollinate the flowers. A well-studied example is the relationship between the yellow passionflower ( Passiflora lutea ) and the passionflower bee ( Anthemurgus passiflorae ) in Texas . [ 3 ] | https://en.wikipedia.org/wiki/Oligolecty |
In chemistry and biochemistry , an oligomer ( / ə ˈ l ɪ ɡ ə m ər / ⓘ ) is a molecule that consists of a few repeating units which could be derived, actually or conceptually, from smaller molecules, monomers . [ 1 ] [ 2 ] [ 3 ] The name is composed of Greek elements oligo- , "a few" and -mer , "parts". An adjective form is oligomeric . [ 3 ]
The oligomer concept is contrasted to that of a polymer , which is usually understood to have a large number of units, possibly thousands or millions. However, there is no sharp distinction between these two concepts. One proposed criterion is whether the molecule's properties vary significantly with the removal of one or a few of the units. [ 3 ]
An oligomer with a specific number of units is referred to by the Greek prefix denoting that number, with the ending -mer : thus dimer , trimer , tetramer , pentamer , and hexamer refer to molecules with two, three, four, five, and six units, respectively. The units of an oligomer may be arranged in a linear chain (as in melam , a dimer of melamine ); a closed ring (as in 1,3,5-trioxane , a cyclic trimer of formaldehyde ); or a more complex structure (as in tellurium tetrabromide , which exists as the tetramer Te 4 Br 16 with a cube -like core). If the units are identical, one has a homo-oligomer ; otherwise one may use hetero-oligomer . An example of a homo-oligomeric protein is collagen , which is composed of three identical protein chains.
Some biologically important oligomers are macromolecules like proteins or nucleic acids ; for instance, hemoglobin is a protein tetramer. An oligomer of amino acids is called an oligopeptide or just a peptide . An oligosaccharide is an oligomer of monosaccharides (simple sugars). An oligonucleotide is a short single-stranded fragment of nucleic acid such as DNA or RNA , or similar fragments of analogs of nucleic acids such as peptide nucleic acid or Morpholinos .
The units of an oligomer may be connected by covalent bonds , which may result from bond rearrangement or condensation reactions , or by weaker forces such as hydrogen bonds . The term multimer ( / ˈ m ʌ l t ɪ m ər / ) is used in biochemistry for oligomers of proteins that are not covalently bound. The major capsid protein VP1 that comprises the shell of polyomaviruses is a self-assembling multimer of 72 pentamers held together by local electric charges.
Many oils are oligomeric, such as liquid paraffin . Plasticizers are oligomeric esters widely used to soften thermoplastics such as PVC . They may be made from monomers by linking them together, or by separation from the higher fractions of crude oil . Polybutene is an oligomeric oil used to make putty .
Oligomerization is a chemical process that converts monomers to macromolecular complexes through a finite degree of polymerization . [ 3 ] Telomerization is an oligomerization carried out under conditions that result in chain transfer , limiting the size of the oligomers. [ 4 ] [ 3 ] (This concept is not to be confused with the formation of a telomere , a region of highly repetitive DNA at the end of a chromosome .)
In the oil and gas industry, green oil refers to oligomers formed in the C2, C3, and C4 hydrogenation reactors of ethylene plants and other petrochemical production facilities; it is a mixture of C4 to C20 unsaturated and reactive components, with about 90% aliphatic dienes and 10% alkanes plus alkenes . [ 5 ] Different heterogeneous and homogeneous catalysts are operative in producing green oils via the oligomerization of alkenes. [ 6 ] | https://en.wikipedia.org/wiki/Oligomer
Oligomer Restriction (abbreviated OR ) is a procedure to detect an altered DNA sequence in a genome . A labeled oligonucleotide probe is hybridized to a target DNA, and then treated with a restriction enzyme . If the probe exactly matches the target, the restriction enzyme will cleave the probe, changing its size. If, however, the target DNA does not exactly match the probe, the restriction enzyme will have no effect on the length of the probe. The OR technique, now rarely performed, was closely associated with the development of the popular polymerase chain reaction (PCR) method.
In part 1a of the schematic the oligonucleotide probe, labeled on its left end (asterisk), is shown on the top line. It is fully complementary to its target DNA (here taken from the human β-hemoglobin gene ), as shown on the next line. Part of the probe includes the Recognition site for the restriction enzyme Dde I (underlined).
In part 1b, the restriction enzyme has cleaved the probe and its target (Dde I leaves three bases unpaired at each end). The labeled end of the probe is now just 8 bases in length, and is easily separated by Gel electrophoresis from the uncut probe, which was 40 bases long.
In part 2, the same probe is shown hybridized to a target DNA which includes a single base mutation (here the mutation responsible for Sickle Cell Anemia , or SCA). The mismatched hybrid no longer acts as a recognition site for the restriction enzyme, and the probe remains at its original length.
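The decision logic of the readout can be sketched in a few lines of code. The fragment below is an illustration only: the 40-mer probe, the position of the labeled end, and the helper function are invented for demonstration, and only the Dde I recognition pattern (CTNAG) and the 8-base versus 40-base fragment sizes follow the description above.

```python
# Toy model of the Oligomer Restriction readout.  For simplicity the target is
# written in the probe's own sense, so a perfect probe/target hybrid corresponds
# to identical strings over the recognition site.
import re

DDE_I = re.compile(r"CT[ACGT]AG")   # Dde I recognition site, C^TNAG

def labeled_fragment_length(probe: str, target: str,
                            site_start: int = 5, labeled_length: int = 8) -> int:
    """Size of the 5'-labeled probe fragment after Dde I digestion of the hybrid."""
    site = probe[site_start:site_start + 5]
    matched = (DDE_I.fullmatch(site) is not None
               and target[site_start:site_start + 5] == site)
    return labeled_length if matched else len(probe)   # cleaved vs. uncut

probe  = "GACTC" + "CTGAG" + "G" * 30        # hypothetical 40-mer matching the normal allele
normal = probe                               # perfect hybrid -> probe is cleaved
mutant = probe[:8] + "T" + probe[9:]         # single-base change inside the Dde I site
print(labeled_fragment_length(probe, normal))  # 8
print(labeled_fragment_length(probe, mutant))  # 40
```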
The Oligomer Restriction technique was developed as a variation of the Restriction Fragment Length Polymorphism (RFLP) assay method, with the hope of avoiding the laborious Southern blotting step used in RFLP analysis. OR was conceived by Randall Saiki and Henry Erlich in the early 1980s, working at Cetus Corporation in Emeryville, California . It was patented in 1984 [ 1 ] and published in 1985, [ 2 ] having been applied to the genomic mutation responsible for Sickle Cell Anemia . OR was soon replaced by the more general technique of Allele Specific Oligonucleotide (ASO) probes. [ 3 ]
The Oligomer Restriction method was beset by a number of problems.
Despite its limitations, the OR technique benefited from its close association with the development of the polymerase chain reaction. Kary Mullis , who also worked at Cetus, had synthesized the oligonucleotide probes being tested by Saiki and Erlich. Aware of the problems they were encountering, he envisioned an alternative method for analyzing the SCA mutation that would use components of the Sanger DNA sequencing technique. Realizing the difficulty of hybridizing an oligonucleotide primer to a single location in the genome, he considered using a second primer on the opposite strand. He then generalized that process and realized that repeated extensions of the two primers would lead to an exponential increase in the segment of DNA between the primers - a chain reaction of replication catalyzed by DNA polymerase . [ 4 ] [ 5 ]
As Mullis encountered his own difficulties in demonstrating PCR , [ 6 ] he joined an existing group of researchers that were addressing the problems with OR. Together, they developed the combined PCR-OR assay. Thus, OR became the first method used to analyze PCR-amplified genomic DNA.
Mullis also encountered difficulties in publishing the basic idea of PCR (scientific journals rarely publish concepts without accompanying results). When his manuscript for the journal Nature was rejected, the basic description of PCR was hurriedly added to the paper originally intended to report the OR method (Mullis was also a co-author there). This OR paper [ 2 ] thus became the first publication of PCR, and for several years would become the report most cited by other researchers. | https://en.wikipedia.org/wiki/Oligomer_restriction |
Oligonucleotidase ( EC 3.1.13.3 , oligoribonuclease ) is an exoribonuclease derived from Flammulina velutipes . [ 1 ] This enzyme catalyses the following chemical reaction
This biochemistry article is a stub . | https://en.wikipedia.org/wiki/Oligonucleotidase
Oligonucleotide synthesis is the chemical synthesis of relatively short fragments of nucleic acids with defined chemical structure ( sequence ). The technique is extremely useful in current laboratory practice because it provides a rapid and inexpensive access to custom-made oligonucleotides of the desired sequence. Whereas enzymes synthesize DNA and RNA only in a 5' to 3' direction , chemical oligonucleotide synthesis does not have this limitation, although it is most often carried out in the opposite, 3' to 5' direction. Currently, the process is implemented as solid-phase synthesis using phosphoramidite method and phosphoramidite building blocks derived from protected 2'-deoxynucleosides ( dA , dC , dG , and T ), ribonucleosides ( A , C , G , and U ), or chemically modified nucleosides, e.g. LNA or BNA .
To obtain the desired oligonucleotide, the building blocks are sequentially coupled to the growing oligonucleotide chain in the order required by the sequence of the product (see Synthetic cycle below). The process has been fully automated since the late 1970s. Upon the completion of the chain assembly, the product is released from the solid phase to solution, deprotected, and collected. The occurrence of side reactions sets practical limits for the length of synthetic oligonucleotides (up to about 200 nucleotide residues) because the number of errors accumulates with the length of the oligonucleotide being synthesized. [ 1 ] Products are often isolated by high-performance liquid chromatography (HPLC) to obtain the desired oligonucleotides in high purity. Typically, synthetic oligonucleotides are single-stranded DNA or RNA molecules around 15–25 bases in length.
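The way errors accumulate with length can be illustrated with a simple calculation: if every coupling step succeeds with the same probability, the fraction of chains that reach full length falls off geometrically with the number of residues. The sketch below assumes a flat 99% per-step coupling efficiency purely for illustration; it is not a figure quoted in this article.

```python
# Fraction of support-bound chains expected to be full length after assembling
# an n-mer, assuming a constant per-cycle coupling efficiency p.
def full_length_fraction(n_residues: int, coupling_efficiency: float = 0.99) -> float:
    return coupling_efficiency ** (n_residues - 1)

for n in (20, 60, 100, 200):
    print(f"{n:>3}-mer: ~{full_length_fraction(n):.1%} full-length product")
# 20-mer: ~82.6%   60-mer: ~55.3%   100-mer: ~37.0%   200-mer: ~13.5%
```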
Oligonucleotides find a variety of applications in molecular biology and medicine. They are most commonly used as antisense oligonucleotides, small interfering RNA , primers for DNA sequencing and amplification , probes for detecting complementary DNA or RNA via molecular hybridization , tools for the targeted introduction of mutations and restriction sites , and for the synthesis of artificial genes . An emerging application of oligonucleotide synthesis is the re-creation of viruses from sequence alone, either harmless, such as Phi X 174 , or dangerous, such as the 1918 influenza virus or SARS-CoV-2 .
The evolution of oligonucleotide synthesis saw four major methods of the formation of internucleosidic linkages and has been reviewed in the literature in great detail. [ 2 ] [ 3 ] [ 4 ]
In the early 1950s, Alexander Todd 's group pioneered H-phosphonate and phosphate triester methods of oligonucleotide synthesis. [ 5 ] [ 6 ] The reaction of compounds 1 and 2 to form H-phosphonate diester 3 is an H-phosphonate coupling in solution while that of compounds 4 and 5 to give 6 is a phosphotriester coupling (see phosphotriester synthesis below).
Thirty years later, this work inspired, independently, two research groups to adopt the H-phosphonate chemistry to the solid-phase synthesis using nucleoside H-phosphonate monoesters 7 as building blocks and pivaloyl chloride, 2,4,6-triisopropylbenzenesulfonyl chloride (TPS-Cl), and other compounds as activators. [ 7 ] [ 8 ] The practical implementation of H-phosphonate method resulted in a very short and simple synthetic cycle consisting of only two steps, detritylation and coupling (Scheme 2). Oxidation of internucleosidic H-phosphonate diester linkages in 8 to phosphodiester linkages in 9 with a solution of iodine in aqueous pyridine is carried out at the end of the chain assembly rather than as a step in the synthetic cycle. If desired, the oxidation may be carried out under anhydrous conditions. [ 9 ] Alternatively, 8 can be converted to phosphorothioate 10 [ 10 ] [ 11 ] [ 12 ] [ 13 ] or phosphoroselenoate 11 (X = Se), [ 14 ] or oxidized by CCl 4 in the presence of primary or secondary amines to phosphoramidate analogs 12 . [ 15 ] [ 16 ] The method is very convenient in that various types of phosphate modifications (phosphate/phosphorothioate/phosphoramidate) may be introduced to the same oligonucleotide for modulation of its properties. [ 17 ] [ 18 ] [ 19 ]
Most often, H-phosphonate building blocks are protected at the 5'-hydroxy group and at the amino group of nucleic bases A, C, and G in the same manner as phosphoramidite building blocks (see below). However, protection at the amino group is not mandatory. [ 9 ] [ 20 ]
In the 1950s, Har Gobind Khorana and co-workers developed a phosphodiester method where 3'- O -acetylnucleoside-5'- O -phosphate 2 (Scheme 3) was activated with N , N ' -dicyclohexylcarbodiimide (DCC) or 4-toluenesulfonyl chloride (Ts-Cl). The activated species were reacted with a 5'- O -protected nucleoside 1 to give a protected dinucleoside monophosphate 3 . [ 21 ] Upon the removal of 3'- O -acetyl group using base-catalyzed hydrolysis, further chain elongation was carried out. Following this methodology, sets of tri- and tetradeoxyribonucleotides were synthesized and were enzymatically converted to longer oligonucleotides, which allowed elucidation of the genetic code . The major limitation of the phosphodiester method consisted in the formation of pyrophosphate oligomers and oligonucleotides branched at the internucleosidic phosphate. The method seems to be a step back from the more selective chemistry described earlier; however, at that time, most phosphate-protecting groups available now had not yet been introduced. The lack of the convenient protection strategy necessitated taking a retreat to a slower and less selective chemistry to achieve the ultimate goal of the study. [ 2 ]
In the 1960s, groups led by R. Letsinger [ 22 ] and C. Reese [ 23 ] developed a phosphotriester approach. The defining difference from the phosphodiester approach was the protection of the phosphate moiety in the building block 1 (Scheme 4) and in the product 3 with 2-cyanoethyl group. This precluded the formation of oligonucleotides branched at the internucleosidic phosphate. The higher selectivity of the method allowed the use of more efficient coupling agents and catalysts, [ 24 ] [ 25 ] which dramatically reduced the length of the synthesis. The method, initially developed for the solution-phase synthesis, was also implemented on low-cross-linked "popcorn" polystyrene, [ 26 ] and later on controlled pore glass (CPG, see "Solid support material" below), which initiated a massive research effort in solid-phase synthesis of oligonucleotides and eventually led to the automation of the oligonucleotide chain assembly.
In the 1970s, substantially more reactive P(III) derivatives of nucleosides, 3'- O -chlorophosphites, were successfully used for the formation of internucleosidic linkages. [ 27 ] This led to the discovery of the phosphite triester methodology. The group led by M. Caruthers took advantage of less aggressive and more selective 1 H -tetrazolidophosphites and implemented the method on solid phase. [ 28 ] Shortly thereafter, workers from the same group further improved the method by using more stable nucleoside phosphoramidites as building blocks. [ 29 ] The use of the 2-cyanoethyl phosphite-protecting group [ 30 ] in place of a less user-friendly methyl group [ 31 ] [ 32 ] led to the nucleoside phosphoramidites currently used in oligonucleotide synthesis (see Phosphoramidite building blocks below). Many later improvements to the manufacturing of building blocks, oligonucleotide synthesizers, and synthetic protocols made the phosphoramidite chemistry a very reliable and expedient method of choice for the preparation of synthetic oligonucleotides. [ 1 ]
As mentioned above, the naturally occurring nucleotides (nucleoside-3'- or 5'-phosphates) and their phosphodiester analogs are insufficiently reactive to afford an expeditious synthetic preparation of oligonucleotides in high yields. The selectivity and the rate of the formation of internucleosidic linkages is dramatically improved by using 3'- O -( N , N -diisopropyl phosphoramidite) derivatives of nucleosides (nucleoside phosphoramidites) that serve as building blocks in phosphite triester methodology. To prevent undesired side reactions, all other functional groups present in nucleosides have to be rendered unreactive (protected) by attaching protecting groups . Upon the completion of the oligonucleotide chain assembly, all the protecting groups are removed to yield the desired oligonucleotides. Below, the protecting groups currently used in commercially available [ 33 ] [ 34 ] [ 35 ] [ 36 ] and most common nucleoside phosphoramidite building blocks are briefly reviewed:
The protection of the exocyclic amino groups has to be orthogonal to that of the 5'-hydroxy group because the latter is removed at the end of each synthetic cycle. The simplest to implement, and hence the most widely used, strategy is to install a base-labile protection group on the exocyclic amino groups. Most often, two protection schemes are used.
Non-nucleoside phosphoramidites are the phosphoramidite reagents designed to introduce various functionalities at the termini of synthetic oligonucleotides or between nucleotide residues in the middle of the sequence. In order to be introduced inside the sequence, a non-nucleosidic modifier has to possess at least two hydroxy groups, one of which is often protected with the DMT group while the other bears the reactive phosphoramidite moiety.
Non-nucleosidic phosphoramidites are used to introduce desired groups that are not available in natural nucleosides or that can be introduced more readily using simpler chemical designs. A very short selection of commercial phosphoramidite reagents is shown in Scheme for the demonstration of the available structural and functional diversity. These reagents serve for the attachment of 5'-terminal phosphate ( 1 ), [ 52 ] NH 2 ( 2 ), [ 53 ] SH ( 3 ), [ 54 ] aldehydo ( 4 ), [ 55 ] and carboxylic groups ( 5 ), [ 56 ] CC triple bonds ( 6 ), [ 57 ] non-radioactive labels and quenchers (exemplified by 6-FAM amidite 7 [ 58 ] for the attachment of fluorescein and dabcyl amidite 8 , [ 59 ] respectively), hydrophilic and hydrophobic modifiers (exemplified by hexaethyleneglycol amidite 9 [ 60 ] [ 61 ] and cholesterol amidite 10 , [ 62 ] respectively), and biotin amidite 11 . [ 63 ]
Oligonucleotide synthesis is carried out by a stepwise addition of nucleotide residues to the 5'-terminus of the growing chain until the desired sequence is assembled. Each addition is referred to as a synthesis cycle (Scheme 5) and consists of four chemical reactions: detritylation (de-blocking), coupling, capping, and oxidation.
The DMT group is removed with a solution of an acid, such as 2% trichloroacetic acid (TCA) or 3% dichloroacetic acid (DCA), in an inert solvent ( dichloromethane or toluene ). The orange-colored DMT cation formed is washed out; the step results in the solid support-bound oligonucleotide precursor bearing a free 5'-terminal hydroxyl group.
It is worth remembering that conducting detritylation for an extended time or with stronger than recommended solutions of acids leads to depurination of solid support-bound oligonucleotide and thus reduces the yield of the desired full-length product. [ citation needed ]
A 0.02–0.2 M solution of nucleoside phosphoramidite (or a mixture of several phosphoramidites) in acetonitrile is activated by a 0.2–0.7 M solution of an acidic azole catalyst, 1 H -tetrazole , 5-ethylthio-1H-tetrazole, [ 64 ] 2-benzylthiotetrazole, [ 65 ] [ 66 ] 4,5-dicyano imidazole , [ 67 ] or a number of similar compounds. More extensive information on the use of various coupling agents in oligonucleotide synthesis can be found in a recent review. [ 68 ] The mixing is usually very brief and occurs in fluid lines of oligonucleotide synthesizers (see below) while the components are being delivered to the reactors containing solid support. The activated phosphoramidite in 1.5 – 20-fold excess over the support-bound material is then brought in contact with the starting solid support (first coupling) or a support-bound oligonucleotide precursor (following couplings) whose 5'-hydroxy group reacts with the activated phosphoramidite moiety of the incoming nucleoside phosphoramidite to form a phosphite triester linkage. The coupling of 2'-deoxynucleoside phosphoramidites is very rapid and requires, on small scale, about 20 s for its completion. In contrast, sterically hindered 2'- O -protected ribonucleoside phosphoramidites require 5-15 min to be coupled in high yields. [ 47 ] [ 69 ] [ 70 ] [ 71 ] The reaction is also highly sensitive to the presence of water, particularly when dilute solutions of phosphoramidites are used, and is commonly carried out in anhydrous acetonitrile. Generally, the larger the scale of the synthesis, the lower the excess and the higher the concentration of the phosphoramidites is used. In contrast, the concentration of the activator is primarily determined by its solubility in acetonitrile and is irrespective of the scale of the synthesis. Upon the completion of the coupling, any unbound reagents and by-products are removed by washing.
The capping step is performed by treating the solid support-bound material with a mixture of acetic anhydride and 1-methylimidazole or, less often, DMAP as catalysts and, in the phosphoramidite method, serves two purposes.
The newly formed tricoordinated phosphite triester linkage is not natural and is of limited stability under the conditions of oligonucleotide synthesis. The treatment of the support-bound material with iodine and water in the presence of a weak base (pyridine, lutidine , or collidine ) oxidizes the phosphite triester into a tetracoordinated phosphate triester, a protected precursor of the naturally occurring phosphate diester internucleosidic linkage. Oxidation may be carried out under anhydrous conditions using tert-Butyl hydroperoxide [ 74 ] or, more efficiently, (1S)-(+)-(10-camphorsulfonyl)-oxaziridine (CSO). [ 75 ] [ 76 ] [ 77 ] The step of oxidation may be substituted with a sulfurization step to obtain oligonucleotide phosphorothioates (see Oligonucleotide phosphorothioates and their synthesis below). In the latter case, the sulfurization step is best carried out prior to capping.
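Taken together, the four reactions just described (detritylation, coupling, capping, and oxidation) are repeated once per residue. The Python sketch below is conceptual bookkeeping only: the chemistry is reduced to comments, nothing models yields or kinetics, and the function name is invented for illustration.

```python
# Conceptual sketch of the phosphoramidite synthesis cycle: the support-bound
# chain grows by one residue per pass, in the 3' -> 5' direction.
def assemble(sequence_5_to_3: str) -> str:
    chain = []                                  # support-bound chain, 3'-end first
    for base in reversed(sequence_5_to_3):      # chemical assembly runs 3' -> 5'
        # 1. Detritylation: acid removes the 5'-DMT group, freeing the 5'-OH
        # 2. Coupling: the activated phosphoramidite of `base` reacts with the free 5'-OH
        chain.append(base)
        # 3. Capping: acetylation blocks any unreacted 5'-OH groups (failure sequences)
        # 4. Oxidation: the phosphite triester is oxidized to a phosphate triester
    return "".join(reversed(chain))             # report the product 5' -> 3'

print(assemble("ACGTTGCA"))   # ACGTTGCA
```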
In solid-phase synthesis, an oligonucleotide being assembled is covalently bound, via its 3'-terminal hydroxy group, to a solid support material and remains attached to it over the entire course of the chain assembly. The solid support is contained in columns whose dimensions depend on the scale of synthesis and may vary between 0.05 mL and several liters. The overwhelming majority of oligonucleotides are synthesized on small scale ranging from 10 nmol to 1 μmol. More recently, high-throughput oligonucleotide synthesis where the solid support is contained in the wells of multi-well plates (most often, 96 or 384 wells per plate) became a method of choice for parallel synthesis of oligonucleotides on small scale. [ 78 ] At the end of the chain assembly, the oligonucleotide is released from the solid support and is eluted from the column or the well.
In contrast to organic solid-phase synthesis and peptide synthesis , the synthesis of oligonucleotides proceeds best on non-swellable or low-swellable solid supports. The two most often used solid-phase materials are controlled pore glass (CPG) and macroporous polystyrene (MPPS). [ 79 ]
To make the solid support material suitable for oligonucleotide synthesis, non-nucleosidic linkers or nucleoside succinates are covalently attached to the reactive amino groups in aminopropyl CPG, LCAA CPG, or aminomethyl MPPS. The remaining unreacted amino groups are capped with acetic anhydride . Typically, three conceptually different groups of solid supports are used.
Oligonucleotide phosphorothioates (OPS) are modified oligonucleotides where one of the oxygen atoms in the phosphate moiety is replaced by sulfur. Only the phosphorothioates having sulfur at a non-bridging position as shown in figure are widely used and are available commercially. The replacement of the non-bridging oxygen with sulfur creates a new center of chirality at phosphorus . In a simple case of a dinucleotide, this results in the formation of a diastereomeric pair of S p - and R p -dinucleoside monophosphorothioates whose structures are shown in Figure. In an n -mer oligonucleotide where all ( n – 1) internucleosidic linkages are phosphorothioate linkages, the number of diastereomers m is calculated as m = 2^( n – 1) . Being non-natural analogs of nucleic acids, OPS are substantially more stable towards hydrolysis by nucleases , the class of enzymes that destroy nucleic acids by breaking the bridging P-O bond of the phosphodiester moiety. This property determines the use of OPS as antisense oligonucleotides in in vitro and in vivo applications where the extensive exposure to nucleases is inevitable. Similarly, to improve the stability of siRNA , at least one phosphorothioate linkage is often introduced at the 3'-terminus of both sense and antisense strands. In chirally pure OPS, all-Sp diastereomers are more stable to enzymatic degradation than their all-Rp analogs. [ 91 ] However, the preparation of chirally pure OPS remains a synthetic challenge. [ 13 ] [ 92 ] In laboratory practice, mixtures of diastereomers of OPS are commonly used.
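As a worked example of the diastereomer count m = 2^( n – 1) given above (illustrative numbers only):

```python
# Number of phosphorothioate diastereomers for an n-mer in which every
# internucleosidic linkage is a phosphorothioate.
for n in (2, 10, 20):
    print(f"{n}-mer: {2 ** (n - 1)} diastereomers")
# 2-mer: 2    10-mer: 512    20-mer: 524288
```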
Synthesis of OPS is very similar to that of natural oligonucleotides. The difference is that the oxidation step is replaced by a sulfur transfer reaction (sulfurization) and that the capping step is performed after the sulfurization. Of the many reagents reported to be capable of efficient sulfur transfer, only three are commercially available.
In the past, oligonucleotide synthesis was carried out manually in solution or on solid phase. The solid phase synthesis was implemented using, as containers for the solid phase, miniature glass columns similar in their shape to low-pressure chromatography columns or syringes equipped with porous filters. [ 101 ] Currently, solid-phase oligonucleotide synthesis is carried out automatically using computer-controlled instruments (oligonucleotide synthesizers) and is technically implemented in column, multi-well plate, and array formats. The column format is best suited for research and large scale applications where a high-throughput is not required. [ 102 ] Multi-well plate format is designed specifically for high-throughput synthesis on small scale to satisfy the growing demand of industry and academia for synthetic oligonucleotides. [ 103 ]
Large scale oligonucleotide synthesizers were often developed by augmenting the capabilities of a preexisting instrument platform. One of the first mid-scale synthesizers appeared in the late 1980s, manufactured by the Biosearch company in Novato, CA (the 8800). This platform was originally designed as a peptide synthesizer and made use of a fluidized bed reactor, essential for accommodating the swelling characteristics of the polystyrene supports used in the Merrifield methodology. Oligonucleotide synthesis involved the use of CPG (controlled pore glass), which is a rigid support and is more suited to column reactors, as described above. The scale of the 8800 was limited by the flow rate required to fluidize the support. Some novel reactor designs, as well as higher than normal pressures, enabled the 8800 to achieve scales that would prepare 1 mmol of oligonucleotide. In the mid-1990s several companies developed platforms based on semi-preparative and preparative liquid chromatographs. These systems were well suited to a column reactor approach. In most cases, all that was required was to augment the number of fluids that could be delivered to the column: oligonucleotide synthesis requires a minimum of 10 different fluids, whereas liquid chromatographs usually accommodate 4. This was an easy design task, and some semi-automatic strategies worked without any modifications to the preexisting LC equipment. PerSeptive Biosystems as well as Pharmacia (GE) were two of several companies that developed synthesizers out of liquid chromatographs. Genomic Technologies, Inc. [ 104 ] was one of the few companies to develop a large scale platform designed, from the ground up, as an oligonucleotide synthesizer. The initial platform, called the VLSS (very large scale synthesizer), utilized large Pharmacia liquid chromatograph columns as reactors and could synthesize up to 75 mmol of material. Many oligonucleotide synthesis factories designed and manufactured their own custom platforms, and little is known about them because the designs are proprietary. The VLSS design continued to be refined and is continued in the QMaster synthesizer, [ 105 ] which is a scaled-down platform providing milligram to gram amounts of synthetic oligonucleotide.
The current practices of synthesis of chemically modified oligonucleotides on large scale have been recently reviewed. [ 106 ]
One may visualize an oligonucleotide microarray as a miniature multi-well plate where the physical dividers between the wells (plastic walls) are intentionally removed. With respect to the chemistry, synthesis of oligonucleotide microarrays differs from conventional oligonucleotide synthesis in two respects.
After the completion of the chain assembly, the solid support-bound oligonucleotide is fully protected.
To furnish a functional oligonucleotide, all the protecting groups have to be removed. The N-acyl base protection and the 2-cyanoethyl phosphate protection may be, and often are, removed simultaneously by treatment with inorganic bases or amines. However, the applicability of this method is limited by the fact that the cleavage of 2-cyanoethyl phosphate protection gives rise to acrylonitrile as a side product. Under the strongly basic conditions required for the removal of N-acyl protection, acrylonitrile is capable of alkylating nucleic bases, primarily at the N3-position of thymine and uracil residues, to give the respective N3-(2-cyanoethyl) adducts via Michael reaction . The formation of these side products may be avoided by treating the solid support-bound oligonucleotides with solutions of bases in an organic solvent, for instance, with 50% triethylamine in acetonitrile [ 109 ] or 10% diethylamine in acetonitrile. [ 110 ] This treatment is strongly recommended for medium- and large-scale preparations and is optional for syntheses on small scale, where the concentration of acrylonitrile generated in the deprotection mixture is low.
Regardless of whether the phosphate protecting groups were removed first, the solid support-bound oligonucleotides are deprotected using one of the two general approaches.
As with any other organic compound, it is prudent to characterize synthetic oligonucleotides upon their preparation. In more complex cases (research and large-scale syntheses), oligonucleotides are characterized after their deprotection and after purification. Although the ultimate approach to characterization is sequencing , a relatively inexpensive and routine procedure, considerations of cost reduction preclude its use in the routine manufacturing of oligonucleotides. In day-to-day practice, it is sufficient to obtain the molecular mass of an oligonucleotide by recording its mass spectrum . Two methods are currently widely used for characterization of oligonucleotides: electrospray mass spectrometry (ESI MS) and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry ( MALDI-TOF ). To obtain informative spectra, it is very important to exchange all metal ions that might be present in the sample for ammonium or trialkylammonium [e.g. triethylammonium, (C 2 H 5 ) 3 NH + ] ions prior to submitting a sample to analysis by either of the methods. | https://en.wikipedia.org/wiki/Oligonucleotide_synthesis
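For a rough cross-check of an ESI MS or MALDI-TOF measurement, the average molecular mass of an unmodified, 5'-hydroxyl DNA oligonucleotide can be estimated directly from its sequence. The residue masses and end-group correction below are commonly used approximate values, given here only as a sketch; dedicated calculators additionally account for modifications, counter-ions, and salt forms.

```python
# Approximate average (anhydrous) molecular mass of an unmodified 5'-OH DNA oligo.
AVG_RESIDUE_MASS = {"A": 313.21, "T": 304.20, "C": 289.18, "G": 329.21}
END_CORRECTION = -61.96   # corrects the residue sum for the terminal groups

def approx_oligo_mass(sequence: str) -> float:
    return sum(AVG_RESIDUE_MASS[base] for base in sequence.upper()) + END_CORRECTION

print(round(approx_oligo_mass("ATCGATCGATCGATCGATCG"), 2))   # ~6117 Da for this 20-mer
```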
An Oligopeptidase is an enzyme that cleaves peptides but not proteins . This property is due to its structure: the active site of this enzyme is located at the end of a narrow cavity which can only be reached by peptides.
Proteins are essential macromolecules of living organisms. They are continuously being degraded into their constituent amino acids, which can be reused in the synthesis of new proteins. Every cellular protein has its own half-life. In humans, for instance, 50% of the liver and plasma proteins are replaced within 10 days, whereas in muscle it takes 180 days. On average, about 50% of our proteins are completely replaced every 80 days. [ 2 ] Although the regulation of protein degradation is as important as that of protein synthesis for keeping each cellular protein at its optimum concentration, research in this area remained limited until the end of the 1970s. Up to that time, lysosomes , discovered in the 1950s by the Belgian cytologist Christian de Duve , were thought to be responsible for the complete digestion of intra- and extracellular proteins by the lysosomal hydrolytic enzymes. [ citation needed ]
Between the 1970s and 1980s, this view changed drastically. New experimental evidence showed that, under physiological conditions, non-lysosomal proteases were responsible for limited proteolysis of intra- and/or extracellular proteins, a concept originally conceived by Linderstrøm-Lang in 1950. [ 3 ] Endogenous or exogenous proteins are processed by non-lysosomal proteases into intermediate-sized polypeptides, which play roles in gene and metabolic regulation and in neurologic, endocrine, and immunological processes, and whose dysfunction might explain a number of pathologies. Consequently, protein degradation no longer represented the end of the biological function of proteins, but rather the beginning of a yet unexplored side of the biology of the cells. A number of intra- or extracellular proteases release protein fragments endowed with essential biological activities. These hydrolytic processes can be carried out by proteases such as Proteasomes , Proprotein Convertases, [ 4 ] Caspases , Renin and Kallikreins . Among the products released by the non-lysosomal proteases are the bioactive oligopeptides such as hormones, neuropeptides and epitopes that, once released, can be modulated in their biological activities by specific peptidases, which promote the trimming, conversion and/or inactivation of the bioactive oligopeptides.
The history of oligopeptidases originates in the late 1960s, when the rabbit brain was searched for enzymes that cause inactivation of the nonapeptide bradykinin . [ 5 ] In the early and mid 1970s two thiol-activated endopeptidases, responsible for more than 90% of bradykinin inactivation, were isolated from cytosol of rabbit brain, and characterized. [ 6 ] [ 7 ] They correspond to EOPA (endooligopeptidase A, EC 3.4.22.19), and Prolyl endopeptidase or Prolyl oligopeptidase (POP) (EC 3.4.21.26). Since their activities are restricted to oligopeptides (usually from 8-13 amino acid residues), and do not hydrolyze proteins or large peptides (>30 amino acid residues), they were designated oligopeptidases. [ 8 ] In the early and mid 1980s other oligopeptidases, mostly metallopeptidases, were described in the cytosol of mammalian tissues, such as the TOP (thimet oligopeptidase, EC 3.4.24.15), [ 9 ] and the neurolysin (EC 3.4.24.16). [ 10 ] Earlier on, the ACE ( angiotensin-converting enzyme , EC 3.4.15.1), and the NEP ( neprilysin , EC 3.4.24.11), had been described, at the end of the 1960s, [ 11 ] and in 1973, [ 12 ] respectively.
Short ' oligopeptides ', predominantly smaller than 30 amino acids in length, play essential roles as hormones , in the surveillance against pathogens , and in neurological activities. Therefore, these molecules constantly need to be specifically generated and inactivated, which is the role of the oligopeptidases. Oligopeptidase is a term coined in 1979 to designate a sub-group of the endopeptidases , [ 8 ] [ 13 ] which are involved neither in the digestion nor in the processing of proteins, unlike the pancreatic enzymes, proteasomes , and cathepsins , among many others. The prolyl-oligopeptidase or prolyl endopeptidase (POP) is a good example of how an oligopeptidase interacts with and metabolizes an oligopeptide. The peptide first has to penetrate a 4 Å hole on the surface of the enzyme in order to reach an 8,500 Å³ internal cavity, where the active site is located. [ 1 ] [ 14 ] Even though the size of the peptide is crucial for its docking, the flexibility of both enzyme and ligand seems to play an essential role in determining whether a peptide bond will be hydrolyzed or not. [ 15 ] [ 16 ] This contrasts with the classical specificity of proteolytic enzymes, which derives from the chemical features of the amino acid side chains around the scissile bond. [ 17 ] A number of enzymatic studies support this conclusion. [ 15 ] [ 18 ] This peculiar specificity suggests that the concept of conformational melding of peptides, used to explain the interaction between the T-cell receptor and its epitopes, [ 19 ] is better suited to describing the enzymatic specificity of the oligopeptidases. Another important feature of the oligopeptidases is their sensitivity to the oxidation-reduction (redox) state of the environment. [ 6 ] [ 7 ] An "on-off" switch provides a qualitative change in peptide binding and/or degradation activity. However, the redox state exerts a strong influence only on cytosolic enzymes (TOP, [ 20 ] [ 21 ] neurolysin, [ 22 ] [ 23 ] POP [ 24 ] and Ndl-1 oligopeptidase [ 25 ] [ 26 ] ), not on cytoplasmic membrane oligopeptidases (angiotensin-converting enzyme and neprilysin). Thus, the redox state of the intracellular environment very likely modulates the activity of the thiol-sensitive oligopeptidases, thereby contributing to defining the fate of proteasome products, driving them to complete hydrolysis or, alternatively, converting them into bioactive peptides, such as the MHC class I peptides. [ 16 ] [ 27 ] [ 28 ]
Since the discovery of the neuropeptides and peptide hormones from the central nervous system ( ACTH , β-MSH , endorphin , oxytocin , vasopressin , LHRH , enkephalins , substance P ), and of peripheral vasoactive peptides ( angiotensin , bradykinin) around the middle of the last century, the number of known biologically active peptides has increased exponentially. They are signaling molecules, participating in all essential aspects of life, from physiological homeostasis (as neuropeptides, peptide hormones, vasoactive peptides), to immunological defense (as MHC class I and II, cytokines ), and as regulatory peptides displaying more than a single action. These peptides result from partial proteolysis of intracellular or extracellular protein precursors performed by several processing enzymes or protease complexes ( renin , kallikreins , calpains , prohormone convertases, proteasomes , endosomes , lysosomes), which convert proteins into peptides, including those with biological activities. The resulting protein fragments of various sizes are either readily degraded into free amino acids, [ 29 ] or captured by oligopeptidases, whose peculiar binding and/or catalytic properties allow them to fulfill their physiological roles by trimming inactive peptide precursors into their active forms, [ 27 ] [ 11 ] converting bioactive peptides into novel ones, [ 30 ] inactivating them, thus restraining the continuous activation of specific receptors, [ 6 ] [ 7 ] or protecting the newly generated bioactive peptide from further degradation, suggesting a peptide chaperone-like activity. [ 16 ] [ 28 ] TOP, a ubiquitous cytosolic oligopeptidase, is a remarkable example of how such an enzyme can play an essential role in immune defense against cancer cells. [ 27 ] It has also been successfully used as a hook to fish novel bioactive peptides from the cytosol of cells. [ 31 ]
The involvement of peptides in cell-cell interactions and in neuropsychiatric, autoimmune, and neurovegetative diseases awaits peptidomics [ 32 ] and gene silencing approaches, which will expedite the formation of new concepts in an emerging era for oligopeptidases.
The participation of oligopeptidases in a number of pathologies has long been reported. ACE has benefited the most from thorough knowledge of the enzyme's structure and mechanism of catalysis, leading to a better understanding of its role in cardiovascular pathologies and therapeutics. Accordingly, for over 30 years, the treatment of human arterial hypertension has taken advantage of ACE inhibition by active site-directed inhibitors like captopril , enalapril , lisinopril , and others. [ 33 ] For the other oligopeptidases, especially those involved in human diseases, the existing studies are promising but not yet as developed as for ACE. Some examples are: a) the POP of nervous tissues has been suggested to be involved in neuropsychiatric disorders, such as post-traumatic stress, depression, mania, bulimia nervosa, anorexia, and schizophrenia, as reviewed in [ 14 ] ; b) NEP has been implicated in cancer; [ 34 ] c) TOP has been implicated in tuberculosis [ 28 ] and in cancer; [ 27 ] d) EOPA or NUDEL/EOPA (the NDEL1 /EOPA gene product) has been implicated in neuronal migration during cortex formation in the human embryo (lissencephaly) and in neurite outgrowth in adults, as in schizophrenia. [ 26 ] [ 35 ] Coincidentally, an activity related to the development of nervous tissue has been suggested for POP, although it does not involve its proteolytic activity. [ 36 ] The absence of an oligopeptidase in the intestine was also found to be responsible for the decreased serum zinc levels observed in patients with the disease acrodermatitis enteropathica. [ 37 ] | https://en.wikipedia.org/wiki/Oligopeptidase
Oligophagy refers to the eating of only a few specific foods ; when the diet is restricted to a single food source, the term is monophagy . [ 1 ] The term is usually associated with insect dietary behaviour. [ 2 ] Organisms may exhibit narrow or specific oligophagy, where the diet is restricted to a very few foods, or broad oligophagy, where the organism feeds on a wide variety of specific foods but none other. [ 3 ]
Polyphagy , by contrast, refers to eating a broad spectrum of foods. In the insect world it usually refers to insects that feed on plants belonging to different families.
The diet of the yucca moths is restricted to the developing fruits of species of yucca [ 3 ] while the sea hare , Aplysia juliana (Quoy & Gaimard), is found on and feeds only on a single alga , Ulva lactuca ( Linnaeus ) in east Australian waters. [ 4 ] These are both narrow oligophages. Conversely the migratory locust may be said to be broadly oligophagous or even polyphagous . [ 3 ] | https://en.wikipedia.org/wiki/Oligophagy |
Oligoporin D is a natural product isolated from the "bitter bracket" mushroom Amaropostia stiptica . It was found to be one of the most potent agonists yet discovered for the bitter taste receptor TAS2R46 , and consequently one of the most bitter substances known. [ 1 ]
This pharmacology -related article is a stub . | https://en.wikipedia.org/wiki/Oligoporin_D
An oligosaccharide ( / ˌ ɒ l ɪ ɡ oʊ ˈ s æ k ə ˌ r aɪ d / ; [ 1 ] from Ancient Greek ὀλίγος ( olígos ) ' few ' and σάκχαρ ( sákkhar ) ' sugar ' ) is a saccharide polymer containing a small number (typically three to ten [ 2 ] [ 3 ] [ 4 ] [ 5 ] ) of monosaccharides (simple sugars). Oligosaccharides can have many functions including cell recognition and cell adhesion . [ 6 ]
They are normally present as glycans : oligosaccharide chains are linked to lipids or to compatible amino acid side chains in proteins , by N - or O - glycosidic bonds . N -Linked oligosaccharides are built on a common pentasaccharide core and are attached to asparagine via a beta linkage to the amide nitrogen of the side chain. [ 7 ] Alternatively, O -linked oligosaccharides are generally attached to threonine or serine on the alcohol group of the side chain. Not all natural oligosaccharides occur as components of glycoproteins or glycolipids. Some, such as the raffinose series, occur as storage or transport carbohydrates in plants. Others, such as maltodextrins or cellodextrins , result from the microbial breakdown of larger polysaccharides such as starch or cellulose .
In biology, glycosylation is the process by which a carbohydrate is covalently attached to an organic molecule, creating structures such as glycoproteins and glycolipids. [ 8 ]
N -Linked glycosylation involves oligosaccharide attachment to asparagine via a beta linkage to the amide nitrogen of the side chain. [ 7 ] The process of N -linked glycosylation occurs cotranslationally, that is, while the protein is being translated. Since it is added cotranslationally, it is believed that N -linked glycosylation helps determine the folding of polypeptides, owing to the hydrophilic nature of sugars. All N -linked oligosaccharides share a common pentasaccharide core of five monosaccharide residues. [ citation needed ]
In N -glycosylation for eukaryotes, the oligosaccharide substrate is assembled right at the membrane of the endoplasmic reticulum . [ 9 ] For prokaryotes , this process occurs at the plasma membrane . In both cases, the acceptor substrate is an asparagine residue. The asparagine residue linked to an N -linked oligosaccharide usually occurs in the sequence Asn-X-Ser/Thr, [ 7 ] where X can be any amino acid except proline , although it is rare to see Asp, Glu, Leu, or Trp in this position. [ citation needed ]
Oligosaccharides that participate in O -linked glycosylation are attached to threonine or serine on the hydroxyl group of the side chain. [ 7 ] O -linked glycosylation occurs in the Golgi apparatus , where monosaccharide units are added to a complete polypeptide chain. Cell surface proteins and extracellular proteins are O -glycosylated. [ 10 ] Glycosylation sites in O -linked oligosaccharides are determined by the secondary and tertiary structures of the polypeptide, which dictate where glycosyltransferases will add sugars. [ citation needed ]
Glycoproteins and glycolipids are by definition covalently bonded to carbohydrates. They are very abundant on the surface of the cell, and their interactions contribute to the overall stability of the cell. [ citation needed ]
Glycoproteins have distinct oligosaccharide structures which have significant effects on many of their properties, [ 11 ] affecting critical functions such as antigenicity , solubility , and resistance to proteases . Glycoproteins are relevant as cell-surface receptors , cell-adhesion molecules, immunoglobulins , and tumor antigens. [ 12 ]
Glycolipids are important for cell recognition, and are important for modulating the function of membrane proteins that act as receptors. [ 13 ] Glycolipids are lipid molecules bound to oligosaccharides, generally present in the lipid bilayer . Additionally, they can serve as receptors for cellular recognition and cell signaling. [ 13 ] The head of the oligosaccharide serves as a binding partner in receptor activity. The binding mechanisms of receptors to the oligosaccharides depends on the composition of the oligosaccharides that are exposed or presented above the surface of the membrane. There is great diversity in the binding mechanisms of glycolipids, which is what makes them such an important target for pathogens as a site for interaction and entrance. [ 14 ] For example, the chaperone activity of glycolipids has been studied for its relevance to HIV infection.
All cells are coated in either glycoproteins or glycolipids, both of which help determine cell types. [ 7 ] Lectins , or proteins that bind carbohydrates, can recognize specific oligosaccharides and provide useful information for cell recognition based on oligosaccharide binding. [ citation needed ]
An important example of oligosaccharide cell recognition is the role of glycolipids in determining blood types . The various blood types are distinguished by the glycan modification present on the surface of blood cells. [ 15 ] These can be visualized using mass spectrometry. The oligosaccharides found on the A, B, and H antigen occur on the non-reducing ends of the oligosaccharide. The H antigen (which indicates an O blood type) serves as a precursor for the A and B antigen. [ 7 ] Therefore, a person with A blood type will have the A antigen and H antigen present on the glycolipids of the red blood cell plasma membrane. A person with B blood type will have the B and H antigen present. A person with AB blood type will have A, B, and H antigens present. And finally, a person with O blood type will only have the H antigen present. This means all blood types have the H antigen, which explains why the O blood type is known as the "universal donor". [ citation needed ]
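The antigen relationships described above can be written out explicitly; the snippet below merely restates them and is not a blood-typing procedure.

```python
# Antigens present on red blood cells for each ABO blood type, as described above.
ANTIGENS_ON_RED_CELLS = {
    "O":  {"H"},
    "A":  {"A", "H"},
    "B":  {"B", "H"},
    "AB": {"A", "B", "H"},
}

# Every type carries the H antigen, the precursor of the A and B antigens.
assert all("H" in antigens for antigens in ANTIGENS_ON_RED_CELLS.values())
```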
Vesicles are directed in many ways, principally by sorting signals and the receptors that recognize them. [ citation needed ]
The sorting signals are recognised by specific receptors that reside in the membranes or surface coats of budding vesicles, ensuring that the protein is transported to the appropriate destination.
Many cells produce specific carbohydrate-binding proteins known as lectins, which mediate cell adhesion with oligosaccharides. [ 16 ] Selectins , a family of lectins, mediate certain cell–cell adhesion processes, including those of leukocytes to endothelial cells. [ 7 ] In an immune response, endothelial cells can transiently express certain selectins in response to damage or injury to the cells. A reciprocal selectin–oligosaccharide interaction then occurs between the two cell types, which allows the white blood cells to help eliminate the infection or damage. Protein–carbohydrate binding is often mediated by hydrogen bonding and van der Waals forces . [ citation needed ]
Fructo-oligosaccharides (FOS), which are found in many vegetables, are short chains of fructose molecules. They differ from fructans such as inulin , which as polysaccharides have a much higher degree of polymerization than FOS and other oligosaccharides, but like inulin and other fructans, they are considered soluble dietary fibre. The use of fructo-oligosaccharides (FOS) as fiber supplements has been shown to affect glucose homeostasis in a manner quite similar to insulin. [ 17 ] These FOS supplements can be considered prebiotics , [ 18 ] which produce short-chain fructo-oligosaccharides (scFOS). [ 19 ] Galacto-oligosaccharides (GOS) in particular are used to create a prebiotic effect for infants that are not breastfed. [ 20 ]
Galactooligosaccharides (GOS), which also occur naturally, consist of short chains of galactose molecules. Human milk is an example of this and contains oligosaccharides, known as human milk oligosaccharides (HMOs), which are derived from lactose . [ 21 ] [ 22 ] These oligosaccharides have biological function in the development of the gut flora of infants . Examples include lacto-N-tetraose , lacto-N-neotetraose, and lacto-N-fucopentaose. [ 21 ] [ 22 ] These compounds cannot be digested in the human small intestine , and instead pass through to the large intestine , where they promote the growth of Bifidobacteria , which are beneficial to gut health. [ 23 ]
HMOs can also protect infants by acting as decoy receptors against viral infection. [ 24 ] HMOs accomplish this by mimicking viral receptors which draws the virus particles away from host cells. [ 25 ] Experimentation has been done to determine how glycan-binding occurs between HMOs and many viruses such as influenza, rotavirus, human immunodeficiency virus (HIV), and respiratory syncytial virus (RSV). [ 26 ] The strategy HMOs employ could be used to create new antiviral drugs. [ 25 ]
Mannan oligosaccharides (MOS) are widely used in animal feed to improve gastrointestinal health. They are normally obtained from the yeast cell walls of Saccharomyces cerevisiae . Mannan oligosaccharides differ from other oligosaccharides in that they are not fermentable and their primary mode of action includes agglutination of type-1 fimbria pathogens and immunomodulation. [ 27 ]
Oligosaccharides are a component of fibre from plant tissue. FOS and inulin are present in Jerusalem artichoke , burdock , chicory , leeks , onions , and asparagus . Inulin is a significant part of the daily diet of most of the world's population. FOS can also be synthesized by enzymes of the fungus Aspergillus niger acting on sucrose . GOS is naturally found in soybeans and can be synthesized from lactose . FOS, GOS, and inulin are also sold as nutritional supplements. [ citation needed ] | https://en.wikipedia.org/wiki/Oligosaccharide |
Oligosaccharides and polysaccharides are an important class of polymeric carbohydrates found in virtually all living entities. [ 1 ] Their structural features make their nomenclature challenging and their roles in living systems make their nomenclature important.
Oligosaccharides are carbohydrates that are composed of several monosaccharide residues joined through glycosidic linkage , which can be hydrolyzed by enzymes or acid to give the constituent monosaccharide units. [ 2 ] While a strict definition of an oligosaccharide is not established, it is generally agreed that a carbohydrate consisting of two to ten monosaccharide residues with a defined structure is an oligosaccharide. [ 2 ]
Some oligosaccharides, for example maltose , sucrose , and lactose , were trivially named before their chemical constitution was determined, and these names are still used today. [ 2 ]
Trivial names, however, are not useful for most other oligosaccharides and, as such, systematic rules for the nomenclature of carbohydrates have been developed. To fully understand oligosaccharide and polysaccharide nomenclature, one must understand how monosaccharides are named.
An oligosaccharide has both a reducing and a non-reducing end. The reducing end of an oligosaccharide is the monosaccharide residue with hemiacetal functionality, thereby capable of reducing the Tollens’ reagent, while the non-reducing end is the monosaccharide residue in acetal form, thus incapable of reducing the Tollens’ reagent. [ 2 ] The reducing and non-reducing ends of an oligosaccharide are conventionally drawn with the reducing-end monosaccharide residue furthest to the right and the non-reducing (or terminal) end furthest to the left. [ 2 ]
Naming of oligosaccharides proceeds from left to right (from the non-reducing end to the reducing end) as glycosyl [glycosyl] n glycoses or glycosyl [glycosyl] n glycosides, depending on whether or not the reducing end is a free hemiacetal group. [ 3 ] In parentheses, between the names of the monosaccharide residues, the number of the anomeric carbon atom, an arrow symbol, and the number of the carbon atom bearing the connecting oxygen of the next monosaccharide unit are listed. [ 3 ] Appropriate symbols are used to indicate the stereochemistry of the glycosidic bonds (α or β), the configuration of the monosaccharide residue ( D or L ), and the substitutions at oxygen atoms ( O ). [ 2 ] Maltose and a derivative of sucrose illustrate these concepts:
In the case of branched oligosaccharides, meaning that the structure contains at least one monosaccharide residue linked to more than two other monosaccharide residues, terms designating the branches should be listed in square brackets, with the longest linear chain (the parent chain) written without square brackets. [ 3 ] The following example will help illustrate this concept:
These systematic names are quite useful in that they provide information about the structure of the oligosaccharide. They do require a lot of space, however, so abbreviated forms are used when possible. [ 4 ] In these abbreviated forms, the names of the monosaccharide units are shortened to their corresponding three-letter abbreviations, followed by p for pyranose or f for furanose ring structures, with the abbreviated aglyconic alcohol placed at the end of the name. [ 2 ] Using this system, the previous example would have the abbreviated name α- L -Fuc p -(1→3)-[α- D -Gal p -(1→4)]-α- D -Glc p -(1→3)-α- D -Gal p OAll (general formula Cₙ₊₁(H₂O)ₙ; structural formula C₁₂H₂₂O₁₁).
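As a rough illustration of how such an abbreviated name is put together, the Python sketch below reassembles the example name from per-residue descriptors; the tuple layout and helper function are assumptions made for this sketch, not part of any carbohydrate-nomenclature library:

```python
# Each residue: (anomeric configuration, D/L, three-letter code, ring form,
#                linkage carbons, whether it is a branch listed in square brackets).
residues = [
    ("α", "L", "Fuc", "p", (1, 3), False),
    ("α", "D", "Gal", "p", (1, 4), True),   # branch residue
    ("α", "D", "Glc", "p", (1, 3), False),
]
reducing_end = "α-D-GalpOAll"  # reducing-end residue with its aglycon (allyl glycoside)

def residue_name(cfg, dl, code, ring, linkage, branch):
    name = f"{cfg}-{dl}-{code}{ring}-({linkage[0]}→{linkage[1]})"
    return f"[{name}]" if branch else name

print("-".join(residue_name(*r) for r in residues) + "-" + reducing_end)
# α-L-Fucp-(1→3)-[α-D-Galp-(1→4)]-α-D-Glcp-(1→3)-α-D-GalpOAll
```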
Polysaccharides are considered to be polymers of monosaccharides containing ten or more monosaccharide residues. [ 2 ] Polysaccharides have been given trivial names that reflect their origin. [ 2 ] Two common examples are cellulose , a main component of the cell wall in plants, and starch, a name derived from the Anglo-Saxon stercan, meaning to stiffen. [ 2 ]
To name a polysaccharide composed of a single type of monosaccharide, that is a homopolysaccharide, the ending “-ose” of the monosaccharide is replaced with “-an”. [ 3 ] For example, a glucose polymer is named glucan , a mannose polymer is named mannan , and a galactose polymer is named galactan . When the glycosidic linkages and configurations of the monosaccharides are known, they may be included as a prefix to the name, with the notation for glycosidic linkages preceding the symbols designating the configuration. [ 3 ] The following example will help illustrate this concept:
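As a rough, purely illustrative sketch of this suffix rule in code (real carbohydrate nomenclature has irregular cases that are not handled here):

```python
def homopolysaccharide_name(monosaccharide: str) -> str:
    """Replace the '-ose' ending of a monosaccharide name with '-an'."""
    if not monosaccharide.endswith("ose"):
        raise ValueError("expected a monosaccharide name ending in '-ose'")
    return monosaccharide[:-3] + "an"

for sugar in ("glucose", "mannose", "galactose"):
    print(sugar, "->", homopolysaccharide_name(sugar))
# glucose -> glucan, mannose -> mannan, galactose -> galactan
```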
A heteropolysaccharide is a polymer containing more than one kind of monosaccharide residue. [ 3 ] The parent chain contains only one type of monosaccharide and should be listed last with the ending “-an”, and the other types of monosaccharides listed in alphabetical order as “glyco-” prefixes. [ 3 ] When there is no parent chain, all different monosaccharide residues are to be listed alphabetically as “glyco-” prefixes and the name should end with “-glycan”. [ 3 ] The following example will help illustrate this concept: | https://en.wikipedia.org/wiki/Oligosaccharide_nomenclature |
Oligosaprobes are organisms that inhabit clean water or water that is only slightly polluted by organic matter. Oxidation processes predominate in such waters owing to an excess of dissolved oxygen. Nitrates are among the nitrogen compounds present; there is little carbonic acid and no hydrogen sulfide . Oligosaprobic environments are aquatic environments rich in dissolved oxygen and (relatively) free from decayed organic matter. [ 1 ]
Oligosaprobes include some green and diatomaceous algae , flowering plants (for example, European white water lilies ), some rotifers , Bryozoa , sponges, mollusks of the genus Dreissena , cladocerans (daphnids, bithotrephes), dragonfly and mayfly larvae, sterlets, trout, minnows, and newts.
Oligosaprobes also embrace a few saprophytes, including bacteria (scores and hundreds per 1 cu mm of water) and organisms that feed on bacteria. The term “oligosaprobe” is usually applied only to freshwater organisms. [ 2 ]
This oceanography article is a stub . You can help Wikipedia by expanding it .
This geochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Oligosaprobe |
Oligotyping is the process of correcting DNA sequences measured during DNA sequencing, based on frequency data of related sequences across related samples.
DNA sequences were originally read from sequencing gels by eye. With the advent of computerized base callers, humans no longer 'called' the bases and instead 'corrected' the called bases. The bases were called by the software using the relative intensity of each putative basepair signal and the local spacing of the signals.
With the advent of high throughput sequencing , the volume of sequence to be corrected exceeded human capacity for sequence correction.
Multiple applications require single-base pair accuracy across populations of closely related sequences. An example is amplicon sequencing to assess the relative contribution of DNA from diverse organisms to a sample.
The requirement for single basepair accuracy led to the development of methods which drew on frequency data distributed across several samples to identify variant sequences which shared the same frequency profile and were thus likely errors from the same original sequence. [ 1 ] [ 2 ] The ability to use higher-order statistics to correct sequences is an important element in decreasing the burden of error in DNA sequence datasets. | https://en.wikipedia.org/wiki/Oligotyping_(sequencing) |
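A minimal sketch of this idea is shown below, using hypothetical per-sample read counts; it is only an illustration of grouping variants by correlated frequency profiles, not the published oligotyping algorithm:

```python
import numpy as np

# Rows: candidate sequence variants; columns: samples.
# Counts are hypothetical; in practice they come from amplicon sequencing reads.
counts = np.array([
    [120, 300,  80, 210],   # abundant "parent" sequence
    [ 12,  31,   8,  22],   # likely sequencing error of the parent (same profile shape)
    [  5, 150, 200,  10],   # genuinely different frequency profile
])

# Normalise each variant's counts to a frequency profile across samples.
profiles = counts / counts.sum(axis=1, keepdims=True)

# Pearson correlation between profiles; highly correlated variants are candidate
# error/parent pairs, since errors track the abundance of their source sequence.
corr = np.corrcoef(profiles)
parent = 0
for i in range(1, len(profiles)):
    status = "likely error of parent" if corr[parent, i] > 0.95 else "distinct variant"
    print(f"variant {i}: r = {corr[parent, i]:.2f} -> {status}")
```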
Oligotyping is a diagnostic or molecular biological method for classification of organisms by short intervals of primary DNA sequence .
Oligotyping 'systems' are sets of recognized target sequences which identify the members of the categories within the classification. The classification may be for the purpose of primary biological taxonomy , or for a functional classification.
Oligotyping has been used for classifying bacteria , [ 1 ] identifying bacterial antibiotic resistance genes, [ 2 ] identifying genetic factors in human infectious disease, [ 3 ] and performing histocompatibility tests for human blood or bone marrow donors/recipients .
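A toy sketch of such a system is shown below; the target sequences and category labels are entirely made up for illustration, whereas real oligotyping systems rely on validated marker sequences:

```python
# Hypothetical oligotyping system: each category is identified by one or more
# short target sequences expected to occur in its members' DNA.
OLIGOTYPE_SYSTEM = {
    "type_1": ["ACGTTGCA", "TTGACCGT"],
    "type_2": ["GGCATACG"],
}

def classify(sequence: str) -> list:
    """Return every category whose target oligonucleotides all occur in the sequence."""
    return [
        category
        for category, targets in OLIGOTYPE_SYSTEM.items()
        if all(target in sequence for target in targets)
    ]

print(classify("AAAACGTTGCAGGGTTGACCGTCC"))   # ['type_1']
```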
This genetics article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Oligotyping_(taxonomy) |
The Olin Raschig process is a chemical process for the production of hydrazine . The main steps in this process, patented by German chemist Friedrich Raschig in 1906 and one of three reactions named after him, are the formation of monochloramine from ammonia and hypochlorite , and the subsequent reaction of monochloramine with ammonia towards hydrazine. [ 1 ] The process was further optimised and used by the Olin Corporation for the production of anhydrous hydrazine for aerospace applications. [ 2 ]
The commercially used Olin Raschig process consists of the following steps: [ 2 ]
First, sodium hypochlorite solution is mixed with a threefold excess of ammonia at 5 °C to give monochloramine. The primary reaction proceeds according to the idealised equation [ 3 ]
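NH₃ + NaOCl → NH₂Cl + NaOH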
The monochloramine solution is then added to a 30-fold excess of ammonia at 130 °C and elevated pressure, causing a second reaction
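NH₂Cl + NH₃ → N₂H₄ + HCl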
The hydrochloric acid and sodium hydroxide byproducts undergo a secondary reaction to release the byproducts of water and sodium chloride . The overall reaction is thus
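2 NH₃ + NaOCl → N₂H₄ + NaCl + H₂O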
Excess ammonia and sodium chloride are removed by distillation, followed by azeotropic distillation with aniline to remove water.
This chemical process -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Olin_Raschig_process |
The Olio Model One is a discontinued smartwatch sold from 2015 to 2016 by the now defunct Olio Devices, Inc. [ 2 ] [ 3 ] [ 4 ]
Olio Devices, Inc. was founded in 2013 by Steven K. Jacobs, Dr. Ashley J. "AJ" Cooper, and Evan Wilson. [ 5 ] [ 6 ] Two years later in 2015 they sold the first of their flagship watches. Olio Devices was acquired by Flex Ltd. in 2017. [ 5 ] [ 7 ] Olio Devices never sold any more watches after being acquired by Flex; later in 2017, Olio Devices' servers ceased to be online.
The Olio features a single-core 600 MHz Texas Instruments OMAP3430 CPU, 512 megabytes of LPDDR RAM , 1 gigabyte of NAND flash memory , a Broadcom BCM20702 Low Energy -capable Bluetooth 4.0 transceiver, a Texas Instruments DRV2605 Haptic Driver , an ambient light sensor , a microphone , a rechargeable lithium polymer battery , a wireless charging coil , and a touchscreen . The Bluetooth MAC address of every Olio watch starts with B0:C5:CA:D0 .
The Olio watch comes preinstalled with the Android operating system with a custom user interface and startup logo. The system uses the Das U-Boot bootloader. At first boot, the watch will instruct the user to create a Bluetooth pairing between the watch and a smartphone. After doing so, the watch will display the text "Open Smartphone App". This is referring to the Olio Assist app, a companion smartphone app required to configure the Olio. Although previously available for iOS on the App Store and for Android on Google Play , the Olio Assist app is no longer available on either. The Olio Assist app can then be used to configure various features of the Olio watch. The app will automatically set the time on the watch from the phone's clock.
The Olio Smartwatch and the Olio Assist app communicate over a custom RFCOMM protocol. The app hosts the protocol on an available RFCOMM port and advertises that port over service discovery protocol with the UUID 11e63bf3-6baa-47d9-b31d-6045138c9add . The watch sends SDP queries to the phone (once every few seconds on the setup screen and once every time it turns its screen on after having been configured) until it sees that UUID, then connects to the RFCOMM port associated with it. At this point, the watch and app communicate with a binary format based on MessagePack . This protocol can be used to update the watch's underlying Android system: zip archives sent over this protocol will be extracted to the directory /data/media/0/olio/firmware_updates_apply/ in the internal filesystem; then, the watch executes the file /data/media/0/olio/firmware_updates_apply/update.sh ( /update.sh in the archive) as root.
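A rough sketch of the companion-app side of this exchange is shown below, using the PyBluez and msgpack Python libraries; the UUID is the one quoted above, but the connection handling and message printing are placeholders, since the actual Olio Assist message schema is not documented here:

```python
# Hypothetical re-implementation sketch of the companion-app side of the protocol.
# Requires: pip install pybluez msgpack
import bluetooth
import msgpack

OLIO_SERVICE_UUID = "11e63bf3-6baa-47d9-b31d-6045138c9add"

# Open an RFCOMM server socket on any free channel and advertise it over SDP,
# so the watch's SDP queries can discover the service by its UUID.
server_sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
server_sock.bind(("", bluetooth.PORT_ANY))
server_sock.listen(1)

bluetooth.advertise_service(
    server_sock,
    "OlioAssist",
    service_id=OLIO_SERVICE_UUID,
    service_classes=[OLIO_SERVICE_UUID, bluetooth.SERIAL_PORT_CLASS],
    profiles=[bluetooth.SERIAL_PORT_PROFILE],
)

client_sock, client_addr = server_sock.accept()
print("Watch connected from", client_addr)

try:
    # The watch and app exchange MessagePack-encoded binary messages; the exact
    # schema is not reproduced here, so this just decodes and prints whatever arrives.
    unpacker = msgpack.Unpacker(raw=False)
    while True:
        data = client_sock.recv(1024)
        if not data:
            break
        unpacker.feed(data)
        for message in unpacker:
            print("received:", message)
finally:
    client_sock.close()
    server_sock.close()
```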
The Olio watch includes several pieces of software licensed under the GNU General Public License , such as the Linux kernel and U-Boot, but Olio Devices never released the source code to any of this software, thus violating the General Public License.
Olio Devices sold the Olio in "batches" of 1,000 each. Five batches were completely sold out, for a total of 5,000 watches sold. The Olio was sold in four "collections" (colors): a black collection, a steel (silver) collection, a gold collection, and a rose gold collection.
The Olio watch was manufactured on contract by Flex Ltd., [ 1 ] the company that would later acquire Olio Devices. Some of the watch's components were manufactured in China; [ 1 ] this can be seen on the charger sold to consumers, which is engraved with "Made In China". Furthermore, prototype Olios had "ASSEMBLED IN CHINA" printed on the back casing. [ 8 ] The Olio's motherboard, meanwhile, appears to have been manufactured in the United States, as it has "MADE IN USA" printed on it. [ 9 ]
Olio Devices had planned to sell an Olio Model Two by 2016, but ultimately did not do so. According to Jacobs, the Model Two would have focused on variation, rather than being an enhancement to or exploiting planned obsolescence of the Model One. [ 10 ] Furthermore, the Model One was designed to be modular , so that individual components could be upgraded without replacing the whole device. [ 11 ] Olio Devices never utilized this, either. | https://en.wikipedia.org/wiki/Olio_Model_One |
Olive mill pomace or two-phase olive mill waste ( TPOMW ) [ 1 ] is a by-product from the olive oil mill extraction process . Usually it is used as fuel in a cogeneration system or as organic fertiliser after a composting operation.
Olive mill pomace compost is made by a controlled biologic process that transforms organic waste into a stable humus . Adding composted olive mill pomace as organic fertiliser in olive orchards allows the soil to get nutrients back after each olive crop.
In crude olive oil production, the traditional system, i.e. pressing, and the three-phase system produce a press cake and a considerable amount of olive mill waste water while the two-phase system, which is mainly used in Spain, produces a paste-like waste called "alperujo" or "two-phase pomace" that has a higher water content and is more difficult to treat than traditional solid waste. The water content of the press cake, composed of crude olive cake, pomace and husk, is about 30 percent if it is produced by traditional pressing technology and about 45–50 percent using decanter centrifuges . The press cake still has some oil that is normally recovered in a separate installation. The exhausted olive cake is incinerated or used as a soil conditioner in olive groves.
This geochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Olive_mill_pomace |
Olive oil contains small amounts of free fatty acids (meaning not attached to other fatty acids in the form of a triglyceride ). Free acidity is an important parameter that defines the quality of olive oil. It is usually expressed as a percentage of oleic acid (the main fatty acid present in olive oil) in the oil. As defined by the European Commission regulation No. 2568/91 and subsequent amendments, [ 1 ] the highest quality olive oil (extra-virgin olive oil) must feature a free acidity lower than 0.8%. Virgin olive oil is characterized by acidity between 0.8% and 2%, while lampante olive oil (a low quality oil that is not edible) features a free acidity higher than 2%. [ 2 ] The increase of free acidity in olive oil is due to free fatty acids that are released from triglycerides.
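The grading bands above translate directly into a small classification routine; the sketch below simply encodes the percentage thresholds quoted from Regulation No. 2568/91:

```python
def classify_olive_oil(free_acidity_percent: float) -> str:
    """Classify olive oil by free acidity, expressed as % oleic acid."""
    if free_acidity_percent < 0:
        raise ValueError("free acidity cannot be negative")
    if free_acidity_percent < 0.8:
        return "extra-virgin olive oil"
    if free_acidity_percent <= 2.0:
        return "virgin olive oil"
    return "lampante olive oil (not edible)"

for acidity in (0.3, 1.5, 2.7):
    print(f"{acidity}% -> {classify_olive_oil(acidity)}")
```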
The presence of free fatty acids in olive oil is caused by a reaction ( lipolysis ) started when lipolytic enzymes (that are normally present in the pulp and seed cells of the olive ) come in contact with the oil (that is contained in particular vacuoles ) due to loss of integrity of the olive. [ 3 ] High values of free acidity in olive oil can be due to different factors such as: production from unhealthy olives (due to microorganisms and moulds contamination or attacked by flies and parasites ), bruised olives, delayed harvesting and storage before processing. The lipolysis reaction is greatly enhanced by the presence of an aqueous phase, so when oil is separated from water during processing, lipolysis slows down and stops.
Free acidity is a defect of olive oil that is tasteless and odorless, and thus cannot be detected by sensory analysis . Since vegetable oils are not aqueous fluids, a pH meter cannot be used for this measurement. Various approaches exist that can measure oil acidity with good accuracy .
The official technique to measure free acidity in olive oil (as defined by the European Commission regulation No. 2568/91) is a manual titration procedure: a known volume of the oil to be tested is added to a mix of ether , methanol and phenolphthalein . Known volumes of 0.1 M potassium hydroxide (KOH) (the titrant) are added until there is a change in the color of the solution. The total volume of added titrant is then used to estimate the free acidity. The official technique for acidity measure in olive oil is accurate and reliable, but is essentially a laboratory method that must be carried out by trained personnel (mainly because of the toxic compounds used). Hence it is not suitable for in situ measurements in small oil mills .
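Converting the titrant volume into percent free acidity is standard acid–base stoichiometry; the sketch below uses the molar mass of oleic acid (about 282.46 g/mol) and hypothetical input values, as a back-of-the-envelope aid rather than the official procedure:

```python
OLEIC_ACID_MOLAR_MASS = 282.46  # g/mol

def free_acidity_percent(titrant_ml: float, koh_molarity: float, oil_mass_g: float) -> float:
    """Free acidity as % oleic acid from a KOH titration.

    moles KOH = volume (L) x molarity; each mole of KOH neutralises
    one mole of free fatty acid, expressed here as oleic acid.
    """
    moles_koh = (titrant_ml / 1000.0) * koh_molarity
    ffa_mass_g = moles_koh * OLEIC_ACID_MOLAR_MASS
    return 100.0 * ffa_mass_g / oil_mass_g

# Example (hypothetical numbers): 2.5 mL of 0.1 M KOH to titrate 10 g of oil.
print(f"{free_acidity_percent(2.5, 0.1, 10.0):.2f} %")  # about 0.71 %, in the extra-virgin range
```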
One of the most promising methods is based on optical near-infrared spectroscopy (NIR) where the optical absorbance , i.e. the fraction of intensity of the incident light that is absorbed by the oil sample, is used to estimate the oil acidity. An oil sample is placed in a cuvette and analyzed by a spectrophotometer on a wide range of wavelengths . The results (i.e. the absorbance data for every tested wavelength) are then processed by a statistical algorithm, such as Principal Component Analysis (PCA) or Partial Least Squares regression (PLS), to estimate the oil acidity. The feasibility of measuring olive oil free acidity and peroxide value by NIR spectroscopy in the wavenumber range 4,541 to 11,726 cm⁻¹ has been reported. [ 4 ] Many commercial spectrophotometers exist [ 5 ] that can be used for analysis of different quality parameters in olive oil. The main advantage of NIR spectroscopy is the possibility of carrying out the analysis on raw olive oil samples, without any chemical pretreatment. The main drawbacks are the high cost of commercial spectrophotometers and the need for calibration for different types of oil (produced by olives of different varieties, different geographical origin, etc.).
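The chemometric step can be sketched with scikit-learn's PLS regression; the spectra and acidity values below are synthetic stand-ins, since a real calibration requires measured NIR spectra paired with reference titration values:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data: 40 "spectra" of 200 absorbance points each,
# with acidity loosely encoded in one wavelength region plus noise.
n_samples, n_wavelengths = 40, 200
acidity = rng.uniform(0.2, 2.5, size=n_samples)                 # % oleic acid (reference values)
spectra = rng.normal(0, 0.01, size=(n_samples, n_wavelengths))
spectra[:, 50:60] += acidity[:, None] * 0.05                    # acidity-correlated absorbance band

# Fit a PLS calibration model on part of the data and predict the rest.
train, test = slice(0, 30), slice(30, None)
pls = PLSRegression(n_components=3)
pls.fit(spectra[train], acidity[train])
predicted = pls.predict(spectra[test]).ravel()

for ref, pred in zip(acidity[test], predicted):
    print(f"reference {ref:.2f} %  predicted {pred:.2f} %")
```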
Another approach [ 6 ] is based on electrochemical impedance spectroscopy (EIS). EIS is a powerful technique that has been widely used to characterize different food products such as the analysis of milk composition, [ 7 ] the characterization and the determination of the freezing end-point of ice-cream mixes, the measure of meat ageing, [ 8 ] and the investigation of ripeness and quality in fruits . [ 9 ] [ 10 ] | https://en.wikipedia.org/wiki/Olive_oil_acidity |