**Ganaxolone**
Ganaxolone:
Ganaxolone, sold under the brand name Ztalmy, is a medication used to treat seizures in people with cyclin-dependent kinase-like 5 (CDKL5) deficiency disorder (CDD). Ganaxolone was approved for medical use in the United States in March 2022. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication.
Pharmacology:
Mechanism of action: The exact mechanism of action of ganaxolone is unknown; however, results from animal studies suggest that it acts by blocking seizure propagation and elevating seizure thresholds. Ganaxolone is thought to modulate both synaptic and extrasynaptic GABAA receptors to normalize over-excited neurons. Ganaxolone's activation of the extrasynaptic receptor is an additional mechanism that provides stabilizing effects and potentially differentiates it from other drugs that increase GABA signaling. Ganaxolone binds to allosteric sites of the GABAA receptor to modulate and open the chloride ion channel, resulting in hyperpolarization of the neuron. This causes an inhibitory effect on neurotransmission, reducing the chance of a successful action potential (depolarization) occurring. It is unknown whether ganaxolone possesses significant hormonal activity in vivo; a 2020 study found evidence of in vitro binding to the membrane progesterone receptor.
Chemistry:
Ganaxolone is an analog of the neurosteroid allopregnanolone that possesses no known hormonal activity and, instead, is thought to primarily function by binding to GABAA receptors as a positive allosteric modulator. Other pregnane neurosteroids include alfadolone, alfaxolone, hydroxydione, minaxolone, pregnanolone (eltanolone), and renanolone, among others.
Research:
Ganaxolone is being investigated for potential medical use in the treatment of epilepsy. It is well tolerated in human trials, with the most commonly reported side effects being somnolence (sleepiness), dizziness, and fatigue. Trials in adults with focal onset seizures and in children with infantile spasms have recently been completed. There are ongoing studies in patients with focal onset seizures, PCDH19 pediatric epilepsy, and behaviors in Fragile X syndrome. Ganaxolone has been shown to protect against seizures in animal models, and to act as a positive allosteric modulator of the GABAA receptor.
Research:
Clinical trials: The most common adverse events reported across clinical trials have been somnolence (sleepiness), dizziness, and fatigue. In 2015, the MIND Institute at the University of California, Davis, announced that it was conducting, in collaboration with Marinus Pharmaceuticals, a randomized, placebo-controlled, Phase 2 clinical trial evaluating the effect of ganaxolone on behaviors associated with Fragile X syndrome in children and adolescents.
**Podestat Formation**
Podestat Formation:
The Podestat Formation is a geologic formation in France. It preserves fossils dating back to the Cretaceous period.
**Coupling parameter**
Coupling parameter:
The coupling parameter of a laser resonator specifies the fraction of the energy of the laser field that is output at each round trip. The coupling parameter should not be confused with the round-trip loss, which refers to the fraction of the energy that is absorbed or scattered at each round trip of the laser field in the resonator and therefore cannot be used as output.
Coupling parameter:
In continuous-wave operation, the round-trip gain is determined by the coupling parameter and the round-trip loss. In simple configurations of the laser cavity or laser resonator, the coupling parameter may be just the transmission coefficient of the output coupler, or just the square of the magnification coefficient in the case of an unstable resonator.
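The balance described above can be written compactly. The following is an illustrative sketch in standard laser notation; the symbols G, L and T are ours and are not defined elsewhere in this article. At steady state, the saturated round-trip gain must exactly compensate both the round-trip loss and the output coupling.

```latex
% Continuous-wave steady-state condition (illustrative notation):
% G = saturated round-trip power gain,
% L = round-trip loss, T = coupling parameter.
G\,(1 - L)\,(1 - T) = 1
% For small L and T, taking logarithms gives approximately
\ln G \approx L + T
```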
Coupling parameter:
The round-trip loss may limit the power scaling of active mirrors, or disk lasers: while the size of the gain medium scales up, the gain-size product is limited by the exponential growth of amplified spontaneous emission, so a powerful disk laser should work at low values of the coupling parameter and even lower values of the round-trip loss.
**Supply chain resilience**
Supply chain resilience:
Supply chain resilience is "the adaptive capability of the supply chain to prepare for unexpected events, respond to disruptions, and recover from them by maintaining continuity of operations at the desired level of connectedness and control over structure and function".
Origins:
Since around the turn of the millennium, supply chain risk management has attempted to transfer traditional risk management approaches from the "company" system to the "supply chain" system. However, the scalability of the traditional steps of risk management (identification, assessment, treatment and monitoring of risks) quickly reaches its limits: it is entirely possible to identify all conceivable risks within a company, but a supply chain often consists of thousands of companies, so the attempt to identify all possible risks in this system is far more complex, if not in vain. It has therefore been argued that the complexity of supply chains requires complementary measures such as supply chain resilience, which has become a popular concept in contemporary supply chain management. Resilience is able to cope with all sorts of changes and is thus less about the identification of specific risks and more about the characteristics of the system.
Interpretations of supply chain resilience:
Resilience in the sense of engineering resilience: For a long time, the interpretation of resilience in the sense of engineering resilience prevailed in supply chain management. It implies that a supply chain is a closed system that can be controlled, similar to a system designed and planned by engineers (e.g. a subway network). The expectations placed on managers come close to those placed on engineers, who should react quickly in the event of a disturbance in order to restore the system's ideal and original state as quickly as possible. A popular implementation of this idea in supply chain management is measuring the time-to-survive and the time-to-recover of the supply chain, allowing weak points in the system to be identified. Acting like an engineer, by redesigning the supply chain as if on the drawing board, often by creating redundancies (e.g. multiple sourcing), strengthens resilience. In the short term, a supply chain can be viewed as a relatively rigid system, so the idea of persistence that follows from engineering resilience makes sense in the short term. However, this approach has its limits in the mid to long term.
Interpretations of supply chain resilience:
Resilience in the sense of social-ecological resilience: Social-ecological resilience goes back to ecological resilience, adding to it human decision-makers and their social interactions. A supply chain is thus interpreted as a social-ecological system that, similar to an ecosystem (e.g. a forest), is able to constantly adapt to external environmental conditions and, through the presence of social actors and their capacity for foresight, also to transform itself into a fundamentally new system. This leads to a panarchical interpretation of the supply chain, embedding it into a system of systems and allowing analysis of the interactions of the supply chain with systems that operate at other levels (e.g. society, political economy, planet Earth). For example, Tesla's supply chain can be described as resilient because it reflects the transformation from internal combustion engines to electric engines, which is based on the ability of human actors to foresee long-term changes in the planet in the context of the climate crisis and to implement them in a business model. In contrast to engineering resilience, the supply chain is not interpreted as a system that needs to be stabilized in a fixed state (focus: persistence), but as a fluid system or even a fluid process that interacts with the rest of the world (focus: adaptation or even transformation).
Literature:
Sheffi, Y. (2007). The resilient enterprise: overcoming vulnerability for competitive advantage. Zone Books.
Kummer, S. et al. (2022). Supply Chain Resilience: Insights from Theory and Practice. Springer Series in Supply Chain Management.
**Psychic World**
Psychic World:
Psychic World (サイキック・ワールド) is an action platform video game. Originally released in Japan for the MSX2 as Psycho World (サイコ・ワールド) in 1988, it was later released as Psychic World on the Master System and Game Gear worldwide in 1991.
Gameplay:
Psychic World is a platform game wherein the player's character Lucia runs from one stage to the next, using her "ESP Booster" to blast monstrous enemies while obtaining item power-ups from them or by jumping on various ledges and platforms. The Booster has a gauge limiting how often certain items and abilities can be used, but it, like her health, can be replenished by power-ups. All her weapons are upgradeable by simply picking up the same item again for that particular weapon, and new weapons are obtained by defeating mini-bosses and end-level bosses. The player has to use Lucia's psionic weapons strategically, turning each level's elements to her advantage (in the ice stage, rocks doused by falling water can be frozen with the ice shot and used as stable platforms for Lucia to jump on, while the sonic wave weapon can destroy certain foreground objects blocking her path).
Plot:
Taking place at a remote laboratory in the year 19XX, a three-person research team consisting of Dr. Knavik and his assistants, the twin sisters Cecile and Lucia, is studying the exploration and usage of ESP. One day, while Lucia was getting ready for work, an explosion burst from the lab. By the time Lucia got there, Dr. Knavik was all right, but Cecile had disappeared. Dr. Knavik explains that part of his experiments involved running tests on a variety of monsters, but eventually the subjects rebelled and took Cecile with them. As Lucia follows the monsters' trail, Dr. Knavik gives her the ESP Booster, a device he created that enables the user to wield psychic powers.
Reception:
Psychic World for the Master System was given mixed but mostly positive reviews: 70% by RAZE and 69% by Video Games. The Game Gear version received a score of 83% from Joystick.
**Shodex**
Shodex:
Shodex is a brand of HPLC (high-performance liquid chromatography) columns, best known for polymer-based columns. The product range covers aqueous and organic size exclusion chromatography columns for large (bio-)molecules, columns for the routine analysis of sugars and organic acids, and a variety of reversed-phase and HILIC columns. Additionally, they offer ion chromatography (IC) and ion exchange columns.
Shodex HPLC columns are manufactured in Japan by Showa Denko (to be renamed Resonac on January 1, 2023), one of the largest Japanese chemical companies, listed in the Nikkei 225 index. They produce around 260 different columns, most packed with polymer-based particles, and have been doing so since 1973.
The portfolio includes standard analytical columns, semi-micro columns, and preparative columns. Size exclusion chromatography calibration standards (pullulan, polystyrene, polymethylmethacrylate) are also available. Shodex is distributed worldwide by the different sales offices and by a range of local distributors.
**Biceps (prosody)**
Biceps (prosody):
Biceps is a point in a metrical pattern where a pair of short syllables can freely be replaced by a long one. In Greek and Latin poetry, it is found in the dactylic hexameter and the first half of a dactylic pentameter, and also in anapaestic metres.
It is not to be confused with resolution, which is a phenomenon where a normally long syllable in a line is sometimes replaced by two shorts. Resolution is typically found in an iambic metre such as the iambic trimeter.
**GSM 03.40**
GSM 03.40:
GSM 03.40 or 3GPP TS 23.040 is a mobile telephony standard describing the format of the Transfer Protocol Data Units (TPDU) of the Short Message Transfer Protocol (SM-TP) used in the GSM networks to carry Short Messages. This format is used throughout the whole transfer of the message in the GSM mobile network. In contrast, application servers use different protocols, like Short Message Peer-to-Peer or Universal Computer Protocol, to exchange messages between them and the Short Message Service Center (SMSC).
GSM 03.40:
GSM 03.40 is the original name of the standard. Since 1999 it has been developed by the 3GPP under the name 3GPP TS 23.040. However, the original name is often used to refer even to the 3GPP document.
Usage:
The GSM 03.40 TPDUs are used to carry messages between the Mobile Station (MS) and the Mobile Switching Centre (MSC) using the Short Message Relay Protocol (SM-RP), while between the MSC and the Short Message Service Centre (SMSC) the TPDUs are carried as a parameter of a Mobile Application Part (MAP) package. In emerging networks which use the IP Multimedia Subsystem (IMS), Short Messages are carried in the MESSAGE command of the Session Initiation Protocol (SIP). Even in these IP-based networks an option exists which (for compatibility reasons) defines the transfer of Short Messages in the GSM 03.40 format embedded in 3GPP TS 24.011 as Content-Type: application/vnd.3gpp.sms.
TPDU Types:
GSM 03.40 defines six types of messages between the Mobile Station (MS) and the SMS Centre (SC), which are distinguished by the message direction and the two least significant bits of the first octet of the SM-TP message (the TP-MTI field): SMS-SUBMIT is used to submit a short message from a mobile phone (Mobile Station, MS) to a short message service centre (SMSC, SC).
TPDU Types:
SMS-SUBMIT-REPORT is an acknowledgement to the SMS-SUBMIT; a success means that the message was stored (buffered) in the SMSC, a failure means that the message was rejected by the SMSC.
SMS-COMMAND may be used to query for a message buffered in the SMSC, to modify its parameters or to delete it.
SMS-DELIVER is used to deliver a message from the SMSC to a mobile phone. The acknowledgement returned by the mobile phone may optionally contain an SMS-DELIVER-REPORT. When home routing applies, SMS-DELIVER is used to submit messages from one SMSC to another.
SMS-STATUS-REPORT may be sent by the SMSC to inform the originating mobile phone about the final outcome of the message delivery or to reply to an SMS-COMMAND.
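As a sketch, the direction-dependent mapping described above can be expressed in Python. The helper name and the direction labels are ours; only the TP-MTI bit values and TPDU type names come from the standard.

```python
# Hypothetical helper (not part of the standard): decode the TP-MTI field,
# the two least significant bits of the first TPDU octet. The same TP-MTI
# value names a different TPDU type in each direction, so the direction
# must be known from the transport layer.
TP_MTI = {
    ("MS->SC", 0b00): "SMS-DELIVER-REPORT",
    ("MS->SC", 0b01): "SMS-SUBMIT",
    ("MS->SC", 0b10): "SMS-COMMAND",
    ("SC->MS", 0b00): "SMS-DELIVER",
    ("SC->MS", 0b01): "SMS-SUBMIT-REPORT",
    ("SC->MS", 0b10): "SMS-STATUS-REPORT",
}

def tpdu_type(first_octet: int, direction: str) -> str:
    """Return the TPDU type for a given first octet and transfer direction."""
    return TP_MTI.get((direction, first_octet & 0b11), "reserved")
```

For example, a first octet of 0x11 sent from a mobile phone has TP-MTI = 1 and is therefore an SMS-SUBMIT.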
TPDU Fields:
The fields of SM-TP messages, including their order and size, are summarized in the following table, where M means a mandatory field, O an optional field, E a field which is mandatory in negative responses (RP-ERR) and not present in positive responses (RP-ACK), and x a field present elsewhere. The first octet of the TPDU contains various flags, including the TP-MTI field described above. By setting the TP-More-Messages-to-Send (TP-MMS) bit to 0 (reversed logic), the SMSC signals that it has more messages for the recipient (often further segments of a concatenated message). The MSC then usually does not close the connection to the mobile phone and does not end the MAP dialogue with the SMSC, which allows faster delivery of subsequent messages or message segments. If by coincidence the further messages vanish from the SMSC in the meantime (because, for example, they are deleted), the SMSC terminates the MAP dialogue with a MAP Abort message.
TPDU Fields:
The TP-Loop-Prevention (TP-LP) bit is designed to prevent looping of SMS-DELIVER or SMS-STATUS-REPORT messages that are routed to a different address than their destination address or are generated by an application. Such a message may be sent only if the original message had this flag cleared, and the new message must be sent with the flag set.
By setting the TP-Status-Report-Indication (TP-SRI) bit to 1, the SMSC requests a status report to be returned to the SME.
By setting the TP-Status-Report-Request (TP-SRR) bit to 1 in a SMS-SUBMIT or SMS-COMMAND, the mobile phone requests a status report to be returned by the SMSC.
When TP-SRQ has the value 1 in an SMS-STATUS-REPORT message, the message is the result of an SMS-COMMAND; otherwise it is the result of an SMS-SUBMIT.
When TP-UDHI has the value 1, the TP-UD field starts with a User Data Header.
Setting the TP-RP bit turns on a feature which allows a reply to a message to be sent using the same path as the original message. If the originator's and the recipient's home networks differ, the reply goes through a different SMSC than usual. The mobile operator must take special measures to charge for such messages.
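The flag bits discussed above can be pulled out of the first octet of an SMS-DELIVER TPDU with a few shifts and masks. The following Python sketch uses the commonly documented bit positions for SMS-DELIVER; the helper name is ours.

```python
# Sketch of decoding the flags of an SMS-DELIVER first octet
# (bit positions per GSM 03.40; the helper name is illustrative).
def deliver_flags(first_octet: int) -> dict:
    return {
        "TP-MTI":  first_octet & 0b11,      # 0 = SMS-DELIVER
        "TP-MMS":  (first_octet >> 2) & 1,  # 0 = more messages waiting (reversed logic)
        "TP-LP":   (first_octet >> 3) & 1,  # loop-prevention flag
        "TP-SRI":  (first_octet >> 5) & 1,  # status report requested by the SMSC
        "TP-UDHI": (first_octet >> 6) & 1,  # TP-UD starts with a User Data Header
        "TP-RP":   (first_octet >> 7) & 1,  # reply-path feature
    }

# 0x44 = 0100 0100: TP-MMS set (no more messages), User Data Header present
flags = deliver_flags(0x44)
```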
TPDU Fields:
Both SM-RP and MAP, which are used to transmit GSM 03.40 TPDUs, carry enough information to return an acknowledgement, i.e. the information whether a request was successful or not. However, a GSM 03.40 TPDU may be included in the acknowledgement to carry even more information. GSM 03.40 has undergone the following development: up to GSM 03.40 5.2.0, SMS-DELIVER-REPORT and SMS-SUBMIT-REPORT were sent only in the case of an error; since 5.3.0 they are sent in the case of success as well (MO-ForwardSM-Res was introduced back in GSM 09.02 5.6.0, August 1997). Up to GSM 03.40 6.0.0, SMS-DELIVER-REPORT and SMS-SUBMIT-REPORT sent in the case of an error contained only the TP-MTI and TP-FCS fields, and the last field in SMS-STATUS-REPORT was TP-ST; since version 6.1.0 these TPDUs have the format shown in the table above. Although these changes are ancient (version 6.1.0 appeared in July 1998), old formats of MAP are frequently seen even in today's networks.
TPDU Fields:
Message Content: The content of the message (its text when the message is not a binary one) is carried in the TP-UD field. Its size may be up to 160 × 7 = 140 × 8 = 1120 bits. Longer messages can be split into multiple parts and sent as a Concatenated SMS. The length of the message content is given in the TP-UDL field. When the message encoding is the GSM 7-bit default alphabet (depending on the TP-DCS field), TP-UDL gives the length of TP-UD in 7-bit units (septets); otherwise TP-UDL gives the length of TP-UD in octets.
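The capacity arithmetic above can be sketched as a small calculator. The function name is ours; the per-part limits of 153 GSM 7-bit characters and 67 UCS-2 characters assume the usual concatenation User Data Header, which consumes part of the TP-UD field.

```python
# Illustrative calculation: a non-segmented message holds 160 GSM 7-bit
# characters (1120 bits) or 70 UCS-2 characters; with a concatenation
# UDH the per-part payload shrinks to 153 or 67 characters respectively.
def sms_parts(n_chars: int, gsm7: bool = True) -> int:
    single, multi = (160, 153) if gsm7 else (70, 67)
    if n_chars <= single:
        return 1
    return -(-n_chars // multi)  # ceiling division

# Sanity check of the capacity identity quoted in the text:
assert 160 * 7 == 140 * 8 == 1120
```

For example, a 161-character GSM 7-bit message needs two parts, and a 459-character one needs three.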
TPDU Fields:
When TP-UDHI is 1, the TP-UD starts with a User Data Header (UDH); in this case the first octet of TP-UD is the User Data Header Length (UDHL) octet, containing the length of the UDH in octets, not counting the UDHL octet itself. The UDH takes room from the TP-UD field. When the message encoding is the GSM 7-bit default alphabet and a UDH is present, fill bits are inserted to align the first character of the text after the UDH with a septet boundary. This behaviour was designed for older mobile phones which do not understand the UDH; such mobile phones might display the UDH as a jumble of strange characters, but if the first character after the UDH was a Carriage Return (CR), the mobile phone would overwrite the jumble with the rest of the message.
TPDU Fields:
Addresses: A GSM 03.40 message contains at most one address: a destination address (TP-DA) in SMS-SUBMIT and SMS-COMMAND, an originator address (TP-OA) in SMS-DELIVER, and a recipient address (TP-RA) in SMS-STATUS-REPORT. Other addresses are carried by lower layers.
TPDU Fields:
The format of addresses in GSM 03.40 is described in the following table. Type of number (TON): If a subscriber enters a telephone number with a '+' sign at its start, the '+' sign is removed and the address gets TON=1 (international number), NPI=1. The number itself must always start with a country code and must be formatted exactly according to the E.164 standard.
TPDU Fields:
In contrast, for numbers written without the '+' sign the address gets TON=0 (unknown), NPI=1. In this case the number must adhere to the mobile operator's dial plan, which means that international numbers must have the international prefix (00 in most countries, but 011 in the USA) before the country code, and numbers for long-distance calls must start with the trunk prefix (0 in most countries, 1 in the USA) followed by a trunk code.
TPDU Fields:
Numbering plan identification (NPI): Telephone numbers should have NPI=1. Application servers may use alphanumeric addresses which have TON=5, NPI=0 combination.
The EXT bit is always 1 meaning "no extension".
Address examples: The U.S. number +1 555 123 4567 would be encoded as 0B 91 51 55 21 43 65 F7 (the F in the upper four bits of the last octet is a filler, used when the number of digits is odd).
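The semi-octet encoding behind this example can be sketched in a few lines of Python. The function name is ours; it handles only international numbers entered with a '+' (TON=1, NPI=1), matching the example above.

```python
# A minimal sketch of the semi-octet address encoding shown above:
# a length octet (in digits), a TON/NPI octet, then nibble-swapped
# digit pairs, padded with an F nibble when the digit count is odd.
def encode_address(number: str) -> bytes:
    digits = number.lstrip("+")
    header = bytes([len(digits), 0x91])  # length in digits, TON=1/NPI=1 octet
    if len(digits) % 2:
        digits += "F"                    # filler nibble for odd lengths
    # swap each pair of digits: "15" is stored as 0x51
    swapped = "".join(b + a for a, b in zip(digits[::2], digits[1::2]))
    return header + bytes.fromhex(swapped)

encode_address("+15551234567").hex().upper()  # "0B915155214365F7"
```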
An alphanumeric address is first converted to the GSM 7-bit default alphabet, then encoded the same way as message text in the TP-UD field (that is, 7-bit packed), and then the address is supplied with the "number" length and the TON and NPI.
TPDU Fields:
For example, the fictional alphanumeric address Design@Home is converted to the GSM 7-bit default alphabet, which yields the 11 septets 44 65 73 69 67 6E 00 48 6F 6D 65 (hex); 7-bit packing transforms them into 77 bits stored in 10 octets as C4 F2 3C 7D 76 03 90 EF 76 19. 77 bits round up to 20 nibbles (14 hex), which is the value of the first octet of the address. The second octet contains TON (5) and NPI (0), which yields D0 hex. The complete address in the GSM format is 14 D0 C4 F2 3C 7D 76 03 90 EF 76 19.
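The LSB-first 7-bit packing used in this example can be reproduced with the following sketch. For simplicity the character mapping covers only ASCII letters and digits plus '@' (which happen to share their GSM 7-bit codes with ASCII); the full GSM 7-bit default alphabet table differs for many other characters. Both helper names are ours.

```python
# Sketch of the GSM 7-bit packing worked through above.
def to_septets(text: str) -> list[int]:
    # ASCII letters/digits map directly; '@' is code 0x00 in the GSM alphabet.
    return [0x00 if c == "@" else ord(c) for c in text]

def pack_7bit(septets: list[int]) -> bytes:
    acc = nbits = 0
    out = bytearray()
    for s in septets:
        acc |= s << nbits            # append 7 bits, least significant bit first
        nbits += 7
        while nbits >= 8:
            out.append(acc & 0xFF)
            acc >>= 8
            nbits -= 8
    if nbits:
        out.append(acc & 0xFF)       # trailing partial octet
    return bytes(out)

body = pack_7bit(to_septets("Design@Home"))
# body.hex().upper() == "C4F23C7D760390EF7619"
addr = bytes([len(body) * 2, 0xD0]) + body  # length in nibbles, TON=5/NPI=0
```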
TPDU Fields:
Message Reference: The Message Reference field (TP-MR) is used in all messages on the submission side except the SMS-SUBMIT-REPORT (that is, in SMS-SUBMIT, SMS-COMMAND and SMS-STATUS-REPORT). It is a single-octet value which is incremented each time a new message is submitted or a new SMS-COMMAND is sent. If the message submission fails, the mobile phone should repeat the submission with the same TP-MR value and with the TP-RD bit set to 1.
TPDU Fields:
Time Format: A date and time used in TP-SCTS, TP-DT and in the absolute format of TP-VP is stored in 7 octets. In all octets the values are stored in binary-coded decimal format with swapped digits (the number 35 is stored as 53 hex).
The time zone is given in quarters of an hour. If the time-zone offset is negative (in the Western hemisphere), bit 3 of the last octet is set to 1.
For example, 23:01:56 on 25 March 2013, UTC-7, would be encoded as 31 30 52 32 10 65 8A.
TPDU Fields:
In this example the time zone octet, 8A, is binary 1000 1010. Bit 3 is 1, therefore the time zone is negative. The remaining number (bit-wise AND with 1111 0111) is 1000 0010, hexadecimal 82. Treated like the other octets (swapped BCD), hex 82 represents the number 28. Finally, the time-zone offset is 28 × 15 minutes = 420 minutes (7 hours).
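The encoding and decoding steps worked through above can be condensed into the following sketch; the function names are ours. The tz_quarters argument is the offset in quarters of an hour, with negative offsets setting bit 3 of the (already digit-swapped) last octet.

```python
# Sketch of the 7-octet swapped-BCD time encoding described above.
def swap_bcd(value: int) -> int:
    return ((value % 10) << 4) | (value // 10)  # 35 -> 0x53

def encode_scts(yy, mm, dd, hh, mi, ss, tz_quarters) -> bytes:
    octets = [swap_bcd(v) for v in (yy, mm, dd, hh, mi, ss, abs(tz_quarters))]
    if tz_quarters < 0:
        octets[6] |= 0x08                       # sign bit in the swapped octet
    return bytes(octets)

# 23:01:56 on 2013-03-25, UTC-7 (-28 quarters) -> 31 30 52 32 10 65 8A
ts = encode_scts(13, 3, 25, 23, 1, 56, -28)
```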
TPDU Fields:
Validity Period: An SMS-SUBMIT TPDU may contain a TP-VP parameter which limits the time period for which the SMSC will attempt to deliver the message. However, the validity period is usually limited globally by an SMSC configuration parameter, often to 48 or 72 hours. The format of the validity period is defined by the Validity Period Format field: relative format, absolute format, or enhanced format. The absolute format is identical to the other time formats in GSM 03.40.
TPDU Fields:
Enhanced format: The enhanced format of the TP-VP field is seldom used. It always has 7 octets, although some of them may be unused. The first octet is the TP-VP Functionality Indicator; its 3 least significant bits indicate how the validity period is specified in the remaining octets. A value of 1 in bit 6 of the first octet means that the message is single-shot. A value of 1 in bit 7 of the first octet indicates that the TP-VP Functionality Indicator extends to another octet; however, no such extensions are defined.
TPDU Fields:
Protocol Identifier: TP-PID (Protocol Identifier) either refers to the higher-layer protocol being used, indicates interworking with a certain type of telematic device (like fax, telex, pager, teletex, e-mail), specifies the replace type of the message, or allows download of configuration parameters to the SIM card. Plain MO-MT messages have PID=0.
For TP-PID = 63 the SC converts the SM from the received TP Data Coding Scheme to any data coding scheme supported by that MS (e.g. the default).
TPDU Fields:
Short Message Type 0 is known as a silent SMS. Any handset must be able to receive such a short message irrespective of whether there is memory available in the (U)SIM or ME, and must acknowledge receipt of the message, but must not indicate its receipt to the user and must discard its contents, so the message is not stored in the (U)SIM or ME.
TPDU Fields:
Data Coding Scheme: A special 7-bit encoding called the GSM 7-bit default alphabet was designed for the Short Message Service in GSM. The alphabet contains the most often used symbols from most Western European languages (and some Greek uppercase letters). Some ASCII characters and the euro sign did not fit into the GSM 7-bit default alphabet and must be encoded using two septets; these characters form the GSM 7-bit default alphabet extension table. Support of the GSM 7-bit alphabet is mandatory for GSM handsets and network elements. Languages which use the Latin script but include characters not present in the GSM 7-bit default alphabet often replace the characters bearing diacritics with the corresponding characters without diacritics, which makes for a not entirely satisfactory user experience but is often accepted. For the best appearance, the 16-bit UTF-16 (in GSM called UCS-2) encoding may be used, at the price of reducing the length of a (non-segmented) message from 160 to 70 characters.
TPDU Fields:
Messages in Chinese, Korean or Japanese must be encoded using the UTF-16 character encoding. The same applied to other languages using non-Latin scripts, like Russian, Arabic, Hebrew and various Indian languages. In 3GPP TS 23.038 8.0.0, published in 2008, a new feature was introduced, the National Language Shift Table, which in version 11.0.0, published in 2012, covers the Turkish, Spanish, Portuguese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Oriya, Punjabi, Tamil, Telugu and Urdu languages. The mechanism replaces the GSM 7-bit default alphabet code table and/or extension table with national tables according to special information elements in the User Data Header. A non-segmented message using national language shift tables may carry up to 155 (or 153) 7-bit characters.
TPDU Fields:
The Data Coding Scheme (TP-DCS) field primarily contains information about the message encoding. GSM recognizes only two encodings for text messages and one encoding for binary messages: the GSM 7-bit default alphabet (which includes the use of national language shift tables), UCS-2, and 8-bit data. The TP-DCS octet has a complex syntax to allow carrying of other information; the most notable are message classes: flash messages are received by a mobile phone even when its memory is full. They are not stored in the phone; they are just shown on the phone display.
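A sketch of decoding the most common group of TP-DCS values may make the octet's layout clearer. This covers only the "general data coding" group (the two most significant bits are 00) and ignores the compression bit; other groups, such as Message Waiting Indication, are not handled. The helper name and return shape are ours.

```python
# Illustrative decoder for the general data coding group of TP-DCS values.
# Encoding sits in bits 3..2; the optional message class in bits 1..0,
# which is meaningful only when bit 4 is set.
ALPHABETS = {0b00: "GSM 7-bit", 0b01: "8-bit data", 0b10: "UCS-2"}

def decode_dcs(dcs: int):
    if dcs >> 6 != 0b00:
        return None                                  # not the general coding group
    alphabet = ALPHABETS.get((dcs >> 2) & 0b11, "reserved")
    msg_class = dcs & 0b11 if dcs & 0x10 else None   # class only if bit 4 is set
    return alphabet, msg_class

decode_dcs(0x00)  # ('GSM 7-bit', None)
decode_dcs(0x10)  # ('GSM 7-bit', 0) -- class 0, displayed as a "flash" message
decode_dcs(0x08)  # ('UCS-2', None)
```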
TPDU Fields:
Another feature available through TP-DCS is Automatic Deletion: after being read, the message is deleted from the phone.
The Message Waiting Indication group of DCS values can set or reset flags indicating the presence of unread voicemail, fax, e-mail or other messages.
Special DCS values also allow message compression, but this is perhaps not used by any operator.
The values of TP-DCS are defined in GSM recommendation 03.38. Messages can thus be encoded in the default GSM 7-bit alphabet, the 8-bit data alphabet, or the 16-bit UCS-2 alphabet.
TPDU Fields:
Discharge Time: The TP-DT field indicates the time and date associated with a particular TP-ST outcome: if the message has been delivered or, more generally, another transaction has completed (TP-ST is 0-31), TP-DT is the time of completion of the transaction; if the SMSC is still trying to deliver the message (TP-ST is 32-63), TP-DT is the time of the last delivery attempt; if the SMSC is not making any more delivery attempts (TP-ST is 64-127), TP-DT is either the time of the last delivery attempt or the time at which the SMSC disposed of the message.
Parameter Indicator: The TP-PI field indicates the presence of further fields in the SUBMIT-REPORT, DELIVER-REPORT or SMS-STATUS-REPORT TPDU.
TPDU Fields:
As there are currently still four free bits in TP-PI, it can be expected that the extension bit will remain zero even in the future, which helps to distinguish the TP-PI field from the TP-FCS field when the information whether a TPDU is part of a positive or negative response is not available: if the most significant bit of the second octet of the TPDU is 1, the second octet is TP-FCS (a negative response); otherwise it is TP-PI (a positive response).
**MIPI Debug Architecture**
MIPI Debug Architecture:
MIPI Alliance Debug Architecture provides a standardized infrastructure for debugging deeply embedded systems in the mobile and mobile-influenced space. The MIPI Alliance MIPI Debug Working Group has released a portfolio of specifications; their objective is to provide standard debug protocols and standard interfaces from a system on a chip (SoC) to the debug tool. The whitepaper Architecture Overview for Debug summarizes all the efforts. In recent years, the group has focused on specifying protocols that improve the visibility of the internal operations of deeply embedded systems, standardizing debug solutions via the functional interfaces of form-factor devices, and specifying the use of I3C as a debugging bus.
The term "debug":
The term "debug" encompasses the various methods used to detect, triage, trace, and potentially eliminate mistakes, or bugs, in hardware and software. Debug includes control/configure methods, stop/step mode debugging, and various forms of tracing.
Control/configure methods: Debug can be used to control and configure components, including embedded systems, of a given target system. Standard functions include setting up hardware breakpoints, preparing and configuring the trace system, and examining system states.
The term "debug":
Stop/step mode debugging: In stop/step mode debugging, the core/microcontroller is stopped through the use of breakpoints and then "single-stepped" through the code by executing instructions one at a time. If the other cores/microcontrollers of the SoC are stopped synchronously, the overall state of the system can be examined. Stop/step mode debugging includes control/configure techniques, run control of a core/microcontroller, start/stop synchronization with other cores, memory and register access, and additional debug features such as performance counters and run-time memory access.
The term "debug":
Tracing: Traces allow an in-depth analysis of the behavior and the timing characteristics of an embedded system. The following traces are typical: A "core trace" provides full visibility of program execution on an embedded core. Trace data are created for the instruction execution sequence (sometimes referred to as an instruction trace) and data transfers (sometimes referred to as a data trace). An SoC may generate several core traces.
The term "debug":
A "bus trace" provides complete visibility of the data transfers across a specific bus.
A "system trace" provides visibility of various events/states inside the embedded system. Trace data can be generated by instrument application code and by hardware modules within the SoC. An SoC may generate several system traces.
Visibility of SoC-internal operations:
Tracing is the tool of choice to monitor and analyze what is going on in a complex SoC. There are several well established non-MIPI core-trace and bus-trace standards for the embedded market. Thus, there was no need for the MIPI Debug Working Group to specify new ones. But no standard existed for a "system trace" when the Debug Working Group published its first version of the MIPI System Trace Protocol (MIPI STP) in 2006.
Visibility of SoC-internal operations:
MIPI System Software Trace (MIPI SyS-T): The generation of system trace data from the software is typically done by inserting additional function calls, which produce diagnostic information valuable for the debug process. This debug technique is called instrumentation. Examples are printf-style string-generating functions, value information, assertions, etc. The purpose of MIPI System Software Trace (MIPI SyS-T) is to define a reusable, general-purpose data protocol and instrumentation API for debugging. The specification defines message formats that allow a trace-analysis tool to decode the debug messages, either into human-readable text or into signals optimized for automated analysis.
Since verbose textual messages stress bandwidth limits for debugging, so-called "catalog messages" are provided. Catalog messages are compact binary messages that replace strings with numeric values. The translation from the numeric value to a message string is done by the trace analysis tool, with the help of collateral XML information. This information is provided during the software-build process using an XML schema that is part of the specification as well.
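As a hedged illustration of the catalog-message idea (the message IDs, the struct layout, and the Python dict standing in for the XML collateral are all invented for this sketch):

```python
# Hypothetical sketch of the catalog-message idea: the target emits a
# compact (id, args) record instead of a full printf string, and the
# trace tool reconstructs the text from a catalog — which MIPI SyS-T
# ships as XML collateral built alongside the software.
import struct

CATALOG = {                       # stand-in for the XML collateral
    1: "boot complete in {} ms",
    2: "sensor {} reading {}",
}

def emit(msg_id, *args):
    """Target side: pack the numeric id and args into a few bytes."""
    payload = struct.pack("<I", msg_id)
    for a in args:
        payload += struct.pack("<i", a)
    return payload

def decode(payload):
    """Tool side: look the id up in the catalog and format the string."""
    msg_id = struct.unpack_from("<I", payload, 0)[0]
    n_args = (len(payload) - 4) // 4
    args = struct.unpack_from("<" + "i" * n_args, payload, 4)
    return CATALOG[msg_id].format(*args)

record = emit(2, 7, 451)
print(len(record))       # 12 bytes on the wire instead of a long string
print(decode(record))    # sensor 7 reading 451
```

The bandwidth saving is the point: the string lives only in the tool-side catalog, never on the trace link.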
The SyS-T data protocol is designed to work efficiently on top of lower-level transport links such as those defined by the MIPI System Trace Protocol. SyS-T protocol features such as timestamping or data-integrity checksums can be disabled if the transport link already provides such capabilities. The use of other transport links—such as UART, USB, or TCP/IP—is also possible.
The MIPI Debug Working Group will provide an open-source reference implementation for the SyS-T instrumentation API, a SyS-T message pretty printer, and a tool to generate the XML collateral data as soon as the Specification for System Software Trace (SyS-T) is approved.
MIPI System Trace Protocol (MIPI STP) The MIPI System Trace Protocol (MIPI STP) specifies a generic protocol that allows the merging of trace streams originating from anywhere in the SoC into a single trace stream of 4-bit frames. It was intentionally designed to merge system trace information. The MIPI System Trace Protocol uses a channel/master topology that allows the trace-receiving analysis tool to collate the individual trace streams for analysis and display. The protocol additionally provides the following features: stream synchronization and alignment, trigger markers, global timestamping, and multiple-stream time synchronization.
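A rough sketch of the channel/master collation idea — the tuple-based records and the sort-by-timestamp merge are invented for illustration and bear no relation to the real STP packet encoding:

```python
# Hedged sketch (not the actual STP packet format): trace sources tagged
# with a (master, channel) pair are merged into one stream, and the
# receiving tool collates them back into per-source streams.
from collections import defaultdict

def merge(streams):
    """Interleave tagged trace records from several sources."""
    merged = []
    for (master, channel), records in streams.items():
        for r in records:
            merged.append((master, channel, r))
    merged.sort(key=lambda t: t[2])          # order by timestamp here
    return merged

def collate(merged):
    """Tool side: split the merged stream back out per (master, channel)."""
    per_source = defaultdict(list)
    for master, channel, record in merged:
        per_source[(master, channel)].append(record)
    return dict(per_source)

streams = {(0, 1): [10, 30], (2, 5): [20, 40]}
collated = collate(merge(streams))
print(collated[(0, 1)])   # [10, 30]
print(collated[(2, 5)])   # [20, 40]
```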
The stream of STP packets produced by the System Trace Module can be directly saved to trace RAM, directly exported off-chip, or can be routed to a "trace wrapper protocol" (TWP) module to merge with further trace streams. ARM's CoreSight System Trace Macrocell, which is compliant with MIPI STP, is today an integral part of most multi-core chips used in the mobile space.
The last MIPI board-adopted version of Specification for System Trace Protocol (STPSM) is version 2.2 (February 2016).
MIPI Trace Wrapper Protocol (MIPI TWP) The MIPI Trace Wrapper Protocol enables multiple trace streams to be merged into a single byte stream. A unique ID is assigned to each trace stream by the wrapping protocol. The detection of byte/word boundaries is possible even if the data is transmitted as a stream of bits. Inert packets are used if a continuous export of trace data is required. The MIPI Trace Wrapper Protocol is based on ARM's Trace Formatter Protocol specified for ARM CoreSight.
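The wrapping idea can be sketched as a length-prefixed, ID-tagged framing; the byte layout below is invented for illustration and is not the actual TWP frame format:

```python
# Simplified sketch of the wrapping idea (not the real TWP frame layout):
# each source stream gets a unique ID, the wrapper prefixes each chunk
# with that ID and a length, and the receiver demultiplexes by ID.
def wrap(chunks):
    """chunks: list of (stream_id, data_bytes) in arrival order."""
    out = bytearray()
    for stream_id, data in chunks:
        out.append(stream_id)
        out.append(len(data))
        out += data
    return bytes(out)

def unwrap(stream):
    """Split the wrapped byte stream back into per-ID streams."""
    per_id = {}
    i = 0
    while i < len(stream):
        stream_id, length = stream[i], stream[i + 1]
        per_id.setdefault(stream_id, bytearray()).extend(
            stream[i + 2:i + 2 + length])
        i += 2 + length
    return {k: bytes(v) for k, v in per_id.items()}

wire = wrap([(1, b"AB"), (2, b"xyz"), (1, b"CD")])
print(unwrap(wire))   # {1: b'ABCD', 2: b'xyz'}
```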
The last MIPI board-adopted version of Specification for Trace Wrapper Protocol (TWPSM) is version 1.1 (December 2014).
From dedicated to functional interfaces:
Dedicated debug interfaces In the early stages of product development, it is common to use development boards with dedicated and readily accessible debug interfaces for connecting the debug tools. SoCs employed in the mobile market rely on two debug technologies: stop-mode debugging via a scan chain and stop-mode debugging via memory-mapped debug registers.
The following non-MIPI debug standards are well established in the embedded market: IEEE 1149.1 JTAG (5-pin) and ARM Serial Wire Debug (2-pin), both using single-ended pins. Thus, there was no need for the MIPI Debug Working Group to specify a stop-mode debug protocol or to specify a debug interface.
Trace data generated and merged to a trace stream within the SoC can be streamed, via a dedicated unidirectional trace interface, off-chip to a trace analysis tool. The MIPI Debug Architecture provides specifications for both parallel and serial trace ports.
The MIPI Parallel Trace Interface (MIPI PTI) specifies how to pass the trace data to multiple data pins and a clock pin (single-ended). The specification includes signal names and functions, timing, and electrical constraints. The last MIPI board-adopted version of Specification for Parallel Trace Interface is version 2.0 (October 2011). The MIPI High-Speed Trace Interface (MIPI HTI) specifies how to stream trace data over the physical layer of standard interfaces, such as PCI Express, DisplayPort, HDMI, or USB. The current version of the specification allows for one to six lanes. The specification includes: The PHY layer, which represents the electrical and clocking characteristics of the serial lanes.
The LINK layer, which defines how the trace is packaged into the Aurora 8B/10B protocol.
A programmer's model for controlling the HTI and providing status information. The HTI is a subset of the High Speed Serial Trace Port (HSSTP) specification defined by ARM. The last MIPI board-adopted version of Specification for High-speed Trace Interface is version 1.0 (July 2016). Board developers and debug tool vendors benefit from standard debug connectors and standard pin mappings. The MIPI Recommendation for Debug and Trace Connectors recommends 10-/20-/34-pin board-level 1.27-millimetre (0.050 in) connectors (MIPI10/20/34). Seven different pin mappings that address a wide variety of debug scenarios have been specified. They include standard JTAG (IEEE 1149.1), cJTAG (IEEE 1149.7) and 4-bit parallel trace interfaces (mainly used for system traces), supplemented by the ARM-specific Serial Wire Debug (SWD) standard. MIPI10/20/34 debug connectors became the standard for ARM-based embedded designs.
Many embedded designs in the mobile space use high-speed parallel trace ports (up to 600 megabits per second per pin). MIPI recommends a 60-pin Samtec QSH/QTH connector named MIPI60, which allows JTAG/cJTAG for run control, up to 40 trace data signals, and up to 4 trace clocks. To minimize complexity, the recommendation defines four standard configurations with one, two, three, or four trace channels of varying width.
The last MIPI board-adopted version of MIPI Alliance Recommendation for Debug and Trace Connectors is version 1.1 (March 2011).
PHY and pin overlaid interfaces Readily accessible debug interfaces are not available in the product's final form factor. This hampers the identification of bugs and performance optimization in the end product. Since the debug logic is still present in the end product, an alternative access path is needed. An effective way is to equip a mobile terminal's standard interface with a multiplexer that allows for accessing the debug logic. The switching between the interface's essential function and the debug function can be initiated by the connected debug tool or by the mobile terminal's software. Standard debug tools can be used under the following conditions: A switching protocol is implemented on the debug tool and in the mobile terminal.
A debug adapter exists that connects the debug tool to the standard interface. The debug adapter has to assist the switching protocol if required.
A mapping from the standard interface pins to the debug pins is specified. The MIPI Narrow Interface for Debug and Test (MIPI NIDnT) covers debugging via the following standard interfaces: microSD, USB 2.0 Micro-B/-AB receptacle, USB Type-C receptacle, and DisplayPort. The last MIPI board-adopted version of Specification for Narrow Interface for Debug and Test (NIDnTSM) is version 1.2 (December 2017).
Network interfaces Instead of re-using the pins, debugging can also be done via the protocol stack of a standard interface or network. Here, debug traffic coexists with the traffic of other applications using the same communication link. The MIPI Debug Working Group named this approach GigaBit Debug. Since no debug protocol existed for this approach, the MIPI Debug Working Group specified its SneakPeek debug protocol.
MIPI SneakPeek Protocol (MIPI SPP) moved from a dedicated interface for basic debugging towards a protocol-driven interface: It translates incoming command packets into read/write accesses to memory, memory-mapped debug registers, and other memory-mapped system resources.
It translates command results (status information and read data coming from memory, memory-mapped debug registers, and other memory-mapped system resources) to outgoing response packets.
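The two translations — command packets in, response packets out — can be sketched as follows; the packet shapes and the dict standing in for memory-mapped registers are assumptions of this sketch, not the real SPP encoding:

```python
# Hedged sketch of the SneakPeek idea (invented packet shapes, not the
# real SPP encoding): incoming command packets are translated into
# reads/writes of memory-mapped resources, and results go back out as
# response packets.
memory = {0x1000: 0, 0x1004: 0}   # stand-in for memory-mapped registers

def handle(packet):
    """packet: ('READ', addr) or ('WRITE', addr, value)."""
    if packet[0] == "WRITE":
        memory[packet[1]] = packet[2]
        return ("OK",)
    if packet[0] == "READ":
        return ("DATA", memory[packet[1]])
    return ("ERROR", "bad command")

print(handle(("WRITE", 0x1000, 42)))  # ('OK',)
print(handle(("READ", 0x1000)))       # ('DATA', 42)
```

Because the handler only sees packets from an input buffer and writes packets to an output buffer, it is transport-agnostic — which is exactly why SneakPeek can ride on USB, IP sockets, or any other link.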
Since SneakPeek accepts packets coming through an input buffer and delivers packets through an output buffer, it can be easily connected to any standard I/O or network. The MIPI Alliance Specification for SneakPeek Protocol describes the basic concepts, the required infrastructure, the packets, and the data flow. The last MIPI board-adopted version of Specification for SneakPeek Protocol (SPPSM) is version 1.0 (August 2015). The MIPI Gigabit Debug Specification Family provides details for mapping debug and trace protocols to standard I/Os or networks available in mobile terminals. These details include: endpoint addressing, link initialization and management, data packaging, data-flow management, and error detection and recovery. The last MIPI board-adopted version of Specification for Gigabit Debug for USB (MIPI GbD USB) is version 1.1 (March 2018). The last MIPI board-adopted version of Specification for Gigabit Debug for Internet Protocol Sockets (MIPI GbD IPS) is version 1.0 (July 2016).
I3C as debug bus:
Current debug solutions, such as JTAG and ARM CoreSight, are statically structured, which makes for limited scalability regarding the accessibility of debug components/devices. MIPI Debug for I3C specifies a scalable, 2-pin, single-ended debug solution, which has the advantage of being available for the entire product lifetime. The I3C bus can be used as a debug bus only, or the bus can be shared between debug and its essential function as a data-acquisition bus for sensors. Debugging via I3C works in principle as follows: The I3C bus is used for the physical transport, and the native I3C functionality is used to configure the bus and to hot-join new components.
The debug protocol is wrapped into dedicated I3C commands. Supported debug protocols are JTAG, ARM CoreSight, and MIPI SneakPeek Protocol.
**Catshead (architecture)**
Catshead (architecture):
A hay hood is a roof extension which projects from the ridge of a barn roof, usually at the top of a gable. It provides shelter over a window or door used for passing hay into the attic or loft of the barn; it may hold a pulley for hoisting hay or hay bales up to the loft, or a fork or grapple and track system (or hay carrier) by which hay can be lifted and then moved throughout the barn. A barn may have the ridge beam extended past the wall with a lifting mechanism but no hay hood.

The simplest hay hood is a tapered roof extension providing some protection from the weather. A non-tapered extension provides more protection. A hay hood with partial or full walls underneath the extension on two sides is more protective, while an extension with three sides, allowing hay to be brought into the barn only through its "floor", keeps virtually all rain or snow out of the barn.

A hay hood can be built on a barn with any roof type. The type of hood is generally determined by the weather of a particular region. A barn in a semi-arid region may have no hood or just a simple pointed one. A barn in a region with frequent driving rain may have a completely enclosed hay hood. This is common in an area of western Oregon in the United States, centered on the town of Monroe in the Willamette Valley, where it is called a hay cupola. Most gambrel-roofed barns in the western U.S. have pointed hay hoods. Hay hoods have been built on round barns.

It may also be known as a hay bonnet, hay gable, widow's peak, crow's beak, or hanging gable.

To prevent small gaps around the closed doors at the beam penetration that would allow birds to enter the barn, one farmer in Reasnor, Iowa, designed a hay hood with a "bunker door" that, when closed, formed an angled floor on the hay hood, completely enclosing the hood and keeping birds such as sparrows and pigeons out of the barn.
Other uses:
A catshead (alternatively cat's head or cats head) is an architectural feature commonly found on multi-storied mills, agricultural buildings, and factories, consisting of a small extension protruding from the gable end of a larger roof.
A grist mill with a single main roof and catsheads at each end vaguely resembles a cat's head in silhouette, with the catsheads forming the "ears" of the imaginary feline; this may be the origin of the name.
Catsheads originally existed to protect the ropes and pulleys associated with lifting equipment (such as the block and tackle rigs used to shift multi-ton milling equipment and the simple wheel pulleys used to lift fodder into haylofts) from ice and the corrosion caused by rain. In the driest climates, if they had an opening to the building which lacked a door or window, this may have been adequate to prevent the goods from deteriorating.
Adding the protective catshead to the gable end of a roofline makes roofing tasks such as fitting and removing tiles/panels simpler, and is an economy, as it obviates the flashing (weatherproofing) of a completely separate roof.
Whimsical architectural styles (such as New World Queen Anne Revival architecture) may also sport catsheads as non-functional decorative features.
**Curve**
Curve:
In mathematics, a curve (also called a curved line in older texts) is an object similar to a line, but that does not have to be straight.
Intuitively, a curve may be thought of as the trace left by a moving point. This is the definition that appeared more than 2000 years ago in Euclid's Elements: "The [curved] line is […] the first species of quantity, which has only one dimension, namely length, without any width nor depth, and is nothing else than the flow or run of the point which […] will leave from its imaginary moving some vestige in length, exempt of any width." This definition of a curve has been formalized in modern mathematics as: A curve is the image of an interval to a topological space by a continuous function. In some contexts, the function that defines the curve is called a parametrization, and the curve is a parametric curve. In this article, these curves are sometimes called topological curves to distinguish them from more constrained curves such as differentiable curves. This definition encompasses most curves that are studied in mathematics; notable exceptions are level curves (which are unions of curves and isolated points), and algebraic curves (see below). Level curves and algebraic curves are sometimes called implicit curves, since they are generally defined by implicit equations.
Nevertheless, the class of topological curves is very broad, and contains some curves that do not look as one may expect for a curve, or even cannot be drawn. This is the case of space-filling curves and fractal curves. For ensuring more regularity, the function that defines a curve is often supposed to be differentiable, and the curve is then said to be a differentiable curve.
A plane algebraic curve is the zero set of a polynomial in two indeterminates. More generally, an algebraic curve is the zero set of a finite set of polynomials, which satisfies the further condition of being an algebraic variety of dimension one. If the coefficients of the polynomials belong to a field k, the curve is said to be defined over k. In the common case of a real algebraic curve, where k is the field of real numbers, an algebraic curve is a finite union of topological curves. When complex zeros are considered, one has a complex algebraic curve, which, from the topological point of view, is not a curve, but a surface, and is often called a Riemann surface. Although not being curves in the common sense, algebraic curves defined over other fields have been widely studied. In particular, algebraic curves over a finite field are widely used in modern cryptography.
History:
Interest in curves began long before they were the subject of mathematical study. This can be seen in numerous examples of their decorative use in art and on everyday objects dating back to prehistoric times. Curves, or at least their graphical representations, are simple to create, for example with a stick on the sand on a beach.
Historically, the term line was used in place of the more modern term curve. Hence the terms straight line and right line were used to distinguish what are today called lines from curved lines. For example, in Book I of Euclid's Elements, a line is defined as a "breadthless length" (Def. 2), while a straight line is defined as "a line that lies evenly with the points on itself" (Def. 4). Euclid's idea of a line is perhaps clarified by the statement "The extremities of a line are points" (Def. 3). Later commentators further classified lines according to various schemes, for example:

Composite lines (lines forming an angle)
Incomposite lines:
  Determinate (lines that do not extend indefinitely, such as the circle)
  Indeterminate (lines that extend indefinitely, such as the straight line and the parabola)

The Greek geometers had studied many other kinds of curves. One reason was their interest in solving geometrical problems that could not be solved using standard compass and straightedge construction.
These curves include:

The conic sections, studied in depth by Apollonius of Perga
The cissoid of Diocles, studied by Diocles and used as a method to double the cube.
The conchoid of Nicomedes, studied by Nicomedes as a method to both double the cube and to trisect an angle.
The Archimedean spiral, studied by Archimedes as a method to trisect an angle and square the circle.
The spiric sections, sections of tori studied by Perseus, as sections of cones had been studied by Apollonius.

A fundamental advance in the theory of curves was the introduction of analytic geometry by René Descartes in the seventeenth century. This enabled a curve to be described using an equation rather than an elaborate geometrical construction. This not only allowed new curves to be defined and studied, but it enabled a formal distinction to be made between algebraic curves that can be defined using polynomial equations, and transcendental curves that cannot. Previously, curves had been described as "geometrical" or "mechanical" according to how they were, or supposedly could be, generated.

Conic sections were applied in astronomy by Kepler.
Newton also worked on an early example in the calculus of variations. Solutions to variational problems, such as the brachistochrone and tautochrone questions, introduced properties of curves in new ways (in this case, the cycloid). The catenary gets its name as the solution to the problem of a hanging chain, the sort of question that became routinely accessible by means of differential calculus.
In the eighteenth century came the beginnings of the theory of plane algebraic curves, in general. Newton had studied the cubic curves, in the general description of the real points into 'ovals'. The statement of Bézout's theorem showed a number of aspects which were not directly accessible to the geometry of the time, to do with singular points and complex solutions.
Since the nineteenth century, curve theory is viewed as the special case of dimension one of the theory of manifolds and algebraic varieties. Nevertheless, many questions remain specific to curves, such as space-filling curves, Jordan curve theorem and Hilbert's sixteenth problem.
Topological curve:
A topological curve can be specified by a continuous function γ:I→X from an interval I of the real numbers into a topological space X. Properly speaking, the curve is the image of γ.
However, in some contexts, γ itself is called a curve, especially when the image does not look like what is generally called a curve and does not sufficiently characterize γ.
For example, the image of the Peano curve or, more generally, a space-filling curve completely fills a square, and therefore does not give any information on how γ is defined.
A curve γ is closed or is a loop if I=[a,b] and γ(a)=γ(b) . A closed curve is thus the image of a continuous mapping of a circle. A non-closed curve may also be called an open curve.
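As a standard worked example (not drawn from the text itself), the unit circle parametrized on I = [0, 1] illustrates the closed-curve condition γ(a) = γ(b):

```python
# Numerical illustration (standard example, not from the article's text):
# the unit circle as the image of a continuous map γ on I = [0, 1];
# γ(0) = γ(1), so the curve is closed (a loop).
import math

def gamma(t):
    return (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

start, end = gamma(0.0), gamma(1.0)
closed = all(abs(a - b) < 1e-9 for a, b in zip(start, end))
print(closed)          # True — the circle is a closed curve
print(gamma(0.25))     # ≈ (0.0, 1.0), a point a quarter of the way round
```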
If the domain of a topological curve is a closed and bounded interval I=[a,b] , the curve is called a path, also known as topological arc (or just arc).
A curve is simple if it is the image of an interval or a circle by an injective continuous function. In other words, if a curve is defined by a continuous function γ with an interval as a domain, the curve is simple if and only if any two different points of the interval have different images, except, possibly, if the points are the endpoints of the interval. Intuitively, a simple curve is a curve that "does not cross itself and has no missing points" (a continuous non-self-intersecting curve).A plane curve is a curve for which X is the Euclidean plane—these are the examples first encountered—or in some cases the projective plane. A space curve is a curve for which X is at least three-dimensional; a skew curve is a space curve which lies in no plane. These definitions of plane, space and skew curves apply also to real algebraic curves, although the above definition of a curve does not apply (a real algebraic curve may be disconnected).
A plane simple closed curve is also called a Jordan curve. It is also defined as a non-self-intersecting continuous loop in the plane. The Jordan curve theorem states that the set complement in a plane of a Jordan curve consists of two connected components (that is the curve divides the plane in two non-intersecting regions that are both connected).
The definition of a curve includes figures that can hardly be called curves in common usage. For example, the image of a curve can cover a square in the plane (space-filling curve), and a simple curve may have a positive area. Fractal curves can have properties that are strange for the common sense. For example, a fractal curve can have a Hausdorff dimension bigger than one (see Koch snowflake) and even a positive area. An example is the dragon curve, which has many other unusual properties.
Differentiable curve:
Roughly speaking a differentiable curve is a curve that is defined as being locally the image of an injective differentiable function γ:I→X from an interval I of the real numbers into a differentiable manifold X, often Rn.
More precisely, a differentiable curve is a subset C of X where every point of C has a neighborhood U such that C∩U is diffeomorphic to an interval of the real numbers. In other words, a differentiable curve is a differentiable manifold of dimension one.
Differentiable arc In Euclidean geometry, an arc (symbol: ⌒) is a connected subset of a differentiable curve. Arcs of lines are called segments, rays, or lines, depending on how they are bounded.
A common curved example is an arc of a circle, called a circular arc. In a sphere (or a spheroid), an arc of a great circle (or a great ellipse) is called a great arc.
Length of a curve If X = Rn is the n-dimensional Euclidean space, and if γ:[a,b]→Rn is an injective and continuously differentiable function, then the length of γ is defined as the quantity

Length(γ) = ∫_a^b |γ′(t)| dt.

The length of a curve is independent of the parametrization γ. In particular, the length s of the graph of a continuously differentiable function y = f(x) defined on a closed interval [a,b] is

s = ∫_a^b √(1 + [f′(x)]²) dx.
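Both length formulas can be checked numerically with a simple midpoint rule; the examples (unit circle, and the line y = 3x/4 giving a 3-4-5 triangle) are standard and chosen so the exact answers are known:

```python
# Numerical check of the two length formulas (standard quadrature sketch):
# the parametric formula on the unit circle should give 2π, and the graph
# formula on the line y = 3x/4 over [0, 4] should give the 3-4-5 value 5.
import math

def param_length(speed, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of |γ'(t)| over [a, b]."""
    h = (b - a) / n
    return sum(speed(a + (i + 0.5) * h) * h for i in range(n))

def graph_length(df, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of sqrt(1 + f'(x)^2)."""
    h = (b - a) / n
    return sum(math.sqrt(1 + df(a + (i + 0.5) * h) ** 2) * h
               for i in range(n))

# unit circle γ(t) = (cos t, sin t) has |γ'(t)| = 1, so length = 2π
print(param_length(lambda t: 1.0, 0, 2 * math.pi))  # ≈ 6.283185
# the line y = 3x/4 over [0, 4] is the hypotenuse of a 3-4-5 triangle
print(graph_length(lambda x: 0.75, 0, 4))           # ≈ 5.0
```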
More generally, if X is a metric space with metric d, then we can define the length of a curve γ:[a,b]→X by

Length(γ) = sup { ∑ᵢ d(γ(tᵢ), γ(tᵢ₋₁)) : n ∈ ℕ and a = t₀ < t₁ < … < tₙ = b },

where the supremum is taken over all n ∈ ℕ and all partitions t₀ < t₁ < … < tₙ of [a,b]. A rectifiable curve is a curve with finite length. A curve γ:[a,b]→X is called natural (or unit-speed or parametrized by arc length) if for any t₁, t₂ ∈ [a,b] such that t₁ ≤ t₂, we have

Length(γ|[t₁,t₂]) = t₂ − t₁.
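The supremum over partitions can be illustrated numerically: the polygonal lengths of uniform partitions of the unit circle increase toward the circumference 2π as the partition is refined:

```python
# Sketch of the metric-space definition: the length is the supremum of
# polygonal sums Σ d(γ(t_i), γ(t_{i-1})) over partitions of [a, b].
# Refining a uniform partition of the unit circle converges to 2π.
import math

def polygonal_length(gamma, a, b, n):
    """Polygonal approximation over a uniform partition with n pieces."""
    ts = [a + (b - a) * i / n for i in range(n + 1)]
    total = 0.0
    for t0, t1 in zip(ts, ts[1:]):
        x0, y0 = gamma(t0)
        x1, y1 = gamma(t1)
        total += math.hypot(x1 - x0, y1 - y0)   # Euclidean metric d
    return total

circle = lambda t: (math.cos(t), math.sin(t))
for n in (4, 64, 4096):
    print(n, polygonal_length(circle, 0, 2 * math.pi, n))
# the values increase toward 2π ≈ 6.28319 as the partition is refined
```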
If γ:[a,b]→X is a Lipschitz-continuous function, then it is automatically rectifiable. Moreover, in this case, one can define the speed (or metric derivative) of γ at t ∈ [a,b] as

Speed(γ)(t) = lim sup_{s→t} d(γ(s), γ(t)) / |s − t|,

and then show that

Length(γ) = ∫_a^b Speed(γ)(t) dt.
Differential geometry While the first examples of curves that are met are mostly plane curves (that is, in everyday words, curved lines in two-dimensional space), there are obvious examples such as the helix which exist naturally in three dimensions. The needs of geometry, and also for example classical mechanics are to have a notion of curve in space of any number of dimensions. In general relativity, a world line is a curve in spacetime.
If X is a differentiable manifold, then we can define the notion of differentiable curve in X . This general idea is enough to cover many of the applications of curves in mathematics. From a local point of view one can take X to be Euclidean space. On the other hand, it is useful to be more general, in that (for example) it is possible to define the tangent vectors to X by means of this notion of curve.
If X is a smooth manifold, a smooth curve in X is a smooth map γ:I→X. This is a basic notion. There are less and more restricted ideas, too. If X is a Ck manifold (i.e., a manifold whose charts are k times continuously differentiable), then a Ck curve in X is such a curve which is only assumed to be Ck (i.e. k times continuously differentiable). If X is an analytic manifold (i.e. infinitely differentiable and charts are expressible as power series), and γ is an analytic map, then γ is said to be an analytic curve.
A differentiable curve is said to be regular if its derivative never vanishes. (In words, a regular curve never slows to a stop or backtracks on itself.) Two Ck differentiable curves γ1:I→X and γ2:J→X are said to be equivalent if there is a bijective Ck map p:J→I such that the inverse map p−1:I→J is also Ck , and γ2(t)=γ1(p(t)) for all t . The map γ2 is called a reparametrization of γ1 ; and this makes an equivalence relation on the set of all Ck differentiable curves in X . A Ck arc is an equivalence class of Ck curves under the relation of reparametrization.
Algebraic curve:
Algebraic curves are the curves considered in algebraic geometry. A plane algebraic curve is the set of the points of coordinates x, y such that f(x, y) = 0, where f is a polynomial in two variables defined over some field F. One says that the curve is defined over F. Algebraic geometry normally considers not only points with coordinates in F but all the points with coordinates in an algebraically closed field K. In the case of a curve defined over the real numbers, one normally considers points with complex coordinates. In this case, a point with real coordinates is a real point, and the set of all real points is the real part of the curve. It is therefore only the real part of an algebraic curve that can be a topological curve (this is not always the case, as the real part of an algebraic curve may be disconnected and contain isolated points). The whole curve, that is, the set of its complex points, is, from the topological point of view, a surface. In particular, the nonsingular complex projective algebraic curves are called Riemann surfaces.
The points of a curve C with coordinates in a field G are said to be rational over G and can be denoted C(G). When G is the field of the rational numbers, one simply talks of rational points. For example, Fermat's Last Theorem may be restated as: For n > 2, every rational point of the Fermat curve of degree n has a zero coordinate.
Algebraic curves can also be space curves, or curves in a space of higher dimension, say n. They are defined as algebraic varieties of dimension one. They may be obtained as the common solutions of at least n − 1 polynomial equations in n variables. If n − 1 polynomials are sufficient to define a curve in a space of dimension n, the curve is said to be a complete intersection. By eliminating variables (by any tool of elimination theory), an algebraic curve may be projected onto a plane algebraic curve, which however may introduce new singularities such as cusps or double points.
A plane curve may also be completed to a curve in the projective plane: if a curve is defined by a polynomial f of total degree d, then wdf(u/w, v/w) simplifies to a homogeneous polynomial g(u, v, w) of degree d. The values of u, v, w such that g(u, v, w) = 0 are the homogeneous coordinates of the points of the completion of the curve in the projective plane and the points of the initial curve are those such that w is not zero. An example is the Fermat curve un + vn = wn, which has an affine form xn + yn = 1. A similar process of homogenization may be defined for curves in higher dimensional spaces.
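The homogenization step wdf(u/w, v/w) can be sketched on polynomials represented as exponent-to-coefficient dicts; the representation is an illustrative choice, with the Fermat curve from the text as the example:

```python
# Sketch of homogenization: a term c·x^i·y^j of a degree-d curve becomes
# c·u^i·v^j·w^(d-i-j). Polynomials are dicts mapping exponent tuples to
# coefficients; the Fermat curve x^n + y^n = 1 is the worked example.
def homogenize(poly, d):
    """poly: {(i, j): coeff} for f(x, y); returns {(i, j, k): coeff}."""
    return {(i, j, d - i - j): c for (i, j), c in poly.items()}

n = 3
# affine Fermat curve x^3 + y^3 - 1 = 0
affine = {(n, 0): 1, (0, n): 1, (0, 0): -1}
print(homogenize(affine, n))
# {(3, 0, 0): 1, (0, 3, 0): 1, (0, 0, 3): -1}, i.e. u^3 + v^3 - w^3 = 0 —
# the projective Fermat curve u^n + v^n = w^n from the text
```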
Except for lines, the simplest examples of algebraic curves are the conics, which are nonsingular curves of degree two and genus zero. Elliptic curves, which are nonsingular curves of genus one, are studied in number theory, and have important applications to cryptography.
**Samsung Galaxy S III Neo**
Samsung Galaxy S III Neo:
The Samsung Galaxy S III Neo (also known as the Galaxy S3 Lite) is a slightly different version of the Samsung Galaxy S III. It is slightly less feature-packed: for example, it lacks a scratch-proof screen, and it uses a Qualcomm Snapdragon 400 processor rather than the first-party Samsung Exynos 4 processor in the flagship.
Samsung unveiled the Galaxy S III Neo in April 2014, two years after the original Galaxy S III's release.
Differences between S III and S III Neo:
The original Galaxy S III and the Neo do not share the same specifications; they differ in several areas.
Another difference is the software support from the community: as of mid-2023, the S III Neo was one of the oldest smartphones still fully officially supported by alternative Android ROMs such as LineageOS, almost 10 years after it came on the market. LineageOS 18 (Android 11) is kept up to date with the latest security patches on this smartphone. The S III from 2012 does not enjoy this support.
**Checkerboarding (beekeeping)**
Checkerboarding (beekeeping):
Checkerboarding is a term used in beekeeping that describes a specific hive management technique to prevent swarming. The technique was developed by Walt Wright, a long-time beekeeper from Tennessee.

Checkerboarding takes advantage of the bee colony's primary motivation, which is survival: survival of the existing colony takes priority over swarm preparation and swarming. Bees will not prepare for a reproductive swarm if they perceive that the survival of the existing colony might be jeopardized. It is hypothesized that swarm preparation starts in the late winter. The over-wintering colony consumes honey and expands the brood volume. At this point, the bee colony apparently senses that it has enough remaining honey stores and a large enough brood nest to risk swarm preparation. The bee colony's first activity of swarm preparation is to reduce the brood volume by creating additional stores inside the brood area. As brood emerges, selected cells are back-filled with honey, nectar, or pollen. Later into the season, as space for egg laying decreases, the queen will not be able to lay as many eggs. This forces the queen to diet and lose weight so she becomes fit for the swarm flight.
Checkerboarding (beekeeping):
The best time to perform the checkerboarding intervention is apparently when this type of backfilling behavior first starts. It requires filling two hive boxes above the brood nest, each alternating capped honey-filled frames with empty drawn frames. Alternating empty drawn combs above the brood nest "fools" the bees into thinking they do not yet have enough stores for swarming and causes them to expand the brood nest, both producing a bigger field force and avoiding reproductive swarming.
Checkerboarding (beekeeping):
Timing the checkerboarding intervention is very important (note that the following are different reference points that all refer to the same time in the United States). The best time for the intervention is:
- when the elm trees bloom
- four weeks before the maple trees bloom
- five weeks before the redbud trees bloom
- eight weeks before the start of the apple tree blossom
- nine weeks before the peak of the apple tree blossom
Sources:
Michael Bush (January 17, 2006). "Thread: Experiment participants wanted". Beesource. Retrieved March 1, 2006. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mirtron**
Mirtron:
Mirtrons are a type of microRNA located in the introns of the mRNA of their host genes. These short hairpin introns are formed via atypical miRNA biogenesis pathways. Mirtrons arise from spliced-out introns and are known to function in gene expression.
Mirtron:
Mirtrons were first identified in Drosophila melanogaster and Caenorhabditis elegans. The numbers of mirtrons identified to date are 14, 9, and 19 in D. melanogaster, C. elegans and mammals, respectively. Mirtrons are alternative precursors for microRNA biogenesis. The short hairpin introns use splicing to bypass DROSHA cleavage, which is otherwise essential for the generation of canonical animal microRNAs. Mirtrons arise from spliced-out introns and are known to function like classical microRNAs (miRs) and regulate gene expression, by mRNA destabilisation, inhibition of translation or target mRNA cleavage. More evidence is now emerging that supports the existence of mirtrons in plants. All miRNAs in plants are derived from sequential DCL1 cleavages from pri-miRNA to give pre-miRNA (the miRNA precursor), but mirtrons bypass the DCL1 cleavage and enter the miRNA maturation pathway as pre-miRNA. Mirtrons are distinct from canonical miRNA sequences and can be distinguished with machine learning methods in data analysis. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Double line automatic signalling**
Double line automatic signalling:
Double Line Automatic Signalling is a form of railway signalling used on the majority of double line sections in New Zealand. Double Line Automatic Signalling uses track circuits to detect the presence of trains in sections broken up by intermediate signals. Usually there is an 'up' and a 'down' main line, and beyond station limits the lines are not bi-directionally signalled. DLAS is not designed for wrong-line running in emergency situations.
Junctions or points:
At junctions or points, one or both main-line signals are usually controlled either remotely (by a Train Controller or signalman) or switched in at a local panel.
Sidings:
Sidings off one or both main lines are usually operated by switchlock lever points secured by a padlock, with track-circuit detection enabling a release to be given before the points can be operated.
Areas of use:
Papakura - Amokura
Te Kauwhata - Hamilton
Trentham - Kaiwharawhara
Kaiwharawhara - South Junction
North Junction - Waikanae
Islington - Heathcote
Formerly double line:
Islington - Rolleston
Mosgiel - St Leonards | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Aggregation problem**
Aggregation problem:
In economics, an aggregate is a summary measure: it replaces a vector composed of many real numbers with a single real number, or scalar. Consequently, various problems are inherent in formulations that use aggregated variables. The aggregation problem is the difficult problem of finding a valid way to treat an empirical or theoretical aggregate as if it reacted like a less-aggregated measure, say, the behavior of an individual agent as described in general microeconomic theory (see Representative agent, Heterogeneity in economics).
Aggregation problem:
The second meaning of "aggregation problem" is the theoretical difficulty of using and treating laws and theorems that include aggregate variables. A typical example is the aggregate production function; another famous problem is the Sonnenschein–Mantel–Debreu theorem. Most macroeconomic statements involve this problem.
Aggregation problem:
Examples of aggregates in micro- and macroeconomics, relative to their less-aggregated counterparts, are:
- food vs. apples
- the price level and real GDP vs. the price and quantity of apples
- the capital stock vs. the value of computers of a certain type and the value of steam shovels
- the money supply vs. paper currency
- the general unemployment rate vs. the unemployment rate of civil engineers
Standard theory uses simple assumptions to derive general, and commonly accepted, results such as the law of demand to explain market behavior. An example is the abstraction of a composite good: the price of one good is assumed to change proportionately to the composite good, that is, all other goods. If this assumption is violated and the agents are subject to aggregated utility functions, restrictions on the latter are necessary to yield the law of demand. The aggregation problem emphasizes how broad such restrictions are in microeconomics: the use of broad factor inputs ("labor" and "capital"), real "output", and "investment", as if there were only a single such aggregate, is without a solid foundation for rigorously deriving analytical results. Franklin Fisher notes that this has not dissuaded macroeconomists from continuing to use such concepts.
Aggregate consumer demand curve:
The aggregate consumer demand curve is the summation of the individual consumer demand curves. The aggregation process preserves only two characteristics of individual consumer preference theory: continuity and homogeneity. Aggregation introduces three additional non-price determinants of demand:
- the number of consumers
- the distribution of tastes among the consumers
- the distribution of incomes among consumers of different tastes
Thus, if the population of consumers increases, ceteris paribus the demand curve will shift out; if the proportion of consumers with a strong preference for a good increases, ceteris paribus the demand for that good will change. Finally, if the distribution of income changes in favor of consumers who prefer the good in question, demand will shift out. It is important to remember that factors affecting individual demand can also affect aggregate demand, but net effects must be considered. The most important problem for micro- and macroeconomics is the Sonnenschein–Mantel–Debreu theorem, which shows that almost none of the properties of individual preferences are inherited by the aggregate demand function.
Aggregate consumer demand curve:
Difficulties with aggregation The Sonnenschein–Mantel–Debreu theorem (SMD theorem) is a theorem for an exchange economy that can be expressed in the following way: for a function that is continuous, homogeneous of degree zero, and in accord with Walras's law, there is an economy with at least as many agents as goods such that, for prices bounded away from zero, the function is the aggregate demand function for this economy.
Aggregate consumer demand curve:
Independence assumption First, to sum the demand functions without other strong assumptions it must be assumed that they are independent, that is, that one consumer's demand decisions are not influenced by the decisions of another consumer. For example, A is asked how many pairs of shoes he would buy at a certain price, and says that at that price he would be willing and able to buy two pairs. B is asked the same question and says four pairs. The questioner goes back to A and says: B is willing to buy four pairs of shoes, what do you think about that? A says that if B has any interest in those shoes, then he has none. Or A, not to be outdone by B, says "then I'll buy five pairs", and so on. This problem can be eliminated by assuming that consumers' tastes are fixed in the short run, which can be expressed as assuming that each consumer is an independent, idiosyncratic decision maker.
Aggregate consumer demand curve:
No interesting properties This second problem is more serious. As David M. Kreps notes, “total demand will shift about as a function of how individual incomes are distributed even holding total (societal) income fixed. So it makes no sense to speak of aggregate demand as a function of price and societal income". Since any change in relative price brings about a redistribution of real income, there is a separate demand curve for every relative price. Kreps continues, "So what can we say about aggregate demand based on the hypothesis that individuals are preference/utility maximizers? Unless we are able to make strong assumptions about the distribution of preferences or income throughout the economy (everyone has the same homothetic preferences for example) there is little we can say”. The strong assumptions are that everyone has the same tastes and that each person's tastes remain the same as income changes so additional income is spent in exactly the same way as before.
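Kreps's observation can be illustrated with a toy two-consumer economy. The Cobb-Douglas functional form and all numbers below are illustrative assumptions, not from the text: holding total income fixed at 100, redistributing it between consumers with different tastes changes aggregate demand at the same price.

```python
def individual_demand(alpha: float, income: float, price: float) -> float:
    """Cobb-Douglas demand for one good: the consumer spends the
    fraction alpha of income on it, buying alpha*income/price units."""
    return alpha * income / price

def aggregate_demand(consumers: list[tuple[float, float]], price: float) -> float:
    """Sum individual demands; each consumer is an (alpha, income) pair."""
    return sum(individual_demand(a, m, price) for a, m in consumers)

price = 2.0
even   = [(0.2, 50), (0.8, 50)]  # total income 100, split evenly
skewed = [(0.2, 90), (0.8, 10)]  # same total income, redistributed
print(aggregate_demand(even, price))    # 25.0
print(aggregate_demand(skewed, price))  # 13.0
```

Price and total societal income are identical in both scenarios, yet aggregate demand differs, which is exactly why aggregate demand cannot be written as a function of price and societal income alone.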
Aggregate consumer demand curve:
Microeconomist Hal Varian reached a more muted conclusion: "The aggregate demand function will in general possess no interesting properties". However, Varian continued: "the neoclassical theory of the consumer places no restriction on aggregate behavior in general". This means the preference conditions (with the possible exception of continuity) simply do not apply to the aggregate function. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Kepler-445d**
Kepler-445d:
Kepler-445 is a red dwarf star located 401 light-years (123 parsecs) away in the constellation Cygnus. It hosts three known exoplanets, discovered by the transit method using data from the Kepler space telescope and confirmed in 2015. None of the planets orbit within the habitable zone.
Planetary system:
Kepler-445b, c, and d orbit Kepler-445 every 3, 5, and 8 days, and have equilibrium temperatures of 401 K (128 °C; 262 °F), 341 K (68 °C; 154 °F), and 305 K (32 °C; 89 °F), respectively. With a radius of 2.72 times that of Earth, Kepler-445c is likely a mini-Neptune with a volatile-rich composition, and has been compared to GJ 1214 b. Kepler-445d is only slightly larger than the Earth, with a radius of 1.33 REarth. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
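The unit conversions above can be verified with a short script (planet letters and equilibrium temperatures taken from the text):

```python
def kelvin_to_celsius(k: float) -> float:
    """Convert a temperature from kelvins to degrees Celsius."""
    return k - 273.15

def kelvin_to_fahrenheit(k: float) -> float:
    """Convert a temperature from kelvins to degrees Fahrenheit."""
    return (k - 273.15) * 9 / 5 + 32

# Equilibrium temperatures of Kepler-445b, c, and d.
for name, t_eq in [("b", 401), ("c", 341), ("d", 305)]:
    print(f"Kepler-445{name}: {t_eq} K = "
          f"{kelvin_to_celsius(t_eq):.0f} °C = "
          f"{kelvin_to_fahrenheit(t_eq):.0f} °F")
```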
**LHA (file format)**
LHA (file format):
LHA or LZH is a freeware compression utility and associated file format. It was created in 1988 by Haruyasu Yoshizaki (吉崎栄泰, Yoshizaki Haruyasu), a doctor, and was originally named LHarc. A complete rewrite of LHarc, tentatively named LHx, was eventually released as LH. It was then renamed LHA to avoid conflicting with the then-new MS-DOS 5.0 LH ("load high") command. The original LHA and its Windows port, LHA32, are no longer in development because Yoshizaki is busy with his work. Although no longer much used in the West, LHA remained popular in Japan until the 2000s. It was used by id Software to compress installation files for their earlier games, including Doom and Quake. Because some versions of LHA have been distributed with source code under a permissive license, LHA has been ported to many operating systems and is still the main archiving format used on the Amiga computer, although it competed with LZX in the mid-1990s. This was due to Aminet, the world's largest archive of Amiga-related software and files, standardising on Stefan Boberg's implementation of LHA for the Amiga.
LHA (file format):
Microsoft released the Microsoft Compressed (LZH) Folder Add-on, which was designed for the Japanese version of Windows XP. The Japanese version of Windows 7 ships with the LZH folder add-on built-in. Users of non-Japanese versions of Windows 7 Enterprise and Ultimate can also install the LZH folder add-on by installing the optional Japanese language pack from Windows Update.
Compression methods:
In an LZH archive, the compression method is stored as a five-byte text string, e.g. -lz1-. These are the third through seventh bytes of the file.
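A minimal sketch of reading that field, assuming a level-0/1 header whose first two bytes are the header size and checksum (the header bytes below are synthetic, not from a real archive):

```python
def lzh_method(header: bytes) -> str:
    """Return the five-byte compression-method string from an LZH
    level-0/1 header: bytes 0 and 1 are the header size and checksum,
    and the method id occupies the third through seventh bytes."""
    return header[2:7].decode("ascii")

# A synthetic header fragment for illustration only.
header = bytes([0x22, 0x00]) + b"-lh5-" + bytes(16)
print(lzh_method(header))  # -lh5-
```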
Canonical LZH LHarc compresses files using an algorithm from Yoshizaki's earlier LZHUF product, which was modified from LZARI developed by Haruhiko Okumura (奥村晴彦, Okumura Haruhiko), but uses Huffman coding instead of arithmetic coding. LZARI uses Lempel–Ziv–Storer–Szymanski with arithmetic coding.
lh0 No compression method is applied to the source data.
lh1 This method was introduced in LHarc version 1.
It supports a 4 KiB sliding window, with a maximum match length of 60 bytes. Dynamic Huffman encoding is used.
lh2 An lh1 variant. This method supports an 8 KiB sliding window, with a maximum match length of 256 bytes. Dynamic Huffman encoding is used.
lh3 An lh2 variant with static Huffman encoding.
lh4, lh5, lh6, lh7 Methods 4, 5, 6, and 7 support 4, 8, 32, and 64 KiB sliding windows respectively, with a maximum match length of 256 bytes. Static Huffman encoding is used. lh5 was first introduced in LHarc 2, followed by lh6 in LHA 2.66 (MS-DOS) and lh7 in LHA 2.67 beta (MS-DOS). LHA itself never compresses into lh4.
lhd Technically it is not a compression method, but it is used in .LZH archive to indicate that the compressed object is an empty directory.
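The LZSS-style sliding-window matching underlying these methods can be sketched as a toy match finder using lh1's parameters (4 KiB window, 60-byte maximum match). This is an illustration of the idea only, not LHA's actual implementation:

```python
def longest_match(data: bytes, pos: int,
                  window: int = 4096, max_len: int = 60) -> tuple[int, int]:
    """Find the longest match for data[pos:] inside the preceding
    sliding window, as an LZSS-style compressor would.
    Returns (offset, length); length 0 means 'emit a literal'."""
    best_off, best_len = 0, 0
    start = max(0, pos - window)
    for cand in range(start, pos):
        length = 0
        while (length < max_len and pos + length < len(data)
               and data[cand + length] == data[pos + length]):
            length += 1
        if length > best_len:
            best_off, best_len = pos - cand, length
    return best_off, best_len

print(longest_match(b"abcabcabc", 3))  # (3, 6)
```

Note that the match may overlap the current position, which is how LZSS encodes runs of a repeating pattern compactly.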
Joe Jared extensions Joe Jared extended LZSS to use larger dictionaries.
lh8, lh9, lha, lhb, lhc, lhe Dictionary (sliding window) sizes are 64, 128, 256, 512, 1024, and 2048 KiB respectively. Jared ported LZH to the Atari. That lh8 is the same as lh7 was an oversight. The larger-numbered methods were only planned features, so files using them effectively do not exist.
UNLHA32 extensions UNLHA32.DLL uses its own method for testing purposes.
lhx It uses 128–256 KiB dictionary.
PMarc extensions These compression methods were created by PMarc, a CP/M archiver written by Miyo. The archive usually has a .PMA extension.
pc1 PopCom compressed executable archive. Details unknown.
pm0 No compression method is applied to the source data.
pm1 8 KiB sliding window, static Huffman. Seldom generated; the decompressor has been reverse-engineered.
pm2 lh5 variant, 4K sliding window.
pms Used to indicate PMarc self-extracting archive. Should be skipped to reveal the real format.
LArc extensions LArc uses the same file format as .LZH, but was written by Kazuhiko Miki, Haruhiko Okumura and Ken Masuyama, with the extension name ".LZS". The program appears to predate LZH. It uses a binary search tree for LZ matching.
lzs It supports a 2 KiB sliding window, with a maximum match length of 17 bytes.
lz2 It is similar to lzs, except dictionary size and match length can be changed.
lz3 Unknown.
lz4 No compression method is applied to the source data.
lz5 It supports a 4 KiB sliding window, with a maximum match length of 17 bytes.
lz7 lz8 Unknown. Common implementations appear to support only lzs and lz5, plus the storage-only lz4.
Issues:
LHICE/ICE There are copies of LHICE marked as version 1.14. According to Okumura, LHICE was not written by Yoshizaki.
Issues:
Y2K11 bug Because of a bug, DOS time stamps in Level 0 and 1 headers for years after 2011 are set to 1980, meaning that some utilities need to be patched. The bug interprets the unsigned 7-bit year bitfield as a 5-bit number; correctly decoded, the maximum year is 2107. The newer Level 2 and 3 headers use a 32-bit Unix time instead, which suffers from the Year 2038 problem.
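A sketch of the decode logic (the function name and structure are illustrative): a 16-bit DOS date word stores the year as an offset from 1980 in bits 15-9, and masking that field to 5 bits instead of 7 wraps every year from 2012 back to 1980.

```python
def dos_year(dos_date: int, buggy: bool = False) -> int:
    """Decode the year from a 16-bit DOS date word.  Bits 15-9 hold
    the year as an unsigned 7-bit offset from 1980, so the real
    maximum is 1980 + 127 = 2107.  The Y2K11 bug masked the field
    to 5 bits, wrapping every year from 2012 onward."""
    field = (dos_date >> 9) & (0x1F if buggy else 0x7F)
    return 1980 + field

# 2012 encoded in a DOS date word: (2012 - 1980) << 9.
word = (2012 - 1980) << 9
print(dos_year(word))              # 2012
print(dos_year(word, buggy=True))  # 1980
```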
Issues:
Header size According to Micco, the author of the popular LHA library UNLHA32.DLL, many LHA implementations do not check the length of LHA file headers when reading an archive. Two problems can emerge from this: a buffer overrun may occur in naive implementations that assume the 4 KB maximum size from the original specification, and antivirus software may skip over files with such large headers and fail to scan them for viruses. A similar problem exists with ARJ. Micco reported this problem to the Japanese authorities, but they do not consider it a valid vulnerability. Micco went so far as to end development of UNLHA32 and advise people to give up on the format, but nevertheless returned in 2017 to fix a DLL hijacking issue. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Robotic sensing**
Robotic sensing:
Robotic sensing is a subarea of robotics science intended to provide sensing capabilities to robots. Robotic sensing provides robots with the ability to sense their environments and is typically used as feedback to enable robots to adjust their behavior based on sensed input. Robot sensing includes the ability to see, touch, hear and move and associated algorithms to process and make use of environmental feedback and sensory data. Robot sensing is important in applications such as vehicular automation, robotic prosthetics, and for industrial, medical, entertainment and educational robots.
Vision:
Method Visual sensing systems can be based on a variety of technologies and methods, including the use of cameras, sonar, lasers and radio frequency identification (RFID) technology. All four methods aim at three procedures: sensation, estimation, and matching.
Image processing Image quality is important in applications that require excellent robotic vision. Algorithms based on wavelet transform that are used for fusing images of different spectra and different foci result in improved image quality. Robots can gather more accurate information from the resulting improved image.
Usage Visual sensors help robots to identify the surrounding environment and take appropriate action. Robots analyze the image of the immediate environment based on data input from the visual sensor. The result is compared to the ideal, intermediate or end image, so that appropriate movement or action can be determined to reach the intermediate or final goal.
Touch:
Robot skin Types and examples Examples of the state of progress in the field of robot skins as of mid-2022 include: a robotic finger covered in a type of manufactured living human skin; an electronic skin giving biological skin-like haptic sensations and touch/pain sensitivity to a robotic hand; a system combining an electronic skin and a human-machine interface that can enable remote sensed tactile perception; wearable or robotic sensing of many hazardous substances and pathogens; and a multilayer hydrogel-based tactile-sensor robot skin.
Touch:
Tactile discrimination Signal processing Touch sensory signals can be generated by the robot's own movements. It is important to identify only the external tactile signals for accurate operations. Previous solutions employed the Wiener filter, which relies on prior knowledge of signal statistics that are assumed to be stationary. A more recent solution applies an adaptive filter to the robot's logic, enabling the robot to predict the sensor signals resulting from its internal motions and screen these false signals out. The new method improves contact detection and reduces false interpretation.
Touch:
Usage Touch patterns enable robots to interpret human emotions in interactive applications. Four measurable features, force, contact time, repetition, and contact area change, can effectively categorize touch patterns through a temporal decision tree classifier that accounts for the time delay, and associate them with human emotions with up to 83% accuracy. A Consistency Index is applied at the end to evaluate the system's level of confidence, to prevent inconsistent reactions.
Touch:
Robots use touch signals to map the profile of a surface in hostile environments such as a water pipe. Traditionally, a predetermined path was programmed into the robot. Currently, with the integration of touch sensors, the robots first acquire a random data point; the robot's algorithm then determines the ideal position of the next measurement according to a set of predefined geometric primitives. This improves efficiency by 42%. In recent years, using touch as a stimulus for interaction has been the subject of much study. In 2010, the robot seal PARO was built, which reacts to many stimuli from human interaction, including touch. The therapeutic benefits of such human-robot interaction are still being studied, but have shown very positive results.
Hearing:
Signal processing Accurate audio sensors require a low internal noise contribution. Traditionally, audio sensors combine acoustical arrays and microphones to reduce the internal noise level. Recent solutions also incorporate piezoelectric devices. These passive devices use the piezoelectric effect to transform force to voltage, so that the vibration causing the internal noise can be eliminated. On average, internal noise of up to about 7 dB can be reduced. Robots may interpret stray noise as speech instructions. A current voice activity detection (VAD) system uses the complex spectrum circle centroid (CSCC) method and a maximum signal-to-noise ratio (SNR) beamformer. Because humans usually look at their partners when conducting conversations, a VAD system with two microphones enables the robot to locate instructional speech by comparing the signal strengths of the two microphones. Current systems can cope with background noise generated by televisions and sound-producing devices located to the sides.
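The two-microphone localisation idea reduces, in caricature, to comparing signal strength across channels. This toy sketch (function name and sample values invented for illustration) uses RMS energy, whereas the real CSCC/beamformer system is far more involved:

```python
import math

def louder_side(left: list[float], right: list[float]) -> str:
    """Toy two-microphone localisation: compare the RMS signal
    strength of the two channels to decide which side the speaker
    is on.  A caricature of the idea, not a real VAD system."""
    def rms(samples: list[float]) -> float:
        return math.sqrt(sum(x * x for x in samples) / len(samples))
    return "left" if rms(left) > rms(right) else "right"

print(louder_side([0.9, -0.8, 0.7], [0.1, -0.1, 0.2]))  # left
```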
Hearing:
Usage Robots can perceive emotions through the way we talk and associated characteristics and features. Acoustic and linguistic features are generally used to characterize emotions. The combination of seven acoustic features and four linguistic features improves the recognition performance when compared to using only one set of features.
Acoustic features: duration, energy, pitch, spectrum, cepstral, voice quality, wavelets. Linguistic features: bag of words, part-of-speech, higher semantics, varia.
Taste:
For example, robot cooks may be able to taste food for dynamic cooking.
Motion perception:
Usage Automated robots require a guidance system to determine the ideal path to perform their task. However, at the molecular scale, nano-robots lack such a guidance system because individual molecules cannot store complex motions and programs. Therefore, the only way to achieve motion in such an environment is to replace sensors with chemical reactions. Currently, a molecular spider that has one streptavidin molecule as an inert body and three catalytic legs is able to start, follow, turn and stop when coming across different DNA origami. The DNA-based nano-robots can move over 100 nm at a speed of 3 nm/min. In a TSI operation, which is an effective way to identify tumors and potentially cancer by measuring the distributed pressure at the sensor's contacting surface, excessive force may inflict damage and risk destroying the tissue. The application of robotic control to determine the ideal path of operation can reduce the maximum forces by 35% and gain a 50% increase in accuracy compared to human doctors.
Motion perception:
Performance Efficient robotic exploration saves time and resources. The efficiency is measured by optimality and competitiveness. Optimal boundary exploration is possible only when a robot has square sensing area, starts at the boundary, and uses the Manhattan metric. In complicated geometries and settings, a square sensing area is more efficient and can achieve better competitiveness regardless of the metric and of the starting point.
Non-human senses:
Robots may not only be equipped with higher sensitivity and capability per sense than all or most non-cyborg humans, such as being able to "see" more of the electromagnetic spectrum (for example ultraviolet) and with higher fidelity and granularity, but may also have additional senses, such as sensing of magnetic fields (magnetoreception) or of various hazardous air components.
Collective sensing and sensemaking:
Robots may share, store, and transmit sensory data as well as data based on such. They may learn from or interpret the same or related data in different ways and some robots may have remote senses (e.g. without local interpretation or processing or computation such as with common types of telerobotics or with embedded or mobile "sensor nodes"). Processing of sensory data may include processes such as facial recognition, facial expression recognition, gesture recognition and integration of interpretative abstract knowledge. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Predictive microbiology**
Predictive microbiology:
Predictive microbiology is the area of food microbiology in which the controlling factors in foods and the responses of pathogenic and spoilage microorganisms are quantified and modelled by mathematical equations. It is based on the thesis that microorganisms' growth and environment are reproducible and can be modeled. Temperature, pH and water activity impact bacterial behavior, and these factors can be changed to control food spoilage. Models can be used to predict pathogen growth in foods. Models are developed in several steps, including design, development, validation, and production of an interface to display results. Models can be classified according to their objective into primary models (describing bacterial growth), secondary models (describing factors affecting bacterial growth) and tertiary models (computer software programs). | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
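As an example of a primary model, the modified Gompertz equation (Zwietering form) is widely used to describe bacterial growth curves; the sketch below uses made-up parameters for illustration, not fitted values.

```python
import math

def gompertz_log_count(t: float, log_n0: float, a: float,
                       mu_max: float, lag: float) -> float:
    """Modified Gompertz primary growth model (Zwietering form):
    log10 N(t) = log_n0 + a * exp(-exp(mu_max*e/a * (lag - t) + 1)),
    where a is the asymptotic increase in log10 count, mu_max the
    maximum specific growth rate, and lag the lag time."""
    return log_n0 + a * math.exp(
        -math.exp(mu_max * math.e / a * (lag - t) + 1.0))

# Illustrative parameters: start near 10^3 CFU/ml, grow towards
# 10^9 with mu_max = 0.5 log10 units/h after a 4 h lag phase.
for t in (0, 5, 10, 20):
    print(t, round(gompertz_log_count(t, 3.0, 6.0, 0.5, 4.0), 2))
```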
**Injection fibrosis**
Injection fibrosis:
Injection fibrosis is a complication of intramuscular injection, occurring especially often in infants and children. Injections are often delivered to the quadriceps, triceps, and gluteal muscles, and thus the complication often manifests itself in those muscles.
Patients are unable to fully flex the affected muscle. The condition is painless, but progressively worsens over time. Orthopedic surgery is the typical treatment. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Waffle slab**
Waffle slab:
A waffle slab or two-way joist slab is a concrete slab made of reinforced concrete with concrete ribs running in two directions on its underside. The name waffle comes from the grid pattern created by the reinforcing ribs. Waffle slabs are preferred for spans greater than 40 feet (12 m), as they are much stronger than flat slabs, flat slabs with drop panels, two-way slabs, one-way slabs, and one-way joist slabs.
Description:
A waffle slab is flat on top, while joists create a grid-like surface on the bottom. The grid is formed by the removal of molds after the concrete sets. This structure was designed to be more solid when used on longer spans and with heavier loads. Because of its rigidity, this type of structure is recommended for buildings that require minimal vibration, like laboratories and manufacturing facilities. It is also used in buildings that require big open spaces, like theatres or train stations. Waffle slabs require intricate formwork and may be more expensive than other types of slabs, but depending on the project and the quantity of concrete needed, they may be cheaper to build.
Description:
There are two types of waffle slab system:
- one-way waffle slab system
- two-way waffle slab system
Construction process:
A waffle slab can be made in different ways, but generic forms are needed to give the waffle shape to the slab. The formwork is made up of many elements: waffle pods, horizontal supports, vertical supports, cube junctions, hole plates, cleats and steel bars. First the supports are built, then the pods are arranged in place, and finally the concrete is poured.
Construction process:
This process may follow three different approaches, although the basic method is the same in each: In situ: Formwork construction and pouring of concrete occur on site, then the slab is assembled (if required).
Precast: The slabs are made somewhere else and then brought to the site and assembled.
Pre-fabricated: The reinforcements are integrated into the slab while being manufactured, without needing to reinforce the assembly on site. This is the most expensive option.
Construction process:
Waffle slab design Different guides have been made for architects and engineers to determine various parameters of waffle slabs, primarily the overall thickness and rib dimensions. The following are rules of thumb, which are explained further in the accompanying diagrams: Slab depth is typically 75 mm (3 in) to 130 mm (5 in) thick. As a rule of thumb, the depth should be 1⁄24 of the span.
Construction process:
The width of the ribs is typically 130 mm (5 in) to 150 mm (6 in), and ribs usually have steel rod reinforcements.
The distance between ribs is typically 915 mm (3 ft).
The height of the ribs and beams should be 1⁄25 of the span between columns.
The width of the solid area around the column should be 1⁄8 of the span between columns. Its height should be the same as the ribs.
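The rules of thumb above can be collected into a small helper. This is a sizing sketch using only the proportions stated in the text (function and key names are my own), not structural design advice:

```python
def waffle_slab_rules(span_mm: float) -> dict:
    """Apply the rule-of-thumb proportions for a waffle slab to the
    span between columns.  All lengths are in millimetres; values
    are heuristics, not a substitute for structural design."""
    return {
        "slab_depth": span_mm / 24,                 # 1/24 of the span
        "rib_and_beam_height": span_mm / 25,        # 1/25 of the span
        "solid_area_width_at_column": span_mm / 8,  # 1/8 of the span
        "typical_rib_spacing": 915,                 # mm
        "typical_rib_width": (130, 150),            # mm, typical range
    }

print(waffle_slab_rules(12_000))  # a 12 m span
```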
Advantages:
The waffle slab floor system has several advantages: Better for buildings that require less vibrations – this is managed by the two way joist reinforcements that form the grid.
Advantages:
- Bigger spans can be achieved with less material, which is more economical and environmentally friendly
- Some people find the waffle pattern aesthetically pleasing
- Greater load capacity than traditional one-way slabs
- Forms can be implemented with wood, concrete or steel
- If holes are provided between the ribs, building services can be run through them; one proprietary implementation of this system is called Holedeck
Disadvantages:
- Greater quantities of formwork materials are needed, which can be very costly
- Waffle slabs are thicker than flat slabs, so the height between each floor must be greater to have enough space for the slab system and other building services
- Waffle slabs are preferred for flat topographical areas, not sloped sites
Examples:
Royal National Theatre, London, United Kingdom
Washington Metro
Building Logistic and Telecommunication SL, Madrid, Spain
Barangaroo House, Sydney, Australia
GS1 Portugal, Lisboa, Portugal
Galbraith Hall, UC San Diego, California
odD House, Quito, Ecuador | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Pentadecagon**
Pentadecagon:
In geometry, a pentadecagon or pentakaidecagon or 15-gon is a fifteen-sided polygon.
Regular pentadecagon:
A regular pentadecagon is represented by Schläfli symbol {15}.
A regular pentadecagon has interior angles of 156° and, with side length a, an area given by A = (15/4) a² cot(π/15) ≈ 17.6424 a².
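The area formula can be checked numerically for a unit side length:

```python
import math

def regular_polygon_area(n: int, side: float) -> float:
    """Area of a regular n-gon with a given side length:
    A = (n/4) * cot(pi/n) * side**2."""
    return n * side ** 2 / (4 * math.tan(math.pi / n))

print(round(regular_polygon_area(15, 1.0), 4))  # 17.6424
```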
Construction:
As 15 = 3 × 5, a product of distinct Fermat primes, a regular pentadecagon is constructible using compass and straightedge: The following constructions of regular pentadecagons with given circumcircle are similar to the illustration of the proposition XVI in Book IV of Euclid's Elements.
Compare the construction according to Euclid in this image: Pentadecagon. In the construction for a given circumcircle: FG = CF and AH = GM; E1E6 is a side of an equilateral triangle and E2E5 is a side of a regular pentagon.
The point H divides the radius AM in the golden ratio: AH/HM = (1 + √5)/2 ≈ 1.618.
Compared with the first animation (with green lines), the following two images show the two circular arcs (for the angles 36° and 24°) rotated 90° counterclockwise. They do not use the segment CG, but rather use the segment MG as the radius AH for the second circular arc (angle 36°).
A compass and straightedge construction is also possible for a given side length. The construction is nearly equal to that of the pentagon with a given side; it proceeds by extending one side, which generates a segment, here FE2, that is divided according to the golden ratio: (1 + √5)/2 ≈ 1.618.
Circumradius E2M¯ = R; side length E1E2¯ = a; base angle 78°, central angle 24°; R = a · sin(78°) / sin(24°) ≈ 2.40486 · a.
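The circumradius relation can be verified against the closed form R = a / (2 sin(π/15)); in the isosceles triangle E1-M-E2 the law of sines gives the same value. A short Python check:

```python
import math

a = 1.0  # side length
# Law of sines in triangle E1-M-E2: central angle 24 deg, base angles 78 deg.
R = a * math.sin(math.radians(78)) / math.sin(math.radians(24))
# Equivalent closed form for the circumradius of a regular 15-gon:
R_closed = a / (2 * math.sin(math.pi / 15))
print(round(R, 4))  # 2.4049
```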
Symmetry:
The regular pentadecagon has Dih15 dihedral symmetry, order 30, represented by 15 lines of reflection. Dih15 has 3 dihedral subgroups: Dih5, Dih3, and Dih1. It also has four cyclic symmetries: Z15, Z5, Z3, and Z1, with Zn representing 2π/n radian rotational symmetry.
On the pentadecagon, there are 8 distinct symmetries. John Conway labels these symmetries with a letter, with the order of the symmetry following the letter. He gives r30 for the full reflective symmetry, Dih15. He gives d (diagonal) for reflection lines through vertices, p for reflection lines through edges (perpendicular), i (for the odd-sided pentadecagon) for mirror lines through both vertices and edges, and g for cyclic symmetry. a1 labels no symmetry.
These lower symmetries allow degrees of freedom in defining irregular pentadecagons. Only the g15 subgroup has no degrees of freedom, but it can be seen as having directed edges.
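The symmetry counts above follow from the divisors of 15: each divisor d contributes one dihedral subgroup Dih_d and one cyclic subgroup Z_d, giving 4 + 4 = 8 distinct symmetries. A small Python sketch of this counting:

```python
n = 15
divisors = [d for d in range(1, n + 1) if n % d == 0]
print(divisors)  # [1, 3, 5, 15]

# One dihedral subgroup Dih_d and one cyclic subgroup Z_d per divisor:
symmetries = [f"Dih{d}" for d in divisors] + [f"Z{d}" for d in divisors]
print(len(symmetries))  # 8
```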
Pentadecagrams:
There are three regular star polygons: {15/2}, {15/4}, {15/7}, constructed from the same 15 vertices of a regular pentadecagon, but connected by joining every second, fourth, or seventh vertex respectively.
There are also three regular star figures: {15/3}, {15/5}, {15/6}, the first being a compound of three pentagons, the second a compound of five equilateral triangles, and the third a compound of three pentagrams.
The compound figure {15/3} can be loosely seen as the two-dimensional equivalent of the 3D compound of five tetrahedra.
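Whether {15/k} is a single regular star polygon or a compound figure is determined by gcd(15, k): coprime k gives a star polygon, while gcd(15, k) = g > 1 gives a compound of g copies of {15/g / k/g}. A short Python classification reproducing the lists above:

```python
from math import gcd

n = 15
stars, compounds = [], {}
for k in range(2, n // 2 + 1):  # k = 2..7
    g = gcd(n, k)
    if g == 1:
        stars.append(k)   # {15/k} is a regular star polygon
    else:
        compounds[k] = g  # {15/k} is a compound of g copies of {(15//g)/(k//g)}
print(stars)      # [2, 4, 7]
print(compounds)  # {3: 3, 5: 5, 6: 3}
```

This matches the text: {15/3} is three pentagons ({5/1}), {15/5} is five triangles ({3/1}), and {15/6} is three pentagrams ({5/2}).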
Isogonal pentadecagons:
Deeper truncations of the regular pentadecagon and pentadecagrams can produce isogonal (vertex-transitive) intermediate star polygon forms with equally spaced vertices and two edge lengths.
Petrie polygons:
The regular pentadecagon is the Petrie polygon for some higher-dimensional polytopes, projected in a skew orthogonal projection:
Uses:
A regular triangle, decagon, and pentadecagon can completely fill a plane vertex. However, due to the triangle's odd number of sides, the figures cannot alternate around the triangle, so the vertex cannot produce a semiregular tiling.
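That these three polygons fill a plane vertex follows from their interior angles summing to exactly 360°. A quick Python check:

```python
def interior_angle(n: int) -> float:
    """Interior angle of a regular n-gon, in degrees."""
    return (n - 2) * 180 / n

# Triangle, decagon, pentadecagon meeting at one vertex:
angles = [interior_angle(n) for n in (3, 10, 15)]
print(angles)       # [60.0, 144.0, 156.0]
print(sum(angles))  # 360.0 -- the angles close up around the vertex exactly
```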
**Surface probability**
Surface probability:
In immunology, surface probability is the degree to which an antigen's secondary or tertiary structure is exposed on the outside of the molecule. A greater surface probability means that an antigen is more likely to be immunogenic (i.e., to induce the formation of antibodies).
**Limbic imprint**
Limbic imprint:
In psychology, limbic imprint refers to the process by which prenatal, perinatal, and postnatal experiences imprint upon the limbic system, causing lifelong effects. The term is used to explain how early care of a fetus and newborn is important to lifelong psychological development, and it has been used as an argument for alternative birthing methods and against circumcision. Some also refer to the concept as the human emotional map: deep-seated beliefs and values that are stored in the brain's limbic system. When a fetus or newborn experiences trauma, the brain registers trauma as normal, affecting the newborn into adulthood. However, when a fetus or newborn does not experience trauma, the brain develops healthy coping mechanisms that work effectively into adulthood.
Since this phenomenon is experienced during the prenatal, perinatal, and postnatal stages, it primarily affects children. Different types of perinatal and childhood experiences shape the future experiences of adults: if a child is born under traumatic circumstances, then as an adult, trauma will register as normal in the brain. Trauma becomes expected, and because of this early imprinting, adults may be more susceptible to dangerous or abusive situations, which can negatively impact them throughout their lifespan.
Prenatal psychologists have suggested that it is possible to "reverse" the effects of negative imprinting. To improve their future experiences, individuals who have been negatively impacted need to recognize, accept, and change their perspectives on their past experiences. However, there is currently little research on correcting a negative imprint, and no established cure. There are therapies such as re-coding meditation, which seeks to reset the imprints made upon an individual during gestation and shortly after; many of these therapies can be done individually or within a group setting. Some individuals experience lasting benefits such as lowered depressive symptoms or a generally happier psyche. More recently, experimentation with psychedelic-assisted therapy appears to offer a technique to address these issues.
Limbic system:
Limbic imprint is a psychological concept associated with the limbic system. The limbic system includes the structures of the brain that control emotions, memories, and arousal. Through the prefrontal cortex, the system plays a role in the expression of moods and emotional feelings. The structures most involved with limbic imprinting are the hippocampus and amygdala. The hippocampus is chiefly associated with memory, and the process of imprinting emotional and physical experiences into the brain utilizes memory functions, while the emotional regulation of, and responses to, these experiences are chiefly controlled by the amygdala. The system's connections with the cerebral cortex allow an individual to experience negative or positive feelings through perception and to remember events along with their accompanying feelings. It is said that male and female limbic systems differ: women use its more recently evolved part, while the more ancient part shows more activity in men. This may explain why women are more capable of remembering emotions and memories than men. Women are also more likely to be influenced by emotional attachments in their decision-making.
Incidence:
As opposed to other psychological concepts, the limbic imprint is not specific to or more common in one group of people but is applicable to everyone. All human beings are affected, in some way, by their experiences in utero, their experiences during birth, and their first experiences after birth.
Causes:
Trauma is a form of damage to either the mind or body that results from a distressing event. Traumatic experiences can occur in utero, during birth, and/or after birth. Trauma experienced in utero includes maternal smoking, alcohol or drug use during pregnancy; exposure to toxins such as methylmercury; and even exposure to maternal psycho-social stress. Trauma in utero increases the risk of neurodevelopmental delays and disorders causing a long-term effect on limbic imprinting such as difficulty regulating and processing emotions. Trauma experienced during birth includes the use of interventions during labor such as obstetrical forceps or vacuum extraction, cesarean section, or exposure to medicines used to relieve maternal pain or induce labor. These experiences can cause both physical and psychological harm to the baby that also affects the limbic imprinting process. Finally, trauma experienced after birth can include malnutrition; neglect, physical, emotional, or sexual abuse; and lack of a safe or healthy environment. Traumatic experiences after birth also have long-term effects on limbic imprinting.
Effects:
Stress and trauma experienced in the womb become a normal expectation after birth. As Freud identified, infant development is severely affected by negative limbic imprints on the brain, because at this stage the "Ego" is vulnerable and susceptible to adverse effects. This is in part due to an infant's brain being in a state of rapid development. Other effects include difficulty maintaining interpersonal relationships, dealing with overstimulation, and regulating emotions. On the other hand, a baby nurtured with hormones like oxytocin will develop well physically.
Effects:
There are four stages in the trauma loop of the limbic system that occur when an individual is experiencing a negative imprint effect. The brain first moves into a stage called "hyper-alert to danger", in which the individual is highly sensitive to the stimulus. The limbic system then moves into the second stage, in which a "normal cue or danger cue" comes in; this lets the individual know whether the situation is dangerous or safe. Thirdly, the brain experiences "fight, flight, or freeze", and the individual cannot process how to react to the situation or stimulus presenting itself. Lastly, the limbic system reaches a point where the brain cannot take in information to understand the cues that it is receiving. This cycle continues and can repeatedly put an individual at an increased risk of traumatic experiences recurring.
Treatment:
There are therapists who recognize that body and emotions, facilitated by experiences, leave imprints in deep neuronal circuits of the limbic brain, and who use this position to devise psychological interventions such as therapy within a group setting. Some therapists suggest a course of "limbic repatterning" to consciously rewrite bad limbic imprints and thus improve the patient's overall psychological health. Another suggested therapy is called "limbic system therapy". In this therapy, patients participate in physical experiences that contradict the limbic imprint. For example, an individual with low self-esteem may be given an activity deliberately designed to make them feel good about themselves, like positive-affirmation exercises. The more the patient participates in this "re-wiring", the better they will feel about themselves, thus correcting a negative imprint. According to psychologists, there are many more ways to help with re-coding a negative imprint. These strategies include journaling, talking and sharing feelings, and participating in body therapies like massage. Other strategies psychologists suggest are spending time with loved ones, positive thinking, breathing awareness, body awareness, relaxation, meditation, and/or prayer. Regular exercise and warm baths are also suggested by professionals.
Criticisms:
The major problem with limbic imprint is that there is very little scientific research done specifically on the concept. There is plenty of research on trauma and how it affects people as they develop which is useful to explain limbic imprint. However, standing alone, limbic imprint is fairly new and more common in pop-culture psychology than in research/scientific psychology.
**Endoscopic endonasal surgery**
Endoscopic endonasal surgery:
Endoscopic endonasal surgery is a minimally invasive technique used mainly in neurosurgery and otolaryngology. A neurosurgeon or an otolaryngologist, using an endoscope inserted through the nose, fixes or removes brain defects or tumors in the anterior skull base. Normally an otolaryngologist performs the initial stage of surgery, through the nasal cavity and sphenoid bone; a neurosurgeon performs the rest of the surgery, involving drilling into any cavities containing a neural organ such as the pituitary gland. The use of the endoscope in transsphenoidal pituitary surgery was first introduced by R. Jankowski, J. Auque, C. Simon et al. in 1992 (Laryngoscope. 1992 Feb;102(2):198-202).
Introduction:
History of endoscopic endonasal surgery
Antonin Jean Desormeaux, a urologist from Paris, was the first person to use the term endoscope.
Introduction:
However, the precursor to the modern endoscope was invented in the 1800s, when a physician in Frankfurt, Germany, by the name of Philipp Bozzini developed a tool to see the inner workings of the body. Bozzini called his invention a Light Conductor, or Lichtleiter in German, and later wrote about his experiments on live patients with this device, which consisted of an eyepiece and a container for a candle. Following Bozzini's success, the University of Vienna started using the device to test its practicality in other forms of medicine. After Bozzini's device received negative results from live human trials, it had to be discontinued. However, Maximilian Nitze and Joseph Leiter used Thomas Edison's invention of the light bulb to make a more refined device similar to modern-day endoscopes. This iteration was used for urological procedures, and eventually otolaryngologists began to use Nitze and Leiter's device for eustachian tube manipulation and removal of foreign bodies. The endoscope made its way to the US when Walter Messerklinger began teaching David Kennedy at Johns Hopkins Hospital.
The transsphenoidal and intracranial approaches to pituitary tumors began in the 1800s, but with little success. Gerard Guiot popularized the transsphenoidal approach, which later became part of the neurosurgical curriculum; however, he himself discontinued the use of this technique because of inadequate sight. In the late 1970s, the endoscopic endonasal approach was used by neurosurgeons to augment microsurgery, allowing them to view objects out of their line of sight. Another surgeon, Axel Perneczky, is considered to be a pioneer of the use of the endoscope in neurosurgery. Perneczky said that endoscopy "improved appreciation of micro-anatomy not apparent with the microscope." The surgery was pioneered in Algeria by Bouyoucef Kheireddine and Faiza Lalam.
Introduction:
Endoscopic instrumentation
The endoscope consists of a glass fiber bundle for cold-light illumination, a mechanical housing, and an optics component with four different views: 0 degrees for straight forward, 30 degrees for the forward plane, 90 degrees for a lateral view, and 120 degrees for a retrospective view. For endoscopic endonasal surgery, rigid rod-lens endoscopes are used for better quality of vision; these endoscopes are smaller than the normal endoscopes used in colonoscopies. The endoscope has an eyepiece for the surgeon, but it is rarely used because it requires the surgeon to be in a fixed position. Instead, a video camera broadcasts the image to a monitor that shows the surgical field.
Areas of interest for surgical planning:
Several specialties need to be involved to determine the complete surgical plan. These include: an Endocrinologist, a Neuroradiologist, an Ophthalmologist, a Neurosurgeon, and an Otolaryngologist.
Areas of interest for surgical planning:
Endocrinology
An endocrinologist is only involved in preparation for an endoscopic endonasal surgery if the tumor is located on the pituitary gland. The tumor is first treated pharmacologically in two ways: controlling the levels of hormones that the pituitary gland secretes and reducing the size of the tumor. If this approach does not work, the patient is referred to surgery. The main types of pituitary adenomas are:
PRL-secreting, or prolactinomas: These are the most common pituitary tumors. They are associated with infertility and gonadal and sexual dysfunction because they increase the secretion of prolactin (PRL). One drug that endocrinologists use is bromocriptine (BRC), which normalizes PRL levels and has been shown to lead to tumor shrinkage. Other drugs to treat prolactinomas include quinagolide (CV) and cabergoline (CAB), which act as dopamine (D2) receptor agonists. Endoscopic endonasal surgery is normally performed as a last resort, when the tumor is resistant to the drugs, shows no shrinkage, or the PRL levels cannot be normalized.
Areas of interest for surgical planning:
GH-secreting: A very rare condition that is a result of the increase in the secretion of growth hormone. There are currently less than 400,000 cases worldwide and approximately 30,000 new cases every year. Despite the rarity of this condition, these tumors constitute 16% of the pituitary tumors that are removed. The tumor normally results in acral enlargement, arthropathy, hyperhidrosis, changes in facial features, soft tissue swelling, headaches, visual changes, or hypopituitarism. Since pharmacological therapy has had little effect on these tumors, a trans-sphenoidal surgery to remove part of the pituitary gland is the first treatment option.
Areas of interest for surgical planning:
TSH-secreting: Another rare condition, accounting for only 1% of pituitary surgeries, results from increased secretion of thyroid-stimulating hormone. This tumor leads to hyperthyroidism, resulting in headaches and visual disturbances. Although surgery is the first step of treatment, it does not usually cure the patient. After surgery, patients are treated with somatostatin analogues, a type of hormone replacement therapy, because TSH-related tumors increase the expression of somatostatin receptors.
Areas of interest for surgical planning:
ACTH-secreting: This tumor is a result of the increase in the secretion of adrenocorticotropic hormone (ACTH) and leads to Cushing's syndrome. Pharmacology has little effect and therefore surgery is the best option. Removal of the tumor results in an 80%-90% cure rate.
Areas of interest for surgical planning:
Neuroradiology
A neuroradiologist takes images of the defect so that the surgeon knows what to expect before surgery. This includes identifying the lesion or tumor, monitoring the effects of medical therapy, defining the spatial situation of the lesions, and verifying the removal of the lesions. The lesions associated with endoscopic endonasal surgery include: pituitary microadenomas, pituitary macroadenomas, Rathke's cleft cysts, pituitary inflammatory disease, pituitary metastases, empty sella, craniopharyngiomas, meningiomas, chiasmatic and hypothalamic gliomas, germinomas, tuber cinereum hamartomas, arachnoid cysts, and neurinomas of the trigeminal nerve.
Ophthalmology
Some suprasellar tumors invade the chiasmatic cistern, causing impaired vision. In these cases, an ophthalmologist maintains optic health by administering pre-surgical treatment, advising proper surgical techniques so that the optic nerve is not in danger, and managing post-surgery eye care. Common problems include: visual field defects, reduced visual acuity, visually evoked potential (VEP) abnormalities, color blindness, and eye motility impairment.
Surgical approaches to the anterior skull base:
Transnasal approach
The transnasal approach is used when the surgeon needs to access the roof of the nasal cavity, the clivus, or the odontoid. This approach is used to remove chordomas, chondrosarcomas, inflammatory lesions of the clivus, or metastases in the cervical spine region. The anterior or posterior septum is removed so that the surgeon can use both sides of the nose: one side can be used for a microscope and the other for a surgical instrument, or both sides can be used for surgical instruments.
Surgical approaches to the anterior skull base:
Transsphenoidal approach
This approach is the most common and useful technique of endoscopic endonasal surgery and was first described in 1910 concurrently by Harvey Cushing and Oskar Hirsch. This procedure allows the surgeon to access the sellar space, or sella turcica, the cradle in which the pituitary gland sits. Under normal circumstances, a surgeon would use this approach on a patient with a pituitary adenoma. The surgeon starts with the transnasal approach prior to using the transsphenoidal approach, which allows access to the sphenoid ostium and sphenoid sinus. The sphenoid ostium is located on the anterosuperior surface of the sphenoid sinus. The anterior wall of the sphenoid sinus and the sphenoid rostrum are then removed to give the surgeon a panoramic view of the surgical area. This procedure also requires the removal of the posterior septum to allow the use of both nostrils for tools during surgery. There are several triangles of blood vessels traversing this region; these very delicate areas can be deadly if injured. A surgeon uses stereotactic imaging and a micro-Doppler to visualize the surgical field.
Surgical approaches to the anterior skull base:
The angled endoscope makes it possible to go beyond the sella to the suprasellar (above the sella) region. This is done with the addition of four approaches. First, the transtuberculum and transplanum approaches are used to reach the suprasellar cistern. The lateral approach is then used to reach the medial cavernous sinus and petrous apex. Lastly, the inferior approach is used to reach the superior clivus. Endoscopic endonasal transclival approaches are often described according to which segment of the clivus is involved, with the clivus typically divided into three regions; depending on which segment is involved in the surgical approach, different neurovascular structures are placed at risk. The upper third lies inferior to the dorsum sellae and posterior clinoid processes and superior to the petrous apex; the middle third lies at the level of the petrous segments of the internal carotid artery (ICA); and the inferior third extends from the jugular tubercle to the foramen magnum. It is important that the Perneczky triangle is treated carefully: this triangle contains the optic nerves, cerebral arteries, the third cranial nerve, and the pituitary stalk. Damage to any of these could lead to a devastating post-surgical outcome.
Surgical approaches to the anterior skull base:
Transpterygoidal approach
The transpterygoidal approach enters through the posterior edge of the maxillary sinus ostium and the posterior wall of the maxillary sinus. This involves penetrating three separate sinus cavities: the ethmoid sinus, the sphenoid sinus, and the maxillary sinus. Surgeons use this method to reach the cavernous sinus, lateral sphenoid sinus, infratemporal fossa, pterygoid fossa, and the petrous apex. Surgery includes an uncinectomy (removal of the osteomeatal complex), a medial maxillectomy (removal of the maxilla), an ethmoidectomy (removal of ethmoid cells and/or the ethmoid bone), a sphenoidectomy (removal of part of the sphenoid), and removal of the maxillary sinus and the palatine bone. The posterior septum is also removed at the beginning to allow use of both nostrils.
Surgical approaches to the anterior skull base:
Transethmoidal approach
This approach creates a surgical corridor from the frontal sinus to the sphenoid sinus. This is done by complete removal of the ethmoid bone, which allows the surgeon to expose the roof of the ethmoid and the medial orbital wall. This procedure is often successful in the removal of small encephaloceles of the ethmoid, osteomas of the ethmoid sinus wall, or small olfactory groove meningiomas. However, with larger tumors or lesions, one of the other approaches listed above is required.
Different approaches to specific regions:
Approach to sellar region
For removal of a small tumor, it is accessed through one nostril. However, for larger tumors, access through both nostrils is required and the posterior nasal septum must be removed. The surgeon then slides the endoscope into the nasal choana until the sphenoid ostium is found. The mucosa around the ostium is cauterized for microadenomas and removed completely for macroadenomas. The endoscope then enters the ostium and meets the sphenoid rostrum, where the mucosa is retracted from this structure and removed from the sphenoid sinus to open the surgical pathway. At this point, imaging and Doppler devices are used to locate the important structures. The floor of the sella turcica is then opened with a high-speed drill, taking care not to pierce the dura mater. Once the dura is visible, it is cut with microscissors for precision. If the tumor is small, it can be removed en bloc, that is, in one piece. If the tumor is larger, the center of the tumor is removed first, then the back, then the sides, then the top, to make sure that the arachnoid membrane does not expand into the surgical view; this will happen if the top part of the tumor is taken out too early. After tumor removal, CSF leaks are tested for with fluorescent dye, and if there are no leaks, the patient is closed.
Different approaches to specific regions:
Approach to suprasellar region
This technique is the same as for the sellar region; however, the tuberculum sellae is drilled into instead of the sella. An opening is then made that extends halfway down the sella to expose the dura, and the intercavernous sinuses are exposed. When the optic chiasm, optic nerve, and pituitary gland are visible, the pituitary gland and optic chiasm are pushed apart to see the pituitary stalk. An ethmoidectomy is performed, the dura is cut, and the tumor is removed. These tumors are separated into two types:
Prechiasmal lesions: This tumor is closest to the dura. The tumor is decompressed by the surgeon. After decompression, the tumor is removed, taking care not to disrupt any optic nerve or major arteries.
Different approaches to specific regions:
Postchiasmal lesions: This time the pituitary stalk is in the front, because the tumor pushes it toward the area where the dura was opened. Removal then starts on both sides of the stalk, to preserve the connection between the pituitary and the hypothalamus, and above the pituitary gland, to protect the stalk. The tumor is carefully removed and the patient is closed up.
Different approaches to specific regions:
Skull base reconstruction
When there is a tumor, injury, or some other defect at the skull base, whether the surgeon used an endoscopic or open surgical method, the problem remains of separating the cranial cavity from the cavity between the sinuses and nose, to prevent cerebrospinal fluid (CSF) leakage through the opening, referred to as a defect. For this procedure, there are two ways to start: with a free graft repair or with a vascularized flap repair. Free grafts use secondary material such as cadaver flaps or titanium mesh to repair the skull base defect, which is very successful (95% without CSF leaks) with small CSF fistulas or small defects. Local or regional vascularized flaps are pieces of tissue relatively close to the surgery site that have been mostly freed up but are still attached to the original tissue. These flaps are then stretched or maneuvered onto the desired location. As technology advanced and larger defects could be fixed endoscopically, more and more failures and leaks started to occur with the free graft technique. The larger defects are associated with a wider dural removal and an exposure to high-flow CSF, which could be the reason for failure among the free grafts.
Pituitary gland surgery:
The endoscope has turned this from a very serious operation into a minimally invasive one performed through the nose. For instance, craniopharyngiomas (CRAs) are starting to be removed via this method. Dr. Paolo Cappabianca described the ideal CRA for this surgery as a median lesion with a solid parasellar component (beside the sellar region) or encasement of the main neurovascular structures, localized in the subchiasmatic (below the optic chiasm) and retrochiasmatic (behind the optic chiasm) regions. He also says that when these conditions are met, endoscopic endonasal surgery is a valid surgical option. In a case study on large adenomas, the doctors showed that out of 50 patients, 19 had complete tumor removal, 9 had near-complete removal, and 22 had partial removal; the partial removals came from tumors extending into more dangerous areas. They concluded that endoscopic endonasal surgery was a valid option if the patients used pharmacological therapy after surgery. Another study showed that with endoscopic endonasal surgery 90% of microadenomas could be removed, and that 2/3 of normal macroadenomas could be removed if they did not extend into the cavernous sinus, which would mean fragile blood vessel triangles would have to be dealt with; only 1/3 of those patients recovered. The endoscopic endonasal approach has been shown, even among young patients, to be superior to traditional microscopic transsphenoidal surgery.
Pituitary gland surgery:
3-D approach vs 2-D approach
The newer 3-D technique is gaining ground as the ideal way to do surgery because it gives the surgeon a better understanding of the spatial configuration of what they are seeing on a computer screen. Dr. Nelson Oyesiku at Emory University helped develop the 3-D technique. In an article he co-wrote, he and the other authors compared the effects of the 2-D technique versus the 3-D technique on patient outcomes. It showed that 3-D endoscopy gave the surgeon more depth of field and stereoscopic vision, and that the new technique did not show any significant changes in patient outcomes during or after surgery.
Endoscopic techniques vs open techniques:
In a 2013 case study, researchers compared the open and endoscopic techniques across 162 studies containing 5,701 patients. They looked at four tumor types: olfactory groove meningiomas (OGM), tuberculum sellae meningiomas (TSM), craniopharyngiomas (CRA), and clival chordomas (CHO), examining gross total resection, cerebrospinal fluid (CSF) leaks, neurological death, post-operative visual function, post-operative diabetes insipidus, and post-operative obesity. The study showed that there was a greater chance of CSF leaks with endoscopic endonasal surgery. Visual function improved more with endoscopic surgery for TSM, CRA, and CHO patients. Diabetes insipidus occurred more in open-procedure patients. The endoscopic patients showed a higher recurrence rate. Another case study, on CRAs, showed similar results, with CSF leaks being more of a problem in endoscopic patients; open-procedure patients showed a higher rate of post-operative seizures. Both of these studies still suggest that, despite the CSF leaks, the endoscopic technique is an appropriate and suitable surgical option. Otologic surgery, which is traditionally performed via an open approach using a microscope, may also be performed endoscopically; this is called endoscopic ear surgery (EES).
**Task Manager (Windows)**
Task Manager (Windows):
Task Manager, previously known as Windows Task Manager, is a task manager, system monitor, and startup manager included with Microsoft Windows systems. It provides information about computer performance and running software, including name of running processes, CPU and GPU load, commit charge, I/O details, logged-in users, and Windows services. Task Manager can also be used to set process priorities, processor affinity, start and stop services, and forcibly terminate processes.
Task Manager (Windows):
The program can be started in recent versions of Windows by pressing ⊞ Win+R and then typing in taskmgr.exe, by pressing Ctrl+Alt+Delete and clicking Task Manager, by pressing Ctrl+⇧ Shift+Esc, by using Windows Search in the Start Menu and typing taskmgr, by right-clicking on the Windows taskbar and selecting "Task Manager", by typing taskmgr in the File Explorer address bar, or by typing taskmgr in Command Prompt or Windows PowerShell.
Task Manager (Windows):
Task Manager was introduced in its current form with Windows NT 4.0. Prior versions of Windows NT, as well as Windows 3.x, include the Task List application, which is capable of listing currently running processes, killing them, or creating new processes. Windows 9x has a program known as Close Program which lists the programs currently running and offers options to close programs as well as to shut down the computer.
Functionality:
Since Windows 8, Task Manager has two views. The first time Task Manager is invoked by a user, it shows in a simplified summary mode (described in the user experience as Fewer Details). It can be switched to a more detailed mode by clicking More Details. This setting is remembered for that user on that machine. Since at least Windows 2000, the CPU usage can be displayed as a tray icon in the taskbar for a quick glance.
Summary mode In summary mode, Task Manager shows a list of currently running programs that have a main window. It has a "more details" hyperlink that activates a full-fledged Task Manager with several tabs.
Right-clicking any of the applications in the list allows switching to that application or ending the application's task. Issuing an end task causes a request for graceful exit to be sent to the application.
Processes and details The Processes tab shows a list of all running processes on the system. This list includes Windows services and processes from other accounts. The Delete key can also be used to terminate processes on the Processes tab. By default, the Processes tab shows the user account the process is running under and the amount of CPU and memory the process is currently consuming; more columns can be shown. The Processes tab divides the processes into three categories:
- Apps: programs with a main window
- Windows processes: components of Windows itself that do not have a main window, including services
- Background processes: programs that do not have a main window, including services, and that are not part of Windows itself
This tab shows the name of every main window and every service associated with each process. Both a graceful exit command and a termination command can be sent from this tab, depending on whether the command is sent to the process or its window.
The Details tab is a more basic version of the Processes tab, and acts similarly to the Processes tab in Windows 7 and earlier. It has a more rudimentary user experience but can perform some additional actions. Right-clicking a process in the list allows changing its priority, setting processor affinity (setting which CPU(s) the process can execute on), and ending the process. Choosing to End Process causes Windows to immediately kill the process. Choosing to "End Process Tree" causes Windows to immediately kill the process, as well as all processes directly or indirectly started by that process. Unlike choosing End Task from the Applications tab, when choosing to End Process the program is not given a warning nor a chance to clean up before ending. However, when a process is running under a security context different from that of the process which issued the call to TerminateProcess, use of the KILL command-line utility is required.
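The distinction between a graceful End Task and a forced End Process has a direct analogue on POSIX systems, where SIGTERM is a polite request and SIGKILL is unconditional. The following Python sketch illustrates the difference with a throwaway child process; it is an analogy, not the Windows TerminateProcess API itself:

```python
import signal
import subprocess

# Stand-in for a running application: a child process that sleeps for a minute.
proc = subprocess.Popen(["sleep", "60"])

# "End Task"-style request: SIGTERM asks the process to exit and would let
# it run any cleanup handlers it has installed.
proc.send_signal(signal.SIGTERM)
proc.wait(timeout=5)
print("after SIGTERM:", proc.returncode)   # negative value: killed by signal 15

# "End Process"-style kill: SIGKILL terminates immediately, with no chance
# to clean up, analogous to End Process on the Details tab.
proc2 = subprocess.Popen(["sleep", "60"])
proc2.kill()  # sends SIGKILL on POSIX
proc2.wait(timeout=5)
print("after SIGKILL:", proc2.returncode)  # killed by signal 9
```

On Windows, Python's `Popen.terminate()` and `Popen.kill()` both call TerminateProcess, so the graceful/forced distinction only exists where the operating system provides it.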
Performance The Performance tab shows overall statistics about the system's performance, most notably the overall amount of CPU usage and how much memory is being used. A chart of recent usage for both of these values is shown. Details about specific areas of memory are also shown.
There is an option to break the CPU usage graph into two sections: kernel mode time and user mode time. Many device drivers, and core parts of the operating system run in kernel mode, whereas user applications run in user mode. This option can be turned on by choosing Show kernel times from the View menu. When this option is turned on the CPU usage graph will show a green and a red area. The red area is the amount of time spent in kernel mode, and the green area shows the amount of time spent in user mode.
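The same kernel-time/user-time split is visible from inside a process. A small Python sketch for a POSIX system (the `/dev/null` writes exist only to force system calls):

```python
import os

# Burn user-mode CPU time with pure computation...
total = 0
for i in range(2_000_000):
    total += i * i

# ...and kernel-mode time with system calls (each open/write traps into the kernel).
for _ in range(5_000):
    with open("/dev/null", "wb") as f:
        f.write(b"x")

t = os.times()
print(f"user-mode CPU time:   {t.user:.3f} s")    # the "green" area of the graph
print(f"kernel-mode CPU time: {t.system:.3f} s")  # the "red" area of the graph
```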
The Performance tab also shows statistics relating to each of the network adapters present in the computer. By default, the adapter name, percentage of network utilization, link speed, and state of the network adapter are shown, along with a chart of recent activity.
App history The App history tab shows resource usage information about Universal Windows Platform apps. Windows controls the life cycle of these apps more tightly. This tab displays the data that Windows has collected about them, which can be viewed at a later time.
Startup The Startup tab manages software that starts with the Windows shell.
Users The Users tab shows all users that currently have a session on the computer. On server computers, there may be several users connected to the computer using Terminal Services (or the Fast User Switching service, on Windows XP). Users can be disconnected or logged off from this tab.
History:
Task Manager was originally an external side project developed at home by Microsoft developer David Plummer; encouraged by Dave Cutler and coworkers to make it part of the main product "build", he donated the project in 1995. The original task manager design featured a different Processes page with information being taken from the public Registry APIs rather than the private internal operating system metrics.
Windows 9x A Close Program dialog box appears when Ctrl+Alt+Delete is pressed in Windows 9x. Windows 9x also includes a program called Tasks (TASKMAN.EXE), located in the Windows directory; it is rudimentary and has fewer features. The System Monitor utility in Windows 9x contains process and network monitoring functionality similar to that of the Windows Task Manager. The Tasks program is also invoked by double-clicking on the desktop if the Explorer process is down.
Windows XP In Windows XP only, there is a "Shut Down" menu that provides access to Standby, Hibernate, Turn off, Restart, Log Off, and Switch User. This is because, by default in Windows XP, pressing Ctrl+Alt+Delete opens the Task Manager instead of opening a dialog that provides access to the Task Manager in addition to the options mentioned above.
On the Performance tab, the display of the CPU values was changed from a display mimicking an LED seven-segment display to a standard numeric value. This was done to accommodate non-Arabic numeral systems, such as Eastern Arabic numerals, which cannot be represented using a seven-segment display. Prior to Windows XP, process names longer than 15 characters in length are truncated; this problem is resolved in Windows XP. The Users tab was introduced in Windows XP.
Beginning with Windows XP, the Delete key is enabled on the Processes tab.
Windows Vista Windows Task Manager has been updated in Windows Vista with new features, including: A "Services" tab to view and modify currently running Windows services, start and stop any service, and enable/disable the User Account Control (UAC) file and registry virtualization of a process.
New "Image Path Name", "Command Line", and "Description" columns in the Processes tab. These show the full name and path of the executable image running in a process, any command-line parameters that were provided, and the image file's "Description" property.
New columns showing DEP and virtualization statuses. Virtualization status refers to UAC virtualization, under which file and registry references to certain system locations will be silently redirected to user-specific areas.
By right-clicking on any process, it is possible to directly open the Properties of the process's executable image file or of the directory (folder) containing the process.
The Task Manager has also been made less vulnerable to attack from remote sources or viruses, as it must be running with administrative rights to carry out certain tasks, such as logging off other connected users or sending messages. To verify administrative rights and unlock these privileges, the user must go to the "Processes" tab and click "Show processes from other users". Showing processes from all users requires all users, including administrators, to accept a UAC prompt, unless UAC is disabled. If the user is not an administrator, they must enter an administrator account's password when prompted to proceed, unless UAC is disabled, in which case the elevation does not occur.
By right-clicking on any running process, it is possible to create a dump. This feature can be useful if an application or a process is not responding, so that the dump file can be opened in a debugger to get more information.
The "Shut Down" menu containing Standby, Hibernate, Turn off, Restart, Log Off and Switch User has been removed. This was done due to low usage, and to reduce the overall complexity of Task Manager.
The Performance tab shows the system uptime.
Windows 8 In Windows 8, Windows Task Manager has been overhauled and the following changes were made: Starting in Windows 8, the tabs are hidden by default and Task Manager opens in summary mode (Fewer details). This view only shows applications and their associated processes. Prior to Windows 8, what is shown in the summary mode was shown in the tab named "Applications".
Resource utilization in the Processes tab is shown with various shades of yellow, with darker color representing heavier use.
The Performance tab is split into CPU, memory, disk, ethernet, and wireless network (if applicable) sections. There are overall graphs for each, and clicking on one reaches details for that particular resource. This includes consolidating information that previously appeared in the Networking tab from Windows XP through Windows 7.
The CPU tab no longer displays individual graphs for every logical processor on the system by default. It now can show data for each NUMA node.
The CPU tab now displays simple percentages on heat-mapping tiles to display utilization for systems with many (64 up to 640) logical processors. The color used for these heat maps is blue, with darker color again indicating heavier utilization.
Hovering the cursor over any logical processor's data now shows the NUMA node of that processor and its ID.
A new Startup tab has been added that lists running startup applications. Previously, MSConfig was in charge of this task or, in Windows Vista only, the "Software Explorer" section of Windows Defender. The Windows Defender that shipped built into Windows 7 lacked this option, and it was not present in the downloadable Microsoft Security Essentials either.
The Processes tab now lists application names, application status, and overall usage data for CPU, memory, hard disk, and network resources for each process.
A new App History tab is introduced.
The application status can be changed to suspended.
The normal process information found in the older Task Manager can be found in the new Details tab.
Windows 10 The Processes tab is divided into categories.
GPU information is displayed in the Performance tab, if available.
Windows 11 Windows 11 22H2 has introduced a redesigned Task Manager.
Weakness:
Task Manager is a common target of computer viruses and other forms of malware; typically, malware will close Task Manager as soon as it is started, so as to hide itself from users. Some malware running with administrator rights will also disable Task Manager. Variants of the Zotob and Spybot worms have used this technique, for example. Using Group Policy, it is possible to disable the Task Manager, and many types of malware enable this policy setting in the registry. Rootkits can prevent themselves from being listed in the Task Manager, thereby preventing their detection and termination using it.
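The Group Policy setting mentioned above is backed by a per-user registry value; when it is set, attempts to start Task Manager fail with a policy message. From an elevated command prompt, the value can be set (1) or cleared (0) as follows (shown purely for illustration; modifying it affects the current user's policy):

```
reg add HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\System /v DisableTaskMgr /t REG_DWORD /d 1 /f
reg add HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\System /v DisableTaskMgr /t REG_DWORD /d 0 /f
```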
**Short-term effects of alcohol consumption**
Short-term effects of alcohol consumption:
The short-term effects of alcohol consumption range from a decrease in anxiety and motor skills and euphoria at lower doses to intoxication (drunkenness), stupor, unconsciousness, anterograde amnesia (memory "blackouts"), and central nervous system depression at higher doses. Cell membranes are highly permeable to alcohol, so once it is in the bloodstream, it can diffuse into nearly every cell in the body.
The concentration of alcohol in blood is measured via blood alcohol content (BAC). The amount and circumstances of consumption play a large role in determining the extent of intoxication; for example, eating a heavy meal before alcohol consumption causes alcohol to absorb more slowly. The amount of alcohol consumed largely determines the extent of hangovers, although hydration also plays a role. After excessive drinking, stupor and unconsciousness can both occur. Extreme levels of consumption can cause alcohol poisoning and death; a concentration in the bloodstream of 0.36% will kill half of those affected. Alcohol may also cause death indirectly, by asphyxiation from vomiting.
Alcohol can greatly exacerbate sleep problems. During abstinence, residual disruptions in sleep regularity and sleep patterns are the greatest predictors of relapse.
Effects by dosage:
The definition of a unit of alcohol ranges between 8 and 14 grams of pure alcohol/ethanol depending on the country. There is no agreement on definitions of a low, moderate or high dose of alcohol either. The U.S. National Institute on Alcohol Abuse and Alcoholism defines a moderate dose as alcohol intake up to two standard drinks or 28 grams for men and one standard drink or 14 grams for women. The immediate effect of alcohol depends on the drinker's blood alcohol concentration (BAC). BAC can be different for each person depending on their age, sex, and pre-existing health conditions, even if they drink the same amount of alcohol. Different BACs have different effects. The following lists describe the common effects of alcohol on the body depending on the BAC. However, tolerance varies considerably between individuals, as does individual response to a given dosage; the effects of alcohol differ widely between people. Hence, in this context, BAC percentages are just estimates used for illustrative purposes.
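A rough sense of how intake translates into BAC can be had from the classic Widmark formula, which is not given in this article but is the standard back-of-the-envelope estimator; the distribution ratios (~0.68 for men, ~0.55 for women) and the ~0.015 %/hour elimination rate are textbook approximations, not exact physiology:

```python
def estimate_bac(alcohol_grams, body_weight_kg, sex, hours=0.0):
    """Widmark estimate of blood alcohol concentration, in percent.

    r is the Widmark distribution ratio, reflecting the sex difference in
    body water content discussed later in this article.
    """
    r = 0.68 if sex == "male" else 0.55
    raw = alcohol_grams / (body_weight_kg * 1000 * r) * 100
    return max(raw - 0.015 * hours, 0.0)  # ~0.015 %/hour zero-order elimination

# Two U.S. standard drinks (~28 g of ethanol) for an 80 kg man, at peak:
print(round(estimate_bac(28, 80, "male"), 3))  # ~0.051 %
```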
Grand Rapids Dip Studies suggest that a BAC of 0.01–0.04% slightly lowers risk of being in a vehicle accident when compared to a BAC of 0.00%, an effect referred to as the Grand Rapids Effect or Grand Rapids Dip. Some literature has attributed the effect to erroneous data. Other studies have proposed that it could be due to drivers exerting extra caution at low BAC levels. Other proposed explanations are that the effect is at least in part the blocking effect of ethanol excitotoxicity and the effect of alcohol in essential tremor and other movement disorders, but this remains speculative.
Moderate doses Ethanol inhibits the ability of glutamate to open the cation channel associated with the N-methyl-D-aspartate (NMDA) subtype of glutamate receptors. Stimulated areas include the cortex, hippocampus, and nucleus accumbens, which are all responsible for both thinking and pleasure seeking. Another of alcohol's agreeable effects is body relaxation, which is possibly caused by neurons transmitting electrical signals in an alpha-wave pattern; such waves are actually observed (with the aid of EEGs) whenever the body is relaxed. Short-term effects of alcohol include the risk of injuries, violence, and fetal damage. Alcohol has also been linked with lowered inhibitions, although it is unclear to what degree this is chemical or psychological, as studies with placebos can often duplicate the social effects of alcohol at low or moderate doses. Some studies have suggested that intoxicated people have much greater control over their behavior than is generally recognized, though they have a reduced ability to evaluate the consequences of their behavior. Behavioral changes associated with drunkenness are, to some degree, contextual. Areas of the brain that are responsible for planning and motor learning are sharpened. A related effect, which is caused by even low levels of alcohol, is the tendency for people to become more animated in speech and movement. This is caused by increased metabolism in areas of the brain associated with movement, such as the nigrostriatal pathway. This causes reward systems in the brain to become more active, which may induce certain individuals to behave in an uncharacteristically loud and cheerful manner.
Alcohol has been known to mitigate the production of antidiuretic hormone, a hormone that acts on the kidney to favor water reabsorption during filtration. This occurs because alcohol confuses osmoreceptors in the hypothalamus, which relay osmotic pressure information to the posterior pituitary, the site of antidiuretic hormone release. Alcohol causes the osmoreceptors to signal that there is low osmotic pressure in the blood, which triggers an inhibition of the antidiuretic hormone. As a consequence, the kidneys are no longer able to reabsorb as much water as they should, creating excessive volumes of urine and subsequent overall dehydration.
Excessive doses Acute alcohol intoxication through excessive doses in general causes short- or long-term health effects. NMDA receptors become unresponsive, slowing areas of the brain for which they are responsible. Contributing to this effect is the activity that alcohol induces in the gamma-aminobutyric acid (GABA) system. The GABA system is known to inhibit activity in the brain. GABA could also be responsible for causing the memory impairment that many people experience. It has been asserted that GABA signals interfere with both the registration and the consolidation stages of memory formation. As the GABA system is found in the hippocampus (among other areas in the CNS), which is thought to play a large role in memory formation, this is thought to be possible.
Anterograde amnesia, colloquially referred to as "blacking out", is another symptom of heavy drinking. This is the loss of memory during and after an episode of drinking.
Another classic finding of alcohol intoxication is ataxia, in its appendicular, gait, and truncal forms. Appendicular ataxia results in jerky, uncoordinated movements of the limbs, as if each muscle were working independently from the others. Truncal ataxia results in postural instability; gait instability is manifested as a disorderly, wide-based gait with inconsistent foot positioning. Ataxia causes the observation that drunk people are clumsy, sway back and forth, and often fall down. It is presumed to be due to alcohol's effect on the cerebellum.
Mellanby effect The Mellanby effect is the phenomenon that the behavioral impairment due to alcohol is less, at the same BAC, when the BAC is decreasing than when it is increasing. In other words, at a given point on the upward slope of BAC (while drinking), impairment is greater than at a given point on the downward slope (after drinking), even though the BAC is the same at the two points. This effect was confirmed in a 2017 meta-analysis.
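The Mellanby effect can be illustrated numerically with a deliberately simplified model: BAC rises linearly while drinking and falls linearly afterwards, and acute tolerance is modeled as impairment decaying over the episode. Both the trajectory and the tolerance term are invented for illustration, not fitted to data:

```python
def bac_at(t, peak_time=2.0, peak_bac=0.10, fall_rate=0.015):
    """Toy BAC trajectory: linear rise to a peak, then linear elimination."""
    if t <= peak_time:
        return peak_bac * t / peak_time
    return max(peak_bac - fall_rate * (t - peak_time), 0.0)

def impairment(t, tolerance_per_hour=0.08):
    """Impairment at hour t: the same BAC produces less effect later on."""
    return bac_at(t) * max(1.0 - tolerance_per_hour * t, 0.0)

# Two moments with the same BAC (0.05%): one on the rising slope,
# one on the falling slope.
t_up = 1.0
t_down = 2.0 + 0.05 / 0.015

print(f"rising:  BAC {bac_at(t_up):.3f}, impairment {impairment(t_up):.4f}")
print(f"falling: BAC {bac_at(t_down):.3f}, impairment {impairment(t_down):.4f}")
```

Despite equal BAC at the two times, the modeled impairment is lower on the falling slope, which is the pattern the Mellanby effect describes.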
Effect on different population:
Based on sex Alcohol affects males and females differently because of differences in body fat percentage and water content. On average, for equal body weight, women have a higher body fat percentage than men. Since alcohol is absorbed into body water content and men have more water in their bodies than women, women will reach a higher blood alcohol concentration from the same amount of alcohol consumption. Women are also thought to have less of the alcohol dehydrogenase (ADH) enzyme, which is required to break down alcohol. That is why the drinking guidelines are different for men and women.
Based on genetic variation Alcohol metabolism depends on the enzymes alcohol dehydrogenase (ADH) and aldehyde dehydrogenase (ALDH). Genetic variants of the genes coding for these enzymes can affect the rate of alcohol metabolism. Some ADH gene variants lead to higher metabolic activity, resulting in the accumulation of acetaldehyde, whereas a null allele in ALDH2 causes an accumulation of acetaldehyde by preventing its catabolism to acetate. The genetic variants of these enzymes can explain the differences in alcohol metabolism between populations. Different isoforms of ADH have shown protection against alcoholic disorders in Han Chinese and Japanese (due to the presence of ADH1B*2) and in Africans (due to the presence of ADH1B*3). On the other hand, the presence of ALDH2*2 (a variant of the ALDH gene) in East Asians can cause blood acetaldehyde levels of 30 to 75 μM or higher, which is more than 10 times the normal level. The excess blood acetaldehyde produces facial flushing, nausea, rapid heartbeat, and other adverse effects. The presence of these alleles causes rapid conversion of alcohol to acetaldehyde, which can be toxic in large amounts, so East Asians and Africans feel the adverse effects of alcohol early and stop drinking. For Caucasians, ADH1B*1 is the most prevalent allele; it causes slower conversion of alcohol to acetaldehyde and makes them more vulnerable to alcohol use disorders.
Allergic reaction-like symptoms:
Humans metabolize ethanol primarily through NAD+-dependent alcohol dehydrogenase (ADH) class I enzymes (i.e. ADH1A, ADH1B, and ADH1C) to acetaldehyde, and then metabolize acetaldehyde primarily by NAD+-dependent aldehyde dehydrogenase 2 (ALDH2) to acetic acid. Eastern Asians reportedly have a deficiency in acetaldehyde metabolism in a surprisingly high percentage (approaching 50%) of their populations. The issue has been most thoroughly investigated in native Japanese, in whom a single-nucleotide polymorphism (SNP) variant allele of the ALDH2 gene was found; the variant allele encodes lysine (lys) instead of glutamic acid (glu) at amino acid 487, which renders the enzyme essentially inactive in metabolizing acetaldehyde to acetic acid. The variant allele is variously termed glu487lys, ALDH2*2, and ALDH2*504lys. In the overall Japanese population, about 57% of individuals are homozygous for the normal allele (sometimes termed ALDH2*1), 40% are heterozygous for glu487lys, and 3% are homozygous for glu487lys. Since ALDH2 assembles and functions as a tetramer, and since ALDH2 tetramers containing one or more glu487lys proteins are also essentially inactive (i.e. the variant allele behaves as a dominant negative), individuals homozygous for glu487lys have undetectable ALDH2 activity, while heterozygous individuals have little. In consequence, Japanese individuals homozygous or, to only a slightly lesser extent, heterozygous for glu487lys metabolize ethanol to acetaldehyde normally but metabolize acetaldehyde poorly, and are susceptible to a set of adverse responses to the ingestion of, and sometimes even the fumes from, ethanol and ethanol-containing beverages. These responses include the transient accumulation of acetaldehyde in blood and tissues; facial flushing (i.e. the "oriental flushing syndrome" or alcohol flush reaction); urticaria; systemic dermatitis; and alcohol-induced respiratory reactions (i.e. rhinitis and, primarily in patients with a history of asthma, mild to moderate bronchoconstriction exacerbations of their asthmatic disease). These allergic reaction-like symptoms, which typically occur within 30–60 minutes of ingesting alcoholic beverages, do not appear to reflect the operation of classical IgE- or T cell-related allergen-induced reactions, but rather are due, at least in large part, to the action of acetaldehyde in stimulating tissues to release histamine, the probable evoker of these symptoms.
The percentages of glu487lys heterozygous plus homozygous genotypes are about 35% in native Caboclo of Brazil, 30% in Chinese, 28% in Koreans, 11% in Thai people, 7% in Malaysians, 3% in natives of India, 3% in Hungarians, and 1% in Filipinos; percentages are essentially 0 in individuals of native African descent, Caucasians of Western European descent, Turks, Australian Aborigines, Australians of Western European descent, Swedish Lapps, and Alaskan Eskimos. Nonetheless, the prevalence of ethanol-induced allergic symptoms in populations with zero or low levels of glu487lys genotypes commonly ranges above 5%. These "ethanol reactors" may have other gene-based abnormalities that cause the accumulation of acetaldehyde following the ingestion of ethanol or ethanol-containing beverages. For example, the surveyed incidence of self-reported ethanol-induced flushing reactions in Scandinavians living in Copenhagen, as well as Australians of European descent, is about 16% in individuals homozygous for the "normal" ADH1B gene but runs to ~23% in individuals with the ADH1B-Arg48His SNP variant; in vitro, this variant metabolizes ethanol rapidly and, it is proposed, may in humans form acetaldehyde at levels that exceed the capacity of ALDH2 to metabolize.
Notwithstanding such considerations, experts suggest that the large proportion of alcoholic beverage-induced allergic-like symptoms in populations with a low incidence of the glu487lys genotype reflects true allergic reactions to natural and/or contaminating allergens, particularly those in wines and, to a lesser extent, beers.
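The Japanese genotype figures above imply an allele frequency for glu487lys that can be checked with basic population genetics; a quick sketch (the Hardy-Weinberg comparison is an illustration, not a claim made in the article):

```python
# Observed ALDH2 genotype frequencies in the Japanese population (from above).
hom_normal, het, hom_variant = 0.57, 0.40, 0.03

# Every heterozygote carries one glu487lys copy, every variant homozygote two,
# so the variant allele frequency is:
q = het / 2 + hom_variant
p = 1.0 - q
print(f"glu487lys allele frequency: {q:.2f}")                   # 0.23

# Share of the population with reduced ALDH2 activity (het + hom variant):
print(f"reduced-activity genotypes: {het + hom_variant:.2f}")   # 0.43

# Hardy-Weinberg expectations, for comparison with the observed figures:
print(f"expected: hom normal {p*p:.3f}, het {2*p*q:.3f}, hom variant {q*q:.3f}")
```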
Pathophysiology:
At low or moderate doses, alcohol acts primarily as a positive allosteric modulator of GABAA. Alcohol binds to several different subtypes of GABAA, but not to others. The main subtypes responsible for the subjective effects of alcohol are the α1β3γ2, α5β3γ2, α4β3δ and α6β3δ subtypes, although other subtypes such as α2β3γ2 and α3β3γ2 are also affected. Activation of these receptors causes most of the effects of alcohol, such as relaxation and relief from anxiety, sedation, ataxia, increased appetite, and a lowering of inhibitions that can cause a tendency toward violence in some people. Alcohol has a powerful effect on glutamate as well. Alcohol decreases glutamate's ability to bind with NMDA and acts as an antagonist of the NMDA receptor, which plays a critical role in LTP by allowing Ca2+ to enter the cell. These inhibitory effects are thought to be responsible for the "memory blanks" that can occur at blood alcohol levels as low as 0.03%. In addition, reduced glutamate release in the dorsal hippocampus has been linked to spatial memory loss. Chronic alcohol users experience an upregulation of NMDA receptors because the brain is attempting to reestablish homeostasis. When a chronic alcohol user stops drinking for more than 10 hours, apoptosis can occur due to excitotoxicity. The seizures experienced during alcohol abstinence are thought to be a result of this NMDA upregulation. Alteration of NMDA receptor numbers in chronic alcoholics is likely to be responsible for some of the symptoms seen in delirium tremens during severe alcohol withdrawal, such as delirium and hallucinations. Other targets, such as sodium channels, can also be affected by high doses of alcohol, and alteration in the numbers of these channels in chronic alcoholics is likely to be responsible for other effects, such as cardiac arrhythmia.
Other targets that are affected by alcohol include cannabinoid, opioid and dopamine receptors, although it is unclear whether alcohol affects these directly or whether they are affected by downstream consequences of the GABA/NMDA effects. People with a family history of alcoholism may exhibit genetic differences in the response of their NMDA glutamate receptors as well as in the ratios of GABAA subtypes in their brain. Alcohol inhibits sodium-potassium pumps in the cerebellum, and this is likely how it corrupts cerebellar computation and body coordination. Contrary to popular belief, research suggests that acute exposure to alcohol is not neurotoxic in adults and actually prevents NMDA antagonist-induced neurotoxicity.
Alcohol and sleep:
Low doses of alcohol (one 360 mL (13 imp fl oz; 12 US fl oz) beer) appear to increase total sleep time and reduce awakening during the night. The sleep-promoting benefits of alcohol dissipate at moderate and higher doses. Previous experience with alcohol also influences the extent to which alcohol positively or negatively affects sleep. Under free-choice conditions, in which subjects chose between drinking alcohol or water, inexperienced drinkers were sedated while experienced drinkers were stimulated following alcohol consumption. In insomniacs, moderate doses of alcohol improve sleep maintenance. Moderate alcohol consumption 30–60 minutes before sleep, although it decreases sleep-onset latency, disrupts sleep architecture. Rebound effects occur once the alcohol has been largely metabolized, causing late-night disruptions in sleep maintenance. Under conditions of moderate alcohol consumption, where blood alcohol levels average 0.06–0.08 percent and decrease 0.01–0.02 percent per hour, an alcohol clearance rate of 4–5 hours would coincide with disruptions in sleep maintenance in the second half of an 8-hour sleep episode. In terms of sleep architecture, moderate doses of alcohol facilitate "rebounds" in rapid eye movement (REM) sleep: following suppression of REM and stage 1 sleep in the first half of an 8-hour sleep episode, REM and stage 1 sleep increase well beyond baseline in the second half. Moderate doses of alcohol also very quickly increase slow wave sleep (SWS) in the first half of an 8-hour sleep episode. Enhancements in REM sleep and SWS following moderate alcohol consumption are mediated by reductions in glutamatergic activity by adenosine in the central nervous system. In addition, tolerance to changes in sleep maintenance and sleep architecture develops within three days of alcohol consumption before bedtime.
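The 4–5 hour clearance figure follows directly from the numbers quoted: under zero-order (linear) elimination, clearance time is simply peak BAC divided by the elimination rate. A one-line check using the midpoints of the quoted ranges:

```python
def clearance_hours(peak_bac_percent, elimination_percent_per_hour):
    """Hours until BAC reaches zero under linear (zero-order) elimination."""
    return peak_bac_percent / elimination_percent_per_hour

# Midpoints of the ranges above: peak 0.06-0.08 %, falling 0.01-0.02 %/hour.
print(f"{clearance_hours(0.07, 0.015):.1f} hours")  # within the quoted 4-5 hours
```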
Alcohol consumption and balance:
Alcohol can affect balance by altering the viscosity of the endolymph within the otolithic membrane, the fluid inside the semicircular canals of the inner ear. The endolymph surrounds the ampullary cupula, which contains hair cells within the semicircular canals. When the head is tilted, the endolymph flows and moves the cupula. The hair cells then bend and send signals to the brain indicating the direction in which the head is tilted. By changing the viscosity of the endolymph to become less dense when alcohol enters the system, the hair cells can move more easily within the ear, which sends the signal to the brain and results in exaggerated and overcompensated movements of the body. This can also result in vertigo, or "the spins".
Alcohol and postprandial triglycerides:
Alcohol taken with a meal increases and prolongs postprandial triglyceridemia. This is true despite the observation that the relationship between alcohol consumption and triglyceridemia is "J-shaped": fasting triglyceride concentration is lower in people who drink 10–20 g of alcohol a day than in people who either abstain from alcohol or drink more per day.
Alcohol and blood pressure:
A systematic review reported that alcohol has a bi-phasic effect on blood pressure. Both systolic and diastolic blood pressure fell when measured a couple of hours after alcohol consumption. However, longer-term measurement (20 hours on average) showed a modest but statistically significant increase in blood pressure: a 2.7 mmHg rise in systolic blood pressure and a 1.4 mmHg rise in diastolic blood pressure. A Cochrane systematic review based only on randomized controlled trials, investigating the acute effect of alcohol consumption in healthy and hypertensive adults, is in progress.
Alcohol and pain:
A 2015 literature review found that alcohol administration confers acute pain-inhibitory effects. It also found that the relationship between alcohol consumption and pain is curvilinear: moderate alcohol use was associated with positive pain-related outcomes, and heavy alcohol use with negative pain-related outcomes.
**The AfriPop Project**
The AfriPop Project:
The AfriPop Project was a non-profit project, primarily funded by the Fondation Philippe Wiener - Maurice Anspach Foundation, that was merged into the WorldPop Project in October 2013. AfriPop was a collaboration between the University of Florida, United States; the Université libre de Bruxelles, Belgium; and the Malaria Public Health & Epidemiology Group, Centre for Geographic Medicine, Kenya. The project was led by Dr. Andrew Tatem and Dr. Catherine Linard.
High-resolution, contemporary data on human population distributions are a prerequisite for the accurate measurement of the impacts of population growth, for monitoring changes, and for planning interventions. The AfriPop project was initiated in July 2009 with the aim of producing detailed and freely available population distribution maps for the whole of sub-Saharan Africa.
The AfriPop team assembled a unique spatial database of linked information on contemporary census data across Africa, satellite-imagery derived settlement maps and land cover information. Novel approaches to extracting detailed spatial data on settlements from satellite imagery were combined with contemporary detailed census data and land cover to map population densities across sub-Saharan Africa at unprecedented levels of detail. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Woolsey convention**
Woolsey convention:
This article describes the contract bridge bidding convention. Devised by Kit Woolsey, the convention is a defense against an opponent's one-notrump opening, used especially at matchpoints. The convention has similarities to Multi-Landy.
Abuses:
Common abuses as described by Kit Woolsey include: 3-1=4-5 distributional hands in the balancing seat regularly double, even with no 4-card major suit.
Strong hands, with 19 high card points plus, start with a double and then rebid 2 Notrump (or double) to try to expose a psychic bid.
Good 4-4=4-1 distributional hands with a stiff minor suit can start with 2♣.
Single-suited minor hands often start with double, hoping to be able to play at the two-level. These hands will pass a 2♦ asking bid. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Mobile enterprise asset management**
Mobile enterprise asset management:
Mobile enterprise asset management (or mobile EAM) refers to the mobile extension of work processes for maintenance, operations, and repair of corporate or public-entity physical assets, equipment, buildings, and grounds. It involves the management of work orders (such as service requests) via communication between a mobilized workforce and computer systems to maintain an organization's facilities, structures, and other assets.
Mobile enterprise asset management:
The idea behind mobile EAM as a business practice is that it enables remote workers access to data from the organization's computer application software for enterprise asset management (commonly referred to as an enterprise system, EAM system, or backend system), typically using a handheld or another mobile computer. This is to distinguish from the term mobile asset management, which refers more broadly to the actual tools, instruments, and containers organizations use to track and secure equipment and other such assets that are portable or on the move. In the mobile EAM process, the organization eliminates a need for paper forms or other data reporting and communication methods (push-to-talk and radio) to move work order information to and from the point where the work is being performed.
Mobile enterprise asset management:
While enterprise asset management encompasses the management of an organization's entire asset portfolio across processes including equipment addition/reduction, replacement, over-hauling, redundancy setup, and maintenance budgets, mobile enterprise asset management is focused, by definition, strictly on the wireless automation of asset management data for such processes.
Mobile EAM technology:
When viewed and used on a handheld device, mobile work order applications provide details such as location, stepwise job plans, safety alerts, lock-outs, and prior work history on the asset, giving a maintenance technician or other remote worker more detailed asset information as well as the ability to transmit work data to the organization's enterprise system when completed – through a wireless network, docking station or another synchronization method.
Mobile EAM technology:
Using computer software to achieve standard mobile EAM practices, organizations often report such advantages as an increase in timely, accurate data flow between their remote workers and central management such as planners and schedulers, which thereby improves capital and labor allocation decision processes (including an ability to schedule more planned/preventive maintenance work). With the proliferation of smartphones and other mobile computing technologies, asset managers can expect an ever more tech-savvy workforce, lower costs in mobile devices, and a higher propensity for feature-rich, workflow-specific mobile applications.
Challenges:
Nearly all challenges in mobile EAM practices can be traced to two factors: time and labor resources (including IT or information technology management) and investment costs.
Challenges:
Developing and implementing a mobile application architecture on the enterprise scale is no easy undertaking, as mobile applications face a diversity of device operating systems, output media (voice and data), and connectivity methods. This contrasts with the PC (personal computer) environment, where in most cases software requires relatively few, if major, updates and has lower upfront costs. Organizations looking to implement mobile EAM applications often seek the help of technology consulting firms and spend months researching, planning, and selecting an implementation strategy.
Industries using mobile EAM:
The use of mobile enterprise application platforms (MEAPs), designed around service-oriented architecture principles for multiple systems integration and custom modification, and other forms of wireless computing technology for mobile EAM solutions is growing rapidly, particularly in industries where physical assets form a significant cost proportion of organizations' total assets. These industries can include facilities management, utilities, life sciences, government organizations, manufacturing, the oil and gas industry, and the transportation industry. In such high-value asset scenarios, the asset lifecycle improvements introduced by the increase in the enterprise data flow of mobile EAM processes can bring significant savings, particularly when part of an enterprise-wide capital and labor management strategy that integrates multiple systems in an enterprise architecture (EAM system, mobile EAM application, labor dispatch/scheduling software, GIS, etc.).
Market growth:
In a 2009 study, market analyst Gartner, Inc. forecasted, “For the MEAP and packaged mobile application market…we now expect market growth annually of 15 to 20% through 2013.” Gartner attributes this anticipated growth to enterprises’ increasing willingness (and ability) to extend decision-relevant information to employees, who are themselves increasingly mobile.
For EAM practices as a whole, this means that an increasing proportion of organizations in capital-intensive industry sectors (such as those above) are adopting mobile technology as an integral part of their enterprise asset management strategy – corresponding with an enterprise-wide emphasis on whole-life planning, life cycle costing, planned and proactive maintenance, and other industry best practices. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hop diffusion**
Hop diffusion:
Hop diffusion is the non-Brownian diffusion of proteins and lipid molecules within the plasma membrane. Hop diffusion occurs because the cell's plasma membrane is not a continuous, uniform fluid. According to the fences-and-pickets model, the plasma membrane is compartmentalized by actin-based membrane-skeleton "fences", which act when the cytoplasmic domains of membrane proteins collide with the actin-based membrane skeleton, and by anchored transmembrane protein "pickets". Because of these obstacles, membrane proteins undergo temporary confinement within 30–700 nm compartments, with infrequent intercompartmental hops. Hops between adjacent compartments are thought to occur when: thermal fluctuations of the membrane create spaces between the plasma membrane and the cytoskeleton large enough to allow the passage of integral membrane proteins; an actin filament transiently breaks; or a membrane molecule has sufficient kinetic energy to cross the barrier. While simple Brownian diffusion is isotropic and homogeneous, hop diffusion is more complex, combining free diffusion inside membrane compartments with infrequent intercompartmental transitions (hops). The complexity of this type of anomalous diffusion is further enhanced by an inherently broad distribution of compartment sizes.
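The confinement-plus-rare-hops picture can be sketched as a one-dimensional random walk with compartment barriers. This is an illustrative toy model, not taken from the source: the compartment size, step size, and hop probability below are arbitrary assumptions.

```python
import random

def hop_diffusion_1d(steps, dx=1.0, compartment=50.0, p_hop=0.02, seed=1):
    """Random walk with compartment barriers: a step that would cross a
    compartment boundary succeeds only with probability p_hop (a "fence"
    reflects it otherwise). All parameter values are illustrative."""
    rng = random.Random(seed)
    x = compartment / 2.0  # start in the middle of a compartment
    hops = 0
    for _ in range(steps):
        trial = x + rng.choice((-dx, dx))
        # Does the trial position fall in a different compartment?
        if int(trial // compartment) != int(x // compartment):
            if rng.random() < p_hop:  # rare intercompartmental hop
                x = trial
                hops += 1
            # otherwise the step is blocked: the walker stays put
        else:
            x = trial  # free (Brownian-like) diffusion within the compartment
    return x, hops
```

With `p_hop = 0` the walker is permanently confined to its starting compartment; small positive values reproduce the "free diffusion plus infrequent hops" behavior described above.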
**HLA-B55**
HLA-B55:
HLA-B55 (B55) is an HLA-B serotype. B55 is a split antigen from the B22 broad antigen, sister serotypes are B54 and B56. The serotype identifies the more common HLA-B*55 gene products. (For terminology help see: HLA-serotype tutorial)
Serotype:
B*55:03 is one of the four B alleles that reacts with neither Bw4 nor Bw6. The others are B*18:06, B*46:01, and B*73:01. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Game classification**
Game classification:
Game classification is the classification of games, forming a game taxonomy. Many different methods of classifying games exist.
Physical education:
There are four basic approaches to classifying the games used in physical education: Game categories This is a classification scheme proposed by Nicols, who classifies games according to three major categories: the game's physical requirements (i.e. what the game requires in addition to the players — equipment, size and nature of playing field, and so forth), the structure of the game (i.e. number of players, groupings of players, strategies, and so forth), and the game's personal requirements (i.e. what the game requires of the player — motor skills, fitness levels, numeracy, social skills, and so forth).
Physical education:
Games for understanding This is a classification scheme proposed by Werner and Alomond that classifies games according to their strategies. It divides games into target games (e.g. archery); net or wall games (e.g. tennis); striking and field games (e.g. cricket); and invasion games (e.g. football).
Physical education:
Core content This is a classification scheme proposed by Allison and Barrett that categorizes games by their form (i.e. whether they are novel games proposed by the teacher or children, or whether they are existing games already widely played), by the movement skills that they require, by the "movement concepts" and game tactics that they require, and by the educational results of the game.
Physical education:
Developmental games This is a classification scheme proposed by Gallahue and Celand that classifies games into four developmental levels, as part of an overall educational strategy of applying, reinforcing, and implementing movement and sports skills. The levels, in ascending order, are "low-level", "complex", "lead-up", and "official sports".
Video games:
There are several methods of classifying video games, alongside the system of video game genres commonly used by retailers and player communities.
Video games:
Solomon puts forward a "commonsense, but broad" classification of video games, into simulations (the game reflects reality), abstract games (the game itself is the focus of interest), and sports. In addition to these, he points out that games (in general, not just video games) fall into classes according to the number of players. Games with two players encompass board games such as chess. Games with multiple players encompass card games such as poker, and marketed family games such as Monopoly and Scrabble. Puzzles and Solitaire are one-player games. He also includes zero-player games, such as Conway's Game of Life, although acknowledging that others argue that such games do not constitute a game, because they lack any element of competition. He asserts that such zero-player games are nonetheless games because they are used recreationally.
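As a concrete illustration of a zero-player game, here is a minimal sketch of one generation of Conway's Game of Life: once seeded, the pattern evolves with no player input at all. The set-of-live-cells representation is a common implementation choice, not one prescribed by the text.

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life by one generation.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count, for every candidate cell, how many live neighbours it has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or has 2 live neighbours and is already alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 0), (1, 0), (2, 0)}
```

Applying `life_step` twice to the blinker returns the original pattern: the "game" plays itself, with no competitive element and no player moves.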
Video games:
Another method, developed by Wright, divides games into the following categories: educational or informative, sports, sensorimotor (e.g. action games, video games, fighting and shoot 'em up games, and driving and racing simulators), other vehicular simulators (not covered by driving and racing), strategy games (e.g. adventure games, war games, strategic simulations, role-playing games, and puzzles), and "other". A third method, developed by Funk and Buchman, and refined by others, classifies electronic games into six categories: general entertainment (no fighting or destruction), educational (learning or problem-solving), fantasy violence (cartoon characters that must fight or destroy things, and risk being killed, to achieve a goal), human violence (like fantasy violence, but with human rather than cartoon characters), nonviolent sports (no fighting or destruction), and sports violence (fighting or destruction involved).
Classification by causes of uncertainty:
Games can be categorized by the source of the uncertainty confronting the players: chance; combinatorics (a large number of possible sequences of moves); and different states of information among the players (each player knows only his own cards). Based on these three causes, three classes of games arise: combinatorial games, games of bluffing and strategy, and games of chance.
Game theory:
Game theory classifies games according to several criteria: whether a game is a symmetric game or an asymmetric one, what a game's "sum" is (zero-sum, constant sum, and so forth), whether a game is a sequential game or a simultaneous one, whether a game comprises perfect information or imperfect information, and whether a game is determinate. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Effective torque**
Effective torque:
Effective torque, often referred to as wheel torque or torque to the wheels, is primarily associated with automotive tuning. Torque can be measured using a dynamometer. Common units in automotive applications include ft·lbf and N·m. For more on units see: Foot-pound force. The formula for effective torque to the wheels is: Tw = Te × Ntf × ηtf, where Ntf = Nt × Nf and ηtf = ηt × ηf; Tw is wheel torque, Te is engine torque, N is a gear ratio, η is an efficiency, and the subscripts t and f denote the gearbox and differential, respectively. Effective torque will often be 5–15% lower than the shaft or crank rating of an engine because of losses through the drivetrain.
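The formula translates directly into a short calculation. The gear ratios and efficiency figures below are illustrative assumptions, not values from the text (which only states a typical 5–15% overall drivetrain loss).

```python
def wheel_torque(engine_torque, gear_ratio, final_drive,
                 eta_gearbox=0.96, eta_diff=0.94):
    """Tw = Te * Ntf * eta_tf, with Ntf = Nt * Nf and eta_tf = eta_t * eta_f.
    The two efficiency defaults are illustrative assumptions."""
    n_tf = gear_ratio * final_drive     # combined gear ratio (gearbox x diff)
    eta_tf = eta_gearbox * eta_diff     # combined drivetrain efficiency
    return engine_torque * n_tf * eta_tf

# Example: 300 N·m at the crank, 2.5:1 gearbox ratio, 3.4:1 final drive.
tw = wheel_torque(300.0, 2.5, 3.4)        # ≈ 2301 N·m at the wheels
loss = 1 - wheel_torque(1.0, 1.0, 1.0)    # overall drivetrain loss ≈ 9.8%
```

With these assumed efficiencies the drivetrain loss comes out near 10%, inside the 5–15% range quoted above.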
Effective torque:
For a general article please see: Machine torque. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**8mm Gasser**
8mm Gasser:
The 8mm Gasser is a rimmed cartridge used in the Rast-Gasser M1898 revolver and a small number of converted Mauser C96 pistols. Its bullet is cylindro-ogival and is of the jacketed type. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Headache attributed to a substance or its withdrawal**
Headache attributed to a substance or its withdrawal:
Headaches can be attributed to many different substances. Some of these include alcohol, nitric oxide (NO), carbon monoxide, cocaine, caffeine, and monosodium glutamate. Chronic use of certain medications used to treat headaches can itself begin causing headaches, known as medication overuse headaches. Headaches may also be a symptom of medication withdrawal.
Classification:
These headaches have been further subclassified by the ICHD-2 into: headaches induced by acute substance use or exposure; medication overuse headaches (MOH); headaches attributed to chronic medication use; and headaches attributed to substance withdrawal.
Causes:
A number of different causes contribute to this class of headache. Several common chemicals may be the culprit. Nitrite compounds dilate blood vessels, causing dull and pounding headaches with repeated exposure. Nitrite is found in dynamite, in heart medicine, and as a chemical used to preserve meat (hence these being known as "nitrite" or "hot dog" headaches). Eating foods prepared with monosodium glutamate (MSG) may likewise result in headache. Acetaldehyde from alcohol may also cause a headache, either acutely or after a number of hours (hangover).
Causes:
Poisons, like carbon tetrachloride found in insecticides and lead can also cause headaches with repeated exposure. Ingesting lead paint or having contact with lead batteries can cause headaches, and so can exposure to materials that contain chemical solvents, like benzene, which are found in turpentine, spray adhesives, rubber cement, and inks. Headaches are also a symptom of carbon monoxide poisoning.
Causes:
Drugs such as amphetamines can cause headaches as a side effect. Another type of drug-related headache occurs during withdrawal from long-term therapy with the antimigraine drug ergotamine tartrate. This is more commonly known as rebound headache, although some sources use the term interchangeably.
Diagnosis:
Headaches due to environmental causes are usually diagnosed by taking an exposure history.
Treatment:
These headaches are treated by determining the cause of the headache and treating or removing that cause.
**Trigonitis**
Trigonitis:
Trigonitis is a condition of inflammation of the trigone region of the bladder. It is more common in women. The cause of trigonitis is not known, and there is no well-established treatment. Electrocautery is sometimes used, but is generally unreliable as a treatment and typically does not produce quick results. Several drugs, such as muscle relaxants, antibiotics, and antiseptics such as Urised, have varied and unreliable results. Other forms of treatment include urethrotomy, cryosurgery, and neurostimulation.
**Acyl-(acyl-carrier-protein)—phospholipid O-acyltransferase**
Acyl-(acyl-carrier-protein)—phospholipid O-acyltransferase:
In enzymology, an acyl-[acyl-carrier-protein]-phospholipid O-acyltransferase (EC 2.3.1.40) is an enzyme that catalyzes the chemical reaction: acyl-[acyl-carrier protein] + O-(2-acyl-sn-glycero-3-phospho)ethanolamine ⇌ [acyl-carrier protein] + O-(1,2-diacyl-sn-glycero-3-phospho)ethanolamine. Thus, the two substrates of this enzyme are acyl-[acyl-carrier protein] and O-(2-acyl-sn-glycero-3-phospho)ethanolamine, whereas its two products are [acyl-carrier protein] and O-(1,2-diacyl-sn-glycero-3-phospho)ethanolamine.
This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acyl-[acyl-carrier protein]:O-(2-acyl-sn-glycero-3-phospho)ethanolamine O-acyltransferase. Other names in common use include acyl-[acyl-carrier, protein]:O-(2-acyl-sn-glycero-3-phospho)-ethanolamine, and O-acyltransferase. This enzyme participates in glycerophospholipid metabolism. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Code Red (computer worm)**
Code Red (computer worm):
Code Red was a computer worm observed on the Internet on July 15, 2001. It attacked computers running Microsoft's IIS web server. It was the first large-scale, mixed-threat attack to successfully target enterprise networks. The Code Red worm was first discovered and researched by eEye Digital Security employees Marc Maiffret and Ryan Permeh when it exploited a vulnerability discovered by Riley Hassell. They named it "Code Red" because they were drinking the Mountain Dew flavor of the same name at the time of discovery. Although the worm had been released on July 13, the largest group of infected computers was seen on July 19, 2001, when the number of infected hosts reached 359,000. It spread worldwide, becoming particularly prevalent in North America, Europe, and Asia (including China and India).
Concept:
Exploited vulnerability The worm exploited a vulnerability in the indexing software distributed with IIS, described in Microsoft Security Bulletin MS01-033, for which a patch had become available a month earlier.
Concept:
The worm spread itself using a common type of vulnerability known as a buffer overflow. It did this by using a long string of the repeated letter 'N' to overflow a buffer, allowing the worm to execute arbitrary code and infect the machine with the worm. Kenneth D. Eichman was the first to discover how to block it, and was invited to the White House for his discovery.
Concept:
Worm payload The payload of the worm included: defacing the affected web site to display "HELLO! Welcome to http://www.worm.com! Hacked By Chinese!"; and other activities based on the day of the month. Days 1–19: trying to spread itself by looking for more IIS servers on the Internet.
Days 20–27: launching denial-of-service attacks on several fixed IP addresses. The IP address of the White House web server was among these.
Concept:
Days 28–end of month: sleeping; no active attacks. When scanning for vulnerable machines, the worm did not test to see if the server running on a remote machine was running a vulnerable version of IIS, or even to see if it was running IIS at all. Apache access logs from this time frequently had entries such as: GET /default.ida?NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN%u9090%u6858%ucbd3%u7801%u9090%u6858%ucbd3%u7801%u9090%u6858%ucbd3%u7801%u9090%u9090%u8190%u00c3%u0003%u8b00%u531b%u53ff%u0078%u0000%u00=a HTTP/1.0 The worm's payload is the string following the last 'N'. Due to a buffer overflow, a vulnerable host interpreted this string as computer instructions, propagating the worm.
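Probes like the Apache log entry shown above are easy to flag after the fact with a simple pattern match. The following is an illustrative sketch (the function name and length threshold are my own); it keys on the `/default.ida` request and the long run of 'N' filler characters ('X' in the Code Red II variant) used to overflow the buffer.

```python
import re

# Code Red probes were GET requests for /default.ida? followed by a long
# run of 'N' filler ('X' for Code Red II) that overflowed the buffer.
PROBE = re.compile(r"GET /default\.ida\?[NX]{100,}")

def is_code_red_probe(log_line):
    """Return True if an access-log line looks like a Code Red probe."""
    return bool(PROBE.search(log_line))
```

A real intrusion-detection rule would be more careful (e.g. anchoring on the request method field of the log format), but this captures the signature described in the text.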
Similar worms:
On August 4, 2001, Code Red II appeared. Although it used the same injection vector, it had a completely different payload. It pseudo-randomly chose targets on the same or different subnets as the infected machines according to a fixed probability distribution, favoring targets on its own subnet more often than not. Additionally, it used the pattern of repeating 'X' characters instead of 'N' characters to overflow the buffer.
Similar worms:
eEye believed that the worm originated in Makati, Philippines, the same origin as the VBS/Loveletter (aka "ILOVEYOU") worm. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Olympus M.Zuiko Digital ED 40-150mm f/4-5.6**
Olympus M.Zuiko Digital ED 40-150mm f/4-5.6:
The M.Zuiko Digital ED 40-150mm f/4.0-5.6 is a Micro Four Thirds System lens by Olympus Corporation. It is sold as a standalone item, and also as part of a kit along with bodies for all cameras in the Olympus PEN series (the discontinued E-P1 and the current E-P2, E-PL1, E-PL2, E-P3, E-PL3, E-PM1, E-PM2 and E-PL5).
The lens is available in black or silver. An updated "r" variant is available which appears to have identical basic characteristics and only minor cosmetic differences. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Trett**
Trett:
Trett (or tret) was an allowance, made up until the early 19th century, for waste, dust, and other impurity in items of commerce, generally amounting to 4 pounds (1.8 kg) in each 104 pounds (47 kg), i.e. 3.85%. It fell into disuse because merchants preferred simply to adjust the price rather than make a calculation for trett.
Usage:
Trett was most commonly used in calculating the true weight for imported commodities to Great Britain which would be sold by the pound avoirdupois. From the gross weight, tare weight would be deducted, to account for the weight of any container that the commodity was enclosed in. The remainder was known as the suttle. Trett was deducted from the suttle at the rate of four pounds for every 104 pounds of commodity, to take account of any dust, sand, and other impurity included in the suttle. The remainder, after trett was deducted, became known as the "neat weight" or "nett weight", a term used in common for commodities in which no trett was to be deducted, but from which tare had been deducted.
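The gross → suttle → net chain described above is simple arithmetic. A worked sketch (the cask weights in the example are invented for illustration):

```python
def net_weight(gross_lb, tare_lb, trett_per=4, trett_base=104):
    """Gross minus tare gives the suttle; trett is then deducted at the
    British rate of 4 lb per 104 lb of suttle described above."""
    suttle = gross_lb - tare_lb
    trett = suttle * trett_per / trett_base
    return suttle - trett

# Example: 520 lb gross in a 104 lb cask ->
# suttle 416 lb, trett 16 lb, net weight 400 lb.
net = net_weight(520, 104)
```

The default rate of 4 per 104 corresponds to the 3.85% figure quoted earlier; the Irish rates mentioned below would be expressed by changing `trett_per` and `trett_base`.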
Disuse:
By the early 19th century, trett had fallen into disuse, with merchants preferring to make allowances in the price for any impurity, as such allowance was far easier than the arithmetic calculation for trett. It was not used at custom houses, nor was it available at the British East India Company's warehouses.
In Ireland:
Trett varied considerably from the British usage, for goods in Ireland. For most goods there, trett was deducted at the rate of 1 pound per 112 pounds (0.89%). However, in Cork, trett was deducted after tare for wool at the rate of 8 pounds per 20 stone (2.86%), and in Dublin 8 pounds per 21 stone (2.72%). Additionally in Dublin, trett was deducted at the rate of 2 pounds (0.91 kg) per large cask of butter, 4 pounds (1.8 kg) per raw hide, and 8 pounds (3.6 kg) per beef carcase.
**Micromatabilin**
Micromatabilin:
Micromatabilin, the green pigment of the spider species Micrommata virescens, is characterized as a mixture of biliverdin conjugates. The two isolated fractions have identical absorption bands (free base: 620–630 nm, hydrochloride: 690 nm, zinc complex: 685–690 nm). Chromic acid degradation yields imides I, II, IIIa, and IIIb. Differences in non-hydrolytic degradation and in polarity lead to the conclusion that fraction 1 is a monoconjugate and fraction 2 a diconjugate of biliverdin.
**Copper in biology**
Copper in biology:
Copper is an essential trace element that is vital to the health of all living things (plants, animals and microorganisms). In humans, copper is essential to the proper functioning of organs and metabolic processes. The human body has complex homeostatic mechanisms which attempt to ensure a constant supply of available copper, while eliminating excess copper whenever this occurs. However, like all essential elements and nutrients, too much or too little nutritional ingestion of copper can result in a corresponding condition of copper excess or deficiency in the body, each of which has its own unique set of adverse health effects.
Copper in biology:
Daily dietary standards for copper have been set by various health agencies around the world. Standards adopted by some nations recommend different copper intake levels for adults, pregnant women, infants, and children, corresponding to the varying need for copper during different stages of life.
Biochemistry:
Copper proteins have diverse roles in biological electron transport and oxygen transportation, processes that exploit the easy interconversion of Cu(I) and Cu(II). Copper is essential in the aerobic respiration of all eukaryotes. In mitochondria, it is found in cytochrome c oxidase, which is the last protein in oxidative phosphorylation. Cytochrome c oxidase is the protein that binds the O2 between a copper and an iron; the protein transfers 4 electrons to the O2 molecule to reduce it to two molecules of water. Copper is also found in many superoxide dismutases, proteins that catalyze the decomposition of superoxides by converting them (by disproportionation) to oxygen and hydrogen peroxide: Cu+-SOD + O2− + 2H+ → Cu2+-SOD + H2O2 (oxidation of copper; reduction of superoxide) Cu2+-SOD + O2− → Cu+-SOD + O2 (reduction of copper; oxidation of superoxide) The protein hemocyanin is the oxygen carrier in most mollusks and some arthropods such as the horseshoe crab (Limulus polyphemus). Because hemocyanin is blue, these organisms have blue blood rather than the red blood of iron-based hemoglobin. Structurally related to hemocyanin are the laccases and tyrosinases. Instead of reversibly binding oxygen, these proteins hydroxylate substrates, illustrated by their role in the formation of lacquers. The biological role for copper commenced with the appearance of oxygen in Earth's atmosphere. Several copper proteins, such as the "blue copper proteins", do not interact directly with substrates; hence they are not enzymes. These proteins relay electrons by the process called electron transfer.
Biochemistry:
A unique tetranuclear copper center has been found in nitrous-oxide reductase. Chemical compounds which were developed for treatment of Wilson's disease have been investigated for use in cancer therapy.
Optimal Copper levels:
Copper deficiency and toxicity can be either of genetic or non-genetic origin. The study of copper's genetic diseases, which are the focus of intense international research activity, has shed insight into how human bodies use copper, and why it is important as an essential micronutrient. The studies have also resulted in successful treatments for genetic copper excess conditions, empowering patients whose lives were once jeopardized.
Optimal Copper levels:
Researchers specializing in the fields of microbiology, toxicology, nutrition, and health risk assessments are working together to define the precise copper levels that are required for essentiality, while avoiding deficient or excess copper intakes. Results from these studies are expected to be used to fine-tune governmental dietary recommendation programs which are designed to help protect public health.
Essentiality:
Copper is an essential trace element (i.e., micronutrient) that is required for plant, animal, and human health.
It is also required for the normal functioning of aerobic (oxygen-requiring) microorganisms. Copper's essentiality was first discovered in 1928, when it was demonstrated that rats fed a copper-deficient milk diet were unable to produce sufficient red blood cells. The anemia was corrected by the addition of copper-containing ash from vegetable or animal sources.
Essentiality:
Fetuses, infants, and children Human milk is relatively low in copper, and the neonate's liver stores fall rapidly after birth, supplying copper to the fast-growing body during the breast-feeding period. These supplies are necessary to carry out such metabolic functions as cellular respiration, melanin pigment and connective tissue synthesis, iron metabolism, free radical defense, gene expression, and the normal functioning of the heart and immune systems in infants. Since copper availability in the body is hindered by an excess of iron and zinc intake, pregnant women prescribed iron supplements to treat anemia or zinc supplements to treat colds should consult physicians to be sure that the prenatal supplements they may be taking also contain nutritionally significant amounts of copper.
Essentiality:
When newborn babies are breastfed, the babies' livers and the mothers' breast milk provide sufficient quantities of copper for the first 4–6 months of life. When babies are weaned, a balanced diet should provide adequate sources of copper.
Cow's milk and some older infant formulas are depleted in copper. Most formulas are now fortified with copper to prevent depletion.
Essentiality:
Most well-nourished children have adequate intakes of copper. Health-compromised children, including those who are premature, malnourished, have low birth weights, develop infections, and who experience rapid catch-up growth spurts, are at elevated risk for copper deficiencies. Fortunately, diagnosis of copper deficiency in children is clear and reliable once the condition is suspected. Supplements under a physician's supervision usually facilitate a full recovery.
Essentiality:
Homeostasis Copper is absorbed, transported, distributed, stored, and excreted in the body according to complex homeostatic processes which ensure a constant and sufficient supply of the micronutrient while simultaneously avoiding excess levels. If an insufficient amount of copper is ingested for a short period of time, copper stores in the liver will be depleted. Should this depletion continue, a copper health deficiency condition may develop. If too much copper is ingested, an excess condition can result. Both of these conditions, deficiency and excess, can lead to tissue injury and disease. However, due to homeostatic regulation, the human body is capable of balancing a wide range of copper intakes for the needs of healthy individuals. Many aspects of copper homeostasis are known at the molecular level. Copper's essentiality is due to its ability to act as an electron donor or acceptor as its oxidation state fluxes between Cu1+ (cuprous) and Cu2+ (cupric). As a component of about a dozen cuproenzymes, copper is involved in key redox (i.e., oxidation-reduction) reactions in essential metabolic processes such as mitochondrial respiration, synthesis of melanin, and cross-linking of collagen. Copper is an integral part of the antioxidant enzyme copper-zinc superoxide dismutase, and has a role in iron homeostasis as a cofactor in ceruloplasmin. A list of some key copper-containing enzymes and their functions is summarized below: The transport and metabolism of copper in living organisms is currently the subject of much active research. Copper transport at the cellular level involves the movement of extracellular copper across the cell membrane and into the cell by specialized transporters. In the bloodstream, copper is carried throughout the body by albumin, ceruloplasmin, and other proteins. The majority of blood copper (or serum copper) is bound to ceruloplasmin.
The proportion of ceruloplasmin-bound copper can range from 70 to 95% and differs between individuals, depending, for example, on hormonal cycle, season, and copper status. Intracellular copper is routed to sites of synthesis of copper-requiring enzymes and to organelles by specialized proteins called metallochaperones. Another set of these transporters carries copper into subcellular compartments. Certain mechanisms exist to release copper from the cell. Specialized transporters return excess unstored copper to the liver for additional storage and/or biliary excretion. These mechanisms ensure that free unbound toxic ionic copper is unlikely to exist in the majority of the population (i.e., those without genetic copper metabolism defects).
Absorption:
In mammals copper is absorbed in the stomach and small intestine, although there appear to be differences among species with respect to the site of maximal absorption. Copper is absorbed from the stomach and duodenum in rats and from the lower small intestine in hamsters. The site of maximal copper absorption is not known for humans, but is assumed to be the stomach and upper intestine because of the rapid appearance of 64Cu in the plasma after oral administration. Absorption of copper ranges from 15 to 97%, depending on copper content, the form of the copper, and the composition of the diet.

Various factors influence copper absorption. For example, copper absorption is enhanced by ingestion of animal protein, citrate, and phosphate. Copper salts, including copper gluconate, copper acetate, and copper sulfate, are more easily absorbed than copper oxides. Elevated levels of dietary zinc, as well as cadmium and high intakes of phytate and simple sugars (fructose, sucrose), inhibit dietary absorption of copper. Furthermore, low levels of dietary copper appear to inhibit iron absorption.

Some forms of copper are not soluble in stomach acids and cannot be absorbed from the stomach or small intestine. Also, some foods may contain indigestible fiber that binds with copper. High intakes of zinc can significantly decrease copper absorption, and extreme intakes of vitamin C or iron can also affect copper absorption, illustrating that micronutrients need to be consumed as a balanced mixture. This is one reason why extreme intakes of any single micronutrient are not advised. Individuals with chronic digestive problems may be unable to absorb sufficient amounts of copper, even though the foods they eat are copper-rich.
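Because absorption efficiency spans such a wide range, the copper actually retained can differ greatly for the same intake. A minimal sketch (the intake and the specific absorption fractions below are hypothetical examples within the 15–97% range cited above, not values from this article):

```python
# Illustrative sketch (not from the source): copper absorbed from a given
# dietary intake at different absorption efficiencies. The 15-97% range
# is cited in the text; the scenario values below are assumptions.

def absorbed_mg(intake_mg: float, absorption_fraction: float) -> float:
    """Copper absorbed (mg) from a given dietary intake."""
    return intake_mg * absorption_fraction

# A hypothetical 1.0 mg intake at the extremes and middle of the range:
for frac in (0.15, 0.50, 0.97):
    print(f"{frac:.0%} absorption -> {absorbed_mg(1.0, frac):.2f} mg absorbed")
```

The same intake can thus yield anywhere from 0.15 mg to 0.97 mg of absorbed copper, which is why dietary composition matters as much as total copper content.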
Several copper transporters that move copper across cell membranes have been identified, and other intestinal copper transporters may exist. Intestinal copper uptake may be catalyzed by Ctr1, which is expressed in all cell types so far investigated, including enterocytes, and which catalyzes the transport of Cu1+ across the cell membrane. Excess copper (as well as other heavy metal ions such as zinc or cadmium) may be bound by metallothionein and sequestered within intracellular vesicles of enterocytes (i.e., the predominant cells in the small intestinal mucosa).
Distribution:
Copper released from intestinal cells moves to the serosal (i.e., thin membrane lining) capillaries, where it binds to albumin, glutathione, and amino acids in the portal blood. There is also evidence for a small protein, transcuprein, with a specific role in plasma copper transport. Several or all of these copper-binding molecules may participate in serum copper transport. Copper from the portal circulation is primarily taken up by the liver. Once in the liver, copper is either incorporated into copper-requiring proteins, which are subsequently secreted into the blood, or excreted into the bile. Most of the copper (70–95%) excreted by the liver is incorporated into ceruloplasmin, the main copper carrier in blood. Copper is transported to extra-hepatic tissues by ceruloplasmin, albumin, and amino acids. By regulating copper release, the liver exerts homeostatic control over extra-hepatic copper.
Excretion:
Bile is the major pathway for the excretion of copper and is vitally important in the control of liver copper levels. Most fecal copper results from biliary excretion; the remainder is derived from unabsorbed copper and copper from desquamated mucosal cells.
Dietary recommendations:
Various national and international organizations concerned with nutrition and health have standards for copper intake at levels judged to be adequate for maintaining good health. These standards are periodically changed and updated as new scientific data become available. The standards sometimes differ among countries and organizations.
Adults:
The World Health Organization recommends a minimal acceptable intake of approximately 1.3 mg/day. These values are considered to be adequate and safe for most of the general population. In North America, the U.S. Institute of Medicine (IOM) set the Recommended Dietary Allowance (RDA) for copper for healthy adult men and women at 0.9 mg/day. As for safety, the IOM also sets tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient; in the case of copper, the UL is set at 10 mg/day. The European Food Safety Authority reviewed the same safety question and set its UL at 5 mg/day.
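As a rough illustration of how these reference values are applied, the sketch below classifies a daily copper intake against the quoted adult RDA and the two ULs. The reference values are those stated above; the classification labels and the trial intakes are illustrative assumptions, and this is not dietary advice:

```python
# Illustrative sketch: classify an adult's daily copper intake (mg/day)
# against the reference values quoted above. Labels are assumptions.

RDA_MG = 0.9       # U.S. IOM Recommended Dietary Allowance, adults
UL_IOM_MG = 10.0   # U.S. IOM tolerable upper intake level
UL_EFSA_MG = 5.0   # European Food Safety Authority upper level

def classify_intake(mg_per_day: float) -> str:
    """Return a coarse label for a daily copper intake."""
    if mg_per_day < RDA_MG:
        return "below RDA"
    if mg_per_day > UL_IOM_MG:
        return "above both ULs"
    if mg_per_day > UL_EFSA_MG:
        return "above EFSA UL, within IOM UL"
    return "within recommended range"

# Hypothetical intakes:
print(classify_intake(1.2))   # within recommended range
print(classify_intake(7.0))   # above EFSA UL, within IOM UL
```

Note that the EFSA and IOM ULs differ by a factor of two, so an intake between 5 and 10 mg/day is classified differently depending on which authority's limit is applied.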
Adolescents, children, and infants:
Full-term and premature infants are more sensitive to copper deficiency than adults. Since the fetus accumulates copper during the last 3 months of pregnancy, infants born prematurely have not had sufficient time to store adequate reserves of copper in their livers and therefore require more copper at birth than full-term infants. For full-term infants, the North American recommended safe and adequate intake is approximately 0.2 mg/day; for premature babies, it is considerably higher: 1 mg/day. The World Health Organization recommends similar minimum adequate intakes and advises that premature infants be given formula supplemented with extra copper to prevent the development of copper deficiency.
Pregnant and lactating women:
In North America, the IOM has set the RDA for pregnancy at 1.0 mg/day and for lactation at 1.3 mg/day. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA. The PRI is 1.6 mg/day for both pregnancy and lactation – higher than the U.S. RDAs.
Food sources:
Foods contribute virtually all of the copper consumed by humans. In both developed and developing countries, adults, young children, and adolescents who consume diets of grain, millet, tuber, or rice along with legumes (beans) or small amounts of fish or meat, some fruits and vegetables, and some vegetable oil are likely to obtain adequate copper if their total food consumption is adequate in calories. In developed countries where consumption of red meat is high, copper intake may also be adequate.

As a natural element in the earth's crust, copper exists in most of the world's surface water and groundwater, although the actual concentration of copper in natural waters varies geographically. Drinking water can comprise 20–25% of dietary copper. In many regions of the world, copper tubing that conveys drinking water can be a source of dietary copper. Copper tubing can leach a small amount of copper, particularly during its first year or two of service; afterwards, a protective surface usually forms on the inside of the tubes that slows leaching.
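To make the drinking-water contribution concrete, here is a small sketch that computes the share of a day's copper supplied by tap water. The 20–25% range comes from the text above, but the concentration, consumption, and total-intake figures in the example are hypothetical assumptions:

```python
# Hypothetical example: copper contributed by drinking water.
# The specific concentration, consumption, and total-intake values
# are illustrative assumptions, not figures from this article.

def water_copper_share(conc_mg_per_l: float, litres_per_day: float,
                       total_intake_mg: float) -> float:
    """Fraction of total dietary copper supplied by drinking water."""
    water_mg = conc_mg_per_l * litres_per_day
    return water_mg / total_intake_mg

# e.g. 0.15 mg/L copper in tap water, 2 L/day, 1.2 mg/day total intake:
share = water_copper_share(0.15, 2.0, 1.2)
print(f"{share:.0%}")  # 25% -- consistent with the 20-25% range cited above
```

Even modest copper concentrations in tap water can therefore account for a meaningful fraction of total intake when total dietary copper is near the low end of typical consumption.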
In France and some other countries, copper bowls are traditionally used for whipping egg white, as the copper helps stabilise bonds in the white as it is beaten and whipped. Small amounts of copper may leach from the bowl during the process and enter the egg white.
Supplementation:
Copper supplements can prevent copper deficiency. They are not prescription medicines and are available at vitamin and herb stores, grocery stores, and online retailers. Different forms of copper supplementation have different absorption rates: for example, the absorption of copper from cupric oxide supplements is lower than that from copper gluconate, copper sulfate, or copper carbonate.
Supplementation is generally not recommended for healthy adults who consume a well-balanced diet which includes a wide range of foods. However, supplementation under the care of a physician may be necessary for premature infants or those with low birth weights, infants fed unfortified formula or cow's milk during the first year of life, and malnourished young children. Physicians may consider copper supplementation for 1) illnesses that reduce digestion (e.g., children with frequent diarrhea or infections; alcoholics), 2) insufficient food consumption (e.g., the elderly, the infirm, those with eating disorders or on diets), 3) patients taking medications that block the body's use of copper, 4) anemia patients who are treated with iron supplements, 5) anyone taking zinc supplements, and 6) those with osteoporosis.
Many popular vitamin supplements include copper as small inorganic molecules such as cupric oxide. These supplements can result in excess free copper in the brain as the copper can cross the blood-brain barrier directly. Normally, organic copper in food is first processed by the liver which keeps free copper levels under control.
Copper deficiency and excess health conditions (non-genetic):
If insufficient quantities of copper are ingested, copper reserves in the liver become depleted and a copper deficiency develops, leading to disease or tissue injury (and, in extreme cases, death). Copper deficiency can be treated with a balanced diet or supplementation under the supervision of a doctor. Conversely, like all substances, copper can become toxic when intake far exceeds World Health Organization limits. Acute copper toxicity is generally associated with accidental ingestion, and its symptoms abate when the high-copper food source is no longer ingested.
In 1996, the International Programme on Chemical Safety, a World Health Organization-associated agency, stated that "there is greater risk of health effects from deficiency of copper intake than from excess copper intake". This conclusion was confirmed in recent multi-route exposure surveys. The health conditions of non-genetic copper deficiency and copper excess are described below.
Copper deficiency:
There are conflicting reports on the extent of deficiency in the U.S. One review indicates that approximately 25% of adolescents, adults, and people over 65 do not meet the Recommended Dietary Allowance for copper. Another source indicates deficiency is less common: a federal survey of food consumption determined that, for women and men over the age of 19, average consumption from foods and beverages was 1.11 and 1.54 mg/day, respectively. For women, 10% consumed less than the Estimated Average Requirement; for men, fewer than 3% did.

Acquired copper deficiency has recently been implicated in adult-onset progressive myeloneuropathy and in the development of severe blood disorders including myelodysplastic syndrome. Fortunately, copper deficiency can be confirmed by very low serum copper and ceruloplasmin concentrations in the blood.
Other conditions linked to copper deficiency include osteoporosis, osteoarthritis, rheumatoid arthritis, cardiovascular disease, colon cancer, and chronic conditions involving bone, connective tissue, the heart and blood vessels, the nervous system, and the immune system. Copper deficiency alters the role of other cellular constituents involved in antioxidant activities, such as iron, selenium, and glutathione, and therefore plays an important role in diseases in which oxidant stress is elevated. A marginal (i.e., 'mild') copper deficiency, believed to be more widespread than previously thought, can impair human health in subtle ways.

Populations susceptible to copper deficiency include those with genetic defects for Menkes disease, low-birth-weight infants, infants fed cow's milk instead of breast milk or fortified formula, pregnant and lactating mothers, patients receiving total parenteral nutrition, individuals with "malabsorption syndrome" (impaired dietary absorption), diabetics, individuals with chronic diseases that result in low food intake (such as alcoholics), and persons with eating disorders. The elderly and athletes may also be at higher risk for copper deficiency due to special needs that increase the daily requirements. Vegetarians may have decreased copper intake due to the consumption of plant foods in which copper bioavailability is low, although Bo Lönnerdal has commented that Gibson's study showed that vegetarian diets provided larger quantities of copper. Fetuses and infants of severely copper-deficient women have increased risk of low birth weights, muscle weaknesses, and neurological problems. Copper deficiencies in these populations may result in anemia, bone abnormalities, impaired growth, weight gain, frequent infections (colds, flu, pneumonia), poor motor coordination, and low energy.
Copper excess:
Copper excess is a subject of much current research. Studies have distinguished copper excess factors in normal populations from those in populations with increased susceptibility to adverse effects and those with rare genetic diseases. This has led to statements from health organizations that could be confusing to the uninformed. For example, according to a U.S. Institute of Medicine report, the intake levels of copper for a significant percentage of the population are lower than recommended levels. On the other hand, the U.S. National Research Council concluded in its report Copper in Drinking Water that there is concern for copper toxicity in susceptible populations, and recommended that additional research be conducted to identify and characterize copper-sensitive populations.
Excess copper intake causes stomach upset, nausea, and diarrhea and can lead to tissue injury and disease.
The oxidation potential of copper may be responsible for some of its toxicity in excess ingestion cases. At high concentrations copper is known to produce oxidative damage to biological systems, including peroxidation of lipids and other macromolecules. While the cause and progression of Alzheimer's disease are not well understood, research indicates that, among several other key observations, iron, aluminum, and copper accumulate in the brains of Alzheimer's patients. However, it is not yet known whether this accumulation is a cause or a consequence of the disease.
Research has been ongoing over the past two decades to determine whether copper is a causative or a preventive agent of Alzheimer's disease. As a possible causative agent or an expression of a metal homeostasis disturbance, studies indicate that copper may play a role in increasing the growth of protein clumps in Alzheimer's disease brains, possibly by damaging a molecule that removes the toxic buildup of amyloid beta (Aβ) in the brain. There is also an association between a diet rich in copper and iron together with saturated fat and Alzheimer's disease.

On the other hand, studies also demonstrate potential beneficial roles of copper in treating rather than causing Alzheimer's disease. For example, copper has been shown to 1) promote the non-amyloidogenic processing of amyloid beta precursor protein (APP), thereby lowering amyloid beta (Aβ) production in cell culture systems, 2) increase lifetime and decrease soluble amyloid production in APP transgenic mice, and 3) lower Aβ levels in the cerebrospinal fluid of Alzheimer's disease patients. Furthermore, long-term copper treatment (oral intake of 8 mg copper as Cu-(II)-orotate-dihydrate) was excluded as a risk factor for Alzheimer's disease in a noted clinical trial on humans, and a potentially beneficial role of copper in Alzheimer's disease has been demonstrated on cerebrospinal fluid levels of Aβ42, a toxic peptide and biomarker of the disease. Since this experiment used Cu-(II)-orotate-dihydrate, it does not bear on the effects of the cupric oxide used in supplements. More research is needed to understand metal homeostasis disturbances in Alzheimer's disease patients and how to address these disturbances therapeutically.
Copper toxicity from excess exposures:
In humans, the liver is the primary organ of copper-induced toxicity. Other target organs include bone and the central nervous and immune systems. Excess copper intake also induces toxicity indirectly by interacting with other nutrients; for example, excess copper intake produces anemia by interfering with iron transport and/or metabolism. The identification of genetic disorders of copper metabolism leading to severe copper toxicity (i.e., Wilson disease) has spurred research into the molecular genetics and biology of copper homeostasis (for further information, refer to the following section on copper genetic diseases).

Much attention has focused on the potential consequences of copper toxicity in normal and potentially susceptible populations. Potentially susceptible subpopulations include hemodialysis patients and individuals with chronic liver disease. Recently, concern was expressed about the potential sensitivity to liver disease of individuals who are heterozygote carriers of Wilson disease genetic defects (i.e., those having one normal and one mutated Wilson copper ATPase gene) but who do not have the disease (which requires defects in both relevant genes). However, to date, no data are available that either support or refute this hypothesis.
Acute exposures:
In case reports of humans intentionally or accidentally ingesting high concentrations of copper salts (doses usually not known but reported to be 20–70 grams of copper), a progression of symptoms was observed including abdominal pain, headache, nausea, dizziness, vomiting and diarrhea, tachycardia, respiratory difficulty, hemolytic anemia, hematuria, massive gastrointestinal bleeding, liver and kidney failure, and death.
Episodes of acute gastrointestinal upset following single or repeated ingestion of drinking water containing elevated levels of copper (generally above 3–6 mg/L) are characterized by nausea, vomiting, and stomach irritation. These symptoms resolve when copper in the drinking water source is reduced.
Three experimental studies demonstrated a threshold for acute gastrointestinal upset of approximately 4–5 mg/L in healthy adults, although it is not clear from these findings whether symptoms are due to acutely irritant effects of copper and/or to its metallic, bitter, salty taste. In an experimental study with healthy adults, the average taste threshold for copper sulfate and chloride in tap water, deionized water, or mineral water was 2.5–3.5 mg/L. This is just below the experimental threshold for acute gastrointestinal upset.
Chronic exposures:
The long-term toxicity of copper has not been well studied in humans, but it is infrequent in normal populations that do not have a hereditary defect in copper homeostasis. There is little evidence to indicate that chronic human exposure to copper results in systemic effects other than liver injury. Chronic copper poisoning leading to liver failure was reported in a young adult male with no known genetic susceptibility who consumed 30–60 mg/day of copper as a mineral supplement for 3 years. Individuals residing in U.S. households supplied with tap water containing >3 mg/L of copper exhibited no adverse health effects.

No effects of copper supplementation on serum liver enzymes, biomarkers of oxidative stress, or other biochemical endpoints have been observed in healthy young human volunteers given daily doses of 6 to 10 mg/day of copper for up to 12 weeks. Infants aged 3–12 months who consumed water containing 2 mg Cu/L for 9 months did not differ from a concurrent control group in gastrointestinal tract (GIT) symptoms, growth rate, morbidity, serum liver enzyme and bilirubin levels, or other biochemical endpoints. Serum ceruloplasmin was transiently elevated in the exposed infant group at 9 months and similar to controls at 12 months, suggesting homeostatic adaptation and/or maturation of the homeostatic response.

Dermal exposure has not been associated with systemic toxicity, but anecdotal reports of allergic responses may reflect sensitization to nickel with cross-reaction to copper, or skin irritation from copper itself. Workers exposed to high air levels of copper (resulting in an estimated intake of 200 mg Cu/day) developed signs suggesting copper toxicity (e.g., elevated serum copper levels, hepatomegaly), although co-occurring exposures to pesticidal agents or in mining and smelting may have contributed to these effects. Effects of copper inhalation are being investigated by an industry-sponsored program on workplace air and worker safety; this multi-year research effort is expected to be finalized in 2011.
Measurements of elevated copper status:
Although a number of indicators are useful in diagnosing copper deficiency, there are no reliable biomarkers of copper excess resulting from dietary intake. The most reliable indicator of excess copper status is liver copper concentration; however, measurement of this endpoint in humans is intrusive and is not generally conducted except in cases of suspected copper poisoning. Increased serum copper or ceruloplasmin levels are not reliably associated with copper toxicity, as elevated concentrations can be induced by inflammation, infection, disease, malignancies, pregnancy, and other biological stressors. Levels of copper-containing enzymes, such as cytochrome c oxidase, superoxide dismutase, and diamine oxidase, vary not only in response to copper status but also in response to a variety of other physiological and biochemical factors, and are therefore inconsistent markers of excess copper status.

A new candidate biomarker for copper excess as well as deficiency has emerged in recent years: the chaperone protein that delivers copper to the antioxidant protein SOD1 (copper, zinc superoxide dismutase), called "copper chaperone for SOD1" (CCS). Excellent animal data support its use as a marker in accessible cells (e.g., erythrocytes) for copper deficiency as well as excess, and CCS is currently being tested as a biomarker in humans.
Hereditary copper metabolic diseases:
Several rare genetic diseases (Wilson disease, Menkes disease, idiopathic copper toxicosis, and Indian childhood cirrhosis) are associated with the improper use of copper in the body. All of these diseases involve mutations of genes encoding specific proteins involved in the absorption and distribution of copper. When these proteins are dysfunctional, copper either builds up in the liver or the body fails to absorb copper. These diseases are inherited and cannot be acquired. Adjusting copper levels in the diet or drinking water will not cure these conditions, although therapies are available to manage the symptoms of genetic copper excess disease.
The study of genetic copper metabolism diseases and their associated proteins is enabling scientists to understand how human bodies use copper and why it is important as an essential micronutrient. The diseases arise from defects in two similar copper pumps, the Menkes and the Wilson Cu-ATPases. The Menkes ATPase is expressed in tissues such as skin-building fibroblasts, kidneys, placenta, brain, gut, and the vascular system, while the Wilson ATPase is expressed mainly in the liver, but also in mammary glands and possibly in other specialized tissues. This knowledge is leading scientists toward possible cures for genetic copper diseases.
Menkes disease:
Menkes disease, a genetic condition of copper deficiency, was first described by John Menkes in 1962. It is a rare X-linked disorder that affects approximately 1/200,000 live births, primarily boys. Patients with Menkes disease cannot absorb the essential copper needed to survive. Death usually occurs in early childhood: most affected individuals die before the age of 10 years, although several patients have survived into their teens and early 20s.

The protein produced by the Menkes gene is responsible for transporting copper across the gastrointestinal tract (GIT) mucosa and the blood–brain barrier. Mutational defects in the gene encoding the copper ATPase cause copper to remain trapped in the lining of the small intestine: copper cannot be pumped out of the intestinal cells and into the blood for transport to the liver and, consequently, to the rest of the body. The disease therefore resembles a severe nutritional copper deficiency despite adequate ingestion of copper.
Symptoms of the disease include coarse, brittle, depigmented hair and other neonatal problems, including the inability to control body temperature, intellectual disability, skeletal defects, and abnormal connective tissue growth. Menkes patients exhibit severe neurological abnormalities, apparently due to the lack of several copper-dependent enzymes required for brain development, including reduced cytochrome c oxidase activity. The brittle, kinky, hypopigmented hair of steely appearance is due to a deficiency in an unidentified cuproenzyme. Reduced lysyl oxidase activity results in defective collagen and elastin polymerization and corresponding connective-tissue abnormalities, including aortic aneurysms, loose skin, and fragile bones.

With early diagnosis and treatment consisting of daily injections of copper histidine intraperitoneally and intrathecally to the central nervous system, some of the severe neurological problems may be avoided and survival prolonged. However, Menkes disease patients retain abnormal bone and connective-tissue disorders and show mild to severe intellectual disability. Even with early diagnosis and treatment, Menkes disease is usually fatal.

Ongoing research into Menkes disease is leading to a greater understanding of copper homeostasis, the biochemical mechanisms involved in the disease, and possible ways to treat it. Investigations into the transport of copper across the blood–brain barrier, based on studies of genetically altered mice, are designed to help researchers understand the root cause of copper deficiency in Menkes disease. The genetic makeup of transgenic mice is altered in ways that help researchers garner new perspectives on copper deficiency; the research to date has been valuable, as genes can be turned off gradually to explore varying degrees of deficiency. Researchers have also demonstrated in test tubes that damaged DNA in the cells of a Menkes patient can be repaired. In time, the procedures needed to repair damaged genes in the human body may be found.
Wilson's disease:
Wilson's disease is a rare autosomal (chromosome 13) recessive genetic disorder of copper transport that causes an excess of copper to build up in the liver. This results in liver toxicity, among other symptoms. The disease is now treatable.
Wilson's disease is produced by mutational defects of a protein that transports copper from the liver to the bile for excretion. The disease involves poor incorporation of copper into ceruloplasmin and impaired biliary copper excretion, and is usually induced by mutations impairing the function of the Wilson copper ATPase. These genetic mutations produce copper toxicosis due to excess copper accumulation, predominantly in the liver and brain and, to a lesser extent, in the kidneys, eyes, and other organs.

The disease, which affects about 1/30,000 infants of both sexes, may become clinically evident at any time from infancy through early adulthood. The age of onset of Wilson's disease ranges from 3 to 50 years. Initial symptoms include hepatic, neurologic, or psychiatric disorders and, rarely, kidney, skeletal, or endocrine symptomatology. The disease progresses with deepening jaundice and the development of encephalopathy, severe clotting abnormalities, occasionally associated with intravascular coagulation, and advanced chronic kidney disease. A peculiar type of tremor in the upper extremities, slowness of movement, and changes in temperament become apparent. Kayser-Fleischer rings, a rusty brown discoloration at the outer rims of the iris due to copper deposition, noted in 90% of patients, become evident as copper begins to accumulate and affect the nervous system. Death almost always occurs if the disease is untreated. Fortunately, identification of the mutations in the Wilson ATPase gene underlying most cases of Wilson's disease has made DNA testing for diagnosis possible.
If diagnosed and treated early enough, patients with Wilson's disease may live long and productive lives. Wilson's disease is managed by copper chelation therapy with D-penicillamine (which binds copper and enables patients to excrete the excess accumulated in the liver), therapy with zinc sulfate or zinc acetate, and restrictive dietary metal intake, such as the elimination of chocolate, oysters, and mushrooms. Zinc therapy is now the treatment of choice: zinc produces a mucosal block by inducing metallothionein, which binds copper in mucosal cells until they slough off and are eliminated in the feces, and it competes with copper for absorption in the intestine by DMT1 (divalent metal transporter 1). More recently, experimental treatments with tetrathiomolybdate have shown promising results. Tetrathiomolybdate appears to be an excellent form of initial treatment in patients who have neurologic symptoms; in contrast to penicillamine therapy, initial treatment with tetrathiomolybdate rarely allows further, often irreversible, neurologic deterioration.

Over 100 different genetic defects leading to Wilson's disease have been described, and some of the mutations show geographic clustering. Many Wilson's patients carry different mutations on each chromosome 13 (i.e., they are compound heterozygotes). Even in individuals who are homozygous for a mutation, the onset and severity of the disease may vary. Individuals homozygous for severe mutations (e.g., those truncating the protein) have earlier disease onset. Disease severity may also be a function of environmental factors, including the amount of copper in the diet or variability in the function of other proteins that influence copper homeostasis.
It has been suggested that heterozygote carriers of the Wilson's disease gene mutation may be potentially more susceptible to elevated copper intake than the general population. A heterozygotic frequency of 1/90 people has been estimated in the overall population. However, there is no evidence to support this speculation. Further, a review of the data on single-allelic autosomal recessive diseases in humans does not suggest that heterozygote carriers are likely to be adversely affected by their altered genetic status.
Hereditary copper metabolic diseases:
Other copper-related hereditary syndromes Other diseases in which abnormalities in copper metabolism appear to be involved include Indian childhood cirrhosis (ICC), endemic Tyrolean copper toxicosis (ETIC), and idiopathic copper toxicosis (ICT), also known as non-Indian childhood cirrhosis. These syndromes were recognized in the early twentieth century, primarily in the Tyrolean region of Austria and in the Pune region of India. ICC, ICT, and ETIC are infancy syndromes that are similar in their apparent etiology and presentation. All appear to have a genetic component and a contribution from elevated copper intake.
Hereditary copper metabolic diseases:
In cases of ICC, the elevated copper intake is due to heating and/or storing milk in copper or brass vessels. ICT cases, on the other hand, are due to elevated copper concentrations in water supplies. Although exposure to elevated concentrations of copper is commonly found in both diseases, some cases appear to develop in children who are exclusively breastfed or who receive only low levels of copper in water supplies. The currently prevailing hypothesis is that ICT is due to a genetic lesion resulting in impaired copper metabolism combined with high copper intake. This hypothesis is supported by the frequency of parental consanguinity in most of these cases, which is absent in areas with elevated copper in drinking water where these syndromes do not occur. ICT appears to be vanishing as a result of greater genetic diversity within the affected populations, in conjunction with educational programs to ensure that tinned cooking utensils are used instead of copper pots and pans that expose cooked foods directly to copper. The preponderance of cases of early childhood cirrhosis identified in Germany over a period of 10 years was not associated with either external sources of copper or elevated hepatic metal concentrations. Only occasional spontaneous cases of ICT arise today.
Cancer:
The role of copper in angiogenesis associated with different types of cancer has been investigated. A copper chelator, tetrathiomolybdate, which depletes copper stores in the body, is under investigation as an anti-angiogenic agent in pilot and clinical trials. The drug may inhibit tumor angiogenesis in hepatocellular carcinoma, pleural mesothelioma, colorectal cancer, head and neck squamous cell carcinoma, breast cancer, and kidney cancer. The copper complex of a synthetic salicylaldehyde pyrazole hydrazone (SPH) derivative induced apoptosis in human umbilical vein endothelial cells (HUVECs) and showed an anti-angiogenic effect in vitro. The trace element copper has also been found to promote tumor growth. Several lines of evidence from animal models indicate that tumors concentrate high levels of copper, and elevated copper has been found in some human cancers. Therapeutic strategies targeting copper in tumors have recently been proposed: upon administration of a specific copper chelator, copper complexes form at relatively high levels in tumors. Because copper complexes are often toxic to cells, tumor cells are killed, while normal cells throughout the body survive owing to their lower copper levels. Researchers have also recently found that cuproptosis, a copper-induced mechanism of mitochondria-related cell death, has been implicated as a breakthrough in the treatment of cancer and has become a new treatment strategy. Some copper chelators gain greater efficacy or novel bioactivity after forming copper-chelator complexes. Cu2+ was found to be critically needed for PDTC-induced apoptosis in HL-60 cells. Copper complexes of salicylaldehyde benzoylhydrazone (SBH) derivatives showed increased growth-inhibition efficacy in several cancer cell lines compared with the metal-free SBHs. SBHs can react with many kinds of transition metal cations, thereby forming a number of complexes.
Copper-SBH complexes were more cytotoxic than complexes of other transition metals (Cu > Ni > Zn = Mn > Fe = Cr > Co) in MOLT-4 cells, an established human T-cell leukemia cell line. SBHs, and especially their copper complexes, appeared to be potent inhibitors of DNA synthesis and cell growth in several human and rodent cancer cell lines. Salicylaldehyde pyrazole hydrazone (SPH) derivatives were found to inhibit the growth of A549 lung carcinoma cells. SPH has identical ligands for Cu2+ as SBH. The Cu-SPH complex was found to induce apoptosis in A549, H322, and H1299 lung cancer cells.
Contraception with copper IUDs:
A copper intrauterine device (IUD) is a type of long-acting reversible contraception that is considered to be one of the most effective forms of birth control.
Plant and animal health:
In addition to being an essential nutrient for humans, copper is vital for the health of animals and plants and plays an important role in agriculture.
Plant and animal health:
Plant health Copper concentrations in soil are not uniform around the world. In many areas, soils have insufficient levels of copper. Soils that are naturally deficient in copper often require copper supplements before agricultural crops, such as cereals, can be grown. Copper deficiencies in soil can lead to crop failure. Copper deficiency is a major issue in global food production, resulting in losses in yield and reduced quality of output. Nitrogen fertilizers can worsen copper deficiency in agricultural soils. The world's two most important food crops, rice and wheat, are highly susceptible to copper deficiency, as are several other important foods, including citrus, oats, spinach, and carrots. On the other hand, some foods, including coconuts, soybeans, and asparagus, are not particularly sensitive to copper-deficient soils. The most effective strategy to counter copper deficiency is to supplement the soil with copper, usually in the form of copper sulfate. Sewage sludge is also used in some areas to replenish agricultural land with organics and trace metals, including copper.
Plant and animal health:
Animal health In livestock, cattle and sheep commonly show indications when they are copper deficient. Swayback, a sheep disease associated with copper deficiency, imposes enormous costs on farmers worldwide, particularly in Europe, North America, and many tropical countries. For pigs, copper has been shown to be a growth promoter. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**GPR162**
GPR162:
Probable G-protein coupled receptor 162 is a protein that in humans is encoded by the GPR162 gene. This gene was identified upon genomic analysis of a gene-dense region at human chromosome 12p13. It appears to be mainly expressed in the brain; however, its function is not known. Alternatively spliced transcript variants encoding different isoforms have been identified.
**Delivery drone**
Delivery drone:
A delivery drone is an unmanned aerial vehicle (UAV) used to transport packages such as medical supplies, food, or other goods. Given their life-saving potential, medical supply use cases in particular have become the most widely tested type of drone delivery, with trials and pilot projects in dozens of countries, including Australia, Canada, Botswana, Ghana, Uganda, the UK, and the US, among others (see below). Delivery drones are typically autonomous and electric, and are sometimes operated as part of a fleet.
Delivery drone:
Since 2018, the German consulting firm Drone Industry Insights has provided a yearly ranking of drone delivery service providers which is widely referenced throughout the drone industry. Drones are now also being used to ferry personnel in the form of air taxis; Dubai and India are presently carrying out trials of drones as air taxis.
Applications:
Healthcare delivery Drones can be used to transport medicinal products such as blood products, vaccines, pharmaceuticals, and medical samples. Unlike trucks or motorcycles, medical delivery drones can fly into and out of remote or otherwise inaccessible regions. Medical drone delivery is credited with saving lives during emergency deliveries of blood in Rwanda and post-hurricane relief in Puerto Rico. During the COVID-19 pandemic, drones made medical deliveries of personal protective equipment and COVID-19 tests in the United States, Israel, and Ghana. In partnership with the Ghana Ministry of Health, Zipline drones delivered thousands of COVID-19 vaccine vials in Ghana during 2020 and 2021. The University of British Columbia (UBC) has selected Drone Delivery Canada Corp for UBC's “Remote Communities Drone Transport Initiative” program. This solution will be used to transport a variety of cargo for the benefit of the Stellat’en First Nation, located in the Fraser Lake area of central northern British Columbia.
Applications:
Irish drone-delivery startup Manna was ready to start deliveries as the COVID-19 pandemic started in March of 2020. The company quickly restructured to start making essential deliveries of prescription medication and food to isolated residents in the village of Moneygall, Barack Obama's ancestral home. Ireland's national health service, the HSE, designated Manna an essential service, and it took the company a week to pivot from the original plan of food delivery to delivering essential medical goods. In 2021, Skyports drones were used to carry COVID-19 kits from Mull, Clachan-Seil, and Lochgilphead to Lorn and Islands Hospital in Oban, which has been described as a UK first. In August 2022, St Mary's Hospital on the Isle of Wight was selected to participate in a pilot scheme for receiving medicine delivered by delivery drones, with the shipments being flown in from the Portsmouth Hospitals University NHS Trust's pharmacy on the mainland.
Applications:
Food delivery Drones have been proposed as a solution for rapidly delivering prepared foods, such as pizzas, tacos, and frozen beverages.
Applications:
Foodpanda has piloted food deliveries in Singapore using multirotor drones from ST Engineering and in Pakistan using VTOL drones from Woot Tech. Early prototypes of food delivery drones include the Tacocopter demonstration by Star Simpson, a taco delivery concept that used a smartphone app to order drone-delivered tacos in the San Francisco area. The revelation that it did not exist as a working delivery system or app led to it being labelled a hoax. A similar concept named the "burrito bomber" was tested in 2012. Manna drone delivery has been successfully delivering orders by drone from Tesco, local coffee shops and bookshops, and takeaways via Just Eat in the town of Oranmore in County Galway since October, and has proved so popular that 35 per cent of the roughly 3,000 homes in its delivery area have tried the service.
Applications:
Postal delivery Different postal companies from Australia, Switzerland, Germany, Singapore, the United Kingdom and Ukraine have undertaken various drone trials as they test the feasibility and profitability of unmanned delivery drone services. The USPS has been testing delivery systems with HorseFly Drones.
Ship resupply The shipping line Maersk and the Port of Rotterdam have experimented with using drones to resupply offshore ships instead of sending smaller boats.
Regulation:
In February 2014, the prime minister and cabinet affairs minister of the United Arab Emirates (UAE) announced that the UAE planned to launch a fleet of UAVs for civilian purposes. Plans were for the UAVs to use fingerprint and eye-recognition systems to deliver official documents such as passports, ID cards, and licenses, and to supply emergency services at accidents. A battery-powered prototype four-rotor UAV about half a meter across was displayed in Dubai. In the United States, initial attempts at commercial use of UAVs were blocked by FAA regulation but were later allowed. In June 2014, the FAA published a document that listed activities not permitted under its regulations, including commercial use, which the agency stated included "delivering packages to people for a fee" or offered as part of a "purchase or another offer." The agency issued waivers to many organizations for less restrictive commercial uses, but each had to apply individually. In August 2016, the FAA adopted Part 107 rules that allowed limited commercial use by right. Drone operation under these rules is restricted to the line of sight of the pilot and is not allowed over people, meaning that many applications, such as delivery to populated areas, still require a waiver. The rules also require that the UAVs weigh less than 55 lb (25 kg), fly no higher than 400 feet (120 m) and no faster than 100 miles per hour (160 km/h), and be operated only during daytime, and that drone operators qualify for flying certificates and be at least 16 years old. In 2019, the FAA began certifying drone delivery companies under conventional charter airline Part 135 rules, with some accommodations for drones (such as that the pilot manual did not need to be carried on board).
In preparation for higher volumes of drone traffic, the FAA finalized the Remote ID regulation in December 2020, giving manufacturers 18 months and operators 30 months to comply with the requirement for self-identification transmissions outside of designated areas. At the same time, the FAA added an Operations Over People and at Night rule to Part 107. Nighttime operations require anti-collision lights and additional pilot training. For flight over people or moving vehicles, drones are placed into four categories depending on their potential to injure people, with the least restricted category having a full Part 21 airworthiness certificate.
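The numeric limits of the baseline Part 107 rules described above lend themselves to a simple rule check. The sketch below encodes them in Python; the `Flight` record and function names are illustrative assumptions rather than any official FAA data model, and real compliance involves many more provisions than these.

```python
from dataclasses import dataclass

@dataclass
class Flight:
    """Hypothetical record of a planned small-UAS flight."""
    weight_kg: float
    altitude_ft: float
    speed_mph: float
    daytime: bool          # baseline 2016 rules; night ops later allowed with lights/training
    pilot_age: int
    line_of_sight: bool

def violates_part_107(f: Flight) -> list:
    """Return the baseline Part 107 limits this flight would exceed."""
    issues = []
    if f.weight_kg >= 25:          # 55 lb (25 kg) weight limit
        issues.append("weight")
    if f.altitude_ft > 400:        # 400 ft (120 m) ceiling
        issues.append("altitude")
    if f.speed_mph > 100:          # 100 mph (160 km/h) speed limit
        issues.append("speed")
    if not f.daytime:
        issues.append("night operation")
    if f.pilot_age < 16:
        issues.append("pilot age")
    if not f.line_of_sight:        # BVLOS still requires a waiver
        issues.append("beyond visual line of sight")
    return issues
```

For example, an overweight flight at 500 ft would be flagged for both weight and altitude, while a compliant flight returns an empty list.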
Early experiments:
The concept of drone delivery entered the mainstream with Amazon Prime Air – Amazon.com founder Jeff Bezos' December 2013 announcement that Amazon was planning rapid delivery of lightweight commercial products using UAVs. Amazon's press release was met with skepticism, with perceived hurdles including federal and state regulatory approval, public safety, reliability, individual privacy, operator training and certification, security (hacking), payload thievery, and logistical challenges. In December 2013, in a research project of Deutsche Post AG subsidiary DHL, a sub-kilogram quantity of medicine was delivered via a prototype Microdrones "Parcelcopter", raising speculation that disaster relief may be the first industry in which the company will use the technology. In August 2014, Google revealed it had been testing UAVs in Australia for two years. The Google X program, known as "Project Wing", announced an aim to produce drones that can deliver products sold via e-commerce. In February 2015, Hangzhou-based e-commerce provider Alibaba started a delivery drone service in partnership with Shanghai YTO Express, delivering tea to 450 customers around select cities in China. In 2015, the Israeli startup Flytrex partnered with AHA, Iceland's largest e-commerce website, and together they initiated a drone delivery route which demonstrated reducing delivery time from 30 minutes to less than 5 minutes. In March 2016, Flirtey conducted the first fully autonomous FAA-approved drone delivery in an urban setting in the U.S. A partnership between 7-Eleven and Flirtey resulted in the first FAA-approved delivery to a residence in the United States in July 2016, delivering a frozen Slurpee. The following month, the company partnered with Domino's in New Zealand to launch the first commercial drone delivery service. In China, JD.com has been developing drone delivery capabilities.
As of June 2017, JD.com had seven different types of delivery drones in testing across four provinces in China (Beijing, Sichuan, Shaanxi and Jiangsu). The drones are capable of delivering packages weighing between 5 and 30 kg (11 to 66 lb) while flying at up to 100 km/h (62 mph). The drones fly along fixed routes from warehouses to special landing pads, where one of JD.com's 300,000 local contractors then delivers the packages to the customers' doorsteps in rural villages. The e-commerce giant is also working on a 1-metric-ton (1,000 kg) delivery drone, which will be tested in Shaanxi. In January 2018, Boeing unveiled a prototype of a cargo drone for payloads of up to 500 lb (227 kg), an electric flying testbed that completed flight tests at the Boeing Research & Technology research center in Missouri.
Commercial systems:
Zipline In 2016, Zipline began its partnership with the government of Rwanda to construct and operate a medical distribution center in Muhanga. Rwanda has a mountainous geography, poor road conditions, and a long rainy season, making an aerial delivery system more cost-efficient and timely than traditional road-based deliveries. As of May 2018, Zipline had delivered over 7,000 units of blood by drone. By October 2020, it had made over 70,000 medical deliveries by drone and expanded operations across Rwanda and Ghana. Zipline's drones are small fixed-wing electric airplanes, enabling them to fly fast and over long distances (up to 180 km round-trip on a single charge) in all weather seen in Rwanda. The drones use an assisted take-off to enter flight and, for landing, an arresting-gear-inspired mechatronic recovery system.
Commercial systems:
During a delivery, the Zipline drone does not land, but instead descends to a low height and drops the package to the ground, slowed by a parachute-like air brake.
In 2020, Zipline began deliveries between Novant Health facilities in Kannapolis and Huntersville, North Carolina.
Commercial systems:
Wing The company Wing, incubated by Google X, began commercial deliveries in Christiansburg, Virginia in October 2019. It carries parcels up to three pounds, using a tether for the final drop. As of April 2020, shippers were limited to FedEx, Walgreens, Sugar Magnolia, Mockingbird Café, and Brugh Coffee. Wing had obtained an FAA Part 135 air carrier certificate in April 2019, which allowed it to charge to carry third-party cargo and to operate out of the line of sight of the pilot. It also delivered over a thousand meals to Virginia Tech students and employees in a fall 2016 test program. Wing also operates in Canberra, Logan, Queensland, and Helsinki. In August 2021, Wing announced it had conducted around 100,000 drone deliveries since launching trial services in September 2019. Wing uses two aircraft designs. The first features 12 hover motors and two cruise motors and is the faster of the two flying machines, while the second design has four cruise motors and a slightly longer wing. Combining hover and cruise motors allows the aircraft greater manoeuvrability and speed for navigating urban environments.
Commercial systems:
UPS As of May 2020, UPS had made over 3,700 paid drone deliveries under Part 107 rules to WakeMed Hospital in Raleigh, North Carolina. In May 2020, UPS began flying prescriptions via Matternet M2 drones under Part 107 rules about half a mile from a CVS to a central location in The Villages, Florida, from which a UPS employee makes the home delivery by golf cart. In 2020, UPS also began drone deliveries between the central Wake Forest Baptist Medical Center campus in Winston-Salem and the health system's other locations. Another delivery service began for UC San Diego Health. In June 2019, UPS Flight Forward obtained a Part 135 Air Carrier certificate from the FAA, allowing longer-distance and nighttime flights.
Commercial systems:
Manna Founded in 2019, Manna operates a fleet of delivery drones in Ireland. Manna was the first company in Europe to receive a Europe-wide license to operate commercial drone deliveries. Its drone carries up to 3.5 kg (~8 lb) in a cargo volume of about 30,000 cm³. With commercial partnerships with Tesco, Unilever, Coca-Cola, Samsung, and local pharmacies, takeaways, coffee shops, bakeries, bookshops, and hardware stores, Manna had completed over 100,000 delivery flights as of March 2022.
Commercial systems:
Swoop Aero Swoop Aero's network delivers health supplies to over 650,000 people in the Nsanje and Chikwawa districts in southern Malawi. Swoop's drones can complete round trips of around 260 kilometres (160 mi) and can carry a maximum weight of 18 kg, which works out to 10 test kits or up to 50 vials of blood. The drones have a wingspan of 2.4 m (nearly 8 feet) and are required to fly below 122 m (about 400 feet) to ensure they do not collide with manned aircraft.
Illegal deliveries:
Drug cartels have used UAVs to transport contraband, sometimes using GPS-guided UAVs. Between 2013 and 2015, UAVs were observed delivering items into prisons on at least four occasions in the United States, while four separate but similar incidents occurred in Ireland, Britain, Australia, and Canada. Though not a popular way of smuggling items into prisons, corrections officials state that some individuals are beginning to experiment with UAVs. In November 2013, four people in Morgan, Georgia, were arrested for allegedly attempting to smuggle contraband into Calhoun State Prison with a hexacopter. In June 2014, a quadcopter crashed into an exercise yard of Wheatfield Prison, Dublin. The quadcopter collided with wires designed to prevent helicopters from landing to aid escapes, causing it to crash. A package containing drugs hung from the quadcopter and was seized by prisoners before prison staff could get to it. Between 2014 and 2015, at two prisons in South Carolina, items such as drugs and cell phones were flown in by UAVs, with one prison not knowing how many deliveries had been successful before the activity gained the attention of authorities.
Technology:
Aircraft configuration A delivery drone's design configuration is typically defined by the use-case of what is being delivered, and where it must be delivered to.
Technology:
A common configuration is a multirotor – such as a quadcopter or octocopter – which is a drone with horizontally-aligned propellers. Another common configuration is a fixed-wing design. A multirotor design provides power to lift the drone and payload, redundancy against powertrain failure, and an ability to hover and descend vertically (VTOL). However, a multirotor configuration is less efficient and produces more noise. A fixed-wing configuration provides an order-of-magnitude increase in range, flight at higher airspeeds, and less noise, but requires more space for take-off, delivery, and landing. There are also hybrid approaches (for example Wingcopter or Swoop Aero) that use multiple horizontal rotors for take-off and landing, and vertical rotors paired with a fixed wing for forward flight.
Technology:
Autopilot There are many sensors in the drone that are necessary for it to fly autonomously. Inertial sensors such as accelerometers help the drone remain in flight by providing data that allow the autopilot to adjust motor speeds (multirotor configuration) or control-surface deflections (fixed-wing configuration) to steer the drone. Navigation sensors such as GPS or magnetic sensors help the drone fly along a specific path or to a specific waypoint by measuring the drone's location and orientation with respect to the earth. Airflow sensors allow the drone to measure air speed, temperature, and density, and that information helps maintain safe control of the aircraft. The drone may also use these sensors to estimate the wind speed and direction to assist with package delivery and/or landing manoeuvres.
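The sensor-to-control loop described above can be illustrated with a minimal sketch: a complementary filter blends an integrated gyro rate (accurate short-term) with an accelerometer-derived angle (stable long-term) into a pitch estimate, and a proportional term converts the error into a motor-speed adjustment. This is a toy illustration of the principle, not any real autopilot's implementation; the gains and blend factor are assumed values.

```python
def fuse_pitch(prev_pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Complementary filter: integrate the gyro rate for fast response,
    and pull slowly toward the accelerometer angle to cancel gyro drift."""
    return alpha * (prev_pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch

def pitch_correction(target_pitch, estimated_pitch, kp=0.5):
    """Proportional control: motor-speed delta steering the estimate
    toward the target pitch (a full autopilot would add I and D terms)."""
    return kp * (target_pitch - estimated_pitch)
```

Run at a fixed rate (dt), the filter output feeds the controller, whose output is mixed into the individual motor commands of a multirotor or the elevator deflection of a fixed-wing drone.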
Technology:
Powertrain Delivery drones need powerful motors to keep both the drone and its load steady in flight. Brushless DC motors are most typically used in drones because they have become cheap, lightweight, powerful, and small. The propeller blades of the drone turn at very high speeds, so the optimal material for these rotor blades maximizes the strength-to-weight ratio. Some are made from carbon-fiber-reinforced composites, while others are made of thermoplastics, which are cheaper, so the cost of replacement when the drone crashes is smaller. Lithium-ion batteries are used in most drones because they offer enough energy and power while being relatively light, so they do not weigh down the drone too much.
Technology:
Ground control system Delivery drones rely on ground control systems both for safe operations and for their commercial operations. For safe operations, the drone operator needs to manage their fleet of aircraft and how they integrate into the broader airspace. For commercial use cases, the ground systems allow for receiving and tracking orders.
**Software map**
Software map:
A software map represents static, dynamic, and evolutionary information of software systems and their software development processes by means of 2D or 3D map-oriented information visualization. It constitutes a fundamental concept and tool in software visualization, software analytics, and software diagnosis. Its primary applications include risk analysis for and monitoring of code quality, team activity, or software development progress and, generally, improving effectiveness of software engineering with respect to all related artifacts, processes, and stakeholders throughout the software engineering process and software maintenance.
Motivation and concepts:
Software maps are applied in the context of software engineering: complex, long-term software development projects commonly face manifold difficulties, such as the friction between completing system features and, at the same time, obtaining a high degree of code quality and software quality to ensure software maintenance of the system in the future. In particular, "Maintaining complex software systems tends to be costly because developers spend a significant part of their time with trying to understand the system’s structure and behavior." The key idea of software maps is to cope with this challenge and these optimization problems by providing an effective means of communication to close the communication gap among the various stakeholders and information domains within software development projects and to obtain insights in the sense of information visualization.
Motivation and concepts:
Software maps take advantage of well-defined cartographic map techniques using the virtual 3D city model metaphor to express the underlying complex, abstract information space. The metaphor is required "since software has no physical shape, there is no natural mapping of software to a two-dimensional space". Software maps are non-spatial maps that have to convert the hierarchy data and its attributes into a spatial representation.
Applications:
Software maps generally allow for comprehensible and effective communication of course, risks, and costs of software development projects to various stakeholders such as management and development teams. They communicate the status of applications and systems currently being developed or further developed to project leaders and management at a glance. "A key aspect for this decision-making is that software maps provide the structural context required for correct interpretation of these performance indicators". As an instrument of communication, software maps act as open, transparent information spaces which enable priorities of code quality and the creation of new functions to be balanced against one another and to decide upon and implement necessary measures to improve the software development process.
Applications:
For example, they facilitate decisions as to where in the code an increase in quality would be beneficial both for speeding up current development activities and for reducing risks of future maintenance problems.
Due to their high degree of expressiveness (e.g., information density) and their instantaneous, automated generation, the maps additionally serve to reflect the current status of system and development processes, bridging an essential information gap between management and development teams, improve awareness about the status, and serve as early risk detection instrument.
Contents:
Software maps are based on objective information as determined by the KPI driven code analysis as well as by imported information from software repository systems, information from the source codes, or software development tools and programming tools. In particular, software maps are not bound to a specific programming language, modeling language, or software development process model.
Contents:
Software maps use the hierarchy of the software implementation artifacts such as source code files as a base to build a tree mapping, i.e., a rectangular area that represents the whole hierarchy, subdividing the area into rectangular sub-areas. A software map, informally speaking, looks similar to a virtual 3D city model, whereby artifacts of the software system appear as virtual, rectangular 3D buildings or towers, which are placed according to their position in the software implementation hierarchy.
Contents:
Software maps can express and combine information about software development, software quality, and system dynamics by mapping that information onto visual variables of the tree map elements such as footprint size, height, color or texture. They can systematically be specified, automatically generated, and organized by templates.
Mapping software system example:
Software maps "combine thematic information about software development processes (evolution), software quality, structure, and dynamics and display that information in a cartographic manner". For example: The height of a virtual building can be proportional to the complexity of the code unit (e.g., single or combined software metrics).
The ground area of a virtual 3D building can be proportional to the number of lines of code in the module (e.g., non-comment lines of code, NCLOC).
Mapping software system example:
The color can express the current development status, i.e., how many developers are changing/editing the code unit. With this exemplary configuration, the software map shows crucial points in the source code with relations to aspects of the software development process. For example, it becomes obvious at a glance what to change in order to: implement changes quickly; evaluate quickly the impact of changes in one place on functionality elsewhere; reduce entanglements that lead to uncontrolled processes in the application; find errors faster; and discover and eliminate bad programming style. Software maps represent key tools in the scope of automated software diagnostics.
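The exemplary configuration above amounts to a direct mapping from software metrics to visual variables of each building. The sketch below illustrates that mapping; the scaling factors, color thresholds, and function name are illustrative assumptions, not part of any particular software-map tool.

```python
def building(ncloc, complexity, active_devs):
    """Map one code unit's metrics to the visual variables of its
    virtual 3D building: footprint ~ size, height ~ complexity,
    color ~ current development activity (a 'hot spot' indicator)."""
    return {
        "footprint": ncloc * 0.1,       # ground area proportional to NCLOC
        "height": complexity * 2.0,     # tower height proportional to complexity
        "color": ("red" if active_devs > 2      # many concurrent editors
                  else "yellow" if active_devs > 0
                  else "gray"),                 # currently untouched
    }
```

A module with 100 non-comment lines, complexity 5, and three active developers would thus render as a tall red building, standing out immediately against quiet gray neighbors.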
As business intelligence tools and recommendation systems:
Software maps can be used, in particular, as an analysis and presentation tool of business intelligence systems specialized in the analysis of software-related data. Furthermore, software maps "serve as recommendation systems for software engineering". Software maps are not limited to software-related information: they can include any hierarchical system information as well, for example, maintenance information about complex technical artifacts.
Visualization techniques:
Software maps are investigated in the domain of software visualization. The visualization of software maps is commonly based on tree mapping, "a space-filling approach to the visualization of hierarchical information structures" or other hierarchy mapping approaches.
Layout algorithms To construct software maps, different layout approaches are used to generate the basic spatial mapping of components, such as: Tree-map algorithms that initially map the software hierarchy into a recursively nested rectangular area.
Voronoi-map algorithms that initially map the software hierarchy by generating a Voronoi map.
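As a concrete illustration of the first family of layouts, a minimal slice-and-dice tree map can be sketched in a few lines: each node's rectangle is divided among its children in proportion to their sizes, alternating the split direction per level. Production software maps typically use squarified or Voronoi variants; this sketch only shows the basic space-filling idea, and the tuple-based node format is an assumption for brevity.

```python
def layout(node, x, y, w, h, depth=0, out=None):
    """Slice-and-dice tree map. node = (name, size, children);
    returns {name: (x, y, width, height)} for every node."""
    if out is None:
        out = {}
    name, size, children = node
    out[name] = (x, y, w, h)
    if children:
        total = sum(child[1] for child in children)
        offset = 0.0
        for child in children:
            frac = child[1] / total           # child's share of the area
            if depth % 2 == 0:                # even depth: split horizontally
                layout(child, x + offset * w, y, w * frac, h, depth + 1, out)
            else:                             # odd depth: split vertically
                layout(child, x, y + offset * h, w, h * frac, depth + 1, out)
            offset += frac
    return out
```

For a root with two equally sized children in a 8×2 rectangle, each child receives a 4×2 strip side by side; a 3D software map would then extrude each leaf rectangle into a building.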
Layout stability The spatial arrangement computed by layouts such as defined by tree maps strictly depends on the hierarchy. If software maps have to be generated frequently for an evolving or changing system, the usability of software maps is affected by non-stable layouts, that is, minor changes to the hierarchy may cause significant changes to the layout.
In contrast to regular Voronoi treemap algorithms, which do not provide deterministic layouts, the layout algorithm for Voronoi treemaps can be extended to provide a high degree of layout similarity for varying hierarchies. Similar approaches exist for the tree-map-based case.
History:
Software map methods and techniques belong to the scientific disciplines of software visualization and information visualization. They form a key concept and technique within the field of software diagnosis. They also have applications in software mining and software analytics. Software maps have been extensively developed and researched, e.g., at the Hasso Plattner Institute for IT systems engineering, in particular for large-scale, complex IT systems and applications.
**Predicate (grammar)**
Predicate (grammar):
A predicate is one of the two main parts of a sentence (the other being the subject, which the predicate modifies). For the simple sentence "John [is yellow]", John acts as the subject, and is yellow acts as the predicate, a subsequent description of the subject headed with a verb. In current linguistic semantics, a predicate is an expression that can be true of something. Thus, the expressions "is yellow" or "is like broccoli" are true of those things that are yellow or like broccoli, respectively. This notion is closely related to the notion of a predicate in formal logic, which includes more expressions than the former one.
Syntax:
Traditional grammar The notion of a predicate in traditional grammar traces back to Aristotelian logic. A predicate is seen as a property that a subject has or is characterized by. A predicate is therefore an expression that can be true of something. Thus, the expression "is moving" is true of anything that is moving. This classical understanding of predicates was adopted more or less directly into Latin and Greek grammars; and from there, it made its way into English grammars, where it is applied directly to the analysis of sentence structure. It is also the understanding of predicates as defined in English-language dictionaries. The predicate is one of the two main parts of a sentence (the other being the subject, which the predicate modifies). The predicate must contain a verb, and the verb requires or permits other elements to complete the predicate, or it precludes them from doing so. These elements are objects (direct, indirect, prepositional), predicatives, and adjuncts: She dances. — Verb-only predicate.
Ben reads the book. — Verb-plus-direct-object predicate.
Ben's mother, Felicity, gave me a present. — Verb-plus-indirect-object-plus-direct-object predicate.
She listened to the radio. — Verb-plus-prepositional-object predicate.
She is in the park. — Verb-plus-predicative-prepositional-phrase predicate.
She met him in the park. — Verb-plus-direct-object-plus-adjunct predicate. The predicate provides information about the subject, such as what the subject is, what the subject is doing, or what the subject is like. The relation between a subject and its predicate is sometimes called a nexus. A predicative nominal is a noun phrase, such as in the sentence George III is the king of England, where the phrase the king of England is the predicative nominal. In English, the subject and predicative nominal must be connected by a linking verb, also called a copula. A predicative adjective is an adjective, such as in Ivano is attractive, where attractive is the predicative adjective. The subject and predicative adjective must also be connected by a copula.
Modern theories of syntax Some theories of syntax adopt a subject-predicate distinction. For instance, a textbook phrase structure grammar typically divides an English declarative sentence (S) into a noun phrase (NP) and verb phrase (VP). The subject NP is shown in green, and the predicate VP in blue. Languages with more flexible word order (often called nonconfigurational languages) are often treated differently also in phrase structure approaches.
On the other hand, dependency grammar rejects the binary subject-predicate division and places the finite verb as the root of the sentence. The matrix predicate is marked in blue and its two arguments are in green. While the predicate cannot be construed as a constituent in the formal sense, it is a catena. Barring a discontinuity, predicates and their arguments are always catenae in dependency structures.
Some theories of grammar accept both a binary division of sentences into subject and predicate while also giving the head of the predicate a special status. In such contexts, the term predicator is used to refer to that head.
Non-subject predicands There are cases in which the semantic predicand has a syntactic function other than subject. This happens in raising constructions, such as the following: Here, you is the object of the verb phrase headed by make in the main clause. But it is also the predicand of the subordinate think clause, which has no subject. (pp. 329–335)
Semantic predication:
The term predicate is also used to refer to properties and to words or phrases which denote them. This usage of the term comes from the concept of a predicate in logic. In logic, predicates are symbols which are interpreted as relations or functions over arguments. In semantics, the denotations of some linguistic expressions are analyzed along similar lines. Expressions which denote predicates in the semantic sense are sometimes themselves referred to as "predicates".
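The logical notion just described can be sketched as Boolean-valued functions over arguments; the sets and names below are illustrative assumptions, not from the source:

```python
# Sketch: a logical predicate as a Boolean-valued function over arguments.
# "is yellow" is true of exactly those things that are yellow.

yellow_things = {"banana", "lemon"}

def is_yellow(x):
    """Unary predicate: true of yellow things."""
    return x in yellow_things

def likes(x, y):
    """Binary predicate: a relation over two arguments."""
    return (x, y) in {("John", "broccoli")}

print(is_yellow("lemon"))         # True
print(likes("John", "broccoli"))  # True
```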
Carlson classes The seminal work of Greg Carlson distinguishes between types of predicates. Based on Carlson's work, predicates have been divided into the following sub-classes, which roughly pertain to how a predicate relates to its subject.
Stage-level predicates A stage-level predicate is true of a temporal stage of its subject. For example, if John is "hungry", then he typically will eat some food. His state of being hungry therefore lasts a certain amount of time, and not his entire lifespan. Stage-level predicates can occur in a wide range of grammatical constructions and are probably the most versatile kind of predicate.
Individual-level predicates An individual-level predicate is true throughout the existence of an individual. For example, if John is "smart", this is a property that he has, regardless of which particular point in time we consider. Individual-level predicates are more restricted than stage-level ones. Individual-level predicates cannot occur in presentational "there" sentences (a star in front of a sentence indicates that it is odd or ill-formed): There are police available. — available is a stage-level predicate.
*There are firemen altruistic. — altruistic is an individual-level predicate. Stage-level predicates allow modification by manner adverbs and other adverbial modifiers. Individual-level predicates do not, e.g.
Tyrone spoke French loudly in the corridor. — speak French can be interpreted as a stage-level predicate.
*Tyrone knew French silently in the corridor. — know French cannot be interpreted as a stage-level predicate. When an individual-level predicate occurs in the past tense, it gives rise to what is called a lifetime effect: the subject must be assumed to be dead or otherwise out of existence.
John was available. — Stage-level predicate does NOT evoke the lifetime effect.
John was altruistic. — Individual-level predicate does evoke the lifetime effect.
Kind-level predicates A kind-level predicate is true of a kind of thing, but cannot be applied to individual members of the kind. An example of this is the predicate are widespread. One cannot meaningfully say of a particular individual John that he is widespread. One may only say this of kinds, as in Cats are widespread. Certain types of noun phrases cannot be the subject of a kind-level predicate. We have just seen that a proper name cannot be. Singular indefinite noun phrases are also banned from this environment: *A cat is widespread. — Compare: Nightmares are widespread.
Collective vs. distributive predicates Predicates may also be collective or distributive. Collective predicates require their subjects to be somehow plural, while distributive ones do not. An example of a collective predicate is "formed a line". This predicate can only stand in a nexus with a plural subject: The students formed a line. — Collective predicate appears with plural subject.
*The student formed a line. — Collective predicate cannot appear with singular subject. Other examples of collective predicates include meet in the woods, surround the house, gather in the hallway and carry the piano together. Note that the last one (carry the piano together) can be made non-collective by removing the word together. Quantifiers differ with respect to whether or not they can be the subject of a collective predicate. For example, quantifiers formed with all the can, while ones formed with every or each cannot.
All the students formed a line. — Collective predicate possible with all the.
All the students gathered in the hallway. — Collective predicate possible with all the.
All the students carried a piano together. — Collective predicate possible with all the.
*Every student formed a line. — Collective predicate impossible with every.
*Each student gathered in the hallway. — Collective predicate impossible with each.
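The collective/distributive contrast can be modeled with functions over sets of individuals; this sketch and its example sets are illustrative assumptions, not from the source:

```python
# Sketch: distributive vs. collective predicates over sets of individuals.
# A distributive predicate holds of a group iff it holds of each member;
# a collective predicate holds only of the group as a whole.

students = {"Ann", "Bo", "Cal"}
sleepers = {"Ann", "Bo", "Cal"}      # hypothetical extension of "slept"

def slept(x):
    """Distributive predicate: applies to individuals."""
    return x in sleepers

def holds_distributively(pred, group):
    """A distributive predicate is true of a group via each member."""
    return all(pred(x) for x in group)

def formed_a_line(group):
    """Collective predicate: requires a (somehow) plural subject."""
    return len(group) >= 2

print(holds_distributively(slept, students))  # True
print(formed_a_line(students))                # True
print(formed_a_line({"Ann"}))                 # False: singular subject
```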
**König's theorem (kinetics)**
König's theorem (kinetics):
In kinetics, König's theorem or König's decomposition is a mathematical relation derived by Johann Samuel König that assists with the calculations of angular momentum and kinetic energy of bodies and systems of particles.
For a system of particles:
The theorem is divided into two parts.
First part of König's theorem The first part expresses the angular momentum of a system as the sum of the angular momentum of the centre of mass and the angular momentum of the particles relative to the centre of mass:

$$\vec{L} = \vec{r}_{\mathrm{CoM}} \times \sum_i m_i \vec{v}_{\mathrm{CoM}} + \vec{L}' = \vec{L}_{\mathrm{CoM}} + \vec{L}'$$

Proof Considering an inertial reference frame with origin O, the angular momentum of the system can be defined as:

$$\vec{L} = \sum_i \left( \vec{r}_i \times m_i \vec{v}_i \right)$$

The position of a single particle can be expressed as:

$$\vec{r}_i = \vec{r}_{\mathrm{CoM}} + \vec{r}_i'$$

And so we can define the velocity of a single particle:

$$\vec{v}_i = \vec{v}_{\mathrm{CoM}} + \vec{v}_i'$$

The first equation becomes:

$$\vec{L} = \sum_i \left( \vec{r}_{\mathrm{CoM}} + \vec{r}_i' \right) \times m_i \left( \vec{v}_{\mathrm{CoM}} + \vec{v}_i' \right)$$

$$\vec{L} = \sum_i \vec{r}_i' \times m_i \vec{v}_i' + \left( \sum_i m_i \vec{r}_i' \right) \times \vec{v}_{\mathrm{CoM}} + \vec{r}_{\mathrm{CoM}} \times \sum_i m_i \vec{v}_i' + \sum_i \vec{r}_{\mathrm{CoM}} \times m_i \vec{v}_{\mathrm{CoM}}$$

But the following terms are equal to zero:

$$\sum_i m_i \vec{r}_i' = 0, \qquad \sum_i m_i \vec{v}_i' = 0$$

So we prove that:

$$\vec{L} = \sum_i \vec{r}_i' \times m_i \vec{v}_i' + M \, \vec{r}_{\mathrm{CoM}} \times \vec{v}_{\mathrm{CoM}}$$

where M is the total mass of the system.
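A quick numeric sanity check of the first part, on an arbitrary (hypothetical) two-particle system, confirms that the total angular momentum about the origin equals the centre-of-mass term plus the relative term:

```python
# Numeric check of the first part of König's theorem for an arbitrary
# two-particle system: L_total = L_CoM + L' (all values are hypothetical).
import numpy as np

m = np.array([1.0, 3.0])                           # masses
r = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])   # positions
v = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])   # velocities

M = m.sum()
r_com = (m[:, None] * r).sum(axis=0) / M           # centre-of-mass position
v_com = (m[:, None] * v).sum(axis=0) / M           # centre-of-mass velocity

L_total = sum(np.cross(r[i], m[i] * v[i]) for i in range(2))
L_prime = sum(np.cross(r[i] - r_com, m[i] * (v[i] - v_com)) for i in range(2))
L_com = np.cross(r_com, M * v_com)

assert np.allclose(L_total, L_com + L_prime)       # the decomposition holds
```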
Second part of König's theorem The second part expresses the kinetic energy of a system of particles in terms of the velocities of the individual particles and the centre of mass.
Specifically, it states that the kinetic energy of a system of particles is the sum of the kinetic energy associated to the movement of the center of mass and the kinetic energy associated to the movement of the particles relative to the center of mass.
Proof The total kinetic energy of the system is:

$$K = \sum_i \frac{1}{2} m_i v_i^2$$

As in the first part, we substitute the velocity $\vec{v}_i = \vec{v}_{\mathrm{CoM}} + \vec{v}_i'$:

$$K = \sum_i \frac{1}{2} m_i \left| \vec{v}_{\mathrm{CoM}} + \vec{v}_i' \right|^2 = \sum_i \frac{1}{2} m_i \left( v_{\mathrm{CoM}}^2 + 2 \, \vec{v}_{\mathrm{CoM}} \cdot \vec{v}_i' + v_i'^2 \right)$$

We know that $\vec{v}_{\mathrm{CoM}} \cdot \sum_i m_i \vec{v}_i' = 0$, so if we define:

$$K' = \sum_i \frac{1}{2} m_i v_i'^2$$

we're left with:

$$K = \frac{1}{2} M v_{\mathrm{CoM}}^2 + K'$$
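The second part can likewise be checked numerically on an arbitrary (hypothetical) two-particle system:

```python
# Numeric check of the second part of König's theorem: the total kinetic
# energy equals the CoM kinetic energy plus the kinetic energy relative
# to the CoM. Masses and velocities below are arbitrary.
import numpy as np

m = np.array([2.0, 5.0])                               # masses
v = np.array([[1.0, 2.0, 0.0], [-1.0, 0.5, 3.0]])      # velocities

M = m.sum()
v_com = (m[:, None] * v).sum(axis=0) / M               # centre-of-mass velocity

K_total = 0.5 * (m * (v ** 2).sum(axis=1)).sum()       # sum of (1/2) m_i v_i^2
K_prime = 0.5 * (m * ((v - v_com) ** 2).sum(axis=1)).sum()
K_com = 0.5 * M * (v_com ** 2).sum()

assert np.isclose(K_total, K_com + K_prime)            # the decomposition holds
```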
For a rigid body:
The theorem can also be applied to rigid bodies, stating that the kinetic energy K of a rigid body, as viewed by an observer fixed in some inertial reference frame N, can be written as:

$${}^N K = \frac{1}{2} m \, {}^N \bar{v} \cdot {}^N \bar{v} + \frac{1}{2} \, {}^N \bar{H} \cdot {}^N \vec{\omega}^R$$

where $m$ is the mass of the rigid body; ${}^N \bar{v}$ is the velocity of the center of mass of the rigid body, as viewed by an observer fixed in an inertial frame N; ${}^N \bar{H}$ is the angular momentum of the rigid body about the center of mass, also taken in the inertial frame N; and ${}^N \vec{\omega}^R$ is the angular velocity of the rigid body R relative to the inertial frame N.
**Class III PI 3-kinase**
Class III PI 3-kinase:
Class III PI 3-kinase is a subgroup of the phosphoinositide 3-kinase enzyme family that shares a common protein domain structure, substrate specificity, and method of activation.
There is only one known class III PI 3-kinase, Vps34, which is also the only PI 3-kinase expressed in all eukaryotic cells. In humans it is encoded by the PIK3C3 gene. In human cells Vps34 associates with a regulatory subunit, PIK3R4 (p150, Vps15).
Substrate specificity:
Vps34 is more accurately described as a phosphatidylinositol 3-kinase. In vivo Vps34 can phosphorylate only phosphatidylinositol to form phosphatidylinositol (3)-phosphate (PtdIns(3)P).
Functions:
Vps34 was first identified in a Saccharomyces cerevisiae (budding yeast) screen for proteins involved in vesicle-mediated vacuolar protein sorting (hence Vps). A number of proteins containing a phosphoinositide binding domain specific for PtdIns(3)P that function in cellular protein trafficking have been identified.
Vps34 has been shown to interact with Vps15 (PIK3R4, p150), a protein kinase. Vps15 can activate the lipid kinase activity of Vps34 and interact with Rab5, which has been hypothesized to recruit the Vps34/15 complex to early endosomes. Vps15 has a myristoylation tag that associates the complex with the membrane. The Vps34/15 complex also can interact with Rab7. Together, the complex can function at early to late endosomes.
Vps34 has a calmodulin binding domain, but its activity has been clearly shown to be calcium-independent in vitro and in vivo. The functional role of its interactions with calmodulin in vivo is not understood.
Vps34 activity is required for autophagy in yeast, and has been strongly implicated in this process in mammals. Vps34 has also been implicated in amino acid sensing. Vps34 is necessary for mTORC1 activity in response to amino acids in cultured cells, as siRNA knockdown completely inhibits mTORC1 signaling as determined by S6K phosphorylation. Sequestration of the Vps34 product by FYVE domain overexpression also disrupts mTOR signaling. However, genetic ablation of Vps34 in Drosophila does not affect dTORC1 signaling. Thus, the role of Vps34 in amino acid signaling to mTORC1 remains controversial.
One recent paper suggested that the human ortholog is regulated by intracellular calcium, but this was later shown to be due to a calcium-independent inhibition of Vps34 by EGTA, an effect not seen with other calcium chelators.
The mechanisms that regulate Vps34 activity in mammalian cells are not yet understood.
**Integrated Biological Detection System**
Integrated Biological Detection System:
The Integrated Biological Detection System is a system used by the British Army for detecting chemical, biological, radiological, and nuclear agents or elements.
The Integrated Biological Detection System can provide early warning of a chemical or biological warfare attack and is in service with the United Kingdom Joint NBC Regiment. It can be installed in a container which can be mounted on a vehicle or ground dumped. It is also able to be transported by either a fixed-wing aircraft or by helicopter.
The system comprises: a detection suite, including equipment for atmospheric sampling; a meteorological station and GPS; CBRN filtration and environmental control for use in all climates; chemical agent detection; an independent power supply; and cameras for 360-degree surveillance. A U.S. military system with a similar purpose and a similar name is the Biological Integrated Detection System (BIDS).
**Academic Free License**
Academic Free License:
The Academic Free License (AFL) is a permissive free software license written in 2002 by Lawrence E. Rosen, a former general counsel of the Open Source Initiative (OSI).
Academic Free License:
The license grants similar rights to the BSD, MIT, UoI/NCSA and Apache licenses – licenses allowing the software to be made proprietary – but was written to correct perceived problems with those licenses. The AFL: makes clear what software is being licensed by including a statement following the software's copyright notice; includes a complete copyright grant to the software; contains a complete patent grant to the software; makes clear that no trademark rights are granted to the licensor's trademarks; warrants that the licensor either owns the copyright or is distributing the software under a license; is itself copyrighted, with the right granted to copy and distribute without modification. The Free Software Foundation considers all AFL versions up to and including 3.0 as incompatible with the GNU GPL, though Eric S. Raymond (a co-founder of the OSI) contends that AFL 3.0 is GPL-compatible. In late 2002, an OSI working draft considered it a "best practice" license. In mid-2006, however, the OSI's License Proliferation Committee found it "redundant with more popular licenses", specifically version 2 of the Apache Software License.
**Launch commit criteria**
Launch commit criteria:
Launch commit criteria are the criteria which must be met in order for the countdown and launch of a Space Shuttle or other launch vehicle to continue. These criteria relate to safety issues and the general success of the launch, as opposed to supplemental data.
Atlas V:
Launch commit criteria for Atlas V launches are similar to those used for the Atlas V launch of the Mars Science Laboratory: wind at the launch pad exceeds 61 kilometres per hour (38 mph; 33 kn); ceiling less than 1,800 metres (6,000 ft) or visibility less than 6.4 kilometres (4 mi); upper-level conditions containing wind shear that could lead to control problems for the launch vehicle.
a cloud layer greater than 1,400 metres (4,500 ft) thick that extends into freezing temperatures; cumulus clouds with tops that extend into freezing temperatures within 5 to 10 miles (8.0 to 16.1 km); within 19 kilometres (10 nmi) of the edge of a thunderstorm that is producing lightning, until 30 minutes after the last lightning is observed.
field mill instrument readings within 9.3 kilometres (5 nmi) of the launch pad or the flight path exceed ±1,500 volts per meter, until 15 minutes after they occur; a thunderstorm anvil is within 19 kilometres (10 nmi) of the flight path; a thunderstorm debris cloud is within 5.6 kilometres (3 nmi), or the vehicle would fly through a debris cloud, for three hours. Do not launch through disturbed weather that has clouds that extend into freezing temperatures and contain moderate or greater precipitation, or within 9.3 kilometres (5 nmi) of such disturbed weather adjacent to the flight path. Do not launch through cumulus clouds formed as the result of, or directly attached to, a smoke plume.
Falcon 9:
NASA has identified that the Falcon 9 vehicle cannot be launched under the following conditions.
sustained wind at the 162-foot (49 m) level of the launch pad in excess of 30 knots (56 km/h; 35 mph); upper-level conditions containing wind shear that could lead to control problems for the launch vehicle; launch through a cloud layer greater than 4,500 feet (1,400 m) thick that extends into freezing temperatures; launch within 19 kilometres (10 nmi) of cumulus clouds with tops that extend into freezing temperatures; within 19 kilometres (10 nmi) of the edge of a thunderstorm that is producing lightning, until 30 minutes after the last lightning is observed; within 19 kilometres (10 nmi) of an attached thunderstorm anvil cloud; within 9.3 kilometres (5 nmi) of disturbed weather clouds that extend into freezing temperatures and contain moderate or greater precipitation; within 5.6 kilometres (3 nmi) of a thunderstorm debris cloud; through cumulus clouds formed as the result of, or directly attached to, a smoke plume. The following should delay launch: delay launch for 15 minutes if field mill instrument readings within 9.3 kilometres (5 nmi) of the launch pad exceed ±1,500 volts per meter, or ±1,000 volts per meter; delay launch for 30 minutes after lightning is observed within 10 nautical miles (19 km; 12 mi) of the launch pad or the flight path. Unique to Crew Dragon launches of the Falcon 9: delay if downrange weather has a high chance of violating, or is violating, splashdown limits (wind, wave, lightning, and precipitation limits) in case of a launch escape.
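A few of these thresholds can be encoded as a simple go/no-go check. The function name, parameters, and simplifications (e.g., dropping the freezing-temperature qualifier on the cloud-layer rule) are illustrative assumptions for this sketch, not an actual NASA or SpaceX interface:

```python
# Hedged sketch: a go/no-go check encoding a subset of the Falcon 9
# weather launch commit criteria described above. Parameter names and
# units are assumptions for illustration only.

def weather_go(sustained_wind_kn, cloud_layer_ft, nmi_to_thunderstorm,
               min_since_lightning):
    """Return True only if none of the encoded criteria are violated."""
    if sustained_wind_kn > 30:          # wind at the 162 ft pad level
        return False
    if cloud_layer_ft > 4500:           # thick cloud layer (simplified rule)
        return False
    # Within 10 nmi of a thunderstorm, wait 30 min after the last lightning.
    if nmi_to_thunderstorm < 10 and min_since_lightning < 30:
        return False
    return True

print(weather_go(20, 3000, 15, 60))  # True: all encoded criteria satisfied
print(weather_go(35, 3000, 15, 60))  # False: wind exceeds 30 knots
print(weather_go(20, 3000, 5, 10))   # False: too close to recent lightning
```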
Space Shuttle:
Weather The weather conditions NASA required during countdown and launch were specified for "prior to loading external tank propellant" and "after loading propellant has begun". Weather forecasts were provided by the 45th Weather Squadron at nearby Patrick Air Force Base with concerns such as thunderstorms, winds, low cloud ceilings, or anvil clouds noted in the report.
Prior to loading propellant Tanking was not to begin if the 24-hour average temperature had been below 41 °F (5 °C), the wind was observed or forecast to exceed 42 knots (78 km/h; 48 mph) for the next three-hour period, or there was a forecast to be greater than a 20% chance of lightning within five nautical miles of the launch pad during the first hour of tanking.
After propellant loading was underway After tanking began, the countdown must not be continued, nor the Shuttle launched, if any of the following weather criteria were exceeded: Temperature: Once propellant loading had begun, the countdown was to be stopped if the temperature remained above 99 °F (37 °C) for more than 30 consecutive minutes. The minimum temperature at which the countdown could proceed was determined by a table of temperatures indexed by wind speed and relative humidity, ranging from 36 °F (2 °C) (high humidity, high winds) to 48 °F (9 °C) (low humidity, low winds). In no case was the Space Shuttle to be launched if the temperature was 35 °F (2 °C) or colder.
Wind: For launch, the wind constraints at the launch pad varied slightly for each mission. The peak wind speed allowable was 34 knots (63 km/h; 39 mph). However, when the wind direction was between 100 degrees and 260 degrees, the peak speed limit varied and could be as low as 20 knots (37 km/h; 23 mph).
Precipitation: None was allowed to exist at the launch pad or within the flight path.
**Neighborhood Texture Jam**
Neighborhood Texture Jam:
Neighborhood Texture Jam is a Memphis, Tennessee rock band who fuse elements of punk, industrial and funk into a heavy, rhythmic rock sound. The band is notable for a member responsible for providing the "texture": an ever-changing assembly of 55-gallon oil drums, hub caps, corrugated sheet metal, household appliances and other found objects that serve as auxiliary percussion.
The band was originally formed in 1984 at Rhodes College by Joe Lapsley and Ed Scott. The first incarnation of the band lasted three gigs before breaking up. Reforming in 1988, the band signed with Feralette Records and achieved national recognition with their debut album, Funeral Mountain, released in 1990.
Following a line-up change in which Tom Murphy was replaced by John Whittemore, the band switched labels to Ardent Records and released 1993's Don't Bury Me In Haiti.
1994 saw another change in labels when Ardent Records closed its alternative mainstream division to concentrate on Christian music. The band landed on Snerd Records and released the 7" single, Rush Limbaugh-Evil Blimp/Awesome in 1994 and the full-length album, Total Social Negation in 1996.
After a long recording hiatus, the band premiered its long-rumored rock opera, Frank Rizzo at Colonus in 2003, following up with a repeat performance in 2006. 2006 also saw the release of a pair of B-Side albums, They Buried Me In Memphis, Vols. 1 and 2 on Snerd Records.
Current Band Members:
Joe Lapsley - lead vocals
Tee Cloar - guitar
Steven Conn - bass guitar
Paul Buchignani - drums
Greg Easterly - texture
John Whittemore - guitar
Former Members: Ed Scott - guitar; Sloan Wilson - texture; Carter Green - texture; Tony Pantuso - drums; Tom Murphy - guitar; Mark Harrison - guitar; Fletcher Ward - guitar
Former Auxiliary Members: Derek Van Lynn - saxophone; Montie Davis - guitar; Sam Nowlin - texture
Album Discography:
Funeral Mountain (1990)
Don't Bury Me In Haiti (1993)
Total Social Negation (1996)
They Buried Me In Memphis, Vol. 1 (2006)
They Buried Me In Memphis, Vol. 2 (2006)
Single Discography: McThorazine/Waiting In Sverdlovsk (1992); Rush Limbaugh-Evil Blimp/Awesome (1994)
**Vascular endothelial growth factor A**
Vascular endothelial growth factor A:
Vascular endothelial growth factor A (VEGF-A) is a protein that in humans is encoded by the VEGFA gene.
Function:
This gene is a member of the platelet-derived growth factor (PDGF)/vascular endothelial growth factor (VEGF) family and encodes a protein that is often found as a disulfide-linked homodimer. This protein is a glycosylated mitogen that specifically acts on endothelial cells and has various effects, including mediating increased vascular permeability, inducing angiogenesis, vasculogenesis and endothelial cell growth, promoting cell migration, and inhibiting apoptosis. Alternatively spliced transcript variants, encoding either freely secreted or cell-associated isoforms, have been characterized. VEGF-A shows prominent activity with vascular endothelial cells, primarily through its interactions with the VEGFR1 and VEGFR2 receptors, found prominently on the endothelial cell membrane, although it does have effects on a number of other cell types (e.g., stimulation of monocyte/macrophage migration, neurons, cancer cells, kidney epithelial cells). In vitro, VEGF-A has been shown to stimulate endothelial cell mitogenesis and cell migration. VEGF-A is also a vasodilator and increases microvascular permeability; it was originally referred to as vascular permeability factor.
Function:
During embryonic development, angiogenesis is initiated as mesoderm mesenchyme cells are specified to differentiate into angioblasts expressing the vascular endothelial growth factor receptor VEGFR-2. As embryonic tissue utilizes more oxygen than it receives from diffusion, it becomes hypoxic. These cells secrete the signaling molecule vascular endothelial growth factor A (VEGFA), which recruits the angioblasts expressing its partnering receptor to the site of future angiogenesis. The angioblasts create scaffolding structures that form the primary capillary plexus from which the local vasculature will develop. Disruption of this gene in mice resulted in abnormal embryonic blood vessel formation and underdeveloped vascular structures. This gene is also upregulated in many tumors; its expression is correlated with tumor development, and it is a target of many developing cancer therapeutics. Elevated levels of this protein are found in patients with POEMS syndrome, also known as Crow-Fukase syndrome, a hemangioblastic proliferative disorder. Allelic variants of this gene have been associated with microvascular complications of diabetes 1 and atherosclerosis.
VEGF-A Overview:
Vascular endothelial growth factor A (VEGF-A) is a dimeric glycoprotein that plays a significant role in neurons and is considered the main, dominant inducer of blood vessel growth. VEGFA is essential in adults during organ remodeling and in diseases that involve blood vessels, for example, in wound healing, tumor angiogenesis, diabetic retinopathy, and age-related macular degeneration. During early vertebrate development, vasculogenesis occurs, meaning that endothelial cells condense into blood vessels. The differentiation of endothelial cells is dependent upon the expression of VEGFA, and abolishing this expression can result in the death of the embryo. VEGFA is produced as three major isoforms as a result of alternative splicing; production of any of the three isoforms (VEGFA120, VEGFA164, and VEGFA188) avoids the vessel defects and death seen in full VEGFA-knockout mice. VEGFA is also essential to neurons, because they too need vascular supply, and abolishing the expression of VEGFA from neural progenitors results in defects of brain vascularization and neuronal apoptosis. Anti-VEGFA therapy can be used to treat patients with undesirable angiogenesis and vascular leakage in cancer and eye diseases, but could also result in the inhibition of neurogenesis and neuroprotection. Conversely, VEGFA could be used to treat patients with neurodegenerative and neuropathic conditions, but may also increase vascular permeability, which can compromise the blood-brain barrier and increase inflammatory cell infiltration.
Usage:
Angiogenesis: ↑ migration of endothelial cells; ↑ mitosis of endothelial cells; ↑ matrix metalloproteinase activity; ↑ αvβ3 activity; creation of blood vessel lumen; creation of fenestrations. Chemotactic for macrophages and granulocytes. Vasodilation (indirectly by NO release). Also tumour suppression.
Clinical significance:
Elevated levels of this protein are linked to POEMS syndrome, also known as Crow-Fukase syndrome. Mutations in this gene have been associated with proliferative and nonproliferative diabetic retinopathy.
Treatment of ischemic heart disease In ischemic cardiomyopathy, blood flow to the muscle cells of the heart is either partially or completely reduced, leading to cell death and scar tissue formation. Because the muscle cells are replaced with fibrous tissue, the heart loses its ability to contract, compromising heart function. Normally, if blood flow to the heart is compromised, over time, new blood vessels will develop, providing alternative circulation to the affected cells. The viability of the heart following severely restricted blood flow is dependent on the ability of the heart to provide this collateral circulation. Expression of VEGF-A has been found to be induced by myocardial ischemia and a higher level of expression of VEGF-A has been associated with better collateral circulation development during ischemia.
VEGF-A activation When cells are deprived of oxygen, they increase their production of VEGF-A. VEGF-A mediates the growth of new blood vessels from pre-existing vessels (angiogenesis) by binding to the cell surface receptors VEGFR1 and VEGFR2, two tyrosine kinases located in endothelial cells of the cardiovascular system. These two receptors act through different pathways to contribute to endothelial cell proliferation and migration, and formation of tubular structures.
VEGFR2 The binding of VEGF-A to VEGFR2 causes two VEGFR2 molecules to combine to form a dimer. Following this dimerization, through the action of the receptor itself, a phosphate group is added to certain tyrosines within the molecule in a process called auto-phosphorylation. The autophosphorylation of these amino acids allows signalling molecules within the cell to bind to the receptor and become activated. These signalling molecules include VEGF-receptor activated protein (VRAP), PLC-γ and Nck. Each of these is important in the signalling required for angiogenesis. VRAP (also known as T-cell specific adaptor) and Nck signalling are important in reorganization of the structural components of the cell, allowing cells to move around to areas where they are needed. PLC-γ is vital to the proliferative effects of VEGF-A signalling. Activation of the phospholipase PLC-γ results in an increase in calcium levels in the cell, leading to the activation of protein kinase C (PKC). PKC phosphorylates the mitogen-activated protein kinase (MAPK) ERK, which then moves to the nucleus of the cell and takes part in nuclear signalling. Once in the nucleus, ERK activates various transcription factors which initiate expression of genes involved in cellular proliferation. Activation of a different MAPK (p38 MAPK) by VEGFR2 is important in the transcription of genes associated with cellular migration.
Clinical significance:
VEGFR1 The tyrosine kinase activity of VEGFR1 is less efficient than that of VEGFR2, and its activation alone is insufficient to bring about the proliferative effects of VEGF-A. The major role of VEGFR1 is to recruit the cells responsible for blood cell development.
Clinical significance:
Current research It has been shown that injection of VEGF-A in dogs following severely restricted blood flow to the heart caused an increase in collateral blood vessel formation compared to the dogs who did not receive the VEGF-A treatment. It was also shown in dogs that delivery of VEGF-A to areas of the heart with little or no blood flow enhanced collateral blood vessel formation and increased the viability of the cells in that area. In gene therapy, DNA which encodes the gene of interest is integrated into a vector along with elements that are able to promote the gene's expression. The vector is then injected either into muscle cells of the heart or the blood vessels supplying the heart. The natural machinery of the cell is then used to express these genes. Currently, human clinical trials are being conducted to study the effectiveness of gene therapy with VEGF-A in restoring blood flow and function to areas of the heart that have severely restricted blood flow. So far, this type of therapy has proven both safe and beneficial.
Interactions:
Vascular endothelial growth factor A has been shown to interact with: ADAMTS1, CTGF, and NRP1.
**Tonne**
Tonne:
The tonne (symbol: t) is a unit of mass equal to 1000 kilograms. It is a non-SI unit accepted for use with SI. It is also referred to as a metric ton to distinguish it from the non-metric units of the short ton (United States customary units) and the long ton (British imperial units). It is equivalent to approximately 2204.6 pounds, 1.102 short tons, and 0.984 long tons. The official SI unit is the megagram (symbol: Mg), a less common way to express the same mass.
Symbol and abbreviations:
The BIPM symbol for the tonne is t, adopted at the same time as the unit in 1879. Its use is also official for the metric ton in the United States, having been adopted by the United States National Institute of Standards and Technology (NIST). It is a symbol, not an abbreviation, and should not be followed by a period. Use of minuscule letter case is significant, and use of other letter combinations can lead to ambiguity. For example, T, MT, mT, Mt and mt are the SI symbols for the tesla, megatesla, millitesla, megatonne (one teragram), and millitonne (one kilogram), respectively. If describing TNT equivalent units of energy, one megatonne of TNT is equivalent to approximately 4.184 petajoules.
Origin and spelling:
In English, tonne is an established spelling alternative to metric ton. In the United States and United Kingdom, tonne is usually pronounced the same as ton, but the final "e" can also be pronounced ("tunnie"). In the United States, metric ton is the name for this unit used and recommended by NIST; an unqualified mention of a ton almost invariably refers to a short ton of 2000 pounds (907 kg), and to a lesser extent to a long ton of 2240 pounds (1020 kg); the tonne is rarely used in speech or writing. Both terms are acceptable in Canadian usage.
Origin and spelling:
Ton and tonne are both derived from a Germanic word in general use in the North Sea area since the Middle Ages (cf. Old English and Old Frisian tunne, Old High German and Medieval Latin tunna, German and French tonne) to designate a large cask, or tun. A full tun, standing about a metre high, could easily weigh a tonne.
Origin and spelling:
The spelling tonne pre-dates the introduction of the SI in 1960; it has been used with this meaning in France since 1842, when there were no metric prefixes for multiples of 10^6 and above, and is now used as the standard spelling for the metric mass measurement in most English-speaking countries. In the United States, the unit was originally referred to using the French words millier or tonneau, but these terms are now obsolete. The Imperial and US customary units comparable to the tonne are both spelled ton in English, though they differ in mass.
Conversions:
One tonne is equivalent to:
In kilograms: 1000 kilograms (kg) by definition.
In grams: 1000000 grams (g) or 1 megagram (Mg). Megagram is the corresponding official SI unit with the same mass. Mg is distinct from mg, milligram.
In pounds: Exactly 1000/0.453 592 37 pounds (lb) by definition of the pound, or approximately 2204.622622 lb.
In short tons: Exactly 1/0.907 184 74 short tons (ST), or approximately 1.102311311 ST.
One short ton is exactly 0.90718474 t.
In long tons: Exactly 1/1.016 046 9088 long tons (LT), or approximately 0.9842065276 LT.
One long ton is exactly 1.0160469088 t. A tonne is approximately the mass of one cubic metre of pure water: at 4 °C, one thousand litres of pure water has a mass of almost exactly one tonne.
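The exact conversions above can be checked mechanically. A small illustrative sketch, using only the exact definition of the pound (1 lb = 0.45359237 kg):

```python
# Conversions between the tonne and related units, derived from the
# exact definition of the pound: 1 lb = 0.45359237 kg.
KG_PER_TONNE = 1000
KG_PER_POUND = 0.45359237   # exact, by international agreement
LB_PER_SHORT_TON = 2000
LB_PER_LONG_TON = 2240

def tonnes_to_pounds(t):
    return t * KG_PER_TONNE / KG_PER_POUND

def tonnes_to_short_tons(t):
    return tonnes_to_pounds(t) / LB_PER_SHORT_TON

def tonnes_to_long_tons(t):
    return tonnes_to_pounds(t) / LB_PER_LONG_TON

# 1 t ≈ 2204.622622 lb ≈ 1.102311 short tons ≈ 0.984207 long tons
```

The values agree with the rounded figures quoted above because every step is a product of exact defined constants.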
Derived units:
As a non-SI unit, the use of SI metric prefixes with the tonne does not fall within the SI standard. For multiples of the tonne, it is more usual to speak of thousands or millions of tonnes. Kilotonne, megatonne, and gigatonne are more usually used for the energy of nuclear explosions and other events in equivalent mass of TNT, often loosely as approximate figures. When used in this context, there is little need to distinguish between metric and other tons, and the unit is spelled either as ton or tonne with the relevant prefix attached.
Derived units:
*The equivalent units columns use the short scale large-number naming system currently used in most English-language countries, e.g. 1 billion = 1000 million = 1000000000.
†Values in the equivalent short and long tons columns are rounded to five significant figures. See Conversions for exact values.
ǂThough non-standard, the symbol "kt" is also used (instead of the standard symbol "kn") for knot, a unit of speed for aircraft and sea-going vessels, and should not be confused with kilotonne.
Alternative usages:
Metric ton units A metric ton unit (mtu) can mean 10 kg (approximately 22 lb) within metal trading, particularly within the United States. It traditionally referred to a metric ton of ore containing 1% (i.e. 10 kg) of metal.
The following excerpt from a mining geology textbook describes its usage in the particular case of tungsten: Tungsten concentrates are usually traded in metric tonne units (originally designating one tonne of ore containing 1% of WO3; today used to measure WO3 quantities in 10 kg units). One metric tonne unit (mtu) of tungsten(VI) oxide contains 7.93 kilograms of tungsten.
In the case of uranium, MTU is sometimes used in the sense of metric ton of uranium (1,000 kg).
Alternative usages:
Use of mass as proxy for energy The tonne of trinitrotoluene (TNT) is used as a proxy for energy, usually of explosions (TNT is a common high explosive). Prefixes are used: kiloton(ne), megaton(ne), gigaton(ne), especially for expressing nuclear weapon yield, based on a specific combustion energy of TNT of about 4.2 MJ/kg (or one thermochemical calorie per milligram). Hence, 1 t TNT = approx. 4.2 GJ, 1 kt TNT = approx. 4.2 TJ, 1 Mt TNT = approx. 4.2 PJ.
Alternative usages:
The SI unit of energy is the joule. One tonne of TNT is approximately equivalent to 4.2 gigajoules.
In the petroleum industry the tonne of oil equivalent (toe) is a unit of energy: the amount of energy released by burning one tonne of crude oil, approximately 42 GJ. There are several slightly different definitions. This is roughly ten times as much as a tonne of TNT, partly because the combustion of oil draws on atmospheric oxygen, whereas an explosive must carry its own oxidant.
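The two mass-based energy proxies can be compared directly. A rough sketch using the approximate values given above (4.184 GJ per tonne of TNT, 42 GJ per toe):

```python
# Energy proxies based on mass: tonne of TNT vs tonne of oil equivalent.
GJ_PER_TONNE_TNT = 4.184   # one thermochemical calorie per milligram
GJ_PER_TOE = 42.0          # approximate; several definitions exist

def tnt_tonnes_to_gigajoules(tonnes):
    return tonnes * GJ_PER_TONNE_TNT

# Prefix scaling: 1 Mt TNT = 1e6 t TNT, and 1 PJ = 1e6 GJ,
# so one megatonne of TNT is about 4.184 PJ.
megatonne_tnt_in_pj = tnt_tonnes_to_gigajoules(1e6) / 1e6

# A tonne of oil releases roughly ten times the energy of a tonne of TNT.
toe_to_tnt_ratio = GJ_PER_TOE / GJ_PER_TONNE_TNT
```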
Alternative usages:
Unit of force Like the gram and the kilogram, the tonne gave rise to a (now obsolete) force unit of the same name, the tonne-force, equivalent to about 9.8 kilonewtons. The unit is also often called simply "tonne" or "metric ton" without identifying it as a unit of force. In contrast to the tonne as a mass unit, the tonne-force is not accepted for use with SI.
**Clinical endpoint**
Clinical endpoint:
Clinical endpoints or clinical outcomes are outcome measures referring to the occurrence of a disease, symptom, sign, or laboratory abnormality that constitutes a target outcome in clinical research trials. The term may also refer to any disease or sign that strongly motivates withdrawal of an individual or entity from the trial, then often termed a humane (clinical) endpoint.
The primary endpoint of a clinical trial is the endpoint for which the trial is powered. Secondary endpoints are additional endpoints, preferably also pre-specified, for which the trial may not be powered.
Surrogate endpoints are trial endpoints that have outcomes that substitute for a clinical endpoint, often because studying the clinical endpoint is difficult, for example using an increase in blood pressure as a surrogate for death by cardiovascular disease, where strong evidence of a causal link exists.
Scope:
In a general sense, a clinical endpoint is included in the entities of interest in a trial. The results of a clinical trial generally indicate the number of people enrolled who reached the pre-determined clinical endpoint during the study interval compared with the overall number of people who were enrolled. Once a patient reaches the endpoint, he or she is generally excluded from further experimental intervention (the origin of the term endpoint).
Scope:
For example, a clinical trial investigating the ability of a medication to prevent heart attack might use chest pain as a clinical endpoint. Any patient enrolled in the trial who develops chest pain over the course of the trial, then, would be counted as having reached that clinical endpoint. The results would ultimately reflect the fraction of patients who reached the endpoint of having developed chest pain, compared with the overall number of people enrolled.
Scope:
When an experiment involves a control group, the proportion of individuals who reach the clinical endpoint after an intervention is compared with the proportion of individuals in the control group who reached the same clinical endpoint, reflecting the ability of the intervention to prevent the endpoint in question.
Scope:
A clinical trial will usually define or specify a primary endpoint as a measure that will be considered success of the therapy being trialled (e.g. in justifying a marketing approval). The primary endpoint might be a statistically significant improvement in overall survival (OS). A trial might also define one or more secondary endpoints such as progression-free-survival (PFS) that will be measured and are expected to be met. A trial might also define exploratory endpoints that are less likely to be met.
Examples:
Clinical endpoints can be obtained from different modalities, such as behavioural or cognitive scores, or biomarkers from quantitative electroencephalography (qEEG), MRI, PET, or biochemical assays.
Examples:
In clinical cancer research, common endpoints include discovery of local recurrence, discovery of regional metastasis, discovery of distant metastasis, onset of symptoms, hospitalization, increase or decrease in pain medication requirement, onset of toxicity, requirement of salvage chemotherapy, requirement of salvage surgery, requirement of salvage radiotherapy, death from any cause or death from disease. A cancer study may be powered for overall survival, usually indicating time until death from any cause, or disease specific survival, where the endpoint is death from disease or death from toxicity.
Examples:
These are expressed as a period of time (survival duration), e.g. in months. Frequently the median is used, so that the trial endpoint can be calculated once 50% of subjects have reached the endpoint, whereas calculation of an arithmetical mean can only be done after all subjects have reached the endpoint.
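The practical advantage of the median can be illustrated with a toy example (all numbers hypothetical): with some subjects still event-free, the median event time may already be determined while the mean cannot be.

```python
# Hypothetical trial of 7 subjects. Four have reached the endpoint
# (times in months, sorted); three are censored, still event-free at
# 20 months. The median of 7 values is the 4th smallest; since all
# censored times exceed 14, the median survival is already fixed at 14,
# while the arithmetic mean cannot be computed until every subject has
# an observed event time.
event_times = [5, 8, 11, 14]   # observed endpoint times (months)
censored_at = [20, 20, 20]     # still event-free at last follow-up

n = len(event_times) + len(censored_at)        # 7 subjects in total
assert len(event_times) > n // 2               # over half have events
median_survival = sorted(event_times)[n // 2]  # 4th smallest of 7
```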
Examples:
Disease free survival Disease-free survival is usually used to analyze the results of treatment for localized disease which renders the patient apparently disease-free, such as surgery or surgery plus adjuvant therapy. In disease-free survival, the event is relapse rather than death. The people who relapse are still surviving but they are no longer disease-free. Just as in survival curves not all patients die, in disease-free survival curves not all patients relapse, and the curve may have a final plateau representing the patients who didn't relapse after the study's maximum follow-up. Because patients survive for at least some time after relapse, the curve for actual survival would look better than the disease-free survival curve.
Examples:
Progression free survival The Progression Free Survival is usually used in analysing the results of the treatment for the advanced disease. The event for the progression free survival is that the disease gets worse or progresses, or the patient dies from any cause. Time to Progression is a similar endpoint that ignores patients who die before the disease progresses.
Response duration The response duration is occasionally used to analyze the results of the treatment for the advanced disease. The event is progression of the disease (relapse). This endpoint involves selecting a subgroup of the patients. It measures the length of the response in those patients who responded. The patients who don't respond aren't included.
Overall survival Overall survival is based on death from any cause, not just the condition being treated, thus it picks up death from side effects of the treatment, and effects on survival after relapse.
Examples:
Toxic Death Rate Unlike overall survival, which is based on death from any cause or the condition being treated, the toxic death rate picks up just the deaths that are directly attributable to the treatment itself. These rates are generally low to zero, as clinical trials are typically halted when toxic deaths occur. Even with chemotherapy the overall rate is typically under one percent. However, the lack of systematic autopsies limits our understanding of deaths due to treatments.
Examples:
Percent serious adverse events The percentage of treated patients experiencing one or more serious adverse events. Serious adverse events are defined by the US Food and Drug Administration as "Any AE occurring at any dose that results in any of the following outcomes: Death Life-threatening adverse drug experience Inpatient hospitalization or prolongation of existing hospitalization Persistent or significant incapacity or substantial disruption of the ability to conduct normal life functions Congenital anomaly/birth defect Important medical events (IME) that may not result in death, be life-threatening, or require hospitalization may be considered serious when, based upon appropriate medical judgment, they may jeopardize the patient or subject and may require medical or surgical intervention to prevent one of the outcomes listed in this definition."
Humane endpoint:
A humane endpoint can be defined as the point at which pain and/or distress is terminated, minimized or reduced for an entity in a trial (such as an experimental animal), by taking action such as killing the animal humanely, terminating a painful procedure, or giving treatment to relieve pain and/or distress. An individual in a trial having reached a humane endpoint may necessitate withdrawal from the trial before the target outcome of interest has been fully reached.
Surrogate endpoint:
A surrogate endpoint (or marker) is a measure of effect of a specific treatment that may correlate with a real clinical endpoint but doesn't necessarily have a guaranteed relationship. The National Institutes of Health (USA) define surrogate endpoint as "a biomarker intended to substitute for a clinical endpoint".
Combined endpoint:
Some studies will examine the incidence of a combined endpoint, which can merge a variety of outcomes into one group. For example, the heart attack study above may report the incidence of the combined endpoint of chest pain, myocardial infarction, or death. An example of a cancer study powered for a combined endpoint is disease-free survival (DFS); trial participants experiencing either death or discovery of any recurrence would constitute the endpoint. Overall Treatment Utility is an example of a multidimensional composite endpoint in cancer clinical trials. Regarding humane endpoints, a combined endpoint may constitute a threshold where there is enough cumulative degree of disease, symptoms, signs or laboratory abnormalities to motivate an intervention.
Response rates:
The response rate is the percentage of patients on whom a therapy has some defined effect; for example, the cancer shrinks or disappears after treatment. When used as a clinical endpoint for trials of cancer treatments, this is often called the objective response rate (ORR). The FDA definition of ORR in this context is "the proportion of patients with tumor size reduction of a predefined amount and for a minimum time period." Each trial, for whatever illness or condition, may define what is considered a complete response (CR) or partial response (PR) to the therapy or intervention. Hence the trials report the complete response rate and the overall response rate which includes CR and PR. (See e.g. Response evaluation criteria in solid tumors, Small-cell carcinoma treatment, and, for immunotherapies, Immune-related response criteria.)
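The bookkeeping behind these rates can be sketched in a few lines (the response labels and counts below are hypothetical; real trials apply formal criteria such as RECIST to assign them):

```python
# Tallying response rates: CR = complete response, PR = partial response,
# SD = stable disease, PD = progressive disease (labels illustrative).
responses = ["CR", "PR", "SD", "PD", "PR", "CR", "SD", "PD", "PD", "PR"]

# Complete response rate: CRs over all treated patients.
complete_response_rate = responses.count("CR") / len(responses)

# Objective (overall) response rate: CRs plus PRs over all treated patients.
objective_response_rate = sum(
    r in ("CR", "PR") for r in responses) / len(responses)
```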
Consistency:
Various studies on a particular topic often do not address the same outcomes, making it difficult to draw clinically useful conclusions when a group of studies is looked at as a whole. The Core Outcomes in Women's Health (CROWN) Initiative is one effort to standardize outcomes.
**Accidental gap**
Accidental gap:
In linguistics an accidental gap, also known as a gap, paradigm gap, accidental lexical gap, lexical gap, lacuna, or hole in the pattern, is a potential word, word sense, morpheme, or other form that does not exist in some language despite being theoretically permissible by the grammatical rules of that language. For example, a word pronounced /zeɪ̯k/ is theoretically possible in English, as it would obey English word-formation rules, but does not currently exist. Its absence is therefore an accidental gap, in the ontologic sense of the word accidental (that is, circumstantial rather than essential).
Accidental gap:
Accidental gaps differ from systematic gaps, those words or other forms which do not exist in a language due to the boundaries set by phonological, morphological, and other rules of that specific language. In English, a word pronounced /pfnk/ does not and cannot exist because it has no vowels and therefore does not obey the word-formation rules of English. This is a systematic, rather than accidental, gap. Various types of accidental gaps exist. Phonological gaps are either words allowed by the phonological system of a language which do not actually exist, or sound contrasts missing from one paradigm of the phonological system itself. Morphological gaps are nonexistent words or word senses potentially allowed by the morphological system. A semantic gap refers to the nonexistence of a word or word sense to describe a difference in meaning seen in other sets of words within the language.
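The distinction can be caricatured in code. A deliberately crude sketch (the vowel test and tiny lexicon are illustrative stand-ins, not real English phonotactics):

```python
# Toy classifier separating attested words, accidental gaps (well-formed
# but unused), and systematic gaps (violating a phonotactic constraint,
# here reduced to "must contain a vowel-like nucleus").
NUCLEI = set("aeiouɪɛæɑɔʊʌə")   # crude stand-in for English vowels
LEXICON = {"spread", "spring", "sick", "flicker"}

def classify(form):
    if form in LEXICON:
        return "attested"
    if not any(seg in NUCLEI for seg in form):
        return "systematic gap"   # e.g. /pfnk/: no syllable nucleus
    return "accidental gap"       # e.g. /sprɪk/: permissible but unused
```

A real phonotactic model would need syllable structure and consonant-cluster constraints, but the three-way split mirrors the article's distinction.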
Phonological gaps:
Often words that are allowed in the phonological system of a language are absent. For example, in English the consonant cluster /spr/ is allowed at the beginning of words such as spread or spring and the syllable rime /ɪk/ occurs in words such as sick or flicker. Even so, there is no English word pronounced */sprɪk/. Although this potential word is phonologically well-formed according to English phonotactics, it happens to not exist. The term "phonological gap" is also used to refer to the absence of a phonemic contrast in part of the phonological system. For example, Thai has several sets of stop consonants that differ in terms of voicing (whether or not the vocal cords vibrate) and aspiration (whether a puff of air is released). Yet the language has no voiced velar consonant (/ɡ/). This lack of an expected distinction is commonly called a "hole in the pattern".
Morphological gaps:
A morphological gap is the absence of a word that could exist given the morphological rules of a language, including its affixes. For example, in English a deverbal noun can be formed by adding either the suffix -al or -(t)ion to certain verbs (typically words from Latin through Anglo-Norman French or Old French). Some verbs, such as recite have two related nouns, recital and recitation. However, in many cases there is only one such noun, as illustrated in the chart below. Although in principle the morphological rules of English allow for other nouns, those words do not exist.
Morphological gaps:
Many potential words that could be made following morphological rules of a language do not enter the lexicon. Blocking, including homonymy blocking and synonymy blocking, stops some potential words. A homonym of an existing word may be blocked. For example, the word liver meaning "someone who lives" is only rarely used because the word liver (an internal organ) already exists. Likewise, a potential word can be blocked if it is a synonym of an existing word. An older, more common word blocks a potential synonym, known as token-blocking. For example, the word stealer ("someone who steals") is also rarely used, because the word thief already exists. Not only individual words, but entire word formation processes may be blocked. For example, the suffix -ness is used to form nouns from adjectives. This productive word-formation pattern blocks many potential nouns that could be formed with -ity. Nouns such as *calmity (a potential synonym of calmness) and *darkity (cf. darkness) are unused potential words. This is known as type-blocking. A defective verb is a verb that lacks some grammatical conjugation. For example, several verbs in Russian do not have a first-person singular form in non-past tense. Although most verbs have such a form (e.g. vožu "I lead"), about 100 verbs in the second conjugation pattern (e.g. *derz'u "I talk rudely"; the asterisk indicates ungrammaticality) do not appear as first-person singular in the present-future tense. Morris Halle called this defective verb paradigm an example of an accidental gap.
Morphological gaps:
The similar case of unpaired words occurs where one word is obsolete or rare while another word derived from it is more common. Examples include *effable and ineffable or *kempt and unkempt.
Semantic gaps:
A gap in semantics occurs when a particular meaning distinction visible elsewhere in the lexicon is absent. For example, English words describing family members generally show gender distinction. Yet the English word cousin can refer to either a male or female cousin. Similarly, while there are general terms for siblings and parents, there is no comparable common gender-neutral term for a parent's sibling, and traditionally none for a sibling's child. The separate words predicted on the basis of this semantic contrast are absent from the language, or at least from many speakers' dialects. It is possible to coin new ones (as happened with the word nibling), but whether those words gain widespread acceptance in general use, or remain neologistic and resisted outside particular registers, is a matter of prevailing usage in each era.
**Lunascape**
Lunascape:
Lunascape is a web browser developed by Lunascape Corporation in Tokyo, Japan. It is unusual in that it contains three rendering engines: Gecko (used in Mozilla Firefox), WebKit (used in Apple's Safari), and Trident (used in Microsoft Internet Explorer); among other browsers, only Avant offers a comparable choice of engines. The user can switch between layout engines seamlessly.
Lunascape is available for Windows and Android platforms, as well as for iPad and iPhone.
History:
Lunascape was released in October 2001 while the founders were in college. As the browser became popular, Hidekazu Kondo established the Lunascape Corporation in August 2004 while pursuing a PhD, and became the company's CEO. Additionally, Lunascape was selected as an "Exploratory Software Project" commissioned by the Japanese government. The company branched out to the United States and as of June 2008 is based in Sunnyvale, California. Lunascape introduced its browser internationally in December 2008.
History:
In late 2019, Lunascape was acquired by G.U.Labs, with Hidekazu Kondo remaining as a representative director.
**Dyskerin**
Dyskerin:
H/ACA ribonucleoprotein complex subunit 4 is a protein that in humans is encoded by the gene DKC1. This gene is a member of the H/ACA snoRNPs (small nucleolar ribonucleoproteins) gene family. snoRNPs are involved in various aspects of rRNA processing and modification and have been classified into two families: C/D and H/ACA. The H/ACA snoRNPs also include the NOLA1, 2 and 3 proteins. The protein encoded by this gene and the three NOLA proteins localize to the dense fibrillar components of nucleoli and to coiled (Cajal) bodies in the nucleus. Both 18S rRNA production and rRNA pseudouridylation are impaired if any one of the four proteins is depleted. The protein encoded by this gene is related to the Saccharomyces cerevisiae Cbf5p and Drosophila melanogaster Nop60B proteins. The gene lies in a tail-to-tail orientation with the palmitoylated erythrocyte membrane protein (MPP1) gene and is transcribed in a telomere to centromere direction. Both nucleotide substitutions and single trinucleotide repeat polymorphisms have been found in this gene. Mutations in this gene cause X-linked dyskeratosis congenita.
Clinical significance:
Mutations in DKC1 are associated with Hoyeraal-Hreidarsson syndrome.
**Izak Benbasat**
Izak Benbasat:
Izak Benbasat is a Turkish–Canadian professor and scientist, currently the Sauder Distinguished Professor of Information Systems and professor of information-system management at the University of British Columbia Sauder School of Business. He is also a published author and a widely cited researcher.
**Werewolf in a Girls' Dormitory**
Werewolf in a Girls' Dormitory:
Werewolf in a Girls' Dormitory (Italian: Lycanthropus) is a 1961 Italian horror film directed by Paolo Heusch.
Plot:
Wolves have been roaming around a girls' reformatory. When the girls begin to get murdered, suspicion focuses on both the wolves and on a handsome, newly hired science teacher, who might be a werewolf.
Production:
Werewolf in a Girls' Dormitory was shot in 1961 around Cinecittà Studios and Rome. In the film, director Paolo Heusch is credited under the name Richard Benson. One of the film's screenwriters, Ernesto Gastaldi, explained that it was mandatory to give yourself an English name in Italian productions of the time because "that's the way the producers wanted it." The German actor Curt Lowens plays the werewolf in the film. Lowens described the werewolf transformations as being shot in reverse: starting with full make-up, elements of the transformation were progressively removed, and the whole process, shot with dissolves, took over three hours. Lowens described shooting the film as a chaotic experience, with the actors predominantly speaking different languages: French, English, Italian and German.
Release:
Werewolf in a Girls' Dormitory was released in Italy on November 9, 1961, where it was distributed by Cineriz. The film grossed a total of 115 million Italian lira on its theatrical run. The film was shown in the United States on June 6, 1963, where it was distributed by MGM. The American version of the film adds the rock song "The Ghoul in School" to the opening credits, written by Marilyn Stewart and Frank Owens. The song had vocals by Adam Keefe and was released on a 45 RPM record distributed by Cub Records. It was released on DVD in the United States by Retromedia and Alpha Video.
Reception:
From contemporary reviews, The Globe and Mail stated that had the film not been "disfigured by bad dubbing and a silly attempt to establish the locale as the United States, it might have been a very respectable specimen of the horror school". A review in the Monthly Film Bulletin reviewed an 81-minute dubbed version of the film titled I Married a Werewolf, describing it as a "grotesque horror film" where "a certain relish can be derived in the early stages from the unspeakably foolish dialogue; but there is really nothing else on any level which one can possibly recommend." Danny Shipka, author of Perverse Titillation: The Exploitation Cinema of Italy, Spain and France 1960-1980, stated the film "won't convert any fans to the genre", due to a slow pace and poor dubbing in the English-language version.
**Solution Deployment Descriptor**
Solution Deployment Descriptor:
Solution Deployment Descriptor (SDD) is a standard XML-based schema defining a standardized way to express software installation characteristics required for lifecycle management in a multi-platform environment.
The SDD defines schema for two XML document types: Package Descriptors and Deployment Descriptors. Package Descriptors define characteristics of a package used to deploy a solution. Deployment Descriptors define characteristics of the content of a solution package, including the requirements that are relevant for creation, configuration, and maintenance of the solution content.
**Leibniz's notation**
Leibniz's notation:
In calculus, Leibniz's notation, named in honor of the 17th-century German philosopher and mathematician Gottfried Wilhelm Leibniz, uses the symbols dx and dy to represent infinitely small (or infinitesimal) increments of x and y, respectively, just as Δx and Δy represent finite increments of x and y, respectively. Consider y as a function of a variable x, or y = f(x). If this is the case, then the derivative of y with respect to x, which later came to be viewed as the limit lim_{Δx→0} (f(x + Δx) − f(x))/Δx, was, according to Leibniz, the quotient of an infinitesimal increment of y by an infinitesimal increment of x, or dy/dx = f′(x), where the right hand side is Joseph-Louis Lagrange's notation for the derivative of f at x. The infinitesimal increments are called differentials. Related to this is the integral in which the infinitesimal increments are summed (e.g. to compute lengths, areas and volumes as sums of tiny pieces), for which Leibniz also supplied a closely related notation involving the same differentials, a notation whose efficiency proved decisive in the development of continental European mathematics.
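Numerically, the quotient reading of dy/dx can be watched converging. A small sketch for f(x) = x² at x = 3, where f′(3) = 6 and the difference quotient works out to exactly 6 + Δx:

```python
# As Δx shrinks, the difference quotient (f(x+Δx) - f(x))/Δx approaches
# the derivative f'(x); for f(x) = x**2 at x = 3 the quotient is 6 + Δx.
def difference_quotient(f, x, dx):
    return (f(x + dx) - f(x)) / dx

f = lambda t: t * t
quotients = [difference_quotient(f, 3.0, 10.0 ** -k) for k in range(1, 7)]
# quotients approach 6: roughly 6.1, 6.01, 6.001, ...
```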
Leibniz's notation:
Leibniz's concept of infinitesimals, long considered to be too imprecise to be used as a foundation of calculus, was eventually replaced by rigorous concepts developed by Weierstrass and others in the 19th century. Consequently, Leibniz's quotient notation was re-interpreted to stand for the limit of the modern definition. However, in many instances, the symbol did seem to act as an actual quotient would and its usefulness kept it popular even in the face of several competing notations. Several different formalisms were developed in the 20th century that can give rigorous meaning to notions of infinitesimals and infinitesimal displacements, including nonstandard analysis, tangent space, O notation and others.
Leibniz's notation:
The derivatives and integrals of calculus can be packaged into the modern theory of differential forms, in which the derivative is genuinely a ratio of two differentials, and the integral likewise behaves in exact accordance with Leibniz notation. However, this requires that derivative and integral first be defined by other means, and as such expresses the self-consistency and computational efficacy of the Leibniz notation rather than giving it a new foundation.
History:
The Newton–Leibniz approach to infinitesimal calculus was introduced in the 17th century. While Newton worked with fluxions and fluents, Leibniz based his approach on generalizations of sums and differences. Leibniz was the first to use the ∫ character. He based the character on the Latin word summa ("sum"), which he wrote ſumma with the elongated s commonly used in Germany at the time. Viewing differences as the inverse operation of summation, he used the symbol d, the first letter of the Latin differentia, to indicate this inverse operation. Leibniz was fastidious about notation, having spent years experimenting, adjusting, rejecting and corresponding with other mathematicians about them. Notations he used for the differential of y ranged successively from ω, l, and y/d until he finally settled on dy. His integral sign first appeared publicly in the article "De Geometria Recondita et analysi indivisibilium atque infinitorum" ("On a hidden geometry and analysis of indivisibles and infinites"), published in Acta Eruditorum in June 1686, but he had been using it in private manuscripts at least since 1675. Leibniz first used dx in the article "Nova Methodus pro Maximis et Minimis" also published in Acta Eruditorum in 1684. While the symbol dx/dy does appear in private manuscripts of 1675, it does not appear in this form in either of the above-mentioned published works. Leibniz did, however, use forms such as dy ad dx and dy : dx in print.
At the end of the 19th century, Weierstrass's followers ceased to take Leibniz's notation for derivatives and integrals literally. That is, mathematicians felt that the concept of infinitesimals contained logical contradictions in its development. A number of 19th century mathematicians (Weierstrass and others) found logically rigorous ways to treat derivatives and integrals without infinitesimals using limits as shown above, while Cauchy exploited both infinitesimals and limits (see Cours d'Analyse). Nonetheless, Leibniz's notation is still in general use. Although the notation need not be taken literally, it is usually simpler than alternatives when the technique of separation of variables is used in the solution of differential equations. In physical applications, one may for example regard f(x) as measured in meters per second, and dx in seconds, so that f(x) dx is in meters, and so is the value of its definite integral. In that way the Leibniz notation is in harmony with dimensional analysis.
Leibniz's notation for differentiation:
Suppose a dependent variable y represents a function f of an independent variable x, that is, y=f(x).
Then the derivative of the function f, in Leibniz's notation for differentiation, can be written as dy/dx, df/dx, or d(f(x))/dx.
The Leibniz expression, also, at times, written dy/dx, is one of several notations used for derivatives and derived functions. A common alternative is Lagrange's notation: dy/dx = y′ = f′(x).
Another alternative is Newton's notation, often used for derivatives with respect to time (like velocity), which requires placing a dot over the dependent variable (in this case, x): dx/dt = ẋ.
Lagrange's "prime" notation is especially useful in discussions of derived functions and has the advantage of having a natural way of denoting the value of the derived function at a specific value. However, the Leibniz notation has other virtues that have kept it popular through the years.
In its modern interpretation, the expression dy/dx should not be read as the division of two quantities dx and dy (as Leibniz had envisioned it); rather, the whole expression should be seen as a single symbol that is shorthand for lim_{Δx→0} Δy/Δx (note Δ vs. d, where Δ indicates a finite difference).
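The limiting process can be watched numerically; a minimal Python sketch (f(x) = x² is an assumed example, so the true derivative at x = 3 is 6):

```python
def difference_quotient(f, x, dx):
    """Finite quotient Δy/Δx; tends to dy/dx as Δx shrinks toward 0."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x * x  # assumed example; f'(x) = 2x, so f'(3) = 6

# Successively smaller Δx drives the quotient toward the derivative.
approximations = [difference_quotient(f, 3.0, 10.0 ** -k) for k in range(1, 6)]
```

Each entry of `approximations` lies closer to 6 than the last, mirroring the limit Δy/Δx → dy/dx.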
The expression may also be thought of as the application of the differential operator d/dx (again, a single symbol) to y, regarded as a function of x. This operator is written D in Euler's notation. Leibniz did not use this form, but his use of the symbol d corresponds fairly closely to this modern concept.
While there is traditionally no division implied by the notation (but see Nonstandard analysis), the division-like notation is useful since in many situations, the derivative operator does behave like a division, making some results about derivatives easy to obtain and remember.
This notation owes its longevity to the fact that it seems to reach to the very heart of the geometrical and mechanical applications of the calculus.
Leibniz notation for higher derivatives:
If y = f(x), the nth derivative of f in Leibniz notation is given by f^(n)(x) = d^n y/dx^n.
This notation, for the second derivative, is obtained by using d/dx as an operator in the following way: d^2 y/dx^2 = (d/dx)(dy/dx).
A third derivative, which might be written as d(d(dy/dx)/dx)/dx, can be obtained from d^3 y/dx^3 = (d/dx)(d^2 y/dx^2) = (d/dx)((d/dx)(dy/dx)).
Similarly, the higher derivatives may be obtained inductively.
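The inductive construction, applying the operator d/dx repeatedly, can be mimicked with nested central differences; a rough numerical sketch (the step size and the test function are arbitrary choices):

```python
def d_dx(f, h=1e-3):
    """Return a function approximating df/dx via a central difference."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

def nth_derivative(f, n, h=1e-3):
    """Apply the d/dx operator n times, mirroring d^n y/dx^n."""
    g = f
    for _ in range(n):
        g = d_dx(g, h)
    return g

cubic = lambda x: x ** 3       # y = x^3
d3 = nth_derivative(cubic, 3)  # d^3 y/dx^3 = 6 for every x
```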
While it is possible, with carefully chosen definitions, to interpret dy/dx as a quotient of differentials, this should not be done with the higher order forms. However, an alternative Leibniz notation for higher order derivatives allows for this.
This notation was, however, not used by Leibniz. In print he did not use multi-tiered notation nor numerical exponents (before 1695). To write x3 for instance, he would write xxx, as was common in his time. The square of a differential, as it might appear in an arc length formula for instance, was written as dxdx. However, Leibniz did use his d notation as we would today use operators, namely he would write a second derivative as ddy and a third derivative as dddy. In 1695 Leibniz started to write d2⋅x and d3⋅x for ddx and dddx respectively, but l'Hôpital, in his textbook on calculus written around the same time, used Leibniz's original forms.
Use in various formulas:
One reason that Leibniz's notations in calculus have endured so long is that they permit the easy recall of the appropriate formulas used for differentiation and integration. For instance, the chain rule: suppose that the function g is differentiable at x and y = f(u) is differentiable at u = g(x). Then the composite function y = f(g(x)) is differentiable at x and its derivative can be expressed in Leibniz notation as dy/dx = (dy/du)·(du/dx).
This can be generalized to deal with the composites of several appropriately defined and related functions, u1, u2, ..., un, and would be expressed as dy/dx = (dy/du1)·(du1/du2)·(du2/du3)⋯(dun/dx).
Also, the integration by substitution formula may be expressed by ∫ y dx = ∫ y (dx/du) du, where x is thought of as a function of a new variable u and the function y on the left is expressed in terms of x while on the right it is expressed in terms of u.
If y = f(x) where f is a differentiable function that is invertible, the derivative of the inverse function, if it exists, can be given by dx/dy = 1/(dy/dx), where the parentheses are added to emphasize the fact that the derivative is not a fraction.
However, when solving differential equations, it is easy to think of the dys and dxs as separable. One of the simplest types of differential equations is M(x) + N(y) dy/dx = 0, where M and N are continuous functions. Solving (implicitly) such an equation can be done by examining the equation in its differential form, M(x) dx + N(y) dy = 0, and integrating to obtain ∫M(x) dx + ∫N(y) dy = C.
Rewriting, when possible, a differential equation into this form and applying the above argument is known as the separation of variables technique for solving such equations.
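As a concrete instance (an assumed example): with M(x) = x and N(y) = y, the equation x + y·dy/dx = 0 separates to x dx + y dy = 0, so x²/2 + y²/2 = C along any solution. A Python sketch that integrates dy/dx = −x/y numerically and checks that this conserved quantity holds:

```python
def integrate_rk4(x0, y0, x_end, n=1000):
    """Integrate dy/dx = -x/y (i.e. x dx + y dy = 0) with classical RK4."""
    f = lambda x, y: -x / y
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# Start at (0, 1): the constant is C = 0^2/2 + 1^2/2 = 1/2.
y_end = integrate_rk4(0.0, 1.0, 0.5)
conserved = 0.5 ** 2 / 2 + y_end ** 2 / 2  # should remain 1/2
```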
In each of these instances the Leibniz notation for a derivative appears to act like a fraction, even though, in its modern interpretation, it isn't one.
Modern justification of infinitesimals:
In the 1960s, building upon earlier work by Edwin Hewitt and Jerzy Łoś, Abraham Robinson developed mathematical explanations for Leibniz's infinitesimals that were acceptable by contemporary standards of rigor, and developed nonstandard analysis based on these ideas. Robinson's methods are used by only a minority of mathematicians. Jerome Keisler wrote a first-year calculus textbook, Elementary calculus: an infinitesimal approach, based on Robinson's approach.
From the point of view of modern infinitesimal theory, Δx is an infinitesimal x-increment, Δy is the corresponding y-increment, and the derivative is the standard part of the infinitesimal ratio: f′(x) = st(Δy/Δx). Then one sets dx = Δx and dy = f′(x) dx, so that by definition, f′(x) is the ratio of dy to dx.
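The definition dy = f′(x) dx has a direct computational analogue in dual numbers (forward-mode automatic differentiation), where the "infinitesimal" part of f(x + ε) is literally f′(x); a minimal sketch, offered as an analogy rather than as nonstandard analysis proper:

```python
class Dual:
    """Numbers a + b·ε with ε² = 0; the ε-part tracks an infinitesimal
    increment, so the ε-part of f(x + 1·ε) is f'(x) for polynomials."""
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.re + other.re, self.eps + other.eps)
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.re * other.re,
                    self.re * other.eps + self.eps * other.re)

def derivative(f, x):
    """Read dy/dx off as the ε-part of f applied to x + 1·ε."""
    return f(Dual(x, 1.0)).eps

cube = lambda x: x * x * x  # works because only + and * are used
```

Here dy really is computed as f′(x)·dx with dx = 1·ε, echoing the standard-part construction above.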
Similarly, although most mathematicians now view an integral ∫ f(x) dx as a limit lim_{Δx→0} ∑_i f(x_i) Δx, where Δx is an interval containing x_i, Leibniz viewed it as the sum (the integral sign denoted summation for him) of infinitely many infinitesimal quantities f(x) dx. From the viewpoint of nonstandard analysis, it is correct to view the integral as the standard part of such an infinite sum.
The trade-off needed to gain the precision of these concepts is that the set of real numbers must be extended to the set of hyperreal numbers.
Other notations of Leibniz:
Leibniz experimented with many different notations in various areas of mathematics. He felt that good notation was fundamental in the pursuit of mathematics. In a letter to l'Hôpital in 1693 he says: One of the secrets of analysis consists in the characteristic, that is, in the art of skilful employment of the available signs, and you will observe, Sir, by the small enclosure [on determinants] that Vieta and Descartes have not known all the mysteries. He refined his criteria for good notation over time and came to realize the value of "adopting symbolisms which could be set up in a line like ordinary type, without the need of widening the spaces between lines to make room for symbols with sprawling parts." For instance, in his early works he heavily used a vinculum to indicate grouping of symbols, but later he introduced the idea of using pairs of parentheses for this purpose, thus appeasing the typesetters who no longer had to widen the spaces between lines on a page and making the pages look more attractive.

Many of the over 200 new symbols introduced by Leibniz are still in use today. Besides the differentials dx, dy and the integral sign ( ∫ ) already mentioned, he also introduced the colon (:) for division, the middle dot (⋅) for multiplication, the geometric signs for similar (~) and congruence (≅), the use of Recorde's equal sign (=) for proportions (replacing Oughtred's :: notation) and the double-suffix notation for determinants.
**BIBO stability**
BIBO stability:
In signal processing, specifically control theory, bounded-input, bounded-output (BIBO) stability is a form of stability for signals and systems that take inputs. If a system is BIBO stable, then the output will be bounded for every input to the system that is bounded.
A signal is bounded if there is a finite value B > 0 such that the signal magnitude never exceeds B, that is:

For discrete-time signals: there exists B such that |y[n]| ≤ B for all n ∈ ℤ.
For continuous-time signals: there exists B such that |y(t)| ≤ B for all t ∈ ℝ.
Time-domain condition for linear time-invariant systems:
Continuous-time necessary and sufficient condition:
For a continuous time linear time-invariant (LTI) system, the condition for BIBO stability is that the impulse response, h(t), be absolutely integrable, i.e., its L1 norm exists:
∫_{−∞}^{∞} |h(t)| dt = ‖h‖₁ ∈ ℝ

Discrete-time sufficient condition:
For a discrete time LTI system, the condition for BIBO stability is that the impulse response be absolutely summable, i.e., its ℓ1 norm exists:
∑_{n=−∞}^{∞} |h[n]| = ‖h‖₁ ∈ ℝ

Proof of sufficiency:
Given a discrete time LTI system with impulse response h[n], the relationship between the input x[n] and the output y[n] is y[n] = h[n] ∗ x[n], where ∗ denotes convolution. Then it follows by the definition of convolution:

y[n] = ∑_{k=−∞}^{∞} h[k] x[n−k]

Let ‖x‖∞ be the maximum value of |x[n]|, i.e., the L∞-norm.
|y[n]| = |∑_{k=−∞}^{∞} h[n−k] x[k]|
       ≤ ∑_{k=−∞}^{∞} |h[n−k]| |x[k]|   (by the triangle inequality)
       ≤ ∑_{k=−∞}^{∞} |h[n−k]| ‖x‖∞ = ‖x‖∞ ∑_{k=−∞}^{∞} |h[n−k]| = ‖x‖∞ ∑_{k=−∞}^{∞} |h[k]|

If h[n] is absolutely summable, then ∑_{k=−∞}^{∞} |h[k]| = ‖h‖₁ ∈ ℝ and

‖x‖∞ ∑_{k=−∞}^{∞} |h[k]| = ‖x‖∞ ‖h‖₁

So if h[n] is absolutely summable and |x[n]| is bounded, then |y[n]| is bounded as well, because ‖x‖∞ ‖h‖₁ ∈ ℝ. The proof for continuous-time follows the same arguments.
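The resulting bound |y[n]| ≤ ‖x‖∞·‖h‖₁ can be illustrated with finite sequences; a Python sketch (the particular h and x are arbitrary assumed examples):

```python
def convolve(h, x):
    """Discrete convolution y[n] = sum over k of h[k]·x[n−k], finite sequences."""
    y = [0.0] * (len(h) + len(x) - 1)
    for i, hi in enumerate(h):
        for j, xj in enumerate(x):
            y[i + j] += hi * xj
    return y

h = [0.5, -0.25, 0.125]          # absolutely summable: ||h||_1 = 0.875
x = [1.0, -1.0, 1.0, 1.0, -1.0]  # bounded: ||x||_inf = 1.0

y = convolve(h, x)
bound = max(abs(v) for v in x) * sum(abs(v) for v in h)  # ||x||_inf * ||h||_1
# Every |y[n]| is at most this bound.
```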
Frequency-domain condition for linear time-invariant systems:
Continuous-time signals For a rational and continuous-time system, the condition for stability is that the region of convergence (ROC) of the Laplace transform includes the imaginary axis. When the system is causal, the ROC is the open region to the right of a vertical line whose abscissa is the real part of the "largest pole", or the pole that has the greatest real part of any pole in the system. The real part of the largest pole defining the ROC is called the abscissa of convergence. Therefore, all poles of the system must be in the strict left half of the s-plane for BIBO stability.
This stability condition can be derived from the above time-domain condition as follows:

∫_{−∞}^{∞} |h(t)| dt = ∫_{−∞}^{∞} |h(t)| |e^{−jωt}| dt = ∫_{−∞}^{∞} |h(t) e^{−jωt}| dt = ∫_{−∞}^{∞} |h(t) e^{−(σ+jω)t}| dt = ∫_{−∞}^{∞} |h(t) e^{−st}| dt

where s = σ + jω and σ = Re(s) = 0 (points on the imaginary axis, where |e^{−jωt}| = 1).
The region of convergence must therefore include the imaginary axis.
Discrete-time signals For a rational and discrete time system, the condition for stability is that the region of convergence (ROC) of the z-transform includes the unit circle. When the system is causal, the ROC is the open region outside a circle whose radius is the magnitude of the pole with largest magnitude. Therefore, all poles of the system must be inside the unit circle in the z-plane for BIBO stability.
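Once the poles are known, both pole criteria (strict left half s-plane in continuous time, strict interior of the unit circle in discrete time) reduce to one-line checks; a Python sketch with hypothetical pole sets:

```python
def bibo_stable_continuous(poles):
    """Causal rational continuous-time system: BIBO stable iff every
    pole lies strictly in the left half of the s-plane."""
    return all(p.real < 0 for p in poles)

def bibo_stable_discrete(poles):
    """Causal rational discrete-time system: BIBO stable iff every
    pole lies strictly inside the unit circle of the z-plane."""
    return all(abs(p) < 1 for p in poles)

stable_ct = bibo_stable_continuous([-1 + 0j, -2 + 3j])  # True
stable_dt = bibo_stable_discrete([0.9, -0.5])           # True
```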
This stability condition can be derived in a similar fashion to the continuous-time derivation:

∑_{n=−∞}^{∞} |h[n]| = ∑_{n=−∞}^{∞} |h[n]| |e^{−jωn}| = ∑_{n=−∞}^{∞} |h[n] e^{−jωn}| = ∑_{n=−∞}^{∞} |h[n] (re^{jω})^{−n}| = ∑_{n=−∞}^{∞} |h[n] z^{−n}|

where z = re^{jω} and r = |z| = 1 (the unit circle). The region of convergence must therefore include the unit circle.
**Bus terminus**
Bus terminus:
A bus terminus or bus terminal is a designated place where a bus or coach starts or ends its scheduled route. The terminus is the designated place that a timetable is timed from. Termini can be located at bus stations, interchanges, bus garages or bus stops. Termini can both start and end at the same place, or may be in different locations for starting and finishing a route. Termini may or may not coincide with the use of bus stands.
Size of termini:
For operational reasons, it can be convenient for routes to terminate at or near the operator's bus garage, where the legal terminus is just outside or nearby. For the purposes of integration of different public transport modes, termini may also be located as part of a transportation hub or 'interchange', or alongside other major amenities such as universities, shopping centres or hospitals. Minor termini may be a bus stop or loop in a residential street, used by very few routes, or just one.
Operational considerations:
While it may be of prime importance to the passenger, the location of a terminus may be made for reasons other than convenience of passengers.
Competitive interests:
In rare cases, where the bus operator is commercially separate from the bus station owner, the bus company may choose to terminate services outside the station, so as not to incur usage fees. Additionally, counter to the idea of integration, competing bus operators may use different locations as intermediate termini, to discourage passengers' use of competitors' services.
Turning:
A factor in the location of a terminus is how to turn the bus around to start the route in the other direction, which may be difficult in areas where road space is an issue, or the road layout prevents U-turns. This does not apply for true circle routes, where buses simply operate permanently in the clockwise or anti-clockwise direction. Termini in bus stations will often include reversing/run-around space, negating the turning issue.
Layover:
Another consideration about the location of a terminus is the need to lay over before resuming in service.
In busy locations, such as main streets or bus stations, allowing the bus the space to lay over may not be appropriate, and the bus may have to run out of service, to a quieter layover point, before returning to the terminus to start the route again.
To allow layover at a terminus, many routes run through busy centres terminating either side in quiet termini, where a bus can lay over without causing an obstruction. In the one-stop case, this can cause problems for passengers when an apparently in-service bus parks at a bus stop with the doors closed, waiting until the timetabled departure time, or when an arriving bus is not forming a departing service. This can be mitigated by using a bus stand. In the two-stop type, the arrival stop can be used as the layover point.
Layover time is time built into a schedule between arrival at the end of a route and the departure for the return trip, used for the recovery of delays and preparation for the return trip.
Driver change:
Terminus location may be positioned to allow driver changes, although this may be less of a factor than the location of the bus garage. Centrally located termini may be more convenient for driver changes. Some operators operate pool cars to allow drivers to drive to and wait at a quiet terminus, swapping the car with the bus when it arrives.
Types of terminus:
One stop:
Many routes avoid the need to accommodate turning by having the end of the route form a small circuit as an official part of the route. The terminus is designated as one stop on this circuit, with the bus starting and finishing in the same orientation. This is often necessary in many town centres with one-way traffic systems.
Space permitting, the terminus may be a purpose-built run-around bus turnout, which allows the bus to change direction simply by entering and leaving the turnout. Often the infrastructure for this remains from a previous tram or trolleybus system.
In rare cases, to allow a one stop terminus, routes may be arranged to start and finish at the same terminus, with buses arriving as one scheduled route, and leaving as a different route. This can also be done to allow a formal midpoint to split up a long route, reducing the knock-on effect of delays.
Two stop:
As opposed to a one stop arrangement, some routes that need to reverse direction at a terminus will start and finish in different stops, and the pair of stop locations forms the terminus. This necessitates running the bus out of service along other streets in order to position the bus for the reverse direction. In the UK, this is often achieved by locating the terminus near a roundabout.
In this case, the arrival point can be designated as a 'set down only' stop, where passengers are not permitted to board.
Route terminus variations:
Often one bus route will follow a core main route, but towards the termini, the bus may branch off and terminate in different locations. This may be indicated by different route numbers, or with the same route number but a different destination name on the headsign/rollsign.
Routes may also have a number of different termini on the same numbered route, again shown only by different destinations. These may be used at different times according to operational need, usually to reflect different demand at the different times of the day.
**ToFeT**
ToFeT:
ToFeT is a kinetic Monte Carlo electronic model of molecular films, able to simulate the time-of-flight experiment (ToF) and field-effect transistors (FeTs). As its input, ToFeT takes a description of the film at a molecular level: the positions of all molecules and the interactions between them. As its output, ToFeT produces electrical characteristics such as mobilities, JV curves, and transient photocurrents. ToFeT thus allows the microscopic properties of a film to be related to its macroscopic electronic properties.
ToFeT is an open-source project, used by academic and industrial groups around the world. The current focus in ToFeT's development is to treat a wider range of materials systems, and reproduce a wider range of experimental measurements.
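ToFeT's actual rate expressions and input format are not described above, but the kinetic Monte Carlo idea it rests on can be sketched generically: a single carrier hopping on a periodic 1D lattice with Miller-Abrahams-style rates, with the drift velocity (and hence a mobility estimate) read off at the end. All parameters and formulas below are illustrative assumptions, not ToFeT's:

```python
import math
import random

def kmc_drift_velocity(energies, attempt_rate=1.0, kT=0.025, field=0.2,
                       a=1.0, steps=500, seed=1):
    """Toy kinetic Monte Carlo: one carrier hops on a periodic 1D lattice
    with Miller-Abrahams-style rates; returns the drift velocity."""
    rng = random.Random(seed)
    n = len(energies)
    pos, t, x = 0, 0.0, 0.0
    for _ in range(steps):
        hops = []
        for dx in (-1, +1):
            j = (pos + dx) % n
            # Energy change combines site energies and the field term.
            dE = energies[j] - energies[pos] - field * a * dx
            rate = attempt_rate * (math.exp(-dE / kT) if dE > 0 else 1.0)
            hops.append((rate, dx))
        total = sum(rate for rate, _ in hops)
        t += rng.expovariate(total)        # waiting time before the hop
        u, acc = rng.random() * total, 0.0
        for rate, dx in hops:
            acc += rate
            if u <= acc:
                pos = (pos + dx) % n
                x += dx * a
                break
    return x / t  # drift velocity; mobility ~ velocity / field

v = kmc_drift_velocity([0.0] * 10)  # flat energy landscape, field > 0
```

With a flat energy landscape and a positive field, the carrier drifts down-field, which is the quantity a ToF-style simulation converts into a mobility.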
In Hebrew, the word "Tofet" stands for "inferno, scene of horror; hell", thus providing a fitting indication of the complexity of the module.
**CH3O**
CH3O:
The molecular formula CH3O may refer to:
Hydroxymethyl group (HOCH2–)
Methoxy group (H3CO–)
**Ronald Beavis**
Ronald Beavis:
Ronald Charles Beavis (better known professionally as Ron Beavis) is a Canadian protein biochemist, who has been involved in the application of mass spectrometry to protein primary structure, with applications in the fields of proteomics and analytical biochemistry. He has developed methods for measuring the identity and post-translational modification state of proteins obtained from biological samples using mass spectrometry. He is currently best known for developing new methods for analyzing proteomics data and applying the results of these methods to problems in computational biology.
Background:
Ronald Charles Beavis was born in Winnipeg, Manitoba in May 1958. He went to school in Winnipeg and became interested in science at an early age. As a high school student he participated in the Manitoba Science Fair for projects related to chemistry and physics. He received a B.Sc. (Hons) in Zoology and Physics and a Ph.D. (Physics) from the University of Manitoba.
Current status:
Dr. Beavis is the founder of Beavis Informatics Ltd, a Canadian company providing consulting services in the general area of mass spectrometry-based proteomics. He oversees the development and operation of the GPM & GPMDB projects.
Professional affiliations:
Dr. Beavis was a founding member of Genome Prairie and one of the founders of ProteoMetrics, LLC. He has held academic positions at Rockefeller University, Memorial University of Newfoundland and Labrador, New York University Medical Center, the University of British Columbia and the University of Manitoba. He served on the Editorial Advisory Boards of the Journal of Proteome Research and Rapid Communications in Mass Spectrometry. He has also served on the Editorial Board of Molecular & Cellular Proteomics and on the Editorial Board of Scientific Data. He was part of the Human Proteome Project and a founding member of the U.S. Chromosome 17 project. He has frequently collaborated with David Fenyő on informatics projects.
Honors and awards:
NATO Postdoctoral Fellowship (1987) President's Award for Innovation, Eli Lilly & Co. (1999) Yen Fellowship (2003) Tier I Canada Research Chair (2007)
Most recent publications:
Perez-Riverol, Yasset; Bai, Mingze; da Veiga Leprevost, Felipe; Squizzato, Silvano; Park, Young Mi; Haug, Kenneth; Carroll, Adam J; Spalding, Dylan; Paschall, Justin; Wang, Mingxun; del-Toro, Noemi; Ternent, Tobias; Zhang, Peng; Buso, Nicola; Bandeira, Nuno; Deutsch, Eric W; Campbell, David S; Beavis, Ronald C; Salek, Reza M; Sarkans, Ugis; Petryszak, Robert; Keays, Maria; Fahy, Eoin; Sud, Manish; Subramaniam, Shankar; Barbera, Ariana; Jiménez, Rafael C; Nesvizhskii, Alexey I; Sansone, Susanna-Assunta; Steinbeck, Christoph; Lopez, Rodrigo; Vizcaíno, Juan A; Ping, Peipei; Hermjakob, Henning (2017). "Discovering and linking public omics data sets using the Omics Discovery Index". Nature Biotechnology. 35 (5): 406–409. doi:10.1038/nbt.3790. ISSN 1087-0156. PMC 5831141. PMID 28486464.
Omenn, Gilbert S.; Lane, Lydie; Lundberg, Emma K.; Beavis, Ronald C.; Overall, Christopher M.; Deutsch, Eric W. (2016). "Metrics for the Human Proteome Project 2016: Progress on Identifying and Characterizing the Human Proteome, Including Post-Translational Modifications". Journal of Proteome Research. 15 (11): 3951–3960. doi:10.1021/acs.jproteome.6b00511. ISSN 1535-3893. PMC 5129622. PMID 27487407.
Spicer, Vic; Ezzati, Peyman; Neustaeter, Haley; Beavis, Ronald C.; Wilkins, John A.; Krokhin, Oleg V. (2016). "3D HPLC-MS with Reversed-Phase Separation Functionality in All Three Dimensions for Large-Scale Bottom-Up Proteomics and Peptide Retention Data Collection". Analytical Chemistry. 88 (5): 2847–2855. doi:10.1021/acs.analchem.5b04567. ISSN 0003-2700. PMID 26849966.
Keegan, Sarah; Cortens, John P; Beavis, Ronald C; Fenyö, David (2016). "g2pDB: A Database Mapping Protein Post-Translational Modifications to Genomic Coordinates". Journal of Proteome Research. 15 (3): 983–990. doi:10.1021/acs.jproteome.5b01018. ISSN 1535-3893. PMID 26842767.
McAfee, Alison; Harpur, Brock A.; Michaud, Sarah; Beavis, Ronald C.; Kent, Clement F.; Zayed, Amro; Foster, Leonard J. (2016). "Toward an Upgraded Honey Bee (Apis melliferaL.) Genome Annotation Using Proteogenomics". Journal of Proteome Research. 15 (2): 411–421. doi:10.1021/acs.jproteome.5b00589. ISSN 1535-3893. PMID 26718741.
Fenyö, David; Beavis, Ronald C. (2015). "Selenocysteine: Wherefore Art Thou?". Journal of Proteome Research. 15 (2): 677–678. doi:10.1021/acs.jproteome.5b01028. ISSN 1535-3893. PMID 26680273.
Liu, Fei; Koval, Michael; Ranganathan, Shoba; Fanayan, Susan; Hancock, William S.; Lundberg, Emma K.; Beavis, Ronald C.; Lane, Lydie; Duek, Paula; McQuade, Leon; Kelleher, Neil L.; Baker, Mark S. (2016). "Systems Proteomics View of the Endogenous Human Claudin Protein Family". Journal of Proteome Research. 15 (2): 339–359. doi:10.1021/acs.jproteome.5b00769. ISSN 1535-3893. PMC 4777318. PMID 26680015.
Yan, Julia Fangfei; Kim, Hoguen; Jeong, Seul-Ki; Lee, Hyoung-Joo; Sethi, Manveen K.; Lee, Ling Y.; Beavis, Ronald C.; Im, Hogune; Snyder, Michael P.; Hofree, Matan; Ideker, Trey; Wu, Shiaw-lin; Paik, Young-Ki; Fanayan, Susan; Hancock, William S. (2015). "Integrated Proteomic and Genomic Analysis of Gastric Cancer Patient Tissues". Journal of Proteome Research. 14 (12): 4995–5006. doi:10.1021/acs.jproteome.5b00827. ISSN 1535-3893. PMC 5706558. PMID 26435392.
Omenn, Gilbert S.; Lane, Lydie; Lundberg, Emma K.; Beavis, Ronald C.; Nesvizhskii, Alexey I.; Deutsch, Eric W. (2015). "Metrics for the Human Proteome Project 2015: Progress on the Human Proteome and Guidelines for High-Confidence Protein Identification". Journal of Proteome Research. 14 (9): 3452–3460. doi:10.1021/acs.jproteome.5b00499. ISSN 1535-3893. PMC 4755311. PMID 26155816.
Fenyö, David; Beavis, Ronald C. (2015). "The GPMDB REST interface". Bioinformatics. 31 (12): 2056–2058. doi:10.1093/bioinformatics/btv107. ISSN 1367-4803. PMID 25697819.
**Burke's theorem**
Burke's theorem:
In queueing theory, a discipline within the mathematical theory of probability, Burke's theorem (sometimes the Burke's output theorem) is a theorem (stated and demonstrated by Paul J. Burke while working at Bell Telephone Laboratories) asserting that, for the M/M/1 queue, M/M/c queue or M/M/∞ queue in the steady state with arrivals forming a Poisson process with rate parameter λ: The departure process is a Poisson process with rate parameter λ.
At time t the number of customers in the queue is independent of the departure process prior to time t.
Proof:
Burke first published this theorem along with a proof in 1956. The theorem was anticipated but not proved by O’Brien (1954) and Morse (1955). A second proof of the theorem follows from a more general result published by Reich. The proof offered by Burke shows that the time intervals between successive departures are independently and exponentially distributed with parameter equal to the arrival rate parameter, from which the result follows.
An alternative proof is possible by considering the reversed process and noting that the M/M/1 queue is a reversible stochastic process. Consider the figure. By Kolmogorov's criterion for reversibility, any birth-death process is a reversible Markov chain. Note that the arrival instants in the forward Markov chain are the departure instants of the reversed Markov chain. Thus the departure process is a Poisson process of rate λ. Moreover, in the forward process the arrival at time t is independent of the number of customers after t. Thus in the reversed process, the number of customers in the queue is independent of the departure process prior to time t.
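The theorem's first claim can be probed by simulation; a Python sketch of an M/M/1 queue (λ = 1, μ = 2 are arbitrary choices with λ < μ), whose inter-departure gaps should average about 1/λ:

```python
import math
import random

def mm1_departure_times(lam, mu, n_departures, seed=0):
    """Event-driven simulation of an M/M/1 queue; returns departure times."""
    rng = random.Random(seed)
    t, queue = 0.0, 0
    next_arrival = rng.expovariate(lam)
    next_departure = math.inf
    departures = []
    while len(departures) < n_departures:
        if next_arrival < next_departure:  # arrival event
            t = next_arrival
            queue += 1
            if queue == 1:                 # server was idle: start service
                next_departure = t + rng.expovariate(mu)
            next_arrival = t + rng.expovariate(lam)
        else:                              # departure event
            t = next_departure
            queue -= 1
            departures.append(t)
            next_departure = (t + rng.expovariate(mu)) if queue > 0 else math.inf
    return departures

departures = mm1_departure_times(1.0, 2.0, 20000)
gaps = [b - a for a, b in zip(departures, departures[1:])]
mean_gap = sum(gaps) / len(gaps)  # should be close to 1/λ = 1.0
```

A full empirical check of the theorem would also test that the gaps are exponentially distributed and independent of the queue length; the mean is only the coarsest signature.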
This proof could be counter-intuitive, in the sense that the departure process of a birth-death process is independent of the service offered.
Related results and extensions:
The theorem can be generalised for "only a few cases," but remains valid for M/M/c queues and Geom/Geom/1 queues. It is thought that Burke's theorem does not extend to queues fed by a Markovian arrival process (MAP), and it is conjectured that the output process of an MAP/M/1 queue is an MAP only if the queue is an M/M/1 queue. An analogous theorem for the Brownian queue was proven by J. Michael Harrison.
**Illusory palinopsia**
Illusory palinopsia:
Illusory palinopsia is a subtype of palinopsia, a visual disturbance defined as the persistence or recurrence of a visual image after the stimulus has been removed. Palinopsia is a broad term describing a heterogeneous group of symptoms, which is divided into hallucinatory palinopsia and illusory palinopsia. Illusory palinopsia is likely due to sustained awareness of a stimulus and is similar to a visual illusion: the distorted perception of a real external stimulus.
Illusory palinopsia is caused by migraines, hallucinogen persisting perception disorder (HPPD), prescription drugs, and head trauma, but is also sometimes idiopathic. Illusory palinopsia consists of afterimages that are short-lived or unformed, occur at the same location in the visual field as the original stimulus, and are often exposed or exacerbated based on environmental parameters such as stimulus intensity, background contrast, fixation, and movement. Illusory palinopsia symptoms occur continuously or predictably, based on environmental conditions. The term is from Greek: palin for "again" and opsia for "seeing".
Signs and symptoms:
Illusory palinopsia is often worse with high stimulus intensity and contrast ratio in a dark adapted state. Multiple types of illusory palinopsia often co-exist in a patient and occur with other diffuse, persistent illusory symptoms such as halos around objects, dysmetropsia (micropsia, macropsia, pelopsia, or teleopsia), Alice in Wonderland Syndrome, visual snow, and oscillopsia. Illusory palinopsia consists of the following four symptom categories.
Prolonged indistinct afterimage:
Prolonged indistinct afterimages are unformed and occur at the same location in the visual field as the original stimulus. Stimulus intensity, contrast, and fixation length affect the generation and severity of these perseverated images. For example, after seeing a bright light such as a car headlight or a camera flash, a persistent afterimage remains in the visual field for several minutes. Patients often report photophobia, which can restrict their ability to perform outdoor activity. The prolonged image or light is typically isochromatic (positive afterimage) to the original stimulus, but can fade to different colors over time. Afterimages from lights tend to last longer than the indistinct afterimages from other brightly colored objects. Palinoptic prolonged light afterimages of the complementary color are differentiated from physiological afterimages based on afterimage intensity and duration.
Light streaking Light streaking describes a comet-like tail that is seen when there is motion between the observer and a light source. The streaking usually persists for several seconds before fading and often occurs with bright lights on a dark background. Patients commonly report difficulty with night driving, since the headlights of oncoming cars cause multiple streaks which obscure vision.
Visual trailing Visual trailing describes an object in motion leaving frozen copies in its wake. These motion-induced afterimages may be discontinuous such as in a film reel or may be blurred together such as in a long-exposure photograph. If discontinuous, the patient also usually reports akinetopsia. The perseverated images last a few seconds and are usually identical in color and shape to the original stimulus. Most cases describe visual trails during movement of an object, although there are also reports from the movement of the observer's head or eyes.
Variant image perseveration There are a few cases of palinopsia with many of the same features as hallucinatory palinopsia (formed image perseveration) but with some important differences. The formed perseverated image may last only a couple of seconds or may be black or translucent. These variants usually lack the realistic clarity of hallucinatory palinopsia, and the generation of the palinoptic images is affected by fixation time, motion, stimulus intensity, or contrast. These variants probably represent an overlap between hallucinatory and illusory palinopsia but are included in illusory palinopsia since they often co-exist with the other illusory symptoms.
Cause:
Of the published cases of palinopsia that are idiopathic or attributed to migraines, HPPD, prescription drugs, or head trauma, 94% described illusory palinopsia. Trazodone, nefazodone, mirtazapine, topiramate, clomiphene, oral contraceptives, and risperidone have been reported to cause illusory palinopsia. Clomiphene and oral contraceptives are the only prescription drugs reported to cause permanent symptoms. HPPD is most common after LSD ingestion, but can occur after any hallucinogen use. HPPD is commonly described in psychiatric literature and illusory palinopsia symptoms are sometimes not defined as palinopsia. It is not clear if there is a relationship between HPPD and the quantity and strength of hallucinogen doses taken.
Pathophysiology:
Illusory palinopsia is a dysfunction in visual perception, presumably related to diffuse neuronal excitability alterations in the anterior and posterior visual pathways. Because of the drugs that cause illusory palinopsia, 5-HT2a receptor excitotoxicity or a disruption of GABAergic transmission have been proposed as possible mechanisms. However, the neuropharmacology of the visual system is probably too complex to pinpoint the visual disturbances to a single neurotransmitter or neurotransmitter receptor. The generation of illusory palinopsia is often dependent on ambient light or motion, and the symptoms could be a pathological exaggeration of normal light perception and motion perception mechanisms. Prolonged indistinct afterimages are symptomatically similar to physiological afterimages, and light streaking and visual trailing are symptomatically similar to motion blur when viewing fast-moving objects.
Light and motion perception are dynamic operations involving processing and feedback from structures throughout the central nervous system. A patient frequently has multiple types of diffuse, persistent illusory symptoms which represent dysfunctions in both light and motion perception. Light and motion are processed via different pathways, which suggests that there are diffuse or global excitability alterations in the visual pathway. Faulty neural adaptation and feedback between the anterior and posterior visual pathways could cause persistent excitability changes. Movement-related palinopsia could be due to inappropriate or incomplete activation of the motion suppression mechanisms (visual masking/backward masking and corollary discharges) related to visual stability during eye or body movements, which are present in saccadic suppression, blinking, smooth pursuit, etc.
Palinopsia in migraineurs Illusory palinopsia may occur during a migraine aura, as do other diffuse illusory symptoms such as halos around objects, visual snow, dysmetropsia, and oscillopsia. In a rare migraine subtype known as persistent visual aura without infarction, illusory palinopsia symptoms (prolonged indistinct afterimages, light streaking, and visual trailing) persist after the migraine has abated. Alternatively, up to 10% of all migraineurs report formed afterimages that last only a couple of seconds and do not occur with other illusory symptoms. These momentary afterimages appear at a different location in the visual field than the original stimulus, occur a few times per month, and are affected by external light and motion (variant image perseveration). Migraineurs with these momentary afterimages report significantly fewer migraine headaches than migraineurs without these afterimages (4.3 vs. 14.4 attacks/year). These afterimages probably represent an overlap between hallucinatory and illusory palinopsia. Studying these momentary formed afterimages, in relation to alterations in cortical excitability, could advance our understanding of migraine pathogenesis and the mechanisms associated with encoding visual memory.
Diagnosis:
Palinopsia necessitates a full ophthalmologic and neurologic history and physical exam. There are no clear guidelines on the work-up for illusory palinopsia, but it is not unreasonable to order automated visual field testing and neuroimaging since migraine aura can sometimes mimic seizures or cortical lesions. However, in a young patient without risk factors or other worrisome symptoms or signs (vasculopathy, history of cancer, etc.), neuroimaging for illusory palinopsia is low-yield but may grant the patient peace of mind. The physical exam and work-up are usually non-contributory in illusory palinopsia. Diagnosing the etiology of illusory palinopsia is often based on the clinical history. Palinopsia is attributed to a prescription drug if symptoms begin after drug initiation or dose increase. Palinopsia is attributed to head trauma if symptoms begin shortly after the incident. Continuous illusory palinopsia in a migraineur is usually from persistent visual aura. HPPD can occur any time after hallucinogen ingestion and is a diagnosis of exclusion in patients with previous hallucinogen use. Migraines and HPPD are probably the most common causes of palinopsia. Idiopathic palinopsia may be analogous to the cerebral state in persistent visual aura with non-migraine headache or persistent visual aura without headache.
Due to the subjective nature of the symptoms and the lack of organic findings, clinicians may be dismissive of illusory palinopsia, sometimes causing the patient distress. There is considerable evidence in the literature confirming the symptom legitimacy, so validating the patient's symptoms can help ease anxiety. Unidirectional visual trails or illusory symptoms confined to part of a visual field suggest cortical pathology and necessitate further work-up.
Treatment:
There is limited data on treating the visual disturbances associated with HPPD, persistent visual aura, or post-head-trauma visual disturbances, and pharmaceutical treatment is empirically based. It is not clear if the etiology or type of illusory symptom influences treatment efficacy. Since the symptoms are usually benign, treatment is based on the patient's motivation and willingness to try many different drugs. There are cases which report successful treatment with clonidine, clonazepam, lamotrigine, nimodipine, topiramate, verapamil, divalproex sodium, gabapentin, furosemide, and acetazolamide, as these drugs have mechanisms that decrease neuronal excitability. However, other patients report treatment failure from the same drugs. Based on the available evidence and side-effect profile, clonidine might be an attractive treatment option. Many patients report improvement from sunglasses. FL-41 tinted lenses may provide additional relief, as they have shown some efficacy in visually sensitive migraineurs.
**Autoscopy**
Autoscopy:
Autoscopy is the experience in which an individual perceives the surrounding environment from a different perspective, from a position outside of their own body. Autoscopy comes from the ancient Greek autós (αὐτός, "self") and skopós (σκοπός, "watcher").
Autoscopy has been of interest to humankind from time immemorial and is abundant in the folklore, mythology, and spiritual narratives of most ancient and modern societies. Cases of autoscopy are commonly encountered in modern psychiatric practice. According to neurological research, autoscopic experiences are hallucinations. Their root cause is unclear. Autoscopic experiences can include non-mirroring real-time images and the experiencer may be able to move.
Factors:
Experiences are characterized by the presence of the following three factors: disembodiment, an apparent location of the self outside one's body; the impression of seeing the world from an elevated and distanced, but egocentric, visuo-spatial perspective; and the impression of seeing one's own body from this perspective (autoscopy). The autoscopic phenomenon is classified into the following six typologies: autoscopic hallucination, heautoscopy (or heautoscopy proper), feeling of a presence, out-of-body experience, and the negative and inner forms of autoscopy. Researchers at the Laboratory of Cognitive Neuroscience, École Polytechnique Fédérale de Lausanne, Lausanne, and the Department of Neurology, University Hospital, Geneva, Switzerland, have reviewed some of the classical precipitating factors of autoscopy (sleep, drug abuse, and general anesthesia) as well as its neurobiology, and compared them with recent findings on the neurological and neurocognitive mechanisms of autoscopy. The reviewed data suggest that autoscopic experiences are due to functional disintegration of lower-level multisensory processing and abnormal higher-level self-processing at the temporoparietal junction.
Related disorders:
Heautoscopy is a term used in psychiatry and neurology for the reduplicative hallucination of "seeing one's own body at a distance". It can occur as a symptom in schizophrenia and epilepsy. Heautoscopy is considered a possible explanation for doppelgänger phenomena. The term polyopic heautoscopy refers to cases where more than one double is perceived. In 2006, Peter Brugger and his colleagues described the case of a man who experienced five doubles resulting from a tumor in the insular region of his left temporal lobe. Another related autoscopy disorder is known as negative autoscopy (or negative heautoscopy), a psychological phenomenon in which the affected person does not see their reflection when looking in a mirror. Although their image may be seen by others, they claim not to see it.
**Timbuktu (software)**
Timbuktu (software):
Timbuktu is a discontinued remote control software product originally developed by WOS Data Systems. Remote control software allows a user to control another computer across the local network or the Internet, viewing its screen and using its keyboard and mouse as though sitting in front of it. Timbuktu is compatible with computers running both Mac OS X and Windows.
Timbuktu was first developed in the late 1980s as a Macintosh product by WOS Data Systems, and a version was later developed to run on Microsoft Windows. WOS Data Systems was purchased by Farallon Computing in July 1988. Farallon was renamed Netopia in 1999 and the company was acquired by Motorola in February 2007. Timbuktu's primary function is remote control, and the application has support for various remote-control features such as multiple displays, screen-scaling, remote screen and keyboard lockout, clipboard synchronization and "on the fly" color-depth reduction for enhanced speed.
In addition to the remote control features (screen-sharing), Timbuktu also allows for file transfers, system profiling, voice and text chat, and remote activity notifications. Timbuktu versions 5.1 and earlier initiate connections over UDP port 407, though versions 5.2 and later use TCP port 407. The program has integrated support for Secure Shell (SSH) tunneling for those who require additional security. Both the Mac and Windows versions can use a standalone user database or integrate with the respective platform's "standard" user database (OpenDirectory on the Mac, and Active Directory or NT Users on Windows). The 8.6 version, released in March 2006, added an optional integration with Skype to enable a user to remote-control any of their Skype contacts who have Timbuktu installed. Starting with the 8.6 version, Timbuktu has been released as a Universal Binary supporting both Intel and PowerPC-based Macs. The 8.8 version, released in September 2009, added support for Mac OS X v10.6, although the ability to receive clicks with modifier keys broke with the release of Mac OS X v10.6.3 (March 2010). Version 8.8.2, released November 2010, resolved the Control session mouse-click modifier key issues as well as Exchange connection performance issues. Version 8.8.3, released in 2011, made Timbuktu compatible with Mac OS X Lion. Version 8.8.4, released in 2012, made Timbuktu compatible with Mac OS X Mountain Lion, resolving a screen rendering issue. Version 8.8.5 for Mac, released in October 2013, made Timbuktu compatible with Mac OS X 10.9 "Mavericks".
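The SSH tunneling mentioned above amounts to forwarding a local TCP port (407 for Timbuktu 5.2 and later) to the remote machine over an encrypted channel. The relaying half of that idea can be sketched in plain Python; this is purely illustrative (no encryption, loopback hosts only, hypothetical names) and is not Timbuktu's or SSH's actual implementation:

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until EOF, then close both ends."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        src.close()
        dst.close()

def tunnel(listen_port, remote_host, remote_port):
    """Accept local connections and relay each one to the remote endpoint."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", listen_port))
    listener.listen(5)

    def serve():
        while True:
            client, _ = listener.accept()
            remote = socket.create_connection((remote_host, remote_port))
            threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
            threading.Thread(target=pipe, args=(remote, client), daemon=True).start()

    threading.Thread(target=serve, daemon=True).start()
    return listener

# Demo: a local echo server stands in for the remote Timbuktu host.
echo_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
echo_srv.bind(("127.0.0.1", 0))
echo_srv.listen(1)
echo_port = echo_srv.getsockname()[1]

def echo_once():
    conn, _ = echo_srv.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

threading.Thread(target=echo_once, daemon=True).start()

# In real use the listen port would be 407; an ephemeral port (0) avoids
# needing elevated privileges for this demo.
listener = tunnel(0, "127.0.0.1", echo_port)
local_port = listener.getsockname()[1]

with socket.create_connection(("127.0.0.1", local_port)) as c:
    c.sendall(b"hello")
    reply = c.recv(1024)
print(reply)  # b'hello'
```

With SSH itself, the equivalent is a local forward such as `ssh -L 407:remotehost:407 user@gateway`, after which Timbuktu is pointed at localhost and its traffic rides the encrypted tunnel.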
Timbuktu for Windows v8.x is not compatible with Windows Vista, Windows 7 or Windows Server 2008, and feature-wise it lags well behind the Mac client from the standpoint of acting as a remote client (host-wise, it is identical). Motorola announced in mid-2009 that Timbuktu v9.0 would be released for "early preview" in Q4 2009, featuring full Windows Vista/Windows 7/Windows Server 2008 compatibility for selected customers and v9.0 was released in early 2011, with minor bugfix updates since. Version 9.1 for Windows apparently made it to pre-beta testing but was never released.
On April 28, 2015, Arris, the current vendor of the Timbuktu line, announced in an email to customers that development of Timbuktu was ending, and sales would be ended in 90 days. Technical support would be provided for five more years. The software is now listed among the company's discontinued products, in the "Remote Access Software" category.
**Apple Certified Desktop Technician**
Apple Certified Desktop Technician:
Apple certification programs are IT professional certifications for Apple Inc. products. They are designed to create a high level of technical proficiency among Macintosh service technicians, help desk support, technical support, system administrators, and professional users. Apple certification exams are offered at Prometric testing centers and Apple Authorized Training Centers, as well as online through Pearson Vue.
Hardware Certifications:
These certifications are designed for individuals interested in becoming Apple service technicians, help desk, desktop support, or Macintosh consultants who need all-around experience in servicing Macintosh computers. It includes two separate certifications.
Apple Certified Macintosh Technician This certification is for the repair and diagnostics of all Macintosh desktops, portables, and servers. This certification is required to perform warranted hardware repairs for an Apple Authorized Service Provider.
Required exams: Apple Service Fundamentals Exam (Pearson Vue exam #SVC-16A) and ACMT 2016 Mac Service Certification Exam (Pearson Vue exam #MAC-16A). Previously, the hardware certification came as two separate credentials, Apple Certified Desktop Technician (ACDT) and Apple Certified Portable Technician (ACPT), but these have been combined into a single hardware certification. This certification also requires extensive knowledge of Apple's operating system OS X 10.6 Snow Leopard, including its installation, settings, troubleshooting, and applications.
Before November 2014, the Apple Certified Macintosh Technician did not cover the Retina MacBook Pro lineup, as well as all Macs released after 2012. To repair a Late-2013 13-inch MacBook Pro with Retina Display, one must be an Apple Certified Macintosh Technician and pass an exam for repairing 13-inch Retina MacBook Pros (Late 2013 to Early 2015). All technicians who received certification after November 20, 2014 are certified to repair any Mac released before 2015.
Pro Apps certifications These certifications are designed for individuals who need a high skill-level in the use of Apple's pro applications or for professionals who provide support for Final Cut Pro software and peripheral devices.
Apple Certified Pro certifications: Final Cut Pro X, Logic Pro X, Logic Pro X 10.1, and Motion 5 (Level One).
IT Professional Certifications:
These certifications are designed for IT professionals who support Mac OS X or who perform Mac OS X desktop support and troubleshooting, such as help desk staff, system administrators, service technicians, and service desk personnel. Each certification is specific to the version of OS X it relates to; an administrator who was qualified as ACTC for OS X 10.4 Tiger is not an ACTC for 10.6 Snow Leopard. Recertification exams are available to speed the process of moving from one version to the next. OS X 10.6 exams were only available until 31 May 2012, when they were withdrawn.
Apple Certified Technical Coordinator (ACTC) This certification is designed for system administrators who provide support to OS X users, as well as maintain the OS X Server platform.
Required exams: OS X Support Essentials (v10.6 through v10.10) and OS X Server Essentials (v10.6 through v10.10). The ACTC certification pathway was withdrawn for new students in 2015.
Apple Certified System Administrator (ACSA) This certification caters to system administrators managing large multiplatform IT networks using Mac OS X Server and other Apple technologies. The ACSA program has been changed to offer individuals more flexibility and is now focused on individual job functions. Each passed exam earns a specialization certificate and a specific number of credits toward ACSA certification, which requires a total of 7 valid (unexpired) credits. OS X 10.6 is the last version to have this certification; there is no equivalent for OS X 10.7 Lion.
Required exams: Mac OS X Server Essentials v10.6 (Prometric exam #9L0-510, withdrawn May 31, 2012); Mac OS X Deployment v10.6 (Prometric exam #9L0-623, withdrawn May 31, 2012); Mac OS X Directory Services v10.6 (Prometric exam #9L0-624, withdrawn May 31, 2012); and Mac OS X Security & Mobility v10.6 (Prometric exam #9L0-625, withdrawn May 31, 2012).
Apple Certified Media Administrator (ACMA) Verifies in-depth knowledge of Xsan architecture, including an ability to install and configure systems, architect and maintain networks, customize and troubleshoot services, and integrate Mac OS X, Final Cut Server, and other Apple technologies within an Xsan installation. ACMA certification is for system administrators and technicians working for resellers, post houses, studios or other large facilities. To earn ACMA status, students must pass three required exams and one elective exam as outlined below.
Required exams: Mac OS X Server Essentials v10.6 (Prometric exam #9L0-510); Xsan 2 Administration (Prometric exam #9L0-622, withdrawn May 31, 2012); and Final Cut Server Level One (Prometric exam #9L0-980). Plus one of the following: Mac OS X Support Essentials v10.6 (Prometric exam #9L0-403, withdrawn May 31, 2012); Mac OS X Directory Services v10.6 (Prometric exam #9L0-624, withdrawn May 31, 2012); Mac OS X Deployment v10.6 (Prometric exam #9L0-623, withdrawn May 31, 2012); or Final Cut Pro Level One (Prometric exam #9L0-827).
Apple Certified Support Professional (ACSP 10.11) OS X El Capitan (iLearn: Advance ACSP resource).
Xsan 2 Administrator Verifies comprehensive knowledge of Apple's SAN file system for Mac OS X. An Xsan Administrator is responsible for the life cycle of Xsan, including installation, deployment, and infrastructure. To earn Xsan Administrator certification, students must pass one exam. Xsan 2 exams were only available until 31 May 2012, when they were withdrawn following Apple's integration of Xsan into OS X Server 10.7.
Required exam: Xsan 2 Administration (Prometric exam #9L0-622, withdrawn May 31, 2012).
Associate Certifications:
Certified Associate certifications are designed for professionals, educators and students to validate their skills in Apple's digital lifestyle and authoring applications and iWork.
Apple Certified Associate in iWork. Required exam: iWork Level One (Prometric exam #9L0-806).
List of Apple certification programs:
Apple Authorized Training Centers (AATC)
Hardware: Apple Certified Macintosh Technician (ACMT); Apple Certified Desktop Technician (ACDT); Apple Certified Portable Technician (ACPT)
Trainers and Training Centers: Apple Certified Trainer (ACT); Apple Authorized Training Centers (AATCs); Apple Authorized Training Centers for Education (AATCEs)
Software Certifications: Apple Certified Pro (Final Cut Pro, Final Cut Express, Color, DVD Studio Pro, Motion, Logic Pro, Shake, Color Management, Xsan for Pro Video, Aperture, Soundtrack Pro); Apple Certified Master Pro (Final Cut Studio, Logic Studio)
Mac OS X and IT certifications: Apple Certified Support Professional (ACSP); Apple Certified Technical Coordinator (ACTC); Apple Certified System Administrator (ACSA); Apple Certified Specialist (ACS); Apple Certified Media Administrator (ACMA); Xsan 2 Administrator
**Paperless office**
Paperless office:
A paperless office (or paper-free office) is a work environment in which the use of paper is eliminated or greatly reduced. This is done by converting documents and other papers into digital form, a process known as digitization. Proponents claim that "going paperless" can save money, boost productivity, save space, make documentation and information sharing easier, keep personal information more secure, and help the environment. The concept can be extended to communications outside the office as well.
Definition:
The paperless world was a publicist's slogan, intended to describe the office of the future. It was facilitated by the popularization of video display computer terminals like the 1964 IBM 2260. An early prediction of the paperless office was made in a 1975 Business Week article. The idea was that office automation would make paper redundant for routine tasks such as record-keeping and bookkeeping, and it came to prominence with the introduction of the personal computer. While the prediction of a PC on every desk was remarkably prophetic, the "paperless office" was not. Improvements in printers and photocopiers made it much easier to reproduce documents in bulk, causing the worldwide use of office paper to more than double from 1980 to 2000. This was attributed to the increased ease of document production and widespread use of electronic communication, which resulted in users receiving large numbers of documents that were often printed out.
Since about 2000, at least in the US, the use of office paper has leveled off and is now decreasing, which has been attributed to a generation shift; younger people are believed to be less inclined to print out documents, and more inclined to read them on a full-color interactive display screen. According to the United States Environmental Protection Agency, the average office worker generates approximately two pounds of paper and paperboard products each day. The term "The Paperless Office" was first used in commerce by Micronet, Inc., an automated office equipment company, in 1978.
History:
Traditional offices have paper-based filing systems, which may include filing cabinets, folders, shelves, microfiche systems, and drawing cabinets, all of which require maintenance, equipment, and considerable space, and are resource-intensive. In contrast, a paperless office could simply have a desk, chair, and computer (with a modest amount of local or network storage), and all of the information would be stored in digital form. Speech recognition and speech synthesis could also be used to facilitate the storage of information digitally. Once computer data is printed on paper, it becomes out-of-sync with computer database updates. Paper is difficult to search and arrange in multiple sort arrangements, and similar paper data stored in multiple locations is often difficult and costly to track and update. A paperless office would have a single-source collection point for distributed database updates, and a publish-subscribe system. Modern computer screens make reading less exhausting for the eyes; a laptop computer can be used on a couch or in bed. With tablet computers and smartphones offering many low-cost value-added features, such as video animation, video clips, and full-length movies, many argue that paper is now obsolete to all but those who are resistant to technological change. eBooks are often free or low-cost compared to hard-copy books. Others argue that paper will always have a place because it affords different uses than screens.
Environmental impact of paper:
Some believe that paper product manufacturing contributes significantly to deforestation and man-made climate change, and produces greenhouse gases. Others argue that paper product manufacturing, especially in North America, supports the ecological and economic balance of sustainable forestry. According to the 2018 American Forest & Paper Association Sustainability Report, paper manufacturing decreased greenhouse gas emissions by 20% over an eleven-year period. Measures such as recycling can help reduce the environmental impact of paper. Some paper production outside of North America may lead to air pollution with the release of nitrogen dioxide (NO2), sulfur dioxide (SO2), and carbon dioxide (CO2). Waste water discharged from pulp and paper mills outside of North America may contain solids, nutrients, and dissolved organic matter that are classified as pollutants. Nutrients such as nitrogen and phosphorus can cause or exacerbate eutrophication of fresh water bodies. Printing inks and toners are very expensive and use environment-damaging volatile organic compounds, heavy metals and non-renewable oils, although standards for the amount of heavy metals in ink have been set by some regulatory bodies. Deinking recycled paper pulp results in a waste slurry, sometimes weighing 22% of the weight of the recycled wastepaper, which may go to landfills.
Environmental impact of electronics:
A paperless work environment requires an infrastructure of electronic components to enable the production, transmission, and storage of information. The industry that produces these components is one of the least sustainable and most environmentally damaging sectors in the world. The process of manufacturing electronic hardware involves the extraction of precious metals and the production of plastic on an industrial scale. The transmission and storage of digital data is facilitated by data centers, which consume significant amounts of the electricity supply of a host country.
Eliminating paper via automation and electronic forms automation:
The need for paper is eliminated by using online systems, such as replacing index cards and rolodexes with databases, typed letters and faxes with email, and reference books with the internet. Another way to eliminate paper is to automate paper-based processes that rely on forms, applications and surveys to capture and share data. This method is referred to as "electronic forms" or e-forms and is typically accomplished by using existing print-perfect documents in electronic format to allow for prefilling of existing data, capturing data manually entered online by end-users, providing secure methods to submit form data to processing systems, and digitally signing the electronic documents without printing.
The technologies that may be used with electronic forms automation include:
Portable Document Format (PDF), to create, display, and interact with electronic documents and forms
E-form (electronic form) management software, to create, integrate, and route forms and form data with processing systems
Databases, to capture data for prefilling and processing documents
Workflow platforms, to route information and documents and direct process flow
E-mail (electronic mail) communication, which allows sending and receiving information of all kinds and enables attachments
Digital signature solutions, to digitally sign documents (used by end-users)
Web servers, to host the process, receive submitted data, store documents, and manage document rights
One of the main issues that has kept companies from adopting paperwork automation is difficulty capturing digital signatures in a cost-effective and compliant manner. The E-Sign Act of 2000 in the United States provided that a document cannot be rejected on the basis of an electronic signature and required all companies to accept digital signatures on documents. Today there are sufficient cost-effective options available, including solutions that do not require end-users to purchase hardware or software.
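As a rough illustration of the sign-and-verify workflow digital signature solutions implement: the document bytes are hashed, the hash is signed, and any later change to the document invalidates the signature. Real e-signature products use public-key cryptography and signer-identity assurance; the sketch below substitutes an HMAC with a shared secret purely as a stand-in, to stay self-contained:

```python
import hashlib
import hmac

# Demo-only shared secret; production systems would use an asymmetric
# key pair so verifiers never hold the signing key.
SECRET = b"signing-key-for-demo"

def sign_document(doc_bytes: bytes) -> str:
    """Fingerprint the document, then sign the fingerprint."""
    digest = hashlib.sha256(doc_bytes).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).hexdigest()

def verify_document(doc_bytes: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_document(doc_bytes), signature)

form = b"Name: A. Smith\nConsent: yes\n"
sig = sign_document(form)
print(verify_document(form, sig))                 # True
print(verify_document(form + b"tampered", sig))   # False
```

The key property shown is tamper evidence: a single changed byte in the form produces a different hash, so the stored signature no longer verifies.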
One of the great benefits of this type of software is that OCR (Optical character recognition) can be used, which enables the user to search the full text of any file. Additionally, user-defined tags can be added to each file to make it easier to locate certain files throughout the entire system.
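The combination described above (OCR'd text searchable in full, narrowed by user-defined tags) can be sketched with a toy in-memory index. Names and document IDs here are illustrative; a real document management system would pair this with an OCR engine and a proper search index:

```python
# id -> {"text": OCR'd full text (lowercased), "tags": user-defined tags}
documents = {}

def add_document(doc_id, ocr_text, tags=()):
    """Index a scanned document's OCR output and its tags."""
    documents[doc_id] = {"text": ocr_text.lower(), "tags": set(tags)}

def search(query=None, tag=None):
    """Return IDs of documents matching a full-text query and/or a tag."""
    hits = []
    for doc_id, doc in documents.items():
        if query and query.lower() not in doc["text"]:
            continue
        if tag and tag not in doc["tags"]:
            continue
        hits.append(doc_id)
    return sorted(hits)

add_document("inv-001", "Invoice 2023 Acme Corp total $400", tags={"finance"})
add_document("hr-007", "Employment contract Jane Doe", tags={"hr"})
print(search(query="acme"))   # ['inv-001']
print(search(tag="hr"))       # ['hr-007']
```

Full-text search finds documents by any word the OCR recovered, while tags let users locate files by their own categories regardless of the scanned text.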
Some paperless software is bundled with scanning hardware and works seamlessly to separate and organize important documents. Paperless software might also allow people to apply online signatures to important documents, which can be used in any small business or office. Document management and archiving systems do offer some methods of automating forms. Typically, the point at which document management systems start working with a document is when the document is scanned and/or sent into the system. Many document management systems include the ability to read documents via optical character recognition and use that data within the document management system's framework. While this technology is essential to achieving a paperless office, it does not address the processes that generate paper in the first place.
Digitizing paper-based documents:
Another key aspect of the paperless office philosophy is the conversion of paper documents, photos, engineering plans, microfiche, and all the other paper-based systems to digital documents. Technologies that may be used for this include scanners, digital mail solutions, book copiers, wide format scanners (for engineering drawings), microfiche scanners, fax-to-PDF conversion, online post offices, multifunction printers, and document management systems. Each of these technologies uses software that converts the raster formats (bitmaps) into other forms depending on need. Generally, they involve some form of image compression technology that produces smaller raster images, or use optical character recognition (OCR) to convert a document into text. A combination of OCR and raster is used to enable searchability while maintaining the original form of the document. An important step is the labeling related to paper-to-digital conversion and the cataloging of scanned documents. Some technologies have been developed to do this, but they generally involve either human cataloging or automated indexing on the OCR document. However, scanners and software continue to improve, with the development of small, portable scanners able to scan double-sided A4 documents at around 30-35 ppm to a raster format (typically TIFF fax 4 or PDF). An issue faced by those wishing to take the paperless philosophy to the limit has been copyright laws. These laws may restrict the transfer of documents protected by copyright from one medium to another, such as converting books to electronic format.
Securing and tracing documents:
As awareness of identity theft and data breaches became more widespread, new laws and regulations were enacted, requiring companies that manage or store personally identifiable information to take proper care of those documents. Paperless office systems are easier to secure than traditional filing cabinets, and can track individual accesses to each document.
Difficulties in adopting the paperless office:
A major difficulty in "going paperless" is that much of a business's communication is with other businesses and individuals, as opposed to just being internal. Electronic communication requires both the sender and the recipient to have easy access to appropriate software and hardware. Costs and temporary productivity losses when converting to a paperless office are also a factor, as are government regulations, industry standards, legal requirements, and business policies, which may also slow the change. Businesses may encounter technological difficulties such as file format compatibility, longevity of digital documents, system stability, and employees and clients not having appropriate technological skills. For these reasons, while there may be a reduction of paper, some uses of paper will likely remain indefinitely. However, a 2015 questionnaire suggested that nearly half of small and medium-sized businesses believed they had gone, or could go, paperless by the end of that year.
**Mood (literature)**
Mood (literature):
In literature, mood is the atmosphere of the narrative. Mood is created by means of setting (locale and surroundings in which the narrative takes place), attitude (of the narrator and of the characters in the narrative), and descriptions. Though atmosphere and setting are connected, they may be considered separately to a degree. Atmosphere is the aura of mood that surrounds the story. It is to fiction what the sensory level is to poetry or mise-en-scene is to cinema. Mood is established to affect the reader emotionally and psychologically and to provide a feeling for the narrative.
Elements:
Mood is generally created through several different things. Setting, which provides the physical location of the story, is used to create a background in which the story takes place. Different settings can affect the mood of a story differently and usually support or conflict with the other content of the story in some way. For example, the desert may be a setting for a cowboy story and may generate a mood of solitude, desolation, and struggle, among other possible associations. The attitude of the narrator is another element that helps generate mood. As the reader is dependent on the narrator's perspective of the story, readers see the story through the narrator's lens, feeling the way the narrator feels about what happens or what is being described. Embedded in the attitude of a narrator are the feelings and emotions that make it up. A similar element that goes into generating mood is diction, that is, the choice and style of words the writer uses. Diction conveys a sensibility as well as portrays the content of a story in specific colors, thus affecting the way the reader feels about it.
Difference from tone:
Tone and mood are not the same. The tone of a piece of literature is the speaker's or narrator's attitude towards the subject, rather than what the reader feels, as in mood. Mood is the general feeling or atmosphere that a piece of writing creates within the reader. Mood is produced most effectively through the use of setting, theme, voice and tone. Tone can indicate the narrator's mood, but the overall mood comes from the totality of the written work, even in first-person narratives. The effect a literary work has upon the reader is subjective and produces different associations, while the text made by the author is presented to the reader as an objective artifact. The mood is suggested by the elements the author utilizes, but relies on the subjective response of the reader.
**Founders' Pie Calculator**
Founders' Pie Calculator:
The Founders' Pie Calculator is a tool for distributing shares when starting a business venture. It was first described in an article by Frank Demmler, an Adjunct Teaching Professor of Entrepreneurship at Carnegie Mellon University. Contrary to the popular notion, the shares are not distributed equally (because "it's fair") but by using a system of five important aspects of any business venture, assigning a relative weight to each aspect and then rating the founders on each of them.
**Staphylococcus virus G1**
Staphylococcus virus G1:
Staphylococcus virus G1 is a virus of the family Herelleviridae, genus Kayvirus. As a member of group I of the Baltimore classification, Staphylococcus virus G1 is a dsDNA virus. All members of the family Herelleviridae share a nonenveloped morphology consisting of a head and a tail separated by a neck. Its genome is linear. The propagation of the virions includes attachment to a host cell (a bacterium, as Staphylococcus virus G1 is a bacteriophage) and the injection of the double-stranded DNA; the host transcribes and translates it to manufacture new particles. Replicating its genetic content requires host-cell DNA polymerases and, hence, the process is highly dependent on the cell cycle. The Gp67 protein of G1 has been found to interact with its host's RNA polymerase through an interaction with a sigma factor. The phage contains a genome of 138,715 base pairs with a GC content of 30.4% and 214 predicted genes; 88.5% of the DNA codes for open reading frames, and the gene density (the number of genes per kilobase) is therefore 1.54.
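The quoted gene density follows directly from the genome statistics above; a short calculation (a sketch using only the figures stated in the text) confirms it:

```python
# Genome statistics quoted for Staphylococcus virus G1
genome_bp = 138_715       # genome length in base pairs
predicted_genes = 214     # number of predicted genes

# Gene density = genes per kilobase of genome
genome_kb = genome_bp / 1000
gene_density = predicted_genes / genome_kb

print(round(gene_density, 2))  # → 1.54
```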
**Desktop search**
Desktop search:
Desktop search tools search within a user's own computer files as opposed to searching the Internet. These tools are designed to find information on the user's PC, including web browser history, e-mail archives, text documents, sound files, images, and video. A variety of desktop search programs are now available. Most desktop search programs are standalone applications. Desktop search products are software alternatives to the search software included in the operating system, helping users sift through desktop files, emails, attachments, and more. Desktop search emerged as a concern for large firms for two main reasons: untapped productivity and security. According to analyst firm Gartner, up to 80% of some companies' data is locked up inside unstructured data — the information stored on a user's PC, the directories (folders) and files they've created on a network, documents stored in repositories such as corporate intranets and a multitude of other locations. Moreover, many companies have structured or unstructured information stored in older file formats to which they don't have ready access.
The sector attracted considerable attention in the late 2004 to early 2005 period from the struggle between Microsoft and Google. According to market analysts, both companies were attempting to leverage their monopolies (of web browsers and search engines, respectively) to strengthen their dominance. Due to Google's complaint that users of Windows Vista could not choose a competitor's desktop search program over the built-in one, an agreement was reached between the US Justice Department and Microsoft that Windows Vista Service Pack 1 would enable users to choose between the built-in and other desktop search programs, and select which one is to be the default. In September 2011, Google discontinued Google Desktop.
Technologies:
Most desktop search engines build and maintain an index database to improve performance when searching large amounts of data. Indexing usually takes place when the computer is idle, and most search applications can be set to suspend indexing if a portable computer is running on batteries, in order to save power. There are notable exceptions, however: Voidtools' Everything Search Engine, which performs searches over only file names, not contents, is able to build its index from scratch in just a few seconds. Another exception is Vegnos Desktop Search Engine, which performs searches over filenames and files' contents without building any indices. An index may also be out of date when a query is performed. In this case, results returned will not be accurate (that is, a hit may be shown for a file that no longer exists, and a file that does match may not be shown). Some products have sought to remedy this disadvantage by building a real-time indexing function into the software. There are also disadvantages to not indexing: the time to complete a query can be significant, and the issued query can be resource-intensive.
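The index-based approach described above can be sketched as a tiny inverted index: each token maps to the set of documents containing it, and a query is answered by intersecting those sets. This is an illustrative sketch only; the file names and contents are hypothetical, and real engines add ranking, metadata fields, and incremental updates.

```python
from collections import defaultdict

# Hypothetical document collection standing in for files on disk.
documents = {
    "report.txt": "quarterly sales report for the northern region",
    "notes.txt": "meeting notes on quarterly budget planning",
    "readme.txt": "installation notes for the search tool",
}

# Build the inverted index: token -> set of documents containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for token in text.lower().split():
        index[token].add(doc_id)

def search(query: str) -> set:
    """Return the documents containing every term in the query."""
    terms = query.lower().split()
    if not terms:
        return set()
    return set.intersection(*(index[t] for t in terms))

print(sorted(search("quarterly notes")))  # → ['notes.txt']
```

Because the index is built ahead of time, each query costs only a few set intersections rather than a scan of every file, which is exactly the trade-off (fast queries, stale-index risk) discussed above.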
Desktop search tools typically collect three types of information about files: file and folder names; metadata, such as titles, authors, and comments in file types such as MP3, PDF and JPEG; and file content, for the types of documents supported by the tool. Long-term goals for desktop search include the ability to search the contents of image files, sound files and video by context.
Platforms & their histories:
Windows Indexing Service, a "base service that extracts content from files and constructs an indexed catalog to facilitate efficient and rapid searching", was originally released in August 1996. It was built in order to speed up manual searching for files on personal desktops and corporate computer networks. Indexing Service worked by using Microsoft web servers to index files on the desired hard drives. Indexing was done by file format. Using terms that users provided, a search was conducted that matched the terms to the data within the file formats. The largest issue Indexing Service faced was that every time a file was added, it had to be indexed. This, coupled with the fact that the entire index was cached in RAM, made hardware a huge limitation: indexing large numbers of files required extremely powerful hardware and very long wait times.
In 2003, Windows Desktop Search (WDS) replaced Microsoft Indexing Service. Instead of only matching terms to the details of the file format and file names, WDS brought content indexing to all Microsoft files and text-based formats such as e-mail and text files. This means that WDS looked into the files and indexed their content. Thus, when a user searched a term, WDS no longer matched just information such as file format types and file names, but also terms and values stored within those files. WDS also brought "instant searching", meaning the user could type a character and the query would instantly start searching, updating the results as the user typed more characters. Because Windows Desktop Search used a lot of processing power, it would only index while it was directly queried or while the PC was idle. Even so, indexing the entire hard drive still took hours. The index would be around 10% of the size of all the files that it indexed; e.g., if the indexed files amounted to around 100 GB, the index size would be about 10 GB.
With the release of Windows Vista came Windows Search 3.1. Unlike its predecessors WDS and Windows Search 3.0, version 3.1 could search through both indexed and non-indexed locations seamlessly. Also, the RAM and CPU requirements were greatly reduced, cutting back indexing times immensely. Windows Search 4.0 runs on all PCs with Windows 7 and up.
Mac OS:
In 1994 the AppleSearch search engine was introduced, allowing users to fully search all documents within their Macintosh computer, including file format types, meta-data on those files, and content within the files. AppleSearch was a client/server application, and as such required a server separate from the main device in order to function. The biggest issue with AppleSearch was its large resource requirements: "AppleSearch requires at least a 68040 processor and 5MB of RAM." At the time, a Macintosh computer with these specifications was priced at approximately $1400, equivalent to $2050 in 2015. On top of this, the software itself cost an additional $1400 for a single license.
In 1997, Sherlock was released alongside Mac OS 8.5. Sherlock (named after the famous fictional detective Sherlock Holmes) was integrated into Mac OS's file browser, Finder. Sherlock extended the desktop search function to the World Wide Web, allowing users to search both locally and externally. Adding additional functions—such as internet access—to Sherlock was relatively simple, as this was done through plugins written as plain text files. Sherlock was included in every release of Mac OS from Mac OS 8.5, before being deprecated and replaced by Spotlight and Dashboard in Mac OS X 10.4 Tiger; it was officially removed in Mac OS X 10.5 Leopard. Spotlight was released in 2005 as part of Mac OS X 10.4 Tiger. It is a selection-based search tool, which means the user can invoke a query using only the mouse. Spotlight allows the user to search the Internet for more information about any keyword or phrase contained within a document or webpage, and uses a built-in calculator and Oxford American Dictionary to offer quick access to small calculations and word definitions. While Spotlight initially has a long startup time, this decreases as the hard disk is indexed. As files are added by the user, the index is constantly updated in the background using minimal CPU and RAM resources.
Linux:
There is a wide range of desktop search options for Linux users, depending upon the skill level of the user and their preference for desktop tools which tightly integrate into their desktop environment, command-shell functionality (often with advanced scripting options), or browser-based user interfaces to locally running software. In addition, many users create their own indexing from a variety of indexing packages (e.g. one which does extraction and indexing of PDF/DOC/DOCX/ODT documents well, another search engine which works with vCard, LDAP, and other directory/contact databases), as well as the conventional find and locate commands.
Ubuntu:
Ubuntu Linux did not have desktop search until the Feisty Fawn release (7.04). Using Tracker desktop search, the feature was very similar to Mac OS's AppleSearch and Sherlock. It not only featured the basic capabilities of file format sorting and meta-data matching, but added support for searching through emails and instant messages. In 2014 Recoll was added to Linux distributions, working with other search programs such as Tracker and Beagle to provide efficient full-text search. This greatly increased the types of queries and file types that Linux desktop searches could handle. A major advantage of Recoll is that it allows for greater customization of what is indexed; Recoll will index the entire hard disk by default, but can be made to index only selected directories, omitting directories that will never need to be searched.
openSUSE:
Starting with KDE 4, the NEPOMUK framework was introduced. It provided the ability to index a wide range of desktop content and email, and to use semantic web technologies (e.g. RDF) to annotate the database. The introduction faced a few glitches, many of which seemed to stem from the triplestore. Performance improved (at least for queries) by switching the backend to a stripped-down version of the Virtuoso Open Source Edition; however, indexing remained a common user complaint.
Based on user feedback, Nepomuk indexing and search were replaced with the Baloo framework, based on Xapian.
**Winpepi**
Winpepi:
WinPepi is a freeware package of statistical programs for epidemiologists, comprising seven programs with over 120 modules. WinPepi is not a complete compendium of statistical routines for epidemiologists but it provides a very wide range of procedures, including those most commonly used and many that are not easy to find elsewhere. This has repeatedly led reviewers to use a "Swiss army knife" analogy. Each program has a comprehensive fully referenced manual.
WinPepi had its origins in 1983 in a book of programs for hand-held calculators. In 1993, this was developed into a set of DOS-based computer programs by Paul M. Gahlinger, with the assistance of one of the original authors of the calculator programs, Prof. J. H. Abramson. The package came to be called Pepi (an acronym for "Programs for EPIdemiologists") and evolved, after its fourth version in 2001, into WinPepi (Pepi-for-Windows). New, expanded versions were issued at frequent intervals. Professor Joe Abramson died on February 17, 2017, and since then the package is no longer developed. The latest update (version 11.65) was released on August 23, 2016.
The programs are notable for their user-friendliness. A portal links to programs and manuals. Menus, buttons, on-screen instructions, help screens, pop-up hints, and built-in error traps are also provided. The programs can also be operated from a USB flash drive. WinPepi does not provide data management facilities. With some exceptions, it requires the entry (at the keyboard or by pasting from a spreadsheet or text file) of data that have already been counted or summarized.
**Container (type theory)**
Container (type theory):
In type theory, containers are abstractions which permit various "collection types", such as lists and trees, to be represented in a uniform way. A (unary) container is defined by a type of shapes S and a type family of positions P, indexed by S. The extension of a container is a family of dependent pairs consisting of a shape (of type S) and a function from positions of that shape to the element type. Containers can be seen as canonical forms for collection types. For lists, the shape type is the natural numbers (including zero). The corresponding position types are the types of natural numbers less than the shape, for each shape.
For trees, the shape type is the type of trees of units (that is, trees with no information in them, just structure). The corresponding position types are isomorphic to the types of valid paths from the root to particular nodes on the shape, for each shape.
Note that the natural numbers are isomorphic to lists of units. In general the shape type will always be isomorphic to the original non-generic container type family (list, tree etc.) applied to unit.
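The list example can be sketched in ordinary code (Python here, as an untyped approximation of the dependently typed construction): a list is represented by its shape (the length) paired with a function from positions to elements, and the two representations are interconvertible.

```python
# A list as a container extension: shape = length n,
# positions = 0..n-1, payload = a function from positions to
# elements. This is an untyped sketch of the dependently typed idea.

def to_container(xs):
    """Encode a list as a (shape, position -> element) pair."""
    n = len(xs)
    return (n, lambda p: xs[p])

def from_container(container):
    """Decode a (shape, lookup) pair back into a list."""
    shape, lookup = container
    return [lookup(p) for p in range(shape)]

print(from_container(to_container([10, 20, 30])))  # → [10, 20, 30]
```

The round trip losing no information is what it means for the shape/position presentation to be a canonical form for the list type.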
One of the main motivations for introducing the notion of containers is to support generic programming in a dependently typed setting.
Categorical aspects:
The extension of a container is an endofunctor. It takes a function g to λ(s,f).(s, g∘f). This is equivalent to the familiar map g in the case of lists, and does something similar for other containers.
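The action on functions can be sketched the same way (an untyped Python approximation, with the small encoding helpers repeated so the sketch is self-contained): mapping g over a container composes g with the position-to-element function and leaves the shape unchanged, which for lists coincides with the usual map.

```python
# Containers are functorial: the action on a function g sends
# (s, f) to (s, g . f), leaving the shape s untouched.

def to_container(xs):
    return (len(xs), lambda p: xs[p])

def from_container(c):
    shape, lookup = c
    return [lookup(p) for p in range(shape)]

def container_map(g, c):
    """Apply the endofunctor to a function g: (s, f) -> (s, g ∘ f)."""
    shape, lookup = c
    return (shape, lambda p: g(lookup(p)))

xs = [1, 2, 3]
mapped = from_container(container_map(lambda x: x * 10, to_container(xs)))
print(mapped)                                # → [10, 20, 30]
print(list(map(lambda x: x * 10, xs)))       # → [10, 20, 30]  (agrees with list map)
```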
Indexed containers:
Indexed containers (also known as dependent polynomial functors) are a generalisation of containers, which can represent a wider class of types, such as vectors (sized lists). The element type (called the input type) is indexed by shape and position, so it can vary by shape and position, and the extension (called the output type) is also indexed by shape.
**Tone Clock**
Tone Clock:
The Tone Clock, and its related compositional theory Tone-Clock Theory, is a post-tonal music composition technique, developed by composers Peter Schat and Jenny McLeod. Music written using tone-clock theory features a high economy of musical intervals within a generally chromatic musical language. This is because tone-clock theory encourages the composer to generate all their harmonic and melodic material from a limited number of intervallic configurations (called 'Intervallic Prime Forms', or IPFs, in tone-clock terminology). Tone-clock theory is also concerned with the way that the three-note pitch-class sets (trichords or 'triads' in tone-clock terminology) can be shown to underlie larger sets, and considers these triads as a fundamental unit in the harmonic world of any piece. Because there are twelve possible triadic prime forms, Schat called them the 'hours', and imagined them arrayed in a clock face, with the smallest hour (012 or 1-1 in IPF notation) in the 1 o'clock position, and the largest hour (048 or 4-4 in IPF notation) in the 12 o'clock position. A notable feature of Tone-Clock Theory is 'tone-clock steering': transposing and/or inverting hours so that each note of the chromatic aggregate is generated once and once only.
Relationship to pitch-class set theory and serialism:
While Tone-Clock Theory displays many similarities to Allen Forte's pitch-class set theory, it places greater emphasis on the creation of pitch 'fields' from multiple transpositions and inversions of a single set-class, while also aiming to complete all twelve pitch-classes (the 'chromatic aggregate') with minimal, if any, repetition of pitch-classes. While the emphasis of Tone-Clock Theory is on creating the chromatic aggregate, it is not a serial technique, as the ordering of pitch-classes is not important. Having said that, it bears a certain similarity to the technique of 'serial derivation', which was used by Anton Webern and Milton Babbitt amongst others, in which a row is constructed from only one or two set-classes. It also bears a similarity to Josef Hauer's system of 'tropes', albeit generalised to sets of any cardinality.
Peter Schat:
The term 'tone clock' (toonklok in Dutch) was originally coined by Dutch composer Peter Schat, in reference to a technique he had developed of creating twelve-note pitch 'fields' by transposing and inverting a trichord so that all twelve pitch-classes would be created once and once only. Schat discovered that it was possible to achieve a trichordally partitioned aggregate from all twelve trichords, with the exception of the diminished triad (036 or 3-10 in Forte's pitch-class set theory). Schat called the 12 trichords the 'hours', and they became central to the harmonic organization in a number of his works. He created a 'zodiac' of the hours, which shows in graphical form the symmetrical patterns created by the tone-clock steerings of the hours. (Note that Hour X is substituted with its tetrachord, the diminished seventh, which can be tone-clock steered).
Jenny McLeod and Tone-Clock Theory:
In her as-yet-unpublished monograph 'Chromatic Maps', New Zealand composer Jenny McLeod extended and expanded Schat's focus on trichords to encompass all 223 set-classes, thus becoming a true 'Tone-Clock Theory'. She also introduced new terminology in order to 'simplify' the labelling and categorization of the set-classes, and to draw attention to the specific transpositional properties within a field.
The most succinct musical expression of the theory is in her 24 Tone Clock Pieces, written between 1988–2011. Each of these piano works explores different aspects of tone-clock theory.
McLeod's terminology:
The following terms are explained in McLeod's Chromatic Maps I: Intervallic Prime Form (IPF): the prime form of a pitch-class set, expressed as a series of interval classes (e.g. set-class (037) is called 3-4 in Tone-Clock Theory, as these are the interval classes between successive pitches in the prime form). Where possible, IPFs should be labelled using hour-group notation (see below). Furthermore, if an IPF can be rewritten so that the number of different interval classes in the title is one or two, then this is the preferred notation: e.g. IPF 143 (0158 in pc-set theory) can be rewritten as 414 or 434, which is to be preferred, as it makes the relationship to the trichords clearer — see below.
Hours: the 12 trichordal set-classes, called 'triads' in Tone-Clock Theory. The 'first hour' is therefore IPF 1-1 (in pc-set theory, this would be set-class 3-1 or (012)), while the 'twelfth hour' is IPF 4-4 (in pc-set theory, this would be set-class 3-12 or (048)). In Tone-Clock Theory, the hours are often referred to using Roman numerals — so IV is IPF 1-4, while IX is IPF 2-5.
Major/minor form: For 'asymmetrical' hours (hours that are formed from two different interval classes), the 'minor' form is the inversion of the triad with the smallest ic on the bottom, while the 'major' form is the inversion with the largest ic on the bottom. So, XIm is equivalent to a standard minor triad (3-4), while XIM is equivalent to a major triad (4-3).
Hour Groups: IPFs with only one or two interval classes can often be related to a single hour, and relabelled using the Roman numeral hour notation to make this relationship clear. For instance, the tetrachord IPF 242 clearly relates to the 'eighth hour', IPF 2-4 (set-class 3-8 in pc-set theory). It can therefore be labelled as VIII4 — the 4 relating to its cardinality, a tetrachord. Note that some IPFs cannot be labelled as hour-groups if the distribution of intervals is ambiguous: e.g. for IPF 2232, it is unclear as to whether the generating trichord is 2-2 (VI) or 2-3 (VII). However, 2232 can be rewritten as 3223, 5225 or 5555 or 2323, all of which are valid hour groups (see 'Multiple-Hour Groups' below).
Oedipus Groups: The commonest kind of hour-group, in which two interval classes alternate (e.g. the octatonic scale, in which the interval classes proceed 1212121), relating to the second hour (II, or IPF 1-2). These are simply written in the form: II8.
Multiple-Hour Groups: Some IPFs can be rearranged so that while they are no longer in prime form, they do display a different hour relationship — for instance, 414 (IVM4) can also be rewritten as 434 (XIM4). In Tone-Clock Theory, this is considered to show that an IPF has multiple relationships to different hours, which can be brought out by the composer depending on how they are voiced and utilized.
Symmetrical Pentads: A pentachord/pentad that has a clear relationship to an asymmetrical hour, but in which the two interval classes are arrayed symmetrically rather than alternately (e.g. 2442) is called a 'symmetrical pentad', and is written thus: SP VIII.
Steering: one IPF transposes by another (i.e. IPF a 'steers' IPF b). If IPF a and b are the same, then this is 'self steering'. Note that the IPF does not necessarily remain in its prime form, but can also appear inverted. In Tone-Clock Theory, the 'steering group' (the IPF that is underlying the transpositional levels) has a kind of 'deep structure' status — the listener does not necessarily hear its immediate effect, but it governs elements such as voice-leadings.
Reverse Steering: the 'steering group' becomes the 'steered group' and vice versa — i.e. IPF b 'steers' IPF A. In Tone-Clock Theory, this is considered to have a kind of 'symmetry', and often appears to provide contrast or 'closure' to a passage.
Twelve-Tone Steering or Tone-Clock Steering: a specific steering of an IPF so that the chromatic aggregate is created with no repetition of pc. All of the triads except the tenth hour (the diminished triad) can be steered in this way. Some tetrachords, and all hexachords that are self-complementary (i.e. not Z-related) can also be steered in this way.
Anchor Form: the creation of the twelve-tone aggregate with no pc repetition, typically from a tetrachord, but using a second IPF to complete the aggregate.
Mathematical generalizations of 'tessellating' set-classes:
New Zealand composer and music theorist Michael Norris has generalized the concept of 'tone-clock steering' into a theory of 'pitch-class tessellation', and has developed an algorithm that can provide tone-clock steerings in 24TET. He has also written about and analyzed Jenny McLeod's 'Tone Clock Pieces'.
**Epstein–Barr virus latent membrane protein 2**
Epstein–Barr virus latent membrane protein 2:
Epstein–Barr virus (EBV) latent membrane protein 2 (LMP2) are two viral proteins of the Epstein–Barr virus. LMP2A/LMP2B are transmembrane proteins that act to block tyrosine kinase signaling. LMP2A is a transmembrane protein that inhibits normal B-cell signal transduction by mimicking an activated B-cell receptor (BCR). The N-terminus domain of LMP2A is tyrosine phosphorylated and associates with Src family protein tyrosine kinases (PTKs) as well as spleen tyrosine kinase (Syk). PTKs and Syk are associated with BCR signal transduction.
LMP2 gene structure and expression:
Latent Membrane Protein 2 (LMP2) is a rightward-transcribing gene. LMP2's transcript originates across the fused terminal repeats, in sequences at opposite ends of the genome. 16–24 hours after infection, the genome circularizes and the open reading frame is created. Messages of 1.7 kb and 2.0 kb are created by alternative promoter usage and differ only in the sequences of the first exon. These messages are expressed in Epstein–Barr virus-transformed lymphoblastoid cell cultures. The ratio of these messages varies widely and unpredictably, suggesting little coordinate control of promoter activity or mRNA abundance. Proteins of 497 (LMP2A) and 378 (LMP2B) residues are encoded by these two messages. The two isoforms of LMP2 differ only in that LMP2A contains an extra 119-residue N-terminal domain encoded in exon 1; LMP2B's first exon is non-coding. Initiation of translation is presumed to occur at the first available methionine that is in-frame in exon two. Twelve membrane-spanning segments, ending with a short 28-residue COOH-terminal tail, are common to both proteins.
LMP2A protein interactions:
The 119-amino-acid N-terminal cytoplasmic domain of LMP2A has several motifs that mediate interactions between proteins, including eight tyrosine residues. Two motifs centered on Y74 and Y85 are spaced 7 residues apart to form an immunoreceptor tyrosine-based activation motif (ITAM), commonly found in Fc receptors and signal molecules of B-cell and T-cell receptors. Receptor docking with molecules containing cytoplasmic tyrosine kinases is governed by phosphorylation of ITAM motifs. In lymphoblastoid cell cultures, Syk tyrosine kinases have been found in LMP2A immunoprecipitates following in vitro kinase reactions followed by Syk antibody reimmunoprecipitation. Affinity precipitation experiments have shown that Syk interacts with phosphorylated peptides corresponding to the LMP2A ITAM. Residues required for Syk binding have been identified by inducing Y74F and Y85F point mutations. The tyrosine kinase LYN has also been detected in immunoprecipitates from transiently transfected B cells, binding at residue Y112. Constitutive phosphorylation occurs on tyrosine, serine and threonine residues.
LMP2A function:
Epstein–Barr virus (EBV) establishes a lifelong latent infection in B lymphocytes. Viral LMP2A mRNA is frequently detected in peripheral blood B lymphocytes, and the protein is often present in tumor biopsies from EBV malignancies. This suggests LMP2A plays an important role in viral latency, as well as in the progression of EBV-related diseases such as Burkitt's lymphoma, nasopharyngeal carcinoma, and Hodgkin's lymphoma. Portis and Longnecker (2004) found that LMP2A induces activation of the B-cell Ras pathway in vivo. Using downstream inhibitors of Ras signaling components, they demonstrated that activation of the PI3K/Akt pathway is involved in LMP2A-mediated B-cell survival and resistance to apoptosis. Caldwell et al. (1998) demonstrated the ability of LMP2A to provide survival signals to B-cells in vivo, where expression of an LMP2A transgene in mice disrupts normal B-cell development. This results in BCR-negative cells being able to exit the bone marrow and survive in peripheral lymphoid organs. B-cells from the LMP2A transgenic E line undergo immunoglobulin light chain rearrangements, but not heavy chain rearrangement. This indicates that LMP2A signaling bypasses the requirement for immunoglobulin recombination and allows immunoglobulin M-negative cells to bypass apoptosis, allowing them to colonize peripheral lymphoid organs.
LMP2B:
Eight exons of the LMP2 isoforms encode 12 membrane-spanning segments that are connected by short hydrophilic loops and end with a 27 amino acid cytoplasmic C-terminal domain. LMP2B, unlike LMP2A, does not contain the N-terminal 119 amino acid cytoplasmic signaling domain. Most LMP2 research has focused on the LMP2A isoform due to its unique expression in latently infected B lymphocytes in situ. The function of the LMP2B protein is unknown, and there has not been a comprehensive phenotypic analysis of the LMP2B isoform because of its hydrophobic character. While the role of LMP2B in pathogenesis remains uncertain, homology studies comparing the LMP2 gene of EBV with Rhesus and Baboon lymphocryptoviruses have revealed that promoter regulatory elements, Epstein–Barr nuclear antigen-2 responsiveness, and the ability to make LMP2B transcripts are conserved. This implies that LMP2B plays an as-yet-unrecognized role in the EBV life cycle. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Hexaconazole**
Hexaconazole:
Hexaconazole is a broad-spectrum systemic triazole fungicide used for the control of many fungi, particularly ascomycetes and basidiomycetes. Major consumption is in Asian countries, where it is used mainly for the control of rice sheath blight in China, India, Vietnam, and parts of East Asia. It is also used for control of diseases in various fruits and vegetables. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Scope (project management)**
Scope (project management):
In project management, scope is the defined features and functions of a product, or the scope of work needed to finish a project. Scope involves gathering the information required to start a project, including the features the product needs to meet its stakeholders' requirements. Project scope is oriented towards the work required and methods needed, while product scope is more oriented toward functional requirements. If requirements are not completely defined and described, and if there is no effective change control in a project, scope or requirement creep may ensue. Scope management Scope management is the process of defining and managing the scope of a project to ensure that it stays on track, within budget, and meets the expectations of stakeholders. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Angelism**
Angelism:
In theology, angelism is a pejorative for arguments that human beings are essentially angelic, and therefore sinless in actual nature. The term is used as a criticism, to identify ideas which reject conceptions of human nature as being (to some degree) sinful and lustful: "[Angelism] minimizes concupiscence and therefore ignores the need for moral vigilance and prayer to cope with the consequences of original sin." | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Raw Run**
Raw Run:
Raw Run is recognized colloquially within the longboarding community as a recorded video showcasing the entire descent down a hill, from top to bottom. This is typically done all in one take to showcase the rider's consistency and skill.
Notable Examples:
Colorado native and longboarder, Zak Maytum, made national news when a video of him descending a roadway reportedly achieving 70 mph went viral. The YouTube video titled "Raw Run: Zak Maytum" reached one million views in under two days. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Vernolepin**
Vernolepin:
Vernolepin is a sesquiterpene lactone isolated from the dried fruit of Vernonia amygdalina. It shows platelet anti-aggregating properties and is also an irreversible DNA polymerase inhibitor, hence may have antitumor properties. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Equine multinodular pulmonary fibrosis**
Equine multinodular pulmonary fibrosis:
Equine multinodular pulmonary fibrosis is a chronic lung disease of horses. There is evidence that the disease is caused by infection with a gammaherpesvirus, equine herpesvirus 5. The disease usually affects adult horses, reducing their ability to exercise as a result of the formation of nodular lesions in the lungs.
Signs and symptoms:
Signs of equine multinodular pulmonary fibrosis are mainly weight loss, fever, respiratory distress and depression. Unwillingness to move, mild cough and intermittent tachypnea have also been reported. As the disease progresses, the general condition of the horse usually deteriorates, with possible signs including severe dyspnea, hypoxemia or nasal discharge.
Diagnosis:
Generally EMPF has been categorized as an uncommon disease in horses of idiopathic origin. Compared to other interstitial fibrosing lung diseases in horses, the lesions in EMPF differ remarkably due to their nodular pattern. In histologic examinations, a sharp border may be discerned between the fibrotic and intact lung tissue. EHV-5 infection has frequently been established as a common finding among horses with EMPF. Diagnosis is based on PCR results and pathomorphological findings from the lungs. The disease has been reported as part of a granulomatous process in an apparent silicate pneumoconiosis. Among horses with diagnosed EMPF, thoracic radiography has shown several changes: a generalized mixed interstitial and nodular pattern in the lung parenchyma, a large radiodense opacity in the caudodorsal area of the lung, and thickened peribronchial walls. Thoracic ultrasonography has demonstrated bilaterally generalized pleural roughening, several comet tails and multiple hyperechogenic scattered areas.
Pathology findings:
Based on a study of 5 horses by Poth, Niedermaier and Hermanns (2009), as well as a study of 24 horses investigated by Williams et al. (2007), there are significant pathomorphological findings in lungs with equine multinodular pulmonary fibrosis. Gross findings include pulmonary induration in all lung lobes, with lesions restricted to the lungs and the bronchiolar lymph nodes. The lesions are distributed in two patterns, appearing as large, whitish to tan, firm nodules of fibrosis. Histological findings include fibrosis and inflammation in different stages and degrees. In the alveoli, an accumulation of neutrophils can be identified, along with enlarged, vacuolated macrophages and several multinucleated giant cells. In addition, there is hypertrophy of type II pneumocytes, formation of abnormal cystic airspaces, and viral eosinophilic intranuclear inclusion bodies in intraluminal macrophages.
Treatment and prognosis:
Different treatments have been tried in horses diagnosed with EMPF, but the prognosis of EMPF is guarded to unfavourable. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Circulation: Cardiovascular Imaging**
Circulation: Cardiovascular Imaging:
Circulation: Cardiovascular Imaging is a scientific journal published by Lippincott Williams & Wilkins for the American Heart Association. The journal presents articles on clinical trials and observational studies, with a focus on innovative imaging approaches to diagnosing cardiovascular disease. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Harris matrix**
Harris matrix:
The Harris matrix is a tool used to depict the temporal succession of archaeological contexts and thus the sequence of depositions and surfaces on a 'dry land' archaeological site, otherwise called a 'stratigraphic sequence'. The matrix reflects the relative position and stratigraphic contacts of observable stratigraphic units, or contexts. It was developed in 1973 in Winchester, England, by Dr. Edward C. Harris.
Harris matrix:
The concept of creating seriation diagrams of archaeological strata based on the physical relationship between strata had had some currency in Winchester and other urban centres in England prior to Harris's formalisation. One of the results of Harris's work, however, was the realisation that sites had to be excavated stratigraphically, in the reverse order to that in which they were created, without the use of arbitrary measures of stratification such as spits or planums. In his Principles of archaeological stratigraphy Harris first proposed the need for each unit of stratification to have its own graphic representation, usually in the form of a measured plan. In articulating the laws of archaeological stratigraphy and developing a system in which to demonstrate simply and graphically the sequence of deposition or truncation on a site, Harris, it has been argued, has followed in the footsteps of notable stratigraphic archaeologists such as Mortimer Wheeler, without necessarily being a notable excavator himself.
Harris matrix:
Harris's work was a vital precursor to the development of single context planning by the Museum of London and also the development of land use diagrams, all facets of a suite of archaeological recording tools and techniques developed in the UK which allow in-depth analysis of complex archaeological data sets, usually from urban excavations.
Harris' laws of archaeological stratigraphy:
The first four laws were published in 1979. A fifth law has been added following papers presented at the "Interpreting Stratigraphy: a Review of the Art" conferences in the UK from 1992 to 2003.
Law of superposition In a series of layers and interfacial features, as originally created, the upper units of stratification are younger and the lower are older, for each must have been deposited on, or created by the removal of, a pre-existing mass of archaeological stratification.
Law of original horizontality Any archaeological layer deposited in an unconsolidated form will tend towards a horizontal disposition. Strata which are found with tilted surfaces were so originally deposited, or lie in conformity with the contours of a pre-existing basin of deposition.
Harris' laws of archaeological stratigraphy:
Law of original continuity Any archaeological deposit, as originally laid down, will be bounded by the edge of the basin of deposition, or will thin down to a feather edge. Therefore, if any edge of the deposit is exposed in a vertical plane view, a part of its original extent must have been removed by excavation or erosion: its continuity must be sought, or its absence explained.
Harris' laws of archaeological stratigraphy:
Law of stratigraphic succession Any given unit of archaeological stratification takes its place in the stratigraphic sequence of a site from its position between the undermost of all units which lie above it and the uppermost of all those units which lie below it and with which it has a physical contact, all other superpositional relationships being regarded as redundant.
Harris' laws of archaeological stratigraphy:
Law of original consolidation This law makes the distinction between architectural stratigraphy and all other types in regard to three criteria: When intact, architectural stratigraphy is of consolidated nature, as opposed to the loose or scattered below-ground remains. Erosion causes parts of buildings to become part of soil stratigraphy.
Architectural stratigraphy is characterised by human intentionality, which is only seldom the case with below-ground strata.
Gravity: architectural stratigraphy left in situ is pulled down by gravity, in combination with human or natural intervention, while below-ground stratigraphy is created by gravity. As a result, architectural stratigraphy scatters with time, the oldest parts being those which resisted the effect of time.
In use:
In constructing a matrix, the latest contexts sit on top of the matrix and the earliest at the bottom, with the lines that link them together representing direct stratigraphic contact (note that while all stratigraphic relationships are physical, not all physical relationships are stratigraphic). The matrix thus demonstrates the temporal relationship between any two units of archaeological stratification. While excavating, it is best practice to compile the area and site stratigraphic matrices during the progress of an excavation through reference to both the drawn and written record. Regular daily checking of the record and the compilation of the matrix itself both help inform the individual archaeologist on the physical processes of site formation and highlight any areas where dubious relationships, such as H relationships or loops in the recorded sequence, may occur. Loops are sequences in the matrix that produce temporal anomalies, so that the earliest context in a sequence of contexts appears to be later than the latest context by virtue of errors in excavation or recording.
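A loop of this kind is, in graph terms, a cycle in the directed "later than" relationships between contexts. The sketch below (with hypothetical context numbers, not drawn from any real site record) shows how such a loop can be detected automatically with a depth-first search.

```python
# Each pair (later, earlier) records "context `later` is stratigraphically
# later than context `earlier`". A loop means the record is inconsistent.
def find_loop(edges):
    """Return True if the later-than relationships contain a loop."""
    graph = {}
    for later, earlier in edges:
        graph.setdefault(later, []).append(earlier)
    WHITE, GREY, BLACK = 0, 1, 2  # unvisited / in progress / done
    state = {}

    def visit(node):
        state[node] = GREY
        for nxt in graph.get(node, []):
            s = state.get(nxt, WHITE)
            if s == GREY:                 # back-edge: temporal anomaly
                return True
            if s == WHITE and visit(nxt):
                return True
        state[node] = BLACK
        return False

    return any(visit(n) for n in list(graph) if state.get(n, WHITE) == WHITE)

consistent = [(1, 2), (2, 3), (1, 3)]  # a valid partial order
looped = [(1, 2), (2, 3), (3, 1)]      # context 1 both above and below 3
```

Running `find_loop` on the second set of relationships flags the recording error before the sequence is used for analysis.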
In use:
Urban archaeological sites are complex affairs, often generating thousands of units of archaeological stratigraphy (contexts). It is of even more vital importance when excavating such sites to compile the matrix as the excavation progresses. Such sites by definition produce multi-linear sequences of succession, and to date the best way to get a handle on these sequences is to compile the matrix by hand, based on the drawings and the context sheets. This ensures an internally consistent record and that the complexity of the site is given due regard. Computer programmes do exist which can aid the production of a matrix, though at present these tend towards articulating linear sequences rather than multi-linear sequences.
In use:
The Harris matrix is a tool that aids the accurate and consistent excavation of a site and articulates complex sequences in a clear and understandable way. Harris matrices play an invaluable role in the articulation of sequence and provide the building blocks from which higher order units of stratigraphically related events can be constructed.
In use:
Example Take this hypothetical section as an example of matrix formation. Here there are twelve contexts, numbered thus:
1. A horizontal layer
2. Masonry wall remnant
3. Backfill of the wall construction cut (sometimes called construction trench)
4. A horizontal layer, probably the same as 1
5. Construction cut for wall 2
6. A clay floor abutting wall 2
7. Fill of shallow cut 8
8. Shallow pit cut
9. A horizontal layer
10. A horizontal layer, probably the same as 9
11. Natural sterile ground formed before human occupation of the site
12. Trample in the base of cut 5, formed by workmen's boots during construction of the structure that wall 2 and floor 6 are associated with
The order in which these events occurred, and the reverse order in which they should have been excavated, would be demonstrated by the following Harris matrix.
In use:
Completed matrix The later a context's formation is, the higher it is in the matrix, and conversely the earlier it is, the lower. Relationships between contexts are recorded in the sequence of formation, so even though wall 2 is physically higher than other contexts in section, its position in the matrix is immediately under backfill 3 and below floor 6. This is because the formation of the backfill and floor happened later. Also note the matrix splits into two parts below the construction cut 5. This is because the relationships across the section have been destroyed by the cutting of construction cut 5, and even if it is likely that layers 1 and 4 are the same deposit, this cannot be guaranteed if the only information we had was this section. However, the position of cut 5 and natural layer 11 "ties" the matrix together above and below the split.
In use:
Interpretation Starting at the bottom, the order in which events occurred in this section is revealed by the matrix as follows. Natural ground formation 11 was followed by the laying down of layers 9 and 10, which "probably" occurred as the same event. Then a shallow pit 8 was cut and backfilled with 7. This pit feature was in turn "sealed" by the laying down of layer 1, which is probably the same event as layer 4. Following this, a major change in land use occurs as construction cut 5 is dug, immediately followed by trample 12 off the feet of people working in the construction cut, who then build wall 2, after which they backfill the excess space between wall 2 and cut 5 with backfill 3. Finally, clay floor 6 is laid down to the right of wall 2 over backfill 3, indicating a probable interior surface.
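The interpretation above amounts to a topological ordering of the contexts. As a sketch, one plausible encoding of the example's relationships as (earlier, later) pairs can be sorted with Kahn's algorithm; the exact physical contacts would of course depend on the section drawing, so the edge list here is an assumption consistent with the narrative.

```python
from collections import defaultdict, deque

# One plausible encoding of the example's stratigraphic relationships as
# (earlier, later) pairs, following the interpretation in the text.
edges = [(11, 10), (11, 9), (10, 8), (9, 8), (8, 7), (7, 1), (7, 4),
         (1, 5), (4, 5), (5, 12), (12, 2), (2, 3), (3, 6)]

def formation_order(edges):
    """Kahn's algorithm: earliest contexts first, latest last."""
    succ, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for a, b in edges:
        succ[a].append(b)
        indeg[b] += 1
        nodes |= {a, b}
    queue = deque(sorted(n for n in nodes if indeg[n] == 0))
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order

order = formation_order(edges)  # starts with natural ground 11, ends with floor 6
```

The reverse of `order` is the sequence in which the contexts should have been excavated.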
In use:
The nature of archaeological investigation and the subjective nature of all human experience means that a degree of interpretive activity obviously occurs during the process of excavation. The Harris matrix itself however serves to provide a check on observable quantifiable physical phenomena and relies on the excavator understanding which way in the sequence is 'up' and the ability of the excavator to excavate and record honestly, accurately and stratigraphically. The process of excavation destroys the context and requires the excavator to be able and willing to make informed (by experience and where necessary collaboration) decisions about which context or contexts lay at the top of the sequence.
In use:
As long as undercutting is not endemic, in practice onsite errors in judgment should become evident especially if temporary sections are kept for stratigraphic control in areas of a site that are hard to discern. However, archaeological sections, while being useful and valuable, only ever present a slice or caricature of a sequence, and often underrepresent its complexity. The use of archaeological sections when dealing with stratigraphic complexity is limited and their use should be context-sensitive rather than as a running arbiter of sequence.
Carver matrix:
Professor Martin Carver of the University of York has also developed a seriation diagram, known as the Carver matrix (not to be confused with the military term also named CARVER matrix). This diagram, which is based on the Harris matrix, is designed to represent the time lapse in use of recognizable archaeological entities such as floors and pits. Like Edward Harris, he used contexts numbered and defined on site as the basic elements of the sequence, but he added higher order groupings ("feature" and "structure") to increase the interpretive power. Several other people, such as Norman Hammond, looked to develop similar systems in the 1980s and 1990s.
References and sources:
References Sources The MoLAS Archaeological Site Manual. London: MoLAS, 1994. ISBN 0-904818-40-3. Harris, Edward C. (1979 & 1989). Principles of Archaeological Stratigraphy. London & New York: Academic Press. ISBN 0-12-326651-3. Harris, Edward C.; Brown III, Marley R.; Brown, Gregory J. (eds.) (1993). Practices of Archaeological Stratigraphy. London: Academic Press. ISBN 0-12-326445-6.
References and sources:
Roskams, Steve (Ed.) (2000). Interpreting Stratigraphy. Papers presented to the Interpreting Stratigraphy Conferences 1993-1997. BAR International Series 910. ISBN 1-84171-210-8. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Primordial fluctuations**
Primordial fluctuations:
Primordial fluctuations are density variations in the early universe which are considered the seeds of all structure in the universe. Currently, the most widely accepted explanation for their origin is in the context of cosmic inflation. According to the inflationary paradigm, the exponential growth of the scale factor during inflation caused quantum fluctuations of the inflaton field to be stretched to macroscopic scales, and, upon leaving the horizon, to "freeze in".
Primordial fluctuations:
At the later stages of radiation- and matter-domination, these fluctuations re-entered the horizon, and thus set the initial conditions for structure formation.
The statistical properties of the primordial fluctuations can be inferred from observations of anisotropies in the cosmic microwave background and from measurements of the distribution of matter, e.g., galaxy redshift surveys. Since the fluctuations are believed to arise from inflation, such measurements can also set constraints on parameters within inflationary theory.
Formalism:
Primordial fluctuations are typically quantified by a power spectrum which gives the power of the variations as a function of spatial scale. Within this formalism, one usually considers the fractional energy density of the fluctuations, given by
$$\delta(\vec{x}) \overset{\mathrm{def}}{=} \frac{\rho(\vec{x})}{\bar{\rho}} - 1 = \int \mathrm{d}k\, \delta_k\, e^{i\vec{k}\cdot\vec{x}},$$
where $\rho$ is the energy density, $\bar{\rho}$ its average, and $k$ the wavenumber of the fluctuations. The power spectrum $P(k)$ can then be defined via the ensemble average of the Fourier components:
$$\langle \delta_k\, \delta_{k'} \rangle = \frac{2\pi^2}{k^3}\, \delta(k - k')\, P(k).$$
Formalism:
There are both scalar and tensor modes of fluctuations.
Scalar modes Scalar modes have the power spectrum $P_s(k) = |\delta_R|^2$. Many inflationary models predict that the scalar component of the fluctuations obeys a power law, $P_s(k) \propto k^{n_s - 1}$.
Formalism:
For scalar fluctuations, $n_s$ is referred to as the scalar spectral index, with $n_s = 1$ corresponding to scale-invariant fluctuations. The scalar spectral index describes how the density fluctuations vary with scale. As the size of these fluctuations depends upon the inflaton's motion when these quantum fluctuations are becoming super-horizon sized, different inflationary potentials predict different spectral indices. These depend upon the slow-roll parameters, in particular the gradient and curvature of the potential. In models where the curvature is large and positive, $n_s > 1$; on the other hand, models such as monomial potentials predict a red spectral index, $n_s < 1$. Planck provides a value of $n_s \approx 0.96$.
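To make the effect of the spectral index concrete, here is an illustrative (not source-derived) calculation of the relative power between two wavenumbers for a power-law spectrum, using the convention above in which $n_s = 1$ is scale invariant.

```python
# For P_s(k) proportional to k**(n_s - 1), the ratio of power between two
# wavenumbers k1 and k2 is (k2 / k1)**(n_s - 1). With the Planck value
# n_s ~ 0.96 the spectrum is slightly "red": more power on large scales
# (small k) than on small scales (large k).

def power_ratio(k1, k2, ns):
    """Ratio P_s(k2) / P_s(k1) for a power-law scalar spectrum."""
    return (k2 / k1) ** (ns - 1.0)

red = power_ratio(1.0, 10.0, 0.96)   # below 1: power falls toward small scales
flat = power_ratio(1.0, 10.0, 1.0)   # exactly 1: scale invariant
```

Over a decade in scale, $n_s = 0.96$ corresponds to roughly a 9% decrease in power, which is the kind of tilt CMB experiments measure.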
Formalism:
Tensor modes The presence of primordial tensor fluctuations is predicted by many inflationary models. As with scalar fluctuations, tensor fluctuations are expected to follow a power law and are parameterized by the tensor index (the tensor version of the scalar index). The ratio of the tensor to scalar power spectra is given by
$$r = 2\,\frac{|\delta_h|^2}{|\delta_R|^2},$$
where the 2 arises due to the two polarizations of the tensor modes. 2015 CMB data from the Planck satellite gives an upper limit of $r < 0.11$.
Adiabatic/isocurvature fluctuations:
Adiabatic fluctuations are density variations in all forms of matter and energy which have equal fractional over/under densities in the number density. So for example, an adiabatic photon overdensity of a factor of two in the number density would also correspond to an electron overdensity of two. For isocurvature fluctuations, the number density variations for one component do not necessarily correspond to number density variations in other components. While it is usually assumed that the initial fluctuations are adiabatic, the possibility of isocurvature fluctuations can be considered given current cosmological data. Current cosmic microwave background data favor adiabatic fluctuations and constrain uncorrelated isocurvature cold dark matter modes to be small. | kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded |
**Transport in Somalia**
Transport in Somalia:
Transport in Somalia refers to the transportation networks and modes of transport in effect in Somalia. They include highways, airports and seaports, in addition to various forms of public and private vehicular, maritime and aerial transportation.
Roads:
Somalia's network of roads is 21,830 km long. As of 2010, 2,757 km (12%) of roads are paved, 844 km (3.9%) are gravel, and 18,229 km (83.5%) are earth. 2,559 km are primary roads, 4,850 km are secondary roads, and 14,421 km are rural/feeder roads. As of May 2015, over 70,000 vehicles were registered with the Puntland Ministry of Works and Transport.
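As a quick arithmetic check, the figures quoted above are internally consistent; the short script below (values copied from the text) verifies that both breakdowns sum to the total network length and that the quoted percentages follow.

```python
# Figures as quoted in the text (km).
total = 21_830
paved, gravel, earth = 2_757, 844, 18_229           # by surface type
primary, secondary, feeder = 2_559, 4_850, 14_421   # by road class

# Both breakdowns sum exactly to the total network length.
assert paved + gravel + earth == total
assert primary + secondary + feeder == total

def pct(part, whole):
    """Percentage share rounded to one decimal place."""
    return round(100 * part / whole, 1)

shares = {"paved": pct(paved, total),    # 12.6 (the text rounds this to 12%)
          "gravel": pct(gravel, total),  # 3.9
          "earth": pct(earth, total)}    # 83.5
```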
Roads:
A 750 km highway connects major cities in the northern part of the country, such as Bosaso, Galkayo and Garowe, with towns in the south. In 2012, the Puntland Highway Authority (PHA) completed rehabilitation work on the central artery linking Garowe with Galkayo. The transportation body also began an upgrade and repair project in June 2012 on the large Garowe–Bosaso Highway. Additionally, renovations were initiated in October 2012 on the central artery linking Bosaso with Qardho. Plans are also in the works to construct new roads connecting littoral towns in the region to the main thoroughfare. In September 2013, the Somali federal government signed an official cooperation agreement with its Chinese counterpart in Mogadishu as part of a five-year national recovery plan. The pact will see the Chinese authorities reconstruct several major infrastructural landmarks in the Somali capital and elsewhere, as well as the road between Galkayo and Burao in the northern part of the country. In June 2014, the Puntland administration inaugurated a new 5.9 km paved road in the city. The construction project leads to the Bosaso seaport, and was completed in conjunction with UN-HABITAT. The Puntland government plans to invest at least 23 million euros in contributions from international partners in similar road infrastructure development initiatives. In October 2014, the Puntland Highway Authority began construction on a new highway connecting the presidential palace in Garowe with various other parts of the administrative capital. Financing for the project was provided by the Puntland government. According to the Head of the PHA, Mohamud Abdinur Adan, the new thoroughfare aims to facilitate local transportation and movement. Puntland Minister of Public Works Mohamed Hersi also indicated that the Puntland authorities plan to build and repair other roads linking to the regional urban centers.
In December 2014, Galkayo District Mayor Yacqub Mohamed Abdalla and other Puntland officials likewise laid the foundation for a new tarmac road in western Galkayo. The project was funded by the Puntland administration, with other roads in the broader district also slated to be paved with bitumen in 2015. Among the latter streets, a tar construction project began on the Durdur road in the Garsor suburb in February 2015. The main road in the Central Business District as well as the airport road are concurrently scheduled to be paved. In November 2014, the Ministry of Interior and Federalism reached an agreement with the government of Qatar to assist in the renovation of existing roads in Somalia and the construction of new streets.
Roads:
In January 2015, the Interim Juba Administration launched a beautification and cleaning campaign in Kismayo's transportation system. Part of a broader urbanization drive, the initiative includes the clearing of clogged streets and lanes, razing of illegal buildings therein, and further development of the municipal road network. In March 2015, Puntland President Abdiweli Mohamed Ali, in conjunction with EU Ambassador to Somalia Michele Cervone d'Urso and German Ambassador to Somalia Andreas Peschke, launched the Sustainable Road Maintenance Project. Part of the New Deal Compact for Somalia, the initiative's implementation is facilitated by 17.75 million euros and 3 million euros provided by the EU and Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ), respectively. Among other objectives, the project aims to renovate the highway between Galkayo and Garowe, including funding refurbishments on the damaged segments of the road and construction of check dams and flood control structures. The initiative also involves a routine annual maintenance program, which focuses on side brushing, clearing bridges after floods, drainage and culvert clearance, and pothole filling. Additionally, the project will offer policy support to the Puntland Ministry of Public Works and the Puntland Highway Authority, and local contractors will receive on-the-job training to upgrade their skills.
Air:
Airports The Somali Civil Aviation Authority (SOMCAA) is Somalia's national civil aviation authority body. Based at Aden Abdulle International Airport in Mogadishu, it is under the aegis of the federal Ministry of Air and Land Transport. In 2012, the ministry along with the Somali Civil Aviation Steering Committee set a three-year window for reconstruction of the national civil aviation capacity. After a long period of management by the Civil Aviation Caretaker Authority for Somalia (CACAS), SOMCAA in conjunction with the International Civil Aviation Organization also finalized a process in December 2014 to transfer control of Somalia's airspace to the new Air Space Management Centre in the capital. As of 2012, Somalia has 61 airports according to the CIA World Factbook. 7 of these have paved runways. Among the latter, four have runways of over 3,047 m; two between 2,438 m and 3,047 m; and one 1,524 m to 2,437 m long.
Air:
There are 55 airports with unpaved landing areas. One has a runway of over 3,047 m; four are between 2,438 m and 3,047 m in length; twenty are 1,524 m to 2,437 m; twenty-four are 914 m to 1,523 m; and six are under 914 m. Major airports in the country include the Aden Adde International Airport in Mogadishu, the Hargeisa International Airport in Hargeisa, the Kismayo Airport in Kismayo, the Bender Qassim International Airport in Bosaso, the Berbera Airport in Berbera, and the Garowe International Airport in Garowe.
Air:
In September 2013, the Turkish company Favori LLC began operations at the Aden Adde International Airport. The firm announced plans to renovate the aviation building and construct a new one, as well as upgrade other modern service structures. A $10 million project, it will increase the airport's existing 15 aircraft capacity to 60. According to Favori, there were 439,879 domestic and international flights at the airport in 2014, an increase of 319,925 flights from the previous year.
Air:
In April 2014, construction began on a new national Aviation Training Academy at the Aden Adde International Airport. The new institution is intended to enhance the capacity of aviation personnel working in Somalia's airports, and to focus training within the country. A modern terminal is concurrently being built at the airport, with funding provided by the Favori firm. According to Minister of Air Transportation and Civil Aviation Said Jama Qorshel, construction of the new terminal is scheduled to take six months, and is expected to improve the airport's functionality and operations. He added that his Ministry is also slated to establish other airports on the capital's outskirts. This in turn would serve to reduce congestion at the Aden Adde International Airport, which would then be exclusively used by large aircraft. In December 2014, the foundation stone for a new runway was also laid at the Bender Qassim International Airport. The China Civil Engineering Construction Corporation is now slated to upgrade the airport's existing gravel runway, pave it with asphalt, and extend it from 1.8 km to 2.65 km in accordance with the code 4C operations clause.
Air:
Airlines Somali Airlines was the flag carrier of Somalia. Established in 1964, it offered flights to both domestic and international destinations. Due to the outbreak of the civil war in the early 1990s, all of the carrier's operations were officially suspended in 1991. A reconstituted Somali government later began preparations in 2012 for an expected relaunch of the carrier, with the first new Somali Airlines aircraft scheduled for delivery by the end of December 2013. According to the Somali Chamber of Commerce and Industry, the void created by the closure of Somali Airlines has since been filled by various Somali-owned private carriers. Over six of these private airline firms offer commercial flights to both domestic and international locations, including Daallo Airlines, Jubba Airways, African Express Airways, East Africa 540, Central Air and Hajara. Additionally, the Somali central government operates its own public aircraft.
Sea:
Ports Possessing the longest coastline on mainland Africa, Somalia has a number of maritime transport facilities across the country. In total, there are over 15 seaports. Major class ports are found at Mogadishu, Bosaso, Berbera and Kismayo. Two jetty class ports are also situated at Las Khorey and Merca. Additionally, Aluula, Maydh, Lughaya, Eyl, Qandala, Hafun, Hobyo, Garacad and El Maan all have smaller ports. In October 2013, the federal Cabinet endorsed an agreement with the Turkish firm Al-Bayrak to manage the Port of Mogadishu for a 20-year period. According to the Prime Minister's Office, the deal was secured by the Ministry of Ports and Public Works, and also assigns Al-Bayrak responsibility for rebuilding and modernizing the port. In April 2014, the Federal Parliament postponed finalization of the Seaport Management Deal pending the approval of a new foreign investment bill. The MPs also requested that the agreement be submitted to the legislature for deliberation and to ensure that the interests of the port's manual labourers are taken into account. In September 2014, the federal government officially delegated management of the Mogadishu Port to Al-Bayrak.
According to Al-Bayrak, the majority of its revenue share will be re-invested in the seaport through additional port-based trade and new docks, construction materials and machinery. The company also plans to install an environment wall and a closed-circuit camera system in accordance with international security protocols, erect a modern port administration building, and clean the ship entrance channels via underwater surveillance. As of September 2014, the first phase of the renovations was reportedly complete, with the second phase underway. The Port of Bosaso was constructed during the mid-1980s by the Siad Barre administration for annual livestock shipments to the Middle East. In January 2012, a renovation project was launched, with KMC contracted to upgrade the harbor. The initiative's first phase saw the clean-up of unwanted materials from the dockyard and was completed within the month. The second phase involves the reconstruction of the port's adjoining seabed, with the objective of accommodating larger ships. Adeso, an organization founded by Somali environmentalist Fatima Jibrell, began a project for the redevelopment of the Port of Las Khorey. The initiative was later taken up by Faisal Hawar, CEO of the Maakhir Resource Company. In 2012, he brokered an agreement with a Greek investment firm for the development of the commercial seaport. A team of engineers was subsequently enlisted by the Puntland authorities to assess the ongoing renovations taking place at the port. According to the Minister of Ports, Saeed Mohamed Ragge, the Puntland government intends to launch more such development projects in Las Khorey.
Merchant marine:
Somalia has one merchant marine vessel. Established in 2008, it is cargo-based.
Railways:
Rail transport in Somalia consisted of the erstwhile Mogadishu–Villabruzzi Railway, which ran from Mogadishu to Jowhar (then called Villaggio Duca degli Abruzzi). The railway was 114 km in total; the system was built by the colonial authorities in Italian Somaliland in the 1910s. The track gauge was 950 mm (3 ft 1 3⁄8 in). It was dismantled in the 1940s by the British during their military occupation of the territory. In the late 1970s there were plans to restore the railway.
**HD 128093**
HD 128093:
HD 128093 is a double star in the constellation Boötes. The brighter component is an F-type main-sequence star with a stellar classification of F5V and an apparent magnitude of 6.33. It has a magnitude 11.33 companion at an angular separation of 28.1″ along a position angle of 318°.
**Streak (mineralogy)**
Streak (mineralogy):
The streak of a mineral is the color of the powder produced when it is dragged across an un-weathered surface. Unlike the apparent color of a mineral, which for most minerals can vary considerably, the trail of finely ground powder generally has a more consistent characteristic color, and is thus an important diagnostic tool in mineral identification. If no streak seems to be made, the mineral's streak is said to be white or colorless. Streak is particularly important as a diagnostic for opaque and colored materials. It is less useful for silicate minerals, most of which have a white streak or are too hard to powder easily.
The apparent color of a mineral can vary widely because of trace impurities or a disturbed macroscopic crystal structure. Small amounts of an impurity that strongly absorbs a particular wavelength can radically change the wavelengths of light that are reflected by the specimen, and thus change the apparent color. However, when the specimen is dragged to produce a streak, it is broken into randomly oriented microscopic crystals, and small impurities do not greatly affect the absorption of light. The surface across which the mineral is dragged is called a "streak plate", and is generally made of unglazed porcelain tile. In the absence of a streak plate, the unglazed underside of a porcelain bowl or vase or the back of a glazed tile will work. Sometimes a streak is more easily or accurately described by comparing it with the "streak" made by another streak plate.
Because the trail left behind results from the mineral being crushed into powder, a streak can only be made of minerals softer than the streak plate, around 7 on the Mohs scale of mineral hardness. For harder minerals, the color of the powder can be determined by filing or crushing a small sample, which is then usually rubbed on a streak plate. Most minerals that are harder have an unhelpful white streak.
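The softer-than-the-plate rule above can be sketched as a tiny predicate; the hardness figures in the example are standard Mohs reference values, and the function name is illustrative:

```python
# Encodes the rule from the text: a porcelain streak plate sits at
# roughly 7 on the Mohs scale, and only minerals softer than the
# plate are powdered by it.

STREAK_PLATE_HARDNESS = 7.0  # approximate Mohs hardness of unglazed porcelain

def leaves_streak(mineral_hardness: float) -> bool:
    """True if the mineral can be powdered directly on a streak plate."""
    return mineral_hardness < STREAK_PLATE_HARDNESS

print(leaves_streak(2.5))  # native gold (Mohs ~2.5): True
print(leaves_streak(7.5))  # harder minerals must be filed or crushed: False
```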
Some minerals leave a streak similar to their natural color, such as cinnabar, lazurite and native gold. Other minerals leave surprising colors: fluorite always has a white streak, though it can appear in purple, blue, yellow, or green crystals. Hematite, which is black in appearance, leaves a red streak that accounts for its name, from the Greek word "haima", meaning "blood". Galena, which can be similar in appearance to hematite, is easily distinguished by its gray streak.
**Argon flash**
Argon flash:
Argon flash, also known as argon bomb, argon flash bomb, argon candle, and argon light source, is a single-use source of very short and extremely bright flashes of light. The light is generated by a shock wave in argon or, less commonly, another noble gas. The shock wave is usually produced by an explosion. Argon flash devices are almost exclusively used for photographing explosions and shock waves.
Although krypton and xenon can also be used, argon is favored because of its low cost.
Process:
The light generated by an explosion is produced primarily by compression heating of the surrounding air. Replacement of the air with a noble gas considerably increases the light output; with molecular gases, the energy is consumed partially by dissociation and other processes, while noble gases are monatomic and can only undergo ionization; the ionized gas then produces the light. The low specific heat capacity of noble gases allows heating to higher temperatures, yielding brighter emission. Flashtubes are filled with noble gases for the same reason.
Engineering:
Typical argon flash devices consist of an argon-filled cardboard or plastic tube with a transparent window on one end and an explosive charge on the other end. Many explosives can be used; Composition B, PETN, RDX, and plastic bonded explosives are just a few examples.
The device consists of a vessel filled with argon and a solid explosive charge. The explosion generates a shock wave, which heats the gas to very high temperature (over 10⁴ K; published values range from 15,000 K to 30,000 K, with the best values around 25,000 K). The gas becomes incandescent and emits a flash of intense visible and ultraviolet black-body radiation. For this temperature range the peak emission lies between 97 and 193 nm, but usually only the visible and near-ultraviolet ranges are exploited.
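As a rough consistency check, Wien's displacement law (a black-body assumption, in line with the text's description of the emission) maps the quoted temperature range onto the quoted peak-emission window:

```python
# Wien's displacement law: lambda_peak = b / T, with b ≈ 2.898e-3 m·K.
# The 15,000–30,000 K range quoted in the text maps to peak emission
# between roughly 97 and 193 nm, matching the stated window.

WIEN_B = 2.898e-3  # Wien displacement constant, m·K

def peak_wavelength_nm(temperature_k: float) -> float:
    """Peak black-body emission wavelength in nanometres."""
    return WIEN_B / temperature_k * 1e9

print(round(peak_wavelength_nm(15_000)))  # ≈ 193 nm
print(round(peak_wavelength_nm(30_000)))  # ≈ 97 nm
print(round(peak_wavelength_nm(25_000)))  # ≈ 116 nm, deep ultraviolet
```

This is why only the tail of the emission falls in the visible and near-ultraviolet ranges that are usually exploited.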
To achieve emission, a layer of at least one or two optical depths of the gas has to be compressed to sufficient temperature. The light intensity rises to full magnitude in about 0.1 microsecond. For about 0.5 microsecond, instabilities in the shock wave front create significant striations in the produced light; this effect diminishes as the thickness of the compressed layer increases. Only an approximately 75 micrometer thick layer of the gas is responsible for the light emission. The shock wave reflects after reaching the window at the end of the tube; this yields a brief increase in light intensity. The intensity then fades. The amount of explosive controls the intensity of the shock wave and therefore of the flash. The intensity of the flash can be increased, and its duration decreased, by reflecting the shock wave off a suitable obstacle; a foil or a curved glass can be used. The duration of the flash is about as long as the explosion itself, depending on the construction of the lamp, between 0.1 and 100 microseconds. The duration depends on the length of the shock wave's path through the gas, which is proportional to the length of the tube; each centimeter of the shock wave's path through the argon medium has been shown to be equivalent to about 2 microseconds of flash.
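The centimetre-per-two-microseconds relationship above amounts to a one-line calculation; the 10 cm tube length below is a hypothetical example, not a figure from the text:

```python
# Flash duration estimate from the rule of thumb in the text:
# roughly 2 microseconds of emission per centimetre of shock-wave
# path through the argon column.

US_PER_CM = 2.0  # microseconds of flash per cm of argon path

def flash_duration_us(tube_length_cm: float) -> float:
    """Approximate flash duration for a given argon column length."""
    return tube_length_cm * US_PER_CM

# A hypothetical 10 cm argon column would sustain roughly a 20 µs flash,
# within the 0.1–100 µs range given for practical lamps.
print(flash_duration_us(10.0))
```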
Uses:
Argon flash is a standard procedure for high-speed photography, especially for photographing explosions, or less commonly for use in high altitude test vehicles. The photography of explosions and shock waves is made easy by the fact that the detonation of the argon flash lamp charge can be accurately timed relative to the test specimen explosion and the light intensity can overpower the light generated by the explosion itself. The formation of shock waves during explosions of shaped charges can be imaged this way.
As the amount of released radiant energy is fairly high, significant heating of the illuminated object can occur. Especially in the case of high explosives, this has to be taken into account.
Super Radiant Light (SRL) sources are an alternative to argon flash. An electron beam source delivers a brief and intense pulse of electrons to suitable crystals (e.g. doped cadmium sulfide). Flash times in the nanosecond to picosecond range are achievable. Pulsed lasers are another alternative.
**Maropitant**
Maropitant:
Maropitant (INN; trade name Cerenia, pronounced sə-REE-nee-ə), used as maropitant citrate (USAN), is a neurokinin-1 (NK1) receptor antagonist developed by Zoetis specifically for the treatment of motion sickness and vomiting in dogs. It was approved by the FDA in 2007 for use in dogs and in 2012 for cats. Maropitant has mild pain-relieving, anti-anxiety and anti-inflammatory effects.
Veterinary uses:
Injectable maropitant is used in dogs to treat and prevent acute vomiting; tablets are used for preventing vomiting from a variety of causes, though a higher dose is required to prevent vomiting from motion sickness. The injectable version is also licensed for preventing and treating acute vomiting in cats. Maropitant is effective in treating vomiting from a variety of causes, including gastroenteritis, chemotherapy, and kidney failure; when given beforehand, it can prevent vomiting caused by using an opioid as a premedication. Some have claimed that maropitant is better at treating vomiting than nausea, pointing to cats with chronic kidney disease who, when receiving maropitant, had a reduction in vomiting but no corresponding increase in appetite. Maropitant has been used in acute cases of rapid or labored breathing to prevent vomiting that could lead to aspiration pneumonia. It has been given in combination with a benzodiazepine to cats prior to stressful events (such as a veterinary visit) to possibly relieve hypersensitivity. When compared to other antiemetics, maropitant has similar or greater effectiveness to chlorpromazine and metoclopramide for centrally mediated vomiting induced by apomorphine or xylazine. It works better than chlorpromazine and metoclopramide for vomiting peripherally induced with syrup of ipecac. Unlike dimenhydrinate and acepromazine, which are used for motion sickness, maropitant does not cause sedation. Maropitant has been used as an adjunct treatment in severe bronchitis due to its weak anti-inflammatory effects. It alleviates visceral pain and has been found to reduce the amount of general anesthesia (both sevoflurane and isoflurane) needed in some operations. Some believe that maropitant can be used in rabbits and guinea pigs to relieve pain caused by ileus (impaired bowel movements), though it lacks antiemetic effects in rabbits, which cannot vomit. Maropitant is given orally, subcutaneously, and intravenously.
Contraindications:
Maropitant is only indicated for dogs at least 16 weeks old, as some very young puppies suffered bone marrow hypoplasia (incomplete development). It should also not be used in animals with suspected GI obstruction or toxin ingestion. It is not recommended to give maropitant for more than five consecutive days, as it tends to accumulate in the body due to one of the liver enzymes responsible for its metabolism, CYP2D15, becoming saturated. However, there have been studies in which beagles given maropitant for two weeks in a row showed no signs of toxicity. Because maropitant is metabolized by the liver, caution should be taken when giving it to dogs with liver dysfunction. Caution should also be taken when giving it with other drugs that are also highly protein-bound, such as NSAIDs, anticonvulsants, and some behavior-modifying drugs; such drugs compete with maropitant for binding to plasma proteins, increasing the concentration of unbound maropitant in the blood. Maropitant should also not be used with calcium channel antagonists (used to treat high blood pressure) or in animals with heart disease, as it has a slight affinity for calcium and potassium channels.
Side effects:
Maropitant is safer than other antiemetics used in veterinary medicine, in part because of its high specificity for its target, which keeps it from binding to other receptors in the central nervous system. Side effects in dogs and cats include hypersalivation, diarrhea, loss of appetite, and vomiting. Eight percent of dogs taking maropitant at doses meant to prevent motion sickness vomited right after administration, likely due to the local effects maropitant had on the gastrointestinal tract. Small amounts of food beforehand can prevent such post-administration vomiting. One of the most common side effects of subcutaneous administration is moderate to severe pain at the injection site. Although the manufacturer recommends keeping maropitant at room temperature, many have noted that keeping it in the fridge reduces the sting upon injection. Only unbound maropitant causes pain; it is normally formulated with beta-cyclodextrin to increase solubility, and at cooler temperatures more of the maropitant remains bound to the cyclodextrin, leaving less maropitant unbound. While there is no pain associated with intravenous administration of maropitant, pushing a dose in too quickly can temporarily reduce blood pressure. Fewer than 1 in 10,000 dogs and cats experience anaphylactic reactions.
Overdose:
Signs of maropitant overdose include lethargy, irregular or labored breathing, lack of muscle coordination, and tremors. Overdose of the oral formulation can cause salivation and nasal discharge, while overdose of intravenous maropitant can sometimes lead to reddish urine. The LD50 is high, being over 2,000 mg/kg for oral maropitant in rats.
Pharmacology:
Mechanism of action:
Vomiting is caused when impulses from the chemoreceptor trigger zone (CRTZ) in the brain are sent to the vomiting center in the medulla. Motion sickness specifically occurs when signals to the CRTZ originate from the inner ear: motion is sensed by the fluid of the semicircular canals, which causes overstimulation. The signal travels to the brain's vestibular nuclei, then to the CRTZ, and finally to the vomiting center. Maropitant has a similar structure to substance P, the key neurotransmitter in causing vomiting, which allows it to act as an antagonist and bind to the substance P receptor neurokinin 1 (NK1). It is highly selective for NK1 over NK2 and NK3. Maropitant binds to the neurokinin receptors in the vagus nerves leaving the GI tract, as well as to receptors in the CRTZ and the vomiting center. By virtue of working at the last step in triggering vomiting, it can prevent a broader range of stimuli than most antiemetics can. It is effective against emetogens that act at the central nervous system (such as apomorphine in dogs and xylazine in cats), those that act in the periphery (e.g. syrup of ipecac), and those that act in both (e.g. cisplatin).
Pharmacokinetics:
Maropitant's bioavailability is unaffected by the presence of food. Bioavailability is 91% at the standard subcutaneous dose but 24% at the standard oral dose; the standard oral dose is higher to partially compensate for incomplete bioavailability. It binds to plasma proteins at a rate of 99.5%; it has a low volume of distribution (9 L/kg) and is thus not extensively distributed. Subcutaneously administered maropitant reaches peak plasma concentration around half an hour after administration; the mean half-life is 6–8 hours, and a single dose lasts 24 hours in dogs. Orally administered maropitant reaches its peak plasma concentration within two hours. Maropitant undergoes first-pass metabolism by liver enzymes, mainly CYP2D15 (which has high affinity for maropitant and clears over 90% of it) but also the lower-affinity CYP3A12. Repeat dosing of maropitant eventually saturates CYP2D15, causing the drug to accumulate due to reduced clearance. It is therefore recommended not to use maropitant for more than five days straight, and to allow a two-day rest period for maropitant to clear the body and prevent accumulation. Maropitant has over 21 metabolites, though its major one (produced by hydroxylation) is CJ-18,518. Maropitant clearance is slower in cats.
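As an illustration of the quoted 6–8 hour half-life, a simple one-compartment, first-order elimination sketch shows why a single dose covers a day; note that it deliberately ignores the saturable CYP2D15 metabolism described above, which is what drives real-world accumulation on repeat dosing:

```python
# One-compartment, first-order elimination sketch using the 6-8 h
# half-life quoted in the text (mid-range 7 h assumed). This ignores
# the saturable CYP2D15 metabolism, so it underestimates levels on
# repeat dosing.

def fraction_remaining(hours: float, half_life_h: float = 7.0) -> float:
    """Fraction of a dose still in circulation after a given time."""
    return 0.5 ** (hours / half_life_h)

# With a 7 h half-life, under a tenth of a dose remains at the 24 h mark,
# consistent with once-daily dosing.
print(round(fraction_remaining(24.0), 3))  # 0.093
```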
**Food porn**
Food porn:
Food porn (or foodporn) is a glamourized visual presentation of cooking or eating in advertisements, infomercials, blogs, cooking shows, and other visual media. Its origins come from a restaurant review e-commerce platform called Foodporn. Food porn often takes the form of food photography with styling that presents food provocatively, in a similar way to glamour photography or pornographic photography.
History:
One of the earliest forms of the term can be found in an article by Alexander Cockburn, published in December 1977 in The New York Review of Books, in which Cockburn wrote, "True gastro-porn heightens the excitement and also the sense of the unattainable by proffering colored photographs of various completed recipes". Michael F. Jacobson used the term food porn in a 1979 newsletter of the Center for Science in the Public Interest. The term food porn was also used by the feminist critic Rosalind Coward in her 1984 book Female Desire, in which she wrote: Cooking food and presenting it beautifully is an act of servitude. It is a way of expressing affection through a gift [...] That we should aspire to produce perfectly finished and presented food is a symbol of a willing and enjoyable participation in servicing others. Food pornography exactly sustains these meanings relating to the preparation of food. The kinds of picture used always repress the process of production of a meal. They are always beautifully lit, often touched up.
The term food porn does not strictly deal with the connection between food and sexuality. In the United States, food porn is a term applied when "food manufacturers are capitalising on a backlash against low-calorie and diet foods by marketing treats that boast a high fat content and good artery-clogging potential". In the United Kingdom, the term became popular in the 1990s due to the TV cookery programme Two Fat Ladies, after the show's producer described the "pornographic joy" the pair took in using vast quantities of butter and cream.
Connection with business:
Taking pictures of food has become a norm for the younger generation worldwide, as they try to emulate Foodporn, a site that popularized the concept of posting visually appealing videos and photos of food and drink across social media. A study from YPulse shows that 63% of people between thirteen and thirty-two years old have posted pictures of their food on social networking services while eating. Moreover, 57% of people in the same age range posted information about the food they were eating at the time. These figures show food and social media connecting as a trend. People's use of the hashtag #foodporn helps the food industry track audiences on social networking services.
Usage and community:
The meaning of the term food porn has shifted since its first appearances. Articles mentioned food porn as early as the late 1970s. The phrase was then used in a literal manner, describing food that was unhealthy for human consumption, directly comparing it to pornography. Its use took on a new meaning in the early 2000s, when the term began being used to describe food that was presented and prepared in an aesthetically appealing manner. This desire for food has flooded the internet, having significant effects on social media sites that allow such display, such as Instagram, Flickr, Snapchat, Facebook, Reddit, and Twitter. The popularity of displaying food in a physically appealing manner is driven by the users that create these communities. The hashtags that users of these sites have adopted allow food porn to connect people in a way that documents anything about the food, such as the cultures it reflects, calories, presentation, preparation, delicious taste, and anything else that adds to the authenticity of the meal.
Culture:
The term food porn refers to images of food across various media such as TV, cooking magazines, online blogs, mobile apps, websites and social media platforms. Food porn connects strongly with popular culture because people are exposed to food in their everyday lives. It is not specific to social media platforms and can also appear in newspapers and online blogs. Moreover, food porn is experienced globally: cultural language barriers can be bypassed by the use of #foodporn. Food porn is used collectively by online users and does not exclude or privilege one food over another.
Pornographic metaphor:
Contemporary literature and cinema consistently connect food and sexuality. Scholars note historical links between eating and sex, such as male and female humans coming together throughout evolution around food and creating offspring—two essential needs for survival. Today, a more obvious connection exists between the physical acts of eating and having sex in popular culture. In his book Food: The Key Concepts, Warren Belasco examines this particular resonance between kitchen and bedroom in modern-day vocabulary: "these intensely sexualized associations between eating and loving make it difficult to adopt the asceticism implied in eating responsibly. If pastry, sugar, fat, meat, and salt are so closely tied to life's most intimately pleasurable experiences, who would ever want to cut back on them?" When Alexander Cockburn defined the term gastro porn, he used the words excitement and unattainable, implying an element of fantasy that can be seen in both food porn and "traditional" pornography. With the rise of fad diets and exercise programs in the 1980s came a similar upward trend in food-related media and development of eating disorders. As people continued to restrict calories, food-related media increased in popularity due to its ability to provide the consumer with a voyeuristic indulgence of their food fantasies, similar to the voyeuristic indulgence that traditional pornography provides.
**Ptx (Unix)**
Ptx (Unix):
ptx is a Unix utility, named after the permuted index algorithm which it uses to produce a search or concordance report in the Keyword in Context (KWIC) format. It is available on most Unix and Unix-like operating systems (e.g. Linux, FreeBSD). The GNU implementation uses extensions that are more powerful than the older SysV implementation.
The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. There is also a corresponding IBM mainframe utility which performs the same function. Permuted indexes are often used in such places as bibliographic or medical databases, documentation, thesauri, and web sites to aid in locating entries of interest.
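The permuted-index idea that ptx implements can be sketched in a few lines; the output layout and stopword list below are illustrative only and do not reproduce GNU ptx's actual formatting:

```python
# A minimal sketch of the permuted-index (KWIC) idea behind ptx:
# every non-stopword becomes an index entry, shown in its original
# context with the keyword highlighted.

STOPWORDS = {"a", "an", "the", "of", "in", "is"}

def kwic(line: str):
    """Yield (keyword, context) pairs for each indexable word in a line."""
    words = line.split()
    for i, word in enumerate(words):
        if word.lower() in STOPWORDS:
            continue
        before = " ".join(words[:i])
        after = " ".join(words[i + 1:])
        yield word, f"{before} [{word}] {after}".strip()

# Sorting by keyword gives the alphabetical index a KWIC report presents.
for key, ctx in sorted(kwic("the quick brown fox")):
    print(f"{key:>8}: {ctx}")
```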
**Cyanostar**
Cyanostar:
A cyanostar (pentacyanopentabenzo[25]annulene) is a shape-persistent macrocycle that binds anions.
Synthesis:
The cyanostar structure is synthesized in a one-pot reaction from five equivalents of a benzaldehyde bearing a meta-cyanomethyl substituent. A series of Knoevenagel condensations, catalyzed by various bases, stitches them together into the C5-symmetric structure.
Anion binding:
Cyanostar binds anions through hydrogen bonding from the C–H hydrogen bonds, as the hydrogen has a slight positive charge. It is the first binder to make use of cyanostilbene's electropositive CH groups. The hydrogen bonds create an electropositive region in the center of the macrocycle, creating a binding pocket. Cyanostar strongly binds anions that usually can only be bound weakly. The increased binding arises from the formation of a 2:1 complex, with two cyanostars sandwiching the anion on each side. An extended version of this structural pattern is a 4:3 alternating stack of cyanostar molecules complexing a hydrogen-bonded chain of dihydrogen phosphate units.
Rotaxanes:
Two cyanostars can be threaded onto a phosphate diester structure, forming a rotaxane. Because they have a high affinity for the central phosphate group only when it is in its anionic form, there is a substantial and reversible structural change in response to acid–base changes in solution.