**Section restoration** Section restoration: In structural geology, section restoration or palinspastic restoration is a technique used to progressively undeform a geological section in an attempt to validate the interpretation used to build the section. It is also used to provide insights into the geometry of earlier stages of the geological development of an area. A section that can be successfully undeformed to a geologically reasonable geometry, without change in area, is known as a balanced section. Comparably, a palinspastic map is a map view of geological features, often also including present-day coastlines to aid the reader in recognising the area, representing the state before deformation. 2D restoration: Development of technique The earliest attempts to produce restored sections were on foreland fold and thrust belts. This technique assumed a stratigraphic template with unit thicknesses either constant or smoothly varying across the section. Line lengths were measured on the present-day deformed section and transferred to the template, to rebuild the section as it was before deformation started. This method does not guarantee that area is conserved, only line length. The technique was applied to areas of extensional tectonics initially using vertical simple shear. Over the next decade several types of commercial restoration software became available, allowing the technique to be routinely applied. 2D restoration: Deformation algorithms In order to calculate the change in shape of an element within the section, various deformation algorithms are used. Initially many of these were applied manually, but are now available in specialist software packages. It is worth mentioning that these deformation algorithms are approximations and idealizations of actual strain paths and deviate from reality (Ramsay and Huber, 1987). Geologic media are typically not continuum materials; that is, they are not isotropic media as is implicitly assumed in all strain algorithms used for cross-section balancing. That said, balanced cross sections maintain material balance, which is important for conceptualizing kinematic histories of deformed regions. 2D restoration: Vertical/inclined shear This mechanism deforms an element to accommodate a change in shape by movement on closely spaced parallel planes of slip. The commonest assumption is vertical shear, although comparisons with well understood examples suggest that antithetic inclined shear (i.e. in the opposite sense of dip to the controlling fault) at about 60°–70° is the best approximation to the behaviour of real rocks under extension. These algorithms preserve area but do not, in general, preserve line length. Restoration using this type of algorithm can be carried out by hand, but is normally done using specialist software. This algorithm is not generally thought to represent the actual mechanism by which deformation occurs, just to represent a reasonable approximation. 2D restoration: Flexural slip In a flexural slip algorithm deformation occurs by unfolding the deformed fault-bounded horse by slip along bedding planes. This modelling mechanism does represent a real geological mechanism, as shown by slickensides along folded bedding planes. The shape of the unfolded horse is further constrained either by using the restored fault boundary to the previous horse in the restored section or by using an internal pin within the block itself, assuming this was unsheared during the deformation. This algorithm is normally only used in software-based restoration. It preserves both area and line length. 2D restoration: Trishear A trishear algorithm is used to model and restore fault-propagation folds as other algorithms fail to explain thickness changes and strain variations associated with such folds. The deformation within the tip-zone of the propagating fault is idealised to heterogeneous shear within a triangular zone starting at the fault tip. Compaction: In most section restorations there is an element of backstripping and decompaction. This is necessary to adjust the geometry of the section for the compactional effects of later sediment loading. Forward modelling: Section restoration involves undeforming a natural example, a form of inverse modelling. In many cases carrying out forward modelling helps to test out concepts for all or part of the section. 3D restoration: A basic assumption of 2D restoration is that the displacement on all faults is within the plane of the section. It also assumes that no material enters or leaves the section plane. In areas of complex multi-phase or strike slip deformation or where salt is present, this is rarely the case. 3D restoration can only be carried out using specialist software, such as Midland Valley's Move3D, Paradigm's Kine3D or Schlumberger's Dynel3D. The results of such restoration can be used to study the migration of hydrocarbons at an earlier stage.
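The key property attributed to vertical-shear algorithms above (area is preserved, line length in general is not) is easy to demonstrate numerically. The following Python sketch is purely illustrative and is not taken from any of the software packages mentioned; the horizon geometries are invented.

```python
# Minimal sketch of restoration by vertical simple shear (hypothetical data).
# Each column of rock is shifted vertically so that a deformed marker horizon
# returns to a flat datum.  The area between two horizons is preserved exactly,
# but the length of the lower horizon generally changes, matching the
# properties attributed to vertical-shear algorithms in the text.
import numpy as np

dx = 10.0                                        # column width (m)
x = np.arange(0.0, 2000.0, dx)
top = 100.0 + 40.0 * np.sin(x / 300.0)           # deformed upper horizon (depth, m)
base = top + 200.0 + 15.0 * np.cos(x / 500.0)    # deformed lower horizon

def line_length(depths):
    return np.sum(np.hypot(dx, np.diff(depths)))

area_before = np.sum((base - top) * dx)
length_before = line_length(base)

# Vertical shear: move every column up or down so the top horizon sits at the datum.
datum = 100.0
shift = top - datum
top_restored = top - shift                       # flat by construction
base_restored = base - shift

area_after = np.sum((base_restored - top_restored) * dx)
length_after = line_length(base_restored)

print(f"area preserved: {np.isclose(area_before, area_after)}")
print(f"base length before/after: {length_before:.1f} m / {length_after:.1f} m")
```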
**Information Systems International Conference** Information Systems International Conference: Information Systems International Conference (ISICO) is an international conference affiliated with AISINDO, the AIS Indonesia chapter, administered by the Department of Information Systems, Institut Teknologi Sepuluh Nopember, Indonesia. ISICO has taken place biennially since 2011. This event brings together information systems and information technology practitioners from around the world to share and discuss their ideas about issues associated with information technology. Information Systems International Conference: ISICO complements existing IS conferences such as PACIS, AMCIS, ICIS, and ECIS. ISICO 2013 was held in Bali and invited Doug Vogel (Association for Information Systems (AIS) immediate past president) from City University of Hong Kong and Prof. Don Kerr (president of the Australasian AIS chapter) from the University of the Sunshine Coast. It was attended by 340 participants from 9 countries and established the new AIS Chapter of Indonesia (named AISINDO). Information Systems International Conference: In 2015, ISICO collaborated with Procedia Computer Science from Elsevier to publish ISICO full papers in the journal. History: 2011 ISICO 2011 was the first international conference to be managed by the Information Systems Department, Faculty of Information Technology, Institut Teknologi Sepuluh Nopember (ITS). The theme was "Information System for Sustainable Economics Development". ISICO 2011 covered over 10 topics, including Product Knowledge, Information Systems, Data Warehousing, Data Mining, Business Intelligence, Business Process Management, and Business and Management. This conference was supported by National Taiwan University of Science and Technology (NTUST), Taiwan, and Pusan National University. History: 2013 The second conference was held on December 2–4 in Bali. Topics included management, economics, and business; education and curriculum; software engineering and design; artificial intelligence and enterprise systems; and information, network and computer security. Keynote speakers were Prof. Dr. Mohammad Nuh, DEA (Minister of Education and Culture of Indonesia) and Prof. Don Kerr (president of the Australasian AIS chapter and program leader of the Bachelor of Information and Communications Technology, University of the Sunshine Coast, Australia). History: 2015 ISICO collaborated with Procedia Computer Science (PCS) from Elsevier to publish all ISICO papers. PCS focuses on publishing high-quality conference proceedings. It enables fast dissemination so conference delegates can publish their papers in a dedicated online issue on ScienceDirect, which is made freely available worldwide. ISICO's main focus was preparing for the Asia Pacific Free Trade Area opening in 2020. The conference featured Prof. Jae Kyu Lee, PhD, Prof. Dipl.-Ing. Dr. techn. A Min Tjoa, and Prof. Shuo-Yan Chou. ISICO received 230 submissions from 23 countries, mostly from Indonesia, Malaysia, South Korea, Japan, Taiwan and Thailand. History: 2017 ISICO again collaborated with PCS. It was held in Bali on 6–8 November 2017. The theme was “Innovation of Information Systems – visions, opportunities and challenges”. Keynote speakers were Matti Rossi, president of the Association for Information Systems 2017/2018; Caroline Chan, president of the Australian Council of Professors and Heads of Information Systems; and Ahmed Imran, PG IT program coordinator, School of Engineering and Information Technology, University of New South Wales, Australia. Topics included Enterprise Systems, Information Systems Management, Data Acquisition and Information Dissemination, Data Engineering and Business Intelligence, and IT Infrastructure and Security.
**A Course in Miracles** A Course in Miracles: A Course in Miracles (also referred to as ACIM or the Course) is a 1976 book by Helen Schucman. The underlying premise is that the greatest "miracle" is the act of simply gaining a full "awareness of love's presence" in a person's life. Schucman said that the book had been dictated to her, word for word, via a process of "inner dictation" from Jesus Christ. The book is considered to have borrowed from New Age movement writings.ACIM consists of three sections: "Text", "Workbook for Students", and "Manual for Teachers". Written from 1965 to 1972, some distribution occurred via photocopies before a hardcover edition was published in 1976 by the Foundation for Inner Peace. The copyright and trademarks, which had been held by two foundations, were revoked in 2004 after lengthy litigation because the earliest versions had been circulated without a copyright notice.Throughout the 1980s, annual sales of the book steadily increased each year; however, the largest growth in sales occurred in 1992 after Marianne Williamson discussed the book on The Oprah Winfrey Show, with more than two million volumes sold. The book has been called everything from "New Age psychobabble" to "a Satanic seduction" to "The New Age Bible". According to Olav Hammer, the psychiatrist and author Gerald G. Jampolsky was among the most effective promoters of ACIM. Jampolsky's first book, Love is Letting Go of Fear, which is based on the principles of ACIM, was published in 1979 and, after being endorsed on Johnny Carson's show, went on to sell over three million copies by 1990. Origins: A Course in Miracles was written as a collaborative venture between Schucman and William ("Bill") Thetford. In 1958, Schucman began her professional career at Columbia-Presbyterian Medical Center in New York City as Thetford's research associate. In 1965, at a time when their weekly office meetings had become so contentious that they both dreaded them, Thetford suggested to Schucman that "[t]here must be another way". Schucman believed that this interaction acted as a stimulus, triggering a series of inner experiences that were understood by her as visions, dreams, and heightened imagery, along with an "inner voice" which she identified as Jesus (although the ACIM text itself never explicitly claims that the voice she hears speaking is the voice of Jesus). She said that on October 21, 1965, an "inner voice" told her: "This is a Course in Miracles, please take notes." Schucman said that the writing made her very uncomfortable, though it never seriously occurred to her to stop. The next day, she explained the events of her "note-taking" to Thetford. To her surprise, Thetford encouraged her to continue the process. He also offered to assist her in typing out her notes as she read them to him. The process continued the next day and repeated itself regularly for many years. In 1972, the writing of the three main sections of ACIM was completed, with some additional minor writing coming after that point. Origins: For copyright purposes, US courts determined that the author of the text was Schucman, not Jesus. Kenneth Wapnick believed that Schucman did not channel Jesus, but was describing her "own mental experience of divine 'love'". Reception: Since it went on sale in 1976, the text has been translated into 27 languages. 
The book is distributed globally, spawning a range of organized groups.Wapnick said that "if the Bible were considered literally true, then (from a Biblical literalist's viewpoint) the Course would have to be viewed as demonically inspired". He also declared "I often taught in the context of the Bible, even though it is obvious to serious students of A Course in Miracles that it and the Bible are fundamentally incompatible." "Course-teachers Robert Perry, Greg Mackie, and Allen Watson" disagreed about that. Though a friend of Schucman, Thetford, and Wapnick, Catholic priest Benedict Groeschel criticized ACIM and related organizations. Finding some elements of ACIM to be "severe and potentially dangerous distortions of Christian theology", he wrote that it is "a good example of a false revelation" and that it has "become a spiritual menace to many". The evangelical editor Elliot Miller says that Christian terminology employed in ACIM is "thoroughly redefined" to resemble New Age teachings. Other Christian critics say that ACIM is "intensely anti-biblical" and incompatible with Christianity, blurring the distinction between creator and created and forcefully supporting the occult and New Age worldview.Olav Hammer locates A Course in Miracles in the tradition of channeled works from those of Madam Blavatsky through to the works of Rudolf Steiner and notes the close parallels between Christian Science and the teachings of the Course. Hammer called it "gnosticizing beliefs". In "'Knowledge is Truth': A Course in Miracles as Neo-Gnostic Scripture" in Gnosis: Journal of Gnostic Studies, Simon J. Joseph outlines the relationship between the Course and Gnostic thinking. Daren Kemp also considers ACIM to be neo-Gnostic and agrees with Hammer that it is a channeled text. The course has been viewed as a way which "integrates a psychological world view with a universal spiritual perspective" and linked to transpersonal psychology. Reception: Joseph declared: Consequently, new manuscript discoveries, lost gospels, and new “scriptural” revelations represent an effective way of subverting the traditional picture of early Christian origins and destabilizing traditional Christian authority by redefining the cultural boundaries of Christianity in contemporary culture. [...] Since the Course’s redefinition of terms is so offensive to its critics, [...] the Gospel narrative that the Course subverts and redefines is the suffering, death, and crucifixion of Jesus. Reception: The Skeptic's Dictionary describes ACIM as "a minor industry" that is overly commercialized and characterizes it as "Christianity improved". Robert T. Carroll wrote that the teachings are not original but are culled from "various sources, east, and west". He adds that it has gained increased popularity as New Age spirituality writer Marianne Williamson promoted a variant. Associated works: Two works have been described as extensions of A Course in Miracles, Gary Renard's 2003 The Disappearance of the Universe and Marianne Williamson's A Return to Love published in 1992. The Disappearance of the Universe, published in 2003 by Fearless Books, was republished by Hay House in 2004. Publishers Weekly reported that Renard's examination of A Course in Miracles influenced his book.
**ThinCan** ThinCan: ThinCan is the name for a thin client manufactured by Estonian electronic design start-up Artec Group. The ThinCan remained relatively unknown outside Estonia until 2006, when a recent ThinCan iteration was selected as the hardware base for the Linutop, a network appliance that greatly stimulated the market for lightweight computing platforms. The ThinCan was also commercialized by SmartLink under the Revnetek brand name. Hardware: Functionally, all ThinCan production models offer similar features: Front panel: 1/8" stereo audio, USB ports. Hardware: Back panel: Ethernet port, VGA output, PSU connector.Aesthetically, the original ThinCan was an exercise in futuristic looks, with brushed aluminum end caps and a tubular aluminum shape that featured alternating patterns of decorative serrations along the surface of the tube. The tube came painted in one's choice of several transparent colors (black, dark blue, light blue, purple, red) for an authentic "Jetsons" feel. Hardware: After an early prototype based on a custom x86 core, supporting PS/2 keyboard and mouse, the platform was redesigned around an NSC Geode SC2200 supporting only USB peripherals. An optional on-board SmartCard reader attached to an internally mounted USB port made the original ThinCan an instant hit on the local market, due to an Estonian legislation dating from 2001 that mandated the issuance of a national Electronic ID card to all citizens and their use to access many public services. Hardware: Still, while the futuristic design received some attention in the IT press, the prohibitive cost of machining an extruded aluminum tube with intricate decorative serrations prevented the manufacturer from achieving commercial success with this early model. In 2003, the company revised the design towards a simpler cost-effective flat boxy shape for their DBE60 model (initially commercialized as the ThinCan SE). Aside from the addition of a parallel printer port, the DBE60 is functionally identical to the original ThinCan and built around the same NSC Geode SC2200. In 2005, this design was updated for the AMD Geode LX700-based DBE61 model, with USB 2.0 provided by a CS5536 companion chip. The parallel printer port was then removed, returning the design to an all-USB configuration. Linutop SARL retained this model as a starting point for their Linutop-1 product. In 2007, the DBE61 design was upgraded with Gigabit Ethernet support. The manufacturer calls this the DBE62. In 2009, the DBE62 design was reconfigured to use SO DIMM memory and IDE Compact Flash media. The manufacturer calls this the DBE63. Software: Firmware SC2200-based models boot using a proprietary loader called Clara that was developed by Artec. Software: All LX700-based models can natively boot using Coreboot. This started as a Geode GX port developed by AMD for the OLPC prototype, to which Artec added Geode LX support. That code was later adopted and further polished by AMD, after the OLPC switched to the LX700 for its production models. This Coreboot port was used on the SmartLink model and on several custom Artec models configured as network appliances. Software: Meanwhile, both Artec's PXE-boot and Linutop's USB-boot DBE61 models, plus all DBE62 and DBE63 models, use a General Software BIOS. Operating system The original ThinCan ran on Windows CE and launched into an RDP client for Windows Terminal Services. DBE60 models come with either the same RDP client as the original ThinCan or with Etherboot support for UNIX terminal services. 
Software: DBE61 models come with either a BIOS with PXE support optimized for LTSP or with a BIOS for USB booting Linutop's own Linux distribution. Meanwhile, SmartLink preloads their DBE61 models with their own versatile firmware called R-BOX that can be user-configured to launch into either an RDP client or into a Web kiosk – both of which are implemented using Free Software components – and which makes use of the Coreboot port. Software: DBE62 models have a BIOS that first attempts booting from USB and, if no bootable USB media is found, then attempts PXE booting – essentially combining the boot options of Artec's and Linutop's models into a single configuration. DBE63 models run on Embedded Windows XP and launch into a Web kiosk. Timeline: 1999 – prototype based on a custom x86 core. 2001 – round model with NSC Geode SC2200. 2003 – DBE60 model with NSC Geode SC2200. 2005 – DBE61 model with AMD Geode LX700 and CS5536. 2007 – DBE62 model with AMD Geode LX700 and CS5536, plus Gigabit Ethernet. 2009 – DBE63 model with AMD Geode LX700 and CS5536, plus Gigabit Ethernet.
**NFKBIE** NFKBIE: Nuclear factor of kappa light polypeptide gene enhancer in B-cells inhibitor, epsilon, also known as NFKBIE, is a protein which in humans is encoded by the NFKBIE gene. Function: NFKBIE protein expression is up-regulated following NF-κB activation and during myelopoiesis. NFKBIE is able to inhibit NF-κB-directed transactivation via cytoplasmic retention of REL proteins.NFKB1 or NFKB2 is bound to REL, RELA, or RELB to form the NF-κB transcription factor complex. The NF-κB complex is inhibited by I-kappa-B proteins (NFKBIA or NFKBIB), which inactivate NF-kappa-B by trapping it in the cytoplasm. Phosphorylation of serine residues on the I-kappa-B proteins by kinases (IKBKA, or IKBKB) marks them for destruction via the ubiquitination pathway, thereby allowing activation of the NF-kappa-B complex. Activated NF-κB complex translocates into the nucleus and binds DNA at kappa-B-binding motifs such as 5-prime GGGRNNYYCC 3-prime or 5-prime HGGARNYYCC 3-prime (where H is A, C, or T; R is an A or G purine; and Y is a C or T pyrimidine). For some genes, activation requires NF-κB interaction with other transcription factors, such as STAT (see STAT6), AP-1 (JUN), and NFAT (see NFATC1). Interactions: NFKBIE has been shown to interact with NFKB2, RELA, NFKB1 and REL.
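The degenerate kappa-B binding motifs quoted above (GGGRNNYYCC and HGGARNYYCC) can be made concrete with a short script. The sketch below is illustrative only: the promoter sequence is invented, and the IUPAC code expansions are the ones given in the text (H = A/C/T, R = A/G, Y = C/T, N = any base).

```python
# Illustrative only: turn the degenerate kappa-B binding motifs quoted above
# into regular expressions using the IUPAC expansions given in the text, then
# scan a made-up stretch of promoter DNA for matches.
import re

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "H": "[ACT]", "N": "[ACGT]"}

def motif_to_regex(motif: str) -> re.Pattern:
    return re.compile("".join(IUPAC[base] for base in motif))

motifs = ["GGGRNNYYCC", "HGGARNYYCC"]
promoter = "TTAGGGACTTTCCGCTAGGGAAATCCCC"   # hypothetical sequence

for m in motifs:
    pattern = motif_to_regex(m)
    for hit in pattern.finditer(promoter):
        print(f"{m}: match '{hit.group()}' at position {hit.start()}")
```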
**Estradiol valerate/megestrol acetate** Estradiol valerate/megestrol acetate: Estradiol valerate/megestrol acetate (EV/MGA) is a combined injectable contraceptive which was developed in China in the 1980s but was never marketed. It is an aqueous suspension of microcapsules (50–80 μm in diameter) containing 5 mg estradiol valerate (EV) and 15 mg megestrol acetate (MGA). It was also studied at doses of EV ranging from 0.5 to 5 mg and at doses of MGA ranging from 15 to 25 mg.
**Pell's equation** Pell's equation: Pell's equation, also called the Pell–Fermat equation, is any Diophantine equation of the form x² − ny² = 1, where n is a given positive nonsquare integer, and integer solutions are sought for x and y. In Cartesian coordinates, the equation is represented by a hyperbola; solutions occur wherever the curve passes through a point whose x and y coordinates are both integers, such as the trivial solution with x = 1 and y = 0. Joseph Louis Lagrange proved that, as long as n is not a perfect square, Pell's equation has infinitely many distinct integer solutions. These solutions may be used to accurately approximate the square root of n by rational numbers of the form x/y. Pell's equation: This equation was first studied extensively in India starting with Brahmagupta, who found an integer solution to 92x² + 1 = y² in his Brāhmasphuṭasiddhānta circa 628. Bhaskara II in the 12th century and Narayana Pandit in the 14th century both found general solutions to Pell's equation and other quadratic indeterminate equations. Bhaskara II is generally credited with developing the chakravala method, building on the work of Jayadeva and Brahmagupta. Solutions to specific examples of Pell's equation, such as the Pell numbers arising from the equation with n = 2, had been known for much longer, since the time of Pythagoras in Greece and a similar date in India. William Brouncker was the first European to solve Pell's equation. The name of Pell's equation arose from Leonhard Euler mistakenly attributing Brouncker's solution of the equation to John Pell. History: As early as 400 BC in India and Greece, mathematicians studied the numbers arising from the n = 2 case of Pell's equation, x² − 2y² = 1, and from the closely related equation x² − 2y² = −1, because of the connection of these equations to the square root of 2. Indeed, if x and y are positive integers satisfying this equation, then x/y is an approximation of √2. The numbers x and y appearing in these approximations, called side and diameter numbers, were known to the Pythagoreans, and Proclus observed that in the opposite direction these numbers obeyed one of these two equations. Similarly, Baudhayana discovered that x = 17, y = 12 and x = 577, y = 408 are two solutions to the Pell equation, and that 17/12 and 577/408 are very close approximations to the square root of 2. Later, Archimedes approximated the square root of 3 by the rational number 1351/780. Although he did not explain his methods, this approximation may be obtained in the same way, as a solution to Pell's equation. History: Likewise, Archimedes's cattle problem — an ancient word problem about finding the number of cattle belonging to the sun god Helios — can be solved by reformulating it as a Pell's equation. The manuscript containing the problem states that it was devised by Archimedes and recorded in a letter to Eratosthenes, and the attribution to Archimedes is generally accepted today. Around AD 250, Diophantus considered the equation a²x² + c = y², where a and c are fixed numbers, and x and y are the variables to be solved for. History: This equation is different in form from Pell's equation but equivalent to it. History: Diophantus solved the equation for (a, c) equal to (1, 1), (1, −1), (1, 12), and (3, 9). Al-Karaji, a 10th-century Persian mathematician, worked on similar problems to Diophantus. In Indian mathematics, Brahmagupta discovered that (x₁² − Ny₁²)(x₂² − Ny₂²) = (x₁x₂ + Ny₁y₂)² − N(x₁y₂ + x₂y₁)², a form of what is now known as Brahmagupta's identity.
Using this, he was able to "compose" triples (x₁, y₁, k₁) and (x₂, y₂, k₂) that were solutions of x² − Ny² = k, to generate the new triples (x₁x₂ + Ny₁y₂, x₁y₂ + x₂y₁, k₁k₂) and (x₁x₂ − Ny₁y₂, x₁y₂ − x₂y₁, k₁k₂). History: Not only did this give a way to generate infinitely many solutions to x² − Ny² = 1 starting with one solution, but also, by dividing such a composition by k₁k₂, integer or "nearly integer" solutions could often be obtained. For instance, for N = 92, Brahmagupta composed the triple (10, 1, 8) (since 10² − 92·1² = 8) with itself to get the new triple (192, 20, 64). Dividing throughout by 64 ("8" for x and y) gave the triple (24, 5/2, 1), which when composed with itself gave the desired integer solution (1151, 120, 1). Brahmagupta solved many Pell's equations with this method, proving that it gives solutions starting from an integer solution of x² − Ny² = k for k = ±1, ±2, or ±4. The first general method for solving the Pell's equation (for all N) was given by Bhāskara II in 1150, extending the methods of Brahmagupta. Called the chakravala (cyclic) method, it starts by choosing two relatively prime integers a and b, then composing the triple (a, b, k) (that is, one which satisfies a² − Nb² = k) with the trivial triple (m, 1, m² − N) to get the triple (am + Nb, a + bm, k(m² − N)), which can be scaled down to ((am + Nb)/k, (a + bm)/k, (m² − N)/k). History: When m is chosen so that (a + bm)/k is an integer, so are the other two numbers in the triple. Among such m, the method chooses one that minimizes |m² − N|/k and repeats the process. This method always terminates with a solution. Bhaskara used it to give the solution x = 1766319049, y = 226153980 to the N = 61 case. Several European mathematicians rediscovered how to solve Pell's equation in the 17th century. Pierre de Fermat found how to solve the equation and in a 1657 letter issued it as a challenge to English mathematicians. In a letter to Kenelm Digby, Bernard Frénicle de Bessy said that Fermat found the smallest solution for N up to 150 and challenged John Wallis to solve the cases N = 151 or 313. Both Wallis and William Brouncker gave solutions to these problems, though Wallis suggests in a letter that the solution was due to Brouncker. John Pell's connection with the equation is that he revised Thomas Branker's translation of Johann Rahn's 1659 book Teutsche Algebra into English, with a discussion of Brouncker's solution of the equation. Leonhard Euler mistakenly thought that this solution was due to Pell, as a result of which he named the equation after Pell. The general theory of Pell's equation, based on continued fractions and algebraic manipulations with numbers of the form P + Q√a, was developed by Lagrange in 1766–1769. In particular, Lagrange gave a proof that the Brouncker–Wallis algorithm always terminates. Solutions: Fundamental solution via continued fractions Let hᵢ/kᵢ denote the sequence of convergents to the regular continued fraction for √n. This sequence is unique. Then the pair (x₁, y₁) solving Pell's equation and minimizing x satisfies x₁ = hᵢ and y₁ = kᵢ for some i. This pair is called the fundamental solution. Thus, the fundamental solution may be found by performing the continued fraction expansion and testing each successive convergent until a solution to Pell's equation is found. The time for finding the fundamental solution using the continued fraction method, with the aid of the Schönhage–Strassen algorithm for fast integer multiplication, is within a logarithmic factor of the solution size, the number of digits in the pair (x₁, y₁).
However, this is not a polynomial-time algorithm because the number of digits in the solution may be as large as √n, far larger than a polynomial in the number of digits in the input value n. Solutions: Additional solutions from the fundamental solution Once the fundamental solution is found, all remaining solutions may be calculated algebraically from xₖ + yₖ√n = (x₁ + y₁√n)ᵏ, expanding the right side, equating coefficients of √n on both sides, and equating the other terms on both sides. This yields the recurrence relations xₖ₊₁ = x₁xₖ + ny₁yₖ, yₖ₊₁ = x₁yₖ + y₁xₖ. Concise representation and faster algorithms Although writing out the fundamental solution (x₁, y₁) as a pair of binary numbers may require a large number of bits, it may in many cases be represented more compactly in the form x₁ + y₁√n = ∏ᵢ₌₁ᵗ (aᵢ + bᵢ√n)^cᵢ using much smaller integers aᵢ, bᵢ, and cᵢ. Solutions: For instance, Archimedes' cattle problem is equivalent to the Pell equation x² − 410286423278424y² = 1, the fundamental solution of which has 206545 digits if written out explicitly. However, the solution is also equal to (x₁′ + y₁′√4729494)²³²⁹, where x₁′ and y₁′ only have 45 and 41 decimal digits respectively. Methods related to the quadratic sieve approach for integer factorization may be used to collect relations between prime numbers in the number field generated by √n and to combine these relations to find a product representation of this type. The resulting algorithm for solving Pell's equation is more efficient than the continued fraction method, though it still takes more than polynomial time. Under the assumption of the generalized Riemann hypothesis, it can be shown to take time exp O(√(log N log log N)), where N = log n is the input size, similarly to the quadratic sieve. Solutions: Quantum algorithms Hallgren showed that a quantum computer can find a product representation, as described above, for the solution to Pell's equation in polynomial time. Hallgren's algorithm, which can be interpreted as an algorithm for finding the group of units of a real quadratic number field, was extended to more general fields by Schmidt and Völlmer. Example: As an example, consider the instance of Pell's equation for n = 7; that is, x² − 7y² = 1. Example: The sequence of convergents for the square root of seven is 2/1, 3/1, 5/2, 8/3, 37/14, 45/17, 82/31, 127/48, ... Therefore, the fundamental solution is formed by the pair (8, 3). Applying the recurrence formula to this solution generates the infinite sequence of solutions (1, 0); (8, 3); (127, 48); (2024, 765); (32257, 12192); (514088, 194307); (8193151, 3096720); (130576328, 49353213); ... (sequence A001081 (x) and A001080 (y) in OEIS). The smallest solution can be very large. For example, the smallest solution to x² − 313y² = 1 is (32188120829134849, 1819380158564160), and this is the equation which Frenicle challenged Wallis to solve. Values of n such that the smallest solution of x² − ny² = 1 is greater than the smallest solution for any smaller value of n are 1, 2, 5, 10, 13, 29, 46, 53, 61, 109, 181, 277, 397, 409, 421, 541, 661, 1021, 1069, 1381, 1549, 1621, 2389, 3061, 3469, 4621, 4789, 4909, 5581, 6301, 6829, 8269, 8941, 9949, ... (sequence A033316 in the OEIS). (For these records, see OEIS: A033315 for x and OEIS: A033319 for y.) List of fundamental solutions of Pell's equations: The following is a list of the fundamental solution to x² − ny² = 1 with n ≤ 128. When n is an integer square, there is no solution except for the trivial solution (1, 0). The values of x are sequence A002350 and those of y are sequence A002349 in OEIS.
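The continued-fraction procedure and the recurrence just described can be sketched in a few lines of Python. This is an illustrative implementation, assuming n is a positive nonsquare integer; it reproduces the n = 7 example above.

```python
# Sketch of the continued-fraction method described above: walk the convergents
# h_i/k_i of sqrt(n) until one satisfies h^2 - n*k^2 = 1 (the fundamental
# solution), then apply the recurrence x' = x1*x + n*y1*y, y' = x1*y + y1*x to
# generate further solutions.  Assumes n is a positive nonsquare integer.
from math import isqrt

def fundamental_solution(n):
    a0 = isqrt(n)
    m, d, a = 0, 1, a0
    h_prev, h = 1, a0            # convergent numerators
    k_prev, k = 0, 1             # convergent denominators
    while h * h - n * k * k != 1:
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
    return h, k

def solutions(n, count):
    x1, y1 = fundamental_solution(n)
    x, y = 1, 0                  # trivial solution
    for _ in range(count):
        yield x, y
        x, y = x1 * x + n * y1 * y, x1 * y + y1 * x

print(fundamental_solution(7))   # (8, 3)
print(list(solutions(7, 5)))     # (1,0), (8,3), (127,48), (2024,765), (32257,12192)
```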
Connections: Pell's equation has connections to several other important subjects in mathematics. Connections: Algebraic number theory Pell's equation is closely related to the theory of algebraic numbers, as the formula x² − ny² = (x + y√n)(x − y√n) is the norm for the ring Z[√n] and for the closely related quadratic field Q(√n). Thus, a pair of integers (x, y) solves Pell's equation if and only if x + y√n is a unit with norm 1 in Z[√n]. Dirichlet's unit theorem, that all units of Z[√n] can be expressed as powers of a single fundamental unit (and multiplication by a sign), is an algebraic restatement of the fact that all solutions to the Pell's equation can be generated from the fundamental solution. The fundamental unit can in general be found by solving a Pell-like equation but it does not always correspond directly to the fundamental solution of Pell's equation itself, because the fundamental unit may have norm −1 rather than 1 and its coefficients may be half integers rather than integers. Connections: Chebyshev polynomials Demeyer mentions a connection between Pell's equation and the Chebyshev polynomials: If Tᵢ(x) and Uᵢ(x) are the Chebyshev polynomials of the first and second kind respectively, then these polynomials satisfy a form of Pell's equation in any polynomial ring R[x], with n = x² − 1: Tᵢ² − (x² − 1)Uᵢ₋₁² = 1. Thus, these polynomials can be generated by the standard technique for Pell's equations of taking powers of a fundamental solution: Tᵢ + Uᵢ₋₁√(x² − 1) = (x + √(x² − 1))ⁱ. Connections: It may further be observed that if (xᵢ, yᵢ) are the solutions to any integer Pell's equation, then xᵢ = Tᵢ(x₁) and yᵢ = y₁Uᵢ₋₁(x₁). Continued fractions A general development of solutions of Pell's equation x² − ny² = 1 in terms of continued fractions of √n can be presented, as the solutions x and y are approximations to the square root of n and thus are a special case of continued fraction approximations for quadratic irrationals. The relationship to the continued fractions implies that the solutions to Pell's equation form a semigroup subset of the modular group. Thus, for example, if p and q satisfy Pell's equation, then the matrix with rows (p, q) and (nq, p) has unit determinant. Products of such matrices take exactly the same form, and thus all such products yield solutions to Pell's equation. This can be understood in part to arise from the fact that successive convergents of a continued fraction share the same property: if pₖ₋₁/qₖ₋₁ and pₖ/qₖ are two successive convergents of a continued fraction, then the matrix with rows (pₖ₋₁, pₖ) and (qₖ₋₁, qₖ) has determinant (−1)ᵏ. Connections: Smooth numbers Størmer's theorem applies Pell equations to find pairs of consecutive smooth numbers, positive integers whose prime factors are all smaller than a given value. As part of this theory, Størmer also investigated divisibility relations among solutions to Pell's equation; in particular, he showed that each solution other than the fundamental solution has a prime factor that does not divide n. The negative Pell's equation: The negative Pell's equation is given by x² − ny² = −1 and has also been extensively studied. It can be solved by the same method of continued fractions and has solutions if and only if the period of the continued fraction has odd length. However, it is not known which roots have odd period lengths, and therefore not known when the negative Pell equation is solvable. A necessary (but not sufficient) condition for solvability is that n is not divisible by 4 or by a prime of form 4k + 3.
Thus, for example, x² − 3y² = −1 is never solvable, but x² − 5y² = −1 may be. The first few numbers n for which x² − ny² = −1 is solvable are 1, 2, 5, 10, 13, 17, 26, 29, 37, 41, 50, 53, 58, 61, 65, 73, 74, 82, 85, 89, 97, ... (sequence A031396 in the OEIS). Let α = ∏_{j odd} (1 − 2⁻ʲ). The proportion of square-free n divisible by k primes of the form 4m + 1 for which the negative Pell's equation is solvable is at least α. When the number of prime divisors is not fixed, the proportion is given by 1 − α. If the negative Pell's equation does have a solution for a particular n, its fundamental solution leads to the fundamental one for the positive case by squaring both sides of the defining equation: (x² − ny²)² = (−1)² implies (x² + ny²)² − n(2xy)² = 1. The negative Pell's equation: As stated above, if the negative Pell's equation is solvable, a solution can be found using the method of continued fractions as in the positive Pell's equation. The recursion relation works slightly differently however. Since (x + y√n)(x − y√n) = −1, the next solution is determined in terms of i(xₖ + yₖ√n) = (i(x + y√n))ᵏ whenever there is a match, that is, when k is odd. The resulting recursion relation is (modulo a minus sign, which is immaterial due to the quadratic nature of the equation) xₖ = xₖ₋₂x₁² + nxₖ₋₂y₁² + 2nyₖ₋₂y₁x₁, yₖ = yₖ₋₂x₁² + nyₖ₋₂y₁² + 2xₖ₋₂y₁x₁, which gives an infinite tower of solutions to the negative Pell's equation. Generalized Pell's equation: The equation x² − dy² = N is called the generalized (or general) Pell's equation. The equation u² − dv² = 1 is the corresponding Pell's resolvent. A recursive algorithm was given by Lagrange in 1768 for solving the equation, reducing the problem to the case |N| < √d. Such solutions can be derived using the continued-fractions method as outlined above. Generalized Pell's equation: If (x₀, y₀) is a solution to x² − dy² = N, and (uₙ, vₙ) is a solution to u² − dv² = 1, then (xₙ, yₙ) such that xₙ + yₙ√d = (x₀ + y₀√d)(uₙ + vₙ√d) is a solution to x² − dy² = N, a principle named the multiplicative principle. The solution (xₙ, yₙ) is called a Pell multiple of the solution (x₀, y₀). There exists a finite set of solutions to x² − dy² = N such that every solution is a Pell multiple of a solution from that set. In particular, if (u, v) is the fundamental solution to u² − dv² = 1, then each solution to the equation is a Pell multiple of a solution (x, y) with |x| ≤ √(|N|(U + 1)/2) and |y| ≤ √(|N|(U + 1)/(2d)), where U = u + v√d. If x and y are positive integer solutions to the Pell's equation with |N| < √d, then x/y is a convergent to the continued fraction of √d. Solutions to the generalized Pell's equation are used for solving certain Diophantine equations and units of certain rings, and they arise in the study of SIC-POVMs in quantum information theory. The equation x² − dy² = 4 is similar to the resolvent x² − dy² = 1 in that if a minimal solution to x² − dy² = 4 can be found, then all solutions of the equation can be generated in a similar manner to the case N = 1. For certain d, solutions to x² − dy² = 1 can be generated from those with x² − dy² = 4, in that if d ≡ 5 (mod 8), then every third solution to x² − dy² = 4 has x, y even, generating a solution to x² − dy² = 1.
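The multiplicative principle stated above for the generalized equation is easy to verify on small numbers. The values below (d = 5, N = 4, and the particular solutions chosen) are purely illustrative.

```python
# Quick check of the multiplicative principle: if (x0, y0) solves
# x^2 - d*y^2 = N and (u, v) solves u^2 - d*v^2 = 1, then the "Pell multiple"
# (x0*u + d*y0*v, x0*v + y0*u) solves x^2 - d*y^2 = N again.
d, N = 5, 4
x0, y0 = 3, 1          # 3^2 - 5*1^2 = 4
u, v = 9, 4            # 9^2 - 5*4^2 = 1

assert x0 * x0 - d * y0 * y0 == N
assert u * u - d * v * v == 1

x1, y1 = x0 * u + d * y0 * v, x0 * v + y0 * u
assert x1 * x1 - d * y1 * y1 == N     # (47, 21): 2209 - 5*441 = 4
print(x1, y1)
```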
**Jasmonate** Jasmonate: Jasmonate (JA) and its derivatives are lipid-based plant hormones that regulate a wide range of processes in plants, ranging from growth and photosynthesis to reproductive development. In particular, JAs are critical for plant defense against herbivory and plant responses to poor environmental conditions and other kinds of abiotic and biotic challenges. Some JAs can also be released as volatile organic compounds (VOCs) to permit communication between plants in anticipation of mutual dangers. History: The isolation of methyl jasmonate (MeJa) from jasmine oil derived from Jasminum grandiflorum led to the discovery of the molecular structure of jasmonates and their name in 1962 while jasmonic acid itself was isolated from Lasiodiplodia theobromae by Alderidge et al in 1971. Biosynthesis: Biosynthesis is reviewed by Acosta and Farmer 2010, Wasternack and Hause 2013, and Wasternack and Song 2017. Jasmonates (JA) are oxylipins, i.e. derivatives of oxygenated fatty acid. They are biosynthesized from linolenic acid in chloroplast membranes. Synthesis is initiated with the conversion of linolenic acid to 12-oxo-phytodienoic acid (OPDA), which then undergoes a reduction and three rounds of oxidation to form (+)-7-iso-JA, jasmonic acid. Only the conversion of linolenic acid to OPDA occurs in the chloroplast; all subsequent reactions occur in the peroxisome.JA itself can be further metabolized into active or inactive derivatives. Methyl JA (MeJA) is a volatile compound that is potentially responsible for interplant communication. JA conjugated with amino acid isoleucine (Ile) results in JA-Ile ((+)-7-iso-jasmonoyl-L-isoleucine), which Fonseca et al 2009 finds is involved in most JA signaling - see also the review by Katsir et al 2008. However Van Poecke & Dicke 2003 finds Arabidopsis's emission of volatiles to not require JA-Ile, nor VanDoorn et al 2011 for Solanum nigrum's herbivore resistance. JA undergoes decarboxylation to give cis-jasmone. Function: Although jasmonate (JA) regulates many different processes in the plant, its role in wound response is best understood. Following mechanical wounding or herbivory, JA biosynthesis is rapidly activated, leading to expression of the appropriate response genes. For example, in the tomato, wounding produces defense molecules that inhibit leaf digestion in guts of insects. Another indirect result of JA signaling is the volatile emission of JA-derived compounds. MeJA on leaves can travel airborne to nearby plants and elevate levels of transcripts related to wound response. In general, this emission can further upregulate JA biosynthesis and cell signaling, thereby inducing nearby plants to prime their defenses in case of herbivory. Function: JAs have also been implicated in cell death and leaf senescence. JA can interact with many kinases and transcription factors associated with senescence. JA can also induce mitochondrial death by inducing the accumulation of reactive oxygen species (ROSs). These compounds disrupt mitochondria membranes and compromise the cell by causing apoptosis, or programmed cell death. JAs' roles in these processes are suggestive of methods by which the plant defends itself against biotic challenges and limits the spread of infections.JA and its derivatives have also been implicated in plant development, symbiosis, and a host of other processes included in the list below. Function: By studying mutants overexpressing JA, one of the earliest discoveries made was that JA inhibits root growth. 
The mechanism behind this event is still not understood, but mutants in the COI1-dependent signaling pathway tend to show reduced inhibition, demonstrating that the COI1 pathway is somehow necessary for inhibiting root growth. JA plays many roles in flower development. Mutants in JA synthesis or in JA signaling in Arabidopsis present with male sterility, typically due to delayed development. The same genes promoting male fertility in Arabidopsis promote female fertility in tomatoes. Overexpression of 12-OH-JA can also delay flowering. JA and MeJA inhibit the germination of nondormant seeds and stimulate the germination of dormant seeds. High levels of JA encourage the accumulation of storage proteins; genes encoding vegetative storage proteins are JA responsive. Specifically, tuberonic acid, a JA derivative, induces the formation of tubers. Function: JAs also play a role in symbiosis between plants and microorganisms; however, their precise role is still unclear. JA currently appears to regulate signal exchange and nodulation between legumes and rhizobia. On the other hand, elevated JA levels appear to regulate carbohydrate partitioning and stress tolerance in mycorrhizal plants. JAs have been implicated in the development of carnivorous plants such as the Venus flytrap. Research suggests that evolutionary repurposing of the jasmonate signaling pathway, which mediates defense against herbivores in noncarnivorous plants, has supported the evolution of plant carnivory. Jasmonates can be used to signal the closing of traps and to control the release of enzymes and nutrient transporters which are used in plant digestion. However, not all carnivorous plants rely on the jasmonate pathway in the same way. Butterworts differ significantly from Venus flytraps and sundews, and may have developed methods of regulating digestive enzymes that are JA-independent. Function: Role in pathogenesis Pseudomonas syringae causes bacterial speck disease in tomatoes by hijacking the plant's jasmonate (JA) signaling pathway. This bacterium utilizes a type III secretion system to inject a cocktail of effector proteins into host cells. One of the molecules included in this mixture is the phytotoxin coronatine (COR). JA-insensitive plants are highly resistant to P. syringae and unresponsive to COR; additionally, applying MeJA was sufficient to rescue virulence in COR mutant bacteria. Infected plants also expressed downstream JA and wound response genes but repressed levels of pathogenesis-related (PR) genes. All these data suggest COR acts through the JA pathway to invade host plants. Activation of a wound response is hypothesized to come at the expense of pathogen defense. By activating the JA wound response pathway, P. syringae could divert resources from its host's immune system and infect more effectively. Plants produce N-acylamides that confer resistance to necrotrophic pathogens by activating JA biosynthesis and signalling. Arachidonic acid (AA), the counterpart of the JA precursor α-LeA occurring in metazoan species but not in plants, is perceived by plants and acts through an increase in JA levels concomitantly with resistance to necrotrophic pathogens. AA is an evolutionarily conserved signalling molecule that acts in plants in response to stress similar to that in animal systems. Function: Cross talk with other defense pathways While the jasmonate (JA) pathway is critical for wound response, it is not the only signaling pathway mediating defense in plants. To build an optimal yet efficient defense, the different defense pathways must be capable of cross talk to fine-tune and specify responses to abiotic and biotic challenges. One of the best-studied examples of JA cross talk occurs with salicylic acid (SA). SA, a hormone, mediates defense against pathogens by inducing both the expression of pathogenesis-related genes and systemic acquired resistance (SAR), in which the whole plant gains resistance to a pathogen after localized exposure to it. Function: Wound and pathogen response appear to interact negatively. For example, silencing phenylalanine ammonia lyase (PAL), an enzyme synthesizing precursors to SA, reduces SAR but enhances herbivory resistance against insects. Similarly, overexpression of PAL enhances SAR but reduces wound response after insect herbivory. Generally, it has been found that pathogens living in live plant cells are more sensitive to SA-induced defenses, while herbivorous insects and pathogens that derive benefit from cell death are more susceptible to JA defenses. Thus, this trade-off in pathways optimizes defense and saves plant resources. Cross talk also occurs between JA and other plant hormone pathways, such as those of abscisic acid (ABA) and ethylene (ET). These interactions similarly optimize defense against pathogens and herbivores of different lifestyles. For example, MYC2 activity can be stimulated by both JA and ABA pathways, allowing it to integrate signals from both pathways. Other transcription factors such as ERF1 arise as a result of JA and ET signaling. All these molecules can act in combination to activate specific wound response genes. Finally, cross talk is not restricted to defense: JA and ET interactions are critical in development as well, and a balance between the two compounds is necessary for proper apical hook development in Arabidopsis seedlings. Still, further research is needed to elucidate the molecules regulating such cross talk. Mechanism of signaling: In general, the steps in jasmonate (JA) signaling mirror those of auxin signaling: the first step comprises E3 ubiquitin ligase complexes, which tag substrates with ubiquitin to mark them for degradation by proteasomes. The second step utilizes transcription factors to effect physiological changes. One of the key molecules in this pathway is JAZ, which serves as the on-off switch for JA signaling. In the absence of JA, JAZ proteins bind to downstream transcription factors and limit their activity. However, in the presence of JA or its bioactive derivatives, JAZ proteins are degraded, freeing transcription factors for expression of genes needed in stress responses. Because JAZ did not disappear in null coi1 mutant plant backgrounds, protein COI1 was shown to mediate JAZ degradation. COI1 belongs to the family of highly conserved F-box proteins, and it recruits substrates for the E3 ubiquitin ligase SCFCOI1. The complexes that ultimately form are known as SCF complexes. These complexes bind JAZ and target it for proteasomal degradation. However, given the large spectrum of JA molecules, not all JA derivatives activate this pathway for signaling, and the range of those participating in this pathway is unknown. Thus far, only JA-Ile has been shown to be necessary for COI1-mediated degradation of JAZ11. JA-Ile and structurally related derivatives can bind to COI1-JAZ complexes and promote ubiquitination and thus degradation of the latter. This mechanistic model raises the possibility that COI1 serves as an intracellular receptor for JA signals.
Recent research has confirmed this hypothesis by demonstrating that the COI1-JAZ complex acts as a co-receptor for JA perception. Specifically, JA-Ile binds both to a ligand-binding pocket in COI1 and to a 20 amino-acid stretch of the conserved Jas motif in JAZ. This JAZ residue acts as a plug for the pocket in COI1, keeping JA-Ile bound in the pocket. Additionally, Sheard et al 2010 co-purified and subsequently removed inositol pentakisphosphate (InsP5) from COI1, demonstrating InsP5 to be a necessary component of the co-receptor and playing a role in potentiating the co-receptor complex. Sheard's results may show varying binding specificity for various SCFCOI1-InsP5-JAZ complexes.Once freed from JAZ, transcription factors can activate genes needed for a specific JA response. The best-studied transcription factors acting in this pathway belong to the MYC family of transcription factors, which are characterized by a basic helix-loop-helix (bHLH) DNA binding motif. These factors (of which there are three, MYC2, 3, and 4) tend to act additively. For example, a plant that has only lost one myc becomes more susceptible to insect herbivory than a normal plant. A plant that has lost all three will be as susceptible to damage as coi1 mutants, which are completely unresponsive to JA and cannot mount a defense against herbivory. However, while all these MYC molecules share functions, they vary greatly in expression patterns and transcription functions. For instance, MYC2 has a greater effect on root growth compared to MYC3 or MYC4.Additionally, MYC2 will loop back and regulate JAZ expression levels, leading to a negative feedback loop. These transcription factors all have different impacts on JAZ levels after JA signaling. JAZ levels in turn affect transcription factor and gene expression levels. In other words, on top of activating different response genes, the transcription factors can vary JAZ levels to achieve specificity in response to JA signals.
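The JAZ–MYC2 negative feedback loop described above can be caricatured with a toy simulation. The model below is not drawn from the literature; every rate constant is invented, and it is meant only to show the feedback topology (a JA-Ile pulse promotes JAZ turnover, falling JAZ frees MYC2, and active MYC2 drives fresh JAZ synthesis, damping its own output).

```python
# Purely illustrative toy model of the negative feedback loop described above.
# All parameters are invented; only the wiring reflects the text.
import numpy as np

dt, steps = 0.01, 4000
jaz, ja_ile = 1.0, 0.0
history = []

for step in range(steps):
    if step == 1000:
        ja_ile = 1.0                                   # wound-like pulse of JA-Ile
    myc2_activity = 1.0 / (1.0 + 5.0 * jaz)            # JAZ represses MYC2
    synthesis = 0.05 + 0.9 * myc2_activity             # MYC2 induces JAZ genes
    degradation = (0.05 + 2.0 * ja_ile) * jaz          # COI1-dependent turnover
    jaz += dt * (synthesis - degradation)
    ja_ile *= 0.998                                    # signal slowly decays
    history.append((jaz, myc2_activity))

jaz_levels, myc2_levels = map(np.array, zip(*history))
print(f"MYC2 activity before/peak/after pulse: "
      f"{myc2_levels[900]:.2f} / {myc2_levels.max():.2f} / {myc2_levels[-1]:.2f}")
```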
**Metabelian group** Metabelian group: In mathematics, a metabelian group is a group whose commutator subgroup is abelian. Equivalently, a group G is metabelian if and only if there is an abelian normal subgroup A such that the quotient group G/A is abelian. Subgroups of metabelian groups are metabelian, as are images of metabelian groups over group homomorphisms. Metabelian groups are solvable. In fact, they are precisely the solvable groups of derived length at most 2. Examples: Any dihedral group is metabelian, as it has a cyclic normal subgroup of index 2. More generally, any generalized dihedral group is metabelian, as it has an abelian normal subgroup of index 2. Examples: If F is a field, the group of affine maps x↦ax+b (where a ≠ 0) acting on F is metabelian. Here the abelian normal subgroup is the group of pure translations x↦x+b , and the abelian quotient group is isomorphic to the group of homotheties x↦ax . If F is a finite field with q elements, this metabelian group is of order q(q − 1). Examples: The group of direct isometries of the Euclidean plane is metabelian. This is similar to the above example, as the elements are again affine maps. The translations of the plane form an abelian normal subgroup of the group, and the corresponding quotient is the circle group. The finite Heisenberg group H3,p of order p3 is metabelian. The same is true for any Heisenberg group defined over a ring (group of upper-triangular 3 × 3 matrices with entries in a commutative ring). All nilpotent groups of class 3 or less are metabelian. The lamplighter group is metabelian. All groups of order p5 are metabelian (for prime p). All groups of order less than 24 are metabelian.In contrast to this last example, the symmetric group S4 of order 24 is not metabelian, as its commutator subgroup is the non-abelian alternating group A4.
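The affine-map example above can be justified with a short argument. The following derivation is standard and is not quoted from the article:

```latex
% Verification that the affine group G = { x -> ax + b : a in F^x, b in F } is metabelian.
\[
\varphi : G \to F^{\times}, \qquad \varphi(x \mapsto ax+b) = a
\]
is a surjective homomorphism onto the abelian group $F^{\times}$, with kernel
\[
\ker\varphi = \{\, x \mapsto x+b : b \in F \,\} \cong (F,+),
\]
the abelian normal subgroup of translations. Since $G/\ker\varphi \cong F^{\times}$ is
abelian, the commutator subgroup satisfies $[G,G] \subseteq \ker\varphi$, so $[G,G]$ is
abelian and $G$ is metabelian, i.e.\ of derived length at most $2$.
```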
**Technofeminism** Technofeminism: Technofeminism explores the role gender plays in technology. It is often examined in conjunction with intersectionality, a term coined by Kimberlé Crenshaw for analyzing the relationships among various identities, such as race, socioeconomic status, sexuality, gender, and more. However, many scholars, such as Lori Beth De Hertogh, Liz Lane, and Jessica Ouellette, as well as Angela Haas, have spoken out about the lack of technofeminist scholarship, especially in the context of overarching technological research. A primary concern of technofeminism is the relationship between historical and societal norms, and technology design and implementation. Technofeminist scholars actively work to illuminate the often unnoticed inequities ingrained in systems and come up with solutions to combat them. They also research how technology can be used for positive ends, especially for marginalized groups. Judy Wajcman: TechnoFeminism Book Overview TechnoFeminism is a book by academic sociologist Judy Wajcman which reframes the relationship between gender and technologies, and presents a feminist reading of the woman-machine relationship. It is considered a key contributor to the rise of feminist technoscience as a field. Judy Wajcman: Reception According to a review in the American Journal of Sociology, Wajcman convincingly argues that "analyses of everything from transit systems to pap smears must include a technofeminist awareness of men's and women's often different positions as designers, manufacturing operatives, salespersons, purchasers, profiteers, and embodied users of such technologies." In the journal Science, Technology and Human Values, Sally Wyatt notes that the "theoretical insights from feminist technoscience (can and should) be useful for empirical research as well as for political change and action" and that one way of moving towards this is "return to production and work as research sites because so much work in recent years has focused on consumption, identity, and representation." Editions: In addition to the print edition, which has been reprinted several times, e-book editions of TechnoFeminism were released in 2013. The book has been translated into Spanish as El Tecnofeminismo. Angela Haas: Scholarship Angela Haas focuses on technofeminism as a predecessor of "digital cultural rhetorics research", the focus of her scholarship. The interactions between these two fields have led scholars to analyze the intersectional nature of technology, and how this intersectionality results in tools that do not serve all users. Haas also explores how marginalized groups interact with digital technologies. Specific areas analyzed include how revealing aspects of one's identity influences one's ability to exist online. At times digital spaces do not cater to marginalized groups; one example is the idea that someone who identifies as homosexual is perceived as "sexual in every situation", which alters how the online community they are a part of interacts with them. However, at times, technology can be renewed to serve women and marginalized groups. Haas uses the example of the vibrator to prove this point. While it is now associated with female empowerment, the tool was originally used to control women suffering from "hysteria". De Hertogh et al.: Scholarship Lori Beth De Hertogh, Liz Lane, and Jessica Ouellette expanded upon previous scholars' work, placing it within the specific context of the "Computers and Composition" journal.
In their work, the scholars analyzed frequencies of the term "technofeminism/t" and associated words in the "Computers and Composition" journal. Unfortunately, the occurrences were limited, leading the scholars to call for increased use of the term "technofeminism" in scholarly materials and increased intersectional frameworks in mainstream technology literature. Kerri Elise Hauman: Scholarship Kerri Hauman explores technofeminist themes in her PhD dissertation, specifically discussing how feminism exists in digital spaces. Using the example of "Feministing", a blog serving those invested in "feminist activism", Hauman applies various rhetorical frameworks (such as invitational rhetoric and rhetorical ecologies) to understand how online platforms can further social justice initiatives in some ways, but promote the exclusion of disadvantaged groups in others. Melanie Kill: Pedagogy Melanie Kill, assistant professor of English at the University of Maryland, College Park, regularly teaches classes at the intersection of technology and identities. One course, entitled "Digital Rhetoric: Technofeminism", uses a variety of projects and class activities to analyze technofeminist themes in scholarly materials, online platforms, and other digital entities. The course also invites students to consider the power dynamics behind technology creation and use, and how these dynamics impact marginalized groups.
**Secure file transfer program** Secure file transfer program: sftp is a command-line client program for transferring files using the SSH File Transfer Protocol (SFTP), which runs inside the encrypted Secure Shell connection. It provides an interactive interface similar to that of traditional command-line FTP clients. One common implementation of sftp is part of the OpenSSH project. There are other command-line SFTP clients that use different names, such as lftp, PSFTP and PSCP (from the PuTTY package), and WinSCP.
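As a hedged illustration of non-interactive use, the sketch below drives an OpenSSH-style sftp client from C by feeding it a batch of commands on standard input (OpenSSH's sftp accepts `-b -` for this purpose); the host name and file names are hypothetical, and key-based authentication is assumed.

```c
/* Minimal sketch: scripting an sftp transfer from C via popen().
 * Assumes an OpenSSH-compatible sftp binary on PATH and key-based
 * authentication to the hypothetical host "example.com". */
#include <stdio.h>

int main(void) {
    /* "-b -" tells OpenSSH's sftp to read batch commands from stdin. */
    FILE *p = popen("sftp -b - user@example.com", "w");
    if (p == NULL)
        return 1;
    fputs("cd /upload\n"      /* change remote directory   */
          "put report.csv\n"  /* upload a local file        */
          "ls -l\n"           /* list the remote directory  */
          "bye\n", p);        /* end the session            */
    return pclose(p) == 0 ? 0 : 1;
}
```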
**Guetzli** Guetzli: Guetzli is a freely licensed JPEG encoder developed by Jyrki Alakujala, Robert Obryk, and Zoltán Szabadka at Google's Zürich research branch. The encoder seeks to produce significantly smaller files than prior encoders at equivalent quality, albeit at very low speed. It is named after the Swiss German diminutive expression for biscuits, in line with the names of other compression technologies from Google. Operation: Guetzli optimizes the quantization step of encoding to achieve compression efficiency. It constructs custom quantization tables for each file, decides on color subsampling, and quantizes adjacent DCT coefficients to zero, balancing the benefits in the run-length encoding of coefficients against the preservation of perceived image fidelity. Zeroing the right coefficients is Guetzli's most effective tool, and it is used as a makeshift means of spatially adaptive quantization. Guetzli uses Butteraugli (another open-source Google project) to guide compression. Guetzli is resource-intensive, requiring orders of magnitude more processing time and random-access memory than other JPEG encoders. Guetzli supports only the top of JPEG's quality range (quantizer settings 84–100) and supports only sequential (non-"progressive") encoding. Guetzli is more effective with bigger files. Google says it is a demonstration of the potential of psychovisual optimizations, intended to motivate further research into future JPEG encoders. Two tests found that Guetzli is very slow (about four orders of magnitude slower than a normal JPEG encoder) and not necessarily better than mozjpeg. Operation: Butteraugli Butteraugli is a project that estimates the psychovisual similarity of two images. It assigns a differential mean opinion score (DMOS) value to the difference between an original image and a degraded version. It is significantly more complex than traditional metrics like PSNR and SSIM, but is claimed to perform better at the high end of the quality range, where degradations are barely noticeable or not noticeable at all. It models color perception and visual masking in the human visual system, taking into account that the eye images different colors with different precision. It represents the differences between the images as a heat map. How the hundreds of parameters that model the properties of the human visual system were derived remains unexplained. An in-house performance evaluation with 614 ratings from 23 people on their own test set of 31 images yielded 75% of ratings favouring JPEGs encoded for Butteraugli scores over libjpeg-turbo encodes, which usually score higher on SSIM and PSNR-HVS-M. Translating to "butter eye", the Swiss-German name originally signifies a dimple on top of some sweet pastry that has been filled with butter and sugar before baking. Availability: Guetzli is a command-line app. Written in C++, it is free and open-source under the terms of the Apache License 2.0. Windows, macOS, and Linux versions of Guetzli are directly available from Google's repository on GitHub. The first public version was released on October 21, 2016, without any speed optimizations, and was only announced on a specialist forum. Version 1.0 followed five months later on March 15, 2017, accompanied by an announcement to a broader public and two scientific papers. In addition to the official release channel, openSUSE and Debian distribute it via their official software repositories. (For Arch Linux, there are user repositories available.) The Homebrew repository distributes a macOS version.
For the Windows platform, two open-source GUI front-ends are available. Software developers who use Node.js can integrate Guetzli into their apps via a package available on the npm repository.
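Since Guetzli is a command-line app, a typical invocation names a quality setting, an input image, and an output JPEG. The sketch below simply shells out to the encoder from C; the file names are hypothetical, and the `--quality` flag is an assumption based on the encoder's usual command-line help rather than something stated above.

```c
/* Hedged sketch: invoking the guetzli CLI from a C program.
 * Assumes a guetzli binary on PATH; "photo.png"/"photo.jpg" are made up.
 * Per the text, Guetzli only accepts quality settings of 84 and above. */
#include <stdlib.h>

int main(void) {
    return system("guetzli --quality 90 photo.png photo.jpg");
}
```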
**Vessel flute** Vessel flute: A vessel flute is a type of flute with a body which acts as a Helmholtz resonator. The body is vessel-shaped, not tube- or cone-shaped; that is, the far end is closed. Most flutes have cylindrical or conical bore (examples: concert flute, shawm). Vessel flutes have more spherical hollow bodies. Vessel flute: The air in the body of a vessel flute resonates as one, with air moving alternately in and out of the vessel, and the pressure inside the vessel increasing and decreasing. This is unlike the resonance of a tube or cone of air, where air moves back and forth along the tube, with pressure increasing in part of the tube while it decreases in another. Vessel flute: Blowing across the opening of empty bottle produces a basic edge-blown vessel flute. Multi-note vessel flutes include the ocarina.A Helmholtz resonator is unusually selective in amplifying only one frequency. Most resonators also amplify more overtones. As a result, vessel flutes have a distinctive overtoneless sound. Types: Fipple vessel flutes These flutes have a fipple to direct the air at an edge. Gemshorn Pifana Ocarina Molinukai Tonette NiwawuA referee's whistle is technically a fipple vessel flute, although it only plays one note. Edge-blown vessel flutes These flutes are edge-blown. They have no fipple and rely on the player's mouth to direct the air at an edge. Types: Xun Hun Borrindo Hand flute Kōauau ponga ihu (a Māori gourd vessel flute played with the nose) Ipu ho kio kio (a similar instrument from Hawai'i) Blown bottle Other The shepherd's whistle is an unusual vessel flute; the fipple consists of two consecutive holes, and the player's mouth acts as a tunable vessel resonator. A nose whistle also uses the mouth as a resonating cavity, and can therefore vary its pitch. Acoustics: Sound production Sound is generated by oscillations in an airstream passing an edge, just as in other flutes. The airstream alternates quickly between the inner and outer side of the edge. The opening at which this occurs is called the voicing.Some vessel flutes have a fipple to direct the air onto the labium edge, like a recorder. Others rely on the player's lips to direct the air against the edge, like a concert flute. Fippleless flutes are called edge-blown flutes. Acoustics: The pitch of a vessel flute is affected by how hard the player blows. Breath force can change the pitch by several semitones, though too much or too little air will also harm the tone, so the usable range of tones is much smaller. The optimal breath force depends on which pitch is being sounded (according to the instrument's breath curve). This is why it is hard to learn to play a vessel flute in tune. Acoustics: Vessel flutes generally have no tuning mechanism, partly because they rely on variations in breath pressure and partly because the volume of the chamber and the size of the voicing need to be matched to produce a good tone. A few have plungers that change the chamber volume.Fingering holes and fingers that are too close to the labium disrupt the oscillation of the airstream and hurt the tone. Acoustics: Amplification At first the sound is a broad-spectrum "noise" (i.e. "chiff"), but those frequencies that match the resonant frequency of the resonating chamber are selectively amplified. The resonant frequency is the pitch of the note that is heard. Vessel flutes use the air in a vessel for amplification; the vessel acts as a Helmholtz resonator. 
Other things being equal, vessel flutes are louder when they use more air, and when they are being played at higher pressures. Acoustics: Pitch and fingering The resonant frequency of a vessel flute is given by this formula (heavily simplified; see the physics simplifications below): $\text{pitch of the note} = (\text{a constant}) \times \sqrt{\dfrac{\text{total surface area of open holes}}{\text{total volume enclosed by the instrument}}}$. From this, one can see that smaller instruments are higher-pitched. It also means that, in theory, opening a specific hole on an instrument always raises the pitch by the same amount. It doesn't matter how many other holes are open; opening the hole always increases the total area of the open holes by the same amount. Acoustics: A vessel flute with two fingering holes of the same size can sound three notes (both closed, one open, both open). A vessel flute with two fingering holes of different sizes can sound four notes (both closed, only the smaller hole open, only the bigger hole open, both open). The number of notes increases with the number of holes: In theory, if the smallest hole were just big enough to raise the pitch by a semitone, and each successive hole was twice as big as the last, then a vessel flute could play a scale of 1024 fully-chromatic notes. Fingering would be equivalent to counting in finger binary. Acoustics: In practice, the pitch of a vessel flute is also affected by how hard the player blows. If more holes are open, it is necessary to blow harder, which raises the pitch. The high notes tend to go sharp; the low notes, flat. To compensate, fingering charts soon diverge from the plain binary progression. The same pitch can be made with a variety of vessel shapes, as long as the cavity resonates as a Helmholtz resonator. This is why vessel flutes come in a variety of shapes. The chamber shape does, however, affect the acoustics and ergonomics; it is not entirely arbitrary. Acoustics: Overtones The resonator in the ocarina can create overtones, but because of the common "egg" shape, these overtones are many octaves above the keynote scale. In similar instruments with a narrow cone shape, like the Gemshorn or Tonette, some partial overtones are available. Overblowing to get a range of higher pitched notes is possible on the ocarina, but not widely done, because the resulting notes are not "clean" enough. Acoustics: Multiple resonant chambers Some ocarinas are double- or triple-chambered, often with the chambers tuned an octave or a tenth apart. This allows the player to play chords, but it also allows an increased range. A chamber with a smaller range can be tuned to better characteristics throughout its range; a chamber with a large range will, for basic physical reasons, have more borderline characteristics at the extremities of its range. Splitting a large range over multiple chambers makes for a smaller range per chamber. So for the same range, multichambers can have a better tone. The optimal air pressure can also be more consistent between notes (a flatter breath curve), making multichambers easier to play, especially for fast music with large jumps in pitch. Acoustics: Physics simplifications A less-simplified formula for the resonant frequency of a Helmholtz resonator is: $f = \dfrac{v}{2\pi}\sqrt{\dfrac{A}{V}}$, where f is the resonant frequency, v is the speed of sound, A is the total area of openings in the vessel, and V is the volume of air enclosed in the vessel.
The pitch of a Helmholtz resonator is also affected by how far the air has to go to get in or out of the resonator; in other words, the thickness of the material the holes are cut in. Variations in the speed of sound The speed of sound, assumed to be constant above, is in fact somewhat variable. Acoustics: The speed of sound in air varies with temperature, meaning that a vessel flute's pitch will change in hot or cold air. However, varying the playing airspeed can change the pitch by several semitones. Unfortunately, most of this range is not usable, only about a third of a semitone (30 cents); for music with rapid or complex note transitions, the practical limit is only 5–10 cents. This is enough to cancel the expected pitch effects of moderate temperature changes (±20–30 °C for simple music, ±4–5 °C for complex music). The low notes can be made to sound good and in tune at a variety of pressures, but the higher pitches are substantially less sensitive to changes in pressure. At low temperatures, the high notes may squeak before the player can blow hard enough to bring them in tune; at high temperatures, the high notes will require so little air that they sound too airy. Ocarina makers can give information on the temperature a specific ocarina was tuned for, i.e. the temperature that will give its designed tone. Air pressure variations do not affect pitch. The ratio of pressure to air density in an ideal gas is constant. Air pressure and density changes therefore cancel, and have no effect on the speed of sound; air is nearly an ideal gas, so there is nearly no effect. Acoustics: Humidity has a comparatively small effect on the speed of sound. Going from zero to 100% relative humidity should change the frequency by less than a two-degree-Celsius change in room temperature. As the player's breath has ~100% relative humidity, the humidity can't vary that much anyway.
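As a rough numerical illustration of the simplified relationship above, the sketch below evaluates f = (v / 2π)·√(A/V) for invented hole areas and an invented chamber volume; it is not a model of any real instrument, and because the neck-length correction just discussed is ignored, the absolute numbers are only indicative. The point it demonstrates is that opening a hole (increasing A) raises the pitch.

```c
/* Hedged sketch of the simplified Helmholtz formula f = (v / 2*pi) * sqrt(A / V).
 * The hole areas and chamber volume are invented example values; the neck-length
 * correction discussed in the text is deliberately ignored, so treat the output
 * as indicative rather than exact. */
#include <math.h>
#include <stdio.h>

static double helmholtz_pitch(double open_hole_area, double volume, double speed_of_sound) {
    const double pi = 3.14159265358979323846;
    return (speed_of_sound / (2.0 * pi)) * sqrt(open_hole_area / volume);
}

int main(void) {
    const double v = 343.0;        /* approximate speed of sound in air, m/s */
    const double volume = 1.5e-4;  /* chamber volume, m^3 (invented)         */
    double one_hole  = helmholtz_pitch(1.0e-4, volume, v);  /* smaller open area */
    double two_holes = helmholtz_pitch(1.5e-4, volume, v);  /* larger open area  */
    printf("%.1f -> %.1f (opening a hole raises the pitch)\n", one_hole, two_holes);
    return 0;
}
```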
**Genome informatics** Genome informatics: Genome informatics (also genoinformatics or genetic information processing) is the scientific study of information processing in genomes. Introduction: Information processing and information flow occur in the course of an organism's development and throughout its lifespan. The essence of computation is information processing, and the essence of biological information processing is control of the molecular events inside a cell. Genome informatics introduces computational techniques and applies them to derive information from genome sequences, including methods to analyze DNA sequence information and to predict protein sequence and structure. Methods for studying large genomic datasets include variant calling, transcriptomic analysis, and variant interpretation. The field deals with microbial genomics and metagenomics, sequencing algorithms, variant discovery and genome assembly, evolution, complex traits and phylogenetics, personal and medical genomics, transcriptomics, and genome structure and function. Genoinformatics also covers genome and chromosome dynamics, quantitative biology and modeling, and molecular and cellular pathologies, and it includes the field of genome design. There is still much to develop in genome informatics, such as finding potential diseases, searching for solutions to a disease, or explaining why people become ill without an apparent cause. Genome informatics has several main applications, including: genome information analysis; computational modelling of gene regulatory networks; models for complex eukaryotic regulatory DNA sequences; and algorithms for ab initio DNA motif detection. Applications: Biomolecular systems that can process information are sought for computational applications, because of their potential for parallelism and miniaturization and because their biocompatibility also makes them suitable for future biomedical applications. DNA has been used to design machines, motors, finite automata, logic gates, reaction networks and logic programs, amongst many other structures and dynamic behaviours.
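As an elementary, hedged example of deriving information from a genome sequence in the spirit described above, the sketch below computes the GC content of a DNA string; the sequence is invented, and the calculation only illustrates the general idea of sequence analysis, not any specific pipeline mentioned in the text.

```c
/* Minimal sketch: GC content of a DNA sequence (fraction of G and C bases).
 * The sequence below is invented for illustration. */
#include <stdio.h>

static double gc_content(const char *seq) {
    unsigned long gc = 0, total = 0;
    for (; *seq; ++seq) {
        char b = *seq;
        if (b == 'G' || b == 'C' || b == 'g' || b == 'c') gc++;
        if (b == 'A' || b == 'T' || b == 'G' || b == 'C' ||
            b == 'a' || b == 't' || b == 'g' || b == 'c') total++;
    }
    return total ? (double)gc / (double)total : 0.0;
}

int main(void) {
    const char *seq = "ATGCGCGATTACAGGCTTAA";   /* invented example sequence */
    printf("GC content: %.2f\n", gc_content(seq));
    return 0;
}
```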
**Projective Set (game)** Projective Set (game): Projective Set (sometimes shortened to ProSet) is a real-time card game derived from the older game Set. The deck contains cards consisting of colored dots; some cards are laid out on the table and players attempt to find "Sets" among them. The word projective comes from the game's relation to Projective spaces over the finite field with two elements.Projective Set has been studied mathematically as well as played recreationally. It has been a popular game at Canada/USA Mathcamp. Rules: A Projective Set card has six binary attributes, or bits, generally represented by colored dots. For each color of dot, each card either has that dot or does not. There is one card for each possible combination of dots except the combination of no dots at all, making 63 cards total. Three cards are said to form a "set" if the total number of dots of each color is either 0 or 2. Similarly, four or more cards form a "set" if the number of dots of each color is an even number. A card and itself could be said to form a two-card set, but as the cards in the deck are all distinct, this does not arise in actual gameplay. Original Version In the original version, as in Set, 12 cards are laid out on the table. The first player to find three cards which form a set and call out "set" takes the three cards. Three new cards are then dealt and the play continues until the deck is depleted. If at any time the players agree there is no set among the cards, three new cards can be dealt, bringing the total number of cards on the table to 15. Other than this, new cards are not dealt out unless the number of cards on the table goes below 12. The game ends when the deck is depleted and no more sets can be found among the cards on the table. The player who captured the most sets is the winner. 7-card Version A variation of the game, more popular than the original, allows sets of any size, rather than just sets of size three. 7 cards are put out on the table at a time, and when a set is found (with anywhere from 3-7 cards), all the cards from the set are taken and then replaced. Points are generally given at the end according to how many cards each player captured rather than how many sets. It turns out that among any 7 cards there is a set, under these rules, so there is no extra rule necessary for the case that no set can be found. Mathematics: The cards of a Projective Set deck can be thought of as nonzero vectors in the finite vector space $\mathbb{F}_2^6$. The collection of all such vectors is the finite projective space with order 2 and dimension 5. Three cards form a set if and only if the corresponding points are collinear in that space. More generally, in the variant, n cards form a set if and only if the corresponding vectors add to the zero vector. In Set, there can exist 20 cards out of the 81 without a set, but no more. In Projective Set, there can exist up to 32 out of the 63 cards with no (3-card) set.
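A hedged sketch of the rule described above: if each card is encoded as a 6-bit integer with one bit per dot colour, then a collection of cards is a "set" exactly when the bitwise XOR of the cards is zero, i.e. every colour appears an even number of times. The specific hand below is invented for illustration.

```c
/* Minimal sketch: testing whether Projective Set cards form a "set".
 * Each card is a 6-bit value (one bit per dot colour); a group of cards is a
 * set iff the XOR of all cards is zero, meaning every colour occurs an even
 * number of times. The example hand is made up. */
#include <stdio.h>

static int is_set(const unsigned cards[], int n) {
    unsigned x = 0;
    for (int i = 0; i < n; ++i)
        x ^= cards[i];
    return x == 0;
}

int main(void) {
    unsigned hand[] = { 0x03, 0x0C, 0x0F };   /* 000011, 001100, 001111 */
    printf("%s\n", is_set(hand, 3) ? "set" : "not a set");   /* prints "set" */
    return 0;
}
```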
**TOLLIP** TOLLIP: Toll interacting protein, also known as TOLLIP, is an inhibitory adaptor protein that in humans is encoded by the TOLLIP gene. Function and regulation: It is an inhibitory adaptor protein within the Toll-like receptor (TLR) pathway. The TLR pathway is a part of the innate immune system that recognizes structurally conserved molecular patterns of microbial pathogens, leading to an inflammatory immune response. Tollip interacts with cellular and subcellular membrane compartments such as the endosome and lysosome through its C2 domain binding with phosphoinositides. By coordinating organelle communications, Tollip can contribute to the fusion of endo-lysosomes and autophagosomes. Mice with Tollip deletion exhibit elevated risks for inflammatory diseases such as atherosclerosis and neurodegeneration. Clinical significance: Polymorphisms in TLR genes have been implicated in various diseases like atopic dermatitis. Recently, variations in the TOLLIP gene have been associated with tuberculosis and idiopathic pulmonary fibrosis. Interactions: TOLLIP has been shown to interact with TOM1, TLR 2, TLR 4 and IL1RAP.
**VyOS** VyOS: VyOS is an open source network operating system based on Debian. VyOS provides a free routing platform that competes directly with other commercially available solutions from well-known network providers. Because VyOS runs on standard amd64 systems, it can be used as a router and firewall platform for cloud deployments. History: After Brocade Communications stopped development of the Vyatta Core Edition of the Vyatta routing software, a small group of enthusiasts in 2013 took the last Community Edition and worked on building an open source fork to live on in place of the end-of-life Vyatta Core. Features: BGP (IPv4 and IPv6), OSPF (v2 and v3), RIP and RIPng, policy-based routing. IPsec, VTI, VXLAN, L2TPv3, L2TP/IPsec and PPTP servers, tunnel interfaces (GRE, IPIP, SIT), OpenVPN in client, server, or site-to-site modes, WireGuard. Stateful firewall, zone-based firewall, all types of source and destination NAT (one to one, one to many, many to many). DHCP and DHCPv6 server and relay, IPv6 RA, DNS forwarding, TFTP server, web proxy, PPPoE access concentrator, NetFlow/sFlow sensor, QoS. VRRP for IPv4 and IPv6, ability to execute custom health checks and transition scripts; ECMP, stateful load balancing. Built-in versioning. Releases: VyOS version 1.0.0 (Hydrogen) was released on December 22, 2013. On October 9, 2014, version 1.1.0 (Helium) was released. Versions 1.0 and 1.1 were based on Debian 6.0 (Squeeze) and are available as 32-bit and 64-bit images for both physical and virtual machines. On January 28, 2019, version 1.2.0 (Crux) was released; version 1.2.0 is based on Debian 8 (Jessie). Releases: While versions 1.0 and 1.1 were named after elements, a new naming scheme based on constellations is used from version 1.2. Release History VMware Support The VyOS OVA image for VMware was released with the February 3, 2014 maintenance release. It allows a convenient setup of VyOS on a VMware platform and includes all of the VMware tools and paravirtual drivers. Releases: The OVA image can be downloaded from the standard download site. Amazon EC2 Support Starting with version 1.0.2, Amazon EC2 customers could select a VyOS AMI image (deprecated, to be removed in February 2018). Starting with version 1.1.7, AWS customers should use the new marketplace VyOS AMI. Starting with version 1.2.0, AWS customers can deploy a new marketplace AMI; this new offering comes with support. Azure Support Starting with version 1.2.0, Azure customers can use VyOS on Azure.
**Midpoint method** Midpoint method: In numerical analysis, a branch of applied mathematics, the midpoint method is a one-step method for numerically solving the differential equation $y'(t) = f(t, y(t)), \quad y(t_0) = y_0$. The explicit midpoint method is given by the formula $y_{n+1} = y_n + h f\!\left(t_n + \tfrac{h}{2},\, y_n + \tfrac{h}{2} f(t_n, y_n)\right)$ (1e), and the implicit midpoint method by $y_{n+1} = y_n + h f\!\left(t_n + \tfrac{h}{2},\, \tfrac{1}{2}(y_n + y_{n+1})\right)$ (1i), for $n = 0, 1, 2, \dots$ Here, h is the step size — a small positive number, $t_n = t_0 + nh$, and $y_n$ is the computed approximate value of $y(t_n)$. The explicit midpoint method is sometimes also known as the modified Euler method; the implicit method is the simplest collocation method and, applied to Hamiltonian dynamics, a symplectic integrator. Note that the modified Euler method can refer to Heun's method; for further clarity see List of Runge–Kutta methods. The name of the method comes from the fact that in the formula above, the function f giving the slope of the solution is evaluated at $t = t_n + h/2 = \tfrac{t_n + t_{n+1}}{2}$, the midpoint between $t_n$, at which the value of $y(t)$ is known, and $t_{n+1}$, at which the value of $y(t)$ needs to be found. Midpoint method: A geometric interpretation may give a better intuitive understanding of the method (see figure at right). In the basic Euler's method, the tangent of the curve at $(t_n, y_n)$ is computed using $f(t_n, y_n)$. The next value $y_{n+1}$ is found where the tangent intersects the vertical line $t = t_{n+1}$. However, if the second derivative is only positive between $t_n$ and $t_{n+1}$, or only negative (as in the diagram), the curve will increasingly veer away from the tangent, leading to larger errors as h increases. The diagram illustrates that the tangent at the midpoint (upper, green line segment) would most likely give a more accurate approximation of the curve in that interval. However, this midpoint tangent could not be accurately calculated because we do not know the curve (that is what is to be calculated). Instead, this tangent is estimated by using the original Euler's method to estimate the value of $y(t)$ at the midpoint, then computing the slope of the tangent with $f$. Finally, the improved tangent is used to calculate the value of $y_{n+1}$ from $y_n$. This last step is represented by the red chord in the diagram. Note that the red chord is not exactly parallel to the green segment (the true tangent), due to the error in estimating the value of $y(t)$ at the midpoint. Midpoint method: The local error at each step of the midpoint method is of order $O(h^3)$, giving a global error of order $O(h^2)$. Thus, while more computationally intensive than Euler's method, the midpoint method's error generally decreases faster as $h \to 0$. The methods are examples of a class of higher-order methods known as Runge–Kutta methods. Derivation of the midpoint method: The midpoint method is a refinement of the Euler method $y_{n+1} = y_n + h f(t_n, y_n)$, and is derived in a similar manner. The key to deriving Euler's method is the approximate equality $y(t+h) \approx y(t) + h y'(t)$ (2), which is obtained from the slope formula $y'(t) \approx \tfrac{y(t+h) - y(t)}{h}$ (3) and keeping in mind that $y' = f(t, y)$. Derivation of the midpoint method: For the midpoint methods, one replaces (3) with the more accurate $y'\!\left(t + \tfrac{h}{2}\right) \approx \tfrac{y(t+h) - y(t)}{h}$, when instead of (2) we find $y(t+h) \approx y(t) + h f\!\left(t + \tfrac{h}{2},\, y\!\left(t + \tfrac{h}{2}\right)\right)$ (4). One cannot use this equation to find $y(t+h)$ as one does not know y at $t + h/2$. The solution is then to use a Taylor series expansion exactly as if using the Euler method to solve for $y(t + h/2)$: $y\!\left(t + \tfrac{h}{2}\right) \approx y(t) + \tfrac{h}{2} y'(t) = y(t) + \tfrac{h}{2} f(t, y(t))$, which, when plugged into (4), gives us $y(t+h) \approx y(t) + h f\!\left(t + \tfrac{h}{2},\, y(t) + \tfrac{h}{2} f(t, y(t))\right)$ and the explicit midpoint method (1e).
Derivation of the midpoint method: The implicit method (1i) is obtained by approximating the value at the half step $t + h/2$ by the midpoint of the line segment from $y(t)$ to $y(t+h)$, $y\!\left(t + \tfrac{h}{2}\right) \approx \tfrac{1}{2}\left(y(t) + y(t+h)\right)$, and thus $\tfrac{y(t+h) - y(t)}{h} \approx y'\!\left(t + \tfrac{h}{2}\right) \approx k = f\!\left(t + \tfrac{h}{2},\, \tfrac{1}{2}\left(y(t) + y(t+h)\right)\right)$. Inserting the approximation $y_n + hk$ for $y(t_n + h)$ results in the implicit Runge–Kutta method $k = f\!\left(t_n + \tfrac{h}{2},\, y_n + \tfrac{h}{2} k\right)$, $y_{n+1} = y_n + hk$, which contains the implicit Euler method with step size h/2 as its first part. Because of the time symmetry of the implicit method, all terms of even degree in h of the local error cancel, so that the local error is automatically of order $O(h^3)$. Replacing the implicit with the explicit Euler method in the determination of k results again in the explicit midpoint method.
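As a hedged illustration of the explicit midpoint method (1e), the sketch below integrates the test problem y' = -2y, y(0) = 1, whose exact solution is e^(-2t); the test equation, step size, and interval are illustrative choices, not part of the article.

```c
/* Minimal sketch of the explicit midpoint method (1e) on y' = -2y, y(0) = 1.
 * The right-hand side, step size, and interval are invented for illustration. */
#include <math.h>
#include <stdio.h>

static double f(double t, double y) {
    (void)t;                 /* autonomous test equation */
    return -2.0 * y;
}

int main(void) {
    double t = 0.0, y = 1.0;
    const double h = 0.1;
    for (int n = 0; n < 10; ++n) {
        /* Euler half-step to the midpoint, then take the full step using the
           slope evaluated there -- exactly formula (1e). */
        double y_mid = y + 0.5 * h * f(t, y);
        y = y + h * f(t + 0.5 * h, y_mid);
        t += h;
    }
    /* Global error is O(h^2), so the two values should differ only by a few 1e-3. */
    printf("numerical %.6f   exact %.6f\n", y, exp(-2.0 * t));
    return 0;
}
```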
**Perlite** Perlite: Perlite is an amorphous volcanic glass that has a relatively high water content, typically formed by the hydration of obsidian. It occurs naturally and has the unusual property of greatly expanding when heated sufficiently. It is an industrial mineral, suitable "as ceramic flux to lower the sintering temperature", and a commercial product useful for its low density after processing. Properties: Perlite softens when it reaches temperatures of 850–900 °C (1,560–1,650 °F). Water trapped in the structure of the material vaporises and escapes, and this causes the expansion of the material to 7–16 times its original volume. The expanded material is a brilliant white, due to the reflectivity of the trapped bubbles. Unexpanded ("raw") perlite has a bulk density around 1100 kg/m3 (1.1 g/cm3), while typical expanded perlite has a bulk density of about 30–150 kg/m3 (0.03–0.150 g/cm3). Typical analysis: 70–75% silicon dioxide: SiO2 12–15% aluminium oxide: Al2O3 3–4% sodium oxide: Na2O 3–5% potassium oxide: K2O 0.5-2% iron oxide: Fe2O3 0.2–0.7% magnesium oxide: MgO 0.5–1.5% calcium oxide: CaO 3–5% loss on ignition (chemical / combined water) Sources and production: Perlite is a non-renewable resource. The world reserves of perlite are estimated at 700 million tonnes.The confirmed resources of perlite existing in Armenia amount to 150 million m3, whereas the total amount of projected resources reaches up to 3 billion m3. Considering specific density of 1.1 ton/m3 confirmed reserves in Armenia amount to 165 million tons. Other reported reserves are: Greece - 120 million tonnes, Turkey, USA and Hungary - about 49-57 million tonnes. Perlite world production, led by China, Turkey, Greece, USA, Armenia and Hungary, summed up to 4.6 million tonnes in 2018. Uses: Because of its low density and relatively low price (about US$150 per tonne of unexpanded perlite), many commercial applications for perlite have been developed. Uses: Construction and manufacturing In the construction and manufacturing fields, it is used in lightweight plasters, concrete and mortar, insulation and ceiling tiles. It may also be used to build composite materials that are sandwich-structured or to create syntactic foam.Perlite filters are fairly commonplace in filtering beer before it is bottled.Small quantities of perlite are also used in foundries, cryogenic insulation, and ceramics (as a clay additive). It is also used by the explosives industry. Uses: Aquatic filtration Perlite is currently used in commercial pool filtration technology, as a replacement to diatomaceous earth filters. Perlite is an excellent filtration aid and is used extensively as an alternative to diatomaceous earth. The popularity of perlite usage as a filter medium is growing considerably worldwide. Several products exist in the market to provide perlite based filtration. Several perlite filters and perlite media have met NSF-50 approval (Aquify PMF Series and AquaPerl), which standardizes water quality and technology safety and performance. Perlite can be safely disposed of through existing sewage systems, although some pool operators choose to separate the perlite using settling tanks or screening systems to be disposed of separately. Uses: Biotechnology Due to thermal and mechanical stability, non-toxicity, and high resistance against microbial attacks and organic solvents, perlite is widely used in biotechnological applications. 
Perlite was found to be an excellent support for immobilization of biocatalysts such as enzymes for bioremediation and sensing applications. Agriculture In horticulture, perlite can be used as a soil amendment or alone as a medium for hydroponics or for starting cuttings. When used as an amendment, it has high permeability and low water retention and helps prevent soil compaction. Cosmetics Perlite is used in cosmetics as an absorbent and mechanical exfoliant. Substitutes Perlite can be replaced for all of its uses. Substitutes include: Diatomite, used for filter-aids Expanded clay, an alternative lightweight filler for building materials Shale Pumice Slag Vermiculite - many expanders of perlite are also exfoliating vermiculite and belong to both trade associations Occupational safety: As perlite contains silicon dioxide, goggles and silica filtering masks are recommended when handling large quantities. Occupational safety: United States The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for perlite exposure in the workplace as 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday.
**GNU Scientific Library** GNU Scientific Library: The GNU Scientific Library (or GSL) is a software library for numerical computations in applied mathematics and science. The GSL is written in C; wrappers are available for other programming languages. The GSL is part of the GNU Project and is distributed under the GNU General Public License. Project history: The GSL project was initiated in 1996 by physicists Mark Galassi and James Theiler of Los Alamos National Laboratory. They aimed to write a modern replacement for widely used but somewhat outdated Fortran libraries such as Netlib. They carried out the overall design and wrote early modules; with that ready they recruited other scientists to contribute. Project history: The "overall development of the library and the design and implementation of the major modules" was carried out by Brian Gough and Gerard Jungman. Other major contributors were Jim Davies, Reid Priedhorsky, M. Booth, and F. Rossi. Version 1.0 was released in 2001. In the following years, the library expanded only slowly; as the documentation stated, the maintainers were more interested in stability than in additional functionality. Major version 1 ended with release 1.16 of July 2013; this was the only public activity in the three years 2012–2014. Project history: Vigorous development resumed with publication of version 2.0 in October 2015. The latest version 2.7 was released in June 2021. Example: The documentation gives an example program that calculates the value of the Bessel function of the first kind and order zero at 5; the program has to be linked against the GSL library upon compilation, and its output should be correct to double-precision accuracy (a sketch of such a program is given at the end of this entry). Features: The software library provides a broad range of numerical facilities. Programming-language bindings Since the GSL is written in C, it is straightforward to provide wrappers for other programming languages. Such wrappers currently exist for AMPL, C++, Fortran, Haskell, Java, Julia, Common Lisp, OCaml, Octave, Perl Data Language, Python, R, Ruby, and Rust. C++ support The GSL can be used in C++ classes, but not using pointers to member functions, because the type of a pointer to member function is different from a pointer to function. Instead, pointers to static functions have to be used. Another common workaround is using a functor. Features: C++ wrappers for GSL are available. Not all of these are regularly maintained. They do offer access to matrix and vector classes without having to use GSL's interface to malloc and free functions. Some also offer support for creating workspaces that behave like smart pointer classes. Finally, there is (limited, as of April 2020) support for allowing the user to create classes to represent a parameterised function as a functor. Features: While not strictly wrappers, there are some C++ classes that allow C++ users to use the GNU Scientific Library with wrapper features.
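The example program referred to above is not reproduced in this text; the following is a minimal sketch of what such a program looks like, using GSL's special-function interface (gsl_sf_bessel_J0). The compile line in the comment is the usual way of linking against GSL and may differ between systems.

```c
/* Hedged sketch: evaluate the Bessel function of the first kind, order zero,
 * at x = 5 using GSL's special-function routines.
 * Typical build: gcc bessel.c -lgsl -lgslcblas -lm */
#include <stdio.h>
#include <gsl/gsl_sf_bessel.h>

int main(void) {
    double x = 5.0;
    double y = gsl_sf_bessel_J0(x);    /* J0(5) is approximately -0.1776 */
    printf("J0(%g) = %.18e\n", x, y);
    return 0;
}
```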
**Taxine alkaloids** Taxine alkaloids: Taxine alkaloids, which are often named under the collective title of taxines, are the toxic chemicals that can be isolated from the yew tree. The amount of taxine alkaloids depends on the species of yew, with Taxus baccata and Taxus cuspidata containing the most. The major taxine alkaloids are taxine A and taxine B, although there are at least 10 different alkaloids. Until 1956, it was believed that all the taxine alkaloids were one single compound named taxine. The taxine alkaloids are cardiotoxins, with taxine B being the most active. Taxine alkaloids have no medical uses, but paclitaxel and other taxanes that can be isolated from yews have been used as chemotherapy drugs. Provenance: Taxine can be found in Taxus species: Taxus cuspidata, T. baccata (English yew), Taxus x media, Taxus canadensis, Taxus floridana, and Taxus brevifolia (Pacific or western yew). All of these species contain taxine in every part of the plant except in the aril, the fleshy covering of the seeds (berries). Concentrations vary between species, leading to varying toxicities within the genus. This is the case for Taxus brevifolia (Pacific yew) and Taxus baccata (English yew): T. baccata contains high taxine concentrations, which leads to a high toxicity, whereas T. brevifolia has a low toxicity. There are seasonal changes in the concentrations of taxine in yew plants, with the highest concentrations during the winter and the lowest in the summer. The poison remains dangerous in dead plant matter. These species have distinctive leaves, which are needle-like, small, spirally arranged but twisted so they are two-ranked, and linear-lanceolate. They are also characterized by their ability to regenerate from stumps and roots. Taxus species are found exclusively in temperate zones of the northern hemisphere. In particular, T. baccata is found all over Europe, as a dominant species or growing under partial canopies of deciduous trees. It grows well in steep rocky areas on calcareous substrates such as the chalk downs of England, and in more continental climates it fares better in mixed forests. T. baccata is sensitive to frost, limiting its northern Scandinavian distribution. History: The toxic nature of yew trees has been known for millennia. Greek and Roman writers recorded examples of poisonings, including Julius Caesar's account of Cativolcus, king of the Eburones, who committed suicide using the "juice of the yew". The first attempt to extract the poisonous substance from the yew tree was made in 1828 by Piero Peretti, who isolated a bitter substance. In 1856, H. Lucas, a pharmacist in Arnstadt, prepared a white alkaloid powder from the foliage of Taxus baccata L., which he named taxine. The crystalline form of the substance was isolated in 1876 by W. Marmé, a French chemist. A. Hilger and F. Brande used elemental combustion analysis in 1890 to suggest the first molecular formula, C37H52NO10. For the next 60 years, it was generally accepted that taxine was made of a single compound, and it was well known enough for Agatha Christie to use it as a poison in A Pocket Full of Rye (1953). However, in 1956, Graf and Boeddeker discovered that taxine was actually a complex mixture of alkaloids rather than a single alkaloid. Using electrophoresis, they were able to isolate the two major components, taxine A and taxine B. Taxine A was the fastest moving band and accounted for 1.3% of the alkaloid mixture, while taxine B was the slowest moving band and accounted for 30% of the mixture.
The full structure of taxine A was reported in 1982, taxine B in 1991. Toxicity in humans: Almost all parts of Taxus baccata, perhaps the best-known Taxus species, contain taxines.Taxines are cardiotoxic calcium and sodium channel antagonists. If any leaves or seeds of the plant are ingested, urgent medical attention is recommended as well as observation for at least 6 hours after the point of ingestion. There are currently no known antidotes for yew poisoning, but drugs such as atropine have been used to treat the symptoms. Taxine B, the most common alkaloid in Taxus species, is also the most cardiotoxic taxine, followed by taxine A.Taxine alkaloids are absorbed quickly from the intestine and in high enough quantities can cause death due to general cardiac failure, cardiac arrest or respiratory failure. Taxines are also absorbed efficiently via the skin and Taxus species should thus be handled with care and preferably with gloves. Taxus Baccata leaves contain approximately 5 mg of Taxines per 1g of leaves. The estimated lethal dose (LDmin) of taxine alkaloids is approximately 3.0 mg/kg body weight for humans. Different studies show different toxicities; a major reason is the difficulty of measuring taxine alkaloids.Minimum lethal doses (oral LDmin) for many different animals have been tested: Chicken 82.5 mg/kg Cow 10.0 mg/kg Dog 11.5 mg/kg Goat 60.0 mg/kg Horse 1.0–2.0 mg/kg Pig 3.5 mg/kg Sheep 12.5 mg/kgSeveral studies have found taxine LD50 values under 20 mg/kg in mice and rats. Toxicity in humans: Clinical signs Cardiac and cardiovascular effects: Arrhythmia – Irregular heartbeats leading to lower cardiac output; itself a very severe symptom. Ventricular arrhythmias can lead to circulatory collapse (via cardiac arrest) very quickly if not treated. Bradycardia – Fewer heart beats per time unit.Both these effects lead to hypotension, which gives many symptoms including: Headache Dizziness Tremorand many other typical signs of low blood pressure. Intestinal effects: Nausea and vomiting Diarrhoea Abdominal painRespiratory effects: Respiratory distress – Shortness of breath.If the poisoning is severe and not treated: Loss of consciousness – Lack of oxygen due to low blood pressure and respiratory distress forces the body to shut down all but the most vital functions. Respiratory failure – Breathing stops. Circulatory collapse – Blood pressure drops to the point that not even the most basic functions can be sustained. Toxicity in humans: Diagnosis Diagnosis of yew poisoning is very important if the patient is not already aware of having ingested parts of the yew tree. The method of diagnosis is the determination of 3,5-dimethoxyphenol, a product of the hydrolysis of the glycosidic bond in taxine, in the blood, the gastric contents, the urine, and the tissues of the patient. This analysis can be done by gas or liquid chromatography and also by mass spectroscopy. Toxicity in humans: Treatment There are no specific antidotes for taxine, so patients can only receive treatment for their symptoms. Toxicity in humans: It is also important to control blood pressure and heart rate to treat the heart problems. Atropine has been used successfully in humans to treat bradycardias and arrhythmias caused by taxine. It is more effective if administrated early, but it is also necessary to be cautious with administration because it can produce an increase in myocardial oxygen demand and potentiate myocardial hypoxia and dysfunction. 
An artificial cardiac pacemaker can also be installed to control the heartbeat. Toxicity in humans: Other treatments are useful to treat the other symptoms of poisoning: positive pressure ventilation if respiratory distress is present, fluid therapy to support blood pressure and maintain hydration and renal function, and gastrointestinal protectants. It may also be necessary to control aggressive behaviour and convulsions with tranquilizers. Toxicity in humans: Prevention The toxic effects of T. baccata have been known since ancient times. In most cases, poisoning is accidental, especially in cases involving children or animals. However, there are cases in which the poison is used as a suicide method.Because taxine poisoning is often only diagnosed after the death of the patient due to its rapid effect, preventing exposure is very important. Even dried parts of the plant are toxic because they still contain taxine molecules. Pet owners must ensure that yew branches or leaves are not used as toys for dogs or as perches for domestic birds. Toxicity in animals: The effects of Taxine in humans are very similar to the effects on other animals. It has the same mechanisms of action, and most of the times the ingestion of yew material is diagnosed with the death of the animal. Moreover, clinical signs, diagnosis, treatment, and prevention are mostly the same as in humans. This was seen due to the many experiments realized on rats, pigs, and other animals.Poisoning is typically caused by ingestion of decorative yew shrubs or trimmings thereof. In animals the only sign is often sudden death. Diagnosis is based on knowledge of exposure and foliage found in the digestive tract. With smaller doses, animals display uneasiness, trembling, dyspnea, staggering, weakness, and diarrhea. Cardiac arrhythmias worsen over time, eventually causing death. "Necropsy findings are unremarkable and nonspecific", generally including pulmonary, hepatic, and splenic congestion. With lower doses, mild inflammation may be seen in the upper gastrointestinal tract.Some animals are immune to the effects of taxine, particularly deer. Mechanism of action: The toxicity of the yew plant is due to a number of substances, the principal ones being toxic alkaloids (taxine B, paclitaxel, isotaxine B, taxine A), glycosides (taxicatine) and taxane derivates (taxol A, taxol B).There have been many studies about the toxicity of the taxine alkaloids, and they have shown that their mechanism of action is interference with the sodium and calcium channels of myocardial cells, increasing the cytoplasmic calcium concentrations. Their mechanism is similar to drugs such as verapamil, although taxines are more cardioselective. They also reduce the rate of the depolarization of the action potential in a dose-dependent manner. This produces bradycardia, hypotension, depressed myocardial contractility, conduction delay, arrhythmias, and other complications.Some taxine alkaloids have been isolated to study their effects and characteristics. This has allowed the discovery of some of the particular effects of each substance of the plant. For example, taxine A does not influence blood pressure, taxol causes cardiac disturbances in some people, that taxine B is the most toxic of these substances.Because a derivative from the yew, paclitaxel, functions as an anticancer drug, there have been investigations to show whether taxine B could also be used as a pharmaceutical.
**Kyiv style** Kyiv style: The Kyiv Academic Style of Bandura Playing is a method of playing the Ukrainian folk instrument of bandura. The instrument is held between the knees perpendicular to the body of the player. This means that the left hand is only able to play easily along the bass strings of the instrument. The right hand usually plays just on the treble strings known as prystrunky. The manner in which the instrument is held influences the technique used by the bandurist. The left hand uses only the middle three fingers in play. The position in which the bandura is held also means that the 5th finger of the right hand cannot be used effectively. Kyiv style: The Kyiv style is based on the technique used by kobzari of the Chernihiv province such as Tereshko Parkhomenko. It became known as the Kyiv style because the Kyiv Bandurist Capella used it. Before World War II, most Kyiv banduras had diatonically tuned bass strings. Since World War II in Ukraine, chromatic bass tuning is the standard. In the West, however, groups of bandurists exist that adhere to a diatonic bass tuning. Often these bandurists will refer to their playing style as the Chernihiv style of playing the bandura.
**Gateway Load Balancing Protocol** Gateway Load Balancing Protocol: Gateway Load Balancing Protocol (GLBP) is a Cisco proprietary protocol that attempts to overcome the limitations of existing redundant router protocols by adding basic load balancing functionality. Gateway Load Balancing Protocol: In addition to being able to set priorities on different gateway routers, GLBP allows a weighting parameter to be set. Based on this weighting (compared to others in the same virtual router group), ARP requests will be answered with MAC addresses pointing to different routers. Thus, by default, load balancing is not based on traffic load, but rather on the number of hosts that will use each gateway router. By default, GLBP load balances in round-robin fashion. Gateway Load Balancing Protocol: GLBP elects one AVG (Active Virtual Gateway) for each group. Other group members act as backup in case of AVG failure. In case there are more than two members, the second best AVG is placed in the Standby state and all other members are placed in the Listening state. This is monitored using hello and holdtime timers, which are 3 and 10 seconds by default. The elected AVG then assigns a virtual MAC address to each member of the GLBP group, including itself, thus enabling AVFs (Active Virtual Forwarders). Each AVF assumes responsibility for forwarding packets sent to its virtual MAC address. There could be up to four AVFs at the same time. Gateway Load Balancing Protocol: By default, GLBP routers use the local multicast address 224.0.0.102 to send hello packets to their peers every 3 seconds over UDP 3222 (source and destination). Cisco implemented IPv6 support for GLBP in IOS release 12.2(33)SXI.
**Descriptive phenomenological method in psychology** Descriptive phenomenological method in psychology: The descriptive phenomenological method in psychology was developed by the American psychologist Amedeo Giorgi in the early 1970s. Giorgi based his method on principles laid out by philosophers like Edmund Husserl and Maurice Merleau-Ponty as well as what he had learned from his prior professional experience in psychophysics. Giorgi was an early pioneer of the humanistic psychology movement, the use of phenomenology in psychology, and qualitative research in psychology, and to this day continues to advocate for the importance of a human science approach to psychological subject matter. Giorgi has directed over 100 dissertations that have used the Descriptive Phenomenological Method on a wide variety of psychological problems, and he has published over 100 articles on the phenomenological approach to psychology. Theoretical perspective: Giorgi promotes phenomenology as a theoretical movement that avoids certain simplified tendencies sustained by many modern approaches to psychological research. According to the phenomenological psychological perspective embraced by Giorgi, researchers are encouraged to "bracket" their own assumptions pertaining to the phenomenon in question by refraining from positing a static sense of objective reality for oneself and the participants whose experiences are being studied. This allows the researchers to attend to the descriptions of the participants without forcing the meaning of the descriptive units into pre-defined categories. Theoretical perspective: An important aspect of the descriptive phenomenological method in psychology is the way by which it distinguishes itself from those approaches that are strictly interpretive. In this, Giorgi closely follows Husserl who proposes that "being given and being interpreted are descriptions of the same situation from two different levels of discourse." As such, in the Descriptive Phenomenological Method there are both descriptive and interpretive moments, but the researcher remains careful to attend to each type of act in unique ways. Through a sort of empathic immersion with the subjects and their descriptions, the researchers get a sense of the ways that the experiences given by the participants were actually lived, which is in turn described. During this process, however, theoretical or speculative interpretation should be avoided so as to flesh out the full lived meaning inherent to the descriptions themselves (Giorgi, 2009, p. 127). Interpretation may then occur to various extents during other phases of the research process, but only as it relates to implications of the results rather than the lived meaning of the participants' experiences. Theoretical perspective: Another form of Descriptive phenomenological method in psychology was proposed by Paul Colaizzi. This method follows a distinctive seven step process that stays close to the data while providing strong analysis. Phenomenological intuition: The Descriptive Phenomenological Method involves neither deduction nor induction in order to find meaning, but instead asks the researcher to intuit what is essential to the phenomenon being studied. Intuition, in this sense (going along with the philosophy of phenomenology), simply means that an object (or state of affairs, structural whole, proposition etc.) becomes presented to consciousness in a certain mode of giveness. 
In the context of this research method, therefore, intuition is used in order to get a sense of the lived meaning of each description so as to relate them to what is known about the phenomenon of interest in general These types of generalities are not statistical probabilities nor universally posited, but are dependent upon the lived meaning of the descriptions and the meaning of the phenomenon being studied. Data analysis: The phenomenological psychological attitude is to be assumed while analyzing the data in order to ensure that "the results reflect a careful description of precisely the features of the experienced phenomenon as they present themselves to the consciousness of the researcher" (Giorgi, 2009, pp. 130–131). In the phenomenological psychological attitude, the psychological acts of the participants are affirmed to be real while the objects at which those acts are directed are reduced to what appears as psychologically relevant to the particular experience being attended to. In this sense, the researcher attends to the phenomenon in its "own appropriate mode of self-givenness, thus [meeting] the demand for scientific objectivity concerning the subjective: the method of phenomenological reduction" (Scanlon, 1977, xiv) With this method, this is done so as to reach a level of understanding that is appropriate for psychologists, while also helping the researcher to reach a sort of empathically sensed intuition of the experiences, in the sense used by Eugene GendlinEach description given by the participants is first read through in its entirety in order to get a better sense of the whole situation in which the experiences occurred. Then each description is attended to individually as the researcher goes through and marks off different units of meaning within the data in order to make the descriptions more manageable. After a single description is broken down into separate units, each unit can then be transformed from the language through which it was given into "psychologically sensitive" meaning units, which is done with the help of imaginative variation. This process is meant to flesh out the horizons of the lived meaning more fully in order to expand the possibilities inherent to the phenomenon being studied. Finally, after all the descriptions have undergone these steps, general psychological structures, in the sense described above, are sought. For Giorgi (2009), "essential psychological structure" refers to: "[A depiction] of the lived experience of a phenomenon, which may include aspects of the description of which the experiencer was unaware. The psychological structure is not a definition. It is meant to depict how certain phenomena that get named are lived, which includes experiential and conscious moments seen from a psychological perspective. A psychological perspective means that the lived meanings are based on an individual but get expressed eidetically, which means that they are general." The final structure is meant to serve as an ideal representation of the phenomenon being studied, based upon actual instantiations of it within concrete lived experiences. It may be the case that such structures turn up many times again, or their relevance may be limited to the cases studied in a particular study. Either way, they have the potential to reveal a lived understanding of a certain phenomenon without first requiring a certain theoretical framework in order to comprehend it. 
Data analysis: The Colaizzi method for data analysis proceeds as follows: Familiarization: The researcher reads the participants' descriptions multiple times until they are familiar. Identifying Significant Statements: The researcher identifies all the relevant statements that directly relate to the selected phenomenon. Formulating Meanings: While bracketing their own assumptions, the researcher identifies meanings relevant to the phenomenon from the significant statements. Clustering Themes: The researcher clusters the identified meanings into themes, considering all the accounts presented. Bracketing is again important to ensure internal validity. Developing an Exhaustive Description: The researcher writes a full description of the phenomenon including the themes compiled in step 4. Producing the Fundamental Structure: The researcher condenses the statement from the previous step into a dense statement that shows the essential aspects of the selected phenomenon. Seeking Verification of the Fundamental Structure: The statement from step 6 is returned to all the participants to verify that it captures their experience. Based on their responses, the researcher may reevaluate the analysis. This last step in particular has been criticized by Giorgi, who stated that the researcher and the participant will have different perspectives.
**N-Methylconiine** N-Methylconiine: N-Methylconiine is a poisonous alkaloid found in poison hemlock in small quantities. Isolation and properties: The d-(+)-stereoisomer of N-methylconiine is reported to occur in hemlock in small quantities, and methods for its isolation are described by Wolffenstein and by von Braun. It is a colourless, oily, coniine-like liquid, specific rotation [α]D +81.33° at 24.3 °C. The salts are crystalline ("B" marks one molecule of the base): the hydrochloride, B•HCl, forms masses of needles, mp. 188 °C; the platinichloride, B2•H2PtCl6, has mp. 158 °C. Isolation and properties: The l-(−)-stereoisomer was obtained by Ahrens from residues left in the isolation of coniine as hydrobromide or by removing coniine as the nitroso-compound. It is a colourless, coniine-like liquid, bp. 175.6 °C/767 mmHg, specific rotation [α]D −81.92° at 20 °C. The monohydrochloride crystallises in leaflets, mp. 191–192 °C; the monohydrobromide in leaflets, mp. 189–190 °C; the platinichloride in orange crystals, mp. 153–154 °C; the aurichloride in leaflets, mp. 77–78 °C; and the picrate in long needles, mp. 121–122 °C. Synthesis: N-Methyl-d-coniine was prepared by the action of potassium methyl sulfate on coniine by Passon. Hess and Eichel have shown that d-coniine with formaldehyde and formic acid yields an active N-methyl-d-coniine, and that methyl-isopelletierine hydrazone yields N-methyl-dl-coniine when heated with sodium ethoxide at 150–170 °C.
**Dtella** Dtella: Dtella is a free and open-source, peer-to-peer file sharing client that connects to a distributed, decentralized DC++ Direct Connect network. Dtella allows the creation of a file-sharing intranet with decentralized communication between clients. This lets clients on the same network share files without an internet connection, avoiding internet bandwidth limitations. Dtella is pronounced "dee-tell-uh", and the name originates as a rough contraction of "DC++ GNUtella"; Gnutella is a P2P system that shares a similar network structure with Dtella, though Dtella does not derive from it. History: Dtella originated at Purdue from a need to share files on a network without being limited by Purdue's bandwidth limit (for residents of its residence halls) and to avoid anti-piracy take-down notices. Dtella started as a client for DCGate, which used a DC network linked via IRC directory nodes. DCGate originally provided Purdue with IRC-based network transfers at about 1 MB/s. Dtella grew into a decentralized system offering several improvements over the now deprecated DCGate project. While DCGate relied on central IRC directory nodes, Dtella is decentralized, making it harder to shut down. DCGate translated between the DC and IRC protocols, while Dtella forms a P2P mesh network, allowing it to avoid relying on IRC or any other central node. This also made it more adaptable to other universities and faster to spread, as no central node needed to be set up and maintained. Bandwidth caps for large networked entities such as university residence halls are common practice, contributing to Dtella's spread and growth across multiple universities. Many forks of Dtella exist; some are listed below. Forks: Notable forks include Dtella@Purdue (the original Dtella hub, active from 2004 to May 13, 2023, and one of the most active Dtella communities), Dtella@Berkeley (moved to UC Berkeley in Fall 2010; no longer a Dtella community), and Dtella@Home (for home and off-campus use). Other known forks: Dtella@MS, Files@USYD, Dtella@CMU, Dtella@McGill, Dtella@Cambridge, Dtella@IH, Dtella@PKDC, Dtella@UMD.
**Isometry** Isometry: In mathematics, an isometry (or congruence, or congruent transformation) is a distance-preserving transformation between metric spaces, usually assumed to be bijective. The word isometry is derived from the Ancient Greek: ἴσος isos meaning "equal", and μέτρον metron meaning "measure". Introduction: Given a metric space (loosely, a set and a scheme for assigning distances between elements of the set), an isometry is a transformation which maps elements to the same or another metric space such that the distance between the image elements in the new metric space is equal to the distance between the elements in the original metric space. In a two-dimensional or three-dimensional Euclidean space, two geometric figures are congruent if they are related by an isometry; the isometry that relates them is either a rigid motion (translation or rotation), or a composition of a rigid motion and a reflection. Introduction: Isometries are often used in constructions where one space is embedded in another space. For instance, the completion of a metric space M involves an isometry from M into M′, a quotient set of the space of Cauchy sequences on M. Introduction: The original space M is thus isometrically isomorphic to a subspace of a complete metric space, and it is usually identified with this subspace. Other embedding constructions show that every metric space is isometrically isomorphic to a closed subset of some normed vector space and that every complete metric space is isometrically isomorphic to a closed subset of some Banach space. Introduction: An isometric surjective linear operator on a Hilbert space is called a unitary operator. Definition: Let X and Y be metric spaces with metrics (e.g., distances) dX and dY. A map f:X→Y is called an isometry or distance preserving if for any a,b∈X one has dX(a,b)=dY(f(a),f(b)). An isometry is automatically injective; otherwise two distinct points, a and b, could be mapped to the same point, thereby contradicting the coincidence axiom of the metric d. This proof is similar to the proof that an order embedding between partially ordered sets is injective. Clearly, every isometry between metric spaces is a topological embedding. A global isometry, isometric isomorphism or congruence mapping is a bijective isometry. Like any other bijection, a global isometry has a function inverse. The inverse of a global isometry is also a global isometry. Two metric spaces X and Y are called isometric if there is a bijective isometry from X to Y. The set of bijective isometries from a metric space to itself forms a group with respect to function composition, called the isometry group. Definition: There is also the weaker notion of path isometry or arcwise isometry: A path isometry or arcwise isometry is a map which preserves the lengths of curves; such a map is not necessarily an isometry in the distance preserving sense, and it need not necessarily be bijective, or even injective. This term is often abridged to simply isometry, so one should take care to determine from context which type is intended. Definition: Examples: Any reflection, translation and rotation is a global isometry on Euclidean spaces. See also Euclidean group and Euclidean space § Isometries. The map x↦|x| in R is a path isometry but not a (general) isometry. Note that unlike an isometry, this path isometry does not need to be injective. Isometries between normed spaces: The following theorem is due to Mazur and Ulam.
Definition: The midpoint of two elements x and y in a vector space is the vector 1/2(x + y). Linear isometry Given two normed vector spaces V and W, a linear isometry is a linear map A:V→W that preserves the norms: ‖Av‖=‖v‖ for all v∈V. Linear isometries are distance-preserving maps in the above sense. They are global isometries if and only if they are surjective. In an inner product space, the above definition reduces to ⟨v,v⟩=⟨Av,Av⟩ for all v∈V, which is equivalent to saying that A†A=IV, the identity on V. This also implies that isometries preserve inner products, as ⟨Au,Av⟩=⟨u,A†Av⟩=⟨u,v⟩. Linear isometries are not always unitary operators, though, as those additionally require that V=W and AA†=IV. By the Mazur–Ulam theorem, any isometry of normed vector spaces over R is affine. A linear isometry also necessarily preserves angles, therefore a linear isometry transformation is a conformal linear transformation. Examples: A linear map from Cn to itself is an isometry (for the dot product) if and only if its matrix is unitary. Manifold: An isometry of a manifold is any (smooth) mapping of that manifold into itself, or into another manifold, that preserves the notion of distance between points. The definition of an isometry requires the notion of a metric on the manifold; a manifold with a (positive-definite) metric is a Riemannian manifold, one with an indefinite metric is a pseudo-Riemannian manifold. Thus, isometries are studied in Riemannian geometry. Manifold: A local isometry from one (pseudo-)Riemannian manifold to another is a map which pulls back the metric tensor on the second manifold to the metric tensor on the first. When such a map is also a diffeomorphism, it is called an isometry (or isometric isomorphism), and it provides a notion of isomorphism ("sameness") in the category Rm of Riemannian manifolds. Manifold: Definition Let R=(M,g) and R′=(M′,g′) be two (pseudo-)Riemannian manifolds, and let f:R→R′ be a diffeomorphism. Then f is called an isometry (or isometric isomorphism) if g=f∗g′, where f∗g′ denotes the pullback of the rank (0, 2) metric tensor g′ by f. Equivalently, in terms of the pushforward f∗, we have that for any two vector fields v,w on M (i.e. sections of the tangent bundle TM), g(v,w)=g′(f∗v,f∗w). If f is a local diffeomorphism such that g=f∗g′, then f is called a local isometry. Manifold: Properties A collection of isometries typically forms a group, the isometry group. When the group is a continuous group, the infinitesimal generators of the group are the Killing vector fields. The Myers–Steenrod theorem states that every isometry between two connected Riemannian manifolds is smooth (differentiable). A second form of this theorem states that the isometry group of a Riemannian manifold is a Lie group. Manifold: Riemannian manifolds that have isometries defined at every point are called symmetric spaces. Generalizations: Given a positive real number ε, an ε-isometry or almost isometry (also called a Hausdorff approximation) is a map f:X→Y between metric spaces such that for x,x′∈X one has |dY(f(x),f(x′))−dX(x,x′)|<ε, and for any point y∈Y there exists a point x∈X with dY(y,f(x))<ε. That is, an ε-isometry preserves distances to within ε and leaves no element of the codomain further than ε away from the image of an element of the domain. Note that ε-isometries are not assumed to be continuous. The restricted isometry property characterizes nearly isometric matrices for sparse vectors. Generalizations: Quasi-isometry is yet another useful generalization.
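The norm-preservation condition for a linear isometry can be checked numerically. The following is only a minimal sketch (it assumes NumPy; the rotation matrix, sample points and tolerances are arbitrary illustrative choices): it verifies the algebraic condition A†A = I for a real rotation matrix and confirms that distances between a few sample points are preserved.

```python
import numpy as np

# A 2-D rotation is an orthogonal (hence unitary) real matrix, so it should
# act as a linear isometry for the Euclidean metric.
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Algebraic condition A†A = I (A is real, so the adjoint is just the transpose).
assert np.allclose(A.T @ A, np.eye(2))

# Distance preservation d(x, y) = d(Ax, Ay) on a few sample points.
rng = np.random.default_rng(0)
for _ in range(5):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    assert np.isclose(np.linalg.norm(x - y), np.linalg.norm(A @ x - A @ y))

print("A preserved all sampled distances")
```

The same check applied to a non-orthogonal matrix (for example a shear) would fail the assertions, which is one quick way to see that not every invertible linear map is an isometry.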
One may also define an element in an abstract unital C*-algebra to be an isometry: a∈A is an isometry if and only if a∗⋅a=1. As noted in the introduction, such an element is not necessarily a unitary element, because in general a left inverse need not be a right inverse. On a pseudo-Euclidean space, the term isometry means a linear bijection preserving magnitude. See also quadratic spaces.
**Cross-coupling reaction** Cross-coupling reaction: In organic chemistry, a cross-coupling reaction is a reaction in which two different fragments are joined. Cross-couplings are a subset of the more general coupling reactions. Cross-coupling reactions often require metal catalysts. One important reaction type is this: R−M + R'−X → R−R' + MX (R, R' = organic fragments, usually aryl; M = main group center such as Li or MgX; X = halide). These reactions are used to form carbon–carbon bonds but also carbon–heteroatom bonds. Cross-coupling reaction: Richard F. Heck, Ei-ichi Negishi, and Akira Suzuki were awarded the 2010 Nobel Prize in Chemistry for developing palladium-catalyzed coupling reactions. Mechanism: Many mechanisms exist, reflecting the myriad types of cross-couplings, including those that do not require metal catalysts. Often, however, cross-coupling refers to a metal-catalyzed reaction of a nucleophilic partner with an electrophilic partner. Mechanism: In such cases, the mechanism generally involves reductive elimination of R−R' from LnMR(R') (L = spectator ligand). This intermediate LnMR(R') is formed in a two-step process from a low-valence precursor LnM. The oxidative addition of an organic halide (RX) to LnM gives LnMR(X). Subsequently, the second partner undergoes transmetallation with a source of R'−. The final step is reductive elimination of the two coupling fragments to regenerate the catalyst and give the organic product. Unsaturated substrates, such as those with C(sp)−X and C(sp2)−X bonds, couple more easily, in part because they add readily to the catalyst. Mechanism: Catalysts Catalysts are often based on palladium, which is frequently selected for its high functional group tolerance. Organopalladium compounds are generally stable towards water and air. Palladium catalysts can be problematic for the pharmaceutical industry, which faces extensive regulation regarding heavy metals. Many pharmaceutical chemists attempt to use coupling reactions early in production to minimize metal traces in the product. Heterogeneous catalysts based on Pd are also well developed. Copper-based catalysts are also common, especially for couplings involving heteroatom–carbon bonds. Iron-, cobalt-, and nickel-based catalysts have been investigated. Mechanism: Leaving groups The leaving group X in the organic partner is usually a halide, although triflate, tosylate and other pseudohalides have been used. Chloride would be an ideal group due to the low cost of organochlorine compounds. Frequently, however, C–Cl bonds are too inert, and bromide or iodide leaving groups are required for acceptable rates. The main group metal in the organometallic partner is usually an electropositive element such as tin, zinc, silicon, or boron. Carbon–carbon cross-coupling: Many cross-couplings entail forming carbon–carbon bonds. Carbon–heteroatom coupling: Many cross-couplings entail forming carbon–heteroatom bonds (heteroatom = S, N, O). A popular method is the Buchwald–Hartwig reaction. Miscellaneous reactions: Palladium catalyzes the cross-coupling of aryl halides with fluorinated arenes. The process is unusual in that it involves C–H functionalisation at an electron-deficient arene. Applications: Cross-coupling reactions are important for the production of pharmaceuticals, examples being montelukast, eletriptan, naproxen, varenicline, and resveratrol, with Suzuki coupling being the most widely used. Some polymers and monomers are also prepared in this way.
**Home construction** Home construction: Home construction or residential construction is the process of constructing a house, apartment building, or similar residential building, generally referred to as a 'home' when giving consideration to the people who might now or someday reside there. Beginning with simple pre-historic shelters, home construction techniques have evolved to produce the vast multitude of living accommodations available today. Different levels of wealth and power have warranted various sizes, luxuries, and even defenses in a "home". Environmental considerations and cultural influences have created an immensely diverse collection of architectural styles, creating a wide array of possible structures for homes. Home construction: The cost of housing and access to it is often controlled by the modern realty trade, which frequently involves a degree of market speculation. The level of economic activity in the home-construction sector is reported as housing starts, though these are denominated in distinct habitation units rather than distinct construction efforts. 'Housing' is also the chosen term in the related concepts of housing tenure, affordable housing, and housing unit (aka dwelling). Four of the primary trades involved in home construction are carpenters, masons, electricians and plumbers, but there are many others as well. Home construction: Global access to homes is not consistent around the world, with many economies not providing adequate support for the right to housing. Sustainable Development Goal 11 includes a goal to create "Adequate, safe, and affordable housing and basic services and upgrade slums". Based on current and expected global population growth, UN-Habitat projects that 96,000 new dwelling units will need to be built each day to meet global demand. An important part of meeting this global demand is upgrading and retrofitting existing buildings to provide adequate housing. History: While homes may have originated in pre-history, there are many notable stages through which cultures pass to reach the current level of modernization. Countries and communities throughout the world currently exhibit very diverse concepts of housing, at many different stages of home development. Finding or buying parts: Two methods for constructing a home can be distinguished: the method in which architects simply assume free choice of materials and parts, and the method in which reclaimed materials are used, so that the house is a "work in progress" during its entire construction (meaning every single aspect of it is subject to change at any given time, depending on what materials are found). Finding or buying parts: The second method has been used throughout history, as materials have always been scarce. In Britain, there is comparatively little demand for innovative homes produced through radically different production methods, materials, and components. Over the years, a combination of trade protectionism and technical-product conservatism all round has also stymied the growth of indigenous producers of housing products such as aluminum cladding and curtain walling, wall tiles, advanced specialist ironmongery, and structural steel. Specifications: Civil site plans, architectural drawings and specifications comprise the document set needed to construct a new home. Specifications consist of a precise description of the materials to be used in construction.
Specifications are typically organized by each trade required to construct a home. Specifications: The modern family home has many more systems and facets of construction than one might initially believe. With sufficient study, an average person can understand everything there is to know about any given phase of home construction. The do-it-yourself (DIY) boom of the late twentieth century was due, in large part, to this fact. An international proliferation of kitset-home and prefabricated-home suppliers, often using components of Chinese origin, has further increased supply and made DIY home building more prevalent. Procedures: The process often starts with a planning stage in which plans are prepared by an architect and approved by the client and any regulatory authority. Then the site is cleared, foundations are laid and trenches for connection to services such as sewerage, water, and electricity are established. If the house is wooden-framed, a framework is constructed to support the boards, siding and roof. If the house is of brick construction, then courses of bricks are laid to construct the walls. Floors, beams and internal walls are constructed as the building develops, with plumbing and wiring for water and electricity being installed as appropriate. Once the main structure is complete, internal fitting with lights and other fitments is carried out, and the home is decorated and furnished with furniture, cupboards, carpets, curtains and other fittings. To avoid running out of money, homeowners sometimes build a house in phases. This phased approach allows them to prioritize essential components of the house, such as the foundation, structure, and basic utilities, while deferring less critical elements to later phases. It provides the flexibility to pause construction temporarily, if necessary, and resume when funds become available. Costs: The cost of building a house varies widely by country. According to data from the National Association of Realtors, the median cost of buying an existing single-family house in the United States is $274,600, whereas the average cost to build is $296,652. Several different factors can impact the cost of building a house, including the size of the dwelling, the location and availability of resources, the slope of the land, the quality of the fixtures and fittings, and the difficulty of finding skilled construction labor and building materials. Typical site costs include connections to services such as water, sewer, electricity, and gas; fences; retaining walls; site clearance (trees, roots, bushes); site surveys; and soil tests. Phases: Architectural design; building code; external construction (shallow foundation, light-frame construction, domestic water system, electrical wiring, building envelope, retaining walls); internal construction (ventilation, plumbing, air conditioning, electrical wiring, telephone wiring, Ethernet wiring, insulation, flooring, walls, ceilings, doors, windows); finishing construction (cabinetry, furnishings, interior decorating, painting, fixtures, appliances, toiletry). Home size: Data from the U.S. Census Bureau and the Bureau of Labor Statistics show that the average floor area of a home in the United States has steadily increased over the past one hundred years, with an estimated increase of 18.5 square feet in the average floor area per year. In 1920, the average floor area was 1,048 square feet (97.4 m2), which rose to 1,500 square feet (140 m2) by 1970 and today sits at around 2,261 square feet (210.1 m2).
Criticism: Some have criticized the housebuilding industry. Mass housebuilders can be risk averse, preferring cost-efficient building methods rather than adopting new technologies for improved building performance. Traditional vernacular building methods that suit local conditions and climates can be dispensed with in favour of a generic 'cookie cutter' housing type.
**Fore-edge painting** Fore-edge painting: A fore-edge painting is a scene painted on the edges of book pages. There are two basic forms, including paintings on fanned edges and closed edges. For the first type, the book's leaves must be fanned, exposing the pages' edges for the picture to become visible. For the second closed type, the image is visible only while the book is closed. Fore-edge painting: The fundamental difference between the two fore-edge styles is that a painting on the closed edge is painted directly on the book's surface (the fore-edge being the opposite of the spine side). In contrast, the fanned fore-edge style has watercolor applied to the top or bottom margin (recto or verso) of the page/leaf and not to the actual "fore"-edge itself. Fore-edge painting: To begin a fore-edge painting artists clamped the slightly fanned pages of a book between the boards of a special press that held them in place while keeping pressure off of the cover boards. While the paints used for fore-edge paintings are watercolors, artists needed to use them carefully. If water was first used on the pages the paint would bleed through to the inner pages or remove the gold of the fore-edges. Artists needed to slowly build up the colors of the fore-edges to avoid over saturating the paper. Watercolor was the only paint that could be used when doing these paintings because others (acrylics and oils) would crack and crumble with age. Variations: A single fore-edge painting includes a painting on only one side of the book page edges. Generally, gilt or marbling is applied by the bookbinder after the painting has dried to make the painting completely invisible until the pages are fanned. Variations: A double fore-edge painting has paintings on both sides of the page margin so that one painting is visible when the leaves are fanned one way, and the other is visible when the leaves are fanned the other way. It is estimated that only 2-3% of fore-edge paintings are doubled.A triple fore-edge painting has, in addition to paintings on the edges, a third painting applied directly to the edges (in lieu of gilt or marbling). Edge paintings that are continuous scenes wrapped around more than one edge are called a panoramic fore-edge painting. These are sometimes called a 'triple edge painting.'A split double painting has two different illustrations, one on either side of the book's center. When the book is laid open in the center, one illustration is seen on the edges of the first half of the book, and another illustration is on the edge of the second half of the book.There are even examples of rare variations that require the book's pages to be pinched or tented in a certain way to see the image. History: The earliest fore-edge paintings date as far back as the 10th century; these earliest paintings were symbolic designs, which may have been used for identification purposes, rather than decoration. Early English fore-edge paintings, believed to date to the 14th century, presented heraldic designs in gold and other colors. The first known example of a disappearing fore-edge painting (a painting not visible when the book is closed) dates back to 1649, while the earliest signed and dated fore-edge painting dates to 1653: a family coat of arms painted on a 1651 Bible.A legend regarding how hidden fore-edge painting on books first began states that a duchess and friend of Charles II of England would often borrow his books, sometimes forgetting to return them. 
As a result, the king commissioned the court painter, Sir Peter Lely, and the court bookbinder, Samuel Mearne, to devise a secret method to identify his books. They worked out a plan to paint a hidden image on the edges. When the king visited the duchess, he spotted a familiar-looking book on a shelf. As he was leaving, he took the book from the shelf to reclaim. The duchess protested, but the king fanned out the pages of the book to reveal the royal coat of arms.Research by Cyril Davenport (1848 - 1941), former Superintendent of Bookbinding at the British Museum gives some credibility to the legend above, naming Samuel Mearne the mastermind behind the "mysterious" art of fore-edge painting during his service as the bookbinder for King Charles II from 1660 to 1683. Further research by Davenport turned up instances of these early fore-edge paintings having been signed by "an artist of the name of Fletcher" (no first name given). Davenport reports a 1641 copy of Acts and Monuments bearing a fore-edge portrait of Charles II, signed by "Fletcher" as the earliest known example of the art.Further research, though, suggests that Davenport's focus on only works linked to royalty may have caused his oversight of prior fore-edge paintings outside the royal libraries. Carl J. Weber, fore-edge scholar, suggests that fore-edge painting may have been in practice 7–10 years prior to Mearne's "invention". He lists a fore-edge painted copy of The Holy Bible as the first known instance of a signed and dated fore-edge painting. The painting on the 1651 bible is signed and dated by the artists: "S.T. Lewis Fecit Anno Dom 1653." Although the signature "S.T". was originally thought to be the initials of one man, Weber surmises that the work was produced by two (now) known fore-edge artists, brothers Stephen and Thomas Lewis.Around 1750, the subject matter of fore-edge paintings changed from simply decorative or heraldic designs to landscapes, portraits, and religious scenes usually painted in full color to reflect the coming trend in popularity of the Picturesque. Modern fore-edge painted scenes have many more variations as they can depict numerous subjects not found on earlier specimens. These include erotic scenes, or they might involve scenes from novels (like Jules Verne, Sherlock Holmes or Dickens, etc.), as more popular literature featured fore-edge paintings. In many cases, the chosen scene will depict a subject related to the book, but in other cases, it did not. In one instance, the same New Brunswick landscape was applied to both a Bible and a collection of poetry and plays. The artist, bookseller, or owner decides on the scene; thus, the variety is wide. History: The technique was popularized in the 18th century by John Brindley (1732 - 1756), publisher, and bookbinder to the prince of Wales. and Edwards of Halifax, a distinguished family of bookbinders and booksellers. The Edwards were responsible for popularizing the technique in London, and contemporaneous booksellers often mimicked their designs.Caroline Billin Curry was a prominent fore-edge painter in the late 19th and early 20th centuries. The number of her works are largely unknown, however the current estimate is around 131. She numbered her works in the flyleaf of the book painted. Because of this, we know that her fore-edge painting of "The White House in 1840" on a copy of Washington Irving's Knickerbocker History of New York was her eleventh work. 
Also of mention is John Beer, of close geographical proximity to Miss Currie, both residing in Great Britain.Not commonly mentioned is the presence of fore-edge paintings in China. Carl Weber, Shen Jin, and Yao Boyue all provide testimony that Chinese painters were influenced by Western fore-edge paintings and began practicing the art overseas. The evidence does not make a clear historical case, however it is an interesting factoid to note.The majority of extant examples of fore-edge painting date to the late 19th and early 20th centuries on reproductions of books originally published in the early 19th century. Contemporary fore-edge paintings: Fore-edge painting as a craft is deemed critically endangered in the contemporary era. The Heritage Crafts Association (HCA) only lists four “craftspeople currently known” as working in this medium.The remaining artists that practice fore-edge painting are amateurs and leisure makers numbering fewer than sixty. According to the HCA, there are currently no formal trainees in the art form. Contemporary fore-edge paintings: Martin Frost in Worthing, United Kingdom, is currently the only professional full-time fore-edge artist. He has created over 3,500 fore-edge paintings since he started his career in the 1970s. In 2019 he was presented with the MBE in the New Year Honours list by Queen Elizabeth II. According to a profile he did with the BBC, his training was in theater where he painted backdrops for plays until he was introduced to this craft by a friend who happened to be a fore-edge painter.Brianna Sprague is a hidden fore edge painter based in the US. She received her BFA in fine art painting in 2012, specializing in oil painting and watercolor. She began creating hidden fore edge paintings in 2020 and began selling custom commissions in 2021. She has created over 200 paintings at this time. Focusing on modern books and classics, she has gained attention on the social media apps of TikTok, Reddit, and Instagram. Contemporary fore-edge paintings: Dispelling the mystique of fore-edge paintings Artist Christopher Folwell, an artist based in the United Kingdom, showed how to create fore-edge paintings step by step in an article published in My Modern Met. Collections: College of William and Mary's Earl Gregg Swem Library holds a collection of 709 fore-edge paintings in the Ralph H. Wark Collection, the largest collection of fore-edge painted books in America. Loyola-Notre Dame Library, the library shared by Loyola University Maryland and Notre Dame of Maryland University, has a collection of more than 300 fore-edge painted volumes. Boston Public Library has a collection of 258 fore-edge paintings, one of the larger collections in the United States, and many examples are displayed online. Brandeis University holds 22 fore-edge paintings in their special collection library. Estelle Doheny Collection housed in the Edward Laurence Doheny Memorial Library at St. John's Seminary, Camarillo, California, is described as "roughly twice as large" as the collection at the Boston Public Library. University of New Mexico's Center for Southwest Research & Special Collections holds 102 fore-edge paintings from the collection of Lucia von Borosini Batten of Albuquerque. Many were formerly owned by Estelle Doheny, who married her husband, oil baron Edward L. Doheny, in New Mexico Territory in 1900. Three paintings by Miss C. B. Currie are available. Syracuse University's Special Collections Research Center has the Poushter Collection, with more than 90 volumes. 
Louisiana State University Library holds at least 37 fore-edge paintings in its Rare Book Collection. Several are probably by the artist identified by Jeff Weber as the "American City View Painter". Clark University holds the Robert H. Goddard Library's Rare Book Collection, which includes 17 books with fore-edge paintings. Mudd Library at Lawrence University has a varied collection of books with fore-edge art that were donated by two alumnae, Dorothy Ross Pain Lawrence class of 1918, and Bernice Davis Fligman Milwaukee-Downer class of 1922. Hofstra University has in their collection a few fore-edge books, some of which are Les Psaumes de David and Outlines from the Figures and Compositions upon the Greek, Roman and Etruscan Vases of the Late Sir William Hamilton. George Peabody Library in Baltimore, Maryland also contains a collection of books with fore-edge paintings within its Dorothy McIlvain Scott Collection. Collections: The National Library of the Netherlands has a few fore-edge books, e.g. KW 1740 F 1 (a pendrawing and aquarelle in shades of blue, green, yellow and red, depicting a lake surrounded by mountains and on the righthand side a castle with docking place and boats) and KW 1740 F 2 (a pendrawing and aquarelle in shades of blue, green and red, depicting the Tower of London surrounded by houses and an meadow with walking people), 1786 B 24 or 1773 D 25.
**Raffinose** Raffinose: Raffinose is a trisaccharide composed of galactose, glucose, and fructose. It can be found in beans, cabbage, brussels sprouts, broccoli, asparagus, other vegetables, and whole grains. Raffinose can be hydrolyzed to D-galactose and sucrose by the enzyme α-galactosidase (α-GAL), an enzyme which, in the lumen of the human digestive tract, is produced only by bacteria in the large intestine. α-GAL also hydrolyzes other α-galactosides such as stachyose, verbascose, and galactinol, if present. The enzyme does not cleave β-linked galactose, as in lactose. Chemical properties: The raffinose family of oligosaccharides (RFOs) are alpha-galactosyl derivatives of sucrose, and the most common are the trisaccharide raffinose, the tetrasaccharide stachyose, and the pentasaccharide verbascose. RFOs are almost ubiquitous in the plant kingdom, being found in a large variety of seeds from many different families, and they rank second only to sucrose in abundance as soluble carbohydrates. Raffinose typically crystallises as the pentahydrate, a white crystalline powder. It is odorless and has a sweetness approximately 10% that of sucrose. Biochemical properties: Energy source It is non-digestible by humans and other monogastric animals (pigs and poultry), which do not possess the α-GAL enzyme needed to break down RFOs. These oligosaccharides pass undigested through the stomach and small intestine. In the large intestine, they are fermented by bacteria that do possess the α-GAL enzyme, producing short-chain fatty acids (SCFAs: acetic, propionic, and butyric acids) as well as the flatulence commonly associated with eating beans and other vegetables. These SCFAs have recently been found to impart a number of health benefits. α-GAL is present in digestive aids such as the product Beano. Disease relevance: Research has shown that the differential ability of strains of the bacterium Streptococcus pneumoniae to utilize raffinose impacts their ability to cause disease and the nature of the disease. Uses: Procedures concerning cryopreservation have used raffinose to provide hypertonicity for cell desiccation prior to freezing. Either raffinose or sucrose is used as a base substance for sucralose. Raffinose is also used in skin moisturizers and smoothers, prebiotics (it promotes growth of lactobacilli and bifidobacteria), and food or drink additives.
**Indoor cricket** Indoor cricket: Indoor cricket is a variant of cricket and shares many of its basic concepts. The game is most often played between two teams, each consisting of six or eight players. Several versions of the game have been in existence since the late 1960s, whilst the game in its present form began to take shape in the late 1970s and early 1980s. The codified sport of indoor cricket is not to be confused with conventional cricket played indoors, or with other modified versions of cricket played indoors (see other forms of indoor cricket below). The game of cricket: In terms of the concept of the game, indoor cricket is similar to cricket. Like its outdoor cousin, indoor cricket involves two batsmen, a bowler and a team of fielders. The bowler bowls the ball to the batsmen, who must score runs. The team with the highest score at the end of the match wins. Despite these basic similarities, the game itself differs significantly from its traditional counterpart in several ways, most notably on the field of play and the means by which runs are obtained. International rules overview: Safety gear As a minimum, every male player, including the fielders, has to wear an abdominal guard (box), with the person bowling the ball as an exception. International rules overview: The batsmen are required to use batting gloves, primarily to prevent the bat from slipping out of the hands. Indoor batting gloves are readily available at cricket stores; however, some indoor cricket facilities also provide basic non-slip gloves that can be shared during the game. Some players prefer to use hard-ball batting gloves to protect their hands from serious injury, as the indoor cricket ball can cause serious damage. International rules overview: One optional piece of safety gear is goggles, to prevent any serious injury to the eyes. As the game is usually very fast and the play rigorous, it is a demanding cardiovascular activity. It is recommended to have a medical checkup before taking up indoor cricket, especially at an advanced age and/or with any medical conditions. The fielder has right of way when a shot is played, so both batsman and fielder have to be watchful to avoid collisions. Indoor cricket causes more sporting injuries than casual outdoor cricket, due to the proximity of the ball and fielders. Therefore, sports/team insurance is important. Some indoor sports facilities provide this insurance as part of their indoor tournaments. International rules overview: Playing arena The length of an indoor cricket pitch is the same as a conventional cricket pitch, and it has 3 stumps at each end, but there the similarities end. The arena is completely enclosed by tight netting, a few metres from each side and end of the pitch. The playing surface is normally artificial grass matting. Whilst the pitch is the same length, however, the batsmen do not have to run the entire length. The striker's crease is in the regulation place in front of the stumps, but the non-striker's crease is only halfway down the pitch. International rules overview: Players Indoor cricket is played between 2 teams of 8 players. Each player must bowl 2 eight-ball overs and bat in a partnership for 4 overs. A faster version of the game exists, where each side is reduced to 6 players and each innings lasts 12 overs instead of 16. International rules overview: Equipment The stumps used in indoor cricket are not, for obvious reasons, stuck in the ground.
Instead, they are collapsible spring-loaded stumps that immediately spring back to the standing position when knocked over. The ball used in indoor cricket is a modified cricket ball, with a softer centre. The ball also differs in that it is yellow to make it more obvious to see indoors against varied backgrounds. Both traditional outdoor cricket bats or more specialised lighter-weight indoor cricket bats may be used. The gloves are typically lightweight cotton with no protective padding on the outside. The palm-side of the gloves usually have embedded rubber dots to aid grip. International rules overview: Scoring Scoring in indoor cricket is divided into 4 types: physical runs, bonus runs, the usual extras/sundries, and penalty-minus runs. Physical runs are scored by both batsmen completing a run from one crease to the other. Bonus runs are scored when the ball hits a net. Bonus scores for particular parts of the nets follow: Zone A (front net – behind the keeper): 0 runs Zone B (side nets between the striker's end and halfway down the pitch): 1 run Zone C (side nets between halfway and the bowler's end): 2 runs Zone D (back net – behind the bowler): 4 or 6 runs depending on how the ball hit the back net. International rules overview: On the bounce: 4 runs On the full: 6 runs Zone B or C onto Zone D: 3 runsNB: For bonus runs to be scored, at least one physical run must be scored. The bonus runs are then added to the physical runs. For example, a batsman strikes the ball, hits the back net on the full (6), and he/she makes one physical run, for a total of 7 runs. International rules overview: Extras/sundries are the same as those in formal cricket and consist of wides, no-balls etcetera. Penalty-minus runs are the set number of runs deducted from a team's score for each dismissal. International rules overview: Dismissals A batsman can be dismissed in the same ways they can be in conventional cricket – with variations in the case of LBW and mankad (see below) – and with the exception of timed out. When a batsman gets dismissed, however, five runs are deducted from their total and they continue to bat. Batsmen bat in pairs for 4 overs at a time, irrespective of whether they are dismissed. A player can also be "caught" by a ball rebounding off a net, except off a "six", as long as it has not previously touched the ground. This negates any physical or bonus runs that might have been awarded. International rules overview: A method of dismissal in indoor cricket that is far more prevalent than its outdoor counterpart is the mankad. A mankad is given out if the bowler completes their bowling action without releasing the ball, breaks the stumps at their end without letting go of the ball and the non-striker is out of their ground. Whilst lbw is a valid form of dismissal in indoor cricket, it is a far rarer occurrence in indoor than it is in outdoor cricket. A batsman can only be dismissed lbw if he does not offer a shot and the umpire is satisfied that the ball would then have hit the stumps. Officials Indoor cricket is officiated by one umpire who is situated outside of the playing area at the strike batsmen's end of the court. The umpire sits or stands on a raised platform that is usually 3 metres above ground level. Secondary officials (such as scorers or video umpires) have sometimes been utilised in national or international competition. International rules overview: Result The team with the higher score at the conclusion of each innings is declared the winner of the match. 
The second innings continues for a full 16 overs even if the batting side passes the first innings total, due to the possibility of a side finishing behind a total even after they have surpassed it (see dismissals above). In most cases indoor cricket is played according to a skins system, where the batting partnerships from each innings are compared against one another and the higher of the two is deemed to have won the skin. For example, the second batting partnership in the first innings might score 5 runs whilst the second partnership in the second innings scores 10 – the latter would be deemed to have won the skin. If the totals are tied, the team that has won more of the four available skins is often awarded the win. 3 Dot balls Rule: Most indoor cricket centres employ a dot ball rule, where the scoreboard has to change at least every third ball. This means that if the batsmen play 2 consecutive balls without a change in the scorecard (this applies across multiple batsmen and multiple overs), the scorecard has to change on the 3rd ball. It can be changed by a batsman scoring a run or by extra runs; if no run is scored on the 3rd consecutive ball, the batsman is declared out and 5 runs are deducted from the score, hence changing the scorecard. Jackpot ball Rule: Some indoor leagues have the first or last ball of a 'skin' declared a jackpot ball. This means any runs scored on the jackpot ball are doubled, e.g. if a '7' is hit, it is counted as 14 runs, and if a wicket is lost, it is counted as minus 10 runs. Types of match and competition: Indoor cricket is typically played either as a six- or eight-a-side match, with six- or eight-ball overs respectively. The game can be played in men's, women's and mixed competitions. Permutations of the game include bonus overs (where the bonus score is doubled, dismissals result in seven (7) runs (cf. five (5) runs) being deducted from the team score, and fielding restrictions are removed). Test Match Test indoor cricket is the highest standard of indoor cricket and is played between members of the World Indoor Cricket Federation. The first international Test matches were played between Australia and New Zealand in 1985. Those sides have since been joined on the international stage by England (1990), South Africa (1991), Zimbabwe (1998), Namibia (1998), India (2000), Pakistan (2000), Sri Lanka (2002), United Arab Emirates (2004), Wales (2007), France (2007), Guernsey (2007), Singapore (2013), and Malaysia (2017). Types of match and competition: Test matches are usually played in a group of matches called a "series" featuring two to four nations. These series can consist of three to five matches and, where more than two nations are involved, may also include a finals series. Matches played at World Cup events are also considered Test matches. Types of match and competition: International competition is also organised for junior and masters age groups. The matches are considered Test matches within their respective divisions. Since 1985, most Test series between Australia and New Zealand have been played for the Trans Tasman Trophy. Similarly, since 1990, Test series between Australia and England have been played for a trophy known as The Ashes, a name borrowed from the trophy contested by the same nations in outdoor cricket. Types of match and competition: National championships Each member nation of the WICF usually holds its own national titles.
In Australia, states and territories compete in the Australian Indoor Cricket Championships (as well as the now defunct National League). The national competition in New Zealand is referred to as the Tri Series and is contested by three provinces – Northern, Central and Southern. National championships contested elsewhere in the world include South Africa's National Championship and England's National League. Types of match and competition: Minor competition In addition to social competition played throughout the world, there are several state leagues and competitions within each nation. Various states, provinces or geographical areas organise their own state championships (referred to in Australia as "Superleague" – not to be confused with the ill-fated Rugby League competition). Various districts, centres or arenas take part in these competitions, including the Rec Club Miranda, which is one of Sydney's oldest indoor cricket centres. Types of match and competition: World Cup The Indoor Cricket World Cup was first held in Birmingham, England, in 1995 and has run every two or three years since. The event usually also features age-group, masters' and women's competitions. The last World Cup was held in Wellington (NZ) in October 2014. Australia came first in the boys', girls', women's and men's competitions. Australia has won all 9 Open Men's World Cup titles (since 1995) and all 8 Open World Cup titles (since 1998). Origin and development of indoor cricket: The first significant example of organised indoor cricket took place, somewhat unusually, in Germany. A tournament was held under the auspices of the Husum Cricket Club in a hall in Flensburg in the winter of 1968–69. It was not until the 1970s that the game began to take shape as a codified sport. Conceived as a way of keeping cricketers involved during the winter months, various six-a-side leagues were formed throughout England in the first half of the decade, eventually leading to the first national competition held in March 1976 at the Sobell Center in Islington. This distinct form of indoor cricket is still played today. Origin and development of indoor cricket: Despite the early popularity of the sport in England, a different version of indoor cricket, developed by two separate parties in Perth, Western Australia, in the late 1970s, evolved into the sport known as indoor cricket today. Against the backdrop of the upheaval in the conventional game caused by World Series Cricket, torrential rain and a desire to keep their charges active led cricket school administrators Dennis Lillee and Graeme Monaghan to set up netted arenas indoors. Concurrently, entrepreneurs Paul Hanna and Michael Jones began creating an eight-a-side game that eventually led to the nationwide franchise known as Indoor Cricket Arenas (ICA). It was not long before hundreds of ICA-branded stadiums were set up throughout Australia, leading to the first national championships held in 1984, at a time when over 200,000 people were estimated to be participating in the sport. The sport underwent several organisational changes, most notably in Australia and in South Africa (where competing organisations fought for control of the sport), but the game has changed little since that time and has risen in popularity in several nations. Under the auspices of the World Indoor Cricket Federation, the sport has reached a point where it is played according to the same standard rules in major competitions throughout the world.
International structure of indoor cricket: The World Indoor Cricket Federation is the international governing body of indoor cricket. It was founded prior to the 1995 World Cup by representatives from Australia, New Zealand, South Africa and England. Nations may be either full members or associate members of the WICF. Each member nation has its own national body, which regulates matches played in its country. The national bodies are responsible for selecting representatives for the national side and organising home and away internationals for that side. Other forms of indoor cricket: Conventional cricket indoors Conventional cricket matches have taken place at covered venues (usually featuring a retractable roof), such as Docklands Stadium in Melbourne, Australia, and can thus be regarded as cricket being played indoors. Such matches are relatively infrequent and come with added complications in the event that the ball makes contact with the roof while in play. Other forms of indoor cricket: UK variant A version of indoor cricket (bearing greater resemblance to conventional cricket) is played exclusively in the United Kingdom. This variant sees the six players on each team use the same playing and protective equipment found in outdoor cricket, and it is played in indoor facilities that differ greatly from those of the international form of indoor cricket. Despite lacking international competition, this form of indoor cricket enjoys a strong following in the UK and, like its international counterpart, enjoys the support of the ECB.
**Incontinentia pigmenti achromians** Incontinentia pigmenti achromians: Incontinentia pigmenti achromians (also known as "hypomelanosis of Ito") is a cutaneous condition characterized by various patterns of bilateral or unilateral hypopigmentation following the lines of Blaschko. Though the consistency of the skin findings has led to the term "hypomelanosis of Ito", it actually refers to a group of disorders with various genetic causes, including polyploidies and aneuploidies. Based upon the specifics of the genetic defect, the skin findings can be accompanied by a great range of systemic findings. These include central nervous system, ocular, and musculoskeletal defects. Nonetheless, the vast majority of cases are limited to the skin. As opposed to incontinentia pigmenti, hypomelanosis of Ito affects both genders equally. This disorder was first described by Japanese dermatologist Minoru Ito in 1952.
**Indeterminacy in computation** Indeterminacy in computation: Indeterminacy is a property of formal systems that evolve in time (often conceptualized as a computation), in which complete information about the internal state of the system at some point in time admits multiple future trajectories. In simpler terms, if such a system is returned to the same initial condition, or two identical copies of the system are started at the same time, they will not necessarily produce the same behaviour, as some element of chance is able to enter the system from outside its formal specification. In some cases the indeterminacy arises from the laws of physics, in other cases it leaks in from the abstract model, and sometimes the model includes an explicit source of indeterminacy, as with deliberately randomized algorithms, for the benefits that this provides. Disambiguation: Indeterminacy in computation may refer to: quantum indeterminacy in quantum computers; nondeterministic finite automata; nondeterministic algorithms. In concurrency: indeterminacy in concurrent computation; unbounded nondeterminism.
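As a toy illustration of the deliberately randomized algorithms mentioned above, the sketch below (plain Python; the random-walk routine and its names are invented for illustration) starts two copies of the same procedure from identical initial conditions. Unless a seed is fixed, chance enters from outside the procedure's deterministic rule, so the two trajectories may differ.

```python
import random

def randomized_walk(start: int, steps: int, seed=None) -> list[int]:
    """A deterministic update rule plus an injected source of chance (the RNG)."""
    rng = random.Random(seed)
    state, trajectory = start, [start]
    for _ in range(steps):
        state += rng.choice([-1, +1])   # the nondeterministic choice
        trajectory.append(state)
    return trajectory

# Two identical copies started from the same initial condition...
print(randomized_walk(0, 5))   # e.g. [0, 1, 0, 1, 2, 3]
print(randomized_walk(0, 5))   # ...may follow a different trajectory.

# Fixing the seed removes the external element of chance:
assert randomized_walk(0, 5, seed=42) == randomized_walk(0, 5, seed=42)
```

Seeding is the usual way such indeterminacy is tamed for testing: with the seed supplied, the formal specification once again fixes a single future trajectory.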
**Earth3D** Earth3D: Earth3D was developed as part of a diploma thesis by Dominique Andre Gunia at Braunschweig University of Technology to display a virtual globe of the Earth. It was developed before Google bought Keyhole, Inc. and turned its product into Google Earth. Earth3D downloads its data (satellite imagery and height data) from a server while the user navigates around; the data itself is stored in a quadtree. It uses data from NASA, the USGS, the CIA and the city of Osnabrück. Earth3D: One of the strengths of Earth3D is its capacity to show meteorological phenomena, such as low-pressure areas and anticyclones, in near-real time. The original version of Earth3D was developed using Trolltech's Qt framework. Later a version built with Java and JOGL was developed, but demand for a Java-based version was very low. This may be because NASA's WorldWind also has an open-source Java version, so most people wanted to use a C++-based globe in their applications. That was the reason a minimalized version, Earth3dlib, was developed; it contains only the functions necessary to display the earth itself and to add one's own visualizations to it. Earth3D: All three projects can be retrieved from SourceForge's CVS (C++) or Subversion (Java) repositories.
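Earth3D's own tile format is not documented here; the article only states that imagery and height data are stored in a quadtree and fetched from a server as the user navigates. As a hedged, generic sketch of that idea (plain Python, with invented class and field names, not Earth3D code), the example below subdivides the longitude/latitude plane so that progressively finer tiles can be located for the region in view.

```python
from dataclasses import dataclass, field

@dataclass
class QuadNode:
    """One tile covering a lon/lat rectangle; four children refine it."""
    west: float
    south: float
    east: float
    north: float
    level: int = 0
    children: list["QuadNode"] = field(default_factory=list)

    def subdivide(self) -> None:
        cx, cy = (self.west + self.east) / 2, (self.south + self.north) / 2
        self.children = [
            QuadNode(self.west, cy, cx, self.north, self.level + 1),   # NW
            QuadNode(cx, cy, self.east, self.north, self.level + 1),   # NE
            QuadNode(self.west, self.south, cx, cy, self.level + 1),   # SW
            QuadNode(cx, self.south, self.east, cy, self.level + 1),   # SE
        ]

    def tile_for(self, lon: float, lat: float, max_level: int) -> "QuadNode":
        """Descend to the tile containing (lon, lat) at the requested detail."""
        if self.level == max_level:
            return self
        if not self.children:
            self.subdivide()  # a viewer would request this tile's data here
        for child in self.children:
            if child.west <= lon <= child.east and child.south <= lat <= child.north:
                return child.tile_for(lon, lat, max_level)
        return self

root = QuadNode(-180.0, -90.0, 180.0, 90.0)
print(root.tile_for(8.05, 52.27, max_level=3).level)  # -> 3
```

The appeal of a quadtree for a streaming globe viewer is that only the tiles along the path to the current viewpoint need to be downloaded, at a resolution that doubles with each level.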
**Antinovel** Antinovel: An antinovel is any experimental work of fiction that avoids the familiar conventions of the novel, and instead establishes its own conventions. Origin of the term: The term ("anti-roman" in French) was brought into modern literary discourse by the French philosopher and critic Jean-Paul Sartre in his introduction to Nathalie Sarraute's 1948 work Portrait d’un inconnu (Portrait of a Man Unknown). However the term "anti-roman" (anti-novel) had been used by Charles Sorel in 1633 to describe the parodic nature of his prose fiction Le Berger extravagant. Characteristics: The antinovel usually fragments and distorts the experience of its characters, presenting events outside of chronological order and attempting to disrupt the idea of characters with unified and stable personalities. Some principal features of antinovels include lack of obvious plot, minimal development of character, variations in time sequence, experiments with vocabulary and syntax, and alternative endings and beginnings. Extreme features may include detachable or blank pages, drawings, and hieroglyphs. History: Although the term is most commonly applied to the French nouveau roman of the 1940s, 1950s and 1960s, similar traits can be found much further back in literary history. One example is Laurence Sterne's Tristram Shandy, a seemingly autobiographical novel that barely makes it as far as the title character's birth thanks to numerous digressions and a rejection of linear chronology. History: Aron Kibédi Varga has suggested that the novel in fact began as an antinovel, since the first novels such as Don Quixote subverted their form even as they were constructing the form of the novel.It was however in the postwar decades that the term first came into critical and general prominence. To the middlebrow like C. P. Snow, the antinovel appeared as "an expression of that nihilism that fills the vacuum created by the withdrawal of positive directives for living", and as an ignoble scene in which "the characters buzz about sluggishly like winter flies". More technically however, its distinctive feature was the anti-mimetic and self-reflective drawing of attention to its own fictionality, a parodic anti-realistic element. Paradoxically, such anti-conventionalism would eventually come to form a distinctive convention of its own.
**Graham cracker crust** Graham cracker crust: Graham cracker crust is a style of pie crust made from crushed graham crackers. Graham crackers are a sweet American cracker made from unbleached, whole wheat graham flour. The crust is usually flavored and stiffened with butter or vegetable oil and sometimes sugar. Graham cracker crust is a very common type of crust for cheesecakes and cream pies in America. Graham cracker pie crusts are available as a mass-produced product in the United States, and typically consist of the prepared crust pressed into a disposable aluminum pie pan.Variations use crushed cookies or Nilla wafers as substitutes for the graham crackers. Origin: The invention of the graham cracker crust is credited to Monroe Boston Strause, who was known as the Pie King and also invented the chiffon pie.
**3D Virtual Creature Evolution** 3D Virtual Creature Evolution: 3D Virtual Creature Evolution, abbreviated to 3DVCE, is an artificial evolution simulation program created by Lee Graham. Its purpose is to visualize and research common themes in body plans and strategies to achieve a fitness function of the artificial organisms generated and maintained by the system in their given environment. The program was inspired by Karl Sims’ 1994 artificial evolution program, Evolved Virtual Creatures. The program is run through volunteers who download the program from the home website and return information from completed simulations. It is currently available on Windows and in some cases Linux. Settings: 3DVCE uses evolutionary algorithms to simulate evolution. The user sets the body plan restrictions (maximum number of segment types, branching segments’ length and depth limits, and size limits) and whether fitness score is scaled in relation to size. Limb interpenetration is also an option. Reproduction / population settings include the size of each population and their run time (how long each individual has to attain a fitness score), percentage of individuals who get to reproduce (tournament size), what percentage sexually or asexually reproduce, and selection type is then determined. Crossover rate determines what percentage of an individual is created via crossover of parents and mutation. Mutation rate in body and brain is then determined. Specific mathematical operations and values can be attributed to the creature’s brain as well.Fitness function is then determined. Artificial organisms’ fitness score is determined by how well they achieve their fitness goal within their evaluation time. Fitness functions include distance traveled, maximum height, average height, “TOG” (determined by amount of time creature is in contact with ground), and “Sphere” (determined by creature’s ability to catch and hold spheres). These goals are not individualized and can be set to specific strengths (from zero, as not having an influence on fitness, to one, or having maximum influence) to determine the fitness goal. What generations the fitness function applies to can also be set. The environment, or “Terrain”, is then determined. This includes a flat plain, bumpy terrain (in which a hill is generated around creature that constantly inclines as distance is traveled from the creature’s spawning point), water (a low gravity simulator, non-functional), and “spheres” (spheres are generated above the creature to catch). Simulation: Everything in the simulation is viewed from a first person viewpoint. After settings are determined, the first generation is generated from randomly created individuals. All creatures appear at the same spawning point and are made of segments or rectangular prisms connected to others at joints. Colors are assigned to segment types randomly. Segment type is determined by the size and joints a segment has. Colors indicate nothing else than that. These first generation creatures move randomly, with no influence from the fitness goal. Creatures with the largest fitness value reproduce and the following generation is based on this reproduction. Eventually, patterns in the population form and fitness increases even further. Fitness function can be changed during the simulation to simulate environmental changes and individual runs can be duplicated to simulate different lineages or speciation.3DVCE is not only for evolutionary research. 
Objects can also be spawned for graphics and simulated physics tests. This includes pre-installed blocks, spheres, grenades, and structures that can either be thrown from camera or generated at a spawning point. Artificial gravity can also be manipulated. Random and archived creatures can also be re-spawned to manipulate or view. Lee Graham has also included a TARDIS in the simulation, which when moved into can teleport the camera back to the original spawning point. Creatures: Convergent evolution occurs often in 3DVCE, as similar structures and behaviors of the creatures form to maximize fitness. Two-Armed Jumpers consist of a small core and two large symmetrical "wings", and evolve in response to jumping and distance requirement. These creatures propel themselves forward using their limbs by jiggling or flapping them. Jumping Ribbons and Springs consist of a chain of segments and evolve in response to max height and distance. They contract or curl up and stretch out their body to leap into the air. Rolling Ribbons and Springs are very similar to the previous group, except they are often larger and segments are more repetitive. They evolve in response to average height, distance, and TOG (touching the ground). They roll on the ground to propel their head into the air to attain height while still touching the ground. Some simply roll in a horizontal fashion like a cylinder. Single-Joint Powered Creatures have more erratic structures and evolve in response to distance on bumpy terrain. They have one large segment in back which kicks the creature forward, but being poorly balanced they use the rest of their bodies to get back up after stumbling or prevent stumbles altogether. Creatures: Many other types of creatures also form that do not necessarily fit the four main groups previously described by Lee Graham. Tall stick-like creatures also evolve to attain maximum height. Some users have been able to fix the water simulator to evolve creatures that swim. Many other creatures evolve that share traits of multiple groups. There are currently over 220 creatures archived on the main website, which can be found on YouTube by visiting the "Creature Mann" channel.
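The generational loop described above (tournament selection, sexual or asexual reproduction controlled by a crossover rate, per-gene mutation, and a scalar fitness function) can be sketched in a few lines of Python. This is a generic toy example, not 3DVCE's code; the flat list of numbers stands in for a creature's body and brain parameters, and all names and constants are illustrative.

```python
import random

GENOME_LEN, POP_SIZE, TOURNAMENT_SIZE, GENERATIONS = 8, 30, 4, 50
CROSSOVER_RATE, MUTATION_RATE = 0.7, 0.1

def fitness(genome):
    """Stand-in scalar fitness, e.g. 'distance travelled'."""
    return sum(genome)

def tournament_select(population):
    """Return the fittest individual of a randomly drawn tournament."""
    return max(random.sample(population, TOURNAMENT_SIZE), key=fitness)

def make_child(population):
    """Sexual or asexual reproduction followed by per-gene mutation."""
    parent = tournament_select(population)
    if random.random() < CROSSOVER_RATE:        # sexual: uniform crossover
        other = tournament_select(population)
        child = [a if random.random() < 0.5 else b
                 for a, b in zip(parent, other)]
    else:                                       # asexual: clone one parent
        child = list(parent)
    return [g + random.gauss(0, 0.5) if random.random() < MUTATION_RATE else g
            for g in child]

population = [[random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [make_child(population) for _ in range(POP_SIZE)]

print("best fitness:", max(fitness(g) for g in population))
```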
**Knit picker** Knit picker: A knit picker is a fabric tool used to remove snags from clothes and fabrics. It has a small hook that grabs the snag, and can then be pulled through to the interior side to remove it. The hook is very fine to ensure it gets under the snag.
**Shkarofsky function** Shkarofsky function: The Shkarofsky function is a special function used in plasma physics to describe the propagation of microwaves in hot plasmas. It is named after the Canadian physicist Issie Shkarofsky (1931–2018), who first identified the function in 1966. N.M. Temme and S.S. Sazhin later developed this idea further to give what they called the generalized Shkarofsky function.
**Parallactic angle** Parallactic angle: In spherical astronomy, the parallactic angle is the angle between the great circle through a celestial object and the zenith, and the hour circle of the object. It is usually denoted q. In the triangle zenith—object—celestial pole, the parallactic angle will be the position angle of the zenith at the celestial object. Despite its name, this angle is unrelated to parallax. The parallactic angle is zero or 180° when the object crosses the meridian. Uses: For ground-based observatories, the Earth's atmosphere acts like a prism which disperses light of different wavelengths such that a star generates a rainbow along the direction that points to the zenith. So given an astronomical picture with a coordinate system with a known direction to the celestial pole, the parallactic angle represents the direction of that prismatic effect relative to that reference direction. Knowledge of that angle is needed to align atmospheric dispersion correctors with the beam axis of the telescope. Depending on the type of mount of the telescope, this angle may also affect the orientation of the celestial object's disk as seen in a telescope. With an equatorial mount, the cardinal points of the celestial object's disk are aligned with the vertical and horizontal direction of the view in the telescope. With an altazimuth mount, those directions are rotated by the amount of the parallactic angle. The cardinal points referred to here are the points on the limb located such that a line from the center of the disk through them will point to one of the celestial poles or 90° away from them; these are not the cardinal points defined by the object's axis of rotation. Uses: The orientation of the disk of the Moon, as related to the horizon, changes throughout its diurnal motion and the parallactic angle changes equivalently. This is also the case with other celestial objects. Uses: In an ephemeris, the position angle of the midpoint of the bright limb of the Moon or planets, and the position angles of their North poles may be tabulated. If this angle is measured from the North point on the limb, it can be converted to an angle measured from the zenith point (the vertex) as seen by an observer by subtracting the parallactic angle. The position angle of the bright limb is directly related to that of the subsolar point. Derivation: The vector algebra to derive the standard formula is equivalent to the calculation of the long derivation for the compass course. The sign of the angle is basically kept, north over east in both cases, but as astronomers look at stars from the inside of the celestial sphere, the definition uses the convention that q is the angle in an image that turns the direction to the NCP counterclockwise into the direction of the zenith. In the equatorial system of right ascension α and declination δ the star is at s = (cos δ cos α, cos δ sin α, sin δ). In the same coordinate system the zenith is found by inserting a = π/2, cos a = 0, into the transformation formulas, giving Z = (cos φ cos l, cos φ sin l, sin φ), where φ is the observer's geographic latitude, a the star's altitude, z = π/2 − a the zenith distance, and l the local sidereal time. The North Celestial Pole is at N = (0, 0, 1). The normalized cross product ω = (s × Z)/sin z is the rotation axis that turns the star into the direction of the zenith; finally ω × s is the third axis of the tilted coordinate system and the direction into which the star is moved on the great circle towards the zenith.
The plane tangential to the celestial sphere at the star is spanned by the unit vector to the north, n = (−sin δ cos α, −sin δ sin α, cos δ), and the unit vector to the east, e = (−sin α, cos α, 0). These are orthogonal unit vectors. The parallactic angle q is the angle of the initial section of the great circle at s, east of north, so projecting the direction towards the zenith onto these two vectors gives cos q sin z = Z · n = cos δ sin φ − sin δ cos φ cos h and sin q sin z = Z · e = sin h cos φ, where h = l − α is the hour angle. Derivation: (The second formula is the sine formula of spherical trigonometry.) The values of sin z and of cos φ are positive, so using atan2 functions one may divide both expressions through these without losing signs; eventually tan q = sin h cos φ / (cos δ sin φ − sin δ cos φ cos h) = sin h / (cos δ tan φ − sin δ cos h) yields the angle in the full range −π ≤ q ≤ π. The advantage of this expression is that it does not depend on the various offset conventions of the azimuth A; the uncontroversial offset of the hour angle h takes care of this. Derivation: For a sidereal target, by definition a target where δ and α are not time-dependent, the angle changes with a period of a sidereal day Ts. Let dots denote time derivatives; then the hour angle changes as ḣ = 2π/Ts, and the time derivative of the tan q expression is d(tan q)/dt = cos φ (cos δ sin φ cos h − sin δ cos φ) ḣ / (cos δ sin φ − sin δ cos φ cos h)², so that, using cos²q = (cos δ sin φ − sin δ cos φ cos h)²/sin²z, one obtains q̇ = cos φ (cos δ sin φ cos h − sin δ cos φ) ḣ / sin²z. The value derived above always refers to the North Celestial Pole as the origin of coordinates, even if that is not visible (i.e., if the telescope is south of the Equator). Some authors introduce more complicated formulas with variable signs to derive similar angles for telescopes south of the Equator that use the South Celestial Pole as the reference.
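The atan2 form of the formula derived above is easy to evaluate numerically; the following short Python helper (an illustrative sketch, not part of the original article) returns q in the full range −π ≤ q ≤ π.

```python
import math

def parallactic_angle(hour_angle, declination, latitude):
    """Parallactic angle q in radians, computed via atan2.

    Implements tan q = sin h / (cos d * tan phi - sin d * cos h), with all
    angles in radians; h > 0 means the object is west of the meridian.
    """
    h, d, phi = hour_angle, declination, latitude
    return math.atan2(math.sin(h),
                      math.cos(d) * math.tan(phi) - math.sin(d) * math.cos(h))

# On the meridian q is 0 (object south of the zenith in this example) ...
print(parallactic_angle(0.0, math.radians(20), math.radians(50)))
# ... and it grows as the object moves towards the west.
print(math.degrees(parallactic_angle(math.radians(30),
                                     math.radians(20), math.radians(50))))
```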
**Traction control system** Traction control system: A traction control system (TCS), also known as ASR (from German: Antriebsschlupfregelung, lit. 'drive slippage regulation'), is typically (but not necessarily) a secondary function of the electronic stability control (ESC) on production motor vehicles, designed to prevent loss of traction (i.e., wheelspin) of the driven road wheels. TCS is activated when throttle input and engine power and torque transfer are mismatched to the road surface conditions. The intervention consists of one or more of the following: Brake force applied to one or more wheels Reduction or suppression of spark sequence to one or more cylinders Reduction of fuel supply to one or more cylinders Closing the throttle, if the vehicle is fitted with drive by wire throttle In turbocharged vehicles, a boost control solenoid is actuated to reduce boost and therefore engine power.Typically, traction control systems share the electrohydraulic brake actuator (which does not use the conventional master cylinder and servo) and wheel-speed sensors with ABS. Traction control system: The basic idea behind the need for a traction control system is the loss of road grip can compromise steering control and stability of vehicles. This is the result of the difference in traction of the drive wheels. The difference in slip may occur due to the turning of a vehicle or varying road conditions for different wheels. When a car turns, its outer and inner wheels rotate at different speeds; this is conventionally controlled by using a differential. A further enhancement of the differential is to employ an active differential that can vary the amount of power being delivered to outer and inner wheels as needed. For example, if outward slip is sensed while turning, the active differential may deliver more power to the outer wheel in order to minimize the yaw (essentially the degree to which the front and rear wheels of a car are out of line.) Active differential, in turn, is controlled by an assembly of electromechanical sensors collaborating with a traction control unit. History: The predecessor of modern electronic traction control systems can be found in high-torque, high-power rear-wheel-drive cars as a limited slip differential. A limited-slip differential is a purely mechanical system that transfers a relatively small amount of power to the non-slipping wheel, while still allowing some wheel spin to occur. In 1971, Buick introduced MaxTrac, which used an early computer system to detect rear wheel spin and modulate engine power to those wheels to provide the most traction. A Buick exclusive item at the time, it was an option on all full-size models, including the Riviera, Estate Wagon, Electra 225, Centurion, and LeSabre. Cadillac introduced the Traction Monitoring System (TMS) in 1979 on the redesigned Eldorado. Operation: When the traction control computer (often incorporated into another control unit, such as the ABS module) detects one or more driven wheels spinning significantly faster than another, it invokes the ABS electronic control unit to apply brake friction to wheels spinning with lessened traction. Braking action on slipping wheel(s) will cause power transfer to wheel axle(s) with traction due to the mechanical action within the differential. 
All-wheel-drive (AWD) vehicles often have an electronically controlled coupling system in the transfer case or transaxle engaged (active part-time AWD), or locked-up tighter (in a true full-time set up driving all wheels with some power all the time) to supply non-slipping wheels with torque. Operation: This often occurs in conjunction with the powertrain computer reducing available engine torque by electronically limiting throttle application and/or fuel delivery, retarding ignition spark, completely shutting down engine cylinders, and a number of other methods, depending on the vehicle and how much technology is used to control the engine and transmission. There are instances when traction control is undesirable, such as trying to get a vehicle unstuck in snow or mud. Allowing one wheel to spin can propel a vehicle forward enough to get it unstuck, whereas both wheels applying a limited amount of power will not produce the same effect. Many vehicles have a traction control shut-off switch for such circumstances. Components of traction control: Generally, the main hardware for traction control and ABS are mostly the same. In many vehicles, traction control is provided as an additional option for ABS. Each wheel is equipped with a sensor that senses changes in its speed due to loss of traction. The sensed speed from the individual wheels is passed on to an electronic control unit (ECU). The ECU processes the information from the wheels and initiates braking to the affected wheels via a cable connected to an automatic traction control (ATC) valve.In all vehicles, traction control is automatically started when the sensors detect loss of traction at any of the wheels. Use of traction control: In road cars: Traction control has traditionally been a safety feature in premium high-performance cars, which otherwise need sensitive throttle input to prevent spinning driven wheels when accelerating, especially in wet, icy, or snowy conditions. In recent years, traction control systems have become widely available in non-performance cars, minivans, and light trucks and in some small hatchbacks. In race cars: Traction control is used as a performance enhancement, allowing maximum traction under acceleration without wheel spin. When accelerating out of a turn, it keeps the tires at optimal slip ratio. In heavy trucks: Traction control is available as well. Here the pneumatic brake system needs some additional valves and control logic to realize a TCS (or sometimes called ASR) system. Use of traction control: In motorcycles: Traction control for production motorcycles was first available with the BMW K1 in 1988. Honda offered Traction Control as an option, along with ABS, on their ST1100 beginning about 1992. By 2009, traction control was an option for several models offered by BMW and Ducati, the model year 2010 Kawasaki Concours 14 (1400GTR) and Honda CBR 650R in the year 2019, and Triumph "Modern Classic" line of motorcycles. Use of traction control: In off-road vehicles: Traction control is used instead of or in addition to, the mechanical limited-slip or locking differential. It is often implemented with an electronic limited-slip differential, as well as other computerized controls of the engine and transmission. The spinning wheel is slowed with short applications of brakes, diverting more torque to the non-spinning wheel; this is the system adopted by Range Rover in 1993, for example. 
ABS brake-traction control has several advantages over limited-slip and locking differentials; for example, steering control of a vehicle is easier, so the system can be continuously enabled. It also creates less stress on powertrain and driveline components, and increases durability as there are fewer moving parts to fail. When programmed or calibrated for off-road use, traction control systems like Ford's four-wheel electronic traction control (ETC), which is included with AdvanceTrac, and Porsche's four-wheel automatic brake differential (ABD) can send 100 percent of torque to any one wheel or wheels, via an aggressive brake strategy or "brake locking", allowing vehicles like the Expedition and Cayenne to keep moving even with two wheels (one front, one rear) completely off the ground. Use of traction control: Controversy in motorsports Very effective yet small units are available that allow the driver to remove the traction control system after an event if desired. In Formula One, an effort to ban traction control led to a change of rules for 2008: every car must have a standard (but custom-mappable) ECU, issued by the FIA, which is relatively basic and does not have traction control capabilities. In 2003, Paul Tracy admitted that CART teams had used traction control in the nineties, a device that was not formally legal until 2002 (although the switch to a single engine supplier for 2003 reverted the legalization). In 2008, NASCAR suspended a Whelen Modified Tour driver, crew chief, and car owner for one race and disqualified the team after finding questionable wiring in the ignition system, which could have been used to implement traction control. Traction control in cornering: Traction control is not just used for improving acceleration under slippery conditions. It can also help a driver to corner more safely. If too much throttle is applied during cornering, the driven wheels will lose traction and slide sideways. This occurs as understeer in front-wheel-drive vehicles and oversteer in rear-wheel-drive vehicles. Traction control can mitigate and possibly even prevent understeer or oversteer by limiting power to the overdriven wheel or wheels. However, it cannot increase the limits of frictional grip available and is used only to decrease the effect of driver error or to compensate for a driver's inability to react quickly enough to wheel slip. Traction control in cornering: Automobile manufacturers state in vehicle manuals that traction control systems should not encourage dangerous driving or encourage driving in conditions beyond the driver's control.
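The detection-and-intervention loop described in the Operation section can be caricatured in a few lines of Python. This is a deliberately simplified, hypothetical sketch (real systems work with calibrated slip ratios, many more sensors, and safety-certified controllers): it compares each driven wheel's speed against a reference speed, brakes wheels that are spinning up, and asks the engine to cut torque.

```python
def traction_control_step(wheel_speeds, reference_speed, slip_threshold=1.15):
    """One iteration of a toy traction-control loop.

    wheel_speeds    : dict of driven-wheel speeds (e.g. in rev/s)
    reference_speed : vehicle speed estimate, e.g. from the undriven wheels
    slip_threshold  : how much faster than the reference a wheel may turn
                      before intervention (15% here)

    Returns (brake_commands, request_torque_cut).
    """
    brake_commands = {}
    request_torque_cut = False
    for wheel, speed in wheel_speeds.items():
        if speed > reference_speed * slip_threshold:
            # Brake the spinning wheel so the differential diverts torque
            # to the wheel that still has grip.
            brake_commands[wheel] = True
            request_torque_cut = True   # also ask the engine to back off
        else:
            brake_commands[wheel] = False
    return brake_commands, request_torque_cut

# Example: the left driven wheel is spinning up on a slippery patch.
print(traction_control_step({"left": 14.0, "right": 10.2}, reference_speed=10.0))
```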
**Frozen Evolution** Frozen Evolution: Frozen Evolution is a 2008 book written by parasitologist Jaroslav Flegr, which aims to explain modern developments in evolutionary biology. It also contains information boxes which clarify important topics in science like peer review, scientific journals, citation metrics, philosophy of science, paradigm shifts, and Occam's razor. Flegr's previous research in toxoplasmosis is also mentioned. Frozen Evolution: The book also discusses Flegr's model of "frozen plasticity," a hypothesis which describes a possible mechanism for the evolution of adaptive traits. This hypothesis proposes that natural selection can only explain adaptation for a limited range of conditions, for instance when populations are genetically homogeneous. He describes frozen plasticity as being more general, and maintains that it better explains the origin of adaptive traits in genetically heterogeneous populations of sexual reproducing organisms. His hypothesis of frozen plasticity is an extension of Niles Eldredge and Stephen Jay Gould's theory of punctuated equilibrium, which describes the history of most fossil species as being relatively stable for millions of years, later punctuated by swift periods of evolutionary change during episodes of speciation. It also draws upon John Maynard Smith's concept of an evolutionarily stable strategy. Frozen Evolution: Biologist Brian K. Hall described the book as broad and integrative, but also combative, "verging on the disrespectful". Biologist Dan Graur described the book as "sloppily written, unprofessionally translated, inadequately conceived, improperly edited, dubiously syntaxed, and horribly pompous and tedious stream-of-consciousness monologue masquerading as a scholarly work."The book was dedicated to the memory of Stephen Jay Gould and John Maynard Smith, "the two most influential evolutionary biologists of the end of the late 20th century". Reviews: Book review – Brian K. Hall, Evolution & Development Book review – by Dan Graur, The Quarterly Review of Biology
**Winding hole** Winding hole: A winding hole is a widened area of a canal (usually in the United Kingdom), used for turning a canal boat such as a narrowboat. In sea ports an area for turning ships is usually called a turning basin. Etymology: The word is commonly believed to derive from the practice of using the wind to assist with the turn. Another etymology, however, is the Old English word for turn, "windan", pronounced with a short "i" (as opposed to the long "i" of windlass, a handle for winding gears). Much UK canal terminology comes from spoken rather than written tradition and from bargees who did not read or write. It is also possible that the word has a similar derivation to that of the windlass, which derives from the Old Norse "vinda" and "ás", words currently used in Iceland, where the modern word for "windlass" is "vinda". History: Because the average width of a canal channel (about 30 to 40 feet) is less than the length of a full-size narrowboat (72 feet), it is not usually possible to turn a boat in the main channel. Winding holes are typically indentations in the off-side (non-towpath side) of the canal, allowing sufficient space to turn the boat. Use: A winding hole consists of a "notch" in the canal bank. A turning boat inserts its bow into the notch and swings the stern round. In the days of horse-drawn boats, this was accomplished using bargepoles.
**Exodeoxyribonuclease (phage SP3-induced)** Exodeoxyribonuclease (phage SP3-induced): Exodeoxyribonuclease (phage SP3-induced) (EC 3.1.11.4, phage SP3 DNase, DNA 5′-dinucleotidohydrolase, deoxyribonucleate 5′-dinucleotidase, deoxyribonucleic 5′-dinucleotidohydrolase, bacteriophage SP3 deoxyribonuclease) is an enzyme that catalyses the following chemical reaction: exonucleolytic cleavage in the 5′- to 3′-direction to yield nucleoside 5′-phosphates. It shows a preference for single-stranded DNA.
**Propeller** Propeller: A propeller (colloquially often called a screw if on a ship or an airscrew if on an aircraft) is a device with a rotating hub and radiating blades that are set at a pitch to form a helical spiral which, when rotated, exerts linear thrust upon a working fluid such as water or air. Propellers are used to pump fluid through a pipe or duct, or to create thrust to propel a boat through water or an aircraft through air. The blades are shaped so that their rotational motion through the fluid causes a pressure difference between the two surfaces of the blade by Bernoulli's principle which exerts force on the fluid. Most marine propellers are screw propellers with helical blades rotating on a propeller shaft with an approximately horizontal axis. History: Early developments The principle employed in using a screw propeller is derived from sculling. In sculling, a single blade is moved through an arc, from side to side taking care to keep presenting the blade to the water at the effective angle. The innovation introduced with the screw propeller was the extension of that arc through more than 360° by attaching the blade to a rotating shaft. Propellers can have a single blade, but in practice there are nearly always more than one so as to balance the forces involved. History: The origin of the screw propeller starts at least as early as Archimedes (c. 287 – c. 212 BC), who used a screw to lift water for irrigation and bailing boats, so famously that it became known as Archimedes' screw. It was probably an application of spiral movement in space (spirals were a special study of Archimedes) to a hollow segmented water-wheel used for irrigation by Egyptians for centuries. A flying toy, the bamboo-copter, was enjoyed in China beginning around 320 AD. Later, Leonardo da Vinci adopted the screw principle to drive his theoretical helicopter, sketches of which involved a large canvas screw overhead. History: In 1661, Toogood and Hays proposed using screws for waterjet propulsion, though not as a propeller. Robert Hooke in 1681 designed a horizontal watermill which was remarkably similar to the Kirsten-Boeing vertical axis propeller designed almost two and a half centuries later in 1928; two years later Hooke modified the design to provide motive power for ships through water. In 1693 a Frenchman by the name of Du Quet invented a screw propeller which was tried in 1693 but later abandoned. In 1752, the Academie des Sciences in Paris granted Burnelli a prize for a design of a propeller-wheel. At about the same time, the French mathematician Alexis-Jean-Pierre Paucton suggested a water propulsion system based on the Archimedean screw. In 1771, steam-engine inventor James Watt in a private letter suggested using "spiral oars" to propel boats, although he did not use them with his steam engines, or ever implement the idea.One of the first practical and applied uses of a propeller was on a submarine dubbed Turtle which was designed in New Haven, Connecticut, in 1775 by Yale student and inventor David Bushnell, with the help of clock maker, engraver, and brass foundryman Isaac Doolittle. Bushnell's brother Ezra Bushnell and ship's carpenter and clock maker Phineas Pratt constructed the hull in Saybrook, Connecticut. On the night of September 6, 1776, Sergeant Ezra Lee piloted Turtle in an attack on HMS Eagle in New York Harbor. Turtle also has the distinction of being the first submarine used in battle. 
Bushnell later described the propeller in an October 1787 letter to Thomas Jefferson: "An oar formed upon the principle of the screw was fixed in the forepart of the vessel its axis entered the vessel and being turned one way rowed the vessel forward but being turned the other way rowed it backward. It was made to be turned by the hand or foot." The brass propeller, like all the brass and moving parts on Turtle, was crafted by Issac Doolittle of New Haven.In 1785, Joseph Bramah of England proposed a propeller solution of a rod going through the underwater aft of a boat attached to a bladed propeller, though he never built it.In February 1800, Edward Shorter of London proposed using a similar propeller attached to a rod angled down temporarily deployed from the deck above the waterline and thus requiring no water seal, and intended only to assist becalmed sailing vessels. He tested it on the transport ship Doncaster at Gibraltar and Malta, achieving a speed of 1.5 mph (2.4 km/h).In 1802, American lawyer and inventor John Stevens built a 25-foot (7.6 m) boat with a rotary steam engine coupled to a four-bladed propeller. The craft achieved a speed of 4 mph (6.4 km/h), but Stevens abandoned propellers due to the inherent danger in using the high-pressure steam engines. His subsequent vessels were paddle-wheeled boats.By 1827, Czech-Austrian inventor Josef Ressel had invented a screw propeller with multiple blades on a conical base. He tested it in February 1826 on a manually-driven ship and successfully used it on a steamboat in 1829. His 48-ton ship Civetta reached 6 knots. This was the first successful Archimedes screw-propelled ship. His experiments were banned by police after a steam engine accident. Ressel, a forestry inspector, held an Austro-Hungarian patent for his propeller. The screw propeller was an improvement over paddlewheels as it wasn't affected by ship motions or draft changes.John Patch, a mariner in Yarmouth, Nova Scotia developed a two-bladed, fan-shaped propeller in 1832 and publicly demonstrated it in 1833, propelling a row boat across Yarmouth Harbour and a small coastal schooner at Saint John, New Brunswick, but his patent application in the United States was rejected until 1849 because he was not an American citizen. His efficient design drew praise in American scientific circles but by then he faced multiple competitors. History: Screw propellers Despite experimentation with screw propulsion before the 1830s, few of these inventions were pursued to the testing stage, and those that were proved unsatisfactory for one reason or another.In 1835, two inventors in Britain, John Ericsson and Francis Pettit Smith, began working separately on the problem. Smith was first to take out a screw propeller patent on 31 May, while Ericsson, a gifted Swedish engineer then working in Britain, filed his patent six weeks later. Smith quickly built a small model boat to test his invention, which was demonstrated first on a pond at his Hendon farm, and later at the Royal Adelaide Gallery of Practical Science in London, where it was seen by the Secretary of the Navy, Sir William Barrow. Having secured the patronage of a London banker named Wright, Smith then built a 30-foot (9.1 m), 6-horsepower (4.5 kW) canal boat of six tons burthen called Francis Smith, which was fitted with his wooden propeller and demonstrated on the Paddington Canal from November 1836 to September 1837. 
By a fortuitous accident, the wooden propeller of two turns was damaged during a voyage in February 1837, and to Smith's surprise the broken propeller, which now consisted of only a single turn, doubled the boat's previous speed, from about four miles an hour to eight. Smith would subsequently file a revised patent in keeping with this accidental discovery. History: In the meantime, Ericsson built a 45-foot (14 m) screw-propelled steamboat, Francis B. Ogden in 1837, and demonstrated his boat on the River Thames to senior members of the British Admiralty, including Surveyor of the Navy Sir William Symonds. In spite of the boat achieving a speed of 10 miles an hour, comparable with that of existing paddle steamers, Symonds and his entourage were unimpressed. The Admiralty maintained the view that screw propulsion would be ineffective in ocean-going service, while Symonds himself believed that screw propelled ships could not be steered efficiently. Following this rejection, Ericsson built a second, larger screw-propelled boat, Robert F. Stockton, and had her sailed in 1839 to the United States, where he was soon to gain fame as the designer of the U.S. Navy's first screw-propelled warship, USS Princeton. History: Apparently aware of the Royal Navy's view that screw propellers would prove unsuitable for seagoing service, Smith determined to prove this assumption wrong. In September 1837, he took his small vessel (now fitted with an iron propeller of a single turn) to sea, steaming from Blackwall, London to Hythe, Kent, with stops at Ramsgate, Dover and Folkestone. On the way back to London on the 25th, Smith's craft was observed making headway in stormy seas by officers of the Royal Navy. This revived Admiralty's interest and Smith was encouraged to build a full size ship to more conclusively demonstrate the technology. History: SS Archimedes was built in 1838 by Henry Wimshurst of London, as the world's first steamship to be driven by a screw propeller.The Archimedes had considerable influence on ship development, encouraging the adoption of screw propulsion by the Royal Navy, in addition to her influence on commercial vessels. Trials with Smith's Archimedes led to a tug-of-war competition in 1845 between HMS Rattler and HMS Alecto with the screw-driven Rattler pulling the paddle steamer Alecto backward at 2.5 knots (4.6 km/h).The Archimedes also influenced the design of Isambard Kingdom Brunel's SS Great Britain in 1843, then the world's largest ship and the first screw-propelled steamship to cross the Atlantic Ocean in August 1845. History: HMS Terror and HMS Erebus were both heavily modified to become the first Royal Navy ships to have steam-powered engines and screw propellers. Both participated in Franklin's lost expedition, last seen in July 1845 near Baffin Bay. Screw propeller design stabilized in the 1880s. History: Aircraft The Wright brothers pioneered the twisted aerofoil shape of modern aircraft propellers. They realized an air propeller was similar to a wing. They verified this using wind tunnel experiments. They introduced a twist in their blades to keep the angle of attack constant. Their blades were only 5% less efficient than those used 100 years later. 
Understanding of low-speed propeller aerodynamics was complete by the 1920s, although increased power and smaller diameters added design constraints.Alberto Santos Dumont, another early pioneer, applied the knowledge he gained from experiences with airships to make a propeller with a steel shaft and aluminium blades for his 14 bis biplane. Some of his designs used a bent aluminium sheet for blades, thus creating an airfoil shape. They were heavily undercambered, and this plus the absence of lengthwise twist made them less efficient than the Wright propellers. Even so, this may have been the first use of aluminium in the construction of an airscrew. Theory: In the nineteenth century, several theories concerning propellers were proposed. The momentum theory or disk actuator theory – a theory describing a mathematical model of an ideal propeller – was developed by W.J.M. Rankine (1865), A.G. Greenhill (1888) and R.E. Froude (1889). The propeller is modelled as an infinitely thin disc, inducing a constant velocity along the axis of rotation and creating a flow around the propeller. Theory: A screw turning through a solid will have zero "slip"; but as a propeller screw operates in a fluid (either air or water), there will be some losses. The most efficient propellers are large-diameter, slow-turning screws, such as on large ships; the least efficient are small-diameter and fast-turning (such as on an outboard motor). Using Newton's laws of motion, one may usefully think of a propeller's forward thrust as being a reaction proportionate to the mass of fluid sent backward per time and the speed the propeller adds to that mass, and in practice there is more loss associated with producing a fast jet than with creating a heavier, slower jet. (The same applies in aircraft, in which larger-diameter turbofan engines tend to be more efficient than earlier, smaller-diameter turbofans, and even smaller turbojets, which eject less mass at greater speeds.) Propeller geometry The geometry of a marine screw propeller is based on a helicoidal surface. This may form the face of the blade, or the faces of the blades may be described by offsets from this surface. The back of the blade is described by offsets from the helicoid surface in the same way that an aerofoil may be described by offsets from the chord line. The pitch surface may be a true helicoid or one having a warp to provide a better match of angle of attack to the wake velocity over the blades. A warped helicoid is described by specifying the shape of the radial reference line and the pitch angle in terms of radial distance. The traditional propeller drawing includes four parts: a side elevation, which defines the rake, the variation of blade thickness from root to tip, a longitudinal section through the hub, and a projected outline of a blade onto a longitudinal centreline plane. The expanded blade view shows the section shapes at their various radii, with their pitch faces drawn parallel to the base line, and thickness parallel to the axis. The outline indicated by a line connecting the leading and trailing tips of the sections depicts the expanded blade outline. The pitch diagram shows variation of pitch with radius from root to tip. The transverse view shows the transverse projection of a blade and the developed outline of the blade.The blades are the foil section plates that develop thrust when the propeller is rotated The hub is the central part of the propeller, which connects the blades together and fixes the propeller to the shaft. 
Theory: Rake is the angle of the blade to a radius perpendicular to the shaft. Skew is the tangential offset of the line of maximum thickness to a radius The propeller characteristics are commonly expressed as dimensionless ratios: Pitch ratio PR = propeller pitch/propeller diameter, or P/D Disk area A0 = πD2/4 Expanded area ratio = AE/A0, where expanded area AE = Expanded area of all blades outside of the hub. Theory: Developed area ratio = AD/A0, where developed area AD = Developed area of all blades outside of the hub Projected area ratio = AP/A0, where projected area AP = Projected area of all blades outside of the hub Mean width ratio = (Area of one blade outside the hub/length of the blade outside the hub)/Diameter Blade width ratio = Maximum width of a blade/Diameter Blade thickness fraction = Thickness of a blade produced to shaft axis/Diameter Cavitation Cavitation is the formation of vapor bubbles in water near a moving propeller blade in regions of very low pressure. It can occur if an attempt is made to transmit too much power through the screw, or if the propeller is operating at a very high speed. Cavitation can waste power, create vibration and wear, and cause damage to the propeller. It can occur in many ways on a propeller. The two most common types of propeller cavitation are suction side surface cavitation and tip vortex cavitation. Theory: Suction side surface cavitation forms when the propeller is operating at high rotational speeds or under heavy load (high blade lift coefficient). The pressure on the upstream surface of the blade (the "suction side") can drop below the vapor pressure of the water, resulting in the formation of a vapor pocket. Under such conditions, the change in pressure between the downstream surface of the blade (the "pressure side") and the suction side is limited, and eventually reduced as the extent of cavitation is increased. When most of the blade surface is covered by cavitation, the pressure difference between the pressure side and suction side of the blade drops considerably, as does the thrust produced by the propeller. This condition is called "thrust breakdown". Operating the propeller under these conditions wastes energy, generates considerable noise, and as the vapor bubbles collapse it rapidly erodes the screw's surface due to localized shock waves against the blade surface. Tip vortex cavitation is caused by the extremely low pressures formed at the core of the tip vortex. The tip vortex is caused by fluid wrapping around the tip of the propeller; from the pressure side to the suction side. This video demonstrates tip vortex cavitation. Tip vortex cavitation typically occurs before suction side surface cavitation and is less damaging to the blade, since this type of cavitation doesn't collapse on the blade, but some distance downstream. Types of propellers: Variable-pitch propeller Variable-pitch propellers may be either controllable (controllable-pitch propellers) or automatically feathering (folding propellers ). Variable-pitch propellers have significant advantages over the fixed-pitch variety, namely: the ability to select the most effective blade angle for any given speed; when motorsailing, the ability to coarsen the blade angle to attain the optimum drive from wind and engines; the ability to move astern (in reverse) much more efficiently (fixed props perform very poorly in astern); the ability to "feather" the blades to give the least resistance when not in use (for example, when sailing). 
For large airplanes, if the engine is uncontrollable, the ability to feather the propeller is necessary to prevent the propeller from spinning so fast that it breaks apart. Types of propellers: Skewback propeller An advanced type of propeller used on German Type 212 submarines is called a skewback propeller. As in the scimitar blades used on some aircraft, the blade tips of a skewback propeller are swept back against the direction of rotation. In addition, the blades are tilted rearward along the longitudinal axis, giving the propeller an overall cup-shaped appearance. This design preserves thrust efficiency while reducing cavitation, and thus makes for a quiet, stealthy design. A small number of ships use propellers with winglets similar to those on some airplane wings, reducing tip vortices and improving efficiency. Types of propellers: Modular propeller A modular propeller provides more control over the boat's performance. There is no need to change an entire propeller when it is possible to change only the pitch or the damaged blades. Being able to adjust pitch allows boaters to obtain better performance at different altitudes, for water sports, or for cruising. Voith Schneider propeller Voith Schneider propellers use four untwisted straight blades turning around a vertical axis instead of helical blades and can provide thrust in any direction at any time, at the cost of higher mechanical complexity. Types of propellers: Shaftless A rim-driven thruster integrates an electric motor into a ducted propeller. The cylindrical duct acts as the stator, while the tips of the blades act as the rotor. They typically provide high torque and operate at low RPM, producing less noise. The system does not require a shaft, reducing weight. Units can be placed at various locations around the hull and operated independently, e.g., to aid in maneuvering. The absence of a shaft allows alternative rear hull designs. Types of propellers: Toroidal Twisted-toroid (ring-shaped) propellers, first invented over 120 years ago, replace separate blades with closed, ring-shaped blades. They are significantly quieter (particularly at audible frequencies) and more efficient than traditional propellers for both air and water applications. The design distributes vortices generated by the propeller across the entire shape, causing them to dissipate faster in the atmosphere. Damage protection: Shaft protection For smaller engines, such as outboards, where the propeller is exposed to the risk of collision with heavy objects, the propeller often includes a device that is designed to fail when overloaded; the device or the whole propeller is sacrificed so that the more expensive transmission and engine are not damaged. Damage protection: Typically in smaller (less than 10 hp or 7.5 kW) and older engines, a narrow shear pin through the drive shaft and propeller hub transmits the power of the engine at normal loads. The pin is designed to shear when the propeller is put under a load that could damage the engine. After the pin is sheared the engine is unable to provide propulsive power to the boat until a new shear pin is fitted. In larger and more modern engines, a rubber bushing transmits the torque of the drive shaft to the propeller's hub. Under a damaging load the friction of the bushing in the hub is overcome and the rotating propeller slips on the shaft, preventing overloading of the engine's components. After such an event the rubber bushing may be damaged.
If so, it may continue to transmit reduced power at low revolutions, but may provide no power, due to reduced friction, at high revolutions. Also, the rubber bushing may perish over time leading to its failure under loads below its designed failure load. Damage protection: Whether a rubber bushing can be replaced or repaired depends upon the propeller; some cannot. Some can, but need special equipment to insert the oversized bushing for an interference fit. Others can be replaced easily. The "special equipment" usually consists of a funnel, a press and rubber lubricant (soap). If one does not have access to a lathe, an improvised funnel can be made from steel tube and car body filler; as the filler is only subject to compressive forces it is able to do a good job. Often, the bushing can be drawn into place with nothing more complex than a couple of nuts, washers and a threaded rod. A more serious problem with this type of propeller is a "frozen-on" spline bushing, which makes propeller removal impossible. In such cases the propeller must be heated in order to deliberately destroy the rubber insert. Once the propeller is removed, the splined tube can be cut away with a grinder and a new spline bushing is then required. To prevent a recurrence of the problem, the splines can be coated with anti-seize anti-corrosion compound. Damage protection: In some modern propellers, a hard polymer insert called a drive sleeve replaces the rubber bushing. The splined or other non-circular cross section of the sleeve inserted between the shaft and propeller hub transmits the engine torque to the propeller, rather than friction. The polymer is weaker than the components of the propeller and engine so it fails before they do when the propeller is overloaded. This fails completely under excessive load, but can easily be replaced. Damage protection: Weed hatches and rope cutters Whereas the propeller on a large ship will be immersed in deep water and free of obstacles and flotsam, yachts, barges and river boats often suffer propeller fouling by debris such as weed, ropes, cables, nets and plastics. British narrowboats invariably have a weed hatch over the propeller, and once the narrowboat is stationary, the hatch may be opened to give access to the propeller, enabling debris to be cleared. Yachts and river boats rarely have weed hatches; instead they may fit a rope cutter that fits around the prop shaft and rotates with the propeller. These cutters clear the debris and obviate the need for divers to attend manually to the fouling. Several forms of rope cutters are available: A simple sharp edged disc that cuts like a razor; A rotor with two or more projecting blades that slice against a fixed blade, cutting with a scissor action; A serrated rotor with a complex cutting edge made up of sharp edges and projections. Propeller variations: A cleaver is a type of propeller design especially used for boat racing. Its leading edge is formed round, while the trailing edge is cut straight. It provides little bow lift, so that it can be used on boats that do not need much bow lift, for instance hydroplanes, that naturally have enough hydrodynamic bow lift. To compensate for the lack of bow lift, a hydrofoil may be installed on the lower unit. Hydrofoils reduce bow lift and help to get a boat out of the hole and onto plane.
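As a small worked example of the dimensionless characteristics defined in the Theory section (pitch ratio P/D, disk area A0 = πD²/4 and the expanded area ratio AE/A0), the following Python snippet uses purely illustrative numbers; it is not taken from any design standard.

```python
import math

def propeller_ratios(pitch_m, diameter_m, expanded_blade_area_m2):
    """Dimensionless propeller characteristics from the quantities above."""
    disk_area = math.pi * diameter_m ** 2 / 4          # A0 = pi * D^2 / 4
    return {
        "pitch_ratio_P_over_D": pitch_m / diameter_m,
        "disk_area_A0_m2": disk_area,
        "expanded_area_ratio_AE_over_A0": expanded_blade_area_m2 / disk_area,
    }

# Illustrative figures: a 4 m diameter propeller with 3.2 m pitch and
# 7.5 m^2 of expanded blade area outside the hub.
print(propeller_ratios(pitch_m=3.2, diameter_m=4.0, expanded_blade_area_m2=7.5))
```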
**Semiperfect number** Semiperfect number: In number theory, a semiperfect number or pseudoperfect number is a natural number n that is equal to the sum of all or some of its proper divisors. A semiperfect number that is equal to the sum of all its proper divisors is a perfect number. The first few semiperfect numbers are: 6, 12, 18, 20, 24, 28, 30, 36, 40, ... (sequence A005835 in the OEIS) Properties: Every multiple of a semiperfect number is semiperfect. A semiperfect number that is not divisible by any smaller semiperfect number is called primitive. Every number of the form 2^m p for a natural number m and an odd prime number p such that p < 2^(m+1) is also semiperfect. In particular, every number of the form 2^m (2^(m+1) − 1) is semiperfect, and indeed perfect if 2^(m+1) − 1 is a Mersenne prime. The smallest odd semiperfect number is 945 (see, e.g., Friedman 1993). A semiperfect number is necessarily either perfect or abundant. An abundant number that is not semiperfect is called a weird number. With the exception of 2, all primary pseudoperfect numbers are semiperfect. Every practical number that is not a power of two is semiperfect. The natural density of the set of semiperfect numbers exists. Primitive semiperfect numbers: A primitive semiperfect number (also called a primitive pseudoperfect number, irreducible semiperfect number or irreducible pseudoperfect number) is a semiperfect number that has no semiperfect proper divisor. The first few primitive semiperfect numbers are 6, 20, 28, 88, 104, 272, 304, 350, ... (sequence A006036 in the OEIS) There are infinitely many such numbers. All numbers of the form 2^m p, with p a prime between 2^m and 2^(m+1), are primitive semiperfect, but this is not the only form: for example, 770. There are infinitely many odd primitive semiperfect numbers, the smallest being 945, a result of Paul Erdős; there are also infinitely many primitive semiperfect numbers that are not harmonic divisor numbers. Every semiperfect number is a multiple of a primitive semiperfect number.
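The definition can be checked directly by brute force: a number is semiperfect exactly when some subset of its proper divisors sums to it. The following short Python function (an illustrative sketch; exponential in the number of divisors, so only suitable for small n) reproduces the list of semiperfect numbers given above.

```python
from itertools import combinations

def proper_divisors(n):
    """All divisors of n smaller than n."""
    return [d for d in range(1, n) if n % d == 0]

def is_semiperfect(n):
    """True if some subset of n's proper divisors sums to n."""
    divisors = proper_divisors(n)
    return any(sum(subset) == n
               for k in range(1, len(divisors) + 1)
               for subset in combinations(divisors, k))

# Matches the opening terms of OEIS A005835: 6, 12, 18, 20, 24, 28, 30, 36, 40.
print([n for n in range(2, 41) if is_semiperfect(n)])
```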
**LIS (programming language)** LIS (programming language): LIS (Langage d'Implémentation de Systèmes) was a system implementation programming language designed by Jean Ichbiah, who later designed Ada. LIS was used to implement the compiler for the Ada-0 subset of Ada at Karlsruhe on the Siemens BS2000 operating system. Later the Karlsruhe Ada compilation system was rewritten in Ada-0 itself, which was straightforward because LIS and Ada-0 are very close.
**Beamstrahlung** Beamstrahlung: Beamstrahlung (from beam + bremsstrahlung) is the radiation from one beam of charged particles in storage rings and linear or circular colliders, namely the synchrotron radiation emitted due to the electromagnetic field of the opposing beam. The term was coined by J. Rees in 1978. It is a source of radiation loss in colliders: a beam particle is lost whenever, during the collision, it radiates a photon (or photons) of an energy high enough that the particle falls outside the momentum acceptance. Furthermore, with a non-zero dispersion at the interaction point, beamstrahlung can also affect the transverse beam emittance; such dispersion can either be due to incompletely corrected beam optics errors or be intentionally introduced for the purpose of reducing the centre-of-mass energy spread for monochromatization.
**Transcendental Étude No. 6 (Liszt)** Transcendental Étude No. 6 (Liszt): Transcendental Étude No. 6 in G minor, "Vision" is the sixth of twelve Transcendental Études by Franz Liszt. It is a study of the extensions of the hand, hands moving in opposite directions, arpeggiated double notes, and tremolos. It is one of the less difficult études out of Liszt's 12 Transcendental Études, though the beginning of the piece can be quite troublesome if it is played as directed: completely with the left hand (linked hand in the second edition [Dover]). It would require large stretches and dexterous leaps if done so. Visual image: The visual image of this piece is a funeral. The middle section's wild octaves and rapidly climbing and descending arpeggios are filled with exaltation (as the original notes Franz Liszt scripted).
**VG-10** VG-10: VG-10 is a cutlery-grade stainless steel produced in Japan. The name stands for V Gold 10 ("gold" meaning quality), or sometimes V-Kin-10 (V金10号) (kin means "gold" in Japanese). It is a stainless steel with a high carbon content, containing 1% carbon, 15% chromium, 1% molybdenum, 0.2% vanadium, and 1.5% cobalt. The VG-10 stainless steel was originally designed by Takefu Special Steel Co. Ltd., based in Takefu, Fukui Prefecture, Japan (the former cutlery/sword-making center of Echizen). Takefu also made another version, VG10W, which contains 0.4% tungsten. Almost all VG-10 steel knife blades are manufactured in Japan. VG-10 was originally aimed at Japanese chefs, but also found its way into sports cutlery. Spyderco and Kizer have produced some of their most popular models from VG-10, SOG categorizes VG-10 as its highest grade of blade steel, and Fällkniven uses laminated VG-10 in many of its knives.
**IPA non-pulmonic consonant chart with audio** IPA non-pulmonic consonant chart with audio: The International Phonetic Alphabet, or IPA, is an alphabetic system of phonetic notation based primarily on the Latin alphabet. It was devised by the International Phonetic Association as a standardized representation of the sounds of spoken language. In the IPA, non-pulmonic consonants are sounds whose airflow is not dependent on the lungs. These include clicks (found in the Khoisan languages and some neighboring Bantu languages of Africa), implosives (found in languages such as Sindhi, Hausa, Swahili and Vietnamese), and ejectives (found in many Amerindian and Caucasian languages). Ejectives occur in about 20% of the world's languages, implosives in roughly 13%, and clicks in very few. IPA non-pulmonic consonant chart with audio: In the audio samples below, the consonants are pronounced with the vowel [a] for demonstration.
**Branch theory** Branch theory: Branch theory is an ecclesiological proposition that the One, Holy, Catholic, and Apostolic Church includes various different Christian denominations whether in formal communion or not. The theory is often incorporated in the Protestant notion of an invisible Christian Church structure binding them together. Branch theory: Anglican proponents of Anglo-Catholic churchmanship who support the theory include only the Roman Catholic, Eastern Orthodox, Oriental Orthodox, Scandinavian Lutheran, Old Catholic, Moravian, Persian and Anglican churches as branches. On the other hand, the majority of Anglicans, including those of low church, broad church and high churchmanship, have "followed the major continental Reformers in their doctrine of the true church, identifiable by the authentic ministry of word and sacrament, in their rejection of the jurisdiction of the pope, and in their alliance with the civil authority ('the magistrate')". The Church of England historically considered itself "Protestant and Reformed" and recognized as true churches the Continental Reformed Churches, participating in the Synod of Dort in 1618–1619.As such, Anglicans have entered into full communion with bodies such as the Evangelical Church in Germany and in some countries, have merged with Methodist, Presbyterian and Lutheran denominations to form united Protestant Churches, such as the Church of North India, Church of Pakistan, Church of South India, and the Church of Bangladesh for example. For Anglicans of Evangelical churchmanship, the notion of apostolic continuity is seen as "fidelity to the teaching of the apostles as set out in scripture, rather than in historical or institutional terms" and thus they place focus on "the gospel, and the means by which this is proclaimed, articulated, and reinforced--namely, the ministers of word and sacrament."Other Protestant Christians, including Evangelical Anglicans, generally reject the Anglo-Catholic version of the branch theory and hold a theory in which the Christian Church "has no visible unity" but contains numerous denominations that are "invisibly connected." Fortescue states that "this theory is common among all Protestant bodies, although each one generally holds that it is the purest branch." In expounding upon branch theory, theologian Paul Evdokimov states that some view each distinct Christian tradition as contributing something special to the whole of Christendom: the famous "branch theory", according to which each ecclesiastical tradition possesses only part of the truth, so that the true Church will come into being only when they all join together; such a belief encourages the "churches" to continue as they are, confirming in their fragmented state, and the final result is Christianity without the Church. Each church, in its more pronounced form, displays, according to its own native spirit, a particular version of the unique revelation. So, for example, Roman Christianity is characterized by filial love and obedience expressed towards the fatherly authority hypostatized in the first Person of the Trinity: the Church is there to teach and to obey. For the Reformed Churches the vital thing is sacramental reverence for the Word; it is the Church's duty to listen and reform itself. The Orthodox treasure the liberty of the children of God that flowers in liturgical communion, while the Church hymns the love of God for the human race. 
Views: Anglican Charles Daubeny (1745-1827) formulated a branch theory in which the One, Holy, Catholic, and Apostolic Church included the Anglican, Scandinavian Lutheran, Roman Catholic and Eastern Orthodox Churches; to this the Oriental Orthodox Churches, Moravian Church, Church of the East, and Old Catholic Churches were also added. The theory was popularized during the Oxford Movement, particularly through the work of the Tractarians. Although the Anglican Roman Catholic International Commission, an organization sponsored by the Anglican Consultative Council and the Pontifical Council for Promoting Christian Unity, seeks to make ecumenical progress between the Roman Catholic Church and the Anglican Communion, it has made no statement on the topic. The theory "has received mixed reception even within the Anglican Communion."The majority of Anglicans, including those of low church, broad church and high churchmanship, have "followed the major continental Reformers in their doctrine of the true church, identifiable by the authentic ministry of word and sacrament, in their rejection of the jurisdiction of the pope, and in their alliance with the civil authority ('the magistrate')". The Church of England historically considered itself "Protestant and Reformed" and recognized as true churches the Continental Reformed Churches, participating in the Synod of Dort in 1618–1619; in 1567, Edmund Grindal, who became the Church of England's Archbishop of Canterbury, declared that "all reformed churches do differ in rites and ceremonies, but we agree with all reformed churches in substance of doctrine."As such, Anglicans have entered into full communion with bodies such as the Evangelical Church in Germany and in some countries, have merged with Methodist, Presbyterian and Lutheran denominations to form united Protestant Churches, such as the Church of North India, Church of Pakistan, Church of South India, and the Church of Bangladesh for example. For Anglicans of Evangelical churchmanship, the notion of apostolic continuity is seen as "fidelity to the teaching of the apostles as set out in scripture, rather than in historical or institutional terms" and thus they place focus on "the gospel, and the means by which this is proclaimed, articulated, and reinforced--namely, the ministers of word and sacrament." Catholic The Catholic Church does not accept that those churches separated by schism or heresy are part of the one true church, maintaining that "there exists a single Church of Christ, which subsists in the Catholic Church, governed by the Successor of Peter and by the Bishops in communion with him". Several Popes have explicitly condemned the Anglican "branch theory". The Catholic Church additionally rejects the validity of Anglican Orders, defined formally in 1896 by Pope Leo XIII in the Papal Bull Apostolicae curae, which declares Anglican Orders "absolutely null and utterly void". Views: Soon after the formulation of the branch theory, in 1864, the Holy Office rejected the branch theory or idea that "the three Christian communions, Catholic, Greek schismatic, and Anglican, however separated and divided from one another, nevertheless with equal right claim for themselves the name "Catholic" and "together now constitute the Catholic Church". 
In 1870, English bishops attending the First Vatican Council raised objections to the expression Sancta Romana Catholica Ecclesia ("Holy Roman Catholic Church") which appeared in the schema (the draft) of the First Ecumenical Council of the Vatican's Dogmatic Constitution on the Catholic Faith, Dei Filius. These bishops proposed that the word "Roman" be omitted or at least that commas be inserted between the adjectives, out of concern that use of the term "Roman Catholic" would lend support to proponents of the branch theory. While the council overwhelmingly rejected this proposal, the text was finally modified to read "Sancta Catholica Apostolica Romana Ecclesia" translated into English either as "the holy Catholic Apostolic Roman Church" or, by separating each adjective, as "the holy, Catholic, Apostolic, and Roman Church". Views: Both lungs concept Pope Benedict XVI and Pope John Paul II used the "two lungs" concept to relate the Latin Church with the Eastern Catholic Churches. Views: Eastern Orthodox Non-acceptance of the branch theory by the Eastern Orthodox Church, was in 1853 called unfortunate by the theory's proponent, William Palmer, who wished the Eastern Church to claim to be no more than a part of the whole, not the whole of the true Church. Bishop Kallistos Ware says that "Orthodox writers sometimes speak as if they accepted the 'Branch Theory', once popular among High Church Anglicans", but explains that this opinion "cannot be reconciled with traditional Orthodox theology". Western Orthodox cleric Julian Joseph Overbeck writes: But what do we see in the Anglican Church? Heresies are not only tolerated and publicly preached from the pulpits, and the schismatical and heretical Church of Rome is by a great many fondled and looked up to, but a theory has sprung up, the so called Branch-Church theory, maintaining that the Catholic Church consists of three branches: the Roman, Greek, and Anglican Churches. Only fancy! the Roman and Greek Churches contradicting and anathematising each other, and the Anglican Church (in its Articles) contradicting both, and besides full of heretical teaching-these are the component parts of the One Catholic Church, the abode of the Spirit of Truth!!! And on this theory rests the "Corporate Reunion of Christendom," which entirely ignores all Apostolic teaching concerning schism and heresy! In its official declarations, the Eastern Orthodox Church states that the one true church founded by Jesus Christ is a real identifiable entity and that it is singularly the Orthodox Catholic Church. It has identified itself as the "One, Holy, Catholic, and Apostolic Church" in, for instance, synods held in 1836 and 1838 and in its correspondence with Pope Pius IX and Pope Leo XIII. Adrian Fortescue wrote of the Eastern Orthodox: "The idea of a church made up of mutually excommunicate bodies that teach different articles of faith and yet altogether form one Church is as inconceivable to them as it is to us (Catholics)". The Eastern Orthodox Church regards neither Catholics nor Protestants as branches of the "One True Church".The Eastern Orthodox Church is a part of several ecumenical efforts on international, national, and regional levels, such as the World Council of Churches. With respect to branch theory, some conservative Eastern Orthodox, however, take a decidedly anti-ecumenical stand. 
For example, in 1983 Metropolitan Philaret (Voznesensky) and the Holy Synod of Bishops of the Russian Orthodox Church Outside Russia stated: Those who attack the Church of Christ by teaching that Christ's Church is divided into so-called "branches" which differ in doctrine and way of life, or that the Church does not exist visibly, but will be formed in the future when all "branches" or sects or denominations, and even religions will be united into one body; and who do not distinguish the priesthood and mysteries of the Church from those of the heretics, but say that the baptism and eucharist of heretics is effectual for salvation; therefore, to those who knowingly have communion with these aforementioned heretics or who advocate, disseminate, or defend their new heresy of Ecumenism under the pretext of brotherly love or the supposed unification of separated Christians, Anathema! In addition, the Jubilee Council of 2000 of the Church of Russia also condemned "Divided Church" Ecclesiology or the so-called Branch Theory. Views: Oriental Orthodoxy Many consider that the Chalcedonian Schism resulted from a difference in semantics rather than actual doctrine, holding that both non-Chalcedonian and Chalcedonian Christianity share a similar Christology despite choosing to express it in different (Cyrillian vs. Chalcedonian) terms; theological dialogue has resulted in formal statements of agreement on that issue, which have been officially accepted by groups on both sides. The Orthodoxy Cognate PAGE Society (Society for Orthodox Christian Unity and Faith), which is headquartered in India, declares the Society's firm belief that, although "the two groups are not in communion with each other", "both the Byzantine (Eastern) Orthodox Churches and the Oriental Orthodox Churches are the true heirs to the One, Holy, Catholic and Apostolic Church of Christ, which was the Church of the apostles and the holy fathers. We also believe these Churches teach the true faith and morals of the Church established by Christ for which the ancient martyrs gave their lives." Analogous theories: Branches of the Evangelical Church theory In Church Dogmatics, Karl Barth defined the "Evangelical Church" as having three branches: Lutheran, Reformed, and Anglican. The "Evangelical Church" was to be distinguished from what he termed the "three heresies of Neoprotestantism, Roman Catholicism and Eastern Orthodoxy". Analogous theories: Sister churches theory What has been called another version of the branch theory was propounded in the wake of the Second Vatican Council by some Roman Catholic theologians, such as Robert F. Taft, Michael A. Fahey, and others. In this theory, the Eastern Orthodox Church and the Roman Catholic Church are two "sister churches". This theory was rejected outright by the Catholic Church, which applies the term "sister Churches" only to the relations between particular Churches, such as the sees of Constantinople and Rome. Most Eastern Orthodox theologians also reject it. A writer in the United States publication Orthodox Life says that ecumenism promotes the idea of a Church comprising all baptized Christians and within which the different confessions are "sister churches". Analogous theories: Two lungs theory The metaphor of Christianity compared to one body breathing with two lungs was coined by the Russian poet and philosopher Vyacheslav Ivanov, inspired by the worldview of the 19th-century Russian philosopher Vladimir Solovyov.
Solovyov "felt that eastern Christians could learn from the Western church's relatively active presence in the world."Ivanov accepted "the idea of 'Unia'", according to Robert Bird, the "combination of traditional rite and papal authority explains why Ivanov felt he was now breathing with both lungs." Pope John Paul II, according to Bird, "adopted Ivanov's imagery of the two 'lungs' of the universal Church" but John Paul II's "image of the full Church seems to presume their equal coexistence, supposedly without the submission of the East to papal authority."John Paul II used the two lungs of a single body metaphor in the context of "the different forms of the Church's great tradition" in Redemptoris Mater (1987). John Paul II used the metaphor to "the Church", which for him was not some amalgam of the Catholic and Eastern Orthodox Church, but the Catholic Church itself, thus indicating that the Catholic Church must avail itself of the traditions of both Eastern Christianity and Western Christianity. The Catholic Church uses this metaphor to compare the Latin Church's tradition to the Eastern Orthodox Churches' traditions and also Eastern Catholic Churches' traditions, as emphasized in the Second Vatican Council's Orientalium ecclesiarum, the decree on Eastern Catholic Churches. John Paul II elaborated the metaphor, in Sacri Canones (1990), "the Church itself, gathered in the one Spirit, breathes as though with two lungs – of the East and of the West – and that it burns with the love of Christ in one heart having two ventricles."An anonymous author wrote, in Orthodox Life magazine, that the metaphor comparing the Eastern Orthodox Church and the Roman Catholic Church to two lungs of one body was "shaped and influenced by" the branch theory and developed by "Orthodox ecumenists and Papists". Eastern Orthodox reject as incompatible with the Orthodox faith any such use of the "two lungs" expression to imply that the Eastern Orthodox and Roman Catholic churches are two parts of a single church and "that Orthodoxy is only for Easterners, and that Catholicism is only for Westerners", according to Archpriest Andrew Phillips. Patriarch Bartholomew I of Constantinople "rejects the opinion" that "there would be an 'incompatibility between Orthodox tradition and the European cultural way', which would be antinomic" and points out that idea "is against the principle of equality and respect of peoples and cultural traditions on our continent."Ion Bria wrote in 1991 that the metaphor "may be attractive as an aid for understanding the formation of two distinctive traditions in Christianity after A.D. 1054." In 2005, Bishop Hilarion Alfeyev, chairman of the Representation of the Russian Orthodox Church to the European Institutions, told the 6th Gniezno Convention that the metaphor is "particularly relevant" when he "proposed to form a European Catholic-Orthodox Alliance" and said "nothing should prevent us from uniting our efforts in order to defend Christian tradition, without waiting for the restoration of full unity between the two lungs of European Christianity."
**Valperinol** Valperinol: Valperinol (INN; GA 30-905) is a drug which acts as a calcium channel blocker. It was patented as a possible sedative, antiepileptic, and/or antiparkinsonian agent, but was never marketed.
**Passwd** Passwd: passwd is a command on Unix, Plan 9, Inferno, and most Unix-like operating systems used to change a user's password. The password entered by the user is run through a key derivation function to create a hashed version of the new password, which is saved. Only the hashed version is stored; the entered password is not saved for security reasons. Passwd: When the user logs on, the password entered by the user during the log on process is run through the same key derivation function and the resulting hashed version is compared with the saved version. If the hashes are identical, the entered password is considered to be correct, and the user is authenticated. In theory, it is possible for two different passwords to produce the same hash. However, cryptographic hash functions are designed in such a way that finding any password that produces the same hash is very difficult and practically infeasible, so if the produced hash matches the stored one, the user can be authenticated. Passwd: The passwd command may be used to change passwords for local accounts, and on most systems, can also be used to change passwords managed in a distributed authentication mechanism such as NIS, Kerberos, or LDAP. Password file: The /etc/passwd file is a text-based database of information about users that may log into the system or other operating system user identities that own running processes. In many operating systems this file is just one of many possible back-ends for the more general passwd name service. The file's name originates from one of its initial functions as it contained the data used to verify passwords of user accounts. However, on modern Unix systems the security-sensitive password information is instead often stored in a different file using shadow passwords, or other database implementations. The /etc/passwd file typically has file system permissions that allow it to be readable by all users of the system (world-readable), although it may only be modified by the superuser or by using a few special purpose privileged commands. The /etc/passwd file is a text file with one record per line, each describing a user account. Each record consists of seven fields separated by colons. The ordering of the records within the file is generally unimportant. An example record may be:
jsmith:x:1001:1000:Joe Smith,Room 1007...:/home/jsmith:/bin/sh
The fields, in order from left to right, are:
jsmith: User name: the string a user would type in when logging into the operating system: the logname. Must be unique across users listed in the file.
x: Information used to validate a user's password. The format is the same as that of the analogous field in the shadow password file, with the additional convention that setting it to "x" means the actual password is found in the shadow file, a common occurrence on modern systems.
1001: User identifier number, used by the operating system for internal purposes. It need not be unique.
1000: Group identifier number, which identifies the primary group of the user; all files that are created by this user may initially be accessible to this group.
Joe Smith,Room 1007...: Gecos field, commentary that describes the person or account. Typically, this is a set of comma-separated values including the user's full name and contact details.
/home/jsmith: Path to the user's home directory.
/bin/sh: Program that is started every time the user logs into the system. For an interactive user, this is usually one of the system's command line interpreters (shells).
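Since each record is just seven colon-separated fields, reading one is a matter of splitting on ':'. The following sketch is illustrative (the helper names are made up for this example, and the record is the hypothetical one shown above, not a real account):

```python
from collections import namedtuple

# Field names follow the order described above.
PasswdEntry = namedtuple(
    "PasswdEntry",
    ["name", "passwd", "uid", "gid", "gecos", "home", "shell"],
)

def parse_passwd_line(line):
    """Split one /etc/passwd record into its seven colon-separated fields."""
    fields = line.rstrip("\n").split(":")
    if len(fields) != 7:
        raise ValueError("malformed passwd record: %r" % line)
    return PasswdEntry(*fields)

# Illustrative record matching the field descriptions above.
example = "jsmith:x:1001:1000:Joe Smith,Room 1007...:/home/jsmith:/bin/sh"
entry = parse_passwd_line(example)
print(entry.name, entry.uid, entry.shell)   # jsmith 1001 /bin/sh
# An "x" in the password field means the hash lives in /etc/shadow.
print(entry.passwd == "x")
```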
Shadow file: /etc/shadow is used to increase the security level of passwords by restricting all but highly privileged users' access to hashed password data. Typically, that data is kept in files owned by and accessible only by the super user. Shadow file: Systems administrators can reduce the likelihood of brute-force attacks by making the list of hashed passwords unreadable by unprivileged users. The obvious way to do this is to make the passwd database itself readable only by the root user. However, this would restrict access to other data in the file such as username-to-userid mappings, which would break many existing utilities and provisions. One solution is a "shadow" password file to hold the password hashes separate from the other data in the world-readable passwd file. For local files, this is usually /etc/shadow on Linux and Unix systems, or /etc/master.passwd on BSD systems; each is readable only by root. (Root access to the data is considered acceptable since on systems with the traditional "all-powerful root" security model, the root user would be able to obtain the information in other ways in any case). Virtually all recent Unix-like operating systems use shadowed passwords. Shadow file: The shadow password file does not entirely solve the problem of attacker access to hashed passwords, as some network authentication schemes operate by transmitting the hashed password over the network (sometimes in cleartext, e.g., Telnet), making it vulnerable to interception. Copies of system data, such as system backups written to tape or optical media, can also become a means for illicitly obtaining hashed passwords. In addition, the functions used by legitimate password-checking programs need to be written in such a way that malicious programs cannot make large numbers of authentication checks at high rates of speed. Shadow file: Regardless of whether password shadowing is in effect on a given system, the passwd file is readable by all users so that various system utilities (e.g., grep) can work (e.g., to ensure that user names existing on the system can be found inside the file), while only the root user can write to it. Without password shadowing, this means that an attacker with unprivileged access to the system can obtain the hashed form of every user's password. Those values can be used to mount a brute force attack offline, testing possible passwords against the hashed passwords relatively quickly without alerting system security arrangements designed to detect an abnormal number of failed login attempts. Especially when the hash is not salted, it is also possible to look up these hashed passwords in rainbow tables, databases specially made for giving back a password for a unique hash. Shadow file: With a shadowed password scheme in use, the /etc/passwd file typically shows a character such as '*' or 'x' in the password field for each user instead of the hashed password, and /etc/shadow usually contains the following user information, one field per record position:
User login name
Salt and hashed password, OR a status exception value
Days since epoch of last password change
Days until change allowed
Days before change required
Days warning for expiration
Days after no logins before account is locked
Days since epoch when account expires
Reserved and unused
The password field normally holds $id$salt$hashed, the printable form of a password hash as produced by crypt (C), where $id is the algorithm used. Other Unix-like systems may have different values, like NetBSD. Key stretching is used to increase password cracking difficulty, using by default 1000 rounds of modified MD5, 64 rounds of Blowfish, 5000 rounds of SHA-256 or SHA-512. The number of rounds may be varied for Blowfish, or for SHA-256 and SHA-512, by using $A$rounds=X$, where "A" is the algorithm ID and "X" is the number of rounds. Common id values include: $1$ – MD5; $2$, $2a$, $2b$ – bcrypt; $5$ – SHA-256; $6$ – SHA-512; $y$ – yescrypt. Status exception values include: an empty string – no password, the account has no password (reported by passwd on Solaris with "NP"); "!", "*" – the account is password locked, the user will be unable to log in via password authentication but other methods (e.g. ssh key, logging in as root) may still be allowed; "*LK*" – the account itself is locked, the user will be unable to log in; "*NP*", "!!" – the password has never been set.
The format of the shadow file is simple, and basically identical to that of the password file, to wit, one line per user, ordered fields on each line, and fields separated by colons. Many systems require the order of user lines in the shadow file be identical to the order of the corresponding users in the password file. History: Prior to password shadowing, a Unix user's hashed password was stored in the second field of their record in the /etc/passwd file (within the seven-field format as outlined above). Password shadowing first appeared in Unix systems with the development of SunOS in the mid-1980s, System V Release 3.2 in 1988 and 4.3BSD Reno in 1990. However, vendors who had performed ports from earlier UNIX releases did not always include the new password shadowing features in their releases, leaving users of those systems exposed to password file attacks. History: System administrators may also arrange for the storage of passwords in distributed databases such as NIS and LDAP, rather than in files on each connected system. In the case of NIS, the shadow password mechanism is often still used on the NIS servers; in other distributed mechanisms the problem of access to the various user authentication components is handled by the security mechanisms of the underlying data repository. History: In 1987, the author of the original Shadow Password Suite, Julie Haugh, experienced a computer break-in and wrote the initial release of the Shadow Suite containing the login, passwd and su commands. The original release, written for the SCO Xenix operating system, was quickly ported to other platforms. The Shadow Suite was ported to Linux in 1992, one year after the original announcement of the Linux project, and was included in many early distributions; it continues to be included in many current Linux distributions. History: In the past, it was necessary to have different commands to change passwords in different authentication schemes. For example, the command to change a NIS password was yppasswd. This required users to be aware of the different methods to change passwords for different systems, and also resulted in wasteful duplication of code in the various programs that performed the same functions with different back ends. In most implementations, there is now a single passwd command, and the control of where the password is actually changed is handled transparently to the user via pluggable authentication modules (PAMs). For example, the type of hash used is dictated by the configuration of the pam_unix.so module.
By default, the MD5 hash has been used, while current modules are also capable of stronger hashes such as Blowfish (bcrypt), SHA-256, and SHA-512.
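The password field of a shadow record can be interpreted mechanically: status values such as "!!" or "*LK*" are matched literally, and a crypt-style "$id$salt$hash" string is split on "$" to read the algorithm identifier. A minimal sketch, assuming a fabricated entry (the hash shown is not a real credential):

```python
# Algorithm identifiers as listed above.
CRYPT_IDS = {
    "1": "MD5",
    "2": "bcrypt", "2a": "bcrypt", "2b": "bcrypt",
    "5": "SHA-256",
    "6": "SHA-512",
    "y": "yescrypt",
}

def describe_password_field(field):
    """Interpret the second field of an /etc/shadow record."""
    if field == "":
        return "no password set"
    if field in ("!", "*", "*LK*"):
        return "account locked for password login"
    if field in ("*NP*", "!!"):
        return "password never set"
    if field.startswith("$"):
        # $id$salt$hash (some schemes insert a $rounds=N$ part after the id).
        parts = field.split("$")
        algorithm = CRYPT_IDS.get(parts[1], "unknown id %r" % parts[1])
        return "hashed with " + algorithm
    return "legacy DES-style hash"

# Fabricated example entry; the hash string is not a real credential.
shadow_line = "jsmith:$6$saltsalt$abcdefhash:19000:0:99999:7:::"
password_field = shadow_line.split(":")[1]
print(describe_password_field(password_field))   # hashed with SHA-512
```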
**Embryomics** Embryomics: Embryomics is the identification, characterization and study of the diverse cell types which arise during embryogenesis, especially as this relates to the location and developmental history of cells in the embryo. Cell type may be determined according to several criteria: location in the developing embryo, gene expression as indicated by protein and nucleic acid markers and surface antigens, and also position on the embryogenic tree. Embryome: There are many cell markers useful in distinguishing, classifying, separating and purifying the numerous cell types present at any given time in a developing organism. These cell markers consist of select RNAs and proteins present inside, and surface antigens present on the surface of, the cells making up the embryo. For any given cell type, these RNA and protein markers reflect the genes characteristically active in that cell type. The catalog of all these cell types and their characteristic markers is known as the organism's embryome. The word is a portmanteau of embryo and genome. “Embryome” may also refer to the totality of the physical cell markers themselves. Embryogenesis: As an embryo develops from a fertilized egg, the single egg cell splits into many cells, which grow in number and migrate to the appropriate locations inside the embryo at appropriate times during development. As the embryo's cells grow in number and migrate, they also differentiate into an increasing number of different cell types, ultimately turning into the stable, specialized cell types characteristic of the adult organism. Each of the cells in an embryo contains the same genome, characteristic of the species, but the level of activity of each of the many thousands of genes that make up the complete genome varies with, and determines, a particular cell's type (e.g. neuron, bone cell, skin cell, muscle cell, etc.). Embryogenesis: During embryo development (embryogenesis), many cell types are present which are not present in the adult organism. These temporary cells are called progenitor cells, and are intermediate cell types which disappear during embryogenesis by turning into other progenitor cells, or into mature adult somatic cell types, or which disappear due to programmed cell death (apoptosis). Embryogenesis: The entire process of embryogenesis can be described with the aid of two maps: an embryo map, a temporal sequence of 3-dimensional images of the developing embryo, showing the location of cells of the many cell types present in the embryo at a given time, and an embryogenic tree, a diagram showing how the cell types are derived from each other during embryogenesis. Embryogenesis: The embryo map is a sequence of 3-D images, or slices of 3-D images, of the developing embryo which, if viewed rapidly in temporal order, forms a time-lapse view of the growing embryo. Embryogenesis: The embryogenic tree is a diagram which shows the temporal development of each of the cell lines in the embryo. When drawn on a piece of paper, this diagram takes the form of a tree, analogous to the evolutionary tree of life which illustrates the development of life on Earth. However, instead of each branch on this tree representing a species, as in the tree of life, each branch represents a particular cell type present in the embryo at a particular time. And of course, an embryogenic tree covers the gestation period of weeks or months, instead of billions of years, as in the case of the evolutionary tree of life. 
Embryogenesis: Human embryogenesis is the referent here, but embryogenesis in other vertebrate species closely follows the same pattern. The egg cell (ovum), after fertilization with a sperm cell, becomes the zygote, represented by the trunk at the very bottom of the tree. This single zygote cell divides in two, three times, forming first a cluster of two-cells, then four-cells, and finally eight-cells. One more cell division brings the number of cells to 16, at which time it is called a morula, instead of a zygote. This ball of 16 cells then reorganizes into a hollow sphere called a blastocyst. As the number of cells grows from 16 to between 40 and 150, the blastocyst differentiates into two layers, an outer sphere of cells called the trophoblast and an inner cell mass called the embryoblast. Embryogenesis: The spherical outer cell layer (trophoblast), after implantation in the wall of the uterus, further differentiates and grows to form the placenta. Embryogenesis: The cells of the inner cell mass (embryoblast), which are known as human embryonic stem cells (hESCs), will further differentiate to form four structures: the amnion, the yolk sac, the allantois, and the embryo itself. Human embryonic stem cells are pluripotent, that is, they can differentiate into any of the cell types present in the adult human, and into any of the intermediate progenitor cell types that eventually turn into the adult cell lines. hESCs are also immortal, in that they can divide and grow in number indefinitely, without undergoing either differentiation or cellular aging (cellular senescence). Embryogenesis: The first differentiation of the hESCs that form the embryo proper, is into three cell types known as the germ layers: the ectoderm, the mesoderm, and the endoderm. The ectoderm eventually forms the skin (including hair and nails), mucous membranes and nervous system. The mesoderm forms the skeleton and muscles, heart and circulatory system, urinary and reproductive systems, and connective tissues inside the body. The endoderm forms the gastrointestinal tract (stomach and intestines), the respiratory tract, and the endocrine system (liver and endocrine glands). Mapping the embryogenic tree: A primary goal in embryomics is a complete mapping the embryogenic tree: Identifying each of the cell types present in the developing embryo and placing it in the tree on its proper branch. There is an unknown number, probably thousands, of distinct cell types present in the developing embryo, including progenitor cell lines which are only temporarily present and which disappear either by differentiating into the permanent somatic cell types which make up the tissues of the infant's body at birth (or into other progenitor cell lines), or by undergoing the programmed cell death process known as apoptosis. Mapping the embryogenic tree: Each cell type is defined by which genes are characteristically active in that cell type. A particular gene in a cell's genome codes for the production of a particular protein, that is, when that gene is turned on (active), the protein coded for by that gene is produced and present somewhere in the cell. Production of a particular protein involves the production of a particular mRNA (messenger RNA) sequence as an intermediate step in protein synthesis. This mRNA is produced by copying process called transcription, from the DNA in the cell's nucleus. 
The mRNA so produced travels from the nucleus into the cytoplasm, where it encounters and latches onto ribosomes stuck to the cytoplasmic side of the endoplasmic reticulum. Attachment of the mRNA strand to the ribosome initiates the production of the protein coded for by the mRNA strand. Therefore, the profile of active genes in a cell is reflected in the presence or absence of corresponding proteins and mRNA strands in the cell's cytoplasm, and antigen proteins present on the cell's outer membrane. Discovering, determining and classifying cells as to their type therefore involves detecting and measuring the type and amount of specific protein and RNA molecules present in the cells. Mapping the embryogenic tree: In addition, mapping the tree of embryogenesis involves assigning to each specific, identifiable cell type, a particular branch, or place, in the tree. This requires knowing the “ancestry” of each cell type, that is, which cell type preceded it in the development process. This information can be deduced by observing in detail the distribution and placement of cells, by type, in the developing embryo, and by also observing, in cells growing in culture (“in vitro”) any differentiation events, should they occur for whatever reason, and by other means. Mapping the embryogenic tree: Cells, embryonic cells in particular, are sensitive to the presence or absence of specific chemical molecules in their surroundings. This is the basis for cell signaling, and during embryogenesis cells “talk to each other” by emitting and receiving signalling molecules. This is how development of the embryo's structure is organized and controlled. If cells of a particular line have been removed from the embryo and are growing alone in a Petri dish in the lab, and some cell signaling chemicals are put in the growth medium bathing the cells, this can induce the cells to differentiate into a different, “daughter” cell type, mimicking the differentiation process that occurs naturally in the developing embryo. Artificially inducing differentiation in this way can yield clues to the correct placement of a particular cell line in the embryogenic tree, by observing what kind of cell results from inducing the differentiation. Mapping the embryogenic tree: In the laboratory, human embryonic stem cells growing in culture can be induced to differentiate into progenitor cells by exposing the hESCs to chemicals (e.g. protein growth and differentiation factors) present in the developing embryo. The progenitor cells so produced may then be isolated into pure colonies, grown in culture, and then classified according to type and assigned positions in the embryogenic tree. Such purified cultures of progenitor cells may be used in research to study disease processes in vitro, as diagnostic tools, or potentially developed for use in regenerative medicine therapies. Regenerative medicine: Embryomics is the core science supporting the development of regenerative medicine. Regenerative medicine involves use of specially grown cells, tissues and organs as therapeutic agents to cure disease and repair injury, and springs from the development of mammalian cloning technology. Other medical and surgical methods may use chemicals (pharmaceuticals) as therapeutic agents, or involve removal of injured or diseased tissue (surgery), or use inserted tissues or organs (transplant surgery). 
Use of transplanted tissue or organs in medicine is not classified as regenerative medicine, because the tissues and organs were not grown specifically for use as therapeutic agents. Regenerative medicine: Ultimately, one of the goals of regenerative medicine and applied embryomics, is the creation of cells, tissues and organs grown from cells taken from the patient to be treated. This would be accomplished by reprogramming adult stem or somatic cells removed from the patient, so that these cells revert to the pluripotent, embryonic state. These synthetic stem cells would then be grown in culture and differentiated into the appropriate cell type indicated for treating the patient's disease or injury. The advantages here over current therapies are: elimination of immune rejection accompanying allograft transplantation, creation of a full complement of cells, tissues and organs as needed, and creation of youthful cells, tissues and organs for transplant and rejuvenation. Regenerative medicine: Technology for growing cells, tissues and organs for use in regenerative medicine can be developed by using the natural course of development of those cells, tissues and organs during embryogenesis, as a guide. Therefore, detailed knowledge of the complete embryome and the embryogenic tree is key to developing the full potential of regenerative medicine. Embryomics also includes the application of embryomic data and theory, to the development of practical methods for evaluating, classifying, culturing, purifying, differentiating and manipulating human embryonic cells.
**Oritin** Oritin: Oritin is a flavan-3-ol, a type of flavonoid. It is a component of the proteracacinidin tannins of Acacia galpinii and Acacia caffra (Senegalia caffra).
**Ideogramme** Ideogramme: An ideogramme is a form of poetry that relies heavily on typographical elements, design, and layout. It is comparable in manner to onomatopoetics or onomatopoeia. With onomatopoeia, the word as said sounds like what it represents: Moo, Whack, Bang, etc. In an ideogramme, a word or group of words visually embodies its content. Ideogramme: One of the first and most recognizable ideogrammes is Guillaume Apollinaire's Il Pleut (It's Raining), written in 1916. It was published in his book Calligrammes: Poems of Peace and War. Often this form is grouped within the Futurist movement, but it extends beyond it. E. E. Cummings was not a Futurist, and his poem l(a is often cited for its use of the ideogramme. Ideogramme: In November 1917 at the Vieux Colombier, Apollinaire stated in his lecture New Spirit and the Poets that "Typographical artifices worked out with great audacity have the advantage of bringing to life a visual lyricism which was almost unknown before our age."
**Java Management Extensions** Java Management Extensions: Java Management Extensions (JMX) is a Java technology that supplies tools for managing and monitoring applications, system objects, devices (such as printers) and service-oriented networks. Those resources are represented by objects called MBeans (for Managed Bean). In the API, classes can be dynamically loaded and instantiated. Java Management Extensions: Managing and monitoring applications can be designed and developed using the Java Dynamic Management Kit.JSR 003 of the Java Community Process defined JMX 1.0, 1.1 and 1.2. JMX 2.0 was being developed under JSR 255, but this JSR was subsequently withdrawn. The JMX Remote API 1.0 for remote management and monitoring is specified by JSR 160. An extension of the JMX Remote API for Web Services was being developed under JSR 262.Adopted early on by the J2EE community, JMX has been a part of J2SE since version 5.0. "JMX" is a trademark of Oracle Corporation. Architecture: JMX uses a three-level architecture: The Probe level – also called the Instrumentation level – contains the probes (called MBeans) instrumenting the resources The Agent level, or MBeanServer – the core of JMX. It acts as an intermediary between the MBean and the applications. Architecture: The Remote Management level enables remote applications to access the MBeanServer through connectors and adaptors. A connector provides full remote access to the MBeanServer API using various communication (RMI, IIOP, JMS, WS-* …), while an adaptor adapts the API to another protocol (SNMP, …) or to Web-based GUI (HTML/HTTP, WML/HTTP, …).Applications can be generic consoles (such as JConsole and MC4J) or domain-specific (monitoring) applications. External applications can interact with the MBeans through the use of JMX connectors and protocol adapters. Connectors serve to connect an agent with a remote JMX-enabled management application. This form of communication involves a connector in the JMX agent and a connector client in the management application. Architecture: Protocol adapters provide a management view of the JMX agent through a given protocol. Management applications that connect to a protocol adapter are usually specific to the given protocol. Managed beans: A managed bean – sometimes simply referred to as an MBean – is a type of JavaBean, created with dependency injection. Managed Beans are particularly used in the Java Management Extensions technology – but with Java EE 6 the specification provides for a more detailed meaning of a managed bean. Managed beans: The MBean represents a resource running in the Java virtual machine, such as an application or a Java EE technical service (transactional monitor, JDBC driver, etc.). They can be used for collecting statistics on concerns like performance, resources usage, or problems (pull); for getting and setting application configurations or properties (push/pull); and notifying events like faults or state changes (push). Managed beans: Java EE 6 provides that a managed bean is a bean that is implemented by a Java class, which is called its bean class. A top-level Java class is a managed bean if it is defined to be a managed bean by any other Java EE technology specification (for example, the JavaServer Faces technology specification), or if it meets all of the following conditions: It is not a non-static inner class. Managed beans: It is a concrete class, or is annotated @Decorator. 
It is not annotated with an EJB component-defining annotation or declared as an EJB bean class in ejb-jar.xml. No special declaration, such as an annotation, is required to define a managed bean. An MBean can notify the MBeanServer of its internal changes (for the attributes) by implementing javax.management.NotificationEmitter. The application interested in the MBean's changes registers a listener (javax.management.NotificationListener) with the MBeanServer. Note that JMX does not guarantee that the listeners will receive all notifications. Types: There are two basic types of MBean: Standard MBeans implement a business interface containing setters and getters for the attributes and the operations (i.e., methods). Managed beans: Dynamic MBeans implement the javax.management.DynamicMBean interface, which provides a way to list the attributes and operations, and to get and set the attribute values. Additional types are Open MBeans, Model MBeans and Monitor MBeans. Open MBeans are dynamic MBeans that rely on the basic data types. They are self-explanatory and more user-friendly. Model MBeans are dynamic MBeans that can be configured during runtime. A generic MBean class is also provided for dynamically configuring the resources during program runtime. Managed beans: An MXBean (Platform MBean) is a special type of MBean that reifies Java virtual machine subsystems such as garbage collection, JIT compilation, memory pools, multi-threading, etc. An MLet (Management applet) is a utility MBean used to load, instantiate and register MBeans in an MBeanServer from an XML description. The format of the XML descriptor is: <MLET CODE = "class" | OBJECT = "serfile" ARCHIVE = "archiveList" [CODEBASE = "codebaseURL"] [NAME = "objectName"] [VERSION = "version"] > [arglist] </MLET> Support: JMX is supported at various levels by different vendors: JMX is supported by Java application servers such as OpenCloud Rhino Application Server, JBoss, JOnAS, WebSphere Application Server, WebLogic, SAP NetWeaver Application Server, Oracle Application Server 10g and Sun Java System Application Server. JMX is supported by the UnboundID Directory Server, Directory Proxy Server, and Synchronization Server. Systems management tools that support the protocol include Empirix OneSight, GroundWork Monitor, Hyperic, HP OpenView, IBM Director, ITRS Geneos, Nimsoft NMS, OpenNMS, Zabbix, Zenoss Core, Zyrion, SolarWinds, Uptime Infrastructure Monitor, and LogicMonitor. JMX is also supported by servlet containers such as Apache Tomcat and Jetty. MX4J is an open source JMX implementation for enterprise computing. jManage is an open source enterprise-grade JMX console with web and command-line interfaces. MC4J is an open source visual console for connecting to servers supporting JMX. snmpAdaptor4j is an open source adaptor providing simple access to MBeans via the SNMP protocol. jvmtop is a lightweight open source JMX monitoring tool for the command line. Prometheus can ingest JMX data via the JMX exporter, which exposes metrics in Prometheus format. Jolokia is a Java EE application which exposes JMX over HTTP.
**Beam me up, Scotty** Beam me up, Scotty: "Beam me up, Scotty" is a catchphrase and misquotation that made its way into popular culture from the science fiction television series Star Trek: The Original Series. It comes from the command Captain Kirk gives his chief engineer, Montgomery "Scotty" Scott, when he needs to be transported back to the Starship Enterprise. Beam me up, Scotty: Though it has become irrevocably associated with the series and films, the exact phrase was never actually spoken in any Star Trek television episode or film. Despite this, the quote has become a phrase of its own over time. It can be used to describe one's desire to be elsewhere, technology such as teleportation, slang for certain drugs, or as a phrase to show appreciation and association with the television show. Precise quotations: Despite the phrase entering into popular culture, it is a misquotation and has never been said in any of the television series or films, contrary to popular belief. There have, however, been several "near misses" of phrasing. In the Original Series episodes "The Gamesters of Triskelion" and "The Savage Curtain", Kirk said, "Scotty, beam us up"; while in the episode "This Side of Paradise", Kirk simply said, "Beam me up". In the episode "The Cloud Minders", Kirk says, "Mr. Scott, beam us up". The animated episodes "The Lorelei Signal" and "The Infinite Vulcan" used the phrasing "Beam us up, Scotty". The original film series has the wording "Scotty, beam me up" in Star Trek IV: The Voyage Home and "Beam them out of there, Scotty" in Star Trek Generations. The complete phrase was eventually said by William Shatner in the audio adaptation of his non-canon novel Star Trek: The Ashes of Eden. Legacy: The popularity of the misquotation has led to many new phrases, both associated with Star Trek and otherwise. Legacy: The misquotation's influence led to James Doohan, the actor who played Scotty, being misrepresented in his own obituary. In it, he is referenced as the character who "responded to the command, 'Beam me up, Scotty'", despite having never responded to this exact command in the show. Doohan himself chose to use the phrase as the title of his 1996 autobiography. The quote "Beam me up, Scotty!" has been extended beyond its original meaning to describe an expression of "the desire to be elsewhere", or the desire to be out of an unwanted situation. Along with this, it has been associated with things that are futuristic, such as the possibility of teleportation. The phrase has also been used as slang for certain drugs. An Oxford Reference page defined "Beam me up, Scotty" as "a mixture of phencyclidine and cocaine", along with related expressions such as to "talk to Scotty", "high off Scotty", "see Scotty", etc. The phrase has been referenced by the Baxter County Sheriff's drug slang definitions. It is also referenced in the book "Vice Slang" by Tom Dalzell and Terry Victor, for crack cocaine, and to describe "Beamers" or "Beemers" as those taking said drugs. The exact timing of when the phrase became popular is unclear. However, early signs of the quote's usage to describe something separate from Star Trek can be found roughly ten years after Star Trek's airing in 1966, in a publication of the Royal Aeronautical Journal. It describes a certain routine as "a sort of 'beam me up, Scotty routine'".
Over time, the phrase has been extended to "Beam me up, Scotty, there's no intelligent life down here!", popularized on bumper stickers and t-shirts, despite neither quote ever being said on the show. A character in the 1993 educational video game Where in Space is Carmen Sandiego? is named "Bea Miupscotti." The planetarium in the animated series South Park (1997) carries the inscription "Me transmitte sursum, Caledoni!", which is a translation of the misquotation into Latin. The quote was used in the movie Armageddon (1998) by Rockhound, the character played by Steve Buscemi. When asked by Harry S. Stamper (played by Bruce Willis) if Rockhound would join them to divert the asteroid, he replies, "You know me. Beam me up, Scotty." The quote was also used by American rapper Nicki Minaj as the title of, as well as the name of a track on, her third mixtape Beam Me Up Scotty. Legacy: The pop-culture-centric wiki TV Tropes uses the phrase to refer to quotes that are never actually said in a certain work in spite of popular belief. Additionally, the quote was used in Season 3, Episode 3 of Superstore (2017) by Mateo, in a scene in which he is speaking to a construction worker named Scott, who continues to try to use his employee bathroom pass.
**Paola Bonfante** Paola Bonfante: Paola Bonfante is a Professor Emerita of plant biology at the University of Turin. She has studied symbiosis between fungi and plants (mycorrhizae), associations that involve 90% of plants and have significant impacts on ecosystems as well as on agriculture. Awards and honors: 2019: Appointed Commander (“Commendatore”) of the “Order of Merit of the Italian Republic” by the President of the Italian Republic (“motu proprio”) 2010: Award for the French Food Spirit - Science – Paris, December 16, 2010 2021: The Adam Kondorosi - Academia Europaea Award for Advanced Research, September 2021. She is included in the lists of: Clarivate Analytics Highly Cited Researchers 2017, 2018, 2020; One hundred Italian Experts
**Viral phylodynamics** Viral phylodynamics: Viral phylodynamics is defined as the study of how epidemiological, immunological, and evolutionary processes act and potentially interact to shape viral phylogenies. Since the coining of the term in 2004, research on viral phylodynamics has focused on transmission dynamics in an effort to shed light on how these dynamics impact viral genetic variation. Transmission dynamics can be considered at the level of cells within an infected host, individual hosts within a population, or entire populations of hosts. Many viruses, especially RNA viruses, rapidly accumulate genetic variation because of short generation times and high mutation rates. Patterns of viral genetic variation are therefore heavily influenced by how quickly transmission occurs and by which entities transmit to one another. Patterns of viral genetic variation will also be affected by selection acting on viral phenotypes. Although viruses can differ with respect to many phenotypes, phylodynamic studies have to date tended to focus on a limited number of viral phenotypes. These include virulence phenotypes, phenotypes associated with viral transmissibility, cell or tissue tropism phenotypes, and antigenic phenotypes that can facilitate escape from host immunity. Due to the impact that transmission dynamics and selection can have on viral genetic variation, viral phylogenies can therefore be used to investigate important epidemiological, immunological, and evolutionary processes, such as epidemic spread, spatio-temporal dynamics including metapopulation dynamics, zoonotic transmission, tissue tropism, and antigenic drift. The quantitative investigation of these processes through the consideration of viral phylogenies is the central aim of viral phylodynamics. Sources of phylodynamic variation: In coining the term phylodynamics, Grenfell and coauthors postulated that viral phylogenies "... are determined by a combination of immune selection, changes in viral population size, and spatial dynamics". Their study showcased three features of viral phylogenies, which may serve as rules of thumb for identifying important epidemiological, immunological, and evolutionary processes influencing patterns of viral genetic variation. Sources of phylodynamic variation: The relative lengths of internal versus external branches will be affected by changes in viral population size over time Rapid expansion of a virus in a population will be reflected by a "star-like" tree, in which external branches are long relative to internal branches. Star-like trees arise because viruses are more likely to share a recent common ancestor when the population is small, and a growing population has an increasingly smaller population size towards the past. Compared to a phylogeny of an expanding virus, a phylogeny of a viral population that stays constant in size will have external branches that are shorter relative to branches on the interior of the tree. The phylogeny of HIV provides a good example of a star-like tree, as the prevalence of HIV infection rose rapidly throughout the 1980s (exponential growth). The phylogeny of hepatitis B virus instead reflects a viral population that has remained roughly consistent (constant size). 
Similarly, trees reconstructed from viral sequences isolated from chronically infected individuals can be used to gauge changes in viral population sizes within a host. The clustering of taxa on a viral phylogeny will be affected by host population structure Viruses within similar hosts, such as hosts that reside in the same geographic region, are expected to be more closely related genetically if transmission occurs more commonly between them. The phylogenies of measles and rabies virus illustrate viruses with spatially structured host populations. These phylogenies stand in contrast to the phylogeny of human influenza, which does not appear to exhibit strong spatial structure over extended periods of time. Clustering of taxa, when it occurs, is not necessarily observed at all scales, and a population that appears structured at some scale may appear panmictic at another scale, for example at a smaller spatial scale. While spatial structure is the most commonly observed population structure in phylodynamic analyses, viruses may also have nonrandom admixture by attributes such as age, race, and risk behavior. This is because viral transmission can preferentially occur between hosts sharing any of these attributes. Tree balance will be affected by selection, most notably immune escape The effect of directional selection on the shape of a viral phylogeny is exemplified by contrasting the trees of influenza virus and HIV's surface proteins. The ladder-like phylogeny of influenza virus A/H3N2's hemagglutinin protein bears the hallmarks of strong directional selection, driven by immune escape (imbalanced tree). In contrast, a more balanced phylogeny may occur when a virus is not subject to strong immune selection or another source of directional selection. An example of this is the phylogeny of the HIV envelope protein inferred from sequences isolated from different individuals in a population (balanced tree). However, phylogenies of the HIV envelope protein from chronically infected hosts resemble influenza's ladder-like tree. This highlights that the processes affecting viral genetic variation can differ across scales. Indeed, contrasting patterns of viral genetic variation within and between hosts has been an active topic in phylodynamic research since the field's inception. Although these three phylogenetic features are useful rules of thumb to identify epidemiological, immunological, and evolutionary processes that might be impacting viral genetic variation, there is growing recognition that the mapping between process and phylogenetic pattern can be many-to-one. For instance, although ladder-like trees could reflect the presence of directional selection, ladder-like trees could also reflect sequential genetic bottlenecks that might occur with rapid spatial spread, as in the case of rabies virus. Because of this many-to-one mapping between process and phylogenetic pattern, research in the field of viral phylodynamics has sought to develop and apply quantitative methods to effectively infer process from reconstructed viral phylogenies (see Methods). The consideration of other data sources (e.g., incidence patterns) may aid in distinguishing between competing phylodynamic hypotheses. Combining disparate sources of data for phylodynamic analysis remains a major challenge in the field and is an active area of research. Applications: Viral origins Phylodynamic models may aid in dating epidemic and pandemic origins.
The rapid rate of evolution in viruses allows molecular clock models to be estimated from genetic sequences, thus providing a per-year rate of evolution of the virus. With the rate of evolution measured in real units of time, it is possible to infer the date of the most recent common ancestor (MRCA) for a set of viral sequences. The age of the MRCA of these isolates is a lower bound; the common ancestor of the entire virus population must have existed earlier than the MRCA of the virus sample. In April 2009, genetic analysis of 11 sequences of swine-origin H1N1 influenza suggested that the common ancestor existed at or before 12 January 2009. This finding aided in making an early estimate of the basic reproduction number R0 of the pandemic. Similarly, genetic analysis of sequences isolated from within an individual can be used to determine the individual's infection time. Viral spread Phylodynamic models may provide insight into epidemiological parameters that are difficult to assess through traditional surveillance means. For example, assessment of R0 from surveillance data requires careful control for variation in the reporting rate and the intensity of surveillance. Inferring the demographic history of the virus population from genetic data may help to avoid these difficulties and can provide a separate avenue for inference of R0. Such approaches have been used to estimate R0 in hepatitis C virus and HIV. Additionally, differential transmission between groups, be they geographic-, age-, or risk-related, is very difficult to assess from surveillance data alone. Phylogeographic models can reveal these otherwise hidden transmission patterns more directly. Phylodynamic approaches have mapped the geographic movement of the human influenza virus and quantified the epidemic spread of rabies virus in North American raccoons. However, nonrepresentative sampling may bias inferences of both R0 and migration patterns. Phylodynamic approaches have also been used to better understand viral transmission dynamics and spread within infected hosts. For example, phylodynamic studies have been used to infer the rate of viral growth within infected hosts and to argue for the occurrence of viral compartmentalization in hepatitis C infection. Applications: Viral control efforts Phylodynamic approaches can also be useful in ascertaining the effectiveness of viral control efforts, particularly for diseases with low reporting rates. For example, the genetic diversity of the DNA-based hepatitis B virus declined in the Netherlands in the late 1990s, following the initiation of a vaccination program. This correlation was used to argue that vaccination was effective at reducing the prevalence of infection, although alternative explanations are possible. Viral control efforts can also impact the rate at which virus populations evolve, thereby influencing phylogenetic patterns. Phylodynamic approaches that quantify how evolutionary rates change over time can therefore provide insight into the effectiveness of control strategies. For example, an application to HIV sequences within infected hosts showed that viral substitution rates dropped to effectively zero following the initiation of antiretroviral drug therapy. This decrease in substitution rates was interpreted as an effective cessation of viral replication following the commencement of treatment, and would be expected to lead to lower viral loads.
This finding is especially encouraging because lower substitution rates are associated with slower progression to AIDS in treatment-naive patients. Antiviral treatment also creates selective pressure for the evolution of drug resistance in virus populations, and can thereby affect patterns of genetic diversity. Commonly, there is a fitness trade-off between faster replication of susceptible strains in the absence of antiviral treatment and faster replication of resistant strains in the presence of antivirals. Thus, ascertaining the level of antiviral pressure necessary to shift evolutionary outcomes is of public health importance. Phylodynamic approaches have been used to examine the spread of oseltamivir resistance in influenza A/H1N1. Methods: Most often, the goal of phylodynamic analyses is to make inferences of epidemiological processes from viral phylogenies. Thus, most phylodynamic analyses begin with the reconstruction of a phylogenetic tree. Genetic sequences are often sampled at multiple time points, which allows the estimation of substitution rates and the time of the MRCA using a molecular clock model. For viruses, Bayesian phylogenetic methods are popular because of the ability to fit complex demographic scenarios while integrating out phylogenetic uncertainty. Traditional evolutionary approaches directly utilize methods from computational phylogenetics and population genetics to assess hypotheses of selection and population structure without direct regard for epidemiological models. Methods: For example, the magnitude of selection can be measured by comparing the rate of nonsynonymous substitution to the rate of synonymous substitution (dN/dS); the population structure of the host population may be examined by calculation of F-statistics; and hypotheses concerning panmixis and selective neutrality of the virus may be tested with statistics such as Tajima's D. However, such analyses were not designed with epidemiological inference in mind and it may be difficult to extrapolate from standard statistics to desired epidemiological quantities. Methods: In an effort to bridge the gap between traditional evolutionary approaches and epidemiological models, several analytical methods have been developed to specifically address problems related to phylodynamics. These methods are based on coalescent theory, birth-death models, and simulation, and are used to more directly relate epidemiological parameters to observed viral sequences. Coalescent theory and phylodynamics Effective population size The coalescent is a mathematical model that describes the ancestry of a sample of nonrecombining gene copies. In modeling the coalescent process, time is usually considered to flow backwards from the present. In a selectively neutral population of constant size N and nonoverlapping generations (the Wright–Fisher model), the expected time for a sample of two gene copies to coalesce (i.e., find a common ancestor) is N generations. Methods: More generally, the waiting time for two members of a sample of n gene copies to share a common ancestor is exponentially distributed, with rate $\lambda_n = \binom{n}{2}\frac{1}{N}$. This time interval is labeled Tn, and at its end there are n − 1 extant lineages remaining. These remaining lineages will coalesce at the rates λn−1, …, λ2 after intervals Tn−1, …, T2. This process can be simulated by drawing exponential random variables with rates {λn−i} for i = 0, …, n − 2 until there is only a single lineage remaining (the MRCA of the sample).
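A minimal sketch of this waiting-time simulation in Python, assuming a neutral, constant-size Wright–Fisher population; the function names and parameter values are illustrative only, not any published implementation:

```python
import random

def simulate_coalescent_times(n, N):
    """Draw the waiting times T_n, T_{n-1}, ..., T_2 (in generations) for a
    sample of n lineages in a constant-size neutral population of size N."""
    times = []
    k = n
    while k > 1:
        rate = k * (k - 1) / 2 / N      # lambda_k = C(k,2) * 1/N
        times.append(random.expovariate(rate))
        k -= 1                           # each coalescence removes one lineage
    return times

def simulate_tmrca(n, N):
    """Time to the most recent common ancestor: the sum of all waiting times."""
    return sum(simulate_coalescent_times(n, N))

if __name__ == "__main__":
    random.seed(1)
    n, N = 20, 10_000
    replicates = [simulate_tmrca(n, N) for _ in range(5_000)]
    mean_tmrca = sum(replicates) / len(replicates)
    # Theory: E[T_MRCA] = 2N(1 - 1/n) = 19,000 generations for n = 20, N = 10,000.
    print(f"simulated mean TMRCA ≈ {mean_tmrca:.0f} generations "
          f"(theory: {2 * N * (1 - 1 / n):.0f})")
```

Extending this sketch to full tree topologies only requires additionally merging two lineages chosen uniformly at random at each coalescence, as described next.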
Methods: In the absence of selection and population structure, the tree topology may be simulated by picking two lineages uniformly at random after each coalescent interval Ti. The expected waiting time to find the MRCA of the sample is the sum of the expected values of the internode intervals, $E[T_{\mathrm{MRCA}}] = E[T_n] + E[T_{n-1}] + \cdots + E[T_2] = \frac{1}{\lambda_n} + \frac{1}{\lambda_{n-1}} + \cdots + \frac{1}{\lambda_2} = 2N\left(1 - \frac{1}{n}\right)$. Two corollaries are: The time to the MRCA (TMRCA) of a sample is bounded in the sample size: $\lim_{n \to \infty} E[T_{\mathrm{MRCA}}] = 2N$. Few samples are required for the expected TMRCA of the sample to be close to the theoretical upper bound, as the difference is O(1/n). Consequently, the TMRCA estimated from a relatively small sample of viral genetic sequences is an asymptotically unbiased estimate for the time that the viral population was founded in the host population. For example, Robbins et al. estimated the TMRCA for 74 HIV-1 subtype-B genetic sequences collected in North America to be 1968. Assuming a constant population size, we expect the time back to 1968 to represent 73/74 ≈ 99% of the TMRCA of the North American virus population. If the population size N(t) changes over time, the coalescent rate λn(t) will also be a function of time. Methods: Donnelly and Tavaré derived this rate for a time-varying population size under the assumption of constant birth rates: $\lambda_n(t) = \binom{n}{2}\frac{1}{N(t)}$. Because all topologies are equally likely under the neutral coalescent, this model will have the same properties as the constant-size coalescent under a rescaling of the time variable: $t \to \int_{\tau=0}^{t} \frac{d\tau}{N(\tau)}$. Very early in an epidemic, the virus population may be growing exponentially at rate r, so that t units of time in the past, the population will have size $N(t) = N_0 e^{-rt}$. In this case, the rate of coalescence becomes $\lambda_n(t) = \binom{n}{2}\frac{1}{N_0 e^{-rt}}$. This rate is small close to when the sample was collected (t = 0), so that external branches (those without descendants) of a gene genealogy will tend to be long relative to those close to the root of the tree. This is why rapidly growing populations yield trees with long tip branches. Methods: If the rate of exponential growth is estimated from a gene genealogy, it may be combined with knowledge of the duration of infection or the serial interval D for a particular pathogen to estimate the basic reproduction number, R0. The two may be linked by the following equation: $r = \frac{R_0 - 1}{D}$. For example, one of the first estimates of R0 was for pandemic H1N1 influenza in 2009 by using a coalescent-based analysis of 11 hemagglutinin sequences in combination with prior data about the infectious period for influenza.
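As a worked illustration of the relation r = (R0 − 1)/D, the snippet below converts an exponential growth rate estimated from a genealogy into a basic reproduction number; the numerical values are placeholders for illustration only, not the published 2009 H1N1 estimates:

```python
def r0_from_growth_rate(r, D):
    """Basic reproduction number from the exponential growth rate r and the
    duration of infection / serial interval D, using r = (R0 - 1) / D."""
    return 1.0 + r * D

# Illustrative placeholder values: a growth rate of 25 per year from a
# coalescent fit, and a three-day infectious period expressed in years.
r_per_year = 25.0
D_years = 3.0 / 365.0
print(f"R0 estimate: {r0_from_growth_rate(r_per_year, D_years):.2f}")
```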
A well-studied example is the Susceptible–Infected–Recovered (SIR) system of differential equations, which describes the fractions of the population S(t) susceptible, I(t) infected, and R(t) recovered as a function of time: $\frac{dS}{dt} = -\beta SI$, $\frac{dI}{dt} = \beta SI - \gamma I$, and $\frac{dR}{dt} = \gamma I$. Here, β is the per capita rate of transmission to susceptible hosts, and γ is the rate at which infected individuals recover, whereupon they are no longer infectious. In this case, the incidence of new infections per unit time is $f(t) = \beta SI$, which is analogous to the birth rate in classical population genetics models. The general formula for the rate of coalescence is: $\lambda_n(t) = \binom{n}{2}\frac{2f(t)}{I(t)^2}$. The ratio $2\binom{n}{2}/I(t)^2$ can be understood as arising from the probability that two lineages selected uniformly at random are both ancestral to the sample. This probability is the ratio of the number of ways to pick two lineages without replacement from the set of lineages and from the set of all infections: $\binom{n}{2}\big/\binom{I(t)}{2} \approx 2\binom{n}{2}/I(t)^2$. Coalescent events will occur with this probability at the rate given by the incidence function f(t). For the simple SIR model, this yields $\lambda_n(t) = \binom{n}{2}\frac{2\beta S(t)}{I(t)}$. This expression is similar to the Kingman coalescent rate, but is damped by the fraction susceptible S(t). Early in an epidemic, S(0) ≈ 1, so for the SIR model $\lambda_n(t) \approx \binom{n}{2}\frac{2\beta}{I(t)}$. This has the same mathematical form as the rate in the Kingman coalescent, substituting $N_e = I(t)/(2\beta)$. Consequently, estimates of effective population size based on the Kingman coalescent will be proportional to prevalence of infection during the early period of exponential growth of the epidemic. When a disease is no longer exponentially growing but has become endemic, the rate of lineage coalescence can also be derived for the epidemiological model governing the disease's transmission dynamics. This can be done by extending the Wright–Fisher model to allow for unequal offspring distributions. With a Wright–Fisher generation taking τ units of time, the rate of coalescence is given by: $\lambda_n = \binom{n}{2}\frac{1}{N_e \tau}$, where the effective population size Ne is the population size N divided by the variance of the offspring distribution σ2. The generation time τ for an epidemiological model at equilibrium is given by the duration of infection, and the population size N is closely related to the equilibrium number of infected individuals. To derive the variance in the offspring distribution σ2 for a given epidemiological model, one can imagine that infected individuals can differ from one another in their infectivities, their contact rates, their durations of infection, or in other characteristics relating to their ability to transmit the virus with which they are infected. These differences can be acknowledged by assuming that the basic reproduction number is a random variable ν that varies across individuals in the population and that ν follows some continuous probability distribution. The mean and variance of these individual basic reproduction numbers, E[ν] and Var[ν], respectively, can then be used to compute σ2. The expression relating these quantities is given by: $\sigma^2 = \frac{\mathrm{Var}[\nu]}{E[\nu]^2} + 1$. For example, for the SIR model above, modified to include births into the population and deaths out of the population, the population size N is given by the equilibrium number of infected individuals, I. The mean basic reproduction number, averaged across all infected individuals, is given by β/γ, under the assumption that the background mortality rate is negligible compared to the rate of recovery γ.
The variance in individuals' basic reproduction rates is given by $(\beta/\gamma)^2$, because the duration of time individuals remain infected in the SIR model is exponentially distributed. The variance in the offspring distribution σ2 is therefore 2. Ne therefore becomes I/2 and the rate of coalescence becomes: $\lambda_n = \binom{n}{2}\frac{2\gamma}{I}$. This rate, derived for the SIR model at equilibrium, is equivalent to the rate of coalescence given by the more general formula. Rates of coalescence can similarly be derived for epidemiological models with superspreaders or other transmission heterogeneities, for models with individuals who are exposed but not yet infectious, and for models with variable infectious periods, among others. Given some epidemiological information (such as the duration of infection) and a specification of a mathematical model, viral phylogenies can therefore be used to estimate epidemiological parameters that might otherwise be difficult to quantify. Methods: Phylogeography At the most basic level, the presence of geographic population structure can be revealed by comparing the genetic relatedness of viral isolates to geographic relatedness. A basic question is whether geographic character labels are more clustered on a phylogeny than expected under a simple nonstructured model. This question can be answered by counting the number of geographic transitions on the phylogeny via parsimony, maximum likelihood or through Bayesian inference. If population structure exists, then there will be fewer geographic transitions on the phylogeny than expected in a panmictic model. This hypothesis can be tested by randomly scrambling the character labels on the tips of the phylogeny and counting the number of geographic transitions present in the scrambled data. Methods: By repeatedly scrambling the data and calculating transition counts, a null distribution can be constructed and a p-value computed by comparing the observed transition counts to this null distribution. Beyond the presence or absence of population structure, phylodynamic methods can be used to infer the rates of movement of viral lineages between geographic locations and reconstruct the geographic locations of ancestral lineages. Methods: Here, geographic location is treated as a phylogenetic character state, similar in spirit to 'A', 'T', 'G', 'C', so that geographic location is encoded as a substitution model. The same phylogenetic machinery that is used to infer models of DNA evolution can thus be used to infer geographic transition matrices. The end result is a rate, measured in terms of years or in terms of nucleotide substitutions per site, at which a lineage in one region moves to another region over the course of the phylogenetic tree. In a geographic transmission network, some regions may mix more readily and other regions may be more isolated. Additionally, some transmission connections may be asymmetric, so that the rate at which lineages in region 'A' move to region 'B' may differ from the rate at which lineages in 'B' move to 'A'. With geographic location thus encoded, ancestral state reconstruction can be used to infer ancestral geographic locations of particular nodes in the phylogeny. These types of approaches can be extended by substituting other attributes for geographic locations. For example, in an application to rabies virus, Streicker and colleagues estimated rates of cross-species transmission by considering host species as the attribute.
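A sketch of the label-scrambling test described earlier in this section, using Fitch parsimony to count the minimum number of geographic transitions on a toy bifurcating tree; the tree shape, tip names, and locations are invented for illustration and the code is a minimal sketch rather than a published method:

```python
import random

# A toy rooted binary tree as nested tuples; leaves are tip names.
TREE = ((("t1", "t2"), ("t3", "t4")), (("t5", "t6"), ("t7", "t8")))
LOCATIONS = {"t1": "A", "t2": "A", "t3": "A", "t4": "B",
             "t5": "B", "t6": "B", "t7": "B", "t8": "A"}

def fitch(node, labels):
    """Return (state set, minimum number of transitions) for a subtree
    under Fitch parsimony."""
    if isinstance(node, str):            # leaf: its location, zero transitions
        return {labels[node]}, 0
    left_states, left_cost = fitch(node[0], labels)
    right_states, right_cost = fitch(node[1], labels)
    shared = left_states & right_states
    if shared:                           # children can agree: no new transition
        return shared, left_cost + right_cost
    return left_states | right_states, left_cost + right_cost + 1

def permutation_test(tree, labels, reps=10_000, seed=0):
    """Scramble tip locations and report how often a scrambled labelling needs
    as few (or fewer) transitions as the observed one; a small value suggests
    geographic clustering, i.e. population structure."""
    rng = random.Random(seed)
    observed = fitch(tree, labels)[1]
    tips, states = list(labels), list(labels.values())
    hits = 0
    for _ in range(reps):
        rng.shuffle(states)
        if fitch(tree, dict(zip(tips, states)))[1] <= observed:
            hits += 1
    return observed, hits / reps

if __name__ == "__main__":
    obs, p = permutation_test(TREE, LOCATIONS)
    print(f"observed transitions: {obs}, permutation p-value: {p:.3f}")
```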
Simulation As discussed above, it is possible to directly infer parameters of simple compartmental epidemiological models, such as SIR models, from sequence data by looking at genealogical patterns. Additionally, general patterns of geographic movement can be inferred from sequence data, but these inferences do not involve an explicit model of transmission dynamics between infected individuals. For more complicated epidemiological models, such as those involving cross-immunity, age structure of host contact rates, seasonality, or multiple host populations with different life history traits, it is often impossible to analytically predict genealogical patterns from epidemiological parameters. As such, the traditional statistical inference machinery will not work with these more complicated models, and in this case, it is common to instead use a forward simulation-based approach. Simulation-based models require specification of a transmission model for the infection process between infected hosts and susceptible hosts and for the recovery process of infected hosts. Simulation-based models may be compartmental, tracking the numbers of hosts infected and recovered to different viral strains, or may be individual-based, tracking the infection state and immune history of every host in the population. Generally, compartmental models offer significant advantages in terms of speed and memory usage, but may be difficult to implement for complex evolutionary or epidemiological scenarios. A forward simulation model may account for geographic population structure or age structure by modulating transmission rates between host individuals of different geographic or age classes. Additionally, seasonality may be incorporated by allowing time of year to influence transmission rate in a stepwise or sinusoidal fashion. To connect the epidemiological model to viral genealogies requires that multiple viral strains, with different nucleotide or amino acid sequences, exist in the simulation, often denoted I1⋯In for different infected classes. In this case, mutation acts to convert a host in one infected class to another infected class. Over the course of the simulation, viruses mutate and sequences are produced, from which phylogenies may be constructed and analyzed. For antigenically variable viruses, it becomes crucial to model the risk of transmission from an individual infected with virus strain 'A' to an individual who has previously been infected with virus strains 'B', 'C', etc... The level of protection against one strain of virus by a second strain is known as cross-immunity. In addition to risk of infection, cross-immunity may modulate the probability that a host becomes infectious and the duration that a host remains infectious. Often, the degree of cross-immunity between virus strains is assumed to be related to their sequence distance. Methods: In general, in needing to run simulations rather than compute likelihoods, it may be difficult to make fine-scale inferences on epidemiological parameters, and instead, this work usually focuses on broader questions, testing whether overall genealogical patterns are consistent with one epidemiological model or another. Additionally, simulation-based methods are often used to validate inference results, providing test data where the correct answer is known ahead of time. 
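A minimal compartmental forward simulation in the spirit described above, with two strains, partial cross-immunity, and a small mutation flux converting strain-1 infections into strain 2. The parameter values, the cross-immunity rule, and the simplification that a host's immune history is collapsed to the strain most recently cleared are all illustrative assumptions rather than features of any published model; a full phylodynamic simulation would additionally track transmission events so that genealogies could be reconstructed from the sampled infections.

```python
import numpy as np

def simulate_two_strain(beta=0.3, gamma=0.1, mu=1e-4, sigma=0.5,
                        days=2000, pop=1_000_000):
    """Discrete-time (one-day step) two-strain SIR-like simulation.

    sigma: cross-immunity; hosts recovered from one strain acquire the other
           at a rate reduced by the factor (1 - sigma).
    mu:    per-day fraction of strain-1 infections that mutate into strain 2.
    Returns daily prevalence (fraction of the population infected) per strain.
    """
    S = float(pop - 10)                 # never-infected hosts
    I = np.array([10.0, 0.0])           # infected with strain 1, strain 2
    R = np.array([0.0, 0.0])            # recovered from strain 1, strain 2
    history = []
    for _ in range(days):
        foi = beta * I / pop            # force of infection per strain
        new_from_S = foi * S            # infections of naive hosts
        # strain-k infections of hosts recovered from the *other* strain
        new_from_R = (1.0 - sigma) * foi * R[::-1]
        recoveries = gamma * I
        mutated = mu * I[0]             # within-host strain 1 -> strain 2
        S -= new_from_S.sum()
        R = R - new_from_R[::-1] + recoveries
        I = I + new_from_S + new_from_R - recoveries + np.array([-mutated, mutated])
        history.append(I / pop)
    return np.array(history)

if __name__ == "__main__":
    prevalence = simulate_two_strain()
    print("peak prevalence, strain 1 and strain 2:", prevalence.max(axis=0))
```

With these illustrative parameters the first strain produces an initial epidemic wave, after which the mutant strain spreads among hosts whose immunity only partially protects them, giving a delayed second wave.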
Because computing likelihoods for genealogical data under complex simulation models has proven difficult, an alternative statistical approach called Approximate Bayesian Computation (ABC) is becoming popular in fitting these simulation models to patterns of genetic variation, following successful application of this approach to bacterial diseases. This is because ABC makes use of easily computable summary statistics to approximate likelihoods, rather than the likelihoods themselves. Examples: Phylodynamics of influenza Human influenza is an acute respiratory infection primarily caused by the influenza A and influenza B viruses. Influenza A viruses can be further classified into subtypes, such as A/H1N1 and A/H3N2. Here, subtypes are denoted according to their hemagglutinin (H or HA) and neuraminidase (N or NA) genes, which, as surface proteins, act as the primary targets for the humoral immune response. Influenza viruses circulate in other species as well, most notably as swine influenza and avian influenza. Through reassortment, genetic sequences from swine and avian influenza occasionally enter the human population. If a particular hemagglutinin or neuraminidase has been circulating outside the human population, then humans will lack immunity to this protein and an influenza pandemic may follow a host switch event, as seen in 1918, 1957, 1968 and 2009. After introduction into the human population, a lineage of influenza generally persists through antigenic drift, in which HA and NA continually accumulate mutations allowing viruses to infect hosts immune to earlier forms of the virus. These lineages of influenza show recurrent seasonal epidemics in temperate regions and less periodic transmission in the tropics. Generally, at each pandemic event, the new form of the virus outcompetes existing lineages. The study of viral phylodynamics in influenza primarily focuses on the continual circulation and evolution of epidemic influenza, rather than on pandemic emergence. Of central interest to the study of viral phylodynamics is the distinctive phylogenetic tree of epidemic influenza A/H3N2, which shows a single predominant trunk lineage that persists through time and side branches that persist for only 1–5 years before going extinct. Selective pressures Phylodynamic techniques have provided insight into the relative selective effects of mutations to different sites and different genes across the influenza virus genome. The exposed location of hemagglutinin (HA) suggests that there should exist strong selective pressure for evolution at the specific sites on HA that are recognized by antibodies in the human immune system. These sites are referred to as epitope sites. Phylogenetic analysis of H3N2 influenza has shown that putative epitope sites of the HA protein evolve approximately 3.5 times faster on the trunk of the phylogeny than on side branches. This suggests that viruses possessing mutations to these exposed sites benefit from positive selection and are more likely than viruses lacking such mutations to take over the influenza population. Conversely, putative nonepitope sites of the HA protein evolve approximately twice as fast on side branches as on the trunk of the H3 phylogeny, indicating that mutations to these sites are selected against and viruses possessing such mutations are less likely to take over the influenza population. Thus, analysis of phylogenetic patterns gives insight into underlying selective forces.
Examples: A similar analysis combining sites across genes shows that while both HA and NA undergo substantial positive selection, internal genes show low rates of amino acid fixation relative to levels of polymorphism, suggesting an absence of positive selection.Further analysis of HA has shown it to have a very small effective population size relative to the census size of the virus population, as expected for a gene undergoing strong positive selection. However, across the influenza genome, there is surprisingly little variation in effective population size; all genes are nearly equally low. Examples: This finding suggests that reassortment between segments occurs slowly enough, relative to the actions of positive selection, that genetic hitchhiking causes beneficial mutations in HA and NA to reduce diversity in linked neutral variation in other segments of the genome. Influenza A/H1N1 shows a larger effective population size and greater genetic diversity than influenza H3N2, suggesting that H1N1 undergoes less adaptive evolution than H3N2. This hypothesis is supported by empirical patterns of antigenic evolution; there have been nine vaccine updates recommended by the WHO for H1N1 in the interpandemic period between 1978 and 2009, while there have been 20 vaccine updates recommended for H3N2 during this same time period. Additionally, an analysis of patterns of sequence evolution on trunk and side branches suggests that H1N1 undergoes substantially less positive selection than H3N2. However, the underlying evolutionary or epidemiological cause for this difference between H3N2 and H1N1 remains unclear. Circulation patterns The extremely rapid turnover of the influenza population means that the rate of geographic spread of influenza lineages must also, to some extent, be rapid. Examples: Surveillance data show a clear pattern of strong seasonal epidemics in temperate regions and less periodic epidemics in the tropics. The geographic origin of seasonal epidemics in the Northern and Southern Hemispheres had been a major open question in the field. However, temperate epidemics usually emerge from a global reservoir rather than emerging from within the previous season's genetic diversity. This and subsequent work, has suggested that the global persistence of the influenza population is driven by viruses being passed from epidemic to epidemic, with no individual region in the world showing continual persistence. However, there is considerable debate regarding the particular configuration of the global network of influenza, with one hypothesis suggesting a metapopulation in East and Southeast Asia that continually seeds influenza in the rest of the world, and another hypothesis advocating a more global metapopulation in which temperate lineages often return to the tropics at the end of a seasonal epidemic.All of these phylogeographic studies necessarily suffer from limitations in the worldwide sampling of influenza viruses. For example, the relative importance of tropical Africa and India has yet to be uncovered. Additionally, the phylogeographic methods used in these studies (see section on phylogeographic methods) make inferences of the ancestral locations and migration rates on only the samples at hand, rather than on the population in which these samples are embedded. Examples: Because of this, study-specific sampling procedures are a concern in extrapolating to population-level inferences. 
However, estimates of migration rates that are jointly based on epidemiological and evolutionary simulations appear robust to a large degree of undersampling or oversampling of a particular region. Further methodological progress is required to more fully address these issues. Simulation-based models Forward simulation-based approaches for addressing how immune selection can shape the phylogeny of influenza A/H3N2's hemagglutinin protein have been actively developed by disease modelers since the early 2000s. These approaches include both compartmental models and agent-based models. One of the first compartmental models for influenza was developed by Gog and Grenfell, who simulated the dynamics of many strains with partial cross-immunity to one another. Under a parameterization of long host lifespan and short infectious period, they found that strains would form self-organized sets that would emerge and replace one another. Although the authors did not reconstruct a phylogeny from their simulated results, the dynamics they found were consistent with a ladder-like viral phylogeny exhibiting low strain diversity and rapid lineage turnover. Later work by Ferguson and colleagues adopted an agent-based approach to better identify the immunological and ecological determinants of influenza evolution. The authors modeled influenza's hemagglutinin as four epitopes, each consisting of three amino acids. They showed that under strain-specific immunity alone (with partial cross-immunity between strains based on their amino acid similarity), the phylogeny of influenza A/H3N2's HA was expected to exhibit 'explosive genetic diversity', a pattern that is inconsistent with empirical data. This led the authors to postulate the existence of a temporary strain-transcending immunity: individuals were immune to reinfection with any other influenza strain for approximately six months following an infection. With this assumption, the agent-based model could reproduce the ladder-like phylogeny of influenza A/H3N2's HA protein. Examples: Work by Koelle and colleagues revisited the dynamics of influenza A/H3N2 evolution following the publication of a paper by Smith and colleagues which showed that the antigenic evolution of the virus occurred in a punctuated manner. The phylodynamic model designed by Koelle and coauthors argued that this pattern reflected a many-to-one genotype-to-phenotype mapping, with the possibility of strains from antigenically distinct clusters of influenza sharing a high degree of genetic similarity. Examples: Through incorporating this mapping of viral genotype into viral phenotype (or antigenic cluster) into their model, the authors were able to reproduce the ladder-like phylogeny of influenza's HA protein without generalized strain-transcending immunity. The reproduction of the ladder-like phylogeny resulted from the viral population passing through repeated selective sweeps. These sweeps were driven by herd immunity and acted to constrain viral genetic diversity. Instead of modeling the genotypes of viral strains, a compartmental simulation model by Gökaydin and colleagues considered influenza evolution at the scale of antigenic clusters (or phenotypes). This model showed that antigenic emergence and replacement could result under certain epidemiological conditions. These antigenic dynamics would be consistent with a ladder-like phylogeny of influenza exhibiting low genetic diversity and continual strain turnover. 
In recent work, Bedford and colleagues used an agent-based model to show that evolution in a Euclidean antigenic space can account for the phylogenetic pattern of influenza A/H3N2's HA, as well as the virus's antigenic, epidemiological, and geographic patterns. The model showed the reproduction of influenza's ladder-like phylogeny depended critically on the mutation rate of the virus as well as the immunological distance yielded by each mutation. The phylodynamic diversity of influenza Although most research on the phylodynamics of influenza has focused on seasonal influenza A/H3N2 in humans, influenza viruses exhibit a wide variety of phylogenetic patterns. Qualitatively similar to the phylogeny of influenza A/H3N2's hemagglutinin protein, influenza A/H1N1 exhibits a ladder-like phylogeny with relatively low genetic diversity at any point in time and rapid lineage turnover. However, the phylogeny of influenza B's hemagglutinin protein has two circulating lineages: the Yamagata and the Victoria lineage. It is unclear how the population dynamics of influenza B contribute to this evolutionary pattern, although one simulation model has been able to reproduce this phylogenetic pattern with longer infectious periods of the host.Genetic and antigenic variation of influenza is also present across a diverse set of host species. The impact of host population structure can be seen in the evolution of equine influenza A/H3N8: instead of a single trunk with short side-branches, the hemagglutinin of influenza A/H3N8 splits into two geographically distinct lineages, representing American and European viruses. The evolution of these two lineages is thought to have occurred as a consequence of quarantine measures. Additionally, host immune responses are hypothesized to modulate virus evolutionary dynamics. Swine influenza A/H3N2 is known to evolve antigenically at a rate that is six times slower than that of the same virus circulating in humans, although these viruses' rates of genetic evolution are similar. Influenza in aquatic birds is hypothesized to exhibit 'evolutionary stasis', although recent phylogenetic work indicates that the rate of evolutionary change in these hosts is similar to those in other hosts, including humans. In these cases, it is thought that short host lifespans prevent the build-up of host immunity necessary to effectively drive antigenic drift. Phylodynamics of HIV Origin and spread The global diversity of HIV-1 group M is shaped by its origins in Central Africa around the turn of the 20th century. The epidemic underwent explosive growth throughout the early 20th century with multiple radiations out of Central Africa. While traditional epidemiological surveillance data are almost nonexistent for the early period of epidemic expansion, phylodynamic analyses based on modern sequence data can be used to estimate when the epidemic began and to estimate the early growth rate. The rapid early growth of HIV-1 in Central Africa is reflected in the star-like phylogenies of the virus, with most coalescent events occurring in the distant past. Multiple founder events have given rise to distinct HIV-1 group M subtypes which predominate in different parts of the world. Subtype B is most prevalent in North America and Western Europe, while subtypes A and C, which account for more than half of infections worldwide, are common in Africa. 
Examples: HIV subtypes differ slightly in their transmissibility, virulence, effectiveness of antiretroviral therapy, and pathogenesis. The rate of exponential growth of HIV in Central Africa in the early 20th century preceding the establishment of modern subtypes has been estimated using coalescent approaches. Several estimates based on parametric exponential growth models are shown in table 1, for different time periods, risk groups and subtypes. The early spread of HIV-1 has also been characterized using nonparametric ("skyline") estimates of Ne. The early growth of subtype B in North America was quite high; however, the duration of exponential growth was relatively short, with saturation occurring in the mid- and late-1980s. Examples: At the opposite extreme, HIV-1 group O, a relatively rare group that is geographically confined to Cameroon and that is mainly spread by heterosexual sex, has grown at a lower rate than either subtype B or C. HIV-1 sequences sampled over a span of five decades have been used with relaxed molecular clock phylogenetic methods to estimate the time of cross-species viral spillover into humans around the early 20th century. The estimated TMRCA for HIV-1 coincides with the appearance of the first densely populated large cities in Central Africa. Similar methods have been used to estimate the time that HIV originated in different parts of the world. The origin of subtype B in North America is estimated to be in the 1960s, after which it went undetected until the AIDS epidemic in the 1980s. There is evidence that progenitors of modern subtype B originally colonized the Caribbean before undergoing multiple radiations to North and South America. Subtype C originated around the same time in Africa. Contemporary epidemiological dynamics At shorter time scales and finer geographical scales, HIV phylogenies may reflect epidemiological dynamics related to risk behavior and sexual networks. Very dense sampling of viral sequences within cities over short periods of time has given a detailed picture of HIV transmission patterns in modern epidemics. Sequencing of virus from newly diagnosed patients is now routine in many countries for surveillance of drug resistance mutations, which has yielded large databases of sequence data in those areas. Examples: There is evidence that HIV transmission within heterogeneous sexual networks leaves a trace in HIV phylogenies, in particular making phylogenies more imbalanced and concentrating coalescent events on a minority of lineages. By analyzing phylogenies estimated from HIV sequences from men who have sex with men in London, United Kingdom, Lewis et al. found evidence that transmission is highly concentrated in the brief period of primary HIV infection (PHI), which consists of approximately the first 6 months of the infectious period. Examples: In a separate analysis, Volz et al. found that simple epidemiological dynamics explain phylogenetic clustering of viruses collected from patients with PHI. Examples: Patients who were recently infected were more likely to harbor virus that is phylogenetically close to samples from other recently infected patients. Such clustering is consistent with observations in simulated epidemiological dynamics featuring an early period of intensified transmission during PHI. These results therefore provided further support for Lewis et al.'s findings that HIV transmission occurs frequently from individuals early in their infection.
Examples: Viral adaptation Purifying immune selection dominates evolution of HIV within hosts, but evolution between hosts is largely decoupled from within-host evolution. Immune selection has relatively little influence on HIV phylogenies at the population level for three reasons. Examples: First, there is an extreme bottleneck in viral diversity at the time of sexual transmission. Second, transmission tends to occur early in infection before immune selection has had a chance to operate. Finally, the replicative fitness of a viral strain (measured in transmissions per host) is largely extrinsic to virological factors, depending more heavily on behaviors in the host population. These include heterogeneous sexual and drug-use behaviors. Examples: There is some evidence from comparative phylogenetic analysis and epidemic simulations that HIV adapts at the level of the population to maximize transmission potential between hosts. This adaptation is towards intermediate virulence levels, which balances the productive lifetime of the host (time until AIDS) with the transmission probability per act. A useful proxy for virulence is the set-point viral load (SPVL), which is correlated with the time until AIDS. SPVL is the quasi-equilibrium titer of viral particles in the blood during chronic infection. For adaptation towards intermediate virulence to be possible, SPVL needs to be heritable and a trade-off between viral transmissibility and the lifespan of the host needs to exist. SPVL has been shown to be correlated between HIV donor and recipients in transmission pairs, thereby providing evidence that SPVL is at least partly heritable. The transmission probability of HIV per sexual act is positively correlated with viral load, thereby providing evidence of the trade-off between transmissibility and virulence. It is therefore theoretically possible that HIV evolves to maximize its transmission potential. Epidemiological simulation and comparative phylogenetic studies have shown that adaptation of HIV towards optimum SPVL could be expected over 100–150 years. These results depend on empirical estimates for the transmissibility of HIV and the lifespan of hosts as a function of SPVL. Future directions: Up to this point, phylodynamic approaches have focused almost entirely on RNA viruses, which often have mutation rates on the order of 10−3 to 10−4 substitutions per site per year. This allows a sample of around 1000 bases to have power to give a fair degree of confidence in estimating the underlying genealogy connecting sampled viruses. However, other pathogens may have significantly slower rates of evolution. DNA viruses, such as herpes simplex virus, evolve orders of magnitude more slowly. These viruses have commensurately larger genomes. Bacterial pathogens such as pneumococcus and tuberculosis evolve slower still and have even larger genomes. In fact, there exists a very general negative correlation between genome size and mutation rate across observed systems. Because of this, similar amounts of phylogenetic signal are likely to result from sequencing full genomes of RNA viruses, DNA viruses or bacteria. As sequencing technologies continue to improve, it is becoming increasingly feasible to conduct phylodynamic analyses on the full diversity of pathogenic organisms. Additionally, improvements in sequencing technologies will allow detailed investigation of within-host evolution, as the full diversity of an infecting quasispecies may be uncovered given enough sequencing effort.
**Physical media** Physical media: Physical media refers to the physical materials that are used to store or transmit information in data communications. These physical media are generally physical objects made of materials such as copper or glass. They can be touched and felt, and have physical properties such as weight and color. For a number of years, copper and glass were the only media used in computer networking. Physical media: The term physical media can also be used to describe data storage media like records, cassettes, VHS, LaserDiscs, CDs, DVDs, and Blu-rays, especially when compared with modern streaming media or content that has been downloaded from the Internet onto a hard drive or other storage device as files. Types of physical media: Copper wire Copper wire is currently the most commonly used type of physical media due to the abundance of copper in the world, as well as its ability to conduct electrical power. Copper is also one of the cheaper metals, which makes it more feasible to use. Most copper wires used in data communications today have eight strands of copper, organized in unshielded twisted pairs, or UTP. The wires are twisted around one another because this reduces electrical interference from outside sources. In addition to UTP, some wires use shielded twisted pairs (STP), which reduce electrical interference even further. The way copper wires are twisted around one another also has an effect on data rates. Category 3 cable (Cat3) has three to four twists per foot and can support speeds of 10 Mbit/s. Category 5 cable (Cat5) is newer and has three to four twists per inch, which results in a maximum data rate of 100 Mbit/s. In addition, there are category 5e (Cat5e) cables which can support speeds of up to 1,000 Mbit/s, and more recently, category 6 cables (Cat6), which support data rates of up to 10,000 Mbit/s (i.e., 10 Gbit/s). On average, copper wire costs around $1 per foot. Types of physical media: Optical fiber Optical fiber is a thin and flexible piece of fiber made of glass or plastic. Unlike copper wire, optical fiber is typically used for long-distance data communications, because it allows data transmission over long distances at high transmission speeds. Optical fiber also does not require signal repeaters, which ends up reducing maintenance costs, since signal repeaters are known to fail often. There are two major types of optical fiber in use today. Multimode fiber is approximately 62.5 µm in diameter and utilizes light-emitting diodes to carry signals over a maximum distance of about 2 kilometers. Single mode fiber is approximately 10 µm in diameter and is capable of carrying signals over tens of miles. Like copper wire, optical fiber currently costs about $1 per foot. Types of physical media: Coaxial cables Coaxial cables have two different layers surrounding a copper core. The innermost layer is an insulator. The next layer is a conducting shield. These are both covered by a plastic jacket. Coaxial cables are used for microwaves, televisions and computers. Types of physical media: This was the second transmission medium to be introduced (often called coax), around the mid-1920s. In the center of a coaxial cable is a copper wire that acts as a conductor, where the information travels. The copper wire in coax is thicker than that in twisted-pair, and it is also unaffected by surrounding wires that contribute to electromagnetic interference, so it can provide higher transmission rates than the twisted-pair.
The center conductor is surrounded by plastic insulation, which helps filter out extraneous interference. This insulation is covered by a return path, which is usually a braided-copper shield or an aluminum-foil-type covering. Outer jackets form a protective covering for coax; the number and type of outer jackets depend on the intended use of the cable (e.g., whether the cable is supposed to be strung in the air or underground, whether rodent protection is required). The two most popular types of coaxial cabling are used with Ethernet networks. Types of physical media: Thinnet is used on Ethernet 10BASE2 networks and is the thinner and more flexible of the two. Unlike thicknet, it uses a Bayonet Neill–Concelman (BNC) connector on each end to connect to computers. Thinnet is part of the RG-58 family of cable, with a maximum cable length of 185 meters and transmission speeds of 10 Mbit/s. Types of physical media: Thicknet coaxial cabling is used with Ethernet 10BASE5 networks and has a maximum cable length of 500 meters and transmission speeds of 10 Mbit/s. It is expensive and not commonly used, though it was originally used to directly connect computers. The computer is connected to a transceiver on the cable by a drop cable running from the attachment unit interface of its network card. A thicknet segment supports a maximum of 100 nodes. One end of each cable is grounded. Types of physical media: Application In the mid-1920s, coax was applied to telephone networks as inter-office trunks. Rather than adding more bundles containing 1,000 or 1,500 pairs of copper wire, it was possible to replace those large cables with much smaller coaxial cable. Types of physical media: The next major use of coax in telecommunications occurred in the 1950s, when it was deployed as submarine cable to carry international traffic. It was then introduced into the data processing realm in the mid-1960s. Early computer architectures required coax as the media type from the terminal to the host. Local area networks were predominantly based on coax from 1980 to about 1987. Coax has also been used in cable TV and the local loop, in the form of HFC architecture. HFC brings fiber as close as possible to the neighborhood. Fiber terminates at the neighborhood node, where coax fans out to provide home service. Types of physical media: Advantages Broadband system - coax has sufficient frequency range to support multiple channels, allowing greater throughput. Greater channel capacity - each of the multiple channels offers substantial capacity depending on the service location (6 MHz wide in North America, 8 MHz wide in Europe). Greater bandwidth - compared to twisted pairs, it has greater bandwidth for each channel. This allows it to support a mixed range of services (voice, data, video, multimedia). Lower error rates - the inner conductor serves as a Faraday shield that protects the network from electronic noise. Disadvantages The bus network on which coax is deployed is susceptible to congestion, noise and security risks. Greater noise - the return path has some noise problems, and the end equipment requires added intelligence to take care of error control. High installation costs. Susceptible to damage from lightning strikes - if lightning is conducted by a coaxial cable, it could very easily damage the equipment at the end of it. Debate on physical media: With technology constantly changing, there is a debate on whether physical media is still prudent and necessary in an increasingly wireless world.
One view holds that wireless and physical media complement each other, and that physical media will matter more, not less, in a society dominated by wireless technology. However, others consider physical media a dead technology that will eventually disappear.
**Bromine compounds** Bromine compounds: Bromine compounds are compounds containing the element bromine (Br). These compounds usually form the -1, +1, +3 and +5 oxidation states. Bromine is intermediate in reactivity between chlorine and iodine, and is one of the most reactive elements. Bond energies to bromine tend to be lower than those to chlorine but higher than those to iodine, and bromine is a weaker oxidising agent than chlorine but a stronger one than iodine. This can be seen from the standard electrode potentials of the X2/X− couples (F, +2.866 V; Cl, +1.395 V; Br, +1.087 V; I, +0.615 V; At, approximately +0.3 V). Bromination often leads to higher oxidation states than iodination but lower or equal oxidation states to chlorination. Bromine tends to react with compounds including M–M, M–H, or M–C bonds to form M–Br bonds. Hydrogen bromide: The simplest compound of bromine is hydrogen bromide, HBr. It is mainly used in the production of inorganic bromides and alkyl bromides, and as a catalyst for many reactions in organic chemistry. Industrially, it is mainly produced by the reaction of hydrogen gas with bromine gas at 200–400 °C with a platinum catalyst. However, reduction of bromine with red phosphorus is a more practical way to produce hydrogen bromide in the laboratory: 2 P + 6 H2O + 3 Br2 → 6 HBr + 2 H3PO3 H3PO3 + H2O + Br2 → 2 HBr + H3PO4At room temperature, hydrogen bromide is a colourless gas, like all the hydrogen halides apart from hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the large and only mildly electronegative bromine atom; however, weak hydrogen bonding is present in solid crystalline hydrogen bromide at low temperatures, similar to the hydrogen fluoride structure, before disorder begins to prevail as the temperature is raised. Aqueous hydrogen bromide is known as hydrobromic acid, which is a strong acid (pKa = −9) because the hydrogen bonds to bromine are too weak to inhibit dissociation. The HBr/H2O system also involves many hydrates HBr·nH2O for n = 1, 2, 3, 4, and 6, which are essentially salts of bromine anions and hydronium cations. Hydrobromic acid forms an azeotrope with boiling point 124.3 °C at 47.63 g HBr per 100 g solution; thus hydrobromic acid cannot be concentrated beyond this point by distillation.Unlike hydrogen fluoride, anhydrous liquid hydrogen bromide is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into H2Br+ and HBr−2 ions – the latter, in any case, are much less stable than the bifluoride ions (HF−2) due to the very weak hydrogen bonding between hydrogen and bromine, though its salts with very large and weakly polarising cations such as Cs+ and NR+4 (R = Me, Et, Bun) may still be isolated. Anhydrous hydrogen bromide is a poor solvent, only able to dissolve small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides. Other binary bromides: Nearly all elements in the periodic table form binary bromides. 
The exceptions are decidedly in the minority and stem in each case from one of three causes: extreme inertness and reluctance to participate in chemical reactions (the noble gases, with the exception of xenon in the very unstable XeBr2); extreme nuclear instability hampering chemical investigation before decay and transmutation (many of the heaviest elements beyond bismuth); and having an electronegativity higher than bromine's (oxygen, nitrogen, fluorine, and chlorine), so that the resultant binary compounds are formally not bromides but rather oxides, nitrides, fluorides, or chlorides of bromine. (Nonetheless, nitrogen tribromide is named as a bromide as it is analogous to the other nitrogen trihalides.) Bromination of metals with Br2 tends to yield lower oxidation states than chlorination with Cl2 when a variety of oxidation states is available. Bromides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydrobromic acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen bromide gas. These methods work best when the bromide product is stable to hydrolysis; otherwise, the possibilities include high-temperature oxidative bromination of the element with bromine or hydrogen bromide, high-temperature bromination of a metal oxide or other halide by bromine, a volatile metal bromide, carbon tetrabromide, or an organic bromide. For example, niobium(V) oxide reacts with carbon tetrabromide at 370 °C to form niobium(V) bromide. Another method is halogen exchange in the presence of excess "halogenating reagent", for example: FeCl3 + BBr3 (excess) → FeBr3 + BCl3. When a lower bromide is wanted, either a higher halide may be reduced using hydrogen or a metal as a reducing agent, or thermal decomposition or disproportionation may be used, as follows: 3 WBr5 + Al → 3 WBr4 + AlBr3 (thermal gradient, 475 °C → 240 °C); EuBr3 + 1/2 H2 → EuBr2 + HBr; 2 TaBr4 → TaBr3 + TaBr5 (500 °C). The bromides of the pre-transition metals (groups 1, 2, and 3, along with the lanthanides and actinides in the +2 and +3 oxidation states) are mostly ionic, while nonmetals tend to form covalent molecular bromides, as do metals in high oxidation states from +3 and above. Silver bromide is very insoluble in water and is thus often used as a qualitative test for bromine. Bromine halides: The halogens form many binary, diamagnetic interhalogen compounds with stoichiometries XY, XY3, XY5, and XY7 (where X is heavier than Y), and bromine is no exception. Bromine forms a monofluoride and monochloride, as well as a trifluoride and pentafluoride. Some cationic and anionic derivatives are also characterised, such as BrF−2, BrCl−2, BrF+2, BrF+4, and BrF+6. Apart from these, some pseudohalides are also known, such as cyanogen bromide (BrCN), bromine thiocyanate (BrSCN), and bromine azide (BrN3). The pale-brown bromine monofluoride (BrF) is unstable at room temperature, disproportionating quickly and irreversibly into bromine, bromine trifluoride, and bromine pentafluoride. It thus cannot be obtained pure. It may be synthesised by the direct reaction of the elements, or by the comproportionation of bromine and bromine trifluoride at high temperatures. Bromine monochloride (BrCl), a red-brown gas, quite readily dissociates reversibly into bromine and chlorine at room temperature and thus also cannot be obtained pure, though it can be made by the reversible direct reaction of its elements in the gas phase or in carbon tetrachloride.
Bromine monofluoride in ethanol readily leads to the monobromination of the aromatic compounds PhX (para-bromination occurs for X = Me, But, OMe, Br; meta-bromination occurs for the deactivating X = –CO2Et, –CHO, –NO2); this is due to heterolytic fission of the Br–F bond, leading to rapid electrophilic bromination by Br+.At room temperature, bromine trifluoride (BrF3) is a straw-coloured liquid. It may be formed by directly fluorinating bromine at room temperature and is purified through distillation. It reacts violently with water and explodes on contact with flammable materials, but is a less powerful fluorinating reagent than chlorine trifluoride. It reacts vigorously with boron, carbon, silicon, arsenic, antimony, iodine, and sulfur to give fluorides, and will also convert most metals and many metal compounds to fluorides; as such, it is used to oxidise uranium to uranium hexafluoride in the nuclear power industry. Refractory oxides tend to be only partially fluorinated, but here the derivatives KBrF4 and BrF2SbF6 remain reactive. Bromine trifluoride is a useful nonaqueous ionising solvent, since it readily dissociates to form BrF+2 and BrF−4 and thus conducts electricity.Bromine pentafluoride (BrF5) was first synthesised in 1930. It is produced on a large scale by direct reaction of bromine with excess fluorine at temperatures higher than 150 °C, and on a small scale by the fluorination of potassium bromide at 25 °C. It also reacts violently with water and is a very strong fluorinating agent, although chlorine trifluoride is still stronger. Polybromine compounds: Although dibromine is a strong oxidising agent with a high first ionisation energy, very strong oxidisers such as peroxydisulfuryl fluoride (S2O6F2) can oxidise it to form the cherry-red Br+2 cation. A few other bromine cations are known, namely the brown Br+3 and dark brown Br+5. The tribromide anion, Br−3, has also been characterised; it is analogous to triiodide. Bromine oxides and oxoacids: Bromine oxides are not as well-characterised as chlorine oxides or iodine oxides, as they are all fairly unstable: it was once thought that they could not exist at all. Dibromine monoxide is a dark-brown solid which, while reasonably stable at −60 °C, decomposes at its melting point of −17.5 °C; it is useful in bromination reactions and may be made from the low-temperature decomposition of bromine dioxide in a vacuum. It oxidises iodine to iodine pentoxide and benzene to 1,4-benzoquinone; in alkaline solutions, it gives the hypobromite anion.So-called "bromine dioxide", a pale yellow crystalline solid, may be better formulated as bromine perbromate, BrOBrO3. It is thermally unstable above −40 °C, violently decomposing to its elements at 0 °C. Dibromine trioxide, syn-BrOBrO2, is also known; it is the anhydride of hypobromous acid and bromic acid. It is an orange crystalline solid which decomposes above −40 °C; if heated too rapidly, it explodes around 0 °C. A few other unstable radical oxides are also known, as are some poorly characterised oxides, such as dibromine pentoxide, tribromine octoxide, and bromine trioxide.The four oxoacids, hypobromous acid (HOBr), bromous acid (HOBrO), bromic acid (HOBrO2), and perbromic acid (HOBrO3), are better studied due to their greater stability, though they are only so in aqueous solution. When bromine dissolves in aqueous solution, the following reactions occur: Hypobromous acid is unstable to disproportionation. 
The hypobromite ions thus formed disproportionate readily to give bromide and bromate: Bromous acids and bromites are very unstable, although the strontium and barium bromites are known. More important are the bromates, which are prepared on a small scale by oxidation of bromide by aqueous hypochlorite, and are strong oxidising agents. Unlike chlorates, which very slowly disproportionate to chloride and perchlorate, the bromate anion is stable to disproportionation in both acidic and aqueous solutions. Bromic acid is a strong acid. Bromides and bromates may comproportionate to bromine as follows: BrO−3 + 5 Br− + 6 H+ → 3 Br2 + 3 H2OThere were many failed attempts to obtain perbromates and perbromic acid, leading to some rationalisations as to why they should not exist, until 1968 when the anion was first synthesised from the radioactive beta decay of unstable 83SeO2−4. Today, perbromates are produced by the oxidation of alkaline bromate solutions by fluorine gas. Excess bromate and fluoride are precipitated as silver bromate and calcium fluoride, and the perbromic acid solution may be purified. The perbromate ion is fairly inert at room temperature but is thermodynamically extremely oxidising, with extremely strong oxidising agents needed to produce it, such as fluorine or xenon difluoride. The Br–O bond in BrO−4 is fairly weak, which corresponds to the general reluctance of the 4p elements arsenic, selenium, and bromine to attain their group oxidation state, as they come after the scandide contraction characterised by the poor shielding afforded by the radial-nodeless 3d orbitals. Organobromine compounds: Like the other carbon–halogen bonds, the C–Br bond is a common functional group that forms part of core organic chemistry. Formally, compounds with this functional group may be considered organic derivatives of the bromide anion. Due to the difference of electronegativity between bromine (2.96) and carbon (2.55), the carbon atom in a C–Br bond is electron-deficient and thus electrophilic. The reactivity of organobromine compounds resembles but is intermediate between the reactivity of organochlorine and organoiodine compounds. For many applications, organobromides represent a compromise of reactivity and cost.Organobromides are typically produced by additive or substitutive bromination of other organic precursors. Bromine itself can be used, but due to its toxicity and volatility, safer brominating reagents are normally used, such as N-bromosuccinimide. The principal reactions for organobromides include dehydrobromination, Grignard reactions, reductive coupling, and nucleophilic substitution.Organobromides are the most common organohalides in nature, even though the concentration of bromide is only 0.3% of that for chloride in sea water, because of the easy oxidation of bromide to the equivalent of Br+, a potent electrophile. The enzyme bromoperoxidase catalyzes this reaction. The oceans are estimated to release 1–2 million tons of bromoform and 56,000 tons of bromomethane annually. Organobromine compounds: An old qualitative test for the presence of the alkene functional group is that alkenes turn brown aqueous bromine solutions colourless, forming a bromohydrin with some of the dibromoalkane also produced. The reaction passes through a short-lived strongly electrophilic bromonium intermediate. This is an example of a halogen addition reaction.
**Comparison of Pascal and Delphi** Comparison of Pascal and Delphi: Devised by Niklaus Wirth in the late 1960s and early 1970s, Pascal is a programming language. Originally produced by Borland Software Corporation, Embarcadero Delphi is composed of an IDE, a set of standard libraries, and a Pascal-based language commonly called either Object Pascal, Delphi Pascal, or simply 'Delphi' (Embarcadero's current documentation refers to it as 'the Delphi language (Object Pascal)'). Since it was first released, it has become the most popular commercial Pascal implementation. Comparison of Pascal and Delphi: While developing Pascal, Wirth employed a bootstrapping procedure in which each newer version of the Pascal compiler was written and compiled with its predecessor. Thus, the 'P2' compiler was written in the dialect compilable by 'P1', 'P3' in turn was written in 'P2', and so on, all the way to 'P5'. The 'P5' compiler implemented Pascal in its final state as defined by Wirth, and subsequently became standardised as 'ISO 7185 Pascal'. Comparison of Pascal and Delphi: The Borland dialect, like the popular UCSD Pascal before it, took the 'P4' version of the language as its basis, rather than Wirth's final revision. After much evolution independent of Standard Pascal, the Borland variant became the basis for Delphi. This article covers the differences between Delphi and Standard Pascal. It does not go into Delphi-specific extensions to the language, which are numerous and still increasing. Exclusive features: The following features are mutually exclusive: the Standard Pascal form is not accepted by Delphi and, vice versa, the Delphi code is not acceptable in Standard Pascal. Modulo with negative dividend Standard Pascal has a Euclidean-like definition of the mod operator, whereas Delphi uses a truncated definition (the difference is illustrated in the sketch below). Nested comments Standard Pascal requires that the comment delimiters { and the digraph (*, as well as } and *), are synonymous with each other. In Delphi, however, a block comment started by { must be closed with a }. The digraph *) will only close a comment that was started with (*. This scheme allows for nested comments at the expense of compiler complexity. Procedural data types The way procedures and functions can be passed as parameters differs: Delphi requires explicit procedural types to be declared where Standard Pascal does not. Conversion of newline characters Computer systems indicate a newline in a wide variety of ways. This affects the internal representation of text files, which are composed of a series of “lines”. In order to relieve the programmer of any associated headaches, Standard Pascal mandates that reading an “end-of-line character” returns a single space character. To distinguish such an “end-of-line” space character from a space character that is genuinely part of the line's payload, the standard function EOLn becomes true. Delphi does not show this behavior: reading a newline will return whatever character sequence represents a newline on the current host system, for example the two char values chr(13) (carriage return) plus chr(10) (line feed). Additional or missing features: The following features are present in one language but missing in the other. Global goto Standard Pascal permits a goto to any label defined in scope. In Delphi a goto must be within the current routine, i.e. it may not leave the begin … end frame. Buffer variables Delphi does not support buffer variables and the associated standard routines get and put.
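To make the modulo difference above concrete, here is a small sketch in Python (used here purely as illustration; the function names standard_pascal_mod and delphi_mod are hypothetical and only approximate the two definitions described in the text):

```python
def standard_pascal_mod(a: int, n: int) -> int:
    # Standard Pascal's "a mod n": defined only for n > 0, and the result
    # always satisfies 0 <= result < n (Euclidean-like), even for a < 0.
    if n <= 0:
        raise ValueError("Standard Pascal requires a positive right operand")
    return a % n  # Python's % already yields a non-negative result for n > 0


def delphi_mod(a: int, n: int) -> int:
    # Delphi's "a mod n": integer division truncates toward zero, so the
    # remainder takes the sign of the dividend.
    q = int(a / n)  # truncated quotient
    return a - n * q


if __name__ == "__main__":
    for a in (-7, 7):
        print(a, "mod 3 ->", standard_pascal_mod(a, 3), "vs", delphi_mod(a, 3))
    # -7 mod 3 -> 2 vs -1
    #  7 mod 3 -> 1 vs 1
```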
Discriminated variant record allocation In Standard Pascal, a program allocating memory for a variant record may indicate a specific variant. This allows implementations to allocate only the memory actually required. Delphi does not support this. Temporary files In Delphi any file must be backed by a file in the file system. That means every file variable needs to be associated with a file name using Delphi’s assign procedure. In contrast, Standard Pascal is usable without file names: for example, a program that simply declares a local file variable and rewrites and writes to it, without ever giving it a name, is legal Standard Pascal but will produce a run-time error with Delphi. Packing Delphi does not implement the standard procedures pack and unpack. Regardless, transferring data between packed and unpacked data types is straightforward, although the implementation might not be as efficient as a compiler-vendor-supplied implementation would be. Missing default write width Delphi does not associate the data type Boolean with a default field width when it is specified as a write/writeLn parameter; Boolean values are written the way character strings are, with no extra padding. Overloading Delphi permits overloading routines. In Standard Pascal identifiers must be unique in every block. Default parameter values Delphi permits default parameters. Peculiar implementation characteristics: Standard write width In Pascal, if the destination file is a text file, the parameters to write/writeLn have an implementation-defined default total width. In Delphi, for integer values this is simply 1, meaning the least possible amount of space is always occupied. Other compilers have used default widths of, for example, 20, giving a neat tabular look at no cost in extra code, as illustrated below.
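The effect of a default write width can be mimicked outside Pascal. The following Python sketch is only an analogy (the width values 20 and 1 stand in for the compiler defaults discussed above and are assumptions made for illustration):

```python
values = [3, 42, 1234, -7]

# Analogy for a compiler whose default integer write width is 20:
# each number is right-aligned in a 20-character field, giving a tabular look.
for v in values:
    print(f"{v:20d}")

# Analogy for Delphi's default width of 1: only the minimal space is used.
print(" ".join(str(v) for v in values))
```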
**Lead (tack)** Lead (tack): A lead, lead line, lead rope (US) or head collar rope (UK), is used to lead an animal such as a horse. Usually, it is attached to a halter. The lead may be integral to the halter or, more often, separate. When separate, it is attached to the halter with a heavy clip or snap so that it can be added or removed as needed. A related term, lead shank or lead chain, refers to a lead line with a chain attached that is used in a variety of ways to safely control possibly difficult or dangerous horses if they will not respond to a regular lead. Variations: A lead can be made from a variety of materials, including cotton, horsehair (woven or braided hair, usually from a horse's tail), leather, nylon or other synthetic materials. Lead ropes, as the name implies, are round and made of various types of rope, usually between 5/8 and 3/4 inch (about 2 cm) in diameter. Lead lines are usually flat webbing or leather, and are generally .75 to 1 inch (1.9 to 2.5 cm) wide, though they may be narrower for show use. Flat lines are less bulky and more comfortable in the hand for leading an animal, but may lack adequate strength for tying. Variations: A lead most often attaches to the halter with a sturdy snap. In some cases, the lead is tied or spliced permanently to the halter. A lead for a horse usually is in the range of 9 to 12 feet (2.7 to 3.7 m) long, but longer and shorter lengths are seen. Variations: The lead shank consists of a lead, usually a flat line, with a chain end, or, less often, thin nylon or rope. The chain end ranges from 18 to 30 inches (46 to 76 cm) long and has a snap or clip on the end that attaches to the halter, and a ring on the other end that is attached to the lead line. Some lead lines are permanently sewn to the chain shank; others have buckles or clips allowing the chain to be removed. Lead shanks are usually used on potentially difficult or dangerous horses, such as stallions or those that, for various reasons, will not respond to a regular lead. For this reason, in some regions, lead shanks are sometimes called "stud chains." They are also commonly seen on in-hand horses of all ages and sexes at some horse shows, as the chain shank can also be used to transmit commands quickly but unobtrusively, encouraging a prompt response from the horse. Variations: For aesthetic purposes, the lead may be the same color as the halter, and sometimes even made of the same materials. Use: Leads are used to lead, hold, or tie an animal or string of animals. A horse may be led by a person on the ground, sometimes called "leading in-hand," or may be led by a rider mounted on another horse, a process called "ponying." A "string" of animals refers to animals tied to one another by their leads, whether the human leads the horses in hand or from another horse. Horses requiring physical conditioning, such as polo ponies or roping horses, may be conditioned in strings. Pack horses are often led in strings on the trail, usually with the handler ponying the first pack horse; for the rest, the lead rope of one horse is tied to the tail or saddle of the horse in front of it. Safety in leading: By tradition, the handler leads a horse from the horse's left ("near") side, though situations may arise when a horse needs to be led from the right ("off") side. In some areas, particularly in the American west, the handler may be in front of the horse while leading, though this technique places the handler at risk because the handler cannot see what the horse is doing.
Safety in leading: When leading a horse, the handler usually holds a single thickness of the lead with the right hand, while carrying the gathered slack of the lead in the left. The excess line should be laid in back-and-forth loops that fall on either side of the hand; holding the excess in circular loops, wrapping, or coiling the lead around the hand is dangerous, the handler can be dragged, injured or even killed if the horse pulls away, tightening the loops of the lead around the hand. Safety in leading: When used to lead a horse in hand, the materials used in a lead, particularly synthetics, may put a handler at risk of a rope burn should the horse pull the lead from the handler. Some handlers wear gloves while leading a horse. Tying: Lead ropes may be used to tie up animals. Common methods of tying off a lead include the halter hitch and a subset of other loop knots, collectively known among equestrians as safety knots and quick release knots. If the animal begins to panic, a person can pull the working end to quickly release the knot before it becomes too tight to untie quickly. The purpose of such a knot is to be easy to untie even when under significant tension. However, some animals do learn to untie themselves and may require the loose end of the rope to be passed through the slipped loop to prevent this occurrence, or be tied with alternative methods of restraint. Tying: Animals, usually horses, may also be placed in crossties, usually for grooming, tacking up and related activities. Crossties are commonly made from two lead ropes, each attached to a wall with the snap end placed on either side of the horse's halter. This technique of restraint keeps the horse from moving around as much as with a single lead, and is particularly handy when people are working on both sides of the animal. However, the method also presents some danger to the animal if it rears or falls. Ideally, crossties are attached at one end with either a quick release panic snap or breakaway mechanism. Tying: Flat lead shanks and thin diameter ropes generally lack the strength to securely tie a large animal such as a horse or cow, but may be more comfortable in a person's hand for leading. Ropes of a thick diameter (3/4 in or more) and high tensile strength generally are adequate to tie a large animal that resists being tied; thinner and/or weaker leads generally will break if significant tension is put on them. A common point of failure is the snap fastener used to attach the lead to the halter. Tying: An animal that panics and attempts to escape while tied with a lead can cause itself serious injury or damage the objects to which it is tied. When an animal is left unattended or if a safety knot is improperly tied and cannot be released, views differ as to whether a lead rope should be made strong enough not to break under tension, or if it should have safety elements that allow it to give way when tension reaches a certain point in order to minimize potential injury. Some people carry a very sharp knife in a belt holster or boot or keep a sharp knife in a convenient location in order to cut a lead in case of emergency. In other cases, particularly on leads used to restrain an animal in a horse trailer, a panic snap may be used, though releasing the snap while under extreme tension also may put a handler at some risk of injury. Use of a shank: Hard jerks on a lead shank can frighten a horse, damage the head, or cause a horse to rear. Light, short tugs are generally enough to get the attention of a horse. 
The chain should only come into action when pulled, not when hanging loosely. The handler does not hold the chain itself, as it can hurt the handler's hands should the horse pull back or move its head quickly. Use of a shank: Chain shank attachments Over the nose: The shank is run through the left ring of the halter (on the side of the face), wrapped once around the noseband of the halter, threaded through the right side nose ring of the halter, and attached on the upper right ring of the halter (near the ears of the horse). In some places, this configuration is called a "stallion chain," though the setup is used on horses of all sexes under some circumstances. If the chain is not attached to the upper right ring, the halter can slide into the horse's eye when the shank is applied. When pressure is applied, the shank puts pressure on the nose of the horse, encouraging the animal to become more aware of the handler's signals. If the shank is used harshly, the handler can damage the horse's nose. An alternative use is to take the chain over the nose, around and under the chin, and attach it back to itself. Use of a shank: Under the chin: The shank is run through the lower left ring of the halter, under the chin, through the lower right ring of the halter, and attached either back to itself or to the upper right ring. This tends to make a horse raise its head, but also has a stronger disciplinary effect. The chain, if too short to be attached back on itself, can also be run through the left ring and attached to the right ring, though the halter may then be moved off-center when the shank is applied, and the snap may be subject to pressure that may cause it to fail. Use of a shank: Chain through mouth: The chain is run through the left lower ring, through the mouth, through the right lower ring, and attached to the upper right ring. This is quite severe and can damage the mouth if used harshly. Chain over gum: Similar to the chain through the mouth, except the chain is rested on the upper gum of the horse's mouth, under the upper lip. This is the most severe attachment and may cause bleeding if the horse resists.
**Bra–ket notation** Bra–ket notation: Bra–ket notation, also called Dirac notation, is a notation for linear algebra and linear operators on complex vector spaces together with their dual space, both in the finite-dimensional and infinite-dimensional case. It is specifically designed to ease the types of calculations that frequently come up in quantum mechanics. Its use in quantum mechanics is quite widespread. The name comes from the English word "bracket". Bra–ket notation: Paul Dirac created bra–ket notation in 1939 to make it easier to write quantum mechanical equations. In his 1939 publication, A New Notation for Quantum Mechanics, Dirac wrote: A Hilbert-space vector, which was denoted in the old notation by the letter ψ, will now be denoted by a special new symbol ⟩. If we are concerned with a particular vector, specified by a label, a say, which would be used as a suffix to the ψ in the old notation, we write it |a⟩. Quantum mechanics: In quantum mechanics, bra–ket notation is used ubiquitously to denote quantum states. The notation uses angle brackets, ⟨ and ⟩, and a vertical bar |, to construct "bras" and "kets". A ket is of the form |v⟩. Mathematically it denotes a vector, v, in an abstract (complex) vector space V, and physically it represents a state of some quantum system. Quantum mechanics: A bra is of the form ⟨f|. Mathematically it denotes a linear form f:V→C, i.e. a linear map that maps each vector in V to a number in the complex plane C. Letting the linear functional ⟨f| act on a vector |v⟩ is written as ⟨f|v⟩ ∈ C. Assume that on V there exists an inner product (⋅,⋅) with antilinear first argument, which makes V an inner product space. Then with this inner product each vector ϕ ≡ |ϕ⟩ can be identified with a corresponding linear form, by placing the vector in the antilinear first slot of the inner product: (ϕ,⋅) ≡ ⟨ϕ|. The correspondence between these notations is then (ϕ,ψ) ≡ ⟨ϕ|ψ⟩. The linear form ⟨ϕ| is a covector to |ϕ⟩, and the set of all covectors forms a subspace of the dual vector space V∨ to the initial vector space V. The purpose of this linear form ⟨ϕ| can now be understood in terms of making projections onto the state ϕ, to find how linearly dependent two states are, etc. Quantum mechanics: For the vector space Cn, kets can be identified with column vectors, and bras with row vectors. Combinations of bras, kets, and linear operators are interpreted using matrix multiplication. If Cn has the standard Hermitian inner product (v,w) = v†w, then under this identification, the identification of kets and bras and vice versa provided by the inner product is taking the Hermitian conjugate (denoted †). Quantum mechanics: It is common to suppress the vector or linear form from the bra–ket notation and only use a label inside the typography for the bra or ket. For example, the spin operator σ^z on a two-dimensional space Δ of spinors has eigenvalues ±1/2 with eigenspinors ψ+, ψ− ∈ Δ. In bra–ket notation, this is typically denoted as ψ+ = |+⟩ and ψ− = |−⟩. As above, kets and bras with the same label are interpreted as kets and bras corresponding to each other using the inner product. In particular, when also identified with row and column vectors, kets and bras with the same label are identified with Hermitian conjugate column and row vectors.
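As a minimal worked illustration of the conventions just described (standard spin-1/2 conventions; the explicit column/row identification is written out here only as an illustration, not quoted from the text):

```latex
% Spin-1/2 example; assumes the amsmath package.
\begin{gather*}
  \hat{\sigma}_z\,|{+}\rangle = +\tfrac{1}{2}\,|{+}\rangle , \qquad
  \hat{\sigma}_z\,|{-}\rangle = -\tfrac{1}{2}\,|{-}\rangle , \\
  |{+}\rangle \cong \begin{pmatrix} 1 \\ 0 \end{pmatrix} , \qquad
  |{-}\rangle \cong \begin{pmatrix} 0 \\ 1 \end{pmatrix} , \qquad
  \langle{+}| \cong \begin{pmatrix} 1 & 0 \end{pmatrix} , \\
  \langle{+}|{+}\rangle = 1 , \qquad \langle{+}|{-}\rangle = 0 .
\end{gather*}
```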
Quantum mechanics: Bra–ket notation was effectively established in 1939 by Paul Dirac; it is thus also known as Dirac notation, despite the notation having a precursor in Hermann Grassmann's use of [ϕ∣ψ] for inner products nearly 100 years earlier. Vector spaces: Vectors vs kets In mathematics, the term "vector" is used for an element of any vector space. In physics, however, the term "vector" tends to refer almost exclusively to quantities like displacement or velocity, which have components that relate directly to the three dimensions of space, or relativistically, to the four of spacetime. Such vectors are typically denoted with overarrows (r→), boldface (p) or indices (vμ). Vector spaces: In quantum mechanics, a quantum state is typically represented as an element of a complex Hilbert space, for example, the infinite-dimensional vector space of all possible wavefunctions (square integrable functions mapping each point of 3D space to a complex number) or some more abstract Hilbert space constructed more algebraically. To distinguish this type of vector from those described above, it is common and useful in physics to denote an element ϕ of an abstract complex vector space as a ket |ϕ⟩, to refer to it as a "ket" rather than as a vector, and to pronounce it "ket-ϕ" or "ket-A" for |A⟩. Vector spaces: Symbols, letters, numbers, or even words—whatever serves as a convenient label—can be used as the label inside a ket, with the |⟩ making clear that the label indicates a vector in vector space. In other words, the symbol "|A⟩" has a recognizable mathematical meaning as to the kind of variable being represented, while just the "A" by itself does not. For example, |1⟩ + |2⟩ is not necessarily equal to |3⟩. Nevertheless, for convenience, there is usually some logical scheme behind the labels inside kets, such as the common practice of labeling energy eigenkets in quantum mechanics through a listing of their quantum numbers. At its simplest, the label inside the ket is the eigenvalue of a physical operator, such as x^, p^, L^z, etc. Vector spaces: Notation Since kets are just vectors in a Hermitian vector space, they can be manipulated using the usual rules of linear algebra. For example: |A⟩ = |B⟩ + |C⟩; |C⟩ = (−1 + 2i)|D⟩; |D⟩ = ∫ e^(−x²) |x⟩ dx, with the integral running over all real x. Note how the last expression involves infinitely many different kets, one for each real number x. Vector spaces: Since the ket is an element of a vector space, a bra ⟨A| is an element of its dual space, i.e. a bra is a linear functional which is a linear map from the vector space to the complex numbers. Thus, it is useful to think of kets and bras as being elements of different vector spaces (see below, however), with both being different useful concepts. Vector spaces: A bra ⟨ϕ| and a ket |ψ⟩ (i.e. a functional and a vector) can be combined into an operator |ψ⟩⟨ϕ| of rank one with outer product |ψ⟩⟨ϕ| : |ξ⟩ ↦ |ψ⟩⟨ϕ|ξ⟩. Vector spaces: Inner product and bra–ket identification on Hilbert space The bra–ket notation is particularly useful in Hilbert spaces, which have an inner product that allows Hermitian conjugation and identifying a vector with a continuous linear functional, i.e. a ket with a bra, and vice versa (see Riesz representation theorem). The inner product on Hilbert space (⋅,⋅) (with the first argument antilinear, as preferred by physicists) is fully equivalent to an (antilinear) identification between the space of kets and that of bras in the bra–ket notation: for a vector ket ϕ = |ϕ⟩ define a functional (i.e.
bra) fϕ = ⟨ϕ| by fϕ(ψ) =: (ϕ,ψ) =: ⟨ϕ|ψ⟩. Bras and kets as row and column vectors In the simple case where we consider the vector space Cn, a ket can be identified with a column vector, and a bra with a row vector. If moreover we use the standard Hermitian inner product on Cn, the bra corresponding to a ket is its conjugate transpose; in particular, a bra ⟨m| and a ket |m⟩ with the same label are conjugate transposes of each other. Moreover, conventions are set up in such a way that writing bras, kets, and linear operators next to each other simply implies matrix multiplication. In particular the outer product |ψ⟩⟨ϕ| of a column and a row vector ket and bra can be identified with matrix multiplication (column vector times row vector equals matrix). Vector spaces: For a finite-dimensional vector space, using a fixed orthonormal basis, the inner product can be written as a matrix multiplication of a row vector with a column vector: ⟨A|B⟩ = A†B = a1*b1 + a2*b2 + ⋯ + aN*bN. Based on this, the bra ⟨A| can be identified with the row vector (a1*, a2*, …, aN*) and the ket |B⟩ with the column vector whose entries are b1, b2, …, bN, and then it is understood that a bra next to a ket implies matrix multiplication. Vector spaces: The conjugate transpose (also called Hermitian conjugate) of a bra is the corresponding ket and vice versa, because if one starts with the bra, performs a complex conjugation, and then a matrix transpose, one ends up with the ket. Writing elements of a finite-dimensional (or, mutatis mutandis, countably infinite) vector space as a column vector of numbers requires picking a basis. Picking a basis is not always helpful because quantum mechanics calculations frequently involve switching between different bases (e.g. position basis, momentum basis, energy eigenbasis), and one can write something like "|m⟩" without committing to any particular basis. In situations involving two different important basis vectors, the basis vectors can be indicated explicitly in the notation and will be referred to here simply as "|−⟩" and "|+⟩". Vector spaces: Non-normalizable states and non-Hilbert spaces Bra–ket notation can be used even if the vector space is not a Hilbert space. Vector spaces: In quantum mechanics, it is common practice to write down kets which have infinite norm, i.e. non-normalizable wavefunctions. Examples include states whose wavefunctions are Dirac delta functions or infinite plane waves. These do not, technically, belong to the Hilbert space itself. However, the definition of "Hilbert space" can be broadened to accommodate these states (see the Gelfand–Naimark–Segal construction or rigged Hilbert spaces). The bra–ket notation continues to work in an analogous way in this broader context. Vector spaces: Banach spaces are a different generalization of Hilbert spaces. In a Banach space B, the vectors may be notated by kets and the continuous linear functionals by bras. Over any vector space without topology, we may also notate the vectors by kets and the linear functionals by bras. In these more general contexts, the bracket does not have the meaning of an inner product, because the Riesz representation theorem does not apply. Usage in quantum mechanics: The mathematical structure of quantum mechanics is based in large part on linear algebra: Wave functions and other quantum states can be represented as vectors in a complex Hilbert space. (The exact structure of this Hilbert space depends on the situation.) In bra–ket notation, for example, an electron might be in the "state" |ψ⟩. (Technically, the quantum states are rays of vectors in the Hilbert space, as c|ψ⟩ corresponds to the same state for any nonzero complex number c.)
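A concrete low-dimensional sketch of the row-/column-vector identification described earlier in this section (the numerical entries are arbitrary and chosen only for illustration):

```latex
% Two-dimensional worked example; assumes the amsmath package.
\begin{gather*}
  |\psi\rangle \cong \begin{pmatrix} 1 \\ i \end{pmatrix} , \qquad
  \langle\psi| = |\psi\rangle^{\dagger} \cong \begin{pmatrix} 1 & -i \end{pmatrix} , \qquad
  \langle\psi|\psi\rangle = 1\cdot 1 + (-i)\cdot i = 2 , \\
  \langle\phi| \cong \begin{pmatrix} \phi_1^* & \phi_2^* \end{pmatrix}
  \quad\Longrightarrow\quad
  |\psi\rangle\langle\phi| \cong
  \begin{pmatrix} \phi_1^* & \phi_2^* \\ i\phi_1^* & i\phi_2^* \end{pmatrix}
  \quad\text{(a column times a row gives a matrix).}
\end{gather*}
```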
Quantum superpositions can be described as vector sums of the constituent states. For example, an electron in the state 1/√2|1⟩ + i/√2|2⟩ is in a quantum superposition of the states |1⟩ and |2⟩. Usage in quantum mechanics: Measurements are associated with linear operators (called observables) on the Hilbert space of quantum states. Dynamics are also described by linear operators on the Hilbert space. For example, in the Schrödinger picture, there is a linear time evolution operator U with the property that if an electron is in state |ψ⟩ right now, at a later time it will be in the state U|ψ⟩, the same U for every possible |ψ⟩. Usage in quantum mechanics: Wave function normalization is scaling a wave function so that its norm is 1. Since virtually every calculation in quantum mechanics involves vectors and linear operators, it can involve, and often does involve, bra–ket notation. A few examples follow: Spinless position–space wave function The Hilbert space of a spin-0 point particle is spanned by a "position basis" { |r⟩ }, where the label r extends over the set of all points in position space. This label is the eigenvalue of the position operator acting on such a basis state, r^|r⟩ = r|r⟩. Since there are an uncountably infinite number of vector components in the basis, this is an uncountably infinite-dimensional Hilbert space. The dimensions of the Hilbert space (usually infinite) and position space (usually 1, 2 or 3) are not to be conflated. Usage in quantum mechanics: Starting from any ket |Ψ⟩ in this Hilbert space, one may define a complex scalar function of r, known as a wavefunction, Ψ(r) = ⟨r|Ψ⟩. On the left-hand side, Ψ(r) is a function mapping any point in space to a complex number; on the right-hand side, |Ψ⟩ = ∫ d³r Ψ(r) |r⟩ is a ket consisting of a superposition of kets with relative coefficients specified by that function. Usage in quantum mechanics: It is then customary to define linear operators acting on wavefunctions in terms of linear operators acting on kets, by (AΨ)(r) = ⟨r|A|Ψ⟩. For instance, the momentum operator p^ has the coordinate representation ⟨r|p^|Ψ⟩ = −iℏ∇Ψ(r). One occasionally even encounters an expression such as ∇|Ψ⟩, though this is something of an abuse of notation. The differential operator must be understood to be an abstract operator, acting on kets, that has the effect of differentiating wavefunctions once the expression is projected onto the position basis, ∇⟨r|Ψ⟩, even though, in the momentum basis, this operator amounts to a mere multiplication operator (by ip/ℏ). That is to say, ∇ = ip^/ℏ, or p^ = −iℏ∇. Overlap of states In quantum mechanics the expression ⟨φ|ψ⟩ is typically interpreted as the probability amplitude for the state ψ to collapse into the state φ. Mathematically, this means the coefficient for the projection of ψ onto φ. It is also described as the projection of state ψ onto state φ. Usage in quantum mechanics: Changing basis for a spin-1/2 particle A stationary spin-1⁄2 particle has a two-dimensional Hilbert space. One orthonormal basis is {|↑z⟩, |↓z⟩}, where |↑z⟩ is the state with a definite value of the spin operator Sz equal to +1⁄2 and |↓z⟩ is the state with a definite value of the spin operator Sz equal to −1⁄2. Since these are a basis, any quantum state of the particle can be expressed as a linear combination (i.e., quantum superposition) of these two states, |ψ⟩ = aψ|↑z⟩ + bψ|↓z⟩, where aψ and bψ are complex numbers. A different basis for the same Hilbert space is {|↑x⟩, |↓x⟩}, defined in terms of Sx rather than Sz; one standard choice of phases is written out in the sketch below.
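The Sx eigenbasis can be written out in terms of the Sz eigenbasis; the sketch below uses one standard phase convention (the particular phases are an assumption made for illustration — the text only asserts that some such relationship exists):

```latex
% One standard phase convention; assumes the amsmath package.
\begin{gather*}
  |{\uparrow}_x\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow}_z\rangle + |{\downarrow}_z\rangle\bigr) , \qquad
  |{\downarrow}_x\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow}_z\rangle - |{\downarrow}_z\rangle\bigr) , \\
  c_\psi = \langle{\uparrow}_x|\psi\rangle = \tfrac{1}{\sqrt{2}}\,(a_\psi + b_\psi) , \qquad
  d_\psi = \langle{\downarrow}_x|\psi\rangle = \tfrac{1}{\sqrt{2}}\,(a_\psi - b_\psi) .
\end{gather*}
```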
Again, any state of the particle can be expressed as a linear combination of these two: |ψ⟩ = cψ|↑x⟩ + dψ|↓x⟩. In vector form, the same state is written as the column vector (aψ, bψ) or as the column vector (cψ, dψ), depending on which basis you are using. In other words, the "coordinates" of a vector depend on the basis used. There is a mathematical relationship between aψ, bψ, cψ and dψ; see change of basis. Pitfalls and ambiguous uses: There are some conventions and uses of notation that may be confusing or ambiguous for the non-initiated or early student. Pitfalls and ambiguous uses: Separation of inner product and vectors A cause for confusion is that the notation does not separate the inner-product operation from the notation for a (bra) vector. If a (dual space) bra-vector is constructed as a linear combination of other bra-vectors (for instance when expressing it in some basis) the notation creates some ambiguity and hides mathematical details. We can compare bra–ket notation to using bold for vectors, such as ψ, and (⋅,⋅) for the inner product. Consider a dual-space bra-vector expressed in the basis {|en⟩}, say ⟨ψ| = Σn ψn⟨en|. It has to be determined by convention whether the complex numbers {ψn} are inside or outside of the inner product, and each convention gives different results. Pitfalls and ambiguous uses: Reuse of symbols It is common to use the same symbol for labels and constants. For example, α^|α⟩ = α|α⟩, where the symbol α is used simultaneously as the name of the operator α^, its eigenvector |α⟩ and the associated eigenvalue α. Sometimes the hat is also dropped for operators, and one can see notation such as A|a⟩ = a|a⟩. Hermitian conjugate of kets It is common to see the usage |ψ⟩† = ⟨ψ|, where the dagger (†) corresponds to the Hermitian conjugate. This is however not correct in a technical sense, since the ket, |ψ⟩, represents a vector in a complex Hilbert space H, and the bra, ⟨ψ|, is a linear functional on vectors in H. In other words, |ψ⟩ is just a vector, while ⟨ψ| is the combination of a vector and an inner product. Pitfalls and ambiguous uses: Operations inside bras and kets This is done for a fast notation of scaling vectors. For instance, if the vector |α⟩ is scaled by 1/2, it may be denoted |α/2⟩. This can be ambiguous since α is simply a label for a state, and not a mathematical object on which operations can be performed. This usage is more common when denoting vectors as tensor products, where part of the labels are moved outside the designated slot, e.g. |α⟩ = |α/2⟩1 ⊗ |α/2⟩2. Linear operators: Linear operators acting on kets A linear operator is a map that inputs a ket and outputs a ket. (In order to be called "linear", it is required to have certain properties.) In other words, if A^ is a linear operator and |ψ⟩ is a ket-vector, then A^|ψ⟩ is another ket-vector. In an N-dimensional Hilbert space, we can impose a basis on the space and represent |ψ⟩ in terms of its coordinates as an N×1 column vector. Using the same basis for A^, it is represented by an N×N complex matrix. The ket-vector A^|ψ⟩ can now be computed by matrix multiplication. Linear operators are ubiquitous in the theory of quantum mechanics. For example, observable physical quantities are represented by self-adjoint operators, such as energy or momentum, whereas transformative processes are represented by unitary linear operators such as rotation or the progression of time. Linear operators: Linear operators acting on bras Operators can also be viewed as acting on bras from the right hand side.
Specifically, if A is a linear operator and ⟨φ| is a bra, then ⟨φ|A is another bra defined by the rule (⟨φ|A)|ψ⟩ = ⟨φ|(A|ψ⟩) (in other words, a function composition). This expression is commonly written as ⟨φ|A|ψ⟩ (cf. energy inner product). In an N-dimensional Hilbert space, ⟨φ| can be written as a 1 × N row vector, and A (as in the previous section) is an N × N matrix. Then the bra ⟨φ|A can be computed by normal matrix multiplication. Linear operators: If the same state vector appears on both bra and ket side, then this expression, ⟨ψ|A|ψ⟩, gives the expectation value, or mean or average value, of the observable represented by operator A for the physical system in the state |ψ⟩. Linear operators: Outer products A convenient way to define linear operators on a Hilbert space H is given by the outer product: if ⟨ϕ| is a bra and |ψ⟩ is a ket, the outer product |ψ⟩⟨ϕ| denotes the rank-one operator with the rule |ξ⟩ ↦ |ψ⟩⟨ϕ|ξ⟩. For a finite-dimensional vector space, the outer product can be understood as the simple matrix multiplication of a column vector with a row vector: the outer product is an N × N matrix, as expected for a linear operator. Linear operators: One of the uses of the outer product is to construct projection operators. Given a ket |ψ⟩ of norm 1, the orthogonal projection onto the subspace spanned by |ψ⟩ is |ψ⟩⟨ψ|. This is an idempotent in the algebra of observables that acts on the Hilbert space. Linear operators: Hermitian conjugate operator Just as kets and bras can be transformed into each other (making |ψ⟩ into ⟨ψ|), the element from the dual space corresponding to A|ψ⟩ is ⟨ψ|A†, where A† denotes the Hermitian conjugate (or adjoint) of the operator A. In other words, ⟨ψ|A† = (A|ψ⟩)†. If A is expressed as an N × N matrix, then A† is its conjugate transpose. Linear operators: Self-adjoint operators, where A = A†, play an important role in quantum mechanics; for example, an observable is always described by a self-adjoint operator. If A is a self-adjoint operator, then ⟨ψ|A|ψ⟩ is always a real number (not complex). This implies that expectation values of observables are real. Properties: Bra–ket notation was designed to facilitate the formal manipulation of linear-algebraic expressions. Some of the properties that allow this manipulation are listed herein. In what follows, c1 and c2 denote arbitrary complex numbers, c* denotes the complex conjugate of c, A and B denote arbitrary linear operators, and these properties are to hold for any choice of bras and kets. Properties: Linearity Since bras are linear functionals, ⟨φ|(c1|ψ1⟩ + c2|ψ2⟩) = c1⟨φ|ψ1⟩ + c2⟨φ|ψ2⟩. By the definition of addition and scalar multiplication of linear functionals in the dual space, (c1⟨φ1| + c2⟨φ2|)|ψ⟩ = c1⟨φ1|ψ⟩ + c2⟨φ2|ψ⟩. Associativity Given any expression involving complex numbers, bras, kets, inner products, outer products, and/or linear operators (but not addition), written in bra–ket notation, the parenthetical groupings do not matter (i.e., the associative property holds). For example: ⟨ψ|(A|ϕ⟩) = (⟨ψ|A)|ϕ⟩, written simply ⟨ψ|A|ϕ⟩, and (A|ψ⟩)⟨ϕ| = A(|ψ⟩⟨ϕ|), written simply A|ψ⟩⟨ϕ|, and so forth. The expressions on the right (with no parentheses whatsoever) are allowed to be written unambiguously because of the equalities on the left. Note that the associative property does not hold for expressions that include nonlinear operators, such as the antilinear time reversal operator in physics. Properties: Hermitian conjugation Bra–ket notation makes it particularly easy to compute the Hermitian conjugate (also called dagger, and denoted †) of expressions. The formal rules are: The Hermitian conjugate of a bra is the corresponding ket, and vice versa. The Hermitian conjugate of a complex number is its complex conjugate.
Properties: The Hermitian conjugate of the Hermitian conjugate of anything (linear operators, bras, kets, numbers) is itself—i.e., (x†)† = x. Given any combination of complex numbers, bras, kets, inner products, outer products, and/or linear operators, written in bra–ket notation, its Hermitian conjugate can be computed by reversing the order of the components and taking the Hermitian conjugate of each. These rules are sufficient to formally write the Hermitian conjugate of any such expression; some examples are as follows. Kets: (c1|ψ1⟩ + c2|ψ2⟩)† = c1*⟨ψ1| + c2*⟨ψ2|. Inner products: ⟨φ|ψ⟩† = ⟨ψ|φ⟩; note that ⟨φ|ψ⟩ is a scalar, so the Hermitian conjugate is just the complex conjugate, i.e., ⟨φ|ψ⟩* = ⟨ψ|φ⟩. Matrix elements: ⟨φ|A|ψ⟩† = ⟨ψ|A†|φ⟩. Outer products: (|φ⟩⟨ψ|)† = |ψ⟩⟨φ|. Composite bras and kets: Two Hilbert spaces V and W may form a third space V ⊗ W by a tensor product. In quantum mechanics, this is used for describing composite systems. If a system is composed of two subsystems described in V and W respectively, then the Hilbert space of the entire system is the tensor product of the two spaces. (The exception to this is if the subsystems are actually identical particles. In that case, the situation is a little more complicated.) If |ψ⟩ is a ket in V and |φ⟩ is a ket in W, the tensor product of the two kets is a ket in V ⊗ W. This is written in various notations: |ψ⟩|ϕ⟩, |ψ⟩⊗|ϕ⟩, |ψϕ⟩, |ψ,ϕ⟩. Composite bras and kets: See quantum entanglement and the EPR paradox for applications of this product. The unit operator: Consider a complete orthonormal system (basis), {|ei⟩}, for a Hilbert space H, with respect to the norm from an inner product ⟨·,·⟩. From basic functional analysis, it is known that any ket |ψ⟩ can also be written as |ψ⟩ = Σi ⟨ei|ψ⟩ |ei⟩, with ⟨·|·⟩ the inner product on the Hilbert space. From the commutativity of kets with (complex) scalars, it follows that Σi |ei⟩⟨ei| must be the identity operator, which sends each vector to itself. This, then, can be inserted in any expression without affecting its value; for example ⟨v|w⟩ = Σi ⟨v|ei⟩⟨ei|w⟩ = ⟨v|ei⟩⟨ei|w⟩, where, in the last expression, the Einstein summation convention has been used to avoid clutter. The unit operator: In quantum mechanics, it often occurs that little or no information about the inner product ⟨ψ|φ⟩ of two arbitrary (state) kets is present, while it is still possible to say something about the expansion coefficients ⟨ψ|ei⟩ = ⟨ei|ψ⟩* and ⟨ei|φ⟩ of those vectors with respect to a specific (orthonormalized) basis. In this case, it is particularly useful to insert the unit operator into the bracket one time or more. The unit operator: For more information, see Resolution of the identity; in the continuous position basis the identity reads 1 = ∫ dx |x⟩⟨x|. Since ⟨x′|x⟩ = δ(x − x′), plane waves follow, ⟨x|p⟩ = e^(ixp/ℏ)/√(2πℏ). In his book (1958), Ch. III.20, Dirac defines the standard ket which, up to a normalization, is the translationally invariant momentum eigenstate |ϖ⟩ = lim p→0 |p⟩ in the momentum representation, i.e., p^|ϖ⟩ = 0. Consequently, the corresponding wavefunction is a constant, ⟨x|ϖ⟩√(2πℏ) = 1. Typically, when all matrix elements of an operator, such as ⟨x|A|x′⟩, are available, this resolution serves to reconstitute the full operator, A = ∫ dx dx′ |x⟩⟨x|A|x′⟩⟨x′|. Notation used by mathematicians: The object physicists are considering when using bra–ket notation is a Hilbert space (a complete inner product space). Notation used by mathematicians: Let (H,⟨⋅,⋅⟩) be a Hilbert space and h ∈ H a vector in H. What physicists would denote by |h⟩ is the vector itself. That is, |h⟩ ∈ H. Let H* be the dual space of H. This is the space of linear functionals on H.
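A short sketch of how the unit operator is inserted in practice, using the orthonormal basis {|ei⟩} introduced above (a standard manipulation, written out here only as an illustration):

```latex
% Inserting the resolution of the identity twice; assumes the amsmath package.
\begin{gather*}
  I = \sum_i |e_i\rangle\langle e_i| , \\
  \langle\phi|A|\psi\rangle
  = \langle\phi|\, I\, A\, I\, |\psi\rangle
  = \sum_{i,j} \langle\phi|e_i\rangle\,\langle e_i|A|e_j\rangle\,\langle e_j|\psi\rangle .
\end{gather*}
```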
The embedding Φ:H↪H∗ is defined by Φ(h) = φh, where for every h ∈ H the linear functional φh:H→C satisfies for every g ∈ H the functional equation φh(g) = ⟨h,g⟩ = ⟨h|g⟩. Notational confusion arises when identifying φh and g with ⟨h| and |g⟩ respectively. This is because of literal symbolic substitutions. Let φh = H = ⟨h| and let g = G = |g⟩. This gives φh(g) = H(g) = H(G) = ⟨h|(|g⟩); one ignores the parentheses and removes the double bars, leaving ⟨h|g⟩. Notation used by mathematicians: Moreover, mathematicians usually write the dual entity not at the first place, as the physicists do, but at the second one, and they usually use not an asterisk but an overline (which the physicists reserve for averages and the Dirac spinor adjoint) to denote complex conjugate numbers; i.e., for the same scalar product mathematicians usually write ⟨g, h⟩, with the dual (conjugated) entity h in the second slot, whereas physicists would write ⟨h|g⟩, with the dual entity first.
**G beta-gamma complex** G beta-gamma complex: The G beta-gamma complex (Gβγ) is a tightly bound dimeric protein complex, composed of one Gβ and one Gγ subunit, and is a component of heterotrimeric G proteins. Heterotrimeric G proteins, also called guanosine nucleotide-binding proteins, consist of three subunits, called alpha, beta, and gamma subunits, or Gα, Gβ, and Gγ. When a G protein-coupled receptor (GPCR) is activated, Gα dissociates from Gβγ, allowing both subunits to perform their respective downstream signaling effects. One of the major functions of Gβγ is the inhibition of the Gα subunit. History: The individual subunits of the G protein complex were first identified in 1980 when the regulatory component of adenylate cyclase was successfully purified, yielding three polypeptides of different molecular weights. Initially, it was thought that Gα, the largest subunit, was the major effector regulatory subunit, and that Gβγ was largely responsible for inactivating the Gα subunit and enhancing membrane binding. However, downstream signalling effects of Gβγ were later discovered when the purified Gβγ complex was found to activate a cardiac muscarinic K+ channel. Shortly after, the Gβγ complex associated with a mating factor receptor-coupled G protein in yeast was found to initiate a pheromone response. Although these hypotheses were initially controversial, Gβγ has since been shown to directly regulate as many different protein targets as the Gα subunit.Recently, possible roles of the Gβγ complex in retinal rod photoreceptors have been investigated, with some evidence for the maintenance of Gα inactivation. However, these conclusions were drawn from in vitro experiments under unphysiological conditions, and the physiological role of the Gβγ complex in vision is still unclear. Nevertheless, recent in vivo findings demonstrate the necessity of the transducin Gβγ complex in the functioning of rod photoreceptors under low light conditions. Structure: The Gβγ subunit is a dimer composed of two polypeptides, however it acts functionally as a monomer, as the individual subunits do not separate, and have not been found to function independently. Structure: The Gβ subunit is a member of the β-propeller family of proteins, which typically possess 4-8 antiparallel β-sheets arranged in the shape of a propeller. Gβ contains a 7 bladed β-propeller, each blade arranged around a central axis and composed of 4 antiparallel β-sheets. The amino acid sequence contains 7 WD repeat motifs of about 40 amino acids, each highly conserved and possessing the Trp-Asp dipeptide that gives the repeat its name. The Gγ subunit is considerably smaller than Gβ, and is unstable on its own, requiring interaction with Gβ to fold, explaining the close association of the dimer. In the Gβγ dimer, the Gγ subunit wraps around the outside of Gβ, interacting through hydrophobic associations, and exhibits no tertiary interactions with itself. The N terminus helical domains of the two subunits form a coiled coil with one another that typically extends away from the core of the dimer. To date, 5 β-subunit and 11 γ-subunit genes and have been identified in mammals. The Gβ genes have very similar sequences, while significantly greater variation is seen in the Gγ genes, indicating that the functional specificity of the Gβγ dimer may be dependent on the type of Gγ subunit involved. 
Structure: Of additional structural interest is the discovery of a so-called “hotspot” present on the surface of the Gβγ dimer; a specific site of the protein that binds to diverse range of peptides and is thought to be a contributing factor in the ability of Gβγ to interact with a wide variety of effectors. Structure: Synthesis and Modification Synthesis of the subunits occurs in the cytosol. Folding of the β-subunit is thought to be aided by the chaperone CCT (chaperonin containing tailless-complex polypeptide 1), which also prevents aggregation of folded subunits. A second chaperone, PhLP (phosducin-like protein), binds to the CCT/Gβ complex, and is phosphorylated, allowing CCT to dissociate and Gγ to bind. Finally, PhLP is released, exposing the binding site for Gα, allowing for formation of the final trimer at the endoplasmic reticulum, where it is targeted to the plasma membrane. Gγ subunits are known to be prenylated (covalently modified by the addition of lipid moieties) prior to addition to Gβ, which itself has not been found to be modified. This prenylation is thought to be involved in directing the interaction of the subunit both with membrane lipids and other proteins. Function: The Gβγ complex is an essential element in the GPCR signaling cascade. It has two main states for which it performs different functions. When Gβγ is interacting with Gα it functions as a negative regulator. In the heterotrimer form, the Gβγ dimer increases the affinity of Gα for GDP, which causes the G protein to be in an inactive state. For the Gα subunit to become active, the nucleotide exchange must be induced by the GPCR. Studies have shown that it is the Gβγ dimer that demonstrates specificity for the appropriate receptor and that the Gγ subunit actually enhances the interaction of the Gα subunit with the GPCR. The GPCR is activated by an extracellular ligand and subsequently activates the G protein heterotrimer by causing a conformational change in the Gα subunit. This causes the replacement of GDP with GTP as well as the physical dissociation of the Gα and the Gβγ complex. Function: Once separated, both Gα and Gβγ are free to participate in their own distinct signaling pathways. Gβγ does not go through any conformational changes when it dissociates from Gα and it acts as a signaling molecule as a dimer. The Gβγ dimer has been found to interact with many different effector molecules by protein-protein interactions. Different combinations of the Gβ and Gγ subtypes can influence different effectors and work exclusively or synergistically with the Gα subunit.Gβγ signaling is diverse, inhibiting or activating many downstream events depending on its interaction with different effectors. Researchers have discovered that Gβγ regulates ion channels, such as G protein-gated inward rectifier channels, as well as calcium channels. In human PBMC, Gβγ complex has been shown to activate phosphorylation of ERK1/2. Another example of Gβγ signaling is its effect of activating or inhibiting adenylyl cyclase leading to the intracellular increase or decrease of the secondary messenger cyclic AMP. For more examples of Gβγ signaling see table. However, the full extent of Gβγ signaling has not yet been discovered. Medical implications: Drug design The Gβγ subunit plays a variety of roles in cell signalling processes and as such researchers are now examining its potential as a therapeutic drug target for the treatment of many medical conditions. 
However, it is recognized that there are a number of considerations to keep in mind when designing a drug which targets the Gβγ subunit: The Gβγ subunit is essential for the formation of heterotrimeric G protein through its association with the Gα subunit allowing the G proteins coupling to the GPCR. Therefore, any agent inhibiting the Gβγ subunits signalling effects must not interfere with the heterotrimeric G protein formation or Gα subunit signalling. Medical implications: Gβγ expression is universal throughout almost all the cells of the body so any agent acting to inhibit this subunit could elicit numerous side effects. Small molecule inhibitors that target the coupling of Gβγ to specific effectors and do not interfere with normal G protein cycling/ heterotrimeric formation, have the potential to work as therapeutic agents in treating some specific diseases. Targeting the Gβγ subunit in treatment Research has been conducted on how altering the actions of Gβγ subunits could be beneficial for the treatment of certain medical conditions. Gβγ signalling has been examined for its role in a variety of conditions including heart failure, inflammation and leukemia. Medical implications: Heart failure Heart failure can be characterized by a loss of β adrenergic receptor (βAR) signalling in heart cells. When the βAR is stimulated by catecholamines such as adrenaline and noradrenaline, there is normally an increase in the contractility of the heart. However, in heart failure there are sustained and elevated levels of catecholamines which result in chronic desensitization of the βAR receptor. This leads to a decrease in the strength of heart contractions. Some research suggests that this chronic desensitization is due to the over activation of a kinase, G protein-coupled receptor kinase 2 (GRK2), which phosphorylates and deactivates certain G protein coupled receptors . When the G protein coupled receptor is activated, the Gβγ subunit recruits GRK2 which then phosphorylates and desensitizes GPCRs like the βAR. Preventing the interaction of the βγ subunit with GRK2 has therefore been studied as a potential target for increasing heart contractile function. The developed molecule GRK2ct is a protein inhibitor which inhibits the signalling properties of Gβγ subunit but does not interfere with alpha subunit signalling. The over expression of GRK2ct has been shown to significantly rescue cardiac function in murine models of heart failure by blocking Gβγ subunit signalling. In another study, biopsies were taken from patients with heart failure and virally induced overexpression of GRK2ct in the heart myocytes. Other tests showed an improvement in cardiac cell contractile function by inhibiting Gβγ. Medical implications: Inflammation When particular GPCRs are activated by their specific chemokines Gβγ directly activates PI3Kγ which is involved in the recruitment of neutrophils that contribute to inflammation. It has been discovered that the inhibition of PI3Kγ significantly reduces inflammation. PI3Kγ is the intended target molecule in the prevention of inflammation as it is the common signalling effector of many different chemokine and receptor types involved in promoting inflammation. Although PI3Kγ is the intended target there are other isoforms of PI3 which perform different functions from PI3Kγ. 
Since PI3Kγ is specifically regulated by Gβγ, while other isoforms of PI3 are largely regulated by other molecules, inhibiting Gβγ signalling would provide the desired specificity of a therapeutic agent designed to treat inflammation. Medical implications: Leukemia The Gβγ subunit has been shown to activate a Rho guanine nucleotide exchange factor (RhoGef) gene PLEKHG2 which is upregulated in a number of leukemia cell lines and mouse models of leukemia. Lymphocyte chemotaxis as a result of Rac and CDC42 activation as well as actin polymerization is believed to be regulated by the Gβγ activated RhoGef. Therefore, a drug inhibiting the Gβγ could play a role in the treatment of leukemia.
**Cinoxacin** Cinoxacin: Cinoxacin is a quinolone antibiotic that has been discontinued in the U.K. as well as the United States, both as a branded drug and as a generic. The marketing authorization of cinoxacin has been suspended throughout the EU. Cinoxacin was an older synthetic antimicrobial related to the quinolone class of antibiotics with activity similar to oxolinic acid and nalidixic acid. It was commonly used thirty years ago to treat urinary tract infections in adults. There are reports that cinoxacin had also been used to treat initial and recurrent urinary tract infections and bacterial prostatitis in dogs; however, this veterinary use was never approved by the United States Food and Drug Administration (FDA). In complicated UTI, the older gyrase-inhibitors such as cinoxacin are no longer indicated. History: Cinoxacin is one of the original quinolone drugs, introduced in the 1970s and commonly referred to as the first-generation quinolones. This first generation also included other quinolone drugs such as pipemidic acid and oxolinic acid, but proved to offer only marginal improvements over nalidixic acid. Cinoxacin is similar chemically (and in antimicrobial activity) to oxolinic acid and nalidixic acid. Relative to nalidixic acid, cinoxacin was found to have a slightly greater inhibitory and bactericidal activity. Cinoxacin was patented in 1972 and assigned to Eli Lilly. Eli Lilly obtained approval from the FDA to market cinoxacin in the United States as Cinobac on June 13, 1980. Prior to this, Cinobac was marketed in the U.K. and Switzerland in 1979. History: Oclassen Pharmaceuticals (Oclassen Dermatologics) commenced sales of Cinobac in the United States and Canada in September 1992, under an agreement with Eli Lilly which granted Oclassen exclusive United States and Canadian distribution rights. Oclassen promoted Cinobac primarily to urologists for the outpatient treatment of initial and recurrent urinary tract infections and prophylaxis. Oclassen Pharmaceuticals was a privately held pharmaceutical company, founded in 1985, until it was acquired by Watson Pharmaceuticals, Inc., in 1997. Watson Pharmaceuticals, Inc. (also incorporated in 1985), having acquired Oclassen Pharmaceuticals (Oclassen Dermatologics), also acquired the marketing rights contained within the agreement with Eli Lilly to market Cinobac. Mode of action: Cinoxacin's mode of action involves the inhibition of DNA gyrase, a type II topoisomerase, and topoisomerase IV, an enzyme necessary to separate replicated DNA, thereby inhibiting cell division. Contraindications: Within the most recent package insert (c. 1999) Cinobac is listed as being contraindicated in patients with a history of hypersensitivity to cinoxacin or other quinolones. Adverse reactions: The safety profile of cinoxacin appears to be rather unremarkable. Adverse drug reactions appear to be limited to the gastrointestinal system and the central nervous system. Hypersensitivity resulting in anaphylactic reactions (as seen with all drugs found within this class) has also been reported in association with cinoxacin. Animal studies have shown that cinoxacin is associated with renal damage. Such damage appears to be due to the physical trauma resulting from deposition of cinoxacin crystals in the urinary tract. Such crystalluria has also been reported with other drugs in this class.
A review of the literature indicates that patients treated with cinoxacin reported fewer adverse drug reactions than those treated with nalidixic acid, Furadantin, amoxicillin, or trimethoprim-sulfamethoxazole. Although phototoxicity and photoallergenicity are well demonstrated experimentally, phototoxicity does not appear to be an issue with cinoxacin. As a result of this safety profile the manufacturer, Eli Lilly, states that "cinoxacin perhaps should be reserved only for those patients with organisms resistant to usual first-line agents or those who fail to respond to therapy with these agents." Overdose: Symptoms following an overdose of cinoxacin may include anorexia, nausea, vomiting, epigastric distress, and diarrhea. The severity of the epigastric distress and the diarrhea are dose related. Patients who have ingested an overdose of cinoxacin should be kept well hydrated to prevent crystalluria. Forced diuresis, peritoneal dialysis, hemodialysis, and charcoal hemoperfusion have not been established as beneficial for an overdose of cinoxacin. Pharmacokinetics: Biotransformation is mainly hepatic, with approximately 30-40% metabolized to inactive metabolites. Protein binding ranges from 60 to 80%. Cinoxacin is rapidly absorbed after oral administration. The presence of food delays absorption but does not affect total absorption. The mean serum half-life is 1.5 hours. Half-life in patients with impaired renal function may exceed 10 hours. Dosing: The usual adult dosage for the treatment of urinary tract infections is 1 gram daily, administered orally in two or four divided doses (500 mg b.i.d. or 250 mg q.i.d., respectively) for seven to 14 days. Impaired renal function When renal function is impaired, a reduced dosage must be employed. Susceptible bacteria: Gram-negative aerobes: Enterobacter species, Escherichia coli, Klebsiella species, Proteus mirabilis, and Proteus vulgaris. Enterococcus species, Pseudomonas species, and Staphylococcus species are resistant.
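The pharmacokinetic figures above lend themselves to a quick first-order elimination illustration. The sketch below uses the stated half-lives (about 1.5 hours normally, potentially more than 10 hours in renal impairment) and the 12-hour interval of the b.i.d. regimen; it is a rough illustration of why the text calls for dose reduction in renal impairment, not a dosing tool.

```python
# Illustrative first-order elimination using the serum half-lives quoted above
# (about 1.5 h normally, possibly >10 h in renal impairment).  A sketch only.

def fraction_remaining(hours, half_life):
    """Fraction of a dose still present after `hours` of first-order decay."""
    return 0.5 ** (hours / half_life)

interval = 12  # hours between doses on the 500 mg b.i.d. regimen

print(f"Normal renal function (t1/2 = 1.5 h): "
      f"{fraction_remaining(interval, 1.5) * 100:.2f}% of a dose remains")
print(f"Impaired renal function (t1/2 = 10 h): "
      f"{fraction_remaining(interval, 10) * 100:.1f}% of a dose remains")
# Roughly 0.4% versus about 44%: the drug accumulates when the half-life is
# prolonged, which is why a reduced dosage is needed in renal impairment.
```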
**Chiating Series** Chiating Series: The Chiating Series is a Mesozoic geologic formation in China. Fossil ornithopod tracks have been reported from the formation.
**Pioneer anomaly** Pioneer anomaly: The Pioneer anomaly, or Pioneer effect, was the observed deviation from predicted accelerations of the Pioneer 10 and Pioneer 11 spacecraft after they passed about 20 astronomical units (3×10⁹ km; 2×10⁹ mi) on their trajectories out of the Solar System. The apparent anomaly was a matter of much interest for many years but has been subsequently explained by anisotropic radiation pressure caused by the spacecraft's heat loss. Pioneer anomaly: Both Pioneer spacecraft are escaping the Solar System but are slowing under the influence of the Sun's gravity. Upon very close examination of navigational data, the spacecraft were found to be slowing slightly more than expected. The effect is an extremely small acceleration towards the Sun, of (8.74±1.33)×10⁻¹⁰ m/s², which is equivalent to a reduction of the outbound velocity by 1 km/h over a period of ten years. The two spacecraft were launched in 1972 and 1973. The anomalous acceleration was first noticed as early as 1980 but not seriously investigated until 1994. The last communication with either spacecraft was in 2003, but analysis of recorded data continues. Pioneer anomaly: Various explanations, both of spacecraft behavior and of gravitation itself, were proposed to explain the anomaly. Over the period from 1998 to 2012, one particular explanation became accepted. The spacecraft, which are surrounded by an ultra-high vacuum and are each powered by a radioisotope thermoelectric generator (RTG), can shed heat only via thermal radiation. If, due to the design of the spacecraft, more heat is emitted in a particular direction by what is known as a radiative anisotropy, then the spacecraft would accelerate slightly in the direction opposite of the excess emitted radiation due to the recoil of thermal photons. If the excess radiation and attendant radiation pressure were pointed in a general direction opposite the Sun, the spacecraft's velocity away from the Sun would be decreasing at a rate greater than could be explained by previously recognized forces, such as gravity and trace friction due to the interplanetary medium (imperfect vacuum). Pioneer anomaly: By 2012, several papers by different groups, all reanalyzing the thermal radiation pressure forces inherent in the spacecraft, showed that a careful accounting of this explains the entire anomaly; thus the cause is mundane and does not point to any new phenomenon or need to update the laws of physics. The most detailed analysis to date, by some of the original investigators, explicitly looks at two methods of estimating thermal forces, concluding that there is "no statistically significant difference between the two estimates and [...] that once the thermal recoil force is properly accounted for, no anomalous acceleration remains." Description: Pioneer 10 and 11 were sent on missions to Jupiter and Jupiter/Saturn respectively. Both spacecraft were spin-stabilised in order to keep their high-gain antennas pointed towards Earth using gyroscopic forces. Although the spacecraft included thrusters, after the planetary encounters they were used only for semiannual conical scanning maneuvers to track Earth in its orbit, leaving them on a long "cruise" phase through the outer Solar System. During this period, both spacecraft were repeatedly contacted to obtain various measurements on their physical environment, providing valuable information long after their initial missions were complete.
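The quoted acceleration can be sanity-checked with elementary kinematics. The short sketch below reproduces the "1 km/h per decade" equivalence stated above, and the roughly 400 km-per-year position offset discussed further on; it uses only the article's quoted value plus simple arithmetic, nothing mission-specific.

```python
# Back-of-the-envelope check of the quoted Pioneer-anomaly figures.
# The acceleration value is taken from the text; the rest is kinematics.

a = 8.74e-10             # anomalous sunward acceleration, m/s^2
year = 365.25 * 86400    # seconds in a Julian year

dv_10yr = a * (10 * year)      # velocity change accumulated over ten years
dx_1yr = 0.5 * a * year ** 2   # position offset built up after one year

print(f"Delta-v over 10 years: {dv_10yr:.3f} m/s  (~{dv_10yr * 3.6:.2f} km/h)")
print(f"Position offset after 1 year: {dx_1yr / 1000:.0f} km")
# Gives roughly 0.28 m/s (~1 km/h) and ~435 km, consistent with the
# "1 km/h per decade" and "some 400 km closer" figures in the article.
```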
Description: Because the spacecraft were flying with almost no additional stabilization thrusts during their "cruise", it is possible to characterize the density of the solar medium by its effect on the spacecraft's motion. In the outer Solar System this effect would be easily calculable, based on ground-based measurements of the deep space environment. When these effects were taken into account, along with all other known effects, the calculated position of the Pioneers did not agree with measurements based on timing the return of the radio signals being sent back from the spacecraft. These consistently showed that both spacecraft were closer to the inner Solar System than they should be, by thousands of kilometres—small compared to their distance from the Sun, but still statistically significant. This apparent discrepancy grew over time as the measurements were repeated, suggesting that whatever was causing the anomaly was still acting on the spacecraft. Description: As the anomaly was growing, it appeared that the spacecraft were moving more slowly than expected. Measurements of the spacecraft's speed using the Doppler effect demonstrated the same thing: the observed redshift was less than expected, which meant that the Pioneers had slowed down more than expected. Description: When all known forces acting on the spacecraft were taken into consideration, a very small but unexplained force remained. It appeared to cause an approximately constant sunward acceleration of (8.74±1.33)×10⁻¹⁰ m/s² for both spacecraft. If the positions of the spacecraft were predicted one year in advance based on measured velocity and known forces (mostly gravity), they were actually found to be some 400 km closer to the Sun at the end of the year. This anomaly is now believed to be accounted for by thermal recoil forces. Explanation: thermal recoil force: Starting in 1998, there were suggestions that the thermal recoil force was underestimated, and perhaps could account for the entire anomaly. However, accurately accounting for thermal forces was hard, because it needed telemetry records of the spacecraft temperatures and a detailed thermal model, neither of which was available at the time. Furthermore, all thermal models predicted a decrease in the effect with time, which did not appear in the initial analysis. Explanation: thermal recoil force: One by one these objections were addressed. Many of the old telemetry records were found, and converted to modern formats. This gave power consumption figures and some temperatures for parts of the spacecraft. Several groups built detailed thermal models, which could be checked against the known temperatures and powers, and allowed a quantitative calculation of the recoil force. The longer span of navigational records showed the acceleration was in fact decreasing. In July 2012, Slava Turyshev et al. published a paper in Physical Review Letters that explained the anomaly. The work explored the effect of the thermal recoil force on Pioneer 10, and concluded that "once the thermal recoil force is properly accounted for, no anomalous acceleration remains." Although the paper by Turyshev et al. has the most detailed analysis to date, the explanation based on thermal recoil force has the support of other independent research groups, using a variety of computational techniques. Examples include "thermal recoil pressure is not the cause of the Rosetta flyby anomaly but likely resolves the anomalous acceleration observed for Pioneer 10."
and "It is shown that the whole anomalous acceleration can be explained by thermal effects". Indications from other missions: The Pioneers were uniquely suited to discover the effect because they have been flying for long periods of time without additional course corrections. Most deep-space probes launched after the Pioneers either stopped at one of the planets, or used thrusting throughout their mission. Indications from other missions: The Voyagers flew a mission profile similar to the Pioneers, but were not spin stabilized. Instead, they required frequent firings of their thrusters for attitude control to stay aligned with Earth. Spacecraft like the Voyagers acquire small and unpredictable changes in speed as a side effect of the frequent attitude control firings. This 'noise' makes it impractical to measure small accelerations such as the Pioneer effect; accelerations as large as 10−9 m/s2 would be undetectable.Newer spacecraft have used spin stabilization for some or all of their mission, including both Galileo and Ulysses. These spacecraft indicate a similar effect, although for various reasons (such as their relative proximity to the Sun) firm conclusions cannot be drawn from these sources. The Cassini mission has reaction wheels as well as thrusters for attitude control, and during cruise could rely for long periods on the reaction wheels alone, thus enabling precision measurements. It also had radioisotope thermoelectric generators (RTGs) mounted close to the spacecraft body, radiating kilowatts of heat in hard-to-predict directions.After Cassini arrived at Saturn, it shed a large fraction of its mass from the fuel used in the insertion burn and the release of the Huygens probe. This increases the acceleration caused by the radiation forces because they are acting on less mass. This change in acceleration allows the radiation forces to be measured independently of any gravitational acceleration. Comparing cruise and Saturn-orbit results shows that for Cassini, almost all the unmodelled acceleration was due to radiation forces, with only a small residual acceleration, much smaller than the Pioneer acceleration, and with opposite sign.The non-gravitational acceleration of the deep space probe New Horizons has been measured at about 1.25 x 10−9 m/s2 sunward, somewhat larger than the effect on Pioneer. Modelling of thermal effects indicates an expected sunward acceleration of 1.15 x 10−9 m/s2, and given the uncertainties, the acceleration appears consistent with thermal radiation as the source of the non-gravitational forces measured. The measured acceleration is slowly decreasing as would be expected from the decreasing thermal output of the RTG. Potential issues with the thermal solution: There are two features of the anomaly, as originally reported, that are not addressed by the thermal solution: periodic variations in the anomaly, and the onset of the anomaly near the orbit of Saturn. Potential issues with the thermal solution: First, the anomaly has an apparent annual periodicity and an apparent Earth sidereal daily periodicity with amplitudes that are formally greater than the error budget. However, the same paper also states this problem is most likely not related to the anomaly: "The annual and diurnal terms are very likely different manifestations of the same modeling problem. [...] Such a modeling problem arises when there are errors in any of the parameters of the spacecraft orientation with respect to the chosen reference frame." 
Second, the value of the anomaly measured over a period during and after the Pioneer 11 Saturn encounter had a relatively high uncertainty and a significantly lower value. The Turyshev et al. 2012 paper compared the thermal analysis to Pioneer 10 only. The Pioneer anomaly went unnoticed until after Pioneer 10 had passed the orbit of Saturn. However, the most recent analysis states: "Figure 2 is strongly suggestive that the previously reported "onset" of the Pioneer anomaly may in fact be a simple result of mis-modeling of the solar thermal contribution; this question may be resolved with further analysis of early trajectory data". Previously proposed explanations: Before the thermal recoil explanation became accepted, other proposed explanations fell into two classes—"mundane causes" or "new physics". Mundane causes include conventional effects that were overlooked or mis-modeled in the initial analysis, such as measurement error, thrust from gas leakage, or uneven heat radiation. The "new physics" explanations proposed revision of our understanding of gravitational physics. Previously proposed explanations: If the Pioneer anomaly had been a gravitational effect due to some long-range modifications of the known laws of gravity, it apparently did not affect the orbital motions of the major natural bodies in the same way (in particular, those moving in the regions in which the Pioneer anomaly manifested itself in its presently known form). Hence a gravitational explanation would need to violate the equivalence principle, which states that all objects are affected the same way by gravity. It was therefore argued that increasingly accurate measurements and modelling of the motions of the outer planets and their satellites undermined the possibility that the Pioneer anomaly is a phenomenon of gravitational origin. However, others believed that our knowledge of the motions of the outer planets and dwarf planet Pluto was still insufficient to disprove the gravitational nature of the Pioneer anomaly. The same authors ruled out the existence of a gravitational Pioneer-type extra-acceleration in the outskirts of the Solar System by using a sample of Trans-Neptunian objects. The magnitude of the Pioneer effect ap ((8.74±1.33)×10⁻¹⁰ m/s²) is numerically quite close to the product ((6.59±0.07)×10⁻¹⁰ m/s²) of the speed of light c and the Hubble constant H0, hinting at a cosmological connection, but this is now believed to be of no particular significance. In fact the latest Jet Propulsion Laboratory review (2010) undertaken by Turyshev and Toth claims to rule out the cosmological connection by considering rather conventional sources, whereas other scientists provided a disproof based on the physical implications of cosmological models themselves. Gravitationally bound objects such as the Solar System, or even the Milky Way, are not supposed to partake of the expansion of the universe—this is known both from conventional theory and by direct measurement. This does not necessarily interfere with paths new physics can take with drag effects from planetary secular accelerations of possible cosmological origin. Previously proposed explanations: Deceleration model It has been viewed as possible that a real deceleration is not accounted for in the current model for several reasons. Previously proposed explanations: Gravity It is possible that deceleration is caused by gravitational forces from unidentified sources such as the Kuiper belt or dark matter.
However, this acceleration does not show up in the orbits of the outer planets, so any generic gravitational answer would need to violate the equivalence principle (see modified inertia below). Likewise, the anomaly does not appear in the orbits of Neptune's moons, challenging the possibility that the Pioneer anomaly may be an unconventional gravitational phenomenon based on range from the Sun. Previously proposed explanations: Drag The cause could be drag from the interplanetary medium, including dust, solar wind and cosmic rays. However, the measured densities are too small to cause the effect. Gas leaks Gas leaks, including helium from the spacecraft's radioisotope thermoelectric generators (RTGs), have been considered as a possible cause. Previously proposed explanations: Observational or recording errors The possibility of observational errors, which include measurement and computational errors, has been advanced as a reason for interpreting the data as an anomaly; these would amount to approximation and statistical errors. However, further analysis has determined that significant errors are not likely, because seven independent analyses had shown the existence of the Pioneer anomaly as of March 2010. The effect is so small that it could be a statistical anomaly caused by differences in the way data were collected over the lifetime of the probes. Numerous changes were made over this period, including changes in the receiving instruments, reception sites, data recording systems and recording formats. Previously proposed explanations: New physics Because the "Pioneer anomaly" does not show up as an effect on the planets, Anderson et al. speculated that this would be interesting if it were new physics. Later, with the Doppler-shifted signal confirmed, the team again speculated that one explanation may lie with new physics, if not some unknown systematic explanation. Previously proposed explanations: Clock acceleration Clock acceleration was an alternate explanation to anomalous acceleration of the spacecraft towards the Sun. This theory took notice of an expanding universe, which was thought to create an increasing background 'gravitational potential'. The increased gravitational potential would then accelerate cosmological time. It was proposed that this particular effect causes the observed deviation from predicted trajectories and velocities of Pioneer 10 and Pioneer 11. From their data, Anderson's team deduced a steady frequency drift of 1.5 Hz over eight years. This could be mapped onto a clock acceleration theory, which meant all clocks would be changing in relation to a constant acceleration: in other words, that there would be a non-uniformity of time. Moreover, for such a distortion related to time, Anderson's team reviewed several models in which time distortion as a phenomenon is considered. They arrived at the "clock acceleration" model after completion of the review. Although the best model adds a quadratic term to defined International Atomic Time, the team encountered problems with this theory. This then led to non-uniform time in relation to a constant acceleration as the most likely theory. Previously proposed explanations: Definition of gravity modified The Modified Newtonian dynamics or MOND hypothesis proposed that the force of gravity deviates from the traditional Newtonian value to a very different force law at very low accelerations on the order of 10⁻¹⁰ m/s².
Given the low accelerations placed on the spacecraft while in the outer Solar System, MOND may be in effect, modifying the normal gravitational equations. The Lunar Laser Ranging experiment combined with data of LAGEOS satellites refutes that simple gravity modification is the cause of the Pioneer anomaly. The precession of the longitudes of perihelia of the solar planets or the trajectories of long-period comets have not been reported to experience an anomalous gravitational field toward the Sun of the magnitude capable of describing the Pioneer anomaly. Previously proposed explanations: Definition of inertia modified MOND can also be interpreted as a modification of inertia, perhaps due to an interaction with vacuum energy, and such a trajectory-dependent theory could account for the different accelerations apparently acting on the orbiting planets and the Pioneer craft on their escape trajectories. A possible terrestrial test for evidence of a different model of modified inertia has also been proposed. Previously proposed explanations: Parametric time Another theoretical explanation was based on a possible non-equivalence of the atomic time and the astronomical time, which could give the same observational fingerprint as the anomaly. Previously proposed explanations: Celestial ephemerides in an expanding universe Another proposed explanation of Pioneer anomaly is that the background spacetime is described by a cosmological Friedmann–Lemaître–Robertson–Walker metric that is not Minkowski flat. In this model of spacetime manifold, light moves uniformly with respect to the conformal cosmological time whereas physical measurements are performed with the help of atomic clocks that count the proper time of observer coinciding with the cosmic time. This difference yields exactly the same numerical value and signature of the Doppler shift measured in the Pioneer experiment. However, this explanation requires the thermal effects be a small percentage of the total, in contradiction to the many studies that estimate it to be the bulk of the effect. Further research avenues: It is possible, but not proven, that this anomaly is linked to the flyby anomaly, which has been observed in other spacecraft. Although the circumstances are very different (planet flyby vs. deep space cruise), the overall effect is similar—a small but unexplained velocity change is observed on top of a much larger conventional gravitational acceleration. Further research avenues: The Pioneer spacecraft are no longer providing new data (the last contact was on 23 January 2003) and other deep-space missions that might be studied (Galileo and Cassini) were deliberately disposed of in the atmospheres of Jupiter and Saturn respectively at the ends of their missions. This leaves several remaining options for further research: Further analysis of the retrieved Pioneer data. This includes not only the data that was first used to detect the anomaly, but additional data that until recently was saved only in older, inaccessible computer formats and media. This data was recovered in 2006, converted to more modern formats, and is now available for analysis. Further research avenues: The New Horizons spacecraft to Pluto is spin-stabilised for long intervals, and there were proposals to use it to investigate the anomaly. 
It was known that New Horizons would have the same problem that precluded good data from the cruise portion of the Cassini mission—its RTG is mounted close to the spacecraft body, so thermal radiation from it, bouncing off the spacecraft, will produce a systematic thrust of a not easily predicted magnitude, as large as or larger than the Pioneer effect. However, it was hoped that despite any large systematic bias from the RTG, the 'onset' of the anomaly at or near the orbit of Saturn might be observed. Further research avenues: A dedicated mission has also been proposed. Such a mission would probably need to surpass 200 AU from the Sun in a hyperbolic escape orbit. Observations of asteroids around 20 AU may provide insights if the anomaly's cause is gravitational. Meetings and conferences about the anomaly: A meeting was held at the University of Bremen in 2004 to discuss the Pioneer anomaly. The Pioneer Explorer Collaboration was formed to study the Pioneer anomaly and has hosted three meetings (2005, 2007, and 2008) at the International Space Science Institute in Bern, Switzerland, to discuss the anomaly and possible means of resolving its source.
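As an order-of-magnitude illustration of the thermal-recoil explanation discussed above: directed thermal radiation of power P produces a recoil acceleration of roughly P/(mc). The power and mass figures in the sketch below are assumed round numbers chosen only to show the scale of the effect; they are not Pioneer telemetry values.

```python
# Rough, order-of-magnitude sketch of the thermal-recoil mechanism.
# P_heat and m are assumed illustrative values, not mission data.

c = 2.998e8            # speed of light, m/s
P_heat = 2000.0        # approximate waste heat radiated by the spacecraft, W (assumed)
m = 250.0              # approximate spacecraft mass, kg (assumed)
a_anomaly = 8.74e-10   # reported anomalous acceleration, m/s^2

a_full = P_heat / (m * c)       # recoil if ALL of the heat left in one direction
fraction = a_anomaly / a_full   # anisotropic fraction needed to explain the anomaly

print(f"Recoil if fully directed: {a_full:.2e} m/s^2")
print(f"Required anisotropic fraction: {fraction * 100:.1f} %")
# Only a few percent of the radiated heat, emitted preferentially away from
# the Sun, is enough to produce an acceleration of the observed size.
```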
**Session Announcement Protocol** Session Announcement Protocol: The Session Announcement Protocol (SAP) is an experimental protocol for advertising multicast session information. SAP typically uses the Session Description Protocol (SDP) as the format for Real-time Transport Protocol (RTP) session descriptions. Announcement data is sent using IP multicast and the User Datagram Protocol (UDP). Under SAP, senders periodically transmit SDP descriptions to a well-known multicast address and port number (9875). A listening application constructs a guide of all advertised multicast sessions. SAP was published by the IETF as RFC 2974. Announcement interval: The announcement interval is cooperatively modulated such that all SAP announcements in the multicast delivery scope, by default, consume no more than 4000 bits per second. Regardless, the minimum announce interval is 300 seconds (5 minutes). Announcements automatically expire after 10 times the announcement interval or one hour, whichever is greater. Announcements may also be explicitly withdrawn by the original issuer. Authentication, encryption and compression: SAP features separate methods for authenticating and encrypting announcements. Use of encryption is not recommended. Authentication prevents unauthorized modification of announcements and certain other denial-of-service attacks. Authentication is optional. Two authentication schemes are supported: Pretty Good Privacy, as defined in RFC 2440, and Cryptographic Message Syntax, as defined in RFC 5652. The message body may optionally be compressed using the zlib format as defined in RFC 1950. Applications and implementations: VLC media player monitors SAP announcements and presents the user with a list of available streams. SAP is one of the optional discovery and connection management techniques described in the AES67 audio-over-Ethernet interoperability standard.
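The announcement-interval rule described above can be sketched in a few lines. The calculation below follows the scheme in RFC 2974: each announcer scales its repeat interval so that the whole group stays under the bandwidth limit (4000 bit/s by default), with 300 seconds as the floor, and announcements expire after ten intervals or one hour, whichever is greater. The function and variable names are illustrative, not taken from the RFC or any implementation.

```python
# Sketch of the SAP announcement-interval rule (after RFC 2974).
# Illustrative only; not a full SAP implementation.

def announce_interval(num_sessions, announcement_bytes, limit_bps=4000):
    """Seconds between repeats of one announcement so that all announcements
    in the group together stay under `limit_bps`, with a 300 s floor."""
    offered_bits = 8 * announcement_bytes * num_sessions
    return max(300, offered_bits / limit_bps)

def expiry_time(interval_s):
    """Announcements expire after 10 intervals or one hour, whichever is greater."""
    return max(10 * interval_s, 3600)

# Example: 500 advertised sessions with ~400-byte SDP payloads.
iv = announce_interval(500, 400)
print(f"interval ≈ {iv:.0f} s, expiry after {expiry_time(iv) / 3600:.1f} h")
# 8 * 400 * 500 / 4000 = 400 s between repeats; expiry after 4000 s (~1.1 h).
```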
**Protective distribution system** Protective distribution system: A protective distribution system (PDS), also called protected distribution system, is a US government term for wireline or fiber-optics telecommunication system that includes terminals and adequate acoustical, electrical, electromagnetic, and physical safeguards to permit its use for the unencrypted transmission of classified information. At one time these systems were called "approved circuits". A complete protected distribution system includes the subscriber and terminal equipment and the interconnecting lines. Description: The purpose of a PDS is to deter, detect and/or make difficult physical access to the communication lines carrying national security information. A specification called the National Security Telecommunications and Information Systems Security Instruction (NSTISSI) 7003 was issued in December 1996 by the Committee on National Security Systems. Approval authority, standards, and guidance for the design, installation, and maintenance for PDS are provided by NSTISSI 7003 to U.S. government departments and agencies and their contractors and vendors. This instruction describes the requirements for all PDS installations within the U.S. and for low and medium threat locations outside the U.S. PDS is commonly used to protect SIPRNet and JWICS networks. The document superseded one numbered NASCI 4009 on Protected Distribution Systems, dated December 30, 1981, and part of a document called NACSEM 5203, that covered guidelines for facility design, using the designations "red" and "black".There are two types of PDS: hardened distribution systems and simple distribution systems. Hardened distribution: Hardened distribution PDSs provide significant physical protection and can be implemented in three forms: hardened carrier PDSs, alarmed carrier PDSs and continuously viewed carrier PDSs. Hardened distribution: Hardened carrier In a hardened carrier PDS, the data cables are installed in a carrier constructed of electrical metallic tubing (EMT), ferrous conduit or pipe, or rigid sheet steel ducting. All of the connections in a Hardened Carrier System are permanently sealed completely around all surfaces with welds, epoxy or other such sealants. If the hardened carrier is buried under ground, to secure cables running between buildings for example, the carrier containing the cables is encased in concrete. Hardened distribution: With a hardened carrier system, detection is accomplished via human inspections that are required to be performed periodically. Therefore, hardened carriers are installed below ceilings or above flooring so they can be visually inspected to ensure that no intrusions have occurred. These periodic visual inspections (PVIs) occur at a frequency dependent upon the level of threat to the environment, the security classification of the data, and the access control to the area. Hardened distribution: Alarmed carrier As an alternative to conducting human visual inspections, an alarmed carrier PDS may be constructed to automate the inspection process through electronic monitoring with an alarm system. In an Alarmed Carrier PDS, the carrier system is “alarmed” with specialized optical fibers deployed within the conduit for the purpose of sensing acoustic vibrations that usually occur when an intrusion is being attempted on the conduit in order to gain access to the cables. 
Hardened distribution: Alarmed carrier PDS offers several advantages over hardened carrier PDS: it provides continuous monitoring 24/7/365; eliminates the requirement for periodic visual inspections; allows the carrier to be hidden above the ceiling or below the floor, since periodic visual inspections are not required; eliminates the need for the welding and epoxying of the connections; eliminates the requirement for concrete encasement outdoors; eliminates the need to lock down manhole covers; and enables rapid redeployment for evolving network arrangements. Legacy alarmed carrier systems monitor the carrier containing the cables being protected. More advanced systems monitor the fibers within, or intrinsic to, the cables being protected to turn those cables into sensors, which detect intrusion attempts. Hardened distribution: Depending on the government organization, utilizing an alarmed carrier PDS in conjunction with interlocking armored cable may, in some cases, allow for the elimination of the carrier systems altogether. In these instances, the cables being protected can be installed in existing conveyance (wire basket, ladder rack) or suspended cabling (on D-rings, J-hooks, etc.). Hardened distribution: Continuously viewed carrier A continuously viewed carrier PDS is one that is under continuous observation, 24 hours per day (including when operational). Such circuits may be grouped together, but should be separated from all non-continuously viewed circuits, ensuring an open field of view. Standing orders should include the requirement to investigate any attempt to disturb the PDS. Appropriate security personnel should investigate the area of attempted penetration within 15 minutes of discovery. This type of hardened carrier is not used for Top Secret or special category information in a non-U.S. UAA. Hardened distribution: A UAA is an Uncontrolled Access Area. Related definitions include Controlled Access Area (CAA) and Restricted Access Area (RAA). A Secure Room (SR) offers the highest degree of protection. Therefore, from the least protected (least secure) to the most protected, the order is: UAA, RAA, CAA, SR. Simple distribution: Simple distribution PDSs are afforded a reduced level of physical security protection as compared to a hardened distribution PDS. They use a simple carrier system, and the following means are acceptable under NSTISSI 7003: the data cables should be installed in a carrier; the carrier can be constructed of any material (e.g., wood, PVC, EMT, ferrous conduit); the joints and access points should be secured and controlled by personnel cleared to the highest level of data handled by the PDS; and the carrier is to be inspected in accordance with the requirements of NSTISSI 7003.
**Fibroblast growth factor 8** Fibroblast growth factor 8: Fibroblast growth factor 8 (FGF-8) is a protein that in humans is encoded by the FGF8 gene. Function: The protein encoded by this gene is a member of the fibroblast growth factor (FGF) family. FGF family members possess broad mitogenic and cell survival activities, and are involved in a variety of biological processes, including embryonic development, cell growth, morphogenesis, tissue repair, tumor growth and invasion. FGF-8 is important and necessary for setting up and maintaining the midbrain/hindbrain border (or mesencephalon/metencephalon border), which plays the vital role of "organizer" in development, like the Spemann organizer of the gastrulating embryo. FGF-8 is expressed in the region where Otx2 and Gbx2 cross-inhibit each other, and its expression is maintained by this interaction. Once expressed, Fgf8 induces other transcription factors to form cross-regulatory loops between cells, and thus the border is established. Through development, Fgf8 regulates the growth and differentiation of progenitor cells in this region to produce the ultimate structures of the midbrain and hindbrain. Crossley's experiment showed that FGF-8 is sufficient to induce the repatterning of midbrain and hindbrain structures. In the development of the forebrain, cortical patterning centers are the boundaries or poles of the cortical primordium, where multiple BMP and WNT genes are expressed. In addition, at the anterior pole several FGF family members, including Fgf3, 8, 17 and 18, overlap in expression. The similarity in cortical gene expression in Emx2 mutants and mice in which the anterior FGF8 source is augmented suggests that FGF8 controls the graded expression (low anterior, high posterior) of Emx2 in the cortical primordium. Emx2 is one of the protomap molecular determinants and has been shown to interact closely with Pax6. Emx2 and Pax6 are expressed in opposing gradients along the A/P axis of the cortical primordium and cooperate to set up area pattern. Fgf8 and Emx2 antagonize each other to create the development map. FGF-8 promotes the development of the anterior part and suppresses posterior fate, while Emx2 does the reverse. Moreover, FGF8 manipulations suggest that FGF8 controls the graded cortical expression of COUP-TF1. In addition, the sharpness of both COUP-TF1 and COUP-TF2 expression borders would be expected of genes involved in boundary specification. Thus, the interaction between them regulates the A/P axis of the cortical primordium and directs the development map of the cortical area. Function: FGF8 signaling from the apical ectodermal ridge (AER), which borders the distal end of the limb bud, is necessary for forming normal limbs. In the absence of FGF8, limb buds can be reduced in size, hypoplasia or aplasia of bones or digits within the three limb segments may occur, and there may be delays in the subsequent expression of other genes (Shh or FGF4). FGF8 is responsible for cell proliferation and survival, as well. Loss of function or decreased expression could result in the malformation or absence of essential limb components. Studies have shown that the forelimbs tend to be more affected by the loss of FGF8 signaling than the hindlimbs, and the loss tends to affect the proximal components more heavily than the distal components. FGF8 not only aids in the formation of the limb bud and skeletal components of the limb, but the tendons within the limb are also affected by it near the portions closest to the muscle extremities.
This diffusible polypeptide is responsible for inducing the limb bud, then inducing and maintaining sonic hedgehog expression in the established limb bud, promoting outgrowth of the limb. Evidence for this comes from a study done by Crossley and his colleagues, in which FGF8-soaked beads were used surgically to replace AER areas. These studies showed that ectopic limbs formed either fully functional or mostly functional limbs near the normal limbs or limb areas. FGF8 has also been recorded to regulate craniofacial structure formation, including the teeth, palate, mandible, and salivary glands. Decreased expression can result in the absence of molar teeth, failure to close the palate, or decreased mandible size. FGF8 has been documented to play a role in oral and maxillofacial diseases, and CRISPR-Cas9 gene targeting of FGF8 may be key in treating these diseases. Genome-wide analysis of cleft lip and/or palate (CLP) shows a D73H missense mutation in the FGF8 gene which reduces the binding affinity of FGF8. Loss of Tbx1 and Tfap2 can result in proliferation and apoptosis in the palate cells, increasing the risk of CLP. Overexpression of FGF8 due to misregulation of the Gli processing gene may result in ciliopathies. Agnathia, a malformation of the mandible, is often a lethal condition that comes from the absence of BMP4 regulators (noggin and chordin), resulting in high levels of BMP4 signaling, which in turn drastically reduces FGF8 signaling, increasing cell death during mandibular outgrowth. Lastly, the ability of FGF8 to regulate cell proliferation has generated interest in its effects on tumors and squamous cell carcinoma. CRISPR-Cas9 gene targeting methods are currently being studied to determine if they are the key to solving FGF8 mutations associated with oral diseases. Clinical significance: This protein is known to be a factor that supports androgen and anchorage independent growth of mammary tumor cells. Overexpression of this gene has been shown to increase tumor growth and angiogenesis. The adult expression of this gene was once thought to be restricted to testes and ovaries but has been described in several organ systems. The temporal and spatial pattern of this gene's expression suggests its function as an embryonic epithelial factor. Studies of the mouse and chick homologs reveal roles in midbrain and limb development, organogenesis, embryo gastrulation and left-right axis determination. The alternative splicing of this gene results in four transcript variants.
**Poularde** Poularde: Poularde is a culinary term for a chicken that is at least 120 days old at the time of slaughter and fattened with a rich diet that delays egg production. In the past it was common to spay the chickens early in life to ensure desirable meat quality, similar to the castration of a capon. Poularde: Similar terms are often confused: in English, pullet refers to a young hen, generally under one year old. Sometimes it is more specific, indicating a hen that is fully grown but has not reached ‘point-of-lay’, i.e. has not yet started laying eggs, which often happens between 16 and 24 weeks of age, depending on breed. Poulard (no 'e') can be used to mean "roaster", i.e. a young chicken weighing up to 6-7 pounds and living 10-12 weeks, as opposed to smaller "broiler" chickens weighing less than 3 pounds. In French, poussin is a newly hatched chick (either sex), poulet is a young chick (either sex), poulette is a young female chicken (one form of a poulet, and corresponding to the male coquelet), poularde is a poulette deliberately fattened for eating (often spayed, and the equivalent of the castrated male chapon = capon), and a poule is an egg-laying hen (corresponding to the coq = cockerel). Poularde is used in English in the context of cooking (as opposed to poultry farming); Larousse Gastronomique lists around 98 recipes for "Poulardes et poulets" with a further 100 or more for "Farm-raised chickens". In France many varieties of poularde exist, including the Poularde de Bresse, the Poularde du Mans and the Poularde de Loué, which are generally protected by the AOC or Label Rouge certifications. The high price of these chickens meant that they were traditionally reserved for holiday meals, such as Christmas feasts. Poularde: Examples of protected certifications outside France include the Poularde de Bruxelles from Belgium, the Steierische Poularde from Austria, and the Poularde Den Dungen from the Netherlands.
**Dye-sensitized solar cell** Dye-sensitized solar cell: A dye-sensitized solar cell (DSSC, DSC, DYSC or Grätzel cell) is a low-cost solar cell belonging to the group of thin film solar cells. It is based on a semiconductor formed between a photo-sensitized anode and an electrolyte, a photoelectrochemical system. The modern version of a dye solar cell, also known as the Grätzel cell, was originally co-invented in 1988 by Brian O'Regan and Michael Grätzel at UC Berkeley and this work was later developed by the aforementioned scientists at the École Polytechnique Fédérale de Lausanne (EPFL) until the publication of the first high efficiency DSSC in 1991. Michael Grätzel has been awarded the 2010 Millennium Technology Prize for this invention.The DSSC has a number of attractive features; it is simple to make using conventional roll-printing techniques, is semi-flexible and semi-transparent which offers a variety of uses not applicable to glass-based systems, and most of the materials used are low-cost. In practice it has proven difficult to eliminate a number of expensive materials, notably platinum and ruthenium, and the liquid electrolyte presents a serious challenge to making a cell suitable for use in all weather. Although its conversion efficiency is less than the best thin-film cells, in theory its price/performance ratio should be good enough to allow them to compete with fossil fuel electrical generation by achieving grid parity. Commercial applications, which were held up due to chemical stability problems, had been forecast in the European Union Photovoltaic Roadmap to significantly contribute to renewable electricity generation by 2020. Current technology: semiconductor solar cells: In a traditional solid-state semiconductor, a solar cell is made from two doped crystals, one doped with n-type impurities (n-type semiconductor), which add additional free conduction band electrons, and the other doped with p-type impurities (p-type semiconductor), which add additional electron holes. When placed in contact, some of the electrons in the n-type portion flow into the p-type to "fill in" the missing electrons, also known as electron holes. Eventually enough electrons will flow across the boundary to equalize the Fermi levels of the two materials. The result is a region at the interface, the p–n junction, where charge carriers are depleted and/or accumulated on each side of the interface. In silicon, this transfer of electrons produces a potential barrier of about 0.6 to 0.7 eV.When placed in the sun, photons of the sunlight can excite electrons on the p-type side of the semiconductor, a process known as photoexcitation. In silicon, sunlight can provide enough energy to push an electron out of the lower-energy valence band into the higher-energy conduction band. As the name implies, electrons in the conduction band are free to move about the silicon. When a load is placed across the cell as a whole, these electrons will flow out of the p-type side into the n-type side, lose energy while moving through the external circuit, and then flow back into the p-type material where they can once again re-combine with the valence-band hole they left behind. In this way, sunlight creates an electric current.In any semiconductor, the band gap means that only photons with that amount of energy, or more, will contribute to producing a current. In the case of silicon, the majority of visible light from red to violet has sufficient energy to make this happen. 
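As a small numerical aside on the band-gap argument above: a photon contributes only if its energy exceeds the gap, and for silicon's roughly 1.1 eV gap that corresponds to wavelengths shorter than about 1100 nm, comfortably covering the visible range. The sketch below is generic physics for illustration, not a property of any particular cell described here; the band-gap values are approximate.

```python
# Photon-energy / band-gap check for the argument above: only photons with
# energy above the gap can promote an electron across it.

HC_EV_NM = 1239.84  # Planck constant times the speed of light, in eV*nm

def cutoff_wavelength_nm(band_gap_ev):
    """Longest wavelength (nm) whose photon energy still equals the band gap."""
    return HC_EV_NM / band_gap_ev

print(f"Silicon (~1.1 eV gap): photons below {cutoff_wavelength_nm(1.1):.0f} nm contribute")
print(f"TiO2    (~3.2 eV gap): photons below {cutoff_wavelength_nm(3.2):.0f} nm (UV only)")
# ~1127 nm for silicon (all visible light qualifies) versus ~387 nm for TiO2,
# which is why the dye, not the TiO2, does the light harvesting in a DSSC.
```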
Unfortunately higher energy photons, those at the blue and violet end of the spectrum, have more than enough energy to cross the band gap; although some of this extra energy is transferred into the electrons, the majority of it is wasted as heat. Another issue is that in order to have a reasonable chance of capturing a photon, the n-type layer has to be fairly thick. This also increases the chance that a freshly ejected electron will meet up with a previously created hole in the material before reaching the p–n junction. These effects produce an upper limit on the efficiency of silicon solar cells, currently around 12 to 15% for common modules and up to 25% for the best laboratory cells (33.16% is the theoretical maximum efficiency for single band gap solar cells, see Shockley–Queisser limit.). Current technology: semiconductor solar cells: By far the biggest problem with the conventional approach is cost; solar cells require a relatively thick layer of doped silicon in order to have reasonable photon capture rates, and silicon processing is expensive. There have been a number of different approaches to reduce this cost over the last decade, notably the thin-film approaches, but to date they have seen limited application due to a variety of practical problems. Another line of research has been to dramatically improve efficiency through the multi-junction approach, although these cells are very high cost and suitable only for large commercial deployments. In general terms the types of cells suitable for rooftop deployment have not changed significantly in efficiency, although costs have dropped somewhat due to increased supply. Dye-sensitized solar cells: In the late 1960s it was discovered that illuminated organic dyes can generate electricity at oxide electrodes in electrochemical cells. In an effort to understand and simulate the primary processes in photosynthesis the phenomenon was studied at the University of California at Berkeley with chlorophyll extracted from spinach (bio-mimetic or bionic approach). On the basis of such experiments electric power generation via the dye sensitization solar cell (DSSC) principle was demonstrated and discussed in 1972. The instability of the dye solar cell was identified as a main challenge. Its efficiency could, during the following two decades, be improved by optimizing the porosity of the electrode prepared from fine oxide powder, but the instability remained a problem.A modern n-type DSSC, the most common type of DSSC, is composed of a porous layer of titanium dioxide nanoparticles, covered with a molecular dye that absorbs sunlight, like the chlorophyll in green leaves. The titanium dioxide is immersed under an electrolyte solution, above which is a platinum-based catalyst. As in a conventional alkaline battery, an anode (the titanium dioxide) and a cathode (the platinum) are placed on either side of a liquid conductor (the electrolyte). Dye-sensitized solar cells: The working principle for n-type DSSCs can be summarized into a few basic steps. Sunlight passes through the transparent electrode into the dye layer where it can excite electrons that then flow into the conduction band of the n-type semiconductor, typically titanium dioxide. The electrons from titanium dioxide then flow toward the transparent electrode where they are collected for powering a load. After flowing through the external circuit, they are re-introduced into the cell on a metal electrode on the back, also known as the counter electrode, and flow into the electrolyte. 
The electrolyte then transports the electrons back to the dye molecules and regenerates the oxidized dye. Dye-sensitized solar cells: The basic working principle above, is similar in a p-type DSSC, where the dye-sensitised semiconductor is of p-type nature (typically nickel oxide). However, instead of injecting an electron into the semiconductor, in a p-type DSSC, a hole flows from the dye into the valence band of the p-type semiconductor.Dye-sensitized solar cells separate the two functions provided by silicon in a traditional cell design. Normally the silicon acts as both the source of photoelectrons, as well as providing the electric field to separate the charges and create a current. In the dye-sensitized solar cell, the bulk of the semiconductor is used solely for charge transport, the photoelectrons are provided from a separate photosensitive dye. Charge separation occurs at the surfaces between the dye, semiconductor and electrolyte. Dye-sensitized solar cells: The dye molecules are quite small (nanometer sized), so in order to capture a reasonable amount of the incoming light the layer of dye molecules needs to be made fairly thick, much thicker than the molecules themselves. To address this problem, a nanomaterial is used as a scaffold to hold large numbers of the dye molecules in a 3-D matrix, increasing the number of molecules for any given surface area of cell. In existing designs, this scaffolding is provided by the semiconductor material, which serves double-duty. Dye-sensitized solar cells: Counter Electrode Materials One of the most important components of DSSC is the counter electrode. As stated before, the counter electrode is responsible for collecting electrons from the external circuit and introducing them back into the electrolyte to catalyze the reduction reaction of the redox shuttle, generally I3− to I−. Thus, it is important for the counter electrode to not only have high electron conductivity and diffusive ability, but also electrochemical stability, high catalytic activity and appropriate band structure. The most common counter electrode material currently used is platinum in DSSCs, but is not sustainable owing to its high costs and scarce resources. Thus, much research has been focused towards discovering new hybrid and doped materials that can replace platinum with comparable or superior electrocatalytic performance. One such category being widely studied includes chalcogen compounds of cobalt, nickel, and iron (CCNI), particularly the effects of morphology, stoichiometry, and synergy on the resulting performance. It has been found that in addition to the elemental composition of the material, these three parameters greatly impact the resulting counter electrode efficiency. Of course, there are a variety of other materials currently being researched, such as highly mesoporous carbons, tin-based materials, gold nanostructures, as well as lead-based nanocrystals. However, the following section compiles a variety of ongoing research efforts specifically relating to CCNI towards optimizing the DSSC counter electrode performance. Dye-sensitized solar cells: Morphology Even with the same composition, morphology of the nanoparticles that make up the counter electrode play such an integral role in determining the efficiency of the overall photovoltaic. 
Because a material's electrocatalytic potential is highly dependent on the amount of surface area available to facilitate the diffusion and reduction of the redox species, numerous research efforts have been focused towards understanding and optimizing the morphology of nanostructures for DSSC counter electrodes. Dye-sensitized solar cells: In 2017, Huang et al. utilized various surfactants in a microemulsion-assisted hydrothermal synthesis of CoSe2/CoSeO3 composite crystals to produce nanocubes, nanorods, and nanoparticles. Comparison of these three morphologies revealed that the hybrid composite nanoparticles, due to having the largest electroactive surface area, had the highest power conversion efficiency of 9.27%, even higher than its platinum counterpart. Not only that, the nanoparticle morphology displayed the highest peak current density and smallest potential gap between the anodic and cathodic peak potentials, thus implying the best electrocatalytic ability. Dye-sensitized solar cells: With a similar study but a different system, Du et al. in 2017 determined that the ternary oxide of NiCo2O4 had the greatest power conversion efficiency and electrocatalytic ability as nanoflowers when compared to nanorods or nanosheets. Du et al. realized that exploring various growth mechanisms that help to exploit the larger active surface areas of nanoflowers may provide an opening for extending DSSC applications to other fields. Dye-sensitized solar cells: Stoichiometry Of course, the composition of the material that is used as the counter electrode is extremely important to creating a working photovoltaic, as the valence and conduction energy bands must overlap with those of the redox electrolyte species to allow for efficient electron exchange. Dye-sensitized solar cells: In 2018, Jin et al. prepared ternary nickel cobalt selenide (NixCoySe) films at various stoichiometric ratios of nickel and cobalt to understand its impact on the resulting cell performance. Nickel and cobalt bimetallic alloys were known to have outstanding electron conduction and stability, so optimizing its stoichiometry would ideally produce a more efficient and stable cell performance than its singly metallic counterparts. Such is the result that Jin et al. found, as Ni0.12Co0.80Se achieved superior power conversion efficiency (8.61%), lower charge transfer impedance, and higher electrocatalytic ability than both its platinum and binary selenide counterparts. Dye-sensitized solar cells: Synergy One last area that has been actively studied is the synergy of different materials in promoting superior electroactive performance. Whether through various charge transport material, electrochemical species, or morphologies, exploiting the synergetic relationship between different materials has paved the way for even newer counter electrode materials. Dye-sensitized solar cells: In 2016, Lu et al. mixed nickel cobalt sulfide microparticles with reduced graphene oxide (rGO) nanoflakes to create the counter electrode. Lu et al. discovered not only that the rGO acted as a co-catalyst in accelerating the triiodide reduction, but also that the microparticles and rGO had a synergistic interaction that decreased the charge transfer resistance of the overall system. Although the efficiency of this system was slightly lower than its platinum analog (efficiency of NCS/rGO system: 8.96%; efficiency of Pt system: 9.11%), it provided a platform on which further research can be conducted. 
Dye-sensitized solar cells: Construction In the case of the original Grätzel and O'Regan design, the cell has 3 primary parts. On top is a transparent anode made of fluoride-doped tin dioxide (SnO2:F) deposited on the back of a (typically glass) plate. On the back of this conductive plate is a thin layer of titanium dioxide (TiO2), which forms into a highly porous structure with an extremely high surface area. The (TiO2) is chemically bound by a process called sintering. TiO2 only absorbs a small fraction of the solar photons (those in the UV). The plate is then immersed in a mixture of a photosensitive ruthenium-polypyridyl dye (also called molecular sensitizers) and a solvent. After soaking the film in the dye solution, a thin layer of the dye is left covalently bonded to the surface of the TiO2. The bond is either an ester, chelating, or bidentate bridging linkage. Dye-sensitized solar cells: A separate plate is then made with a thin layer of the iodide electrolyte spread over a conductive sheet, typically platinum metal. The two plates are then joined and sealed together to prevent the electrolyte from leaking. The construction is simple enough that there are hobby kits available to hand-construct them. Although they use a number of "advanced" materials, these are inexpensive compared to the silicon needed for normal cells because they require no expensive manufacturing steps. TiO2, for instance, is already widely used as a paint base. Dye-sensitized solar cells: One of the efficient DSSCs devices uses ruthenium-based molecular dye, e.g. [Ru(4,4'-dicarboxy-2,2'-bipyridine)2(NCS)2] (N3), that is bound to a photoanode via carboxylate moieties. The photoanode consists of 12 μm thick film of transparent 10–20 nm diameter TiO2 nanoparticles covered with a 4 μm thick film of much larger (400 nm diameter) particles that scatter photons back into the transparent film. The excited dye rapidly injects an electron into the TiO2 after light absorption. The injected electron diffuses through the sintered particle network to be collected at the front side transparent conducting oxide (TCO) electrode, while the dye is regenerated via reduction by a redox shuttle, I3−/I−, dissolved in a solution. Diffusion of the oxidized form of the shuttle to the counter electrode completes the circuit. Dye-sensitized solar cells: Mechanism of DSSCs The following steps convert in a conventional n-type DSSC photons (light) to current: The efficiency of a DSSC depends on four energy levels of the component: the excited state (approximately LUMO) and the ground state (HOMO) of the photosensitizer, the Fermi level of the TiO2 electrode and the redox potential of the mediator (I−/I3−) in the electrolyte. Dye-sensitized solar cells: Nanoplant-like morphology In DSSC, electrodes consisted of sintered semiconducting nanoparticles, mainly TiO2 or ZnO. These nanoparticle DSSCs rely on trap-limited diffusion through the semiconductor nanoparticles for the electron transport. This limits the device efficiency since it is a slow transport mechanism. Recombination is more likely to occur at longer wavelengths of radiation. Moreover, sintering of nanoparticles requires a high temperature of about 450 °C, which restricts the fabrication of these cells to robust, rigid solid substrates. It has been proven that there is an increase in the efficiency of DSSC, if the sintered nanoparticle electrode is replaced by a specially designed electrode possessing an exotic 'nanoplant-like' morphology. 
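To make the "four energy levels" mentioned above more concrete, the following is a minimal back-of-the-envelope sketch (not from the source): it assumes illustrative, literature-style level positions on the electrochemical (NHE) scale for the TiO2 conduction band, the I−/I3− mediator and an N3-like dye, and checks that electron injection and dye regeneration are both energetically downhill while the level gap bounds the attainable photovoltage.

```python
# Minimal sketch (not from the source): how the four energy levels quoted for a
# conventional n-type DSSC constrain the cell. All numbers are illustrative,
# assumed values on the electrochemical (NHE) scale, not data from this article.

E_CB_TiO2  = -0.50   # V vs NHE, TiO2 conduction band / quasi-Fermi level (assumed)
E_redox    = +0.35   # V vs NHE, I-/I3- mediator redox potential (assumed)
E_dye_HOMO = +1.10   # V vs NHE, ground-state oxidation potential of an N3-like dye (assumed)
E_dye_LUMO = -0.75   # V vs NHE, excited-state oxidation potential of the dye (assumed)

# Electron injection (excited dye -> TiO2) is downhill if the excited dye level
# sits above (is more negative than) the conduction band edge.
injection_driving_force = E_CB_TiO2 - E_dye_LUMO      # ~0.25 eV

# Dye regeneration (I- reducing the oxidized dye) is downhill if the mediator
# sits above (is less positive than) the oxidized-dye level.
regeneration_driving_force = E_dye_HOMO - E_redox     # ~0.75 eV

# The maximum photovoltage is bounded by the Fermi-level / redox-potential gap.
V_oc_max = E_redox - E_CB_TiO2                        # ~0.85 V (observed Voc ~0.7 V)

print(injection_driving_force, regeneration_driving_force, V_oc_max)
```

With these assumed levels both driving forces come out positive, and the bound of roughly 0.85 V sits just above the ~0.7 V open-circuit voltage reported for real cells later in this article.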
Dye-sensitized solar cells: Operation In a conventional n-type DSSC, sunlight enters the cell through the transparent SnO2:F top contact, striking the dye on the surface of the TiO2. Photons striking the dye with enough energy to be absorbed create an excited state of the dye, from which an electron can be "injected" directly into the conduction band of the TiO2. From there it moves by diffusion (as a result of an electron concentration gradient) to the clear anode on top. Dye-sensitized solar cells: Meanwhile, the dye molecule has lost an electron and the molecule will decompose if another electron is not provided. The dye strips one from iodide in electrolyte below the TiO2, oxidizing it into triiodide. This reaction occurs quite quickly compared to the time that it takes for the injected electron to recombine with the oxidized dye molecule, preventing this recombination reaction that would effectively short-circuit the solar cell. Dye-sensitized solar cells: The triiodide then recovers its missing electron by mechanically diffusing to the bottom of the cell, where the counter electrode re-introduces the electrons after flowing through the external circuit. Dye-sensitized solar cells: Efficiency Several important measures are used to characterize solar cells. The most obvious is the total amount of electrical power produced for a given amount of solar power shining on the cell. Expressed as a percentage, this is known as the solar conversion efficiency. Electrical power is the product of current and voltage, so the maximum values for these measurements are important as well, Jsc and Voc respectively. Finally, in order to understand the underlying physics, the "quantum efficiency" is used to compare the chance that one photon (of a particular energy) will create one electron. Dye-sensitized solar cells: In quantum efficiency terms, DSSCs are extremely efficient. Due to their "depth" in the nanostructure there is a very high chance that a photon will be absorbed, and the dyes are very effective at converting them to electrons. Most of the small losses that do exist in DSSC's are due to conduction losses in the TiO2 and the clear electrode, or optical losses in the front electrode. The overall quantum efficiency for green light is about 90%, with the "lost" 10% being largely accounted for by the optical losses in the top electrode. The quantum efficiency of traditional designs vary, depending on their thickness, but are about the same as the DSSC. Dye-sensitized solar cells: In theory, the maximum voltage generated by such a cell is simply the difference between the (quasi-)Fermi level of the TiO2 and the redox potential of the electrolyte, about 0.7 V under solar illumination conditions (Voc). That is, if an illuminated DSSC is connected to a voltmeter in an "open circuit", it would read about 0.7 V. In terms of voltage, DSSCs offer slightly higher Voc than silicon, about 0.7 V compared to 0.6 V. This is a fairly small difference, so real-world differences are dominated by current production, Jsc. Dye-sensitized solar cells: Although the dye is highly efficient at converting absorbed photons into free electrons in the TiO2, only photons absorbed by the dye ultimately produce current. The rate of photon absorption depends upon the absorption spectrum of the sensitized TiO2 layer and upon the solar flux spectrum. The overlap between these two spectra determines the maximum possible photocurrent. 
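As a rough numerical illustration of these figures of merit (a sketch only; the fill factor and the particular current and voltage values are assumed, chosen to be consistent with the representative numbers quoted elsewhere in this article), the power conversion efficiency follows from Jsc, Voc, the fill factor and the incident solar power:

```python
# Illustrative power-conversion-efficiency estimate for a DSSC (assumed values).
J_sc = 20e-3    # A/cm^2, short-circuit current density (~20 mA/cm^2, assumed)
V_oc = 0.7      # V, open-circuit voltage (assumed)
FF   = 0.70     # fill factor, ratio of maximum power to J_sc * V_oc (assumed)
P_in = 100e-3   # W/cm^2, standard AM1.5 solar irradiance

efficiency = (J_sc * V_oc * FF) / P_in
print(f"power conversion efficiency ~ {efficiency:.1%}")   # ~9.8%, in line with the ~11% peak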
Typically used dye molecules generally have poorer absorption in the red part of the spectrum compared to silicon, which means that fewer of the photons in sunlight are usable for current generation. These factors limit the current generated by a DSSC, for comparison, a traditional silicon-based solar cell offers about 35 mA/cm2, whereas current DSSCs offer about 20 mA/cm2. Dye-sensitized solar cells: Overall peak power conversion efficiency for current DSSCs is about 11%. Current record for prototypes lies at 15%. Dye-sensitized solar cells: Degradation DSSCs degrade when exposed to light. In 2014 air infiltration of the commonly-used amorphous Spiro-MeOTAD hole-transport layer was identified as the primary cause of the degradation, rather than oxidation. The damage could be avoided by the addition of an appropriate barrier.The barrier layer may include UV stabilizers and/or UV absorbing luminescent chromophores (which emit at longer wavelengths which may be reabsorbed by the dye) and antioxidants to protect and improve the efficiency of the cell. Dye-sensitized solar cells: Advantages DSSCs are currently the most efficient third-generation (2005 Basic Research Solar Energy Utilization 16) solar technology available. Other thin-film technologies are typically between 5% and 13%, and traditional low-cost commercial silicon panels operate between 14% and 17%. This makes DSSCs attractive as a replacement for existing technologies in "low density" applications like rooftop solar collectors, where the mechanical robustness and light weight of the glass-less collector is a major advantage. They may not be as attractive for large-scale deployments where higher-cost higher-efficiency cells are more viable, but even small increases in the DSSC conversion efficiency might make them suitable for some of these roles as well. Dye-sensitized solar cells: There is another area where DSSCs are particularly attractive. The process of injecting an electron directly into the TiO2 is qualitatively different from that occurring in a traditional cell, where the electron is "promoted" within the original crystal. In theory, given low rates of production, the high-energy electron in the silicon could re-combine with its own hole, giving off a photon (or other form of energy) which does not result in current being generated. Although this particular case may not be common, it is fairly easy for an electron generated by another atom to combine with a hole left behind in a previous photoexcitation. Dye-sensitized solar cells: In comparison, the injection process used in the DSSC does not introduce a hole in the TiO2, only an extra electron. Although it is energetically possible for the electron to recombine back into the dye, the rate at which this occurs is quite slow compared to the rate that the dye regains an electron from the surrounding electrolyte. Recombination directly from the TiO2 to species in the electrolyte is also possible although, again, for optimized devices this reaction is rather slow. On the contrary, electron transfer from the platinum coated electrode to species in the electrolyte is necessarily very fast. Dye-sensitized solar cells: As a result of these favorable "differential kinetics", DSSCs work even in low-light conditions. DSSCs are therefore able to work under cloudy skies and non-direct sunlight, whereas traditional designs would suffer a "cutout" at some lower limit of illumination, when charge carrier mobility is low and recombination becomes a major issue. 
The cutoff is so low they are even being proposed for indoor use, collecting energy for small devices from the lights in the house.A practical advantage which DSSCs share with most thin-film technologies, is that the cell's mechanical robustness indirectly leads to higher efficiencies at higher temperatures. In any semiconductor, increasing temperature will promote some electrons into the conduction band "mechanically". The fragility of traditional silicon cells requires them to be protected from the elements, typically by encasing them in a glass box similar to a greenhouse, with a metal backing for strength. Such systems suffer noticeable decreases in efficiency as the cells heat up internally. DSSCs are normally built with only a thin layer of conductive plastic on the front layer, allowing them to radiate away heat much easier, and therefore operate at lower internal temperatures. Dye-sensitized solar cells: Disadvantages The major disadvantage to the DSSC design is the use of the liquid electrolyte, which has temperature stability problems. At low temperatures the electrolyte can freeze, halting power production and potentially leading to physical damage. Higher temperatures cause the liquid to expand, making sealing the panels a serious problem. Another disadvantage is that costly ruthenium (dye), platinum (catalyst) and conducting glass or plastic (contact) are needed to produce a DSSC. A third major drawback is that the electrolyte solution contains volatile organic compounds (or VOC's), solvents which must be carefully sealed as they are hazardous to human health and the environment. This, along with the fact that the solvents permeate plastics, has precluded large-scale outdoor application and integration into flexible structure.Replacing the liquid electrolyte with a solid has been a major ongoing field of research. Recent experiments using solidified melted salts have shown some promise, but currently suffer from higher degradation during continued operation, and are not flexible. Dye-sensitized solar cells: Photocathodes and tandem cells Dye sensitised solar cells operate as a photoanode (n-DSC), where photocurrent result from electron injection by the sensitized dye. Photocathodes (p-DSCs) operate in an inverse mode compared to the conventional n-DSC, where dye-excitation is followed by rapid electron transfer from a p-type semiconductor to the dye (dye-sensitized hole injection, instead of electron injection). Such p-DSCs and n-DSCs can be combined to construct tandem solar cells (pn-DSCs) and the theoretical efficiency of tandem DSCs is well beyond that of single-junction DSCs. Dye-sensitized solar cells: A standard tandem cell consists of one n-DSC and one p-DSC in a simple sandwich configuration with an intermediate electrolyte layer. n-DSC and p-DSC are connected in series, which implies that the resulting photocurrent will be controlled by the weakest photoelectrode, whereas photovoltages are additive. Thus, photocurrent matching is very important for the construction of highly efficient tandem pn-DSCs. However, unlike n-DSCs, fast charge recombination following dye-sensitized hole injection usually resulted in low photocurrents in p-DSC and thus hampered the efficiency of the overall device. 
Dye-sensitized solar cells: Researchers have found that using dyes comprising a perylenemonoimide (PMI) as the acceptor and an oligothiophene coupled to triphenylamine as the donor greatly improves the performance of p-DSCs by reducing the charge recombination rate following dye-sensitized hole injection. The researchers constructed a tandem DSC device with NiO on the p-DSC side and TiO2 on the n-DSC side. Photocurrent matching was achieved through adjustment of NiO and TiO2 film thicknesses to control the optical absorptions and therefore match the photocurrents of both electrodes. The energy conversion efficiency of the device is 1.91%, which exceeds the efficiency of its individual components, but is still much lower than that of high-performance n-DSC devices (6%–11%). The results are still promising since the tandem DSC was in itself rudimentary. The dramatic improvement in performance in p-DSCs can eventually lead to tandem devices with much greater efficiency than lone n-DSCs. As previously mentioned, using a solid-state electrolyte has several advantages over a liquid system (such as no leakage and faster charge transport), which has also been realised for dye-sensitised photocathodes. Using electron transporting materials such as PCBM, TiO2 and ZnO instead of the conventional liquid redox couple electrolyte, researchers have managed to fabricate solid-state p-DSCs (p-ssDSCs), aiming for solid-state tandem dye-sensitized solar cells, which have the potential to achieve much greater photovoltages than a liquid tandem device. Development: The dyes used in early experimental cells (circa 1995) were sensitive only in the high-frequency end of the solar spectrum, in the UV and blue. Newer versions were quickly introduced (circa 1999) that had much wider frequency response, notably "triscarboxy-ruthenium terpyridine" [Ru(4,4',4"-(COOH)3-terpy)(NCS)3], which is efficient right into the low-frequency range of red and IR light. The wide spectral response results in the dye having a deep brown-black color, and it is referred to simply as "black dye". The dyes have an excellent chance of converting a photon into an electron, originally around 80% but improving to almost perfect conversion in more recent dyes; the overall efficiency is about 90%, with the "lost" 10% being largely accounted for by the optical losses in the top electrode. Development: A solar cell must be capable of producing electricity for at least twenty years, without a significant decrease in efficiency (life span). The "black dye" system was subjected to 50 million cycles, the equivalent of ten years' exposure to the sun in Switzerland. No discernible performance decrease was observed. However the dye is subject to breakdown in high-light situations. Over the last decade an extensive research program has been carried out to address these concerns. The newer dyes included 1-ethyl-3-methylimidazolium tetracyanoborate [EMIB(CN)4], which is extremely light- and temperature-stable, copper-diselenium [Cu(In,Ga)Se2], which offers higher conversion efficiencies, and others with varying special-purpose properties. Development: DSSCs are still at the start of their development cycle. Efficiency gains are possible and have recently started more widespread study. These include the use of quantum dots for conversion of higher-energy (higher frequency) light into multiple electrons, using solid-state electrolytes for better temperature response, and changing the doping of the TiO2 to better match it with the electrolyte being used.
Development: New developments 2003 A group of researchers at the École Polytechnique Fédérale de Lausanne (EPFL) has reportedly increased the thermostability of DSC by using amphiphilic ruthenium sensitizer in conjunction with quasi-solid-state gel electrolyte. The stability of the device matches that of a conventional inorganic silicon-based solar cell. The cell sustained heating for 1,000 h at 80 °C. The group has previously prepared a ruthenium amphiphilic dye Z-907 (cis-Ru(H2dcbpy)(dnbpy)(NCS)2, where the ligand H2dcbpy is 4,4′-dicarboxylic acid-2,2′-bipyridine and dnbpy is 4,4′-dinonyl-2,2′-bipyridine) to increase dye tolerance to water in the electrolytes. In addition, the group also prepared a quasi-solid-state gel electrolyte with a 3-methoxypropionitrile (MPN)-based liquid electrolyte that was solidified by a photochemically stable fluorine polymer, polyvinylidenefluoride-co-hexafluoropropylene (PVDF-HFP). Development: The use of the amphiphilic Z-907 dye in conjunction with the polymer gel electrolyte in DSC achieved an energy conversion efficiency of 6.1%. More importantly, the device was stable under thermal stress and soaking with light. The high conversion efficiency of the cell was sustained after heating for 1,000 h at 80 °C, maintaining 94% of its initial value. After accelerated testing in a solar simulator for 1,000 h of light-soaking at 55 °C (100 mW cm−2) the efficiency had decreased by less than 5% for cells covered with an ultraviolet absorbing polymer film. These results are well within the limit for that of traditional inorganic silicon solar cells. Development: The enhanced performance may arise from a decrease in solvent permeation across the sealant due to the application of the polymer gel electrolyte. The polymer gel electrolyte is quasi-solid at room temperature, and becomes a viscous liquid (viscosity: 4.34 mPa·s) at 80 °C compared with the traditional liquid electrolyte (viscosity: 0.91 mPa·s). The much improved stabilities of the device under both thermal stress and soaking with light has never before been seen in DSCs, and they match the durability criteria applied to solar cells for outdoor use, which makes these devices viable for practical application. Development: 2006 The first successful solid-hybrid dye-sensitized solar cells were reported.To improve electron transport in these solar cells, while maintaining the high surface area needed for dye adsorption, two researchers have designed alternate semiconductor morphologies, such as arrays of nanowires and a combination of nanowires and nanoparticles, to provide a direct path to the electrode via the semiconductor conduction band. Such structures may provide a means to improve the quantum efficiency of DSSCs in the red region of the spectrum, where their performance is currently limited.In August 2006, to prove the chemical and thermal robustness of the 1-ethyl-3 methylimidazolium tetracyanoborate solar cell, the researchers subjected the devices to heating at 80 °C in the dark for 1000 hours, followed by light soaking at 60 °C for 1000 hours. After dark heating and light soaking, 90% of the initial photovoltaic efficiency was maintained – the first time such excellent thermal stability has been observed for a liquid electrolyte that exhibits such a high conversion efficiency. Contrary to silicon solar cells, whose performance declines with increasing temperature, the dye-sensitized solar-cell devices were only negligibly influenced when increasing the operating temperature from ambient to 60 °C. 
Development: 2007 Wayne Campbell at Massey University, New Zealand, has experimented with a wide variety of organic dyes based on porphyrin. In nature, porphyrin is the basic building block of the hemoproteins, which include chlorophyll in plants and hemoglobin in animals. He reports efficiency on the order of 5.6% using these low-cost dyes. Development: 2008 An article published in Nature Materials demonstrated cell efficiencies of 8.2% using a new solvent-free liquid redox electrolyte consisting of a melt of three salts, as an alternative to using organic solvents as an electrolyte solution. Although the efficiency with this electrolyte is less than the 11% being delivered using the existing iodine-based solutions, the team is confident the efficiency can be improved. Development: 2009 A group of researchers at Georgia Tech made dye-sensitized solar cells with a higher effective surface area by wrapping the cells around a quartz optical fiber. The researchers removed the cladding from optical fibers, grew zinc oxide nanowires along the surface, treated them with dye molecules, surrounded the fibers by an electrolyte and a metal film that carries electrons off the fiber. The cells are six times more efficient than a zinc oxide cell with the same surface area. Photons bounce inside the fiber as they travel, so there are more chances to interact with the solar cell and produce more current. These devices only collect light at the tips, but future fiber cells could be made to absorb light along the entire length of the fiber, which would require a coating that is conductive as well as transparent. Max Shtein of the University of Michigan said a sun-tracking system would not be necessary for such cells, and would work on cloudy days when light is diffuse. Development: 2010 Researchers at the École Polytechnique Fédérale de Lausanne and at the Université du Québec à Montréal claim to have overcome two of the DSC's major issues: "New molecules" have been created for the electrolyte, resulting in a liquid or gel that is transparent and non-corrosive, which can increase the photovoltage and improve the cell's output and stability. At the cathode, platinum was replaced by cobalt sulfide, which is far less expensive, more efficient, more stable and easier to produce in the laboratory. 2011 Dyesol and Tata Steel Europe announced in June the development of the world's largest dye sensitized photovoltaic module, printed onto steel in a continuous line.Dyesol and CSIRO announced in October a Successful Completion of Second Milestone in Joint Dyesol / CSIRO Project. Dyesol Director Gordon Thompson said, "The materials developed during this joint collaboration have the potential to significantly advance the commercialisation of DSC in a range of applications where performance and stability are essential requirements. Development: Dyesol is extremely encouraged by the breakthroughs in the chemistry allowing the production of the target molecules. This creates a path to the immediate commercial utilisation of these new materials."Dyesol and Tata Steel Europe announced in November the targeted development of Grid Parity Competitive BIPV solar steel that does not require government subsidised feed in tariffs. TATA-Dyesol "Solar Steel" Roofing is currently being installed on the Sustainable Building Envelope Centre (SBEC) in Shotton, Wales. 
Development: 2012 Northwestern University researchers announced a solution to a primary problem of DSSCs, that of difficulties in using and containing the liquid electrolyte and the consequent relatively short useful life of the device. This is achieved through the use of nanotechnology and the conversion of the liquid electrolyte to a solid. The current efficiency is about half that of silicon cells, but the cells are lightweight and potentially of much lower cost to produce. Development: 2013 During the last 5–10 years, a new kind of DSSC has been developed – the solid-state dye-sensitized solar cell. In this case the liquid electrolyte is replaced by one of several solid hole-conducting materials. From 2009 to 2013 the efficiency of solid-state DSSCs increased dramatically from 4% to 15%. Michael Grätzel announced the fabrication of solid-state DSSCs with 15.0% efficiency, reached by means of a hybrid perovskite CH3NH3PbI3 dye, subsequently deposited from the separated solutions of CH3NH3I and PbI2. The first architectural integration was demonstrated at EPFL's SwissTech Convention Center in partnership with Romande Energie. The total surface is 300 m2, in 1400 modules of 50 cm x 35 cm, designed by artists Daniel Schlaepfer and Catherine Bolle. Development: 2018 Researchers have investigated the role of surface plasmon resonances present on gold nanorods in the performance of dye-sensitized solar cells. They found that with increasing nanorod concentration the light absorption grew linearly; however, charge extraction was also dependent on the concentration. With an optimized concentration, they found that the overall power conversion efficiency improved from 5.31 to 8.86% for Y123 dye-sensitized solar cells. The synthesis of one-dimensional TiO2 nanostructures directly on fluorine-doped tin oxide glass substrates was successfully demonstrated via a two-step solvothermal reaction. Additionally, through a TiO2 sol treatment, the performance of the dual TiO2 nanowire cells was enhanced, reaching a power conversion efficiency of 7.65%. Stainless-steel-based counter electrodes for DSSCs have been reported which further reduce cost compared to conventional platinum-based counter electrodes and are suitable for outdoor application. Researchers from EPFL have advanced DSSCs based on copper complex redox electrolytes, which have achieved 13.1% efficiency under standard AM1.5G, 100 mW/cm2 conditions and a record 32% efficiency under 1000 lux of indoor light. Researchers from Uppsala University have used n-type semiconductors instead of a redox electrolyte to fabricate solid-state p-type dye-sensitized solar cells. Development: 2021 The field of building-integrated photovoltaics (BIPV) has gained attention from the scientific community due to its potential to reduce pollution and materials and electricity costs, as well as to improve the aesthetics of a building. In recent years, scientists have looked at ways to incorporate DSSCs in BIPV applications, since the dominant Si-based PV systems on the market have a limited presence in this field due to their energy-intensive manufacturing methods, poor conversion efficiency under low light intensities, and high maintenance requirements. In 2021, a group of researchers from the Silesian University of Technology in Poland developed a DSSC in which the classic glass counter electrode was replaced by an electrode based on a ceramic tile and nickel foil.
The motivation for this change was that, although glass substrates have resulted in the highest recorded efficiencies for DSSCs, lighter and more flexible materials are essential for BIPV applications like roof tiles or building facades. These include plastic films, metals, steel, or paper, which may also reduce manufacturing costs. The team found that the cell had an efficiency of 4% (close to that of a solar cell with a glass counter electrode), demonstrating the potential for creating building-integrated DSSCs that are stable and low-cost. Development: 2022 Photosensitizers are dye compounds that absorb the photons from incoming light and eject electrons, producing an electric current that can be used to power a device or a storage unit. According to a new study performed by Michael Grätzel and fellow scientist Anders Hagfeldt, advances in photosensitizers have resulted in a substantial improvement in the performance of DSSCs under solar and ambient light conditions. Another key factor in achieving power-conversion records is cosensitization, due to its ability to combine dyes that can absorb light across a wider range of the light spectrum. Cosensitization is a chemical manufacturing method that produces DSSC electrodes containing two or more different dyes with complementary optical absorption capabilities, enabling the use of all available sunlight. The researchers from Switzerland's École polytechnique fédérale de Lausanne (EPFL) found that the efficiency of cosensitized solar cells can be raised by the pre-adsorption of a monolayer of a hydroxamic acid derivative on the surface of nanocrystalline mesoporous titanium dioxide, which functions as the electron transport mechanism of the electrode. The two photosensitizer molecules used in the study were the organic dye SL9, which served as the primary long-wavelength light harvester, and the dye SL10, which provided an additional absorption peak that compensates for SL9's inefficient blue-light harvesting. It was found that adding this hydroxamic acid layer improved the dye layer's molecular packing and ordering. This slowed down the adsorption of the sensitizers and augmented their fluorescence quantum yield, improving the power conversion efficiency of the cell. The DSSC developed by the team showed a record-breaking power conversion efficiency of 15.2% under standard global simulated sunlight and long-term operational stability over 500 hours. In addition, devices with a larger active area exhibited efficiencies of around 30% while maintaining high stability, offering new possibilities for the DSSC field. Market introduction: Several commercial providers are promising availability of DSCs in the near future: Fujikura is a major supplier of DSSCs for applications in IoT, smart factories, agriculture and infrastructure modelling. (See: Fujikura Ltd. | Fujikura Releases thin Dye-Sensitized Solar Cell module panels, and also https://dsc.fujikura.jp/en/.) Market introduction: Dyesol officially opened its new manufacturing facilities in Queanbeyan, Australia on 7 October 2008. It has subsequently announced partnerships with Tata Steel (TATA-Dyesol) and Pilkington Glass (Dyetec-Solar) for the development and large-scale manufacture of DSC BIPV. Dyesol has also entered working relationships with Merck, Umicore, CSIRO, the Japanese Ministry of Economy and Trade, Singapore Aerospace Manufacturing and a joint venture with TIMO Korea (Dyesol-TIMO).
Market introduction: Solaronix, a Swiss company that has specialized in the production of DSC materials since 1993, extended its premises in 2010 to host a manufacturing pilot line of DSC modules. Market introduction: SolarPrint was founded in Ireland in 2008 by Dr. Mazhar Bari, Andre Fernon and Roy Horgan. SolarPrint was the first Ireland-based commercial entity involved in the manufacturing of PV technology. SolarPrint's innovation was its solution to the solvent-based electrolyte, which to date has prohibited the mass commercialisation of DSSC. The company went into receivership in 2014 and was wound up. Market introduction: G24innovations was founded in 2006 and is based in Cardiff, South Wales, UK. On 17 October 2007, it claimed the production of the first commercial-grade dye-sensitised thin films. Sony Corporation has developed dye-sensitized solar cells with an energy conversion efficiency of 10%, a level seen as necessary for commercial use. Tasnee has entered a strategic investment agreement with Dyesol. Market introduction: H.Glass was founded in 2011 in Switzerland. H.Glass has put enormous effort into creating an industrial process for DSSC technology – the first results were shown at EXPO 2015 in Milano at the Austrian Pavilion. A milestone for DSSC is the Science Tower in Austria – it is the largest installation of DSSC in the world – carried out by SFL technologies. Market introduction: Exeger Operations AB, Sweden, has built a factory in Stockholm with a capacity of 300,000 m2. SoftBank Group Corp. made two investments of US$10M in Exeger during 2019. [1]
**Barium sulfite** Barium sulfite: Barium sulfite is the inorganic compound with the chemical formula BaSO3. It is a white powder that finds few applications. It is an intermediate in the carbothermal reduction of barium sulfate to barium sulfide: BaSO4 + CO → BaSO3 + CO2
**Metabolic typing** Metabolic typing: Metabolic typing is a pseudoscience whose proponents believe that each person has a unique metabolism, and that the proportions of macromolecules (proteins, carbohydrates and fats) that are optimal for one person may not be optimal for another, and could even be detrimental to them. Metabolic typing: Metabolic typing uses common visible symptoms related to the skin, eyes, and other parts of the body to assess different aspects of a person's metabolism and categorize them into broad metabolic types. In addition, some proponents of metabolic typing use tests such as hair analysis to determine a person's metabolic type. A number of somewhat different metabolic typing diet plans are currently marketed, though the validity and effectiveness of metabolic typing have yet to be established. Background: Metabolic typing was introduced by William Donald Kelley, a dentist, in the 1960s. Kelley advocated basing dietary choices on the activity of one's sympathetic and parasympathetic nervous systems. In 1970, Kelley was convicted of practicing medicine without a license, as he had diagnosed a patient with lung cancer based on a fingerstick blood test and prescribed nutritional therapy. He continued to promote a metabolic typing diet through the 1980s. The practice has been further developed by others including Harold J. Kristal and William Wolcott. Effectiveness: Some metabolic typing companies use a battery of blood and urine tests performed by reputable laboratories, but interpret the results in an unconventional and medically questionable fashion. During a 1985 investigation into one such firm, an investigator sent two separate samples of his own blood and urine for analysis. He received two drastically different "metabolic typing" reports and dietary plans. Both plans involved the purchase of dietary supplements costing several dollars per day. Metabolic therapies: "Metabolic therapy", including administration of laetrile, was promoted for cancer patients by John Richardson in the San Francisco Bay Area in the 1970s, until his arrest for violating the California Cancer Law and revocation of his license by the California Board of Medical Quality Assurance. The Memorial Sloan-Kettering Cancer Center (MSKCC) website describes metabolic therapies as "strict dietary and detoxification regimens touted to prevent and treat cancer and degenerative diseases", a term and definition different from that used for metabolic typing in this article. The MSKCC website notes, in relation to three such anti-cancer therapies, that "...retrospective reviews of the Gerson, Kelley, and Contreras metabolic therapies show no evidence of efficacy." Metabolic diet: In his book, William Donald Kelley classified an individual's metabolism into three categories: fast oxidizers, slow oxidizers, and mixed oxidizers, who oxidize food at a normal rate. The recommended diet for an individual varies according to this rate of oxidation. Because fast oxidizers oxidize food quickly, they are advised to rely more on a diet rich in fats and proteins, which is supposed to help them tolerate hunger. Slow oxidizers are instead given a carbohydrate-rich diet, since eating more proteins or fats is said to cause them abdominal pain. Mixed oxidizers eat a mixture of the fat- and protein-rich and carbohydrate-rich diets.
**Planets in science fiction** Planets in science fiction: Planets in science fiction are fictional planets that appear in various media of the science fiction genre as story-settings or depicted locations. Planet lists: For planets from specific fictional milieux, use the following lists:
- Literature: Alliance–Union Universe by C. J. Cherryh, the works of Hal Clement, Childe Cycle by Gordon R. Dickson, Demon Princes by Jack Vance, Known Space by Larry Niven, Noon Universe by Arkady and Boris Strugatsky, The Three Worlds Cycle by Ian Irvine, Time Quintet by Madeleine L'Engle, and Uplift by David Brin each have their own planet list; various works by Kurt Vonnegut feature Tralfamadore (different planets with the same name).
- Comics: DC Comics planet list; Marvel Comics planet list.
- Film and television: Marvel Cinematic Universe planet list; Star Wars planet list.
- Animation: Teenage Mutant Ninja Turtles planet list.
- Computer/video games: Warcraft planet list.
List of planets: Planets and the works or franchises they appear in. This lists planets that have their own Wikipedia articles.
**Stone carving** Stone carving: Stone carving is an activity where pieces of rough natural stone are shaped by the controlled removal of stone. Owing to the permanence of the material, stone work has survived which was created during our prehistory or past time. Work carried out by paleolithic societies to create stone tools is more often referred to as knapping. Stone carving that is done to produce lettering is more often referred to as lettering. The process of removing stone from the earth is called mining or quarrying. Stone carving: Stone carving is one of the processes which may be used by an artist when creating a sculpture. The term also refers to the activity of masons in dressing stone blocks for use in architecture, building or civil engineering. It is also a phrase used by archaeologists, historians, and anthropologists to describe the activity involved in making some types of petroglyphs. History: The earliest known works of representational art are stone carvings. Often marks carved into rock or petroglyphs will survive where painted work will not. Prehistoric Venus figurines such as the Venus of Berekhat Ram may be as old as 250,000 years, and are carved in stones such as tuff and limestone. These earliest examples of the stone carving are the result of hitting or scratching a softer stone with a harder one, although sometimes more resilient materials such as antlers are known to have been used for relatively soft stone. Another early technique was to use an abrasive that was rubbed on the stone to remove the unwanted area. History: Prior to the discovery of steel by any culture, all stone carving was carried out by using an abrasion technique, following rough hewing of the stone block using hammers. The reason for this is that bronze, the hardest available metal until steel, is not hard enough to work any but the softest stone. The Ancient Greeks used the ductility of bronze to trap small granules of carborundum, that are naturally occurring on the island of Milos, thus making a very efficient file for abrading the stone. History: The development of iron made possible stone carving tools, such as chisels, drills and saws made from steel, that were capable of being hardened and tempered to a state hard enough to cut stone without deforming, while not being so brittle as to shatter. Carving tools have changed little since then. Modern, industrial, large quantity techniques still rely heavily on abrasion to cut and remove stone, although at a significantly faster rate with processes such as water erosion and diamond saw cutting. History: One modern stone carving technique uses a new process: The technique of applying sudden high temperature to the surface. The expansion of the top surface due to the sudden increase in temperature causes it to break away. On a small scale, Oxy-acetylene torches are used. On an industrial scale, lasers are used. On a massive scale, carvings such as the Crazy Horse Memorial carved from the Harney Peak granite of Mount Rushmore and the Confederate Memorial Park in Albany, Georgia are produced using jet heat torches. Stone sculpture: Carving stone into sculpture is an activity older than civilization itself. Prehistoric sculptures were usually human forms, such as the Venus of Willendorf and the faceless statues of the Cycladic cultures. Later cultures devised animal, human-animal and abstract forms in stone. The earliest cultures used abrasive techniques, and modern technology employs pneumatic hammers and other devices. 
But for most of human history, sculptors used hammer and chisel as the basic tools for carving stone. Stone sculpture: The process begins with the selection of a stone for carving. Some artists use the stone itself as inspiration; the Renaissance artist Michelangelo claimed that his job was to free the human form trapped inside the block. Other artists begin with a form already in mind and find a stone to complement their vision. The sculptor may begin by forming a model in clay or wax, sketching the form of the statue on paper or drawing a general outline of the statue on the stone itself. Stone sculpture: When ready to carve, the artist usually begins by knocking off large portions of unwanted stone. This is the "roughing out" stage of the sculpting process. For this task they may select a point chisel, which is a long, hefty piece of steel with a point at one end and a broad striking surface at the other. A pitching tool may also be used at this early stage; which is a wedge-shaped chisel with a broad, flat edge. The pitching tool is useful for splitting the stone and removing large, unwanted chunks. Those two chisels are used in combination with a masons driving hammer. Stone sculpture: Once the general shape of the statue has been determined, the sculptor uses other tools to refine the figure. A toothed chisel or claw chisel has multiple gouging surfaces which create parallel lines in the stone. These tools are generally used to add texture to the figure. An artist might mark out specific lines by using calipers to measure an area of stone to be addressed, and marking the removal area with pencil, charcoal or chalk. The stone carver generally uses a shallower stroke at this point in the process, usually in combination with a wooden mallet. Stone sculpture: Eventually the sculptor has changed the stone from a rough block into the general shape of the finished statue. Tools called rasps and rifflers are then used to enhance the shape into its final form. A rasp is a flat, steel tool with a coarse surface. The sculptor uses broad, sweeping strokes to remove excess stone as small chips or dust. A riffler is a smaller variation of the rasp, which can be used to create details such as folds of clothing or locks of hair. Stone sculpture: The final stage of the carving process is polishing. Sandpaper can be used as a first step in the polishing process, or sand cloth. Emery, a stone that is harder and rougher than the sculpture media, is also used in the finishing process. This abrading, or wearing away, brings out the color of the stone, reveals patterns in the surface and adds a sheen. Tin and iron oxides are often used to give the stone a highly reflective exterior. Stone sculpture: Sculptures can be carved via either the direct or the indirect carving method. Indirect carving is a way of carving by using an accurate clay, wax or plaster model, which is then copied with the use of a compass or proportional dividers or a pointing machine. The direct carving method is a way of carving in a more intuitive way, without first making an elaborate model. Sometimes a sketch on paper or a rough clay draft is made. Stone carving considerations: Stone has been used for carving since ancient times for many reasons. Most types of stone are easier to find than metal ores, which have to be mined and smelted. Stone can be dug from the surface and carved with hand tools. Stone is more durable than wood, and carvings in stone last much longer than wooden artifacts. 
Stone comes in many varieties and artists have abundant choices in color, quality and relative hardness. Stone carving considerations: Soft stone such as chalk, soapstone, pumice and Tufa can be easily carved with found items such as harder stone or in the case of chalk even the fingernail. Limestones and marbles can be worked using abrasives and simple iron tools. Granite, basalt and some metamorphic stone is difficult to carve even with iron or steel tools; usually tungsten carbide tipped tools are used, although abrasives still work well. Modern techniques often use abrasives attached to machine tools to cut the stone. Precious and semi-precious gemstones are also carved into delicate shapes for jewellery or larger items, and polished; this is sometimes referred to as lapidary, although strictly speaking lapidary refers to cutting and polishing alone. When worked, some stones release dust that can damage lungs (silica crystals are usually to blame), so a respirator is sometimes needed. Stone shaping and tools: Basic stone carving tools fall into five categories: Percussion tools for hitting - such as mallets, axes, adzes, bouchards and toothed hammers. Tools for rough shaping of stone, to form a block the size needed for the carving. These include feathers and wedges and pitching tools. Chisels for cutting - such as lettering chisels, points, pitching tools, and claw chisels. Chisels, in turn, may be handheld and hammered or pneumatic powered. Diamond tools which include burrs, cup wheels, and blades mounted on a host of power tools. These are used sometimes through the entire carving process from rough work to the final finish. Stone shaping and tools: Abrasives for material removals - such as carborundum blocks, drills, saws, grinding and cutting wheels, water-abrasive machinery and dressing tools such as French and English drags.More advanced processes, such as laser cutting and jet torches, use sudden high temperature with a combination of cooling water to spall flakes of stone. Other modern processes may involve diamond-wire machines or other large scale production equipment to remove large sections of undesired stone. Stone shaping and tools: The use of chisels for stone carving is possible in several ways. Two are: The mason's stroke, in which a flat chisel is used at approximately 90 degrees to the surface in an organized sweep. It shatters the stone beneath it and each successive pass lowers the surface. Stone shaping and tools: The lettering stroke, in which the chisel is used along the surface at approximately 30 degrees to cut beneath the existing surface.There are many types and styles of stone carving tools, each carver will decide for themselves which tools to use. Traditionalists might use hand tools only. Lettering chisels for incising small strokes create the details of letters in larger applications. Stone shaping and tools: Fishtail carving chisels are used to create pockets, valleys and for intricate carving, whilst providing good visibility around the stone. Masonry chisels are used for the general shaping of stones. Stone point tools are used to rough out the surface of the stone. Stone claw tools are used to remove the peaks and troughs left from the previously used tools. Stone pitching tools are used to remove large quantities of stone. Stone shaping and tools: Stone nickers are used to split stones by tracing a line along the stone with progressive strikes until the stone breaks along the line.Powered pneumatic hammers make the hard work easier. 
Progress on shaping stone is faster with pneumatic carving tools. Air hammers (such as Cuturi) place many thousands of impacts per minute upon the end of the tool, which would usually be manufactured or modified to suit the purpose. This type of tool creates the ability to 'shave' the stone, providing a smooth and consistent stroke, allowing for larger surfaces to be worked. Stone shaping and tools: Among modern tool types, there are two main kinds of stone carving chisels:
- Heat-treated high-carbon steel tools - generally forged.
- Tungsten carbide tipped tools - generally forged and slotted, with carbide inserts brazed in to provide a harder and longer-wearing cutting edge.
**Influential observation** Influential observation: In statistics, an influential observation is an observation for a statistical calculation whose deletion from the dataset would noticeably change the result of the calculation. In particular, in regression analysis an influential observation is one whose deletion has a large effect on the parameter estimates. Assessment: Various methods have been proposed for measuring influence. Assume an estimated regression $y = Xb + e$, where $y$ is an $n \times 1$ column vector for the response variable, $X$ is the $n \times k$ design matrix of explanatory variables (including a constant), $e$ is the $n \times 1$ residual vector, and $b$ is a $k \times 1$ vector of estimates of some population parameter $\beta \in \mathbb{R}^k$. Also define $H \equiv X(X^{T}X)^{-1}X^{T}$, the projection matrix of $X$. Then we have the following measure of influence: $\mathrm{DFBETA}_i \equiv b - b_{(-i)} = \frac{(X^{T}X)^{-1}x_i^{T}e_i}{1 - h_{ii}}$, where $b_{(-i)}$ denotes the coefficients estimated with the $i$-th row $x_i$ of $X$ deleted, and $h_{ii} = x_i(X^{T}X)^{-1}x_i^{T}$ denotes the $i$-th element of the main diagonal of $H$. Thus DFBETA measures the difference in each parameter estimate with and without the influential point. There is a DFBETA for each variable and each observation (if there are $N$ observations and $k$ variables there are $N \cdot k$ DFBETAs). (A table in the source shows the DFBETAs for the third dataset from Anscombe's quartet, the bottom-left chart in the quartet's figure.) Outliers, leverage and influence: An outlier may be defined as a data point that differs significantly from other observations. A high-leverage point is an observation made at extreme values of the independent variables. Both types of atypical observations will force the regression line to be close to the point. In Anscombe's quartet, the bottom-right image has a point with high leverage and the bottom-left image has an outlying point.
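A small numerical sketch may help make the DFBETA formula concrete. The example below is illustrative only (synthetic data, not Anscombe's quartet); it computes DFBETA for every observation from the closed-form expression and checks one of them against literally deleting that observation and refitting.

```python
import numpy as np

# Synthetic data (illustrative only; any design matrix with a constant works)
rng = np.random.default_rng(0)
n, k = 20, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # constant + one regressor
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y             # OLS estimates
e = y - X @ b                     # residuals
h = np.diag(X @ XtX_inv @ X.T)    # leverages h_ii (diagonal of the projection matrix H)

# Closed-form DFBETA for each observation i: b - b_(-i) = (X'X)^{-1} x_i' e_i / (1 - h_ii)
dfbeta = (XtX_inv @ X.T).T * (e / (1 - h))[:, None]      # shape (n, k)

# Check against deleting observation 0 and refitting
i = 0
keep = np.arange(n) != i
b_minus_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
assert np.allclose(dfbeta[i], b - b_minus_i)
print(dfbeta[i])
```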
**Ganem oxidation** Ganem oxidation: In organic chemistry, the Ganem oxidation is a name reaction that allows for the preparation of carbonyls from primary or secondary alkyl halides with the use of trialkylamine N-oxides, such as N-methylmorpholine N-oxide or trimethylamine N-oxide. Mechanism: As in other oxoammonium-catalyzed oxidation reactions, the negatively charged oxygen atom of the trialkylamine N-oxide molecule attacks the alkyl halide in an SN2 manner, displacing the halide as a leaving group. A trialkylamine then deprotonates the α-carbon atom; the resulting electron pair shifts onto the oxygen atom, which in turn shifts its own excess electron pair onto the nitrogen atom. This generates the desired carbonyl, as well as the aforementioned trialkylamine. The reaction is an enhancement of the Kornblum oxidation protocol, which was originally developed using dimethyl sulfoxide or pyridine N-oxide as the nucleophile. Applications: The Ganem oxidation has been used as an intermediate step in the total synthesis of (−)-okilactomycin, converting a primary alkyl halide into an aldehyde.
**Semiwadcutter** Semiwadcutter: A semiwadcutter (SWC) or flat-nose is a type of all-purpose bullet commonly used in revolvers. The SWC combines features of the traditional round-nosed bullets and the wadcutter bullets used in target shooting, and is used in both revolver and rifle cartridges for hunting, target shooting and plinking. Full wadcutters frequently have problems reliably feeding from the magazines of semi-automatic pistols, so SWCs may be used when a true WC is desired but cannot be used for this reason. Semiwadcutter: The semiwadcutter design consists of a roughly conical shape with the tip truncated flat (called a meplat), sitting on a cylinder (A at right). The base of the cone is slightly smaller in diameter than the cylinder, leaving a sharp shoulder. The flat nose punches a clean hole in the target, rather than stretching/tearing it like a round nose bullet would, and the sharp shoulder enlarges the hole neatly, allowing easy and accurate scoring of the target. The SWC design offers better external ballistics than the wadcutter, as its conical nose produces less drag than the flat cylinder. A typical modification is to alter the conical section to make the sides concave, to reduce the bullet mass, or convex, to increase it. B shows a concave sided SWC, typical of a lightweight .45 ACP bullet used in bullseye shooting. The concave sides reduce the bullet weight, and thus the recoil, while keeping the overall length of the bullet long enough to feed reliably in a semi-automatic pistol such as the M1911 commonly found in bullseye competitions. Semiwadcutter: Some of the most famous SWC designs were developed by Elmer Keith for use in handgun hunting. These designs (C) use a wider front, and convex sides on the "cone" in front. This puts more weight in the front of the bullet, allowing a heavier bullet with no reduction in case capacity. Since Keith was a prime motivating force in the development of the first magnum handgun cartridge, the .357 Magnum, he was very interested in maximizing the amount of case volume for the slower burning powders needed to push heavy bullets at high velocities. The choice of bullet for the .357 Magnum cartridge varied during its development. During the development at Smith & Wesson, the original Keith bullet was modified slightly, to the form of the Sharpe bullet, which itself was based upon the Keith bullet, but which had 5/6 of the bearing surface of the Keith bullet, Keith bullets typically being made oversized and sized down. Winchester, however, upon experimenting further during the cartridge development, modified the Sharpe bullet shape slightly, while keeping the Sharpe contour of the bullet. The final choice of bullet for the .357 Magnum was thus based on the earlier Keith and Sharpe bullets, while additionally having slight differences from both.The Keith-style SWC has been taken even further, to produce designs that are nearly wadcutters in shape (D), but are intended for large game hunting with handguns. These have nearly cylindrical noses, which are as long as the firearm chamber allows, and just slightly smaller than bore diameter so they will easily chamber. The massive nose provides a large surface area for producing large wound channels, resulting in rapid incapacitation, and the heavy bullet provides excellent penetration. The wide nose is less prone to deformation than a narrow nose, allowing the bullet to keep its shape and continue to penetrate even if it encounters bone. 
Semiwadcutter: Originally Keith specified a meplat that was 65% of the bullet caliber, but later increased it to a 70% meplat. The other distinguishing characteristics of a "Keith-style" SWC are a double radius ogive, beveled crimp groove, three equal width driving bands, wide square bottomed grease groove, and a plain base with sharp corners. The wide forward driving band helps keep the bullet aligned as it jumps across the cylinder gap. Because of the three wide equal width driving bands, the total bearing surface is greater than half the overall length of the bullet. This large bearing surface helps the Keith-style SWC to be an inherently accurate bullet, and minimizes leading from gas blow-by. The wide square bottom grease groove holds ample lubricant.
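As a quick arithmetic illustration of the meplat specification (illustrative only; the .357-inch bullet diameter is simply an assumed example, not a figure from the source):

```python
# Illustrative check of the "Keith-style" meplat spec for an assumed .357-caliber bullet.
bullet_diameter = 0.357                       # inches, nominal bullet diameter (assumed example)
meplat_original = 0.65 * bullet_diameter      # Keith's original 65% meplat ~ 0.232 in
meplat_later    = 0.70 * bullet_diameter      # later 70% meplat            ~ 0.250 in
print(round(meplat_original, 3), round(meplat_later, 3))
```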
**Musicogenic epilepsy** Musicogenic epilepsy: Musicogenic epilepsy is a form of reflex epilepsy in which seizures are elicited by special stimuli. It was probably first described in 1605 by the French philosopher and scholar Joseph Justus Scaliger (1540-1609). Later publications followed, among others, in the eighteenth century by the German physician Samuel Schaarschmidt; in the nineteenth century by the British physician John C. Cooke (1823) and the British neurologist and epileptologist William Richard Gowers (1881); and in 1913 by the Russian neurologist, clinical neurophysiologist and psychiatrist Vladimir Mikhailovich Bekhterev. In 1937 the British neurologist Macdonald Critchley coined the term and classified the condition as a form of reflex epilepsy. Most patients have temporal lobe epilepsy. Listening to (and probably also thinking about or playing) usually very specific music with an emotional content triggers focal seizures with or without loss of awareness, occasionally also evolving to bilateral tonic-clonic seizures. Musicogenic epilepsy: Although musicality is, at least in non-musicians, predominantly located in the right temporal lobe, the seizure onset may also be in the left hemisphere. Of the approximately 100 patients reported in the literature so far, about 75% had temporal lobe epilepsy, women were slightly more affected, and the mean age of onset was about 28 years. Ictal EEG and SPECT findings as well as functional MRI studies localized the epileptogenic area predominantly in the right temporal lobe. Treatment with epilepsy surgery leading to complete seizure freedom has been reported.
**Cooling curve** Cooling curve: A cooling curve is a line graph that represents the change of phase of matter, typically from a gas to a solid or a liquid to a solid. The independent variable (X-axis) is time and the dependent variable (Y-axis) is temperature. Below is an example of a cooling curve used in castings. Cooling curve: The initial point of the graph is the starting temperature of the matter, here noted as the "pouring temperature". When the phase change occurs, there is a "thermal arrest"; that is, the temperature stays constant. This is because the matter has more internal energy as a liquid or gas than in the state that it is cooling to. The amount of energy required for a phase change is known as latent heat. The "cooling rate" is the slope of the cooling curve at any point. Unlike a pure substance, an alloy melts and solidifies over a range of temperatures rather than at a single melting point. On cooling, the molten alloy first reaches the liquidus temperature, where the freezing range begins, and solidification is complete when it reaches the solidus temperature.
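The ideas above (thermal arrest, latent heat, and the cooling rate as the local slope) can be illustrated numerically. The sketch below assumes simple Newtonian cooling and made-up material constants; it is not a model of any particular casting.

```python
import numpy as np

# Minimal cooling-curve sketch: Newtonian cooling with a thermal arrest at the
# freezing temperature. All constants are illustrative, not real material data.
T_pour, T_freeze, T_ambient = 700.0, 660.0, 25.0   # °C
k = 0.002              # cooling constant, 1/s
latent_budget = 300.0  # "heat" to remove during the arrest (arbitrary units)

dt, t, T, removed = 1.0, 0.0, T_pour, 0.0
times, temps = [], []
while T > T_ambient + 1.0 and t < 50_000:
    q = k * (T - T_ambient) * dt          # heat lost in this time step
    if T - q <= T_freeze and removed < latent_budget:
        T = T_freeze                      # thermal arrest: temperature stays constant
        removed += q                      # heat removed goes into latent heat instead
    else:
        T -= q                            # ordinary cooling: temperature drops
    times.append(t)
    temps.append(T)
    t += dt

# The cooling rate is the slope of the curve; it approaches zero during the arrest.
rates = np.gradient(temps, times)
print(f"steepest cooling rate ≈ {abs(rates.min()):.2f} °C/s")
```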
**Supplemental instruction** Supplemental instruction: Supplemental instruction (SI) is an academic support model that uses peer learning to improve university student retention and student success in high-attrition courses. Supplemental Instruction is used worldwide by institutions of higher learning. SI is also called "Peer-Assisted Study Sessions," "PASS" or "SI-PASS" in parts of the Africa, Europe, North America, and Oceania. According to an article in the peer-reviewed journal, Research and Teaching in Developmental Education, "Since its introduction in 1974 at the University of Missouri-Kansas City by Deanna C. Martin, Supplemental Instruction (SI) has been implemented, studied, and evaluated for its effectiveness across a variety of disciplines and institutional levels." The article further noted that for some students, "SI is a program that works. Since SI is an enrichment program designed to target high risk courses, it takes the emphasis off the individual student's projected performance. A high risk course, as defined repeatedly in the literature, is any course (usually entry-level) in which unsuccessful enrollment (percentages of D's and F's as final grades and rates of withdrawal from the course and/or institution) exceeds 30%." Peer Learning Using the SI Model: Supplemental Instruction differs from other types of student support, such as tutoring: "Typical learning center programs operate on a drop-in basis, offering services primarily designed to address the needs of high-risk students. Staff devote a high percentage of time to one-on-one tutorial instruction, with basic skills courses and workshops complementing individual services."Unlike tutoring, SI is attached to the course and not the student: "This approach focused not on 'at risk students,' but rather on 'at risk classes,' entry-level classes in health sciences, and later in general arts and sciences classes." According to Martin and Arendale, Supplemental Instruction "provides regularly scheduled, out-of-class, peer-facilitated sessions that offer students an opportunity to discuss and process course information." An SI program organizes peer support through an "SI leader," who is typically a student who succeeded in the particular academic course (e.g., Organic Chemistry, Economics 101, Algebra II). SI leaders "attend the course lectures where they take notes and complete assigned readings. The specialists also schedule and conduct three or four, fifty-minute SI sessions each week at times convenient to the majority of students in the course. Student attendance is voluntary. Individual attendance by participants ranges widely from one to twenty-five hours, and averages 6.5 hours per semester. The leader is presented as a 'student of the subject.'" The use of trained SI leaders rather than professors, lecturers, PhDs, MDs or other credentialed experts allows the service to scale up quickly to large numbers of students and for a large variety of courses at the undergraduate, graduate, and professional-school levels. The SI model evolved during the 1970s and 1980s from its beginnings at a single "Student Learning Center" at the University of Missouri-Kansas City. Supplemental Instruction expanded globally in the 1990s: Higher-education institutions around the world have adopted some variant of the UMKC Supplementary Instruction model. Some attribute this widespread diffusion to the SI model and to its founders."Early on, SI’s founders decided that the SI model should be modified by its users rather than its creators. 
Martin and Blanc ... argue that SI should be 'fluid rather than rigid, dynamic rather than static.'"Hundreds of institutions around the world implement some variant of SI, PASS or SI-PASS. Hundreds of scholarly articles have studied and extended the Supplemental Instruction model since the 1970's. Supplemental Instruction programs are frequently in the news. Video Supplemental Instruction: With the widespread use of consumer video products in 1980s, Deanna Martin, Robert Blanc, and their colleagues applied video to Supplemental Instruction sessions for students who hadn't previously benefited from SI, such as student athletes. Video Supplemental Instruction (VSI) allows the SI leader to play back a lecture at a rate tailored to the particular group of students. "In VSI courses, instructors record their lectures on video tape and enroll students in a video section of the same course that they teach live on campus. For students in the VSI section, a trained facilitator [VSI leader] uses the taped lectures to regulate the flow of information to the learner. The lectures are stopped and started as needed, allowing the facilitator to verify that students have comprehended one idea before moving on to the next." In contrast to non-video Supplemental Instruction in which one lecture is always matched with one class, VSI starts with a video-recorded lecture that SI leaders then use to lead discussions in one or more SI classes, as was done for teaching basic sciences for medical board certification exams: Martin and Arendale subsequently reported that the "VSI method has been used with salutary effect by two dozen different medical schools and health-care institutions, preparing people to perform well on medical boards." For many learner, however, VSI offers advantages that SI lacks:"The foregoing should not be interpreted to suggest that SI is a one-size-fits-all solution to academic problems. Data suggest that the SI experience can move a student’s performance from below average to average, from average to above average, from above average to excellent. In the lower ranges of performance, it appears that participation in SI can elevate a student’s grade from sub-marginal to below average. At UMKC as at other Universities, however, practitioners have found that there are students for whom SI offers insufficient support. Typically, these students fall at or near the bottom of the fourth quartile in terms of entry level scores and/or high school rank. SI is not scheduled often enough, nor does it have sufficient structure, breadth, or depth to meet the needs of this population. On other campuses, these students would typically be tracked into developmental courses which, for UMKC, has never been an option."Thus, the population of students taking SI is stratified with some being unsuccessful in SI; "a more intense and sustained experience was needed for the least academically prepared students," namely, VSI. Philosophy: From its inception, the goal of SI has been equity in higher education: “Supplemental Instruction (SI) was created at the University of Missouri-Kansas City (UMKC) in 1973 as a response to a need at the institution created by a dramatic change in the demographics of the student body and a sudden rise in student attrition ... 
Gary Widmar, Chief Student Affairs Officer, hired Deanna Martin, a then-doctoral student in reading education, in 1972 to work on a $7,000 grant from the Greater Kansas City Association of Trusts and Foundation to solve the attrition problem among minority professional school students in medicine, pharmacy, and dentistry.” Also from the start, Martin's SI programs were driven by practical results: a successful SI program will show statistically significant drops in attrition in the classes that have SI services. Supplemental Instruction was also supposed to be cheap: what Martin and her collaborators sought was "an academic support service that would be both cost-effective and successful in reducing the high rates of student attrition." To be cost effective, typically "student SI leaders can be assigned to the program from the work study program." Thus, SI was founded to be a cheap and effective means to reduce attrition and thereby improve equity in higher-education graduation rates. Smith and MacGregor see Supplemental Instruction (PASS or SI-PASS) as a cooperative learning approach. Although the founders of Supplemental Instruction were influenced by Piaget and Karplus, as SI spread across the world, the founders encouraged adaptation to local circumstances and change when needed. Effectiveness: In the early 1990s, the U.S. Department of Education validated three specific claims about the effectiveness of SI: Students participating in SI within the targeted high-risk courses earn higher mean final course grades than students who do not participate in SI. This finding is still true when analyses control for ethnicity and prior academic achievement. Regardless of ethnicity and prior academic achievement, students participating in SI within targeted high-risk courses succeed at a higher rate (withdraw at a lower rate and receive a lower percentage of [fail] final course grades) than those who do not participate in SI. Students participating in SI persist at the institution (reenroll and graduate) at higher rates than students who do not participate in SI. A more recent review of all published SI research between 2001 and 2010 found studies in support of all three of these claims, and no studies contradicting them. Dissemination: The International Center for SI is located at the University of Missouri-Kansas City in Kansas City, Missouri within Academic Support and Mentoring, formerly the Center for Academic Development. The International Center for SI hosts and conducts regular trainings on the SI model and has trained individuals representing more than 1,500 institutions in 30 different countries. There are national centers for SI at the University of Wollongong, Australia; the University of Guelph, Canada; Nelson Mandela University, South Africa; and Lund University, Sweden. There is also a Regional Center for SI in northern South Africa at North-West University. SI may also be called PASS (peer assisted study sessions) and PAL (peer assisted learning). Each national center is responsible for supervising and training interested institutions in their region in the SI model. Conferences: Every two years, the International Center for SI hosts a conference where administrators, educators, SI leaders, and students gather to share new research and ideas that pertain to SI. Adaptations: The name "Supplemental Instruction" has been changed to better fit into other variations of the English language.
For example, "the University of Manchester engages students as partners in two established Peer Support programs: Peer Mentoring and Peer Assisted Study Sessions (PASS)," which is "Based on the Supplemental Instruction model." Criticisms: There has been criticism and debate concerning self-selection bias when measuring SI outcomes in non-experimental settings: A. R. Paloyo of the University of Wollongong noted that "we expect the selection-bias term to be nonzero, implying that the observed difference in, say, final marks is not equal to the effect of SI because it is contaminated by self-selection. Good final marks can be expected from motivated students, but motivation is also positively correlated with participation in SI."Based on their study of students enrolled in a Mathematics course at the Ethembeni Community College in Port Elizabeth, South Africa, Koch and Snyders concluded that a lecture that is adapted to the student may have at least as good outcomes as Video Supplementary Instruction; in the study, one adaptation was longer lecture times in the professor's class that matched the time spent in VSI; the authors conceded that VSI has at least one advantage: "Although experienced lecturers might be still preferable to VSI, these results may have positive implications for distance learning in the absence of enough experienced lecturing or teaching staff' especially in rural areas."
**Open Transport Network** Open Transport Network: Open Transport Network (OTN) is a flexible private communication network based on fiber optic technology, manufactured by OTN Systems. Open Transport Network: It is a networking technology used in vast private networks with a great diversity of communication requirements, such as subway systems, pipelines, the mining industry, tunnels and the like. It permits all kinds of applications, such as video images, different forms of speech and data traffic, and information for process management, to be sent flawlessly and transparently over a practically unlimited distance. The system is a mix of transmission and access network elements (NE) communicating over an optical fiber. The communication protocols include serial protocols (e.g. RS232) as well as telephony (POTS/ISDN), audio, Ethernet, video and video-over-IP (via M-JPEG, MPEG2/4, H.264 or DVB). Open Transport Network: Open Transport Network is a brand name and is not to be confused with Optical Transport Network. Concept: The basic building block of OTN is called a node. It is a 19" frame that houses and interconnects the building blocks that produce the OTN functionality. Core building blocks are the power supply and the optical ring adapter (called BORA: Broadband Optical Ring Adapter). The remaining node space can be configured with up to 8 (different) layer 1 interfaces as required. Concept: OTN nodes are interconnected using pluggable optical fibers in a dual counter-rotating ring topology. The primary ring consists of fibers carrying data from node to node in one direction; the secondary ring runs parallel with the primary ring but carries data in the opposite direction. Under normal circumstances, only one ring carries active data. If a failure is detected in this data path, the secondary ring is activated. This hot-standby topology results in 1+1 path redundancy. The switchover mechanism is hardware based and results in ultrafast (50 ms) switchover without service loss. Concept: Virtual bidirectional point-to-point or point-to-multipoint connections (services) between identical interfaces in different nodes are programmed via configuration software called OMS (OTN management system). By doing this, OTN mimics a physical wire harness interconnecting electronic data equipment, but with the added advantages typical of fiber transmission and with high reliability due to the intrinsically redundant concept. This concept makes the Open Transport Network the de facto transmission backbone standard for industrial high-reliability communication sites that require error-free communication for a large spectrum of protocols over long distances, such as pipelines, metros, rail, motorways and industrial sites. Concept: The optical rings transport frames with a bitrate of (approximately) 150 Mbit/s (STM-1/OC-3), 622 Mbit/s (STM-4/OC-12), 2.5 Gbit/s (STM-16/OC-48) or 10 Gbit/s (STM-64/OC-192). The frames are divided into 32 kb payload cells that carry the service data from source to destination. Via the OTN management system (OMS), as many cells as required by the service are allocated to connections. This bandwidth allocation is transferred to the non-volatile memory of the control boards of the nodes. As a result, the network is able to start up and work without the OMS connected or online.
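The hot-standby behaviour described above (only one ring active at a time, with a switchover when a failure is detected) can be sketched in a few lines. This is an illustration of the concept only; the class and method names are invented for the example and are not part of any OTN Systems software.

```python
# Illustrative sketch of the 1+1 hot-standby selection described above:
# both counter-rotating rings are physically present, but only one carries
# active traffic, and a detected failure triggers a switchover.
class DualRing:
    def __init__(self):
        self.active = "primary"      # under normal circumstances
        self.healthy = {"primary": True, "secondary": True}

    def report_failure(self, ring: str) -> None:
        self.healthy[ring] = False
        other = "secondary" if ring == "primary" else "primary"
        if ring == self.active and self.healthy[other]:
            # in the real system this is a hardware switchover (~50 ms)
            self.active = other

    def send(self, payload: bytes) -> str:
        return f"{len(payload)} bytes sent on {self.active} ring"

ring = DualRing()
print(ring.send(b"telemetry"))       # travels on the primary ring
ring.report_failure("primary")
print(ring.send(b"telemetry"))       # now travels on the secondary ring
```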
**Delivery Bar Code Sorter** Delivery Bar Code Sorter: A Delivery Bar Code Sorter (DBCS) is a mail sorting machine used primarily by the United States Postal Service. Introduced in 1990, these machines sort letters at a rate of approximately 36,000 pieces per hour, with a 99% accuracy rate. A computer scans the addresses of the mail, and sorts it to one of up to 286 pockets, setting it up for delivery by the letter carrier.
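A quick back-of-the-envelope calculation from the figures quoted above (36,000 letters per hour, 99% accuracy, up to 286 pockets) gives a feel for the machine's workload; the derived numbers below are purely illustrative.

```python
# Back-of-the-envelope figures derived from the numbers quoted above.
pieces_per_hour = 36_000
accuracy = 0.99
pockets = 286

print(f"throughput ≈ {pieces_per_hour / 3600:.0f} letters per second")
print(f"expected missorts ≈ {pieces_per_hour * (1 - accuracy):.0f} per hour at 99% accuracy")
print(f"even spread ≈ {pieces_per_hour / pockets:.0f} letters per pocket per hour")
```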
**Isobornyl acetate** Isobornyl acetate: Isobornyl acetate is an organic compound, the acetate ester of the terpenoid isoborneol. It is a colorless liquid with a pleasant pine-like scent, and it is produced on a multi-ton scale for this purpose. The compound is prepared by reaction of camphene with acetic acid in the presence of a strongly acidic catalyst such as sulfuric acid. Hydrolysis of isobornyl acetate gives isoborneol, a precursor to camphor. Like many plant exudates, isobornyl acetate appears to have antifeedant properties.
**BBS software for the TI-99/4A** BBS software for the TI-99/4A: There are several notable bulletin board systems (BBS) for the TI-99/4A home computer. Technology writer Ron Albright wrote of several BBS applications written for the TI-99/4A in the March 1985 article Touring The Boards in the monthly TI-99/4A magazine MICROpendium. While Albright's article references several notable bulletin board systems, it does not confirm what was the first BBS system written for the TI-99/4A. Zyolog BBS: The first commercially available BBS system written for the TI-99/A in 1983 by Dr. Bryan Wilcutt, DC.S, when he was 15 years old. The name Zyolog was a play on words from Zylog who made low end 8-bit chips and was the first processor type used by the author. The software was officially copyrighted in 1985. The Bulletin Board Software was written in a mixture of TI Extended BASIC and TI Assembly Language for the TMS9900 processor. The author ran the BBS system until moving to the Amiga platform in 1991. Over 200 Zyolog BBS systems existed world wide. TIBBS: One of the most popular BBS applications for the TI-99/4A in the early to mid 1980s was aptly named TIBBS (Texas Instruments Bulletin Board System). TIBBS was purported to be the first BBS written to run on the TI-99/4A microcomputer. Its author, Ralph Fowler of Atlanta, Georgia, began the program because he was told by TI's engineers that the machine was not powerful enough to support a BBS. Approximately 200 copies of the application were officially licensed by Fowler and many TIBBS systems popped up around the World. Operators ranged from teenagers to one sysop in Sacramento, California who was over 70 years old. After Texas Instruments ceased producing the 99/4A, its enthusiasts became even more supportive of each other and TIBBS continued into the late 1980s. Eventually Fowler made the program public domain and moved to a different PC platform. Phillip (P.J.) Holly's BBS: 12-year-old programmer Phillip (P.J.) Holly aired a BBS written in TI Extended Basic around late 1982 or early 1983 in the Northwest Chicago suburbs. His code was given to fellow BBS friends, and eventually used as a starting point for the Chicago TI-User's Group BBS, which later was coded in assembly language using TI's Editor Assembler. Holly wrote his BBS software on his own due to the lack of available BBS software options for the TI-99/4A. Months later, he discovered Mr. Fowler's TIBBS in Atlanta. SoftWorx: Houston, Texas based programmer Mark Shields wrote a BBS program called SoftWorx in the summer of 1983 which served his board The USS Enterprise. Shields' inspiration came after watching the motion picture WarGames. The application originally made outgoing calls in an attempt to locate other computers, and was eventually adapted to accept calls. The user interface was modeled directly on Nick Naimo's Networks II BBS software which had been written for the Apple 2. Shields used TI Extended BASIC as the basis for his application. No actual code from the Naimo's software was used, although the online experience to modem users at the time was comparable. Shields donated the application to the public domain and several sites briefly sprang up in the 1980s. TI-COMM: John Clulow gave away this program, whose unique feature was its use of a modified Volksmodem. A sysop could modify an inexpensive Volksmodem to add auto-answer and auto-dial capability, for $30 in parts. 
Because TI-COMM was written entirely in TI Extended Basic, it relied on PRINT and INPUT commands and the routines built-in to the RS232 DSR ROM. As such, it could only do line-oriented input and output. John Clulow was a prolific contributor to users' group newsletters. Techie: Monty Schmidt released Techie as freeware in 1985. It used many assembly language support routines, but still ran from TI Extended Basic in 40-column text mode. Techie featured multiple message boards. An online adventure game came with the initial release. Monty had been programming the TI-99/4A since 1981, and was a student at University of Wisconsin, Madison at the time of Techie's release. Monty Schmidt started the software company Sonic Foundry in 1991. TI-SUB (TI-Net BBS): Erik Olson wrote and marketed this BBS software in 1985, while in junior high school. Matt Storm operated the flagship bulletin board, The Panhandler, a reference to the region around Lubbock, Texas, USA. Lubbock was the birthplace of the TI-99/4A, but the 806 area code had not yet had a BBS running on one.TI-SUB, soon renamed TI-Net BBS, was written in TI Extended Basic, with assembly language support routines for RS232 communications and 40-column text mode. About 30 copies were distributed. A notable feature was a function key which allowed the sysop to enter or exit chat mode at any time (modeled after the chat function in ABBS.) Other keys could alter privilege levels or gracefully end the user's session. The sysop could modify any of the prompts used in the software by editing one text file. Screens, such as menus, were also loaded from simple text files. TI-Net BBS featured "Doors" or external programs that a caller could launch. The standard software distribution has a conversion of Sam Moore, Jr's adventure game SWORDS, and an original DUNGEON game with competition between players, written by Matt Storm. Greg McGill ported many more games to run with TI-Net.
**Soy boy** Soy boy: Soy boy is a pejorative term sometimes used in online communities to describe men perceived to be lacking masculine characteristics. The term bears many similarities and has been compared to the slang terms cuck (derived from cuckold), nu-male and low-T ("low testosterone") – terms sometimes used as an insult for male femininity by online communities.The term is based on the presence of the phytoestrogen isoflavone in soybeans, which has led some to claim that soy products feminize men who consume them, although there is no correlation between consumption of soy phytoestrogens and testosterone or estrogen levels or sperm quality. History: Soy products contain high amounts of phytoestrogens. As they are structurally similar to estradiol (the major female sex hormone) and have activity at the estrogen receptor, concerns have been raised that it may act as an endocrine disruptor that adversely affects health. While there is some evidence that phytoestrogens may affect male fertility, "further investigation is needed before a firm conclusion can be drawn". Several review studies have not found any effect of phytoestrogens on sperm quality or reproductive hormone levels. Usage: The term is often used as an epithet by internet trolls. It is often targeted at perceived social justice warriors, vegans, social liberals, and similar groups. The term has also been used in online debates about the fashion appeal of cargo shorts.Soy boys are often depicted as feminized and unathletic, usually with glasses and a poorly-groomed beard, and having a characteristic open-mouthed smile called a "soy face" or "soylent grin."After UFC Vegas 11 in September 2020, UFC fighter Colby Covington made disparaging reference to Nate Diaz's "soy boy diet"; Diaz is a vegan.
**K2-3b** K2-3b: K2-3b also known as EPIC 201367065 b is an exoplanet orbiting the red dwarf K2-3 every 10 days. It is the largest and most massive planet of the K2-3 system, with about 2.3 times the radius of Earth and almost 7 times the mass. Its density of about 3.0 g/cm3 indicates a composition of almost entirely water, or a hydrogen envelope comprising 0.7% of the planet's mass.
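The quoted bulk density can be sanity-checked from the stated radius and mass. The sketch below uses standard Earth reference values (not taken from the text) together with the rounded figures above.

```python
import math

# Consistency check of the quoted density using the figures above.
# Earth reference values are standard constants, not taken from the text.
M_EARTH = 5.972e24     # kg
R_EARTH = 6.371e6      # m

mass = 7.0 * M_EARTH          # "almost 7 times the mass" (upper end)
radius = 2.3 * R_EARTH        # "about 2.3 times the radius of Earth"

volume = (4.0 / 3.0) * math.pi * radius**3
density = mass / volume       # kg/m^3

print(f"bulk density ≈ {density / 1000:.1f} g/cm^3")
# ≈ 3.2 g/cm^3 with these rounded inputs, consistent with the quoted ~3.0 g/cm^3
```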
**Teriparatide** Teriparatide: Teriparatide, sold under the brand name Forteo, is a form of parathyroid hormone (PTH) consisting of the first (N-terminus) 34 amino acids, which is the bioactive portion of the hormone. It is an effective anabolic (promoting bone formation) agent used in the treatment of some forms of osteoporosis. Teriparatide is a recombinant human parathyroid hormone analog (PTH 1-34). It has an identical sequence to the 34 N-terminal amino acids of the 84-amino acid human parathyroid hormone. Medical uses: Teriparatide is indicated for the treatment of postmenopausal women with osteoporosis; for the increase of bone mass in men with primary or hypogonadal osteoporosis; and treatment of men and women with osteoporosis associated with sustained systemic glucocorticoid therapy.It is effective in growing bone (e.g., 8% increase in bone density in the spine after one year) and reducing the risk of fragility fractures.Teriparatide cuts the risk of hip fracture by more than half but does not reduce the risk of arm or wrist fracture. Contraindications: Teriparatide is contraindicated for those with open epiphyses, metabolic bone diseases, Paget's Disease of bone, bone metastases, history of skeletal malignancies, or prior external beam or implant radiation therapy involving the skeleton. In the animal studies and in one human case report, it was found to potentially be associated with developing osteosarcoma in test subjects after over two years of use. Adverse effects: Adverse effects of teriparatide include headache, nausea, dizziness, and limb pain. Teriparatide has a theoretical risk of osteosarcoma, which was found in rat studies but not confirmed in humans. This may be because, unlike humans, rat bones grow for their entire life. The tumors found in the rat studies were located on the end of the bones which grew after the injections began. After nine years on the market, there were only two cases of osteosarcoma reported. This risk was considered by the FDA as "extremely rare" (1 in 100,000 people) and is only slightly more than the incidence in the population over 60 years old (0.4 in 100,000). Mechanism of action: Teriparatide is a portion of human parathyroid hormone (PTH), amino acid sequence 1 through 34, of the complete molecule (containing 84 amino acids). Endogenous PTH is the primary regulator of calcium and phosphate metabolism in bone and kidney. PTH increases serum calcium, partially accomplishing this by increasing bone resorption. Thus, chronically elevated PTH will deplete bone stores. However, intermittent exposure to PTH will activate osteoblasts more than osteoclasts. Thus, once-daily injections of teriparatide have a net effect of stimulating new bone formation leading to increased bone mineral density. Society and culture: Legal status Teriparatide was approved for medical use in the United States in 1987. Teriparatide (Forteo) was approved by the FDA in November 2002, for the treatment of osteoporosis in men and postmenopausal women who are at high risk for having a fracture. In October 2019, the US FDA approved the recombinant teriparatide product with brand name Bonsity. Society and culture: Biosimilars Recombinant teriparatide is sold by Eli Lilly and Company under the brand names Forteo and Forsteo. In June 2020, Alvogen, Inc, Pfenex Inc.'s commercialization partner, launched teriparatide injection (Bonsity) in the United States. 
Teriparatide injection was developed by Pfenex Inc and approved by the US Food and Drug Administration (FDA) in October 2019. Teriparatide injection is pharmaceutically equivalent to Forteo (that is, it has the same active ingredient in the same strength, dosage form and route of administration) and has been shown to have comparable bioavailability. These characteristics allowed the product to be approved under a 505(b)(2) NDA for which Forteo was the reference drug. It may provide a lower-cost teriparatide option for increasing bone density in patients at high risk for fracture, and is FDA-approved for the same indications as Forteo, which means it can be used for the same patients as Forteo, including new patients and those currently responding to treatment. Teriparatide was approved for medical use in the European Union in June 2003. A synthetic teriparatide from Teva Generics has been authorized for marketing in the European Union. A biosimilar product from Gedeon Richter plc has been authorized in the European Union. In October 2019, the US FDA approved a recombinant teriparatide product. In June 2020, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) recommended the approval of the biosimilar products Qutavina and Livogiva. Qutavina and Livogiva were approved for medical use in the European Union in August 2020. Osnuvo was approved for medical use in Canada in January 2020. Sondelbay was approved for medical use in the European Union in March 2022. On 10 November 2022, the Committee for Medicinal Products for Human Use (CHMP) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Kauliv, intended for the treatment of osteoporosis. The applicant for this medicinal product is Strides Pharma Cyprus. Kauliv was approved for medical use in the European Union in February 2023. Research: Teriparatide is undergoing a clinical trial with zoledronic acid as a treatment for osteogenesis imperfecta to reduce the risk of broken bones. Research: Combined teriparatide and denosumab Combined teriparatide and denosumab increased BMD more than either agent alone and more than has been reported with approved therapies. Combination treatment might, therefore, be useful to treat patients at high risk of fracture by increasing BMD. However, there is no evidence of fracture rate reduction in patients taking a teriparatide and denosumab combination. The first such trial was published by Leder et al. in The Lancet in 2013, with further data subsequently published in JCEM from a trial of postmenopausal osteoporotic women demonstrating larger bone mineral density increases in the spine and hip with combination therapy compared to either drug alone.
**Coco (robot)** Coco (robot): Coco is the latest platform at the Massachusetts Institute of Technology's Humanoid Robotics Group, and a successor to Cog. Unlike previous platforms, Coco is built along more ape-like lines, rather than human. Coco is also notable for being mobile. Although there is ongoing research on the robot, the group has many robots dealing with human interactions. The Humanoid Robotics Group has planned to add more useful functions in the future, but have not set an exact date for such project. Humanoid Robotics Group Mission: The mission of the Humanoid Robotics Group is to create a robot that can interact with humans and objects without being dependent on a caretaker. Coco should be able to investigate environments and be able to discover important outlooks of the world. Using multiple sensors, Coco should be conducive to human interaction. Interactions with humans include: reacting to others' emotions showing empathy non-aggressive social behavior independence Physical: All the following dimensions of Coco are in millimeters: length of the head is 165 width of the head is 140 from left shoulder Y-axis to right shoulder Y-axis is 252 from shoulder Y-axis to shoulder X-axis is 58 from hip to hip is 269 from hip to shoulder 292 forearm is 156 upper arm is 154 upper leg is 65 lower leg is 45Coco's appearance is ape-like, which coincides with early evolutionary behaviors. It has broad shoulders, short legs, and long arms made of carbon fiber. The robot's color is all black except for the head which is clear and has two colored eyes with cameras that indicate objects near it. The cords connecting the back of the head to the body are used for transmitting codes for movements and reactions. Physical: Coco is a fifteen DOF (degrees of freedom) quadruped with gorilla-like proportions. DOF is the number of independent conditions that define Coco's arrangement. The DOF are located all throughout the robot. There are two DOF per hind leg, one on the hip, another at the knee, three in each front limb, two on the shoulder, and one on the elbow. The head has an additional five DOF for the movement of the object. Coco can change postures and its vestibular system allows it to have its eye ground level to see objects in a small radius. It has a high speed serial cable that links the robot to the main controller. Physical: The controlling method is called torque-position control, which is the force applied to a lever in a rotation. The method most similar to the torque control is the Series Elastic Actuators, "springs that are intentionally placed in series between the motor and actuator output to have a constant force" but that method powers the elastic element. Most of the above methods are useful but the least useful is the elastic element. Uses: As of right now, Coco is controlled through many sensors to walk and be aware of the objects in its perimeter. For future uses, Coco will be able to be aware of others emotions and produce a reaction. Coco will also be able to help different types of learning and interact with humans or objects that need its help. Future Work: The aim for the Humanoid Robotics Group is for Coco to have many human-like experiences through common sense, emotions, and visuals. 
The Humanoid Robotics Group would still like to contribute more work to Coco such as: providing the robot with high level functions to develop interactive behaviors, providing aid for some types of learning, providing an improvement in the force control, and providing hand-eye coordination. Some time in the near future Coco should be able to be aware of its own body, have flexible limb dynamics, and be able to interact with human without it being controlled. Related Robots from the HRG: The links below are websites to robots that the Humanoid Robotics Group has been involved with. These projects are similar to Coco but have different body structures and postures. Cog [1]-motor dynamics that are similar to humans Kismet [2]-human communication skills Macaco [3]-reacts to its surrounding Retired Robots include: Wheelesley Pebbles [4] Boadicea [5] Polly Modots [6] Ants [7] Hannibal and Attila [8] Genghis [9]
**Obidos (software)** Obidos (software): Obidos was the name used by Amazon.com for their original page rendering engine, and appears in many of their URLs such as https://www.amazon.com/exec/obidos/ASIN/0596515162. Obidos was phased out in 2006 and replaced by the Gurupa engine. Amazon.com subsequently used the name for their building at 551 Boren Ave N, Seattle, WA 98109, United States.It was named after the town of Óbidos in Brazil near the swiftest point on the Amazon River, which is in turn named after the town of Óbidos, Portugal.
**Benign hypertension** Benign hypertension: Benign hypertension or benign essential hypertension are historical terms that are considered misleading, as hypertension is never benign, and consequently they have fallen out of use (see history of hypertension). The terminology persisted in the International Classification of Disease (ICD9), but is not included in the current ICD10.
**Semtex** Semtex: Semtex is a general-purpose plastic explosive containing RDX and PETN. It is used in commercial blasting, demolition, and in certain military applications. Semtex was developed and manufactured in Czechoslovakia, originally under the name B 1 and then under the "Semtex" designation since 1964, labeled as SEMTEX 1A, since 1967 as SEMTEX H, and since 1987 as SEMTEX 10. Originally developed for Czechoslovak military use and export, Semtex eventually became popular with paramilitary groups and rebels or terrorists because prior to 2000 it was extremely difficult to detect, as in the case of Pan Am Flight 103. Composition: The composition of the two most common variants differ according to their use. The 1A (or 10) variant is used for mining, and is based mostly on crystalline PETN. The versions 1AP and 2P are formed as hexagonal booster charges; a special assembly of PETN and wax inside the charge assures high reliability for detonating cord or detonator. The H (or SE) variant is intended for explosion hardening. History: Semtex was invented in the late 1950s by Stanislav Brebera and Radim Fukátko, chemists at VCHZ Synthesia, Czechoslovakia (now Czech Republic). The explosive is named after Semtín, a suburb of Pardubice where the mixture was first manufactured starting in 1964. The plant was later renamed to become Explosia a.s., a subsidiary of Synthesia.Semtex was very similar to other plastic explosives, especially C-4, in being highly malleable; but it is usable over a greater temperature range than other plastic explosives, since it stays plastic between −40 and +60 °C. It is also waterproof. There are visual differences between Semtex and other plastic explosives, too: while C-4 is off-white in colour, Semtex is red or brick-orange. History: The new explosive was widely exported, notably to the government of North Vietnam, which received 14 tons during the Vietnam War. However, the main consumer was Libya; about 700 tons of Semtex were exported to Libya between 1975 and 1981 by Omnipol. It has also been used by Islamic militants in the Middle East and by the Provisional Irish Republican Army (PIRA) and the Irish National Liberation Army in Northern Ireland.Sales declined after Semtex became closely associated with terrorist attacks. Rules governing the explosive's export were progressively tightened over the years, and since 2002 all of Explosia's trading has been controlled by a government ministry. As of 2001, only approximately 10 tons of Semtex were produced annually, almost all for domestic use. On 21 December 1988, 340 g (12 ounces) of Semtex brought down a Boeing 747 over Lockerbie, Scotland killing all 259 passengers and crew aboard the aircraft and 11 bystanders on the ground. History: Also in response to international agreements, Semtex has a detection taggant added to produce a distinctive vapor signature to aid detection. First, ethylene glycol dinitrate was used, but was later switched to 2,3-dimethyl-2,3-dinitrobutane (DMDNB) or p-mononitrotoluene (1-methyl-4-nitrobenzene), which is used currently. According to the manufacturer, the taggant agent was voluntarily being added by 1991, years before the protocol became compulsory. Batches of Semtex made before 1990, however, are untagged, though it is not known whether there are still major stocks of such old batches of Semtex. According to the manufacturer, even this untagged Semtex can now be detected. The shelf life of Semtex was reduced from ten years before the 1990s to five years now. 
Explosia states that there is no compulsory tagging allowing reliable post-detonation detection of a certain plastic explosive (such as incorporating a unique metallic code into the mass of the explosive), so Semtex is not tagged in this way.On 25 May 1997, Bohumil Šole, a scientist who claimed to have been involved with inventing Semtex, committed suicide at a spa in Jeseník by blowing himself up with explosives. Šole, 63, was being treated there for psychological problems. It was unclear what explosives were used. Twenty other people were hurt in the explosion, while six were seriously injured. According to the manufacturer, Explosia, he was not a member of the team that developed the explosive in the 1960s.According to the producer's 2017 catalog, several variants of Semtex are offered: Semtex 1A, Semtex 1H, Semtex 10, Semtex 10-SE, Semtex S 30, Semtex C-4, Semtex PW 4, and Semtex 90.
**Born–Infeld model** Born–Infeld model: In theoretical physics, the Born–Infeld model is a particular example of what is usually known as nonlinear electrodynamics. It was historically introduced in the 1930s to remove the divergence of the electron's self-energy in classical electrodynamics by introducing an upper bound on the electric field at the origin. Overview: Born–Infeld electrodynamics is named after physicists Max Born and Leopold Infeld, who first proposed it. The model possesses a whole series of physically interesting properties. In analogy to a relativistic limit on velocity, Born–Infeld theory proposes a limiting force via a limited electric field strength. A maximum electric field strength produces a finite electric field self-energy which, when attributed entirely to the mass of the electron, yields a maximum field of about $1.187 \times 10^{20}$ V/m. Born–Infeld electrodynamics displays good physical properties concerning wave propagation, such as the absence of shock waves and birefringence. A field theory showing this property is usually called completely exceptional, and Born–Infeld theory is the only completely exceptional regular nonlinear electrodynamics. This theory can be seen as a covariant generalization of Mie's theory, and it is very close to Albert Einstein's idea of introducing a nonsymmetric metric tensor, with the symmetric part corresponding to the usual metric tensor and the antisymmetric part to the electromagnetic field tensor. Overview: The compatibility of Born–Infeld theory with high-precision atomic experimental data requires a value of the limiting field some 200 times higher than that introduced in the original formulation of the theory. Since 1985 there has been a revival of interest in Born–Infeld theory and its nonabelian extensions, as they were found in some limits of string theory. It was discovered by E.S. Fradkin and A.A. Tseytlin that the Born–Infeld action is the leading term in the low-energy effective action of open string theory expanded in powers of derivatives of the gauge field strength. Equations: We will use the relativistic notation here, as this theory is fully relativistic. Equations: The Lagrangian density is $\mathcal{L} = -b^2 \sqrt{-\det\left(\eta + \frac{F}{b}\right)} + b^2$, where $\eta$ is the Minkowski metric, $F$ is the Faraday tensor (both are treated as square matrices, so that we can take the determinant of their sum), and $b$ is a scale parameter. The maximal possible value of the electric field in this theory is $b$, and the self-energy of point charges is finite. For electric and magnetic fields much smaller than $b$, the theory reduces to Maxwell electrodynamics. Equations: In 4-dimensional spacetime the Lagrangian can be written as $\mathcal{L} = -b^2 \sqrt{1 - \frac{\mathbf{E}^2 - \mathbf{B}^2}{b^2} - \frac{(\mathbf{E} \cdot \mathbf{B})^2}{b^4}} + b^2$, where $\mathbf{E}$ is the electric field and $\mathbf{B}$ is the magnetic field. In string theory, gauge fields on a D-brane (that arise from attached open strings) are described by the same type of Lagrangian: $\mathcal{L} = -T \sqrt{-\det\left(\eta + 2\pi\alpha' F\right)}$, where $T$ is the tension of the D-brane and $2\pi\alpha'$ is the inverse of the string tension.
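The statement that the theory reduces to Maxwell electrodynamics for fields much smaller than b can be made explicit with a weak-field expansion of the 4-dimensional Lagrangian given above. The following is a sketch in units where the Maxwell Lagrangian is (E^2 - B^2)/2; it simply expands the square root to second order.

```latex
% Weak-field expansion of the Born--Infeld Lagrangian (sketch).
% Write x = (E^2 - B^2)/b^2 + (E \cdot B)^2 / b^4 and use
% \sqrt{1 - x} \approx 1 - x/2 - x^2/8 for |x| \ll 1:
\mathcal{L}
  = b^2 \left( 1 - \sqrt{\, 1 - \frac{\mathbf{E}^2 - \mathbf{B}^2}{b^2}
                            - \frac{(\mathbf{E} \cdot \mathbf{B})^2}{b^4} \,} \right)
  \approx \underbrace{\frac{\mathbf{E}^2 - \mathbf{B}^2}{2}}_{\text{Maxwell term}}
        + \frac{(\mathbf{E} \cdot \mathbf{B})^2}{2 b^2}
        + \frac{(\mathbf{E}^2 - \mathbf{B}^2)^2}{8 b^2}
        + \mathcal{O}\!\left(b^{-4}\right).
```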
**Non-representational theory** Non-representational theory: Non-representational theory is a theoretical approach within human geography, developed principally by Nigel Thrift (Warwick University). The theory draws on social theory, geographical research, and the 'embodied experience.' Definition: Instead of studying and representing social relationships, non-representational theory focuses upon practices – how human and nonhuman formations are enacted or performed – not simply on what is produced. "First, it valorizes those processes that operate before … conscious, reflective thought … [and] second, it insists on the necessity of not prioritizing representations as the primary epistemological vehicles through which knowledge is extracted from the world". Recent studies have examined a wide range of activities including dance, musical performance, walking, gardening, rave, listening to music and children's play. Post-structuralist origins: This is a post-structuralist theory inspired in part by the ideas of the physicist-philosopher Niels Bohr, by thinkers such as Michel Foucault, Gilles Deleuze, Félix Guattari, Bruno Latour, Michel Serres and Karen Barad, and by phenomenologists such as Martin Heidegger and Maurice Merleau-Ponty. More recently it has considered views from political science (including ideas about radical democracy) and anthropological discussions of the material dimensions of human life. It parallels the conception of "hybrid geographies" developed by Sarah Whatmore. Criticism: Critics have suggested that Thrift's use of the term "non-representational theory" is problematic, and that other non-representational theories could be developed. Richard G. Smith said that Baudrillard's work could be considered a "non-representational theory", for example, which has fostered some debate. In 2005, Hayden Lorimer (Glasgow University) said that the term "more-than-representational" was preferable.
**Metroid Fusion** Metroid Fusion: Metroid Fusion is an action-adventure game developed and published by Nintendo for the Game Boy Advance in 2002. It was developed by Nintendo Research & Development 1, which had developed the previous Metroid game, Super Metroid (1994). Players control the bounty hunter Samus Aran, who investigates a space station infected with shapeshifting parasites known as X. Metroid Fusion: Like previous Metroid games, Fusion is a side-scrolling game with platform jumping, shooting, and puzzle elements. It introduces mission-based progression that guides the player through certain areas. It was released the day before the GameCube game Metroid Prime in North America; both games can be linked using the GameCube – Game Boy Advance link cable to unlock additional content for Prime. Metroid Fusion: Fusion was acclaimed for its gameplay, controls, graphics and music, though its shorter length and greater linearity received some criticism. It received several awards, including "Handheld Game of the Year" from the Academy of Interactive Arts & Sciences, "Best Game Boy Advance Adventure Game" from IGN, and "Best Action Game on Game Boy Advance" from GameSpot. It was rereleased on the Nintendo 3DS's Virtual Console in 2011 as part of the 3DS Ambassador Program, the Wii U's Virtual Console in 2014, and the Nintendo Switch Online + Expansion Pack in 2023. A sequel, Metroid Dread, was released in 2021 for the Nintendo Switch. Gameplay: Metroid Fusion is an action-adventure game in which the player controls Samus Aran. Like previous games in the series, Fusion is set in a large open-ended world with elevators that connect regions, which each in turn contains rooms separated by doors. Samus opens most doors by shooting at them, while some only open after she reaches a certain point. Fusion is more linear than other Metroid games due to its focus on storyline; for example, Fusion introduces Navigation Rooms, which tell the player where to go.The gameplay involves solving puzzles to uncover secrets, platform jumping, shooting enemies, and searching for power-ups that allow Samus to reach new areas. Samus can absorb X Parasites, which restore health, missiles, and bombs. Power-ups are obtained by downloading them in Data Rooms or absorbing a Core-X, which appears after defeating a boss. New features include the ability to grab ledges and climb ladders.The player can use the GameCube – Game Boy Advance link cable to connect to Fusion and unlock features in Prime: after completing Prime, they can unlock Samus's Fusion Suit, and after completing Fusion, they can unlock an emulated version of the first Metroid game. In Metroid: Zero Mission (2004), players can connect to Fusion using the Game Boy Advance Game Link Cable to unlock a Fusion picture gallery, which includes its ending images. Plot: Bounty hunter Samus Aran explores the surface of the planet SR388 with a survey crew from Biologic Space Laboratories (BSL). She is attacked by parasitic organisms known as X. On returning to the BSL station, Samus loses consciousness, and her ship crashes. The BSL ship she was escorting recovers her body and transfers it to the Galactic Federation for medical treatment, who discover that the X has infected Samus' central nervous system. They cure her with a vaccine made from cells taken from the infant Metroid that Samus adopted on SR388.: 88  The vaccine gives her the ability to absorb the X nuclei for nourishment,: 8  but burdens her with the Metroids' vulnerability to cold. 
Portions of Samus's infected Power Suit is sent to the BSL station for examination, although the entire suit was too integrated with her body to remove during surgery.When Samus recovers consciousness, she discovers an explosion occurred at the BSL station. She is sent to investigate. The mission is overseen by her new gunship's computer, whom Samus nicknames "Adam" after her former commanding officer, Adam Malkovich.: 13  Samus learns that the X parasites can replicate their hosts' physical appearances, and that the X have infected the station with the help of the "SA-X", an X parasite mimicking Samus at full power.Samus avoids the SA-X and explores the space station,: 98, 107  defeating larger creatures infected by the X to recover her abilities. She discovers a restricted lab containing Metroids, and the SA-X sets off the labs' auto-destruct sequence while also attacking the released Metroids, who also devour the SA-X. Samus escapes, but the lab is destroyed.: 135–136  The computer berates Samus for ignoring orders and admits that the Federation was secretly using the lab to breed Metroids. It also reveals that the SA-X has asexually reproduced, subsequently cloning itself. The computer advises Samus to leave the station.On her way to her ship, the computer orders Samus to leave the rest of the investigation to the Federation, which plans to capture SA-X for military purposes. Knowing that the X would only infect the arriving Federation troops and absorb their spacefaring knowledge to conquer the universe, Samus states her intention to destroy the station. Although the computer initially intends to stop Samus, she calls it "Adam", and reveals that Adam died saving her life. The computer suggests that she should alter the station's propulsion to intercept with SR388 and destroy the planet along with all X populations. Samus realizes that the computer is the consciousness of Adam, uploaded after death. En route to initiate the propulsion sequence, Samus confronts an SA-X, defeats it, and sets the BSL station on a collision course with SR388. As Samus prepares to leave, she is attacked by an Omega Metroid. The SA-X appears and attacks it, but is destroyed; Samus absorbs its nucleus and uses her newly restored Ice Beam to destroy the Omega Metroid.: 141–143  Her ship arrives, piloted by creatures Samus rescued from the station's Habitation Deck. They escape before the station crashes into the planet, destroying it. Development: Nintendo confirmed a Metroid game for the Game Boy Advance in March 2001. Ken Lobb, Nintendo of America's director of game development, said that it is a new game and not a port of the 1994 Super NES game Super Metroid. Early footage was shown at the 2001 E3 convention under the name Metroid IV. The footage showed Samus in a dark suit, running on walls and ceilings, with simpler, more "Game Boy Color-like" graphics. At E3 2002, Nintendo demonstrated the game again, now under the title Metroid Fusion, with updated graphics. IGN awarded Metroid Fusion Best of Show and Best Action Game.Metroid Fusion was developed by Nintendo Research & Development 1 (R&D1), the same team that created Super Metroid. Fusion's gameplay, screen layout, and controls mimic those of Super Metroid, with enhancements. Metroid Fusion is the first 2D Metroid game with animated cutscenes; the story is revealed through text and close-ups. 
It was written and directed by series designer Yoshio Sakamoto, and produced by Takehiro Izushi.Sakamoto decided to create an original story instead of remaking a Metroid game because he wanted to do "something really unprecedented", and looked forward to the response. Fusion introduces new gameplay mechanics, such as a more direct, almost mission-based structure that supports the player to explore areas. Objectives are also flexible in how they can be completed, acting "more as a guide for what the player should do instead of giving a completely blank map and saying 'Here you go, figure out what to do and how to do it'".According to the lead programmer, Katsuya Yamano, Nintendo R&D1 did not consult previous Metroid games for programming techniques, and instead used their previous game Wario Land 4 as a reference. Samus's suit design was revamped; the canonical explanation is that this was because an X Parasite had attacked Samus and made her lose all her abilities. Missiles were expanded with two "upgrades", much like the various beam upgrades: the Ice Missile which has a similar effect to the Ice Beam, and the Diffusion Missile which greatly increases the blast radius. Other minor abilities were added to Fusion, such as climbing walls and ceilings. The health and missile drops are replaced by X Parasites that are similarly released after defeating enemies.The music was composed by Minako Hamano and Akira Fujiwara. According to Hamano, Sakamoto wanted her to create music in accordance with Adam's dialogue. Hamano aimed for "serious, ambient music rather than melody" because she did not want the exploration themes to be "annoying". She also rearranged jingles from Super Metroid for Fusion. As Nintendo of America wanted the developers to look for "Hollywood-like" voice actors, Hamano added a voice of an announcer. The developers planned to feature voice acting, but the voices were only used for warning announcements due to ROM cartridge limitations. Release: Metroid Fusion was released in North America on November 18, 2002. Fusion can be connected to Metroid Prime for the GameCube, a Metroid game that was released on the same day as Fusion. In Europe, Fusion was released on November 22, followed by the Australian release on November 29. It was released in Japan on February 14, 2003, and in China on March 2, 2006.A two-disc soundtrack album, Metroid Prime & Fusion Original Soundtracks, was published by Scitron on June 18, 2003. The second disc contains tracks from Fusion, along with an additional track arranged by Shinji Hosoe.Metroid Fusion was released on the Nintendo 3DS Virtual Console in December 2011 as part of the "3DS Ambassadors" program, one of ten Game Boy Advance games for those who purchased their 3DS consoles before a price drop. Metroid Fusion was among the first three Game Boy Advance games to be released on the Wii U Virtual Console in April 2014. It was released on the Nintendo Switch Online + Expansion Pack service in March 2023. A sequel, Metroid Dread, was released in 2021 for the Nintendo Switch, developed by Nintendo and MercurySteam. Reception: Metroid Fusion received "universal acclaim" according to review aggregator Metacritic. The Japanese magazine Famitsu gave it 34 out of 40. X-Play said it was a "pleasure to play", and praised its "beautiful" graphics and audio. IGN praised it as an "outstanding achievement on the Game Boy Advance". 
GamesRadar and GamePro felt that Fusion was too short, but "love[d] every minute of it", finding the hidden secrets and new power-ups "sublimely ingenious". GameSpot was disappointed that the game ended so soon, but said that Metroid fans would enjoy it. Nintendo World Report and Eurogamer called it the best 2D Metroid game and the best Game Boy Advance game so far. Game Informer agreed, describing it as "everything you could want from a Game Boy Advance game" from beginning to end, giving it a perfect review score. Play described it as a "magnified, modified, and improved" version of everything great from Metroid and Super Metroid.GameSpot thought that Metroid Fusion offered Super Metroid's best qualities packaged in a new adventure. Nintendo Power heralded it as a return to the classic Metroid action gameplay. The "perfect" controls were praised by Electronic Gaming Monthly. Fusion did not feel new to GameSpy, which complained that even the final enemy encounter draws heavy inspiration from Super Metroid. GameZone found that the small screen of the Game Boy Advance was a poor environment in which to play Metroid Fusion, but they found it an exciting game.Metroid Fusion received several accolades. It was awarded "Handheld Game of the Year" by the Academy of Interactive Arts & Sciences at the 6th Annual Interactive Achievement Awards. It was also chosen as "Best Game Boy Advance Adventure Game" by IGN and "Best Action Game on Game Boy Advance" by GameSpot, which had named it the handheld's best game of November 2002 earlier in the year. It was a runner-up for GameSpot's annual "Best Sound", "Best Graphics", "Best Story" and overall "Game of the Year" awards among Game Boy Advance games. In 2009, Official Nintendo Magazine called Fusion "sleek, slick and perfectly formed", ranking it the 62nd-best Nintendo game. Reception: Sales Metroid Fusion has sold over 1.6 million units worldwide. In its debut week, Fusion sold more than 100,000 units in North America. It finished the month of November 2002 with 199,723 copies sold in the United States alone, for total revenues of US$5,590,768, making it the third best-selling Game Boy Advance game that month, and the tenth best-selling game across all platforms. It sold 940,000 copies by August 2006, with revenues of US$27 million. During the period between January 2000 and August 2006, in the United States it was the twenty-first highest-selling game for the Game Boy Advance, Nintendo DS or PlayStation Portable. As of November 2004, Fusion had sold 180,000 units in Japan.