**Vermifilter**
A vermifilter (also vermi-digester or lumbrifilter) is an aerobic treatment system, consisting of a biological reactor containing media that filters organic material from wastewater. The media also provides a habitat for aerobic bacteria and composting earthworms that purify the wastewater by removing pathogens and oxygen demand. The "trickling action" of the wastewater through the media dissolves oxygen into the wastewater, ensuring the treatment environment is aerobic for rapid decomposition of organic substances.
Vermifilters are most commonly used for sewage treatment and for agro-industrial wastewater treatment. Vermifilters can be used for primary, secondary and tertiary treatment of sewage, including blackwater and greywater in on-site systems and municipal wastewater in large centralised systems. Vermifilters are used where wastewater requires treatment before being safely discharged into the environment. Treated effluent is disposed of to either surface or subsurface leach fields. Solid material (such as fecal matter and toilet paper) is retained, de-watered and digested by bacteria and earthworms into humus that is integrated into the filtration media. The liquid passes through the filtration media where the attached aerobic microorganisms biodegrade pathogens and other organic compounds, resulting in treated wastewater.
Vermifiltration is a low-cost aerobic wastewater treatment option. Because energy is not required for aeration, vermifilters can be considered "passive treatment" systems (pumps may be required if gravity flow is not possible). Another advantage is the high treatment efficiency given the low space requirement.
Terminology:
Alternative terms used to describe the vermifiltration process include aerobic biodigester, biological filter with earthworms, or wet vermicomposting. The treatment system may be described using terms such as vermi-digester and vermi-trickling filter.
When this kind of sanitation system is used to treat only the mixture of excreta and water from flush toilets or pour-flush toilets (called blackwater) then the term "toilet" is added to the name of the process, such as vermifilter toilet.
Overview:
Vermifiltration was first advocated by researchers at the University of Chile in 1992 as a low-cost sustainable technology suitable for decentralised sewage treatment in rural areas. Vermifilters offer treatment performance similar to conventional decentralised wastewater treatment systems, but with potentially higher hydraulic processing capacities.

Vermifilters are a type of wastewater treatment biofilter or trickling filter, but with the addition of earthworms to improve treatment efficiency. Vermifilters provide an aerobic environment and wet substrate that facilitates microorganism growth as a biofilm. Microorganisms perform biochemical degradation of organic matter present in wastewater. Earthworms regulate microbial biomass and activity by directly and/or indirectly grazing on microorganisms. Biofilm and organic matter consumed by composting earthworms are then digested into biologically inert castings (humus). The vermicast is incorporated into the media substrate, slowly increasing its volume. When this builds up, it can be removed and applied to soil as an amendment to improve soil fertility and structure.
Microorganisms present are heterotrophic and autotrophic. Heterotrophic microorganisms are important in oxidising carbon (decomposition), whereas autotrophic microorganisms are important in nitrification.
As a result of oxidation reactions, biodegradation and microbial stimulation by enzymatic action, organic matter decomposition and pathogen destruction occurs in the vermifilter. In a study where municipal wastewater was treated in a vermifilter, removal ratios for biochemical oxygen demand (BOD5) were 90%, chemical oxygen demand (COD) 85%, total suspended solids (TSS) 98%, ammonia nitrogen 75% and fecal coliforms eliminated to a level that meets World Health Organisation guidelines for safe re-use in crops.
Process types:
Vermifilters can be used for primary, secondary and tertiary treatment of blackwater and greywater.
Primary treatment of blackwater:
Vermifilters can be used for aerobic primary treatment of domestic blackwater. Untreated blackwater enters a ventilated enclosure above a bed of filter medium. Solids accumulate on the surface of the filter bed while liquid drains through the filter medium and is discharged from the reactor. The solids (feces and toilet paper) are aerobically digested by aerobic bacteria and composting earthworms into castings (humus), thereby significantly reducing the volume of organic material.
Primary treatment vermifilter reactors are designed to digest solid material, such as that contained in raw sewage. Twin-chamber parallel reactors offer the advantage of allowing one chamber to rest while the other is active, in order to facilitate hygienic removal of humus with reduced pathogen levels.
Worms actively digest the solid organic material. Over time, an equilibrium is reached in which the volume digested by a stable population of worms is equal to the input volume of solid waste. Seasonal and environmental factors (such as temperature) and variable influent volumes can cause solid waste to build up as a pile. Although oxygen is excluded from the centre of this "wet" compost pile, worms work from the outside in and introduce air as necessary into the pile to meet their nutritional requirements. This food resource buffer gives primary treatment vermifilters a level of resilience and reliability, provided space is allowed for a pile to build up. There is some evidence that the wet environment facilitates digestion of solid waste by worms. The volume of vermicast humus increases only slowly and occasionally needs to be removed from the primary treatment reactor.
Primary treatment of wet mixed blackwater can also include greywater containing food solids, grease and other biodegradable waste. Solid material is reduced to stable humus (worm castings), with volume reductions of up to tenfold.

The process produces primary treated blackwater, with much of the solid organic material removed from the effluent. Because liquid effluent is discharged almost immediately on entering the digester, little dissolved oxygen is consumed by the wastewater through the filtration stage. However, oxygen demand is leached into the wastewater flow through the filter as worms digest the retained solids. This oxygen demand can be removed with secondary treatment vermifilter reactors. Primary treatment vermifilters provide a similar level of liquid effluent treatment to a septic tank, but in less time, because digestion of solids by worms takes place rapidly in an aerobic environment.

The liquid effluent is either discharged directly to a drain field or undergoes secondary treatment before being used for surface or subsurface irrigation of agricultural soil.
Secondary treatment:
Secondary and tertiary treatment vermifilters can be underneath the primary vermifilter in a single tower, but are typically single reactors, where several reactors can be chained in series as sequential vermifilters. Drainage within the reactor is provided by filter media packed according to the hydraulic conductivity and permeability of each material present within the vermifilter. The filter packing retains the solid particles present in the effluent wastewater, increases the hydraulic retention time and also provides a suitable habitat for sustaining a population of composting earthworms. This population requires adequate moisture levels within the filter media, but also adequate drainage and oxygen levels.
Sprinklers or drippers can be used in secondary and tertiary treatment vermifilter reactors (see image).
Hydraulic factors (hydraulic retention time, hydraulic loading rate and organic loading rate) and biological factors (earthworm numbers, levels of biofilm) can influence treatment efficiency.
Design:
Vermifilters are enclosed reactors made from durable materials that eliminate the entry of vermin, usually plastic or concrete. Ventilation must be sufficient to ensure an aerobic environment for the worms and microorganisms, while also inhibiting entry of unwanted flies. Temperature within the reactor needs to be maintained within a range suitable for the species of compost worms used.
Influent entry:
Influent entry is from above the filter media. Full-flush toilets can have the entry point into the side of the reactor, whereas micro-flush toilets, because they do not provide sufficient water to convey solids through sewer pipes, are generally installed directly above the reactor. For primary treatment reactors, sufficient vertical space must be provided for growth of the pile. This depends on the volume of solids in the influent and the presence of slower-decomposing materials such as toilet paper. Secondary and tertiary treatment reactors can use sprinklers or tricklers to distribute the influent wastewater evenly over the filter media to improve treatment efficiency.
Filter substrate:
Drainage within the vermifilter reactor is provided by the filter media. The filter media has the dual purpose of retaining the solid organic material while also providing a habitat suitable for sustaining a population of composting worms. This population requires adequate moisture levels within the media, along with good drainage and aerobic conditions.
Vermifilter reactors may comprise a single section packed only with organic media, or up to three filter sections comprising an organic top layer that provides habitat for the earthworms, an inorganic upper layer of sand and lower layer of gravel. The filter sits on top of a sump or drainage layer of coarse gravel, rocks or pervious plastic drainage coil where the treated effluent is discharged and/or recirculated to the top of the reactor. Alternatively the filter media may be suspended above the sump in a basket. Synthetic geotextile cloth is sometimes used to retain the filter media in place above the drainage layer. To remain aerobic, adequate ventilation must be provided, along with an outlet for the liquid effluent to drain away.
Common filter packing materials include sawdust, wood chips, coir, bark, peat, and straw for the organic layer. Gravel, quartz sand, round stones, pumice, mud balls, glass balls, ceramsite and charcoal are commonly used for the inorganic layer. The surface area and porosity of these filter materials influence treatment performance. Materials with low granulometry (small particles) and large surface area may improve the performance of the vermifilter but impede its drainage.
Sizing:
Vermifilters can be constructed as single tower systems, or separate staged reactors (either gravity or pump operated) for the treatment of wastewater according to design requirements (primary, secondary, tertiary treatment). More stages can increase the degree of treatment because multiple stage systems provide accumulating aerobic conditions suitable for nitrification of ammonium and removal of COD.
The design parameters of vermifilters include stocking density of earthworms (although over time earthworm population tends to be self-moderating), filter media composition, hydraulic loading rate, hydraulic retention time and organic loading rate. Hydraulic retention time and hydraulic loading rate both affect effluent quality. Hydraulic retention time is the actual time the wastewater is in contact with the filter media and is related to the depth of the vermifilter (which may increase over time due to the accumulation of earthworm vermicastings), reactor volume and type of material used (porosity). The hydraulic retention time determines wastewater inflow rate (hydraulic loading as influent volume per hour).
In principle, provided the environment is aerobic, the longer the wastewater remains inside the filter, the greater the BOD5 and COD removal efficiency will be, but at the expense of hydraulic loading. Wastewater requires sufficient contact time with the biofilm to allow for the adsorption, transformation, and reduction of contaminants. The hydraulic loading rate is an essential design parameter: the volume of wastewater that a vermifilter can reasonably treat in a given amount of time. For a given system, higher hydraulic loading rates will cause hydraulic retention time to decrease and therefore reduce the level of treatment. Hydraulic loading rate may depend on parameters such as the structure, effluent quality and bulk density of the filter packing, along with the method of effluent application. Common hydraulic retention time values in vermifiltration systems range from 1 to 3 hours. Hydraulic loading rates commonly vary between 0.2 and 3.0 m3 m−2 day−1. Organic loading rate is defined as the amount of soluble and particulate organic matter (as BOD5) per unit area per unit time.

Treatment efficiency is influenced by the health, maturity and population abundance of the earthworms. Abundance is a fundamental parameter for efficient operation of a vermifiltration system. Different values are reported in the literature, usually in grams or number of individuals per volume or surface area of filter packing. Common densities vary between 10 g L−1 and 40 g L−1 of filter packing material.

An abundance of earthworms improves treatment efficiency, in particular BOD5, TSS and NH4+ removal. This is because earthworms release organic matter into the filter media and stimulate nitrogen mineralization. Earthworm castings may have substances which contribute to higher BOD5 removal.
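To make the sizing arithmetic concrete, the short Python sketch below turns a target hydraulic loading rate into a required filter surface area. This is a minimal illustration, not a published design method; the function names, the household flow of 1.2 m3/day and the design value of 0.5 m3 m−2 day−1 are assumptions of ours (the design value sits near the conservative end of the 0.2–3.0 m3 m−2 day−1 range quoted above).

```python
# Minimal vermifilter sizing arithmetic (illustrative sketch, assumed figures).

def hydraulic_loading_rate(flow_m3_per_day: float, area_m2: float) -> float:
    """Hydraulic loading rate (HLR), m3 per m2 of filter surface per day."""
    return flow_m3_per_day / area_m2

def required_area(flow_m3_per_day: float, target_hlr: float) -> float:
    """Filter surface area (m2) needed to stay at or below a target HLR."""
    return flow_m3_per_day / target_hlr

flow = 1.2        # household wastewater, m3/day (assumed example)
target_hlr = 0.5  # design HLR, m3 m-2 day-1 (assumed, near the low end of 0.2-3.0)

area = required_area(flow, target_hlr)
print(f"required filter surface area: {area:.1f} m2")          # -> 2.4 m2
print(f"check: HLR = {hydraulic_loading_rate(flow, area):.2f} m3/m2/day")
```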
Operation and maintenance:
A vermifilter has low mechanical and manual maintenance requirements, and, where gravity operated, requires no energy input. Recirculation, if required for improved effluent quality, would require a pump. Occasional topping up of organic materials may be required as these decompose and reduce in volume. The volume of worm castings increases only slowly and occasionally vermicompost needs to be removed from the vermifilter.
Solids accumulate on the surface of the organic filter media (or filter packing). The liquid fraction drains through the medium into the sump or equaliser and is either discharged from the reactor or recirculated to the top for further treatment. Wastewater is discharged to the surface of the filter material by direct application or by sprinklers, drippers or tricklers.
Examples:
Image: construction of primary and secondary domestic vermifilters from readily available materials.
A household pour-flush toilet, with combined primary vermifiltration and direct effluent infiltration into the soil below, is called the "Tiger Toilet" and has been tested by Bear Valley Ventures and Primove Infrastructure Development Consultants in rural India. Unlike a pit latrine, it was found that there was virtually no accumulation of fecal material over a one-year period. In the effluent, there was a 99% reduction in fecal coliforms. User satisfaction is high, driven mainly by a lack of odour. This system is now being marketed commercially in India, where over 2000 of these toilets and treatment systems had been sold and installed by May 2017.
"Tiger worm toilets" are also being promoted by Oxfam as a sanitation solution in refugee camps, slums and peri-urban areas in Africa, for example in Liberia.
Low-flush vermifilter toilet systems with direct subsoil soakage are being marketed in Ghana and other African countries by the Ghana Sustainable Aid Project (GSAP) with support by Providence College in the U.S. and the University of Ghana.
Biofilcom is a company active in Ghana which is marketing the process under the name of "Biofil Digester".
In Australia and New Zealand, there are numerous suppliers offering vermifilter systems for domestic greywater and/or blackwater treatment, with primary treated effluent disposal to subsurface leach fields. Examples include Wormfarm, Zenplumb, Naturalflow, SWWSNZ and Autoflow.
**Paraboloid**
In geometry, a paraboloid is a quadric surface that has exactly one axis of symmetry and no center of symmetry. The term "paraboloid" is derived from parabola, which refers to a conic section that has a similar property of symmetry.
Every plane section of a paraboloid by a plane parallel to the axis of symmetry is a parabola. The paraboloid is hyperbolic if every other plane section is either a hyperbola, or two crossing lines (in the case of a section by a tangent plane). The paraboloid is elliptic if every other nonempty plane section is either an ellipse, or a single point (in the case of a section by a tangent plane). A paraboloid is either elliptic or hyperbolic.
Equivalently, a paraboloid may be defined as a quadric surface that is not a cylinder, and has an implicit equation whose part of degree two may be factored over the complex numbers into two different linear factors. The paraboloid is hyperbolic if the factors are real; elliptic if the factors are complex conjugate.
An elliptic paraboloid is shaped like an oval cup and has a maximum or minimum point when its axis is vertical. In a suitable coordinate system with three axes x, y, and z, it can be represented by the equation $z = \frac{x^2}{a^2} + \frac{y^2}{b^2}$,
where a and b are constants that dictate the level of curvature in the xz and yz planes respectively. In this position, the elliptic paraboloid opens upward.
A hyperbolic paraboloid (not to be confused with a hyperboloid) is a doubly ruled surface shaped like a saddle. In a suitable coordinate system, a hyperbolic paraboloid can be represented by the equation $z = \frac{y^2}{b^2} - \frac{x^2}{a^2}$.
In this position, the hyperbolic paraboloid opens downward along the x-axis and upward along the y-axis (that is, the parabola in the plane x = 0 opens upward and the parabola in the plane y = 0 opens downward).
Any paraboloid (elliptic or hyperbolic) is a translation surface, as it can be generated by a moving parabola directed by a second parabola.
Properties and applications:
Elliptic paraboloid:
In a suitable Cartesian coordinate system, an elliptic paraboloid has the equation $z = \frac{x^2}{a^2} + \frac{y^2}{b^2}$.
If a = b, an elliptic paraboloid is a circular paraboloid or paraboloid of revolution. It is a surface of revolution obtained by revolving a parabola around its axis.
A circular paraboloid contains circles. This is also true in the general case (see Circular section).
From the point of view of projective geometry, an elliptic paraboloid is an ellipsoid that is tangent to the plane at infinity.
Plane sections:
The plane sections of an elliptic paraboloid can be: a parabola, if the plane is parallel to the axis; a point, if the plane is a tangent plane; an ellipse or empty, otherwise.
Parabolic reflector:
On the axis of a circular paraboloid, there is a point called the focus (or focal point), such that, if the paraboloid is a mirror, light (or other waves) from a point source at the focus is reflected into a parallel beam, parallel to the axis of the paraboloid. This also works the other way around: a parallel beam of light that is parallel to the axis of the paraboloid is concentrated at the focal point. For a proof, see Parabola § Proof of the reflective property.
Therefore, the shape of a circular paraboloid is widely used in astronomy for parabolic reflectors and parabolic antennas.
The surface of a rotating liquid is also a circular paraboloid. This is used in liquid-mirror telescopes and in making solid telescope mirrors (see rotating furnace).
Hyperbolic paraboloid:
The hyperbolic paraboloid is a doubly ruled surface: it contains two families of mutually skew lines. The lines in each family are parallel to a common plane, but not to each other. Hence the hyperbolic paraboloid is a conoid.
These properties characterize hyperbolic paraboloids and are used in one of the oldest definitions of hyperbolic paraboloids: a hyperbolic paraboloid is a surface that may be generated by a moving line that is parallel to a fixed plane and crosses two fixed skew lines.
This property makes it simple to manufacture a hyperbolic paraboloid from a variety of materials and for a variety of purposes, from concrete roofs to snack foods. In particular, Pringles fried snacks resemble a truncated hyperbolic paraboloid.

A hyperbolic paraboloid is a saddle surface, as its Gauss curvature is negative at every point. Therefore, although it is a ruled surface, it is not developable.
From the point of view of projective geometry, a hyperbolic paraboloid is a one-sheet hyperboloid that is tangent to the plane at infinity.
A hyperbolic paraboloid of equation $z = axy$ or $z = \frac{a}{2}(x^2 - y^2)$ (these are the same up to a rotation of axes) may be called a rectangular hyperbolic paraboloid, by analogy with rectangular hyperbolas.
Plane sections:
A plane section of a hyperbolic paraboloid with equation $z = \frac{x^2}{a^2} - \frac{y^2}{b^2}$ can be: a line, if the plane is parallel to the z-axis and has an equation of the form $bx \pm ay + c = 0$; a parabola, if the plane is parallel to the z-axis and the section is not a line; a pair of intersecting lines, if the plane is a tangent plane; a hyperbola, otherwise.
Examples in architecture:
Saddle roofs are often hyperbolic paraboloids as they are easily constructed from straight sections of material. Some examples:
- Philips Pavilion, Expo '58, Brussels (1958)
- IIT Delhi - Dogra Hall roof
- St. Mary's Cathedral, Tokyo, Japan (1964)
- Cathedral of Saint Mary of the Assumption, San Francisco, California, US (1971)
- Saddledome in Calgary, Alberta, Canada (1983)
- Scandinavium in Gothenburg, Sweden (1971)
- L'Oceanogràfic in Valencia, Spain (2003)
- London Velopark, England (2011)
- Waterworld Leisure & Activity Centre, Wrexham, Wales (1970)
- Markham Moor Service Station roof, A1 (southbound), Nottinghamshire, England
Cylinder between pencils of elliptic and hyperbolic paraboloids:
The pencil of elliptic paraboloids $z = x^2 + \frac{y^2}{b^2}$, $b > 0$, and the pencil of hyperbolic paraboloids $z = x^2 - \frac{y^2}{b^2}$, $b > 0$, approach the same surface $z = x^2$ for $b \to \infty$, which is a parabolic cylinder (see image).
Curvature:
The elliptic paraboloid, parametrized simply as $\vec{\sigma}(u,v) = \left(u, v, \frac{u^2}{a^2} + \frac{v^2}{b^2}\right)$, has Gaussian curvature $K(u,v) = \frac{4}{a^2 b^2 \left(1 + \frac{4u^2}{a^4} + \frac{4v^2}{b^4}\right)^2}$ and mean curvature $H(u,v) = \frac{a^2 + b^2 + \frac{4u^2}{a^2} + \frac{4v^2}{b^2}}{a^2 b^2 \sqrt{\left(1 + \frac{4u^2}{a^4} + \frac{4v^2}{b^4}\right)^3}}$, which are both always positive, have their maximum at the origin, become smaller as a point on the surface moves further away from the origin, and tend asymptotically to zero as the said point moves infinitely away from the origin.
The hyperbolic paraboloid, when parametrized as $\vec{\sigma}(u,v) = \left(u, v, \frac{u^2}{a^2} - \frac{v^2}{b^2}\right)$, has Gaussian curvature $K(u,v) = \frac{-4}{a^2 b^2 \left(1 + \frac{4u^2}{a^4} + \frac{4v^2}{b^4}\right)^2}$ and mean curvature $H(u,v) = \frac{-a^2 + b^2 - \frac{4u^2}{a^2} + \frac{4v^2}{b^2}}{a^2 b^2 \sqrt{\left(1 + \frac{4u^2}{a^4} + \frac{4v^2}{b^4}\right)^3}}$.
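Both curvature formulas can be checked mechanically from the first and second fundamental forms of the Monge patch. The SymPy sketch below does this for the elliptic case; it is a verification sketch added here (variable names are ours), not part of the source text.

```python
# Verify the Gaussian curvature of z = u^2/a^2 + v^2/b^2 via
# K = (LN - M^2) / (EG - F^2), built from the fundamental forms.
import sympy as sp

u, v, a, b = sp.symbols('u v a b', positive=True)
r = sp.Matrix([u, v, u**2/a**2 + v**2/b**2])  # parametrization sigma(u, v)

ru, rv = r.diff(u), r.diff(v)
n = ru.cross(rv)
n = n / n.norm()                              # unit normal (upward-pointing)

E, F, G = ru.dot(ru), ru.dot(rv), rv.dot(rv)  # first fundamental form
L, M, N = r.diff(u, 2).dot(n), ru.diff(v).dot(n), r.diff(v, 2).dot(n)  # second

K = sp.simplify((L*N - M**2) / (E*G - F**2))
expected = 4 / (a**2 * b**2 * (1 + 4*u**2/a**4 + 4*v**2/b**4)**2)
print(sp.simplify(K - expected))              # prints 0: the formula checks out
```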
Geometric representation of multiplication table:
If the hyperbolic paraboloid $z = \frac{x^2}{a^2} - \frac{y^2}{b^2}$ is rotated by an angle of π/4 in the +z direction (according to the right-hand rule), the result is the surface $z = \frac{x^2 + y^2}{2}\left(\frac{1}{a^2} - \frac{1}{b^2}\right) + xy\left(\frac{1}{a^2} + \frac{1}{b^2}\right)$, and if a = b then this simplifies to $z = \frac{2xy}{a^2}$. Finally, letting a = √2, we see that the hyperbolic paraboloid $z = \frac{x^2 - y^2}{2}$ is congruent to the surface $z = xy$, which can be thought of as the geometric representation (a three-dimensional nomograph, as it were) of a multiplication table.
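The rotated form quoted above can be recovered by direct substitution; the following worked steps (our notation) make the omitted algebra explicit.

```latex
% Rotating by $\pi/4$ about the $z$-axis: a point $(x, y)$ of the rotated
% surface comes from the original point $(u, v)$ with
%   $u = (x + y)/\sqrt{2}$ and $v = (y - x)/\sqrt{2}$, so
\[
z = \frac{u^2}{a^2} - \frac{v^2}{b^2}
  = \frac{(x+y)^2}{2a^2} - \frac{(y-x)^2}{2b^2}
  = \frac{x^2 + y^2}{2}\left(\frac{1}{a^2} - \frac{1}{b^2}\right)
    + xy\left(\frac{1}{a^2} + \frac{1}{b^2}\right).
\]
% With $a = b$ this collapses to $z = 2xy/a^2$, and $a = \sqrt{2}$ gives $z = xy$.
```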
The two paraboloidal ℝ² → ℝ functions $z_1(x,y) = \frac{x^2 - y^2}{2}$ and $z_2(x,y) = xy$ are harmonic conjugates, and together form the analytic function $f(z) = \frac{z^2}{2} = f(x + yi) = z_1(x,y) + i z_2(x,y)$, which is the analytic continuation of the ℝ → ℝ parabolic function $f(x) = \frac{x^2}{2}$.
Dimensions of a paraboloidal dish:
The dimensions of a symmetrical paraboloidal dish are related by the equation $4FD = R^2$, where F is the focal length, D is the depth of the dish (measured along the axis of symmetry from the vertex to the plane of the rim), and R is the radius of the rim. They must all be in the same unit of length. If two of these three lengths are known, this equation can be used to calculate the third.
A more complex calculation is needed to find the diameter of the dish measured along its surface. This is sometimes called the "linear diameter", and equals the diameter of a flat, circular sheet of material, usually metal, which is the right size to be cut and bent to make the dish. Two intermediate results are useful in the calculation: $P = 2F$ (or the equivalent: $P = \frac{R^2}{2D}$) and $Q = \sqrt{P^2 + R^2}$, where F, D, and R are defined as above. The diameter of the dish, measured along the surface, is then given by $\frac{RQ}{P} + P \ln\left(\frac{R + Q}{P}\right)$, where ln x means the natural logarithm of x, i.e. its logarithm to base e.
The volume of the dish, the amount of liquid it could hold if the rim were horizontal and the vertex at the bottom (e.g. the capacity of a paraboloidal wok), is given by $\frac{\pi}{2} R^2 D$, where the symbols are defined as above. This can be compared with the formulae for the volumes of a cylinder ($\pi R^2 D$), a hemisphere ($\frac{2\pi}{3} R^2 D$, where D = R), and a cone ($\frac{\pi}{3} R^2 D$). $\pi R^2$ is the aperture area of the dish, the area enclosed by the rim, which is proportional to the amount of sunlight a reflector dish can intercept. The surface area of a parabolic dish can be found using the area formula for a surface of revolution, which gives $A = \frac{\pi R}{6D^2}\left(\sqrt{(R^2 + 4D^2)^3} - R^3\right)$.
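The dish relations above chain together naturally; the Python sketch below computes the focal length from $4FD = R^2$, then the linear diameter via the intermediate results P and Q, then the volume and surface area. The rim radius and depth are assumed example values, and the linear-diameter expression is the reconstructed formula given above.

```python
# Paraboloidal dish relations: 4FD = R^2, linear diameter, volume, area.
import math

R = 0.60   # rim radius in metres (assumed example)
D = 0.20   # dish depth in metres (assumed example)

F = R**2 / (4 * D)                   # focal length, from 4FD = R^2
P = 2 * F                            # intermediate result P (equals R^2 / 2D)
Q = math.sqrt(P**2 + R**2)           # intermediate result Q

linear_diameter = R * Q / P + P * math.log((R + Q) / P)
volume = (math.pi / 2) * R**2 * D    # capacity with rim horizontal, vertex down
area = math.pi * R * (math.sqrt((R**2 + 4 * D**2)**3) - R**3) / (6 * D**2)

print(f"F = {F:.3f} m")                              # 0.450 m
print(f"linear diameter = {linear_diameter:.3f} m")
print(f"volume = {volume:.4f} m^3, area = {area:.4f} m^2")
```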
**Gravity spreading**
Gravity spreading is a phenomenon in which a geological body laterally extends and vertically contracts to reduce its gravitational potential energy. It has been observed on many different scales, and at numerous locations on Earth, from rhyolite lava flows to passive margins. Additionally, gravity spreading is likely to have occurred on both Mars and Venus.
Distinction from Gravity Gliding:
Historically, geologists have used the terms "gravity spreading" and "gravity gliding" interchangeably, or with little distinction. This article follows the convention of "Excursus on gravity gliding and gravity spreading" by D.D. Schultz-Ela, which defines gravity spreading as a lateral extension and vertical contraction, which thus must be applied to a non-rigid body. Gravity gliding, however, is applied to a block that is not being deformed, and is therefore less common to observe. However, it can be difficult to distinguish between the two in real world scenarios, and often both occur simultaneously.
Mechanism:
For gravity spreading to occur, a rock mass must be driven to deformation by gravity. As long as the center of gravity of the system descends, portions of the system may rise. Of course, a material normally resists such deformation. For gravity spreading to occur, the differential stress must be greater than that rock body's yield strength. Gravity spreading can be thought of as a mound of molasses that spreads out, and gravity gliding can be imagined as a wooden block sliding down a slope.
Examples:
Earth:
Mountains:
Heart Mountain in Wyoming, United States, has been extensively studied because Ordovician age carbonates (Madison Limestone) sit on top of a much younger (~50 Ma) sedimentary formation (Willwood Formation). It is now largely accepted that this juxtaposition of old rocks on top of young is the result of gravity spreading and gliding. Field observations, such as slight internal deformations of the older formation, indicate the gravity gliding and spreading of the Madison Limestone. The specific details of the gravity spreading event are unclear, but it is thought to have been induced by the Laramide Orogeny, approximately 50 Ma. This caused the Madison Limestone to slide into the nearby Bighorn Basin, where it came to rest on top of the Willwood Formation. The cause of block motion is debated, with numerous models to explain how such a large block could have moved tens of kilometers down a slope of less than 2°. Models have ranged from lubrication by hydrothermal circulation, to movement initiation by volcanogenic seismicity, to frictional heating dissociating CO2 from the carbonates, resulting in dramatically reduced friction. The last of these theories is among the most recent, and by far the most spectacular. The authors envisage initially slow sliding, likely as the result of a volcanic eruption, until frictional heating of the carbonate rocks creates a supercritical CO2 layer, decreasing the friction tremendously. From this point, the sliding would occur rapidly, perhaps as fast as 150 km/h.
Passive margins:
Gravity spreading in passive margins occurs when gravitational forces are strong enough to overcome the overburden's resistance to motion along its basal surface, along with its internal strength. The gravitational forces are a function of the dip of the slope and the dip of the décollement layer.
Lava flows:
Rhyolite lava flows in northeastern New South Wales, Australia, show recumbent folds that record a history of vertical shortening and lateral extension during deposition, consistent with what one would expect from gravity spreading. This is the result of lava being displaced by new lava extruding from the vent.
Mars:
Satellite images of Mars have shown that the Thaumasia Plateau has large numbers of thrust faults, normal faults, and ridges. This rifting has resulted in canyons, and compression at the front of the "mega-slide" has caused the ridges and thrust faults observed at the low end of the region. To explain these faults and ridges, a four-stage model involving gravity spreading is used:
1. A thick salt layer is deposited. This is possible in either wet or dry conditions.
2. Tharsis, a volcanic plateau, is emplaced. This increases both the heat flux of the area and the topographic slope. Volcanism associated with Tharsis also deposits ash and lava flows.
3. The layers of salt and ice beneath the volcanics provide detachment points for the initiation of gravity spreading to the southeast.
4. Fractures from the basal detachment plane open an aquifer, resulting in the release of water and incision of outflow channels.
Venus:
It has been suggested that the "blocky" surface of Venus is the result of gravity spreading. This interpretation is based on flow-like structures correlated with topography, as well as potential regions of thermal uplift, and has been reinforced by terrestrial analogues.
**Relations (philosophy)**
Relations are ways in which things, the relata, stand to each other. Relations are in many ways similar to properties in that both characterize the things they apply to. Properties are sometimes treated as a special case of relations involving only one relatum. In philosophy (especially metaphysics), theories of relations are typically introduced to account for repetitions of how several things stand to each other.
Overview:
The concept of relation has a long and complicated history. One of the interests for the Greek philosophers lay in the number of ways in which a particular thing might be described, and the establishment of a relation between one thing and another was one of these. A second interest lay in the difference between these relations and the things themselves. This was to culminate in the view that the things in themselves could not be known except through their relations. Debates similar to these continue into modern philosophy and include further investigations into types of relation and whether relations exist only in the mind or the real world or both.
An understanding of types of relation is important to an understanding of relations between many things, including those between people, communities and the wider world. Most of these are complex relations, but of the simpler, analytical relations out of which they are formed, there are sometimes held to be three types, although opinion on the number may differ. The three types are (1) spatial relations, which include geometry and number, (2) relations of cause and effect, and (3) the classificatory relations of similarity and difference that underlie knowledge. Similar classifications have been suggested in the sciences, mathematics, and the arts.
Internal and external relations:
An important distinction is between internal and external relations. A relation is internal if it is fully determined by the features of its relata. For example, an apple and a tomato stand in the internal relation of similarity to each other because they are both red. Some philosophers have inferred from this that internal relations do not have a proper ontological status since they can be reduced to intrinsic properties. External relations, on the other hand, are not fixed by the features of their relata. For example, a book stands in an external relation to a table by lying on top of it. But this is not determined by the book's or the table's features, like their color, their shape, etc. One problem associated with external relations is that they are difficult to locate. For example, the lying-on-top is located neither in the book nor in the table. This has prompted some philosophers to deny that there are external relations. Properties do not face this problem since they are located in their bearer.
History:
Ancient Greek philosophy:
Traditionally the history of the concept of relation begins with Aristotle and his concept of relative terms. In Metaphysics he states: "Things are called relative as the double to the half... as that which can act to that which can be acted upon... and as the knowable to knowledge". It has been argued that the content of these three types can be traced back to the Eleatic Dilemmas, a series of puzzles through which the world can be explained in totally opposite ways; for example, things can be both one and many, both moving and stationary, and both like and unlike one another.

For Aristotle, relation was one of ten distinct kinds of categories (Greek: kategoriai) which list the range of things that can be said about any particular subject: "...each signifies either substance or quantity or quality or relation or where or when or being-in-a-position or having or acting or being acted upon". Subjects and predicates were combined to form simple propositions. These were later redefined as "categorical" propositions in order to distinguish them from two other types of proposition, the disjunctive and the hypothetical, identified a little later by Chrysippus.

An alternative strand of thought at the time was that relation was more than just one of ten equal categories. A fundamental opposition was developing between substance and relation. Plato in Theaetetus had noted that "some say all things are said to be relative", and Speusippus, his nephew and successor at the Academy, maintained the view that "... a thing cannot be known apart from the knowledge of other things, for to know what a thing is, we must know how it differs from other things".

Plotinus in third-century Alexandria reduced Aristotle's categories to five: substance, relation, quantity, motion and quality (VI.3.3, VI.3.21). He gave further emphasis to the distinction between substance and relation and stated that there were grounds for the latter three (quantity, motion and quality) to be considered as relations. Moreover, these latter three categories were posterior to the Eleatic categories, namely the unity/plurality, motion/stability and identity/difference concepts that Plotinus called "the hearth of reality" (V.1.4). Plotinus liked to picture relations as lines linking elements, but in a process of abstraction our minds tend to ignore the lines "and think only of their terminals" (VI.5.5). His pupil and biographer, Porphyry, developed a tree analogy picturing the relations of knowledge as a tree branching from the highest genera down through intermediate species to the individuals themselves (V.3.10, V.6.1).

Scholasticism to the Enlightenment:
The opposition between substance and relation was given a theological perspective in the Christian era. Basil in the Eastern church suggested that an understanding of the Trinity lay more in understanding the types of relation existing between the three members of the Godhead than in the nature of the Persons themselves. Thomas Aquinas in the Western church noted that in God "relations are real" (p. 52), and, echoing Aristotle, claimed that there were indeed three types of relation which give a natural order to the world. These were quantity, as in double and half; activity, as in acting and being acted upon; and understanding, through the qualitative concepts of genus and species. "Some have said that relation is not a reality but only an idea. But this is plainly seen to be false from the very fact that things themselves have a mutual natural order and relation... There are three conditions that make a relation to be real or logical..."

The end of the Scholastic period marked the beginning of a decline in the pre-eminence of the classificatory relation as a way of explaining the world. Science was now in the ascendant, and with it scientific reason and the relation of cause and effect. In Britain, John Locke, influenced by Isaac Newton and the laws of motion, developed a similar mechanistic view of the human mind. Following Hobbes's notion of "trains of thought", where one idea naturally follows another in the mind, Locke developed further the concept of knowledge as the perception of relations between ideas. These relations included mathematical relations, scientific relations such as co-existence and succession, and the relations of identity and difference.
It was left to the Scottish philosopher David Hume to reduce these kinds of mental association to three: "To me there appears to be only three principles of connexion among ideas namely Resemblance, Contiguity in time or place, and Cause or Effect".

The problem which troubled Hume, of being able to establish the reality of relations from experience, in particular the relation of cause and effect, was solved in another way by Immanuel Kant, who took the view that our knowledge is only partly derived from the external world. Part of our knowledge, he argued, must be due to the modifying nature of our own minds, which imposes on perception not only the forms of space and time but also the categories of relation, which he understood to be a priori concepts contained within the understanding. Of these he famously said: "Everything in our knowledge... contains nothing but mere relations" (p. 87). Kant took a more analytical view of the concept of relation, and his categories of relation were three, namely community, causality and inherence (p. 113). These can be compared with Hume's three kinds of association in that, firstly, community depicts elements conjoined in time and space; secondly, causality compares directly with cause and effect; and thirdly, inherence implies the relation of a quality to its subject and plays an essential part in any consideration of the concept of resemblance. Preceding the table of categories in the Critique of Pure Reason is the table of judgements, and here, under the heading of relation, are the three types of syllogism, namely the disjunctive, the hypothetical and the categorical (pp. 107, 113), developed as we have seen through Aristotle, Chrysippus and the logicians of the Middle Ages. Schopenhauer raised objections to the term community, and the term disjunction, as a relation, can be usefully substituted for the more complex concept of community. G.W.F. Hegel also referred to three types of proposition, but in Hegel the categories of relation which for Kant were "subjective mental processes" have now become "objective ontological entities".
Late modern and contemporary philosophy:
Late modern American philosopher C. S. Peirce recorded that his own categories of relation grew originally out of a study of Kant. He introduced three metaphysical categories which pervaded his philosophy, and these were ordered through a consideration of the development of our mental processes.

Firstness: "The first is predominant in feeling... the whole content of consciousness is made up of qualities of feeling as truly as the whole of space is made up of points or the whole of time by instants" (pp. 149–159). Consciousness in a sense arises through the gradual disjunction of what was once whole. Elements appear to be monadic in character and are represented as points in space and time.
Secondness: The idea of secondness "is predominant in the ideas of causation", coming into being as "an action and reaction" between ourselves and some other, or between ourselves and a stimulus (pp. 159–163). It is essentially dyadic in character and in some versions of symbolic logic is represented by an arrow.
Thirdness: "Ideas in which thirdness predominates include the idea of a sign or representation... For example, a picture signifies by similarity". This type of relation is essentially triadic in nature and is represented in Peirce's logic as a brace or bracket.

These categories of relation appeared in Peirce's logic of relatives and followed earlier work undertaken by the mathematician Augustus De Morgan at Cambridge, who had introduced the notion of relation into formal logic in 1849. Among the philosophers who followed may be mentioned T. H. Green in England, who took the view that all reality lies in relations, and William James in America who, emphasising the concept of relation, pictured the world as a "concatenated unity" with some parts joined and other parts disjoined.

Contemporary British philosopher Bertrand Russell, in 1921, reinforced James's view that "... the raw material out of which the world is built up, is not of two sorts, one matter and one mind but that it is designed in different patterns by its interrelations, and that some arrangements may be called mental, while others may be called physical". Wittgenstein, also in 1921, saw the same kinds of relation structuring both the material world and the mental world. While the real world consisted of objects and their relations, which combined to form facts, the mental world consisted of similar subjects and predicates which pictured or described the real world. For Wittgenstein there were three kinds of description (enumeration, function and law) which themselves bear a notable if distant "family resemblance" to the three kinds of relation whose history we have been following.

Also of note at the beginning of the twentieth century were arguments associated with G. E. Moore, among others, concerning the concept of internal and external relations, whereby relations could be seen as either contingent or accidental parts of the definition of a thing.
**Infertility**
Infertility is the inability of a person, animal or plant to reproduce by natural means. It is usually not the natural state of a healthy adult, except notably among certain eusocial species (mostly haplodiploid insects). It is the normal state of a human child or other young offspring, because they have not undergone puberty, which is the body's start of reproductive capacity.
In humans, infertility is the inability to become pregnant after one year of unprotected and regular sexual intercourse involving a male and female partner. There are many causes of infertility, including some that medical intervention can treat. Estimates from 1997 suggest that worldwide about five percent of all heterosexual couples have an unresolved problem with infertility. Many more couples, however, experience involuntary childlessness for at least one year: estimates range from 12% to 28%.
The main cause of infertility in humans is age, and an advanced maternal age can raise the probability of suffering a spontaneous abortion during pregnancy.
Male infertility is responsible for 20–30% of infertility cases, while 20–35% are due to female infertility, and 25–40% are due to combined problems in both partners. In 10–20% of cases, no cause is found. The most common cause of female infertility is age, which generally manifests in sparse or absent menstrual periods. Male infertility is most commonly due to deficiencies in the semen, and semen quality is used as a surrogate measure of male fecundity.

Women who are fertile experience a period of fertility before and during ovulation, and are infertile for the rest of the menstrual cycle. Fertility awareness methods are used to discern when these changes occur, by tracking changes in cervical mucus or basal body temperature.
Definition:
"Demographers tend to define infertility as childlessness in a population of women of reproductive age," whereas the epidemiological definition refers to "trying for" or "time to" a pregnancy, generally in a population of women exposed to a probability of conception. Female fertility normally peaks in young adulthood and diminishes after 35, with pregnancy occurring rarely after age 50. A female is most fertile within 24 hours of ovulation. Male fertility usually peaks in young adulthood and declines after age 40.

The time that needs to pass (during which the couple tries to conceive) before a couple is diagnosed with infertility differs between jurisdictions. Existing definitions of infertility lack uniformity, rendering comparisons in prevalence between countries or over time problematic. Therefore, data estimating the prevalence of infertility cited by various sources differ significantly. A couple that tries unsuccessfully to have a child after a certain period of time (often a short period, but definitions vary) is sometimes said to be subfertile, meaning less fertile than a typical couple. Both infertility and subfertility are defined similarly and often used interchangeably, but subfertility is a delay in conceiving within six to twelve months, whereas infertility is the inability to conceive naturally within a full year.
World Health Organization:
The World Health Organization defines infertility as follows: Infertility is "a disease of the reproductive system defined by the failure to achieve a clinical pregnancy after 12 months or more of regular unprotected sexual intercourse (and there is no other reason, such as breastfeeding or postpartum amenorrhoea). Primary infertility is infertility in a couple who have never had a child. Secondary infertility is failure to conceive following a previous pregnancy. Infertility may be caused by infection in the man or woman, but often there is no obvious underlying cause".

United States:
One definition of infertility that is frequently used in the United States by reproductive endocrinologists, doctors who specialize in infertility, to consider a couple eligible for treatment is: a woman under 35 has not conceived after 12 months of contraceptive-free intercourse. Twelve months is the lower reference limit for Time to Pregnancy (TTP) by the World Health Organization.
a woman over 35 has not conceived after six months of contraceptive-free sexual intercourse.

These time intervals would seem to be reversed; this is an area where public policy trumps science. The idea is that for women beyond age 35, every month counts, and if made to wait another six months to prove the necessity of medical intervention, the problem could become worse. The corollary to this is that, by definition, failure to conceive in women under 35 is not regarded with the same urgency as it is in those over 35.
United Kingdom:
In the UK, previous NICE guidelines defined infertility as failure to conceive after regular unprotected sexual intercourse for two years in the absence of known reproductive pathology. Updated NICE guidelines do not include a specific definition, but recommend that "A woman of reproductive age who has not conceived after 1 year of unprotected vaginal sexual intercourse, in the absence of any known cause of infertility, should be offered further clinical assessment and investigation along with her partner, with earlier referral to a specialist if the woman is over 36 years of age."

Other definitions:
Researchers commonly base demographic studies on infertility prevalence over a five-year period. Practical measurement problems, however, exist for any definition, because it is difficult to measure continuous exposure to the risk of pregnancy over a period of years.
Primary vs. secondary infertility:
Primary infertility is defined as the absence of a live birth for women who desire a child and have been in a union for at least 12 months, during which they have not used any contraceptives. The World Health Organisation also adds that 'women whose pregnancy spontaneously miscarries, or whose pregnancy results in a stillborn child, without ever having had a live birth, would present with primary infertility'.

Secondary infertility is defined as the absence of a live birth for women who desire a child and have been in a union for at least 12 months since their last live birth, during which they did not use any contraceptives.

Thus the distinguishing feature is whether or not the couple have ever had a pregnancy that led to a live birth.
Effects:
Psychological:
The consequences of infertility are manifold and can include societal repercussions and personal suffering. Advances in assisted reproductive technologies, such as IVF, can offer hope to many couples where treatment is available, although barriers exist in terms of medical coverage and affordability. The medicalization of infertility has unwittingly led to a disregard for the emotional responses that couples experience, which include distress, loss of control, stigmatization, and a disruption in the developmental trajectory of adulthood. One of the main challenges in assessing the distress levels in women with infertility is the accuracy of self-report measures. It is possible that women "fake good" in order to appear mentally healthier than they are. It is also possible that women feel a sense of hopefulness or increased optimism prior to initiating infertility treatment, which is when most assessments of distress are collected. Some early studies concluded that infertile women did not report any significant differences in symptoms of anxiety and depression compared with fertile women. The further into treatment a patient goes, the more often they display symptoms of depression and anxiety. Patients with one treatment failure had significantly higher levels of anxiety, and patients with two failures experienced more depression, when compared with those without a history of treatment. However, it has also been shown that the more depressed the infertile woman, the less likely she is to start infertility treatment and the more likely she is to drop out after only one cycle. Researchers have also shown that, despite a good prognosis and having the finances available to pay for treatment, discontinuation is most often due to psychological reasons. Fertility does not seem to increase when women take antioxidants to reduce the oxidative stress brought on by the situation.

Infertility may have psychological effects. Parenthood is one of the major transitions in adult life for both men and women. The stress of the non-fulfilment of a wish for a child has been associated with emotional consequences such as anger, depression, anxiety, marital problems and feelings of worthlessness.
Partners may become more anxious to conceive, increasing sexual dysfunction. Marital discord often develops, especially when they are under pressure to make medical decisions. Women trying to conceive often have depression rates similar to women who have heart disease or cancer. Emotional stress and marital difficulties are greater in couples where the infertility lies with the man.
Male and female partners respond differently to infertility problems. In general, women show higher depression levels than their male partners when dealing with infertility. A possible explanation may be that women feel more responsible and guilty than men during the process of trying to conceive. On the other hand, infertile men experience psychosomatic distress.
Social:
Having a child is considered to be important in most societies. Infertile couples may experience social and family pressure, leading to a feeling of social isolation. Factors of gender, age, religion, and socioeconomic status are important influences. Societal pressures may affect a couple's decision to approach, avoid, or experience an infertility treatment.
Moreover, the socioeconomic status influences the psychology of the infertile couples: low socioeconomic status is associated with increased chances of developing depression.
In many cultures, inability to conceive bears a stigma. In closed social groups, a degree of rejection (or a sense of being rejected by the couple) may cause considerable anxiety and disappointment. Some respond by actively avoiding the issue altogether.

In the United States, some treatments for infertility, including diagnostic tests, surgery and therapy for depression, can qualify one for Family and Medical Leave Act leave. It has been suggested that infertility be classified as a form of disability.
Causes:
Male infertility is responsible for 20–30% of infertility cases, while 20–35% are due to female infertility, and 25–40% are due to combined problems in both partners. In 10–20% of cases, no cause is found. The most common causes of female infertility are ovulation problems, usually manifested by scanty or absent menstrual periods. Male infertility is most commonly due to deficiencies in the semen, and semen quality is used as a surrogate measure of male fecundity.
Iodine deficiency:
Iodine deficiency may lead to infertility.
Natural infertility:
Before puberty, humans are naturally infertile; their gonads have not yet developed the gametes required to reproduce: boys' testicles have not developed the sperm cells required to impregnate a female; girls have not begun the process of ovulation which activates the fertility of their egg cells (ovulation is confirmed by the first menstrual cycle, known as menarche, which signals the biological possibility of pregnancy). Infertility in children is commonly referred to as prepubescence (or being prepubescent, an adjective also used to refer to humans without secondary sex characteristics).
The absence of fertility in children is considered a natural part of human growth and child development, as the hypothalamus in their brain is still underdeveloped and cannot release the hormones required to activate the gonads' gametes. Fertility in children before the ages of eight or nine is considered a disease known as precocious puberty. This disease is usually triggered by a brain tumor or other related injury.
Delayed puberty:
Delayed puberty, puberty that is absent or occurs later than the average onset (between the ages of ten and fourteen), may be a cause of infertility. In the United States, girls are considered to have delayed puberty if they have not started menstruating by age 16 (alongside lacking breast development by age 13). Boys are considered to have delayed puberty if they lack enlargement of the testicles by age 14. Delayed puberty affects about 2% of adolescents.

Most commonly, puberty may be delayed for several years and still occur normally, in which case it is considered constitutional delay of growth and puberty, a common variation of healthy physical development. Delay of puberty may also occur due to various causes such as malnutrition, various systemic diseases, or defects of the reproductive system (hypogonadism) or the body's responsiveness to sex hormones.
Immune infertility:
Antisperm antibodies (ASA) have been considered a cause of infertility in around 10–30% of infertile couples. In both men and women, ASA production is directed against surface antigens on sperm, which can interfere with sperm motility and transport through the female reproductive tract, inhibit capacitation and the acrosome reaction, impair fertilization, influence the implantation process, and impair growth and development of the embryo. The antibodies are classified into different groups: there are IgA, IgG and IgM antibodies. They also differ in the location of the spermatozoon they bind to (head, midpiece, tail). Factors contributing to the formation of antisperm antibodies in women are disturbance of normal immunoregulatory mechanisms, infection, violation of the integrity of the mucous membranes, rape and unprotected oral or anal sex. Risk factors for the formation of antisperm antibodies in men include the breakdown of the blood‑testis barrier, trauma and surgery, orchitis, varicocele, infections, prostatitis, testicular cancer, failure of immunosuppression and unprotected receptive anal or oral sex with men.
Causes:
Sexually transmitted infections Infections with the following sexually transmitted pathogens have a negative effect on fertility: Chlamydia trachomatis and Neisseria gonorrhoeae. There is a consistent association between Mycoplasma genitalium infection and female reproductive tract syndromes. M. genitalium infection is associated with an increased risk of infertility.
Causes:
Genetic Mutations to the NR5A1 gene encoding steroidogenic factor 1 (SF-1) have been found in a small subset of men with non-obstructive male factor infertility where the cause is unknown. Results of one study investigating a cohort of 315 men revealed changes within the hinge region of SF-1, with no rare allelic variants found in fertile control men. Affected individuals displayed more severe forms of infertility such as azoospermia and severe oligozoospermia. Small supernumerary marker chromosomes are abnormal extra chromosomes; they are three times more likely to occur in infertile individuals and account for 0.125% of all infertility cases. See Infertility associated with small supernumerary marker chromosomes and Genetics of infertility#Small supernumerary marker chromosomes and infertility.
Causes:
Other causes Factors that can cause both male and female infertility include DNA damage. In female oocytes, DNA damage reduces fertility; it can be caused by smoking, other xenobiotic DNA-damaging agents (such as radiation or chemotherapy), or accumulation of the oxidative DNA lesion 8-hydroxy-deoxyguanosine. In male sperm, DNA damage likewise reduces fertility; it can be caused by oxidative DNA damage, smoking, other xenobiotic DNA-damaging agents (such as drugs or chemotherapy), or other DNA-damaging agents including reactive oxygen species, fever or high testicular temperature. The damaged DNA related to infertility manifests itself through increased susceptibility to denaturation induced by heat or acid, or through the presence of double-strand breaks that can be detected by the TUNEL assay. In the related sperm chromatin dispersion (halo) test, the sperm's DNA is denatured and renatured: if DNA fragmentation (double- and single-strand breaks) is present, no halo appears around the spermatozoon, whereas a spermatozoon with intact DNA shows a visible halo under the microscope.
Causes:
General factors: diabetes mellitus, thyroid disorders, undiagnosed and untreated coeliac disease, and adrenal disease. Hypothalamic-pituitary factors: hyperprolactinemia and hypopituitarism. The presence of anti-thyroid antibodies is associated with an increased risk of unexplained subfertility, with an odds ratio of 1.5 and a 95% confidence interval of 1.1–2.0.
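The odds ratio quoted above is a standard epidemiological measure. As a minimal illustration of how such a figure and its 95% confidence interval are derived from a 2×2 table, the following sketch uses the standard Woolf (log-odds) method; the counts are purely hypothetical, chosen only so the result lands near the reported values:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf method) from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts, for illustration only:
print(odds_ratio_ci(60, 240, 45, 270))   # OR = 1.5, CI roughly (0.98, 2.29)
```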
Causes:
Environmental factors Toxins such as glues, volatile organic solvents or silicones, flame retardants, chemical dusts, polychlorinated biphenyls, and pesticides, as well as physical agents, can impair fertility. Tobacco smokers are 60% more likely to be infertile than non-smokers. German scientists have reported that a virus called adeno-associated virus might have a role in male infertility, though it is otherwise not harmful. Other diseases such as chlamydia and gonorrhea can also cause infertility, due to internal scarring (fallopian tube obstruction).
Causes:
Alimentary habits Obesity: Obesity can have a significant impact on male and female fertility. BMI (body mass index) may be a significant factor in fertility, as an increase in BMI in the male by as little as three units can be associated with infertility. Several studies have demonstrated that an increase in BMI is correlated with a decrease in sperm concentration, a decrease in motility and an increase in DNA damage in sperm. A relationship also exists between obesity and erectile dysfunction (ED). ED may be the consequence of the conversion of androgens to estradiol. The enzyme aromatase is responsible for this conversion and is found primarily in adipose tissue. As the amount of adipose tissue increases, there is more aromatase available to convert androgens, and serum estradiol levels increase. Other hormones, including inhibin B and leptin, may also be affected by obesity. Inhibin B levels have been reported to decrease with increasing weight, which results in decreased Sertoli cell numbers and sperm production. Leptin is a hormone associated with numerous effects including appetite control, inflammation, and decreased insulin secretion, according to many studies. Obese women have a higher rate of recurrent, early miscarriage compared to non-obese women.
Causes:
Low weight: Obesity is not the only way in which weight can impact fertility. Men who are underweight tend to have lower sperm concentrations than those who are at a normal BMI. For women, being underweight and having extremely low amounts of body fat are associated with ovarian dysfunction and infertility and they have a higher risk for preterm birth. Eating disorders such as anorexia nervosa are also associated with extremely low BMI. Although relatively uncommon, eating disorders can negatively affect menstruation, fertility, and maternal and fetal well-being.
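Since both of the preceding paragraphs frame weight effects through BMI, a minimal sketch of the underlying arithmetic (weight in kilograms divided by height in metres squared) may help; the figures are hypothetical and only illustrate how small a weight change corresponds to the three-unit BMI shift mentioned above:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

# A 1.80 m man gaining about 10 kg moves roughly three BMI units,
# the increment the studies above associate with reduced fertility.
print(round(bmi(85, 1.80), 1))   # 26.2
print(round(bmi(95, 1.80), 1))   # 29.3
```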
Causes:
Females The following causes of infertility may only be found in females.
Causes:
For a woman to conceive, certain things have to happen: vaginal intercourse must take place around the time when an egg is released from her ovary; the system that produces eggs has to be working at optimum levels; and her hormones must be balanced. For women, problems with fertilization arise mainly from either structural problems in the Fallopian tube or uterus, or problems releasing eggs. Infertility may be caused by blockage of the Fallopian tube due to malformations, infections such as chlamydia, or scar tissue. For example, endometriosis can cause infertility with the growth of endometrial tissue in the Fallopian tubes or around the ovaries. Endometriosis is usually more common in women in their mid-twenties and older, especially when childbirth has been postponed. Another major cause of infertility in women is the inability to ovulate. Ovulatory disorders make up 25% of the known causes of female infertility. Oligo-ovulation or anovulation results in infertility because no oocyte is released monthly; in the absence of an oocyte, there is no opportunity for fertilization and pregnancy. The World Health Organization subdivides ovulatory disorders into four classes: hypogonadotropic hypogonadal anovulation (e.g., hypothalamic amenorrhea); normogonadotropic normoestrogenic anovulation (e.g., polycystic ovarian syndrome, PCOS); hypergonadotropic hypoestrogenic anovulation (e.g., premature ovarian failure); and hyperprolactinemic anovulation (e.g., pituitary adenoma). Malformation of the eggs themselves may also complicate conception. For example, in polycystic ovarian syndrome the eggs only partially develop within the ovary and there is an excess of male hormones. Some women are infertile because their ovaries do not mature and release eggs; in this case, synthetic FSH by injection or Clomid (clomiphene citrate) via a pill can be given to stimulate follicles to mature in the ovaries. Other factors that can affect a woman's chances of conceiving include being overweight or underweight, and her age, as female fertility declines after the age of 30. Sometimes it can be a combination of factors, and sometimes a clear cause is never established.
Causes:
Common causes of female infertility include: ovulation problems (e.g. PCOS, the leading reason why women present to fertility clinics due to anovulatory infertility); tubal blockage; pelvic inflammatory disease caused by infections like tuberculosis; age-related factors; uterine problems; previous tubal ligation; endometriosis; advanced maternal age; and immune infertility. Males Male infertility is defined as the inability of a male to make a fertile female pregnant after at least one year of unprotected intercourse. There are multiple causes for male infertility. These include endocrine disorders (usually due to hypogonadism) at an estimated 2% to 5%, sperm transport disorders (such as vasectomy) at 5%, primary testicular defects (which include abnormal sperm parameters without any identifiable cause) at 65% to 80%, and idiopathic infertility (where an infertile male has normal sperm and semen parameters) at 10% to 20%. The main cause of male infertility is low semen quality. In men who have the necessary reproductive organs to procreate, infertility can be caused by a low sperm count due to endocrine problems, drugs, radiation, or infection. There may be testicular malformations, hormone imbalance, or blockage of the man's duct system. Although many of these can be treated through surgery or hormonal substitution, some may be permanent.
Causes:
Infertility associated with viable but immotile sperm may be caused by primary ciliary dyskinesia. The sperm must provide the zygote with DNA, centrioles, and an activation factor for the embryo to develop. A defect in any of these sperm structures may result in infertility that will not be detected by semen analysis. Antisperm antibodies cause immune infertility. Cystic fibrosis can lead to infertility in men.
Causes:
Combined infertility In some cases, both the man and woman may be infertile or subfertile, and the couple's infertility arises from the combination of these conditions. In other cases, the cause is suspected to be immunological or genetic; it may be that each partner is independently fertile but the couple cannot conceive together without assistance.
Causes:
Unexplained infertility In the US, up to 20% of infertile couples have unexplained infertility. In these cases, abnormalities are likely to be present but not detected by current methods. Possible problems could be that the egg is not released at the optimum time for fertilization, that it may not enter the fallopian tube, sperm may not be able to reach the egg, fertilization may fail to occur, transport of the zygote may be disturbed, or implantation fails. It is increasingly recognized that egg quality is of critical importance and women of advanced maternal age have eggs of reduced capacity for normal and successful fertilization. Also, polymorphisms in folate pathway genes could be one reason for fertility complications in some women with unexplained infertility. However, a growing body of evidence suggests that epigenetic modifications in sperm may be partially responsible.
Diagnosis:
If both partners are young and healthy and have been trying to conceive for one year without success, a visit to a physician or women's health nurse practitioner (WHNP) could help to highlight potential medical problems earlier rather than later. The doctor or WHNP may also be able to suggest lifestyle changes to increase the chances of conceiving. Women over the age of 35 should see their physician or WHNP after six months, as fertility tests can take some time to complete and age may limit the treatment options available. A doctor or WHNP takes a medical history and gives a physical examination. They can also carry out some basic tests on both partners to see if there is an identifiable reason for not having achieved a pregnancy. If necessary, they refer patients to a fertility clinic or local hospital for more specialized tests. The results of these tests help determine the best fertility treatment.
Treatment:
Treatment depends on the cause of infertility, but may include counselling and fertility treatments such as in vitro fertilization. According to ESHRE recommendations, couples with an estimated live birth rate of 40% or higher per year are encouraged to continue aiming for a spontaneous pregnancy. Treatment methods for infertility may be grouped as medical or as complementary and alternative treatments; some methods may be used in combination. Drugs used for both women and men include clomiphene citrate, human menopausal gonadotropin (hMG), follicle-stimulating hormone (FSH), human chorionic gonadotropin (hCG), gonadotropin-releasing hormone (GnRH) analogues, aromatase inhibitors, and metformin.
Treatment:
Medical treatments Medical treatment of infertility generally involves the use of fertility medication, medical devices, surgery, or a combination of these. If the sperm is of good quality and the mechanics of the woman's reproductive structures are good (patent fallopian tubes, no adhesions or scarring), a course of ovulation induction may be used. The physician or WHNP may also suggest using a conception cap (a type of cervical cap), which the patient uses at home by placing the sperm inside the cap and putting the device on the cervix, or intrauterine insemination (IUI), in which the doctor or WHNP introduces sperm into the uterus during ovulation via a catheter. In these methods, fertilization occurs inside the body. If conservative medical treatments fail to achieve a full-term pregnancy, the physician or WHNP may suggest that the patient undergo in vitro fertilization (IVF). IVF and related techniques (ICSI, ZIFT, GIFT) are called assisted reproductive technology (ART) techniques. ART techniques generally start with stimulating the ovaries to increase egg production. After stimulation, the physician surgically extracts one or more eggs from the ovary and unites them with sperm in a laboratory setting, with the intent of producing one or more embryos. Fertilization takes place outside the body, and the fertilized egg is reinserted into the woman's reproductive tract in a procedure called embryo transfer. Other medical techniques include tuboplasty, assisted hatching, and preimplantation genetic diagnosis.
Treatment:
In vitro fertilization IVF is the most commonly used ART. It has proven useful in overcoming infertility conditions such as blocked or damaged tubes, endometriosis, repeated IUI failure, unexplained infertility, poor ovarian reserve, and poor or even absent sperm counts.
Intracytoplasmic sperm injection The ICSI technique is used in cases of poor semen quality, low sperm count, or failed fertilization attempts during prior IVF cycles. The technique involves injecting a single healthy sperm directly into a mature egg. The fertilized embryo is then transferred to the womb.
Tourism Fertility tourism is the practice of traveling to another country for fertility treatments. It may be regarded as a form of medical tourism. The main reasons for fertility tourism are legal restrictions on the sought procedure in the home country, or lower prices. In vitro fertilization and donor insemination are the major procedures involved.
Treatment:
Stem cell therapy Several treatments based on stem cell therapy are under development, though still experimental. They present a new opportunity not only for partners lacking gametes, but also for same-sex couples and single people who want to have offspring. Theoretically, with this therapy artificial gametes could be derived in vitro. There are different lines of study for both women and men.
Treatment:
Spermatogonial stem cell transplantation: this takes place in the seminiferous tubule. With this treatment, the patient can resume spermatogenesis and therefore has the chance to have offspring. It is especially oriented toward cancer patients whose sperm is destroyed by the gonadotoxic treatment they undergo.
Treatment:
Ovarian stem cells: it has long been thought that women have a finite number of follicles from the very beginning. Nevertheless, scientists have found ovarian stem cells, which may generate new oocytes in postnatal conditions. They appear to constitute only about 0.014% of ovarian cells, which could explain why they were not discovered until recently. There is still some controversy about their existence, but if the discoveries hold, this could provide a new treatment for infertility. Stem cell therapy is still very new, and everything remains under investigation. It could become the future for the treatment of multiple diseases, including infertility, but it will take time before these studies can be translated to clinics and patients.
Epidemiology:
Prevalence of infertility varies depending on the definition, i.e. on the time span involved in the failure to conceive.
Infertility rates have increased by 4% since the 1980s, mostly from problems with fecundity due to an increase in age.
Fertility problems affect one in seven couples in the UK. Most couples (about 84%) who have regular sexual intercourse (that is, every two to three days) and who do not use contraception get pregnant within a year. About 95 out of 100 couples who are trying to get pregnant do so within two years.
Women become less fertile as they get older. Among women aged 35, about 94% who have regular unprotected sexual intercourse get pregnant within three years of trying; among women aged 38, only about 77% do. The effect of age upon men's fertility is less clear.
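These cumulative figures can be related by a simple probability model. Assuming, as a deliberate simplification, a constant per-cycle chance of conception, the probability of conceiving within n cycles is 1 − (1 − p)^n. The sketch below backs the per-cycle probability out of the "84% within a year" figure quoted above and shows why the observed two-year figure falls short of the naive prediction:

```python
def p_within(n_cycles: int, p_per_cycle: float) -> float:
    """Probability of conceiving within n cycles, assuming a constant
    per-cycle conception probability (a deliberate simplification)."""
    return 1 - (1 - p_per_cycle) ** n_cycles

# Back out the per-cycle probability implied by "84% within 12 cycles":
p = 1 - (1 - 0.84) ** (1 / 12)
print(round(p, 3))                # 0.142, i.e. roughly 14% per cycle
print(round(p_within(24, p), 3))  # 0.974 -- above the observed ~95%,
# because real per-cycle fecundability declines over time and with age.
```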
In people going forward for IVF in the UK, roughly half of fertility problems with a diagnosed cause are due to problems with the man, and about half due to problems with the woman. However, about one in five cases of infertility have no clear diagnosed cause.
In Britain, male factor infertility accounts for 25% of infertile couples, while 25% remain unexplained. Female causes account for 50%, with 25% due to anovulation and 25% due to tubal problems or other causes.
In Sweden, approximately 10% of couples wanting children are infertile. In approximately one-third of these cases the man is the factor, in one third the woman is the factor, and in the remaining third the infertility is a product of factors on both parts.
In many lower-income countries, estimating infertility is difficult due to incomplete information and to the stigmas attached to infertility and childlessness.
Epidemiology:
Data on income-limited individuals, male infertility, and fertility within non-traditional families may be limited due to traditional social norms. Historical data on fertility and infertility is limited as any form of study or tracking only began in the early 20th century. Per one account, "The invisibility of marginalised social groups in infertility tracking reflects broader social beliefs about who can and should reproduce. The offspring of privileged social groups are seen as a boon to society. The offspring of marginalised groups are perceived as a burden."
Society and culture:
Perhaps with the exception of infertility in science fiction, films and other fiction depicting the emotional struggles of assisted reproductive technology first had an upswing in the latter part of the 2000s decade, although the techniques had been available for decades. Yet the number of people who can relate to it by personal experience in one way or another is ever-growing, and the variety of trials and struggles is huge. Pixar's Up contains a depiction of infertility in an extended life montage that lasts the first few minutes of the film. Other individual examples are covered in the sub-articles of assisted reproductive technology. Ethics There are several ethical issues associated with infertility and its treatment.
Society and culture:
High-cost treatments are out of financial reach for some couples.
Debate over whether health insurance companies (e.g. in the US) should be required to cover infertility treatment.
Allocation of medical resources that could be used elsewhere.
The legal status of embryos fertilized in vitro and not transferred in vivo (see also beginning of pregnancy controversy).
Opposition to the destruction of embryos not transferred in vivo.
IVF and other fertility treatments have resulted in an increase in multiple births, provoking ethical analysis because of the link between multiple pregnancies, premature birth, and a host of health problems.
Religious leaders' opinions on fertility treatments; for example, the Roman Catholic Church views infertility as a calling to adopt or to use natural treatments (medication, surgery, or cycle charting) and members must reject assisted reproductive technologies.
Infertility caused by DNA defects on the Y chromosome is passed on from father to son. If natural selection is the primary error correction mechanism that prevents random mutations on the Y chromosome, then fertility treatments for men with abnormal sperm (in particular ICSI) only defer the underlying problem to the next male generation.
Specific procedures, such as gestational surrogacy, have led to numerous ethical issues, particularly when people living in one country contract for surrogacy in another (transnational surrogacy). Many countries have special frameworks for dealing with the ethical and social issues around fertility treatment.
Society and culture:
One of the best known is the HFEA, the UK's regulator for fertility treatment and embryo research. It was set up on 1 August 1991 following a detailed commission of enquiry led by Mary Warnock in the 1980s. A similar model to the HFEA has been adopted by the rest of the countries in the European Union; each country has its own body or bodies responsible for the inspection and licensing of fertility treatment under the EU Tissues and Cells Directive. Regulatory bodies are also found in Canada and in the state of Victoria in Australia.
**Allen Brain Atlas**
Allen Brain Atlas:
The Allen Mouse and Human Brain Atlases are projects within the Allen Institute for Brain Science which seek to combine genomics with neuroanatomy by creating gene expression maps for the mouse and human brain. They were initiated in September 2003 with a $100 million donation from Paul G. Allen and the first atlas went public in September 2006. As of May 2012, seven brain atlases have been published: Mouse Brain Atlas, Human Brain Atlas, Developing Mouse Brain Atlas, Developing Human Brain Atlas, Mouse Connectivity Atlas, Non-Human Primate Atlas, and Mouse Spinal Cord Atlas. There are also three related projects with data banks: Glioblastoma, Mouse Diversity, and Sleep. It is the hope of the Allen Institute that their findings will help advance various fields of science, especially those surrounding the understanding of neurobiological diseases. The atlases are free and available for public use online.
History:
In 2001, Paul Allen gathered a group of scientists, including James Watson and Steven Pinker, to discuss the future of neuroscience and what could be done to enhance neuroscience research (Jones 2009). During these meetings David Anderson from the California Institute of Technology proposed the idea that a three-dimensional atlas of gene expression in the mouse brain would be of great use to the neuroscience community. The project was set in motion in 2003 with a 100 million dollar donation by Allen through the Allen Institute for Brain Science. The project used a technique for mapping gene expression developed by Gregor Eichele and colleagues at the Max Planck Institute for Biophysical Chemistry in Goettingen, Germany. The technique uses colorimetric in situ hybridization to map gene expression. The team set a three-year goal for completing the atlas and making it available to the public.
History:
An initial release of the first atlas, the mouse brain atlas, occurred in December 2004. Subsequently, more data for this atlas was released in stages. The final genome-wide data set was released in September 2006. However, the final release of the atlas was not the end of the project; the Atlas is still being improved upon. Also, other projects including the human brain atlas, developing mouse brain, developing human brain, mouse connectivity, non-human primate atlas, and the mouse spinal cord atlas are being developed through the Allen Institute for Brain Science in conjunction with the Allen Mouse Brain Atlas.
Goals for the project:
The overarching goal and motto for all Allen Institute projects is "fueling discovery". The project strives to fulfill this goal and advance science in a few ways. First, they create brain atlases to better understand the connections between genes and brain functioning. They aim to advance research and knowledge about neurobiological conditions such as Parkinson's, Alzheimer's, and autism with their mapping of gene expression throughout the brain. The Brain Atlas projects also follow the Allen Institute's motto with their open release of data and findings. This policy is also related to another goal of the Institute: collaborative and multidisciplinary research. Thus, any scientist from any discipline is able to look at the findings and take them into account while designing their own experiments. Also available to the public is the Brain Explorer application.
Research techniques:
The Allen Institute for Brain Science uses a project-based philosophy for its research. Each brain atlas focuses on its own project, made up of its own team of researchers. To complete an atlas, each research team collects and synthesizes brain scans, medical data, genetic information and psychological data. With this information, they are able to construct the 3-D biochemical architecture of the brain and determine which proteins are expressed in certain parts of the brain. To gather the needed data, scientists at the Allen Institute use various techniques. One technique involves the use of postmortem brains and brain-scanning technology to discover where in the brain genes are turned on and off. Another technique, called in situ hybridization (ISH), is used to view gene expression patterns as ISH images.
Research techniques:
Within the Brain Atlases, these 3-D ISH digital images and graphs reveal, in color, the regions where a given gene is expressed. In the Brain Explorer, any gene can be searched for and selected, with the resulting in situ image displayed in an easily manipulated and explorable fashion. Part of creating this anatomy-centred database of gene expression involves aligning the ISH data for each gene with a three-dimensional coordinate space through registration with a reference atlas created for the project.
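The registration step described above maps each ISH image into the atlas's shared 3-D coordinate space. The sketch below illustrates the core idea with an affine transform applied in homogeneous coordinates; the matrix values are purely illustrative, since real transforms are estimated per specimen:

```python
import numpy as np

# Minimal sketch: an affine registration maps voxel coordinates from an
# ISH section volume into the reference atlas space. The 3x4 matrix here
# is purely illustrative -- real transforms are estimated per specimen.
affine = np.array([
    [1.02, 0.01, 0.00,  25.0],   # rotation/scale/shear + translation (atlas x)
    [0.00, 0.98, 0.03, -10.0],   # (atlas y)
    [0.01, 0.00, 1.01,   5.0],   # (atlas z)
])

def to_atlas(voxel_xyz):
    """Map an (x, y, z) voxel coordinate into reference-atlas coordinates."""
    v = np.append(np.asarray(voxel_xyz, dtype=float), 1.0)  # homogeneous coords
    return affine @ v

print(to_atlas((120, 80, 40)))
```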
Contributions to neuroscience:
The different types of cells in the central nervous system originate from varying gene expression. A map of gene expression in the brain allows researchers to correlate forms and functions. The Allen Brain Atlas lets researchers view the areas of differing expression in the brain, which enables neural connections throughout the brain to be traced. Viewing these pathways through differing gene expression as well as functional imaging techniques permits researchers to draw correlations between gene expression, cell types, and pathway function in relation to behaviors or phenotypes.
Contributions to neuroscience:
Even though the majority of research has been done in mice, 90% of genes in mice have a counterpart in humans. This makes the Atlas particularly useful for modeling neurological diseases. The gene expression patterns in normal individuals provide a standard for comparing and understanding altered phenotypes. Extending information learned from mouse diseases will help improve the understanding of human neurological disorders. The atlas can show which genes and particular areas are affected in neurological disorders; the action of a gene in a disease can be evaluated in conjunction with general expression patterns, and this data could shed light on the role of the particular gene in the disorder.
Brain explorer:
The Allen Brain Atlas website contains a downloadable 3-D interactive Brain Explorer. The explorer is essentially a search engine for locations of gene expression; this is particularly useful in finding regions that express similar genes. Users can delineate networks and pathways with this application by connecting regions that co-express a certain gene. The explorer uses a multicolor scale and contains multiple planes of the brain that let viewers see differences in density and expression level. The images are composites of many averaged samples, so they are useful for comparison with individuals who have abnormally low gene expression.
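As a rough illustration of the co-expression searches the Brain Explorer supports, the following sketch computes pairwise correlations between region expression profiles; the expression matrix is entirely made up, standing in for data that would really come from the atlas:

```python
import numpy as np

# Hypothetical expression matrix: rows are brain regions, columns are genes.
# Real data would come from the atlas; these numbers are illustrative only.
regions = ["cortex", "hippocampus", "cerebellum", "thalamus"]
expr = np.array([
    [5.1, 0.2, 3.3, 1.0],
    [4.8, 0.1, 3.0, 1.2],
    [0.3, 4.0, 0.5, 3.8],
    [0.4, 3.7, 0.6, 3.5],
])

# Pairwise Pearson correlation between region expression profiles:
corr = np.corrcoef(expr)
for i in range(len(regions)):
    for j in range(i + 1, len(regions)):
        if corr[i, j] > 0.9:  # arbitrary co-expression threshold
            print(f"{regions[i]} ~ {regions[j]}: r = {corr[i, j]:.2f}")
```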
Atlases:
Mouse Brain The Allen Mouse Brain Atlas is a comprehensive genome-wide map of the adult mouse brain that reveals where each gene is expressed. The mouse brain atlas was the original project of the Allen Brain Atlas and was finished in 2006. The purpose of the atlas is to aid in the development of neuroscience research. The hope of the project is that it will allow scientists to gain a better understanding of brain diseases and disorders such as autism and depression.
Atlases:
Human Brain The Allen Human Brain Atlas was made public in May 2010. It was the first anatomically and genomically comprehensive three-dimensional human brain map. The atlas was created to enhance research in many neuroscience research fields including neuropharmacology, human brain imaging, human genetics, neuroanatomy, genomics and more. The atlas is also geared toward furthering research into mental health disorders and brain injuries such as Alzheimer's disease, autism, schizophrenia and drug addiction.
Atlases:
Developing Mouse Brain The Allen Developing Mouse Brain Atlas is an atlas which tracks gene expression throughout the development of a C57BL/6 mouse brain. The project began in 2008 and is currently ongoing. The atlas is based on magnetic resonance imaging (MRI). It traces the growth, white matter, connectivity, and development of the C57BL/6 mouse brain from embryonic day 12 to postnatal day 80.
Atlases:
This atlas enhances the ability of neuroscientists to study how pollutants and genetic mutations affect the development of the brain. Thus, the atlas may be used to determine which toxins pose special threats to children and pregnant mothers.
Atlases:
Mouse Brain Connectivity The Allen Mouse Brain Connectivity Atlas was launched in November 2011. Unlike other atlases from the Allen Institute, this atlas focuses on identifying the neural circuitry that governs behavior and brain function, including perception. This map will allow scientists to further understand how the brain works and what causes brain diseases and disorders, such as Parkinson's disease and depression.
Atlases:
Mouse Spinal Cord Unveiled in July 2008, the Allen Mouse Spinal Cord Atlas was the first genome-wide map of the mouse spinal cord ever constructed. The spinal cord atlas is a map of genome wide gene expression in the spinal cord of adult and juvenile C57 black mice. The initial unveiling included data for 2,000 genes and an anatomical reference section. A plan for the future includes expanding the amount of data to about 20,000 genes spanning the full length of the spinal cord.
Atlases:
The aim of the spinal cord atlas is to enhance research in the treatment of spinal cord injury, diseases, and disorders such as Lou Gehrig's disease and spinal muscular atrophy. The project was funded by an array of donors including the Allen Research Institute, Paralyzed Veterans of America Research Foundation, the ALS Association, Wyeth Research, PEMCO Insurance, National Multiple Sclerosis Society, International Spinal Research Trust, and many other organizations, foundations, corporate and private donors.
**Fruit**
Fruit:
In botany, a fruit is the seed-bearing structure in flowering plants that is formed from the ovary after flowering.
Fruit:
Fruits are the means by which flowering plants (also known as angiosperms) disseminate their seeds. Edible fruits in particular have long been propagated by the movements of humans and animals in a symbiotic relationship that serves as a means of seed dispersal for the one group and nutrition for the other; in fact, humans and many other animals have become dependent on fruits as a source of food. Consequently, fruits account for a substantial fraction of the world's agricultural output, and some (such as the apple and the pomegranate) have acquired extensive cultural and symbolic meanings.
Fruit:
In common language usage, fruit normally means the seed-associated fleshy structures (or produce) of plants that typically are sweet or sour and edible in the raw state, such as apples, bananas, grapes, lemons, oranges, and strawberries. In botanical usage, the term fruit also includes many structures that are not commonly called 'fruits' in everyday language, such as nuts, bean pods, corn kernels, tomatoes, and wheat grains.
Botanical vs. culinary:
Many common language terms used for fruit and seeds differ from botanical classifications. For example, in botany, a fruit is a ripened ovary or carpel that contains seeds, e.g., an apple, pomegranate, tomato or a pumpkin. A nut is a type of fruit (and not a seed), and a seed is a ripened ovule. In culinary language, a fruit is the sweet- or sour-tasting produce of a specific plant (e.g., a peach, pear or lemon); nuts are hard, oily, non-sweet plant produce in shells (hazelnut, acorn); and vegetables, so called, typically are savory or non-sweet produce (zucchini, lettuce, broccoli, and tomato), though some may be sweet-tasting (sweet potato). Examples of botanically classified fruits that are typically called vegetables include: cucumber, pumpkin, and squash (all cucurbits); beans, peanuts, and peas (all legumes); corn, eggplant, bell pepper (or sweet pepper), and tomato. The spices chili pepper and allspice are fruits, botanically speaking. In contrast, rhubarb is often called a fruit when used in making pies, but the edible produce of rhubarb is actually the leaf stalk or petiole of the plant. Edible gymnosperm seeds are often given fruit names, e.g., ginkgo nuts and pine nuts.
Botanical vs. culinary:
Botanically, a cereal grain, such as corn, rice, or wheat is a kind of fruit (termed a caryopsis). However, the fruit wall is thin and fused to the seed coat, so almost all the edible grain-fruit is actually a seed.
Structure:
The outer layer, often edible, of most fruits is called the pericarp. Typically formed from the ovary, it surrounds the seeds; in some species, however, other structural tissues contribute to or form the edible portion. The pericarp may be described in three layers from outer to inner, i.e., the epicarp, mesocarp and endocarp.
Fruit that bears a prominent pointed terminal projection is said to be beaked.
Development:
A fruit results from the fertilizing and maturing of one or more flowers. The gynoecium, which contains the stigma-style-ovary system, is centered in the flower-head, and it forms all or part of the fruit. Inside the ovary(ies) are one or more ovules. Here begins a complex sequence called double fertilization: a female gametophyte produces an egg cell for the purpose of fertilization. (A female gametophyte is called a megagametophyte, and also called the embryo sac.) After double fertilization, the ovules will become seeds.
Development:
Ovules are fertilized in a process that starts with pollination, which is the movement of pollen from the stamens to the stigma-style-ovary system within the flower-head. After pollination, a pollen tube grows from the (deposited) pollen through the stigma down the style into the ovary to the ovule. Two sperm are transferred from the pollen to a megagametophyte. Within the megagametophyte, one sperm unites with the egg, forming a zygote, while the second sperm enters the central cell forming the endosperm mother cell, which completes the double fertilization process. Later, the zygote will give rise to the embryo of the seed, and the endosperm mother cell will give rise to endosperm, a nutritive tissue used by the embryo.
Development:
As the ovules develop into seeds, the ovary begins to ripen and the ovary wall, the pericarp, may become fleshy (as in berries or drupes), or it may form a hard outer covering (as in nuts). In some multi-seeded fruits, the extent to which a fleshy structure develops is proportional to the number of fertilized ovules. The pericarp typically is differentiated into two or three distinct layers; these are called the exocarp (outer layer, also called epicarp), mesocarp (middle layer), and endocarp (inner layer).
Development:
In some fruits, the sepals, petals, stamens and/or the style of the flower fall away as the fleshy fruit ripens. However, for simple fruits derived from an inferior ovary – i.e., one that lies below the attachment of other floral parts – there are parts (including petals, sepals, and stamens) that fuse with the ovary and ripen with it. For such a case, when floral parts other than the ovary form a significant part of the fruit that develops, it is called an accessory fruit. Examples of accessory fruits include apple, rose hip, strawberry, and pineapple.
Development:
Because several parts of the flower besides the ovary may contribute to the structure of a fruit, it is important to study flower structure to understand how a particular fruit forms. There are three general modes of fruit development: Apocarpous fruits develop from a single flower (while having one or more separate, unfused, carpels); they are the simple fruits.
Syncarpous fruits develop from a single gynoecium (having two or more carpels fused together).
Multiple fruits form from many flowers – i.e., an inflorescence of flowers.
Classification of fruits:
Consistent with the three modes of fruit development, plant scientists have classified fruits into three main groups: simple fruits, aggregate fruits, and multiple (or composite) fruits. The groupings reflect how the ovary and other flower organs are arranged and how the fruits develop, but they are not evolutionarily relevant as diverse plant taxa may be in the same group.
While the section of a fungus that produces spores is called a fruiting body, fungi are members of the fungi kingdom and not of the plant kingdom.
Classification of fruits:
Simple fruits Simple fruits are the result of the ripening-to-fruit of a simple or compound ovary in a single flower with a single pistil. In contrast, a single flower with numerous pistils typically produces an aggregate fruit; and the merging of several flowers, or a 'multiple' of flowers, results in a 'multiple' fruit. A simple fruit is further classified as either dry or fleshy.
Classification of fruits:
To distribute their seeds, dry fruits may split open and discharge their seeds to the winds, which is called dehiscence. Or the distribution process may rely upon the decay and degradation of the fruit to expose the seeds; or it may rely upon the eating of fruit and excreting of seeds by frugivores – both are called indehiscence. Fleshy fruits do not split open, but they also are indehiscent and they may also rely on frugivores for distribution of their seeds. Typically, the entire outer layer of the ovary wall ripens into a potentially edible pericarp.
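The classification logic of the preceding paragraphs (multiple vs. aggregate vs. simple, then dry vs. fleshy, then dehiscent vs. indehiscent) can be summarized as a small decision chain. The sketch below is a simplification; real botanical classification has many more edge cases:

```python
def classify_fruit(n_flowers: int, n_pistils: int,
                   fleshy: bool, dehiscent: bool = False) -> str:
    """Encode the classification rules above as a decision chain.
    A simplification: real botanical classification has more nuance."""
    if n_flowers > 1:
        return "multiple fruit"            # e.g. pineapple, fig, mulberry
    if n_pistils > 1:
        return "aggregate fruit"           # e.g. raspberry, strawberry
    if fleshy:
        return "simple fleshy fruit"       # e.g. berry, drupe, pome
    return ("simple dry dehiscent fruit"   # e.g. legume, capsule
            if dehiscent else
            "simple dry indehiscent fruit")  # e.g. achene, nut, caryopsis

print(classify_fruit(n_flowers=1, n_pistils=1, fleshy=True))    # grape
print(classify_fruit(n_flowers=1, n_pistils=30, fleshy=True))   # raspberry
print(classify_fruit(1, 1, fleshy=False, dehiscent=True))       # pea pod
```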
Classification of fruits:
Types of dry simple fruits (with examples) include: Achene – most commonly seen in aggregate fruits (e.g., strawberry; see below).
Capsule – (Brazil nut: botanically, it is not a nut).
Caryopsis – (cereal grains, including wheat, rice, oats, barley).
Cypsela – an achene-like fruit derived from the individual florets in a capitulum: (dandelion).
Fibrous drupe – (coconut, walnut: botanically, neither is a true nut).
Follicle – follicles are formed from a single carpel, and opens by one suture: (milkweed); also commonly seen in aggregate fruits: (magnolia, peony).
Legume – (bean, pea, peanut: botanically, the peanut is the seed of a legume, not a nut).
Loment – a type of indehiscent legume: (sweet vetch or wild potato).
Nut – (beechnut, hazelnut, acorn (of the oak): botanically, these are true nuts).
Samara – (ash, elm, maple key).
Schizocarp, see below – (carrot seed).
Silique – (radish seed).
Silicle – (shepherd's purse).
Utricle – (beet, Rumex).
Fruits in which part or all of the pericarp (fruit wall) is fleshy at maturity are termed fleshy simple fruits.
Types of fleshy simple fruits (with examples) include: Berry – the berry is the most common type of fleshy fruit. The entire outer layer of the ovary wall ripens into a potentially edible "pericarp" (see below).
Stone fruit or drupe – the definitive characteristic of a drupe is the hard, "lignified" stone (sometimes called the "pit"). It is derived from the ovary wall of the flower: apricot, cherry, olive, peach, plum, mango.
Pome – the pome fruits (apples, pears, rosehips, saskatoon berry, etc.) are syncarpous (fused) fleshy simple fruits, developing from a half-inferior ovary. Pomes are of the family Rosaceae.
Classification of fruits:
Berries Berries are a type of simple fleshy fruit that issue from a single ovary. (The ovary itself may be compound, with several carpels.) The botanical term true berry includes grapes, currants, cucumbers, eggplants (aubergines), tomatoes, chili peppers, and bananas, but excludes certain fruits that are called "-berry" by culinary custom or by common usage of the term – such as strawberries and raspberries. Berries may be formed from one or more carpels (i.e., from the simple or compound ovary) from the same, single flower. Seeds typically are embedded in the fleshy interior of the ovary.
Classification of fruits:
Examples include: Tomato – in culinary terms, the tomato is regarded as a vegetable, but it is botanically classified as a fruit and a berry.
Banana – the fruit has been described as a "leathery berry". In cultivated varieties, the seeds are diminished nearly to non-existence.
Pepo – berries with skin that is hardened: cucurbits, including gourds, squash, melons.
Hesperidium – berries with a rind and a juicy interior: most citrus fruit.
Classification of fruits:
Cranberry, gooseberry, redcurrant, grape. The strawberry, regardless of its appearance, is classified as a dry, not a fleshy, fruit. Botanically, it is not a berry; it is an aggregate-accessory fruit, the latter term meaning that the fleshy part is derived not from the plant's ovaries but from the receptacle that holds the ovaries. Numerous dry achenes are attached to the outside of the fruit flesh; they appear to be seeds, but each is actually an ovary of a flower, with a seed inside. Schizocarps are dry fruits, though some appear to be fleshy. They originate from syncarpous ovaries but do not actually dehisce; rather, they split into segments with one or more seeds. They include a number of different forms from a wide range of families, including carrot, parsnip, parsley, and cumin.
Classification of fruits:
Aggregate fruits An aggregate fruit is also called an aggregation, or etaerio; it develops from a single flower that presents numerous simple pistils. Each pistil contains one carpel and develops into a fruitlet. The ultimate (fruiting) development of the aggregation of pistils is called an aggregate fruit, etaerio fruit, or simply an etaerio.
Different types of aggregate fruits can produce different etaerios, such as achenes, drupelets, follicles, and berries.
For example, Ranunculaceae species, including Clematis and Ranunculus, produce an etaerio of achenes; Rubus species, including raspberry, an etaerio of drupelets; Calotropis species, an etaerio of follicles; and Annona species, an etaerio of berries. Some other broadly recognized species and their etaerios (or aggregations) are: Teasel; fruit is an aggregation of cypselas.
Tuliptree; fruit is an aggregation of samaras.
Magnolia and peony; fruit is an aggregation of follicles.
American sweet gum; fruit is an aggregation of capsules.
Classification of fruits:
Sycamore; fruit is an aggregation of achenes. The pistils of the raspberry are called drupelets because each pistil is like a small drupe attached to the receptacle. In some bramble fruits, such as blackberry, the receptacle, an accessory part, elongates and then develops as part of the fruit, making the blackberry an aggregate-accessory fruit. The strawberry is also an aggregate-accessory fruit, whose seeds are contained in the achenes. Notably, in all these examples the fruit develops from a single flower with numerous pistils.
Classification of fruits:
Multiple fruits A multiple fruit is formed from a cluster of flowers (a 'multiple' of flowers), also called an inflorescence. Each (smallish) flower produces a single fruitlet; as all develop, they merge into one mass of fruit. Examples include pineapple, fig, mulberry, Osage orange, and breadfruit. An inflorescence (a cluster) of white flowers, called a head, is produced first. After fertilization, each flower in the cluster develops into a drupe; as the drupes expand, they develop as connate organs, merging into a multiple fleshy fruit called a syncarp.
Classification of fruits:
Progressive stages of multiple flowering and fruit development can be observed on a single branch of the Indian mulberry, or noni. During the sequence of development, a progression of second, third, and further inflorescences is initiated in turn at the head of the branch or stem.
Classification of fruits:
Accessory fruit forms Fruits may incorporate tissues derived from other floral parts besides the ovary, including the receptacle, hypanthium, petals, or sepals. Accessory fruits occur in all three classes of fruit development – simple, aggregate, and multiple. Accessory fruits are frequently designated by the hyphenated term showing both characters. For example, a pineapple is a multiple-accessory fruit, a blackberry is an aggregate-accessory fruit, and an apple is a simple-accessory fruit.
Seedless fruits:
Seedlessness is an important feature of some fruits of commerce. Commercial cultivars of bananas and pineapples are examples of seedless fruits. Some cultivars of citrus fruits (especially grapefruit, mandarin oranges, navel oranges), satsumas, table grapes, and watermelons are valued for their seedlessness. In some species, seedlessness is the result of parthenocarpy, where fruits set without fertilization. Parthenocarpic fruit-set may (or may not) require pollination, but most seedless citrus fruits require a stimulus from pollination to produce fruit. Seedless bananas and grapes are triploids, and seedlessness results from the abortion of the embryonic plant that is produced by fertilization, a phenomenon known as stenospermocarpy, which requires normal pollination and fertilization.
Seed dissemination:
Variations in fruit structures largely depend on the modes of dispersal applied to their seeds. Dispersal is achieved by wind or water, by explosive dehiscence, and by interactions with animals. Some fruits present their outer skins or shells coated with spikes or hooked burrs; these evolved either to deter would-be foragers from feeding on them or to attach themselves to the hair, feathers, legs, or clothing of animals, thereby using them as dispersal agents. These plants are termed zoochorous; common examples include cocklebur, unicorn plant, and beggarticks (or Spanish needle). Through coevolution, the fleshy produce of fruits typically appeals to hungry animals, such that the seeds contained within are taken in, carried away, and later deposited (i.e., defecated) at a distance from the parent plant. Likewise, the nutritious, oily kernels of nuts typically motivate birds and squirrels to hoard them, burying them in soil to retrieve later during winter scarcity; thereby, uneaten seeds are sown effectively under natural conditions to germinate and grow a new plant some distance away from the parent. Other fruits have evolved flattened and elongated wings or helicopter-like blades, e.g., elm, maple, and tuliptree; this mechanism increases dispersal distance away from the parent via wind. Other wind-dispersed fruits have tiny "parachutes", e.g., dandelion, milkweed, salsify. Coconut fruits can float thousands of miles in the ocean, thereby spreading their seeds; other fruits that can disperse via water are nipa palm and screw pine. Some fruits have evolved propulsive mechanisms that fling seeds substantial distances, perhaps up to 100 m (330 ft) in the case of the sandbox tree, via explosive dehiscence or other such mechanisms (see impatiens and squirting cucumber).
Food uses:
A cornucopia of fruits – fleshy (simple) fruits from apples to berries to watermelon; dry (simple) fruits including beans, rice and coconuts; aggregate fruits including strawberries, raspberries, blackberries, pawpaw; and multiple fruits such as pineapple, fig, mulberries – are commercially valuable as human food. They are eaten both fresh and as jams, marmalade and other fruit preserves. They are used extensively in manufactured and processed foods (cakes, cookies, baked goods, flavorings, ice cream, yogurt, canned vegetables, frozen vegetables and meals) and beverages such as fruit juices and alcoholic beverages (brandy, fruit beer, wine). Spices like vanilla, black pepper, paprika, and allspice are derived from berries. Olive fruit is pressed for olive oil, and similar processing is applied to other oil-bearing fruits and vegetables. Some fruits are available all year round, while others (such as blackberries and apricots in the UK) are subject to seasonal availability. Fruits are also used for socializing and gift-giving in the form of fruit baskets and fruit bouquets. Typically, many botanical fruits – "vegetables" in culinary parlance – (including tomato, green beans, leaf greens, bell pepper, cucumber, eggplant, okra, pumpkin, squash, zucchini) are bought and sold daily in fresh produce markets and greengroceries and carried back to kitchens, at home or in restaurants, for the preparation of meals.
Food uses:
Storage All fruits benefit from proper post-harvest care, and in many fruits, the plant hormone ethylene causes ripening. Therefore, maintaining most fruits in an efficient cold chain is optimal for post-harvest storage, with the aim of extending and ensuring shelf life.
Food uses:
Nutritional value Various culinary fruits provide significant amounts of fiber and water, and many are generally high in vitamin C. An overview of numerous studies showed that fruits (e.g., whole apples or whole oranges) are satisfying (filling) simply by being eaten and chewed. The dietary fiber consumed in eating fruit promotes satiety, and may help to control body weight and aid reduction of blood cholesterol, a risk factor for cardiovascular diseases. Fruit consumption is under preliminary research for its potential to improve nutrition and affect chronic diseases. Regular consumption of fruit is generally associated with reduced risks of several diseases and functional declines associated with aging.
Food uses:
Food safety For food safety, the CDC recommends proper fruit handling and preparation to reduce the risk of food contamination and foodborne illness. Fresh fruits and vegetables should be carefully selected; at the store, they should not be damaged or bruised; and precut pieces should be refrigerated or surrounded by ice.
All fruits and vegetables should be rinsed before eating. This recommendation also applies to produce with rinds or skins that are not eaten. It should be done just before preparing or eating to avoid premature spoilage.
Fruits and vegetables should be kept separate from raw foods like meat, poultry, and seafood, as well as from utensils that have come in contact with raw foods. Fruits and vegetables that are not going to be cooked should be thrown away if they have touched raw meat, poultry, seafood, or eggs.
All cut, peeled, or cooked fruits and vegetables should be refrigerated within two hours. After a certain time, harmful bacteria may grow on them and increase the risk of foodborne illness.
Allergies Fruit allergies make up about 10 percent of all food-related allergies.
Nonfood uses:
Because fruits have been such a major part of the human diet, various cultures have developed many different uses for fruits they do not depend on for food. For example: Bayberry fruits provide a wax often used to make candles; Many dry fruits are used as decorations or in dried flower arrangements (e.g., annual honesty, cotoneaster, lotus, milkweed, unicorn plant, and wheat). Ornamental trees and shrubs are often cultivated for their colorful fruits, including beautyberry, cotoneaster, holly, pyracantha, skimmia, and viburnum.
Nonfood uses:
Fruits of opium poppy are the source of opium, which contains the drugs codeine and morphine, as well as thebaine, from which the drug oxycodone is synthesized.
Osage orange fruits are used to repel cockroaches.
Many fruits provide natural dyes (e.g., cherry, mulberry, sumac, and walnut).
Dried gourds are used as bird houses, cups, decorations, dishes, musical instruments, and water jugs.
Pumpkins are carved into Jack-o'-lanterns for Halloween.
The fibrous core of the mature and dry Luffa fruit is used as a sponge.
The spiny fruit of burdock or cocklebur inspired the invention of Velcro.
Coir fiber from coconut shells is used for brushes, doormats, floor tiles, insulation, mattresses, sacking, and as a growing medium for container plants. The shell of the coconut fruit is used to make bird houses, bowls, cups, musical instruments, and souvenir heads.
The hard and colorful grain fruits of Job's tears are used as decorative beads for jewelry, garments, and ritual objects.
Fruit is often a subject of still life paintings.
**Syringofibroadenoma**
Syringofibroadenoma:
Syringofibroadenoma is a cutaneous condition characterized by a hyperkeratotic nodule or plaque involving the extremities. It is considered to be of eccrine origin.
**Heartbeat star**
Heartbeat star:
Heartbeat stars are pulsating variable binary star systems in eccentric orbits, with vibrations caused by tidal forces. The name "heartbeat" comes from the resemblance of the star's light curve, its brightness mapped over time, to the trace of a heartbeat on an electrocardiogram. Many heartbeat stars have been discovered with the Kepler Space Telescope.
Orbital information:
Heartbeat stars are binary star systems in which each star travels in a highly elliptical orbit around the common center of mass, so the distance between the two stars varies drastically as they orbit each other. Heartbeat stars can get as close as a few stellar radii to each other and as far as 100 times that distance during one orbit. As the stars swing close to each other, mutual gravity stretches each into a slightly ellipsoidal, non-spherical shape, changing its apparent light output. At the closest point in the orbit, the tidal forces cause the shapes of the stars to fluctuate rapidly, which is one of the reasons their observed brightness is so variable.
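The link between the eccentric orbit and the "heartbeat" light curve can be illustrated with a toy model: solve Kepler's equation for the separation over time, then let the brightness bump scale as the cube of the inverse separation, the rough scaling of ellipsoidal tidal distortion. The period matches the KOI-54 system discussed below; the eccentricity and amplitude are illustrative guesses, not fitted values:

```python
import math

def separation(t, period, a, e):
    """Orbital separation at time t (periastron at t = 0). Solves
    Kepler's equation E = M + e*sin(E) by fixed-point iteration,
    which converges for any eccentricity e < 1."""
    M = 2 * math.pi * (t % period) / period   # mean anomaly
    E = M
    for _ in range(100):
        E = M + e * math.sin(E)
    return a * (1 - e * math.cos(E))

# Toy light curve: the ellipsoidal tidal distortion scales roughly as
# (R/d)^3, so brightness spikes near periastron. Units are arbitrary.
period, a, e, amp = 41.8, 1.0, 0.8, 0.01      # KOI-54-like 41.8 d period
for t in range(0, 42, 3):
    d = separation(t, period, a, e)
    flux = 1.0 + amp * (a * (1 - e) / d) ** 3  # normalized to periastron
    print(f"day {t:2d}: separation {d:.2f}, relative flux {flux:.4f}")
```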
Discoveries:
Heartbeat stars were studied for the first time on the basis of OGLE project observations. The Kepler Space Telescope, with its long monitoring of the brightness of hundreds of thousands of stars, enabled the discovery of many heartbeat stars. One of the first binary systems discovered to show such elliptical orbits, KOI-54, has been shown to increase in brightness every 41.8 days. A subsequent study in 2012 characterized 17 additional objects from the Kepler data and united them as a class of binary stars. A study which measured the rotation rate of star spots on the surface of heartbeat stars showed that most heartbeat stars rotate more slowly than expected. A study which measured the orbits of 19 heartbeat star systems found that the surveyed heartbeat stars tend to be both bigger and hotter than the Sun. The star HD 74423, discovered using NASA's Transiting Exoplanet Survey Satellite, was found to be unusually teardrop-shaped, which causes the star to pulsate on only one side, the first known heartbeat star to do so.
**Mortality forecasting**
Mortality forecasting:
Mortality forecasting refers to the art and science of determining likely future mortality rates. It is especially important in rich countries with a high proportion of aged people, since aged populations are expensive in terms of pensions (both public and private). It is a major topic in Ageing studies.
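The workhorse model in this field is the Lee-Carter model, which writes the log death rate as log m(x,t) = a(x) + b(x)k(t) and forecasts the time index k(t) as a random walk with drift. Below is a minimal sketch of the usual SVD-based fit on synthetic data, not a production implementation:

```python
import numpy as np

# Minimal Lee-Carter sketch: log m(x,t) = a(x) + b(x)*k(t) + noise.
# The mortality matrix below is synthetic, for illustration only.
rng = np.random.default_rng(0)
ages, years = 10, 30
log_m = (np.linspace(-6, -2, ages)[:, None]     # a(x): rises with age
         - 0.02 * np.arange(years)[None, :]     # secular improvement
         + rng.normal(0, 0.02, (ages, years)))

a = log_m.mean(axis=1)                          # age profile a(x)
U, s, Vt = np.linalg.svd(log_m - a[:, None])
b, k = U[:, 0], s[0] * Vt[0]                    # first SVD component
b, k = b / b.sum(), k * b.sum()                 # usual normalization

drift = (k[-1] - k[0]) / (years - 1)            # random walk with drift
k_next = k[-1] + drift                          # one-step-ahead forecast
forecast_log_m = a + b * k_next                 # forecast log death rates
print(forecast_log_m.round(2))
```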
**Karl Landsteiner Memorial Award**
Karl Landsteiner Memorial Award:
The Karl Landsteiner Memorial Award is a scientific award given by the American Association of Blood Banks (AABB) to scientists with "an international reputation in transfusion medicine or cellular therapies" "whose original research resulted in an important contribution to the body of scientific knowledge". Recipients give a lecture at the AABB Annual Meeting and receive a $7,500 honorarium. The prize was initiated in 1954 to honor Karl Landsteiner, whose research laid the foundation for modern blood transfusion therapy.
Recipients:
1954 Reuben Ottenberg
1955 Richard Lewisohn
1956 Philip Levine, Alexander Solomon Wiener
1957 Ruth Sanger, Robert Russell Race
1958 Oswald Hope Robertson, Francis Peyton Rous, J. R. Turner
1959 Ernest Witebsky
1960 Patrick L. Mollison
1961 Robert R. A. Coombs
1962 William C. Boyd
1963 Fred H. Allen Jr., Louis K. Diamond
1964 J. J. van Loghem
1965 Ruggero Ceppellini
1966 Elvin A. Kabat
1967 Walter Morgan, Winifred Watkins
1968 Rodney R. Porter
1969 Vincent J. Freda, John G. Gorman, William Pollack
1970 Jean Dausset
1971 Bruce Chown, Marion Lewis
1972 Richard E. Rosenfield
1973 Arthur E. Mourant
1974 Manfred M. Mayer, Hans J. Müller-Eberhard
1975 Baruch S. Blumberg, Alfred M. Prince
1976 Marie Cutbush Crookston, Eloise R. Giblett
1977 Rose Payne, Jon van Rood
1978 Fred Stratton
1979 Nevin C. Hughes-Jones, Serafeim P. Masouredis
1980 Donald M. Marcus, James M. Stavely
1981 James F. Danielli, S. Jonathan Singer
1982 Georges J. F. Köhler, César Milstein
1983 Vincent T. Marchesi
1984 Oliver Smithies
1985 Saul Krugman
1986 Claes F. Högman, Grant R. Bartlett
1987 E. Donnall Thomas
1988 Charles P. Salmon
1989 George W. Bird
1990 Robert Gallo, Luc Montagnier
1991 Paul I. Terasaki
1992 Harvey J. Alter, Daniel W. Bradley, Qui-Lim Choo, Michael Houghton, George Kuo, Lacy Overby
1993 C. Paul Engelfriet
1994 Kenneth Brinkhous, Harold Roberts, Robert Wagner, Robert Langdell
1995 W. Laurence Marsh
1996 Eugene Goldwasser
1997 Wendell F. Rosse
1998 Richard H. Aster, Scott Murphy, Sherrill J. Slichter
1999 Kary B. Mullis
2000 Michael E. DeBakey
2001 John Bowman
2002 Hal E. Broxmeyer
2003 Victor A. McKusick
2004 Tibor Greenwalt
2005 Peter Agre
2006 James D. Watson
2007 Peter Issitt
2008 Ernest Beutler
2009 Curt I. Civin
2010 Steven A. Rosenberg
2011 David Weatherall, Yuet Wai Kan
2012 Kenneth Kaushansky
2013 Barry S. Coller
2014 Carl June
2015 Nancy C. Andrews
2016 Stuart Orkin
2017 Irving Weissman
2018 David A. Williams
2019 David Anstee, Jean-Pierre Cartron, Colvin Redman, Fumiichiro Yamamoto
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Purple urine bag syndrome**
Purple urine bag syndrome:
Purple urine bag syndrome (PUBS) is a medical syndrome where purple discoloration of urine occurs in people with urinary catheters and co-existent urinary tract infection. Bacteria in the urine produce the enzyme indoxyl sulfatase. This converts indoxyl sulfate in the urine into the red and blue colored compounds indirubin and indigo. The most commonly implicated bacteria are Providencia stuartii, Providencia rettgeri, Klebsiella pneumoniae, Proteus mirabilis, Escherichia coli, Morganella morganii, and Pseudomonas aeruginosa.
Signs and symptoms:
People with purple urine bag syndrome usually do not complain of any symptoms. Purple discoloration of urine bag is often the only finding, frequently noted by caregivers. It is usually considered a benign condition, although in the setting of recurrent or chronic urinary tract infection, it may be associated with drug-resistant bacteria.
Pathophysiology:
Tryptophan in the diet is metabolized by bacteria in the gastrointestinal tract to produce indole. Indole is absorbed into the blood by the intestine and passes to the liver. There, indole is converted to indoxyl sulfate, which is then excreted in the urine. In purple urine bag syndrome, bacteria that colonize the urinary catheter convert indoxyl sulfate to the colored compounds indirubin and indigo.
Diagnosis:
Purple urine bag syndrome is a clinical diagnosis, the cause of which may be investigated using a variety of laboratory tests or imaging.
Treatment:
Antibiotics such as ciprofloxacin should be administered and the catheter should be changed. If constipation is present, this should also be treated.
Epidemiology:
Purple urine bag syndrome is more common in female nursing home residents. Other risk factors include alkaline urine, constipation, and polyvinyl chloride catheter use.
History:
The syndrome was first described by Barlow and Dickson in 1978.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Spezzatino**
Spezzatino:
Spezzatino is an Italian stew, made from low-grade cuts of veal, beef, lamb or pork. There are many regional variants. For example, Tuscany prepares a famous variant made with beef, carrots, celery and onions; in Umbria, spezzatini of mutton (montone) and roe deer are traditional; in Nuoro, wild boar spezzatino is traditional; and in Friuli Venezia Giulia, spezzatino is served with aromatic herbs and dry white wine.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Sheet metal**
Sheet metal:
Sheet metal is metal formed into thin, flat pieces, usually by an industrial process. Sheet metal is one of the fundamental forms used in metalworking, and it can be cut and bent into a variety of shapes. Thicknesses can vary significantly; extremely thin sheets are considered foil or leaf, and pieces thicker than 6 mm (0.25 in) are considered plate, such as plate steel, a class of structural steel.
Sheet metal:
Sheet metal is available in flat pieces or coiled strips. The coils are formed by running a continuous sheet of metal through a roll slitter.
Sheet metal:
In most of the world, sheet metal thickness is consistently specified in millimeters. In the U.S., the thickness of sheet metal is commonly specified by a traditional, non-linear measure known as its gauge. The larger the gauge number, the thinner the metal. Commonly used steel sheet metal ranges from 30 gauge to about 7 gauge. Gauge differs between ferrous (iron-based) metals and nonferrous metals such as aluminum or copper. Copper thickness, for example, is measured in ounces, representing the weight of copper contained in an area of one square foot. Parts manufactured from sheet metal must maintain a uniform thickness for ideal results. There are many different metals that can be made into sheet metal, such as aluminium, brass, copper, steel, tin, nickel and titanium. For decorative uses, some important sheet metals include silver, gold, and platinum (platinum sheet metal is also utilized as a catalyst).
Sheet metal:
Sheet metal is used in automobile and truck (lorry) bodies, major appliances, airplane fuselages and wings, tinplate for tin cans, roofing for buildings (architecture), and many other applications. Sheet metal of iron and other materials with high magnetic permeability, also known as laminated steel cores, has applications in transformers and electric machines. Historically, an important use of sheet metal was in plate armor worn by cavalry, and sheet metal continues to have many decorative uses, including in horse tack. Sheet metal workers are also known as "tin bashers" (or "tin knockers"), a name derived from the hammering of panel seams when installing tin roofs.
History:
Hand-hammered metal sheets have been used since ancient times for architectural purposes. Water-powered rolling mills replaced the manual process in the late 17th century. The process of flattening metal sheets required large rotating iron cylinders which pressed metal pieces into sheets. The metals suited for this were lead, copper, zinc, iron and later steel. Tin was often used to coat iron and steel sheets to prevent them from rusting. This tin-coated sheet metal was called "tinplate." Sheet metals appeared in the United States in the 1870s, being used for shingle roofing, stamped ornamental ceilings, and exterior façades. Sheet metal ceilings were only popularly known as "tin ceilings" later, as manufacturers of the period did not use the term. The popularity of both shingles and ceilings encouraged widespread production. With further advances in steel sheet metal production in the 1890s, the promise of being cheap, durable, easy to install, lightweight and fireproof gave the middle class a significant appetite for sheet metal products. It was not until the 1930s and WWII that metals became scarce and the sheet metal industry began to collapse. However, some American companies, such as the W.F. Norman Corporation, were able to stay in business by making other products until historic preservation projects aided the revival of ornamental sheet metal.
Materials:
Stainless steel: Grade 304 is the most common of the three stainless grades used in sheet metal. It offers good corrosion resistance while maintaining formability and weldability. Available finishes are #2B, #3, and #4. Grade 303 is not available in sheet form. Grade 316 possesses more corrosion resistance and strength at elevated temperatures than 304. It is commonly used for pumps, valves, chemical equipment, and marine applications. Available finishes are #2B, #3, and #4. Grade 410 is a heat-treatable stainless steel, but it has lower corrosion resistance than the other grades. It is commonly used in cutlery. The only available finish is dull. Grade 430 is a popular low-cost alternative to the 300-series grades, used when high corrosion resistance is not a primary criterion. It is a common grade for appliance products, often with a brushed finish.
Materials:
Aluminum: Aluminum, or aluminium in British English, is also a popular metal used in sheet metal due to its flexibility, wide range of options, cost effectiveness, and other properties. The four most common aluminium grades available as sheet metal are 1100-H14, 3003-H14, 5052-H32, and 6061-T6. Grade 1100-H14 is commercially pure aluminium, highly chemical- and weather-resistant. It is ductile enough for deep drawing and weldable, but has low strength. It is commonly used in chemical processing equipment, light reflectors, and jewelry. Grade 3003-H14 is stronger than 1100, while maintaining the same formability and low cost. It is corrosion-resistant and weldable. It is often used in stampings, spun and drawn parts, mail boxes, cabinets, tanks, and fan blades. Grade 5052-H32 is much stronger than 3003 while still maintaining good formability. It maintains high corrosion resistance and weldability. Common applications include electronic chassis, tanks, and pressure vessels. Grade 6061-T6 is a common heat-treated structural aluminium alloy. It is weldable, corrosion resistant, and stronger than 5052, but not as formable. It loses some of its strength when welded. It is used in modern aircraft structures.
Materials:
Brass: Brass is an alloy of copper that is widely used as a sheet metal. It has greater strength, corrosion resistance and formability than copper, while retaining copper's conductivity.
Materials:
In sheet hydroforming, variation in incoming sheet coil properties is a common problem for the forming process, especially with materials for automotive applications. Even though an incoming sheet coil may meet tensile test specifications, a high rejection rate is often observed in production due to inconsistent material behavior. Thus there is a strong need for a discriminating method for testing the formability of incoming sheet material. The hydraulic sheet bulge test emulates the biaxial deformation conditions commonly seen in production operations.
Materials:
The bulge test is used to establish forming limit curves for materials such as aluminium, mild steel and brass. Theoretical analysis derives governing equations for the equivalent stress and equivalent strain, assuming the bulge to be spherical and applying Tresca's yield criterion with the associated flow rule. For experimental work, circular grid analysis is one of the most effective methods. A commonly quoted form of these relations is sketched below.
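Under the spherical-bulge assumption, membrane theory gives the following commonly quoted relations (a reconstruction, not taken verbatim from this text), where $p$ is the bulging pressure, $\rho$ the radius of curvature of the dome, $t$ the current thickness at the pole and $t_0$ the initial thickness:

$$\bar{\sigma} = \frac{p\,\rho}{2t}, \qquad \bar{\varepsilon} = \ln\frac{t_0}{t}.$$

With Tresca's criterion and the associated flow rule, the balanced biaxial membrane stress at the pole serves directly as the equivalent stress, and the magnitude of the thickness strain as the equivalent strain.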
Gauge:
Use of gauge numbers to designate sheet metal thickness is discouraged by numerous international standards organizations. For example, ASTM states in specification ASTM A480-10a: "The use of gauge number is discouraged as being an archaic term of limited usefulness not having general agreement on meaning." Manufacturers' Standard Gauge for Sheet Steel is based on an average density of 41.82 lb per square foot per inch of thickness, equivalent to 501.84 pounds per cubic foot (8,038.7 kg/m3). Gauge is defined differently for ferrous (iron-based) and non-ferrous metals (e.g. aluminium and brass).
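The stated density makes gauge-to-thickness conversion a one-line calculation. A minimal sketch follows; the 10-gauge weight used in the example is a commonly tabulated MSG value, quoted here as an assumption rather than taken from this text:

```python
# Manufacturers' Standard Gauge relates steel sheet weight to thickness via
# an assumed density of 41.82 lb per square foot per inch of thickness.

MSG_DENSITY = 41.82  # lb/ft^2 per inch of thickness (from the text above)

def thickness_from_weight(weight_lb_per_ft2: float) -> float:
    """Return sheet thickness in inches from weight per square foot."""
    return weight_lb_per_ft2 / MSG_DENSITY

# Example: 10-gauge steel is commonly tabulated at 5.625 lb/ft^2 (assumed value).
print(f"{thickness_from_weight(5.625):.4f} in")  # ~0.1345 in
```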
Gauge:
The gauge thicknesses shown in column 2 (U.S. standard sheet and plate iron and steel decimal inch (mm)) seem somewhat arbitrary. The progression of thicknesses is clear in column 3 (U.S. standard for sheet and plate iron and steel 64ths inch (delta)). The thicknesses vary first by 1⁄32 inch in higher thicknesses and then step down to increments of 1⁄64 inch, then 1⁄128 inch, with the final increments at decimal fractions of 1⁄64 inch. Some steel tubes are manufactured by folding a single steel sheet into a square/circle and welding the seam together. Their wall thickness has a similar (but distinct) gauge to the thickness of steel sheets.
Gauge:
Tolerances: During the rolling process the rollers bow slightly, which results in the sheets being thinner at the edges. The tolerances in the table and attachments reflect current manufacturing practices and commercial standards and are not representative of the Manufacturers' Standard Gauge, which has no inherent tolerances.
Forming processes:
Bending: The maximum bending force can be estimated from

$$F_\text{max} = \frac{k\,T\,L\,t^2}{W}$$

where k is a factor taking into account several parameters including friction, T is the ultimate tensile strength of the metal, L and t are the length and thickness of the sheet metal, respectively, and W is the open width of a V-die or wiping die.
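As a quick numerical illustration of the formula above, here is a minimal sketch; the factor k = 1.33 and the material values are illustrative assumptions, not data from this text:

```python
# Sketch of the bending-force estimate F_max = k*T*L*t^2 / W.

def max_bending_force(k: float, T: float, L: float, t: float, W: float) -> float:
    """Estimate maximum bending force.

    k : die/friction factor (dimensionless; ~1.33 is often cited for V-dies)
    T : ultimate tensile strength (Pa)
    L : bend length (m)
    t : sheet thickness (m)
    W : die opening width (m)
    """
    return k * T * L * t ** 2 / W

# Example (assumed values): 2 mm mild steel (UTS ~400 MPa), 1 m bend, 16 mm V-die.
force = max_bending_force(k=1.33, T=400e6, L=1.0, t=2e-3, W=16e-3)
print(f"Estimated max bending force: {force / 1e3:.0f} kN")  # ~133 kN
```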
Curling: The curling process is used to form an edge on a ring. It removes sharp edges and also increases the moment of inertia near the curled end.
The flare/burr should be turned away from the die. Curling is used on material of a specific thickness, and tool steel is generally used for the tooling because of the wear imposed by the operation.
Decambering: Decambering is a metalworking process that removes camber, a horizontal bend, from strip-shaped material. It may be done on finite-length sections or on coils. It resembles the flattening or leveling process, but acts on a deformed edge.
Forming processes:
Deep drawing: Drawing is a forming process in which the metal is stretched over a form or die. In deep drawing, the depth of the part being made is more than half its diameter. Deep drawing is used for making automotive fuel tanks, kitchen sinks, two-piece aluminum cans, etc. Deep drawing is generally done in multiple steps called draw reductions; the greater the depth, the more reductions are required. Deep drawing may also be accomplished with fewer reductions by heating the workpiece, for example in sink manufacture.
Forming processes:
In many cases, material is rolled at the mill in both directions to aid in deep drawing. This leads to a more uniform grain structure which limits tearing and is referred to as "draw quality" material.
Forming processes:
Expanding: Expanding is a process of cutting or stamping slits in an alternating pattern, much like the stretcher bond in brickwork, and then stretching the sheet open in an accordion-like fashion. It is used in applications where air and water flow are desired, or where light weight is desired at the cost of a solid flat surface. A similar process is used in other materials such as paper, to create a low-cost packing paper with better supportive properties than flat paper alone.
Forming processes:
Hemming and seaming: Hemming is a process of folding the edge of sheet metal onto itself to reinforce that edge. Seaming is a process of folding two sheets of metal together to form a joint.
Forming processes:
Hydroforming: Hydroforming is a process analogous to deep drawing, in that the part is formed by stretching the blank over a stationary die. The force required is generated by the direct application of extremely high hydrostatic pressure to the workpiece or to a bladder that is in contact with the workpiece, rather than by the movable part of a die in a mechanical or hydraulic press. Unlike deep drawing, hydroforming usually does not involve draw reductions; the piece is formed in a single step.
Forming processes:
Incremental sheet forming: Incremental sheet forming (ISF) is a sheet metal forming process in which the sheet is formed into its final shape by a series of small, incremental deformations.
Ironing: Ironing is a sheet metal forming process that uniformly thins the workpiece in a specific area. It is used to produce a uniform wall thickness in parts with a high height-to-diameter ratio, for example in making aluminium beverage cans.
Laser cutting: Sheet metal can be cut in various ways, from hand tools called tin snips up to very large powered shears. With advances in technology, sheet metal cutting has turned to computers for precise cutting. Many sheet metal cutting operations are based on computer numerically controlled (CNC) laser cutting or multi-tool CNC punch presses.
Forming processes:
CNC laser involves moving a lens assembly carrying a beam of laser light over the surface of the metal. Oxygen, nitrogen or air is fed through the same nozzle from which the laser beam exits. The metal is heated and burnt by the laser beam, cutting the metal sheet. The quality of the edge can be mirror smooth and a precision of around 0.1 mm (0.0039 in) can be obtained. Cutting speeds on thin 1.2 mm (0.047 in) sheet can be as high as 25 m (82 ft) per minute. Most laser cutting systems use a CO2 based laser source with a wavelength of around 10 µm; some more recent systems use a YAG based laser with a wavelength of around 1 µm.
Forming processes:
Photochemical machining: Photochemical machining, also known as photo etching, is a tightly controlled corrosion process which is used to produce complex metal parts from sheet metal with very fine detail. The photo etching process involves a photosensitive polymer being applied to a raw metal sheet. Using CAD-designed photo-tools as stencils, the metal is exposed to UV light to leave a design pattern, which is developed and etched from the metal sheet.
Forming processes:
Perforating: Perforating is a cutting process that punches multiple small holes close together in a flat workpiece. Perforated sheet metal is used to make a wide variety of surface cutting tools, such as the surform.
Forming processes:
Press brake forming: This is a form of bending used to produce long, thin sheet metal parts. The machine that bends the metal is called a press brake. The lower part of the press contains a V-shaped groove called the die. The upper part of the press contains a punch that presses the sheet metal down into the V-shaped die, causing it to bend. There are several techniques used, but the most common modern method is "air bending". Here, the die has a sharper angle than the required bend (typically 85 degrees for a 90 degree bend) and the upper tool is precisely controlled in its stroke to push the metal down the required amount to bend it through 90 degrees. Typically, a general purpose machine has an available bending force of around 25 tonnes per metre of length. The opening width of the lower die is typically 8 to 10 times the thickness of the metal to be bent (for example, 5 mm material could be bent in a 40 mm die). The inner radius of the bend formed in the metal is determined not by the radius of the upper tool, but by the lower die width. Typically, the inner radius is equal to 1/6 of the V-width used in the forming process.
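The rules of thumb in this paragraph translate directly into a small helper. A minimal sketch follows; the function name and the default opening factor of 8 are illustrative assumptions:

```python
# Air-bending rules of thumb quoted above:
# - die opening W is typically 8-10x the sheet thickness t
# - inner bend radius is roughly W / 6

def air_bend_setup(t_mm: float, opening_factor: float = 8.0) -> dict:
    """Suggest a V-die opening and the expected inner radius for air bending."""
    W = opening_factor * t_mm   # die opening, mm (8-10x thickness)
    inner_radius = W / 6.0      # rule of thumb from the text, mm
    return {"die_opening_mm": W, "inner_radius_mm": inner_radius}

# Matches the text's example: 5 mm material -> 40 mm die, ~6.7 mm inner radius.
print(air_bend_setup(5.0))
```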
Forming processes:
The press usually has some sort of back gauge to position the depth of the bend along the workpiece. The back gauge can be computer controlled to allow the operator to make a series of bends in a component to a high degree of accuracy. Simple machines control only the backstop; more advanced machines control the position and angle of the stop, its height, and the position of the two reference pegs used to locate the material. The machine can also record the exact position and pressure required for each bending operation, allowing the operator to achieve a perfect 90 degree bend across a variety of operations on the part.
Forming processes:
Punching: Punching is performed by placing the sheet of metal stock between a punch and a die mounted in a press. The punch and die are made of hardened steel and are the same shape. The punch is sized to be a very close fit in the die. The press pushes the punch against and into the die with enough force to cut a hole in the stock. In some cases the punch and die "nest" together to create a depression in the stock. In progressive stamping, a coil of stock is fed into a long die/punch set with many stages. Multiple simple shaped holes may be produced in one stage, but complex holes are created in multiple stages. In the final stage, the part is punched free from the "web".
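A common first-order estimate of the required punch force, not given in this text but standard in practice, is cut perimeter times sheet thickness times the material's shear strength. A minimal sketch with illustrative values:

```python
# First-order punching-force estimate: F = perimeter x thickness x shear strength.

import math

def punch_force(perimeter_m: float, t_m: float, shear_strength_pa: float) -> float:
    """Estimate the press force needed to punch a hole of a given perimeter."""
    return perimeter_m * t_m * shear_strength_pa

# Example (assumed values): 20 mm round hole, 1.5 mm steel, ~300 MPa shear strength.
F = punch_force(math.pi * 20e-3, 1.5e-3, 300e6)
print(f"{F / 1e3:.0f} kN")  # ~28 kN
```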
Forming processes:
A typical CNC turret punch has a choice of up to 60 tools in a "turret" that can be rotated to bring any tool to the punching position. A simple shape (e.g. a square, circle, or hexagon) is cut directly from the sheet. A complex shape can be cut out by making many square or rounded cuts around the perimeter. A punch is less flexible than a laser for cutting compound shapes, but faster for repetitive shapes (for example, the grille of an air-conditioning unit). A CNC punch can achieve 600 strokes per minute.
Forming processes:
A typical component (such as the side of a computer case) can be cut to high precision from a blank sheet in under 15 seconds by either a press or a laser CNC machine.
Roll forming: A continuous bending operation for producing open profiles or welded tubes in long lengths or in large quantities.
Rolling: Rolling is a metal forming process in which the stock passes through one or more pairs of rolls to reduce its thickness and make the thickness uniform. It is classified according to the temperature at which it is carried out:
Hot rolling: the temperature is above the recrystallisation temperature.
Cold rolling: the temperature is below the recrystallisation temperature.
Warm rolling: the temperature is between those of hot rolling and cold rolling.
Spinning: Spinning is used to make tubular (axis-symmetric) parts by fixing a piece of sheet stock to a rotating form (mandrel). Rollers or rigid tools press the stock against the form, stretching it until the stock takes the shape of the form. Spinning is used to make rocket motor casings, missile nose cones, satellite dishes and metal kitchen funnels.
Stamping: Stamping includes a variety of operations such as punching, blanking, embossing, bending, flanging, and coining; simple or complex shapes can be formed at high production rates. Tooling and equipment costs can be high, but labor costs are low.
Alternatively, the related techniques repoussé and chasing have low tooling and equipment costs, but high labor costs.
Water jet cutting: A water jet cutter, also known as a waterjet, is a tool capable of controlled erosion into metal or other materials, using a jet of water at high velocity and pressure, or a mixture of water and an abrasive substance.
Forming processes:
Wheeling: The process of using an English wheel is called wheeling. An English wheel is used by a craftsperson to form compound curves from a flat sheet of aluminium or steel. It is costly, as highly skilled labour is required, but different panels can be produced by the same method. For high production volumes, a stamping press is used instead.
Fasteners:
Fasteners that are commonly used on sheet metal include: clecos, rivets, and sheet metal screws.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Trophic function**
Trophic function:
A trophic function was first introduced in the differential equations of the Kolmogorov predator–prey model. It generalizes the linear case of predator–prey interaction first described by Volterra and Lotka in the Lotka–Volterra equation. A trophic function represents the consumption of prey for a given number of predators. The trophic function (also referred to as the functional response) has been widely applied in chemical kinetics, biophysics, mathematical physics and economics. In economics, "predator" and "prey" become various economic parameters such as prices and outputs of goods in various linked sectors such as processing and supply. These relationships were, in turn, found to behave similarly to the magnitudes in chemical kinetics, where the molecular analogues of predators and prey react chemically with each other.
Trophic function:
These interdisciplinary findings suggest the universal character of trophic functions and of the predator–prey models in which they appear. They give general principles for the dynamic interactions of objects of different natures, so that mathematical models worked out in one science may be applied to another. Trophic functions have proven useful in forecasting temporarily stable conditions (limit cycles and/or attractors) of the coupled dynamics of predator and prey. Pontryagin's theorem on the inflection points of trophic functions guarantees the existence of a limit cycle in these systems; a minimal simulation illustrating such a limit cycle is sketched below.
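The following is a minimal sketch, not taken from this text, of a Kolmogorov-type predator–prey model whose trophic function is a saturating Holling type II functional response. All parameter values are illustrative assumptions, chosen so that the system settles onto a limit cycle rather than an equilibrium:

```python
# Predator-prey model with a saturating trophic function (Holling type II).
# Illustrative parameters; with these the dynamics approach a limit cycle.

from scipy.integrate import solve_ivp

r, K = 1.0, 10.0   # prey growth rate and carrying capacity
a, h = 1.0, 0.5    # attack rate and handling time
e, m = 0.6, 0.4    # conversion efficiency and predator mortality

def trophic(x):
    """Prey consumed per predator at prey density x (the trophic function)."""
    return a * x / (1.0 + a * h * x)

def rhs(t, z):
    x, y = z  # prey and predator densities
    return [r * x * (1 - x / K) - trophic(x) * y,
            e * trophic(x) * y - m * y]

sol = solve_ivp(rhs, (0, 200), [2.0, 1.0])
print(sol.y[:, -5:])  # late-time densities keep cycling rather than converging
```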
Trophic function:
Trophic functions are especially important in situations of chaos, when one has numerous interacting magnitudes and objects, as is particularly true in global economics. To define and forecast the dynamics in this case is scarcely possible with linear methods, but non-linear dynamic analysis involving trophic functions leads to the discovery of limit cycles or attractors. Since in nature there exist only temporarily stable objects, such limit cycles and attractors must exist in the dynamics of observed natural objects (chemistry, flora and fauna, economics, cosmology). The general theory suggests as-yet-unknown regularities in the dynamics of the various systems surrounding us.
Trophic function:
Despite the success already achieved in research on trophic functions, the field still has great further theoretical potential and practical importance. Global economics, for instance, needs tools to forecast the dynamics of outputs and prices over a scale of at least 3–5 years so as to maintain stable demand and not over-produce, and to prevent crises such as that of 2008.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Program compatibility date range**
Program compatibility date range:
The Program Compatibility Date Range (PCDR) of a computer determines the date range of the programs it can run. Windows XP is widely recognized for its expansive PCDR, which covers games as old as the 1980s. Windows Vista, however, was far more limited, largely because the introduction of the Program Files (x86) directory ended the installation, and therefore the use, of DOS programs from Vista onward. This contributed to Vista's intensely negative reception, along with its overly restrictive security model.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Bobby Burns (drink)**
Bobby Burns (drink):
The Bobby Burns is a whisky cocktail composed of scotch, vermouth and Bénédictine liqueur. It is served in a 4.5 US fl oz cocktail glass.
The drink is named for Robert Burns, the Scottish poet, but is not considered a national drink in the way the Rusty Nail is.
History:
The original recipe comes from the 1900 edition of Fancy Drinks, published by Bishop & Babcock, where it is called the "Baby Burns". The "Robert Burns" name appears in the 1908 Jack's Manual and in Drinks (1914), where it is made with Irish whiskey, vermouth and absinthe. In later publications it starts to be called by the more informal "Bobby Burns" name, with the original Irish whiskey recipe appearing in Recipes for Mixed Drinks (1917). The 1948 recipe from The Fine Art of Mixing Drinks replaced the Bénédictine with Drambuie (a Scotch whisky liqueur) and bitters.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Free Journal Network**
Free Journal Network:
The Free Journal Network is an index of open access scholarly journals, specifically for those that do not charge article processing charges.
Criteria:
The network was founded in early 2018 to promote free, open access journals, a publishing model that is sometimes called diamond or platinum open access.
Such journals are typically smaller than equivalent commercial journals and are often supported by academic societies. The main criteria for inclusion are adherence to the Fair Open Access Principles, which are publicly supported by many renowned scientists; publication of article titles and abstracts in English; and clear publication ethics and quality assurance policies.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Ellsberg paradox**
Ellsberg paradox:
In decision theory, the Ellsberg paradox (or Ellsberg's paradox) is a paradox in which people's decisions are inconsistent with subjective expected utility theory. Daniel Ellsberg popularized the paradox in his 1961 paper, "Risk, Ambiguity, and the Savage Axioms". John Maynard Keynes published a version of the paradox in 1921. It is generally taken to be evidence of ambiguity aversion, in which a person tends to prefer choices with quantifiable risks over those with unknown, incalculable risks.
Ellsberg paradox:
Ellsberg's findings indicate that choices with an underlying level of risk are favored where the likelihood of risk is clear, rather than where it is unknown. A decision-maker will overwhelmingly favor a choice with a transparent likelihood of risk, even when the unknown alternative would likely produce greater utility: offered choices with varying risk, people prefer those with calculable risk, even when they have less expected utility.
Experimental research:
Ellsberg's experimental research involved two separate thought experiments: the two-urn, two-color scenario and the one-urn, three-color scenario.
Two-urn paradox: There are two urns, each containing 100 balls. It is known that urn A contains 50 red and 50 black balls, while urn B contains an unknown mix of red and black balls.
Experimental research:
The following bets are offered to a participant:
Bet 1A: get $1 if red is drawn from urn A, $0 otherwise
Bet 2A: get $1 if black is drawn from urn A, $0 otherwise
Bet 1B: get $1 if red is drawn from urn B, $0 otherwise
Bet 2B: get $1 if black is drawn from urn B, $0 otherwise
Typically, participants were indifferent between Bet 1A and Bet 2A (consistent with expected utility theory), but strictly preferred Bet 1A to Bet 1B and Bet 2A to Bet 2B. This result is generally interpreted as a consequence of ambiguity aversion (also known as uncertainty aversion): people intrinsically dislike situations where they cannot attach probabilities to outcomes, in this case favoring the bet for which they know the probability and utility outcome (0.5 and $1 respectively).
Experimental research:
One-urn paradox: There is one urn containing 90 balls: 30 balls are red, while the remaining 60 balls are either black or yellow in unknown proportions. The balls are well mixed so that each ball is as likely to be drawn as any other. The participant first chooses between two gambles: Gamble A pays $100 if a red ball is drawn, while Gamble B pays $100 if a black ball is drawn. Additionally, the participant chooses between a second pair of gambles within the same situational parameters: Gamble C pays $100 if a red or yellow ball is drawn, while Gamble D pays $100 if a black or yellow ball is drawn. The experimental conditions manufactured by Ellsberg rest on two economic concepts: Knightian uncertainty, the unquantifiable nature of the mix of yellow and black balls within the single urn, and probability, with red balls drawn at 1/3 versus 2/3 for the others.
Experimental research:
Utility theory interpretation: Utility theory models the choice by assuming that in choosing between these gambles, people assume a probability that the non-red balls are yellow versus black, and then compute the expected utility of the two gambles individually.
Experimental research:
Since the prizes are the same, it follows that the participant will strictly prefer Gamble A to Gamble B if and only if they believe that drawing a red ball is more likely than drawing a black ball (according to expected utility theory). Also, there would be indifference between the choices if the participant thought that a red ball was as likely as a black ball. Similarly, it follows that the participant will strictly prefer Gamble C to Gamble D if and only if they believe that drawing a red or yellow ball is more likely than drawing a black or yellow ball. It might seem intuitive that if drawing a red ball is more likely than drawing a black ball, drawing a red or yellow ball is also more likely than drawing a black or yellow ball. So, supposing the participant strictly prefers Gamble A to Gamble B, it follows that they will also strictly prefer Gamble C to Gamble D, and conversely.
Experimental research:
However, ambiguity aversion would predict that people would strictly prefer Gamble A to Gamble B, and Gamble D to Gamble C.
Ellsberg's findings violate the assumptions of standard expected utility theory, with participants strictly preferring Gamble A to Gamble B and Gamble D to Gamble C.
Experimental research:
Numerical demonstration: Mathematically, the estimated probabilities of drawing each color can be represented as R, Y, and B. If the participant strictly prefers Gamble A to Gamble B and Gamble D to Gamble C, then by utility theory these preferences are reflected by the expected utilities of the gambles, and the participant's beliefs must satisfy two incompatible inequalities, as spelled out below. This contradiction indicates that the participant's preferences are inconsistent with expected utility theory.
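Writing $U(\cdot)$ for the utility of a prize and filling in the step the text appeals to (a reconstruction of the standard derivation, using the $100 prize from the gambles above), the two stated preferences translate into:

$$R\,U(\$100) + (1-R)\,U(\$0) \;>\; B\,U(\$100) + (1-B)\,U(\$0) \;\Longrightarrow\; R > B,$$

$$(B+Y)\,U(\$100) + R\,U(\$0) \;>\; (R+Y)\,U(\$100) + B\,U(\$0) \;\Longrightarrow\; B > R.$$

No probability assignment satisfies both $R > B$ and $B > R$, which is the contradiction referred to above.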
Experimental research:
The generality of the paradox: The result holds regardless of the utility function. Indeed, the amount of the payoff is likewise irrelevant. Whichever gamble is selected, the prize for winning it is the same, and the cost of losing it is the same (no cost), so ultimately there are only two outcomes: receive a specific amount of money or nothing. Therefore, it is sufficient to assume that receiving some money is preferred to receiving nothing (and even this assumption is not necessary: in the mathematical treatment above, it was assumed U($100) > U($0), but a contradiction can still be obtained for U($100) < U($0) and for U($100) = U($0)).
Experimental research:
In addition, the result holds regardless of risk aversion—all gambles involve risk. By choosing Gamble D, the participant has a 1 in 3 chance of receiving nothing, and by choosing Gamble A, a 2 in 3 chance of receiving nothing. If Gamble A was less risky than Gamble B, it would follow that Gamble C was less risky than Gamble D (and vice versa), so the risk is not averted in this way.
Experimental research:
However, because the exact chances of winning are known for Gambles A and D and not known for Gambles B and C, this can be taken as evidence for some sort of ambiguity aversion, which cannot be accounted for in expected utility theory. It has been demonstrated that this phenomenon occurs only when the choice set permits the comparison of the ambiguous proposition with a less vague proposition (but not when ambiguous propositions are evaluated in isolation).
Experimental research:
Possible explanations: There have been various attempts to provide decision-theoretic explanations of Ellsberg's observation. Since the probabilistic information available to the decision-maker is incomplete, these attempts sometimes focus on quantifying the non-probabilistic ambiguity that the decision-maker faces; see Knightian uncertainty. That is, these alternative approaches sometimes suppose that the agent formulates a subjective (though not necessarily Bayesian) probability for possible outcomes.
Experimental research:
One such attempt is based on info-gap decision theory. The agent is told precise probabilities of some outcomes, though the practical meaning of the probability numbers is not entirely clear. For instance, in the gambles discussed above, the probability of a red ball is 30/90, which is a precise number. Nonetheless, the participant may not distinguish intuitively between this and e.g. 30/91. No probability information whatsoever is provided regarding other outcomes, so the participant has very unclear subjective impressions of these probabilities.
Experimental research:
In light of the ambiguity in the probabilities of the outcomes, the agent is unable to evaluate a precise expected utility. Consequently, a choice based on maximizing the expected utility is also impossible. The info-gap approach supposes that the agent implicitly formulates info-gap models for the subjectively uncertain probabilities. The agent then tries to satisfice the expected utility and to maximize robustness against uncertainty in the imprecise probabilities. This robust-satisficing approach can be developed explicitly to show that the choices of decision-makers should display precisely the preference reversal that Ellsberg observed.
Another possible explanation is that this type of game triggers a deceit-aversion mechanism. Many humans naturally assume in real-world situations that if they are not told the probability of a certain event, it is to deceive them. Participants make the same decisions in the experiment as they would about related but not identical real-life problems, where the experimenter would be likely to be a deceiver acting against the subject's interests. When faced with the choice between a red ball and a black ball, the probability of 30/90 is compared to the lower part of the 0/90–60/90 range (the probability of getting a black ball). The average person expects there to be fewer black balls than yellow balls because, in most real-world situations, it would be to the advantage of the experimenter to put fewer black balls in the urn when offering such a gamble. On the other hand, when offered a choice between red-and-yellow balls and black-and-yellow balls, people assume that there must be fewer than 30 yellow balls, as would be necessary to deceive them. When making the decision, it is quite possible that people simply neglect to consider that the experimenter does not have a chance to modify the contents of the urn between the draws. In real-life situations, even if the urn is not to be modified, people would be afraid of being deceived on that front as well.
Decisions under uncertainty aversion:
To describe how an individual would make decisions in a world where uncertainty aversion exists, modifications of the expected utility framework have been proposed. These include: Choquet expected utility: created by the French mathematician Gustave Choquet, this approach uses a subadditive integral to measure expected utility in situations with unknown parameters. The mathematical principle is seen as a way to reconcile rational choice theory and expected utility theory with Ellsberg's seminal findings.
Decisions under uncertainty aversion:
Maxmin expected utility: axiomatized by Gilboa and Schmeidler, this is a widely received alternative to utility maximization that takes ambiguity-averse preferences into account. The model accommodates intuitive decisions that violate ambiguity neutrality, as exhibited in both the Ellsberg paradox and the Allais paradox. A minimal illustration of maxmin evaluation on the one-urn problem is sketched below.
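The following is a minimal sketch, an illustration rather than anything from this text, of how maxmin expected utility rationalizes the Ellsberg choices: each gamble is evaluated by its worst-case expected payoff over every urn composition consistent with 30 red balls and 60 black-or-yellow balls.

```python
# Maxmin evaluation of the one-urn Ellsberg gambles: take the worst-case
# expected payoff over all feasible urn compositions (black + yellow = 60).

def maxmin_value(payoff):
    """Worst-case expected payoff of a gamble over all feasible compositions.

    payoff: dict mapping 'red'/'black'/'yellow' to the prize for that draw.
    """
    worst = float("inf")
    for black in range(61):            # number of black balls: 0..60
        yellow = 60 - black
        p = {"red": 30 / 90, "black": black / 90, "yellow": yellow / 90}
        ev = sum(p[c] * payoff[c] for c in p)
        worst = min(worst, ev)
    return worst

gambles = {
    "A (red)":             {"red": 100, "black": 0,   "yellow": 0},
    "B (black)":           {"red": 0,   "black": 100, "yellow": 0},
    "C (red or yellow)":   {"red": 100, "black": 0,   "yellow": 100},
    "D (black or yellow)": {"red": 0,   "black": 100, "yellow": 100},
}
for name, g in gambles.items():
    print(name, round(maxmin_value(g), 2))
# Maxmin ranks A over B and D over C: exactly the Ellsberg pattern.
```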
Alternative explanations:
Other alternative explanations include the competence hypothesis and the comparative ignorance hypothesis. Both theories attribute the source of the ambiguity aversion to the participant's pre-existing knowledge.
Daniel Ellsberg's 1962 paper, "Risk, Ambiguity, and Decision":
Upon graduating in economics from Harvard in 1952, Ellsberg left immediately to serve as a US Marine before returning to Harvard in 1957 to complete his post-graduate studies on decision-making under uncertainty. Ellsberg left his graduate studies to join the RAND Corporation as a strategic analyst, but continued to do academic work on the side. He presented his breakthrough paper at the December 1960 meeting of the Econometric Society. Ellsberg's work built upon earlier work by both J. M. Keynes and F. H. Knight, challenging the dominant rational choice theory. The work was not published until 2001, some 40 years after it was written, in part because of the Pentagon Papers scandal then encircling Ellsberg's life. It is still considered a highly influential work within economic academia on risk, ambiguity and uncertainty.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Rank factorization**
Rank factorization:
In mathematics, given a field $\mathbb{F}$, nonnegative integers $m, n$, and a matrix $A \in \mathbb{F}^{m \times n}$, a rank decomposition or rank factorization of $A$ is a factorization of the form $A = CF$, where $C \in \mathbb{F}^{m \times r}$ and $F \in \mathbb{F}^{r \times n}$, with $r = \operatorname{rank} A$.
Existence:
Every finite-dimensional matrix has a rank decomposition: let $A$ be an $m \times n$ matrix whose column rank is $r$. Therefore, there are $r$ linearly independent columns in $A$; equivalently, the dimension of the column space of $A$ is $r$. Let $\mathbf{c}_1, \mathbf{c}_2, \ldots, \mathbf{c}_r$ be any basis for the column space of $A$ and place them as column vectors to form the $m \times r$ matrix $C = \begin{bmatrix} \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_r \end{bmatrix}$. Therefore, every column vector of $A$ is a linear combination of the columns of $C$. To be precise, if $A = \begin{bmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n \end{bmatrix}$ is an $m \times n$ matrix with $\mathbf{a}_j$ as the $j$-th column, then
$$\mathbf{a}_j = f_{1j}\mathbf{c}_1 + f_{2j}\mathbf{c}_2 + \cdots + f_{rj}\mathbf{c}_r,$$
where the $f_{ij}$ are the scalar coefficients of $\mathbf{a}_j$ in terms of the basis $\mathbf{c}_1, \mathbf{c}_2, \ldots, \mathbf{c}_r$. This implies that $A = CF$, where $f_{ij}$ is the $(i,j)$-th element of $F$.
Non-uniqueness:
If $A = C_1 F_1$ is a rank factorization, then taking $C_2 = C_1 R$ and $F_2 = R^{-1} F_1$ gives another rank factorization for any invertible matrix $R$ of compatible dimensions. Conversely, if $A = F_1 G_1 = F_2 G_2$ are two rank factorizations of $A$, then there exists an invertible matrix $R$ such that $F_1 = F_2 R$ and $G_1 = R^{-1} G_2$.
Construction:
Rank factorization from reduced row echelon forms: In practice, we can construct one specific rank factorization as follows: we compute $B$, the reduced row echelon form of $A$. Then $C$ is obtained by removing from $A$ all non-pivot columns (which can be determined by looking for columns in $B$ that do not contain a pivot), and $F$ is obtained by eliminating all zero rows of $B$. Note: for a full-rank square matrix (i.e. when $n = m = r$), this procedure yields the trivial result $C = A$ and $F = B = I_n$ (the $n \times n$ identity matrix).
Construction:
Example: Consider the matrix
$$A = \begin{bmatrix} 1 & 3 & 1 & 4 \\ 2 & 7 & 3 & 9 \\ 1 & 5 & 3 & 1 \\ 1 & 2 & 0 & 8 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix} = B,$$
where $B$ is in reduced row echelon form. Then $C$ is obtained by removing the third column of $A$, the only one which is not a pivot column, and $F$ by getting rid of the last row of zeroes of $B$, so
$$C = \begin{bmatrix} 1 & 3 & 4 \\ 2 & 7 & 9 \\ 1 & 5 & 1 \\ 1 & 2 & 8 \end{bmatrix}, \qquad F = \begin{bmatrix} 1 & 0 & -2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
It is straightforward to check that
$$A = \begin{bmatrix} 1 & 3 & 1 & 4 \\ 2 & 7 & 3 & 9 \\ 1 & 5 & 3 & 1 \\ 1 & 2 & 0 & 8 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 4 \\ 2 & 7 & 9 \\ 1 & 5 & 1 \\ 1 & 2 & 8 \end{bmatrix} \begin{bmatrix} 1 & 0 & -2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} = CF.$$
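Here is a minimal sketch of the rref-based construction above, run on the worked example; the use of SymPy and the variable names are my own choices, not from the text:

```python
# Rank factorization A = CF from the reduced row echelon form:
# C keeps the pivot columns of A; F keeps the nonzero rows of rref(A).

from sympy import Matrix

A = Matrix([[1, 3, 1, 4],
            [2, 7, 3, 9],
            [1, 5, 3, 1],
            [1, 2, 0, 8]])

B, pivots = A.rref()                               # rref and pivot column indices
C = A.extract(list(range(A.rows)), list(pivots))   # pivot columns of A
F = B[:len(pivots), :]                             # nonzero rows of B

assert A == C * F                                  # verify the factorization
print(C)
print(F)
```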
Construction:
Proof: Let $P$ be an $n \times n$ permutation matrix such that $AP = (C, D)$ in block partitioned form, where the columns of $C$ are the $r$ pivot columns of $A$. Every column of $D$ is a linear combination of the columns of $C$, so there is a matrix $G$ such that $D = CG$, where the columns of $G$ contain the coefficients of each of those linear combinations. So $AP = (C, CG) = C(I_r, G)$, with $I_r$ the $r \times r$ identity matrix. We now show that $(I_r, G) = FP$. Transforming $A$ into its reduced row echelon form $B$ amounts to left-multiplying by a matrix $E$ which is a product of elementary matrices, so $EAP = BP = EC(I_r, G)$, where $EC = \begin{pmatrix} I_r \\ 0 \end{pmatrix}$. We can then write $BP = \begin{pmatrix} I_r & G \\ 0 & 0 \end{pmatrix}$, which allows us to identify $(I_r, G) = FP$, i.e. the $r$ nonzero rows of the reduced row echelon form, with the same permutation applied to the columns as was applied to $A$. We thus have $AP = CFP$, and since $P$ is invertible this implies $A = CF$, and the proof is complete.
Construction:
Singular value decomposition: If $\mathbb{F} \in \{\mathbb{R}, \mathbb{C}\}$, then one can also construct a full-rank factorization of $A$ via a singular value decomposition
$$A = U \Sigma V^* = \begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1^* \\ V_2^* \end{bmatrix} = U_1 \left( \Sigma_r V_1^* \right).$$
Since $U_1$ is a full-column-rank matrix and $\Sigma_r V_1^*$ is a full-row-rank matrix, we can take $C = U_1$ and $F = \Sigma_r V_1^*$.
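A corresponding sketch of the SVD-based construction, here using NumPy and a numerical tolerance to read off the rank (the tolerance choice is an assumption of this illustration):

```python
# Full-rank factorization from the SVD: C = U_1, F = Sigma_r V_1^*.

import numpy as np

A = np.array([[1, 3, 1, 4],
              [2, 7, 3, 9],
              [1, 5, 3, 1],
              [1, 2, 0, 8]], dtype=float)

U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-10 * s[0]))   # numerical rank from significant singular values
C = U[:, :r]                        # full column rank, m x r
F = s[:r, None] * Vh[:r, :]         # Sigma_r V_1^*, full row rank, r x n

assert np.allclose(A, C @ F)        # verify the full-rank factorization
print(r)                            # 3 for this example
```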
Consequences:
rank(A) = rank(AT): An immediate consequence of rank factorization is that the rank of $A$ is equal to the rank of its transpose $A^{\mathsf{T}}$. Since the columns of $A$ are the rows of $A^{\mathsf{T}}$, the column rank of $A$ equals its row rank. Proof: To see why this is true, let us first define rank to mean column rank. Since $A = CF$, it follows that $A^{\mathsf{T}} = F^{\mathsf{T}} C^{\mathsf{T}}$. From the definition of matrix multiplication, this means that each column of $A^{\mathsf{T}}$ is a linear combination of the columns of $F^{\mathsf{T}}$. Therefore, the column space of $A^{\mathsf{T}}$ is contained within the column space of $F^{\mathsf{T}}$ and, hence, $\operatorname{rank}(A^{\mathsf{T}}) \leq \operatorname{rank}(F^{\mathsf{T}})$.
Consequences:
Now, $F^{\mathsf{T}}$ is $n \times r$, so there are $r$ columns in $F^{\mathsf{T}}$ and, hence, $\operatorname{rank}(A^{\mathsf{T}}) \leq r = \operatorname{rank}(A)$. This proves that $\operatorname{rank}(A^{\mathsf{T}}) \leq \operatorname{rank}(A)$.
Consequences:
Now apply the result to $A^{\mathsf{T}}$ to obtain the reverse inequality: since $(A^{\mathsf{T}})^{\mathsf{T}} = A$, we can write $\operatorname{rank}(A) = \operatorname{rank}((A^{\mathsf{T}})^{\mathsf{T}}) \leq \operatorname{rank}(A^{\mathsf{T}})$. This proves $\operatorname{rank}(A) \leq \operatorname{rank}(A^{\mathsf{T}})$.
Consequences:
We have therefore proved both $\operatorname{rank}(A^{\mathsf{T}}) \leq \operatorname{rank}(A)$ and $\operatorname{rank}(A) \leq \operatorname{rank}(A^{\mathsf{T}})$, so $\operatorname{rank}(A) = \operatorname{rank}(A^{\mathsf{T}})$.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Racal suit**
Racal suit:
A Racal suit (also known as a Racal space suit) is a protective suit with a powered air-purifying respirator (PAPR). It consists of a plastic suit and a battery-operated blower with HEPA filters that supplies filtered air to a positive-pressure hood (also known as a Racal hood). Racal suits were among the protective suits used by the Aeromedical Isolation Team (AIT) of the United States Army Medical Research Institute of Infectious Diseases to evacuate patients with highly infectious diseases for treatment. Originally, the hood was manufactured by Racal Health & Safety, a subsidiary of Racal Electronics located in Frederick, Maryland, the same city where the AIT was based. The division of Racal responsible for the suit's manufacture later became part of 3M, and the respirator product line was branded as 3M/Racal.
Components:
The main body of the protective suit consists of a lightweight coverall made of polyvinyl chloride (PVC), rubber gloves, and rubber boots. Originally, the coverall was a bright orange color, and the Racal suit was known as an orange suit. The hood is a separate component from the protective suit. The Racal hood is a type of PAPR consisting of a transparent hood connected to a respirator, which is powered by a rechargeable battery. The respirator has three HEPA filters that are certified to remove 99.7% of particles of 0.03 to 3.0 microns in diameter. The filtered air is supplied at the rate of 170 L/min to the top of the hood under positive pressure for breathing and cooling. The air is forced out through an air exhaust valve at the base of the hood. A two-way radio system is installed inside the hood for communication. The AIT later switched from using transparent bubble hoods to butyl rubber hoods.
Procedures:
The main purpose of the AIT was to evacuate a patient from the field to a specialized isolation unit. As part of their procedures, AIT members wore Racal suits while transporting the patients. They were trained to take a bathroom break before suiting up, since the time they would be in the suits could be 1 hour and 45 minutes for a training session and 4 to 6 hours for an actual mission. The patient was placed in a mobile stretcher isolator during transit. After the patient was delivered to the isolation unit, the members would leave the unit and enter into an anteroom with an airlock. They were then sprayed with glutaraldehyde solution to disinfect before the suit was cut away and sent to an on-site incinerator for complete destruction.
Similar suits:
The Racal suit is similar to other positive pressure personnel suits such as the Chemturion, in that there is an air supply to provide positive pressure to reduce the chance of airborne agents entering the suit. However, several components are different. The positive pressure section for the Racal suit is only available at the hood. The air supply for Racal suits comes from a battery-operated blower that makes the suit portable, whereas other suits must be connected to an air hose that is part of the building, such as in Biosafety Level 4 laboratories. The main body part of the Racal suit is also more lightweight and can be disposed of by burning after use.
In popular culture:
Racal suits were used in films such as Outbreak in 1995. The term is also used in literature related to situations with infectious diseases, such as in The Hot Zone: A Terrifying True Story, Infected, and Executive Orders.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Pancreatic polypeptide**
Pancreatic polypeptide:
Pancreatic polypeptide (PP) is a polypeptide secreted by PP cells in the endocrine pancreas. It regulates pancreatic secretion activities, and also impacts liver glycogen storage and gastrointestinal secretion. Its secretion may be impacted by certain endocrine tumours.
Gene:
The PPY gene encodes an unusually short protein precursor. This precursor is cleaved to produce pancreatic polypeptide, pancreatic icosapeptide, and a 5- to 7- amino-acid oligopeptide.
Structure:
Pancreatic polypeptide consists of 36 amino acids. It has a molecular weight about 4200 Da. It has a similar structure to neuropeptide Y.
Synthesis:
Pancreatic polypeptide is synthesised and secreted by PP cells (also known as gamma cells or F cells) of the pancreatic islets of the pancreas. These are found predominantly in the head of the pancreas.
Function:
Pancreatic polypeptide regulates pancreatic secretion activities by both endocrine and exocrine tissues. It also affects hepatic glycogen levels and gastrointestinal secretions.
Its secretion in humans is increased after a protein meal, fasting, exercise, and acute hypoglycaemia, and is decreased by somatostatin and intravenous glucose.
Function:
Plasma pancreatic polypeptide has been shown to be reduced in conditions associated with increased food intake and elevated in anorexia nervosa. In addition, peripheral administration of the polypeptide has been shown to decrease food intake in rodents. Pancreatic polypeptide inhibits pancreatic secretion of fluid, bicarbonate, and digestive enzymes. It also stimulates gastric acid secretion. It is the antagonist of cholecystokinin and opposes pancreatic secretion stimulated by cholecystokinin. It may stimulate the migrating motor complex, synergistically with motilin. In the fasting state, the pancreatic polypeptide concentration is about 80 pg/ml; after a meal it rises 8- to 10-fold. Glucose and fats also raise PP levels, but parenteral administration of these substances leaves the hormone level unchanged. Administration of atropine or vagotomy blocks pancreatic polypeptide secretion after meals, whereas excitation of the vagus nerve, or administration of gastrin, secretin or cholecystokinin, induces PP secretion.
Clinical significance:
The secretion of pancreatic polypeptide may be increased by pancreatic endocrine tumours (such as those secreting insulin or glucagon), by Verner-Morrison syndrome, and by gastrinomas.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**8-Mercaptoquinoline**
8-Mercaptoquinoline:
8-Mercaptoquinoline is the organosulfur compound with the formula C9H7NSH. It is a derivative of the heterocycle quinoline, substituted in the 8-position with a thiol group. The compound is an analog of 8-hydroxyquinoline, a common chelating agent. The compound is a colorless solid.
Preparation:
Quinoline reacts with chlorosulfuric acid to form quinoline-8-sulfonyl chloride, which reacts with triphenylphosphine in toluene to form 8-mercaptoquinoline.
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Thermomechanical processing**
Thermomechanical processing:
Thermomechanical processing is a metallurgical process that combines mechanical or plastic deformation processes, such as compression, forging and rolling, with thermal processes, such as heat treatment, water quenching, and heating and cooling at various rates, into a single process.
Application in rebar steel:
The quenching process produces a high strength bar from inexpensive low carbon steel. The process quenches the surface layer of the bar, which pressurizes and deforms the crystal structure of intermediate layers, and simultaneously begins to temper the quenched layers using the heat from the bar's core.
Application in rebar steel:
Steel billets with a 130 mm square cross-section ("pencil ingots") are heated to approximately 1200 °C to 1250 °C in a reheat furnace. Then, they are progressively rolled to reduce the billets to the final size and shape of the reinforcing bar. After the last rolling stand, the billet moves through a quench box. The quenching converts the billet's surface layer to martensite and causes it to shrink. The shrinkage pressurizes the core, helping to form the correct crystal structures. The core remains hot and austenitic. A microprocessor controls the water flow to the quench box to manage the temperature difference through the cross-section of the bars. The correct temperature difference assures that all processes occur and that the bars have the necessary mechanical properties. The bar leaves the quench box with a temperature gradient through its cross-section. As the bar cools, heat flows from the bar's centre to its surface, so that the bar's heat and pressure correctly temper an intermediate ring of martensite and bainite.
Application in rebar steel:
Finally, the slow cooling after quenching transforms the austenitic core to ferrite and pearlite on the cooling bed.
These bars therefore exhibit a variation in microstructure in their cross section, having strong, tough, tempered martensite in the surface layer of the bar, an intermediate layer of martensite and bainite, and a refined, tough and ductile ferrite and pearlite core.
When the cut ends of TMT bars are etched in nital (a mixture of nitric acid and methanol), three distinct rings appear: (1) a tempered outer ring of martensite, (2) a semi-tempered middle ring of martensite and bainite, and (3) a mild circular core of bainite, ferrite and pearlite. This is the desired microstructure for quality construction rebar.
Application in rebar steel:
In contrast, lower grades of rebar are twisted when cold, work-hardening them to increase their strength. After thermomechanical treatment (TMT), however, bars need no further work hardening. Because there is no twisting during TMT, no torsional stress occurs, and so torsional stress cannot form surface defects in TMT bars. TMT bars therefore resist corrosion better than cold-twisted deformed (CTD) bars.
Application in rebar steel:
After thermomechanical processing, TMT bars are available in grades such as Fe 415, Fe 500, Fe 550 and Fe 600. These are much stronger than conventional CTD bars and give up to 20% more strength to a concrete structure for the same quantity of steel.
**RAM pack**
RAM pack:
RAM pack, RAMpack, RAM expansion cartridge, RAM expansion unit (REU), memory expansion pak and memory module are some of the most common names given to various self-contained units or cartridges that expand a computer, games console or other device's own internal RAM in a user-friendly manner.
RAM pack:
Such units are generally designed to be installable by an end-user with little technical knowledge, often simply by plugging them into an expansion or cartridge slot easily accessible at the rear of the machine (e.g. the Sinclair ZX81 or the VTech Laser 200), or via a user-accessible hatch (e.g. the Atari 800's CX852 and CX853 modules or the Nintendo 64 Expansion Pak).
RAM pack:
The ZX81 16K RAM expansion gained particular notoriety for the "RAM pack wobble" problem: because the unit was top-heavy and supported only by the edge connector, it could rock or fall out, crashing the ZX81 and losing any program or data currently in the computer's memory.
RAM pack:
Examples of such memory expansions include:
- Jupiter Ace RAM Pack
- Sinclair ZX80 RAM pack units (available in 1–3 KB and later 16 KB)
- Sinclair ZX81 16 KB RAM unit, commonly referred to as "RAM Pack" like its predecessor
- Atari 1064 Memory Module (expanded the Atari 600XL's 16 KB RAM to 64 KB)
- VIC-20 RAM cartridges, officially available in 3 KB (with or without BASIC extension ROM), 8 KB and 16 KB, with 32 KB and 64 KB third-party cartridges also available
- Commodore REU, a series of RAM Expansion Units (REUs) for the Commodore 64 and Commodore 128 computers (128 KB, 256 KB and 512 KB capacities)
- Saturn carts, 1 or 4 MB of RAM, sold by SNK and Capcom respectively for use with their games
- Nintendo 64 "Expansion Pak", which expanded the N64's RAM from 4 to 8 MB
- Nintendo DS and DS Lite "Memory Expansion Pak", supplied with the DS web browser software, adding 8 MB of RAM
**Wildflower strip**
Wildflower strip:
A wildflower strip is a section of land set aside to grow wildflowers. These may be at the edge of a crop field to mitigate agricultural intensification and monoculture; along road medians and verges; or in parkland or other open spaces such as the Coronation Meadows. Such strips are an attractive amenity and may also improve biodiversity, conserving birds, insects and other wildlife.
General characteristics:
Wildflower strips, a semi-natural man-made habitat comprising mixtures of native herbaceous species, can be sown on arable field margins to provide multiple ecological, agricultural and conservation benefits. They typically measure 3–10 m across and vary in their plant species composition depending on the purpose for which they are established.
The purposes for which wildflower strips are sown can vary. These may be to provide nectar sources for certain pollinator species, promote biological pest control, or enhance local biodiversity by improving habitat quality and diversity.
General characteristics:
Early concepts of wildflower strips were first developed in Switzerland in the 1980s, where the ideas were consolidated under the German name "Buntbrachen" (wildflower strips) and established within Swiss agricultural policy. Wildflower strips can be naturally regenerated on a range of soil types. However, on nutrient-rich soils the result is likely to be a plant community with low species richness, dominated by vigorous grasses, so lighter soils may be preferable to give all species present a reasonable chance. In contrast to the traditional wildflower strips that border field margins, infield wildflower strips have more recently been trialled. In this approach, wildflower strips are extended beyond field boundaries to traverse the centre of fields. They may thereby be regarded as an extension of in-field beetle banks, and are primarily targeted at making a larger proportion of the arable crop easily accessible to natural enemies of crop pests.
Ecological benefits and conservation value:
As well as adding colour and aesthetic appeal to the otherwise homogeneous agricultural landscape, wildflower strips provide food, shelter and overwintering sites for arthropod species that benefit agriculture and ecosystem functionality. These beneficiaries may play an important role in controlling insect pests of commercial crops, or in pollinating crops, as with bumblebees and honeybees attracted to the wildflowers as a nectar source. In the former case, the pest-controlling arthropods profit from the high insulation capacity of the vegetation, which makes wildflower strips suitable overwintering sites. Thus, wildflower strips can significantly enhance local biodiversity and mitigate the declines in economically important invertebrate populations caused by intensive agriculture. As such, they may be regarded as ecological compensation areas interspersed in a highly disturbed, wildlife-impoverished agroecosystem. Wildflower strips can also improve habitat connectivity within the agricultural landscape by functioning as wildlife corridors for the beneficiary taxa in question.
Economic benefits:
Wildflower strips can be highly beneficial to agriculture by attracting pollinating insects and pest-controlling arthropods, thereby potentially improving crop yields. Sowing strips of wildflowers along a given area is usually worthwhile if the resulting increase in natural pollination improves crop yields beyond those obtained in the absence of wildflower strips. Moreover, from a farmer's perspective, investing in relatively inexpensive seed mixtures to create wildflower strips and thereby promote natural pollination is an effective way of reducing reliance on commercially sourced pollinators and insuring against potential market supply failures for pollinators such as bumblebees. By restoring previously lost semi-natural habitat for pollinators through the establishment of wildflower strips along field margins, the loss in pollination services resulting from declines in pollinating insects may be sufficiently recouped.
Effectiveness:
Although generally beneficial to wildlife, the success of a wildflower strip, from both an ecological and an agricultural perspective, depends on several important factors, such as the right plant species composition and the local landscape context in which the strip is created. Ultimately, however, the right choice of plant species to sow in the right place is contingent upon the specific conservation aims of the strip. For example, in an intensive agricultural landscape where the main intended beneficiaries are locally common pollinating bee species, strips sown from a mixture of seeds of various plant species with differential suitability for various pollinators may be best. On the other hand, wildflower strips can also be planted in a more targeted approach to help save endangered pollinating insect species of interest, in which case seed mixtures should primarily contain the preferred host plants of the species in question.

The connectivity of several wildflower strips with each other and with wider landscape features is important for ensuring a viable network of natural and semi-natural habitats within the landscape, benefiting the movement of wildlife. However, it can take several years of vegetation growth and development before the ecological benefits are significantly realised.

As well as getting the species composition and landscape context right, wildflower strips should also be designed to ensure seasonal continuity of floral resources for as long as possible in any given year. This can be achieved by sowing a mixture of annuals, biennials and perennials at sufficiently high densities and with differing flowering times. By sowing side by side species with different individual flowering periods, the resulting strip will be available for use by a range of arthropods for a greater portion of the year. This design also benefits insects with long colony cycles, such as bumblebees. Examples of ideal dominant plants within wildflower strips include species in the Fabaceae (legume family), which are especially favoured by pollinating bees, or the Apiaceae (carrot family), which are good for attracting pest-controlling arthropods. However, care should be taken not to choose dominant species that are highly prone to slug herbivory, such as Centaurea cyanus or Papaver rhoeas, since this may compromise the growth and development of the strip.

Another consideration is the economic practicality of planting and maintaining these strips at the farmer's end. In addition to the investment in the initial seed mixture for sowing, farmers also need to account for the cost of maintaining the strip in subsequent years to prevent invasion by vigorous grass species or other perennial weeds. Sometimes, the selective sowing of food plants favoured by the larvae of target species is conducive to fulfilling conservation goals, although this has not usually been considered in designing agri-environment schemes.
**Manifest expression**
Manifest expression:
A manifest expression is a programming language construct that a compiler can analyse to deduce which values it can take without having to execute the program. This information can enable compiler optimizations, in particular loop nest optimization, and parallelization through data dependency analysis. An expression is called manifest if it is computed only from outer loop counters and constants (a more formal definition is given below).
Manifest expression:
When all control flow for a loop or condition is regulated by manifest expressions, it is called a manifest loop or manifest condition, respectively.
Most practical applications of manifest expressions also require the expression to be integral and affine (or stepwise affine) in its variables.
Definition:
A manifest expression is a compile-time computable function which depends only on compile-time constants, manifest variable references, and loop counters of loops surrounding the expression. A manifest variable reference is itself defined as a variable reference with a single, unambiguous definition of its value, which is itself a manifest expression. The single, unambiguous definition is particularly relevant in procedural languages, where pointer analysis and/or data flow analysis is required to find the expression that defines the variable's value. If several defining expressions are possible (e.g., because the variable is assigned in a condition), the variable reference is not manifest.
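To make the definition concrete, the following minimal C sketch (the function and variable names are illustrative, not drawn from any particular compiler framework) contrasts manifest and non-manifest expressions:

```c
#include <stdio.h>

#define N 8                               /* compile-time constant */

/* 'data' and 'unknown' are hypothetical names used for illustration. */
static void example(int data[N][N], int unknown)
{
    for (int i = 0; i < N; i++) {         /* loop bound N is manifest */
        int offset = 2 * i + 1;           /* manifest: affine in the loop counter i,
                                             with a single unambiguous definition */
        for (int j = 0; j < offset; j++)  /* manifest loop: all control flow is
                                             governed by manifest expressions */
            data[i][j] = i + j;           /* affine manifest subscripts, amenable
                                             to data dependency analysis */

        int k = data[i][0] + unknown;     /* NOT manifest: depends on memory
                                             contents and a run-time parameter */
        if (k > 0)                        /* hence not a manifest condition */
            data[i][0] = 0;
    }
}

int main(void)
{
    int data[N][N] = {{0}};
    example(data, 3);
    printf("%d\n", data[2][1]);           /* prints 3 (set when i = 2, j = 1) */
    return 0;
}
```

Because every bound and subscript in the inner loop nest is affine in i and j, a loop nest optimizer can in principle analyse its data dependencies and reorder or parallelize it; the final condition on k, by contrast, blocks such reasoning.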
**Polyglycylation**
Polyglycylation:
Polyglycylation is a form of posttranslational modification of glutamate residues in the carboxyl-terminal region of tubulin in certain microtubules (e.g., axonemal). It was originally discovered in Paramecium and later shown in mammalian neurons as well.
**Samsung Galaxy Note Edge**
Samsung Galaxy Note Edge:
The Samsung Galaxy Note Edge is an Android phablet produced by Samsung Electronics. Unveiled during a Samsung press conference at IFA Berlin on September 3, 2014, alongside its sister, the Galaxy Note 4, it is distinguished by a display that curves across the right side of the device, which can be used as a sidebar to display application shortcuts, a virtual camera shutter button, notifications, and other information.
Development and release:
At the 2013 Consumer Electronics Show, Samsung presented "Youm", concept prototypes for smartphones incorporating flexible displays. One prototype had a screen curved along the right edge of the phone, while the other had a screen curved around the bottom. Samsung explained that the additional "strip" could be used to display additional information alongside apps, such as notifications or a news ticker. The Youm concept surfaced as part of the Galaxy Note Edge, which was unveiled alongside the Galaxy Note 4 on September 3, 2014. Samsung strategist Justin Denison explained that the company liked to take risks in its products, going on to say that "We're not a company that does one-offs [..] We like to do things big and get behind it."
Specifications:
Hardware and design: The Galaxy Note Edge is similar in design to the Galaxy Note 4 (itself an evolution of the Galaxy Note 3), with a metallic frame and a plastic leather-effect rear cover. The device features either an Exynos 5 Octa 5433 (South Korean version) or Qualcomm Snapdragon 805 (international version) system-on-chip, 3 GB of RAM, and 32 or 64 GB of expandable storage. As with other Galaxy Note series devices, it includes an S Pen stylus, which can be used for pen input, drawing, and handwriting, and which received a small upgrade for the Note Edge. Like other recent Samsung flagship devices, it also includes a heart rate sensor and a fingerprint scanner. The Galaxy Note Edge features a 5.6-inch "Quad HD+" Super AMOLED display, which contains an additional 160-pixel-wide column that wraps around the side of the device on a curve. The device includes a 16-megapixel rear camera with a back-illuminated sensor, optical image stabilization, and 4K video recording, and a 3.7-megapixel front-facing camera.
Specifications:
Software: The Galaxy Note Edge ships with Android 4.4.4 "KitKat" and Samsung's TouchWiz interface and software suite, similar to that of the Note 4. The curved edge of the screen serves as a sidebar for various purposes: it can display different panels, including shortcuts to frequent applications, notifications, news, stocks, sports, social networks, playback controls for the music and video players, camera controls, data usage, and minigames. Tools are also available through the panel, including a ruler, stopwatch, timer, voice recorder, and flashlight button. A software development kit is available for developers to code panels; additional panels can be obtained through Galaxy Apps. The "Night Clock" mode allows the edge screen to display a digital clock during a pre-determined timeframe while the phone is not in use. Because AMOLED displays render black by not turning on pixels at all, this mode does not significantly consume battery power, but software limits it to 12 hours at a time.
Variants:
Countries:
- Europe: SM-N915FY
- Global: SM-N915F
- Korea: N915K/N915L/N915S
- Singapore, Australia, Spain: N915G
- Japan: N915D
Carriers:
- AT&T: N915A
- T-Mobile: N915T
The Note Edge was shipped to Germany after more than 120,000 people voted for it in an online poll conducted by Samsung. A "Premium Edition" with the model number "N915FZKYDBT" was launched soon after, with more accessories in the box (flip cover, memory card, "display cleaner", and an additional brochure with usage tips), as well as an extended warranty.
**Open carry in the United States**
Open carry in the United States:
In the United States, open carry refers to the practice of visibly carrying a firearm in public places, as distinguished from concealed carry, where firearms cannot be seen by the casual observer. To "carry" in this context indicates that the firearm is kept readily accessible on the person, within a holster or attached to a sling. Carrying a firearm directly in the hands, particularly in a firing position or combat stance, is known as "brandishing" and may constitute a serious crime, but that is not the mode of "carrying" discussed in this article.
Open carry in the United States:
The practice of open carry, where gun owners openly carry firearms while they go about their daily business, has seen an increase in the United States in recent years, and is a hotly debated topic in gun politics. This has been marked by a number of organized events intended to increase the visibility of open carry and public awareness about the practice. Proponents of open carry point to history and statistics, noting that criminals usually conceal their weapons, in contrast to the law-abiding citizens who display their weapons. As of 2022, almost all US states allow for open carry either without a permit or with a permit/license.
Open carry in the United States:
The gun rights community has become supportive of the practice, while gun control groups are generally opposed.
Terminology:
Open carry: The act of publicly carrying a firearm on one's person in plain sight.
Plain sight: Broadly defined as not being hidden from common observation; the definition varies somewhat from state to state. Some states specify that open carry occurs when the weapon is "partially visible", while other jurisdictions require the weapon to be "fully visible" to be considered carried openly.
Loaded weapon: The definition varies from state to state; depending on state law, a weapon may be considered "loaded" under differing criteria.
Preemption: In the context of open carry, the act of a state legislature passing laws which limit or eliminate the ability of local governments to regulate the possession or carrying of firearms.
Prohibited persons: People who are prohibited by law from carrying a firearm. Typical examples are felons, those convicted of a misdemeanor crime of domestic violence, those found to be addicted to alcohol or drugs, those who have been involuntarily committed to a mental institution, and those who have been dishonorably discharged from the United States Armed Forces.
Terminology:
Categories of law: Today in the United States, laws regarding the open carry of firearms vary from state to state. The categories are defined as follows.
Permissive open carry states: The state has passed full preemption of all firearms laws, with few exceptions. It does not prohibit open carry for non-prohibited citizens and does not require a permit or license to carry firearms openly. Open carry is lawful on foot; a permit may or may not be required to carry in a motor vehicle, depending on the state.
Terminology:
Permissive open carry with local restriction states: The state generally allows open carry without a license, but additional restrictions may apply to non-license holders, such as local restrictions or additional restricted locations or modes of carry. Some states exempt license holders from local restrictions, while others do not.
Terminology:
Licensed open carry states: The state has passed full preemption of all firearms laws, with few exceptions. It permits open carry of a handgun by all non-prohibited citizens once they have been issued a permit or license. Open carry of a handgun is lawful on foot and in a motor vehicle. In practice, however, some of these states with may-issue licensing laws can be regarded as non-permissive for open carry, as issuing authorities rarely or never grant licenses to ordinary citizens.
Terminology:
Anomalous open carry states: Open carry is generally prohibited except under special circumstances, or in unincorporated areas of counties where population densities are below statutorily defined thresholds and local authorities have enacted legislation to allow open carry with a permit (California). Thus, some local jurisdictions may permit open carry, while others may impose varying degrees of restriction or prohibit open carry entirely.
Terminology:
Non-permissive open carry states: Open carry of a handgun is not lawful, or is lawful only under such a limited set of circumstances that public carry is effectively prohibited. Exceptions may include hunting or traveling to and from hunting locations, carry on property controlled by the person carrying, or lawful self-defense. Additionally, some states with may-issue licensing laws are non-permissive in practice when issuing authorities are highly restrictive in issuing licenses that allow open carry.
Jurisdictions in the United States:
In the United States, the laws concerning open carry vary by state and sometimes by municipality. The following chart lists state policies for openly carrying a loaded handgun in public.
Constitutional implications:
Open carry has never been authoritatively addressed by the United States Supreme Court. The most obvious predicate for a federal right to do so would arise under the Second Amendment to the United States Constitution.
Constitutional implications:
In the majority opinion in District of Columbia v. Heller (2008), Justice Antonin Scalia wrote, concerning the elements of the Second Amendment: "We find that they guarantee the individual right to possess and carry weapons in case of confrontation." However, Scalia continued, "Like most rights, the Second Amendment right is not unlimited. It is not a right to keep and carry any weapon whatsoever in any manner whatsoever and for whatever purpose."

Forty-five states' constitutions recognize and secure the right to keep and bear arms in some form, and none of them prohibit the open carrying of firearms. Five state constitutions provide that the state legislature may regulate the manner of keeping or bearing arms, and advocates argue that none rule out open carry specifically. Nine states' constitutions indicate that the concealed carrying of firearms may be regulated and/or prohibited by the state legislature. Open carry advocates argue that, by exclusion, open carrying of arms may not be legislatively controlled in these states. Section 1.7 of Kentucky's constitution empowers the state only to enact laws prohibiting concealed carry. Open carry without a permit is a specifically protected right in the Kentucky constitution, as noted in Holland v. Commonwealth (1956): "We observe, via obiter dicta, that although a person is granted the right to carry a weapon openly, a severe penalty is imposed for carrying it concealed. If the gun is worn outside the jacket or shirt in full view, no one may question the wearer's right so to do." Concealed carry was held not to be protected by the state constitution. The North Carolina Supreme Court ruled in State v. Kerner that requiring any form of permit, fee or license to open carry a firearm off one's own premises is unconstitutional under Article 1, Section 30 of the state's constitution, which reads: "A well regulated militia being necessary to the security of a free State, the right of the people to keep and bear arms shall not be infringed..." The court also held that concealed carry was not a right protected by the state's constitution and thus could be regulated by law.

In July 2018, a divided panel of the United States Court of Appeals for the Ninth Circuit found that Hawaii's licensing requirement for open carry violated the Second Amendment. That ruling was vacated on February 8, 2019, and the case was scheduled to be heard en banc.
Constitutional implications:
Grounds for detention Several courts have ruled that the mere carriage of a firearm, where it is allowable by law, is not reasonable suspicion to detain someone; however, some courts have ruled that simply being armed is grounds for seizure.
Constitutional implications:
United States Supreme Court: In Terry v. Ohio (1968), the Supreme Court ruled that police may stop a person only if they have a reasonable suspicion that the person has committed or is about to commit a crime, and may frisk the suspect for weapons if they have reasonable suspicion that the suspect is armed and dangerous. In an analogous case, the Supreme Court ruled in Delaware v. Prouse (1979) that stopping automobiles for no reason other than to check the driver's license and registration violates the Fourth Amendment. In Florida v. J. L. (2000), the court ruled that a police officer may not legally stop and frisk anyone based solely on an anonymous tip that simply describes that person's location and appearance, without information as to any illegal conduct the person might be planning.
Constitutional implications:
Other federal courts: Unless otherwise stated, the following courts ruled that carrying a firearm is not reasonable suspicion to detain someone, or that being armed is not a justifiable reason to frisk someone:
- The Third Circuit, in United States v. Ubiles (2000), United States v. Navedo (2012), and United States v. Lewis (2012).
- The Fourth Circuit, in United States v. Black (2013); however, United States v. Robinson (2017) found that a suspect stopped for a lawful reason may be frisked if the officer reasonably suspects them to be armed, regardless of whether the weapon is legally possessed.
- The Sixth Circuit, in Northrup v. City of Toledo Police Department (2015).
- The Seventh Circuit, in United States v. Leo (2015).
- The Ninth Circuit, in United States v. Brown (2019); however, United States v. Orman (2007) held that a police officer seizing a firearm for safety did not violate the Fourth Amendment.
- The Tenth Circuit, in United States v. King (1993) and United States v. Roch (1993); however, United States v. Rodriguez (2013) found that the presence of a handgun in a waistband is grounds for reasonable suspicion of unlawfully carrying a deadly weapon, justifying a stop and frisk.
- The District Court of New Mexico, in St. John v. McColley (2009).
Constitutional implications:
State courts: Unless otherwise stated, the following courts ruled that carrying a firearm is not reasonable suspicion to detain someone, or that being armed is not a justifiable reason to frisk someone:
- The Arizona Supreme Court, in State v. Serna (2014).
- The Florida Fourth District Court of Appeal, in Regalado v. State (2009).
- The Idaho Supreme Court, in State v. Bishop (2009).
- The Illinois Supreme Court, in People v. Granados (2002); however, People v. Colyar (2013) found that the presence of a bullet justified officers searching for weapons for their safety.
- The Indiana Supreme Court, in Pinner v. Indiana (2017).
- The Kentucky Court of Appeals, in Pulley v. Commonwealth (2016).
- The New Jersey Superior Court, Appellate Division, in State v. Goree (2000).
- The New Mexico Supreme Court, in State v. Vandenberg and Swanson (2003), holding that frisking for weapons was reasonable.
- The Pennsylvania Supreme Court, in Commonwealth v. Hawkins (1997) and Commonwealth v. Hicks (2019).
- The Tennessee Supreme Court, in State v. Williamson (2012).
Demonstrations and events:
On May 2, 1967, openly armed members of the Black Panther Party marched on the California State Capitol in opposition to the then-proposed Mulford Act, which prohibited the public carrying of loaded firearms. After the armed march into the state capitol building, the law was quickly enacted.
On June 16, 2000, the New Black Panther Party, along with the National Black United Front and the New Black Muslim Movement, protested the death sentence of Gary Graham by openly carrying shotguns and rifles at the Texas Republican convention in Houston, Texas.
In 2003, gun rights supporters in Ohio used a succession of open carry "Defense Walks" attempting to persuade the governor to sign concealed carry legislation into law.
Demonstrations and events:
The legality of open carry of certain firearms in Virginia was reaffirmed after several 2004 incidents in which citizens openly carrying firearms were confronted by local law enforcement. The Virginia law prohibits the open carry, in certain localities, of any semiautomatic weapon holding more than 20 rounds or a shotgun that holds more than seven rounds, without a concealed carry permit.
Demonstrations and events:
In 2008, Clachelle and Kevin Jensen, of Utah, were photographed together openly carrying handguns in the Salt Lake City International Airport near a "no weapons" sign. The photo led to an article in The Salt Lake Tribune about the airport's preempted "no weapons" signs. After a few weeks, the city removed the signs.
Demonstrations and events:
In 2008, Zachary Mead was detained in Richmond County, Georgia by law enforcement for openly carrying a firearm. The weapon was seized. The organization GeorgiaCarry.org filed a lawsuit on behalf of Mead. The court declared that the seizure was a violation of the Fourth Amendment to the United States Constitution, awarded court costs and attorney fees to Mead, and dismissed the remaining charges with prejudice.
Demonstrations and events:
In 2008, Brad Krause of West Allis, Wisconsin was arrested by police for alleged disorderly conduct for openly carrying a firearm while planting a tree on his property. A court later acquitted him of the disorderly conduct charge, observing in the process that in Wisconsin there is no law dealing with the issue of unconcealed weapons.
Demonstrations and events:
On September 11, 2008, Meleanie Hain carried a handgun in plain view in a holster at her 5-year-old daughter's soccer game in Lebanon County, Pennsylvania, leading the county sheriff, Michael DeLeo, to revoke her weapons permit; Judge Robert Eby, himself a gun owner and concealed carry permit holder, later reinstated it. Hain launched a million-dollar lawsuit against Sheriff DeLeo, claiming he had infringed her Second Amendment rights. About a year later, her estranged husband shot her dead in her home before killing himself. Police took several handguns, a shotgun, two rifles and several hundred rounds of ammunition from the Hains' home. Meleanie Hain's handgun was found fully loaded in a backpack near the front door of the home, according to police. A second legal dispute with the sheriff continued after her death, but a federal judge dismissed that lawsuit on November 3, 2010.
Demonstrations and events:
On April 20, 2009, Wisconsin Attorney General J.B. Van Hollen issued a memorandum to district attorneys stating that open carry was legal and did not in and of itself warrant a charge of disorderly conduct. Milwaukee police chief Ed Flynn, however, instructed his officers to take down anyone with a firearm and take the gun away until they could make sure the situation was safe, and then determine whether the individual could legally carry it.
Demonstrations and events:
On May 31, 2009, Washington OpenCarry members held an open carry protest picnic at Silverdale's Waterfront Park, a county park. Attendees openly carried handguns in violation of posted regulations prohibiting firearms at the park. Washington state law allows the open carrying of firearms and specifically preempts local ordinances more restrictive than the state's, such as the one then on the books in Kitsap County. Shortly after the protest, Kitsap County commissioners voted to amend KCC 10.12.080 to remove the language banning firearms carried in county parks. KCC 10.12.080 was amended on July 27, 2009, though as of May 31, 2012 most of the signs in the county still read that firearms are prohibited, despite numerous attempts to get the county to update them. The amendment, as it reads in meeting minutes from July 2009: "It is unlawful to shoot, fire or explode any firearm, firecracker, fireworks, torpedo or explosive of any kind or to carry any firearm or to shoot or fire any air gun, BB gun, bow and arrow or use any slingshot in any park, except the park director may authorize archery, slinging, fireworks and firing of small bore arms at designated times and places suitable for their use."
Demonstrations and events:
In July 2009, an open carry event organized by OpenCarry.org took place at Pacific Beach, San Diego, California, where citizens carrying unloaded pistols and revolvers were subjected to Section 12031(e) inspections of their firearms on demand by police officers. The officers were evidently well briefed on the details of the law, which at the time allowed Californians to openly carry only unloaded guns while permitting the carrying of loaded magazines and speedloaders.
Demonstrations and events:
On August 11, 2009, William Kostric, a New Hampshire resident, Free State Project participant, and former member of We The People's Arizona Chapter, was seen carrying a loaded handgun openly in a holster while participating in a rally outside a town hall meeting hosted by President Barack Obama at Portsmouth High School in New Hampshire. Kostric never attempted to enter the school, but rather stood some distance away on the private property of a nearby church, where he had permission to be. He held up a sign that read "It's Time to Water the Tree of Liberty!".
Demonstrations and events:
On August 16, 2009, "about a dozen" people were noted by police to be openly carrying firearms at a health care rally across the street from a Veterans of Foreign Wars convention in the Phoenix Convention Center, where President Barack Obama was giving an address. While the Secret Service was "very much aware" of these individuals, Arizona law does not prohibit open carry. No crimes were committed by these protesters, and no arrests were made. In an interview with Fox News, commentator James Wesley Rawles characterized the Phoenix protesters as "merely exercising a pre-existing right". When asked about open carry "but ... without a permit?", Rawles opined, "We have a permit – it is called the Second Amendment."

In May 2010, Jesus C. Gonzalez was arrested and charged with homicide in a shooting which occurred while he was carrying a handgun. Gonzalez had been involved in two prior arrests for disorderly conduct based on his open carry practice, and had filed a lawsuit claiming Fourth and Fourteenth Amendment violations. His suit and appeal were both dismissed. Gonzalez was convicted on lesser charges, including reckless homicide.
Demonstrations and events:
The Starbucks coffee chain has been the target of several boycotts arranged by gun control groups to protest Starbucks' policy of allowing concealed and open carry in its stores where permitted by local laws. A counter-"buycott" was proposed for Valentine's Day 2012 to show gun owners' support for Starbucks, with the use of two-dollar bills representing Second Amendment rights. On September 17, 2013, Howard Schultz, the CEO of Starbucks, published a letter asking customers to refrain from bringing guns into his stores.
Demonstrations and events:
On February 5, 2017, two self-admitted open carry political activists, James Craig Baker and Brandon Vreeland, walked into a Dearborn, Michigan police station to protest what they felt was unfair profiling during an earlier traffic stop, which had resulted from a 911 call after Baker had been seen near local businesses armed and dressed in tactical gear. When Baker entered the police station he was carrying an assault rifle at the "low ready" position, meaning it could be raised and fired at a moment's notice, with a fully loaded and inserted magazine. Baker was also wearing tactical gear and a ski mask. Vreeland was not armed, but was wearing body armor and carrying a camera on a tripod. The police on duty immediately sounded an alarm that there was a possible active shooter in the lobby, and the two activists were approached from all sides by police with guns drawn. Baker was ordered to set down his rifle and get on the floor, which he did after a few minor protests. Vreeland, however, angrily confronted the police, stating he was not armed and only had a camera. He refused to comply with officer instructions and was tackled after several warnings, to which he replied "fuck you". The two men were arrested and initially charged with misdemeanors, including brandishing a weapon and disturbing the peace. These charges were later upgraded to felonies in court, partially due to a subsequent investigation which revealed e-mails and text messages between the two men in which they discussed deliberately provoking police, staging incidents to incite lethal-force situations, and how to elude capture should police attempt to arrest them. Vreeland was eventually convicted on one count of carrying a concealed weapon, one count of felony resisting and opposing an officer, and one count of disturbing the peace. Baker was convicted on a single count of carrying a concealed weapon. Vreeland received a prison sentence of nine months to five years and began serving it at the Charles Egeler Reception and Guidance Center in the fall of 2017. Baker received time in county jail and three years' probation.
Demonstrations and events:
On September 1, 2017, the state of Texas legalized the open carrying of blades longer than 5.5 inches in public.

On April 30, 2020, hundreds of protesters, many of them carrying guns, descended on the Michigan Capitol to oppose Gov. Gretchen Whitmer's extension of the state's stay-at-home order by another two weeks, to May 15. Protesters had demonstrated against stay-at-home orders at capitols in dozens of states, but the protests in Michigan were the starkest example yet of protesters actually entering a capitol, while the legislature was in session, and bringing weapons with them. Michigan is an open-carry state, however, and there are no rules barring people from bringing guns into the Capitol.
Diversity in state laws:
As of 2018, 45 states allowed open carry, but the details vary widely.
Diversity in state laws:
Four states, the territory of the U.S. Virgin Islands and the District of Columbia fully prohibit the open carry of handguns. Twenty-five states permit open carry of a handgun without requiring any permit or license. Fifteen states require some form of permit (often the same permit that allows concealed carry), and the remaining five states, though not prohibiting the practice in general, do not preempt local laws or law-enforcement policies, and/or significantly restrict the practice, for example within the boundaries of incorporated urban areas. Illinois allows open carry on private property only.

On October 11, 2011, California Governor Jerry Brown signed into law a bill making it a "misdemeanor to openly carry an exposed and unloaded handgun in public or in a vehicle." This does not apply to the open carry of rifles or long guns, or to persons in rural areas where permitted by local ordinance.
Diversity in state laws:
On November 1, 2011, Wisconsin explicitly acknowledged the legality of open carry by amending its disorderly conduct statute (Wis. Stat. 947.01). A new subsection 2 states: "Unless other facts and circumstances that indicate a criminal or malicious intent on the part of the person apply, a person is not in violation of, and may not be charged with a violation of, this section for loading, carrying, or going armed with a firearm, without regard to whether the firearm is loaded or is concealed or openly carried." On May 15, 2012, Oklahoma Governor Mary Fallin signed Senate Bill 1733, an amendment to the Oklahoma Self Defense Act allowing people with Oklahoma concealed weapons permits to open carry if they so choose. The law took effect November 1, 2012. "Under the measure, businesses may continue to prohibit firearms to be carried on their premises. SB 1733 prohibits carrying firearms on properties owned or leased by the city, state or federal government, at corrections facilities, in schools or college campuses, liquor stores and at sports arenas during sporting events."
Federal Gun Free School Zones Act:
The Federal Gun-Free School Zones Act of 1990 limits where a person may legally carry a firearm by generally prohibiting carry within 1,000 feet of the property line of any K–12 school in the nation, with private property excluded. In a 1995 case, United States v. Lopez, the Supreme Court declared the Act unconstitutional (on federalism, not Second Amendment, grounds), but it was reenacted in slightly different form in 1996.
Public opinion:
According to joint polls published by CNN and SSRS, a majority of Americans support stricter gun control laws: 64% support stricter laws, while 36% oppose them. In addition, 54% of Americans believe that such laws would reduce the number of firearm deaths, and 58% believe that the government can take effective action to prevent mass shootings.
Public opinion:
A 2023 Fox News poll found that registered voters overwhelmingly supported a wide variety of gun restrictions: 87% said they support requiring criminal background checks for all gun buyers; 81% support raising the minimum age to buy guns to 21; 80% support requiring mental health checks for all gun purchasers; 80% said police should be allowed to take guns away from people considered a danger to themselves or others; and 61% supported banning assault rifles and semi-automatic weapons.
**HyperOs HyperDrive**
HyperOs HyperDrive:
HyperDrive (HD) is a series of RAM-based solid-state drives invented by Pascal Bancsi (who designed the HyperDrive II architecture), an employee of Accelerated Logic B.V. (which became Accelerated Logic Ltd. and is now a German company). Bancsi partnered with the British company HyperOs Systems, which manufactured the retail product. The HyperDrive interfaces with, and is recognized by, computer systems as a standard hard drive.
HyperDrive I:
Development of the device, originally called 'Accelerator', began in 1999. It is an IDE device supporting PIO mode 1 transfers and includes 128 MiB of SRAM.
HyperDrive II:
After the SRAM-based Accelerator, the design switched to SDRAM and a 5.25-inch form factor, allowing the company to build the Accelerator with capacities of 128 MiB to 4 GiB. It had a maximum random access time of 0.15 ms. SDRAM was chosen over flash because of its speed and reliability advantages over flash memory. Later generations used a 3.5-inch form factor and supported UDMA 33 transfer speeds, with a maximum capacity of 14 GiB. It used an Atmel controller and included a battery backup mechanism. Future plans included support for UDMA 66 and a Fibre Channel interface.
HyperDrive III:
It uses a Parallel ATA (PATA) (max 100 MB/s) or Serial ATA (SATA) interface. For the first time, memory capacity could be changed via memory slots. It uses ECC DDR SDRAM (max 2 GiB per DIMM). Maximum capacity started at 6 DIMMs (12 GiB) and was later raised to 8 DIMMs (16 GiB).
Non-volatile storage is achieved using an integral 7.2 V, 1250 mAh backup battery (rated for 160 minutes), an external adapter, or HyperOs software.
It uses a Xilinx Spartan FPGA and an Atmel controller array.
The circuit board was produced by DCE.
HyperDrive 4:
It supports both SATA and PATA interfaces (PATA native), with interface speeds up to 133 MB/s. It uses ECC DDR SDRAM (max 2 GiB per DIMM).
Maximum capacity is 8 DIMMs (16 GiB, PC1600–PC3200). DIMMs of different sizes can be mixed, but only if DIMMs of the same capacity are used within a bank (4 DIMMs per bank).
It supports non-volatile memory backup using an optional 2.5-inch PATA drive, HyperOs software (which swaps RAM contents to a different drive), or a backup battery (5 Ah or 10 Ah).
The drive is rated at a 125 MB/s data rate and 44k I/O operations per second.
Revision 3: It supports registered ECC SDRAM, with capacity up to 2 GiB per DIMM on the 16 GiB version and 4 GiB per DIMM on the 32 GiB version. Seek time was reduced from 40 microseconds to 1100 nanoseconds for reads and 250 nanoseconds for writes. It also reduced power consumption by 30% and employed gold-plated DIMM sockets.
HyperDrive 4 Rack-mounted: A rack-mounted version of the device holding at most four drives.
HyperDrive 4 RAID Systems: An external-case version with four HyperDrive 4 drives. It uses a PCI-X or PCI Express x8 interface, with a Silicon Image 3124 or Areca 12XX SATA RAID card connecting each drive.
HyperDrive 5:
It uses a SATA interface with two SATA2 ports, and DDR2 SDRAM (max 8 GiB per DIMM). The manufacturer claimed it had built-in ECC so that it no longer required ECC memory; however, if ECC memory is not used, ECC is performed at the expense of storage capacity. Memory speed is not rated; the manufacturer recommends Kingston ValueRAM (PC2-3200 to PC2-6400).
HyperDrive 5:
HyperDrive 5 includes a 7.4 V, 2400 mAh lithium battery for flash backup and a CompactFlash card slot, along with an external DC adapter, for non-volatile storage.
HyperDrive 5:
The drive is rated at 175 MB/s read, 145 MB/s write, and 40k (later 65k) I/O operations per second when using only one of the SATA2 links; the rated speed using dual SATA2 links is not given by the manufacturer. When using both SATA2 links, the physical drive can be configured as a two-device RAID 0 array, each device presenting half of the maximum capacity. In RAID 0 mode, read and write speeds are reported to be more than twice those claimed by the manufacturer. The drive controller switched to a Taiwanese ASIC, replacing the Xilinx Spartan FPGA/Atmel array.
HyperDrive 5:
HyperDrive 5 is also sold as the ACard ANS-9010 outside the UK.
HyperDrive 5M: A cheaper version of the HyperDrive 5, with only one SATA2 port and six DDR2 slots, so that memory is limited to 48 GiB. Performance and features are the same as the HyperDrive 5 when using only one SATA2 link.
HyperDrive 5M is sold as the ACard ANS-9010B outside the UK.
Awards:
The HyperDrive 4 (16 GiB) won Custom PC's "Crazy But Cool" award.
**CHMP1A**
CHMP1A:
Charged multivesicular body protein 1a is a protein that in humans is encoded by the CHMP1A gene.
Function:
This gene encodes a member of the CHMP/Chmp family of proteins which are involved in multivesicular body sorting of proteins to the interiors of lysosomes. The initial prediction of the protein sequence encoded by this gene suggested that the encoded protein was a metallopeptidase. The nomenclature has been updated recently to reflect the correct biological function of this encoded protein.
Interactions:
CHMP1A has been shown to interact with VPS4A.
**Moviola**
Moviola:
A Moviola is a device that allows a film editor to view a film while editing. It was the first machine for motion picture editing when it was invented by Iwan Serrurier in 1924.
History:
Iwan Serrurier's original 1917 concept for the Moviola was as a home movie projector to be sold to the general public. The name was derived from the name "Victrola" since Serrurier thought his invention would do for home movie viewing what the Victrola did for home music listening. However, since the machine cost $600 in 1920 (equivalent to $8,800 in 2022), very few sold. An editor at Douglas Fairbanks Studios suggested that Iwan should adapt the device for use by film editors. Serrurier did this and the Moviola as an editing device was born in 1924, with the first Moviola being sold to Douglas Fairbanks himself.
History:
Many studios quickly adopted the Moviola including Universal Studios, Warner Bros., Charles Chaplin Studios, Buster Keaton Productions, Mary Pickford, Mack Sennett, and Metro-Goldwyn-Mayer. The need for portable editing equipment during World War II greatly expanded the market for Moviola's products, as did the advent of sound, 65mm and 70mm film.
History:
Iwan Serrurier's son, Mark Serrurier, took over his father's company in 1946. In 1966, Mark sold Moviola Co. to Magnasync Corporation (a subsidiary of Craig Corporation) of North Hollywood for $3 million. Combining the names, the new company was called Magnasync/Moviola Corp. President L. S. Wayman immediately ordered a tripling of production, and the new owners recouped their investment in less than two years.
History:
Wayman retired in 1981, and Moviola Co. was sold to J&R Film Co., Inc. in 1984.
The Moviola company is still in existence and is located in Hollywood, where part of the facility is located on one of the original Moviola factory floors.
Usage:
The Moviola allowed editors to study individual shots in their cutting rooms and thus determine more precisely where the best cut point might be. Vertically oriented Moviolas were the standard for film editing in the United States until the 1970s, when horizontal flatbed editing systems became more common.
Usage:
Nevertheless, Moviolas continued to be used, albeit to a diminishing extent, into the 21st century. Michael Kahn received an Academy Award nomination for Best Film Editing in 2005 for his work on Steven Spielberg's Munich, which he edited with a Moviola, although by this time almost all editors had switched over to digital film editors (Kahn himself switched to digital editing for his later work).
Awards:
Mark Serrurier accepted an Academy Award of Merit (Oscar statue) for himself and his father for the Moviola in 1979.
To MARK SERRURIER for the progressive development of the Moviola from the 1924 invention of his father, Iwan Serrurier, to the present Series 20 sophisticated film editing equipment.
There is a star on the Hollywood Walk of Fame for Mark Serrurier because of the Moviola's contribution to motion pictures.
**Average crossing number**
Average crossing number:
In the mathematical subject of knot theory, the average crossing number of a knot is the result of averaging over all directions the number of crossings in a knot diagram of the knot obtained by projection onto the plane orthogonal to the direction. The average crossing number is often seen in the context of physical knot theory.
Definition:
More precisely, if K is a smooth knot, then for almost every unit vector v giving the direction, orthogonal projection onto the plane perpendicular to v gives a knot diagram, and we can compute the crossing number, denoted n(v). The average crossing number is then defined as the integral over the unit sphere:

$$\frac{1}{4\pi} \int_{S^2} n(v)\, dA,$$

where dA is the area form on the 2-sphere. The integral makes sense because the set of directions where projection does not give a knot diagram has measure zero, and n(v) is locally constant when defined.
Alternative formulation:
A less intuitive but computationally useful definition is an integral similar to the Gauss linking integral.
A derivation analogous to that of the linking integral will be given. Let K be a knot, parameterized by $f \colon S^1 \to \mathbb{R}^3$.
Then define the map from the torus to the 2-sphere, $g \colon S^1 \times S^1 \to S^2$, by

$$g(s,t) = \frac{f(s) - f(t)}{\lvert f(s) - f(t) \rvert}.$$
Alternative formulation:
(Technically, one needs to avoid the diagonal: points where s = t.) We want to count the number of times a point (direction) is covered by g; for a generic direction, this counts the number of crossings in the knot diagram given by projecting along that direction. Using the degree of the map, as in the linking integral, would count the crossings with sign, giving the writhe. Instead, use g to pull back the area form on $S^2$ to the torus $T^2 = S^1 \times S^1$ and integrate its absolute value, avoiding the sign issue. The resulting integral is

$$\frac{1}{4\pi} \int_{T^2} \frac{\lvert (f'(s) \times f'(t)) \cdot (f(s) - f(t)) \rvert}{\lvert f(s) - f(t) \rvert^{3}}\, ds\, dt.$$
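As a rough numerical illustration (not from the source), this torus integral can be approximated by a Riemann sum that skips the diagonal s = t. The C sketch below assumes a standard trefoil parameterization and an arbitrary grid resolution; both are illustrative choices, not canonical ones.

```c
#include <math.h>
#include <stdio.h>

/* An illustrative trefoil parameterization f : S^1 -> R^3. */
static void f(double t, double p[3]) {
    p[0] = (2.0 + cos(3.0 * t)) * cos(2.0 * t);
    p[1] = (2.0 + cos(3.0 * t)) * sin(2.0 * t);
    p[2] = sin(3.0 * t);
}

/* Its analytic derivative f'(t). */
static void df(double t, double p[3]) {
    p[0] = -3.0 * sin(3.0 * t) * cos(2.0 * t) - 2.0 * (2.0 + cos(3.0 * t)) * sin(2.0 * t);
    p[1] = -3.0 * sin(3.0 * t) * sin(2.0 * t) + 2.0 * (2.0 + cos(3.0 * t)) * cos(2.0 * t);
    p[2] = 3.0 * cos(3.0 * t);
}

int main(void) {
    const double PI = acos(-1.0);
    const int n = 400;                 /* grid resolution: an assumption */
    const double h = 2.0 * PI / n;
    double sum = 0.0;

    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            if (i == j) continue;      /* avoid the diagonal s = t */
            double fs[3], ft[3], us[3], ut[3], d[3], c[3];
            f(i * h, fs);  f(j * h, ft);
            df(i * h, us); df(j * h, ut);
            for (int k = 0; k < 3; k++) d[k] = fs[k] - ft[k];
            c[0] = us[1] * ut[2] - us[2] * ut[1];   /* c = f'(s) x f'(t) */
            c[1] = us[2] * ut[0] - us[0] * ut[2];
            c[2] = us[0] * ut[1] - us[1] * ut[0];
            double dot = c[0] * d[0] + c[1] * d[1] + c[2] * d[2];
            double r = sqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2]);
            /* integrand |(f'(s) x f'(t)) . (f(s)-f(t))| / |f(s)-f(t)|^3, weighted by ds dt */
            sum += fabs(dot) / (r * r * r) * h * h;
        }
    }
    printf("estimated average crossing number: %f\n", sum / (4.0 * PI));
    return 0;
}
```

Since the integrand vanishes on the diagonal, skipping the i == j terms introduces no error in the limit; increasing n tightens the estimate.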
**Phosphosilicate glass**
Phosphosilicate glass:
Phosphosilicate glass, commonly referred to by the acronym PSG, is a silicate glass commonly used in semiconductor device fabrication for intermetal layers, i.e., insulating layers deposited between successively higher metal or conducting layers, owing to its effect in gettering alkali ions. Another common type of phosphosilicate glass is borophosphosilicate glass (BPSG).
Soda-lime phosphosilicate glasses also form the basis for bioactive glasses (e.g. Bioglass), a family of materials which chemically convert to mineralised bone (hydroxy-carbonate-apatite) in physiological fluid.
Bismuth doped phosphosilicate glasses are being explored for use as the active gain medium in fiber lasers for fiber-optic communication.
**CA/EZTEST**
CA/EZTEST:
CA-EZTEST was a CICS interactive test/debug software package distributed by Computer Associates. Originally called EZTEST/CICS, it was produced by Capex Corporation of Phoenix, Arizona, with assistance from Ken Dakin of England. The product provided source-level test and debugging features for computer programs written in the COBOL, PL/I and Assembler (BAL) languages, complementing Capex's existing COBOL optimizer product.
Competition:
CA-EZTEST initially competed with three rival products:
- Intertest, originally from On-line Software International, based in the United States. In 1991, Computer Associates International, Inc. acquired On-line Software, renamed the product CA-INTERTEST, and stopped selling CA-EZTEST.
- OLIVER (CICS interactive test/debug), from Advanced Programming Techniques in the UK.
- XPEDITER, from Compuware Corporation, which acquired the OLIVER product in 1994.
Early critical role:
Between them, these three products provided much-needed third-party system software support for IBM's "flagship" teleprocessing product, CICS, which survived for more than 20 years as a strategic product without any memory protection of its own. A single "rogue" application program (frequently via a buffer overflow) could accidentally overwrite data almost anywhere in the address space, causing downtime for an entire teleprocessing system that might support thousands of remote terminals. This was despite the fact that much of the world's banking and other commerce relied heavily on CICS for secure transaction processing between 1970 and the early 1990s. Determining which application program caused the problem was often insurmountable, and frequently the system would be restarted rather than spending many hours investigating very large (and initially unformatted) core dumps, which required expert system programming support and knowledge.
Early integrated testing environment:
Additionally, the product (and its competitors) provided an integrated testing environment that IBM did not supply for early versions of CICS, a need only partially met by IBM's later embedded testing tool, the Execution Diagnostic Facility (EDF), which helped only newer "command level" programmers and provided no protection.
Supported operating systems:
The following operating systems were supported: IBM MVS, IBM MVS/XA, and IBM VSE (except for XPEDITER).
**Unnoticed Art**
Unnoticed Art:
Unnoticed Art is the name of an organisation and a series of initiatives relating to a form of performance art executed in a non-theatrical context. The term 'Unnoticed Art' was originally used by Dutch artist Frans van Lent as the basic concept for the first Unnoticed Art Festival, which took place in Haarlem (the Netherlands) in 2014. The first festival took place over two days, during which thirty volunteers executed performance scores created by thirty-five artists. An iteration of the first festival was commissioned by Zeppelin University in Friedrichshafen, Germany, to form part of its 2014 Sommerfest, the university's principal annual public engagement event; this used a selection of works devised for the Haarlem event. The second Unnoticed Art Festival took place in Nijmegen in 2016. The book Unnoticed Art was published in January 2015; it contains an artistic statement by Frans van Lent and a catalogue of the Unnoticed Art Festival performances.
Unnoticed Art:
The 'Unnoticed Art' concept was further developed into a blog titled UnnoticedArt.com. The purpose of this blog is to present a wide variety of art works that relate to the artistic attitude of the field of Unnoticed Art.
In addition, TheConceptBank.org was initiated as a follow-up to the first Unnoticed Art Festival: a free, openly accessible online database of performative concepts. Like the festival, TheConceptBank.org is based on the separation of concept creation (by the artist) and execution (by visitors to the website). The website was launched in May 2014.
Another derivation of the 'Unnoticed Art' concept, The ParallelShow, is a series of impromptu performances by occasional collaborations of performance art practitioners. It began on 7 July 2015 as a one-off event at the Kunsthal in Rotterdam, the Netherlands. This first ParallelShow was a cooperation of three Dutch artists: Ieke Trinks, Ienke Kastelein and Frans van Lent.
Unnoticed Art:
The ParallelShow takes place unexpectedly at and around public exhibitions in art venues. It is never announced, and no invitations are ever sent. Since the first show, The ParallelShow has also been initiated in nine other locations: 23 October 2015: at the Naturalis Biodiversity Center in Leiden, Netherlands; 4 December 2015: at the M-Museum in Leuven, Belgium; 17 January 2016: at the Tate Britain, London, UK; 11 February 2016: at the Art Rotterdam art fair, Rotterdam, Netherlands; 28 May 2016: at the archaeological sites of Delphi, Greece; 5 June 2016: at the Huis van Gijn, Dordrecht, Netherlands; 18 September 2016: at the Institut Valencià d'Art Modern (IVAM), Valencia, Spain; 6 November 2016: at the Stasi Museum, Berlin, Germany; 8 January 2017: at the Met Cloisters, in New York City, New York, USA. The ParallelShow never leaves any physical traces of its occurrence.
Unnoticed Art:
The ParallelShow book was published by Jap Sam Books, in the Netherlands, in March 2018.
**Nuclear utilization target selection**
Nuclear utilization target selection:
Nuclear utilization target selection (NUTS) is a hypothesis regarding the use of nuclear weapons often contrasted with mutually assured destruction (MAD). NUTS theory at its most basic level asserts that it is possible for a limited nuclear exchange to occur and that nuclear weapons are simply one more rung on the ladder of escalation pioneered by Herman Kahn. This leads to a number of other conclusions regarding the potential uses of and responses to nuclear weapons.
Counterforce strikes:
A counterforce strike consists of an attack on enemy nuclear weapons meant to destroy them before they can be used. A viable first strike capability would require the ability to launch a 100-percent-effective (or nearly so) counterforce attack. Such an attack is made more difficult by systems such as early warning radars which allow the possibility for rapid recognition and response to a nuclear attack and by systems such as submarine-launched ballistic missiles or road-mobile nuclear missiles (such as the Soviet SS-20) which make nuclear weapons harder to locate and target.
Counterforce strikes:
Since a limited nuclear war is a viable option for a NUTS theorist, the power to unleash such attacks holds a great deal of appeal. However, establishing such a capability is very expensive. A counterforce weapon requires a much more accurate warhead than a countervalue weapon, as it must be guaranteed to detonate very close to its target, which drastically increases relative costs.
Limited countervalue strikes:
Some NUTS theorists hold that a mutually assured destruction-type deterrent is not credible in cases of a small attack, such as one carried out on a single city, as it is suicidal. In such a case, an overwhelming nuclear response would destroy every enemy city and thus every potential hostage that could be used to influence the attacker's behavior. This would free up the attacker to launch further attacks and remove any chance for the attacked nation to bargain. A country adhering to a NUTS-style war plan would likely respond to such an attack with a limited attack on one or several enemy cities.
Missile defense:
Since NUTS theory assumes the possibility of a winnable nuclear war, the contention of many MAD theorists that missile defense systems should be abandoned as a destabilizing influence is generally not accepted by NUTS theorists. For NUTS theorists, a missile defense system would be a positive force in that it would protect against a limited nuclear attack. Additionally, such a system would increase the odds of success for a counterforce attack by assuring that if some targets escaped the initial attack, the incoming missiles could be intercepted. But protection against a limited attack means that the opponent has an incentive to launch a larger-scale attack, against which the defense is likely to be ineffective. Additionally, an increased possibility of success of counterforce attacks means that the opponent has an incentive to launch a preventive attack, which increases the risk of a large-scale response to misinterpreted signals.
NUTS and US nuclear strategy:
NUTS theory can be seen in the US adoption of a number of first-strike weapons, such as the Trident II and Minuteman III nuclear missiles, which have extremely low circular error probable (CEP) values: about 90 meters for the former and 120 meters for the latter. These weapons are accurate enough to almost certainly destroy a missile silo if it is targeted.
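To see why CEP dominates the economics of counterforce, one commonly used single-shot kill probability approximation is P = 1 - 0.5^((LR/CEP)^2), where LR is the lethal radius of the warhead against the hardened target. The sketch below applies it to the CEP figures cited above; the 250 m lethal radius is an invented placeholder, not a claim about any real weapon:

```python
# Sketch of the standard single-shot kill probability approximation
# P = 1 - 0.5 ** ((LR / CEP) ** 2). The lethal radius used here is a
# hypothetical placeholder value, not sourced data.
def kill_probability(lethal_radius_m: float, cep_m: float) -> float:
    return 1 - 0.5 ** ((lethal_radius_m / cep_m) ** 2)

for cep in (90, 120):                 # CEP figures cited in the text
    p = kill_probability(250, cep)    # assume a 250 m lethal radius
    print(f"CEP {cep} m -> single-shot kill probability ~ {p:.3f}")
```

Under these illustrative numbers the probabilities come out around 0.99 and 0.95, consistent with the "almost certainly destroy" characterization, and they fall off sharply as CEP grows, which is why counterforce accuracy is so expensive to buy.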
NUTS and US nuclear strategy:
Additionally, the US has proceeded with a number of programs which improve its strategic situation in a nuclear confrontation. The Stealth bomber has the capacity to carry a large number of stealthy cruise missiles, which could be nuclear-tipped, and due to its low probability of detection and long range would be an excellent weapon with which to deliver a first strike. During the late 1970s and the 1980s, the Pentagon began to adopt strategies for limited nuclear options to make it possible to control escalation and reduce the risk of all-out nuclear war, hence accepting NUTS. In 1980, President Jimmy Carter signed Presidential Directive 59, which endorsed the NUTS strategic posture committed to fighting and winning a nuclear war, and accepted escalation dominance and flexible response. The Soviets, however, were skeptical of limited options or the possibility of controlling escalation. While Soviet deterrence doctrine posited massive responses to any nuclear use ("all against any"), military officials considered the possibility of proportionate responses to a limited US attack, although they "doubted that nuclear war could remain limited for long." Like several other nuclear powers, but unlike China and India, the United States has never made a "no first use" pledge, maintaining that pledging not to use nuclear weapons before an opponent would undermine its deterrent. NATO plans for war with the USSR called for the use of tactical nuclear weapons in order to counter Soviet numerical superiority.
NUTS and US nuclear strategy:
Rather than making extensive preparations for battlefield nuclear combat in Central Europe, the Soviet General Staff emphasized conventional military operations, believing that they had an advantage there. "The Soviet military leadership believed that conventional superiority provided the Warsaw Pact with the means to approximate the effects of nuclear weapons and achieve victory in Europe without resort to those weapons." In criticising US policy on nuclear weapons as contradictory, leftist philosopher Slavoj Zizek has suggested that NUTS is the policy of the US with respect to Iran and North Korea, while its policy with respect to Russia and China is one of mutually assured destruction (MAD).
**Gazelle (web browser)**
Gazelle (web browser):
Gazelle was a research web browser project by Microsoft Research, first announced in early 2009. The central notion of the project was to apply operating system (OS) principles to browser construction. In particular, the browser had a secure kernel, modeled after an OS kernel, and various web sources ran as separate "principals" above it, similar to user-space processes in an OS. The goal of doing this was to prevent bad code from one web source from affecting the rendering or processing of code from other web sources. Browser plugins were also managed as principals. Gazelle had a predecessor project, MashupOS, but with Gazelle the emphasis was on a more secure browser. By the July 2009 announcement of ChromeOS, Gazelle was seen as a possible alternative Microsoft architectural approach compared to Google's direction. That is, rather than the OS being reduced in role to that of a browser, the browser would be strengthened using OS principles. The Gazelle project became dormant, and ServiceOS arose as a replacement project also related to browser architectures. But by 2015, the ServiceOS project was also dormant, after Microsoft decided that its new flagship browser would be Edge.
**Epigenetics of human development**
Epigenetics of human development:
Epigenetics of human development is the study of how epigenetics (heritable characteristics that do not involve changes in DNA sequence) affects human development.
Epigenetics of human development:
Development before birth, including gametogenesis, embryogenesis, and fetal development, is the process of body development from the formation of the gametes, through their combination into a zygote, to the point at which the fully developed organism exits the uterus. Epigenetic processes are vital to fetal development due to the need to differentiate from a single cell into a variety of cell types that are arranged in such a way as to produce cohesive tissues, organs, and systems.
Epigenetics of human development:
Epigenetic modifications such as methylation of CpGs (a dinucleotide composed of a 2'-deoxycytidine and a 2'-deoxyguanosine) and histone tail modifications allow activation or repression of certain genes within a cell, in order to create cell memory either in favor of using a gene or of not using it. These modifications can either originate from the parental DNA or be added to the gene by various proteins, and can contribute to differentiation. Processes that alter the epigenetic profile of a gene include production of activating or repressing protein complexes, usage of non-coding RNAs to guide proteins capable of modification, and the propagation of a signal by having protein complexes attract either another protein complex or more DNA in order to modify other locations in the gene.
Definitions:
Gene expression refers to the transcription of a gene, but the RNA produced does not necessarily have to encode a protein product; transcription may produce so-called noncoding RNA products such as tRNA and regulatory RNA. Repression may refer to the decrease in transcription of a gene or to inhibition of a protein. Proteins are often inhibited by binding of the active site or by a conformational change so that the active site can no longer bind. Through such alterations, proteins like transcription factors may bind DNA less, or a protein may be inhibited so that it becomes a block in a signaling cascade, and certain genes will then not be induced to be expressed. Repression can occur pre- or post-transcriptionally. Methylating the DNA or modifying the histones that the DNA wraps around is one example that commonly leads to repression. Pre-transcriptional repression can also occur by altering the proteins that allow transcription to occur, namely the polymerase complex. Proteins can sit on the DNA strand and serve as a kind of block to polymerase proteins, halting them from transcribing. Post-transcriptional repression generally refers to the degradation of the RNA product or to binding of the RNA by proteins so that it cannot be translated or carry out its function.
Definitions:
DNA methylation in humans and most other mammals refers to the methylation of a CpG. Methylation of these cytosines is common in DNA and, in sufficient numbers, can prevent proteins from attaching to the DNA by obscuring the DNA sequence that matches the protein's binding domain. Regions in which CpG dinucleotides are clustered and largely unmethylated are called CpG islands, and these often serve as promoters, or transcription start sites.
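The "clustered CpGs" criterion can be made operational. A minimal sketch, using the commonly cited Gardiner-Garden thresholds (window of at least 200 bp, GC content of at least 50%, observed/expected CpG ratio of at least 0.6); the thresholds and example sequences are illustrative defaults, not the only definition in use:

```python
# Minimal sketch of a CpG-island test on a fixed window, using commonly
# cited thresholds: GC content >= 50% and observed/expected CpG >= 0.6
# over windows of >= 200 bp.
def looks_like_cpg_island(window: str) -> bool:
    window = window.upper()
    n = len(window)
    if n < 200:
        return False
    c, g = window.count("C"), window.count("G")
    gc_content = (c + g) / n
    observed_cpg = window.count("CG")        # CpG dinucleotides
    expected_cpg = (c * g) / n               # expectation if C and G were independent
    if expected_cpg == 0:
        return False
    return gc_content >= 0.5 and observed_cpg / expected_cpg >= 0.6

# Example: a synthetic CpG-rich stretch versus an AT-rich one.
print(looks_like_cpg_island("CG" * 150))   # True
print(looks_like_cpg_island("AT" * 150))   # False
```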
Definitions:
Histone modifications are modifications made to the amino acid residues in the tails of the histones that either restrict or boost the histone's ability to bind to DNA. Histone modifications also act as sites for proteins to attach, which then further alter the gene's expression. Two common histone modifications are acetylation and methylation. Acetylation occurs when a protein adds an acetyl group to a lysine in a histone tail, restricting the ability of the histone to bind to DNA. This acetylation is commonly found on lysine 9 of histone 3, notated as H3K9ac, and results in the DNA being more open to transcription due to the decreased binding to the histone. Methylation, meanwhile, occurs when a protein adds a methyl group to a lysine in a histone tail, although more than one methyl group can be added at a time. Two sites for histone methylation are common in current studies: trimethylation of lysine 4 on histone 3 (H3K4me3), which causes activation, and trimethylation of lysine 27 on histone 3 (H3K27me3), which causes repression.
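The shorthand used above (H3K9ac, H3K4me3, H3K27me3) is regular enough to decode mechanically; the small sketch below parses it purely to spell the notation out, and covers only the acetylation/methylation marks discussed here:

```python
import re

# Decode histone-mark shorthand such as "H3K27me3": histone H3, lysine (K)
# at position 27, trimethylation. This only illustrates the nomenclature.
MARK = re.compile(r"^H(?P<histone>\d)(?P<residue>[A-Z])(?P<pos>\d+)"
                  r"(?P<mod>ac|me)(?P<count>\d?)$")

def describe(mark: str) -> str:
    m = MARK.match(mark)
    if not m:
        raise ValueError(f"unrecognized mark: {mark}")
    mod = {"ac": "acetylation", "me": "methylation"}[m["mod"]]
    times = f" (x{m['count']})" if m["count"] else ""
    return (f"histone H{m['histone']}, residue {m['residue']}{m['pos']}: "
            f"{mod}{times}")

for mark in ("H3K9ac", "H3K4me3", "H3K27me3"):
    print(mark, "->", describe(mark))
```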
Definitions:
Cis-acting elements refer to mechanisms that act on the same chromosome they come from, usually either in the same region from which they were produced or in a region very close to this origin. For example, a long non-coding RNA produced at one location may silence the same or a nearby location on the same chromosome. Trans-acting elements, however, are gene products from one location that act on a different chromosome, either the other chromosome in the pair or a chromosome from a separate pair. An example of this is a long non-coding RNA from the HoxC gene cluster silencing HoxD genes located on a different chromosome, from a different chromosomal pair.
Hox gene regulation:
Hox genes are genes in humans that regulate body plan development. Humans have four sets of Hox genes, numbering 39 genes altogether, all of which aid in the differentiation of cells by location. Hox genes are activated early in the development of the embryo in order to plan the development of the differing structures of the body. They also show colinearity with the body plan, meaning that the order of the Hox genes on the chromosome corresponds to the order of their expression along the anterior-posterior axis. This colinearity allows for spatial and temporal activation of genes in order to produce a proper body structure. Hox genes are regulated using a variety of epigenetic mechanisms, including the use of lncRNAs such as HOTAIR, the Trithorax (TrxG) group of proteins, and the Polycomb (PcG) group of proteins.
Hox gene regulation:
In Hox genes, long non-coding RNAs allow for communication between different Hox genes and different sets of Hox genes in order to coordinate body plan in the cell. One example of a long non-coding RNA that coordinates between Hox gene sets is HOTAIR, which is an RNA transcript produced in the HoxC cassette that represses transcription of a large number of genes in the HoxD cassette. Thus, HOTAIR regulates the HoxD genes from the HoxC genes in order to coordinate transcription of the Hox genes.
Hox gene regulation:
Role of PcG and TrxG: The PcG and TrxG genes produce protein complexes responsible for maintaining the activation and repression patterns in the Hox genes initially formed by the maternal factors. PcG genes are responsible for repressing chromatin in Hox clusters meant to be inactivated in the differentiated cell. PcG proteins repress genes by forming polycomb repressive complexes, such as PRC1 and PRC2. PRC2 complexes repress by trimethylating histone 3 at lysine 27 through the histone methyltransferases Ezh2 and Ezh1. PRC2 is recruited by many elements, including CpG islands. PRC1, meanwhile, ubiquitinates H2AK119 using Ring1A/B's E3 ligase activity, causing stalling of RNA polymerase II. Furthermore, Ring1B, a member of the PRC1 complex, also represses Hox genes with Mel18, Mph2, and RYBP by compacting the chromatin into higher-order structures. TrxG genes, meanwhile, are responsible for activating genes by trimethylating lysine 4 of the histone H3 tail. Genes with similar transcriptional marks tend to cluster together in distinct structures. In bivalent domains, both of these marks are present, indicating genes that are silenced but can be rapidly activated when necessary.
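The interplay just described reduces to a small decision table: H3K4me3 alone marks active chromatin, H3K27me3 alone marks repressed chromatin, and both together mark a bivalent (poised) domain. A didactic sketch of that table, not a complete model of Hox regulation:

```python
# Classify a gene's chromatin state from the two TrxG/PcG marks discussed
# above; a didactic reduction, not a complete model of Hox regulation.
def chromatin_state(marks: set[str]) -> str:
    has_active = "H3K4me3" in marks       # deposited by TrxG complexes
    has_repressive = "H3K27me3" in marks  # deposited by PRC2
    if has_active and has_repressive:
        return "bivalent (silenced but poised for rapid activation)"
    if has_active:
        return "active"
    if has_repressive:
        return "repressed"
    return "unmarked (state determined by other factors)"

print(chromatin_state({"H3K4me3", "H3K27me3"}))  # bivalent
```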
Hox gene regulation:
Role of ncRNAs: 231 ncRNAs are present in the four Hox gene cassettes. Similarly to the Hox protein-coding genes, the ncRNAs show differential expression according to the cell's location on the anterior-posterior and proximal-distal axes. These lncRNAs can act either on the set of genes in which they are present or on a separate gene set within the Hox genes. HOTTIP is a long non-coding RNA that assists in regulating the HoxA genes. It is produced from the 5' end of the HoxA gene cassette and activates HoxA genes. Loops within the chromosome bring HOTTIP closer to its targets; this allows HOTTIP to bind to WDR5/MLL protein complexes to aid in trimethylation of lysine 4 of histone 3. HOTAIR is a long non-coding RNA that assists in regulating the HoxD genes. It is produced in the HoxC cassette, near the divide between expressed and unexpressed genes, and represses HoxD genes. HOTAIR acts by attaching to Suz12 in the PRC2 complex and then guiding this complex to the genes to be repressed. PRC2 then trimethylates lysine 27 of histone 3, repressing the gene of interest.
Barr body formation:
In female humans, a Barr body is the condensed, inactivated X chromosome found in every cell of the adult. Because females have two nearly identical X chromosomes, one of them must be silenced so that the expression levels of the genes on the X chromosome are of the proper dosage. Thus, males and females have the same level of X-chromosome expression, despite being born with one X for males and two for females. This is also why individuals with Klinefelter syndrome, a condition in which extra sex chromosomes are present in the body, have fewer symptoms than individuals with other types of aneuploidy, which are often fatal before birth.
Barr body formation:
The role of Xist: Inactivation of one of the X chromosomes is initiated by a long non-coding RNA called Xist. This lncRNA is expressed on the same chromosome it represses, known as working in cis. Recent research has shown that a repeat element in the RNA of Xist causes PRC2 to bind to the RNA. Another part of the RNA binds to the X chromosome, positioning PRC2 such that it can methylate various regions on the X chromosome. This methylation causes other factors like histone deacetylases (HDACs) to bind to the chromosome and propagate heterochromatin formation, even into active gene regions. This heterochromatin greatly reduces, if not completely silences, gene expression of the Barr body. Xist is continuously produced to maintain a condensed and silenced Barr body. In human cells with more than one X chromosome, two long non-coding RNAs are produced: Tsix is produced by one X chromosome, and Xist is produced by all of the other X chromosomes. Tsix is a long non-coding RNA that prevents repression of an X chromosome, while Xist is a long non-coding RNA that acts to repress and condense an entire X chromosome. The actions of Xist serve to create a Barr body in the cell.
Barr body formation:
Random early X-chromosome inactivation: In embryonic development, when the embryo is still composed of just a few cells, each cell will randomly choose an X chromosome to condense and silence. From then on, the daughter cells of that cell will always silence the same X chromosome as the parent cell from which they propagated. This creates what is known as the "mosaic effect," in which differential X-chromosome expression creates differing expression patterns throughout a single organism. This may or may not be evident in females, depending on how the genes of the X chromosomes affect phenotype. If the alleles for a gene are identical on both X chromosomes, there will be no visible difference between the cells that chose one X over the other. If the alleles differ for, say, coat color, then patches of one color and patches of the other color may be seen. In calico cats the mosaic pattern of X inactivation is easily seen because a gene affecting coat color is carried on the X, resulting in patches of color on the coat. The mosaic pattern of X inactivation may also determine how penetrant a disease is, if the disease allele is present on one X chromosome and not the other. The organism may have few cells in which the disease allele has escaped condensation, leading to little expression of the disease allele. This is referred to as skewed X-chromosome inactivation.
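Because the random choice is made once per early cell and is then inherited clonally, the degree of mosaicism or skewing follows directly from how many cells made the choice. The simulation below makes that concrete; the progenitor count, expansion factor, and seeds are arbitrary illustration values, not measured figures:

```python
import random

# Simulate random X-inactivation: each progenitor cell picks one X to
# silence, and all of its descendants inherit that choice. The progenitor
# and expansion numbers are arbitrary, for illustration only.
def simulate_mosaic(n_progenitors: int = 16, descendants_per_cell: int = 1000,
                    seed: int = 1) -> float:
    rng = random.Random(seed)
    choices = [rng.choice(["maternal X", "paternal X"])
               for _ in range(n_progenitors)]
    # Clonal expansion preserves each progenitor's choice.
    adult_tissue = [x for x in choices for _ in range(descendants_per_cell)]
    return adult_tissue.count("maternal X") / len(adult_tissue)

# With few progenitors, chance alone can produce skewed inactivation:
for seed in range(3):
    frac = simulate_mosaic(seed=seed)
    print(f"fraction of cells silencing the maternal X: {frac:.2f}")
```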
Imprinting:
Imprinting is defined as the differential expression of paternal and maternal alleles of a gene, due to epigenetic marks introduced onto the chromosome during the production of egg and sperm. These marks usually lead to differential expression of the specific sets of genes from the maternal and paternal chromosomes. Imprinting is carried out through many epigenetic mechanisms like methylation, histone modifications, rearrangement of higher order chromatin structure, non-coding RNAs, and interfering RNAs.
Imprinting:
Function: A single evolutionary purpose of imprinting is still unknown, since the mechanisms and effects seem to be so diverse. One hypothesis states that imprinting occurs in order to carry out the evolutionary goal of the parent, namely the differential partition of resources. The male seeks to provide maximum resources for his offspring so that his genes may be passed on successfully to the next generation, whereas the female must partition resources between all her offspring, and so must limit the resources given. Another hypothesis states that imprinting may help protect the female from ovarian trophoblastic disease and parthenogenesis. Trophoblastic disease occurs when a sperm fertilizes an egg with no nucleus and a cancer-like mass forms in the placenta. Parthenogenesis occurs when an unfertilized egg develops into a fully functional organism that is genetically identical to the parent, which is female in the case of animals, or of either sex in the case of plants. This does not occur naturally in mammals. In most animals, especially mammals, uniparental inheritance of chromosomes is often lethal or results in developmental abnormalities, sometimes physical but often cognitive. Other hypotheses point to the function of imprinting as a way of establishing the proper amount of expression, or functional haploidy, much like silencing the extra X chromosome in females (see the section on Barr bodies). Imprinting may help in the differentiation of cells by silencing pluripotency genes or other developmental genes. Supporting this hypothesis, imprinted genes have been shown to differ in their expression between tissue types in the same organism, pointing to divergent outcomes as a result of developmental events during embryogenesis. Regardless of whether there is a single purpose for imprinting, numerous studies have shown that a normal and functional organism cannot be made without the various imprinting mechanisms.
Imprinting:
Igf2 and H19: In mammals, imprinted genes are often clustered in the genome, probably because they share transcriptional regulators or regulatory regions that impact the expression of multiple genes. It is easier for an lncRNA to silence multiple genes if they are closer together, making silencing more efficient. In some cases, when a gene is transcribed it overlaps another region nearby or opposite (antisense) to it, often silencing it. In the case of the Igf2 and H19 genes, CTCF, a transcriptional repressor protein, is involved. CTCF binds to the unmethylated maternal ICR region but not the methylated paternal ICR region. The ICR is a shared control region of Igf2 and H19 that, when deleted, results in the loss of imprinting of these genes. CTCF then binds another region of the chromosome, creating a loop in which Igf2 is blocked from transcription but H19 is not, resulting in the maternal chromosome expressing H19 but not Igf2. CTCF has been shown to directly interact with Suz12, a subunit of PRC2, in order to silence the Igf2 promoter region through hypermethylation. Conversely, the paternal H19 promoter is highly methylated during embryogenesis so that Igf2 will not be silenced. Should CTCF fail to bind, H19 on the maternal chromosome has reduced expression and Igf2 is not silenced properly, resulting in biallelic expression. Mice have homologues of these genes but silence them in a different way, in which biallelic expression occurs and antisense RNA is then used to silence one of the genes.
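The parent-of-origin logic described here can be summarized as a simple rule: CTCF binds only the unmethylated (maternal) ICR, and its binding insulates Igf2 while leaving H19 expressed. A toy sketch assuming nothing beyond the text above (it deliberately ignores the additional H19 promoter methylation detail):

```python
# Toy reduction of the Igf2/H19 imprinting logic described above: CTCF
# binds only an unmethylated ICR, insulating Igf2 and leaving H19 active.
def igf2_h19_expression(icr_methylated: bool) -> dict[str, bool]:
    ctcf_bound = not icr_methylated          # CTCF cannot bind a methylated ICR
    return {
        "Igf2": not ctcf_bound,  # insulator loop blocks Igf2 when CTCF binds
        "H19": ctcf_bound,       # H19 expressed from the unmethylated allele
    }

print("maternal allele:", igf2_h19_expression(icr_methylated=False))
print("paternal allele:", igf2_h19_expression(icr_methylated=True))
```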
Imprinting:
Igf2r and Airn: Airn is an lncRNA used to silence Igf2r and other surrounding genes. In the mechanism that silences Igf2r, it is the transcription of the lncRNA Airn that silences the expression of Igf2r, as opposed to an active repression mechanism. Airn is the antisense gene of Igf2r, so when Airn is being transcribed, the transcriptional machinery may cover part or all of the promoter region of Igf2r, preventing RNA polymerase from binding to the promoter of Igf2r to initiate transcription. This mechanism is very efficient in that Igf2r is silenced by the transcription of Airn, while the RNA product silences other genes near Igf2r. The imprinting mechanisms described above work on the chromosome from which the Airn lncRNA is produced, but there are many other imprinted genes that work to silence genes on other chromosomes or to silence the similar allele(s) on the opposing chromosome of the same pair. Some imprinted genes code for regulatory RNA elements such as lncRNAs, small nucleolar RNAs, and microRNAs, so the expression of these genes results in the silencing of some other gene. From these examples, researchers have seen similar patterns in developmental genetics. It is imperative that many genes are silenced at the right time so that cells can maintain their identity and expressional integrity. Failure to do so often leads to symptoms such as cognitive abnormalities, if not fatality.
Imprinting:
Igf2r regulation: Airn is an lncRNA that regulates Igf2r expression. Igf2r is a gene encoding a receptor for insulin-like growth factor 2 that assists in lysosomal enzyme transport, activation of growth factors, and degradation of insulin-like growth factor 2. Airn itself is subject to imprinting, leading to Airn expression from the paternal allele but not the maternal allele. Airn acts by cis-acting silencing of the Igf2r region, its antisense transcript overlapping the Igf2r gene. Airn is silenced on the maternal allele through Igf2r transcription. In the brain, however, both Igf2r alleles are expressed because Airn-mediated silencing is repressed in neuronal cells.
Role of PRC2:
PRC2 (Polycomb Repressive Complex 2) is a complex of proteins that repress chromatin by histone methylation and by working to recruit other proteins that help further the repression of chromatin. The structure of this complex and group of mechanisms used by this complex are highly conserved across various eukaryotic species. Very few species have duplicates of these complexes in the genome beyond PRC1 and PRC2.
Role of lncRNAs:
Long non-coding RNAs, or lncRNAs, are RNA transcripts produced by RNA polymerase II that are not translated but participate in the regulation of gene expression. Long non-coding RNAs are used in various epigenetic processes in development, including the regulation of Hox genes, as well as in the creation of Barr bodies.
Role of lncRNAs:
Recruitment by lncRNA: Although PRC2 seems to have a very simple mechanism and works on many genes and chromosomes across the genome, it often has very specific binding regions and has been observed to localize to specific genes to cause their repression. Recent research shows that it probably does this through the binding of long non-coding RNAs (lncRNAs). Xist and Hox genes have both been studied extensively and display this mechanism very well. The lncRNA that the complex binds does not necessarily need to hybridize to the target region in order to silence it, as evidenced by the PRC2-lncRNA complex working on regions other than the region from which this complex was produced. However, the three-dimensional configuration of the RNA often gives the complex specific localization to the regions that the RNA was created to bind.
Role of lncRNAs:
Repressive function: PRC2 is a multi-protein complex composed of four major subunits (EZH1/2, SUZ12, EED, and RbAp46/48) and three variable subunits (AEBP2, JARID2, and PCLs). The three variable subunits are used for catalysis of enzymatic reactions or binding to specific regions, not for repression of genes or chromatin. Similar to a zinc finger, AEBP2 docks onto the major grooves of DNA to assist in binding. PRC2 is usually recruited by other proteins or lncRNAs and then catalyzes the trimethylation of lysine 27 of histone 3 tails (H3K27me3). This methylation is thought to cause repression by steric hindrance of RNA polymerase II. Even though the polymerase is not prevented from binding, the polymerase, after beginning transcription, will pause at H3K27me3 marks. The short transcript produced by the pausing of the polymerase often recruits regulatory complexes like PRC2. Thus, PRC2 represses by two mechanisms: by directly altering the structure of the chromatin through methylation, or by binding of transcripts.
Role of lncRNAs:
Phosphorylation: PRC2 has been shown in many experiments to be necessary for the proper formation of organs, starting with the maintenance of cellular differentiation and silencing of pluripotency genes. The exact mechanism in early embryogenesis that induces cells to differentiate is still unclear, but this mechanism has been closely linked to protein kinase A (PKA). Since the PRC2 complex has sites capable of being phosphorylated and behaves differently based on its level of phosphorylation, a logical hypothesis is that PKA affects PRC2 behavior and may phosphorylate PRC2, activating the protein and starting the methylation cascade that silences genes.
Role of lncRNAs:
Early cell differentiation: Experimentally, PRC2 has been shown to be highly enriched at the Hox genes and near developmental gene regulators, resulting in their methylation. Some time after the second or third cleavage event, PRC2 begins to bind to these developmental genes, even though they carry markers for highly active genes such as H3K9ac. This has been described as the "leaking" of PRC2 binding. Variable binding will cause some genes to be silenced before others, causing differentiation, but this is likely regulated by the organism. What causes the specificity of cell differentiation is still unknown, but some hypotheses say it largely has to do with the cell environment and the "awareness" of the cells of each other, considering that all cells at this stage contain identical genomes. The maintained cell lines after this differentiation event are largely dependent on PRC2. Without it, pluripotency genes will still be active, causing the cells to be unstable and to revert to a stem cell-like stage, from which the cell would have to undergo differentiation again in order to return to its normal state. Properly differentiated cells have silenced pluripotency genes.
Role of lncRNAs:
Maintenance of chromosome condensation: PRC2 is also highly associated with intergenic regions, subtelomeric regions, and long-terminal-repeat transposons. PRC2 acts to create heterochromatin in these regions through mechanisms similar to the one used to repress genes. Heterochromatin formation is imperative in these regions in order to regulate gene expression, maintain chromatin shape, prevent degradation of the chromosome, and reduce transposon "hopping" and spontaneous recombination. Thus, PRC2 is not only essential to the initiation of differentiation in development, but also for maintaining heterochromatin in all cell stages and for silencing genes and chromosome regions that would undo the cell differentiation that had already occurred or negatively affect the survival of the cell or the organism as a whole.
Role of lncRNAs:
Paraspeckle formation: Neat1 is an lncRNA which assists in forming the structure of nuclear bodies known as paraspeckles, which contain RNA-binding proteins. They control gene expression in the nucleus by retaining RNA in the nucleus that would otherwise alter gene expression. Paraspeckles form a significant portion of the corpus luteum of the ovary; in Neat1-deficient mice, corpus luteum formation is highly dysfunctional, causing ovarian defects and lowered progesterone levels that result in a lack of pregnancy. Neat1 assists in the regulation of luteal genes by preventing the protein Sfpq from inhibiting Nr5a1 and Sp1, allowing luteal genes to be regularly transcribed. Neat1 is regulated by histone deacetylases.
Role of lncRNAs:
Neuronal differentiation: Evf2 is an lncRNA that acts in forebrain neuronal differentiation during embryonic development. Evf2 is transcribed from an ultraconserved region, that is, a region very highly conserved among most vertebrate species, within the region from Dlx5 to Dlx6. This region is a target for SHH, a highly important regulator of central nervous system development. Evf2, when transcribed, recruits Dlx and Mecp2 through cis- and trans-acting mechanisms to the Dlx5/6 region in the ventral forebrain, causing GABAergic interneurons in the hippocampus to be formed. Evf2 acts by forming a complex with Dlx4 that increases Dlx4's transcription-activation ability and stability. Malat1, another neurological lncRNA, causes increased synaptic function and greater amounts of dendrite development. Increases of Malat1 increase neuronal density, while decreases of Malat1 decrease neuronal density. Malat1 acts by regulating the expression levels of Nlgn1 and SynCAM1, which are important genes in synapse formation.
Role of BRD4:
Bromodomain protein 4, or BRD4, is a protein which binds to acetylated tails of histones H3 and H4, aiding active gene transcription by decompaction using the bromodomain with the assistance of the acetylated K5 on H4. BRD4 is a member of the BET protein family, which includes other bromodomain-containing proteins and their homologues in other species. BRD4 functions in both gene activation and repression in cell cycle control and DNA replication. BRD4 works by binding to the acetylated tails and then attaching to other proteins, allowing those proteins to either activate or repress the histones next to BRD4. BRD4 aids in early cell development by activating pluripotency genes through interacting with Oct4 and recruiting P-TEFb (positive transcription elongation factor). By occupying pluripotency genes and X-chromosome-inactivation lncRNAs in their regulatory regions, BRD4 enhances the activation of these DNA regions. BRD4 enhances this activation by recruiting P-TEFb; if either BRD4 or P-TEFb is not functional, pluripotent gene transcription is blocked and the cell differentiates into a neuroectodermal cell. BRD4 can act as an epigenetic bookmark throughout the cell cycle, including after transcription, due to its association with P-TEFb, allowing BRD4 to enhance RNAPII. BRD4 also assists in the hyperacetylation of histones in the sperm nucleus. Histone hyperacetylation, the addition of acetyl groups to lysines on the amino tails of histones in an amount much larger than normal, is believed to assist in histone removal from the sperm nucleus.
Developmental diseases:
Examples of diseases caused by epigenetic dysfunction in development include: Beckwith-Wiedemann Syndrome, caused by abnormal methylation in the maternal ICR region, causing Igf2 overexpression. Symptoms include accelerated growth, abnormal growth (hemihyperplasia), abdominal wall defects, macroglossia, hypoglycemia, kidney abnormalities, and large abdominal organs.
Russell-Silver Syndrome, caused by an abnormal lack of methylation in the paternal ICR region, causing Igf2 repression. Symptoms include low birth weight, failure to thrive, hypoglycemia, distinctive head shape, abnormal growth, clinodactyly, and digestive issues.
Prader-Willi Syndrome, caused by loss of paternal expression in the imprinted region that also contains UBE3A. Symptoms include hypotonia, feeding difficulties, delayed development, poor growth, hyperphagia, obesity, learning disabilities, intellectual impairment, delayed or incomplete puberty, behavioral issues, sleep abnormalities, and distinctive features.
Angelman Syndrome, caused by loss of UBE3A expression in the maternal allele. Symptoms include delayed development, intellectual disability, ataxia, speech impairment, epilepsy, microcephaly, hyperactivity, excitable demeanor, scoliosis, and difficulty sleeping.
Alpha thalassemia X-linked syndrome, which can be caused by hypomethylation in certain repeat sequences. Symptoms include delayed development, hypotonia, distinctive facial features, and reduced hemoglobin production.
ICF syndrome, caused by a mutation in the DNA methyltransferase 3b gene or DNA hypomethylation, which causes lack of DNA methylation. Symptoms include intellectual impairment and alpha thalassemia.
Cancerous stem cells, caused by misregulation of polycomb proteins that often leads to blocking or activating developmental genes at the wrong time. Tumor suppressor genes may be silenced, and undifferentiated cells proliferate at an increased rate.
Developmental diseases:
There are many diseases that have been closely linked to Hox gene malfunctions, caused by genetic and epigenetic factors such as sequence mutations, overexpression, underexpression, and others. These diseases often involve missing or extra body parts like extra fingers, missing bones, missing auditory organs, limb deformations, etc. Some Hox gene defects have even been shown to cause early cancers. A full list of which genes cause which diseases can be seen in the reference "Human Hox gene disorders" by Quinonez.
**Visual system**
Visual system:
The visual system comprises the sensory organ (the eye) and parts of the central nervous system (the retina containing photoreceptor cells, the optic nerve, the optic tract and the visual cortex) which gives organisms the sense of sight (the ability to detect and process visible light) as well as enabling the formation of several non-image photo response functions. It detects and interprets information from the optical spectrum perceptible to that species to "build a representation" of the surrounding environment. The visual system carries out a number of complex tasks, including the reception of light and the formation of monocular neural representations, colour vision, the neural mechanisms underlying stereopsis and assessment of distances to and between objects, the identification of a particular object of interest, motion perception, the analysis and integration of visual information, pattern recognition, accurate motor coordination under visual guidance, and more. The neuropsychological side of visual information processing is known as visual perception, an abnormality of which is called visual impairment, and a complete absence of which is called blindness. Non-image forming visual functions, independent of visual perception, include (among others) the pupillary light reflex and circadian photoentrainment.
Visual system:
This article mostly describes the visual system of mammals, humans in particular, although other animals have similar visual systems (see bird vision, vision in fish, mollusc eye, and reptile vision).
System overview:
Mechanical: Together, the cornea and lens refract light into a small image and shine it on the retina. The retina transduces this image into electrical pulses using rods and cones. The optic nerve then carries these pulses through the optic canal. Upon reaching the optic chiasm, the nerve fibers decussate (left becomes right). The fibers then branch and terminate in three places.
System overview:
Neural: Most of the optic nerve fibers end in the lateral geniculate nucleus (LGN). Before the LGN forwards the pulses to V1, the primary visual cortex, it gauges the range of objects and tags every major object with a velocity tag. These tags predict object movement.
System overview:
The LGN also sends some fibers to V2 and V3. V1 performs edge detection to understand spatial organization (initially, 40 milliseconds in, focusing on even small spatial and color changes; then, 100 milliseconds in, upon receiving the translated LGN, V2, and V3 information, it also begins focusing on global organization). V1 also creates a bottom-up saliency map to guide attention or gaze shifts. V2 both forwards pulses to V1 (directly and via the pulvinar) and receives them. The pulvinar is responsible for saccades and visual attention. V2 serves much the same function as V1; however, it also handles illusory contours, determining depth by comparing left and right pulses (2D images), and distinguishing foreground from background. V2 connects to V1-V5.
System overview:
V3 helps process 'global motion' (direction and speed) of objects. V3 connects to V1 (weakly), V2, and the inferior temporal cortex. V4 recognizes simple shapes, and gets input from V1 (strong), V2, V3, the LGN, and the pulvinar. V5's outputs include V4 and its surrounding area, and the eye-movement motor cortices (frontal eye field and lateral intraparietal area).
System overview:
V5's functionality is similar to that of the other visual areas; however, it integrates local object motion into global motion on a complex level. V6 works in conjunction with V5 on motion analysis. V5 analyzes self-motion, whereas V6 analyzes the motion of objects relative to the background. V6's primary input is V1, with V5 additions. V6 houses the topographical map for vision. V6 outputs to the region directly around it (V6A). V6A has direct connections to arm-moving cortices, including the premotor cortex. The inferior temporal gyrus recognizes complex shapes, objects, and faces or, in conjunction with the hippocampus, creates new memories. The pretectal area comprises seven unique nuclei. The anterior, posterior and medial pretectal nuclei inhibit pain (indirectly), aid in REM sleep, and aid the accommodation reflex, respectively. The Edinger-Westphal nucleus moderates pupil dilation and aids (since it provides parasympathetic fibers) in convergence of the eyes and lens adjustment. The nuclei of the optic tract are involved in smooth pursuit eye movement and the accommodation reflex, as well as REM sleep.
System overview:
The suprachiasmatic nucleus is the region of the hypothalamus that halts production of melatonin (indirectly) at first light.
Functions:
Visual categorization: A major function of the visual system is to categorize visual objects. It has been shown that humans can perform categorization of briefly presented images in a fraction of a second. These experiments consisted of asking subjects to categorize images that do or do not contain animals. The results showed that humans were able to perform this task very well (with a success rate of more than 95%), but above all that a differential activity for the two categories of images could be observed by electroencephalography, showing that this differentiation emerges with a very short latency in neural activity. These results have been extended to several species, including primates. Different experimental protocols have shown, for example, that the motor response could be extremely fast (of the order of 120 ms) when the task was to perform a saccade. This speed of the visual cortex in primates is compatible with the latencies recorded at the neurophysiological level. The rapid propagation of visual information through the thalamus and then to the primary visual cortex takes about 45 ms in the macaque and about 60 ms in humans. This functioning of visual processing as a forward pass is most prominent in fast processing, and can be complemented with feedback loops from the higher areas to the sensory areas.
Structure:
The visual pathway, also called the optic pathway, comprises the following components:
the eye, especially the retina
the optic nerve
the optic chiasma
the optic tract
the lateral geniculate body
the optic radiation
the visual cortex
the visual association cortex
The pathway can be divided into anterior and posterior visual pathways. The anterior visual pathway refers to structures involved in vision before the lateral geniculate nucleus; the posterior visual pathway refers to structures after this point.
Structure:
Eye: Light entering the eye is refracted as it passes through the cornea. It then passes through the pupil (controlled by the iris) and is further refracted by the lens. The cornea and lens act together as a compound lens to project an inverted image onto the retina.
Structure:
Retina: The retina consists of many photoreceptor cells which contain particular protein molecules called opsins. In humans, two types of opsins are involved in conscious vision: rod opsins and cone opsins. (A third type, melanopsin in some retinal ganglion cells (RGCs), part of the body clock mechanism, is probably not involved in conscious vision, as these RGCs do not project to the lateral geniculate nucleus but to the pretectal olivary nucleus.) An opsin absorbs a photon (a particle of light) and transmits a signal to the cell through a signal transduction pathway, resulting in hyperpolarization of the photoreceptor.
Structure:
Rods and cones differ in function. Rods are found primarily in the periphery of the retina and are used to see at low levels of light. Each human eye contains 120 million rods. Cones are found primarily in the center (or fovea) of the retina. There are three types of cones, which differ in the wavelengths of light they absorb; they are usually called short or blue, middle or green, and long or red. Cones mediate day vision and can distinguish color and other features of the visual world at medium and high light levels. Cones are larger and much less numerous than rods (there are 6-7 million of them in each human eye). In the retina, the photoreceptors synapse directly onto bipolar cells, which in turn synapse onto ganglion cells of the outermost layer, which then conduct action potentials to the brain. A significant amount of visual processing arises from the patterns of communication between neurons in the retina. About 130 million photoreceptors absorb light, yet roughly 1.2 million axons of ganglion cells transmit information from the retina to the brain, a convergence of roughly a hundred photoreceptors per output axon on average. The processing in the retina includes the formation of center-surround receptive fields of bipolar and ganglion cells, as well as convergence and divergence from photoreceptor to bipolar cell. In addition, other neurons in the retina, particularly horizontal and amacrine cells, transmit information laterally (from a neuron in one layer to an adjacent neuron in the same layer), resulting in more complex receptive fields that can be either indifferent to color and sensitive to motion, or sensitive to color and indifferent to motion.
Structure:
Mechanism of generating visual signals: The retina adapts to changes in light through the use of the rods. In the dark, the chromophore retinal has a bent shape called cis-retinal (referring to a cis conformation in one of the double bonds). When light interacts with the retinal, it changes conformation to a straight form called trans-retinal and breaks away from the opsin. This is called bleaching because the purified rhodopsin changes from violet to colorless in the light. At baseline in the dark, the rhodopsin absorbs no light and releases glutamate, which inhibits the bipolar cell. This inhibits the release of neurotransmitters from the bipolar cells to the ganglion cell. When light is present, glutamate secretion ceases, no longer inhibiting the bipolar cell from releasing neurotransmitters to the ganglion cell, and therefore an image can be detected. The final result of all this processing is five different populations of ganglion cells that send visual (image-forming and non-image-forming) information to the brain: M cells, with large center-surround receptive fields that are sensitive to depth, indifferent to color, and rapidly adapt to a stimulus; P cells, with smaller center-surround receptive fields that are sensitive to color and shape; K cells, with very large center-only receptive fields that are sensitive to color and indifferent to shape or depth; another population that is intrinsically photosensitive; and a final population that is used for eye movements. A 2006 University of Pennsylvania study calculated the approximate bandwidth of human retinas to be about 8960 kilobits per second, whereas guinea pig retinas transfer at about 875 kilobits per second. In 2007, Zaidi and co-researchers on both sides of the Atlantic, studying patients without rods and cones, discovered that the novel photoreceptive ganglion cell in humans also has a role in conscious and unconscious visual perception. The peak spectral sensitivity was 481 nm. This shows that there are two pathways for sight in the retina: one based on classic photoreceptors (rods and cones) and the other, newly discovered, based on photoreceptive ganglion cells which act as rudimentary visual brightness detectors.
Structure:
Photochemistry: The functioning of a camera is often compared with the workings of the eye, mostly since both focus light from external objects in the field of view onto a light-sensitive medium. In the case of the camera, this medium is film or an electronic sensor; in the case of the eye, it is an array of visual receptors. With this simple geometrical similarity, based on the laws of optics, the eye functions as a transducer, as does a CCD camera.
Structure:
In the visual system, retinal, technically called retinene1 or "retinaldehyde", is a light-sensitive molecule found in the rods and cones of the retina. Retinal is the fundamental structure involved in the transduction of light into visual signals, i.e. nerve impulses in the ocular system of the central nervous system. In the presence of light, the retinal molecule changes configuration and as a result, a nerve impulse is generated.
Structure:
Optic nerve: Information about the image captured by the eye is transmitted to the brain along the optic nerve. Different populations of ganglion cells in the retina send information to the brain through the optic nerve. About 90% of the axons in the optic nerve go to the lateral geniculate nucleus in the thalamus. These axons originate from the M, P, and K ganglion cells in the retina (see above). This parallel processing is important for reconstructing the visual world; each type of information goes through a different route to perception. Another population sends information to the superior colliculus in the midbrain, which assists in controlling eye movements (saccades) as well as other motor responses.
Structure:
A final population of photosensitive ganglion cells, containing melanopsin for photosensitivity, sends information via the retinohypothalamic tract to the pretectum (pupillary reflex), to several structures involved in the control of circadian rhythms and sleep such as the suprachiasmatic nucleus (the biological clock), and to the ventrolateral preoptic nucleus (a region involved in sleep regulation). A recently discovered role for photoreceptive ganglion cells is that they mediate conscious and unconscious vision – acting as rudimentary visual brightness detectors as shown in rodless coneless eyes.
Structure:
Optic chiasm: The optic nerves from both eyes meet and cross at the optic chiasm, at the base of the hypothalamus of the brain. At this point, the information coming from both eyes is combined and then splits according to the visual field. The corresponding halves of the field of view (right and left) are sent to the left and right halves of the brain, respectively, to be processed. That is, the right side of the primary visual cortex deals with the left half of the field of view from both eyes, and similarly for the left brain. A small region in the center of the field of view is processed redundantly by both halves of the brain.
Structure:
Optic tract: Information from the right visual field (now on the left side of the brain) travels in the left optic tract. Information from the left visual field travels in the right optic tract. Each optic tract terminates in the lateral geniculate nucleus (LGN) in the thalamus.
Structure:
Lateral geniculate nucleus: The lateral geniculate nucleus (LGN) is a sensory relay nucleus in the thalamus of the brain. The LGN consists of six layers in humans and other primates starting from catarrhines, including cercopithecidae and apes. Layers 1, 4, and 6 correspond to information from the contralateral (crossed) fibers of the nasal retina (temporal visual field); layers 2, 3, and 5 correspond to information from the ipsilateral (uncrossed) fibers of the temporal retina (nasal visual field). Layer 1 contains M cells, which correspond to the M (magnocellular) cells of the optic nerve of the opposite eye and are concerned with depth or motion. Layers 4 and 6 of the LGN also connect to the opposite eye, but to the P (parvocellular) cells (color and edges) of the optic nerve. By contrast, layers 2, 3 and 5 of the LGN connect to the M cells and P cells of the optic nerve for the same side of the brain as the respective LGN. Spread out, the six layers of the LGN have the area of a credit card and about three times its thickness. The LGN is rolled up into two ellipsoids about the size and shape of two small birds' eggs. In between the six layers are smaller cells that receive information from the K cells (color) in the retina. The neurons of the LGN then relay the visual image to the primary visual cortex (V1), which is located at the back of the brain (posterior end) in the occipital lobe, in and close to the calcarine sulcus. The LGN is not just a simple relay station but is also a center for processing; it receives reciprocal input from the cortical and subcortical layers and reciprocal innervation from the visual cortex.
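The layer-by-layer wiring just described can be tabulated directly. The sketch below encodes it using the conventional split in which layers 1-2 are magnocellular and 3-6 parvocellular (an assumption used to resolve the text's compressed description of layers 2, 3, and 5), with koniocellular (K) input arriving between the layers:

```python
# Summary of the LGN wiring described above: layers 1, 4, 6 receive from
# the contralateral eye and layers 2, 3, 5 from the ipsilateral eye;
# conventionally, layers 1-2 are magnocellular and 3-6 parvocellular.
LGN_LAYERS = {
    1: ("magnocellular (M)", "contralateral eye"),
    2: ("magnocellular (M)", "ipsilateral eye"),
    3: ("parvocellular (P)", "ipsilateral eye"),
    4: ("parvocellular (P)", "contralateral eye"),
    5: ("parvocellular (P)", "ipsilateral eye"),
    6: ("parvocellular (P)", "contralateral eye"),
}

for layer, (cell_type, eye) in LGN_LAYERS.items():
    print(f"layer {layer}: {cell_type}, input from the {eye}")
```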
Structure:
Optic radiation: The optic radiations, one on each side of the brain, carry information from the thalamic lateral geniculate nucleus to layer 4 of the visual cortex. The P-layer neurons of the LGN relay to V1 layer 4Cβ. The M-layer neurons relay to V1 layer 4Cα. The K-layer neurons in the LGN relay to large neurons called blobs in layers 2 and 3 of V1. There is a direct correspondence from an angular position in the visual field of the eye, all the way through the optic tract, to a nerve position in V1 (up to V4, i.e. the primary visual areas; after that, the visual pathway is roughly separated into a ventral and a dorsal pathway).
Structure:
Visual cortex: The visual cortex is the largest system in the human brain and is responsible for processing the visual image. It lies at the rear of the brain, above the cerebellum. The region that receives information directly from the LGN is called the primary visual cortex (also called V1 and striate cortex). It creates a bottom-up saliency map of the visual field to guide attention or eye gaze to salient visual locations; hence selection of visual input information by attention starts at V1 along the visual pathway. Visual information then flows through a cortical hierarchy. These areas include V2, V3, V4 and area V5/MT (the exact connectivity depends on the species of the animal). These secondary visual areas (collectively termed the extrastriate visual cortex) process a wide variety of visual primitives. Neurons in V1 and V2 respond selectively to bars of specific orientations, or combinations of bars. These are believed to support edge and corner detection. Similarly, basic information about color and motion is processed here. Heider et al. (2002) found that neurons involving V1, V2, and V3 can detect stereoscopic illusory contours; they found that stereoscopic stimuli subtending up to 8° can activate these neurons.
Structure:
Visual association cortex: As visual information passes forward through the visual hierarchy, the complexity of the neural representations increases. Whereas a V1 neuron may respond selectively to a line segment of a particular orientation in a particular retinotopic location, neurons in the lateral occipital complex respond selectively to a complete object (e.g., a figure drawing), and neurons in the visual association cortex may respond selectively to human faces or to a particular object.
Structure:
Along with this increasing complexity of neural representation may come a level of specialization of processing into two distinct pathways: the dorsal stream and the ventral stream (the Two Streams hypothesis, first proposed by Ungerleider and Mishkin in 1982). The dorsal stream, commonly referred to as the "where" stream, is involved in spatial attention (covert and overt), and communicates with regions that control eye movements and hand movements. More recently, this area has been called the "how" stream to emphasize its role in guiding behaviors to spatial locations. The ventral stream, commonly referred to as the "what" stream, is involved in the recognition, identification and categorization of visual stimuli.
Structure:
However, there is still much debate about the degree of specialization within these two pathways, since they are in fact heavily interconnected. Horace Barlow proposed the efficient coding hypothesis in 1961 as a theoretical model of sensory coding in the brain. Limitations in the applicability of this theory in the primary visual cortex (V1) motivated the V1 saliency hypothesis, which holds that V1 creates a bottom-up saliency map to guide attention exogenously. With attentional selection as a center stage, vision is seen as composed of encoding, selection, and decoding stages. The default mode network is a network of brain regions that are active when an individual is awake and at rest. The visual system's default mode can be monitored during resting-state fMRI: Fox et al. (2005) found that "the human brain is intrinsically organized into dynamic, anticorrelated functional networks", in which the visual system switches from resting state to attention.
Structure:
In the parietal lobe, the lateral and ventral intraparietal cortex are involved in visual attention and saccadic eye movements. These regions lie in the intraparietal sulcus.
Development:
Infancy Newborn infants have limited color perception. One study found that 74% of newborns can distinguish red, 36% green, 25% yellow, and 14% blue. After one month, performance "improved somewhat." Infants' eyes do not yet have the ability to accommodate. Pediatricians can perform non-verbal testing to assess the visual acuity of a newborn, detect nearsightedness and astigmatism, and evaluate eye teaming and alignment. Visual acuity improves from about 20/400 at birth to approximately 20/25 at 6 months of age. All of this happens because the nerve cells in the retina and brain that control vision are not yet fully developed.
Development:
Childhood and adolescence Depth perception, focus, tracking and other aspects of vision continue to develop throughout early and middle childhood. Recent studies in the United States and Australia provide some evidence that the amount of time school-aged children spend outdoors, in natural light, may have some impact on whether they develop myopia. The condition tends to worsen somewhat through childhood and adolescence but stabilizes in adulthood. More pronounced myopia (nearsightedness) and astigmatism are thought to be inherited. Children with these conditions may need to wear glasses.
Development:
Adulthood Eyesight is often one of the first senses affected by aging. A number of changes occur with aging: Over time, the lens becomes yellowed and may eventually become brown, a condition known as brunescence or brunescent cataract. Although many factors contribute to yellowing, lifetime exposure to ultraviolet light and aging are two main causes.
The lens becomes less flexible, diminishing the ability to accommodate (presbyopia).
While a healthy adult pupil typically has a size range of 2–8 mm, with age the range gets smaller, trending towards a moderately small diameter.
On average, tear production declines with age. However, a number of age-related conditions can cause excessive tearing.
Other functions:
Balance Along with proprioception and vestibular function, the visual system plays an important role in the ability of an individual to control balance and maintain an upright posture. When these three inputs are isolated and balance is tested, vision has been found to be the most significant contributor to balance, playing a bigger role than either of the other two intrinsic mechanisms. The clarity with which an individual can see the environment, the size of the visual field, the individual's susceptibility to light and glare, and poor depth perception all play important roles in providing a feedback loop to the brain on the body's movement through the environment. Anything that affects any of these variables can have a negative effect on balance and on maintaining posture. This effect has been seen in research involving elderly subjects compared with young controls, in glaucoma patients compared with age-matched controls, in cataract patients before and after surgery, and even with something as simple as wearing safety goggles. Monocular vision (one-eyed vision) has also been shown to negatively impact balance, as seen in the previously referenced cataract and glaucoma studies, as well as in healthy children and adults. According to Pollock et al. (2010), stroke is the main cause of specific visual impairment, most frequently visual field loss (homonymous hemianopia, a visual field defect). Nevertheless, evidence for the efficacy of cost-effective interventions aimed at these visual field defects is still inconsistent.
Clinical significance:
Proper function of the visual system is required for sensing, processing, and understanding the surrounding environment. Difficulty in sensing, processing and understanding light input has the potential to adversely impact an individual's ability to communicate, learn and effectively complete routine tasks on a daily basis.
In children, early diagnosis and treatment of impaired visual system function is an important factor in ensuring that key social, academic and speech/language developmental milestones are met.
Cataract is clouding of the lens, which in turn affects vision. Although it may be accompanied by yellowing, clouding and yellowing can occur separately. This is typically a result of ageing, disease, or drug use.
Presbyopia is a visual condition that causes farsightedness: the eye's lens becomes too inflexible to accommodate to normal reading distance, so focus tends to remain fixed at long distance.
Clinical significance:
Glaucoma is a type of blindness that begins at the edge of the visual field and progresses inward; it may result in tunnel vision. It typically involves the outer layers of the optic nerve, sometimes as a result of buildup of fluid and excessive pressure in the eye. A scotoma is a type of blindness that produces a small blind spot in the visual field, typically caused by injury in the primary visual cortex.
Clinical significance:
Homonymous hemianopia is a type of blindness affecting one entire side of the visual field, typically caused by injury in the primary visual cortex.
Quadrantanopia is a type of blindness affecting only a quadrant of the visual field, typically caused by partial injury in the primary visual cortex. It is very similar to homonymous hemianopia, but to a lesser degree.
Prosopagnosia, or face blindness, is a brain disorder that produces an inability to recognize faces. This disorder often arises after damage to the fusiform face area.
Visual agnosia, or visual-form agnosia, is a brain disorder that produces an inability to recognize objects. This disorder often arises after damage to the ventral stream.
Other animals:
Different species are able to see different parts of the light spectrum; for example, bees can see into the ultraviolet, while pit vipers can accurately target prey with their pit organs, which are sensitive to infrared radiation. The mantis shrimp possesses arguably the most complex visual system of any species. The eye of the mantis shrimp holds 16 color-receptive cones, whereas humans have only three. The variety of cones enables them to perceive an enhanced array of colors as a mechanism for mate selection, avoidance of predators, and detection of prey. Swordfish also possess an impressive visual system. The eye of a swordfish can generate heat to better cope with detecting prey at depths of 2,000 feet. Certain one-celled microorganisms, the warnowiid dinoflagellates, have eye-like ocelloids, with structures analogous to the lens and retina of the multicellular eye. The armored shell of the chiton Acanthopleura granulata is also covered with hundreds of aragonite crystalline eyes, named ocelli, which can form images. Many fan worms, such as Acromegalomma interruptum, which live in tubes on the sea floor of the Great Barrier Reef, have evolved compound eyes on their tentacles, which they use to detect encroaching movement; if movement is detected, the fan worms rapidly withdraw their tentacles. Bok et al. have discovered opsins and G proteins in the fan worm's eyes which were previously seen only in simple ciliary photoreceptors in the brains of some invertebrates, as opposed to the rhabdomeric receptors in the eyes of most invertebrates. Only higher-primate Old World (African) monkeys and apes (macaques, apes, orangutans) have the same kind of three-cone photoreceptor color vision humans have, while lower-primate New World (South American) monkeys (spider monkeys, squirrel monkeys, cebus monkeys) have a two-cone photoreceptor kind of color vision.
History:
In the second half of the 19th century, several key ideas about the nervous system were established, such as the neuron doctrine and brain localization, which hold respectively that the neuron is the basic unit of the nervous system and that functions are localized in the brain. These would become tenets of the fledgling neuroscience and would support further understanding of the visual system.
History:
The notion that the cerebral cortex is divided into functionally distinct cortices now known to be responsible for capacities such as touch (somatosensory cortex), movement (motor cortex), and vision (visual cortex) was first proposed by Franz Joseph Gall in 1810. Evidence for functionally distinct areas of the brain (and, specifically, of the cerebral cortex) mounted throughout the 19th century with discoveries by Paul Broca of the language center (1861), and by Gustav Fritsch and Eduard Hitzig of the motor cortex (1871). Based on selective damage to parts of the brain and the functional effects of the resulting lesions, David Ferrier proposed in 1876 that visual function was localized to the parietal lobe of the brain. In 1881, Hermann Munk more accurately located vision in the occipital lobe, where the primary visual cortex is now known to be. In 2014, the textbook Understanding Vision: Theory, Models, and Data illustrated how to link neurobiological data and visual behavioral/psychological data through theoretical principles and computational models.
**Twiddler's syndrome**
Twiddler's syndrome:
Twiddler's syndrome is a malfunction of a pacemaker due to manipulation of the device and the consequent dislodging of the leads from their intended location. As the leads move, they stop pacing the heart and can cause unusual symptoms, such as phrenic nerve stimulation resulting in abdominal pulsing, or brachial plexus stimulation resulting in rhythmic arm twitching. Twiddler's syndrome in patients with an implanted defibrillator may lead to inappropriate, painful defibrillation shocks.
**4-HO-DSBT**
4-HO-DSBT:
4-HO-DsBT (4-hydroxy-N,N-di-sec-butyltryptamine) is a tryptamine derivative which acts as a serotonin receptor agonist. It was first made by Alexander Shulgin and is mentioned in his book TiHKAL, but was never tested by him. However, it has subsequently been tested in vitro, and unlike the n-butyl and isobutyl isomers, which are much weaker, the sec-butyl derivative retains reasonable potency, with a 5-HT2A receptor affinity similar to that of MiPT but better selectivity over the 5-HT1A and 5-HT2B subtypes.
**Thorium(IV) hydroxide**
Thorium(IV) hydroxide:
Thorium(IV) hydroxide is an inorganic compound with a chemical formula Th(OH)4.
Production:
Thorium(IV) hydroxide can be produced by reacting sodium hydroxide and soluble thorium salts.
Reactions:
Freshly prepared thorium(IV) hydroxide is soluble in acid, but its solubility decreases as it ages. Thorium(IV) hydroxide decomposes at high temperature to produce thorium dioxide: Th(OH)4 → ThO2 + 2 H2O. At high pressure, thorium(IV) hydroxide reacts with carbon dioxide to produce thorium carbonate hemihydrate.
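Written out as balanced equations (the decomposition is from the text; the carbonation stoichiometry below is an assumption, taking the hemihydrate to be Th(CO3)2·½H2O, and should be checked against a primary source):

```latex
% Thermal decomposition (from the text), atom-balanced:
\[ \mathrm{Th(OH)_4 \;\xrightarrow{\;\Delta\;}\; ThO_2 + 2\,H_2O} \]
% Assumed stoichiometry for a Th(CO3)2·1/2 H2O product:
\[ \mathrm{Th(OH)_4 + 2\,CO_2 \;\longrightarrow\; Th(CO_3)_2\cdot\tfrac{1}{2}H_2O + \tfrac{3}{2}\,H_2O} \]
```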
**4-Isopropenylphenol**
4-Isopropenylphenol:
4-Isopropenylphenol is an organic compound with the formula CH2=C(CH3)C6H4OH. The molecule consists of an isopropenyl group (CH2=C(CH3)–) affixed to the 4-position of phenol. The compound is an intermediate in the production of bisphenol A (BPA), of which about 2.7 Mkg were produced per year as of 2007. It is also generated by the recycling of o,p-BPA, a byproduct of the production of the p,p-isomer of BPA.
Synthesis and reactions:
The high-temperature hydrolysis of BPA gives the title compound together with phenol: (CH3)2C(C6H4OH)2 → CH2=C(CH3)C6H4OH + C6H5OH. The compound can also be produced by catalytic dehydrogenation of 4-isopropylphenol. 4-Isopropenylphenol undergoes O-protonation by sulfuric acid, giving a carbocation that undergoes a variety of dimerization reactions.
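For clarity, the cleavage as a displayed, atom-balanced equation (C15H16O2 on both sides; note that, although the process is run hydrolytically, no net water appears in the balance):

```latex
\[ \mathrm{(CH_3)_2C(C_6H_4OH)_2 \;\longrightarrow\; CH_2{=}C(CH_3)C_6H_4OH \;+\; C_6H_5OH} \]
```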
**Recoilless rifle**
Recoilless rifle:
A recoilless rifle (rifled), recoilless launcher (smoothbore), or simply recoilless gun, sometimes abbreviated to "RR" or "RCL" (for ReCoilLess) is a type of lightweight artillery system or man-portable launcher that is designed to eject some form of countermass such as propellant gas from the rear of the weapon at the moment of firing, creating forward thrust that counteracts most of the weapon's recoil. This allows for the elimination of much of the heavy and bulky recoil-counteracting equipment of a conventional cannon as well as a thinner-walled barrel, and thus the launch of a relatively large projectile from a platform that would not be capable of handling the weight or recoil of a conventional gun of the same size. Technically, only devices that use spin-stabilized projectiles fired from a rifled barrel are recoilless rifles, while smoothbore variants (which can be fin-stabilized or unstabilized) are recoilless guns. This distinction is often lost, and both are often called recoilless rifles.Though similar in appearance to a tube-based rocket launcher (since these also operate on a recoilless launch principle), the key difference is that recoilless weapons fire shells using a conventional smokeless propellant. While there are rocket-assisted rounds for recoilless weapons, they are still ejected from the barrel by the deflagration of a conventional propelling charge.
Recoilless rifle:
Because some projectile velocity is inevitably lost to the recoil compensation, recoilless rifles tend to have inferior range to traditional cannon, although with a far greater ease of transport, making them popular with paratroop, mountain warfare and special forces units, where portability is of particular concern, as well as with some light infantry and infantry fire support units. The greatly diminished recoil allows for devices that can be carried by individual infantrymen: heavier recoilless rifles are mounted on light tripods, wheeled light carriages, or small vehicles, and intended to be carried by crew of two to five. The largest versions retain enough bulk and recoil to be restricted to a towed mount or relatively heavy vehicle, but are still much lighter and more portable than cannon of the same scale. Such large systems have been replaced by guided anti-tank missiles in many armies.
Design:
There are a number of principles under which a recoilless gun can operate, all involving the ejection of some kind of counter-mass from the rear of the gun tube to offset the force of the projectile being fired forward. The most basic method, and the first to be employed, is simply making a double-ended gun with a conventional sealed breech, which fires identical projectiles forwards and backwards. Such a system places enormous stress on its midpoint, is extremely cumbersome to reload, and has the highly undesirable effect of launching a projectile potentially just as deadly as the one launched at the enemy at a point behind the shooter where their allies may well be.
Design:
The most common system involves venting some portion of the weapon's propellant gas to the rear of the tube, in the same fashion as a rocket launcher. This creates a forward directed momentum which is nearly equal to the rearward momentum (recoil) imparted to the system by accelerating the projectile. The balance thus created does not leave much momentum to be imparted to the weapon's mounting or the gunner in the form of felt recoil. Since recoil has been mostly negated, a heavy and complex recoil damping mechanism is not necessary. Despite the name, it is rare for the forces to completely balance, and real-world recoilless rifles do recoil noticeably (with varying degrees of severity). Recoilless rifles will not function correctly if the venting system is damaged, blocked, or poorly maintained: in this state, the recoil-damping effect can be reduced or lost altogether, leading to dangerously powerful recoil. Conversely, if a projectile becomes lodged in the barrel for any reason, the entire weapon will be forced forward.
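The balance described here is ordinary conservation of momentum. As a worked illustration (the numbers are invented for the example, not taken from any particular weapon):

```latex
% Recoil is cancelled to the extent that the rearward momentum of the ejected
% gas/countermass equals the forward momentum of the projectile:
\[ m_{\text{proj}}\,v_{\text{proj}} \;\approx\; m_{\text{gas}}\,v_{\text{gas}} \]
% e.g. a 4 kg shell at 500 m/s is offset by 2 kg of gas ejected at 1000 m/s:
\[ (4\,\mathrm{kg})(500\,\mathrm{m/s}) \;=\; (2\,\mathrm{kg})(1000\,\mathrm{m/s}) \;=\; 2000\,\mathrm{kg\,m/s}. \]
```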
Design:
Recoilless rifle rounds for breech-loading reloadable systems resemble conventional cased ammunition, using a driving band to engage the rifled gun tube and spin-stabilize the projectile. The casing of a recoilless rifle round is often perforated to vent the propellant gases, which are then directed to the rear by an expansion chamber surrounding the weapon's breech. In the case of single-shot recoilless weapons such as the Panzerfaust or AT4, the device is externally almost identical in design to a single-shot rocket launcher: the key difference is that the launch tube is a gun that fires the projectile using a pre-loaded powder charge, rather than a simple hollow tube. Weapons of this type can either encase their projectile inside the disposable gun tube or mount it on the muzzle: the latter allows the launching of an above-caliber projectile. Like single-shot rocket launchers, the need to survive only a single firing means that single-shot recoilless weapons can be made from relatively flimsy and therefore very light materials, such as fiberglass. Recoilless gun launch systems are often used to provide the initial thrust for man-portable weapons firing rocket-powered projectiles: examples include the RPG-7, Panzerfaust 3 and MATADOR.
Design:
Since venting propellant gases to the rear can be dangerous in confined spaces, some recoilless guns use a combination of a countershot and captive piston propelling cartridge design to avoid both recoil and backblast. The Armbrust "cartridge," for example, contains the propellant charge inside a double-ended piston assembly, with the projectile in front, and an equal countermass of shredded plastic to the rear. On firing, the propellant expands rapidly, pushing the pistons outward. This pushes the projectile forwards towards the target and the countermass backwards providing the recoilless effect. The shredded plastic countermass is quickly slowed by air resistance and is harmless at a distance more than a few feet from the rear of the barrel. The two ends of the piston assembly are captured at the ends of the barrel, by which point the propellant gas has expanded and cooled enough that there is no threat of explosion. Other countermass materials that have been used include inert powders and liquids.
History:
The earliest known example of a design for a gun based on recoilless principles was created by Leonardo da Vinci in the 15th or early 16th century. This design was of a gun which fired projectiles in opposite directions, but there is no evidence any physical firearm based on the design was constructed at the time.
History:
In 1879, a French patent was filed by Alfred Krupp for a recoilless gun. The first recoilless gun known to have actually been constructed was developed by Commander Cleland Davis of the US Navy, just prior to World War I. His design, named the Davis gun, connected two guns back-to-back, with the backwards-facing gun loaded with lead balls and grease of the same weight as the shell in the other gun. His idea was used experimentally by the British as an anti-Zeppelin and anti-submarine weapon mounted on a Handley Page O/100 bomber and was intended to be installed on other aircraft. In the Soviet Union, the development of recoilless weapons ("Dinamo-Reaktivnaya Pushka" (DRP), roughly "dynamic reaction cannon") began in 1923. In the 1930s, many different types of weapons were built and tested with configurations ranging from 37 mm to 305 mm. Some of the smaller examples were tested in aircraft (Grigorovich I-Z and Tupolev I-12) and saw some limited production and service, but development was abandoned around 1938. The best-known of these early recoilless rifles was the Model 1935 76 mm DRP designed by Leonid Kurchevsky. A small number of these, mounted on trucks, saw combat in the Winter War. Two were captured by the Finns and tested; one example was given to the Germans in 1940.
History:
The first recoilless gun to enter service in Germany was the 7.5 cm Leichtgeschütz 40 ("light gun" 40), a simple 75 mm smoothbore recoilless gun developed to give German airborne troops artillery and anti-tank support that could be parachuted into battle. The 7.5 cm LG 40 was found to be so useful during the invasion of Crete that Krupp and Rheinmetall set to work creating more powerful versions, respectively the 10.5 cm Leichtgeschütz 40 and 10.5 cm Leichtgeschütz 42. These weapons were loosely copied by the US Army. The Luftwaffe also showed great interest in aircraft-mounted recoilless weapons to allow its planes to attack tanks, fortified structures and ships. These included the unusual Düsenkanone 88, an 88 mm recoilless rifle fed by a 10-round rotary cylinder, with the exhaust vent angled upwards at 51 degrees to the barrel so it could pass through the host aircraft's fuselage rather than risking a rear-vented backblast damaging the tail, and the Sondergerät SG104 "Münchhausen", a gargantuan 14-inch (355.6 mm) weapon designed to be mounted under the fuselage of a Dornier Do 217. None of these systems proceeded beyond the prototype stage. The US did have a development program, and it is not clear to what extent the German designs were copied. These weapons remained fairly rare during the war, although the American M20 became increasingly common in 1945. Postwar, there was a great deal of interest in recoilless systems, as they potentially offered an effective replacement for the obsolete anti-tank rifle in infantry units.
History:
During World War II, the Swedish military developed a shoulder-fired 20 mm device, the Pansarvärnsgevär m/42 (20 mm m/42); the British expressed their interest in it, but by that point the weapon, patterned after obsolete anti-tank rifles, was too weak to be effective against period tank armor. This system would form the basis of the much more successful Carl Gustav recoilless rifle postwar.
History:
By the time of the Korean War, recoilless rifles were found throughout the US forces. The earliest American infantry recoilless rifles were the shoulder-fired 57 mm M18 and the tripod-mounted 75 mm M20, later followed by the 105 mm M27: the latter proved unreliable, too heavy, and too hard to aim. Newer models replacing these were the 90 mm M67 and 106 mm M40 (which was actually 105 mm caliber, but designated otherwise to prevent accidental issue of incompatible M27 ammunition). In addition, the Davy Crockett, a muzzle-loaded recoilless launch system for tactical nuclear warheads intended to counteract Soviet tank units, was developed in the 1960s and deployed to American units in Germany.
History:
The Soviet Union adopted a series of crew-served smoothbore recoilless guns in the 1950s and 1960s, specifically the 73mm SPG-9, 82mm B-10 and 107mm B-11. All are found quite commonly around the world in the inventories of former Soviet client states, where they are usually used as anti-tank guns.
History:
The British, whose efforts were led by Charles Dennistoun Burney, inventor of the Wallbuster HESH round, also developed recoilless designs. Burney demonstrated the technique with a recoilless 4-gauge shotgun. His "Burney Gun" was developed to fire the Wallbuster shell against the Atlantic Wall defences, but was not required in the D-Day landings of 1944. He went on to produce further designs, with two in particular created as anti-tank weapons. The Ordnance, RCL, 3.45 in could be fired off a man's shoulder or from a light tripod, and fired an 11 lb (5 kg) wallbuster shell to 1,000 yards. The larger Ordnance RCL 3.7 in fired a 22.2 lb (10 kg) wallbuster to 2,000 yards. Postwar work developed and deployed the BAT (Battalion, Anti Tank) series of recoilless rifles, culminating in the 120 mm L6 WOMBAT. This was too large to be transported by infantry and was usually towed by jeep. The weapon was aimed via a spotting rifle, a modified Bren gun on the MOBAT and an American M8C spotting rifle on the WOMBAT: the latter fired a .50 BAT (12.7x77mm) point-detonating incendiary tracer round whose trajectory matched that of the main weapon. When tracer-round hits were observed, the main gun was fired.
History:
During the late 1960s and 1970s, SACLOS wire-guided missiles began to supplant recoilless rifles in the anti-tank role. While recoilless rifles retain several advantages such as being able to be employed at extremely close range, as a guided missile typically has a significant deadzone before it can arm and begin to seek its target, missile systems tend to be lighter and more accurate, and are better suited to deployment of hollow-charge warheads. The large crew-served recoilless rifle started to disappear from first-rate armed forces, except in areas such as the Arctic, where thermal batteries used to provide after-launch power to wire-guided missiles like M47 Dragon and BGM-71 TOW would fail due to extremely low temperatures. The former 6th Light Infantry Division in Alaska used the M67 in its special weapons platoons, as did the Ranger Battalions and the US Army's Berlin Brigade. The last major use was the M50 Ontos, which mounted six M40 rifles on a light (9 ton) tracked chassis. They were largely used in an anti-personnel role firing "beehive" flechette rounds. In 1970 the Ontos was removed from service and most were broken up. The M40, usually mounted on a jeep or technical, is still very common in conflict zones throughout the world, where it is used as a hard-hitting strike weapon in support of infantry, with the M40-armed technical fulfilling a similar combat role to an attack helicopter.
History:
Front-line recoilless weapons in the armies of modern industrialized nations are mostly man-portable devices such as the Carl Gustav, an 84 mm weapon. First introduced in 1948 and exported extensively since 1964, it is still in widespread use throughout the world today: a huge selection of special-purpose rounds are available for the system, and the current variant, known as the M4 or M3E1, is designed to be compatible with computerized optics and future "smart" ammunition. Many nations also use a weapon derived from the Carl Gustav, the one-shot AT4, which was originally developed in 1984 to fulfil an urgent requirement for an effective replacement for the M72 LAW after the failure of the FGR-17 Viper program the previous year. The ubiquitous RPG-7 is also technically a recoilless gun, since its rocket-powered projectile is launched using an explosive booster charge (even more so when firing the OG-7V anti-personnel round, which has no rocket motor), though it is usually not classified as one.
Civilian use:
Obsolete 75mm M20 and 105mm M27 recoilless rifles were used by the U.S. National Park Service and the U.S. Forest Service as a system for triggering controlled avalanches at a safe distance, from the early 1950s until the US Military's inventory of surplus ammunition for these weapons was exhausted in the 1990s. They were then replaced with M40 106mm recoilless rifles, but following a catastrophic in-bore ammunition explosion that killed one of the five-man gun crew at Alpine Meadows Ski Resort, California, in 1995 and two further in-bore explosions at Mammoth Mountain, California, within thirteen days of each other in December 2002, all such guns were removed from use and replaced with surplus 105mm howitzers.
**Ponazuril**
Ponazuril:
Ponazuril (INN), sold by Merial, Inc., now part of Boehringer Ingelheim, under the trade name Marquis® (15% w/w ponazuril), is a drug currently approved for the treatment of equine protozoal myeloencephalitis (EPM) in horses, caused by coccidia Sarcocystis neurona. More recently, veterinarians have been preparing a formulary version of the medication for use in small animals such as cats, dogs, and rabbits against coccidia as an intestinal parasite. Coccidia treatment in small animals is far shorter than treatment for EPM.
**Electric Image Animation System**
Electric Image Animation System:
The Electric Image Animation System (EIAS) is a 3D computer graphics package published by EIAS3D. It currently runs on the macOS and Windows platforms.
History:
Electric Image, Inc. was initially a visual effects production company. It developed its own in-house 3D animation and rendering package for the Macintosh beginning in the late 1980s, calling it ElectricImage Animation System. (To avoid confusion with the current product with its similar name, this initial incarnation of the product will be referred to simply as ElectricImage.) When the company later decided to offer its software for sale externally, it quickly gained a customer base that lauded the developers for the software's exceptionally fast rendering engine and high image quality. Because it was capable of film-quality output on commodity hardware, ElectricImage was popular in the movie and television industries throughout the 1990s. It was used quite extensively by the "Rebel Unit" at Industrial Light and Magic and was used on a variety of games, such as Bad Mojo and Bad Day on the Midway. However, only high-end effects companies could afford it: ElectricImage initially sold for US$7,500.
History:
EIAS has been used in numerous film and television productions, such as: Piranha 3D, Alien Trespass, Pirates of the Caribbean: The Curse of the Black Pearl, Daddy Day Care, K-19: The Widowmaker, Gangs of New York, Austin Powers: Goldmember, Men In Black II, The Bourne Identity, Behind Enemy Lines, Time Machine, Ticker, JAG - Pilot Episode, Spawn, Star Trek: First Contact, Star Trek: Insurrection, Galaxy Quest, Mission to Mars, Austin Powers: The Spy Who Shagged Me, Star Wars Episode 1: The Phantom Menace, Titan A.E., U-571, Dinosaur, Terminator 2: Judgment Day, Terminator 2: Judgment Day - DVD Intro, Jungle Book 2, American President, Sleepers, Star Wars Special Edition, Empire Strikes Back Special Edition, Return of Jedi Special Edition, Bicentennial Man, Vertical Limit, Elf, Blade Trinity, and Lost In Space.
History:
TV shows: Revolution, Breaking Bad, Alcatraz, Pan Am, The Whole Truth, Lost, FlashForward, Fringe, Surface, Weeds, Pushing Daisies, The X-Files, Alias, Smallville, Star Trek: The Next Generation, Babylon 5, Young Indiana Jones, Star Trek: Voyager, The Mists of Avalon, Star Trek: Enterprise, and others.
History:
Electric Image, Inc. was always a small company that produced software on the Mac platform and so never had a large market share. Play, Inc. purchased Electric Image in November 1998. The first version of EIAS released under the Play moniker was version 2.9. Play later released version 3.0. This was the first version to run on Windows, and to mark this move, Play renamed the package Electric Image Universe. Play was never a greatly successful company, and so Electric Image Universe stagnated during the time Play owned it.
History:
In 2000, Dwight Parscale (former CEO of NewTek) and original Electric Image founders Markus Houy and Jay Roth set about buying back the original company: on September 19, 2000, they repurchased the shares of Electric Image from Play, Inc. and set out to recapture the product's former customer base. The new company released versions 4.0 and 5.0 under the Electric Image moniker. Then, due to a licensing problem with Spatial Technologies, they dropped the Modeler program from the version 5.5 release and renamed the package back to Electric Image Animation System.
History:
Versions 6.0 and 6.5 were subsequently released with vast improvements to the rendering engine and OpenGL performance. Version 6.5r2 added FBX file importing capability. Version 6.6 added Universal Binary support and finally dropped support for Mac OS 9. Version 7.0 brought Multi-Layer Rendering, Image-Based Lighting, Raytrace Sky Maps and Rigid Body Dynamics. Version 8.0 added Photon Mapping, fast soft shadows, area lights, quadratic light drop-off, EXR and 16-bit image input support, Displacement Sea Level, new weight-map tools, many workflow enhancements and Renderama improvements.
History:
In 2009, EITG began negotiations to sell the intellectual property rights of ElectricImage. On January 12, 2010 it was announced that Tomas Egger, Igor Yatsenko, and Igor Ivaniuk had become the new owners of EIAS. Known collectively as "The Igors", Igor Yatsenko and Igor Ivaniuk had been EIAS's primary software developers for many years. They released version 9.0 in November 2012, followed by version 9.1 in June 2013.
Market positioning:
The existing customer base for EIAS favors it for its fast renderer, its high output quality, and its camera mapping features. The tool set lends itself particularly well to hard-surface animation/rendering and other forms of non-organic tasks. It is most popular with architects and visual effects artists for TV and film.
EIAS's primary competitors in the integrated 3D package space are Autodesk with Maya, 3D Studio Max and Softimage, Maxon with Cinema 4D, and NewTek with LightWave 3D.
Components:
The Electric Image Animation System is not a single program, but rather a suite of several programs designed to work together. Each of the primary programs handles a particular part of the production workflow: Animator Animator is the EIAS animation program. It can directly import 3D models in the LightWave, 3D Studio, AutoCAD, Maya, and Electric Image FACT formats. In addition to animating models, Animator allows the user to configure rendering settings. It efficiently supports the animation of very geometrically complex projects.
Components:
Camera Camera is the EIAS rendering program, known for its speed and high image quality. As of version 9.0, it supports ray tracing, Phong shading, scanline rendering, spatial anti-aliasing, motion blur, caustics, radiosity, Photon Mapping, Irradiance Cache, Screen Cache and global illumination. Camera outputs to several file formats, such as QuickTime and EI's own Image format. The latter is directly supported by Adobe After Effects CC and Adobe Photoshop CC.
Components:
Renderama Renderama and Renderama Slave compose EIAS's distributed network rendering system. It allows for the rendering of a project to be distributed over a network's computers (i.e., for the formation of a render farm). It supports both single and multiprocessor computers, taking advantage of all available processors to distribute the workload. It also supports rendering across platforms (e.g., Windows 7/8 - 64 bits and Mac OS X 10.6.8 or earlier).
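As a generic illustration of the idea (this is not Renderama's actual protocol or API; all names here are invented), a master process might split an animation's frame range across render nodes like this:

```python
# Generic sketch of distributed network rendering: a master assigns
# contiguous frame chunks to each render node ("slave").
def split_frames(first, last, nodes):
    """Assign contiguous frame chunks to each node as (start, end) pairs."""
    total = last - first + 1
    base, extra = divmod(total, len(nodes))
    assignments, start = {}, first
    for i, node in enumerate(nodes):
        count = base + (1 if i < extra else 0)   # spread the remainder evenly
        assignments[node] = (start, start + count - 1)
        start += count
    return assignments

print(split_frames(1, 100, ["slave-a", "slave-b", "slave-c"]))
# {'slave-a': (1, 34), 'slave-b': (35, 67), 'slave-c': (68, 100)}
```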
Components:
Modeler Modeler saves its files in Electric Image's "FACT" file format for importing into Animator (see above). It supports ACIS modeling, "ÜberNurbs" (EIAS' subdivision surfaces modeling technology), LAWS (based on parametric formulas) as well as Boolean operations and other modern modeling tools.
Modeler last shipped in Electric Image Universe 5.0. As a result, users of EIAS 5.5 and newer use a third-party modeler instead. As of this writing, Electric Image recommends Nevercenter Silo for this purpose. Form•Z from auto•des•sys is also popularly used as a companion for EIAS.
**Gamma camera**
Gamma camera:
A gamma camera (γ-camera), also called a scintillation camera or Anger camera, is a device used to image gamma radiation emitting radioisotopes, a technique known as scintigraphy. The applications of scintigraphy include early drug development and nuclear medical imaging to view and analyse images of the human body or the distribution of medically injected, inhaled, or ingested radionuclides emitting gamma rays.
Imaging techniques:
Scintigraphy ("scint") is the use of gamma cameras to capture emitted radiation from internal radioisotopes to create two-dimensional images.
SPECT (single photon emission computed tomography) imaging, as used in nuclear cardiac stress testing, is performed using gamma cameras. Usually one, two or three detectors, or heads, are slowly rotated around the patient's torso.
Imaging techniques:
Multi-headed gamma cameras can also be used for positron emission tomography (PET) scanning, provided that their hardware and software can be configured to detect "coincidences" (near simultaneous events on 2 different heads). Gamma camera PET is markedly inferior to PET imaging with a purpose designed PET scanner, as the scintillator crystal has poor sensitivity for the high-energy annihilation photons, and the detector area is significantly smaller. However, given the low cost of a gamma camera and its additional flexibility compared to a dedicated PET scanner, this technique is useful where the expense and resource implications of a PET scanner cannot be justified.
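The "coincidence" logic is a simple time-window match between the event streams of two heads. A hypothetical sketch (Python; the event format and the 10 ns window are assumptions for illustration, not a vendor's actual processing chain):

```python
# Hypothetical coincidence detection for gamma-camera PET: two heads each
# produce timestamped events; a pair counts as one annihilation if the
# timestamps fall within a short window.
COINCIDENCE_WINDOW_NS = 10.0

def find_coincidences(head_a, head_b, window_ns=COINCIDENCE_WINDOW_NS):
    """head_a, head_b: sorted lists of event timestamps in nanoseconds."""
    pairs, j = [], 0
    for t_a in head_a:
        # advance past events on head B that are too early to match
        while j < len(head_b) and head_b[j] < t_a - window_ns:
            j += 1
        if j < len(head_b) and abs(head_b[j] - t_a) <= window_ns:
            pairs.append((t_a, head_b[j]))
            j += 1
    return pairs

print(find_coincidences([100.0, 250.0, 400.0], [104.0, 300.0, 395.0]))
# -> [(100.0, 104.0), (400.0, 395.0)]
```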
Construction:
A gamma camera consists of one or more flat crystal planes (or detectors) optically coupled to an array of photomultiplier tubes in an assembly known as a "head", mounted on a gantry. The gantry is connected to a computer system that both controls the operation of the camera and acquires and stores images. The construction of a gamma camera is sometimes known as a compartmental radiation construction.
Construction:
The system accumulates events, or counts, of gamma photons that are absorbed by the crystal in the camera. Usually a large flat crystal of sodium iodide with thallium doping, NaI(Tl), in a light-sealed housing is used. The highly efficient capture method of this combination for detecting gamma rays was discovered in 1944 by Sir Samuel Curran while he was working on the Manhattan Project at the University of California, Berkeley. Nobel prize-winning physicist Robert Hofstadter also worked on the technique in 1948. The crystal scintillates in response to incident gamma radiation. When a gamma photon leaves the patient (who has been injected with a radioactive pharmaceutical), it knocks an electron loose from an iodine atom in the crystal, and a faint flash of light is produced when the dislocated electron again finds a minimal energy state. The initial phenomenon of the excited electron is similar to the photoelectric effect and (particularly with gamma rays) the Compton effect. After the flash of light is produced, it is detected. Photomultiplier tubes (PMTs) behind the crystal detect the fluorescent flashes (events), and a computer sums the counts. The computer reconstructs and displays a two-dimensional image of the relative spatial count density on a monitor. This reconstructed image reflects the distribution and relative concentration of radioactive tracer elements present in the organs and tissues imaged.
Signal processing:
Hal Anger developed the first gamma camera in 1957. His original design, frequently called the Anger camera, is still widely used today. The Anger camera uses sets of vacuum tube photomultipliers (PMT). Generally each tube has an exposed face of about 7.6 cm in diameter and the tubes are arranged in hexagon configurations, behind the absorbing crystal. The electronic circuit connecting the photodetectors is wired so as to reflect the relative coincidence of light fluorescence as sensed by the members of the hexagon detector array. All the PMTs simultaneously detect the (presumed) same flash of light to varying degrees, depending on their position from the actual individual event. Thus the spatial location of each single flash of fluorescence is reflected as a pattern of voltages within the interconnecting circuit array.
Signal processing:
The location of the interaction between the gamma ray and the crystal can be determined by processing the voltage signals from the photomultipliers; in simple terms, the location can be found by weighting the position of each photomultiplier tube by the strength of its signal and then calculating a mean position from the weighted positions. The total sum of the voltages from each photomultiplier, measured by a pulse-height analyzer, is proportional to the energy of the gamma-ray interaction, thus allowing discrimination between different isotopes or between scattered and direct photons.
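This weighted-mean position estimate (often called Anger logic) and the energy sum look roughly like the following in code (Python; illustrative only, not an actual camera's signal chain):

```python
# Minimal sketch of Anger-logic position estimation: the event position is
# the signal-weighted mean of the PMT positions, and the summed signal is
# proportional to photon energy (used for pulse-height analysis).
def anger_position(pmt_positions, pmt_signals):
    """pmt_positions: list of (x, y) tube centers; pmt_signals: voltages."""
    total = sum(pmt_signals)                       # ~ event energy
    x = sum(p[0] * s for p, s in zip(pmt_positions, pmt_signals)) / total
    y = sum(p[1] * s for p, s in zip(pmt_positions, pmt_signals)) / total
    return (x, y), total

# Three tubes in a row; the flash is closest to the middle tube.
pos, energy = anger_position([(0, 0), (1, 0), (2, 0)], [0.2, 1.0, 0.3])
print(pos, energy)   # ~ (1.07, 0.0) and 1.5
```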
Spatial resolution:
In order to obtain spatial information about the gamma-ray emissions from an imaging subject (e.g. a person's heart muscle cells that have absorbed an intravenously injected radioactive imaging agent, usually thallium-201 or technetium-99m), a method of correlating the detected photons with their point of origin is required.
Spatial resolution:
The conventional method is to place a collimator over the detection crystal/PMT array. The collimator consists of a thick sheet of lead, typically 25 to 55 millimetres (1 to 2.2 in) thick, with thousands of adjacent holes through it. There are three types of collimators: low-energy, medium-energy, and high-energy collimators. Moving from low-energy to high-energy collimators, the hole sizes, thickness, and septa between the holes all increase. Given a fixed septal thickness, collimator resolution worsens as efficiency increases and as the distance of the source from the collimator increases. A pulse-height analyser applies an energy window (characterized by its full width at half maximum) that selects which photons contribute to the final image. The individual holes limit the photons which can be detected by the crystal to a cone shape; the point of the cone is at the midline center of any given hole and extends from the collimator surface outward. However, the collimator is also one of the sources of blurring within the image; lead does not totally attenuate incident gamma photons, so there can be some crosstalk between holes. Unlike a lens, as used in visible-light cameras, the collimator attenuates most (>99%) of incident photons and thus greatly limits the sensitivity of the camera system. Large amounts of radiation must be present so as to provide enough exposure for the camera system to detect sufficient scintillation dots to form a picture. Other methods of image localization (pinhole, rotating slat collimator with CZT) have been proposed and tested; however, none have entered widespread routine clinical use.
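A standard textbook approximation (not stated in the source, so treat it as background) makes the distance dependence explicit: for a parallel-hole collimator with hole diameter d and effective hole length l_eff, the geometric resolution at source distance z is roughly

```latex
\[ R_g \;\approx\; d\,\frac{l_{\mathrm{eff}} + z}{l_{\mathrm{eff}}} \]
```

so resolution degrades (R_g grows) roughly linearly as the source moves away from the collimator face, consistent with the behaviour described above.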
Spatial resolution:
The best current camera system designs can differentiate two separate point sources of gamma photons located 6 to 12 mm apart, depending on the distance from the collimator, the type of collimator and the radionuclide. Spatial resolution decreases rapidly at increasing distances from the camera face. This limits the spatial accuracy of the computer image: it is a fuzzy image made up of many dots of detected but not precisely located scintillation. This is a major limitation for heart muscle imaging systems; the thickest normal heart muscle in the left ventricle is about 1.2 cm, and most of the left ventricle muscle is about 0.8 cm, always moving and much of it beyond 5 cm from the collimator face. To help compensate, better imaging systems limit scintillation counting to a portion of the heart contraction cycle, called gating; however, this further limits system sensitivity.
**Knowledge ark**
Knowledge ark:
A knowledge ark (also known as a doomsday ark or doomsday vault) is a collection of knowledge preserved in such a way that future generations would have access to said knowledge if all other copies of it were lost.
Knowledge ark:
Scenarios in which access to information (such as the Internet) would otherwise become impossible include existential risks and extinction-level events. A knowledge ark could take the form of a traditional library or a modern computer database. It could also be pictorial in nature, including photographs of important information, or diagrams of critical processes. A knowledge ark would have to be resistant to the effects of natural or man-made disasters in order to be viable. Such an ark should include, but would not be limited to, information or material relevant to the survival and prosperity of human civilization.
Knowledge ark:
Other types of knowledge arks might include genetic material, such as in a DNA bank. With the potential for widespread personal DNA sequencing becoming a reality, an individual might agree to store their genetic code in a digital or analog storage format which would enable later retrieval of that code. If a species was sequenced before extinction, its genome would still remain available for study.
Examples:
A related example is the Svalbard Global Seed Vault, a seed bank intended to preserve a wide variety of plant seeds (such as those of important crops) in case of their extinction.
Examples:
The Memory of Mankind project involves engraving human knowledge on clay tablets and storing it in a salt mine; the engravings are microscopic. A lunar ark has been proposed which would store and transmit valuable information to receiver stations on Earth. The success of this would also depend on the availability of compatible receiver equipment on Earth, and adequate knowledge of that equipment's operation.
Examples:
The Arch Mission Foundation sent the Lunar Library, a 30-million-page knowledge ark designed to survive for millions or billions of years in space, to the Moon on the Israeli Beresheet spacecraft in 2019. The spacecraft crash-landed; however, the library likely survived intact. The Phoenix Mars lander (which landed on the surface of Mars in 2008) included the "Visions of Mars" DVD, a digital library about Mars designed to last for hundreds or thousands of years.
**Pseudostrabismus**
Pseudostrabismus:
Pseudostrabismus is the false appearance of crossed eyes. When the eyes are actually crossed or not completely aligned with one another, it is called strabismus. Pseudostrabismus is more likely to be observed in East Asian or Native American infants, due to the presence of epicanthic folds obscuring the medial aspect of each eye.
Pseudostrabismus:
Pseudostrabismus generally occurs in infants and toddlers, whose facial features are not fully developed. The bridge of their nose is wide and flat, creating telecanthus (increased distance between medial canthus of both eyes). With age, the bridge will narrow, and the epicanthic folds in the corner of the eyes will go away. This will cause the eyes to appear wider and thus not have the appearance of strabismus.
Pseudostrabismus:
To detect the difference between strabismus and pseudostrabismus, clinicians shine a flashlight into the child's eyes. When the child is looking at the light, a reflection can be seen on the front surface of each eye. If the eyes are aligned with one another, the reflection will be in the same spot on each eye; if strabismus is present, it will not.
Pseudostrabismus:
Rakel's Textbook of Family Medicine states, "A common misconception is that children with crossed eyes outgrow the condition, but this is generally not the case. This belief stems from the confusion between true strabismus and pseudostrabismus. When a child's eyes are truly crossed, it is always a serious condition and requires the care of an ophthalmologist."
**Threose**
Threose:
Threose is a four-carbon monosaccharide with molecular formula C4H8O4. It has a terminal aldehyde group rather than a ketone in its linear chain, and so is considered part of the aldose family of monosaccharides. The name threose can refer to the D- and L-stereoisomers individually, to the racemic mixture (D/L-, equal parts D- and L-), and to the generic threose structure (absolute stereochemistry unspecified). The prefix "threo", which derives from threose (as "erythro" derives from the corresponding diastereomer erythrose), offers a useful way to describe general organic structures with adjacent chiral centers, where "the prefixes... designate the relative configuration of the centers". As depicted in a Fischer projection of D-threose, the adjacent substituents have a syn orientation in the isomer referred to as "threo" and an anti orientation in the isomer referred to as "erythro".
**Mock trial**
Mock trial:
A mock trial is an act or imitation trial. It is similar to a moot court, but mock trials simulate lower-court trials, while moot court simulates appellate-court hearings. Attorneys preparing for a real trial might use a mock trial consisting of volunteers as role players to test theories or experiment with each other. Mock trial is also the name of an extracurricular program in which students participate in rehearsed trials to learn about the legal system in a competitive manner. Interscholastic mock trials take place on all levels, including primary school, middle school, high school, college, and law school. Mock trial is often taught in conjunction with a course in trial advocacy, takes place as an after-school enrichment activity, or may form part of a gifted and talented program.
Litigation related mock trials:
Litigators may use mock trials to assist with trial preparation and settlement negotiations of actual cases. Unlike school-related mock trials, these mock trials can take numerous forms depending on the information sought. For example, when faced with complex fact issues in a particular case, attorneys might convene a mini mock trial to try different methods of presenting their evidence, sometimes before a jury.
Competitive school-related mock trials around the world:
Asia Pacific Guam hosts the annual Asia-Pacific Invitational Mock Trial Competition. In 2011 and 2012, there were 14 teams from Guam, South Korea, and Saipan in its 4th annual competition. The competition was held at the Superior Court of Guam. The teams were made up mostly of junior and senior high school students. The champion of the 4th Annual Asia-Pacific Invitational Mock Trial Competition was Marianas Baptist Academy of Saipan.
Competitive school-related mock trials around the world:
Australia Mock trial competitions in Australia are held regionally. These include: The Law Society of South Australia Mock Trial Competition, which comprises a series of simulated court cases by students in years 10, 11 and 12 from 32 schools in South Australia.
The Law Society of Western Australia Interschool Mock Trial Competition is held each year for students enrolled in years 10, 11 and 12 in Western Australia. In 2011, 773 students representing 68 teams from 38 schools participated.
The Capital Region Mock Trial Competition is organized by the University of Canberra. Teams from schools in the Australian Capital Territory region, with students in years 10, 11 and 12, take part.
Competitive school-related mock trials around the world:
The New South Wales Mock Trial Competition has been running since 1981 for students in years 10 and 11 in New South Wales, initially with 28 schools. There have also been international events, with mock trial competitions between Australia and the United Kingdom. In 2008, schools from this region also traveled to New York City to compete in the Empire Mock Trial Competition. The first Asia-Australia mock trial competition via video conference, between Australian teams and a Korean team, was held in 2009.
Competitive school-related mock trials around the world:
Continental Europe Mock trials in continental Europe are less coordinated than their UK or US counterparts, but there are a few isolated examples. In Germany, the Faculty of Legal Sciences of the University of Regensburg organizes the REGINA Mock Trial. The University of Erlangen-Nuremberg organizes a mock trial specialized in administrative law. In Spain, there is a mock trial program specialized in insurance law coordinated by the Fundacion INADE-UDC Chair of Risk Management and Insurance.
Competitive school-related mock trials around the world:
United Kingdom The Bar National Mock Trial Competition involves students taking on the roles of barristers and witnesses and presenting their case against teams from other schools. Young Citizens, in association with His Majesty's Courts and Tribunals Service and the Bar Council, has been running mock trial competitions annually since 1991. The competition consists of several regional heats and a national final to decide the winner. In 2022/23, 938 students aged between 15 and 18 from 144 schools participated in the competition. The heats were supported by over 70 legal volunteers, including judges, barristers and law students. The Magistrates Mock Trial Competition, organised by Young Citizens in association with the Magistrates Association, involves students taking on the roles of magistrates and witnesses. In 2022/23, approximately 2,900 students aged between 12 and 14 from 193 schools participated in the competition. The heats were supported by approximately 600 legal volunteers, including judges, barristers and law students. Mock trials are primarily aimed at promoting confidence in public speaking as well as introducing young people to the justice system and the legal profession.
Competitive school-related mock trials around the world:
In Scotland, The School Mock Court Case Project runs both primary and secondary school competitions, and has also undertaken a number of international competitions. Currently some 100 schools, or roughly 3,000 students, are involved. As of 2021, international schools started entering the programme, with over 16 countries taking part.
Competitive school-related mock trials around the world:
United States Competition framework Competitive mock trial functions in yearly cycles. Each year, a case packet is distributed to all participating schools in late summer to early fall. The case packet is a series of documents including the charges, penal code, stipulations, case law, and jury instructions as well as all exhibits and affidavits relevant to the case. During a mock trial, competitors are restricted to only the materials provided in the case packet and may not reference any outside sources. In order to prepare for competition, teams thoroughly read and analyze the case packet.
Competitive school-related mock trials around the world:
National mock trial teams consist of a minimum of six and a maximum of twelve official members. The size of state mock trial teams can vary; California, for example, allows up to 25 official team members. Each team prepares both sides of the case: prosecution/plaintiff and defense in a criminal trial, plaintiff and defense in a civil action. Each side is composed of two or three attorneys as well as two or three witnesses, all played by members of the team. Teams must be organized into two sides of five to six players for the prosecution/plaintiff and defense. It is important to note that high school mock trial is governed by state bar associations, meaning that cases, rules, and competition structure vary from state to state whereas all of college mock trial is governed by the American Mock Trial Association, meaning that every school uses the same case and is subject to the same rules.
Competitive school-related mock trials around the world:
Procedure The mock trial begins with the judge entering the courtroom. The judge then gives instructions to the jury (about what they are to listen to). Then, if there is a pretrial motion, the defense and prosecution give their respective pretrial arguments. The judge then lets the prosecution or plaintiff give an opening statement. Following the prosecution/plaintiff's opening statement, the judge may offer the defense the chance to deliver its opening statement at that time as well, or to wait until after the prosecution has presented all of its witnesses. After the opening statements, examination of the witnesses begins. The prosecution/plaintiff calls their witnesses first. Witnesses are sworn in by their team's bailiff/timekeeper. A student competitor attorney for the prosecution/plaintiff conducts a direct examination of the witness. Once the direct examination is complete, the opposing team may cross-examine the witness. After the cross-examination, if the first team chooses, they may re-direct the witness; likewise, the other team may re-cross after a re-direct. However, re-direct and re-cross examinations are limited to the scope of the previous examination conducted by opposing counsel. This process is repeated for the two remaining plaintiff witnesses. Once the prosecution/plaintiff has finished with its witnesses, the defense may give its opening statement if it was not delivered before, and then the process is repeated with the defense witnesses, with the defense attorneys conducting direct examinations and the plaintiff attorneys cross-examining. Once all of the witnesses have been examined, the trial moves to closing arguments. The prosecutor/plaintiff again goes first and has the option to reserve time beforehand for rebuttal. After the defense finishes its closing argument, the plaintiff may give a rebuttal argument if time remains. In some competitions, the rebuttal is limited to the scope of the defense's closing argument. Time limits are set at each level of competition to prevent the trials from running too long and to keep rounds of competition running smoothly; the specific limits vary from state to state. Time is kept by the bailiff/timekeeper, who times their co-counsel's statements, examinations, and arguments. Time is stopped for objections.
Competitive school-related mock trials around the world:
Objections: A main part of mock trial is the raising and arguing of objections. Objections are raised when opposing counsel attempts to introduce evidence or testimony that goes against the rules of evidence. When an objection is raised, the judge may either overrule or sustain it immediately, or ask opposing counsel to argue why the testimony or evidence is admissible. Time is paused for objections, so an objection "battle" could theoretically go on for hours at a time until the judge makes a ruling; an overall time limit for the round, called "all-loss" time, often prevents this from occurring. In the case packet that contains the witness affidavits and other elements of the court case, mock trial students receive an abridged version of the rules of evidence on which to base their objections.
Competitive school-related mock trials around the world:
Judging: There are several different ways that a mock trial can be judged. In one, the panel consists of the presiding judge and two scoring judges, all of whom score the teams. In a second method, there are two scoring judges and a presiding judge, as in the first method, but the presiding judge does not score the teams; instead, the presiding judge simply casts a ballot for one team or the other. In yet another method, there are three scoring judges, and the presiding judge is not involved in scoring. Often at college invitationals, there are two scoring judges, one of whom doubles as the presiding judge. Since enticing attorneys to judge is notoriously difficult (judges are rarely compensated with more than a free lunch), it is rare to see more than two judges in a round at most competitions.
Competitive school-related mock trials around the world:
Unlike in a real trial, the victorious team does not necessarily have to win on the merits of the case. Instead, evaluators score individual attorneys and witnesses on a 1–10 scale (though some states use different scales for high school competitions) at each stage of the trial: the opening statements for the plaintiff and defense, each witness's testimony, the attorneys' direct and cross-examinations, and the closing statements for both sides. The team with the highest total number of points is often, but not always, the team that wins the judge's verdict. Given this method of scoring, it is possible for the defendant to be found guilty or lose the case but for the defense team to still win the round.
Competitive school-related mock trials around the world:
In some competitions, points can be deducted from a team's score for testifying with information outside the scope of the mock trial materials and for unsportsmanlike conduct or abuse of objections. Scores are ultimately at each judge's discretion, however, and are therefore subjective.
Competitive school-related mock trials around the world:
Power matching: In the first round of a tournament, teams are randomly matched against each other. After the first round of some tournaments, teams are "power matched" against other teams with similar records (e.g., in the second round, a 1–0 team will be matched with another 1–0 team). If there is a tie in record, the number of ballots and total points earned decide the matching. This allows teams to compete against other teams of similar skill.
Competitive school-related mock trials around the world:
Of course, there are practical exceptions to the theory of power matching. Tab room coordinators who are creating brackets for each round may deviate from the rules of power matching in order to (1) allow each team to alternate between prosecution/plaintiff and defense between rounds, (2) avoid two teams from the same school competing against each other (the Maryland Rule), and (3) avoid having a team compete against a team it played in a prior round.
Competitive school-related mock trials around the world:
National championship format: In the national championship format, which is also employed by invitationals across the country, the tournament is power matched through the last round. While this determines the strongest team at the tournament overall, it does not provide an accurate ranking of the 2nd, 3rd, and 4th place teams, because they might have lost multiple ballots to a strong team that placed first.
Competitive school-related mock trials around the world:
Power protection: When tournaments are power protected, the first rounds of competition are power matched as stated above. In the last round(s), however, the team with the strongest record is paired against the team with the weakest record. This ensures that the best teams do not knock each other out of the running for a rank: the stronger team is anticipated to win and protect its chance at a rank, while no harm is done to the weaker team, which is already out of reach of a trophy. Additionally, this method gives weaker teams exposure to stronger programs that they can learn from.
Competitive school-related mock trials around the world:
Levels of competition – Elementary school: At the elementary school level, the American Bar Association's mock trial guide suggests using role-playing from scripted mock trials, such as fairy tale mock trials, as a way to introduce the concept of conflicts, trials, jury verdicts in civil trials, the vocabulary of the court, damages, and the roles of the individuals portrayed in a trial. Most mock trials at this level are therefore non-competitive classroom activities. There is no national competition for this level, but a few states offer intrastate competitions, in a couple of different formats.
Competitive school-related mock trials around the world:
The first format focuses on how the cases are developed, as in the New Jersey Law Fair competitions. Students in grades 3 to 6 are asked to generate, develop and write a case from their own idea, including the details of the facts, issue, witnesses, statements, instructions, sub-issues, concepts and law. As students in this age range may not know the details of applicable law, they are allowed to create their own law. There are no specific themes; students can choose any age-appropriate topic. The students are encouraged to role-play a mock trial based on the script that they have developed, involving other students in the classroom as jurors, in order to refine their case. Each team plays the roles of both sides of its case during the mock trial, and the winners perform their case in a real court. In North Carolina, the competition has a different format which focuses more on presentation skills: all teams are given the same case, which has been written prior to the competition, and students are asked to role-play the case. The North Carolina Elementary School Mock Trial Competition uses the winning entries from the New Jersey Law Fair of the prior year as its cases. Another format follows the standard format of the high school level but with fewer technical restrictions: all teams are given the same material related to a case and prepare for the competition, and two teams compete in a live mock trial to represent the two sides of the case. This format is used in the New Hampshire Bar Association's Mock Trial Competition, although the first round of that competition is done by video submission, in which each team performs both sides of the case; the qualifying teams are then invited to the live competition, with each team taking one side of the case.
Competitive school-related mock trials around the world:
Middle school: Although there is no national competition at the middle/junior high school level, there are many intrastate competitions held by state- and county-level organizers. With the exception of New Jersey, these competitions have a format similar to high school competitions, with some rules relaxed. In New Jersey, the format is similar to the New Jersey Law Fair at the elementary school level, with two specific themes from which teams can choose to develop their cases.
Competitive school-related mock trials around the world:
High school: The mock trial program was started to allow high school students to experience a courtroom setting in a variety of hands-on roles. The mock trials are set up and structured just like a real court and are bound by the same rules. This helps students learn exactly what role each of the different people in a court (judges, lawyers, witnesses, etc.) plays in the judicial system. High school competitions are even held in functioning courtrooms in the local city hall to lend additional authenticity to the trial; they are presided over by real judges, and the competing teams are typically coached and scored by practicing attorneys. Cases typically deal with problems faced by teens and include competitors as witnesses. Each year, the case for pre-Nationals competitions alternates between a civil case and a criminal case. The cases are set up in such a way that the witnesses, defendant, and prosecution (or plaintiff, for a civil case) are all given gender-neutral names, to prevent gender-based arguments from being brought up in the final arguments.
Competitive school-related mock trials around the world:
The National High School Mock Trial Championship began in 1984. The first competition consisted of teams from Illinois, Iowa, Minnesota, Nebraska, and Wisconsin. The competition has since grown and is now considered an all-state tournament, with various participating states around the country taking turns hosting it each year. Only three teams in the tournament's history have won the competition in consecutive years: Family Christian Academy Homeschoolers from Tennessee (now CSTHEA) in 2002–2003, Jonesboro High School from Georgia in 2007–2008, and Albuquerque Academy from New Mexico in 2012–2013. The 2011 Championship was held in Phoenix, Arizona. Albuquerque, New Mexico hosted in 2012; Indianapolis, Indiana in 2013; Madison, Wisconsin in 2014; and Raleigh, North Carolina in 2015. The 2016 competition was held in Boise, Idaho. Hartford, Connecticut hosted in 2017, and Reno, Nevada in 2018. The 2019 National High School Mock Trial Championship was held in Athens, Georgia. New York State does not participate in the national competition; rather, it has its own intrastate competition consisting of over 350 teams throughout the state, following rules similar to those of the national competition, with three levels of play: county competition, regional competition, and the finals, held in Albany, New York in May. Before 2021, the state of Maryland did not compete in the National High School Mock Trial Championship and had its own statewide mock trial competition similar to New York's; it competed in the national competition for the first time in 2021 and was crowned champion. New Jersey and North Carolina both pulled out of the NHSMTC following the 2005 season due to a refusal by the organization to accommodate an Orthodox Jewish team, Torah Academy of Bergen County, that had won New Jersey's state championship; both states rejoined in 2010 after their concerns regarding accommodation had been addressed. Each state has its own case every year that is different from the national case. This means that the winners of each state competition, who move on to nationals, must study and prepare a completely different case in time for the National High School Mock Trial Championship in May. The national competition is governed by National Mock Trial Championship, Inc.
Competitive school-related mock trials around the world:
College: Inter-collegiate mock trial is governed by the American Mock Trial Association (AMTA). This organization was founded in 1985 by Richard Calkins, the dean of Drake University Law School. AMTA sponsors regional and national-level competitions, writes and distributes case packets and rules, and keeps a registry of mock trial competitors and alumni. The case packet is generally written and distributed prior to the scholastic year in August, and case changes are made throughout the season, usually in September, December, and finally in February, after Regional competitions and prior to the Opening Round of Championships. Since 2015, AMTA has released a new case to be used for the National Championship following the completion of the Opening Round of Championships. Approximately 700 teams from over 400 universities and colleges compete in AMTA tournaments; in total, AMTA provides a forum for over 7,300 undergraduate students each academic year to engage in intercollegiate mock trial competitions across the country. On the inter-collegiate circuit, a mock trial team consists of three attorneys and three witnesses on each side of the case (plaintiff/prosecution and defense). The attorneys are responsible for delivering an opening statement, conducting direct and cross-examinations of witnesses, and delivering closing arguments. Witnesses are selected in a sports-draft format from a pool of approximately eight to ten available witnesses prior to the round. Typical draft orders are DPDPDP, PPPDDD, or DDPPPD, but this may vary substantially between cases. Witnesses may be available only to the plaintiff/prosecution, only to the defense, or to both sides of the case. Witnesses include both experts and lay witnesses. Judges are usually attorneys, coaches, law school students, and, on some occasions, practicing judges.
Competitive school-related mock trials around the world:
All collegiate mock trial cases take place in the fictional state of Midlands, USA. Midlands is not geographically situated and falls under the protection of the United States Constitution.
Competitive school-related mock trials around the world:
Tournament competition: A tournament consists of four rounds, two on each side of the case, typically scored by two to three judges in each round. The season runs in two parts: the invitational season and the regular season. Invitational tournaments are held throughout the fall semester and into early spring across the country. At invitationals, teams have the opportunity to test out particular case theories and improve as competitors before facing the challenge and pressure of regular-season competition.
Competitive school-related mock trials around the world:
The regular season begins in late January, starting with regional tournaments. There are typically more than 600 teams spread across 24 regional tournaments. Each school is limited to two post-regional bids to the "Opening Round Championship Series" (ORCS). 192 teams advance to the ORCS, which is held at eight different tournament sites. The top teams at each ORCS tournament qualify for a berth in the National Championship Tournament, for 48 total bids to the final tournament. Previously, teams could earn up to two bids to either the National Championship Tournament (gold flight) or a National Tournament (silver flight) based on performance at Regionals; the two National Tournaments, held in March, consisted of 48 teams each, with the top 6 teams at each National earning a second-chance bid to the National Championship Tournament, held in April. The direct-bid system was replaced by the current ORCS system in 2008. For 22 years, the National Championship Tournament was held in Des Moines, Iowa, the city in which collegiate mock trial began. The tournament left Iowa for the first time in 2007, when Stetson University in St. Petersburg, Florida hosted the Championship. The 2008 National Championship Tournament was held in Minneapolis, Minnesota. Between 2009 and 2011, the Championship returned to Des Moines in odd-numbered years, while even-numbered years featured a different venue: the 2010 Championship was hosted by Rhodes College at courthouses in downtown Memphis, Tennessee, while the 2012 Championship was held in Minneapolis. Beginning in 2013, Championships have been awarded solely through a competitive bidding process, although Des Moines, if it bids, is given preference during "landmark" years, such as anniversaries of AMTA's founding in 1985. The 2013 Championship was held in Washington, D.C., with the University of Virginia handling hosting duties. The 2014 Championship was held in Orlando, Florida at the Orange County Courthouse, with the University of Central Florida serving as the host institution. The 2015 Championship was hosted by the University of Cincinnati, with trials held at the Hamilton County Courthouse in Cincinnati. The 2016 Championship was hosted by Furman University in Greenville, South Carolina. In 2017, the Championship was hosted by the University of California, Los Angeles at the Los Angeles County Superior Court (Stanley Mosk Courthouse) in downtown LA. In 2018, the Championship was hosted by Hamline University in Minneapolis, Minnesota.
Competitive school-related mock trials around the world:
Past championship results: In 2006, the University of Virginia beat Harvard University to win the National Championship. Virginia won by a single point on a tiebreaker, after a three-judge panel split, with one judge choosing Virginia as the winner, one choosing Harvard, and one calling the round a draw. Virginia's victory ended the then-recent run by UCLA, which had won the two previous national championships. In 2007, the University of Virginia again defeated Harvard University. This marked the first-ever rematch of a previous year's final round. Virginia again won via a split decision, winning two of the three ballots in the final round, and became the fourth school to repeat as champions, joining UCLA, the University of Iowa, and Rhodes College, which accomplished the feat twice. Harvard became the second program to finish as runner-up in consecutive years, joining the University of Maryland, College Park; Maryland, however, had the distinction of losing to another team from its own school in one of those two defeats.
Competitive school-related mock trials around the world:
In 2008, the University of Maryland prevailed over the George Washington University in a split-ballot decision (2–1). This was Maryland's fifth title, giving them more total wins than any other university in AMTA history.
In 2009, Northwood University defeated George Washington University 5–0 to claim its first national championship.
In 2010, New York University defeated Harvard University 3–1–1 to win its first national championship. This was Harvard's third championship round appearance in the last five years following its consecutive losses to Virginia.
In 2011, UCLA defeated defending champion New York University 4–1 to claim the Bruins' third title, the third-most in the history of the American Mock Trial Association.
In 2012, Duke defeated Rutgers 2–1, in what was the first championship round appearance for both squads.
In 2013, Florida State University defeated Rhodes College in FSU's first championship round appearance by a 4–1 ballot decision. This was Rhodes' eighth championship round appearance to date. 2013 also marked the first year that the National Championship Tournament had 3 scoring judges per round (instead of 2).
In 2014, UCLA defeated Princeton University in a 3–2 ballot decision. With this victory, UCLA tied Rhodes for the second-highest record of championships (4 wins), behind University of Maryland, College Park (5 wins). This round was also Princeton's first championship round appearance.
Competitive school-related mock trials around the world:
In 2015, Harvard defeated Yale in a 4–0–1 ballot decision. This marked Harvard's first championship win, despite having been the runner-up three times previously (in 2006, 2007, and 2010). 2015 also marked Yale's first championship round appearance. Finally, 2015 marked the first year that the case problem for the National Championship Tournament was different from the case schools had been using in the earlier elimination rounds: in the past, colleges had argued the same case all year long, but starting in 2015, any team that qualified for the championship tournament was given a brand-new case to learn and argue in the span of just a few weeks. In 2016, Yale won its first national championship in an 11–3–1 ballot decision, defeating the University of Virginia in the final trial. Yale is one of only nine schools to have competed in the final trial of the National Championship Tournament two years in a row. 2016 also marked the first year that the National Championship Tournament had 5 scoring judges per round, and 15 scoring judges in the final championship round. In 2017, Virginia defeated Yale in a 6–1 ballot decision in a rematch of the 2016 final. Yale's appearance marked the first time in collegiate championship history that the same school appeared three consecutive times in the final championship round.
Competitive school-related mock trials around the world:
In 2018, Miami University of Ohio defeated Yale in a 4–3 ballot decision. This was Miami's second championship, the first having come in 2001. Yale extended its record of consecutive championship round appearances to four, having been the runner-up in 2015, the champion in 2016, and the runner-up in 2017.
Competitive school-related mock trials around the world:
In 2019, Yale University defeated Rhodes College in a 5–2 ballot decision. The decision was then vacated by the governing body after an investigation concluded that Yale had violated tournament rules; therefore, no winner was declared. In 2020, the national championship (as well as several preliminary qualifier tournaments) was canceled due to the COVID-19 pandemic. In 2021, due to the ongoing pandemic, the national championship took place entirely remotely via video calls. In what was the closest final round in AMTA history, the University of Maryland, Baltimore County beat Yale when the panel of 11 judges split 5–5 (with one tie), resulting in a narrow tiebreaker. In 2022, Harvard University defeated the University of Chicago in a 3–2 ballot decision. This marked Harvard's second championship title and Chicago's first-ever appearance in a championship round. In 2023, UCLA defeated Harvard University in a 5–4 ballot decision. This marked UCLA's fifth championship title and Harvard's sixth appearance in a championship round; UCLA's victory moved it into a tie with the University of Maryland, College Park for the most championship victories. Notes on the list of National Championship Tournament winners, runners-up, and championship round participants: † The 2021 National Championship Tournament was held online due to the COVID-19 pandemic. ‡ The 2020 National Championship Tournament was cancelled due to the COVID-19 pandemic. * From 1992 until 2010, the "Maryland Rule" was in effect, which placed both teams from the same school in the same division in order to ensure there would not be another championship round between two teams from the same school; the Maryland Rule was repealed before the 2010–11 season. * Yale also participated in the Championship Round in 2019 and won the initial judges' decision, but was later stripped of its title for violating tournament rules.
Competitive school-related mock trials around the world:
Trial by Combat: Beginning in 2018, Trial by Combat (TBC) has existed as an AMTA-adjacent mock trial competition. The tournament is held after the end of the competitive season (marked by the conclusion of the final round of the National Championship), typically in late June. It is currently co-hosted by the Drexel University Thomas R. Kline School of Law and the UCLA School of Law. The tournament "aims to celebrate the best individual college competitors in the country—and identify the very best". The format of Trial by Combat is markedly different from official AMTA tournaments. Rather than a successive narrowing of a field of teams throughout a competitive season, individual competitors submit applications to take part in the tournament, and the top 16 applicants are selected to compete. The competitors are chosen based on their respective school's caliber of competition, the number of awards received at tournaments, and appearances at ORCS and Nationals, among other factors. The selected competitors then receive the case materials on the morning of the first day of the competition and have 24 hours to prepare. Each trial has one attorney and one witness per side. During the four preliminary rounds, each competitor performs each role once. Instead of the typical 1-to-10 scale for judging each performance, judges award a check mark to the competitor whose performance they preferred in each category. The four top-ranked students then proceed to the semifinals, and the winners of the semifinals compete as attorneys in the Championship Trial. The winner of the Championship Trial receives a full-size sword as a prize. A note on the list of Trial by Combat winners, runners-up, and the universities they represented: * The 2020 and 2021 Trial by Combat tournaments were held as online competitions due to the COVID-19 pandemic.
Competitive school-related mock trials around the world:
Forums and social media: Social media and forums have allowed competitors to communicate and share information and opinions. Some mock trial teams have created forums for themselves on Facebook, and spoofs of characters from various cases have made their way onto Facebook with their own profiles. Two collegiate mock trial alumni, Ben Garmoe and Drew Evans, created a podcast on the subject, entitled The Mock Review. The American Mock Trial Association has a Twitter feed which provides updates on procedures and tournament results. The most prolific presence of collegiate mock trial on the internet was the web forum Perjuries, the national online mock trial community. On Perjuries, mock trial competitors, coaches, and alumni could create user accounts and discuss a wide range of related topics. The site had approximately 5,000 registered users, 2,000 discussions, and 88,000 posts.
Competitive school-related mock trials around the world:
The Perjuries forum is now defunct. Many of the posts from the forum have been archived on the replacement website, Impeachments.
Law school: In the United States, law schools participate in interscholastic mock trial/trial advocacy. Teams typically consist of several "attorneys" and several "witnesses" on each side. A round consists of two law students acting as "attorneys" for each side.
The trial typically, although not always, begins with motions in limine and housekeeping matters, then moves through opening statements, witness testimony (both direct examination and cross examination), and finishes with a closing argument, sometimes called a summation. Throughout the trial, rules of evidence apply, typically the Federal Rules of Evidence, and objections are made applying these rules.
Competitive school-related mock trials around the world:
Every team in a tournament is given the same "problem" or "case", typically several months in advance, but for some tournaments only a few weeks ahead of the tournament's start. The problems can be criminal or civil, which affects many procedural aspects of the trial, for instance a criminal defendant's right not to testify against himself. The cases are written in an attempt to give either side an equal chance of prevailing, since the main objective is not to identify the winner of the case but rather the team with superior advocacy skills.
Competitive school-related mock trials around the world:
Occasionally the winners of mock trial tournaments receive special awards such as money or invitations to special events, but the status of winning a tournament is significant in and of itself.
In addition, a university may require a mock trial course or courses as a requirement for graduation; among such universities is Baylor Law School, whose third-year Practice Court courses are mandatory for all students and have been since the school's reopening in the 1920s.
Competitive school-related mock trials around the world:
Mock trial competitions include the Georgetown White Collar Crime Invitational Mock Trial Competition, the American Association for Justice (formerly ATLA) Student Trial Advocacy Competition, the National Civil Trial Competition, the Texas Young Lawyers Association National Trial Competition (NTC), the Michigan State University National Trial Advocacy Competition (NTAC), the California Association of Criminal Justice (CACJ) Mock Trial Competition, the Capitol City Challenge, the National Ethics Trial Competition at Pacific McGeorge School of Law, and the Lone Star Classic National Mock Trial Tournament.
In arts, entertainment, and media:
Mock Trial is a 1910 card game developed by Lizzie Magie.
In arts, entertainment, and media:
In an episode of the American television series The Fugitive, a once-famed attorney and current law professor named G. Stanley Lazer claims that he could reverse Richard Kimble's criminal conviction if the case went back to trial. To Kimble's chagrin, Lazer decides to prove his theory by conducting a mock trial with his students playing the prosecutor, defense lawyer, and jury in front of a live TV audience.
In arts, entertainment, and media:
In an episode of the American television show Suits, Mike Ross, an employee of one of New York's top law firms, goes head-to-head with one of his co-workers in a mock trial.
In the American television series Shark, the protagonist has a room in his house specially designed like a courtroom, used to conduct mock trials before important cases; it appeared in a couple of episodes.
In arts, entertainment, and media:
In the season 1 episode "Mock" of the American television show The Good Wife, Will Gardner, a partner at Lockhart Gardner, presides as a judge over a law school mock trial. Additionally, in the season 4 episode "Red Team, Blue Team", Alicia and Cary, employees of Lockhart & Gardner, go head-to-head with their co-workers, Will and Diane, in a mock trial. Season 6 episode "Loser Edit" also featured a mock trial involving Diane as a prosecutor.
In arts, entertainment, and media:
In an episode of Arrested Development, Judge Reinhold has a courtroom TV show called Mock Trial with J. Reinhold, in which he plays a fake judge; the Bluth family uses the show as a trial run for their legal defense against the SEC.
In arts, entertainment, and media:
In Phoenix Wright: Ace Attorney – Dual Destinies (Gyakuten Saiban 5 in Japan), there is a case called "Turnabout Academy" in which a mock trial ends up becoming an actual trial: the teacher who decided which script would be used for the mock trial is found dead after it ends, Juniper Woods is accused of murdering her teacher, and her old friend Athena Cykes decides to defend her in court.
**Guignolet**
Guignolet:
Guignolet (pronounced [ɡiɲɔlɛ]) is a French wild cherry liqueur.
It is widely available in France, including at supermarkets such as Casino and others, but is not widely available internationally.
A leading producer is the company Giffard in Angers, France, the same town where Cointreau is produced. The Cointreau brothers were instrumental in its reinvention, the original recipe having been lost.
Composition and etymology:
It obtains its name from guigne, one of a few species of cherry used in its production. (Black cherries and sour cherries are also used.) It has an alcohol content between 16 and 18° proof (ca. 12%) and has an aroma vaguely reminiscent of whiskey and a very sweet taste.
Uses:
It is drunk neat as an aperitif.
The cocktail guignolo is composed of guignolet, champagne and cherry juice.
**Geodesic deviation**
Geodesic deviation:
In general relativity, if two objects are set in motion along two initially parallel trajectories, the presence of a tidal gravitational force will cause the trajectories to bend towards or away from each other, producing a relative acceleration between the objects. Mathematically, the tidal force in general relativity is described by the Riemann curvature tensor, and the trajectory of an object solely under the influence of gravity is called a geodesic. The geodesic deviation equation relates the Riemann curvature tensor to the relative acceleration of two neighboring geodesics. In differential geometry, the geodesic deviation equation is more commonly known as the Jacobi equation.
Mathematical definition:
To quantify geodesic deviation, one begins by setting up a family of closely spaced geodesics indexed by a continuous variable $s$ and parametrized by an affine parameter $\tau$. That is, for each fixed $s$, the curve swept out by $\gamma_s(\tau)$ as $\tau$ varies is a geodesic. When considering the geodesic of a massive object, it is often convenient to choose $\tau$ to be the object's proper time. If $x^\mu(s, \tau)$ are the coordinates of the geodesic $\gamma_s(\tau)$, then the tangent vector of this geodesic is $T^\mu = \frac{\partial x^\mu(s,\tau)}{\partial \tau}$.
Mathematical definition:
If $\tau$ is the proper time, then $T^\mu$ is the four-velocity of the object traveling along the geodesic.
One can also define a deviation vector, which is the displacement of two objects travelling along two infinitesimally separated geodesics: $X^\mu = \frac{\partial x^\mu(s,\tau)}{\partial s}$.
The relative acceleration $A^\mu$ of the two objects is defined, roughly, as the second derivative of the separation vector $X^\mu$ as the objects advance along their respective geodesics. Specifically, $A^\mu$ is found by taking the directional covariant derivative of $X$ along $T$ twice: $A^\mu = T^\alpha \nabla_\alpha \left( T^\beta \nabla_\beta X^\mu \right)$.
The geodesic deviation equation relates $A^\mu$, $T^\mu$, $X^\mu$, and the Riemann tensor $R^\mu{}_{\nu\rho\sigma}$: $A^\mu = R^\mu{}_{\nu\rho\sigma}\, T^\nu T^\rho X^\sigma$.
An alternate notation for the directional covariant derivative $T^\alpha \nabla_\alpha$ is $D/d\tau$, so the geodesic deviation equation may also be written as $\frac{D^2 X^\mu}{d\tau^2} = R^\mu{}_{\nu\rho\sigma}\, T^\nu T^\rho X^\sigma$.
Mathematical definition:
The geodesic deviation equation can be derived from the second variation of the point-particle Lagrangian along geodesics, or from the first variation of a combined Lagrangian. The Lagrangian approach has two advantages. First, it allows various formal approaches of quantization to be applied to the geodesic deviation system. Second, it allows deviation to be formulated for much more general objects than geodesics (any dynamical system whose momentum carries a single spacetime index appears to have a corresponding generalization of geodesic deviation).
Weak-field limit:
The connection between geodesic deviation and tidal acceleration can be seen more explicitly by examining geodesic deviation in the weak-field limit, where the metric is approximately Minkowski and the velocities of test particles are assumed to be much less than $c$. Then the tangent vector $T^\mu$ is approximately $(1, 0, 0, 0)$; i.e., only the timelike component is nonzero.
The spatial components of the relative acceleration are then given by $A^i = -R^i{}_{0j0}\, X^j$, where $i$ and $j$ run only over the spatial indices 1, 2, and 3.
In the particular case of a metric corresponding to the Newtonian potential $\Phi(x, y, z)$ of a massive object at $x = y = z = 0$, we have $R^i{}_{0j0} = \frac{\partial^2 \Phi}{\partial x^i \partial x^j}$, which is the tidal tensor of the Newtonian potential.
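As a brief worked illustration (the point-mass potential below is an added example, not part of the original text), taking $\Phi = -GM/r$ and differentiating twice recovers the familiar stretch-and-squeeze pattern of tides:

```latex
% Tidal tensor of a point mass, \Phi = -GM/r (illustrative example):
\frac{\partial^2 \Phi}{\partial x^i \partial x^j}
  = \frac{GM}{r^3}\left(\delta_{ij} - \frac{3\, x_i x_j}{r^2}\right),
\qquad
A^i = -R^i{}_{0j0}\, X^j
    = \frac{GM}{r^3}\left(\frac{3\, x_i x_j}{r^2} - \delta_{ij}\right) X^j .
% For a separation X along the radial direction this gives A = +2GM X / r^3
% (tidal stretching); for a transverse separation, A = -GM X / r^3 (compression).
```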
**Afimoxifene**
Afimoxifene:
Afimoxifene, also known as 4-hydroxytamoxifen (4-OHT) and by its tentative brand name TamoGel, is a selective estrogen receptor modulator (SERM) of the triphenylethylene group and an active metabolite of tamoxifen. The drug is under development under the tentative brand name TamoGel as a topical gel for the treatment of hyperplasia of the breast. It has completed a phase II clinical trial for cyclical mastalgia, but further studies are required before afimoxifene can be approved for this indication and marketed. Afimoxifene is a SERM and hence acts as a tissue-selective agonist–antagonist of the estrogen receptors ERα and ERβ, with mixed estrogenic and antiestrogenic activity depending on the tissue. It is also an agonist of the G protein-coupled estrogen receptor (GPER) with relatively low affinity (100–1,000 nM, relative to 3–6 nM for estradiol). In addition to its estrogenic and antiestrogenic activity, afimoxifene has been found to act as an antagonist of the estrogen-related receptors (ERRs) ERRβ and ERRγ.
**Spectral energy distribution**
Spectral energy distribution:
A spectral energy distribution (SED) is a plot of energy versus frequency or wavelength of light (not to be confused with a 'spectrum' of flux density vs frequency or wavelength). It is used in many branches of astronomy to characterize astronomical sources. For example, in radio astronomy they are used to show the emission from synchrotron radiation, free-free emission and other emission mechanisms. In infrared astronomy, SEDs can be used to classify young stellar objects.
Detector for spectral energy distribution:
The count rates observed from a given astronomical radiation source have no simple relationship to the flux from that source, such as might be incident at the top of the Earth's atmosphere. This lack of a simple relationship is due in no small part to the complex properties of radiation detectors. These detector properties can be divided into those that merely attenuate the beam (residual atmosphere between source and detector, absorption in the detector window when present, and the quantum efficiency of the detecting medium) and those that redistribute the beam in detected energy (such as fluorescent photon escape phenomena and the inherent energy resolution of the detector).
**QML**
QML:
QML (Qt Modeling Language) is a user interface markup language. It is a declarative language (similar to CSS and JSON) for designing user interface–centric applications. Inline JavaScript code handles imperative aspects. It is associated with Qt Quick, the UI creation kit originally developed by Nokia within the Qt framework. Qt Quick is used for mobile applications where touch input, fluid animations and user experience are crucial. QML is also used with Qt3D to describe a 3D scene and a "frame graph" rendering methodology. A QML document describes a hierarchical object tree. QML modules shipped with Qt include primitive graphical building blocks (e.g., Rectangle, Image), modeling components (e.g., FolderListModel, XmlListModel), behavioral components (e.g., TapHandler, DragHandler, State, Transition, Animation), and more complex controls (e.g., Button, Slider, Drawer, Menu). These elements can be combined to build components ranging in complexity from simple buttons and sliders, to complete internet-enabled programs.
QML:
QML elements can be augmented by standard JavaScript both inline and via included .js files. Elements can also be seamlessly integrated and extended by C++ components using the Qt framework.
QML is the language; since Qt 5.2 its JavaScript runtime is the custom V4 engine; and Qt Quick is the 2D scene graph and the UI framework based on it. These are all part of the Qt Declarative module, though the technology is no longer called Qt Declarative.
QML and JavaScript code can be compiled into native C++ binaries with the Qt Quick Compiler. Alternatively there is a QML cache file format which stores a compiled version of QML dynamically for faster startup the next time it is run.
Adoption:
QML is used in KDE Plasma 4 and KDE Plasma 5 (through Plasma Framework), Liri OS, the Simple Desktop Display Manager, the reMarkable tablet device, Unity 2D, Sailfish OS, BlackBerry 10, MeeGo, Maemo, Tizen, Mer, Ubuntu Phone, the Lumina desktop environment, and many open-source applications.
Syntax, semantics:
Basic syntax: Objects are specified by their type, followed by a pair of braces. Object types always begin with a capital letter. In the example sketched below, there are two objects: a Rectangle and its child, an Image. Between the braces, one can specify information about the object, such as its properties.
Properties are specified as property: value. In the example below, the Image has a property named source, which has been assigned the value pics/logo.png. The property and its value are separated by a colon.
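A minimal sketch consistent with this description (only the object nesting and the source property come from the text; the dimensions and color are illustrative assumptions):

```qml
import QtQuick 2.15

// A Rectangle with a child Image, as described above.
Rectangle {
    width: 200       // illustrative size
    height: 100
    color: "red"     // illustrative color

    Image {
        // The source property holds the path to the image file.
        source: "pics/logo.png"
    }
}
```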
The id property: Each object can be given a special unique property called an id. Assigning an id enables the object to be referred to by other objects and scripts.
The first Rectangle element below has an id, myRect. The second Rectangle element defines its own width by referring to myRect.width, which means it will have the same width value as the first Rectangle element.
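A sketch of the described pair of rectangles (the heights, colors, and enclosing Column are illustrative assumptions):

```qml
import QtQuick 2.15

Column {
    Rectangle {
        id: myRect          // this id lets other objects refer to this rectangle
        width: 120
        height: 40          // illustrative
        color: "steelblue"  // illustrative
    }
    Rectangle {
        width: myRect.width // same width as the first rectangle
        height: 40
        color: "lightsteelblue"
    }
}
```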
Note that an id must begin with a lower-case letter or an underscore, and cannot contain characters other than letters, digits and underscores.
Property bindings: A property binding specifies the value of a property in a declarative way. The property value is automatically updated if the other properties or data values change, following the reactive programming paradigm.
Property bindings are created implicitly in QML whenever a property is assigned a JavaScript expression. The following QML uses two property bindings to connect the size of the rectangle to that of otherItem.
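A hedged sketch of such bindings (the surrounding Item and the otherItem rectangle are assumptions added to make the snippet self-contained):

```qml
import QtQuick 2.15

Item {
    width: 300; height: 300

    Rectangle {
        id: otherItem
        width: 100; height: 100
    }

    Rectangle {
        // Two property bindings: these re-evaluate automatically
        // whenever otherItem's width or height changes.
        width: otherItem.width
        height: otherItem.height
        color: "green"
    }
}
```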
QML extends a standards-compliant JavaScript engine, so any valid JavaScript expression can be used as a property binding. Bindings can access object properties, make function calls, and even use built-in JavaScript objects like Date and Math.
Syntax, semantics:
States: States are a mechanism to combine changes to properties in a semantic unit. A button, for example, has a pressed and a non-pressed state; an address book application could have a read-only and an edit state for contacts. Every element has an "implicit" base state. Every other state is described by listing the properties and values of those elements which differ from the base state.
Syntax, semantics:
Example: In the default state, myRect is positioned at 0,0. In the "moved" state, it is positioned at 50,50. Clicking within the mouse area changes the state from the default state to the "moved" state, thus moving the rectangle; a sketch of this example follows.
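A hedged reconstruction of the example just described (the root item size and the rectangle's color are assumptions):

```qml
import QtQuick 2.15

Item {
    width: 200; height: 200

    Rectangle {
        id: myRect
        width: 100; height: 100
        color: "red"        // illustrative
        // Implicit base state: x = 0, y = 0
    }

    MouseArea {
        anchors.fill: parent
        onClicked: myRect.state = "moved"  // switch to the "moved" state
    }

    states: [
        State {
            name: "moved"
            // Only the properties that differ from the base state are listed.
            PropertyChanges { target: myRect; x: 50; y: 50 }
        }
    ]
}
```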
State changes can be animated using Transitions.
For example, adding a Transition to the above Item element animates the change to the "moved" state; a sketch appears after the following overview. Animation: Animations in QML are done by animating properties of objects. Properties of type real, int, color, rect, point, size, and vector3d can all be animated.
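Returning to the "moved" example, a Transition fragment such as this sketch could be placed inside the Item element (the duration and easing curve are illustrative assumptions):

```qml
transitions: [
    Transition {
        to: "moved"  // apply when entering the "moved" state
        // Animate the x and y property changes instead of jumping.
        NumberAnimation { properties: "x,y"; duration: 500; easing.type: Easing.OutQuad }
    }
]
```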
QML supports three main forms of animation: basic property animation, transitions, and property behaviors.
The simplest form of animation is a PropertyAnimation, which can animate all of the property types listed above.
A property animation can be specified as a value source using the Animation on property syntax. This is especially useful for repeating animations.
The following example creates a bouncing effect:
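A hedged sketch of such a bouncing effect using the Animation on property value-source syntax (the image path, sizes, durations, and easing curves are illustrative assumptions):

```qml
import QtQuick 2.15

Rectangle {
    width: 120; height: 200

    Image {
        id: ball
        source: "ball.png"   // hypothetical asset
        x: 40

        // Value-source syntax: the animation drives the y property directly.
        SequentialAnimation on y {
            loops: Animation.Infinite
            NumberAnimation { to: 150; duration: 500; easing.type: Easing.InQuad }   // fall
            NumberAnimation { to: 0;   duration: 500; easing.type: Easing.OutQuad }  // rise
        }
    }
}
```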
Qt/C++ integration:
QML does not require Qt/C++ knowledge to use, but it can be easily extended via Qt. Any C++ class derived from QObject can be registered as a type which can then be instantiated in QML.
Qt/C++ integration:
Familiar concepts: QML provides direct access to the following concepts from Qt: QObject signals – can trigger callbacks in JavaScript; QObject slots – available as functions to call in JavaScript; QObject properties – available as variables in JavaScript, and for bindings; QWindow – Window creates a QML scene in a window; Q*Model – used directly in data binding (e.g., QAbstractItemModel). Signal handlers: Signal handlers are JavaScript callbacks which allow imperative actions to be taken in response to an event. For instance, the MouseArea element has signal handlers to handle mouse press, release and click. All signal handler names begin with "on".
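A sketch of MouseArea signal handlers (the console messages are illustrative):

```qml
import QtQuick 2.15

Rectangle {
    width: 100; height: 100

    MouseArea {
        anchors.fill: parent
        // Handler names are "on" followed by the capitalized signal name.
        onPressed: console.log("mouse pressed")
        onReleased: console.log("mouse released")
        onClicked: console.log("mouse clicked")
    }
}
```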
Development tools:
Because QML and JavaScript are very similar, almost all code editors supporting JavaScript will work. However, full support for syntax highlighting, code completion, integrated help, and a WYSIWYG editor is available in the free cross-platform IDE Qt Creator since version 2.1, as well as in many other IDEs.
The qml executable can be used to run a QML file as a script. If the QML file begins with a shebang, it can be made directly executable. However, packaging an application for deployment (especially on mobile platforms) generally involves writing a simple C++ launcher and packaging the necessary QML files as resources.
**Cumene hydroperoxide**
Cumene hydroperoxide:
Cumene hydroperoxide is the organic compound with the formula C6H5C(CH3)2OOH. An oily liquid, it is classified as an organic hydroperoxide. Its decomposition products are methylstyrene, acetophenone, and 2-phenyl-2-propanol. It is produced by treatment of cumene with oxygen, an autoxidation. At temperatures above 100 °C, oxygen is passed through liquid cumene: C6H5(CH3)2CH + O2 → C6H5(CH3)2COOH. Dicumyl peroxide is a side product.
Applications:
Cumene hydroperoxide is an intermediate in the cumene process for producing phenol and acetone from benzene and propene.
Applications:
Cumene hydroperoxide is a free-radical initiator for the production of acrylates. It is also involved, as the organic peroxide, in the manufacture of propylene oxide by the oxidation of propylene, a technology commercialized by Sumitomo Chemical. The oxidation of propylene by cumene hydroperoxide affords propylene oxide and the byproduct 2-phenyl-2-propanol (cumyl alcohol). The reaction follows this stoichiometry: CH3CHCH2 + C6H5(CH3)2COOH → CH3CHCH2O + C6H5(CH3)2COH. Dehydrating and hydrogenating the cumyl alcohol recycles the cumene.
Safety:
Cumene hydroperoxide, like all organic peroxides, is potentially explosive. It is also toxic, corrosive and flammable as well as a skin-irritant.
**Clinical and Translational Science**
Clinical and Translational Science:
Clinical and Translational Science is a bimonthly peer-reviewed open-access medical journal covering translational medicine. It is published by Wiley-Blackwell and is an official journal of the American Society for Clinical Pharmacology and Therapeutics. The journal was established in 2008 and the editor-in-chief is John A. Wagner (Cygnal Therapeutics).
Abstracting and indexing:
The journal is abstracted and indexed in multiple bibliographic databases. According to the Journal Citation Reports, its 2020 impact factor is 4.689.
**Leucine N-acetyltransferase**
Leucine N-acetyltransferase:
In enzymology, a leucine N-acetyltransferase (EC 2.3.1.66) is an enzyme that catalyzes the chemical reaction: acetyl-CoA + L-leucine ⇌ CoA + N-acetyl-L-leucine. Thus, the two substrates of this enzyme are acetyl-CoA and L-leucine, whereas its two products are CoA and N-acetyl-L-leucine.
This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:L-leucine N-acetyltransferase. This enzyme is also called leucine acetyltransferase.
**Rule of six (viruses)**
Rule of six (viruses):
The rule of six is a feature of some paramyxovirus genomes. These RNA viruses have genes made of RNA rather than DNA, and the length of their whole genome – that is, the total number of nucleotides – is always a multiple of six. This is because, during replication, these viruses depend on nucleoprotein molecules that each bind six nucleotides.
**Jocasta complex**
Jocasta complex:
In psychoanalytic theory, the Jocasta complex is the incestuous sexual desire of a mother towards her son. Raymond de Saussure introduced the term in 1920 by way of analogy to its logical converse in psychoanalysis, the Oedipus complex, and it may be used to cover different degrees of attachment, including domineering but asexual mother love – something perhaps particularly prevalent with an absent father.
Origins:
The Jocasta complex is named for Jocasta, a Greek queen who unwittingly married her son, Oedipus. The Jocasta complex is similar to the Oedipus complex, in which a child has sexual desire towards their parent(s). The term is a bit of an extrapolation, since in the original story Oedipus and Jocasta were unaware that they were mother and son when they married. The usage in modern contexts involves a son with full knowledge of who his mother is.
Analytic discussion:
Theodor Reik saw the "Jocasta mother", with an unfulfilled adult relationship of her own and an over-concern for her child instead, as a prime source of neurosis. George Devereux went further, arguing that the child's Oedipal complex was itself triggered by a pre-existing parental complex (Jocasta/Laius). Eric Berne also explored the other (parental) side of the Oedipus complex, pointing to related family dramas such as "mother sleeping with daughter's boyfriend ... when mother has no son to play Jocasta with". With her feminist articulation of the Jocasta complex and the Laius complex, Bracha L. Ettinger criticises the classical psychoanalytic perception of Jocasta, of the maternal, the feminine, and the Oedipal/castration model in relation to mother–child links.
Cultural analogues:
Atossa, in the Greek tragedy The Persians, has been seen as struggling in her dreams with a Jocasta complex.
Some American folk tales feature figures who, like Jocasta, express maternal desire for their sons.
**N-Acetyldopamine**
N-Acetyldopamine:
N-Acetyldopamine is the organic compound with the formula CH3C(O)NHCH2CH2C6H3(OH)2. It is the N-acetylated derivative of dopamine. This compound is a reactive intermediate in sclerotization, the process by which insect cuticles are formed by hardening molecular precursors. The catechol substituent is susceptible to redox and crosslinking.
**Jack-in-the-box**
Jack-in-the-box:
A jack-in-the-box is a children's toy that outwardly consists of a music box with a crank. When the crank is turned, a music box mechanism in the toy plays a melody. After the crank has been turned a sufficient number of times (such as at the end of the melody), the lid pops open and a figure, usually a clown or jester, pops out of the box. Some jacks-in-the-box open at random times when cranked, making the startle even more effective. Many of those that use "Pop Goes the Weasel" open at the point in the melody when the word "pop" would be sung. In 2005, the jack-in-the-box was inducted into the U.S. National Toy Hall of Fame, where versions of the toy ranging from the earliest to the most recently manufactured are displayed.
Origin:
A theory as to the origin of the jack-in-the-box is that it comes from the 14th-century English prelate Sir John Schorne, who is often pictured holding a boot with a devil in it. According to folklore, he once cast the devil into a boot to protect the village of North Marston in Buckinghamshire. In French, a jack-in-the-box is called a "diable en boîte" (literally "devil in a box"). The phrase jack-in-the-box was first used in literature by John Foxe, in his book Actes and Monuments, first published in 1563. There he used the term as an insult to describe a swindler who would cheat tradesmen by selling them empty boxes instead of what they had actually purchased.
History:
In the early 1500s, the first jack-in-the-box was made by a German clockmaker known as Claus. Claus built a wooden box with metal edges and a handle that would pop out an animated devil, or "Jack", when the handle was cranked. It was built as a gift for a local prince's fifth birthday. After seeing this toy, other nobles requested their own "devils-in-a-box" for their children. In the early 18th century, improved toy mechanisms made the jack-in-the-box more widely available to all children, not just royalty.
Models:
Originally, the jack-in-the-box was made out of wood, but with new technology the toy could be constructed from printed cardboard. Around the 1930s, the jack-in-the-box became a wind-up toy made from tin, and the tin boxes began to be covered in images from children's nursery rhymes with corresponding tunes. Over the years, the jack-in-the-box has evolved into characters other than the clown, such as Winnie the Pooh, the Cat in the Hat, the Three Little Pigs, kittens, dogs, Curious George, Santa Claus, a giraffe, and so on.
Distributors:
Starting in 1935 and continuing for 20 years, the first company to take on the distribution of the toy was a very small firm named Joy Toy, located in Italy as well as the Netherlands. Since then, Fisher-Price, Chad Valley, Mattel and Tomy have all played a major role in distributing the jack-in-the-box.
In popular culture:
The jack-in-the-box has been used for centuries by cartoonists as a way to describe and poke fun at politicians.
The American fast food company Jack in the Box began using the toy and the phrase as its mascot in the early 1950s. A 1945 Disney cartoon, The Clock Watcher, shows Donald Duck making many failed attempts to close a jack-in-the-box.
**SULF2**
SULF2:
Extracellular sulfatase Sulf-2 is an enzyme that in humans is encoded by the SULF2 gene.
Function:
Heparan sulfate proteoglycans (HSPGs) act as coreceptors for numerous heparin-binding growth factors and cytokines and are involved in cell signaling. Heparan sulfate 6-O-endosulfatases, such as SULF2, selectively remove 6-O-sulfate groups from heparan sulfate. This activity modulates the effects of heparan sulfate by altering binding sites for signaling molecules (Dai et al., 2005).
**Avogadrite**
Avogadrite:
Avogadrite ((K,Cs)BF4) is a potassium–caesium tetrafluoroborate in the halide class. Avogadrite crystallizes in the orthorhombic system (space group Pnma) with cell parameters a = 8.66 Å, b = 5.48 Å and c = 7.03 Å.
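As a quick arithmetic check (not stated in the source), the orthorhombic cell volume follows directly from these parameters:

```latex
% Orthorhombic cell: the volume is the product of the three axes.
V = a \, b \, c = 8.66 \times 5.48 \times 7.03 \ \text{\AA}^3 \approx 333.6 \ \text{\AA}^3
```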
History:
The mineral was discovered by the Italian mineralogist Ferruccio Zambonini in 1926. He analyzed several samples from the volcanic fumaroles close to Mount Vesuvius and from the Lipari islands. In nature, it can only be found as a sublimation product around volcanic fumaroles. He named it after the Italian scientist Amedeo Avogadro (1776–1856).
**Zermelo set theory**
Zermelo set theory:
Zermelo set theory (sometimes denoted by Z-), as set out in a seminal paper in 1908 by Ernst Zermelo, is the ancestor of modern Zermelo–Fraenkel set theory (ZF) and its extensions, such as von Neumann–Bernays–Gödel set theory (NBG). It bears certain differences from its descendants, which are not always understood, and are frequently misquoted. This article sets out the original axioms, with the original text (translated into English) and original numbering.
The axioms of Zermelo set theory:
The axioms of Zermelo set theory are stated for objects, some of which (but not necessarily all) are sets, and the remaining objects are urelements and not sets. Zermelo's language implicitly includes a membership relation ∈, an equality relation = (if it is not included in the underlying logic), and a unary predicate saying whether an object is a set. Later versions of set theory often assume that all objects are sets so there are no urelements and there is no need for the unary predicate.
The axioms of Zermelo set theory:
AXIOM I. Axiom of extensionality (Axiom der Bestimmtheit) "If every element of a set M is also an element of N and vice versa ... then M ≡ N. Briefly, every set is determined by its elements." AXIOM II. Axiom of elementary sets (Axiom der Elementarmengen) "There exists a set, the null set, ∅, that contains no element at all. If a is any object of the domain, there exists a set {a} containing a and only a as an element. If a and b are any two objects of the domain, there always exists a set {a, b} containing as elements a and b but no object x distinct from them both." See Axiom of pairs.
The axioms of Zermelo set theory:
AXIOM III. Axiom of separation (Axiom der Aussonderung) "Whenever the propositional function –(x) is defined for all elements of a set M, M possesses a subset M' containing as elements precisely those elements x of M for which –(x) is true." AXIOM IV. Axiom of the power set (Axiom der Potenzmenge) "To every set T there corresponds a set T' , the power set of T, that contains as elements precisely all subsets of T ." AXIOM V. Axiom of the union (Axiom der Vereinigung) "To every set T there corresponds a set ∪T, the union of T, that contains as elements precisely all elements of the elements of T ." AXIOM VI. Axiom of choice (Axiom der Auswahl) "If T is a set whose elements all are sets that are different from ∅ and mutually disjoint, its union ∪T includes at least one subset S1 having one and only one element in common with each element of T ." AXIOM VII. Axiom of infinity (Axiom des Unendlichen) "There exists in the domain at least one set Z that contains the null set as an element and is so constituted that to each of its elements a there corresponds a further element of the form {a}, in other words, that with each of its elements a it also contains the corresponding set {a} as element."
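As an illustration, Zermelo's axiom of infinity (Axiom VII) can be rendered in modern notation roughly as follows (a sketch; Zermelo's own formulation predates first-order formalization):

```latex
% Axiom VII (infinity), modern rendering:
\exists Z \,\bigl( \varnothing \in Z \;\wedge\; \forall a\, ( a \in Z \rightarrow \{a\} \in Z ) \bigr)
```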
Connection with standard set theory:
The most widely used and accepted set theory is known as ZFC, which consists of Zermelo–Fraenkel set theory including the axiom of choice (AC). Most of Zermelo's axioms correspond directly to axioms of ZFC, but there is no exact match for "elementary sets". (It was later shown that the singleton set can be derived from what is now called the axiom of pairs: if a exists, then {a,a} exists, and by extensionality {a,a} = {a}.) The empty set axiom is already implied by the axiom of infinity, and is now included as part of it.
Connection with standard set theory:
Zermelo set theory does not include the axioms of replacement and regularity. The axiom of replacement was first published in 1922 by Abraham Fraenkel and Thoralf Skolem, who had independently discovered that Zermelo's axioms cannot prove the existence of the set {Z0, Z1, Z2, ...} where Z0 is the set of natural numbers and Zn+1 is the power set of Zn. They both realized that the axiom of replacement is needed to prove this. The following year, John von Neumann pointed out that this same axiom is necessary to build his theory of ordinals. The axiom of regularity was stated by von Neumann in 1925.
In the modern ZFC system, the "propositional function" referred to in the axiom of separation is interpreted as "any property definable by a first-order formula with parameters", so the separation axiom is replaced by an axiom schema. The notion of "first-order formula" was not known in 1908 when Zermelo published his axiom system, and he later rejected this interpretation as being too restrictive. Zermelo set theory is usually taken to be a first-order theory with the separation axiom replaced by an axiom scheme with an axiom for each first-order formula. It can also be considered as a theory in second-order logic, where the separation axiom is just a single axiom. The second-order interpretation of Zermelo set theory is probably closer to Zermelo's own conception of it, and is stronger than the first-order interpretation.
Connection with standard set theory:
In the usual cumulative hierarchy Vα of ZFC set theory (for ordinals α), any one of the sets Vα for α a limit ordinal larger than the first infinite ordinal ω (such as Vω·2) forms a model of Zermelo set theory. So the consistency of Zermelo set theory is a theorem of ZFC set theory. Since Vω·2 models Zermelo's axioms while containing neither ℵω nor larger infinite cardinals, by Gödel's completeness theorem Zermelo's axioms do not prove the existence of these cardinals. (Cardinals have to be defined differently in Zermelo set theory, as the usual definition of cardinals and ordinals does not work very well: with the usual definition it is not even possible to prove the existence of the ordinal ω·2.) The axiom of infinity is usually now modified to assert the existence of the first infinite von Neumann ordinal ω; the original Zermelo axioms cannot prove the existence of this set, nor can the modified Zermelo axioms prove Zermelo's axiom of infinity. Zermelo's axioms (original or modified) cannot prove the existence of Vω as a set, nor of any rank of the cumulative hierarchy of sets with infinite index.
Connection with standard set theory:
Zermelo allowed for the existence of urelements that are not sets and contain no elements; these are now usually omitted from set theories.
Mac Lane set theory:
Mac Lane set theory, introduced by Mac Lane (1986), is Zermelo set theory with the axiom of separation restricted to first-order formulas in which every quantifier is bounded.
Mac Lane set theory is similar in strength to topos theory with a natural number object, or to the system in Principia mathematica. It is strong enough to carry out almost all ordinary mathematics not directly connected with set theory or logic.
The aim of Zermelo's paper:
The introduction states that the very existence of the discipline of set theory "seems to be threatened by certain contradictions or "antinomies", that can be derived from its principles – principles necessarily governing our thinking, it seems – and to which no entirely satisfactory solution has yet been found". Zermelo is of course referring to the "Russell antinomy".
He says he wants to show how the original theory of Georg Cantor and Richard Dedekind can be reduced to a few definitions and seven principles or axioms. He says he has not been able to prove that the axioms are consistent.
A non-constructivist argument for their consistency goes as follows. Define Vα for α one of the ordinals 0, 1, 2, ..., ω, ω+1, ω+2, ..., ω·2 as follows: V0 is the empty set.
For α a successor of the form β+1, Vα is defined to be the collection of all subsets of Vβ.
The aim of Zermelo's paper:
For α a limit (e.g. ω, ω·2), Vα is defined to be the union of Vβ for β < α.
Then the axioms of Zermelo set theory are consistent because they are true in the model Vω·2. While a non-constructivist might regard this as a valid argument, a constructivist would probably not: while there are no problems with the construction of the sets up to Vω, the construction of Vω+1 is less clear because one cannot constructively define every subset of Vω. This argument can be turned into a valid proof with the addition of a single new axiom of infinity to Zermelo set theory, simply that Vω·2 exists. This is presumably not convincing for a constructivist, but it shows that the consistency of Zermelo set theory can be proved with a theory which is not very different from Zermelo theory itself, only a little more powerful.
The axiom of separation:
Zermelo comments that Axiom III of his system is the one responsible for eliminating the antinomies. It differs from the original definition by Cantor, as follows.
The axiom of separation:
Sets cannot be independently defined by any arbitrary logically definable notion. They must be constructed in some way from previously constructed sets. For example, they can be constructed by taking powersets, or they can be separated as subsets of sets already "given". This, he says, eliminates contradictory ideas like "the set of all sets" or "the set of all ordinal numbers".
The axiom of separation:
He disposes of the Russell paradox by means of this theorem: "Every set M possesses at least one subset M0 that is not an element of M". Let M0 be the subset of M that, by AXIOM III, is separated out by the notion x ∉ x. Then M0 cannot be in M. For:
1) If M0 is in M0, then M0 contains an element x for which x is in x (namely M0 itself), which would contradict the definition of M0.
2) If M0 is not in M0, and assuming M0 is an element of M, then M0 is an element of M that satisfies the definition x ∉ x, and so is in M0, which is a contradiction.
Therefore, the assumption that M0 is in M is wrong, proving the theorem. Hence not all objects of the universal domain B can be elements of one and the same set. "This disposes of the Russell antinomy as far as we are concerned".
The axiom of separation:
This left the problem of "the domain B" which seems to refer to something. This led to the idea of a proper class.
Cantor's theorem:
Zermelo's paper may be the first to mention the name "Cantor's theorem".
Cantor's theorem: "If M is an arbitrary set, then always M < P(M) [the power set of M]. Every set is of lower cardinality than the set of its subsets".
Cantor's theorem:
Zermelo proves this by considering a function φ: M → P(M). By Axiom III this defines the following set M′:
M′ = {m : m ∉ φ(m)}.
But no element m′ of M could correspond to M′, i.e. be such that φ(m′) = M′. Otherwise we can construct a contradiction:
1) If m′ is in M′, then by definition m′ ∉ φ(m′) = M′, which is the first half of the contradiction;
2) If m′ is not in M′ but is in M, then by definition m′ ∉ M′ = φ(m′), which by the definition of M′ implies that m′ is in M′, which is the second half of the contradiction.
So by contradiction m′ does not exist. Note the close resemblance of this proof to the way Zermelo disposes of Russell's paradox.
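The diagonal step shared by both arguments can be machine-checked. The following is a minimal sketch in Lean 4, not from Zermelo's paper: subsets of a type α are modelled here as predicates α → Prop, which is our own simplification of "φ: M → P(M)".

```lean
-- Diagonal argument behind both Cantor's theorem and Zermelo's
-- disposal of the Russell antinomy. A "subset" of α is modelled
-- as a predicate α → Prop (an assumption of this sketch).
theorem no_preimage_of_diagonal {α : Type} (φ : α → α → Prop) :
    ¬ ∃ m', ∀ x, φ m' x ↔ ¬ φ x x :=
  fun ⟨m', h⟩ =>
    -- Specialising to x = m' yields: φ m' m' ↔ ¬ φ m' m'.
    have hm : φ m' m' ↔ ¬ φ m' m' := h m'
    -- A proposition equivalent to its own negation is contradictory.
    have hn : ¬ φ m' m' := fun hp => hm.mp hp hp
    hn (hm.mpr hn)
```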
**Priming (immunology)**
Priming (immunology):
Priming is the first contact that antigen-specific T helper cell precursors have with an antigen. It is essential to the T helper cells' subsequent interaction with B cells to produce antibodies. Priming of antigen-specific naive lymphocytes occurs when antigen is presented to them in immunogenic form (capable of inducing an immune response). Subsequently, the primed cells differentiate either into effector cells or into memory cells that can mount a stronger and faster response to subsequent immune challenges. T and B cell priming occurs in the secondary lymphoid organs (lymph nodes and spleen).
Priming (immunology):
Priming of naïve T cells requires dendritic cell antigen presentation. Priming of naive CD8 T cells generates cytotoxic T cells capable of directly killing pathogen-infected cells. CD4 cells develop into a diverse array of effector cell types depending on the nature of the signals they receive during priming. CD4 effector activity can include cytotoxicity, but more frequently it involves the secretion of a set of cytokines that directs the target cell to make a particular response. This activation of naive T cells is controlled by a variety of signals: recognition of antigen in the form of a peptide:MHC complex on the surface of a specialized antigen-presenting cell delivers signal 1; interaction of co-stimulatory molecules on antigen-presenting cells with receptors on T cells delivers signal 2 (one notable example is a B7 ligand on antigen-presenting cells binding to the CD28 receptor on T cells); and cytokines that control differentiation into different types of effector cells deliver signal 3.
Cross-priming:
Cross-priming refers to the stimulation of antigen-specific CD8+ cytotoxic T lymphocytes (CTLs) by dendritic cells presenting an antigen acquired from outside the cell. Cross-priming is also called immunogenic cross-presentation. This mechanism is vital for the priming of CTLs against viruses and tumours.
Immune priming (invertebrate immunity):
Immune priming is a memory-like phenomenon described in invertebrate taxa. It is evolutionarily advantageous for an organism to develop a better and faster secondary immune response to a harmful pathogen to which it is likely to be exposed again. In vertebrates, immune memory is based on adaptive immune cells called B and T lymphocytes, which provide an enhanced and faster immune response when challenged with the same pathogen a second time. It was long assumed that invertebrates lack memory-like immune functions because they lack adaptive immunity, but in recent years evidence supporting innate memory-like functions has been found. In invertebrate immunology the common model organisms are different species of insect. Experiments focusing on immune priming are based on exposing the insect to dead microbes or a sublethal dose of bacteria to elicit an initial innate immune response. The researchers then compare subsequent infections in primed and non-primed individuals to see whether the primed individuals mount a stronger or modified response.
Immune priming (invertebrate immunity):
Mechanism of immune priming
The results of immune priming research suggest that the mechanism differs depending on the insect species and the microbe used in a given experiment, which could be due to host–pathogen coevolution: it is advantageous for each species to develop a specialised defence against the pathogens (e.g. bacterial strains) it encounters most often. In the arthropod model, the red flour beetle Tribolium castaneum, it has been shown that the route of infection (cuticular, septic or oral) matters for the defence mechanism generated. Innate immunity in insects is based on non-cellular mechanisms, including production of antimicrobial peptides (AMPs) and reactive oxygen species (ROS) and activation of the prophenoloxidase cascade. The cellular arm of insect innate immunity consists of hemocytes, which can eliminate pathogens by nodulation, encapsulation or phagocytosis. The innate response during immune priming differs with the experimental setup, but generally it involves enhancement of humoral innate immune mechanisms and increased levels of hemocytes. There are two hypothetical scenarios of immune induction on which the immune priming mechanism could be based. In the first, the priming antigens induce long-lasting defences, such as circulating immune molecules, that remain in the host body until the secondary encounter. In the second, the response drops after the initial priming but a stronger defence is mounted upon a secondary challenge. The most probable scenario is a combination of the two.
Immune priming (invertebrate immunity):
Trans-generational immune priming
Trans-generational immune priming (TGIP) describes the transfer of parental immunological experience to progeny, which may help the offspring survive when challenged with the same pathogen. A similar mechanism of offspring protection against pathogens has long been studied in vertebrates, where the transfer of maternal antibodies helps the newborn's immune system fight infection before it can function properly on its own. In the last two decades TGIP in invertebrates has been heavily studied. Evidence supporting TGIP has been found in coleopteran, crustacean, hymenopteran, orthopteran and mollusk species, but in some other species the results remain contradictory. The experimental outcome can be influenced by the procedure used in a particular investigation, including the infection procedure, the sex of the offspring and of the parent, and the developmental stage.
**Zilog SCC**
Zilog SCC:
The SCC, short for Serial Communication Controller, is a family of serial port driver integrated circuits made by Zilog. The primary members of the family are the Z8030/Z8530, and the Z85233.
Developed from the earlier Zilog SIO devices (Z8443), the SCC added a number of serial-to-parallel modes that allowed internal implementation of a variety of data link layer protocols like Bisync, HDLC and SDLC.
The SCC could be set up as a conventional RS-232 port for driving legacy systems, or alternatively as an RS-422 port for much higher performance, up to 10 Mbit/s. Implementation details generally limited performance to 5 Mbit/s or less.
One of the most famous users of the SCC was the Apple Macintosh computer line, which used the Z8530 to implement two serial ports on the back of the early designs, labeled "modem" and "printer".
Description:
Traditional serial communications are normally implemented using a device known as a UART, which translates data from the computer bus's internal parallel format to serial and back. This allows the computer to send data serially simply by doing a regular parallel write to an I/O register, and the UART will convert this to serial form and send it. Generally there were different UARTs for each computer architecture, with the goal of being as low-cost as possible. A good example is the Zilog Z-80 SIO from 1977, designed to work with the widely used Zilog Z80 to provide two serial ports with relatively high speeds up to 800 kbit/s. The SIO is technically a USART, as it understands synchronous protocols.
The SCC is essentially an updated version of the SIO, with more internal logic to allow it to directly implement a number of common data link layer protocols. To start with, the SCC included a hardware implementation of the cyclic redundancy check (CRC), which allowed it to check, flag and reject improper data without the support of the host computer. Higher-level protocols included BiSync, HDLC and SDLC. HDLC is better known in its implementation in the modem-oriented LAPM protocol, part of V.42. By moving the implementation of these protocols to hardware, the SCC made it easy to implement local area networking systems, like IBM's SNA, without the need for the host CPU to handle these details.
Description:
When used in traditional serial mode, the SCC could be set to use 5, 6, 7 or 8 bits/character, 1, 1+1⁄2, or 2 stop bits, odd, even or no parity, and automatically detected or generated break signals. In synchronous modes, data could be optionally sent with NRZ, NRZI or FM encoding, as well as Manchester decoding, although Manchester encoding had to be handled in external logic.
Description:
The SCC's transmission rate could be timed from three sources. For basic RS-232-style communications, the SCC included an internal 300 Hz clock that could be multiplied by 1, 16, 32 or 64, providing data rates between 300 and 19,200 bit/s. Alternately, it could use the clock on the bus as provided by the host platform, and then divide that clock by 4, 8, 16 or 32 (the latter two only in the original NMOS implementation). When used on a machine running at the common 8 MHz clock, this allowed for rates as high as 2 Mbit/s. Finally, the SCC also included inputs for the provision of an external clock. This worked similarly to the host clock, but could be used to provide any reference clock signal, independent of the host platform. In this mode, the clock could be divided as in the internal case, or multiplied by 2 for even higher speeds, up to 32.3 Mbit/s in some versions. Using the external clock made it easy to implement LAN adaptors, which normally ran at speeds that were independent of the host computer.
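The rate arithmetic just described is simple enough to check directly. The sketch below is a back-of-the-envelope illustration using only the multiplier and divider values given in this description, not figures taken from an SCC datasheet:

```python
# Sketch of the SCC data-rate arithmetic described above.
# Values mirror the text: a 300 Hz internal clock with x1/x16/x32/x64
# multipliers, and a host bus clock divided by 4, 8, 16 or 32.

INTERNAL_CLOCK_HZ = 300

def internal_rates():
    """Rates derived from the internal 300 Hz clock."""
    return [INTERNAL_CLOCK_HZ * m for m in (1, 16, 32, 64)]

def host_clock_rates(bus_clock_hz):
    """Rates derived from the host bus clock (per the text,
    /16 and /32 were available only on the original NMOS parts)."""
    return [bus_clock_hz // d for d in (4, 8, 16, 32)]

print(internal_rates())             # [300, 4800, 9600, 19200]
print(host_clock_rates(8_000_000))  # [2000000, 1000000, 500000, 250000]
```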
Description:
Early implementations used receive buffers that were only 3 bytes deep, and a send buffer with a single byte. This meant that the real-world performance was limited by the host platform's ability to continually empty the buffers into its own memory. With network-like communications the SCC itself could cause the remote sender to stop transmission when the buffers were full, and thereby prevent data loss while the host was busy. With conventional async serial this was not possible; on the Macintosh Plus this limited RS-232 performance to about 9600 bit/s or less, and as little as 4800 bit/s on earlier models.
Description:
Most SCC models were available in either dual in-line package (DIP) or chip carrier (PLCC) versions.
Versions:
Z8030 – Original model implemented in NMOS with a multiplexed "Z-Bus" interface that matched the Zilog Z8000/Z16C00/8086 CPUs
Z8530 – Functionally identical to the Z8030, but using a non-multiplexed "Universal-Bus" designed to allow use with any CPU or host platform, including the Z-80
Z8031 and Z8531 – Versions of the Z8030 and Z8530 with the synchronous support removed, producing a design more closely matching the original SIO
Z80C30 and Z85C30 – CMOS implementations of the Z8030 and Z8530. Plug compatible with the early versions, adding the 2x speed when used with the external clock, and a number of bug fixes and improvements in the link layer protocols.
Versions:
Z80230 and Z85230 – Updated CMOS implementations of the Z80C30 and Z85C30, also known as the ESCC
Z85233 – Updated version of the Z85230 (only), also known as the EMSCC
|
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
|
**Incorporeality**
Incorporeality:
Incorporeality is "the state or quality of being incorporeal or bodiless; immateriality; incorporealism." Incorporeal (Greek: ἀσώματος) means "Not composed of matter; having no material existence." Incorporeality is a quality of souls, spirits, and God in many religions, including the currently major denominations and schools of Islam, Christianity and Judaism. In ancient philosophy, any attenuated "thin" matter such as air, aether, fire or light was considered incorporeal. The ancient Greeks believed air, as opposed to solid earth, to be incorporeal, in so far as it is less resistant to movement; and the ancient Persians believed fire to be incorporeal in that every soul was said to be produced from it. In modern philosophy, a distinction between the incorporeal and immaterial is not necessarily maintained: a body is described as incorporeal if it is not made out of matter.
Incorporeality:
In the problem of universals, universals are separable from any particular embodiment in one sense, while in another, they seem inherent nonetheless. Aristotle offered a hylomorphic account of abstraction in contrast to Plato's world of Forms. Aristotle used the Greek terms soma (body) and hyle (matter, literally "wood").
Incorporeality:
The notion that a causally effective incorporeal body is even coherent requires the belief that something can affect what's material, without physically existing at the point of effect. A ball can directly affect another ball by coming in direct contact with it, and is visible because it reflects the light that directly reaches it. An incorporeal field of influence, or immaterial body could not perform these functions because they have no physical construction with which to perform these functions. Following Newton, it became customary to accept action at a distance as brute fact, and to overlook the philosophical problems involved in so doing.
Theology:
Church of Jesus Christ of Latter-day Saints
Members of the Church of Jesus Christ of Latter-day Saints (see also Mormonism) view the mainstream Christian belief in God's incorporeality as being founded upon a post-Apostolic departure from what they claim is the traditional Judeo-Christian belief: an anthropomorphic, corporeal God. Mainstream Christianity has always interpreted anthropomorphic references to God in Scripture as non-literal, poetic, and symbolic.
**Garlic fingers**
Garlic fingers:
Garlic fingers (French: Doigts à l'ail), also known as garlic cheese fingers, are an Atlantic Canadian dish, similar to a pizza in shape and size and made with the same type of dough. Instead of being cut in triangular slices, they are presented in thin strips, or "fingers". Instead of the traditional tomato sauce and toppings of a pizza, garlic fingers consist of pizza dough topped with garlic butter, parsley, and cheese, which is cooked until the cheese is melted. Bacon bits are also sometimes added.
Garlic fingers:
Garlic fingers are often eaten as a side dish with pizza, and dipped in donair sauce or marinara sauce.
Wisconsin-style cheese fries:
In central Wisconsin and some other parts of the state, a similar dish is served, consisting of a pizza-like, typically thin crust topped with cheese and garlic butter or a garlic-butter-like mixture. It is cut into strips and often accompanied with marinara sauce.
Called cheese fries and sometimes pizza fries or Italian fries, they are sold both in restaurants and in the frozen foods section of supermarkets.
**CART Precision Racing**
CART Precision Racing:
CART Precision Racing is a racing video game developed by Terminal Reality and published by Microsoft Studios for Windows.
Development:
The game was showcased at E3 1997.
Reception:
GameSpot said for the PC, "CART Precision Racing raises the bar for serious racing simulations" and rated the game 8.5. GamePro countered that "while Microsoft has done an admirable job with its new CART Precision Racing, it falls short of becoming the new benchmark in racing games." They elaborated that the "quirky controls", confusing array of menu screens, long loading times, and sound card compatibility issues keep the player from feeling fully comfortable while playing the game. They cited the detailed graphics and inclusion of real tracks, drivers, teams, and sponsors as strong points of the game. Next Generation rated it four stars out of five, and stated that "it's a very fun game and an impressive first effort". CART Precision Racing tied with Baseball Mogul to win Computer Gaming World's 1997 "Sports Game of the Year" award. The editors wrote: "With state-of-the-art graphics, Internet play, and incredibly deep options that scale the game from novice play through hard-core realism, CART offers the spiffiest high-tech sports thrills of the year".
**Model-driven application**
Model-driven application:
A model-driven application is a software application whose functions or behaviors are based on, or controlled by, evolving applied models of the things the application targets. The applied models serve as part of the application system and can be changed at runtime. The target things are what the application deals with, such as the objects and affairs of a business for a business application. Following the definition of application in TOGAF, a model-driven business application can be described as an IT system that supports business functions and services, running on the models of the (things in) business.
History:
The idea of the architecture for a model-driven application was first put forward by Tong-Ying Yu on the Enterprise Engineering Forum in 1999, and it was studied and spread through various internet media over a long period. It had influence on the field of enterprise application development in China, where there were successful cases of commercial development of enterprise/business applications in this architectural style. Gartner Group carried out studies of the subject in 2008; they defined model-driven packaged applications as "enterprise applications that have explicit metadata-driven models of the supported processes, data and relationships, and that generate runtime components through metadata models, either dynamically interpreted or compiled, rather than hardcoded." Some industry researchers claimed in 2012 that the model-driven application architecture is one of the few technology trends driving the next generation of application modernization.
Instance:
Business process management (BPM) is the most significant practice area for model-driven applications. By this definition, a BPM system is model-driven if its functions operate on business process models that are built and changed at operational time rather than at design or implementation time; the biggest advantage is that continuous change in a business process can be handled directly, without modifying the code of the software.
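As a toy illustration of this distinction (entirely hypothetical, not drawn from any product mentioned above), the sketch below keeps a process model as plain data that can be edited at runtime, while the engine code that interprets it never changes:

```python
# Minimal sketch of a model-driven engine: behaviour follows a model
# that is data, editable at runtime, rather than hardcoded logic.

# Hypothetical order-handling process model (a list of step names).
process_model = ["receive order", "check stock", "ship"]

def run_process(model, order):
    """Interpret whatever model is currently installed."""
    for step in model:
        print(f"{step}: {order}")

run_process(process_model, "order #1")

# The business process changes at operational time: no code changes,
# only the model data is edited.
process_model.insert(2, "fraud review")
run_process(process_model, "order #2")
```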
Notes:
Note that it should be distinguished from the Model-Driven Architecture (MDA); the latter is a software design approach for the development of software systems and generally does not specify a specific system style or the runtime configuration.
**Seven-segment display character representations**
Seven-segment display character representations:
The topic of seven-segment display character representations revolves around the various shapes of numerical digits, letters, and punctuation devisable on seven-segment displays. Such representations of characters are not standardized by any relevant entity (e.g. ISO, IEEE or IEC). Unicode 13.0 provides codepoints for segmented digits in the Symbols for Legacy Computing block.
Digit:
Two basic conventions are in common use for some Arabic numerals: display segment A is optional for digit 6, segment F for digit 7, and segment D for digit 9. Although segments E and F alone could also be used to represent digit 1, this seems to be rarely if ever done. The pattern CDEG is occasionally encountered on older calculators to represent 0.
Digit:
In Unicode 13.0, ten codepoints were assigned for segmented digits 0–9 in the Symbols for Legacy Computing block (U+1FBF0–U+1FBF9).
Alphabet:
In addition to the ten digits, seven-segment displays can be used to show most letters of the Latin, Cyrillic and Greek alphabets including punctuation.
Alphabet:
One such special case is the display of the letters A–F when denoting the hexadecimal values (digits) 10–15. These are needed on some scientific calculators, and are used with some testing displays on electronic equipment. Although there is no official standard, today most devices displaying hex digits use a common set of unique forms: uppercase A, lowercase b, uppercase C, lowercase d, uppercase E and F. To avoid ambiguity between the digit 6 and the letter b, the digit 6 is displayed with segment A lit. However, this modern scheme was not always followed in the past, and various other schemes could be found as well: the Texas Instruments seven-segment display decoder chips 7446/7447/7448/7449 and 74246/74247/74248/74249 and the Siemens FLH551-7448/555-8448 chips used truncated versions of "2", "3", "4", "5" and "6" for digits A–E. Digit F (1111 binary) was blank.
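The modern hexadecimal scheme can be written down as sets of lit segments, using the conventional segment names (A top, B top-right, C bottom-right, D bottom, E bottom-left, F top-left, G middle). The table below is a sketch built from the description above; checking it confirms that lighting segment A is exactly what separates digit 6 from letter b:

```python
# Seven-segment encodings for hex digits, following the "modern scheme"
# described above (uppercase A, C, E, F; lowercase b, d; 6 with segment A).
SEGMENTS = {
    "0": set("ABCDEF"), "1": set("BC"),     "2": set("ABDEG"),
    "3": set("ABCDG"),  "4": set("BCFG"),   "5": set("ACDFG"),
    "6": set("ACDEFG"), "7": set("ABC"),    "8": set("ABCDEFG"),
    "9": set("ABCDFG"), "A": set("ABCEFG"), "b": set("CDEFG"),
    "C": set("ADEF"),   "d": set("BCDEG"),  "E": set("ADEFG"),
    "F": set("AEFG"),
}

# Without segment A, "6" and "b" would be indistinguishable:
assert SEGMENTS["6"] - SEGMENTS["b"] == {"A"}
# All sixteen patterns are distinct, so the scheme is unambiguous:
assert len({frozenset(s) for s in SEGMENTS.values()}) == 16
```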
Alphabet:
Soviet programmable calculators like the Б3-34 instead used the symbols "−", "L", "C", "Г", "E", and " " (space) to display hexadecimal numbers above nine. (The Б3-34 character set allowed for a cross-alphabet display of the English word "Error" as either EГГ0Г or 3ГГ0Г, depending on the error, in all-numeric form during error messages.) Not all 7-segment decoders were suitable for displaying digits above nine at all. For comparison, the National Semiconductor MM74C912 displayed "o" for A and B, "−" for C, D and E, and blank for F.
Alphabet:
The CD4511 simply displayed blanks for all values above nine.
Alphabet:
The Magic Black Box, an electronic version of the Magic 8-Ball toy, used a ROM to generate 64 different 16-character alphanumeric messages on an LED display. It could not generate K, M, V, W, and X, but it could generate a question mark. For the remainder of characters, ad hoc and corporate solutions dominate the field of using seven-segment displays to show general words and phrases. Such applications of seven-segment displays are usually not considered essential and are only used for basic notifications on consumer electronics appliances (as is the case of this article's example phrases), and as internal test messages on equipment under development. Certain letters (M, V, W, X in the Latin alphabet) cannot be expressed unambiguously at all, due to either diagonal strokes, more than two vertical strokes, or inability to distinguish them from other letters, while others can be expressed only in capital form or only in lowercase form, but not both. The nine-segment display, fourteen-segment display, sixteen-segment display and dot matrix display are more commonly used for hardware that must display messages that are more than trivial.
Examples:
The following phrases come from a portable media player's seven-segment display. They give a good illustration of an application where a seven-segment display may be sufficient for displaying letters, since the relevant messages are neither critical nor in any significant risk of being misunderstood, largely due to the limited number and rigid domain specificity of the messages. As such, there is no direct need for a more expressive display in this case, although even a slightly wider repertoire of messages would require at least a 14-segment display or a dot matrix one.
**Memorex**
Memorex:
Memorex Corp. began as a computer tape producer and expanded to become both a consumer media supplier and a major IBM plug-compatible peripheral supplier. It was broken up and ceased to exist after 1996, other than as a consumer electronics brand specializing in disk-recordable media for CD and DVD drives, flash memory, computer accessories and other electronics.
History and evolution:
Established in 1961 in Silicon Valley, Memorex started by selling computer tapes, then added other media such as disk packs. The company then expanded into disk drives and other peripheral equipment for IBM mainframes. During the 1970s and into the early 1980s, Memorex was worldwide one of the largest independent suppliers of disk drives and communications controllers to users of IBM-compatible mainframes, as well as media for computer uses and consumers. The company's name is a portmanteau of "memory excellence".
History and evolution:
Memorex entered the consumer media business in 1971 and began an ad campaign, first with its "shattering glass" advertisements and then with a series of legendary television commercials featuring Ella Fitzgerald. In the commercials, she would sing a note that shattered a glass while being recorded to a Memorex audio cassette. The tape was played back and the recording also broke the glass, asking "Is it live, or is it Memorex?" This would become the company slogan, used in a series of advertisements released through the 1970s and 1980s.
History and evolution:
In 1982, Memorex was bought by Burroughs for its enterprise businesses; the company's consumer business, at that time a small segment of its revenue, was sold to Tandy. Over the next six years, Burroughs and its successor Unisys shut down, sold off or spun out the various remaining parts of Memorex.
History and evolution:
The computer media, communications and IBM end-user sales and service organization was spun out as Memorex International. In 1988, Memorex International acquired the Telex Corporation, becoming Memorex Telex NV, a corporation based in the Netherlands, which survived as an entity until the middle 1990s. The company evolved into a provider of information technology solutions, including the distribution and integration of data network and storage products and the provision of related services in 18 countries worldwide. As late as 2006, several pieces existed as subsidiaries of other companies; see e.g. Memorex Telex Japan Ltd, a subsidiary of Kanematsu, or Memorex Telex (UK) Ltd, a subsidiary of EDS Global Field Services. Over time the Memorex consumer brand has been owned by Tandy, Hanny Holdings and Imation. As of 2016, the Memorex brand is owned by Digital Products International (DPI).
Timeline:
1961 – Memorex is founded by Laurence L. Spitters, Arnold T. Challman, Donald F. Eldridge and W Lawrence Noon with Spitters as president.
1962 – Memorex is one of the early independent companies to ship computer tape.
May 1965 – Memorex IPOs at $25 and closes at $32.
1966 – Memorex is first independent company to ship a disk pack.
Timeline:
Jun 1968 – Memorex is first to ship an IBM-plug-compatible disk drive
1970 – Memorex ships the 1270 Communications Controller
1971 – With CBS, Memorex forms CMX Systems, a company formed to design videotape editing systems
Sep 1971 – Memorex launches its consumer tape business
1972 – Memorex launches its "Is it live, or is it Memorex?" campaign
Apr 1981 – Burroughs acquires Memorex
Apr 1982 – Burroughs sells Memorex consumer brand to Tandy
May 1985 – Burroughs exits the OEM disk drive business, selling sales and service to Toshiba
Sep 1986 – Burroughs acquires Sperry and renames itself as Unisys
Dec 1986 – Unisys spins off Memorex Media, Telecommunications and International businesses as Memorex International NV.
Timeline:
Jan 1988 – Memorex-Telex merger
Dec 1988 – Unisys mainly shuts down its large disk business and spins off service and repair as Sequel.
Nov 1993 – Tandy sells Memorex consumer brand to Hanny Holdings of Hong Kong
Oct 1996 – The U.S. operations of Memorex Telex NV file for bankruptcy and, with court approval, are sold November 1, 1996.
Jan 2006 – Imation acquires Memorex brand for $330 million.
Jan 2016 – Imation closed on the sale of its Memorex trademark and two associated trademark licenses to DPI Inc., a St. Louis-based branded consumer electronics company for $9.4 million.
**FindBugs**
FindBugs:
FindBugs is an open-source static code analyser created by Bill Pugh and David Hovemeyer which detects possible bugs in Java programs. Potential errors are classified into four ranks: (i) scariest, (ii) scary, (iii) troubling and (iv) of concern. This is a hint to the developer about their possible impact or severity. FindBugs operates on Java bytecode, rather than source code. The software is distributed as a stand-alone GUI application. There are also plug-ins available for Eclipse, NetBeans, IntelliJ IDEA, Gradle, Hudson, Maven, Bamboo and Jenkins. Additional rule sets can be plugged into FindBugs to increase the set of checks performed.
SpotBugs:
SpotBugs is the spiritual successor of FindBugs, carrying on from the point where it left off with support of its community.
SpotBugs:
In 2016, the FindBugs project lead was inactive while unresolved issues accumulated in its community, so Andrey Loskutov made an announcement to the community, and volunteers began creating a successor project with support for the modern Java platform and better maintainability. In September 2017, Loskutov gave another announcement about the status of the new community, then released SpotBugs 3.1.0 with support for Java 11, the new LTS release, especially the Java Platform Module System and the invokedynamic instruction.
SpotBugs:
There are also plug-ins available for Eclipse, IntelliJ IDEA, Gradle, Maven and SonarQube. SpotBugs also supports all of existing FindBugs plugins such as sb-contrib, find-security-bugs, with several minor changes.
Applications:
SpotBugs has numerous areas of application:
Testing during a continuous integration or delivery cycle.
Locating faults in an application.
During a code review.
**International Journal of Neuroscience**
International Journal of Neuroscience:
The International Journal of Neuroscience is a peer-reviewed scientific journal that publishes original research articles, reviews, brief scientific notes, case studies, letters to the editor, and book reviews concerned with all aspects of neuroscience and neurology.
Editors:
The Editor-in-Chief of the International Journal of Neuroscience is Dr. Mohamad Bydon.
**Morphosyntactic alignment**
Morphosyntactic alignment:
In linguistics, morphosyntactic alignment is the grammatical relationship between arguments—specifically, between the two arguments (in English, subject and object) of transitive verbs like the dog chased the cat, and the single argument of intransitive verbs like the cat ran away. English has a subject, which merges the more active argument of transitive verbs with the argument of intransitive verbs, leaving the object distinct; other languages may have different strategies, or, rarely, make no distinction at all. Distinctions may be made morphologically (through case and agreement), syntactically (through word order), or both.
Terminology:
Arguments (Dixon 1994)
The following notations will be used to discuss the various types of alignment:
S (from sole), the subject of an intransitive verb;
A (from agent), the subject of a transitive verb;
O (from object), the object of a transitive verb.
Some authors use the label P (from patient) for O. Note that while the labels S, A, O, and P originally stood for subject, agent, object, and patient, respectively, the concepts of S, A, and O/P are distinct both from grammatical relations and from thematic relations. In other words, an A or S need not be an agent or subject, and an O need not be a patient.
Terminology:
In a nominative–accusative system, S and A are grouped together, contrasting O. In an ergative–absolutive system, S and O are one group and contrast with A. The English language represents a typical nominative–accusative system (accusative for short). The name derived from the nominative and accusative cases. Basque is an ergative–absolutive system (or simply ergative). The name stemmed from the ergative and absolutive cases. S is said to align with either A (as in English) or O (as in Basque) when they take the same form.
Terminology:
Bickel & Nichols (2009)
Listed below are the argument roles used by Bickel and Nichols for the description of alignment types. Their taxonomy is based on semantic roles and valency (the number of arguments controlled by a predicate).
Terminology:
S, the sole argument of a one-place predicate
A, the more agent-like argument of a two-place (A1) or three-place (A2) predicate
O, the less agent-like argument of a two-place predicate
G, the more goal-like argument of a three-place predicate
T, the non-goal-like and non-agent-like argument of a three-place predicate
Locus of marking
The term locus refers to the location where the morphosyntactic marker reflecting the syntactic relations is situated. The markers may be located on the head of a phrase, on a dependent, on both, or on neither of them.
Types of alignment:
Nominative–accusative (or accusative) alignment treats the S argument of an intransitive verb like the A argument of transitive verbs, with the O argument distinct (S = A; O separate) (see nominative–accusative language). In a language with morphological case marking, an S and an A may both be unmarked or marked with the nominative case while the O is marked with an accusative case (or sometimes an oblique case used for dative or instrumental case roles also), as occurs with nominative -us and accusative -um in Latin: Julius venit "Julius came"; Julius Brutum vidit "Julius saw Brutus". Languages with nominative–accusative alignment can detransitivize transitive verbs by demoting the A argument and promoting the O to be an S (thus taking nominative case marking); it is called the passive voice. Most of the world's languages have accusative alignment. An uncommon subtype is called marked nominative. In such languages, the subject of a verb is marked for nominative case, but the object is unmarked, as are citation forms and objects of prepositions. Such alignments are clearly documented only in northeastern Africa, particularly in the Cushitic languages, and the southwestern United States and adjacent parts of Mexico, in the Yuman languages.
Types of alignment:
Ergative–absolutive (or ergative) alignment treats an intransitive argument like a transitive O argument (S = O; A separate) (see ergative–absolutive language). An A may be marked with an ergative case (or sometimes an oblique case used also for the genitive or instrumental case roles) while the S argument of an intransitive verb and the O argument of a transitive verb are left unmarked or sometimes marked with an absolutive case. Ergative–absolutive languages can detransitivize transitive verbs by demoting the O and promoting the A to an S, thus taking the absolutive case, called the antipassive voice. About a sixth of the world's languages have ergative alignment. The best known are probably the Inuit languages and Basque.
Types of alignment:
Active–stative alignment treats the arguments of intransitive verbs like the A argument of transitives (as in English) in some cases and like transitive O arguments (as in Inuit) in other cases (Sa = A; So = O). For example, in Georgian, Mariamma imğera "Mary (-ma) sang", Mariam shares the same narrative case ending as in the transitive clause Mariamma c'erili dac'era "Mary (-ma) wrote the letter (-i)", while in Mariami iq'o Tbilisši revolutsiamde "Mary (-i) was in Tbilisi up to the revolution", Mariam shares the same case ending (-i) as the object of the transitive clause. Thus, the arguments of intransitive verbs are not uniform in their behaviour. The reasons for treating intransitive arguments like A or like O usually have a semantic basis. The particular criteria vary from language to language and may be either fixed for each verb or chosen by the speaker according to the degree of volition, control, or suffering of the participant, or to the degree of sympathy that the speaker has for the participant.
Types of alignment:
Austronesian alignment, also called Philippine-type alignment, is found in the Austronesian languages of the Philippines, Borneo, Taiwan, and Madagascar. These languages have both accusative-type and ergative-type alignments in transitive verbs. They are traditionally (and misleadingly) called "active" and "passive" voice because the speaker can choose to use either one rather like active and passive voice in English. However, because they are not true voice, terms such as "agent trigger" or "actor focus" are increasingly used for the accusative type (S=A) and "patient trigger" or "undergoer focus" for the ergative type (S=O). (The terms with "trigger" may be preferred over those with "focus" because these are not focus systems either; morphological alignment has a long history of confused terminology). Patient-trigger alignment is the default in most of these languages. For either alignment, two core cases are used (unlike passive and antipassive voice, which have only one), but the same morphology is used for the "nominative" of the agent-trigger alignment and the "absolutive" of the patient-trigger alignment so there is a total of just three core cases: common S/A/O (usually called nominative, or less ambiguously direct), ergative A, and accusative O. Some Austronesianists argue that these languages have four alignments, with additional "voices" that mark a locative or benefactive with the direct case, but most maintain that these are not core arguments and thus not basic to the system.
Types of alignment:
Direct alignment: very few languages make no distinction among agent, patient, and intransitive arguments, leaving the hearer to rely entirely on context and common sense to figure them out. This S/A/O case is called direct, as it sometimes is in Austronesian alignment.
Tripartite alignment uses a separate case or syntax for each argument, which are conventionally called the accusative case, the intransitive case, and the ergative case. The Nez Perce language is a notable example.
Types of alignment:
Transitive alignment: certain Iranian languages, such as Rushani, distinguish only transitivity (in the past tense), using a transitive case for both A and O, and an intransitive case for S. That is sometimes called a double-oblique system, as the transitive case is equivalent to the accusative in the non-past tense. The direct, tripartite, and transitive alignment types are all quite rare. In addition, in some languages, both nominative–accusative and ergative–absolutive systems may be used, split between different grammatical contexts, called split ergativity. The split may sometimes be linked to animacy, as in many Australian Aboriginal languages, or to aspect, as in Hindustani and Mayan languages. A few Australian languages, such as Diyari, are split among accusative, ergative, and tripartite alignment, depending on animacy.
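The groupings running through this section can be stated mechanically: given which of S, A and O share a marking, the alignment label follows. The sketch below is our own illustration of that decision rule, not something drawn from the typological literature:

```python
# Sketch: classify morphosyntactic alignment from the case marking of
# S (intransitive subject), A (agent-like transitive argument) and
# O (object-like transitive argument), per the groupings above.

def classify_alignment(s, a, o):
    if s == a == o:
        return "direct (no distinction)"
    if s == a:
        return "nominative-accusative (S = A; O separate)"
    if s == o:
        return "ergative-absolutive (S = O; A separate)"
    if a == o:
        return "transitive/double-oblique (A = O; S separate)"
    return "tripartite (all three distinct)"

# English-style marking: "he came" / "he saw him"
print(classify_alignment(s="NOM", a="NOM", o="ACC"))
# Basque-style marking: -k (ergative) on A only
print(classify_alignment(s="ABS", a="ERG", o="ABS"))
```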
Types of alignment:
A popular idea, introduced in Anderson (1976), is that some constructions universally favor accusative alignment while others are more flexible. In general, behavioral constructions (control, raising, relativization) are claimed to favor nominative–accusative alignment while coding constructions (especially case constructions) do not show any alignment preferences. This idea underlies early notions of ‘deep’ vs. ‘surface’ (or ‘syntactic’ vs. ‘morphological’) ergativity (e.g. Comrie 1978; Dixon 1994): many languages have surface ergativity only (ergative alignments only in their coding constructions, like case or agreement) but not in their behavioral constructions or at least not in all of them. Languages with deep ergativity (with ergative alignment in behavioral constructions) appear to be less common.
Comparison between ergative-absolutive and nominative-accusative:
The arguments can be symbolized as follows:
O = most patient-like argument of a transitive clause (also symbolized as P)
S = sole argument of an intransitive clause
A = most agent-like argument of a transitive clause
The S/A/O terminology avoids the use of terms like "subject" and "object", which are not stable concepts from language to language. Moreover, it avoids the terms "agent" and "patient", which are semantic roles that do not correspond consistently to particular arguments. For instance, the A might be an experiencer or a source, semantically, not just an agent.
Comparison between ergative-absolutive and nominative-accusative:
The following Basque examples demonstrate the ergative–absolutive case-marking system. In Basque, gizona is "the man" and mutila is "the boy". In a sentence like mutila gizonak ikusi du, you know who is seeing whom because -k is added to the one doing the seeing. So the sentence means "the man saw the boy". If you want to say "the boy saw the man", add the -k instead to the word meaning "the boy": mutilak gizona ikusi du.
Comparison between ergative-absolutive and nominative-accusative:
With a verb like etorri, "come", there's no need to distinguish "who is doing the coming", so no -k is added. "The boy came" is mutila etorri da.
Comparison between ergative-absolutive and nominative-accusative:
Japanese, by contrast, marks nouns by following them with particles which indicate their function in the sentence. In the sentence "the man saw the child", the one doing the seeing ("man") may be marked with ga, which works like Basque -k (and the one being seen may be marked with o). However, in sentences like "the child arrived", ga can still be used even though the situation involves only a "doer" and not a "done-to". This is unlike Basque, where -k is completely forbidden in such sentences.
**Kempe's universality theorem**
Kempe's universality theorem:
In 1876 Alfred B. Kempe published his article On a General Method of describing Plane Curves of the nth degree by Linkwork, which showed that for an arbitrary algebraic plane curve a linkage can be constructed that draws the curve. This direct connection between linkages and algebraic curves has been named Kempe's universality theorem: any bounded subset of an algebraic curve may be traced out by the motion of one of the joints in a suitably chosen linkage. Kempe's proof was flawed, and the first complete proof, based on his ideas, was provided in 2002. This theorem has been popularized by describing it as saying, "One can design a linkage which will sign your name!" Kempe recognized that his results demonstrate the existence of a drawing linkage, but that it would not be practical. He states: It is hardly necessary to add, that this method would not be practically useful on account of the complexity of the linkwork employed, a necessary consequence of the perfect generality of the demonstration.
Kempe's universality theorem:
He then calls for the "mathematical artist" to find simpler ways to achieve this result: The method has, however, an interest, as showing that there is a way of drawing any given case; and the variety of methods of expressing particular functions that have already been discovered renders it in the highest degree probable that in every case a simpler method can be found. There is still, however, a wide field open to the mathematical artist to discover the simplest linkworks that will describe particular curves.
Kempe's universality theorem:
A series of animations demonstrating the linkwork that results from Kempe's universality theorem are available for the parabola, self-intersecting cubic, smooth elliptic cubic and the trifolium curves.
Simpler drawing linkages:
Several approaches have been taken to simplify the drawing linkages that result from Kempe's universality theorem. Some of the complexity arises from the linkages Kempe used to perform addition and subtraction of two angles, the multiplication of an angle by a constant, and translation of the rotation of a link in one location to a rotation of a second link at another location. Kempe called these linkages additor, reversor, multiplicator and translator linkages, respectively. The drawing linkage can be simplified by using bevel gear differentials to add and subtract angles, gear trains to multiply angles, and belt or cable drives to translate rotation angles. Another source of complexity is the generality of Kempe's application to all algebraic curves. By focusing on parameterized algebraic curves, dual quaternion algebra can be used to factor the motion polynomial and obtain a drawing linkage. This has been extended to provide movement of the end-effector, but again for parameterized curves. Specializing the curves to those defined by trigonometric polynomials has provided another way to obtain simpler drawing linkages. Bezier curves can be written in the form of trigonometric polynomials; therefore, a linkage system can be designed that draws any curve that is approximated by a sequence of Bezier curves.
Visualizations:
Below is an example of a single-coupled serial chain mechanism, designed by Liu and McCarthy, used to draw the trifolium curve and the hypocycloid curve. Using SageMath, their design was interpreted into these images; the source code can be found on GitHub.
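The hypocycloid itself has a simple parametric form, so its trace can be reproduced without any linkage simulation. The sketch below is plain parametric plotting, not Liu and McCarthy's SageMath design; it traces the deltoid, the three-cusped hypocycloid:

```python
import math

# Parametric equations of a hypocycloid: a circle of radius r rolling
# inside a circle of radius R. The ratio R/r = 3 gives the deltoid.
R, r = 3.0, 1.0

def hypocycloid(t):
    x = (R - r) * math.cos(t) + r * math.cos((R - r) / r * t)
    y = (R - r) * math.sin(t) - r * math.sin((R - r) / r * t)
    return x, y

points = [hypocycloid(2 * math.pi * i / 400) for i in range(401)]
# The curve closes after one full turn of the rolling parameter:
assert math.isclose(points[0][0], points[-1][0], abs_tol=1e-9)
```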
**Ynolate**
Ynolate:
Ynolates are chemical compounds with a negatively charged oxygen attached to an alkyne functionality. They were first synthesized in 1975 by Schöllkopf and Hoppe via the n-butyllithium fragmentation of 3,4-diphenylisoxazole. Synthetically, they behave as ketene precursors or synthons.
**Unisys OS 2200 programming languages**
Unisys OS 2200 programming languages:
OS 2200 has had several generations of compilers and linkers in its history supporting a wide variety of programming languages. In the first releases, the Exec II assembler (SLEUTH) and compilers were used. The assembler was quickly replaced with an updated version (ASM) designed specifically for the 1108 computer and Exec 8 but the early compilers continued in use for quite some time.
Universal Compiling System:
The modern compiling system for OS 2200 is known as UCS, the Universal Compiling System. The UCS architecture uses a common syntax analyzer, separate semantic front ends for each language, and a common back end and optimizer. There is also a common language runtime environment. The UCS system was developed starting in 1969 and initially included PL/I and Pascal. FORTRAN and COBOL were soon added, and Ada was added later. The currently supported languages include COBOL, FORTRAN, C, and PLUS. PLUS, Programming Language for Unisys (originally UNIVAC) Systems, is a block-structured language somewhat similar to Pascal, which it predates.
Legacy compilers:
Previous PLUS, COBOL and FORTRAN compilers are also still supported. An even earlier FORTRAN compiler (FORTRAN V), while no longer supported, is still in use for an application developed in the 1960s in that language.
Compilers previously existed for ALGOL, Simula, BASIC, Lisp, NELIAC, JOVIAL, and other programming languages that are no longer in use on the ClearPath OS 2200 systems.
Assembler:
The assembler, MASM, is heavily used both to obtain the ultimate in efficiency and to implement system calls that are not native to the programming language. Much of the MASM code in current use is a carryover from earlier days when compiler technology was not as advanced and when the machines were much slower and more constrained by memory size than today.
Linking:
There are two linking systems used. The collector (@MAP) combines the output relocatable elements of the basic-mode compilers and assemblers into an absolute element which is directly executable. While this linker is intended primarily to support basic mode, the relocatable and absolute elements may contain extended-mode as well. This is often the case when an existing application is enhanced to use extended mode or call extended mode libraries but still contains some basic mode code. The Exec is an example of such a program.
Linking:
The linker (@LINK) is the modern linking environment which combines object modules into a new object module. It provides both static and dynamic linking capabilities. The most common usage is to combine the object modules of a program statically but to allow dynamic linking to libraries.
Java:
OS 2200 provides a complete Java environment.
Java:
Java on OS 2200 has evolved from an interesting additional capability for small servlets and tools to a full environment capable of handling large applications. The Virtual Machine for the Java Platform on ClearPath OS 2200 JProcessor is a Linux port of the Oracle Corporation Java release. The environment includes a full J2EE application server environment using the Tomcat open source web server from the Apache Software Foundation and the JBoss application server. All of this has been integrated with the OS 2200 security, databases, and recovery environment.
**Ancient TL**
Ancient TL:
Ancient TL is a peer-reviewed open-access scientific journal covering luminescence and electron spin resonance dating. It is published by the Luminescence Dosimetry Laboratory, Department of Physics, East Carolina University. The journal was established in 1977 by D.W. Zimmerman (Washington University in St. Louis).
Since 2015 the journal has been available online only. The journal is community maintained and articles can be published and downloaded free of charge. Since 2020, articles are published under the Creative Commons licence CC BY 4.0.
**Isotopes of gadolinium**
Isotopes of gadolinium:
Naturally occurring gadolinium (₆₄Gd) is composed of 6 stable isotopes, ¹⁵⁴Gd, ¹⁵⁵Gd, ¹⁵⁶Gd, ¹⁵⁷Gd, ¹⁵⁸Gd and ¹⁶⁰Gd, and 1 radioisotope, ¹⁵²Gd, with ¹⁵⁸Gd being the most abundant (24.84% natural abundance). The predicted double beta decay of ¹⁶⁰Gd has never been observed; only a lower limit on its half-life of more than 1.3×10²¹ years has been set experimentally. Thirty-three radioisotopes have been characterized, the most stable being the alpha-decaying (and naturally occurring) ¹⁵²Gd, with a half-life of 1.08×10¹⁴ years, and ¹⁵⁰Gd, with a half-life of 1.79×10⁶ years. All of the remaining radioactive isotopes have half-lives of less than 74.7 years, and the majority of these have half-lives of less than 24.6 seconds. Gadolinium also has 10 metastable isomers, the most stable being ¹⁴³ᵐGd (t½ = 110 seconds), ¹⁴⁵ᵐGd (t½ = 85 seconds) and ¹⁴¹ᵐGd (t½ = 24.5 seconds).
Isotopes of gadolinium:
The primary decay mode at mass numbers lower than that of the most abundant stable isotope, ¹⁵⁸Gd, is electron capture, and the primary mode at higher mass numbers is beta decay. The primary decay products for isotopes lighter than ¹⁵⁸Gd are isotopes of europium, and the primary products of heavier isotopes are isotopes of terbium.
Isotopes of gadolinium:
Gadolinium-153 has a half-life of 240.4 ± 10 days and emits gamma radiation with strong peaks at 41 keV and 102 keV. It is used as a gamma-ray source for X-ray absorptiometry and fluorescence, in bone density gauges for osteoporosis screening, and for radiometric profiling in the Lixiscope portable X-ray imaging system, also known as the Lixi Profiler. In nuclear medicine, it serves to calibrate equipment such as single-photon emission computed tomography (SPECT) systems, ensuring that the machines work correctly to produce images of radioisotope distribution inside the patient. This isotope is produced in a nuclear reactor from europium or enriched gadolinium. It can also detect the loss of calcium in the hip and back bones, aiding the diagnosis of osteoporosis.

Gadolinium-148 would be ideal for radioisotope thermoelectric generators (RTGs) because of its 74-year half-life, high density, and dominant alpha decay mode. However, gadolinium-148 cannot be economically synthesized in sufficient quantities to power an RTG.
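To put such half-lives in practical terms, the short sketch below (an illustration added here, not from the source) applies the standard exponential decay law N(t)/N₀ = 2^(−t/t½) to the 240.4-day half-life of gadolinium-153:

```python
def fraction_remaining(t_days: float, half_life_days: float) -> float:
    """Fraction of a radioisotope remaining after t_days,
    from the decay law N/N0 = 2**(-t / t_half)."""
    return 2.0 ** (-t_days / half_life_days)

GD153_HALF_LIFE = 240.4  # days, as quoted above

for days in (30, 240.4, 365, 730):
    print(f"{days:6.1f} days: {fraction_remaining(days, GD153_HALF_LIFE):.3f} of initial activity")
```

After one year a ¹⁵³Gd source retains only about 35% of its initial activity.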
**Shell growth in estuaries**
Shell growth in estuaries:
Shell growth in estuaries is an aspect of marine biology that has attracted a number of scientific research studies. Many groups of marine organisms produce calcified exoskeletons, commonly known as shells, hard calcium carbonate structures which the organisms rely on for various specialized structural and defensive purposes. The rate at which these shells form is greatly influenced by physical and chemical characteristics of the water in which these organisms live. Estuaries are dynamic habitats which expose their inhabitants to a wide array of rapidly changing physical conditions, exaggerating the differences in physical and chemical properties of the water.
Shell growth in estuaries:
Estuaries have large variation in salinity, ranging from entirely fresh water upstream to fully marine water at the ocean boundary. Estuarine systems also experience daily, tidal and seasonal swings in temperature, which affect many of the chemical characteristics of the water and in turn affect the metabolic and calcifying processes of shell-producing organisms. Temperature and salinity affect the carbonate balance of the water, influencing carbonate equilibrium, calcium carbonate solubility and the saturation states of calcite and aragonite. The tidal influences and shallow water of estuaries mean that estuarine organisms experience wide variations in temperature, salinity and other aspects of water chemistry; these fluctuations make the estuarine habitat ideal for studies on the influence of changing physical and chemical conditions on processes such as shell deposition. Changing conditions in estuaries and coastal regions are especially relevant to human interests, because about 50% of global calcification and 90% of fish catch occurs in these locations.

A substantial proportion of larger marine calcifying organisms are molluscs: bivalves, gastropods and chitons. Cnidarians such as corals, echinoderms such as sea urchins, and arthropods such as barnacles also produce shells in coastal ecosystems. Most of these groups are benthic, living on hard or soft substrates at the bottom of the estuary. Some are attached, like barnacles or corals; some move around on the surface like urchins or gastropods; and some live inside the sediment, like most of the bivalve species.
Shell growth in estuaries:
Minute pelagic species in the phyla Foraminifera and Radiolaria also produce ornate calcareous skeletons. Many benthic mollusks have planktonic larvae called veligers that have calcareous shells, and these larvae are particularly vulnerable to changes in water chemistry; their shells are so thin that small changes in pH can have a large impact on their ability to survive. Some holoplankton (organisms that are planktonic for their full lives) have calcareous skeletons as well, and are even more susceptible to unfavorable shell deposition conditions, since they spend their entire lives in the water column.
Details of carbonate usage:
There are several variations in calcium carbonate (CaCO₃) skeletons, including the two different crystalline forms, calcite and aragonite, as well as other elements which can become incorporated into the mineral matrix, altering its properties. Calcite is a hexagonal form of CaCO₃ that is softer and less dense than aragonite, which has an orthorhombic form. Calcite is the more stable form of CaCO₃ and is less soluble in water under standard temperature and pressure than aragonite, with a solubility product constant (Ksp) of 10^−8.48, compared to 10^−8.28 for aragonite. This means that a greater proportion of aragonite will dissolve in water, producing calcium (Ca²⁺) and carbonate (CO₃²⁻) ions. The amount of magnesium (Mg) incorporated into the mineral matrix during calcium carbonate deposition can also alter the properties of the shell, because magnesium inhibits calcium deposition by inhibiting nucleation of calcite and aragonite. Skeletons with significant amounts of magnesium incorporated into the matrix (greater than 12%) are more soluble, so the presence of this mineral can negatively impact shell durability, which is why some organisms remove magnesium from the water during the calcification process.
Influencing factors:
Food availability can alter shell growth patterns, as can chemical cues from predators, which cause clams, snails and oysters to produce thicker shells. There are costs to producing thicker shells as protection, including the energetic expense of calcification, limits on somatic growth, and reduced growth rates in terms of shell length. In order to minimize the significant energetic expense of shell formation, several calcifying species reduce shell production by producing porous shells or spines and ridges as more economical forms of predator defense.
Influencing factors:
Temperature and salinity also affect shell growth by altering organismal processes, including metabolism and shell magnesium (Mg) incorporation, as well as water chemistry in terms of calcium carbonate solubility, CaCO3 saturation states, ion-pairing, alkalinity and carbonate equilibrium. This is especially relevant in estuaries, where salinities range from 0 to 35, and other water properties such as temperature and nutrient composition also vary widely during the transition from fresh river water to saline ocean water. Acidity (pH) and carbonate saturation states also reach extremes in estuarine systems, making these habitats a natural testing ground for the impacts of chemical changes on the calcification of shelled organisms.
Carbonate and shell deposition:
Calcification rates are largely related to the amount of available carbonate (CO₃²⁻) ions in the water, and this is linked to the relative amounts of (and reactions between) different types of carbonate. Carbon dioxide from the atmosphere and from respiration of animals in estuarine and marine environments quickly reacts in water to form carbonic acid, H₂CO₃. Carbonic acid then dissociates into bicarbonate (HCO₃⁻) and releases a hydrogen ion, and the equilibrium constant for this equation is referred to as K₁. Bicarbonate dissociates into carbonate (CO₃²⁻), releasing another hydrogen ion (H⁺), with an equilibrium constant known as K₂. The equilibrium constants refer to the ratio of products to reactants in these reactions, so the constants K₁ and K₂ govern the relative amounts of different carbonate compounds in the water.
Carbonate and shell deposition:
H₂CO₃ ⇌ H⁺ + HCO₃⁻   K₁ = [H⁺][HCO₃⁻] / [H₂CO₃]
HCO₃⁻ ⇌ H⁺ + CO₃²⁻   K₂ = [H⁺][CO₃²⁻] / [HCO₃⁻]

Since alkalinity, or acid-buffering capacity, of the water is regulated by the number of hydrogen ions that an anion can accept, carbonate (can accept 2 H⁺) and bicarbonate (can accept 1 H⁺) are the principal components of alkalinity in estuarine and marine systems. Since acidic conditions promote shell dissolution, the alkalinity of the water is positively correlated with shell deposition, especially in estuarine regions that experience broad swings in pH. Based on the carbonate equilibrium equations, an increase in K₂ leads to higher levels of available carbonate and a potential increase in calcification rates as a result. The values for K₁ and K₂ can be influenced by several physical factors, including temperature, salinity and pressure, so organisms in different habitats can encounter different equilibrium conditions. Many of these same factors influence the solubility of calcium carbonate, with the solubility product constant Ksp expressed as the concentration of dissolved calcium and carbonate ions at equilibrium: Ksp = [Ca²⁺][CO₃²⁻]. Therefore, increases in Ksp based on differences in temperature or pressure, or increases in the apparent solubility constant K′sp as a result of salinity or pH changes, mean that calcium carbonate is more soluble. Increased solubility of CaCO₃ makes shell deposition more difficult, and so has a negative impact on the calcification process.
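To make the equilibrium relations concrete, here is a minimal Python sketch (an illustration added here; the constants are assumed example values of roughly seawater magnitude, not figures from the source) that computes the fractions of H₂CO₃, HCO₃⁻ and CO₃²⁻ in total dissolved inorganic carbon as a function of pH:

```python
def carbonate_fractions(pH: float, K1: float, K2: float):
    """Equilibrium fractions of H2CO3, HCO3- and CO3^2-,
    derived from the two dissociation equations above."""
    h = 10.0 ** (-pH)                    # [H+]
    denom = h * h + K1 * h + K1 * K2     # common denominator of the three species
    return (h * h / denom,               # H2CO3 fraction
            K1 * h / denom,              # HCO3- fraction
            K1 * K2 / denom)             # CO3^2- fraction

# Assumed example constants (seawater-like orders of magnitude):
K1, K2 = 1.0e-6, 8.0e-10

for pH in (7.7, 7.9, 8.1):
    f_h2co3, f_hco3, f_co3 = carbonate_fractions(pH, K1, K2)
    print(f"pH {pH}: H2CO3 {f_h2co3:.3f}  HCO3- {f_hco3:.3f}  CO3^2- {f_co3:.3f}")
```

Even this toy calculation reproduces the pattern described above: as pH falls, the carbonate fraction shrinks, leaving fewer CO₃²⁻ ions available for shell deposition.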
Carbonate and shell deposition:
The saturation state of calcium carbonate also has a strong influence on shell deposition, with calcification only occurring when the water is saturated or supersaturated with CaCO₃, based on the formula: Ω = [CO₃²⁻][Ca²⁺] / K′sp. Higher saturation states mean higher concentrations of carbonate and calcium relative to the solubility of calcium carbonate, favoring shell deposition. The two forms of CaCO₃ have different saturation states, with the more soluble aragonite displaying a lower saturation state than calcite. Since aragonite is more soluble than calcite and solubility increases with pressure, the depth at which the ocean is undersaturated with aragonite (the aragonite compensation depth) is shallower than the depth at which it is undersaturated with calcite (the calcite compensation depth). As a result, aragonite-based organisms live in shallower environments. Calcification rate does not change much with saturation levels above 300%. Since saturation state can be affected by both solubility and carbonate ion concentrations, it can be strongly impacted by environmental factors such as temperature and salinity.
Effect of temperature on calcification:
Water temperatures vary widely on a seasonal basis in polar and temperate habitats, inducing metabolic changes in organisms exposed to these conditions. Seasonal temperature swings are even more drastic in estuaries than in the open ocean due to the large surface area of shallow water as well as the differential temperature of ocean and river water. During the summer, rivers are often warmer than the ocean, so there is a gradient of decreasing temperature towards the ocean in an estuary. This switches in the winter, with ocean waters being much warmer than river water, producing the opposite temperature gradient. Temperature is changing on a larger time scale as well, with predicted climate warming slowly raising the temperature of both freshwater and marine water sources (though at variable rates), further enhancing the impact that temperature has on shell deposition processes in estuarine environments.
Effect of temperature on calcification:
Solubility product:
Temperature has a strong effect on the solubility product constants for both calcite and aragonite, with an approximately 20% decrease in K′sp from 0 to 25 °C. The lower solubility constants for calcite and aragonite at elevated temperature have a positive impact on calcium carbonate precipitation and deposition, making it easier for calcifying organisms to produce shells in water with lower solubility of calcium carbonate. Temperature can also influence calcite:aragonite ratios, as aragonite precipitation rates are more strongly tied to temperature, with aragonite precipitation dominating above 6 °C.
Effect of temperature on calcification:
Saturation state:
Temperature also has a large impact on the saturation state of calcium carbonate species, as the level of disequilibrium (degree of saturation) strongly influences reaction rates. Comeau et al. point out that cold locations such as the Arctic show the most dramatic decreases in aragonite saturation state (Ω) associated with climate change. This particularly affects pteropods, since they have thin aragonite shells and are the dominant planktonic species in cold Arctic waters. There is a positive correlation between temperature and calcite saturation state for the eastern oyster Crassostrea virginica, which produces a shell primarily composed of calcite. While oysters are benthic and use calcite instead of aragonite (like pteropods), there is still a clear increase in both calcite saturation level and oyster calcification rate at the higher temperature treatments.
Effect of temperature on calcification:
In addition to impacting the solubility and saturation state of calcite and aragonite, temperature can alter the composition of shell or calcified skeletons, especially influencing the incorporation of magnesium (Mg) into the mineral matrix. Magnesium content of carbonate skeletons (as MgCO3) increases with temperature, explaining a third of the variation in sea star Mg:Ca ratios. This is important because when more than 8-12% of a calcite-dominated skeleton is composed of MgCO3, the shell material is more soluble than aragonite. As a result of the positive correlation between temperature and Mg content, organisms that live in colder environments such as the deep sea and high latitudes have a lower percentage of MgCO3 incorporated into their shells.
Effect of temperature on calcification:
Even small temperature changes such as those predicted under global warming scenarios can influence Mg:Ca ratios, as the foraminiferan Ammonia tepida increases its Mg:Ca ratio 4-5% per degree of temperature elevation. This response is not limited to animals or open ocean species, since crustose coralline algae also increase their incorporation of magnesium and therefore their solubility at elevated temperatures.
Effect of temperature on calcification:
Shell deposition:
Between the effect that temperature has on Mg:Ca ratios as well as on the solubility and saturation state of calcite and aragonite, it is clear that short- or long-term temperature variations can influence the deposition of calcium carbonate by altering seawater chemistry. The impact that these temperature-induced chemical changes have on shell deposition has been repeatedly demonstrated for a wide array of organisms that inhabit estuarine and coastal systems, highlighting the cumulative effect of all temperature-influenced factors.
Effect of temperature on calcification:
The blue mussel Mytilus edulis is a major space occupier on hard substrates on the east coast of North America and the west coast of Europe, and the calcification rate of this species increases up to five times with rising temperature. Eastern oysters and crustose coralline algae have also been shown to increase their calcification rates with elevated temperature, though this can have varied effects on the morphology of the organism.

Schöne et al. (2006) found that the barnacle Chthamalus fissus and the mussel Mytella guyanensis showed faster shell elongation rates at higher temperature, with over 50% of this variability in shell growth explained by temperature changes. The cowry (a sea snail) Monetaria annulus displayed a positive correlation between sea surface temperature (SST) and the thickness of the callus, the outer surface of juvenile shells.
Effect of temperature on calcification:
The predatory intertidal snail Nucella lapillus also develops thicker shells in warmer climates, likely due to constraints on calcification in cold water. Bivalve clams show higher growth rates and produce thicker shells, more spines, and more shell ornamentation at warmer, low-latitude locations, again highlighting the enhancement of calcification as a result of warmer water and the corresponding chemical changes.

The short-term changes in calcification rate and shell growth described by the aforementioned studies are based on experimental temperature elevation or latitudinal thermal gradients, but long-term temperature trends can also affect shell growth. Sclerochronology can reconstruct historical temperature data from growth increments in the shells of many calcifying organisms, based on differential growth rates at different temperatures. The visible markers for these growth increments are similar to growth rings, and are also present in fossil shells, enabling researchers to establish that clams such as Phacosoma balticum and Ruditapes philippinarum grew the fastest during times of warmer climate.
Effect of salinity on calcification:
Salinity refers to the water's "saltiness". In oceanography and marine biology, it has been traditional to express salinity not as a percent, but as permille (parts per thousand) (‰), which is approximately grams of salt per kilogram of solution. Salinity varies even more widely than temperature in estuaries, ranging from zero to 35, often over relatively short distances. Even organisms in the same location experience broad swings in salinity with the tides, exposing them to very different water masses with chemical properties that provide varying levels of support for calcification processes. Even within a single estuary, an individual species can be exposed to differing shell deposition conditions, resulting in varied growth patterns due to changes in water chemistry and resultant calcification rates.
Effect of salinity on calcification:
Magnesium:calcium ratios:
Salinity displays a positive correlation with magnesium:calcium (Mg:Ca) ratios, though it shows only about half as much influence as temperature. Salinity in some systems can account for about 25% of the variation in Mg:Ca ratios, with 32% explained by temperature, but these salinity-induced changes in shell MgCO₃ incorporation are not due to differences in available magnesium. Instead, in planktonic foraminiferans, changes in salinity could hinder the internal mechanisms of magnesium removal prior to calcification. Foraminiferans are thought to produce calcification vacuoles that transport pockets of seawater to the calcification site, alter the makeup of that seawater and remove magnesium, a process that may be interrupted by high levels of salinity. Salinity can also affect the solubility of CaCO₃, as shown by the following formulas relating temperature (T) and salinity (S) to K′sp, the apparent solubility product constant for CaCO₃:

K′sp(calcite) = (0.1614 + 0.05225 S − 0.0063 T) × 10⁻⁶
K′sp(aragonite) = (0.5115 + 0.05225 S − 0.0063 T) × 10⁻⁶

These equations show that temperature displays a negative relationship with K′sp, while salinity shows a positive relationship with K′sp for both calcite and aragonite. The slopes of these lines are the same, with only the intercept changing for the different carbonate species, highlighting that at standard temperature and pressure, aragonite is more soluble than calcite. Mucci presented more complex equations relating temperature and salinity to K′sp, but the same general pattern appears.

The increasing solubility of CaCO₃ with salinity indicates that organisms in more marine environments would have difficulty depositing shell material if this factor were the only one influencing shell formation. Apparent solubility product is tied to salinity because of the ionic strength of the solution and the formation of cation–carbonate ion pairs that lower the amount of carbonate ions available in the water. This equates to removing the products from the equation for the dissolution of CaCO₃ in water (CaCO₃ ⇌ Ca²⁺ + CO₃²⁻), which drives the forward reaction and favors the dissolution of calcium carbonate. The result is an apparent solubility product for CaCO₃ that is 193 times higher in 35‰ seawater than in distilled water.
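The linear K′sp formulas above translate directly into code. The sketch below (an illustration added here; the Ca²⁺ and CO₃²⁻ concentrations are made-up example values, not data from the source) evaluates K′sp for both minerals across an estuarine salinity gradient and the corresponding saturation state Ω = [CO₃²⁻][Ca²⁺] / K′sp:

```python
def k_sp_apparent(salinity: float, temp_c: float, mineral: str = "calcite") -> float:
    """Apparent solubility product K'sp from the linear fits quoted above."""
    intercept = 0.1614 if mineral == "calcite" else 0.5115  # aragonite otherwise
    return (intercept + 0.05225 * salinity - 0.0063 * temp_c) * 1e-6

def omega(co3: float, ca: float, ksp: float) -> float:
    """Saturation state; values above 1 (i.e. 100%) favor shell deposition."""
    return co3 * ca / ksp

# Assumed example ion concentrations, for illustration only:
ca, co3 = 1.03e-2, 2.0e-4

for s in (15, 25, 35):  # an estuarine salinity gradient
    for mineral in ("calcite", "aragonite"):
        ksp = k_sp_apparent(s, temp_c=20.0, mineral=mineral)
        print(f"S={s:2d}  {mineral:9s}  K'sp={ksp:.3e}  Omega={omega(co3, ca, ksp):4.2f}")
```

Consistent with the text, the computed K′sp rises with salinity for both minerals, and aragonite always shows the lower saturation state of the two.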
Effect of salinity on calcification:
Saturation state:
Salinity has a different effect on the saturation state of calcite and aragonite, causing increases in these values and in calcium concentrations with higher salinity, favoring the precipitation of calcium carbonate. Both alkalinity, or acid-buffering capacity, and CaCO₃ saturation state increase with salinity, which may help estuarine organisms to overcome fluctuations in pH that could otherwise negatively impact shell formation. However, river waters in some estuaries are oversaturated with calcium carbonate, while mixed estuarine water is undersaturated due to the low pH resulting from respiration. Highly eutrophic estuaries support high numbers of planktonic and benthic animals that consume oxygen and produce carbon dioxide, which lowers the pH of estuarine waters and the amount of free carbonate. Therefore, even though higher salinity can cause increased saturation states of calcite and aragonite, there are many other factors that interact in this system to influence the shell deposition of estuarine organisms.
Effect of salinity on calcification:
Shell deposition:
All of these aspects of shell deposition are affected by salinity in different ways, so it is useful to examine the overall impact that salinity has on calcification rates and shell formation in estuarine organisms, especially in conjunction with temperature, which also affects calcification. Fish bones and scales are heavily calcified, and these parts of Arctic fish are about half as calcified (27% inorganic material) as those from fish in temperate (33%) and tropical (50%) environments. The benthic blue mussel Mytilus edulis also displayed an increase in calcification rate with salinity, showing calcification rates up to 5 times higher at 37‰ than at 15‰.

For oysters in Chesapeake Bay, salinity does not have an influence on calcification at high temperature (30 °C), but does significantly increase calcification at cooler temperature (20 °C). In the crustose coralline alga Phymatolithon calcareum, temperature and salinity showed an additive effect, as both of these factors increased the overall calcification rate of this encrusting alga. The gross effect of salinity on calcification is largely a positive one, as evidenced by the positive impact of salinity on calcification rates in diverse groups of species. This is likely a result of the increased alkalinity and calcium carbonate saturation states with salinity, which combine to decrease free hydrogen ions and increase free carbonate ions in the water. Higher alkalinity in marine waters is especially important since carbon dioxide produced via respiration in estuaries can lower pH, which decreases the saturation states of calcite and aragonite and can cause CaCO₃ dissolution. Because of the lower salinity in fresher parts of estuaries, alkalinity is lower there, increasing the susceptibility of estuarine organisms to calcium carbonate dissolution due to low pH. Increases in salinity and temperature can counteract the negative impact of pH on calcification rates, as they elevate calcite and aragonite saturation states and generally create more favorable conditions for shell growth.
Future changes:
Shell growth and calcification rate are the cumulative outcome of the impacts of temperature and salinity on water chemistry and organismal processes such as metabolism and respiration. It has been established that temperature and salinity influence the balance of the carbonate equilibrium, the solubility and saturation state of calcite and aragonite, as well as the amount of magnesium that gets incorporated into the mineral matrix of the shell. All of these factors combine to produce net calcification rates that are observed under different physical and environmental conditions. Organisms from many phyla produce calcium carbonate skeletons, so organismal processes vary widely, but the effect of physical conditions on water chemistry impacts all calcifying organisms. Since these conditions are dynamic in estuaries, they serve as an ideal test environment to draw conclusions about future shifts in calcification rates based on changes in water chemistry with climate change.
Future changes:
Climate change:
With changing climate, precipitation is predicted to increase in many areas, resulting in higher river discharge into estuarine environments. In large estuaries such as the Chesapeake Bay, this could result in a large-scale decrease in salinity over hundreds of square kilometers of habitat, causing a decrease in alkalinity and in CaCO₃ saturation states and thereby reducing calcification rates in affected habitats. Lower alkalinity and increased nutrient availability from runoff will increase biological activity, producing carbon dioxide and thus lowering the pH of these environments. This could be exacerbated by pollution that makes estuarine environments even more eutrophic, negatively impacting shell growth since more acidic conditions favor shell dissolution. However, this may be mitigated by increased temperature due to global warming, since elevated temperatures result in lower solubility and higher saturation states for calcite and aragonite, facilitating CaCO₃ precipitation and shell formation. Therefore, if organisms are able to adapt or acclimate to increased temperature physiologically, the warmer water will be more conducive to shell production than current water temperatures, at least in temperate regions.
Future changes:
Calcification rates:
The limiting factor in shell deposition may be saturation state, especially for aragonite, which is a more soluble and less stable form of CaCO₃ than calcite. In 1998, the average global aragonite saturation state was 390%, a range commonly experienced since the last glacial period and a level above which calcification rates plateau. However, there is a precipitous drop in calcification rate when the aragonite saturation state falls below 380%, with a three-fold decrease in calcification accompanying a drop to 98% saturation. By 2100, a pCO₂ of 560 ppm and a pH drop to 7.93 (global ocean average) would reduce the saturation state to 293%, which is unlikely to cause calcification decreases. The following 100–200 years may see pCO₂ increase to 1000 ppm and pH drop to 7.71, with the aragonite saturation state dropping to 192%, which would result in a 14% drop in calcification rate on this basis alone. This could be exacerbated by low salinity from higher precipitation in estuaries, but could also be mitigated by increased temperature, which could increase calcification rates. The interaction between pH, temperature and salinity in estuaries and in the world ocean will drive calcification rates and determine future species assemblages based on susceptibility to this change.
Future changes:
One problem with counting on increased temperature to counteract the effects of acidification on calcification rate is the relationship between temperature and Mg:Ca ratios, as higher temperatures result in higher amounts of magnesium incorporated into the shell matrix. Shells with higher Mg:Ca ratios are more soluble, so even organisms with primarily calcite skeletons (less soluble than aragonite) may be heavily impacted by future conditions.
**Connectivism**
Connectivism:
Connectivism is a theoretical framework for understanding learning in a digital age. It emphasizes how internet technologies such as web browsers, search engines, wikis, online discussion forums, and social networks have contributed to new avenues of learning. Technologies have enabled people to learn and share information across the World Wide Web and among themselves in ways that were not possible before the digital age. Learning does not simply happen within an individual, but within and across networks. What sets connectivism apart from theories such as constructivism is the view that "learning (defined as actionable knowledge) can reside outside of ourselves (within an organization or a database), is focused on connecting specialized information sets, and the connections that enable us to learn more are more important than our current state of knowing". Connectivism sees knowledge as a network and learning as a process of pattern recognition. Connectivism has similarities with Vygotsky's zone of proximal development (ZPD) and Engeström's activity theory. The phrase "a learning theory for the digital age" indicates the emphasis that connectivism gives to technology's effect on how people live, communicate, and learn. Connectivism is an integration of principles related to chaos, network, complexity, and self-organization theories.
History:
Connectivism was first introduced in 2004 in a blog post by George Siemens, which was later published as an article in 2005. It was expanded in 2005 by two publications, Siemens' Connectivism: Learning as Network Creation and Downes' An Introduction to Connective Knowledge. Both works received significant attention in the blogosphere, and an extended discourse has followed on the appropriateness of connectivism as a learning theory for the digital age. In 2007, Bill Kerr entered into the debate with a series of lectures and talks on the matter, as did Forster, both at the Online Connectivism Conference at the University of Manitoba. In 2008, in the context of digital and e-learning, connectivism was reconsidered and its technological implications were discussed by Siemens and Ally.
Nodes and links:
The central aspect of connectivism is the metaphor of a network with nodes and connections. In this metaphor, a node is anything that can be connected to another node, such as an organization, information, data, feelings, and images. Connectivism recognizes three node types: neural, conceptual (internal) and external. Connectivism sees learning as the process of creating connections and expanding or increasing network complexity. Connections may have different directions and strengths. In this sense, a connection joining nodes A and B which goes from A to B is not the same as one that goes from B to A. There are some special kinds of connections, such as "self-joins" and patterns. A self-join connection joins a node to itself, and a pattern can be defined as "a set of connections appearing together as a single whole".

The idea of organisations as cognitive systems in which knowledge is distributed across nodes originated from the perceptron (artificial neuron) in artificial neural networks, and is directly borrowed from connectionism, "a software structure developed based on concepts inspired by biological functions of brain; it aims at creating machines able to learn like human".

The network metaphor allows a notion of "know-where" (the understanding of where to find the knowledge when it is needed) to supplement the notions of "know-how" and "know-what" that form the cornerstones of many theories of learning.
Nodes and links:
As Downes states: "at its heart, connectivism is the thesis that knowledge is distributed across a network of connections, and therefore that learning consists of the ability to construct and traverse those networks".
Principles:
Principles of connectivism include:
Learning and knowledge rests in diversity of opinions.
Learning is a process of connecting specialized nodes or information sources.
Learning may reside in non-human appliances.
Learning is more critical than knowing.
Maintaining and nurturing connections is needed to facilitate continuous learning. When the interaction time between the actors of a learning environment is not enough, the learning networks cannot be consolidated.
Perceiving connections between fields, ideas and concepts is a core skill.
Currency (accurate, up-to-date knowledge) is the intent of learning activities.
Decision-making is itself a learning process. Choosing what to learn and the meaning of incoming information is seen through the lens of a shifting reality. While there is a right answer now, it may be wrong tomorrow due to alterations in the information climate affecting the decision.
Teaching methods:
Summarizing connectivist teaching and learning, Downes states: "to teach is to model and demonstrate, to learn is to practice and reflect."

In 2008, Siemens and Downes delivered an online course called "Connectivism and Connective Knowledge". It covered connectivism as content while attempting to implement some of their ideas. The course was free to anyone who wished to participate, and over 2000 people worldwide enrolled. The phrase "Massive Open Online Course" (MOOC) describes this model. All course content was available through RSS feeds, and learners could participate with their choice of tools: threaded discussions in Moodle, blog posts, Second Life and synchronous online meetings. The course was repeated in 2009 and in 2011.
Teaching methods:
At its core, connectivism is a form of experiential learning which prioritizes the set of connections formed by actions and experience over the idea that knowledge is propositional.
Criticisms:
The idea that connectivism is a new theory of learning is not widely accepted. Verhagen argued that connectivism is rather a "pedagogical view."

The lack of comparative literature reviews in connectivism papers complicates evaluating how connectivism relates to prior theories, such as socially distributed cognition (Hutchins, 1995), which explored how connectionist ideas could be applied to social systems. Classical theories of cognition such as activity theory (Vygotsky, Leont'ev, Luria, and others starting in the 1920s) proposed that people are embedded actors, with learning considered via three features: a subject (the learner), an object (the task or activity) and tools or mediating artifacts. Social cognitive theory (Bandura, 1962) claimed that people learn by watching others. Social learning theory (Miller and Dollard) elaborated this notion. Situated cognition (Brown, Collins, & Duguid, 1989; Greeno & Moore, 1993) alleged that knowledge is situated in activity bound to social, cultural and physical contexts, with knowledge and learning requiring thinking on the fly rather than the storage and retrieval of conceptual knowledge. Community of practice (Lave & Wenger, 1991) asserted that the process of sharing information and experiences with the group enables members to learn from each other. Collective intelligence (Lévy, 1994) described a shared or group intelligence that emerges from collaboration and competition.
Criticisms:
Kerr claims that although technology affects learning environments, existing learning theories are sufficient. Kop and Hill conclude that while it does not seem that connectivism is a separate learning theory, it "continues to play an important role in the development and emergence of new pedagogies, where control is shifting from the tutor to an increasingly more autonomous learner." AlDahdouh examined the relation between connectivism and artificial neural networks (ANNs), and the results, unexpectedly, revealed that ANN researchers use constructivist principles to train ANNs with labeled training data. However, he argued that connectivist principles are used to train ANNs only when the knowledge is unknown.
Criticisms:
Ally recognizes that the world has changed and become more networked, so learning theories developed prior to these global changes are less relevant. However, he argues that "what is needed is not a new stand-alone theory for the digital age, but a model that integrates the different theories to guide the design of online learning materials."

Chatti notes that connectivism misses some concepts which are crucial for learning, such as reflection, learning from failures, error detection and correction, and inquiry. He introduces the Learning as a Network (LaaN) theory, which builds upon connectivism, complexity theory, and double-loop learning. LaaN starts from the learner and views learning as the continuous creation of a personal knowledge network (PKN).

Schwebel of Torrens University notes that connectivism provides only a limited account of how learning occurs online. Conceding that learning occurs across networks, he introduces a paradox of change: if connectivism accounts for learning through changing networks, and these networks change as drastically as technology has in the past, then the theory must account for that change too, making it no longer the same theory. Furthermore, citing Understanding Media: The Extensions of Man, Schwebel notes that the nodes can constrain the types of learning that can occur, leading to issues with democratised education, as content presented within the network will be limited both by how the network can handle information and by what content is likely to be presented within the network through behaviourist-style principles of reinforcement, since providers are likely to recirculate, reproduce and reiterate information that is rewarded through things such as likes.
**PabloDraw**
PabloDraw:
PabloDraw is a cross-platform text editor designed for creating ANSI and ASCII art, similar to its MS-DOS-based predecessors, ACiDDraw (1994) and TheDraw (1986).
PabloDraw:
A notable feature of PabloDraw is its integrated multi-user editing support, making it the first groupware ANSI/ASCII editor in existence. This allows artists from around the world with an internet connection to cooperatively draw (and chat) together. These creations are referred to as "joints", or jointly created productions, and have radically changed the way these artists collaborate in this form.
PabloDraw:
This editor is capable of handling most standard text-mode formats such as ANSI, ASCII and Binary (.BIN). Additionally, it supports different aspect ratios such as 80×25 and 80×50 (25- and 50-line text graphics modes, respectively) and emulates the Amiga Topaz font for artists who prefer to draw using that specific extended character set. In addition to ASCII and ANSI art, PabloDraw can also be used to create RIPscrip vector graphics.
**FMN reductase (NADH)**
FMN reductase (NADH):
FMN reductase (NADH) (EC 1.5.1.42, NADH-FMN reductase) is an enzyme with the systematic name FMNH₂:NAD⁺ oxidoreductase. This enzyme catalyses the following chemical reaction:

FMNH₂ + NAD⁺ ⇌ FMN + NADH + H⁺

The enzyme often forms a complex with monooxygenases.
**Inland saline aquaculture**
Inland saline aquaculture:
Inland saline aquaculture is the farming or culture of aquatic animals and plants using inland (i.e. non-coastal) sources of saline groundwater rather than the more common coastal aquaculture methods. As a side benefit, it can be used to reduce the amount of salt in underground water tables, leading to an improvement in the surrounding land usage for agriculture. Due to its nature, it is only commercially possible in areas that have large reserves of saline groundwater, such as Australia.
Systems:
Extensive culture:
Extensive culture aquaculture systems are simple, with low levels of intervention. An example would be a salty dam, lake or pond stocked with trout, where no feed needs to be added because the fish can feed on what occurs naturally in the water. While such systems require little capital investment or management time, their productivity is relatively low.
Systems:
Intensive culture:
Intensive culture requires more capital outlay and greater management time. Intensive systems often use purpose-built facilities (e.g. tanks), artificial feed, aeration and constant monitoring of water quality. They have much higher productivity rates, but correspondingly high feeding, labour, water-pumping and capital costs.
Semi-intensive culture:
Semi-intensive culture falls between extensive and intensive culture. It may range from adding some artificial feed to an extensive system to adding some aeration and waste management. Costs rise as more inputs are added.
Suitable species:
Fish:
Rainbow trout - robust, fast growth; requires low water temperatures, so may be limited to winter production
Brown trout - robust, fast growth; requires low water temperatures, so may be limited to winter production
Barramundi - needs higher temperatures; tolerant of a large range of salinity levels
Macquarie perch - wide tolerance over a range of salinity and water quality levels; not suitable for commercial quantities
Silver perch - suitable for extensive and intensive systems; prefers warmer water
Snapper

Other species:
Crustaceans - brine shrimp, prawns; these can be included as part of a wastewater treatment program, as some have the capacity to quickly clean water
Molluscs - mussels
Algae - both unicellular algae and "seaweeds" can be used to extract a range of high-value products, including pharmaceutical chemicals
Suitable species:
Mixing species:
Chain system:
Some inland aquaculture systems involve using a range of separated species to increase productivity. An example would be a system in which water is first used to culture a fish species, then diverted to tanks of shellfish, which feed on the fine particles left by the fish, then to algae species, which remove the dissolved nutrients, and last of all to a horticultural system.
Suitable species:
Poly-culture:
Separate from this type of system is poly-culture, where two or more species are cultured in the same water, possibly multiple fish species or a fish and a mollusc species.
**Axicabtagene ciloleucel**
Axicabtagene ciloleucel:
Axicabtagene ciloleucel, sold under the brand name Yescarta, is a medication used for the treatment of large B-cell lymphoma that has failed conventional treatment. T cells are removed from a person with lymphoma and genetically engineered to produce a chimeric antigen receptor that recognizes the cancer. The resulting chimeric antigen receptor T cells (CAR-Ts) are then given back to the person to populate the bone marrow. Axicabtagene treatment carries a risk of cytokine release syndrome (CRS) and neurological toxicities.

Because CD19 is a pan-B-cell marker, the T cells engineered to target CD19 on the cancerous B cells also affect normal B cells, with the exception of some plasma cells.
Side effects:
Because treatment with axicabtagene carries a risk of cytokine release syndrome and neurological toxicities, the FDA has mandated that hospitals be certified for its use prior to treatment of any patients.
History:
It was developed by California-based Kite Pharma. Axicabtagene ciloleucel was awarded U.S. Food and Drug Administration (FDA) breakthrough therapy designation on 18 October 2017, for diffuse large B-cell lymphoma, transformed follicular lymphoma, and primary mediastinal B-cell lymphoma. It also received priority review and orphan drug designation. Based on the ZUMA-1 trial, Kite submitted a biologics license application for axicabtagene in March 2017, for the treatment of non-Hodgkin lymphoma. The FDA granted approval on 18 October 2017, for the second-line treatment of diffuse large B-cell lymphoma.

On 1 April 2022, the FDA approved axicabtagene ciloleucel for adults with large B-cell lymphoma (LBCL) that is refractory to first-line chemoimmunotherapy or relapses within twelve months of first-line chemoimmunotherapy. It is not indicated for the treatment of patients with primary central nervous system lymphoma. Approval was based on ZUMA-7, a randomized, open-label, multicenter trial in adults with primary refractory LBCL or relapse within twelve months following completion of first-line therapy. Participants had not yet received treatment for relapsed or refractory lymphoma and were potential candidates for autologous hematopoietic stem cell transplantation (HSCT). A total of 359 participants were randomized 1:1 to receive a single infusion of axicabtagene ciloleucel following fludarabine and cyclophosphamide lymphodepleting chemotherapy, or to receive second-line standard therapy consisting of two or three cycles of chemoimmunotherapy followed by high-dose therapy and autologous HSCT in participants who attained complete or partial remission.

In January 2023, the National Institute for Health and Care Excellence (NICE) recommended axicabtagene ciloleucel to treat adult patients with diffuse large B-cell lymphoma (DLBCL) or primary mediastinal large B-cell lymphoma (PMBCL) who have already been treated with two or more systemic therapies.
**Gated commit**
Gated commit:
A gated commit, gated check-in or pre-tested commit is a software integration pattern that reduces the chances of breaking a build (and often its associated tests) when changes are committed to the main branch of version control. This pattern can be supported by a continuous integration (CI) server.

To perform a gated commit, the software developer requests a gated commit from the CI server before committing the actual changes to the central location. The CI server merges the local changes with the head of the master branch and performs the validations (build and tests) that make up the gate, so the developer can see whether the changes would break the build without actually committing them. A commit to the central location is only allowed if the gates are cleared.
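As a rough illustration of this workflow (a minimal sketch, not tied to any particular CI product; the repository URL, branch names and test command are assumptions for the example), the gate can be expressed as: merge the candidate change with the current head of the main branch in a throwaway clone, run the validations there, and push only if they pass.

```python
import subprocess, sys, tempfile

def run(*cmd, cwd=None):
    """Run a command, echoing it; any failure aborts the gate."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

def gated_commit(repo_url: str, feature_branch: str, main_branch: str = "main"):
    with tempfile.TemporaryDirectory() as scratch:
        # 1. Fresh clone, so validation cannot depend on local developer state.
        run("git", "clone", repo_url, scratch)
        # 2. Merge the candidate change onto the current head of the main branch.
        run("git", "checkout", main_branch, cwd=scratch)
        run("git", "merge", "--no-ff", f"origin/{feature_branch}", cwd=scratch)
        # 3. The gate itself: the build/test command (pytest is assumed here).
        run("pytest", cwd=scratch)
        # 4. Only a green gate reaches the central repository.
        run("git", "push", "origin", main_branch, cwd=scratch)

if __name__ == "__main__":
    try:
        gated_commit("git@example.com:team/project.git", "feature/my-change")
    except subprocess.CalledProcessError:
        sys.exit("Gate failed: the changes were NOT committed to the main branch.")
```

In a real CI server the same steps run on the server side, triggered by the developer's gated-commit request rather than by a local script.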
Gated commit:
Alternatively, this pattern can be realized using separate branches in version control. For instance, GitHub can require that all commits to a branch B be merge commits from pull requests which have been built successfully on the CI server and are up to date (i.e., based or rebased on B).
**Blocking (radio)**
Blocking (radio):
In radio, and wireless communications in general, blocking is a condition in a receiver in which an off-frequency signal (generally further off-frequency than the immediately adjacent channel) causes the signal of interest to be suppressed.

Blocking rejection is the ability of a receiver to tolerate an off-frequency signal and avoid blocking. A good automatic gain control design is part of achieving good blocking rejection.
**NaPTAN**
NaPTAN:
The National Public Transport Access Node (NaPTAN) database is a UK nationwide system for uniquely identifying all the points of access to public transport in the UK. The dataset is closely associated with the National Public Transport Gazetteer.
Every UK railway station, coach terminus, airport, ferry terminal, bus stop, taxi rank or other place where public transport can be joined or left is allocated a unique NaPTAN identifier. The relationship of the stop to a City, Town, Village or other locality can be indicated through an association with elements of the National Public Transport Gazetteer.
There is a CEN standardisation initiative, Identification of Fixed Objects In Public Transport ('IFOPT'), to develop NaPTAN concepts into a European standard for stop identification as an extension to Transmodel, the European standard for Public Transport information.
Purpose of NaPTAN:
The ability to identify and locate stops in relation to topography, both consistently and economically, is fundamental to modern computer based systems that provide passenger information and manage public transport networks. Stop data is needed by journey planners, scheduling systems, real-time systems, for transport planning, performance monitoring, and for many other purposes. Digitalising a nation's public transport stops is an essential step in creating a national information infrastructure.
Purpose of NaPTAN:
In the UK NaPTAN enabled the creation of the Transport Direct Portal, a UK nationwide system for multi-modal journey planning. NaPTAN also underpins TransXChange, the UK standard for bus schedules, which is used for the Electronic Registration of Bus Services.
NaPTAN Components:
NaPTAN comprises several distinct elements:
A data model, based on Transmodel.
A data exchange format, specified as an XML schema (see http://www.naptan.org.uk/schema/schemas.htm).
A central data store used to aggregate and distribute stop data from different UK regions.
A process for different stakeholders to contribute, validate and share data.
A website to access the data (www.beta-naptan.dft.gov.uk).
A simple API that can be used to download the data (https://naptan.api.dft.gov.uk/swagger/index.html).

NaPTAN identifiers are designed to be used within the UK's Digital National Framework, a system of unique persistent references for shareable information resources of all types, managed by the Ordnance Survey.
NaPTAN depends on a related standard, the UK National Public Transport Gazetteer.
The NaPTAN Database:
The National Public Transport Access Node dataset has information on all UK public transport stops. Stops are submitted by passenger transport executives (PTEs) and local authorities to a central authority, which consolidates the stops and distributes them back to users. There are currently 380,000 active stop points.

NaPTAN is maintained by the Department for Transport.
NaPTAN Components:
The NaPTAN XML Schema:
NaPTAN data is described by a NaPTAN XML Schema. This can be used to describe NaPTAN data when exchanging it between systems as XML documents. It is versioned so that different generations of data can be managed. See http://www.dft.gov.uk/naptan/schema/2.4/NaPTAN.xsd

NaPTAN Conceptual Model:
The NPTG & NaPTAN data conform to a family of consistent, interlocking data models. The models are described in the NPTG & NaPTAN Schema Guide in UML notation.
NaPTAN Stops:
NaPTAN Stop Numbering:
NaPTAN identifiers are a systematic way of identifying all UK points of access to public transport (or "stop points").
Every UK rail station, bus and coach terminus, airport, ferry terminal, individual bus stop, tram stop, and taxi rank is allocated a unique NaPTAN Identifier.
NaPTAN Stops:
For large interchanges and termini, NaPTAN points identify the entrances from the public thoroughfare, with one identifier distinguished as the main entrance; platforms may also be individually identified. Every local authority has been allocated a unique prefix for its stop numbering, which ensures that stop numbers cannot be duplicated. In addition, there are national number prefixes: 900 for coach stops, 910 for railway stations, 920 for airports, 930 for ferry terminals and 940 for metro and tram stops. The national stop numbers are created centrally and not by local authorities; the prefix ranges allocated in each nation are listed below, and a short sketch after that list illustrates the scheme.
NaPTAN Stops:
In England stop details are provided by 87 local authorities and are prefixed with numbers ranging between 010 and 490.
In Wales stop details are provided by 22 local authorities and are prefixed with numbers ranging between 511 and 582.
In Scotland stop details are provided by 32 local authorities and are prefixed with numbers ranging between 601 and 690.
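As an illustration of the prefix scheme just listed (a hypothetical helper, not part of any official NaPTAN tooling; it assumes an identifier begins with its three-digit prefix, and the example identifiers are invented):

```python
def classify_prefix(stop_id: str) -> str:
    """Classify a NaPTAN identifier by its leading three-digit prefix,
    using the national prefixes and per-nation ranges quoted above."""
    prefix = int(stop_id[:3])
    national = {900: "coach stop", 910: "railway station", 920: "airport",
                930: "ferry terminal", 940: "metro/tram stop"}
    if prefix in national:
        return f"national: {national[prefix]}"
    if 10 <= prefix <= 490:
        return "local authority stop in England"
    if 511 <= prefix <= 582:
        return "local authority stop in Wales"
    if 601 <= prefix <= 690:
        return "local authority stop in Scotland"
    return "unknown prefix"

# Invented identifiers, for illustration only:
for stop_id in ("490000077E", "910GEXAMPLE", "611000123"):
    print(stop_id, "->", classify_prefix(stop_id))
```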
NaPTAN Stops:
NaPTAN Stop Descriptors:
NaPTAN stop points have a number of text descriptor elements associated with them: not just a name, but also additional labels and distinguishing identifiers that help users to recognise them. These elements can be combined in different ways to provide presentations of names useful for many different contexts, for example on maps, stop finders, timetables and mobile devices. Stop points may have a Common Name, Short Name, Landmark, Street, Asset code, etc.
NaPTAN Stops:
Stop Points may also have alternative names, for example for aliases for different national languages.
NaPTAN Stops:
Stop names may have a qualifier to distinguish them from other stops within the same group of stops, for example Bus Station (Stand 1) from Bus Station (Stand 2). The purpose of these descriptors is to provide successive levels of detail, i.e. Country - County - Locality - Street - Name - Identifier. All of this information should be included, but it is up to the user of the data to decide how much of it is relevant for the task in hand.
NaPTAN Stops:
Stop Locations:
Every NaPTAN point includes geospatial coordinates specified in Ordnance Survey National Grid format, and as WGS84 latitude and longitude pairs where these are provided by local authorities. This allows NaPTAN points to be projected onto maps and associated with other information layers, such as the Ordnance Survey's Integrated Transport Network.
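To show how such a stop record might be consumed, here is a minimal Python sketch. The XML fragment and element names are simplified illustrations of the kind of data described above, not a verbatim extract of the official schema:

```python
import xml.etree.ElementTree as ET

# A simplified, made-up stop record in the spirit of the NaPTAN schema.
SAMPLE = """
<StopPoint>
  <AtcoCode>490000077E</AtcoCode>
  <Descriptor>
    <CommonName>High Street</CommonName>
    <Landmark>Town Hall</Landmark>
  </Descriptor>
  <Location>
    <Easting>530500</Easting>
    <Northing>180500</Northing>
    <Longitude>-0.1195</Longitude>
    <Latitude>51.5105</Latitude>
  </Location>
</StopPoint>
"""

stop = ET.fromstring(SAMPLE)
print("ATCO code :", stop.findtext("AtcoCode"))
print("Name      :", stop.findtext("Descriptor/CommonName"))
print("Landmark  :", stop.findtext("Descriptor/Landmark"))
print("OS grid   :", stop.findtext("Location/Easting"), stop.findtext("Location/Northing"))
print("WGS84     :", stop.findtext("Location/Latitude"), stop.findtext("Location/Longitude"))
```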
National Public Transport Gazetteer:
The National Public Transport Gazetteer is closely associated with the NaPTAN dataset and contains details of every city, town, village and suburb in Great Britain (i.e., the UK not including Northern Ireland). This dataset is based on usage of names, rather than legal definitions, and so includes local informal names for places as well as their official names.
**1000-Word Philosophy**
1000-Word Philosophy:
1000-Word Philosophy is an online philosophy anthology that publishes introductory 1000-word (or less) essays on philosophical topics.
1000-Word Philosophy:
The project was created in 2014 by Andrew D. Chapman, a philosophy lecturer at the University of Colorado, Boulder. Since 2018, the blog's editor-in-chief has been Nathan Nobis, an associate professor of philosophy at Morehouse College. Many of the initial authors are graduates of the University of Colorado at Boulder's Ph.D. program in philosophy; now the contributors are from all over the globe. The essays include references or sources for further discussion of each essay's topic.
**Biotinidase deficiency**
Biotinidase deficiency:
Biotinidase deficiency is an autosomal recessive metabolic disorder in which biotin is not released from proteins in the diet during digestion or from normal protein turnover in the cell. This situation results in biotin deficiency.
Biotin is an important water-soluble nutrient that aids in the metabolism of fats, carbohydrates, and proteins. Biotin deficiency can result in behavioral disorders, lack of coordination, learning disabilities and seizures. Biotin supplementation can alleviate and sometimes totally stop such symptoms.
Signs and symptoms:
Signs and symptoms of biotinidase deficiency can appear several days after birth. These include seizures, hypotonia and muscle/limb weakness, ataxia, paresis, hearing loss, optic atrophy, skin rashes (including seborrheic dermatitis and psoriasis), and alopecia. If left untreated, the disorder can rapidly lead to coma and death.

Neonates with BTD may not exhibit any signs, and symptoms typically appear after the first few weeks or months of life. If left untreated, around 70% of infants with BTD will experience seizures (staring spells, jerking limb movements, stiffness, flickering eyelids), which often act as the first symptom of BTD. Infants with BTD may also have weak muscles and hypotonia; this may cause infants to appear abnormally "floppy" and affect feeding and motor skills. BTD may result in developmental delays, vision or hearing problems, eye infections, alopecia, and eczema. The urine of infants with BTD may contain lactic acid and ammonia. Other symptoms that infants may exhibit include ataxia, breathing issues, lethargy, hepatomegaly, splenomegaly, and speech problems. The condition may eventually result in coma and death.

Biotinidase deficiency can also appear later in life. This is referred to as "late-onset" biotinidase deficiency. The symptoms are similar, but perhaps milder, because if an individual survives the neonatal period they likely have some residual activity of biotin-related enzymes. Studies have noted individuals who were asymptomatic until adolescence or early adulthood. One study pointed out that untreated individuals may not show symptoms until age 21. Furthermore, in rare cases, even individuals with profound deficiencies of biotinidase can be asymptomatic.

Symptom severity is predictably correlated with the severity of the enzyme defect. Profound biotinidase deficiency refers to situations where enzyme activity is 10% or less. Individuals with partial biotinidase deficiency may have enzyme activity of 10-30%.

Functionally, there is no significant difference between dietary biotin deficiency and genetic loss of biotin-related enzyme activity. In both cases, supplementation with biotin can often restore normal metabolic function and proper catabolism of leucine and isoleucine.

The symptoms of biotinidase deficiency (and dietary deficiency of biotin) can be quite severe. A 2004 case study from Metametrix detailed the effects of biotin deficiency, including aggression, cognitive delay, and reduced immune function.
Genetics:
Mutations in the BTD gene cause biotinidase deficiency. Biotinidase is the enzyme made by the BTD gene, and many mutations that cause the enzyme to be nonfunctional or to be produced at extremely low levels have been identified. Biotin is a vitamin that is chemically bound to proteins. (Most vitamins are only loosely associated with proteins.) Without biotinidase activity, the vitamin biotin cannot be separated from foods and therefore cannot be used by the body. Another function of the biotinidase enzyme is to recycle biotin from enzymes that are important in metabolism (processing of substances in cells). When biotin is lacking, specific enzymes called carboxylases cannot process certain proteins, fats, or carbohydrates. Specifically, two essential branched-chain amino acids (leucine and isoleucine) are metabolized differently.
Individuals lacking functional biotinidase enzymes can still have normal carboxylase activity if they ingest adequate amounts of biotin. The standard treatment regimen calls for 5–10 mg of biotin per day.
Biotinidase deficiency is inherited in an autosomal recessive pattern, which means the defective gene is located on an autosome and two copies of the defective gene, one from each parent, must be inherited for a person to be affected by the disorder. The parents of a child with an autosomal recessive disorder are usually not affected by the disorder themselves, but are carriers of one copy of the defective gene. If both parents are carriers of biotinidase deficiency, there is a 25% chance that their child will be born with it, a 50% chance the child will be a carrier, and a 25% chance the child will be unaffected (see the sketch below).
The chromosomal locus is at 3p25. The BTD gene has 4 exons of lengths 79 bp, 265 bp, 150 bp and 1502 bp, respectively. At least 21 different mutations have been found to lead to biotinidase deficiency. The most common mutations in profound biotinidase deficiency (<10% normal enzyme activity) are: p.Cys33PhefsX36, p.Gln456His, p.Arg538Cys, p.Asp444His, and p.[Ala171Thr;Asp444His]. Almost all individuals with partial biotinidase deficiency (10-30% enzyme activity) have the mutation p.Asp444His in one allele of the BTD gene in combination with a second allele.
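To make the inheritance arithmetic above concrete, here is a minimal Python sketch that enumerates a Punnett square for two carrier parents. It is illustrative only: the allele labels "B" (functional) and "b" (defective) are hypothetical placeholders, not official BTD allele names.

```python
# Punnett-square sketch for an autosomal recessive trait (illustrative).
# "B" = functional allele, "b" = defective allele (hypothetical labels).
from collections import Counter
from itertools import product

# Each carrier (heterozygous) parent passes on either allele with equal chance.
parent1 = ["B", "b"]
parent2 = ["B", "b"]

# Count the genotypes over all equally likely allele combinations.
counts = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
total = sum(counts.values())

labels = {"BB": "unaffected", "Bb": "carrier", "bb": "affected"}
for genotype in ("BB", "Bb", "bb"):
    print(f"{genotype} ({labels[genotype]}): {counts[genotype] / total:.0%}")
# Output: BB (unaffected): 25%, Bb (carrier): 50%, bb (affected): 25%
```

The output reproduces the 25%/50%/25% split stated above.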
Pathophysiology:
Symptoms of the deficiency are caused by the inability to reuse biotin molecules that are needed for cell growth, production of fatty acids, and the metabolism of fats and amino acids. If left untreated, the symptoms can lead to problems such as coma or death. Unless treatment is administered on a regular basis, symptoms can return at any point during the lifespan.
Diagnosis:
Biotinidase deficiency can be found by genetic testing. This is often done at birth as part of newborn screening in several US states; results are obtained by testing a small amount of blood gathered through a heel prick of the infant. Because not all states require this test, it is often missed where such screening is not mandated. Biotinidase deficiency can also be found by sequencing the BTD gene, particularly in those with a family history or a known familial gene mutation.
Treatment:
Treatment is possible, but problems may arise unless it is continued daily. Currently, treatment consists of supplementation with 5–10 mg of oral biotin per day. Symptoms that have already appeared are managed with standard interventions, such as hearing aids for hearing loss.
Epidemiology:
Based on the results of worldwide screening for biotinidase deficiency in 1991, the incidence of the disorder is: one in 137,401 for profound biotinidase deficiency; one in 109,921 for partial biotinidase deficiency; and one in 61,067 for the combined incidence of profound and partial biotinidase deficiency. Carrier frequency in the general population is approximately one in 120.
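As a quick sanity check on these figures (my arithmetic, not from the source), the combined incidence should be the sum of the two separate rates:

```python
# Verify that 1/137,401 + 1/109,921 matches the reported combined incidence.
profound = 1 / 137_401
partial = 1 / 109_921
combined = profound + partial
print(f"combined incidence = 1 in {1 / combined:,.0f}")  # -> 1 in 61,067
```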
**Internal occipital protuberance**
Internal occipital protuberance:
Along the internal surface of the occipital bone, at the point of intersection of the four divisions of the cruciform eminence, is the internal occipital protuberance. Running transversely on either side is a groove for the transverse sinus.
**Generalized Helmholtz theorem**
Generalized Helmholtz theorem:
The generalized Helmholtz theorem is the multi-dimensional generalization of the Helmholtz theorem, which is valid only in one dimension. The generalized Helmholtz theorem reads as follows.
Generalized Helmholtz theorem:
Let $\mathbf{p} = (p_1, p_2, \ldots, p_s)$ and $\mathbf{q} = (q_1, q_2, \ldots, q_s)$ be the canonical coordinates of an $s$-dimensional Hamiltonian system, and let
$$H(\mathbf{p}, \mathbf{q}; V) = K(\mathbf{p}) + \varphi(\mathbf{q}; V)$$
be the Hamiltonian function, where
$$K = \sum_{i=1}^{s} \frac{p_i^2}{2m}$$
is the kinetic energy and $\varphi(\mathbf{q}; V)$ is the potential energy, which depends on a parameter $V$. Let the hyper-surfaces of constant energy in the $2s$-dimensional phase space of the system be metrically indecomposable, and let $\langle \cdot \rangle_t$ denote the time average. Define the quantities $E$, $T$, $P$, $S$ as follows:
$$E = K + \varphi,$$
$$T = \frac{2}{s} \langle K \rangle_t,$$
$$P = \left\langle -\frac{\partial \varphi}{\partial V} \right\rangle_t,$$
$$S(E, V) = \log \int_{H(\mathbf{p}, \mathbf{q}; V) \leq E} d^s p \, d^s q.$$
Then
$$dS = \frac{dE + P\,dV}{T}.$$
Remarks:
The thesis of this theorem of classical mechanics reads exactly as the heat theorem of thermodynamics. This fact shows that thermodynamic-like relations exist between certain mechanical quantities in multidimensional ergodic systems. This in turn allows one to define the "thermodynamic state" of a multi-dimensional ergodic mechanical system, without the requirement that the system be composed of a large number of degrees of freedom. In particular, the temperature T is given by twice the time average of the kinetic energy per degree of freedom, and the entropy S by the logarithm of the phase-space volume enclosed by the constant-energy surface (i.e. the so-called volume entropy).
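As a sketch of the remark in formulas (an equivalent rewriting of the thesis stated above, not an additional result from the source):

```latex
% Differential form of the thesis dS = (dE + P dV)/T: the volume
% entropy S(E,V) behaves exactly like thermodynamic entropy.
\[
  \frac{\partial S}{\partial E} = \frac{1}{T},
  \qquad
  \frac{\partial S}{\partial V} = \frac{P}{T}.
\]
```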
**Glory hole (petroleum production)**
Glory hole (petroleum production):
A glory hole in the context of the offshore petroleum industry is an excavation into the sea floor designed to protect the wellhead equipment installed at the surface of a petroleum well from icebergs or pack ice. An economically attractive alternative for exploiting offshore petroleum resources is a floating platform; however, ice can pose a serious hazard to this solution. While floating platforms can be built to withstand ice loading up to a design threshold, for the largest icebergs or the thickest pack ice the only sensible alternative is to move out of the way. Floating platforms can be disconnected from the wellheads in order to allow them to be moved away from threatening ice, but the wellhead equipment is fixed in place and hence vulnerable.
Glory hole (petroleum production):
The keel of an iceberg or pack ice can extend far below the surface of the water. If this keel extends deep enough to make contact with the sea floor, it will scour the sea floor as the ice moves with the current. To protect the wellhead equipment from possible scouring, a glory hole is excavated into the sea floor. This excavation must be deep enough to allow adequate clearance between the top of the wellhead equipment and the surrounding sea floor. The resulting glory hole can be either open or cased. A cased glory hole utilizes steel casing as a retaining wall while an open glory hole is simply an excavation.
Glory hole (petroleum production):
Due to the cost of excavating individual glory holes, each glory hole will typically contain several wellheads. Locating multiple wellheads within a single glory hole is made possible by the use of directional drilling.
Etymology:
The use of the term glory hole in this context is almost certainly taken from its historical usage in the mining industry, where it refers to excavations.
**Schweigger-Seidel sheath**
Schweigger-Seidel sheath:
Schweigger-Seidel sheath is a phagocytic sleeve that is part of a sheathed arteriole of the spleen, and is sometimes referred to as a splenic ellipsoid. It is a spindle-shaped thickening in the walls of the second part of the arterial branches forming the penicilli in the spleen. It is named after German physiologist Franz Schweigger-Seidel (1834-1871).
**Preputial gland**
Preputial gland:
Preputial glands are exocrine glands that are located in the folds of skin in front of the genitals of some mammals. They occur in several species, including mice, ferrets, rhinoceroses, and even-toed ungulates, and produce pheromones. The glands play a role in the urine-marking behavior of canids such as gray wolves and African wild dogs. The preputial glands of female animals are sometimes called clitoral glands.
Preputial gland:
The preputial glands of male musk deer produce strong-smelling deer musk which is of economic importance, as it is used in perfumes.
Human homologues:
There is debate about whether humans have functional homologues to preputial glands. Preputial glands were first noted by Edward Tyson and fully described in 1694 by William Cowper, who named them Tyson's glands after Tyson. They are described as modified sebaceous glands located around the corona and inner surface of the prepuce of the human penis, and are believed to be most frequently found in the balanopreputial sulcus. Their secretion may be one of the components of smegma.
Human homologues:
Some, including Satya Parkash, dispute their existence. While humans may not have true anatomical equivalents, the term may sometimes be used for tiny whitish yellow bumps occasionally found on the corona of the glans penis. The proper name for these structures is pearly penile papules (or hirsutoid papillomas). According to detractors, they are not glands, but mere thickenings of the skin and are not involved in the formation of smegma.