| prompt (string, 2.91k–132k characters) | answer (string, 4–245 characters) |
|---|---|
Relevant Documents:
Document 0:::
Following is a list of dams and reservoirs in Puerto Rico.
The below list is incomplete. The National Inventory of Dams, maintained by the U.S. Army Corps of Engineers, defines any "major dam" as being tall with a storage capacity of at least , or of any height with a storage capacity of .
Dams and reservoirs in Puerto Rico
Lago Caonillas, Utuado, Puerto Rico Electric Power Authority (PREPA)
Lago Carite, Guayama, PREPA
Carraízo Dam and Lago Loíza, Trujillo Alto, Puerto Rico Aqueducts and Sewers Authority (PRASA)
Lago Cerrillos, Ponce, United States Army Corps of Engineers (USACE)
Lago de Cidra, Cidra, PRASA
Coamo Dam, between Coamo and Santa Isabel, PREPA
Lago Dos Bocas, between Arecibo and Utuado municipalities, PREPA
Lago Guajataca, between San Sebastián, Quebradillas and Isabela municipalities, PREPA
Lago Guayabal, Juana Díaz, PREPA
Lago El Guineo (Río Toro Negro), Villalba, PREPA
Lago Garzas, Peñuelas, PREPA
Lago La Plata, Toa Alta, PRASA
Patillas Dam, Patillas, PREPA
Portugués Dam, Ponce, USACE
Río Blanco Project, Naguabo, PREPA
El Salto #1 and El Salto #2, Comerío
Lago Toa Vaca, Villalba, PRASA
Yauco Project, Yauco, PREPA
Gallery
See also
List of rivers in Puerto Rico
References
External links
USGS List of Reservoirs in Puerto Rico
Document 1:::
Magnetic 2D materials or magnetic van der Waals materials are two-dimensional materials that display ordered magnetic properties such as antiferromagnetism or ferromagnetism. After the discovery of graphene in 2004, the family of 2D materials grew rapidly, and reports of many related materials followed, covering nearly every class except magnetic materials. Since 2016, however, there have been numerous reports of 2D magnetic materials that can be exfoliated as easily as graphene.
Magnetism in few-layer van der Waals materials was first reported in 2017 (in Cr2Ge2Te6 and CrI3). One reason for this seemingly late discovery is that thermal fluctuations destroy magnetic order more easily in 2D magnets than in 3D bulk. It is also generally accepted in the community that low-dimensional materials have magnetic properties different from those of the bulk. The prospect of measuring this transition from 3D to 2D magnetism has been the driving force behind much of the recent work on van der Waals magnets, and the long-anticipated transition has since been observed in both antiferromagnets and ferromagnets: FePS3, Cr2Ge2Te6, CrI3, NiPS3, MnPS3, and Fe3GeTe2.
Although the field has only been around since 2016, it has become one of the most active areas in condensed matter physics and materials science and engineering. Several review articles have been written highlighting its promise and future directions.
Overview
Magnetic van der Waals materials are a new addition to the growing list of 2D materials. Their special feature is that they retain a magnetic ground state, either antiferromagnetic or ferromagnetic, when thinned down to a few sheets or even a single layer. Another, probably more important, feature is that they can easily be produced in few-layer or monolayer form by simple means such as Scotch-tape exfoliation, which is rather uncommon among other magnetic materials such as oxide magnets.
Interest in these
Document 2:::
Unified Video Decoder (UVD, previously called Universal Video Decoder) is the name given to AMD's dedicated video decoding ASIC. There are multiple versions implementing a multitude of video codecs, such as H.264 and VC-1.
UVD was introduced with the Radeon HD 2000 Series and is integrated into some of AMD's GPUs and APUs. UVD occupied a considerable amount of the die area at the time of its introduction; it is not to be confused with AMD's Video Coding Engine (VCE).
As of AMD Raven Ridge (released January 2018), UVD and VCE were succeeded by Video Core Next (VCN).
Overview
The UVD is based on an ATI Xilleon video processor, which is incorporated onto the same die as the GPU and is part of the ATI Avivo HD for hardware video decoding, along with the Advanced Video Processor (AVP). UVD, as stated by AMD, handles decoding of H.264/AVC, and VC-1 video codecs entirely in hardware.
The UVD technology is based on the Cadence Tensilica Xtensa processor, which was originally licensed by ATI Technologies Inc. in 2004.
UVD/UVD+
In early versions of UVD, video post-processing is passed to the pixel shaders and OpenCL kernels. MPEG-2 decoding is not performed within UVD, but in the shader processors. The decoder meets the performance and profile requirements of Blu-ray and HD DVD, decoding H.264 bitstreams up to a bitrate of 40 Mbit/s. It has context-adaptive binary arithmetic coding (CABAC) support for H.264/AVC.
Unlike video acceleration blocks in previous generation GPUs, which demanded considerable host-CPU involvement, UVD offloads the entire video-decoder process for VC-1 and H.264 except for video post-processing, which is offloaded to the shaders. MPEG-2 decode is also supported, but the bitstream/entropy decode is not performed for MPEG-2 video in hardware.
Previously, neither ATI Radeon R520 series' ATI Avivo nor NVidia Geforce 7 series' PureVideo assisted front-end bitstream/entropy decompression in VC-1 and H.264 - the host CPU performed this work. UVD han
Document 3:::
Sliplining is a technique for repairing leaks or restoring structural stability to an existing pipeline. It involves installing a smaller, "carrier pipe" into a larger "host pipe", grouting the annular space between the two pipes, and sealing the ends. Sliplining has been used since the 1940s.
The most common material used to slipline an existing pipe is high-density polyethylene (HDPE), but fiberglass-reinforced pipe (FRP) and PVC are also common. Sliplining can be used to stop infiltration and restore structural integrity to an existing pipe. The most common sizes are 8"-60", but sliplining can be performed at any size given appropriate access and a new pipe small or large enough to install.
Installation methods
There are two methods used to install a slipline: continuous and segmental.
Continuous sliplining uses a long continuous pipe, such as HDPE, fusible PVC, or welded steel pipe, whose segments are joined into continuous lengths prior to installation. The continuous carrier pipe is pulled through the existing host pipe starting at an insertion pit and continuing to a receiving pit. Either the insertion pit, the receiving pit, or both can be manholes or other existing access points if the size and material of the new carrier pipe allow it to be manoeuvred through the existing facilities.
Segmental sliplining is very similar to continuous sliplining. The difference is primarily based on the pipe material used as the new carrier pipe. When using any bell and spigot pipe such as FRP, PVC, HDPE or Spirally Welded Steel Pipe, the individual pieces of pipe are lowered into place, pushed together, and pushed along the existing pipe corridor.
Using either method the annular space between the two pipes must be grouted. In the case of sanitary sewer lines, the service laterals must be reconnected via excavation.
Advantages
Sliplining is generally a very cost-effective rehabilitation method. It is also very easy to install and requires tools and equipment widely available to any pi
Document 4:::
In organic chemistry, a moiety ( ) is a part of a molecule that is given a name because it is identified as a part of other molecules as well.
Typically, the term is used to describe the larger and characteristic parts of organic molecules, and it should not be used to describe or name smaller functional groups of atoms that chemically react in similar ways in most molecules that contain them. Occasionally, a moiety may contain smaller moieties and functional groups.
A moiety that acts as a branch extending from the backbone of a hydrocarbon molecule is called a substituent or side chain, which typically can be removed from the molecule and substituted with others.
The term is also used in pharmacology, where an active moiety is the part of a molecule responsible for the physiological or pharmacological action of a drug.
Active moiety
In pharmacology, an active moiety is the part of a molecule or ion—excluding appended inactive portions—that is responsible for the physiological or pharmacological action of a drug substance. Inactive appended portions of the drug substance may include either the alcohol or acid moiety of an ester, a salt (including a salt with hydrogen or coordination bonds), or other noncovalent derivative (such as a complex, chelate, or clathrate). The parent drug may itself be an inactive prodrug and only after the active moiety is released from the parent in free form does it become active.
See also
Moiety conservation
Moiety, each of the two kinship groups in some Indigenous societies
References
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a sag in geological terms?
A. A raised area of land
B. A depressed, persistent, low area
C. A type of mountain
D. A large body of water
Answer:
|
B. A depressed, persistent, low area
|
Relevant Documents:
Document 0:::
Madrid Río is an urban park in the Spanish capital Madrid, built along an urban stretch of the Manzanares River following the burial of the M-30 bypass road in this area. It is the result of a project led by the architect Ginés Garrido, who won the international ideas competition organised by the Madrid City Council in 2005 to redevelop the area.
The project started with the idea of recovering the banks of the Manzanares River for the use and enjoyment of the citizens. The section of the river that is now known as Madrid Río is the section that was boxed in by the M-30 bypass road, a road that isolated the river between the two directions of the highway as well as creating a barrier and fracture between the two sides of the city, the district of Arganzuela on the left bank, and the districts of Latina, Carabanchel and Usera on the right bank. The connection of the M-30 with the A-5 motorway, the road to Extremadura, separated the city in an impassable way from Casa de Campo, Madrid's largest park. The project involved the undergrounding of the M-30 in this area as well as that section of the A-5 running parallel to Casa de Campo.
There are seven dams that regulate the river as it passes through the city. They receive the waters of the Manzanares River after passing through the Santillana reservoir, in Manzanares el Real, and the El Pardo reservoir, in the municipality of Madrid, which is why they are numbered from 3 to 9. Their mechanisms and locks have been repaired and the dams have been used for the new system of crossings. Initially, the project for the renaturation of the Manzanares River as it passes through Madrid Río contemplated opening all the dams except the last one, to create the conditions that would allow the Madrid Río rowing school to train. In the end, contrary to what was first agreed and under pressure from local residents, it was decided to open the last one as well so that the river could flow freely.
The water level has been dropped as the natural flow of the river has been restored. Accessible wooden boards and fish ladders have been added to encourage the continuity of the underwater fauna along the river. There has been a noticeable improvement in avian biodiversity along the river with herons and kingfishers being regular visitors.
Madrid Río received the Veronica Rudge Green Prize in Urban Design from Harvard University's Graduate School of Design in 2015. The architects were Ginés Garrido (of Burgos & Garrido), Porras La Casta, Rubio & Álvarez-Sala, and West 8.
Notes
External links
Document 1:::
1B-LSD (N1-butyryl-lysergic acid diethylamide) is an acylated derivative of lysergic acid diethylamide (LSD), which has been sold as a designer drug. In tests on mice it was found to be an active psychedelic, though with only around 1/7 the potency of LSD itself.
Legal status
1B-LSD is illegal in Singapore. On June 24, 2019, Sweden's public health agency suggested classifying 1B-LSD as a hazardous substance.
Document 2:::
Digistar is the first computer graphics-based planetarium projection and content system. It was designed by Evans & Sutherland and released in 1983. The technology originally focused on accurate and high quality display of stars, including for the first time showing stars from points of view other than Earth's surface, travelling through the stars, and accurately showing celestial bodies from different times in the past and future. Beginning with the Digistar 3 the system now projects full-dome video.
Projector
Unlike modern full-dome systems, which use LCD, DLP, SXRD, or laser projection technology, the Digistar projection system was designed for projecting bright pinpoints of light representing stars. This was accomplished using a calligraphic display, a form of vector graphics, rather than raster graphics. The heart of the Digistar projector is a large cathode-ray tube (CRT). A phosphor plate is mounted atop the tube, and light is then dispersed by a large lens with a 160 degree field of view to cover the planetarium dome. The original lens bore the inscription: "August 1979 mfg. by Lincoln Optical Corp., L.A., CA for Evans and Sutherland Computer Corp., SLC, UT, Digital planetarium CRT projection lens, 43mm, f2.8, 160 degree field of view".
The coordinates of the stars and wire-frame models to be displayed by the projector were stored in computer RAM in a display list. The display would read each set of coordinates in turn and drive the CRT's electron beam directly to those coordinates. If the electron beam was enabled while being moved a line would be painted on the phosphor plate. Otherwise, the electron beam would be enabled once at its destination and a star would be painted. Once all coordinates in the display list had been processed, the display would repeat from the top of the display list.
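The display-list behaviour just described can be sketched as a simple loop. The following Python sketch is illustrative only: the entry format, the timing constants, and the names used are assumptions, not details of the actual Digistar hardware.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Entry:
    x: float          # deflection coordinates on the phosphor plate
    y: float
    draw_line: bool   # True: beam enabled while moving (vector); False: blink a star

BEAM_MOVE_US = 5.0    # assumed time to reposition the electron beam (microseconds)
DWELL_US = 2.0        # assumed time spent painting a single star

def refresh_once(display_list: List[Entry]) -> float:
    """Walk the display list once, returning the elapsed time in microseconds."""
    elapsed = 0.0
    for entry in display_list:
        # Drive the beam to (entry.x, entry.y); if draw_line is set the beam is
        # enabled during the move (painting a vector), otherwise it is enabled
        # only on arrival (painting a single star).
        elapsed += BEAM_MOVE_US + (0.0 if entry.draw_line else DWELL_US)
    return elapsed

def refresh_rate_hz(display_list: List[Entry]) -> float:
    """Shorter display lists are revisited more often, so each point is refreshed
    more frequently and appears brighter."""
    return 1e6 / refresh_once(display_list)

stars = [Entry(x=i * 0.01, y=0.5, draw_line=False) for i in range(1000)]
print(f"~{refresh_rate_hz(stars):.0f} Hz refresh for {len(stars)} entries")
```

Because every entry costs roughly the same beam time, halving the display list roughly doubles the refresh rate, which is the relationship the next paragraph relies on.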
Thus, the shorter the display list the more frequently the electron beam would refresh the charge on a given point on the phosphor plate, making the projection of t
Document 3:::
Cooling load is the rate at which sensible and latent heat must be removed from the space to maintain a constant space dry-bulb air temperature and humidity. Sensible heat into the space causes its air temperature to rise while latent heat is associated with the rise of the moisture content in the space. The building design, internal equipment, occupants, and outdoor weather conditions may affect the cooling load in a building using different heat transfer mechanisms. The SI units are watts.
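As a rough illustration of the sensible/latent split, the sketch below estimates the load carried by a conditioned air stream. The formulas (sensible load = mass flow × specific heat × temperature difference; latent load = mass flow × heat of vaporisation × humidity-ratio difference) and the property values are textbook approximations, and the function names are assumptions; this is not one of the ASHRAE methods discussed below.

```python
# Rough sensible/latent cooling load of an air stream (SI units, watts).
# Property values are standard approximations for moist air.
CP_AIR = 1006.0      # J/(kg*K), specific heat of dry air
H_FG = 2_501_000.0   # J/kg, latent heat of vaporisation of water

def cooling_load(mass_flow_kg_s: float,
                 dT_kelvin: float,
                 dW_kg_per_kg: float) -> tuple[float, float]:
    """Return (sensible, latent) load in watts removed from the space."""
    sensible = mass_flow_kg_s * CP_AIR * dT_kelvin    # from the dry-bulb temperature change
    latent = mass_flow_kg_s * H_FG * dW_kg_per_kg     # from the humidity-ratio change
    return sensible, latent

# Example: 0.5 kg/s of air cooled by 8 K while removing 2 g of moisture per kg of air.
q_s, q_l = cooling_load(0.5, 8.0, 0.002)
print(f"sensible ~{q_s:.0f} W, latent ~{q_l:.0f} W, total ~{q_s + q_l:.0f} W")
```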
Overview
The cooling load is calculated to select HVAC equipment that has the appropriate cooling capacity to remove heat from the zone. A zone is typically defined as an area with similar heat gains, similar temperature and humidity control requirements, or an enclosed space within a building whose temperature and humidity are monitored and controlled with a single sensor, e.g. a thermostat. Cooling load calculation methodologies take into account heat transfer by conduction, convection, and radiation. Methodologies include heat balance, radiant time series, cooling load temperature difference, transfer function, and sol-air temperature. Methods calculate the cooling load in either steady state or dynamic conditions and some can be more involved than others. These methodologies and others can be found in ASHRAE handbooks, ISO Standard 11855, European Standard (EN) 15243, and EN 15255. ASHRAE recommends the heat balance and radiant time series methods.
Differentiation from heat gains
The cooling load of a building should not be confused with its heat gains. Heat gains refer to the rate at which heat is transferred into or generated inside a building. Just like cooling loads, heat gains can be separated into sensible and latent heat gains that can occur through conduction, convection, and radiation. Thermophysical properties of walls, floors, ceilings, and windows, lighting power density (LPD), plug load density, occupant density, and equipment efficiency
Document 4:::
A Hagenhufendorf (German: also Bachhufendorf, Hagenhufensiedlung or Hägerhufensiedlung) is an elongated settlement, similar to a Reihendorf, laid out along a road running parallel to a stream, whereby only one side of the road has houses, whilst on the opposite side are the hides (Hufen), the handkerchief-shaped farmer's fields of medieval origin, about 20 to 40 morgens in area.
Name
This German term is probably derived from Hagenrecht ("hedging right"), i.e. the owner had a right to enclose the land he used. This went even further for the Hägerhufensiedlungen, where there was a Hägerjunker in charge and special courts dealing with enclosure issues, the Hägergerichte.
Description
The Hagenhufensiedlung was a form of planned settlement typical of the High Middle Ages that consisted of individually owned strips of land that were strung together in a line. The hides were as wide as the farmstead, but could be several hundred metres long.
The enclosed plots were used as vegetable gardens and for keeping small animals. The stream at the back of the farmhouses supplied the necessary water. From this type of settlement, long linear villages developed like Auhagen, Wiedensahl, Isernhagen, Kathrinhagen or Rodewald in Lower Saxony. Hagenhufensiedlungen may be found from the Taunus region as far as Western Pomerania. The Hägerhufensiedlung is restricted to a region in the area of the Weser Uplands, Leine Uplands and Lippe.
Origin
The Hagenhufendörfer arose from the planned settlement of forested areas, predominantly in the 13th century, with the aim of clearing and cultivating the land. These villages had a lokator with a double hide (Doppelhufe).
The Hägerhufensiedlungen go back to the 11th-century Eschershausen Treaty. Although they had no double hide and no lokator, they did have special enclosure rights (Hägerrecht).
Extent
Hagenhufendörfer are especially common in the Börde regions, on the North German Plain immediately north of the German Central Uplands. The be
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What was the main goal of the Madrid Río project initiated by Ginés Garrido?
A. To construct a new highway
B. To recover the banks of the Manzanares River for public use
C. To build luxury apartments along the river
D. To create a dam system for flood prevention
Answer:
|
B. To recover the banks of the Manzanares River for public use
|
Relevant Documents:
Document 0:::
In the Taroom district in the Dawson River valley of Queensland, Australia, a boggomoss (pl. boggomossi or boggomosses) is a mound spring. Boggomosses range in form from small muddy swamps to elevated peat bogs or swamps, up to 150 metres across, scattered among dry woodland communities, which form part of the Springsure Group of Great Artesian Basin springs. They are rich in invertebrates and form a vital chain of permanently moist oases in an otherwise dry environment.
The origin of the term boggomoss is not known, but is most likely a compound of the words bog and moss. "Boggomoss creek" in the Parish of Fernyside appears on very early maps.
Environment
The spring flow rate is usually in the range of 0.5 to 2.0 litres per second (8th magnitude spring), however some large boggomosses have a flow rate of up to 10.0 litres per second (7th magnitude spring).
A report commissioned by the Queensland Department of Environment defined four boggomoss vegetation types with distinct environmental relations:
Group 1 associated with sandy and relatively infertile soils.
Group 2 associated with fertile and heavy surface soils and relatively large mounds.
Group 3 associated with fertile and heavy surface soils, but with little or no mound development (probably young springs).
Group 4 are linear in shape and flood prone at the base of a gorge
Sources
Adclarkia dawsonensis (Boggomoss Snail, Dawson Valley Snail)
Document 1:::
A col in geomorphology is the lowest point on a mountain ridge between two peaks. It may also be called a gap or pass. Particularly rugged and forbidding cols in the terrain are usually referred to as notches. They are generally unsuitable as mountain passes, but are occasionally crossed by mule tracks or climbers' routes. Derived from the French col ("collar, neck"), from Latin collum, "neck", the term tends to be associated more with mountain than hill ranges. The distinction with other names for breaks in mountain ridges such as saddle, wind gap or notch is not sharply defined and may vary from place to place. Many double summits are separated by prominent cols.
The height of a summit above its highest col (called the key col) is effectively a measure of a mountain's topographic prominence.
Cols lie on the line of the watershed between two mountains, often on a prominent ridge or arête. For example, the highest col in Austria, the "Upper Glockner Col", lies between the Kleinglockner and Grossglockner mountains, giving the Kleinglockner a minimum prominence of 17 metres.
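A minimal sketch of that prominence arithmetic follows; the elevation figures are placeholders chosen only to reproduce the 17 m value quoted above, not the actual surveyed heights.

```python
def prominence(summit_elevation_m: float, key_col_elevation_m: float) -> float:
    """Topographic prominence: the height of a summit above its highest (key) col."""
    return summit_elevation_m - key_col_elevation_m

# Hypothetical elevations, chosen only so that the difference is 17 m.
print(prominence(3770.0, 3753.0))  # -> 17.0
```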
See also
Saddle (landform)
Document 2:::
Mammoplasia is the normal or spontaneous enlargement of human breasts. Mammoplasia occurs normally during puberty and pregnancy in women, as well as during certain periods of the menstrual cycle. When it occurs in males, it is called gynecomastia and is considered to be pathological. When it occurs in females and is extremely excessive, it is called macromastia (also known as gigantomastia or breast hypertrophy) and is similarly considered to be pathological. Mammoplasia may be due to breast engorgement, which is temporary enlargement of the breasts caused by the production and storage of breast milk in association with lactation and/or galactorrhea (excessive or inappropriate production of milk). Mastodynia (breast tenderness/pain) frequently co-occurs with mammoplasia.
During the luteal phase (latter half) of the menstrual cycle, due to increased mammary blood flow and/or premenstrual fluid retention caused by high circulating concentrations of estrogen and/or progesterone, the breasts temporarily increase in size, and this is experienced by women as fullness, heaviness, swollenness, and a tingling sensation.
Mammoplasia can be an effect or side effect of various drugs, including estrogens, antiandrogens such as spironolactone, cyproterone acetate, bicalutamide, and finasteride, growth hormone, and drugs that elevate prolactin levels such as D2 receptor antagonists like antipsychotics (e.g., risperidone), metoclopramide, and domperidone and certain antidepressants like selective serotonin reuptake inhibitors (SSRIs) and tricyclic antidepressants (TCAs). The risk appears to be less with serotonin-norepinephrine reuptake inhibitors (SNRIs) like venlafaxine. The "atypical" antidepressants mirtazapine and bupropion do not increase prolactin levels (bupropion may actually decrease prolactin levels), and hence there may be no risk with these agents. Other drugs that have been associated with mammoplasia include D-penicillamine, bucillamine, neothetazone, ciclosporin,
Document 3:::
Glycoprotein 130 (also known as gp130, IL6ST, IL6R-beta or CD130) is a transmembrane protein which is the founding member of the class of tall cytokine receptors. It forms one subunit of the type I cytokine receptor within the IL-6 receptor family. It is often referred to as the common gp130 subunit, and is important for signal transduction following cytokine engagement. As with other type I cytokine receptors, gp130 possesses a WSXWS amino acid motif that ensures correct protein folding and ligand binding. It interacts with Janus kinases to elicit an intracellular signal following receptor interaction with its ligand. Structurally, gp130 is composed of five fibronectin type-III domains and one immunoglobulin-like C2-type (immunoglobulin-like) domain in its extracellular portion.
Characteristics
The members of the IL-6 receptor family all form complexes with gp130 for signal transduction. For example, IL-6 binds to the IL-6 receptor. The complex of these two proteins then associates with gp130. This complex of three proteins then homodimerizes to form a hexameric complex which can produce downstream signals. There are many other proteins which associate with gp130, such as cardiotrophin 1 (CT-1), leukemia inhibitory factor (LIF), ciliary neurotrophic factor (CNTF), oncostatin M (OSM), and IL-11. There are also several other proteins which have structural similarity to gp130 and contain the WSXWS motif and preserved cysteine residues. Members of this group include LIF-R, OSM-R, and G-CSF-R.
Loss of gp130
gp130 is an important part of many different types of signaling complexes. Inactivation of gp130 is lethal to mice. Homozygous mice that are born show a number of defects, including impaired development of the ventricular myocardium. Haematopoietic effects included reduced numbers of stem cells in the spleen and liver.
Signal transduction
gp130 has no intrinsic tyrosine kinase activity. Instead, it is phosphorylated on tyrosine residues after complexing with other
Document 4:::
Peter Cameron (1847 – 1 December 1912 in New Mills, Derbyshire) was an English amateur entomologist who specialised in Hymenoptera.
An artist, Cameron worked in the dye industry and in calico printing. He described many new species; his collection, including type material, is now in the Natural History Museum. He suffered from poor health and lack of employment. Latterly, he lived in New Mills and was supported by scholarships from the Royal Society.
He loaned specimens to Jean-Jacques Kieffer, a teacher and Catholic priest in Bitche, Lorraine, who also named species after Cameron.
Some of Cameron's taxonomic work is not very well regarded. Upon his death Claude Morley wrote, "Peter Cameron is dead, as was announced by most of the halfpenny papers on December 4th. What can we say of his life? Nothing; for it concerns us in no way. What shall we say of his work? Much, for it is entirely ours, and will go down to posterity as probably the most prolific and chaotic output of any individual for many years past." Similarly, American entomologist Richard M. Bohart "wound up with the thankless task of sorting through Cameron's North American contributions to a small group of wasps known as the Odynerini. Of the hundred or so names Cameron proposed within the group, almost all, Bohart found, were invalid."
The Panamanian oak gall wasp Callirhytis cameroni described 2014 is named in his honor.
Works
A Monograph of the British Phytophagous Hymenoptera Ray Society (1882–1893)
Hymenoptera volumes of the Biologia Centrali-Americana, volumes 1–2 (1883–1900) and (1888–1900)
A complete list is given in external links below.
External links
Publications of Peter Cameron
Manuscript collection
BHL Hymenoptera Orientalis: or contributions to a knowledge of the Hymenoptera of the Oriental zoological region. Manchester :Literary and Philosophical Society,1889–1903.
Digital Version of Biologia Centrali-Americana
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the main argument presented by critics of the GAARlandia hypothesis regarding the colonization of the Greater Antilles?
A. They support the hypothesis based on geological evidence.
B. They believe oceanic dispersal is the best explanation for colonization.
C. They argue that multiple lineages colonized simultaneously.
D. They find ample support from studies of individual lineages.
Answer:
|
B. They believe oceanic dispersal is the best explanation for colonization.
|
Relevant Documents:
Document 0:::
Sonata was a 3D building design software application developed in the early 1980s and now regarded as the forerunner of today's building information modeling applications.
Sonata was commercially released in 1986, having been developed independently by Jonathan Ingram; it was sold to T2 Solutions (renamed from GMW Computers in 1987, and eventually bought by Alias|Wavefront) and marketed as a successor to GMW's RUCAPS. It ran on workstation computer hardware (by contrast, other 2D computer-aided design (CAD) systems could run on personal computers). The system was not expensive, according to Michael Phiri. Reiach Hall purchased "three Sonata workstations on Silicon Graphics machines, at a total cost of approximately £2000 each" [1990 prices]. Approximately 1,000 seats were sold between 1985 and 1992. However, as a BIM application, in addition to geometric modelling, it could model complete buildings, including complex parametrics, costs and staging of the construction process.
Archicad founder Gábor Bojár has acknowledged that Sonata "was more advanced in 1986 than Archicad at that time", adding that it "surpassed already the matured definition of 'BIM' specified only about one and a half decade later".
Many projects were designed and built using Sonata, including Peddle Thorp Architect's Rod Laver Arena in 1987, and Gatwick Airport North Terminal Domestic Facility by Taylor Woodrow. The US-based architect HKS used the software in 1992 to design a horse racing facility (Lone Star Park in Grand Prairie, Texas) and subsequently purchased the successor product, Reflex.
Target Australia Pty. Ltd. the Australian discount department store retailer bought two Sonata licences in 1992 to replace two RUCAPS workstations originally from Coles Supermarkets. The software was run on two Silicon Graphics IRIS Indigo workstations. Staff were trained to use the software including the parametric language. The simple but powerful parametrics enable productivity gains in doc
Document 1:::
EL Aquilae, also known as Nova Aquilae 1927 was a nova that appeared in 1927. It was discovered by Max Wolf on photographic plates taken at Heidelberg Observatory on 30 and 31 July 1927 when it had a photographic magnitude of 9. Subsequent searches of plates taken at the Harvard College Observatory showed the nova was fainter than magnitude 11.1 on 8 June 1927 and had flared to magnitude 6.4 on 15 June 1927. It declined from peak brightness at an average rate of 0.105 magnitudes per day, making it a fast nova, and ultimately dimmed to about magnitude 21. The 14.5 magnitude change from peak brightness to quiescence was unusually large for a nova.
All novae are binary stars, with a "donor" star orbiting a white dwarf so closely that matter is transferred from the donor to the white dwarf. Pagnotta & Schaefer argued that the donor star for the EL Aquilae system is a red giant, based on its position in an infrared color–color diagram. Tappert et al. suggest that Pagnotta & Schaefer misidentified EL Aquilae, and claim that EL Aquilae is probably an intermediate polar, a nova with a main sequence donor star, based on its eruption amplitude and color.
Notes
References
Document 2:::
The Barton decarboxylation is a radical reaction in which a carboxylic acid is converted to a thiohydroxamate ester (commonly referred to as a Barton ester). The product is then heated in the presence of a radical initiator and a suitable hydrogen donor to afford the decarboxylated product. This is an example of a reductive decarboxylation. Using this reaction it is possible to remove carboxylic acid moieties from alkyl groups and replace them with other functional groups. (See Scheme 1) This reaction is named after its developer, the British chemist and Nobel laureate Sir Derek Barton (1918–1998).
Mechanism
The reaction is initiated by homolytic cleavage of a radical initiator, in this case 2,2'-azobisisobutyronitrile (AIBN), upon heating. A hydrogen is then abstracted from the hydrogen source (tributylstannane in this case) to leave a tributylstannyl radical that attacks the sulfur atom of the thiohydroxamate ester. The N-O bond of the thiohydroxamate ester undergoes homolysis to form a carboxyl radical which then undergoes decarboxylation and carbon dioxide (CO2) is lost. The remaining alkyl radical (R·) then abstracts a hydrogen atom from remaining tributylstannane to form the reduced alkane (RH). (See Scheme 2) The tributyltin radical enters into another cycle of the reaction until all thiohydroxamate ester is consumed.
N-O bond cleavage of the Barton ester can also occur spontaneously upon heating or by irradiation with light to initiate the reaction. In this case a radical initiator is not required but a hydrogen-atom (H-atom) donor is still necessary to form the reduced alkane (RH). Alternative H-atom donors to tributylstannane include tertiary thiols and organosilanes. The relative expense, smell, and toxicity associated with tin, thiol or silane reagents can be avoided by carrying the reaction out using chloroform as both solvent and H-atom donor.
It is also possible to functionalize the alkyl radical by use of other radical trapping species (X-Y + R·
Document 3:::
Inspur Group is an information technology conglomerate in the People's Republic of China focusing on cloud computing, big data, key application hosts, servers, storage, artificial intelligence and ERP. On April 18, 2006, Inspur changed its English name from Langchao to Inspur. It is listed on the SSE, SZSE, and SEHK.
History
In 2005, Microsoft invested US$20 million in the company. Inspur announced several agreements with virtualization software developer VMware on research and development of cloud computing technologies and related products. In 2009, Inspur acquired the Xi'an-based research and development facilities of Qimonda AG for 30 million Chinese yuan (around US$4 million). The centre had been responsible for design and development of Qimonda's DRAM products.
In 2011, Shandong Inspur Software Co., Ltd., Inspur Electronic Information Co., Ltd. and Inspur (Shandong) Electronic Information Company, established a cloud computing joint venture, with each holding a third.
U.S. sanctions
In June 2020, the United States Department of Defense published a list of Chinese companies operating in the U.S. that have ties to the People's Liberation Army, which included Inspur. In November 2020, Donald Trump issued an executive order prohibiting any American company or individual from owning shares in companies that the U.S. Department of Defense has listed as having links to the People's Liberation Army.
In March 2023, the United States Department of Commerce added Inspur to the Bureau of Industry and Security's Entity List. In March 2025, several Inspur subsidiaries were also added to the Entity List, including its Aivres Systems subsidiary.
See also
Inspur Server Series
References
Document 4:::
NIST Special Publication 800-92, "Guide to Computer Security Log Management", establishes guidelines and recommendations for securing and managing sensitive log data. The publication was prepared by Karen Kent and Murugiah Souppaya of the National Institute of Standards and Technology and published under the SP 800 series, a repository of best practices for the InfoSec community. Log management is essential to ensuring that computer security records are stored in sufficient detail for an appropriate period of time.
Background
Effective security event logging and log analysis is a critical component of any comprehensive security program within an organization. It is used to monitor system, network and application activity. It serves as a deterrent for unauthorized activity, as well as provides a means to detect and analyze an attack in order to allow the organization to mitigate or prevent similar attacks in the future. However, security professionals have a significant challenge to determine what events must be logged, where and how long to retain those logs, and how to analyze the enormous amount of information that can be generated. A deficiency in any of these areas can cause an organization to miss signs of unauthorized activity, intrusion, and loss of data, which creates additional risk.
Scope
NIST SP 800-92 provides a high-level overview and guidance for the planning, development and implementation of an effective security log management strategy. The intended audience for this publication include the general information security (InfoSec) community involved in incident response, system/application/network administration and managers.
NIST SP 800-92 defines a log management infrastructure as having 4 major functions:
General - log parsing, event filtering and event aggregation;
Log Storage - rotation, archival, compression, reduction, normalization, integrity checking;
Log Analysis - event correlation, viewing and reporting;
Disposal - clearing;
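As a small illustration of the parsing, filtering and aggregation functions listed above, the sketch below processes syslog-style lines. The line format, field names and severity threshold are assumptions made for the example, not requirements of NIST SP 800-92.

```python
import re
from collections import Counter

# Assumed line format: "<timestamp> <host> <severity> <message>"
LINE_RE = re.compile(r"^(?P<ts>\S+)\s+(?P<host>\S+)\s+(?P<severity>\w+)\s+(?P<msg>.*)$")
SEVERITIES = {"DEBUG": 0, "INFO": 1, "WARN": 2, "ERROR": 3}

def parse(line: str) -> dict | None:
    """Log parsing: split one line into named fields, or None if it does not match."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

def interesting(event: dict, floor: str = "WARN") -> bool:
    """Event filtering: keep only events at or above the severity floor."""
    return SEVERITIES.get(event["severity"], 0) >= SEVERITIES[floor]

def aggregate(lines: list[str]) -> Counter:
    """Event aggregation: count filtered events per (host, severity) pair."""
    events = (e for e in map(parse, lines) if e and interesting(e))
    return Counter((e["host"], e["severity"]) for e in events)

sample = [
    "2024-01-01T00:00:01 web01 INFO user login ok",
    "2024-01-01T00:00:02 web01 ERROR auth failure for admin",
    "2024-01-01T00:00:03 db01 WARN replication lag 12s",
]
print(aggregate(sample))
```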
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What significant event occurred in 2006 regarding Inspur's branding?
A. Inspur changed its name from Langchao.
B. Inspur was acquired by Microsoft.
C. Inspur launched its first cloud computing product.
D. Inspur established a joint venture with VMware.
Answer:
|
A. Inspur changed its name from Langchao.
|
Relevant Documents:
Document 0:::
Controlled vocabularies provide a way to organize knowledge for subsequent retrieval. They are used in subject indexing schemes, subject headings, thesauri, taxonomies and other knowledge organization systems. Controlled vocabulary schemes mandate the use of predefined, preferred terms that have been preselected by the designers of the schemes, in contrast to natural language vocabularies, which have no such restriction.
In library and information science
In library and information science, controlled vocabulary is a carefully selected list of words and phrases, which are used to tag units of information (document or work) so that they may be more easily retrieved by a search. Controlled vocabularies solve the problems of homographs, synonyms and polysemes by a bijection between concepts and preferred terms. In short, controlled vocabularies reduce unwanted ambiguity inherent in normal human languages where the same concept can be given different names and ensure consistency.
For example, in the Library of Congress Subject Headings (a subject heading system that uses a controlled vocabulary), preferred terms—subject headings in this case—have to be chosen to handle choices between variant spellings of the same word (American versus British), choice among scientific and popular terms (cockroach versus Periplaneta americana), and choices between synonyms (automobile versus car), among other difficult issues.
Choices of preferred terms are based on the principles of user warrant (what terms users are likely to use), literary warrant (what terms are generally used in the literature and documents), and structural warrant (terms chosen by considering the structure, scope of the controlled vocabulary).
Controlled vocabularies also typically handle the problem of homographs with qualifiers. For example, the term pool has to be qualified to refer to either swimming pool or the game pool to ensure that each preferred term or heading refers to only one concept.
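A toy sketch of such a mapping, in which variant terms and qualified homographs resolve to a single preferred term; the specific entries and the function name are illustrative assumptions, not drawn from any real subject-heading system.

```python
# Variant or ambiguous entry terms mapped to preferred (controlled) terms.
# Homographs carry qualifiers, so each preferred term denotes exactly one concept.
PREFERRED = {
    "car": "automobile",
    "automobile": "automobile",
    "cockroach": "Periplaneta americana",
    "colour": "color",
    "pool (swimming)": "swimming pool",
    "pool (game)": "pool (billiards)",
}

def tag(term: str) -> str:
    """Resolve an indexer's or searcher's term to the controlled vocabulary,
    falling back to the raw term if it is not covered."""
    return PREFERRED.get(term.lower(), term)

print(tag("Car"))              # -> automobile
print(tag("pool (swimming)"))  # -> swimming pool
```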
Types u
Document 1:::
The HP Precision bus (also called HP-PB and HP-NIO)
is the data transfer bus of the proprietary Hewlett Packard architecture HP 3000 and later many variants of the HP 9000 series of UNIX systems. This bus has a 32-bit data path with an 8 MHz clock. It supports a maximum transfer rate of 23 MB/s in burst mode. That bus was also used to directly support the Programmable Serial Interface (PSI) cards, which offered multi-protocol support for networking, notably IBM Bisync and similar systems. The 920, 922 and 932 series supported up to three PSI cards, and up to five cards in the 948 and 958 series.
Two form factors/sizes of HP-PB expansion cards were sold: single and double.
32-bit data path width
32 MB/s maximum data rate
8 MHz maximum frequency
5 V signalling voltage
96-pin (32×3) female pin+socket card connector (Is this a DIN 41612 connector?)
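For orientation, a back-of-the-envelope figure (an assumption of one 32-bit word per clock, not a number from HP documentation):

$$4\ \text{bytes/transfer} \times 8 \times 10^{6}\ \text{transfers/s} = 32\ \text{MB/s},$$

so the 23 MB/s burst rate quoted above sits below this one-word-per-cycle ceiling.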
External links
HP 3000 manuals
HP/PA buses on Openpa.net
Notes
Document 2:::
In the hyperbolic plane, as in the Euclidean plane, each point can be uniquely identified by two real numbers. Several qualitatively different ways of coordinatizing the plane in hyperbolic geometry are used.
This article tries to give an overview of several coordinate systems in use for the two-dimensional hyperbolic plane.
In the descriptions below the constant Gaussian curvature of the plane is −1. Sinh, cosh and tanh are hyperbolic functions.
Polar coordinate system
The polar coordinate system is a two-dimensional coordinate system in which each point on a plane is determined by a distance from a reference point and an angle from a reference direction.
The reference point (analogous to the origin of a Cartesian system) is called the pole, and the ray from the pole in the reference direction is the polar axis. The distance from the pole is called the radial coordinate or radius, and the angle is called the angular coordinate, or polar angle.
From the hyperbolic law of cosines, the distance between two points given in polar coordinates is
$$\operatorname{dist}(\langle r_1,\theta_1\rangle,\langle r_2,\theta_2\rangle)=\operatorname{arcosh}\bigl(\cosh r_1\cosh r_2-\sinh r_1\sinh r_2\cos(\theta_2-\theta_1)\bigr).$$
Expanding this for infinitesimally separated points $\langle r,\theta\rangle$ and $\langle r+dr,\theta+d\theta\rangle$, we get the corresponding metric tensor:
$$ds^2 = dr^2 + \sinh^2 r \, d\theta^2 .$$
The straight lines are described by equations of the form
$$\tanh r \cos(\theta-\theta_0) = \tanh r_0 ,$$
where r0 and θ0 are the coordinates of the nearest point on the line to the pole.
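A quick numeric check of the distance formula, with arbitrarily chosen points; the function name is illustrative.

```python
import math

def hyperbolic_distance(r1: float, t1: float, r2: float, t2: float) -> float:
    """Distance between polar points (r1, t1) and (r2, t2) on the plane of
    curvature -1, via the hyperbolic law of cosines."""
    return math.acosh(math.cosh(r1) * math.cosh(r2)
                      - math.sinh(r1) * math.sinh(r2) * math.cos(t2 - t1))

# Two points on the same ray through the pole: the distance reduces to |r2 - r1|.
print(hyperbolic_distance(1.0, 0.0, 2.5, 0.0))        # -> 1.5
# Points on opposite rays: the distance is r1 + r2.
print(hyperbolic_distance(1.0, 0.0, 2.0, math.pi))    # -> 3.0
```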
Quadrant model system
The Poincaré half-plane model is closely related to a model of the hyperbolic plane in the quadrant Q = {(x,y): x > 0, y > 0}. For such a point the geometric mean and the hyperbolic angle produce a point (u,v) in the upper half-plane. The hyperbolic metric in the quadrant depends on the Poincaré half-plane metric. The motions of the Poincaré model carry over to the quadrant; in particular the left or right shifts of the real axis correspond to hyperbolic rotations of the quadrant. Due to the study of ratios in physics and economics where the quadrant is the universe of discourse, its points are said to be located by hyperbolic coordinates.
Cartesian-style coordinate systems
I
Document 3:::
A deadband or dead-band (also known as a dead zone or a neutral zone) is a band of input values in the domain of a transfer function in a control system or signal processing system where the output is zero (the output is 'dead': no action occurs). Deadband regions can be used in control systems such as servoamplifiers to prevent oscillation or repeated activation-deactivation cycles (called 'hunting' in proportional control systems). A form of deadband that occurs in mechanical systems and compound machines such as gear trains is backlash.
Voltage regulators
In some power substations there are regulators that keep the voltage within certain predetermined limits, but there is a range of voltage in-between during which no changes are made, such as between 112 and 118 volts (the deadband is 6 volts), or between 215 and 225 volts (deadband is 10 volts).
Backlash
Gear teeth with slop (backlash) exhibit deadband. There is no drive from the input to the output shaft in either direction while the teeth are not meshed. Leadscrews generally also have backlash and hence a deadband, which must be taken into account when making position adjustments, especially with CNC systems. If mechanical backlash eliminators are not available, the control can compensate for backlash by adding the deadband value to the position vector whenever direction is reversed.
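A minimal sketch of that compensation scheme, assuming the backlash width has already been measured; the class and method names and the sign convention are illustrative.

```python
class BacklashCompensator:
    """Add the lost motion back in whenever the commanded direction reverses."""

    def __init__(self, backlash: float):
        self.backlash = backlash        # measured backlash width, machine units
        self.last_direction = 0         # -1, 0 or +1
        self.offset = 0.0               # compensation currently applied

    def command(self, target: float, previous_target: float) -> float:
        direction = (target > previous_target) - (target < previous_target)
        if direction != 0 and direction != self.last_direction:
            # Direction reversal: shift the command by the backlash so the
            # gear teeth (or leadscrew nut) actually re-engage.
            self.offset += direction * self.backlash
            self.last_direction = direction
        return target + self.offset

comp = BacklashCompensator(backlash=0.05)
print(comp.command(10.0, 0.0))   # first move: offset becomes +0.05 -> 10.05
print(comp.command(5.0, 10.0))   # reversal: offset returns to 0.0 -> 5.0
```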
Hysteresis versus Deadband
Deadband is different from hysteresis. With hysteresis, there is no deadband and so the output is always in one direction or another. Devices with hysteresis have memory, in that previous system states dictate future states. Examples of devices with hysteresis are single-mode thermostats and smoke alarms. Deadband is the range in a process where no changes to output are made. Hysteresis is the difference in a variable depending on the direction of travel.
Thermostats
Simple (single mode) thermostats exhibit hysteresis. For example, the furnace in the basement of a house is adjusted automatically by
Document 4:::
A micronucleus is a small nucleus that forms whenever a chromosome or a fragment of a chromosome is not incorporated into one of the daughter nuclei during cell division. It usually is a sign of genotoxic events and chromosomal instability. Micronuclei are commonly seen in cancerous cells and may indicate genomic damage events that can increase the risk of developmental or degenerative diseases.
Micronuclei form during anaphase from lagging acentric chromosomes or chromatid fragments caused by incorrectly repaired or unrepaired DNA breaks or by nondisjunction of chromosomes. This improper segregation of chromosomes may result from hypomethylation of repeat sequences present in pericentromeric DNA, irregularities in kinetochore proteins or their assembly, a dysfunctional spindle apparatus, or flawed anaphase checkpoint genes. Micronuclei can contribute to genome instability by promoting a catastrophic mutational event called chromothripsis. Many micronucleus assays have been developed to test for the presence of these structures and determine their frequency in cells exposed to certain chemicals or subjected to stressful conditions.
The term micronucleus may also refer to the smaller nucleus in ciliate protozoans, such as the Paramecium. In mitosis it divides by fission, and in conjugation a pair of gamete micronuclei undergo reciprocal fusion to form a zygote nucleus, which gives rise to the macronuclei and micronuclei of the individuals of the next cycle of fission.
Discovery
Micronuclei in newly formed red blood cells in humans are known as Howell-Jolly bodies because these structures were first identified and described in erythrocytes by hematologists William Howell and Justin Jolly. These structures were later found to be associated with deficiencies in vitamins such as folate and B12. The relationship between formation of micronuclei and exposure to environmental factors was first reported in root tip cells exposed to ionizing radiation. Micronucleus inducti
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the maximum transfer rate of the HP Precision bus in burst mode?
A. 23 MB/s
B. 32 MB/s
C. 5 MB/s
D. 8 MB/s
Answer:
|
A. 23 MB/s
|
Relevant Documents:
Document 0:::
The National Board for Respiratory Care (NBRC, Inc.) is a non-profit organization formed in 1960 with the purpose of awarding and maintaining credentialing for Respiratory Therapists in the United States. The NBRC is the only organization in the United States which develops certification examinations for Registered Respiratory Therapists (RRTs) and Certified Respiratory Therapists (CRTs). The NBRC also offers additional specialization credentialing for respiratory practitioners that hold its certifications. The CRT and RRT designations are the standard credentials in respiratory care for licensure requirements in the portions of the United States that have enacted a Respiratory Care Act. States that license respiratory therapists sometimes require the practitioner to maintain their NBRC credentialing to maintain their license to practice. The NBRC is headquartered in Overland Park, Kansas. It has been in the Kansas City metropolitan area since 1974. The NBRC is located at 10801 Mastin St, Suite 300, Overland Park, KS 66210.
Certification levels
Entry level certification
Certification is the entry level and is separated as such by the NBRC. Certified Respiratory Therapists and Certified Pulmonary Function Technologists may require oversight and supervision by their advanced-practice counterparts.
Specialties
The NBRC has sub-specialties for the Respiratory Therapist designations. Both the CRT and the RRT are eligible to sit for additional credentialing but the CRT still requires the same supervision by the RRT in clinical applications.
Sleep Disorders Specialist — The sleep disorder specialist (RRT-SDS or CRT-SDS) is a credential recognized by the American Academy of Sleep Medicine for the role of Scoring in sleep studies.
Neonatal & Pediatric Specialist — The neonatal and pediatric specialist (RRT-NPS or CRT-NPS) is a respiratory therapist that may work in advanced care in pediatrics and neonatology centers and units.
Adult Critical Care Specialist — Th
Document 1:::
In architecture, the term xystum refers to a wall, promenade, alley, or open path. It can also refer to an atrium, ambulacrum, or parvis in front of a basilica. The term should not be confused with the ancient Greek architectural term xystus, meaning the covered portico of a gymnasium.
Sources
Document 2:::
An electronic control unit (ECU), also known as an electronic control module (ECM), is an embedded system in automotive electronics that controls one or more of the electrical systems or subsystems in a car or other motor vehicle.
Modern vehicles have many ECUs, and these can include some or all of the following: engine control module (ECM), powertrain control module (PCM), transmission control module (TCM), brake control module (BCM or EBCM), central control module (CCM), central timing module (CTM), general electronic module (GEM), body control module (BCM), and suspension control module (SCM). These ECUs together are sometimes referred to collectively as the car's computer though technically they are all separate computers, not a single one. Sometimes an assembly incorporates several individual control modules (a PCM often controls both the engine and the transmission).
Some modern motor vehicles have up to 150 ECUs. Embedded software in ECUs continues to increase in line count, complexity, and sophistication. Managing the increasing complexity and number of ECUs in a vehicle has become a key challenge for original equipment manufacturers (OEMs).
Types
Generic industry controller naming: the naming of controllers where the controller's name implies the system the controller is responsible for controlling.
Generic powertrain: the generic powertrain pertains to a vehicle's emission system and is the only regulated controller name.
Other controllers: all other controller names are decided upon by the individual OEM. The engine controller may have several different names, such as "DME", "Enhanced Powertrain", "PGM-FI" and many others.
Door control unit (DCU)
Engine control unit (ECU): not to be confused with electronic control unit, the generic term for all these devices
Electric power steering control unit (PSCU): generally this will be integrated into the EPS power pack.
Human–machine interface (HMI)
Powertrain control module (PCM): Somet
Document 3:::
An aridity index (AI) is a numerical indicator of the degree of dryness of the climate at a given location. The American Meteorological Society defined it in meteorology and climatology as "the degree to which a climate lacks effective, life-promoting moisture". Aridity is different from drought because aridity is permanent whereas drought is temporary. A number of aridity indices have been proposed (see below); these indicators serve to identify, locate or delimit regions that suffer from a deficit of available water, a condition that can severely affect the effective use of the land for such activities as agriculture or stock-farming.
Historical background and indices
Köppen
At the turn of the 20th century, Wladimir Köppen and Rudolf Geiger developed the concept of a climate classification where arid regions were defined as those places where the annual rainfall accumulation (in centimetres) is less than $r$, where:
$r = 2t$ if rainfall occurs mainly in the cold season,
$r = 2t + 14$ if rainfall is evenly distributed throughout the year, and
$r = 2t + 28$ if rainfall occurs mainly in the hot season,
and $t$ is the mean annual temperature in degrees Celsius.
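A minimal sketch of the threshold computation, assuming the three seasonal cases above and annual totals in centimetres; the function and argument names are illustrative.

```python
def koppen_arid_threshold(mean_annual_temp_c: float, rain_season: str) -> float:
    """Annual rainfall (cm) below which a location counts as arid,
    for the three seasonal-distribution cases listed above."""
    t = mean_annual_temp_c
    if rain_season == "cold":     # rainfall mainly in the cold season
        return 2 * t
    if rain_season == "even":     # rainfall evenly distributed through the year
        return 2 * t + 14
    if rain_season == "hot":      # rainfall mainly in the hot season
        return 2 * t + 28
    raise ValueError("rain_season must be 'cold', 'even' or 'hot'")

def is_arid(annual_rainfall_cm: float, mean_annual_temp_c: float, rain_season: str) -> bool:
    return annual_rainfall_cm < koppen_arid_threshold(mean_annual_temp_c, rain_season)

# Example: 30 cm of rain, 18 degrees C mean temperature, rain concentrated in summer.
print(is_arid(30.0, 18.0, "hot"))   # 30 < 2*18 + 28 = 64 -> True
```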
This was one of the first attempts at defining an aridity index, one that reflects the effects of the thermal regime and the amount and distribution of precipitation in determining the native vegetation possible in an area. It recognizes the significance of temperature in allowing colder places such as northern Canada to be seen as humid with the same level of precipitation as some tropical deserts because of lower levels of potential evapotranspiration in colder places. In the subtropics, the allowance for the distribution of rainfall between warm and cold seasons recognizes that winter rainfall is more effective for plant growth that can flourish in the winter and go dormant in the summer than the same amount of summer rainfall during a warm-to-hot season. Thus a place like Athens, Greece that gets most of its rainfall in winter can be conside
Document 4:::
The Sepsis Six is the name given to a bundle of medical therapies designed to reduce mortality in patients with sepsis.
Drawn from international guidelines that emerged from the Surviving Sepsis Campaign, the Sepsis Six was developed by The UK Sepsis Trust (Daniels, Nutbeam, Laver) in 2006 as a practical tool to help healthcare professionals deliver the basics of care rapidly and reliably.
In 2011, The UK Sepsis Trust published evidence that use of the Sepsis Six was associated with a 50% reduction in mortality, a decreased length of stay in hospital, and fewer intensive care days, though the authors urged caution against a causal interpretation of these findings.
The Sepsis Six consists of three diagnostic and three therapeutic steps – all to be delivered within one hour of the initial diagnosis of sepsis:
Titrate oxygen to a saturation target of 94%
Take blood cultures and consider source control
Administer empiric intravenous antibiotics
Measure serial serum lactates
Start intravenous fluid resuscitation
Commence accurate urine output measurement.
References
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does the term "xystum" refer to in architecture?
A. A covered portico of a gymnasium
B. An open path or promenade
C. A type of decorative wall
D. A specific style of basilica
Answer:
|
B. An open path or promenade
|
Relevant Documents:
Document 0:::
Vachellia rigidula, commonly known as blackbrush acacia or chaparro prieto, and also known as Acacia rigidula, is a species of shrub or small tree in the legume family, Fabaceae. Its native range stretches from Texas in the United States south to central Mexico. This perennial is not listed as being threatened. It reaches a height of . Blackbrush acacia grows on limestone hillsides and canyons.
Phytochemistry
A phytochemical study of V. rigidula by workers at the Texas A&M University Agricultural Research and Extension Center at Uvalde, TX, reported the presence of over forty alkaloids, including low amounts (up to around 15 ppm) of several phenolic amines that had previously been found by the same research group in the related species Senegalia berlandieri, but which otherwise are known only as products of laboratory synthesis. Compounds found in the highest concentrations (ranging from a few hundred to a few thousand ppm) were phenethylamine, tryptamine, tyramine, and β-methylphenethylamine (which can be misidentified as amphetamine). Other notable compounds reported were N,N-dimethyltryptamine, mescaline, and nicotine, although these were found in low concentrations (e.g. mescaline at 3-28 ppm).
The presence of such an unprecedented chemical range of psychoactive compounds, including ones not previously found in nature, in a single plant species has led to the suggestion that some of these findings may have resulted from cross-contamination or were possibly artifacts of the analytical technique.
Uses
Vachellia rigidula is used in weight loss dietary supplements because of the presence of chemical compounds claimed to stimulate beta-receptors to increase lipolysis and metabolic rate and decrease appetite.
Vachellia rigidula is also known as a large honey producer and early blooming plant for its native region.
Safety
In 2015, 52% of supplements labeled as containing Acacia rigidula were found to be adulterated with synthetic BMPEA, an amphetamine iso
Document 1:::
Foraminoplasty is a type of endoscopic surgery used to operate on the spine. It is considered a minimally invasive surgical technique, and its endoscopic laser is legally regulated. Although most patients have benefited from foraminoplasty, the National Institute for Health and Care Excellence does not fully support it because it has not completed a randomised controlled clinical trial.
External links
http://ijssurgery.com/10.14444/1026
https://www.nice.org.uk/guidance/ipg31/informationforpublic
Document 2:::
A voltage doubler is an electronic circuit which charges capacitors from the input voltage and switches these charges in such a way that, in the ideal case, exactly twice the voltage is produced at the output as at its input.
The simplest of these circuits is a form of rectifier which takes an AC voltage as input and outputs a doubled DC voltage. The switching elements are simple diodes and they are driven to switch state merely by the alternating voltage of the input. DC-to-DC voltage doublers cannot switch in this way and require a driving circuit to control the switching. They frequently also require a switching element that can be controlled directly, such as a transistor, rather than relying on the voltage across the switch as in the simple AC-to-DC case.
Voltage doublers are a variety of voltage multiplier circuits. Many, but not all, voltage doubler circuits can be viewed as a single stage of a higher order multiplier: cascading identical stages together achieves a greater voltage multiplication.
Voltage doubling rectifiers
Villard circuit
The Villard circuit, conceived by Paul Ulrich Villard, consists simply of a capacitor and a diode. While it has the great benefit of simplicity, its output has very poor ripple characteristics. Essentially, the circuit is a diode clamp circuit. The capacitor is charged on the negative half cycles to the peak AC voltage (Vpk). The output is the superposition of the input AC waveform and the steady DC of the capacitor. The effect of the circuit is to shift the DC value of the waveform. The negative peaks of the AC waveform are "clamped" to 0 V (actually −VF, the small forward bias voltage of the diode) by the diode, therefore the positive peaks of the output waveform are 2Vpk. The peak-to-peak ripple is an enormous 2Vpk and cannot be smoothed unless the circuit is effectively turned into one of the more sophisticated forms. This is the circuit (with diode reversed) used to supply the negative high voltage for the
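Under idealised assumptions (ideal diode, the capacitor already charged to the peak input voltage, no load), the clamping behaviour described above can be sketched numerically; the peak voltage and frequency below are illustrative values only, not taken from the text.

```python
# Minimal sketch of an ideal Villard clamp in steady state: the series
# capacitor holds the peak input voltage Vpk, so the output is the input
# waveform shifted up by Vpk.  Ideal diode and no load are assumed, and
# Vpk and the frequency are illustrative values only.
import numpy as np

Vpk = 10.0                              # assumed peak AC input voltage (V)
f = 50.0                                # assumed mains frequency (Hz)
t = np.linspace(0.0, 2.0 / f, 1000)     # two full cycles
v_in = Vpk * np.sin(2 * np.pi * f * t)

v_out = v_in + Vpk                      # DC level shift from the charged capacitor
print(f"output minimum: {v_out.min():.2f} V (clamped near 0 V)")
print(f"output maximum: {v_out.max():.2f} V (about 2*Vpk)")
print(f"peak-to-peak ripple: {np.ptp(v_out):.2f} V")
```

The printed values reproduce the behaviour described above: the negative peaks sit at roughly 0 V, the positive peaks at roughly twice the input peak, and the ripple is the full 2·Vpk.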
Document 3:::
Majorana 1 is a hardware device developed by Microsoft, with potential applications to quantum computing. It is the first device produced by Microsoft intended for use in quantum computing. It is an indium arsenide-aluminium hybrid device that exhibits superconductivity at low temperatures. Microsoft claims that it shows some signals of hosting boundary Majorana zero modes. The device can fit eight qubits. Majorana zero modes, if confirmed, could have potential applications in making topological qubits and, eventually, large-scale topological quantum computers.
In its February 2025 announcement, Microsoft claimed that the Majorana 1 represents progress in its long-running project to create a quantum computer based on topological qubits. The announcement has generated both excitement and skepticism within the scientific community, in the absence of definitive public evidence that the Majorana 1 device exhibits Majorana zero modes.
Background
Quantum computing research has historically faced challenges in achieving qubit stability and scalability. Traditional qubits, such as those based on superconducting circuits or trapped ions, are highly susceptible to noise and decoherence, which can introduce errors in computations. To overcome these limitations, researchers have been exploring various approaches to building more robust and fault-tolerant quantum computers. Topological qubits, first theorized in 1997 by Alexei Kitaev and Michael Freedman, offer a promising solution by encoding quantum information in a way that is inherently protected from environmental disturbances. This protection stems from the topological properties of the system, which are resistant to local perturbations. Microsoft's approach, based on Majorana fermions in semiconductor-superconductor heterostructures, is one of several efforts to realize topological quantum computing.
Controversy
Microsoft's quantum hardware has been the subject of controversy since its high-profile retracted articl
Document 4:::
In algebra and number theory, a distribution is a function on a system of finite sets into an abelian group which is analogous to an integral: it is thus the algebraic analogue of a distribution in the sense of generalised function.
The original examples of distributions occur, unnamed, as functions φ on Q/Z satisfying
Such distributions are called ordinary distributions. They also occur in p-adic integration theory in Iwasawa theory.
Let ... → Xn+1 → Xn → ... be a projective system of finite sets with surjections, indexed by the natural numbers, and let X be their projective limit. We give each Xn the discrete topology, so that X is compact. Let φ = (φn) be a family of functions on Xn taking values in an abelian group V and compatible with the projective system:
for some weight function w. The family φ is then a distribution on the projective system X.
A function f on X is "locally constant", or a "step function" if it factors through some Xn. We can define an integral of a step function against φ as
The definition extends to more general projective systems, such as those indexed by the positive integers ordered by divisibility. As an important special case consider the projective system Z/nZ indexed by positive integers ordered by divisibility. We identify this with the system (1/n)Z/Z with limit Q/Z.
For x in R we let ⟨x⟩ denote the fractional part of x normalised to 0 ≤ ⟨x⟩ < 1, and let {x} denote the fractional part normalised to 0 < {x} ≤ 1.
Examples
Hurwitz zeta function
The multiplication theorem for the Hurwitz zeta function
gives a distribution relation
Hence for given s, the map is a distribution on Q/Z.
Bernoulli distribution
Recall that the Bernoulli polynomials Bn are defined by
for n ≥ 0, where bk are the Bernoulli numbers, with generating function
They satisfy the distribution relation
Thus the map
defined by
is a distribution.
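As a concrete illustration, the distribution relation for the Bernoulli polynomials amounts to Raabe's multiplication theorem: summing B_k over the n preimages (x + a)/n, a = 0, ..., n - 1, and rescaling by n^(1-k) reproduces B_k(x). A short symbolic check of this identity for a few small values of k and n (a minimal sketch assuming SymPy is available; the chosen values are illustrative):

```python
# Symbolic check of the distribution relation satisfied by the Bernoulli
# polynomials (Raabe's multiplication theorem):
#     sum_{a=0}^{n-1} B_k((x + a) / n) = n**(1 - k) * B_k(x)
from sympy import bernoulli, symbols, expand, Rational

x = symbols('x')
for k in (1, 2, 3):
    for n in (2, 3, 4):
        lhs = sum(bernoulli(k, (x + a) / n) for a in range(n))
        rhs = Rational(1, n) ** (k - 1) * bernoulli(k, x)
        assert expand(lhs - rhs) == 0
print("distribution relation verified for k = 1..3 and n = 2..4")
```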
Cyclotomic units
The cyclotomic units satisfy distribution relations. Let a be an element of Q
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What was the primary cause of the Gerrards Cross Tunnel collapse as confirmed by a later investigation in 2022?
A. Heavy rainfall
B. Insufficient safety checks
C. Backfilling operation
D. Design flaws
Answer:
|
C. Backfilling operation
|
Relevant Documents:
Document 0:::
The free surface effect is a mechanism which can cause a watercraft to become unstable and capsize.
It refers to the tendency of liquids — and of unbound aggregates of small solid objects, like seeds, gravel, or crushed ore, whose behavior approximates that of liquids — to move in response to changes in the attitude of a craft's cargo holds, decks, or liquid tanks in reaction to operator-induced motions (or sea states caused by waves and wind acting upon the craft). When referring to the free surface effect, the condition of a tank that is not full is described as a "slack tank", while a full tank is "pressed up".
Stability and equilibrium
In a normally loaded vessel any rolling from perpendicular is countered by a righting moment generated from the increased volume of water displaced by the hull on the lowered side. This assumes the center of gravity of the vessel is relatively constant. If a moving mass inside the vessel moves in the direction of the roll, this counters the righting effect by moving the center of gravity towards the lowered side. The free surface effect can become a problem in a craft with large partially full bulk cargo compartments, fuel tanks, or water tanks (especially if they span the full breadth of the ship), or from accidental flooding, such as has occurred in several accidents involving roll-on/roll-off ferries.
If a compartment or tank is either empty or full, there is no change in the craft's center of mass as it rolls from side to side (in strong winds, heavy seas, or on sharp motions or turns). However, if the compartment is only partially full, the liquid in the compartment will respond to the vessel's heave, pitch, roll, surge, sway or yaw. For example, as a vessel rolls to port, liquid will displace to the port side of a compartment, and this will move the vessel's center of mass to port. This has the effect of slowing the vessel's return to vertical.
The momentum of large volumes of moving liquids cause significant dynamic for
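A standard naval-architecture approximation quantifies this loss of stability: a slack tank raises the effective centre of gravity by roughly rho_f * i_f / displacement, where i_f is the second moment of area of the liquid surface (l * b^3 / 12 for a rectangular tank) and rho_f is the liquid density. The sketch below applies this first-order formula with purely illustrative numbers; none of them come from the text.

```python
# Minimal sketch of the free surface correction for a single rectangular
# slack tank: the virtual rise of the centre of gravity is
#     GG' = rho_fluid * i_f / displacement,   with   i_f = l * b**3 / 12.
# Standard first-order approximation; all numbers are illustrative only.
def free_surface_correction(length_m, breadth_m, rho_fluid_t_per_m3, displacement_t):
    i_f = length_m * breadth_m ** 3 / 12.0            # second moment of the liquid surface (m^4)
    return rho_fluid_t_per_m3 * i_f / displacement_t  # virtual rise of G (m)

# Example: a 20 m x 15 m fuel tank (0.95 t/m^3) on a 12,000 t ship
loss_of_gm = free_surface_correction(20.0, 15.0, 0.95, 12_000.0)
print(f"reduction in effective metacentric height: {loss_of_gm:.3f} m")
```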
Document 1:::
The Truth About Crime is a British television documentary series inspired and presented by Nick Ross in association with the film-maker Roger Graef, executive producer Sam Collyns and series producer Alice Perman. It was first broadcast on BBC One in July and August 2009.
Region
The show focused on a single city, Oxford, which has demographics and crime rates broadly representative of urban Britain, and used a multitude of techniques to measure the real extent of crime and victimisation. In a two-week crime audit, camera teams followed police, fire services and paramedics and an on-line questionnaire, based closely on the national British Crime Survey, invited residents and businesspeople to describe their personal experiences of victimisation. A survey of 14- and 15-year-olds in Oxford schools was one of the biggest of its kind ever undertaken to discover how under-16s are affected by, and involved in, crime. Random children from some of the schools were then chosen to take part in the series being quizzed on crime by Ross on such topics as theft, drugs, alcohol and many other things.
Programme 1
The first programme (BBC One 9pm Tuesday 21 July 2009) looked at violent crime and revealed that almost all woundings involved victims who had been drinking or who had been assaulted by someone who was drunk, and took place close to a licensed premise.
Lack of commensuration with official statistics
Most of the seriously injured, as measured by those treated in hospital, did not report what had happened to the police. One senior physician told Nick Ross that police figures on crime are virtually "meaningless". The physician interviewed was Jonathan Shepherd, a professor of Oral and Maxillofacial Surgery at the Cardiff University Dental School. Domestic violence was shown to be a major problem, though mostly resulting only in minor injuries, and with women as perpetrators as well as men. Women were interviewed in a Women's shelter.
Crime mapping
Helmet cameras were show
Document 2:::
In mathematics, the spaces are function spaces defined using a natural generalization of the -norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue, although according to the Bourbaki group they were first introduced by Frigyes Riesz.
spaces form an important class of Banach spaces in functional analysis, and of topological vector spaces. Because of their key role in the mathematical analysis of measure and probability spaces, Lebesgue spaces are used also in the theoretical discussion of problems in physics, statistics, economics, finance, engineering, and other disciplines.
Preliminaries
The -norm in finite dimensions
The Euclidean length of a vector in the -dimensional real vector space is given by the Euclidean norm:
The Euclidean distance between two points and is the length of the straight line between the two points. In many situations, the Euclidean distance is appropriate for capturing the actual distances in a given space. In contrast, consider taxi drivers in a grid street plan who should measure distance not in terms of the length of the straight line to their destination, but in terms of the rectilinear distance, which takes into account that streets are either orthogonal or parallel to each other. The class of -norms generalizes these two examples and has an abundance of applications in many parts of mathematics, physics, and computer science.
For a real number the -norm or -norm of is defined by
The absolute value bars can be dropped when is a rational number with an even numerator in its reduced form, and is drawn from the set of real numbers, or one of its subsets.
The Euclidean norm from above falls into this class and is the -norm, and the -norm is the norm that corresponds to the rectilinear distance.
The -norm or maximum norm (or uniform norm) is the limit of the -norms for , given by:
For all the -norms and maximum norm satisfy the properties of a "length function" (or norm), that is:
only the zero vector has zero length,
the length of the vector is positive homogeneous with respect to multiplication by a scalar (positive homogeneity), and
the length of the sum of two vectors is no larger than the sum of lengths of the vectors (triangle inequality).
Abstractly speaking, this means that together with the -norm is a normed vector space. Moreover, it turns out that this space is complete, thus making it a Banach space.
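These finite-dimensional norms are easy to compute numerically; a minimal sketch using NumPy's built-in norm routine on an arbitrary example vector:

```python
# Minimal sketch: 1-, 2-, 3- and max-norms of a small vector with NumPy.
import numpy as np

x = np.array([3.0, -4.0, 1.0])
print(np.linalg.norm(x, ord=1))       # 1-norm:   |3| + |-4| + |1| = 8
print(np.linalg.norm(x, ord=2))       # 2-norm:   sqrt(9 + 16 + 1) ~ 5.10
print(np.linalg.norm(x, ord=3))       # 3-norm:   (27 + 64 + 1)**(1/3) ~ 4.51
print(np.linalg.norm(x, ord=np.inf))  # max-norm: 4
```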
Relations between -norms
The grid distance or rectilinear distance (sometimes called the "Manhattan distance") between two points is never shorter than the length of the line segment between them (the Euclidean or "as the crow flies" distance). Formally, this means that the Euclidean norm of any vector is bounded by its 1-norm:
This fact generalizes to -norms in that the -norm of any given vector does not grow with :
For the opposite direction, the following relation between the -norm and the -norm is known:
This inequality depends on the dimension of the underlying vector space and follows directly from the Cauchy–Schwarz inequality.
In general, for vectors in where
This is a consequence of Hölder's inequality.
When
In for the formula
defines an absolutely homogeneous function for however, the resulting function does not define a norm, because it is not subadditive. On the other hand, the formula
defines a subadditive function at the cost of losing absolute homogeneity. It does define an F-norm, though, which is homogeneous of degree
Hence, the function
defines a metric. The metric space is denoted by
Although the -unit ball around the origin in this metric is "concave", the topology defined on by the metric is the usual vector space topology of hence is a locally convex topological vector space. Beyond this qualitative statement, a quantitative way to measure the lack of convexity of is to denote by the smallest constant such that the scalar multiple of the -unit ball contains the convex hull of which is equal to The fact that for fixed we have
shows that the infinite-dimensional sequence space defined below, is no longer locally convex.
When
There is one norm and another function called the "norm" (with quotation marks).
The mathematical definition of the norm was established by Banach's Theory of Linear Operations. The space of sequences has a complete metric topology provided by the F-norm on the product metric:
The -normed space is studied in functional analysis, probability theory, and harmonic analysis.
Another function, called the zero "norm" by David Donoho (whose quotation marks warn that this function is not a proper norm), is the number of non-zero entries of the vector. Many authors abuse terminology by omitting the quotation marks. In other words, the zero "norm" of a vector equals the number of its non-zero components.
This is not a norm because it is not homogeneous. For example, scaling the vector by a positive constant does not change the "norm". Despite these defects as a mathematical norm, the non-zero counting "norm" has uses in scientific computing, information theory, and statistics–notably in compressed sensing in signal processing and computational harmonic analysis. Despite not being a norm, the associated metric, known as Hamming distance, is a valid distance, since homogeneity is not required for distances.
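A one-line illustration of this counting function and of its failure to scale (a minimal NumPy sketch; the vector is arbitrary):

```python
# Minimal sketch of the zero "norm": the number of non-zero entries of a
# vector.  Scaling the vector leaves the count unchanged, which is exactly
# the failure of homogeneity noted above.
import numpy as np

x = np.array([0.0, 2.5, 0.0, -1.0, 0.0])
print(np.count_nonzero(x))      # 2
print(np.count_nonzero(5 * x))  # still 2, so the zero "norm" is not a true norm
```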
spaces and sequence spaces
The -norm can be extended to vectors that have an infinite number of components (sequences), which yields the space This contains as special cases:
the space of sequences whose series are absolutely convergent,
the space of square-summable sequences, which is a Hilbert space, and
the space of bounded sequences.
The space of sequences has a natural vector space structure by applying scalar addition and multiplication. Explicitly, the vector sum and the scalar action for infinite sequences of real (or complex) numbers are given by:
Define the -norm:
Here, a complication arises, namely that the series on the right is not always convergent, so for example, the sequence made up of only ones, will have an infinite -norm for The space is then defined as the set of all infinite sequences of real (or complex) numbers such that the -norm is finite.
One can check that as increases, the set grows larger. For example, the sequence
is not in but it is in for as the series
diverges for (the harmonic series), but is convergent for
One also defines the -norm using the supremum:
and the corresponding space of all bounded sequences. It turns out that
if the right-hand side is finite, or the left-hand side is infinite. Thus, we will consider spaces for
The -norm thus defined on is indeed a norm, and together with this norm is a Banach space.
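A quick numerical illustration of the example above: partial sums of the p-th powers of the sequence (1, 1/2, 1/3, ...) keep growing for p = 1 but stay bounded for p = 2 (the cut-off N below is an arbitrary choice):

```python
# Minimal sketch: partial sums of sum(1/n**p) for the sequence (1, 1/2, 1/3, ...).
# For p = 1 the sums grow without bound (harmonic series); for p = 2 they
# approach pi**2 / 6.  N is arbitrary.
N = 100_000
for p in (1, 2):
    partial_sum = sum(1.0 / n ** p for n in range(1, N + 1))
    print(f"p = {p}: partial sum up to N = {N} is {partial_sum:.4f}")
# Expected output: roughly 12.09 for p = 1 and roughly 1.6449 for p = 2
```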
General ℓp-space
In complete analogy to the preceding definition one can define the space over a general index set (and ) as
where convergence on the right means that only countably many summands are nonzero (see also Unconditional convergence).
With the norm
the space becomes a Banach space.
In the case where is finite with elements, this construction yields with the -norm defined above.
If is countably infinite, this is exactly the sequence space defined above.
For uncountable sets this is a non-separable Banach space which can be seen as the locally convex direct limit of -sequence spaces.
For p = 2 the norm is induced by a canonical inner product (the Euclidean inner product), meaning that the squared norm of any vector equals the inner product of the vector with itself. This inner product can be expressed in terms of the norm by using the polarization identity.
On it can be defined by
Now consider the case Define
where for all
The index set can be turned into a measure space by giving it the discrete σ-algebra and the counting measure. Then the space is just a special case of the more general -space (defined below).
Lp spaces and Lebesgue integrals
An space may be defined as a space of measurable functions for which the -th power of the absolute value is Lebesgue integrable, where functions which agree almost everywhere are identified. More generally, let be a measure space and
When , consider the set of all measurable functions from to or whose absolute value raised to the -th power has a finite integral, or in symbols:
To define the set for recall that two functions and defined on are said to be , written , if the set is measurable and has measure zero.
Similarly, a measurable function (and its absolute value) is (or ) by a real number written , if the (necessarily) measurable set has measure zero.
The space is the set of all measurable functions that are bounded almost everywhere (by some real ) and is defined as the infimum of these bounds:
When then this is the same as the essential supremum of the absolute value of :
For example, if is a measurable function that is equal to almost everywhere then for every and thus for all
For every positive the value under of a measurable function and its absolute value are always the same (that is, for all ) and so a measurable function belongs to if and only if its absolute value does. Because of this, many formulas involving -norms are stated only for non-negative real-valued functions. Consider for example the identity which holds whenever is measurable, is real, and (here when ). The non-negativity requirement can be removed by substituting in for which gives
Note in particular that when is finite then the formula relates the -norm to the -norm.
Seminormed space of -th power integrable functions
Each set of functions forms a vector space when addition and scalar multiplication are defined pointwise.
That the sum of two -th power integrable functions and is again -th power integrable follows from
although it is also a consequence of Minkowski's inequality
which establishes that satisfies the triangle inequality for (the triangle inequality does not hold for ).
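Written out, the inequality in question is the standard statement for 1 ≤ p ≤ ∞:

```latex
% Minkowski's inequality (standard statement, 1 <= p <= infinity)
\|f + g\|_p \le \|f\|_p + \|g\|_p,
\qquad\text{i.e.}\qquad
\left( \int_S |f + g|^p \, d\mu \right)^{1/p}
\le
\left( \int_S |f|^p \, d\mu \right)^{1/p}
+ \left( \int_S |g|^p \, d\mu \right)^{1/p}.
```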
That is closed under scalar multiplication is due to being absolutely homogeneous, which means that for every scalar and every function
Absolute homogeneity, the triangle inequality, and non-negativity are the defining properties of a seminorm.
Thus is a seminorm and the set of -th power integrable functions together with the function defines a seminormed vector space. In general, the seminorm is not a norm because there might exist measurable functions that satisfy but are not equal to ( is a norm if and only if no such exists).
Zero sets of -seminorms
If is measurable and equals a.e. then for all positive
On the other hand, if is a measurable function for which there exists some such that then almost everywhere. When is finite then this follows from the case and the formula mentioned above.
Thus if is positive and is any measurable function, then if and only if almost everywhere. Since the right hand side ( a.e.) does not mention it follows that all have the same zero set (it does not depend on ). So denote this common set by
This set is a vector subspace of for every positive
Quotient vector space
Like every seminorm, the seminorm induces a norm (defined shortly) on the canonical quotient vector space of by its vector subspace
This normed quotient space is called and it is the subject of this article. We begin by defining the quotient vector space.
Given any the coset consists of all measurable functions that are equal to almost everywhere.
The set of all cosets, typically denoted by
forms a vector space with origin when vector addition and scalar multiplication are defined by and
This particular quotient vector space will be denoted by
Two cosets are equal if and only if (or equivalently, ), which happens if and only if almost everywhere; if this is the case then and are identified in the quotient space. Hence, strictly speaking consists of equivalence classes of functions.
The -norm on the quotient vector space
Given any the value of the seminorm on the coset is constant and equal to denote this unique value by so that:
This assignment defines a map, which will also be denoted by on the quotient vector space
This map is a norm on called the .
The value of a coset is independent of the particular function that was chosen to represent the coset, meaning that if is any coset then for every (since for every ).
The Lebesgue space
The normed vector space is called or the of -th power integrable functions and it is a Banach space for every (meaning that it is a complete metric space, a result that is sometimes called the Riesz–Fischer theorem).
When the underlying measure space is understood then is often abbreviated or even just
Depending on the author, the subscript notation might denote either or
If the seminorm on happens to be a norm (which happens if and only if ) then the normed space will be linearly isometrically isomorphic to the normed quotient space via the canonical map (since ); in other words, they will be, up to a linear isometry, the same normed space and so they may both be called " space".
The above definitions generalize to Bochner spaces.
In general, this process cannot be reversed: there is no consistent way to define a "canonical" representative of each coset of in For however, there is a theory of lifts enabling such recovery.
Special cases
For the spaces are a special case of spaces; when are the natural numbers and is the counting measure. More generally, if one considers any set with the counting measure, the resulting space is denoted For example, is the space of all sequences indexed by the integers, and when defining the -norm on such a space, one sums over all the integers. The space where is the set with elements, is with its -norm as defined above.
Similar to spaces, is the only Hilbert space among spaces. In the complex case, the inner product on is defined by
Functions in are sometimes called square-integrable functions, quadratically integrable functions or square-summable functions, but sometimes these terms are reserved for functions that are square-integrable in some other sense, such as in the sense of a Riemann integral.
As any Hilbert space, every space is linearly isometric to a suitable where the cardinality of the set is the cardinality of an arbitrary basis for this particular
If we use complex-valued functions, the space is a commutative C*-algebra with pointwise multiplication and conjugation. For many measure spaces, including all sigma-finite ones, it is in fact a commutative von Neumann algebra. An element of defines a bounded operator on any space by multiplication.
When
If then can be defined as above, that is:
In this case, however, the -norm does not satisfy the triangle inequality and defines only a quasi-norm. The inequality valid for implies that
and so the function
is a metric on The resulting metric space is complete.
In this setting satisfies a reverse Minkowski inequality, that is for
This result may be used to prove Clarkson's inequalities, which are in turn used to establish the uniform convexity of the spaces for .
The space for is an F-space: it admits a complete translation-invariant metric with respect to which the vector space operations are continuous. It is the prototypical example of an F-space that, for most reasonable measure spaces, is not locally convex: in or every open convex set containing the function is unbounded for the -quasi-norm; therefore, the vector does not possess a fundamental system of convex neighborhoods. Specifically, this is true if the measure space contains an infinite family of disjoint measurable sets of finite positive measure.
The only nonempty convex open set in is the entire space. Consequently, there are no nonzero continuous linear functionals on the continuous dual space is the zero space. In the case of the counting measure on the natural numbers (i.e. ), the bounded linear functionals on are exactly those that are bounded on , i.e., those given by sequences in Although does contain non-trivial convex open sets, it fails to have enough of them to give a base for the topology.
Having no linear functionals is highly undesirable for the purposes of doing analysis. In case of the Lebesgue measure on rather than work with for it is common to work with the Hardy space whenever possible, as this has quite a few linear functionals: enough to distinguish points from one another. However, the Hahn–Banach theorem still fails in for .
Properties
Hölder's inequality
Suppose satisfy . If and then and
This inequality, called Hölder's inequality, is in some sense optimal since if and is a measurable function such that
where the supremum is taken over the closed unit ball of then and
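In its most commonly quoted form, with conjugate exponents satisfying 1/p + 1/q = 1, the inequality reads:

```latex
% Hoelder's inequality with conjugate exponents 1/p + 1/q = 1 (standard statement)
\|fg\|_1 \le \|f\|_p \, \|g\|_q,
\qquad\text{i.e.}\qquad
\int_S |fg| \, d\mu
\le
\left( \int_S |f|^p \, d\mu \right)^{1/p}
\left( \int_S |g|^q \, d\mu \right)^{1/q}.
```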
Generalized Minkowski inequality
Minkowski inequality, which states that satisfies the triangle inequality, can be generalized:
If the measurable function is non-negative (where and are measure spaces) then for all
Atomic decomposition
If then every non-negative has an , meaning that there exist a sequence of non-negative real numbers and a sequence of non-negative functions called , whose supports are pairwise disjoint sets of measure such that
and for every integer
and
and where moreover, the sequence of functions depends only on (it is independent of ).
These inequalities guarantee that for all integers while the supports of being pairwise disjoint implies
An atomic decomposition can be explicitly given by first defining for every integer
and then letting
where denotes the measure of the set and denotes the indicator function of the set
The sequence is decreasing and converges to as Consequently, if then and so that is identically equal to (in particular, the division by causes no issues).
The complementary cumulative distribution function of that was used to define the also appears in the definition of the weak -norm (given below) and can be used to express the -norm (for ) of as the integral
where the integration is with respect to the usual Lebesgue measure on
Dual spaces
The dual space of for has a natural isomorphism with where is such that . This isomorphism associates with the functional defined by
for every
is a well defined continuous linear mapping which is an isometry by the extremal case of Hölder's inequality. If is a -finite measure space one can use the Radon–Nikodym theorem to show that any can be expressed this way, i.e., is an isometric isomorphism of Banach spaces. Hence, it is usual to say simply that is the continuous dual space of
For the space is reflexive. Let be as above and let be the corresponding linear isometry. Consider the map from to obtained by composing with the transpose (or adjoint) of the inverse of
This map coincides with the canonical embedding of into its bidual. Moreover, the map is onto, as composition of two onto isometries, and this proves reflexivity.
If the measure on is sigma-finite, then the dual of is isometrically isomorphic to (more precisely, the map corresponding to is an isometry from onto
The dual of is subtler. Elements of can be identified with bounded signed finitely additive measures on that are absolutely continuous with respect to See ba space for more details. If we assume the axiom of choice, this space is much bigger than except in some trivial cases. However, Saharon Shelah proved that there are relatively consistent extensions of Zermelo–Fraenkel set theory (ZF + DC + "Every subset of the real numbers has the Baire property") in which the dual of is
Embeddings
Colloquially, if then contains functions that are more locally singular, while elements of can be more spread out. Consider the Lebesgue measure on the half line A continuous function in might blow up near but must decay sufficiently fast toward infinity. On the other hand, continuous functions in need not decay at all but no blow-up is allowed. More formally:
If : if and only if does not contain sets of finite but arbitrarily large measure (e.g. any finite measure).
If : if and only if does not contain sets of non-zero but arbitrarily small measure (e.g. the counting measure).
Neither condition holds for the Lebesgue measure on the real line while both conditions holds for the counting measure on any finite set. As a consequence of the closed graph theorem, the embedding is continuous, i.e., the identity operator is a bounded linear map from to in the first case and to in the second. Indeed, if the domain has finite measure, one can make the following explicit calculation using Hölder's inequality
leading to
The constant appearing in the above inequality is optimal, in the sense that the operator norm of the identity is precisely
the case of equality being achieved exactly when -almost-everywhere.
Dense subspaces
Let and be a measure space and consider an integrable simple function on given by
where are scalars, has finite measure and is the indicator function of the set for By construction of the integral, the vector space of integrable simple functions is dense in
More can be said when is a normal topological space and its Borel –algebra.
Suppose is an open set with Then for every Borel set contained in there exist a closed set and an open set such that
for every . Subsequently, there exists a Urysohn function on that is on and on with
If can be covered by an increasing sequence of open sets that have finite measure, then the space of –integrable continuous functions is dense in More precisely, one can use bounded continuous functions that vanish outside one of the open sets
This applies in particular when and when is the Lebesgue measure. For example, the space of continuous and compactly supported functions as well as the space of integrable step functions are dense in .
Closed subspaces
If is any positive real number, is a probability measure on a measurable space (so that ), and is a vector subspace, then is a closed subspace of if and only if is finite-dimensional ( was chosen independent of ).
In this theorem, which is due to Alexander Grothendieck, it is crucial that the vector space be a subset of since it is possible to construct an infinite-dimensional closed vector subspace of (which is even a subset of ), where is Lebesgue measure on the unit circle and is the probability measure that results from dividing it by its mass
Applications
Statistics
In statistics, measures of central tendency and statistical dispersion, such as the mean, median, and standard deviation, can be defined in terms of metrics, and measures of central tendency can be characterized as solutions to variational problems.
In penalized regression, "L1 penalty" and "L2 penalty" refer to penalizing either the norm of a solution's vector of parameter values (i.e. the sum of its absolute values), or its squared norm (the square of its Euclidean length). Techniques which use an L1 penalty, like LASSO, encourage sparse solutions (where many of the parameters are zero). Elastic net regularization uses a penalty term that is a combination of the norm and the squared norm of the parameter vector.
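As a small illustration of the two penalty terms on the same coefficient vector (both the vector and the regularisation weight below are arbitrary examples):

```python
# Minimal sketch contrasting the L1 and squared-L2 penalty terms.
# LASSO-style objectives add lam * l1_penalty; ridge-style objectives add
# lam * l2_penalty_sq.  Vector and weight are arbitrary examples.
import numpy as np

beta = np.array([0.0, 3.0, -0.5, 0.0, 1.2])   # illustrative coefficients
lam = 0.1                                      # illustrative penalty weight

l1_penalty = np.sum(np.abs(beta))    # sum of absolute values = 4.7
l2_penalty_sq = np.sum(beta ** 2)    # squared Euclidean length = 10.69
print(f"L1 penalty term: {lam * l1_penalty:.3f}")     # 0.470
print(f"L2 penalty term: {lam * l2_penalty_sq:.3f}")  # 1.069
```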
Hausdorff–Young inequality
The Fourier transform for the real line (or, for periodic functions, see Fourier series), maps to (or to ) respectively, where and This is a consequence of the Riesz–Thorin interpolation theorem, and is made precise with the Hausdorff–Young inequality.
By contrast, if the Fourier transform does not map into
Hilbert spaces
Hilbert spaces are central to many applications, from quantum mechanics to stochastic calculus. The spaces and are both Hilbert spaces. In fact, by choosing a Hilbert basis i.e., a maximal orthonormal subset of or any Hilbert space, one sees that every Hilbert space is isometrically isomorphic to (same as above), i.e., a Hilbert space of type
Generalizations and extensions
Weak
Let be a measure space, and a measurable function with real or complex values on The distribution function of is defined for by
If is in for some with then by Markov's inequality,
A function is said to be in the space weak , or if there is a constant such that, for all
The best constant for this inequality is the -norm of and is denoted by
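In the usual notation, the distribution function, Markov's inequality, and the weak-Lp condition just described read:

```latex
% Distribution function, Markov's inequality, and the weak-L^p condition
% (standard statements)
\lambda_f(t) = \mu\{x : |f(x)| > t\},
\qquad
\lambda_f(t) \le \frac{\|f\|_p^p}{t^p} \quad (f \in L^p),
\qquad
\lambda_f(t) \le \frac{C^p}{t^p} \quad (f \in L^{p,w}).
```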
The weak coincide with the Lorentz spaces so this notation is also used to denote them.
The -norm is not a true norm, since the triangle inequality fails to hold. Nevertheless, for in
and in particular
In fact, one has
and raising to power and taking the supremum in one has
Under the convention that two functions are equal if they are equal almost everywhere, the spaces are complete.
For any the expression
is comparable to the -norm. Further in the case this expression defines a norm if Hence for the weak spaces are Banach spaces.
A major result that uses the -spaces is the Marcinkiewicz interpolation theorem, which has broad applications to harmonic analysis and the study of singular integrals.
Weighted spaces
As before, consider a measure space Let be a measurable function. The -weighted space is defined as where means the measure defined by
or, in terms of the Radon–Nikodym derivative, the norm for is explicitly
As -spaces, the weighted spaces have nothing special, since is equal to But they are the natural framework for several results in harmonic analysis; they appear for example in the Muckenhoupt theorem: for the classical Hilbert transform is defined on where denotes the unit circle and the Lebesgue measure; the (nonlinear) Hardy–Littlewood maximal operator is bounded on Muckenhoupt's theorem describes weights such that the Hilbert transform remains bounded on and the maximal operator on
spaces on manifolds
One may also define spaces on a manifold, called the intrinsic spaces of the manifold, using densities.
Vector-valued spaces
Given a measure space and a locally convex space (here assumed to be complete), it is possible to define spaces of -integrable -valued functions on in a number of ways. One way is to define the spaces of Bochner integrable and Pettis integrable functions, and then endow them with locally convex TVS-topologies that are (each in their own way) a natural generalization of the usual topology. Another way involves topological tensor products of with Element of the vector space are finite sums of simple tensors where each simple tensor may be identified with the function that sends This tensor product is then endowed with a locally convex topology that turns it into a topological tensor product, the most common of which are the projective tensor product, denoted by and the injective tensor product, denoted by In general, neither of these space are complete so their completions are constructed, which are respectively denoted by and (this is analogous to how the space of scalar-valued simple functions on when seminormed by any is not complete so a completion is constructed which, after being quotiented by is isometrically isomorphic to the Banach space ). Alexander Grothendieck showed that when is a nuclear space (a concept he introduced), then these two constructions are, respectively, canonically TVS-isomorphic with the spaces of Bochner and Pettis integral functions mentioned earlier; in short, they are indistinguishable.
space of measurable functions
The vector space of (equivalence classes of) measurable functions on is denoted . By definition, it contains all the and is equipped with the topology of convergence in measure. When is a probability measure (i.e., ), this mode of convergence is named convergence in probability. The space is always a topological abelian group but is only a topological vector space if This is because scalar multiplication is continuous if and only if If is -finite then the weaker topology of local convergence in measure is an F-space, i.e. a completely metrizable topological vector space. Moreover, this topology is isometric to global convergence in measure for a suitable choice of probability measure
The description is easier when is finite. If is a finite measure on the function admits for the convergence in measure the following fundamental system of neighborhoods
The topology can be defined by any metric of the form
where is bounded continuous concave and non-decreasing on with and when (for example, Such a metric is called Lévy-metric for Under this metric the space is complete. However, as mentioned above, scalar multiplication is continuous with respect to this metric only if . To see this, consider the Lebesgue measurable function defined by . Then clearly . The space is in general not locally bounded, and not locally convex.
For the infinite Lebesgue measure on the definition of the fundamental system of neighborhoods could be modified as follows
The resulting space , with the topology of local convergence in measure, is isomorphic to the space for any positive –integrable density
See also
Notes
References
External links
Proof that Lp spaces are complete
Document 3:::
Street reclaiming is the process of converting, or otherwise returning streets to a stronger focus on non-car use — such as walking, cycling and active street life. It is advocated by many urban planners and urban economists, of widely varying political points of view. Its primary benefits are thought to be:
Decreased automobile traffic with less noise pollution, fewer automobile accidents, reduced smog and air pollution
Greater safety and access for pedestrians and cyclists
Less frequent surface maintenance than car-driven roads
Reduced summer temperatures due to less asphalt and more green spaces
Increased pedestrian traffic which also increases social and commercial opportunities
Increased gardening space for urban residents
Better support for co-housing and infirm residents, e.g. suburban eco-villages built around former streets
Campaigns
An early example of street reclamation was the Stockholm carfree day in 1969.
Some consider the best advantages to be gained by redesigning streets, for example as shared space, while others, such as campaigns like "Reclaim the Streets", a widespread "dis-organization", run a variety of events to physically reclaim the streets for political and artistic actions, often called street parties. David Engwicht is also a strong proponent of the concept that street life, rather than physical redesign, is the primary tool of street reclamation.
See also
References
External links
RTS — Reclaim the Streets
Park(ing)
– 22 September
What if everyone had a car? by the BBC World News
Document 4:::
In organometallic chemistry, agostic interaction refers to the intramolecular interaction of a coordinatively-unsaturated transition metal with an appropriately situated C−H bond on one of its ligands. The interaction is the result of two electrons involved in the C−H bond interaction with an empty d-orbital of the transition metal, resulting in a three-center two-electron bond. It is a special case of a C–H sigma complex. Historically, agostic complexes were the first examples of C–H sigma complexes to be observed spectroscopically and crystallographically, due to intramolecular interactions being particularly favorable and more often leading to robust complexes. Many catalytic transformations involving oxidative addition and reductive elimination are proposed to proceed via intermediates featuring agostic interactions. Agostic interactions are observed throughout organometallic chemistry in alkyl, alkylidene, and polyenyl ligands.
History
The term agostic, derived from the Ancient Greek word for "to hold close to oneself", was coined by Maurice Brookhart and Malcolm Green, on the suggestion of the classicist Jasper Griffin, to describe this and many other interactions between a transition metal and a C−H bond. Often such agostic interactions involve alkyl or aryl groups that are held close to the metal center through an additional σ-bond.
Short interactions between hydrocarbon substituents and coordinatively unsaturated metal complexes have been noted since the 1960s. For example, in tris(triphenylphosphine) ruthenium dichloride, a short interaction is observed between the ruthenium(II) center and a hydrogen atom on the ortho position of one of the nine phenyl rings. Complexes of borohydride are described as using the three-center two-electron bonding model.
The nature of the interaction was foreshadowed in main group chemistry in the structural chemistry of trimethylaluminium.
Characteristics of agostic bonds
Agostic interactions are best demonstrated
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the significance of Lp spaces in mathematics and other disciplines according to the text?
A. They are only useful in abstract mathematics.
B. They are crucial for mathematical analysis and have applications in physics, statistics, and finance.
C. They are primarily concerned with the geometric properties of shapes.
D. They are only relevant in the study of finite-dimensional vector spaces.
Answer:
|
B. They are crucial for mathematical analysis and have applications in physics, statistics, and finance.
|
Relevant Documents:
Document 0:::
In electrochemistry, CO stripping is a special process of voltammetry where a monolayer of carbon monoxide already adsorbed on the surface of an electrocatalyst is electrochemically oxidized and thus removed from the surface. A well-known process of this type is CO stripping on Pt/C electrocatalysts in which the electrooxidation peak occurs somewhere between 0.5 and 0.9 V depending on the characteristics and structural properties of the specimen.
Principle
Some metals, such as platinum, readily adsorb carbon monoxide, which is usually undesirable as it results in catalyst poisoning.
However, the strong affinity of CO to such catalysts also presents an opportunity: since carbon monoxide is a small molecule with a strong affinity to the catalyst, a large enough amount of CO will adsorb to the entire available surface area of the catalyst. That, in turn, means that by evaluating the amount of CO adsorbed, the catalyst's available surface area can be indirectly measured.
That surface area - also known as "real surface area" or "electrochemically active surface area" - can be measured by electrochemically oxidizing the adsorbed carbon monoxide, as the charge expended in oxidizing CO is directly proportional to the amount of CO adsorbed on the surface and therefore, the surface area of the catalyst.
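As a rough illustration of that proportionality, the electrochemically active surface area is often estimated by dividing the integrated CO-stripping charge by a nominal monolayer charge density, commonly taken as about 420 μC per square centimetre for platinum; the charge and loading used in the sketch below are illustrative assumptions, not values from the text.

```python
# Minimal sketch: estimating the electrochemically active surface area (ECSA)
# of a Pt catalyst from the integrated CO-stripping charge, using the commonly
# quoted nominal monolayer charge density of 420 uC/cm^2.  Inputs are illustrative.
def ecsa_from_co_stripping(q_co_uC, pt_loading_ug, q_monolayer_uC_per_cm2=420.0):
    area_cm2 = q_co_uC / q_monolayer_uC_per_cm2   # real Pt surface area (cm^2)
    area_m2 = area_cm2 * 1e-4                     # cm^2 -> m^2
    mass_g = pt_loading_ug * 1e-6                 # ug  -> g
    return area_m2 / mass_g                       # specific ECSA (m^2 per g Pt)

# Example: 5000 uC of stripping charge on an electrode carrying 20 ug of Pt
print(f"ECSA ~ {ecsa_from_co_stripping(5000.0, 20.0):.1f} m^2/g")
```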
Usage
CO stripping is one of the methods used to determine the electrochemically active surface area of electrodes and catalysts that irreversibly adsorb carbon monoxide, most notably ones containing platinum and other transition metals.
Document 1:::
In quantum mechanics, the Schrödinger equation describes how a system changes with time. It does this by relating changes in the state of the system to the energy in the system (given by an operator called the Hamiltonian). Therefore, once the Hamiltonian is known, the time dynamics are in principle known. All that remains is to plug the Hamiltonian into the Schrödinger equation and solve for the system state as a function of time.
Often, however, the Schrödinger equation is difficult to solve (even with a computer). Therefore, physicists have developed mathematical techniques to simplify these problems and clarify what is happening physically. One such technique is to apply a unitary transformation to the Hamiltonian. Doing so can result in a simplified version of the Schrödinger equation which nonetheless has the same solution as the original.
Transformation
A unitary transformation (or frame change) can be expressed in terms of a time-dependent Hamiltonian and unitary operator . Under this change, the Hamiltonian transforms as:
.
The Schrödinger equation applies to the new Hamiltonian. Solutions to the untransformed and transformed equations are also related by . Specifically, if the wave function satisfies the original equation, then will satisfy the new equation.
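In standard notation, the frame change and the transformed equation just described take the following form (with ħ kept explicit):

```latex
% Frame change psi' = U psi and the Hamiltonian governing it
% (standard result, with hbar kept explicit)
\psi' = U\,\psi,
\qquad
H' = U H U^{\dagger} + i\hbar\,\frac{\partial U}{\partial t}\,U^{\dagger},
\qquad
i\hbar\,\frac{\partial \psi'}{\partial t} = H'\,\psi'.
```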
Derivation
Recall that by the definition of a unitary matrix, . Beginning with the Schrödinger equation,
,
we can therefore insert the identity at will. In particular, inserting it after and also premultiplying both sides by , we get
.
Next, note that by the product rule,
.
Inserting another and rearranging, we get
.
Finally, combining (1) and (2) above results in the desired transformation:
.
If we adopt the notation to describe the transformed wave function, the equations can be written in a clearer form. For instance, can be rewritten as
,
which can be rewritten in the form of the original Schrödinger equation,
The original wave function can be recovered as .
Relation to
Document 2:::
Ogataea is a genus of ascomycetous yeasts in the family Saccharomycetaceae. It was separated from the former genus Hansenula via an examination of their 18S and 26S rRNA partial base sequencings by Yamada et al. 1994.
The genus name of Ogataea is in honour of Koichi Ogata (x - 1977), who was a Japanese microbiologist from the Department of Agricultural Chemistry, Faculty of Agriculture, at Kyoto University. It was stated in the journal; "The genus is named in honor of the late Professor Dr. Koichi Ogata, Department of Agricultural Chemistry, Faculty of Agriculture, Kyoto University, Kyoto, Japan, in recognition of his studies on the oxidation and assimilation of methanol (C1 compound) in methanol-utilizing yeasts."
The genus was circumscribed by Yuzo Yamada, Kojiro Maeda and Kozaburo Mikata in Biosc., Biotechn. Biochem. vol.58 (Issue 7) on page 1253 in 1994.
Diagnosis
Like other yeasts, also the species within the genus Ogataea are single-celled or build pseudohyphae of only a few elongated cells; true hyphae are not formed.
They are able to reproduce sexually or asexually. The latter case happens with a cell division by multilateral budding on a narrow base with spherical to ellipsoidal budded cells. In the sexual reproduction the asci are deliquescent and may be unconjugated or show conjugation between a cell and its bud or between independent cells. Asci produce one to four, sometimes more, ascospores which are hat-shaped, allantoid or spherical with a ledge. The species are homothallic or infrequently heterothallic.
Ogataea cells are able to ferment glucose or other sugars and some species assimilate nitrate. All known species are able to utilize methanol as carbon source. The predominant ubiquinone is coenzyme Q-7 and the diazonium blue B test is negative. Some species are used and cultured for microbiological and genetic research e.g. Ogataea polymorpha, Ogataea minuta or Ogataea methanolica. Ogataea minuta (Wickerham) Y. Yamada, K. Maeda & Mikata is th
Document 3:::
An explosimeter is a gas detector which is used to measure the amount of combustible gases present in a sample. When a percentage of the lower explosive limit (LEL) of an atmosphere is exceeded, an alarm signal on the instrument is activated.
The device, also called a combustible gas detector, operates on the principle of resistance proportional to heat—a wire is heated, and a sample of the gas is introduced to the hot wire. Combustible gases burn in the presence of the hot wire, thus increasing the resistance and disturbing a Wheatstone bridge, which gives the reading. A flashback arrestor is installed in the device to avoid the explosimeter igniting the sample external to the device.
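A minimal sketch of that bridge measurement: combustion on the active filament raises its resistance and unbalances an otherwise matched bridge, producing a small output voltage (all component values below are assumptions for illustration only).

```python
# Minimal sketch of the Wheatstone-bridge readout in a catalytic combustible-gas
# sensor: combustion heats the active filament, raising its resistance and
# unbalancing an otherwise matched bridge.  All component values are illustrative.
def bridge_output(v_supply, r_active, r_reference, r_fixed=100.0):
    v_active_leg = v_supply * r_active / (r_active + r_fixed)
    v_reference_leg = v_supply * r_reference / (r_reference + r_fixed)
    return v_active_leg - v_reference_leg   # voltage between the bridge midpoints

clean_air = bridge_output(3.0, 100.0, 100.0)   # balanced bridge -> 0 V
with_gas = bridge_output(3.0, 102.0, 100.0)    # filament resistance up by 2 ohms
print(f"clean air: {clean_air:.4f} V, gas present: {with_gas:.4f} V")
```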
Note that the detection readings of an explosimeter are only accurate if the gas being sampled has the same characteristics and response as the calibration gas. Most explosimeters are calibrated to methane, hydrogen, and carbon monoxide.
Explosimeters are important because workspaces may contain a flammable or explosive atmosphere due to the accumulation of flammable gases or vapors. Sparks from ordinary battery-powered portable equipment, including cameras, cell phones, laptop computers, or anything else located on the job site may serve as an ignition source. The explosimeter warns the user of dangerous atmospheric conditions before a possible explosion can occur.
Explosimetry
Explosimetry simply means the measurement of flammable or explosive conditions, normally in the atmosphere around us. In modern times, jobsites both above ground and below ground can have a wide range of dangerous flammable materials present. The danger of these flammable materials are mitigated by detection systems. Explosimetry sensors are integrated into stationary and portable devices to detect the concentration of the calibrated gas in air. The explosimeter is an example of a detection system with an explosimetry sensor in it.
For explosimeters to work properly they must be calibrated for a parti
Document 4:::
The Chicago Lake Tunnel was the first of several tunnels built from the city of Chicago's shore on Lake Michigan two miles out into the lake to access unpolluted fresh water far from the city's sewage.
Waterborne disease in early Chicago
In the early decades of its existence, the growing city was only about three feet above the surface of Lake Michigan, and the areas of early European settlement were flat and sandy with a high water table. European settlers in Chicago only needed to dig 6 to 12 feet to create a private well. The same settlers, however, would also dig privy vaults for human waste nearby. Because the sandy soil topped a layer of hard clay, human waste would sink from the outhouse, meet the impervious clay, and travel laterally into the freshwater supply. As a result, Chicago suffered numerous widespread outbreaks of waterborne diseases. The Chicago Board of Health was organized in 1835, in response to the threat of a cholera epidemic, and later outbreaks of cholera in 1852 and 1854 killed thousands.
Chesbrough's Water Plan
In 1855 the city’s newly formed Board of Sewerage Commissioners hired the 42-year-old Ellis S. Chesbrough (1813–1886), the first city engineer of Boston to study the problem. In 1863, Chesbrough completed a design for a water and sewer system for the city that included a tunnel five feet wide and lined with brick that would extend through the clay bed of Lake Michigan to a distance of 10,567 feet. Work started in 1864 and the tunnel was opened in 1867.
Construction
Gravity forces water into the tunnel through a structure called a crib. The crib for the Lake Tunnel was forty feet high and had five sides. Each side was fifty-eight feet long. The crib had outer, middle, inner walls bolted together, and each was sealed with caulk and tar in the same way ships of the day were made. The crib comprised fifteen separate water-tight compartments with an opening at the bottom twenty-five feet in diameter referred to as "the well," which d
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What was the primary goal of the Brotherhood in "The Invisible Empire"?
A. To create a new world religion
B. To gain control over the entire world
C. To protect the innocent from evil
D. To develop advanced technology for peace
Answer:
|
B. To gain control over the entire world
|
Relevant Documents:
Document 0:::
Plants may reproduce sexually or asexually. Sexual reproduction produces offspring by the fusion of gametes, resulting in offspring genetically different from either parent. Vegetative reproduction produces new individuals without the fusion of gametes, resulting in clonal plants that are genetically identical to the parent plant and each other, unless mutations occur. In asexual reproduction, only one parent is involved.
Asexual reproduction
Asexual reproduction does not involve the production and fusion of male and female gametes. Asexual reproduction may occur through budding, fragmentation, spore formation, regeneration and vegetative propagation.
Asexual reproduction is a type of reproduction where the offspring comes from one parent only, thus inheriting the characteristics of the parent. Asexual reproduction in plants occurs in two fundamental forms, vegetative reproduction and agamospermy. Vegetative reproduction involves a vegetative piece of the original plant producing new individuals by budding, tillering, etc. and is distinguished from apomixis, which is a replacement of sexual reproduction, and in some cases involves seeds. Apomixis occurs in many plant species such as dandelions (Taraxacum species) and also in some non-plant organisms. For apomixis and similar processes in non-plant organisms, see parthenogenesis.
Natural vegetative reproduction is a process mostly found in perennial plants, and typically involves structural modifications of the stem or roots and in a few species leaves. Most plant species that employ vegetative reproduction do so as a means to perennialize the plants, allowing them to survive from one season to the next and often facilitating their expansion in size. A plant that persists in a location through vegetative reproduction of individuals gives rise to a clonal colony. A single ramet, or apparent individual, of a clonal colony is genetically identical to all others in the same colony. The distance that a plant can move during vegetative reproduction is limited, though some plants can produce ramets from branching rhizomes or stolons that cover a wide area, often in only a few growing seasons. In a sense, this process is not one of reproduction but one of survival and expansion of biomass of the individual. When an individual organism increases in size via cell multiplication and remains intact, the process is called vegetative growth. However, in vegetative reproduction, the new plants that result are new individuals in almost every respect except genetic. A major disadvantage of vegetative reproduction is the transmission of pathogens from parent to offspring. It is uncommon for pathogens to be transmitted from the plant to its seeds (in sexual reproduction or in apomixis), though there are occasions when it occurs.
Seeds generated by apomixis are a means of asexual reproduction, involving the formation and dispersal of seeds that do not originate from the fertilization of the embryos. Hawkweeds (Hieracium), dandelions (Taraxacum), some species of Citrus and Kentucky blue grass (Poa pratensis) all use this form of asexual reproduction. Pseudogamy occurs in some plants that have apomictic seeds, where pollination is often needed to initiate embryo growth, though the pollen contributes no genetic material to the developing offspring. Other forms of apomixis occur in plants also, including the generation of a plantlet in replacement of a seed or the generation of bulbils instead of flowers, where new cloned individuals are produced.
Structures
A rhizome is a modified underground stem serving as an organ of vegetative reproduction; the growing tips of the rhizome can separate as new plants, e.g., polypody, iris, couch grass and nettles.
Prostrate aerial stems, called runners or stolons, are important vegetative reproduction organs in some species, such as the strawberry, numerous grasses, and some ferns.
Adventitious buds form on roots near the ground surface, on damaged stems (as on the stumps of cut trees), or on old roots. These develop into above-ground stems and leaves. A form of budding called suckering is the reproduction or regeneration of a plant by shoots that arise from an existing root system. Species that characteristically produce suckers include elm (Ulmus) and many members of the rose family such as Rosa, Kerria and Rubus.
Bulbous plants such as onion (Allium cepa), hyacinths, narcissi and tulips reproduce vegetatively by dividing their underground bulbs into more bulbs. Other plants like potatoes (Solanum tuberosum) and dahlias reproduce vegetatively from underground tubers. Gladioli and crocuses reproduce vegetatively in a similar way with corms.
Gemmae are single cells or masses of cells that detach from plants to form new clonal individuals. These are common in liverworts and mosses and in the gametophyte generation of some filmy ferns. They are also present in some club mosses such as Huperzia lucidula, and in some higher plants such as species of Drosera.
Usage
The most common form of plant reproduction used by people is seeds, but a number of asexual methods are used which are usually enhancements of natural processes, including: cutting, grafting, budding, layering, division, sectioning of rhizomes, roots, tubers, bulbs, stolons, tillers, etc., and artificial propagation by laboratory tissue cloning. Asexual methods are most often used to propagate cultivars with individual desirable characteristics that do not come true from seed. Fruit tree propagation is frequently performed by budding or grafting desirable cultivars (clones), onto rootstocks that are also clones, propagated by stooling.
In horticulture, a cutting is a branch that has been cut off from a mother plant below an internode and then rooted, often with the help of a rooting liquid or powder containing hormones. When a full root has formed and leaves begin to sprout anew, the clone is a self-sufficient plant, genetically identical.
Examples include cuttings from the stems of blackberries (Rubus occidentalis), African violets (Saintpaulia) and verbenas (Verbena) to produce new plants. A related use of cuttings is grafting, where a stem or bud is joined onto a different stem. Nurseries offer for sale trees with grafted stems that can produce four or more varieties of related fruits, including apples. The most common usage of grafting is the propagation of cultivars onto already rooted plants; sometimes the rootstock is used to dwarf the plants or protect them from root-damaging pathogens.
Since vegetatively propagated plants are clones, they are important tools in plant research. When a clone is grown in various conditions, differences in growth can be ascribed to environmental effects instead of genetic differences.
Sexual reproduction
Sexual reproduction involves two fundamental processes: meiosis, which rearranges the genes and reduces the number of chromosomes, and fertilization, which restores the chromosome to a complete diploid number. In between these two processes, different types of plants and algae vary, but many of them, including all land plants, undergo alternation of generations, with two different multicellular structures (phases), a gametophyte and a sporophyte. The evolutionary origin and adaptive significance of sexual reproduction are discussed in the pages Evolution of sexual reproduction and Origin and function of meiosis.
The gametophyte is the multicellular structure (plant) that is haploid, containing a single set of chromosomes in each cell. The gametophyte produces male or female gametes (or both), by a process of cell division called mitosis. In vascular plants with separate gametophytes, female gametophytes are known as megagametophytes (mega = large, they produce the large egg cells) and the male gametophytes are called microgametophytes (micro = small, they produce the small sperm cells).
The fusion of male and female gametes (fertilization) produces a diploid zygote, which develops by mitotic cell divisions into a multicellular sporophyte.
The mature sporophyte produces spores by meiosis, sometimes referred to as reduction division because the chromosome pairs are separated once again to form single sets.
In mosses and liverworts, the gametophyte is relatively large, and the sporophyte is a much smaller structure that is never separated from the gametophyte. In ferns, gymnosperms, and flowering plants (angiosperms), the gametophytes are relatively small and the sporophyte is much larger. In gymnosperms and flowering plants the megagametophyte is contained within the ovule (that may develop into a seed) and the microgametophyte is contained within a pollen grain.
History of sexual reproduction of plants
Unlike animals, plants are immobile, and cannot seek out sexual partners for reproduction. In the evolution of early plants, abiotic means, including water and much later, wind, transported sperm for reproduction. The first plants were aquatic, as described in the page Evolutionary history of plants, and released sperm freely into the water to be carried with the currents. Primitive land plants such as liverworts and mosses had motile sperm that swam in a thin film of water or were splashed in water droplets from the male reproduction organs onto the female organs. As taller and more complex plants evolved, modifications in the alternation of generations evolved. In the Paleozoic era progymnosperms reproduced by using spores dispersed on the wind. The seed plants including seed ferns, conifers and cordaites, which were all gymnosperms, evolved about 350 million years ago. They had pollen grains that contained the male gametes for protection of the sperm during the process of transfer from the male to female parts.
It is believed that insects fed on the pollen, and plants thus evolved to use insects to actively carry pollen from one plant to the next. Seed producing plants, which include the angiosperms and the gymnosperms, have a heteromorphic alternation of generations with large sporophytes containing much-reduced gametophytes. Angiosperms have distinctive reproductive organs called flowers, with carpels, and the female gametophyte is greatly reduced to a female embryo sac, with as few as eight cells. Each pollen grain contains a greatly reduced male gametophyte consisting of three or four cells. The sperm of seed plants are non-motile, except for two older groups of plants, the Cycadophyta and the Ginkgophyta, which have flagella.
Flowering plants
Flowering plants, the dominant plant group, reproduce both by sexual and asexual means. Their distinguishing feature is that their reproductive organs are contained in flowers. Sexual reproduction in flowering plants involves the production of separate male and female gametophytes that produce gametes.
The anther produces pollen grains that contain male gametophytes. The pollen grains attach to the stigma on top of a carpel, in which the female gametophytes (inside ovules) are located. Plants may either self-pollinate or cross-pollinate. The transfer of pollen (the male gametophytes) to the female stigmas is called pollination. After pollination occurs, the pollen grain germinates to form a pollen tube that grows through the carpel's style and transports male nuclei to the ovule to fertilize the egg cell and central cell within the female gametophyte in a process termed double fertilization. The resulting zygote develops into an embryo, while the triploid endosperm (one sperm cell plus a binucleate female cell) and female tissues of the ovule give rise to the surrounding tissues in the developing seed. The fertilized ovules develop into seeds within a fruit formed from the ovary. When the seeds are ripe they may be dispersed together with the fruit or freed from it by various means to germinate and grow into the next generation.
Pollination
Plants that use insects or other animals to move pollen from one flower to the next have developed greatly modified flower parts to attract pollinators and to facilitate the movement of pollen from one flower to the insect and from the insect to the next flower. Flowers of wind-pollinated plants tend to lack petals and/or sepals; typically large amounts of pollen are produced and pollination often occurs early in the growing season before leaves can interfere with the dispersal of the pollen. Many trees and all grasses and sedges are wind-pollinated.
Plants have a number of different means to attract pollinators, including color, scent, heat, nectar glands, edible pollen and flower shape. Along with modifications of the above structures, two other conditions play a very important role in the sexual reproduction of flowering plants: the timing of flowering and the size or number of flowers produced. Some plant species have a few large, very showy flowers, while others produce many small flowers; flowers are often collected together into large inflorescences to maximize their visual effect, making them more noticeable to passing pollinators. Flowers are attraction strategies, and sexual expressions are functional strategies used to produce the next generation of plants; pollinators and plants have co-evolved, often to extraordinary degrees, very often to mutual benefit.
The largest family of flowering plants is the orchids (Orchidaceae), estimated by some specialists to include up to 35,000 species, which often have highly specialized flowers that attract particular insects for pollination. The stamens are modified to produce pollen in clusters called pollinia, which become attached to insects that crawl into the flower. The flower shapes may force insects to pass by the pollen, which is "glued" to the insect. Some orchids are even more highly specialized, with flower shapes that mimic the shape of insects to attract them to attempt to 'mate' with the flowers, a few even have scents that mimic insect pheromones.
Another large group of flowering plants is the Asteraceae or sunflower family with close to 22,000 species, which also have highly modified inflorescences composed of many individual flowers called florets. Heads with florets of one sex, when the flowers are pistillate or functionally staminate or made up of all bisexual florets, are called homogamous and can include discoid and liguliflorous type heads. Some radiate heads may be homogamous too. Plants with heads that have florets of two or more sexual forms are called heterogamous and include radiate and disciform head forms.
Ferns
Ferns typically produce large diploid sporophytes with stems, roots, and leaves. On fertile leaves sporangia are produced, grouped together in sori and often protected by an indusium. If the spores are deposited onto a suitable moist substrate they germinate to produce short, thin, free-living gametophytes called prothalli that are typically heart-shaped, small and green in color. The gametophytes produce both motile sperm in the antheridia and egg cells in separate archegonia. After rains or when dew deposits a film of water, the motile sperm are splashed away from the antheridia, which are normally produced on the top side of the thallus, and swim in the film of water to the archegonia, where they fertilize the egg. To promote outcrossing or cross-fertilization, the sperm are released before the eggs are receptive, making it more likely that the sperm will fertilize the eggs of a different thallus. A zygote is formed after fertilization, which grows into a new sporophytic plant. The condition of having separate sporophyte and gametophyte plants is called alternation of generations. Other plants with similar reproductive strategies include Psilotum, Lycopodium, Selaginella and Equisetum.
Bryophytes
The bryophytes, which include liverworts, hornworts and mosses, can reproduce both sexually and vegetatively. The life cycles of these plants start with haploid spores that grow into the dominant form, which is a multicellular haploid gametophyte, with thalloid or leaf-like structures that photosynthesize. The gametophyte is the most commonly known phase of the plant. Bryophytes are typically small plants that grow in moist locations and, like ferns, have motile sperm that swim to the egg using flagella and therefore need water to facilitate sexual reproduction. Bryophytes show considerable variation in their reproductive structures, and a basic outline is as follows: Haploid gametes are produced in antheridia and archegonia by mitosis. The sperm released from the antheridia respond to chemicals released by ripe archegonia and swim to them in a film of water and fertilize the egg cells, thus producing zygotes that are diploid. The zygote divides repeatedly by mitotic division and grows into a diploid sporophyte. The resulting multicellular diploid sporophyte produces spore capsules called sporangia. The spores are produced by meiosis, and when ripe, the capsules burst open to release the spores. In some species each gametophyte is one sex while other species may be monoicous, producing both antheridia and archegonia on the same gametophyte which is thus hermaphrodite.
Algae
Sexual reproduction in the multicellular facultatively sexual green alga Volvox carteri is induced by oxidative stress. A two-fold increase in cellular reactive oxygen species (associated with oxidative stress) activates the V. carteri genes needed for sexual reproduction. Exposure to antioxidants inhibits the induction of sex in V. carteri. It was proposed on the basis of these observations that sexual reproduction emerged in V. carteri evolution as an adaptive response to oxidative stress and the DNA damage induced by reactive oxygen species. Oxidative stress induced DNA damage may be repaired during the meiotic event associated with germination of the zygospore and the start of a new generation.
Dispersal and offspring care
One of the outcomes of plant reproduction is the generation of seeds, spores, and fruits that allow plants to move to new locations or new habitats.
Plants do not have nervous systems or any will for their actions. Even so, scientists are able to observe mechanisms that help their offspring thrive as they grow. All organisms have mechanisms to increase survival in offspring.
Offspring care is observed in the Mammillaria hernandezii, a small cactus found in Mexico. A cactus is a type of succulent, meaning it retains water when it is available for future droughts. M. hernandezii also stores a portion of its seeds in its stem, and releases the rest to grow. This can be advantageous for many reasons. By delaying the release of some of its seeds, the cactus can protect these from potential threats from insects, herbivores, or mold caused by micro-organisms. A study found that the presence of adequate water in the environment causes M. hernandezii to release more seeds to allow for germination. The plant was able to perceive a water potential gradient in the surroundings, and act by giving its seeds a better chance in this preferable environment. This evolutionary strategy gives a better potential outcome for seed germination.
External links
Simple Video Tutorial on Reproduction in Plant
Document 1:::
This is a list of simultaneous localization and mapping (SLAM) methods. The KITTI Vision Benchmark Suite website has a more comprehensive list of Visual SLAM methods.
List of methods
EKF SLAM
FastSLAM 1.0
FastSLAM 2.0
L-SLAM (Matlab code)
QSLAM
GraphSLAM
Occupancy Grid SLAM
DP-SLAM
Parallel Tracking and Mapping (PTAM)
LSD-SLAM (available as open-source)
S-PTAM (available as open-source)
ORB-SLAM (available as open-source)
ORB-SLAM2 (available as open-source)
ORB-SLAM3 (available as open-source)
OrthoSLAM
MonoSLAM
GroundSLAM
CoSLAM
SeqSlam
iSAM (Incremental Smoothing and Mapping)
CT-SLAM (Continuous Time) - referred to as Zebedee (SLAM)
RGB-D SLAM
BranoSLAM
Kimera (open-source)
Wildcat-SLAM
References
Document 2:::
Repeated implantation failure (RIF) is the repeated failure of the embryo to implant onto the wall of the uterus following IVF treatment. Implantation happens at 6–7 days after conception and involves the embedding of the growing embryo into the mother's uterus and a connection being formed. A successful implantation can be determined by using an ultrasound to view the sac in which the baby grows, inside the uterus.
However, the exact definition of RIF is debated. Recently, the most commonly accepted definition is when a woman under 40 has gone through at least three unsuccessful IVF cycles in which good-quality embryos have been transferred.
Repeated implantation failure should not be confused with recurrent IVF failure. Recurrent IVF failure is a much more broad term and includes all repeated failures to get pregnant from IVF. Repeated implantation failure specifically refers to those failures due to unsuccessful implanting to the uterus wall.
An unsuccessful implantation can result from problems with the mother or with the embryo. It is essential that the mother and embryo are able to communicate with each other during all stages of pregnancy, and an absence of this communication can lead to an unsuccessful implantation and a further unsuccessful pregnancy.
Contributing maternal factors
During implantation, the embryo must cross the epithelial layer of the maternal endometrium before invading and implanting in the stroma layer. Maternal factors, including congenital uterine abnormalities, fibroids, endometrial polyps, intrauterine adhesions, adenomyosis, thrombophilia and endometriosis, can reduce the chances of implantation and result in RIF.
Congenital uterine abnormalities
Congenital uterine abnormalities are irregularities in the uterus which occur during the mother's foetal development.
Hox genes
Two Hox genes have been identified to assist in the development and receptivity of the uterus and endometrium, Hoxa10 and Hoxa11. Hoxa10 has been s
Document 3:::
The Debus–Radziszewski imidazole synthesis is a multi-component reaction used for the synthesis of imidazoles from a 1,2-dicarbonyl, an aldehyde, and ammonia or a primary amine. The method is used commercially to produce several imidazoles. The process is an example of a multicomponent reaction.
The reaction can be viewed as occurring in two stages. In the first stage, the dicarbonyl and two ammonia molecules condense with the two carbonyl groups to give a diimine.
In the second stage, this diimine condenses with the aldehyde.
However, the actual reaction mechanism is not certain.
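For the parent case, the overall transformation can be summarized by the balanced equation below; this is a sketch of the classic glyoxal/formaldehyde/ammonia example, given for illustration rather than taken from the sources above:

```latex
% Debus-Radziszewski synthesis of the parent imidazole (illustrative sketch)
\[
  \underbrace{\mathrm{(CHO)_2}}_{\text{glyoxal}}
  \;+\;
  \underbrace{\mathrm{CH_2O}}_{\text{formaldehyde}}
  \;+\; 2\,\mathrm{NH_3}
  \;\longrightarrow\;
  \underbrace{\mathrm{C_3H_4N_2}}_{\text{imidazole}}
  \;+\; 3\,\mathrm{H_2O}
\]
```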
This reaction is named after Heinrich Debus and .
A modification of this general method, where one equivalent of ammonia is replaced by an amine, affords N-substituted imidazoles in good yields.
This reaction has been applied to the synthesis of a range of 1,3-dialkylimidazolium ionic liquids by using various readily available alkylamines.
References
Document 4:::
Heteroduplex analysis (HDA) is a method in biochemistry used to detect point mutations in DNA (deoxyribonucleic acid), in use since 1992. Heteroduplexes are dsDNA molecules that have one or more mismatched base pairs, whereas homoduplexes are dsDNA molecules that are perfectly paired. The method depends upon the fact that heteroduplexes show reduced electrophoretic mobility relative to homoduplex DNA. Heteroduplexes are formed between different DNA alleles: in a mixture of wild-type and mutant amplified DNA, heteroduplexes form between mismatched wild-type and mutant strands, while perfectly matched strands form homoduplexes. There are two types of heteroduplexes, based on the type and extent of the mutation in the DNA. Small deletions or insertions create bulge-type heteroduplexes, which are stable and can be verified by electron microscopy. Single base substitutions create less stable heteroduplexes called bubble-type heteroduplexes; because of their low stability they are difficult to visualize by electron microscopy. HDA is widely used for rapid screening for the 3 bp p.F508del deletion in the CFTR gene.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the main disadvantage of vegetative reproduction in plants?
A. It requires two parents for offspring.
B. It limits genetic diversity among offspring.
C. It is not effective for perennial plants.
D. It does not allow for the use of seeds.
Answer:
|
B. It limits genetic diversity among offspring.
|
Relavent Documents:
Document 0:::
Erich Otto Engel (29 September 1866, in Alt-Malisch Frankfurt – 11 February 1944, in Dachau) was a German entomologist, who specialised in Diptera.
He was a graphic artist and administrator of the Diptera collection in Zoologische Staatssammlung München.
Selected works
Engel, E. O. (1925) Neue paläarktische Asiliden (Dipt.). Konowia. 4, 189-194.
Engel, E. O., & Cuthbertson A. (1934) Systematic and biological notes on some Asilidae of Southern Rhodesia with a description of a species new to science. Proceedings of the Rhodesia Scientific Association. 34,
Engel, E.O. 1930. Asilidae (Part 24), in E. Lindner (ed.) Die Fliegen der Paläarktischen Region, vol. 4. Schweizerbart'sche, Stuttgart. 491 pp.
Engel, E.O. 1938-1954 Empididae. in Lindner, E. (Ed.). Die Fliegen der Paläarktischen Region, vol.4, 28, 1-400.
Engel, E. O., & Cuthbertson A. (1939). Systematic and biological notes on some brachycerous Diptera of southern Rhodesia. Journal of the Entomological Society of Southern Africa. 2, 181–185.
References
Anonym 1936: [Engel, E. O.] Insektenbörse, Stuttgart 53 (37)
Horn, W. 1936: [Engel, E. O.] Arb. morph. taxon. Ent. Berlin-Dahlem, Berlin 3 (4) 301*
Reiss, F. 1992: Die Sektion Diptera der Zoologischen Staatssammlung München. Spixiana Suppl., München 17 : 72-82, 9 Abb. 72-75, Portrait
External links
DEI Portrait
Document 1:::
In mathematics, the Browder–Minty theorem (sometimes called the Minty–Browder theorem) states that a bounded, continuous, coercive and monotone function T from a real, separable reflexive Banach space X into its continuous dual space X∗ is automatically surjective. That is, for each continuous linear functional g ∈ X∗, there exists a solution u ∈ X of the equation T(u) = g. (Note that T itself is not required to be a linear map.)
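For quick reference, the hypotheses and conclusion can be restated compactly; this is a sketch in standard notation (with the angle brackets denoting the duality pairing between X∗ and X), not a quotation from the references below:

```latex
% Browder-Minty theorem, compact restatement (sketch)
Let $X$ be a real, separable, reflexive Banach space and let $T : X \to X^{*}$ be
bounded (mapping bounded sets to bounded sets), continuous, coercive, i.e.
\[
  \frac{\langle T(u),\, u \rangle}{\lVert u \rVert_{X}} \;\longrightarrow\; \infty
  \qquad \text{as } \lVert u \rVert_{X} \to \infty,
\]
and monotone, i.e.
\[
  \langle T(u) - T(v),\; u - v \rangle \;\ge\; 0 \qquad \text{for all } u, v \in X.
\]
Then $T$ is surjective: for every $g \in X^{*}$ there exists $u \in X$ with $T(u) = g$.
```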
The theorem is named in honor of Felix Browder and George J. Minty, who independently proved it.
See also
Pseudo-monotone operator; pseudo-monotone operators obey a near-exact analogue of the Browder–Minty theorem.
References
(Theorem 10.49)
Document 2:::
Alcohol is generally disallowed in spaceflight, but space agencies have previously allowed its consumption. NASA has been stricter about alcohol consumption than the Roscosmos, both according to regulations and in practice. Astronauts and cosmonauts are restricted from being intoxicated at launch. Despite restrictions on consumption, there have been experiments in making and keeping alcoholic drinks in space.
The effects of alcohol on human physiology in microgravity have not been researched, though because medications can differ in their effects NASA expects that the effects of alcohol will also differ. Beer and other carbonated drinks are not suitable for spaceflight as the bubbles cause "wet burps"; also, a foamy head cannot form as the bubbles do not rise.
United States
On July 20, 1969, Apollo 11 astronaut Buzz Aldrin drank some wine when he took communion while on the Moon in the Lunar Module Eagle. The ceremony was not broadcast following earlier protests against religious activity that opponents believed to breach the separation between church and state.
In the 1970s, NASA's Charles Bourland planned to send sherry with the astronauts visiting Skylab, but the idea was scrapped because the smell was found to induce a gag reflex in zero-gravity flight tests, there was ambivalence among the astronauts, and angry letters were received after plans were discussed in public by Gerry Carr. Alcohol is prohibited aboard the International Space Station due to the impact it can have on the Environmental Control and Life Support System (ECLSS).
A 1985 NASA report on extended spaceflight predicted that alcohol would be missed, but would only become common in stable settlements.
Russia
The Russian state media Russia Beyond says drinking has been officially banned, but the first alcoholic drink sent into space by cosmonauts was a bottle of cognac, to the Salyut 7 in 1984. Cosmonaut Igor Volk said they would lose weight and hide alcohol in their spacesuits or hide bott
Document 3:::
Introduced by Martin Hellman and Susan K. Langford in 1994, the differential-linear attack is a mix of both linear cryptanalysis and differential cryptanalysis.
The attack utilises a differential characteristic over part of the cipher with a probability of 1 (for a few rounds—this probability would be much lower for the whole cipher). The rounds immediately following the differential characteristic have a linear approximation defined, and we expect that for each chosen plaintext pair, the probability of the linear approximation holding for one chosen plaintext but not the other will be lower for the correct key. Hellman and Langford have shown that this attack can recover 10 key bits of an 8-round DES with only 512 chosen plaintexts and an 80% chance of success.
The attack was generalised by Eli Biham et al. to use differential characteristics with probability less than 1. Besides DES, it has been applied to FEAL, IDEA, Serpent, Camellia, and even the stream cipher Phelix.
References
Document 4:::
Sir Richard Spencer (1593 − 1 November 1661) was an English nobleman, gentleman, knight, and politician who sat in the House of Commons from 1621 to 1629 and in 1661. He supported the Royalist cause in the English Civil War.
Early life
Spencer was the son of Robert Spencer, 1st Baron Spencer of Wormleighton and his wife Margaret, daughter of Sir Francis Willoughby and Elizabeth Lyttelton. He was baptised on 21 October 1593. He was educated at Corpus Christi College, Oxford in 1609 and was awarded BA in 1612. His brothers were Sir Edward Spencer, Sir John Spencer, and William Spencer, 2nd Baron Spencer of Wormleighton.
Parliamentary career
In 1621, Spencer was selected Member of Parliament for Northampton. He was a student of Gray's Inn in 1624. In 1624 and 1625 he was re-elected MP for Northampton. He became a gentleman of the bedchamber in 1626. He was re-elected MP for Northampton in 1626 and 1628 and sat until 1629 when King Charles decided to rule without parliament for eleven years.
Post-parliament actions
He was a J.P. for Kent by 1636. Spencer stood for parliament for Kent for the Long Parliament in November 1640, but withdrew before the election took place. He was imprisoned twice by the Long Parliament for promoting the moderate Kentish petition of 1642. He was a commissioner of array for the King in 1642, stood security for loans amounting to £60,000 and helped to raise two regiments of horse, which he commanded at the Battle of Edgehill. He gave up his command in 1643 and claimed the benefit of the Oxford Articles. He settled £40 a year on the minister of Orpington and was allowed to compound for a mere £300.
However he became involved in the canal schemes of William Sandys and his financial condition became precarious. In 1651 he obtained a pass to France for himself and his family and settled in Brussels. He returned to England in about 1653 and was imprisoned and forced to pay off his debts.
At the Restoration, Spencer petitioned unsucce
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is one of the primary advantages of using the Schmidt–Kalman filter over increasing the dimensionality of the state space?
A. Improved accuracy in bias estimation
B. Reduction in computational complexity
C. Enhanced ability to observe residual biases
D. Simplicity in linear state transition models
Answer:
|
B. Reduction in computational complexity
|
Relavent Documents:
Document 0:::
James Wood (14 December 1760 – 23 April 1839) was a mathematician, and Master of St John's College, Cambridge. In his later years he was Dean of Ely.
Life
Wood was born in Holcombe, Bury, where his father ran an evening school and taught his son the elements of arithmetic and algebra. From Bury Grammar School he proceeded to St John's College, Cambridge in 1778, graduating as senior wrangler in 1782. On graduating he became a fellow of the college and in his long tenure there produced several successful academic textbooks for students of mathematics. Between 1795 and 1799 his The principles of mathematics and natural philosophy was printed, in four volumes, by J. Burges. Vol. I: 'The elements of algebra', by Wood; Vol. II: 'The principles of fluxions' by Samuel Vince; Vol. III Part I: 'The principles of mechanics' by Wood; Vol. III Part II: 'The principles of hydrostatics' by Samuel Vince; and Vol. IV: 'The principles of astronomy' by Samuel Vince. Three other volumes – 'A treatise on plane and spherical trigonometry' and 'The elements of the conic sections' by Samuel Vince (1800) and 'The elements of optics' by Wood (1801) – may have been issued as part of the series.
Wood remained for sixty years at St. John's, serving as both President (1802–1815) and Master (1815–1839); on his death in 1839 he was interred in the college chapel and bequeathed his extensive library to the college, comprising almost 4,500 printed books on classics, history, mathematics, theology and travel, dating from the 17th to the 19th centuries.
Wood was also ordained as a priest in 1787 and served as Dean of Ely from 1820 until his death.
Publications
The Elements of Algebra (1795)
The Principles of Mechanics (1796)
The Elements of Optics (1798)
References
Notes
Other sources
W. W. Rouse Ball, A History of the Study of Mathematics at Cambridge University, 1889, repr. Cambridge University Press, 2009, , p. 110
Document 1:::
A hawk is a tool used to hold a plaster, mortar, or a similar material, so that the user can repeatedly, quickly and easily get some of that material on the tool which then applies it to a surface. A hawk consists of a board about 13 inches square with a perpendicular handle fixed centrally on the reverse. The user holds the hawk horizontally with the non-dominant hand and applies the material on the hawk with a tool held in the dominant hand.
Hawks are most often used by plasterers, along with finishing trowels, to apply a smooth finish of plaster to a wall. Brick pointers use hawks to hold mortar while they work. Hawks are also used to hold joint compound for taping and jointing.
The name "hawk" probably derives from the way the object rides on the user's arm, like a bird of prey.
References
Document 2:::
The Dublin Institute of Technology (DIT) School of Electrical and Electronic Engineering (SEEE) was the largest and one of the longest established Schools of Electrical and Electronic Engineering in Ireland. It was located at the DIT Kevin Street Campus in Dublin City, as part of the College of Engineering & Built Environment (CEBE).
In 2019, DIT along with the Institute of Technology Blanchardstown (ITB) and the Institute of Technology Tallaght (ITT) became the founding institutes of the new Technological University Dublin (TU Dublin).
Overview
The DIT School of Electrical and Electronic Engineering was the largest education provider of Electrical and Electronic Engineering in Ireland in terms of programme diversity, staff and student numbers, covering a wide range of engineering disciplines including; Communications Engineering, Computer Engineering, Power Engineering, Electrical Services Engineering, Control Engineering, Energy Management and Electronic Engineering. The school included well established research centres in areas such as photonics, energy, antennas, communications and electrical power, with research outputs in; biomedical engineering, audio engineering, sustainable design, assistive technology and health informatics.
Educational courses in technical engineering commenced at Kevin St. Dublin 8 in 1887. The school sought accreditation for its programmes from the appropriate Irish and international professional body organisations, such as the Institution of Engineers of Ireland, offering education across the full range of third level National Framework of Qualifications (NFQ) Levels, from Level 6 to Level 10 (apprenticeships to post-doctoral degrees).
During the 2010s, the school delivered 23 individual programmes to over 1,200 students and had an output of approximately 350 graduates per year.
In 2015, the head of the school was Professor Michael Conlon.
In 2015, Electrical & Electronic Engineering was expected to be the first School from the C
Document 3:::
Sister chromatid cohesion refers to the process by which sister chromatids are paired and held together during certain phases of the cell cycle. Establishment of sister chromatid cohesion is the process by which chromatin-associated cohesin protein becomes competent to physically bind together the sister chromatids. In general, cohesion is established during S phase as DNA is replicated, and is lost when chromosomes segregate during mitosis and meiosis. Some studies have suggested that cohesion aids in aligning the kinetochores during mitosis by forcing the kinetochores to face opposite cell poles.
Cohesin loading
Cohesin first associates with the chromosomes during G1 phase. The cohesin ring is composed of two SMC (structural maintenance of chromosomes) proteins and two additional Scc proteins. Cohesin may originally interact with chromosomes via the ATPase domains of the SMC proteins. In yeast, the loading of cohesin on the chromosomes depends on proteins Scc2 and Scc4.
Cohesin interacts with the chromatin at specific loci. High levels of cohesin binding are observed at the centromere. Cohesin is also loaded at cohesin attachment regions (CARs) along the length of the chromosomes. CARs are approximately 500-800 base pair regions spaced at approximately 9 kilobase intervals along the chromosomes. In yeast, CARs tend to be rich in adenine-thymine base pairs. CARs are independent of origins of replication.
Establishment of cohesion
Establishment of cohesion refers to the process by which chromatin-associated cohesin becomes cohesion-competent. Chromatin association of cohesin is not sufficient for cohesion. Cohesin must undergo subsequent modification ("establishment") to be capable of physically holding the sister chromosomes together. Though cohesin can associate with chromatin earlier in the cell cycle, cohesion is established during S phase. Early data suggesting that S phase is crucial to cohesion was based on the fact that after S phase, sister chromatids are always found in the bound state. Tying establishment to DNA replication allows the cell to institute cohesion as soon as the sister chromatids are formed. This solves the problem of how the cell might properly identify and pair sister chromatids by ensuring that the sister chromatids are never separate once replication has occurred.
The Eco1/Ctf7 gene (yeast) was one of the first genes to be identified as specifically required for the establishment of cohesion. Eco1 must be present in S phase to establish cohesion, but its continued presence is not required to maintain cohesion. Eco1 interacts with many proteins directly involved in DNA replication, including the processivity clamp PCNA, clamp loader subunits, and a DNA helicase. Though Eco1 contains several functional domains, it is the acetyltransferase activity of the protein which is crucial for establishment of cohesion. During S phase, Eco1 acetylates lysine residues in the Smc3 subunit of cohesin. Smc3 remains acetylated until at least anaphase. Once cohesin has been removed from the chromatin, Smc3 is deacetylated by Hos1.
The Pds5 gene was also identified in yeast as necessary for the establishment of cohesion. In humans, the gene has two homologs, Pds5A and Pds5B. Pds5 interacts with chromatin-associated cohesin. Pds5 is not strictly establishment-specific, as Pds5 is necessary for maintenance of cohesion during G2 and M phase. The loss of Pds5 negates the requirement for Eco1. As such, Pds5 is often termed an "anti-establishment" factor.
In addition to interacting with cohesin, Pds5 also interacts with Wapl (wings apart-like), another protein that has been implicated in the regulation of sister chromatid cohesion. Human Wapl binds cohesin through the Scc cohesin subunits (in humans, Scc1 and SA1). Wapl has been tied to the loss of cohesin from the chromatids during M phase. Wapl interacts with Pds5 through phenylalanine-glycine-phenylalanine (FGF) sequence motifs.
One model of establishment of cohesion suggests that establishment is mediated by the replacement of Wapl in the Wapl-Pds5-cohesin complex with the Sororin protein. Like Wapl, Sororin contains an FGF domain and is capable of interacting with Pds5. In this model, put forward by Nishiyama et al., Wapl interacts with Pds5 and cohesin during G1, before establishment. During S phase, Eco1 (Esco1/Esco2 in humans) acetylates Smc3. This results in recruitment of Sororin. Sororin then replaces Wapl in the Pds5-cohesin complex. This new complex is the established, cohesion-competent cohesin state. At entry to mitosis, Sororin is phosphorylated and replaced again by Wapl, leading to loss of cohesion. Sororin also has chromatin binding activity independent of its ability to mediate cohesion.
Meiosis
Cohesion proteins SMC1β, SMC3, REC8 and STAG3 appear to participate in the cohesion of sister chromatids throughout the meiotic process in human oocytes. SMC1β, REC8 and STAG3 are meiosis-specific cohesin proteins. The STAG3 protein is essential for female meiosis and fertility. Cohesins are involved in meiotic recombination.
Ties to DNA replication
A growing body of evidence ties establishment of cohesion to DNA replication. As mentioned above, functional coupling of these two processes prevents the cell from having to later distinguish which chromosomes are sisters by ensuring that the sister chromatids are never separate after replication.
Another significant tie between DNA replication and cohesion pathways is through Replication Factor C (RFC). This complex, the "clamp loader," is responsible for loading PCNA onto DNA. An alternative form of RFC is required for sister chromatid cohesion. This alternative form is composed of core RFC proteins RFC2, RFC3, RFC4, and RFC5, but replaces the RFC1 protein with cohesion-specific proteins Ctf8, Ctf18, and Dcc1. A similar function-specific alternative RFC (replacing RFC1 with Rad24) plays a role in the DNA damage checkpoint. The presence of an alternative RFC in the cohesion pathway can be interpreted as evidence in support of the polymerase switch model for cohesion establishment. Like the non-cohesion RFC, the cohesion RFC loads PCNA onto DNA.
Some of the evidence tying cohesion and DNA replication comes from the multiple interactions of Eco1. Eco1 interacts with PCNA, RFC subunits, and a DNA helicase, Chl1, either physically or genetically. Studies have also found replication-linked proteins which influence cohesion independent of Eco1. The Ctf18 subunit of the cohesion-specific RFC can interact with cohesin subunits Smc1 and Scc1.
Polymerase switch model
Though the protein was originally identified as a Topoisomerase I redundant factor, the TRF4 gene product was later shown to be required for sister chromatid cohesion. Wang et al. showed that Trf4 is actually a DNA polymerase, which they called Polymerase κ. This polymerase is also referred to as Polymerase σ. In the same paper in which they identified Pol σ, Wang et al. suggested a polymerase switch model for establishment of cohesion. In this model, upon reaching a CAR, the cell switches DNA polymerases in a mechanism similar to that used in Okazaki fragment synthesis. The cell off-loads the processive replication polymerase and instead uses Pol σ for synthesis of the CAR region. It has been suggested that the cohesion-specific RFC could function in off-loading or on-loading PCNA and polymerases in such a switch.
Ties to DNA damage pathways
Changes in patterns of sister chromatid cohesion have been observed in cases of DNA damage. Cohesin is required for repair of DNA double-strand breaks (DSBs). One mechanism of DSB repair, homologous recombination (HR), requires the presence of the sister chromatid for repair at the break site. Thus, it is possible that cohesion is required for this process because it ensures that the sister chromatids are physically close enough to undergo HR. DNA damage can lead to cohesin loading at non-CAR sites and establishment of cohesion at these sites even during G2 phase. In the presence of ionizing radiation (IR), the Smc1 subunit of cohesin is phosphorylated by the ataxia telangiectasia mutated (ATM) kinase. ATM is a key kinase in the DNA damage checkpoint. Defects in cohesion can increase genome instability, a result consistent with the ties between cohesion and DNA damage pathways.
In the bacterium Escherichia coli, repair of mitomycin C-induced DNA damages occurs by a sister chromatid cohesion process involving the RecN protein. Sister chromatid interaction followed by homologous recombination appears to significantly contribute to the repair of DNA double-strand damages.
Medical relevance
Defects in the establishment of sister chromatid cohesion have serious consequences for the cell and are therefore tied to many human diseases. Failure to establish cohesion correctly or inappropriate loss of cohesion can lead to missegregation of chromosomes during mitosis, which results in aneuploidy. The loss of the human homologs of core cohesin proteins or of Eco1, Pds5, Wapl, Sororin, or Scc2 has been tied to cancer. Mutations affecting cohesion and establishment of cohesion are also responsible for Cornelia de Lange Syndrome and Roberts Syndrome. Diseases arising from defects in cohesin or other proteins involved in sister chromatid cohesion are referred to as cohesinopathies.
Cornelia de Lange Syndrome
Genetic alterations in the genes NIPBL, SMC1A, SMC3, RAD21 and HDAC8 are associated with Cornelia de Lange Syndrome. The proteins encoded by these genes all function in the chromosome cohesion pathway that is employed in the cohesion of sister chromatids during mitosis, DNA repair, chromosome segregation and the regulation of developmental gene expression. Defects in these functions likely underlie many of the features of Cornelia de Lange Syndrome.
Document 4:::
David Negrete Fernández was a Mexican colonel who participated in the Mexican Revolution. He was also a musician.
Biography
David fought alongside military officer Felipe Ángeles as a part of División del Norte.
He married Emilia Moreno Anaya, who bore him:
Consuelo Negrete Moreno
Jorge Alberto Negrete Moreno
Emilia Negrete Moreno
Teresa Negrete Moreno
David Negrete Moreno
Rubén Negrete Moreno
David was a math teacher in Mexico City. He was also a father-in-law of Elisa Christy and María Félix.
References
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the role of the Eco1/Ctf7 gene in the establishment of sister chromatid cohesion during the cell cycle?
A. It is required for maintaining cohesion after S phase.
B. It is necessary for the initial loading of cohesin on chromosomes.
C. It acetylates lysine residues in the Smc3 subunit of cohesin during S phase.
D. It interacts with chromatin-associated cohesin only in G1 phase.
Answer:
|
C. It acetylates lysine residues in the Smc3 subunit of cohesin during S phase.
|
Relavent Documents:
Document 0:::
Homothallic refers to the possession, within a single organism, of the resources to reproduce sexually; i.e., having male and female reproductive structures on the same thallus. The opposite sexual functions are performed by different cells of a single mycelium.
It can be contrasted to heterothallic.
It is often used to categorize fungi. In yeast, heterothallic cells have mating types a and α. An experienced mother cell (one that has divided at least once) will switch mating type every cell division cycle because of the HO allele.
Sexual reproduction commonly occurs in two fundamentally different ways in fungi. These are outcrossing (in heterothallic fungi) in which two different individuals contribute nuclei to form a zygote, and self-fertilization or selfing (in homothallic fungi) in which both nuclei are derived from the same individual. Homothallism in fungi can be defined as the capability of an individual spore to produce a sexually reproducing colony when propagated in isolation. Homothallism occurs in fungi by a wide variety of genetically distinct mechanisms that all result in sexually reproducing cultures from a single cell.
Among the 250 known species of aspergilli, about 36% have an identified sexual state. Among those Aspergillus species for which a sexual cycle has been observed, the majority in nature are homothallic (self-fertilizing). Selfing in the homothallic fungus Aspergillus nidulans involves activation of the same mating pathways characteristic of sex in outcrossing species, i.e. self-fertilization does not bypass required pathways for outcrossing sex but instead requires activation of these pathways within a single individual. Fusion of haploid nuclei occurs within reproductive structures termed cleistothecia, in which the diploid zygote undergoes meiotic divisions to yield haploid ascospores.
Several ascomycete fungal species of the genus Cochliobolus (C. luttrellii, C. cymbopogonis, C. kusanoi and C. homomorphus) are homothalli
Document 1:::
In cosmology, a static universe (also referred to as stationary, infinite, static infinite or static eternal) is a cosmological model in which the universe is both spatially and temporally infinite, and space is neither expanding nor contracting. Such a universe does not have so-called spatial curvature; that is to say that it is 'flat' or Euclidean. A static infinite universe was first proposed by English astronomer Thomas Digges (1546–1595).
In contrast to this model, Albert Einstein proposed a temporally infinite but spatially finite model - static eternal universe - as his preferred cosmology during 1917, in his paper Cosmological Considerations in the General Theory of Relativity.
After the discovery of the redshift-distance relationship (deduced by the inverse correlation of galactic brightness to redshift) by American astronomers Vesto Slipher and Edwin Hubble, the Belgian astrophysicist and priest Georges Lemaître interpreted the redshift as evidence of universal expansion and thus a Big Bang, whereas Swiss astronomer Fritz Zwicky proposed that the redshift was caused by the photons losing energy as they passed through the matter and/or forces in intergalactic space. Zwicky's proposal would come to be termed 'tired light'—a term invented by the major Big Bang proponent Richard Tolman.
The Einstein universe
During 1917, Albert Einstein added a positive cosmological constant to his equations of general relativity to counteract the attractive effects of gravity on ordinary matter, which would otherwise cause a static, spatially finite universe to either collapse or expand forever.
This model of the universe became known as the Einstein World or Einstein's static universe.
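For context, the balance Einstein needed can be written out from the standard Friedmann equations for a static, pressureless (dust) universe; this is a sketch in conventional notation rather than material from the sources cited here:

```latex
% Static, dust-filled (p = 0) solution of the Friedmann equations (sketch)
\[
  \dot a = \ddot a = 0
  \quad\Longrightarrow\quad
  \Lambda \;=\; \frac{4 \pi G \rho}{c^{2}} \;=\; \frac{1}{a^{2}},
  \qquad k = +1,
\]
% so the cosmological constant exactly balances the matter density, and the
% spatial sections are closed with curvature radius a = 1/sqrt(Lambda).
```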
This motivation ended after the proposal by the astrophysicist and Roman Catholic priest Georges Lemaître that the universe seems to be not static, but expanding. Edwin Hubble had researched data from the observations made by astronomer Vesto Slipher to confirm a relationship between reds
Document 2:::
The Britten–Davidson model, also known as the gene-battery model, is a hypothesis for the regulation of protein synthesis in eukaryotes. Proposed by Roy John Britten and Eric H. Davidson in 1969, the model postulates four classes of DNA sequence: an integrator gene, a producer gene, a receptor site, and a sensor site. The sensor site regulates the integrator gene, responsible for synthesis of activator RNA. The integrator gene cannot synthesize activator RNA unless the sensor site is activated. Activation and deactivation of the sensor site is done by external stimuli, such as hormones. The activator RNA then binds with a nearby receptor site, which stimulates the synthesis of mRNA at the structural gene.
This theory would explain how several different integrators could be concurrently synthesized, and would explain the pattern of repetitive DNA sequences followed by a unique DNA sequence that exists in genes.
See also
Transcriptional regulation
Operon
References
Document 3:::
The Internetowy System Aktów Prawnych (ISAP for short) is a database of the legislation in force in Poland. It is part of the oldest and one of the most famous Polish legal information systems, and is publicly available on the website of the Sejm of the Republic of Poland.
References
Document 4:::
Antifragility is a property of systems in which they increase in capability to thrive as a result of stressors, shocks, volatility, noise, mistakes, faults, attacks, or failures. The concept was developed by Nassim Nicholas Taleb in his book, Antifragile, and in technical papers. As Taleb explains in his book, antifragility is fundamentally different from the concepts of resiliency (i.e. the ability to recover from failure) and robustness (that is, the ability to resist failure). The concept has been applied in risk analysis, physics, molecular biology, transportation planning, engineering, aerospace (NASA), and computer science.
Taleb defines it as follows in a letter to Nature responding to an earlier review of his book in that journal:
Antifragile versus robust/resilient
In his book, Taleb stresses the differences between antifragile and robust/resilient. "Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better." The concept has now been applied to ecosystems in a rigorous way. In their work, the authors review the concept of ecosystem resilience in its relation to ecosystem integrity from an information theory approach. This work reformulates and builds upon the concept of resilience in a way that is mathematically conveyed and can be heuristically evaluated in real-world applications: for example, ecosystem antifragility. The authors also propose that for socio-ecosystem governance, planning or in general, any decision making perspective, antifragility might be a valuable and more desirable goal to achieve than a resilience aspiration. In the same way, Pineda and co-workers have proposed a simply calculable measure of antifragility, based on the change of “satisfaction” (i.e., network complexity) before and after adding perturbations, and apply it to random Boolean networks (RBNs). They also show that several well known biological networks such as Arabidopsis thaliana cell-cycle are as ex
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What role does the Foundation for Biomedical Research (FBR) play in relation to animal research?
A. It advocates against the use of animals in research.
B. It informs various groups about the necessity of animal research in medicine.
C. It solely focuses on legislative activism related to animal rights.
D. It provides funding exclusively for research on nonhuman primates.
Answer:
|
B. It informs various groups about the necessity of animal research in medicine.
|
Relavent Documents:
Document 0:::
The 12-bit ND812, produced by Nuclear Data, Inc., was a commercial minicomputer developed for the scientific computing market.
Nuclear Data introduced it in 1970 at a price under $10,000.
Description
The architecture has a simple programmed I/O bus, plus a DMA channel. The programmed I/O bus typically runs low- to medium-speed peripherals such as printers, teletypes, and paper tape punches and readers, while DMA is used for cathode ray tube screens with a light pen, analog-to-digital converters, digital-to-analog converters, tape drives, and disk drives.
The word size, 12 bits, is large enough to handle unsigned integers from 0 to 4095 – wide enough for controlling simple machinery – and signed numbers from -2048 to +2047. This is higher precision than a slide rule or most analog computers. Twelve bits could also store two six-bit characters (note that six bits is not enough to cover two letter cases, unlike the "fuller" ASCII character set). "ND Code" was one such 6-bit character encoding that included upper-case alphabetic characters, digits, a subset of punctuation and a few control characters.
The ND812's basic configuration has a main memory of 4,096 twelve-bit words with a 2 microsecond cycle time. Memory is expandable to 16K words in 4K word increments. Bits within the word are numbered from most significant bit (bit 11) to least significant bit (bit 0).
The programming model consists of four accumulator registers: two main accumulators, J and K, and two sub-accumulators, R and S. A rich set of arithmetic and logical operations is provided for the main accumulators, and instructions are provided to exchange data between the main and sub-accumulators. Conditional execution is provided through "skip" instructions: a condition is tested and the subsequent instruction is either executed or skipped depending on the result of the test. The subsequent instruction is usually a jump instruction when more than one instruction is needed for the case where the test fails.
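To make the word-size arithmetic concrete, here is a small Python sketch (not ND812 software, just an illustration of what a 12-bit word can hold): two's-complement interpretation of a 12-bit word, and packing and unpacking of two 6-bit character codes.

```python
# Minimal sketch: arithmetic implied by a 12-bit word (not ND812 code).

WORD_MASK = 0o7777  # 12 bits -> unsigned values 0..4095

def to_signed(word: int) -> int:
    """Interpret a 12-bit word as a two's-complement value (-2048..+2047)."""
    word &= WORD_MASK
    return word - 0o10000 if word & 0o4000 else word  # bit 11 is the sign bit

def pack_chars(hi: int, lo: int) -> int:
    """Pack two 6-bit character codes (0..63 each) into one 12-bit word."""
    assert 0 <= hi < 64 and 0 <= lo < 64
    return (hi << 6) | lo

def unpack_chars(word: int) -> tuple[int, int]:
    """Split a 12-bit word back into its two 6-bit character codes."""
    word &= WORD_MASK
    return (word >> 6) & 0o77, word & 0o77

if __name__ == "__main__":
    print(to_signed(0o7777))        # -1
    print(to_signed(0o3777))        # 2047
    w = pack_chars(0o23, 0o05)
    print(oct(w), unpack_chars(w))  # 0o2305 (19, 5)
```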
Document 1:::
The Fischer oxazole synthesis is a chemical synthesis of an oxazole from a cyanohydrin and an aldehyde in the presence of anhydrous hydrochloric acid. This method was discovered by Emil Fischer in 1896. The cyanohydrin itself is derived from a separate aldehyde. The reactants of the oxazole synthesis itself, the cyanohydrin of an aldehyde and the other aldehyde itself, are usually present in equimolar amounts. Both reactants usually have an aromatic group, which appear at specific positions on the resulting heterocycle.
A more specific example of Fischer oxazole synthesis involves reacting mandelic acid nitrile with benzaldehyde to give 2,5-diphenyl-oxazole.
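As a compact summary of this example, the overall transformation can be written as the balanced equation below; it is inferred from the description (HCl acts as a catalyst and is regenerated) rather than quoted from the sources, so treat it as a sketch:

```latex
% Fischer oxazole synthesis: mandelic acid nitrile + benzaldehyde (sketch)
\[
  \underbrace{\mathrm{C_6H_5CH(OH)CN}}_{\text{mandelic acid nitrile}}
  \;+\;
  \underbrace{\mathrm{C_6H_5CHO}}_{\text{benzaldehyde}}
  \;\xrightarrow{\;\mathrm{HCl\,(dry)}\;}\;
  \underbrace{\mathrm{C_{15}H_{11}NO}}_{\text{2,5-diphenyloxazole}}
  \;+\; \mathrm{H_2O}
\]
```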
History
Fischer developed the Fischer oxazole synthesis during his time at Berlin University. The Fischer oxazole synthesis was one of the first syntheses developed to produce 2,5-disubstituted oxazoles.
Mechanism
The Fischer oxazole synthesis is a type of dehydration reaction which can occur under mild conditions through a rearrangement of the groups that would not seem possible. The reaction is carried out by dissolving the reactants in dry ether and passing dry, gaseous hydrogen chloride through the solution. The product, which is the 2,5-disubstituted oxazole, precipitates as the hydrochloride and can be converted to the free base by the addition of water or by boiling with alcohol.
The cyanohydrins and aldehydes used for the synthesis are usually aromatic, however there have been instances where aliphatic compounds have been used.
The first step of the mechanism is the addition of gaseous HCl to the cyanohydrin 1. The cyanohydrin abstracts the hydrogen from HCl while the chloride ion attacks the carbon in the cyano group. This first step results in the formation of an iminochloride intermediate 2, probably as the hydrochloride salt. This intermediate then reacts with the aldehyde; the lone pair of the nitrogen attacks the electrophilic carbonyl carbon on the aldehyde.
The following step results in an SN2 attack fo
Document 2:::
Base Number (BN) is a measurement of basicity that is expressed in terms of the number of milligrams of potassium hydroxide per gram of oil sample (mg KOH/g). BN is an important measurement in petroleum products, and the value varies depending on its application. BN generally ranges from 6–8 mg KOH/g in modern lubricants, 7–10 mg KOH/g for general internal combustion engine use and 10–15 mg KOH/g for diesel engine operations. BN is typically higher for marine grade lubricants, approximately 15-80 mg KOH/g, as the higher BN values are designed to increase the operating period under harsh operating conditions, before the lubricant requires replacement.
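To make the unit concrete, here is a minimal sketch of how a Base Number result is typically back-calculated from an acid titration of the sample (as in the potentiometric determination described below), assuming the usual relation BN = V × N × 56.1 / m, where 56.1 g/mol is the molar mass of KOH; the numbers are invented and blank correction is omitted, so this is an illustration rather than the procedure from the cited test methods.

```python
# Minimal sketch, assuming the standard relation
#   BN (mg KOH/g) = titrant_volume_mL * titrant_normality * 56.1 / sample_mass_g
# Blank correction omitted; numbers below are invented for illustration.

KOH_MOLAR_MASS = 56.1  # g/mol

def base_number(titrant_volume_ml: float,
                titrant_normality: float,
                sample_mass_g: float) -> float:
    """Return the Base Number in mg KOH per gram of oil sample."""
    return titrant_volume_ml * titrant_normality * KOH_MOLAR_MASS / sample_mass_g

# Example: 2.5 mL of 0.1 N perchloric acid to reach the end point for a 2.0 g sample
print(round(base_number(2.5, 0.1, 2.0), 2))  # ~7.01 mg KOH/g
```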
Oil Additives
An oil formulation consists of the base or stock oil and oil additives. Most oil formulations contain basic additives and detergents, designed to react with and neutralise acids, preventing damage to engine parts, including corrosion of metal surfaces.
Potentiometric determination
Although IP Standard test methods exist, the more common methods for BN are ASTM standardised, such as the potentiometric titration for fresh oils (Test method BN ASTM D2896). A sample is typically dissolved in a pre-mixed solvent of chlorobenzene and acetic acid and titrated with standardised perchloric acid in glacial acetic acid for fresh oil samples. The end point is detected using a glass electrode which is immersed in an aqueous solution containing the sample, and connected to a voltmeter/potentiometer. This causes an ion exchange in the outer solvated layer at the glass membrane, so a change in potential is generated which can be measured by the electrode. When the end point of the chemical reaction is reached, which is shown by an inflection point on the titration curve using a specified detection system, the amount of titrant required is used to generate a result which is reported in milligrams of potassium hydroxide equivalent per gram of sample (mg of KOH/g). Potentiometric titration for used oils (Test method BN A
Document 3:::
The Health Products and Food Branch (HPFB) of Health Canada manages the health-related risks and benefits of health products and food by minimizing risk factors while maximizing the safety provided by the regulatory system and providing information to Canadians so they can make healthy, informed decisions about their health.
HPFB has ten operational Directorates with direct regulatory responsibilities:
Biologics and Genetic Therapies Directorate
Food Directorate
Marketed Health Products Directorate (with responsibility for post-market surveillance)
Medical Devices Directorate
Natural Health Products Directorate
Office of Nutrition Policy and Promotion
Pharmaceutical Drugs Directorate
Policy, Planning and International Affairs Directorate
Resource Management and Operations Directorate
Veterinary Drugs Directorate
Extraordinary Use New Drugs
Extraordinary Use New Drugs (EUNDs) is a regulatory programme under which, in times of emergency, drugs can be granted regulatory approval under the Food and Drug Act and its regulations. An EUND approved through this pathway can only be sold to federal, provincial, territorial and municipal governments. The text of the EUNDs regulations is available.
On 25 March 2011 and after the pH1N1 pandemic, amendments were made to the Food and Drug Regulations (FDR) to include a specific regulatory pathway for EUNDs. Typically, clinical trials in human subjects are conducted and the results are provided as part of the clinical information package of a New Drug Submission (NDS) to Health Canada, the federal authority that reviews the safety and efficacy of human drugs.
Health Canada recognizes that there are circumstances in which sponsors cannot reasonably provide substantial evidence demonstrating the safety and efficacy of a therapeutic product for NDS as there are logistical or ethical challenges in conducting the appropriate human clinical trials. The EUND pathway was developed to allow a mechanism for authorization of th
Document 4:::
The Rogue River–Siskiyou National Forest is a United States National Forest in the U.S. states of Oregon and California. The formerly separate Rogue River and Siskiyou National Forests were administratively combined in 2004. Now, the Rogue River–Siskiyou National Forest ranges from the crest of the Cascade Range west into the Siskiyou Mountains, covering almost . Forest headquarters are located in Medford, Oregon.
Geography
The former Rogue River portion of the Rogue River–Siskiyou National Forest is located in parts of five counties in southern Oregon and northern California. In descending order of land area they are Jackson, Klamath, Douglas, Siskiyou, and Josephine counties, with Siskiyou County being the only one in California. It has a land area of . There are local ranger district offices located in Ashland, Butte Falls, Grants Pass, Jacksonville, and Prospect.
The former Siskiyou portion of the Rogue River–Siskiyou National Forest is located in parts of four counties in southwestern Oregon and northwestern California. In descending order of land area they are Curry, Josephine, and Coos counties in Oregon and Del Norte County in California. It has a land area of . There are local ranger district offices located in Cave Junction, Gold Beach, and Powers.
Nearly all of the national forest is mountainous and includes parts of the Southern Oregon Coast Range, the Klamath Mountains, and the Cascade Range.
The largest river in the national forest is the Rogue River, which originates in the Cascade Range and flows through the Klamath Mountains and Coast Range. The Illinois River is a major tributary of the Rogue in the Klamath Mountains, while the Sixes, Elk, Pistol, Chetco, and Winchuck rivers drain the Coast Range directly to the Pacific Ocean.
Climate
History
The Siskiyou National Forest was established on October 5, 1906. On July 1, 1908, it absorbed Coquille National Forest and other lands. Rogue River National Forest traces its establishment back to the creation of the Ashland Forest Reserve on September 28, 1893, by the United States General Land Office. The lands were transferred to the Forest Service in 1906, and it became a National Forest on March 4, 1907. On July 1, 1908, Ashland was combined with other lands from Cascade, Klamath and Siskiyou National Forests to establish Crater National Forest. On July 18, 1915, part of Paulina National Forest was added, and on July 9, 1932, the name was changed to Rogue River.
World War II bombing
On September 9, 1942, an airplane dropped bombs on Mount Emily in the Siskiyou National Forest, turned around, and flew back over the Pacific Ocean. The bombs exploded and started a fire, which was put out by several forest service employees. Bomb fragments were said to have Japanese markings. Stewart Holbrook vividly described this event in his essay "First Bomb". It was later confirmed that the plane was indeed Japanese, and the incident became known as the Lookout Air Raids. It was the second bombing of the continental United States by an enemy aircraft, coming three months after the air attack by Japan on Dutch Harbor on June 3–4.
Natural features
The national forest is home to some stands of old growth, including Port Orford cedar and Douglas fir in the Copper Salmon area. A 1993 Forest Service study estimated the extent of old growth in the forest, some of which occurs in the Red Buttes Wilderness. Blue oak (Quercus douglasii) and canyon live oak (Quercus chrysolepis) occur in the Siskiyou National Forest. For the California-endemic blue oak, these disjunct stands lie near the northern limit of its range, which extends no farther north than Del Norte County. The world's tallest pine tree, a ponderosa, is located in the national forest.
In 2002, the massive Biscuit Fire burned nearly , including much of the Kalmiopsis Wilderness.
Protected areas
The Rogue River–Siskiyou National Forest contains all or part of eight separate wilderness areas, which together add up to :
Copper Salmon Wilderness -
Grassy Knob Wilderness -
Kalmiopsis Wilderness -
Red Buttes Wilderness -
Rogue–Umpqua Divide Wilderness -
Siskiyou Wilderness -
Sky Lakes Wilderness -
Wild Rogue Wilderness -
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What significant event occurred on September 9, 1942, in the Siskiyou National Forest?
A. The establishment of the forest
B. A bombing incident by a Japanese airplane
C. The first recorded fire in the forest
D. The combination of Rogue River and Siskiyou National Forests
Answer:
|
B. A bombing incident by a Japanese airplane
|
Relevant Documents:
Document 0:::
A spring house, or springhouse, is a small building, usually of a single room, constructed over a spring. While the original purpose of a springhouse was to keep the spring water clean by excluding fallen leaves, animals, etc., the enclosing structure was also used for refrigeration before the advent of ice delivery and, later, electric refrigeration. The water of the spring maintains a constant cool temperature inside the spring house throughout the year. Food that would otherwise spoil, such as meat, fruit, or dairy products, could be kept there, safe from animal depredations as well. Springhouses thus often also served as pumphouses, milkhouses and root cellars.
The Tomahawk Spring spring house at Tomahawk, West Virginia, was listed on the National Register of Historic Places in 1994.
Gallery
See also
Ice house (building)
Smokehouse
Windcatcher
External links
Document 1:::
Biodermogenesi is a procedure used in dermatology to promote skin regeneration. The system makes it possible to reorganize the skin layers and stimulate the regeneration of collagen, elastic fibers and basal cells through the reactivation of intra- and extracellular exchange. The therapy is based on the combined delivery of electromagnetic fields, vacuum and electrostimulation (V-EMF therapy). The literature reports that it is generally pleasant and relaxing for patients and that it has been found effective in anti-aging treatment and in the treatment of stretch marks and scars, without reported side effects.
See also
V-EMF therapy
Document 2:::
Mycroft was a free and open-source software virtual assistant that uses a natural language user interface. Its code was formerly copyleft, but is now under a permissive license. It was named after a fictional computer from the 1966 science fiction novel The Moon Is a Harsh Mistress.
Unusually for a voice-controlled assistant, Mycroft did all of its processing locally, not on a cloud server belonging to the vendor. It could access online resources, but it could also function without an internet connection.
In early 2023, Mycroft AI ceased development. A community-driven platform continues with OpenVoiceOS.
History
Inspiration for Mycroft came when Ryan Sipes and Joshua Montgomery were visiting a makerspace in Kansas City, MO, where they came across a simple and basic intelligent virtual assistant project. They were interested in the technology, but did not like its inflexibility. Montgomery believes that the burgeoning industry of intelligent personal assistance poses privacy concerns for users, and has promised that Mycroft will protect privacy through its open source machine learning platform.
Mycroft AI, Inc., has won several awards, including the prestigious Techweek's KC Launch competition in 2016. They were part of the Sprint Accelerator 2016 class in Kansas City and joined 500 Startups Batch 20 in February 2017. The company accepted a strategic investment from Jaguar Land Rover during this same time period. The company had raised more than $2.5 million from institutional investors before they opted to offer shares of the company to the public through StartEngine, an equity crowdfunding platform.
In early 2023, Mycroft AI ceased development.
Software
Mycroft voice stack
Mycroft provides free software for most parts of the voice stack.
Wake Word
Mycroft does Wake Word spotting, also called keyword spotting, through its Precise Wake Word engine. Prior to Precise becoming the default Wake Word engine, Mycroft employed PocketSphinx. Instead of being ba
Document 3:::
A memory protection unit (MPU) is a computer hardware unit that provides memory protection. It is usually implemented as part of the central processing unit (CPU). MPU is a trimmed down version of memory management unit (MMU) providing only memory protection support. It is usually implemented in low power processors that require only memory protection and do not need the full-fledged feature of a MMU like virtual memory management.
Overview
The MPU allows the privileged software to define memory regions and assign memory access permission and memory attributes to each of them. Depending on the implementation of the processor, the number of supported memory regions will vary. The MPU on ARMv8-M processors supports up to 16 regions. The memory attributes define the ordering and merging behaviors of these regions, as well as caching and buffering attributes. Cache attributes can be used by internal caches, if available, and can be exported for use by system caches.
MPU monitors transactions, including instruction fetches and data accesses from the processor, which can trigger a fault exception when an access violation is detected. The main purpose of memory protection is to prevent a process from accessing memory that has not been allocated to it. This prevents a bug or malware within a process from affecting other processes, or the operating system itself.
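The register interface used to program an MPU differs between processor families and is not described here, so the following C sketch uses a made-up region table purely to illustrate the flow the text describes: privileged software defines regions with permissions, every transaction is checked against them, and a miss or mismatch raises a fault.

#include <stdint.h>

/* Made-up, simplified MPU model -- not the register map of any real core.
 * Each region slot holds a base address, a size, access permissions and a
 * flag for instruction fetches, mirroring the "define regions, assign
 * permissions and attributes" flow described above. */
#define MPU_MAX_REGIONS 16

enum mpu_access { MPU_NO_ACCESS, MPU_READ_ONLY, MPU_READ_WRITE };

struct mpu_region {
    uint32_t base;            /* start address of the region             */
    uint32_t size;            /* size in bytes (0 = slot unused)         */
    enum mpu_access access;   /* permission checked on every data access */
    int executable;           /* whether instruction fetches are allowed */
};

static struct mpu_region mpu_table[MPU_MAX_REGIONS];

/* Privileged software programs one region slot. */
static void mpu_configure_region(int slot, uint32_t base, uint32_t size,
                                 enum mpu_access access, int executable) {
    if (slot < 0 || slot >= MPU_MAX_REGIONS)
        return;
    mpu_table[slot] = (struct mpu_region){ base, size, access, executable };
}

/* The hardware consults the table on every transaction; a miss or a
 * permission mismatch would raise a fault exception (modelled as -1). */
static int mpu_check(uint32_t addr, int is_write, int is_fetch) {
    for (int i = 0; i < MPU_MAX_REGIONS; i++) {
        const struct mpu_region *r = &mpu_table[i];
        if (r->size == 0 || addr < r->base || addr - r->base >= r->size)
            continue;
        if (is_fetch && !r->executable)              return -1;
        if (is_write && r->access != MPU_READ_WRITE) return -1;
        if (!is_write && r->access == MPU_NO_ACCESS) return -1;
        return 0; /* access allowed */
    }
    return -1; /* no region matches: access violation */
}

int main(void) {
    /* Example: 64 KiB of read-only, executable code at a made-up address. */
    mpu_configure_region(0, 0x08000000u, 64u * 1024u, MPU_READ_ONLY, 1);
    /* A write into that region is reported as a violation. */
    return mpu_check(0x08000100u, 1 /*write*/, 0 /*not a fetch*/) == -1 ? 0 : 1;
}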
References
Document 4:::
The Minkowski content (named after Hermann Minkowski), or the boundary measure, of a set is a basic concept that uses concepts from geometry and measure theory to generalize the notions of length of a smooth curve in the plane, and area of a smooth surface in space, to arbitrary measurable sets.
It is typically applied to fractal boundaries of domains in the Euclidean space, but it can also be used in the context of general metric measure spaces.
It is related to, although different from, the Hausdorff measure.
Definition
For $A \subset \mathbb{R}^n$ and each integer $m$ with $0 \le m \le n$, the $m$-dimensional upper Minkowski content is

$$M^{*m}(A) = \limsup_{r \to 0^{+}} \frac{\mu\{x : \operatorname{dist}(x, A) < r\}}{\alpha(n-m)\, r^{n-m}},$$

and the $m$-dimensional lower Minkowski content is defined as

$$M_{*}^{m}(A) = \liminf_{r \to 0^{+}} \frac{\mu\{x : \operatorname{dist}(x, A) < r\}}{\alpha(n-m)\, r^{n-m}},$$

where $\alpha(n-m)\, r^{n-m}$ is the volume of the $(n-m)$-ball of radius $r$ and $\mu$ is the $n$-dimensional Lebesgue measure.
If the upper and lower m-dimensional Minkowski content of A are equal, then their common value is called the Minkowski content Mm(A).
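As a quick illustration (not part of the excerpt), take $A$ to be a straight segment of length $L$ in the plane, so $n = 2$ and $m = 1$. The open $r$-neighborhood of $A$ is a rectangle of area $2Lr$ capped by two half-discs of total area $\pi r^{2}$, while $\alpha(1)\, r = 2r$, so

$$\frac{\mu\{x : \operatorname{dist}(x,A) < r\}}{\alpha(1)\, r} \;=\; \frac{2Lr + \pi r^{2}}{2r} \;=\; L + \frac{\pi r}{2} \;\longrightarrow\; L \quad (r \to 0^{+}).$$

The upper and lower contents therefore coincide, and $M^{1}(A) = L$, the length of the segment, in agreement with the rectifiability statement in the Properties section.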
Properties
The Minkowski content is (generally) not a measure. In particular, the m-dimensional Minkowski content in Rn is not a measure unless m = 0, in which case it is the counting measure. Indeed, clearly the Minkowski content assigns the same value to the set A as well as its closure.
If A is a closed m-rectifiable set in Rn, given as the image of a bounded set from Rm under a Lipschitz function, then the m-dimensional Minkowski content of A exists, and is equal to the m-dimensional Hausdorff measure of A.
Footnotes
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary purpose of a memory protection unit (MPU) in computer systems?
A. To manage virtual memory for applications
B. To provide memory protection and prevent unauthorized access
C. To enhance the speed of data processing
D. To control power consumption in processors
Answer:
|
B. To provide memory protection and prevent unauthorized access
|
Relevant Documents:
Document 0:::
Difelikefalin, sold under the brand name Korsuva, is an opioid peptide used for the treatment of moderate to severe itch. It acts as a peripherally-restricted, highly selective agonist of the κ-opioid receptor (KOR).
Difelikefalin acts as an analgesic by activating KORs on peripheral nerve terminals and KORs expressed by certain immune system cells. Activation of KORs on peripheral nerve terminals results in the inhibition of ion channels responsible for afferent nerve activity, causing reduced transmission of pain signals, while activation of KORs expressed by immune system cells results in reduced release of proinflammatory, nerve-sensitizing mediators (e.g., prostaglandins).
Difelikefalin was approved for medical use in the United States in August 2021. The U.S. Food and Drug Administration considers it to be a first-in-class medication.
Society and culture
Legal status
On 24 February 2022, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Kapruvia, intended for treatment of moderate-to-severe pruritus associated with chronic kidney disease. The applicant for this medicinal product is Vifor Fresenius Medical Care Renal Pharma France. Difelikefalin was approved for medical use in the European Union in April 2022.
Research
It is under development by Cara Therapeutics as an intravenous agent for the treatment of postoperative pain. An oral formulation has also been developed. Due to its peripheral selectivity, difelikefalin lacks the central side effects like sedation, dysphoria, and hallucinations of previous KOR-acting analgesics such as pentazocine and phenazocine. In addition to use as an analgesic, difelikefalin is also being investigated for the treatment of pruritus (itching). Difelikefalin has completed phase II clinical trials for postoperative pain and has demonstrated significant and "robust" clinical
Document 1:::
The World Vegetable Center (WorldVeg), previously known as the Asian Vegetable Research and Development Center (AVRDC), is an international, nonprofit institute for vegetable research and development. It was founded in 1971 in Shanhua, southern Taiwan, by the Asian Development Bank, Taiwan, South Korea, Japan, the Philippines, Thailand, the United States and South Vietnam.
WorldVeg aims to reduce malnutrition and alleviate poverty in developing nations through improving production and consumption of vegetables.
History
The World Vegetable Center was founded as the Asian Vegetable Research and Development Center (AVRDC) in 1971 by the Asian Development Bank, Taiwan, South Korea, Japan, the Philippines, Thailand, the United States and South Vietnam. The main campus was opened in 1973. In 2008 the center was rebranded as the World Vegetable Center.
For the first 20 years of its existence the World Vegetable Center was a major global sweet potato research center with over 1,600 accessions in their first two years of operation. In 1991 the World Vegetable Center chose to end its sweet potato research due to high costs and other institutions with a tighter focus coming into existence. The WVC duplicated and transferred its research and germplasm to the International Potato Center and Taiwan Agricultural Research institute.
Research and development
The use of vegetables as high-value crops is important to the Sustainable Development Goals of the United Nations Development Program and to the World Vegetable Center. The vegetables bred by the Center can be used in poorer areas, where they can serve as an important source of income and can help fight micronutrient deficiencies.
The Center's current crop portfolio focuses on several groups of globally important vegetables, according to the WorldVeg:
solanaceous crops: (tomato, sweet pepper, chili pepper, eggplant)
bulb alliums (onion, shallot, garlic)
cucurbits (Cucurbitaceae): (cucumbers, pumpkins)
Indigeno
Document 2:::
In memory addressing for Intel x86 computer architectures, segment descriptors are a part of the segmentation unit, used for translating a logical address to a linear address. Segment descriptors describe the memory segment referred to in the logical address.
The segment descriptor (8 bytes long in 80286 and later) contains the following fields:
A segment base address
The segment limit which specifies the segment size
Access rights byte containing the protection mechanism information
Control bits
Structure
The x86 and x86-64 segment descriptor has the following form:
What the fields stand for:
Base Address Starting memory address of the segment. Its length is 32 bits and it is created from the lower part bits 16 to 31, and the upper part bits 0 to 7, followed by bits 24 to 31.
Segment Limit Its length is 20 bits and is created from the lower part bits 0 to 15 and the upper part bits 16 to 19. It defines the address of the last accessible data. The length is one more than the value stored here. How exactly this should be interpreted depends on the Granularity bit of the segment descriptor.
G=Granularity If clear, the limit is in units of bytes, with a maximum of 2^20 bytes. If set, the limit is in units of 4096-byte pages, for a maximum of 2^32 bytes.
D/B
D = Default operand size : If clear, this is a 16-bit code segment; if set, this is a 32-bit segment.
B = Big: If set, the maximum offset size for a data segment is increased to 32-bit 0xffffffff. Otherwise it's the 16-bit max 0x0000ffff. Essentially the same meaning as "D".
L=Long If set, this is a 64-bit segment (and D must be zero), and code in this segment uses the 64-bit instruction encoding. "L" cannot be set at the same time as "D" aka "B". (Bit 21 in the image)
AVL=Available For software use, not used by hardware (Bit 20 in the image with the label A)
P=Present If clear, a "segment not present" exception is generated on any reference to this segment
DPL=Descriptor privilege lev
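As a reading aid (not code for programming a real GDT, since C bit-field layout is implementation-defined), the 8-byte descriptor and its split base and limit fields can be sketched as a C struct:

#include <stdint.h>

/* Illustrative layout of the 8-byte x86 segment descriptor. Field widths
 * follow the description above; bit-field ordering is implementation-
 * defined in C, so real system code assembles descriptors with shifts and
 * masks rather than a struct like this. */
struct segment_descriptor {
    uint64_t limit_low   : 16; /* limit bits 0-15                          */
    uint64_t base_low    : 24; /* base bits 0-23                           */
    uint64_t type        : 4;  /* segment type (part of the access byte)   */
    uint64_t s           : 1;  /* code/data vs. system descriptor          */
    uint64_t dpl         : 2;  /* DPL: descriptor privilege level          */
    uint64_t present     : 1;  /* P: segment present                       */
    uint64_t limit_high  : 4;  /* limit bits 16-19                         */
    uint64_t avl         : 1;  /* AVL: available for software use          */
    uint64_t l           : 1;  /* L: 64-bit code segment                   */
    uint64_t db          : 1;  /* D/B: default operand size / big          */
    uint64_t granularity : 1;  /* G: limit counted in bytes or 4 KiB pages */
    uint64_t base_high   : 8;  /* base bits 24-31                          */
};

/* Reassemble the split 32-bit base address. */
static inline uint32_t descriptor_base(const struct segment_descriptor *d) {
    return (uint32_t)d->base_low | ((uint32_t)d->base_high << 24);
}

/* Effective segment size in bytes ("the length is one more than the value
 * stored here", scaled by 4096 when the granularity bit is set). */
static inline uint64_t descriptor_size(const struct segment_descriptor *d) {
    uint64_t limit = (uint64_t)d->limit_low | ((uint64_t)d->limit_high << 16);
    return d->granularity ? (limit + 1) << 12 : limit + 1;
}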
Document 3:::
Botanical gardens in Bulgaria sometimes have collections consisting entirely of native and endemic species; most have collections that include plants from around the world. There are botanical gardens and arboreta throughout Bulgaria; most are administered by local governments, and some are privately owned.
– Balchik
– Varna
Sofia University Botanical garden, Sofia – Sofia
– Sofia
University of Forestry Botanical garden – Sofia
Document 4:::
Bridgwater tidal barrier will be a flood control gate located on the River Parrett in Bridgwater, Somerset, England. The River Parrett is tidal for some upstream of Bridgwater, and the combination of flooding on the Somerset Levels and high tides reaching up the Bristol Channel, have a detrimental effect on the whole area. In 2022, a tidal flood gate was approved to be installed at a cost of £249 million, which is expected to be operational by 2027.
History
Historically, flooding on the River Parrett has occurred when excess rainwater and high tides in the Bristol Channel both backflow upstream on the river. In December 1929, serious flooding upstream at Lyng and Athelney was in danger of overwhelming those villages, and to prevent this the locals suggested cutting the dykes; but this would release a "tidal wave" high, and combined with a near incoming tide, it was feared that mass flooding would occur in the Bridgwater area. However, by early 1930, the locals had abandoned this idea and seemed "resigned to their fate." Several times, locals on the Somerset Levels have complained that their settlements have been sacrificed to save Bridgwater, but one Environment Agency official noted that that is what the Somerset Levels are supposed to do: retain the floodwater and release it slowly.
Serious floods occurred in 1960, and as a result, defences against flooding were built along the Parrett catchment. One of the suggestions put forward after the 2014 floods was to build a giant lagoon in Bridgwater Bay which could generate electricity through the flowing of the tides, but could be allowed to store fresh floodwater and release it into the sea at low tide. Future flooding is based on modelling and estimates from the Environment Agency, which detail an increase of 20% of peak flow in all watercourses, coupled with a sea level rise of by the year 2100.
In December 2019, proposals for the barrier were submitted in response to severe flooding in Somerset in 2014. The
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary purpose of botanical gardens in Bulgaria?
A. To showcase only native species
B. To preserve global plant diversity
C. To serve as private recreational spaces
D. To exclusively collect endemic species
Answer:
|
B. To preserve global plant diversity
|
Relevant Documents:
Document 0:::
A modillion is an ornate bracket, more horizontal in shape and less imposing than a corbel. They are often seen underneath a cornice which helps to support them. Modillions are more elaborate than dentils (literally translated as small teeth). All three are selectively used as adjectival historic past participles (corbelled, modillioned, dentillated) as to what co-supports or simply adorns any high structure of a building, such as a terrace of a roof (a flat area of a roof), parapet, pediment/entablature, balcony, cornice band or roof cornice. Modillions occur classically under a Corinthian or a Composite cornice but may support any type of eaves cornice. They may be carved or plain.
See also
Glossary of architecture
Gallery
References
Document 1:::
Heterotopia is a concept elaborated by philosopher Michel Foucault to describe certain cultural, institutional and discursive spaces that are somehow "other": disturbing, intense, incompatible, contradictory or transforming. Heterotopias are "worlds within worlds": both similar to their surroundings, and contrasting with or upsetting them. Foucault provides examples: ships, cemeteries, bars, brothels, prisons, gardens of antiquity, fairs, Muslim baths and many more. Foucault outlines the notion of heterotopia on three occasions between 1966 and 1967. A lecture given by Foucault to a group of architects in 1967 is the most well-known explanation of the term. His first mention of the concept is in his preface to The Order of Things, and refers to texts rather than socio-cultural spaces.
Etymology
Heterotopia follows the template established by the notions of utopia and dystopia. The prefix hetero- is from Ancient Greek ἕτερος (héteros, "other, another, different") and is combined with the Greek morpheme τόπος (topos) and means "place". A utopia is an idea or an image that is not real but represents a perfected version of society, such as Thomas More's book or Le Corbusier's drawings. As Walter Russell Mead has written, "Utopia is a place where everything is good; dystopia is a place where everything is bad; heterotopia is where things are different — that is, a collection whose members have few or no intelligible connections with one another."
In Foucault
Foucault uses the term heterotopia () to describe spaces that have more layers of meaning or relationships to other places than immediately meet the eye. In general, a heterotopia is a physical representation or approximation of a utopia, or a parallel space (such as a prison) that contains undesirable bodies to make a real utopian space possible.
Foucault explains the link between utopias and heterotopias using the example of a mirror. A mirror is a utopia because the image reflected is a "placeless place", an
Document 2:::
The Planetary Science Decadal Survey is a serial publication of the United States National Research Council produced for NASA and other United States Government Agencies such as the National Science Foundation. The documents identify key questions facing planetary science and outlines recommendations for space and ground-based exploration ten years into the future. Missions to gather data to answer these big questions are described and prioritized, where appropriate. Similar decadal surveys cover astronomy and astrophysics, earth science, and heliophysics.
As of 2022 there have been three "Decadals", one published in April 2002 for the decade from 2003 to 2013, one published on March 7, 2011 for the decade from 2013 to 2022, and one published on April 19, 2022 for the decade from 2023 to 2032.
Before the decadal surveys
Planetary Exploration, 1968-1975, published in 1968, recommended missions to Jupiter, Mars, Venus, and Mercury in that order of priority.
Report of Space Science, 1975 recommended exploration of the outer planets.
Strategy for Exploration of the Inner Planets, 1977–1987 was published in 1977.
Strategy for the Exploration of Primitive Solar-System Bodies--Asteroids, Comets, and Meteoroids, 1980–1990 was published in 1980.
A Strategy for Exploration of the Outer Planets, 1986-1996 was published in 1986.
Space Science in the Twenty-First Century – Imperatives for the Decades 1995 to 2015, published in 1988, recommended a focus on "Galileo-like missions to study Saturn, Uranus and Neptune" including a mission to rendezvous with Saturn's rings and study Titan. It also recommended study of the moon with a "Lunar Geoscience Orbiter", a network of lunar rovers and sample return from the lunar surface. The report recommended a Mercury Orbiter to study not only that planet but provide some solar study as well. A "Program of Extensive Study of Mars" beginning with the Mars Pathfinder mission was planned for 1995 to be followed up by one in 1998 to ret
Document 3:::
The Bataan Rice Enrichment Project, or the Bataan Experiment, was a collaborative research venture between American chemist Robert R. Williams and Juan Salcedo Jr. It was a series of feeding experiments conducted in municipalities in Bataan between 1947 and 1949. By the end of the experiments, it had been shown that thiamine-enriched rice could reduce cases of beriberi in the Philippines, at that time one of the country's leading causes of death.
Overview
The enrichment project first took shape as a plan in 1943. During this time, Salcedo, who was studying at Columbia University, met the American chemist Robert R. Williams, a scientist renowned for his synthesis of vitamin B1 in 1935.
The Philippine Bureau of Health reported a relatively stable beriberi rate from the mid-1920s to 1940. However, after World War II, beriberi cases surged, becoming the second leading cause of death in 1946 and 1947. Infants accounted for a significant portion of these deaths.
At this time, Williams was disappointed by the failure of United Nations agencies to act against the rising number of beriberi cases around the globe. After his previous failures in rice enrichment programs, Williams had become desperate. Together with Salcedo, he began feeding experiments in the province of Bataan.
The specific objectives of the feeding experiment were:
Determine if enriched rice could effectively treat beriberi.
Test the practicality of using enriched rice in the rice trade.
Establish a system to ensure only enriched rice is sold.
Promote the use of enriched rice among the people and explore its potential for widespread use throughout the Philippines.
The experiments were conducted in Bataan where it was divided into two areas: the experimental zone and control zone. After the introduction of thiamine-enriched rice, the experimental area received significant results. Mortality rate in beriberi significantly decreased after the introduction of the nutrient-enriched white rice from July 1, 1948, to June 30, 1
Document 4:::
Jason X: Planet of the Beast is a 2005 British science fiction horror novel written by Nancy Kilpatrick and published by Black Flame. A tie-in to the Friday the 13th series of American horror films, it is the third in a series of five Jason X novels published by Black Flame and involves undead cyborg Jason Voorhees running amok on G7, a space station orbiting Planet #666.
Plot
G7, a research station orbiting Planet #666, receives distress signals from an approaching derelict vessel, Black Star 13. The crew of The Revival, a spaceship docked with G7, boards and investigates Black Star 13, the crew of which had found a waste disposal rocket adrift in space before being slaughtered by the vessel's only passenger, undead cyborg Jason Voorhees. Jason attacks The Revival, causing it and Black Star 13 to crash on Planet #666. Professor Claude Bardox, the lead scientist on G7, discovers Jason was aboard Black Star 13. Bardox, believing Jason's nanotechnology-enhanced physiology is the key to genetic breakthroughs, especially in the field of cloning, sets out to retrieve samples of Jason's DNA. Jason rips Bardox's prosthetic arm off and murders Bardox's assistant, Emery Peterson, and pilot, Felicity Lawrence. Despite this, Bardox is successful in bringing samples of Jason's genetic material back to G7, unaware Jason climbed aboard his shuttle to escape Planet #666. After Jason slays two of the station's personnel, Andre Desjardines and Doctor Brandi Essex Williams, Bardox, oblivious to Jason's presence on G7, knocks the rest of the station's crew out by tampering with the air and sets to work modifying the genetic samples he took from Jason. Bardox wants to create a new, perfect breed of human with Jason's DNA, which he uses to artificially inseminate one of his subordinates, a fellow geneticist named London Jefferson. Bardox believes humanity is being held back by oppressive morality and physical frailty and also wants to show up and impress his abusive father back on Ea
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the three main stages of the human action cycle as proposed by Donald A. Norman?
A. Goal formation, Execution, Evaluation
B. Planning, Execution, Review
C. Formation, Execution, Assessment
D. Goal setting, Action, Outcome
Answer:
|
A. Goal formation, Execution, Evaluation
|
Relevant Documents:
Document 0:::
To help compare different orders of magnitude, the following list describes various speed levels between approximately 2.2×10^-18 m/s and 3.0×10^8 m/s (the speed of light). Values in bold are exact.
List of orders of magnitude for speed
See also
Typical projectile speeds - also showing the corresponding kinetic energy per unit mass
Neutron temperature
References
Document 1:::
In computing, a segmentation fault (often shortened to segfault) or access violation is a failure condition raised by hardware with memory protection, notifying an operating system (OS) the software has attempted to access a restricted area of memory (a memory access violation). On standard x86 computers, this is a form of general protection fault. The operating system kernel will, in response, usually perform some corrective action, generally passing the fault on to the offending process by sending the process a signal. Processes can in some cases install a custom signal handler, allowing them to recover on their own, but otherwise the OS default signal handler is used, generally causing abnormal termination of the process (a program crash), and sometimes a core dump.
Segmentation faults are a common class of error in programs written in languages like C that provide low-level memory access and few to no safety checks. They arise primarily due to errors in use of pointers for virtual memory addressing, particularly illegal access. Another type of memory access error is a bus error, which also has various causes, but is today much rarer; these occur primarily due to incorrect physical memory addressing, or due to misaligned memory access – these are memory references that the hardware cannot address, rather than references that a process is not allowed to address.
Many programming languages have mechanisms designed to avoid segmentation faults and improve memory safety. For example, Rust employs an ownership-based model to ensure memory safety. Other languages, such as Lisp and Java, employ garbage collection, which avoids certain classes of memory errors that could lead to segmentation faults.
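A minimal C program that deliberately triggers a segmentation fault and installs its own SIGSEGV handler illustrates the sequence described above; what a handler may safely do after a genuine fault is very limited, so this example simply reports the signal and exits.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Custom SIGSEGV handler. Only async-signal-safe functions should be used
 * here, and returning after a genuine fault would normally just re-trigger
 * it, so the handler reports the signal and terminates the process. */
static void on_segfault(int sig) {
    (void)sig;
    static const char msg[] = "caught SIGSEGV, exiting\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

int main(void) {
    signal(SIGSEGV, on_segfault);   /* install the custom signal handler */

    int *volatile p = NULL;         /* no memory is mapped at address 0 */
    *p = 42;                        /* illegal write: hardware raises a fault,
                                       the OS delivers SIGSEGV to the process */

    puts("never reached");
    return 0;
}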
Overview
A segmentation fault occurs when a program attempts to access a memory location that it is not allowed to access, or attempts to access a memory location in a way that is not allowed (for example, attempting to write to a read-only location, or to overwrite part
Document 2:::
Circoviridae is a family of DNA viruses. Birds and mammals serve as natural hosts. The family has two genera. Diseases associated with this family include: PCV-2: postweaning multisystemic wasting syndrome; CAV: chicken infectious anemia.
Structure
Viruses in the family Circoviridae are non-enveloped, with icosahedral and round geometries, and T=1 symmetry. The diameter is around 20 nm. Genomes are circular and non-segmented, around 3.8kb in length. The capsid consists of 12 pentagonal trumpet-shaped pentamers. There are two main open reading frames arranged in opposite directions that encode the replication and capsid proteins.
Life cycle
Viral replication is nuclear. Entry into the host cell is achieved by penetration into the host cell. Replication follows the ssDNA rolling circle model. DNA templated transcription, with some alternative splicing mechanism is the method of transcription. The virus exits the host cell by nuclear egress, and nuclear pore export. A stem loop structure with a conserved nonanucleotide motif is located at the 5' intergenic region of circovirus genomes and is thought to initiate rolling-cycle replication.
Birds and mammals serve as the natural host. Transmission routes are fecal-oral.
Taxonomy
The family Circoviridae contains two genera—Circovirus and Cyclovirus.
Clinical
A cyclovirus—cyclovirus-Vietnam—has been isolated from the cerebrospinal fluid of 25 Vietnamese patients with CNS infections of unknown aetiology. The same virus has been isolated from the faeces of healthy children and also from pigs and chickens. This suggests an orofaecal route of transmission with a possible animal reservoir.
See also
Animal viruses
References
External links
ICTV Online Report; Circoviridae
ICTVdB Entry for Circoviridae
Viralzone: Circoviridae
Document 3:::
Distributed.net is a volunteer computing effort that is attempting to solve large scale problems using otherwise idle CPU or GPU time. It is governed by Distributed Computing Technologies, Incorporated (DCTI), a non-profit organization under U.S. tax code 501(c)(3).
Distributed.net is working on RC5-72 (breaking RC5 with a 72-bit key). The RC5-72 project is on pace to exhaust the keyspace in just under 37 years as of January 2025, although the project will end whenever the required key is found. RC5 has eight unsolved challenges from RSA Security, although in May 2007, RSA Security announced that they would no longer be providing prize money for a correct key to any of their secret key challenges. distributed.net has decided to sponsor the original prize offer for finding the key as a result.
In 2001, distributed.net was estimated to have a throughput of over 30 TFLOPS. The throughput was later estimated to be the same as that of a Cray XC40, as used in the Lonestar 5 supercomputer, or around 1.25 petaFLOPS.
History
A coordinated effort was started in February 1997 by Earle Ady and Christopher G. Stach II of Hotjobs.com and New Media Labs, as an effort to break the RC5-56 portion of the RSA Secret-Key Challenge, a 56-bit encryption algorithm that had a $10,000 USD prize available to anyone who could find the key. Unfortunately, this initial effort had to be suspended as the result of SYN flood attacks by participants upon the server.
A new independent effort, named distributed.net, was coordinated by Jeffrey A. Lawson, Adam L. Beberg, and David C. McNett along with several others who would serve on the board and operate infrastructure. By late March 1997 new proxies were released to resume RC5-56 and work began on enhanced clients. A cow head was selected as the icon of the application and the project's mascot.
The RC5-56 challenge was solved on October 19, 1997, after 250 days. The correct key was "0x532B744CC20999" and the plaintext message read "The unknown message is
Document 4:::
ETUT (European Training network in collaboration with Ukraine for electrical Transport) is a research project funded by the European Commission's Horizon 2020 program under the Marie Skłodowska-Curie Actions Innovative Training Networks (MSCA-ITN) scheme. The project, undertaken by a collaborative effort of the University of Twente, the University of Nottingham, and Dnipro National University of Railway Transport, aims to develop efficient interfacing technology for more-electric transport amidst the ever-increasing demand in transportation systems, which contributes to increased carbon dioxide emissions. The project has employed 12 Early Stage Researchers (doctoral candidates) who will work closely with six industrial partners to improve upon the existing electrical and energy storage systems that will help in alleviating the reliance on non-renewable energy sources for large-scale transportation systems such as railways and maritime transport. The project is divided into two main groups, with one focusing on power electronics for efficient use of energy resources in power delivery, and the other on electromagnetic compatibility of such systems.
History
With the increasing use of solid-state devices in transportation systems, a number of electromagnetic interference and interoperability problems are occurring which could hinder the electrification of transport on a global scale. This requires a closer inspection of the interaction of power electronic converters with equipment of the information and communications technology. With this in mind, MSCA ETUT brought together researchers from different academic backgrounds spanning all over the globe to contrive innovative ways of countering these problems. Jan Abraham Ferreira (IEEE Fellow, former President of IEEE Power Electronics Society ), Frank Leferink (IEEE Fellow and Director EMC at Thales Nederland), Patrick Wheeler (IEEE Fellow and Global Director of the University of Nottingham's Institute of Aerospace Tech
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What significant achievement did White Widow accomplish in 1995?
A. It became a medical cannabis strain.
B. It won the Cannabis Cup.
C. It was developed by Mr. Nice Seedbank.
D. It was renamed Black Widow.
Answer:
|
B. It won the Cannabis Cup.
|
Relevant Documents:
Document 0:::
GI Monocerotis, also known as Nova Monocerotis 1918, was a nova that erupted in the constellation Monoceros during 1918. It was discovered by Max Wolf on a photographic plate taken at the Heidelberg Observatory on 4 February 1918. At the time of its discovery, it had a photographic magnitude of 8.5, and had already passed its peak brightness. A search of plates taken at the Harvard College Observatory showed that it had a photographic magnitude of 5.4 on 1 January 1918, so it would have been visible to the naked eye around that time. By March 1918 it had dropped to ninth or tenth magnitude. By November 1920 it was a little fainter than 15th magnitude.
A single pre-eruption photographic detection of GI Monocerotis exists, showing its magnitude was 15.1 before the nova event.
GI Monoceros dropped by 3 magnitudes from its peak in about 23 days, making it a "fast nova". Long after the nova eruption, six small outbursts with a mean amplitude of 0.9 magnitudes were detected when the star was monitored from the year 1991 through 2000. Radio emission from the nova has been detected at the JVLA in the C (5 GHz), X (8 GHz) and K (23 GHz) bands.
All novae are binary stars, with a "donor" star orbiting a white dwarf. The two stars are so close together that matter is transferred from the donor star to the white dwarf. Worpel et al. report that the orbital period for the binary is probably 4.33 hours, and there is a 48.6 minute period which may represent the rotation period for the white dwarf. Their X-ray observations indicate that GI Mon is a non-magnetic cataclysmic variable star, meaning that the material lost from the donor star forms an accretion disk around the white dwarf, rather than flowing directly to the surface of the white dwarf. It is estimated that the donor star is transferring of material to the accretion disk each year.
A 1995 search for an optically resolved nova remnant using the Anglo-Australian Telescope was unsuccessful.
References
Document 1:::
In the healthcare industry, information continuity is the process by which information relevant to a patient's care is made available to both the patient and the provider at the right place and the right time, to facilitate ongoing health care management and continuity of care.
This is an extension of the concept of "Continuity of Care," which is defined by the American Academy of Family Physicians in their Continuity of Care definition as "the process by which the patient and the physician are cooperatively involved in ongoing health care management toward the goal of high quality, cost-effective medical care."
There is a non-Information Technology reference to "Informational continuity" — the use of information on past events and personal circumstances to make current care appropriate for each individual. This exists with "Management continuity" and "Relational continuity."
Information continuity in the information technology sense may exist alongside physical care continuity, such as when a medical chart arrives with a patient to the hospital. Information continuity may also be separate, such as when a patient's electronic records are sent to a treating physician before the patient arrives at a care site.
Creating information continuity in health care typically involves the use of health information technology to link systems using standards. Information continuity will become more and more important as patients in health care systems expect that their treating physicians have all of their medical information across the health care spectrum.
This use of this term in health information technology initiated at Seattle, Washington, at the Group Health Cooperative non-profit care system to describe activities including data sharing, allergy and medication reconciliation, and interfacing of data between health care institutions.
See also
Health care continuity
References
Document 2:::
Sayre's paradox is a dilemma encountered in the design of automated handwriting recognition systems. A standard statement of the paradox is that a cursively written word cannot be recognized without being segmented and cannot be segmented without being recognized. The paradox was first articulated in a 1973 publication by Kenneth M. Sayre, after whom it was named.
Nature of the problem
It is relatively easy to design automated systems capable of recognizing words inscribed in a printed format. Such words are segmented into letters by the very act of writing them on the page. Given templates matching typical letter shapes in a given language, individual letters can be identified with a high degree of probability. In cases of ambiguity, probable letter sequences can be compared with a selection of properly spelled words in that language (called a lexicon). If necessary, syntactic features of the language can be applied to render a generally accurate identification of the words in question. Printed-character recognition systems of this sort are commonly used in processing standardized government forms, in sorting mail by zip code, and so forth.
In cursive writing, however, letters comprising a given word typically flow sequentially without gaps between them. Unlike a sequence of printed letters, cursively connected letters are not segmented in advance. Here is where Sayre's Paradox comes into play. Unless the word is already segmented into letters, template-matching techniques like those described above cannot be applied. That is, segmentation is a prerequisite for word recognition. But there are no reliable techniques for segmenting a word into letters unless the word itself has been identified. Word recognition requires letter segmentation, and letter segmentation requires word recognition. There is no way a cursive writing recognition system employing standard template-matching techniques can do both simultaneously.
Advantages to be gained by use of automa
Document 3:::
Indian hedgehog homolog (Drosophila), also known as IHH, is a protein which in humans is encoded by the IHH gene. This cell signaling protein is in the hedgehog signaling pathway. The several mammalian variants of the Drosophila hedgehog gene (which was the first named) have been named after the various species of hedgehog; the Indian hedgehog is honored by this one. The gene is not specific to Indian hedgehogs.
Function
The Indian hedgehog protein is one of three proteins in the mammalian hedgehog family, the others being desert hedgehog (DHH) and sonic hedgehog (SHH). It is involved in chondrocyte differentiation, proliferation and maturation especially during endochondral ossification. It regulates its effects by feedback control of parathyroid hormone-related peptide (PTHrP).
Indian Hedgehog (Ihh) is one of three signaling molecules from the Hedgehog (Hh) gene family. The genes of the Hh family, Sonic Hedgehog (Shh), Desert Hedgehog (Dhh) and Ihh, regulate several fetal developmental processes. The Ihh homolog is involved in the formation of chondrocytes during the development of limbs. The protein is released by small, non-proliferating, mature chondrocytes during endochondral ossification. Ihh mutations have recently been shown to cause brachydactyly type A1 (BDA1), the first Mendelian autosomal dominant disorder in humans to be recorded. There are seven known mutations of Ihh that cause BDA1. Of particular interest are mutations involving the E95 residue, which is thought to be involved in proper signaling between Ihh and its receptors. In a mouse model, mice with mutations of the E95 residue were found to have abnormalities of their digits.
Ihh may also be involved in endometrial cell differentiation and implantation. Studies have shown progesterone to upregulate Ihh expression in the murine endometrium, suggesting a role in implantation. Ihh is suspected to be involved in the downstream regulation of other signaling molecules that are known to pla
Document 4:::
The Warkworth Radio Astronomical Observatory is a radio telescope observatory, located just south of Warkworth, New Zealand, about 50 km north of the Auckland CBD. It was established by the Institute for Radio Astronomy and Space Research, Auckland University of Technology. The WARK12M 12m Radio Telescope was constructed in 2008. In 2010, a licence to operate the Telecom New Zealand 30m dish was granted, which led to the commissioning of the WARK30M 30m Radio Telescope. The first observations made in conjunction with the Australian Long Baseline Array took place in 2011.
The observatory was purchased by Space Operations New Zealand Limited (SpaceOps NZ) in July 2023 after the university decided to close the facility. Because of its wider ambit, the facility is now called the Warkworth Space Centre.
History
The Warkworth Space Centre is on land that was first developed for long-range telecommunications, operated by the New Zealand Post Office and opening on 17 July 1971. The station, primarily connecting to fourth generation Intelsat satellites, was used for satellite telephone circuits and television, including the broadcast of the 1974 British Commonwealth Games, held in Christchurch. The shallow valley site was chosen as it was sheltered from winds and radio noise, and the horizon elevation of only 5° allowed the station to be useful for transmissions to low orbit satellites. The original 30-metre antenna was decommissioned on 18 June 2008 and demolished.
A second antenna and station building were opened on 24 July 1984. This was removed from commercial service in November 2010, after which the Auckland University of Technology upgraded the motor drives and began using it for radio astronomy.
Technical information
A hydrogen maser is installed on-site to provide the very accurate timing required by VLBI observations. The observatory has a 10 Gbit/s connection to the REANNZ network, providing high speed data transfers for files and e-VLBI as well as linking
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary goal of information continuity in healthcare as defined in the text?
A. To ensure that patients have access to all healthcare facilities
B. To facilitate ongoing health care management and continuity of care
C. To maintain a patient's personal information securely
D. To reduce the cost of medical services
Answer:
|
B. To facilitate ongoing health care management and continuity of care
|
Relevant Documents:
Document 0:::
Computed tomography of the chest or chest CT is a group of computed tomography scan protocols used in medical imaging to evaluate the lungs and search for lung disorders.
Contrast agents are sometimes used in CT scans of the chest to accentuate or enhance the differences in radiopacity between vascularized and less vascularized structures, but a standard chest CT scan is usually non-contrasted (i.e. "plain") and relies on different algorithms to produce various series of digitized images known as views or "windows". Modern detail-oriented scans such as high-resolution computed tomography (HRCT) are the gold standard in respiratory medicine and thoracic surgery for investigating disorders of the lung parenchyma (alveoli).
Contrasted CT scans of the chest are usually used to confirm the diagnosis of lung cancer and abscesses, as well as to assess lymph node status at the hila and the mediastinum. CT pulmonary angiography uses time-matched ("phased") protocols to assess lung perfusion and the patency of the great arteries and veins, particularly to look for pulmonary embolism.
References
Document 1:::
Rachel E. Scherr is an American physics educator, currently an associate professor of physics at the University of Washington Bothell. Her research includes studies of responsive teaching and active learning, video and gestural analysis of classroom behavior, and student understanding of energy and special relativity.
Education and career
In high school, Scherr worked as an "explainer" at the Exploratorium in San Francisco. She majored in physics at Reed College, where she used the support of a Watson Fellowship for a year abroad studying physics education in Europe, Asia, and Africa. After graduating in 1993 she went to the University of Washington for graduate study in physics and physics education, earning a master's degree in 1996 and completing her Ph.D. in 2001. Her dissertation, An investigation of student understanding of basic concepts in special relativity, was jointly supervised by Lillian C. McDermott and Stamatis Vokos.
After a short-term stint at Evergreen State College, she became a postdoctoral researcher and research assistant professor at the University of Maryland from 2001 to 2010. She moved to Seattle Pacific University as a research scientist in 2008, and to her present position at the University of Washington Bothell in 2018.
She was chair of the Topical Group on Physics Education Research of the American Physical Society (APS) for the 2016 term.
Recognition
Scherr was elected as a Fellow of the American Physical Society in 2017, after a nomination from the APS Topical Group on Physics Education Research, "for foundational research into energy learning and representations, application of video analysis methods to study physics classrooms, and physics education research community leadership".
References
External links
Home page
Document 2:::
Vaccine-naive is a lack of immunity, or immunologic memory, to a disease because the person has not been vaccinated. There are a variety of reasons why a person may not have received a vaccination, including contraindications due to preexisting medical conditions, lack of resources, previous vaccination failure, religious beliefs, personal beliefs, fear of side-effects, phobias to needles, lack of information, vaccine shortages, physician knowledge and beliefs, social pressure, and natural resistance.
Effect on herd immunity
Communicable diseases, such as measles and influenza, are more readily spread in vaccine-naive populations, causing frequent outbreaks. Vaccine-naive persons threaten what epidemiologists call herd immunity. This is because vaccinations provide not just protection to those who receive them, but also provide indirect protection to those who remain susceptible because of the reduced prevalence of infectious diseases. Fewer individuals available to transmit the disease reduce the incidence of it, creating herd immunity.
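The excerpt does not quantify this, but a standard epidemiological approximation (not stated in the text above) relates the immunized fraction needed for herd immunity to the basic reproduction number $R_{0}$:

$$p_{c} \;=\; 1 - \frac{1}{R_{0}}.$$

For a pathogen as transmissible as measles, with $R_{0}$ often quoted in the range 12–18, this gives $p_{c} \approx 1 - 1/15 \approx 93\%$, which is why even a modest vaccine-naive minority can be enough to sustain outbreaks.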
See also
Immune system
Outbreak
Pulse vaccination strategy
Vaccine
References
External links
Centers for Disease Control and Prevention (CDC) immunization schedules
CDC Vaccine contraindications
Document 3:::
Bistel, acronym for Belgian Information System by Telephone, was a Belgian service using the Videotex protocol, established in 1986 at the initiative of the then Prime Minister Wilfried Martens.
It was a federal government service that gave access through videotex to Belga telex news releases, to a press review, judicial databases JUSTEL and CREDOC, economical databases ECOTEL and BUDGETEX.
In 1988, Bart Halewijck, a former PM cabinet adviser and municipal councillor for the same CVP party in Gistel, used the passwords he had kept to view sensitive government information and showed it to a friend, then to a journalist from De Standaard to prove the security failures. This led to the judicial "Bistel Case". Despite recommendations, passwords had not been changed every six months, and many were still the same after two years. As there were then no specific penal provisions for such crimes, Halewijck and his friend were charged with forgery and use of forgery, theft of energy with burglary, forgery of a telegraphic despatch by an official (an article of the Belgian Penal Code not used since 1911) and destruction of State property. The first two "Belgian hackers" received only light penalties, but the case showed the urgent need for specific legislation.
Sources
Document 4:::
In theoretical computer science, Arden's rule, also known as Arden's lemma, is a mathematical statement about a certain form of language equations.
Background
A (formal) language is simply a set of strings. Such sets can be specified by means of some language equation, which in turn is based on operations on languages. Language equations are mathematical statements that resemble numerical equations, but the variables assume values of formal languages rather than numbers. Among the most common operations on two languages A and B are the set union A ∪ B, and their concatenation A⋅B. Finally, as an operation taking a single operand, the set A* denotes the Kleene star of the language A.
Statement of Arden's rule
Arden's rule states that the set A*⋅B is the smallest language that is a solution for X in the linear equation X = A⋅X ∪ B where X, A, B are sets of strings. Moreover, if the set A does not contain the empty word, then this solution is unique.
Equivalently, the set B⋅A* is the smallest language that is a solution for X in X = X⋅A ∪ B.
Application
Arden's rule can be used to help convert some finite automata to regular expressions, as in Kleene's algorithm.
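To make that algebraic use of the rule concrete, here is a minimal Python sketch (not part of the original article) of the state-elimination idea behind Kleene's algorithm. The string-based regex representation and the helper names union, concat, star, and solve are illustrative choices, not a standard API.

```python
# Each state i contributes an equation X_i = union_j coeff[i][j]·X_j ∪ const[i].
# Arden's rule (X = A·X ∪ B  =>  X = A*·B) is used to eliminate each X_i in turn.
# Regexes are plain strings; None is the empty language, "" is the empty word.

def union(r, s):
    if r is None:
        return s
    if s is None:
        return r
    return f"({r}|{s})"

def concat(r, s):
    if r is None or s is None:
        return None
    return r + s

def star(r):
    if r is None or r == "":
        return ""  # both the empty language and the empty word star to the empty word
    return f"({r})*"

def solve(coeff, const):
    """Solve X_i = union_j coeff[i][j]·X_j ∪ const[i]; return a regex for X_0."""
    n = len(const)
    coeff = [row[:] for row in coeff]
    const = const[:]
    for i in reversed(range(n)):
        # Arden's rule applied to the self-loop of X_i.
        a_star = star(coeff[i][i])
        const[i] = concat(a_star, const[i])
        coeff[i] = [None if j == i else concat(a_star, coeff[i][j]) for j in range(n)]
        # Substitute the solved X_i into the remaining equations.
        for k in range(i):
            c = coeff[k][i]
            if c is None:
                continue
            const[k] = union(const[k], concat(c, const[i]))
            coeff[k] = [None if j == i else union(coeff[k][j], concat(c, coeff[i][j]))
                        for j in range(n)]
    return const[0]

# Example automaton: state 0 (start), state 1 (accepting),
# transitions 0 --a--> 1 and 1 --b--> 1.
# Equations: X0 = a·X1,  X1 = b·X1 ∪ ε.  Expected regex: a(b)*.
coeff = [[None, "a"],
         [None, "b"]]
const = [None, ""]           # "" marks state 1 as accepting
print(solve(coeff, const))   # -> a(b)*
```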
See also
Regular expression
Nondeterministic finite automaton
Notes
References
Arden, D. N. (1960). Delayed logic and finite state machines, Theory of Computing Machine Design, pp. 1-35, University of Michigan Press, Ann Arbor, Michigan, USA.
John E. Hopcroft and Jeffrey D. Ullman, Introduction to Automata Theory, Languages, and Computation, Addison-Wesley Publishing, Reading, Massachusetts, 1979. Chapter 2: Finite Automata and Regular Expressions, p. 54.
Arden, D.N. An Introduction to the Theory of Finite State Machines, Monograph No. 12, Discrete System Concepts Project, 28 June 1965.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is one common benchmark used to define a small state in terms of population?
A. Less than 500,000 people
B. Less than 1 million people
C. Less than 1.5 million people
D. Less than 2 million people
Answer:
|
B. Less than 1 million people
|
Relavent Documents:
Document 0:::
City Brain (sometimes treated as an improper noun by sources and rendered as city brain; ) is a software system that utilizes artificial intelligence and data collection for urban management. Developed by Chinese tech company Alibaba Group, the City Brain systems have been adopted by local governments throughout the country as well as in Kuala Lumpur, the capital of Malaysia. Currently, these systems are mostly used for traffic management, but they have been used and expanded to cover other needs as well. City Brain systems have been promoted as making cities "smarter" and improving their residents' quality of life. However, the systems have also faced criticism for issues relating to privacy, cost, and their use of surveillance.
History and overview
The first City Brain system was announced and developed in 2016 by Alibaba Cloud for its home city of Hangzhou. First aiming to curb the city's high level of traffic congestion, it was initially "given control" of traffic lights in Xiaoshan District, where it increased traffic speed by 15%. This led to its adoption by the rest of the city in 2017, where it has seen praise for continuing to reduce congestion and aiding first responders to travel faster. In addition to traffic light management, the system also analyzes camera feeds to detect accidents and alert authorities. In the following years, many other local governments in China sought their own such systems.
In addition, Kuala Lumpur, the capital of Malaysia, announced its adoption of a City Brain system in 2018 to deal with traffic. City Brain's use in Kuala Lumpur was Alibaba's first overseas implementation of the platform. By September 2019, Alibaba stated that 22 Chinese cities, including Macau, as well as Kuala Lumpur, had City Brain systems. The scope of these systems has expanded, with some being used to track pollution, alert authorities to illegal gatherings and possible conflicts of interest/corruption in government outsourcing, and aid in contact tr
Document 1:::
Sir Francis Rolle (1630–1686) was an English lawyer and politician who sat in the House of Commons at various times between 1656 and 1685.
Early life
Rolle was the only son of Henry Rolle of Shapwick in Somerset, who was Chief Justice of the King's Bench, and his wife Margaret Bennett. He entered Inner Temple in 1646 and was admitted at Emmanuel College, Cambridge on 25 January 1647. He was called to the bar in 1653.
Career
In 1656, Rolle was elected Member of Parliament for Somerset in the Second Protectorate Parliament. He succeeded his father to the estate at Shapwick in 1656 and became JP for Somerset until July 1660. In 1657 he was commissioner for assessment for Somerset and Hampshire. He was commissioner for militia in 1659 and JP for Hampshire from 1659 to July 1660. He was commissioner for assessment for Somerset and Hampshire from January 1660 to 1680 and commissioner for militia in March 1660. In April 1660 he was elected MP for Bridgwater in the Convention Parliament. He was commissioner for sewers for Somerset in August 1660 and was JP for Somerset from September 1660 to 1680. He was High Sheriff of Hampshire from 1664 to 1665 and was knighted on 1 March 1665. He also became a freeman of Portsmouth in 1665. In 1669 he was briefly MP for Bridgwater again but was removed on petition. He was High Sheriff of Somerset from 1672 to 1673 and was commissioner for inquiry for Finkley forest and for New Forest in 1672. He was commissioner for inquiry for the New Forest again in 1673. In 1675 he was elected MP for Hampshire for the Cavalier Parliament and was made freeman of Winchester in 1675. He was commissioner for recusants for Somerset and Hampshire in 1675 and commissioner for inquiry for the New Forest in 1676 and in 1679. In May 1679 he was elected MP for Bridgwater, and in October 1679 he was elected MP for Hampshire. He was elected MP for Hampshire again in 1681. In 1685 he was committed to the Tower of London on 26 June before he could join Monmouth who
Document 2:::
Cadherin EGF LAG seven-pass G-type receptor 3 is a protein that in humans is encoded by the CELSR3 gene.
The protein encoded by this gene is a member of the flamingo subfamily, part of the cadherin superfamily. The flamingo subfamily consists of nonclassic-type cadherins; a subpopulation that does not interact with catenins. The flamingo cadherins are located at the plasma membrane and have nine cadherin domains, seven epidermal growth factor-like repeats and two laminin A G-type repeats in their ectodomain. They also have seven transmembrane domains, a characteristic unique to this subfamily. It is postulated that these proteins are receptors involved in contact-mediated communication, with cadherin domains acting as homophilic binding regions and the EGF-like domains involved in cell adhesion and receptor-ligand interactions. The specific function of this particular member has not been determined.
External links
Document 3:::
A contract manufacturer (CM) is a manufacturer that contracts with a firm for components or products (in which case it is a turnkey supplier). It is a form of outsourcing. A contract manufacturer performing packaging operations is called copacker or a contract packager. Brand name companies focus on product innovation, design and sales, while the manufacturing takes place in independent factories (the turnkey suppliers).
Most turnkey suppliers specialize in simply manufacturing physical products, but some are also able to handle a significant part of the design and customization process if needed. Some turnkey suppliers specialize in one base component (e.g. memory chips) or a base process (e.g. plastic molding).
Business model
In a contract manufacturing business model, the hiring firm approaches the contract manufacturer with a design or formula. The contract manufacturer will quote the parts based on processes, labor, tooling, and material costs. Typically a hiring firm will request quotes from multiple CMs. After the bidding process is complete, the hiring firm will select a source, and then, for the agreed-upon price, the CM acts as the hiring firm's factory, producing and shipping units of the design on behalf of the hiring firm.
Job production is, in essence, manufacturing on a contract basis, and thus it forms a subset of the larger field of contract manufacturing. But the latter field also includes, in addition to jobbing, a higher level of outsourcing in which a product-line-owning company entrusts its entire production to a contractor, rather than just outsourcing parts of it.
Industries that use the practice
Many industries use this process, especially the aerospace, defense, computer, semiconductor, energy, medical, food manufacturing, personal care, packaging, and automotive fields. Some types of contract manufacturing include CNC machining, complex assembly, aluminum die casting, grinding, broaching, gears, and forging. The pharmaceutical ind
Document 4:::
The 2N7000 is an N-channel, enhancement-mode MOSFET used for low-power switching applications.
The 2N7000 is a widely available and popular part, often recommended as useful and common components to have around for hobbyist use.
Packaged in a TO-92 enclosure, the 2N7000 is rated to withstand 60 volts and can switch 200 milliamps.
Applications
The 2N7000 has been referred to as a "FETlington" and as an "absolutely ideal hacker part." The word "FETlington" is a reference to the Darlington-transistor-like saturation characteristic.
A typical use of these transistors is as a switch for moderate voltages and currents, including as drivers for small lamps, motors, and relays. In switching circuits, these FETs can be used much like bipolar junction transistors, but have some advantages:
high input impedance of the insulated gate means almost no gate current is required
consequently no current-limiting resistor is required in the gate input
MOSFETs, unlike PN junction devices (such as LEDs), can be paralleled because their resistance increases with temperature, although the quality of this load balance is largely dependent on the internal chemistry of each individual MOSFET in the circuit
The main disadvantages of these FETs over bipolar transistors in switching are the following:
susceptibility to cumulative damage from static discharge prior to installation
circuits with external gate exposure require a protection gate resistor or other static discharge protection
Non-zero ohmic response when driven to saturation, as compared to a constant junction voltage drop in a bipolar junction transistor
Other devices
Many other n-channel MOSFETs exist. Some part number are 2N7002, BS170, VQ1000J, and VQ1000P. They may have different pin outs, packages, and electrical properties. The BS250P is "a good p-channel analog of the 2N7000."
References
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the genus of the Bean leafroll virus (BLRV)?
A. Luteovirus
B. Retrovirus
C. Adenovirus
D. Picornavirus
Answer:
|
A. Luteovirus
|
Relavent Documents:
Document 0:::
Domination analysis of an approximation algorithm is a way to estimate its performance, introduced by Glover and Punnen in 1997. Unlike the classical approximation ratio analysis, which compares the numerical quality of a calculated solution with that of an optimal solution, domination analysis involves examining the rank of the calculated solution in the sorted order of all possible solutions. In this style of analysis, an algorithm is said to have dominance number or domination number K, if there exists a subset of K different solutions to the problem among which the algorithm's output is the best. Domination analysis can also be expressed using a domination ratio, which is the fraction of the solution space that is no better than the given solution; this number always lies within the interval [0,1], with larger numbers indicating better solutions. Domination analysis is most commonly applied to problems for which the total number of possible solutions is known and for which exact solution is difficult.
For instance, in the Traveling salesman problem, there are (n-1)! possible solutions for a problem instance with n cities. If an algorithm can be shown to have dominance number close to (n-1)!, or equivalently to have domination ratio close to 1, then it can be taken as preferable to an algorithm with lower dominance number.
If it is possible to efficiently find random samples of a problem's solution space, as it is in the Traveling salesman problem, then it is straightforward for a randomized algorithm to find a solution that with high probability has high domination ratio: simply construct a set of samples and select the best solution from among them. (See, e.g., Orlin and Sharma.)
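As an illustration of the sampling idea just described, here is a minimal Python sketch (not from the original article). The function names tour_length and best_of_random_tours and the small distance matrix are illustrative assumptions.

```python
# Draw k uniformly random tours for a TSP instance and keep the shortest one.
# The returned tour is at least as good as every sampled tour, so with high
# probability its domination number is on the order of k.
import random

def tour_length(tour, dist):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def best_of_random_tours(dist, k, seed=0):
    """dist is an n x n distance matrix; returns the best of k random tours."""
    rng = random.Random(seed)
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for _ in range(k):
        tour = list(range(n))
        rng.shuffle(tour)
        length = tour_length(tour, dist)
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# Example with a small symmetric instance.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(best_of_random_tours(dist, k=1000))
```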
The dominance number described here should not be confused with the domination number of a graph, which refers to the number of vertices in the smallest dominating set of the graph.
Recently, a growing number of articles in which domination analysis has been applied to assess the
Document 1:::
Predispositioning theory, in the field of decision theory and systems theory, is a theory focusing on the stages between a complete order and a complete disorder.
Predispositioning theory was founded by Aron Katsenelinboigen (1927–2005), a professor in the Wharton School who dealt with indeterministic systems such as chess, business, economics, and other fields of knowledge, and who made an essential step forward in the elaboration of styles and methods of decision-making.
Predispositioning theory
Predispositioning theory is focused on the intermediate stage between a complete order and a complete disorder. According to Katsenelinboigen, the system develops gradually, going through several stages, starting with incomplete and inconsistent linkages between its elements and ending with complete and consistent ones.
"Mess. The zero phase can be called a mess because it contains no linkages between the system's elements. Such a definition of mess as ‘a disorderly, un-tidy, or dirty state of things’ we find in Webster's New World Dictionary. (...)
Chaos. Mess should not be confused with the next phase, chaos, as this term is understood today. Arguably, chaos is the first phase of indeterminism that displays sufficient order to talk of the general problem of system development.
The chaos phase is characterized by some ordering of accumulated statistical data and the emergence of the basic rules of interactions of inputs and outputs (not counting boundary conditions). Even such a seemingly limited ordering makes it possible to fix systemic regularities of the sort shown by Feigenbaum numbers and strange attractors.
(...) Different types of orderings in the chaos phase may be brought together under the notion of directing, for they point to a possible general direction of system development and even its extreme states.
But even if a general path is known, enormous difficulties remain in linking algorithmically the present state with the final one and in operationalizing t
Document 2:::
Torque ripple is an effect seen in many electric motor designs, referring to a periodic increase or decrease in output torque as the motor shaft rotates. It is measured as the difference in maximum and minimum torque over one complete revolution, generally expressed as a percentage.
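As a rough illustration of the measurement just described, the following Python sketch (not from the original article) computes ripple from sampled torque values. Normalizing the peak-to-peak variation by the mean torque is one common convention and is assumed here, as are the sample values.

```python
# Torque ripple from torque samples taken over one shaft revolution,
# expressed as a percentage of the mean torque (one common convention;
# other normalizations are also used in practice).

def torque_ripple_percent(torque_samples):
    t_max, t_min = max(torque_samples), min(torque_samples)
    t_mean = sum(torque_samples) / len(torque_samples)
    return 100.0 * (t_max - t_min) / t_mean

# Example: torque (N·m) sampled at several rotor positions over one revolution.
samples = [9.8, 10.4, 10.1, 9.6, 10.3, 9.9]
print(f"Torque ripple: {torque_ripple_percent(samples):.1f}%")  # about 8%
```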
Examples
A common example is "cogging torque" due to slight asymmetries in the magnetic field generated by the motor windings, which causes variations in the reluctance depending on the rotor position. This effect can be reduced by careful selection of the winding layout of the motor, or through real-time control of the power delivery.
References
"Torque ripple", Emetor.
External links
Document 3:::
Nuisance wildlife management is the selective removal of problem individuals or populations of specific species of wildlife. Other terms for the field include wildlife damage management, wildlife control, and animal damage control. Some wild animal species may get used to human presence, causing property damage or risking the transfer of diseases (zoonoses) to humans or pets. Many wildlife species coexist with humans very successfully, such as commensal rodents which have become more or less dependent on humans.
Common nuisance species
Wild animals that can cause problems in homes, gardens or yards include armadillos, skunks, boars, foxes, squirrels, snakes, rats, groundhogs, beavers, opossums, raccoons, bats, moles, deer, mice, coyotes, bears, ravens, seagulls, woodpeckers and pigeons. In the United States, some of these species are protected, such as bears, ravens, bats, deer, woodpeckers, and coyotes, and a permit may be required to control some species.
Conflicts between people and wildlife arise in certain situations, such as when an animal's population becomes too large for a particular area to support. Human-induced changes in the environment will often result in increased numbers of a species. For example, piles of scrap building material make excellent sites where rodents can nest. Food left out for household pets is often equally attractive to some wildlife species. In these situations, the wildlife have suitable food and habitat and may become a nuisance.
The Humane Society of the United States (HSUS) provides strategies for the control of species such as bats, bears, chipmunks, coyotes, deer, mice, raccoons and snakes.
Control methods
The most commonly used methods for controlling nuisance wildlife around homes and gardens include exclusion, habitat modification, repellents, toxic baits, glue boards, traps and frightening.
Exclusion
Exclusion techniques refer to the act of sealing a home to keep out wildlife such as rodents (squirrels, rats, mice
Document 4:::
Sea balls (also known as Aegagropila or Pillae marinae) are tightly packed balls of fibrous marine material, recorded from the seashore. They vary in size but are generally up to in size. In Edgartown, Massachusetts a longish sea ball around in diameter has been found. Others have been reported at Dingle Bay in Ireland and at Valencia, Spain. They may occur in hundreds and are composed of plant material, in majority seagrass rhizome (especially from the genus Posidonia) netting torn out by water movement.
In recent years they have been shown to contain more and more plastic marine debris and even microplastics.
Gallery
References
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is torque ripple in electric motors primarily characterized by?
A. A constant output torque
B. A periodic increase or decrease in output torque
C. A fixed maximum torque value
D. A reduction in motor efficiency
Answer:
|
B. A periodic increase or decrease in output torque
|
Relavent Documents:
Document 0:::
A suitport or suitlock is an alternative technology to an airlock, designed for use in hazardous environments including in human spaceflight, especially planetary surface exploration. Suitports present advantages over traditional airlocks in terms of mass, volume, and ability to mitigate contamination by—and of—the local environment.
Operation
In a suitport system, a rear-entry space suit is attached and sealed against the outside of a spacecraft, space habitat, or pressurized rover, facing outward. To begin an extra-vehicular activity (EVA), an astronaut in shirt-sleeves first enters the suit feet-first from inside the pressurized environment, and closes and seals the space suit backpack and the vehicle's hatch (which seals to the backpack for dust containment). The astronaut then unseals and separates the suit from the vehicle, and is ready to perform an EVA.
To re-enter the vehicle, the astronaut backs up to the suitport and seals the suit to the vehicle, before opening the hatch and backpack and transferring back into the vehicle. If the vehicle and suit do not operate at the same pressure, it will be necessary to equalize the two pressures before the hatch can be opened.
Advantages and disadvantages
Advantages
Suitports carry three major advantages over traditional airlocks. First, the mass and volume required for a suitport is significantly less than that required for an airlock. Launch mass is at a premium in modern chemical rocket-powered launch vehicles, at an estimated cost of US$60,000 per kilogram delivered to the lunar surface.
Secondly, suitports can eliminate or minimize the problem of dust migration. During the Apollo program, it was discovered that the lunar soil is electrically charged, and adheres readily to any surface with which it comes into contact, a problem magnified by the sharp, barb-like shapes of the dust particles. Lunar dust may be harmful in several ways:
The abrasive nature of the dust particles may rub and wear down surface
Document 1:::
The persistent random walk is a modification of the random walk model.
A population of particles is distributed on a line, each moving with constant speed v, and each particle's velocity may be reversed at any moment. If the time between reversals is exponentially distributed with rate λ, the population density p(x, t) evolves according to ∂²p/∂t² + 2λ ∂p/∂t = v² ∂²p/∂x², which is the telegrapher's equation. (The speed and rate symbols here are reconstructed from context.)
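A minimal Monte Carlo sketch (not from the original article) of the process just described: each particle moves at speed v and reverses direction at exponentially distributed times with rate lam. The parameter values and function name are illustrative assumptions.

```python
# Simulate independent persistent random walkers on a line; their positions at
# time t_end approximate the density p(x, t_end) governed by the telegrapher's
# equation above.
import random

def simulate_particle(v=1.0, lam=0.5, t_end=10.0, seed=None):
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    direction = rng.choice([-1.0, 1.0])
    while t < t_end:
        dt = rng.expovariate(lam)      # time until the next velocity reversal
        dt = min(dt, t_end - t)        # do not overshoot the end time
        x += direction * v * dt
        t += dt
        direction = -direction         # reverse the velocity
    return x

# Positions of many independent particles approximate the density p(x, t_end).
positions = [simulate_particle(seed=i) for i in range(10000)]
print(sum(positions) / len(positions))  # mean should be close to 0
```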
Document 2:::
VFDB, also known as the Virulence Factor Database, is a database that provides scientists quick access to virulence factors in bacterial pathogens. It can be navigated and browsed by genus or by keyword. A BLAST tool is provided for searching against known virulence factors. VFDB contains a collection of 16 important bacterial pathogens. Perl scripts were used to extract positions and sequences of virulence factors from GenBank. Clusters of Orthologous Groups (COG) was used to update incomplete annotations. Further information was obtained from NCBI. VFDB was built on Windows operating systems on DELL PowerEdge 1600SC servers.
Document 3:::
ISO 31-11:1992 was the part of international standard ISO 31 that defines mathematical signs and symbols for use in physical sciences and technology. It was superseded in 2009 by ISO 80000-2:2009 and subsequently revised in 2019 as ISO 80000-2:2019.
It included definitions for symbols for mathematical logic, set theory, arithmetic and complex numbers, functions and special functions and values, matrices, vectors, and tensors, coordinate systems, and miscellaneous mathematical relations.
References and notes
Document 4:::
XSIL (Extensible Scientific Interchange Language) is an XML-based transport language for scientific data, supporting the inclusion of both in-file data and metadata. The language comes with an extensible Java object model. The language's elementary objects include Param (arbitrary association between a keyword and a value), Array, Table (a set of column headings followed by a set of records), and Stream, which enables one to either encapsulate data inside the XSIL file or point to an external data source.
BFD is an XML dialect based on XSIL.
External links
XSIL: Extensible Scientific Interchange Language
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the main function of GPR126 in the development of Schwann cells, as described in the text?
A. It inhibits axon sorting.
B. It promotes myelin gene activity through cAMP induction.
C. It decreases cAMP levels in Schwann cells.
D. It blocks VEGF signaling in endothelial cells.
Answer:
|
B. It promotes myelin gene activity through cAMP induction.
|
Relavent Documents:
Document 0:::
Palsy is a medical term which refers to various types of paralysis or paresis, often accompanied by weakness and the loss of feeling and uncontrolled body movements such as shaking. The word originates from the Anglo-Norman paralisie, parleisie et al., from the accusative form of Latin paralysis, from Ancient Greek παράλυσις (parálusis), from παραλύειν (paralúein, "to disable on one side"), from παρά (pará, "beside") + λύειν (lúein, "loosen"). The word is longstanding in the English language, having appeared in the play Grim the Collier of Croydon, reported to have been written as early as 1599.
In some editions, the Bible passage of Luke 5:18 is translated to refer to "a man which was taken with a palsy". More modern editions simply refer to a man who is paralysed. Although the term has historically been associated with paralysis generally, it "is now almost always used in connection to the word cerebral—meaning the brain".
Specific kinds of palsy include:
Bell's palsy, partial facial paralysis
Bulbar palsy, impairment of cranial nerves
Cerebral palsy, a neural disorder caused by intracranial lesions
Conjugate gaze palsy, a disorder affecting the ability to move the eyes
Erb's palsy, also known as brachial palsy, involving paralysis of an arm
Spinal muscular atrophy, also known as wasting palsy
Progressive supranuclear palsy, a degenerative disease
Squatter's palsy, a common name for bilateral peroneal nerve palsy that may be triggered by sustained squatting
Third nerve palsy, involving cranial nerve III
References
External links
Document 1:::
Micromodels are a type of card model or paper model that was popular during the 1940s and 1950s in the United Kingdom. In 1941, Geoffrey Heighway invented and marketed a new concept in card models. He took the available concept of card models and miniaturized them so that an entire train or building could be wrapped in a packet of post cards. These packets usually sold for about a shilling, or pocket change.
When he released his product in the 1940s it caught on and Micro-modeling became a national pastime. The slogan "Your Workshop in a Cigar Box" is still widely quoted today among paper modelers. Their other slogan was: Three-Dimensional, Volumetric. During the war years, the models were especially popular as they were extremely portable and builders were able to work on them anywhere. Anecdotes say people liked them because they were small enough to take with you, so if you got stuck in a bomb shelter during an air raid, you had something interesting to do. Several of the railway Micromodels were distributed in Australia in the form of small stapled books, rather than individual postcards.
The subjects of original Micromodels included among other things, Trains, Planes, Ships, Boats, Buildings, Cars and a Dragon. There were 82 original Micromodel Packets. From these packets one could make up 121 separate models. Micromodels also released some items including how-to booklets and powdered glue. Heighway continued to release models from 1941 until he became seriously ill in 1956. He died in 1959. There were several models that Heighway had mentioned in his catalogs and advertising, that for one reason or another were never released. These were known as the "Might have beens."
Micromodels are considered collectable, and some rare originals can only be had for thousands of dollars. Others are more easily obtainable.
There have been other products similar to or based on Micromodels, including Modelcraft Ltd. and a set of books based on enlargements of some of He
Document 2:::
PcrA, standing for plasmid copy reduced, is a helicase that was originally discovered in a screen for chromosomally encoded genes that affect plasmid rolling circle replication in the Gram-positive pathogen Staphylococcus aureus.
Biological functions
Genetic and biochemical studies have shown that the helicase is essential for plasmid rolling-circle replication and repair of DNA damage caused by exposure to ultraviolet radiation. It catalyzes the unwinding of double-stranded plasmid DNA that has been nicked at the replication origin by the replication initiation protein. Genetic and biochemical studies have also shown that the helicase plays an important role in cell-survival by regulating the levels of RecA-mediated recombination in Gram-positive bacteria.
Biochemical properties
The helicase is a monomeric translocase and utilizes ATP to unwind DNA. The preferred substrates are single-stranded DNA containing 3' overhangs. The processivity of PcrA is increased in the presence of plasmid replication initiation protein.
Crystal Structure
The structure of the helicase has been solved at high resolution and indicates "inchworming" as the mechanism of translocation on single-stranded DNA. A Mexican-wave model has been proposed based on the changes in conformation of the helicase observed in the product versus substrate complex.
Classification
PcrA belongs to the SF1 superfamily of helicases, which also include the E. coli helicases UvrD and Rep and the eukaryotic helicase Srs2.
Literature
External links
Document 3:::
An heirloom plant, heirloom variety, heritage fruit (Australia and New Zealand), or heirloom vegetable (especially in Ireland and the UK) is an old cultivar of a plant used for food that is grown and maintained by gardeners and farmers, particularly in isolated communities of the Western world. These were commonly grown during earlier periods in human history, but are not used in modern large-scale agriculture.
In some parts of the world, it is illegal to sell seeds of cultivars that are not listed as approved for sale. The Henry Doubleday Research Association, now known as Garden Organic, responded to this legislation by setting up the Heritage Seed Library to preserve seeds of as many of the older cultivars as possible. However, seed banks alone have not been able to provide sufficient insurance against catastrophic loss. In some jurisdictions, like Colombia, laws have been proposed that would make seed saving itself illegal.
Many heirloom vegetables have kept their traits through open pollination, while fruit varieties such as apples have been propagated over the centuries through grafts and cuttings. The trend of growing heirloom plants in gardens has been returning in popularity in North America and Europe.
Origin
Before the industrialization of agriculture, a much wider variety of plant foods were grown for human consumption, largely due to farmers and gardeners saving seeds and cuttings for future planting. From the 16th century through the early 20th century, the diversity was huge. Old nursery catalogues were filled with plums, peaches, pears and apples of numerous varieties, and seed catalogs offered legions of vegetable varieties. Valuable and carefully selected seeds were sold and traded using these catalogs along with useful advice on cultivation. Since World War II, agriculture in the industrialized world has mostly consisted of food crops which are grown in large, monocultural plots. In order to maximize consistency, few varieties of each type of crop are grown. These varieties are often selected for their productivity and their ability to ripen at the same time while withstanding mechanical picking and cross-country shipping, as well as their tolerance to drought, frost, or pesticides. This form of agriculture has led to a 75% drop in crop genetic diversity.
While heirloom gardening has maintained a niche community, in recent years it has seen a resurgence in response to the industrial agriculture trend. In the Global South, heirloom plants are still widely grown, for example, in the home gardens of South and Southeast Asia. Before World War II, the majority of produce grown in the United States was heirlooms.
In the 21st century, numerous community groups all over the world are working to preserve historic varieties to make a wide variety of fruits, vegetables, herbs, and flowers available again to the home gardener, by renovating old orchards, sourcing historic fruit varieties, engaging in seed swaps, and encouraging community participation. Heirloom varieties are an increasingly popular way for gardeners and small farmers to connect with traditional forms of agriculture and the crops grown in these systems. Growers also cite lower costs associated with purchasing seeds, improved taste, and perceived improved nutritional quality as reasons for growing heirlooms. In many countries, hundreds or even thousands of heirloom varieties are commercially available for purchase or can be obtained through seed libraries and banks, seed swaps, or community events. Heirloom varieties may also be well suited for market gardening, farmer's market sales, and CSA programs.
A primary drawback to growing heirloom varieties is lower disease resistance compared to many commercially available hybrid varieties. Common disease problems, such as verticillium and fusarium wilt, may affect heirlooms more significantly than non-heirloom crops. Heirloom varieties may also be more delicate and perishable. In recent years, research has been conducted into improving the disease resistance of heirlooms, particularly tomatoes, by crossing them with resistant hybrid varieties.
Requirements
The term heirloom to describe a seed variety was first used in the 1930s by horticulturist and vegetable grower J.R. Hepler to describe bean varieties handed down through families. However, the current definition and use of the word heirloom to describe plants is fiercely debated.
One school of thought places an age or date point on the cultivars. For instance, one school says the cultivar must be over 100 years old, others 50 years old, and others prefer the date of 1945, which marks the end of World War II and roughly the beginning of widespread hybrid use by growers and seed companies. Many gardeners consider 1951 to be the latest year a plant could have originated and still be called an heirloom, since that year marked the widespread introduction of the first hybrid varieties. It was in the 1970s that hybrid seeds began to proliferate in the commercial seed trade. Some heirloom varieties are much older; some are apparently pre-historic.
Another way of defining heirloom cultivars is to use the definition of the word heirloom in its truest sense. Under this interpretation, a true heirloom is a cultivar that has been nurtured, selected, and handed down from one family member to another for many generations.
Additionally, there is another category of cultivars that could be classified as "commercial heirlooms": cultivars that were introduced many generations ago and were of such merit that they have been saved, maintained and handed down—even if the seed company has gone out of business or otherwise dropped the line. Additionally, many old commercial releases have actually been family heirlooms that a seed company obtained and introduced.
Regardless of a person's specific interpretation, most authorities agree that heirlooms, by definition, must be open-pollinated. They may also require open-pollinated varieties to have been bred and stabilized using classic breeding practices. While there is currently one genetically modified tomato available to home growers, it is generally agreed that no genetically modified organisms can be considered heirloom cultivars. Another important point of discussion is that without the ongoing growing and storage of heirloom plants, the seed companies and the government will control all seed distribution. Most, if not all, hybrid plants, if they do not have sterile seeds and can be regrown, will not be the same as the original hybrid plant, thus ensuring the dependency on seed distributors for future crops.
Writer and author Jennifer A. Jordan describes the term "heirloom" as a culturally constructed concept that is only relevant due to the relatively recent loss of many crop varieties: "It is only with the rise of industrial agriculture that [the] practice of treating food as a literal heirloom has disappeared in many parts of the world—and that is precisely when the heirloom label emerges. ...[T]he concept of an heirloom becomes possible only in the context of the loss of actual heirloom varieties, of increased urbanization and industrialization as fewer people grow their own food, or at least know the people who grow their food."
Collection sites
The heritage fruit trees that exist today are clonally descended from trees of antiquity. Heirloom roses are sometimes collected (nondestructively as small cuttings) from vintage homes and from cemeteries, where they were once planted at gravesites by mourners and left undisturbed in the decades since. Modern production methods and the rise in population have largely supplanted this practice.
UK and EU law and national lists
In the UK and Europe, it is thought that many heritage vegetable varieties (perhaps over 2,000) have been lost since the 1970s, when EEC (now EU) laws were passed making it illegal to sell any vegetable cultivar not on the national list of any EEC country. This system was set up to help eliminate seed suppliers selling one seed as another, to guarantee that seeds were true to type, and to ensure that they germinated consistently. Thus, there were stringent tests to assess varieties, with a view to ensuring they remain the same from one generation to the next. However, unique varieties were lost for posterity.
These tests (called DUS) assess "distinctness", "uniformity", and "stability". But since some heritage cultivars are not necessarily uniform from plant to plant, or indeed within a single plant—a single cultivar—this has been a sticking point. "Distinctness" has been a problem, moreover, because many cultivars have several names, perhaps coming from different areas or countries (e.g., carrot cultivar Long Surrey Red is also known as "Red Intermediate", "St. Valery", and "Chertsey"). However, it has been ascertained that some of these varieties that look similar are in fact different cultivars. On the other hand, two that were known to be different cultivars were almost identical to each other, thus one would be dropped from the national list in order to clean it up.
Another problem has been the fact that it is somewhat expensive to register and then maintain a cultivar on a national list. Therefore, if no seed breeder or supplier thinks it will sell well, no one will maintain it on a list, and so the seed will not be re-bred by commercial seed breeders.
In recent years, progress has been made in the UK to set up allowances and less stringent tests for heritage varieties on a B national list, but this is still under consideration.
When heirloom plants are not being sold, however, laws are often more lenient. Because most heirloom plants are at least 50 years old and grown and swapped in a family or community they fall under the public domain. Another worldwide alternative is to submit heirloom seeds to a seedbank. These public repositories in turn maintain and disperse these genetics to anyone who will use them appropriately. Typically, approved uses are breeding, study, and sometimes, further distribution.
US state law
There are a variety of intellectual property protections and laws that are applied to heirloom seeds, which can often differ greatly between states. Plant patents are based on the Plant Patent Act of 1930, which protects plants grown from cuttings and division, while under intellectual property rights, the Plant Variety Protection Act of 1970 (PVPA) shields non-hybrid, seed-propagated plants. However, seed breeders can only shelter their variety for 20 years under PVPA. There are also a couple of exceptions under the PVPA which allow growers to cultivate, save seeds, and sell the resultant crops, and give breeders allowances to use PVPA protected varieties as starter material as long as it constitutes less than half of the breeding material. There are also seed licenses which may place restrictions on the use of seeds or trademarks that guard against the use of certain plant variety names.
In 2014, the Pennsylvania Department of Agriculture caused a seed-lending library to shut down and promised to curtail any similar efforts in the state. The lending library, hosted by a town library, allowed gardeners to "check out" a package of open-pollinated seed, and "return" seeds kept from the crop grown from those seeds. The Department of Agriculture said that this activity raises the possibility of "agro-terrorism", and that a Seed Act of 2004 requires the library staff to test each seed packet for germination rate and whether the seed was true to type. In 2016 the department reversed this decision, and clarified that seed libraries and non-commercial seed exchanges are not subject to the requirements of the Seed Act.
Food justice
In disputed Palestine, some heirloom growers and seed savers see themselves as contributing a form of resistance against the privatization of agriculture, while also telling stories of their ancestors, defying violence, and encouraging rebellion. The Palestinian Heirloom Seed Library (PHSL), founded by writer and activist Vivien Sansour, breeds and maintains a selection of traditional crops from the region, seeking to "preserve and promote heritage and threatened seed varieties, traditional Palestinian farming practices, and the cultural stories and identities associated with them." Some scholars have additionally framed the increasing control of Israeli agribusiness corporations over Palestinian seed supplies as an attempt to suppress food sovereignty and as a form of subtle ecocide.
In January 2012, a conflict over seed access erupted in Latvia when two undercover investigators from the Latvian State Plant Protection Agency charged an independent farm with the illegal sale of unregistered heirloom tomato seeds. The agency suggested that the farm choose a small number of varieties to officially register and to abandon the other approximately 800 varieties grown on the farm. This infuriated customers as well as members of the general public, many of whom spoke out against what was seen as an overly strict interpretation of the law. The scandal further escalated with a series of hearings held by agency officials, during which residents called for a reexamination of seed registration laws and demanded greater citizen participation in legal and political matters relating to agriculture.
In Peru and Ecuador, genes from heirloom tomato varieties and wild tomato relatives have been the subject of patent claims by the University of Florida. These genes have been investigated for their usefulness in increasing drought and salt tolerance and disease resistance, as well as improving flavor, in commercial tomatoes. The American genomics development company Evolutionary Genomics identified genes found in Galapagos tomatoes that may increase sweetness by up to 25% and as of 2023 has filed an international patent application on the usage of these genes.
Native heirloom and landrace crop varieties and their stewards are sometimes subject to theft and biopiracy. Biopiracy may negatively impact communities that grow these heirloom varieties through loss of profits and livelihoods, as well as litigation. One infamous example is the case of Enola bean patent, in which a Texas corporation collected heirloom Mexican varieties of the scarlet runner bean and patented them, and then sued the farmers who had supplied the seeds in the first place to prevent them from exporting their crops to the US. The 'Enola' bean was granted 20-year patent protection in 1999, but subsequently underwent numerous legal challenges on the grounds that the bean was not a novel variety. In 2004, DNA fingerprinting techniques were used to demonstrate that 'Enola' was functionally identical to a yellow bean grown in Mexico known as Azufrado Peruano 87. The case has been widely cited as a prime example of biopiracy and misapplication of patent rights.
Native communities in the United States and Mexico have drawn particular attention to the importance of traditional and culturally appropriate seed supplies. The Traditional Native American Farmers Association (TNAFA) is an Indigenous organization aiming to "revitalize traditional agriculture for spiritual and human need" and advocating for traditional methods of growing, preparing, and consuming plants. In concert with other organizations, TNAFA has also drafted a formal Declaration of Seed Sovereignty and worked with legislators to protect Indigenous heritage seeds.
Indigenous peoples are also at the forefront of the seed rematriation movement to bring lost seed varieties back to their traditional stewards. Rematriation efforts are frequently directed at institutions such as universities, museums, and seed banks, which may hold Indigenous seeds in their collection that are inaccessible to the communities from which they originate. In 2018, the Seed Savers Exchange, the largest publicly accessible seed bank in the United States, rematriated several heirloom seed varieties back to Indigenous communities.
Activism
Activism surrounding food justice, farmers' rights, and seed sovereignty frequently overlap with the promotion and usage of heirloom crop varieties. International peasant farmers' organization La Via Campesina is credited with the first usage of the term "food sovereignty" and campaigns for agrarian reform, seed freedom, and farmers' rights. It currently represents more than 150 social movement organizations in 56 countries. Numerous other organizations and collectives worldwide participate in food sovereignty activism, including the US Food Sovereignty Alliance, Food Secure Canada, and the Latin American Seeds Collective in North and South America; the African Center for Biodiversity (ACB), the Coalition for the Protection of African Genetic Heritage (COPAGEN), and the West African Peasant Seed Committee (COASP) in Africa; and the Alliance for Sustainable and Holistic Agriculture (ASHA), Navdanya, and the Southeast Asia Regional Initiatives for Community Empowerment (SEARICE) in Asia. In a 2022 BBC interview, Indian environmental activist and scholar Vandana Shiva stated that "Seed is the source of life. Seed is the source of food. To protect food freedom, we must protect seed freedom."
Other writers have pushed back against the promotion and proliferation of heirloom crop varieties, connecting their usage to the impacts of colonialism. Quoting American author and educator Martín Prechtel in his article in The Guardian, Chris Smith writes that "To keep seeds alive, clear, strong and open-pollinated, purity as the idea of a single pure race must be understood as the ironic insistence of imperial minds." Writer and journalist Brendan Borrell calls heirloom tomatoes "the tomato equivalent of the pug—that 'purebred' dog with the convoluted nose that snorts and hacks when it tries to catch a breath" and claims that selection for unique size, shape, color, and flavor has hampered disease resistance and hardiness in heirlooms.
Future
More attention is being put on heirloom plants as a way to restore genetic diversity and feed a growing population while safeguarding the food supply of diverse regions. Specific heirloom plants are often selected, saved, and planted again because of their superior performance in a particular locality. Over many crop cycles these plants develop unique adaptive qualities to their environment, which empowers local communities and can be vital to maintaining the genetic resources of the world.
Some debate has occurred regarding the perceived improved nutritional qualities of heirloom varieties compared to modern cultivars. Anecdotal reports claim that heirloom vegetables are more nutritious or contain more vitamins and minerals than more recently developed vegetables. Current research does not support the claim that heirloom varieties generally contain a greater concentration of nutrients; however, nutrient concentration and composition does appear to vary between different cultivars. Nevertheless, heirloom varieties may still contain the genetic basis for useful traits that can be employed to improve modern crops, including for human nutritional qualities.
Heirloom varieties are also critical to promoting global crop diversity, which has generally declined since the middle of the 20th century. Heirloom crops may contain genetic material that is distinct from varieties typically grown in monocrop systems, many of which are hybrid varieties. Monocrop systems tend to be vulnerable to disease and pest outbreaks, which can decimate whole industries due to the genetic similarity between plants. Some organizations have employed seed banks and vaults to preserve and protect crop genetics against catastrophic loss. One of the most notable of these seed banks is the Svalbard Global Seed Vault located in Svalbard, Norway, which safeguards approximately 1.2 million seed samples with capacity for up to 4.5 million. Some writers and farmers have criticized the apparent reliance on seed vaults, however, and argue that heirloom and rare varieties are better protected against extinction when actively planted and grown than stored away with no immediate influence on crop genetic diversity.
Examples
Bhutanese red rice
Black rice
Heirloom tomato
See also
Ark of Taste
Biodiversity
Community gardening
History of gardening
Association Kokopelli
Landrace
List of organic gardening and farming topics
Local food
Orthodox seed
Rare breed
Recalcitrant seed
Seed saving
Seedbank
Slow Food
Kyoyasai, a specific class of Japanese heirloom vegetables originating around Kyoto, Japan.
References
Further reading
External links
What is an heirloom vegetable?
Heirloom Vegetables from the Home and Garden Information Center at Clemson University
FAO/IAEA Programme Mutant Variety Database
FDA Statement of Policy - Foods Derived from New Plant Varieties
DEFRA - Plant varieties and seeds
Document 4:::
Icones Plantarum is an extensive series of published volumes of botanical illustration, initiated by Sir William Jackson Hooker. The Latin name of the work means "Illustrations of Plants". The illustrations are drawn from herbarium specimens of Hooker's herbarium, and subsequently the herbarium of Kew Gardens. Hooker was the author of the first ten volumes, produced 1837–1854. His son, Sir Joseph Dalton Hooker, was responsible for Volumes XI-XIX (most of Series III). Daniel Oliver was the editor of Volumes XX-XXIV. His successor was William Turner Thiselton-Dyer. The series now comprises forty volumes.
References
External links
Hooker’s Icones Plantarum Kew: Bentham-Moxon Trust, 1851- (incomplete)
Icones Plantarum: Or Figures, with Brief Descriptive Characters and Remarks, of New Or Rare Plants, Selected from the Author's Herbarium By William Jackson Hooker, Joseph Dalton Hooker
Hooker's Icones Plantarum At: Biodiversity Heritage Library
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a primary drawback of growing heirloom varieties compared to commercially available hybrid varieties?
A. Higher disease resistance
B. Greater productivity
C. Lower disease resistance
D. Increased shelf life
Answer:
|
C. Lower disease resistance
|
Relavent Documents:
Document 0:::
Heated humidified high-flow therapy, often simply called high flow therapy, is a medical treatment providing respiratory support by delivering a flow of oxygen of up to 60 liters per minute to a patient through a large-bore or high-flow nasal cannula. Primarily studied in neonates, it has also been found effective in some adults for treating hypoxemia and work of breathing issues. Its key components are a gas blender, heated humidifier, heated circuit, and cannula.
History
The development of heated humidified high flow started in 1999 with Vapotherm introducing the concept of high flow use with race horses.
High flow was approved by the U.S. Food and Drug Administration in the early 2000s and used as an alternative to positive airway pressure for treatment of apnea of prematurity in neonates. The term high flow is relative to the size of the patient, which is why the flow rate used in children is set by weight, as just a few liters can meet the inspiratory demands of a neonate, unlike in adults. It has since become popular for use in adults with respiratory failure.
Mechanism
The traditional low flow system used for medical gas delivery is the nasal cannula, which is limited to the delivery of 1–6 L/min of oxygen, or up to 15 L/min in certain types. This is because even with quiet breathing, the inspiratory flow rate at the nares of an adult usually exceeds 30 L/min. Therefore, the oxygen provided is diluted with room air during inspiration. Being a high flow system means that it meets or exceeds the flow demands of the patient.
Oxygenation
Since it is a high flow system, it is able to maintain the wearer's fraction of inspired oxygen (FiO2) at the set level because the patient should not be entraining ambient air. However, this may not be the case in patients who are poorly compliant with the therapy and are actively breathing through their mouth.
Ventilation
The flow can wash out some of the dead space in the upper airway. This can reduce slightly the amount of carbon d
Document 1:::
DreamLab was a volunteer computing Android and iOS app launched in 2015 by Imperial College London and the Vodafone Foundation. It was discontinued on 2nd April 2025.
Description
The app currently helps to research cancer, COVID-19, new drugs and tropical cyclones. To do this, DreamLab accesses part of the device's processing power, with the user's consent, while the owner is charging their smartphone, to speed up the calculations of the algorithms from Imperial College London.
The aim of the tropical cyclone project is to prepare for climate change risks. Other projects aim to find existing drugs and food molecules that could help people with COVID-19 and other diseases. The performance of 100,000 smartphones would reach the annual output of all research computers at Imperial College in just three months, with a nightly runtime of six hours.
The app was developed in 2015 by the Garvan Institute of Medical Research in Sydney and the Vodafone Foundation. As of May 2020, the project had over 490,000 registered users.
See also
Volunteer computing
Folding@home
BOINC
Document 2:::
Wind engineering is a subset of mechanical engineering, structural engineering, meteorology, and applied physics that analyzes the effects of wind in the natural and the built environment and studies the possible damage, inconvenience or benefits which may result from wind. In the field of engineering it includes strong winds, which may cause discomfort, as well as extreme winds, such as in a tornado, hurricane or heavy storm, which may cause widespread destruction. In the fields of wind energy and air pollution it also includes low and moderate winds as these are relevant to electricity production and dispersion of contaminants.
Wind engineering draws upon meteorology, fluid dynamics, mechanics, geographic information systems, and a number of specialist engineering disciplines, including aerodynamics and structural dynamics. The tools used include atmospheric models, atmospheric boundary layer wind tunnels, and computational fluid dynamics models.
Wind engineering involves, among other topics:
Wind impact on structures (buildings, bridges, towers)
Wind comfort near buildings
Effects of wind on the ventilation system in a building
Wind climate for wind energy
Air pollution near buildings
Wind engineering may be considered by structural engineers to be closely related to earthquake engineering and explosion protection.
Some sports stadiums such as Candlestick Park and Arthur Ashe Stadium are known for their strong, sometimes swirly winds, which affect the playing conditions.
History
Wind engineering as a separate discipline can be traced to the UK in the 1960s, when informal meetings were held at the National Physical Laboratory, the Building Research Establishment, and elsewhere. The term "wind engineering" was first coined in 1970. Alan Garnett Davenport was one of the most prominent contributors to the development of wind engineering. He is well known for developing the Alan Davenport wind-loading chain or in short "wind-loading chain" that describes how
Document 3:::
Moffat Stove Company was a Canadian stove manufacturer that was established in 1892 in Weston, Toronto, Ontario. It manufactured stoves and ranges that have been widely distributed across the continents and even used extensively in Europe and Asia.
Moffat Stoves are credited with inventing the first electric ranges for the domestic market. And although the Moffat family sold the business in 1953, stoves and refrigerators continued to be built at the Weston plant until the early 1970s.
History
Early years and growth
The company was started by Thomas Lang Moffat (born in Crossfire, Fife, Scotland in 1836) who emigrated to Canada with his family when he was 5 years old. He started an iron and brass castings company in Dundas, Ontario with a Mr. McCallum. Eventually, he moved to Owen Sound where he got a job as a moulder at the firm of William Kennedy & Sons. In 1882, he moved to Markdale, Ontario where he founded a general engineering and machine shop, T.L. Moffat and Sons. They produced mill machinery, steam engines, pulleys, plows, mangles, and various ornamental castings in iron and brass.
Business was slow until the company was persuaded by a local hardware merchant that the future lay in stoves, so they turned their energies to this end, producing their first stove, the "Ploughboy" (which had a picture of a man with a plough cast into the door). Soon afterward they set up a long-standing factory in Weston, Ontario, from where their manufactured stoves and ranges became widely distributed across the continents and were even used extensively in Europe and Asia.
In 1907, T.L. Moffat died, and his eldest son, John King Moffat, took the company reins for the next 23 years. His sister Agnes, who was born in 1905, studied medicine and subsequently became Peterborough's newest female doctor. Her family was apparently not happy with her choice of vocation.
Frederick W. Moffat, son of the founder and brother of John King, was credited with at least 3 patents for stove co
Document 4:::
Wheal Maid (also Wheal Maiden) is a former mine in the Camborne-Redruth-St Day Mining District, 1.5 km east of St Day.
Between 1800 and 1840, profits are said to have been up to £200,000. In 1852, the mine was amalgamated with Poldice Mine and Carharrack Mine and worked as St Day United mine. Throughout the 1970s and 1980s, the mine site was turned into large lagoons and used as a tip for two other nearby mines: Mount Wellington and Wheal Jane.
There were suggestions that the mine could be used as a landfill site for rubbish imported from New York and a power plant that would produce up to 40 megawatts of electricity; the concept was opposed by local residents and by Cornwall County Council, with Doris Ansari, the chair of the council's planning committee, saying that the idea "[did] not seem right for Cornwall".
The site was bought from Carnon Enterprises by Gwennap District Council for a price of £1 in 2002. An investigation by the Environment Agency that concluded in 2007 found that soil near the mine had high levels of arsenic, copper and zinc contamination and by 2012, it was deemed too hazardous for human activity.
The mine gains attention during dry spells, when the lagoons dry up, leaving brightly coloured stains on the pit banks and bed.
2014 murder
In 2014, a 72-year-old man from Falmouth died at the site after what was initially thought to be a cycling accident. It was later found that the man had been murdered. A 34-year-old man was found guilty and sentenced to life imprisonment with a minimum term of 28 years.
References
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What was the primary purpose of the DreamLab app developed by Imperial College London and the Vodafone Foundation?
A. To entertain users with games
B. To research various health issues and climate change
C. To improve smartphone battery life
D. To provide social networking features
Answer:
|
B. To research various health issues and climate change
|
Relavent Documents:
Document 0:::
A solar sibling is a star that formed in the same star cluster as the Sun.
Stars that have been proposed as candidate solar siblings include HD 162826, HD 175740, and HD 186302. There is as yet no confirmed solar sibling and studies disagree on the most likely candidates; for example, a 2016 study suggested that previous candidates, including HD 162826 and HD 175740, are unlikely to be solar siblings.
A study using Gaia DR2 data published in 2020 found that HD 186302 is unlikely to be a solar sibling, while identifying a new candidate, "Solar Sibling 1", designated 2MASS J19354742+4803549. This star is also known as Kepler-1974 or KOI-7368, and was found in 2022 to be a member of a stellar association that is 40 million years old, much younger than the Sun, so it cannot be a solar sibling.
A 2019 study of the comet C/2018 V1 (Machholz–Fujikawa–Iwamoto) found that it may be an interstellar object, and identified two stars it may have originated from (Gaia DR2 1927143514955658880 and 1966383465746413568), which could be candidate solar siblings. This comet is not confirmed to have an interstellar origin and could be a more typical Oort cloud object.
References
Further reading
Document 1:::
Medical technology assessment (MTA) is the objective evaluation of a medical technology regarding its safety and performance, its (future) impact on clinical and non-clinical patient outcomes as well as its interactive effects on economical, organizational, social, juridical and ethical aspects of healthcare. Medical technologies are assessed both in absolute terms and in comparison to other (combinations of) medical technologies, procedures, treatments or ‘doing-nothing’.
The aim of MTA is to provide objective, high-quality information that relevant stakeholders use for decision-making about for example development, pricing, market access and reimbursement of new medical technologies. As such, MTA is similar to health technology assessment (HTA), except that HTA has a wider scope and may include assessments of for example organizational or financial interventions.
The classical approach of MTA is to evaluate technologies after they enter the marketplace. Yet a growing number of researchers and policy-makers argue that new technologies should be evaluated before they diffuse into routine clinical practice. MTA of biomedical innovations at a very early stage of development could improve health outcomes, minimise misdirected investment and prevent social and ethical conflicts.
One particular method within the area of early MTA is constructive technology assessment (CTA). CTA is particularly appropriate for the early assessment of dynamic technologies that are implemented under uncertain circumstances. CTA is based on the idea that during the course of technology development, choices are constantly being made about the form, the function, and the use of that technology. Especially in early stages, technologies are not always stable, nor are its specifications and neither is its use, as both technology and environment will mutually influence each other. In recent years, CTA has developed from assessing the (clinical) impact of a new technology to a much broader approach,
Document 2:::
In mathematics, specifically in control theory, subspace identification (SID) aims at identifying linear time invariant (LTI) state space models from input-output data. SID does not require that the user parametrizes the system matrices before solving a parametric optimization problem and, as a consequence, SID methods do not suffer from problems related to local minima that often lead to unsatisfactory identification results.
History
SID methods are rooted in the work by the German mathematician Leopold Kronecker (1823–1891). Kronecker showed that a power series can be written as a rational function when the rank of the Hankel operator that has the power series as its symbol is finite. The rank determines the order of the polynomials of the rational function.
In the 1960s the work of Kronecker inspired a number of researchers in the area of Systems and Control, such as Ho and Kalman, Silverman, and Youla and Tissi, to store the Markov parameters of an LTI system into a finite-dimensional Hankel matrix and derive from this matrix an (A,B,C) realization of the LTI system. The key observation was that when the Hankel matrix is properly dimensioned relative to the order of the LTI system, the rank of the Hankel matrix equals the order of the LTI system, and the SVD of the Hankel matrix provides a basis for the column space of the observability matrix and the row space of the controllability matrix of the LTI system. Knowledge of these key spaces allows the system matrices to be estimated via linear least squares.
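A minimal numerical sketch of this Ho–Kalman realization step, written in Python with NumPy; the test system, the assumed order, the Hankel dimensions, and the helper name ho_kalman are all illustrative assumptions rather than any library's API.

```python
import numpy as np

def ho_kalman(markov, n, rows=10, cols=10):
    """Recover an (A, B, C) realization from the Markov parameters
    markov[k] = C A^k B of a SISO LTI system, assuming model order n."""
    # Hankel matrix built from the Markov parameters, and its one-step shift.
    H = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    Hs = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])

    # SVD of the Hankel matrix; its rank equals the system order.
    U, s, Vt = np.linalg.svd(H)
    U, s, Vt = U[:, :n], s[:n], Vt[:n, :]

    # Balanced factorization H = Obs @ Ctr into (extended) observability
    # and controllability matrices.
    Obs = U @ np.diag(np.sqrt(s))
    Ctr = np.diag(np.sqrt(s)) @ Vt

    A = np.linalg.pinv(Obs) @ Hs @ np.linalg.pinv(Ctr)
    B = Ctr[:, :1]          # first column of the controllability matrix
    C = Obs[:1, :]          # first row of the observability matrix
    return A, B, C

# Example: Markov parameters of a known 2nd-order system, then a realization.
A0 = np.array([[0.9, 0.2], [0.0, 0.5]])
B0 = np.array([[1.0], [1.0]])
C0 = np.array([[1.0, 0.0]])
h = [(C0 @ np.linalg.matrix_power(A0, k) @ B0).item() for k in range(25)]

A, B, C = ho_kalman(np.array(h), n=2)
# The realization is unique only up to a similarity transform, but the
# reconstructed Markov parameters should match the originals.
print((C @ A @ B).item(), h[1])
```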
An extension to the stochastic realization problem, where only the auto-correlation (covariance) function of the output of an LTI system driven by white noise is known, was derived by researchers such as Akaike.
A second generation of SID methods attempted to make SID methods directly operate on input-output measurements of the LTI system in the decade 1985–1995. One such generalization was presented under the name of the Eigensystem Realization Algorithm (ERA) made use of spe
Document 3:::
In computer architecture, cache coherence is the uniformity of shared resource data that is stored in multiple local caches. In a cache coherent system, if multiple clients have a cached copy of the same region of a shared memory resource, all copies are the same. Without cache coherence, a change made to the region by one client may not be seen by others, and errors can result when the data used by different clients is mismatched.
A cache coherence protocol is used to maintain cache coherency. The two main types are snooping and directory-based protocols.
Cache coherence is of particular relevance in multiprocessing systems, where each CPU may have its own local cache of a shared memory resource.
Overview
In a shared memory multiprocessor system with a separate cache memory for each processor, it is possible to have many copies of shared data: one copy in the main memory and one in the local cache of each processor that requested it. When one of the copies of data is changed, the other copies must reflect that change. Cache coherence is the discipline which ensures that the changes in the values of shared operands (data) are propagated throughout the system in a timely fashion.
The following are the requirements for cache coherence:
Write propagation: Changes to the data in any cache must be propagated to other copies (of that cache line) in the peer caches.
Transaction serialization: Reads/writes to a single memory location must be seen by all processors in the same order.
Theoretically, coherence can be performed at the load/store granularity. However, in practice it is generally performed at the granularity of cache blocks.
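A toy, single-threaded Python sketch of a snooping invalidation protocol (a much simplified MSI-style scheme) that illustrates write propagation and serialization through a shared bus; the class names and the downgrade-to-Invalid behaviour on a snooped read are illustrative simplifications, not any particular hardware's protocol.

```python
# Per-line states: M (Modified), S (Shared), I (Invalid).  Every miss and
# every write goes through the shared "bus", which serializes accesses and
# gives all caches the same view of the order of writes to a location.
MEM = {"X": 0}                      # shared main memory

class Cache:
    def __init__(self, name, bus):
        self.name, self.bus, self.lines = name, bus, {}   # addr -> (state, value)

    def read(self, addr):
        if self.lines.get(addr, ("I", None))[0] == "I":    # miss: fetch a copy
            self.lines[addr] = ("S", self.bus.fetch(addr))
        return self.lines[addr][1]

    def write(self, addr, value):
        # Write propagation: invalidate every other copy before writing.
        self.bus.invalidate_others(addr, self)
        self.lines[addr] = ("M", value)

    def snoop_invalidate(self, addr):
        if addr in self.lines:
            state, value = self.lines.pop(addr)
            if state == "M":                   # write the dirty data back
                MEM[addr] = value

class Bus:
    def __init__(self):
        self.caches = []

    def fetch(self, addr):
        # A dirty copy elsewhere must be written back before the read
        # (a real MSI protocol would downgrade it to S instead of I).
        for c in self.caches:
            if c.lines.get(addr, ("I", None))[0] == "M":
                c.snoop_invalidate(addr)
        return MEM[addr]

    def invalidate_others(self, addr, writer):
        for c in self.caches:
            if c is not writer:
                c.snoop_invalidate(addr)

bus = Bus()
p0, p1 = Cache("P0", bus), Cache("P1", bus)
bus.caches = [p0, p1]

p0.write("X", 42)        # P0 now holds X in state M
print(p1.read("X"))      # P1's miss snoops the dirty copy -> prints 42
```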
Definition
Coherence defines the behavior of reads and writes to a single address location.
In a multiprocessor system, consider that more than one processor has cached a copy of the memory location X. The following conditions are necessary to achieve cache coherence:
In a read made by a processor P to a location X that follows a write b
Document 4:::
Pimascovirales is an order of viruses. The term is a portmanteau of pitho-, irido-, marseille-, and ascoviruses.
Families
Pimascovirales contains six families, three of which are assigned to a suborder. This taxonomy is shown hereafter:
Suborder: Ocovirineae
Hydriviridae
Orpheoviridae
Pithoviridae
Families not assigned to a suborder:
Ascoviridae
Iridoviridae
Marseilleviridae
References
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does the 10G-EPON standard primarily support in terms of data transmission rates for upstream and downstream directions?
A. 1 Gbit/s upstream and 10 Gbit/s downstream
B. 10 Gbit/s upstream and 1 Gbit/s downstream
C. 10 Gbit/s in both upstream and downstream directions
D. 25 Gbit/s upstream and 10 Gbit/s downstream
Answer:
|
C. 10 Gbit/s in both upstream and downstream directions
|
Relavent Documents:
Document 0:::
Nestin is a protein that in humans is encoded by the NES gene.
Nestin (acronym for neuroepithelial stem cell protein) is a type VI intermediate filament (IF) protein. These intermediate filament proteins are expressed mostly in nerve cells where they are implicated in the radial growth of the axon. Seven genes encode the heavy (NF-H), medium (NF-M) and light neurofilament (NF-L) proteins, nestin and α-internexin in nerve cells, synemin α and desmuslin/synemin β (two alternative transcripts of the DMN gene) in muscle cells, and syncoilin (also in muscle cells). Members of this group preferentially coassemble as heteropolymers in tissues. Steinert et al. have shown that nestin forms homodimers and homotetramers but does not form IF by itself in vitro. In mixtures, nestin preferentially co-assembles with purified vimentin or the type IV IF protein internexin to form heterodimer coiled-coil molecules.
Gene
Structurally, nestin has the shortest head domain (N-terminus) and the longest tail domain (C-terminus) of all the IF proteins. Nestin is of high molecular weight (240kDa) with a terminus greater than 500 residues (compared to cytokeratins and lamins with termini less than 50 residues).
After subcloning the human nestin gene into plasmid vectors, Dahlstrand et al. determined the nucleotide sequence of all coding regions and parts of the introns. In order to establish the boundaries of the introns, they used the polymerase chain reaction (PCR) to amplify a fragment made from human fetal brain cDNA using two primers located in the first and fourth exon, respectively. The resulting 270 base pair (bp) long fragment was then sequenced directly in its entirety, and intron positions precisely located by comparison with the genomic sequence. Putative initiation and stop codons for the human nestin gene were found at the same positions as in the rat gene, in regions where overall similarity was very high. Based on this assumption, the human nestin gene encod
Document 1:::
The dynamic aperture is the stability region of phase space in a circular accelerator.
For hadrons
In the case of protons or heavy ion accelerators, (or synchrotrons, or storage rings), there is minimal radiation, and hence the dynamics is symplectic.
For long term stability, tiny dynamical diffusion (or Arnold diffusion) can lead an initially stable orbit slowly into an unstable region. This makes the dynamic aperture problem particularly challenging. One may be considering stability over billions of turns.
A scaling law for dynamic aperture versus the number of turns has been proposed by Giovannozzi.
For electrons
For the case of electrons, the electrons will radiate which causes a damping effect. This means that one typically only cares about stability over thousands of turns.
Methods to compute or optimize dynamic aperture
The basic method for computing dynamic aperture involves the use of a tracking code. A model of the ring is built within the code that includes an integration routine for each magnetic element. The particle is tracked many turns and stability is determined.
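A minimal sketch of the tracking idea in Python, using a Hénon-like one-turn map (a linear rotation plus a single sextupole-like kick) as a stand-in for a real lattice model; the tune, kick strength, turn count, and amplitude step are arbitrary illustrative values.

```python
import math

def survives(x0, tune=0.205, kick=1.0, turns=10_000, limit=1.0e3):
    """Track one particle through a Hénon-like one-turn map and report
    whether it stays bounded for the requested number of turns."""
    c, s = math.cos(2 * math.pi * tune), math.sin(2 * math.pi * tune)
    x, px = x0, 0.0
    for _ in range(turns):
        px += kick * x * x                          # thin sextupole-like kick
        x, px = c * x + s * px, -s * x + c * px     # linear rotation (one turn)
        if abs(x) > limit or abs(px) > limit:
            return False
    return True

# Scan the launch amplitude; the dynamic aperture is (roughly) the largest
# amplitude that still survives the full number of turns.
aperture, x0 = 0.0, 0.0
while survives(x0):
    aperture = x0
    x0 += 0.005
print(f"estimated dynamic aperture ~ {aperture:.3f} (model units)")
```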
In addition, there are other quantities that may be computed to characterize the dynamics, and can be related to the dynamic aperture. One example is the tune shift with amplitude.
There have also been other proposals for approaches to enlarge the dynamic aperture.
References
Document 2:::
NGC 855 is a star-forming dwarf elliptical galaxy located in the Triangulum constellation. The discovery and a first description (as H 26 613) was realized by William Herschel on 26 October 1786 and the findings made public through his Catalogue of Nebulae and Clusters of Stars, published the same year.
NGC 855's relative velocity to the cosmic microwave background is 343 ± 18 km/s, corresponding to a Hubble distance of 5.06 ± 0.44 Mpc (~16.5 million ly). There is some uncertainty about its precise distance, since two surface brightness fluctuation measurements give a distance of 9.280 ± 0.636 Mpc (~30.3 million ly), a value outside the range of the Hubble distance determined from the galaxy's redshift.
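The quoted Hubble distance follows directly from Hubble's law, d = v / H0. A quick sanity check in Python, assuming the implied Hubble constant of roughly 67.8 km/s/Mpc (the exact value used by the survey is an assumption here):

```python
# Hubble's law: distance = recession velocity / Hubble constant.
v = 343.0          # km/s, velocity relative to the CMB (from the text)
H0 = 67.8          # km/s/Mpc, assumed value of the Hubble constant
d_mpc = v / H0
d_mly = d_mpc * 3.262          # 1 Mpc is about 3.262 million light-years
print(f"{d_mpc:.2f} Mpc ~ {d_mly:.1f} million light-years")
# -> about 5.06 Mpc, i.e. roughly 16.5 million light-years, as quoted.
```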
Star formation
Using infrared data collected from two regions in the center of the galaxy by the Spitzer Space Telescope, astronomers were able to suggest NGC 855 to be a star-forming galaxy. Its HI distribution (Neutral atomic hydrogen emission lines) suggests the star-forming activity might have been triggered by a minor merger.
Document 3:::
The Palm Foleo (stylised as Folēo) was a planned subnotebook computer that was announced by mobile device manufacturer Palm Inc. on May 30, 2007, and canceled three months later. It was meant to serve as a companion for smartphones including Palm's own Treo line. The device ran on the Linux operating system and featured 256 MB of flash memory and an immediate boot-up feature.
The Foleo featured wireless access via Bluetooth and Wi-Fi. Integrated software included an e-mail client which was to be capable of syncing with the Treo E-Mail client, the Opera web browser and the Documents To Go office suite. The client did not send and retrieve mail over the Wi-Fi connection, instead transmitting via synchronization with the companion smartphone.
The device was slated to launch in the U.S. in the third quarter of 2007 for a price expected by Palm to be $499 after an introductory $100 rebate. Palm canceled Foleo development on September 4, 2007, with Palm CEO Ed Colligan announcing that the company would return its focus to its core product of smartphones and handheld computers. Soon after the device was canceled, a branch of subnotebooks called netbooks, similar to the Foleo in size and functionality, reached the market. Had it been released, the Foleo would have been the founding device in the category. At the time, Palm was performing poorly in face of heavy competition in the smartphone market. The company's sales did not recover, and it was purchased by information technology giant Hewlett-Packard in April 2010.
Software
The Foleo was initially reported to run a modified Linux kernel. The kernel was reported as being version 2.6.14-rmk1-pxa1-intc2 ("rmk1" indicates this is the ARM architectural version, "pxa1" indicates it is of the PXA family of Intel/Marvell Technology Group XScale processors, "intc2" is possibly an IRQ handler). On August 7, 2007, Palm announced that it had chosen Wind River Systems to help it customize the standard Linux kernel to make it more s
Document 4:::
Energy subsidies are government payments that keep the price of energy lower than market rate for consumers or higher than market rate for producers. These subsidies are part of the energy policy of the United States.
According to Congressional Budget Office testimony in 2016, an estimated $10.9 billion in tax preferences was directed toward renewable energy, $4.6 billion went to fossil fuels, and $2.7 billion went to energy efficiency or electricity transmission.
According to a 2015 estimate by the Obama administration, the US oil industry benefited from subsidies of about $4.6 billion per year. A 2017 study by researchers at Stockholm Environment Institute published in the journal Nature Energy estimated that "tax preferences and other subsidies push nearly half of new, yet-to-be-developed oil investments into profitability, potentially increasing US oil production by 17 billion barrels over the next few decades."
Overview of energy subsidies
Biofuel subsidies
In the United States, biofuel subsidies have been justified on the following grounds: energy independence, reduction in greenhouse gas emissions, improvements in rural development related to biofuel plants and farm income support. Several economists from Iowa State University found "there is no evidence to disprove that the primary objective of biofuel policy is to support farm income."
Consumer subsidies
Consumers who purchase hybrid vehicles are eligible for a tax credit that depends upon the type of vehicle and the difference in fuel economy in comparison to vehicles of similar weights. These credits range from several hundred dollars to a few thousand dollars. Homeowners can receive a tax credit up to $500 for energy-efficient products like insulation, windows, doors, as well as heating and cooling equipment. Homeowners who install solar electric systems can receive a 30% tax credit and homeowners who install small wind systems can receive a tax credit up to $4000. Geothermal heat pumps also qualif
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the main characteristic that distinguishes Sarcoscypha dudleyi from Sarcoscypha coccinea?
A. The color of the fruit body
B. The size of the spores
C. The presence and number of oil droplets in the spores
D. The habitat preferences
Answer:
|
C. The presence and number of oil droplets in the spores
|
Relavent Documents:
Document 0:::
In Unified Modeling Language (UML) 2.5.1, an Element is "a constituent of a model. As such, it has the capability of owning other Elements."
In UML 2.4.1, an element is an abstract class with no superclass.
It is used as the superclass or base class, as known by object-oriented programmers, for all the metaclasses in the UML infrastructure library. All other elements in the UML inherit, directly or indirectly from Element. An Element has a derived composition association to itself to support the general capability for elements to own other elements. As such, it has no additional attributes as part of its specification.
Associations
An association describes a set of tuples of typed instances.
ownedComment: Comment[*]: An Element may own, or have associated to it, an arbitrary quantity of comments. A comment is sometimes referred to as a note. The asterisk in brackets is the Comment's multiplicity which means that there can be an arbitrary number of comments owned by an Element.
/ ownedElement: Element[*]: An Element may own an arbitrary quantity of elements. This is called a derived union, symbolized by the forward slash notation. The asterisk in brackets is the Element's multiplicity which means that there can be an arbitrary number of elements owned by an Element.
/ owner: Element[0..1]: The Element that owns this element. This is called a derived union, symbolized by the forward slash notation. The [0..1] is the owning Element's multiplicity which means that there can only be zero to one owner element.
The Element class belongs to the base package in the UML called the Kernel. This is the package that contains the superclasses that make up the superstructure of the UML.
Subclasses of Element provide semantics appropriate to the concept they represent. The comments for an Element add no semantics but may represent information useful to the reader of the model.
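A minimal sketch, in plain Python rather than a UML tool, of how the owner/ownedElement composition described above might be represented; the class and attribute names mirror the UML properties, but the implementation is otherwise illustrative.

```python
class Element:
    """Toy counterpart of the UML Element metaclass: an element may own any
    number of other elements and has at most one owner ([0..1])."""

    def __init__(self):
        self.owned_elements = []   # / ownedElement : Element[*]
        self.owner = None          # / owner        : Element[0..1]
        self.owned_comments = []   # ownedComment   : Comment[*]

    def own(self, element):
        # Composition: an element has at most one owner, so taking ownership
        # first removes it from its previous owner, if any.
        if element.owner is not None:
            element.owner.owned_elements.remove(element)
        element.owner = self
        self.owned_elements.append(element)

class Comment(Element):
    def __init__(self, body):
        super().__init__()
        self.body = body

model = Element()
cls = Element()
model.own(cls)
cls.owned_comments.append(Comment("a note adds no semantics"))
print(cls.owner is model, len(model.owned_elements))   # True 1
```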
Document 1:::
Byssomerulius corium is a common species of crust fungus in the family Irpicaceae. The fungus was first described as Thelephora corium by Christiaan Hendrik Persoon in 1801. Erast Parmasto made it the type species of his newly circumscribed genus Byssomerulius in 1967.
Distribution
Byssomerulius corium is a highly distributed fungus, and has been recorded in Africa, Asia, Australia, Europe, and in South, Central, and North America.
References
Document 2:::
A pattern is a regularity in the world, in human-made design, or in abstract ideas. As such, the elements of a pattern repeat in a predictable manner. A geometric pattern is a kind of pattern formed of geometric shapes and typically repeated like a wallpaper design.
Any of the senses may directly observe patterns. Conversely, abstract patterns in science, mathematics, or language may be observable only by analysis. Direct observation in practice means seeing visual patterns, which are widespread in nature and in art. Visual patterns in nature are often chaotic, rarely exactly repeating, and often involve fractals. Natural patterns include spirals, meanders, waves, foams, tilings, cracks, and those created by symmetries of rotation and reflection. Patterns have an underlying mathematical structure; indeed, mathematics can be seen as the search for regularities, and the output of any function is a mathematical pattern. Similarly in the sciences, theories explain and predict regularities in the world.
In many areas of the decorative arts, from ceramics and textiles to wallpaper, "pattern" is used for an ornamental design that is manufactured, perhaps for many different shapes of object. In art and architecture, decorations or visual motifs may be combined and repeated to form patterns designed to have a chosen effect on the viewer.
Nature
Nature provides examples of many kinds of pattern, including symmetries, trees and other structures with a fractal dimension, spirals, meanders, waves, foams, tilings, cracks and stripes.
Symmetry
Symmetry is widespread in living things. Animals that move usually have bilateral or mirror symmetry as this favours movement. Plants often have radial or rotational symmetry, as do many flowers, as well as animals which are largely static as adults, such as sea anemones. Fivefold symmetry is found in the echinoderms, including starfish, sea urchins, and sea lilies.
Among non-living things, snowflakes have striking sixfold symmetry:
Document 3:::
WISEPA J101905.63+652954.2 (also called WISE 1019+6529) is a binary made up of two cold brown dwarfs. Both brown dwarfs have a late spectral type T. The pair was detected in radio emission, which is pulsed and periodic. The radio emission could, in principle, be powered by the interaction of the binary.
WISE 1019+6529 was discovered in 2011 with the Wide-field Infrared Survey Explorer and spectroscopy from the NASA Infrared Telescope Facility, Palomar and Keck Observatory confirmed it as a T6 or T7-dwarf. A preliminary parallax is published, placing it 24 parsec distant from the Solar System, and it has a proper motion of 150.6 ±1.1 mas/yr. In 2023 a study was published, describing the 144 MHz highly circularly polarised radio emission detected with the Low Frequency Array. The same paper describes observations with Keck adaptive optics, which showed that the brown dwarf is a binary. The researchers find a polar dipole magnetic field strength of 660 ± 300 Gauss for the T5.5 dwarf and 460 ± 210 Gauss for the T5.5 dwarf. The surface magnetic field strength is constrained between 51.4–103 Gauss, which was constrained with cyclotron maser emission and the upper limit of H-alpha luminosity. The radio emission could, in principle, be powered by a binary interaction, if the mass loss rate is ≥25 tonnes per second. The researchers also find that the radio emission is consistent with a Jupiter-like aurora.
Document 4:::
The Connaissance des temps (English: Knowledge of the Times) is an official yearly publication of astronomical ephemerides in France. Until just after the French Revolution, the title appeared as Connoissance des temps, and for several years afterwards also as Connaissance des tems.
Since 1984 it has appeared under the title Ephémérides astronomiques: Annuaire du Bureau des longitudes.
History
Connaissance des temps is the oldest such publication in the world, published without interruption since 1679 (originally named La Connoissance des Temps ou calendrier et éphémérides du lever & coucher du Soleil, de la Lune & des autres planètes), when the astronomer Jean Picard (1620–1682) obtained from the King the right to create the annual publication. The first eight editors were:
1679–1684: Jean Picard (1620–1682)
1685–1701: Jean Le Fèvre (1650–1706)
1702–1729: (1660–1733)
1730–1734: Louis Godin (1704–1760)
1735–1759: Giovanni Domenico Maraldi (1709–1788)
1760–1775: Joseph Jérôme Lefrançois de Lalande (1732–1807)
1776–1787: Edme-Sébastien Jeaurat (1725–1803)
1788–1794: Pierre Méchain (1744–1804)
Other notable astronomers who edited the Connaissance des temps were:
Alexis Bouvard (1767–1843)
Bureau des Longitudes
Rodolphe Radau (1835–1911)
Marie Henri Andoyer (1862–1929)
Among the other prestigious national astronomical ephemerides, The Nautical Almanac was established in Great Britain in 1767, and the Berliner Astronomisches Jahrbuch in 1776.
Contents
The volumes of the Connaissance des temps had two parts:
a section of ephemerides, containing various tables
articles giving a deeper coverage of various topics, often written by famous astronomers
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the original name given to Byssomerulius corium when it was first described?
A. Thelephora corium
B. Byssomerulius corium
C. Irpicaceae corium
D. Christiaan corium
Answer:
|
A. Thelephora corium
|
Relavent Documents:
Document 0:::
HD 102117 or Uklun is a star in the southern constellation of Centaurus. With an apparent visual magnitude of 7.47, it is too dim to be seen without binoculars or a small telescope. It is located at a distance of approximately 129 light-years from the Sun based on parallax. HD 102117 is drifting further away with a radial velocity of +50 km/s, having come to within some 692,000 years ago. It has one known planet.
The stellar classification of HD 102117 is G6V, which matches the spectrum of an ordinary G-type main-sequence star. It is roughly five billion years old and is spinning with a projected rotational velocity of 0.9 km/s. The star shows only a low level of chromospheric activity and is photometrically stable, meaning it doesn't vary significantly in brightness. It appears metal-enriched, showing a higher abundance of heavy elements compared to the Sun.
Planetary system
In 2004, the Anglo-Australian Planet Search announced a planet orbiting the star. A short time later the HARPS team also announced the presence of a planet around this star. Both groups detected this planet with the radial velocity method.
HD 102117, and its planet HD 102117b, were chosen as part of the 2019 NameExoWorlds campaign organised by the International Astronomical Union, which assigned each country a star and planet to be named. HD 102117 was assigned to the Pitcairn Islands. The winning proposal named the star Uklun, from the word aklan 'we/us' in the Pitcairn language, and the planet Leklsullun, from the phrase lekl salan 'child/children' (lit. 'little person').
References
Document 1:::
The Alegría watermill is located in Córdoba, Spain, on the right bank of the Guadalquivir River, downstream of San Rafael bridge on the weir called Azuda de la Alhadra. The watermill houses the Roberto Wagner Museum of Paleobotany, which is part of the Botanical Garden of Córdoba. The mill is one of the eleven so-called Mills of the Guadalquivir, which were declared a Bien de Interés Cultural in 2009.
The watermill building
The Alegría watermill is on the right bank of the Guadalquivir. The water mill proper is between two spillways in the weir. The spillway on the southwest separates it from the rest of the weir. The spillway on the northwest side can be crossed over a small bridge and leads to a renaissance fulling mill which was part of the Alegría complex.
In the early twentieth century, two floors were added to the building. This masks the original Moorish building. The warehouses and structure are what remains of the previous mill. The façade shows three levels. The lower level is made of stone, the two above it are brick and date from the 19th century.
The mill building consists of a large rectangular central room. It is roughly aligned from west to east, and has an apsidal ceiling on the east side. The room is divided into three rectangular spaces, roughly aligned north to south, i.e. in line with the course of the river. Each of these three spaces was over two channels that could move a separate wheel. Each wheel could turn two grinding stones. Four of these have been preserved in this area. The third space has lost its stones and now shows the shaft that remains from a water turbine.
The fulling mill is separated from the main mill by a spillway channel. The entrance door of the fulling mill can be dated to the sixteenth century. From 1910 to 1913, the spillway that separated the two mills was vaulted, and so the original mill and the fulling mill were joined by two upper floors of brick.
History
Background
The weir on which the Alegría mill
Document 2:::
Cell unroofing encompasses various methods used to isolate and expose the cell membrane of cells. Unlike the more common membrane extraction protocols performed with multiple steps of centrifugation (which aims to separate the membrane fraction from a cell lysate), cell unroofing aims to tear and preserve patches of the plasma membrane to perform in situ experiments using microscopy and biomedical spectroscopy.
History
The first observation of the bi-layer cell membrane was made in 1959 on a cell section using the electron microscope.
However, the first micrograph of the internal side of a cell was published in 1977 by M.V. Nermut. Professor John Heuser made substantial contributions to the field, imaging the detailed internal structure of the membrane and the cytoskeleton bound to it with extensive use of the electron microscope.
It was only after the development of atomic force microscope operated in liquid that it became possible to image cell membranes in almost-physiological conditions and to test their mechanical properties.
Methods
Freeze-fracturing of monolayers
Quick-freeze deep-etch electron microscopy and cryofixation
Sonication for atomic force microscopy
Single-cell unroofing
See also
Sonoporation
Lysis
Document 3:::
In botanical geography, the Saharo-Arabian region is a floristic region in the Holarctic kingdom, covered by hot deserts, semideserts and savanna. The region occupies the temperate parts of the Sahara desert, Sinai Peninsula, Arabian Peninsula (geographically defined), Southern Palestine and Lower Mesopotamia.
The region was proposed by Armen Takhtajan.
Flora
Plant life usually consists of only a handful of species, which are mostly found in natural depressions in the ground or on rocky pavements. Much of its flora is shared with the neighboring Mediterranean and Irano-Turanian regions of the Holarctic kingdom and the Sudano-Zambezian region of the Paleotropical kingdom.
However, about a quarter of the species, especially in the families Asteraceae, Brassicaceae and Chenopodiaceae, are endemic. Some of the endemic genera are Nucularia, Fredolia, Agathophora, Muricaria, Nasturtiopsis, Zilla, Oudneya, Foleyola, Lonchophora, Gymnarrhena, Lifago.
References
Document 4:::
The following is a list of ecoregions in Rwanda, according to the Worldwide Fund for Nature (WWF).
Terrestrial ecoregions
By major habitat type:
Tropical and subtropical moist broadleaf forests
Albertine Rift montane forests
Tropical and subtropical grasslands, savannas, and shrublands
Victoria Basin forest–savanna mosaic
Montane grasslands and shrublands
Ruwenzori-Virunga montane moorlands
Freshwater ecoregions
By bioregion:
Great Lakes
Lake Victoria Basin
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary distinguishing feature of Lepiota babruzalka that can be observed microscopically?
A. The length of the stem
B. The color of the gills
C. The variably shaped cystidia on the edges of the gills
D. The size of the cap
Answer:
|
C. The variably shaped cystidia on the edges of the gills
|
Relavent Documents:
Document 0:::
The Çocuktepe Dam is a gravity dam under construction on the Güzeldere River (a tributary of the Great Zab) in Çukurca district of Hakkâri Province, southeast Turkey. Under contract from Turkey's State Hydraulic Works, İnelsan İnşaat began construction on the dam in 2008 and a completion date has not been announced. Construction on the Gölgeliyamaç Dam immediately upstream began in 2008 as well but was cancelled due to poor geology.
The reported purpose of the dam is water storage, and it can also support a hydroelectric power station in the future. Another purpose of the dam, which has been widely reported in the Turkish press, is to reduce the freedom of movement of militants of the Kurdistan Workers' Party (PKK). Blocking and flooding valleys in close proximity to the Iraq–Turkey border is expected to help curb cross-border PKK smuggling and deny caves in which ammunition can be stored. A total of 11 dams along the border, seven in Şırnak Province and four in Hakkâri Province, were implemented for this purpose. In Hakkâri are the Gölgeliyamaç (since cancelled) and Çocuktepe Dams on the Güzeldere River and the Aslandağ and Beyyurdu Dams on the Bembo River. In Şırnak there are the Silopi Dam on the Hezil River and the Şırnak, Uludere, Balli, Kavşaktepe, Musatepe and Çetintepe Dams on the Ortasu River.
Construction was still ongoing as of March 2019.
See also
List of dams and reservoirs in Turkey
References
Document 1:::
Simple file verification (SFV) is a file format for storing CRC32 checksums of files to verify the integrity of files. SFV is used to verify that a file has not been corrupted, but it does not otherwise verify the file's authenticity. The .sfv file extension is usually used for SFV files.
Checksum
Files can become corrupted for a variety of reasons, including faulty storage media, errors in transmission, write errors during copying or moving, and software bugs. SFV verification ensures that a file has not been corrupted by comparing the file's CRC hash value to a previously calculated value. Due to the nature of hash functions, hash collisions may result in false positives, but the likelihood of collisions is usually negligible with random corruption. (The number of possible checksums is limited though large, so that with any checksum scheme many files will have the same checksum. However, the probability of a corrupted file having the same checksum as its original is exceedingly small, unless deliberately constructed to maintain the checksum.)
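A minimal sketch of generating and checking SFV-style CRC32 entries with Python's standard zlib module; the "filename CRC32-in-hex" line layout and the ';' comment convention follow common SFV practice, and the helper names are illustrative.

```python
import zlib

def crc32_of(path, chunk_size=65536):
    """CRC32 of a file, computed incrementally so large files need not
    be loaded into memory at once."""
    crc = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

def write_sfv(paths, sfv_path):
    # Conventional SFV layout: one "filename CRC32HEX" pair per line.
    with open(sfv_path, "w") as out:
        for p in paths:
            out.write(f"{p} {crc32_of(p):08X}\n")

def verify_sfv(sfv_path):
    ok = True
    with open(sfv_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(";"):   # ';' lines are comments
                continue
            name, expected = line.rsplit(None, 1)
            status = "OK" if f"{crc32_of(name):08X}" == expected.upper() else "CORRUPT"
            ok &= (status == "OK")
            print(f"{name}: {status}")
    return ok

# Example use: write_sfv(["release.bin"], "release.sfv"); verify_sfv("release.sfv")
```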
SFV cannot be used to verify the authenticity of files, as CRC32 is not a collision resistant hash function; even if the hash sum file is not tampered with, it is computationally trivial for an attacker to cause deliberate hash collisions, meaning that a malicious change in the file is not detected by a hash comparison. In cryptography, this attack is called a collision attack. For this reason, the md5sum and sha1sum utilities are often preferred in Unix operating systems, which use the MD5 and SHA-1 cryptographic hash functions respectively.
Even a single-bit error causes both SFV's CRC and md5sum's cryptographic hash to fail, requiring the entire file to be re-fetched.
The Parchive and rsync utilities are often preferred for verifying that a file has not been accidentally corrupted in transmission, since they can correct common small errors with a much shorter download.
Despite the weaknesses of the SFV format, it is
Document 2:::
Indirect DNA damage occurs when a UV-photon is absorbed in the human skin by a chromophore that does not have the ability to convert the energy into harmless heat very quickly. Molecules that do not have this ability have a long-lived excited state. This long lifetime leads to a high probability for reactions with other molecules—so-called bimolecular reactions. Melanin and DNA have extremely short excited state lifetimes in the range of a few femtoseconds (10−15s). The excited state lifetime of compounds used in sunscreens such as menthyl anthranilate, avobenzone or padimate O is 1,000 to 1,000,000 times longer than that of melanin, and therefore they may cause damage to living cells that come in contact with them.
The molecule that originally absorbs the UV-photon is called a "chromophore". Bimolecular reactions can occur either between the excited chromophore and DNA or between the excited chromophore and another species, to produce free radicals and reactive oxygen species. These reactive chemical species can reach DNA by diffusion and the bimolecular reaction damages the DNA (oxidative stress). Unlike direct DNA damage which causes sunburn, indirect DNA damage does not result in any warning signal or pain in the human body.
The bimolecular reactions that cause the indirect DNA damage produce, among other species, singlet oxygen (1O2), a reactive and harmful form of oxygen.
Location of the damage
Unlike direct DNA damage, which occurs in areas directly exposed to UV-B light, reactive chemical species can travel through the body and affect other areas—possibly even inner organs. The traveling nature of the indirect DNA damage can be seen in the fact that the malignant melanoma can occur in places that are not directly illuminated by the sun—in contrast to basal-cell carcinoma and squamous cell carcinoma, which appear only on directly illuminated locations on the body.
References
Document 3:::
International Ideographs Core (IICore) is a subset of up to ten thousand CJK Unified Ideographs characters, which can be implemented on devices with limited memories and capability that make it not feasible to implement the full ISO 10646/Unicode standard.
History
The IICore subset was initially proposed at the 21st meeting of the Ideographic Rapporteur Group (IRG) in Guilin, held from 17 to 20 November 2003, and was subsequently approved at the group's 22nd meeting in Chengdu in May 2004.
See also
Chinese character encoding
Han unification
References
Document 4:::
In computational complexity theory and quantum computing, Simon's problem is a computational problem that is proven to be solved exponentially faster on a quantum computer than on a classical (that is, traditional) computer. The quantum algorithm solving Simon's problem, usually called Simon's algorithm, served as the inspiration for Shor's algorithm. Both problems are special cases of the abelian hidden subgroup problem, which is now known to have efficient quantum algorithms.
The problem is set in the model of decision tree complexity or query complexity and was conceived by Daniel R. Simon in 1994. Simon exhibited a quantum algorithm that solves Simon's problem exponentially faster with exponentially fewer queries than the best probabilistic (or deterministic) classical algorithm. In particular, Simon's algorithm uses a linear number of queries and any classical probabilistic algorithm must use an exponential number of queries.
This problem yields an oracle separation between the complexity classes BPP (bounded-error classical query complexity) and BQP (bounded-error quantum query complexity). This is the same separation that the Bernstein–Vazirani algorithm achieves, and different from the separation provided by the Deutsch–Jozsa algorithm, which separates P and EQP. Unlike the Bernstein–Vazirani algorithm, Simon's algorithm's separation is exponential.
Because this problem assumes the existence of a highly-structured "black box" oracle to achieve its speedup, this problem has little practical value. However, without such an oracle, exponential speedups cannot easily be proven, since this would prove that P is different from PSPACE.
Problem description
Simon's problem considers access to a function f : {0,1}^n → {0,1}^n, as implemented by a black box or an oracle. This function is promised to be either a one-to-one function or a two-to-one function; if f is two-to-one, it is furthermore promised that two inputs x and y evaluate to the same value if and only if x and y differ by a fixed hidden bit string s, that is, x ⊕ y = s.
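A small classical Python sketch of this promise structure, assuming a hidden non-zero string s and an oracle built to be two-to-one exactly when inputs differ by s; the classical collision search shown here needs exponentially many queries in the worst case, whereas Simon's quantum algorithm recovers s with only a linear number of oracle queries.

```python
import random

n = 6
s = random.randrange(1, 2 ** n)   # hidden non-zero bit string; f will be two-to-one

# Build an oracle satisfying the promise f(x) == f(y) iff x XOR y == s:
# group inputs into pairs {x, x ^ s} and give every pair its own distinct label.
reps = sorted({min(x, x ^ s) for x in range(2 ** n)})
codes = random.sample(range(2 ** n), len(reps))
label = dict(zip(reps, codes))
f = {x: label[min(x, x ^ s)] for x in range(2 ** n)}

# Classical strategy: query inputs in random order until two outputs collide;
# the XOR of the colliding inputs is s.  This typically takes on the order of
# 2^(n/2) queries, versus O(n) oracle queries for Simon's quantum algorithm.
seen, queries, recovered = {}, 0, None
for x in random.sample(range(2 ** n), 2 ** n):
    queries += 1
    if f[x] in seen:
        recovered = seen[f[x]] ^ x
        break
    seen[f[x]] = x

print(f"hidden s = {s:0{n}b}, recovered = {recovered:0{n}b}, queries used = {queries}")
```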
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary purpose of the International Ideographs Core (IICore)?
A. To provide a complete set of CJK Unified Ideographs characters
B. To create a subset of characters for devices with limited capabilities
C. To replace the ISO 10646/Unicode standard entirely
D. To standardize all Chinese character encodings
Answer:
|
B. To create a subset of characters for devices with limited capabilities
|
Relavent Documents:
Document 0:::
Electrical/Electronics engineering technology (EET) is an engineering technology field that implements and applies the principles of electrical engineering. Like electrical engineering, EET deals with the "design, application, installation, manufacturing, operation or maintenance of electrical/electronic(s) systems." However, EET is a specialized discipline that has more focus on application, theory, applied design, and implementation, while electrical engineering may place a more generalized emphasis on theory and conceptual design. Electrical/Electronic engineering technology is the largest branch of engineering technology and includes a diverse range of sub-disciplines, such as applied design, electronics, embedded systems, control systems, instrumentation, telecommunications, and power systems.
Education
Accreditation
The Accreditation Board for Engineering and Technology (ABET) is the recognized organization for accrediting both undergraduate engineering and engineering technology programs in the United States.
Coursework
EET curricula can vary widely by institution type, degree type, program objective, and expected student outcome. Each year, however, ABET publishes a set of minimum criteria that a given EET program (either associate degree or bachelor's degree) must meet in order to maintain its ABET accreditation. These criteria may be classified as either general criteria, which apply to all ABET-accredited programs, or program criteria, which are discipline-specific.
Associate degree
Associate degree programs emphasize the practical field knowledge that is needed to maintain or troubleshoot existing electrical/electronic systems or to build and test new design prototypes.
Discipline-specific program outcomes include the application of circuit analysis and design, analog and digital electronics, computer programming, associated software, and relevant engineering standards
Coursework must be at a minimum algebra and trigo
Document 1:::
In particle physics, the crypton is a hypothetical superheavy particle, thought to exist in a hidden sector of string theory. It has been proposed as a candidate particle to explain the dark matter content of the universe. Cryptons arising in the hidden sector of a superstring-derived flipped SU(5) GUT model have been shown to be metastable with a lifetime exceeding the age of the universe. Their slow decays may provide a source for the ultra-high-energy cosmic rays (UHECR).
References
Document 2:::
The Hong Kong International Medical Devices and Supplies Fair is a trade fair organised by the Hong Kong Trade Development Council, held annually at the Hong Kong Convention and Exhibition Centre. The 2009 fair attracted over 150 exhibitors from 12 countries and regions. Several themed zones, including Medical Device, Medical Supplies and Disposables, and Tech Exchange, help buyers connect with the right suppliers.
Major exhibit categories
Accident and Emergency Equipment
Building Technology and Hospital Furniture
Chinese Medical Devices
Communication, Systems and Information Technology
Dental Equipment and Supplies
Diagnostics
Electromedical Equipment / Medical Technology
Laboratory Equipment
Medical Components and Materials
Medical Supplies and Disposables
Physiotherapy / Orthopaedic / Rehabilitation Technology
Textiles
Document 3:::
An upper tropospheric cyclonic vortex is a vortex, or a circulation with a definable center, that usually moves slowly from east-northeast to west-southwest and is prevalent during the Northern Hemisphere's warm season. Its circulations generally do not extend below in altitude, as it is an example of a cold-core low. A weak inverted wave in the easterlies is generally found beneath it, and it may also be associated with broad areas of high-level clouds. Downward development results in an increase of cumulus cloudiness and the appearance of circulation at ground level. In rare cases, a warm-core cyclone can develop in its associated convective activity, resulting in a tropical cyclone and a weakening and southwest movement of the nearby upper tropospheric cyclonic vortex. Symbiotic relationships can exist between tropical cyclones and the upper level lows in their wake, with the two systems occasionally leading to their mutual strengthening. When they move over land during the warm season, an increase in monsoon rains occurs.
History of research
Using charts of mean 200-hectopascal circulation for July through August (located above sea level) to locate the circumpolar troughs and ridges, trough lines extend over the eastern and central North Pacific and over the North Atlantic. Case studies of upper tropospheric cyclones in the Atlantic and Pacific have been performed using airplane reports (winds, temperatures and heights), radiosonde data, geostationary satellite cloud imagery, and cloud-tracked winds throughout the troposphere. It was determined they were the origin of upper tropospheric cold-core lows, or cut-off lows.
Characteristics
The tropical upper tropospheric cyclone has a cold core, meaning it is stronger aloft than at the Earth's surface, or stronger in areas of the troposphere with lower pressures. This is explained by the thermal wind relationship. It also means that a pool of cold air aloft is associated with the feature. If both an upper
Document 4:::
A cascade refrigeration cycle is a multi-stage thermodynamic cycle; a two-stage process is a typical example. The cascade cycle is often employed for devices such as ULT freezers.
In a cascade refrigeration system, two or more vapor-compression cycles with different refrigerants are used. The evaporation-condensation temperatures of each cycle are sequentially lower with some overlap to cover the total temperature drop desired, with refrigerants selected to work efficiently in the temperature range they cover. The low temperature system removes heat from the space to be cooled using an evaporator, and transfers it to a heat exchanger that is cooled by the evaporation of the refrigerant of the high temperature system. Alternatively, a liquid-to-liquid or similar heat exchanger may be used instead. The high-temperature system transfers heat to a conventional condenser that carries the entire heat output of the system and may be passive, fan, or water-cooled.
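A small worked sketch in Python of how the two stages combine, assuming idealized (Carnot-limited) stage COPs and illustrative temperatures; with ideal stages the cascade recovers the single-stage Carnot limit, and the practical benefit of a real cascade comes from letting each refrigerant operate only over the temperature range where it works efficiently.

```python
def carnot_cop(t_cold, t_hot):
    """Ideal (Carnot) refrigeration COP between two temperatures in kelvin."""
    return t_cold / (t_hot - t_cold)

def cascade_cop(cop_low, cop_high):
    # The low stage lifts Q_L with work Q_L / cop_low; the high stage must then
    # reject Q_L * (1 + 1/cop_low), so the combined COP works out to:
    return (cop_low * cop_high) / (cop_low + cop_high + 1)

# Illustrative temperatures: ~ -80 C cold space, an inter-stage exchanger, ambient.
t_cold, t_mid, t_hot = 193.0, 253.0, 308.0   # kelvin
low = carnot_cop(t_cold, t_mid)              # low-temperature stage
high = carnot_cop(t_mid, t_hot)              # high-temperature stage
print(f"stage COPs: {low:.2f} and {high:.2f}, cascade COP: {cascade_cop(low, high):.2f}")
print(f"single-stage Carnot COP over the full lift: {carnot_cop(t_cold, t_hot):.2f}")
# For ideal stages the two results coincide, as expected for reversible cycles.
```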
Cascade cycles may be separated by either being sealed in separated loops or in what is referred to as an "auto-cascade", where the gases are compressed as a mixture but separated as one refrigerant condenses into a liquid while the other continues as a gas through the rest of the cycle. Although an auto-cascade introduces several constraints on the design and operating conditions of the system that may reduce the efficiency, it is often used in small systems due to only requiring a single compressor or in cryogenic systems as it reduces the need for high-efficiency heat exchangers to prevent the compressors leaking heat into the cryogenic cycles. Both types can be used in the same system, generally with the separate cycles being the first stage(s) and the auto-cascade being the last stage.
Peltier coolers may also be cascaded into a multi-stage system to achieve lower temperatures. Here, the hot side of the first Peltier cooler is cooled by the cold side of the second Peltier cooler, w
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a "farm dam" as described in the text?
A. A type of plant
B. A water reservoir with a barrier
C. A folk song
D. An excavation site
Answer:
|
B. A water reservoir with a barrier
|
Relavent Documents:
Document 0:::
A lipoprotein is a biochemical assembly whose primary function is to transport hydrophobic lipid (also known as fat) molecules in water, as in blood plasma or other extracellular fluids. They consist of a triglyceride and cholesterol center, surrounded by a phospholipid outer shell, with the hydrophilic portions oriented outward toward the surrounding water and lipophilic portions oriented inward toward the lipid center. A special kind of protein, called apolipoprotein, is embedded in the outer shell, both stabilising the complex and giving it a functional identity that determines its role.
Plasma lipoprotein particles are commonly divided into five main classes, based on size, lipid composition, and apolipoprotein content. They are, in increasing size order: HDL, LDL, IDL, VLDL and chylomicrons. Subgroups of these plasma particles are primary drivers or modulators of atherosclerosis.
Many enzymes, transporters, structural proteins, antigens, adhesins, and toxins are sometimes also classified as lipoproteins, since they are formed by lipids and proteins.
Scope
Transmembrane lipoproteins
Some transmembrane proteolipids, especially those found in bacteria, are referred to as lipoproteins; they are not related to the lipoprotein particles that this article is about. Such transmembrane proteins are difficult to isolate, as they bind tightly to the lipid membrane, often require lipids to display the proper structure, and can be water-insoluble. Detergents are usually required to isolate transmembrane lipoproteins from their associated biological membranes.
Plasma lipoprotein particles
Because fats are insoluble in water, they cannot be transported on their own in extracellular water, including blood plasma. Instead, they are surrounded by a hydrophilic external shell that functions as a transport vehicle. The role of lipoprotein particles is to transport fat molecules, such as triglycerides, phospholipids, and cholesterol within the extracellular water of the body to all the cells and tissues of the body. The proteins included in the external shell of these particles, called apolipoproteins, are synthesized and secreted into the extracellular water by both the small intestine and liver cells. The external shell also contains phospholipids and cholesterol.
All cells use and rely on fats and cholesterol as building blocks to create the multiple membranes that cells use both to control internal water content and internal water-soluble elements and to organize their internal structure and protein enzymatic systems. The outer shell of lipoprotein particles have the hydrophilic groups of phospholipids, cholesterol, and apolipoproteins directed outward. Such characteristics make them soluble in the salt-water-based blood pool. Triglycerides and cholesteryl esters are carried internally, shielded from the water by the outer shell. The kind of apolipoproteins contained in the outer shell determines the functional identity of the lipoprotein particles. The interaction of these apolipoproteins with enzymes in the blood, with each other, or with specific proteins on the surfaces of cells, determines whether triglycerides and cholesterol will be added to or removed from the lipoprotein transport particles.
Characterization in human plasma
Structure
Lipoproteins are complex particles that have a central hydrophobic core of non-polar lipids, primarily cholesteryl esters and triglycerides. This hydrophobic core is surrounded by a hydrophilic membrane consisting of phospholipids, free cholesterol, and apolipoproteins. Plasma lipoproteins, found in blood plasma, are typically divided into five main classes based on size, lipid composition, and apolipoprotein content: HDL, LDL, IDL, VLDL and chylomicrons.
Functions
Metabolism
The handling of lipoprotein particles in the body is referred to as lipoprotein particle metabolism. It is divided into two pathways, exogenous and endogenous, depending in large part on whether the lipoprotein particles in question are composed chiefly of dietary (exogenous) lipids or whether they originated in the liver (endogenous), through de novo synthesis of triglycerides.
The hepatocytes are the main platform for the handling of triglycerides and cholesterol; the liver can also store certain amounts of glycogen and triglycerides. While adipocytes are the main storage cells for triglycerides, they do not produce any lipoproteins.
Exogenous pathway
Bile emulsifies fats contained in the chyme, then pancreatic lipase cleaves triglyceride molecules into two fatty acids and one 2-monoacylglycerol. Enterocytes readily absorb the small molecules from the chymus. Inside of the enterocytes, fatty acids and monoacylglycerides are transformed again into triglycerides. Then these lipids are assembled with apolipoprotein B-48 into nascent chylomicrons. These particles are then secreted into the lacteals in a process that depends heavily on apolipoprotein B-48. As they circulate through the lymphatic vessels, nascent chylomicrons bypass the liver circulation and are drained via the thoracic duct into the bloodstream.
In the blood stream, nascent chylomicron particles interact with HDL particles, resulting in HDL donation of apolipoprotein C-II and apolipoprotein E to the nascent chylomicron. The chylomicron at this stage is then considered mature. Via apolipoprotein C-II, mature chylomicrons activate lipoprotein lipase (LPL), an enzyme on endothelial cells lining the blood vessels. LPL catalyzes the hydrolysis of triglycerides that ultimately releases glycerol and fatty acids from the chylomicrons. Glycerol and fatty acids can then be absorbed in peripheral tissues, especially adipose and muscle, for energy and storage.
The hydrolyzed chylomicrons are now called chylomicron remnants. The chylomicron remnants continue circulating the bloodstream until they interact via apolipoprotein E with chylomicron remnant receptors, found chiefly in the liver. This interaction causes the endocytosis of the chylomicron remnants, which are subsequently hydrolyzed within lysosomes. Lysosomal hydrolysis releases glycerol and fatty acids into the cell, which can be used for energy or stored for later use.
Endogenous pathway
The liver is the central platform for the handling of lipids: it is able to store glycogen and triglycerides in its cells, the hepatocytes. Hepatocytes are also able to create triglycerides via de novo synthesis, and they produce bile from cholesterol. The intestines are responsible for absorbing dietary cholesterol and transferring it into the bloodstream.
In the hepatocytes, triglycerides and cholesteryl esters are assembled with apolipoprotein B-100 to form nascent VLDL particles. Nascent VLDL particles are released into the bloodstream via a process that depends upon apolipoprotein B-100.
In the blood stream, nascent VLDL particles come into contact with HDL particles; as a result, HDL particles donate apolipoprotein C-II and apolipoprotein E to the nascent VLDL particle. Once loaded with apolipoproteins C-II and E, the nascent VLDL particle is considered mature. VLDL particles circulate and encounter LPL expressed on endothelial cells. Apolipoprotein C-II activates LPL, causing hydrolysis of the VLDL particle and the release of glycerol and fatty acids. These products can be absorbed from the blood by peripheral tissues, principally adipose and muscle. The hydrolyzed VLDL particles are now called VLDL remnants or intermediate-density lipoproteins (IDLs). VLDL remnants can circulate and, via an interaction between apolipoprotein E and the remnant receptor, be absorbed by the liver, or they can be further hydrolyzed by hepatic lipase.
Hydrolysis by hepatic lipase releases glycerol and fatty acids, leaving behind IDL remnants, called low-density lipoproteins (LDL), which have a relatively high cholesterol content. LDL circulates and is absorbed by the liver and peripheral cells. Binding of LDL to its target tissue occurs through an interaction between the LDL receptor and apolipoprotein B-100 on the LDL particle. Absorption occurs through endocytosis, and the internalized LDL particles are hydrolyzed within lysosomes, releasing lipids, chiefly cholesterol.
Possible role in oxygen transport
Plasma lipoproteins may carry oxygen gas. This property is attributed to the crystalline hydrophobic structure of lipids, which provides a more suitable environment for O2 solubility than an aqueous medium.
Role in inflammation
Inflammation, a biological system response to stimuli such as the introduction of a pathogen, has an underlying role in numerous systemic biological functions and pathologies. This is a useful response by the immune system when the body is exposed to pathogens, such as bacteria in locations that will prove harmful, but can also have detrimental effects if left unregulated. It has been demonstrated that lipoproteins, specifically HDL, have important roles in the inflammatory process.
When the body is functioning under normal, stable physiological conditions, HDL has been shown to be beneficial in several ways. LDL contains apolipoprotein B (apoB), which allows LDL to bind to different tissues, such as the artery wall if the glycocalyx has been damaged by high blood sugar levels. If oxidised, the LDL can become trapped in the proteoglycans, preventing its removal by HDL cholesterol efflux. Normal functioning HDL is able to prevent the process of oxidation of LDL and the subsequent inflammatory processes seen after oxidation.
Lipopolysaccharide, or LPS, is the major pathogenic factor on the cell wall of Gram-negative bacteria. Gram-positive bacteria have a similar component, lipoteichoic acid (LTA). HDL has the ability to bind LPS and LTA, creating HDL-LPS complexes to neutralize the harmful effects in the body and clear the LPS from the body. HDL also has significant roles interacting with cells of the immune system to modulate the availability of cholesterol and modulate the immune response.
Under certain abnormal physiological conditions such as systemic infection or sepsis, the major components of HDL become altered. The composition and quantity of lipids and apolipoproteins change compared with normal physiological conditions: for example, HDL cholesterol (HDL-C), phospholipids, and apoA-I (a major lipoprotein in HDL that has been shown to have beneficial anti-inflammatory properties) decrease, while serum amyloid A increases. This altered composition of HDL is commonly referred to as acute-phase HDL in an acute-phase inflammatory response, during which time HDL can lose its ability to inhibit the oxidation of LDL. In fact, this altered composition of HDL is associated with increased mortality and worse clinical outcomes in patients with sepsis.
Classification
By density
Lipoproteins may be classified into five major groups, listed below from larger and less dense to smaller and denser. Lipoproteins are larger and less dense when the ratio of fat to protein is higher. They are classified on the basis of electrophoresis, ultracentrifugation, and nuclear magnetic resonance spectroscopy via the Vantera Analyzer.
Chylomicrons carry triglycerides (fat) from the intestines to the liver, to skeletal muscle, and to adipose tissue.
Very-low-density lipoproteins (VLDL) carry (newly synthesised) triglycerides from the liver to adipose tissue.
Intermediate-density lipoproteins (IDL) are intermediate between VLDL and LDL. They are not usually detectable in the blood when fasting.
Low-density lipoproteins (LDL) carry 3,000 to 6,000 fat molecules (phospholipids, cholesterol, triglycerides, etc.) around the body. LDL particles are sometimes referred to as "bad" lipoprotein because concentrations of two kinds of LDL (sd-LDL and LPA) correlate with atherosclerosis progression. In healthy individuals, most LDL is large and buoyant (lb LDL).
large buoyant LDL (lb LDL) particles
small dense LDL (sd LDL) particles
Lipoprotein(a) (LPA) is a lipoprotein particle of a certain phenotype
High-density lipoproteins (HDL) collect fat molecules from the body's cells/tissues and take them back to the liver. HDLs are sometimes referred to as "good" lipoprotein because higher concentrations correlate with low rates of atherosclerosis progression and/or regression.
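As a rough illustration of the density-based grouping described above, the short sketch below assigns a particle to a class using approximate textbook density cut-offs in g/mL; the boundary values are assumptions for illustration and vary slightly between sources.

```python
def classify_lipoprotein(density_g_per_ml: float) -> str:
    """Assign a plasma lipoprotein class from its hydrated density.

    The cut-offs below are approximate textbook values (assumed here for
    illustration); laboratory definitions differ slightly between sources.
    """
    if density_g_per_ml < 0.95:
        return "chylomicron"
    elif density_g_per_ml < 1.006:
        return "VLDL"
    elif density_g_per_ml < 1.019:
        return "IDL"
    elif density_g_per_ml < 1.063:
        return "LDL"
    elif density_g_per_ml <= 1.210:
        return "HDL"
    return "lipid-poor / non-lipoprotein fraction"


if __name__ == "__main__":
    # One example value from each assumed density band
    for d in (0.93, 1.00, 1.01, 1.04, 1.12):
        print(d, classify_lipoprotein(d))
```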
For young healthy research subjects (~70 kg / 154 lb), average compositional values for each class (percentages given as % dry weight) have been tabulated across the individuals studied.
However, these data are not necessarily reliable for any one individual or for the general clinical population.
Alpha and beta
It is also possible to classify lipoproteins as "alpha" and "beta", according to the classification of proteins in serum protein electrophoresis. This terminology is sometimes used in describing lipid disorders such as abetalipoproteinemia.
Subdivisions
Lipoproteins, such as LDL and HDL, can be further subdivided into subspecies, isolated through a variety of methods. These subspecies are distinguished by density or by the proteins they carry. While research is still ongoing, it is becoming clear that the subspecies differ in their apolipoprotein, protein, and lipid content and have different physiological roles. For example, within the HDL lipoprotein subspecies, a large number of proteins are involved in general lipid metabolism. However, HDL subspecies also contain proteins involved in the following functions: hemostasis and the clotting cascade (including fibrinogen), inflammatory and immune responses (including the complement system, proteolysis inhibitors, acute-phase response proteins, and the LPS-binding protein), heme and iron metabolism, platelet regulation, vitamin binding, and general transport.
Research
High levels of lipoprotein(a) are a significant risk factor for atherosclerotic cardiovascular diseases via mechanisms associated with inflammation and thrombosis. The links of mechanisms between different lipoprotein isoforms and risk for cardiovascular diseases, lipoprotein synthesis, regulation, and metabolism, and related risks for genetic diseases are under active research, as of 2022.
Document 1:::
Londina Illustrata. Graphic and Historic Memorials of Monasteries, Churches, Chapels, Schools, Charitable Foundations, Palaces, Halls, Courts, Processions, Places of Early Amusement and Modern & Present Theatres, In the Cities and Suburbs of London & Westminster was a book published in two volumes by Robert Wilkinson in 1819 and 1825. It had initially been released, with William Herbert, as groups of engravings between 1808 and 1819, featuring topographical illustrations of the cities of London and Westminster, the county of Middlesex, and some areas south of the River Thames then in Surrey, such as Southwark, by some of the foremost engravers and illustrators of the day.
Most of the plates carry names of the draughtsman and engraver. A few early artists are included such as Wenceslaus Hollar. More recent draughtsmen included Robert Blemmell Schnebbelie, Frederick Nash, William Capon, George Jones, H. Gardner, George Shepherd, William Goodman, C.J.M. Whichelo, John Carter, Fellows, C. Westmacott, E. Burney, Bartholomew Howlett, Thomas H. Shepherd, Banks, Ravenhill, William Oram. Engravers include James Stow, T. Dale, Bartholomew Howlett, John Whichelo, W. Wise, Samuel Rawle, T. Bourne, H. Cook, M. Springsguth, Wenceslaus Hollar, Joseph Skelton, Israel Silvestre, Richard Sawyer, S. Springsguth junr., Taylor.
Document 2:::
Albert Bijaoui (born in Monastir in 1943) is a French astronomer and former student of the École Polytechnique (X 1962), renowned for image processing in astrophysics and its application to cosmology. He prepared his PhD thesis at the Paris Observatory under the supervision of André Lallemand and defended it at the Université Denis Diderot (Paris VII) in March 1971.
He has been a corresponding member of the French Academy of sciences since 1997.
Biography
Trainee and then Research Associate at the CNRS at the Paris Observatory, then at the Nice Observatory, he became an Astronomer at the Nice Observatory in 1972. He has been Honorary Astronomer at the Observatoire de la Côte d'Azur since 2015. He was Director of the Centre de Dépouillement des Clichés Astronomiques at the Institut National d'Astronomie et de Géophysique between 1973 and 1981. He was also director of the Cassiopeia laboratory (UMR CNRS/OCA) between 2004 and 2007.
He experimented with André Lallemand's electronic camera in 1970 at the 1.93 m telescope at the Haute-Provence Observatory.
The asteroid Bijaoui was named in his honour.
Scientific work
His research work has focused on various themes related to astronomical imaging. During the preparation of his thesis, he contributed to the development of electronography with the study and interpretation of the properties of André Lallemand's electronic camera. In parallel, he was involved in its exploitation for astrophysical purposes. With the creation in 1973 of the Centre de Dépouillement des Clichés Astronomiques at Nice Observatory, he was involved in the development of new methods for the analysis of astrophysical data and in the creation of software to exploit them for the French astronomical community. A system for the analysis of astronomical images resulted from this work and was widely disseminated in the astronomical community. With the commissioning of INAG's Schmidt telescope on the Calern plateau near Caussols in the Alpes-Maritimes, INAG ha
Document 3:::
The Scanning Multichannel Microwave Radiometer (SMMR) [pronounced simmer] was a five-frequency microwave radiometer flown on the Seasat and Nimbus 7 satellites. Both were launched in 1978, with the Seasat mission lasting less than six months until failure of the primary bus. The Nimbus 7 SMMR lasted from 25 October 1978 until 20 August 1987. It measured dual-polarized microwave radiances, at 6.63, 10.69, 18.0, 21.0, and 37.0 GHz, from the Earth's atmosphere and surface. Its primary legacy has been the creation of areal sea-ice climatologies for the Arctic and Antarctic.
The final few months of operation were particularly fortunate, as they allowed the radiometers and their products to be calibrated against the first results from the SSM/I.
References
Jezek, K.C., C. Merry, D. Cavalieri, S. Grace, J. Bedner, D. Wilson and D. Lampkin 1991: Comparison between SMMR and SSM/I passive microwave data collected over the Antarctic Ice Sheet. Byrd Polar Research Center, The Ohio State University, Columbus, OH, BPRC Technical Report Number 91-03.
External links
http://nsidc.org/data/nsidc-0071.html
https://web.archive.org/web/20060930044535/http://podaac.jpl.nasa.gov:2031/SENSOR_DOCS/smmr.html
IEEE
Document 4:::
In mathematics, the Tonelli–Hobson test gives sufficient criteria for a function ƒ on R2 to be an integrable function. It is often used to establish that Fubini's theorem may be applied to ƒ. It is named for Leonida Tonelli and E. W. Hobson.
More precisely, the Tonelli–Hobson test states that if ƒ is a real-valued measurable function on R2, and either of the two iterated integrals
\[ \int_{\mathbb{R}}\left(\int_{\mathbb{R}} |f(x,y)|\,dx\right)dy \quad\text{or}\quad \int_{\mathbb{R}}\left(\int_{\mathbb{R}} |f(x,y)|\,dy\right)dx \]
is finite, then ƒ is Lebesgue-integrable on R2.
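A short worked example (added for illustration and not part of the source, using the standard Gaussian integral \(\int_{\mathbb{R}} e^{-x^{2}}\,dx = \sqrt{\pi}\)): for f(x, y) = e^{−(x²+y²)}, the first iterated integral of |f| is
\[ \int_{\mathbb{R}}\left(\int_{\mathbb{R}} |f(x,y)|\,dx\right)dy = \int_{\mathbb{R}} e^{-y^{2}}\left(\int_{\mathbb{R}} e^{-x^{2}}\,dx\right)dy = \left(\int_{\mathbb{R}} e^{-x^{2}}\,dx\right)^{2} = \pi < \infty, \]
so the test applies: f is Lebesgue-integrable on R2, and Fubini's theorem may be used to evaluate the double integral in either order.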
References
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary function of lipoproteins in the body?
A. To transport oxygen in the blood
B. To transport hydrophobic lipid molecules in water
C. To provide structural support to cells
D. To store energy in muscle tissues
Answer:
|
B. To transport hydrophobic lipid molecules in water
|
Relevant Documents:
Document 0:::
The Mandarin paradox is an ethical parable used to illustrate the difficulty of fulfilling moral obligations when moral punishment is unlikely or impossible, leading to moral disengagement. It has been used to underscore the fragility of ethical standards when moral agents are separated by physical, cultural, or other distance, especially as facilitated by globalization. It was first posed by French writer Chateaubriand in "The Genius of Christianity" (1802):
I ask my own heart, I put to myself this question: "If thou couldst by a mere wish kill a fellow-creature in China, and inherit his fortune in Europe, with the supernatural conviction that the fact would never be known, wouldst thou consent to form such a wish?"
The paradox is famously used to foreshadow the character development of the arriviste Eugène de Rastignac in Balzac's novel Père Goriot. Rastignac asks Bianchon if he recalls the paradox, to which Bianchon first replies that he is "at [his] thirty-third mandarin," but then states that he would refuse to take an unknown man's life regardless of circumstance. Rastignac wrongly attributes the quote to Jean-Jacques Rousseau, an error that propagated to later writings.
In fiction
The Mandarin (novel) by José Maria de Eça de Queirós
Button, Button by Richard Matheson (plus movie The Box (2009 film))
The Count of Monte Cristo by Alexandre Dumas
See also
Ring of Gyges
References
Bibliography
Champsaur F., Le Mandarin, Paris, P. Ollendorff, 1895‐1896 (réimprimé sous le nom L’Arriviste, Paris, Albin Michel, 1902)
Schneider L. Tuer le Mandarin // Le Gaulois, 18 juillet 1926, no 17818
Paul Ronal. Tuer le mandarin // Revue de littérature comparée 10, no. 3 (1930): 520–23.
E. Latham // Une citation de Rousseau // Mercure de France, 1er juin 1947, no 1006, p. 393.
Barbey B. «Tuer le Mandarin» // Mercure de France, 1er septembre 1947, no 1009
Laurence W. Keates. Mysterious Miraculous Mandarin: Origins, Literary Paternity, Implications in Ethics // Revue de
Document 1:::
TCL NXTPAPER is a display technology developed by TCL Corporation that attempts to replicate the experience of reading on paper to improve eye comfort. TCL claims that the NXTPAPER technology reduces blue light emissions, which are often linked to digital eye strain and sleep disturbances. It also includes anti-glare properties, designed to make screen reading more akin to reading paper, further reducing strain on the eyes.
TCL has implemented NXTPAPER technology in various products, including tablets and smartphones. Devices featuring NXTPAPER displays include the TCL NXTPAPER 10s and TCL's 40 NXTPAPER series smartphones.
Document 2:::
An osculating circle is a circle that best approximates the curvature of a curve at a specific point. It is tangent to the curve at that point and has the same curvature as the curve at that point. The osculating circle provides a way to understand the local behavior of a curve and is commonly used in differential geometry and calculus.
More formally, in differential geometry of curves, the osculating circle of a sufficiently smooth plane curve at a given point p on the curve has been traditionally defined as the circle passing through p and a pair of additional points on the curve infinitesimally close to p. Its center lies on the inner normal line, and its curvature defines the curvature of the given curve at that point. This circle, which is the one among all tangent circles at the given point that approaches the curve most tightly, was named circulus osculans (Latin for "kissing circle") by Leibniz.
The center and radius of the osculating circle at a given point are called the center of curvature and radius of curvature of the curve at that point. A geometric construction was described by Isaac Newton in his Principia.
Nontechnical description
Imagine a car moving along a curved road on a vast flat plane. Suddenly, at one point along the road, the steering wheel locks in its present position. Thereafter, the car moves in a circle that "kisses" the road at the point of locking. The curvature of the circle is equal to that of the road at that point. That circle is the osculating circle of the road curve at that point.
Mathematical description
Let γ(s) be a regular parametric plane curve, where s is the arc length (the natural parameter). This determines the unit tangent vector T(s), the unit normal vector N(s), the signed curvature k(s) and the radius of curvature R(s) at each point, via
\[ T(s) = \gamma'(s), \qquad T'(s) = k(s)\,N(s), \qquad R(s) = \frac{1}{\lvert k(s)\rvert}. \]
Suppose that P is a point on γ where k ≠ 0. The corresponding center of curvature is the point Q at distance R along N, in the same direction if k is positive and in the opposite direction if k is negative.
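As a brief worked example (added for illustration, not from the source), for a curve given explicitly as y = f(x) the radius of curvature takes the standard form
\[ R(x) = \frac{\left(1 + f'(x)^{2}\right)^{3/2}}{\lvert f''(x) \rvert}, \]
so for the parabola f(x) = x² at the origin, R(0) = (1 + 0)^{3/2} / 2 = 1/2: the osculating circle there has radius 1/2, and its center, lying along the inner normal, is at (0, 1/2).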
Document 3:::
In a low-IF receiver, the radio frequency (RF) signal is mixed down to a non-zero low or moderate intermediate frequency (IF). Typical frequency values are a few megahertz (instead of 33–40 MHz) for TV, and even lower frequencies, typically 120–130 kHz (instead of 10.7–10.8 MHz or 13.45 MHz) in the case of FM radio receivers or 455–470 kHz for AM radio (MW/LW/SW) receivers. Low-IF receiver topologies have many of the desirable properties of zero-IF architectures, but avoid the DC offset and 1/f noise problems.
The use of a non-zero IF re-introduces the image issue. However, when there are relatively relaxed image and neighbouring channel rejection requirements they can be satisfied by carefully designed low-IF receivers. Image signal and unwanted blockers can be rejected by quadrature down-conversion (complex mixing) and subsequent filtering.
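The sketch below is an illustrative NumPy toy (all parameter values are assumptions, not taken from any particular receiver design) showing the quadrature down-conversion step: a real RF input containing a wanted channel and its image is multiplied by a complex local oscillator, after which the wanted signal sits at +f_IF and the image at −f_IF, so a subsequent complex filter can reject the image.

```python
import numpy as np

# Illustrative parameters (assumptions for this toy example)
fs = 8_000_000            # sample rate, 8 MHz
f_lo = 1_000_000          # local-oscillator frequency, 1 MHz
f_if = 125_000            # chosen low IF, 125 kHz
n = 8192
t = np.arange(n) / fs

# Real RF input: a wanted channel above the LO and a weaker image channel below it
rf = np.cos(2 * np.pi * (f_lo + f_if) * t) + 0.5 * np.cos(2 * np.pi * (f_lo - f_if) * t)

# Quadrature down-conversion: multiply by a complex LO (the I and Q branches together)
bb = rf * np.exp(-2j * np.pi * f_lo * t)

# Crude low-pass filter in the frequency domain to discard the 2*f_lo mixing products
spec = np.fft.fft(bb)
freqs = np.fft.fftfreq(n, 1 / fs)
spec[np.abs(freqs) > 400_000] = 0.0

# The wanted signal now sits at +f_if and the image at -f_if; they are distinguishable
# only because the signal is complex, which is what makes image rejection possible.
mag = np.abs(spec) / n
print("level at +f_if:", round(mag[np.argmin(np.abs(freqs - f_if))], 3))
print("level at -f_if:", round(mag[np.argmin(np.abs(freqs + f_if))], 3))
```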
This technique is now widely used in the tiny FM receivers incorporated into MP3 players and mobile phones and is becoming commonplace in both analog and digital TV receiver designs. Using advanced analog- and digital signal processing techniques, cheap, high quality receivers using no resonant circuits at all are now possible.
Document 4:::
High winds can blow railway trains off tracks and cause accidents.
Dangers of high winds
High winds can cause problems in a number of ways:
blow trains off the tracks
blow trains or wagons along the tracks and cause collisions
cause cargo to blow off trains which can damage objects outside the railway or which other trains can collide with
cause pantographs and overhead wiring to tangle
cause trees and other objects to fall onto the railway.
Preventative measures
Risks from high winds can be reduced by:
wind fences akin to snow sheds
lower profile of carriages
lowered centre of gravity of vehicles
reduction in train speed or cancellation, at high winds
a wider rail gauge
improved overhead wiring with:
regulated tension rather than fixed terminations
shorter catenary spans
solid conductors
By country
Australia
1928 – 47 wagons blown along line at Tocumwal
1931 – Kandos – wind blows level crossing gates closed in front of motor-cyclist
1943 – Hobart, Tasmania; Concern that wind will blow over doubledeck trams on gauge if top deck enclosed.
2010 – Marla, South Australia; Small tornado blows over train.
Austria
1910 – Trieste (now in Italy) – train blown down embankment.
China
Lanxin High-Speed Railway#Wind shed risk
February 28, 2007 – Wind blows 10 passenger rail cars off the track near Turpan, China.
Denmark
Great Belt Bridge rail accident. On 2 January 2019 a DSB express passenger train was hit by a semi-trailer from a passing cargo train on the western bridge of the Great Belt Fixed Link during Storm Alfrida, killing eight people and injuring 16.
Germany
Rügen narrow-gauge railway, 20 October 1936: derailment of a train, five injured
India
One reason for choosing broad gauge in India was greater stability in high winds.
Ireland
On the night of 30 January 1925, strong winds derailed carriages of a train crossing the Owencarrow Viaduct of the gauge Londonderry and Lough Swilly Railway.
Japan
Inaho
Amarube Viaduct
1895
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the six parameters used to classify a rock mass in the Rock Mass Rating (RMR) system?
A. Uniaxial compressive strength, RQD, spacing of discontinuities, condition of discontinuities, groundwater conditions, orientation of discontinuities
B. Density, porosity, uniaxial compressive strength, spacing of discontinuities, moisture content, orientation of discontinuities
C. Rock quality designation, ground stability, water saturation, temperature, orientation of discontinuities, cohesion
D. Uniaxial compressive strength, permeability, spacing of discontinuities, weathering, groundwater conditions, cohesion
Answer:
|
A. Uniaxial compressive strength, RQD, spacing of discontinuities, condition of discontinuities, groundwater conditions, orientation of discontinuities
|
Relevant Documents:
Document 0:::
Green building certification systems are a set of rating systems and tools that are used to assess a building or a construction project's performance from a sustainability and environmental perspective. Such ratings aim to improve the overall quality of buildings and infrastructures, integrate a life cycle approach in its design and construction, and promote the fulfillment of the United Nations Sustainable Development Goals by the construction industry. Buildings that have been assessed and are deemed to meet a certain level of performance and quality, receive a certificate proving this achievement.
According to the Global Status Report 2017 published by United Nations Environment Programme (UNEP) in coordination with the International Energy Agency (IEA), buildings and construction activities together contribute to 36% of the global energy use and 39% of carbon dioxide (CO2) emissions. Through certification, the associated environmental impacts during the lifecycle of buildings and other infrastructures (typically design, construction, operation and maintenance) could be better understood and mitigated. Currently, more than 100 building certification systems exist around the world. The most popular building certification models today are BREEAM (UK), LEED (US), and DGNB (Germany).
History
In the mid-1980s, environmental issues were in the news and in public attention due to several international disasters such as the Bhopal disaster (1984), the Chernobyl nuclear explosion (1986) and the Exxon Valdez tanker spill (1989). Lifecycle assessments (LCAs) were starting to gain traction from their initial stages in the 1970s to the 1980s, and it was in 1991 that the term was first coined. With increasing cognizance of environmental impacts due to human activities, a more comprehensive assessment of buildings utilizing the principles of LCA was much sought after. In 1990, the first Sustainability Assessment Method for buildings, BREEAM, was released. In 1993, Rick Fedrizzi, Davi
Document 1:::
Kappa Ophiuchi, Latinized from κ Ophiuchi, is a star in the equatorial constellation Ophiuchus. It is a suspected variable star with an average apparent visual magnitude of 3.20, making it visible to the naked eye and one of the brighter members of this constellation. Based upon parallax measurements made during the Hipparcos mission, it is situated at a distance of around from Earth. The overall brightness of the star is diminished by 0.11 magnitudes due to extinction from intervening matter along the line of sight.
The spectrum of this star matches a stellar classification of K2 III, with the luminosity class of 'III' indicating this is a giant star that has exhausted the hydrogen at its core and evolved away from the main sequence of stars like the Sun. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. It is 19% more massive than the Sun, but the outer envelope has expanded to around 11 times the Sun's radius. With its enlarged size, it is radiating 51 times the luminosity of the Sun from its outer atmosphere at an effective temperature of 4,449 K. This is cooler than the Sun's surface and gives Kappa Ophiuchi the orange-hued glow of a K-type star.
Although designated as a variable star, observations with the Hipparcos satellite showed a variation of no more than 0.02 in magnitude. In designating this as a suspected variable star, it is possible that Kappa Ophiuchi was mistaken for Chi Ophiuchi, which is a variable star. Kappa Ophiuchi belongs to an evolutionary branch known as the red clump, making it a clump giant. The surface abundance of elements other than hydrogen and helium, what astronomers term the star's metallicity, is similar to the abundances of those elements in the Sun.
Document 2:::
Immersion chillers work by circulating a cooling fluid (usually tap water from a garden hose or faucet) through a copper/stainless steel coil that is placed directly in the hot wort. As the cooling fluid runs through the coil it absorbs and carries away heat until the wort has cooled to the desired temperature.
The advantage of using a copper or stainless steel immersion chiller is the lower risk of contamination versus other methods when used in an amateur or homebrewing environment. The clean chiller is placed directly in the still boiling wort and thus sanitized before the cooling process begins.
See also
Brewing#Wort cooling
coolship, alternate equipment for cooling
References
Document 3:::
Salonga National Park (French: Parc National de la Salonga) is a national park in the Democratic Republic of the Congo located in the Congo River basin. It is Africa's largest tropical rainforest reserve, covering about 36,000 km2. It extends into the provinces of Mai Ndombe, Equateur, Kasaï and Sankuru. In 1984, the national park was inscribed on the UNESCO World Heritage List for its protection of a large swath of relatively intact rainforest and its important habitat for many rare species. In 1999, the site was listed as endangered due to poaching and housing construction. Following the improvement in its state of conservation, the site was removed from the endangered list in 2021.
Geography
The park is in an area of rainforest about halfway between Kinshasa, the capital, and Kisangani. There are no roads and most of the park is accessible only by river. Sections of the national park are almost completely inaccessible and have never been systematically explored. The southern region inhabited by the Iyaelima people is accessible via the Lokoro River, which flows through the center and northern parts of the park, and the Lula River in the south.
The Salonga River meanders in a generally northwest direction through the Salonga National Park to its confluence with the Busira River.
History
The Salonga National Park was established as the Tshuapa National Park in 1956, and gained its present boundaries with a 1970 presidential decree by President Mobutu Sese Seko. It was registered as a UNESCO World Heritage Site in 1984. Due to the civil war in the eastern half of the country, it was added to the List of World Heritage in Danger in 1999.
The park is co-managed by the Institut Congolais pour la Conservation de la Nature and the World Wide Fund for Nature since 2015. Extensive consultation is ongoing, with the two main populations living within the park; the Iyaelima, the last remaining residents of the park and the Kitawalistes, a religious sect who insta
Document 4:::
EXAPT (a portmanteau of "Extended Subset of APT") is a production-oriented programming language that allows users to generate NC programs with control information for machining tools and facilitates decision-making for production-related issues that may arise during various machining processes.
EXAPT was first developed to address industrial requirements. Through the years, the company created additional software for the manufacturing industry. Today, EXAPT offers a suite of SAAS products and services for the manufacturing industry.
The trade name, EXAPT, is most commonly associated with the CAD/CAM-System, production data, and tool management software of the German company EXAPT Systemtechnik GmbH based in Aachen, DE.
General
EXAPT is a modularly built programming system for all NC machining operations, such as
Drilling
Turning
Milling
Turn-Milling
Nibbling
Flame-, laser-, plasma- and water jet cutting
Wire eroding
Operations with industrial robots
Due to the modular structure, the main product groups, EXAPTcam and EXAPTpdo, are gradually expandable and provide individual software for the manufacturing industry that can be used on its own or in combination with an existing IT environment.
Functionality
EXAPTcam meets the requirements for NC planning, especially for cutting operations such as turning, drilling, and milling up to 5-axis simultaneous machining, and new process technologies, tool concepts, and machine concepts are constantly incorporated. In NC programming, data from different sources such as 3D CAD models, drawings, or tables can be used. The possibilities of NC programming range from language-oriented to feature-oriented NC programming. The integrated EXAPT knowledge database and intelligent, scalable automatisms support the user. EXAPT NC planning also covers the generation of production information such as clamping and tool plans, presetting data, or time calculations. The realistic simulation possibilities of NC planning and NC control data provi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary function of the enzyme transposase in conservative transposition?
A. To replicate the transposon
B. To cut and paste the transposon into the genome
C. To repair DNA gaps after transposon insertion
D. To encode additional proteins for transposon movement
Answer:
|
B. To cut and paste the transposon into the genome
|
Relevant Documents:
Document 0:::
The Cloaca Maxima or, less often, Maxima Cloaca, is one of the world's earliest sewage systems. Its name is related to that of Cloacina, a Roman goddess. Built in Ancient Rome during either the Roman Kingdom or the early Roman Republic, it was constructed to drain local marshes and remove waste from the city. It carried effluent to the River Tiber, which ran beside the city. The sewer started at the Forum Augustum and ended at the Ponte Rotto and Ponte Palatino. It began as an open-air canal, but it developed into a much larger sewer over the course of time. Agrippa renovated and reconstructed much of the sewer. This would not be the only development: by the first century AD all eleven Roman aqueducts were connected to the sewer. After the Roman Empire fell, the sewer remained in use. By the 19th century, it had become a tourist attraction. Some parts of the sewer are still used today. During its heyday, it was highly valued as a sacred symbol of Roman culture and Roman engineering.
Construction and history
According to tradition, it may have initially been constructed around 600 BC under the orders of the king of Rome, Tarquinius Priscus. He ordered Etruscan workers and the plebeians to construct the sewers. Before constructing the Cloaca Maxima, Priscus and his son Tarquinius Superbus worked to transform the land by the Roman forum from a swamp into solid building ground, thus reclaiming the Velabrum. In order to achieve this, they filled it up with 10,000–20,000 cubic meters of soil, gravel, and debris.
At the beginning of the sewer's life it consisted of open-air channels lined with bricks centered around a main pipe. At this stage it might have had no roof. However, wooden holes spread throughout the sewer indicate that wooden bridges may have been built over it, which possibly functioned as a roof. Alternatively, the holes could have functioned as a support for the scaffolding needed to construct the sewer. The Cloaca Maxima may al
Document 1:::
Optimal virulence is a concept relating to the ecology of hosts and parasites. One definition of virulence is the host's parasite-induced loss of fitness. The parasite's fitness is determined by its success in transmitting offspring to other hosts. For about 100 years, the consensus was that virulence decreased and parasitic relationships evolved toward symbiosis. This was even called the law of declining virulence despite being a hypothesis, not even a theory. It has been challenged since the 1980s and has been disproved.
A pathogen that is too restrained will lose out in competition to a more aggressive strain that diverts more host resources to its own reproduction. However, the host, being the parasite's resource and habitat in a way, suffers from this higher virulence. This might induce faster host death, and act against the parasite's fitness by reducing probability to encounter another host (killing the host too fast to allow for transmission). Thus, there is a natural force providing pressure on the parasite to "self-limit" virulence.
The idea is, then, that there exists an equilibrium point of virulence, where parasite's fitness is highest. Any movement on the virulence axis, towards higher or lower virulence, will result in lower fitness for the parasite, and thus will be selected against.
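This equilibrium argument is often made quantitative with a simple trade-off model (a standard textbook formulation sketched here for illustration; it is not taken from the source). If the transmission rate β rises with virulence α, while virulence, host background mortality μ, and recovery γ all shorten the infectious period, the parasite's fitness can be written as the basic reproduction number
\[ R_{0}(\alpha) = \frac{\beta(\alpha)}{\mu + \alpha + \gamma}, \]
and the favoured ("optimal") virulence is the α that maximises R₀, i.e. the point where
\[ \frac{d\beta}{d\alpha} = \frac{\beta(\alpha)}{\mu + \alpha + \gamma}. \]
Moving to either side of this point lowers R₀, matching the selection argument above.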
Mode of transmission
Paul W. Ewald has explored the relationship between virulence and mode of transmission. He came to the conclusion that virulence tends to remain especially high in waterborne and vector-borne infections, such as cholera and dengue. Cholera is spread through sewage and dengue through mosquitos. In the case of respiratory infections, the pathogen depends on an ambulatory host to survive. It must spare the host long enough to find a new host. Water- or vector-borne transmission circumvents the need for a mobile host. Ewald is convinced that the crowding of field hospitals and trench warfare provided an easy route to transmission that evolved the
Document 2:::
Obstacle avoidance, in robotics, is a critical aspect of autonomous navigation and control systems. It is the capability of a robot or an autonomous system/machine to detect and circumvent obstacles in its path to reach a predefined destination. This technology plays a pivotal role in various fields, including industrial automation, self-driving cars, drones, and even space exploration. Obstacle avoidance enables robots to operate safely and efficiently in dynamic and complex environments, reducing the risk of collisions and damage.
For a robot or autonomous system to successfully navigate through obstacles, it must be able to detect such obstacles. This is most commonly done through the use of sensors, which allow the robot to process its environment, make a decision on what it must do to avoid an obstacle, and carry out that decision with the use of its effectors, or tools that allow a robot to interact with its environment.
Approaches
There are several methods for robots or autonomous machines to carry out their decisions in real-time. Some of these methods include sensor-based approaches, path planning algorithms, and machine learning techniques.
Sensor-based
One of the most common approaches to obstacle avoidance is the use of various sensors, such as ultrasonic, LiDAR, radar, sonar, and cameras. These sensors allow an autonomous machine to carry out a simple three-step process: sense, think, and act. They take in distance measurements to surrounding objects and provide the robot with data about its surroundings, enabling it to detect obstacles and calculate their distances. The robot can then adjust its trajectory to navigate around these obstacles while maintaining its intended path. All of this is done in real time and can be practically and effectively used in most applications of obstacle avoidance.
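A minimal sketch of that sense–think–act loop is shown below; read_distances and set_velocity are hypothetical stand-ins for real sensor and motor drivers, and the clearance threshold and speeds are assumed values, so this illustrates the control structure rather than any real robot API.

```python
import time

SAFE_DISTANCE_M = 0.5   # assumed clearance threshold for this illustration

def read_distances():
    """Stand-in for a sensor driver: distances (metres) at left/front/right."""
    # A real robot would query ultrasonic/LiDAR hardware here.
    return {"left": 1.2, "front": 0.4, "right": 2.0}

def set_velocity(linear, angular):
    """Stand-in for a motor driver: forward speed (m/s) and turn rate (rad/s)."""
    print(f"cmd: linear={linear:.2f} m/s, angular={angular:.2f} rad/s")

def control_step():
    # Sense: gather distance readings from the environment
    d = read_distances()
    # Think: decide whether the path ahead is clear, and which way to turn if not
    if d["front"] > SAFE_DISTANCE_M:
        linear, angular = 0.3, 0.0      # path clear: drive forward
    elif d["left"] > d["right"]:
        linear, angular = 0.0, 0.8      # obstacle ahead: turn toward the open left side
    else:
        linear, angular = 0.0, -0.8     # otherwise turn toward the open right side
    # Act: send the decision to the effectors
    set_velocity(linear, angular)

if __name__ == "__main__":
    for _ in range(3):                  # a few iterations of the real-time loop
        control_step()
        time.sleep(0.1)
```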
While this method works well under most circumstances, there are cases where more advanced techniques could be useful and appropriate for efficiently reaching an en
Document 3:::
The Jesus Incident (1979) is the second science fiction novel set in the Destination: Void universe by the American author Frank Herbert and poet Bill Ransom. It is a sequel to Destination: Void (1965), and has two sequels: The Lazarus Effect (1983) and The Ascension Factor (1988).
Plot introduction
The book takes place at an indeterminate time following the events in Destination: Void. At the end of Destination: Void the crew of the ship had succeeded in creating an artificial consciousness. The new conscious being, now known as 'Ship', gains a level of awareness that allows it to manipulate space and time. Ship instantly transports itself to a planet which it has decided the crew will colonize, christening it "Pandora". The first book ends with a demand from Ship for the crew to learn how to WorShip or how to establish a relationship with Ship, a godlike being.
The action of the book is divided between two settings, the internal spaces of Ship which is orbiting Pandora and the settlements on the planet. While the original crew of Ship, as described in Destination: Void, were cloned human beings from the planet Earth, by the time of The Jesus Incident, the crew has become a mixed bag of peoples from various cultures that have been accepted as crew members by Ship when it visited their planet as well as people who have been conceived and born on the ship. Evidently Ship has shown up at a number of planets as the suns of those planets were going nova, the implication being that these planets were other, failed experiments by Ship to establish a relationship with human beings. Ship refers to these as replays of human history, suggesting Ship itself has manipulated human history time and time again.
Ship's charge for humans to decide how to WorShip still remains unsatisfied. In the opening chapters, Ship reveals that Pandora will be a final test for the human race. Ship awakens the Chaplain/Psychiatrist Raja Flattery (part of the original Destination: Void crew)
Document 4:::
A rod end bearing, also known as a heim joint (North America) or rose joint (UK and elsewhere), is a mechanical articulating joint. Such joints are used on the ends of control rods, steering links, tie rods, or anywhere a precision articulating joint is required, and where a clevis end (which requires perfect alignment between the components) is unsuitable. A ball swivel with an opening through which a bolt or other attaching hardware may pass is pressed into a circular casing with a threaded shaft attached. The threaded portion may be either male or female. The heim joint's advantage is that the ball insert permits the rod or bolt passing through it to be misaligned to a limited degree. A link terminated in two heim joints permits misalignment of their attached shafts.
A linkage with heim joints on both of its ends transmits force in one direction only: directly along its long axis. It cannot transmit forces in any other direction, or any moments. This makes these types of joints useful for statically determinate structures.
History
The spherical rod end bearing was developed by Nazi Germany during World War II. When one of the first German planes to be shot down by the British in early 1940 was examined, they found this joint in use in the aircraft's control systems. Following this discovery, the Allied governments gave the H.G. Heim Company an exclusive patent to manufacture these joints in North America, while in the UK the patent passed to Rose Bearings Ltd. The ubiquity of these manufacturers in their respective markets led to the terms heim joint and rose joint becoming synonymous with their product. After the patents ran out the common names stuck, although rosejoint remains a registered trademark of Minebea Mitsumi Inc., successor to Rose Bearings Ltd. Originally used in aircraft, the rod end bearing may be found in cars, trucks, race cars, motorcycles, lawn tractors, boats, industrial machines, go-karts, radio-control helicopters, formula cars, and man
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What type of non-covalent interaction is primarily responsible for the aggregation of non-polar molecules in aqueous solutions to minimize their exposure to water?
A. Hydrogen bonding
B. Electrostatic interactions
C. Hydrophobic effect
D. Van der Waals forces
Answer:
|
C. Hydrophobic effect
|
Relevant Documents:
Document 0:::
A log-structured filesystem is a file system in which data and metadata are written sequentially to a circular buffer, called a log. The design was first proposed in 1988 by John K. Ousterhout and Fred Douglis and first implemented in 1992 by Ousterhout and Mendel Rosenblum for the Unix-like Sprite distributed operating system.
Rationale
Conventional file systems lay out files with great care for spatial locality and make in-place changes to their data structures in order to perform well on optical and magnetic disks, which tend to seek relatively slowly.
The design of log-structured file systems is based on the hypothesis that this will no longer be effective because ever-increasing memory sizes on modern computers would lead to I/O becoming write-heavy since reads would be almost always satisfied from memory cache. A log-structured file system thus treats its storage as a circular log and writes sequentially to the head of the log.
This has several important side effects:
Write throughput on optical and magnetic disks is improved because they can be batched into large sequential runs and costly seeks are kept to a minimum.
The structure is naturally suited to media with append-only zones or pages, such as flash storage and shingled magnetic recording HDDs.
Writes create multiple, chronologically-advancing versions of both file data and meta-data. Some implementations make these old file versions nameable and accessible, a feature sometimes called time-travel or snapshotting. This is very similar to a versioning file system.
Recovery from crashes is simpler. Upon its next mount, the file system does not need to walk all its data structures to fix any inconsistencies, but can reconstruct its state from the last consistent point in the log.
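The write path can be illustrated with a toy append-only log (a simplified sketch under assumed record and index formats, not the Sprite implementation): every update, data or metadata, is serialized and appended at the current head of the log, while an in-memory index records where the newest version of each block lives; older versions stay in the log, which is what makes the snapshot-like access described above possible.

```python
import json

class ToyLog:
    """Toy log-structured store: all writes go to the head of one append-only log."""

    def __init__(self, path):
        self.path = path
        self.index = {}            # (file, block) -> byte offset of the newest version
        open(path, "wb").close()   # start with an empty log

    def write_block(self, name, block_no, data: bytes):
        record = json.dumps({"file": name, "block": block_no,
                             "data": data.decode()}).encode() + b"\n"
        with open(self.path, "ab") as f:   # sequential append only, never in place
            f.seek(0, 2)                   # position at the current head of the log
            offset = f.tell()
            f.write(record)
        self.index[(name, block_no)] = offset   # older versions remain in the log

    def read_block(self, name, block_no) -> bytes:
        offset = self.index[(name, block_no)]
        with open(self.path, "rb") as f:
            f.seek(offset)
            return json.loads(f.readline())["data"].encode()

log = ToyLog("toy.log")
log.write_block("a.txt", 0, b"v1")
log.write_block("a.txt", 0, b"v2")      # an "overwrite" is appended, not updated in place
print(log.read_block("a.txt", 0))        # b'v2'; b'v1' is still present in the log
```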
Log-structured file systems, however, must reclaim free space from the tail of the log to prevent the file system from becoming full when the head of the log wraps around to meet it. The tail can release space and move fo
Document 1:::
Collective motion is defined as the spontaneous emergence of ordered movement in a system consisting of many self-propelled agents. It can be observed in everyday life, for example in flocks of birds, schools of fish, herds of animals and also in crowds and car traffic. It also appears at the microscopic level: in colonies of bacteria, motility assays and artificial self-propelled particles. The scientific community is trying to understand the universality of this phenomenon. In particular it is intensively investigated in statistical physics and in the field of active matter. Experiments on animals, biological and synthesized self-propelled particles, simulations and theories are conducted in parallel to study these phenomena. One of the most famous models that describes such behavior is the Vicsek model introduced by Tamás Vicsek et al. in 1995.
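A compact sketch of the Vicsek update rule is given below (an illustrative NumPy implementation; the particle number, box size, speed, interaction radius, and noise amplitude are assumed values chosen only for demonstration): each particle moves at constant speed and, at each time step, adopts the average heading of its neighbours within radius r plus a uniform random perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters
N, L = 100, 10.0              # number of particles, periodic box size
v0, r, eta = 0.3, 1.0, 0.4    # speed, interaction radius, noise amplitude

pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)

def vicsek_step(pos, theta):
    # Pairwise separations with periodic (wrap-around) boundaries
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neighbours = (d ** 2).sum(-1) <= r ** 2          # includes the particle itself
    # Alignment: average heading of neighbours via the mean direction vector
    mean_x = (np.cos(theta)[None, :] * neighbours).sum(1)
    mean_y = (np.sin(theta)[None, :] * neighbours).sum(1)
    noise = rng.uniform(-eta / 2, eta / 2, size=N)
    new_theta = np.arctan2(mean_y, mean_x) + noise
    # Streaming: every particle moves at constant speed along its new heading
    new_pos = (pos + v0 * np.column_stack((np.cos(new_theta), np.sin(new_theta)))) % L
    return new_pos, new_theta

for _ in range(200):
    pos, theta = vicsek_step(pos, theta)

# Polar order parameter: ~1 means fully aligned collective motion, ~0 means disorder
order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print("polar order parameter:", round(float(order), 2))
```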
Collective behavior of Self-propelled particles
Just like biological systems in nature, self-propelled particles also respond to external gradients and show collective behavior. Micromotors or nanomotors can interact with self-generated gradients and exhibit schooling and exclusion behavior. For example, Ibele, et al. demonstrated that silver chloride micromotors, in the presence of UV light, interact with each other at high concentrations and form schools. Similar behavior can also be observed with titanium dioxide microparticles. Silver orthophosphate microparticles exhibit transitions between schooling and exclusion behaviors in response to ammonia, hydrogen peroxide, and UV light. This behavior can be used to design a NOR gate since different combinations of the two different stimuli (ammonia and UV light) generate different outputs. Oscillations between schooling and exclusion behaviors are also tunable via changes in hydrogen peroxide concentration. The fluid flows generated by these oscillations are strong enough to transport microscale cargo and can even direct the assembly of close-packed colloidal cryst
Document 2:::
Taylor–Maccoll flow refers to the steady flow behind a conical shock wave that is attached to a solid cone. The flow is named after G. I. Taylor and J. W. Maccoll, who described the flow in 1933, guided by an earlier work of Theodore von Kármán.
Mathematical description
Consider a steady supersonic flow past a solid cone with a given semi-vertical angle. A conical shock wave can form in this situation, with the vertex of the shock wave lying at the vertex of the solid cone. If it were a two-dimensional problem, i.e., a supersonic flow past a wedge, then the incoming stream would be deflected through the wedge angle upon crossing the shock wave, so that streamlines behind the shock wave would be parallel to the wedge sides. Such a simple turning of streamlines is not possible for the three-dimensional case. After passing through the shock wave, the streamlines are curved and only asymptotically do they approach the generators of the cone. The curving of streamlines is accompanied by a gradual increase in density and decrease in velocity, in addition to the increments/decrements effected at the shock wave.
The direction and magnitude of the velocity immediately behind the oblique shock wave is given by the weak branch of the shock polar. This particularly suggests that for each value of the incoming Mach number there exists a maximum cone angle beyond which the shock polar does not provide a solution; in that case the conical shock wave will have detached from the solid surface (see Mach reflection). These detached cases are not considered here. The flow immediately behind the oblique conical shock wave is typically supersonic, although when conditions are close to the detachment limit it can be subsonic. The supersonic flow behind the shock wave will become subsonic as it evolves downstream.
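For reference, the shock polar mentioned above is commonly summarised (for the planar oblique shock; added here as a standard textbook relation, not taken from the source) by the θ–β–M relation
\[ \tan\theta = 2\cot\beta\,\frac{M_{1}^{2}\sin^{2}\beta - 1}{M_{1}^{2}\left(\gamma + \cos 2\beta\right) + 2}, \]
where θ is the flow deflection angle, β the shock angle, M₁ the upstream Mach number and γ the ratio of specific heats. Its weak-shock branch gives the state just behind the shock, and for a given M₁ it admits no solution once θ exceeds a maximum deflection, which corresponds to the detachment condition described above.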
Since all incident streamlines intersect the conical shock wave at the same angle, the intensity of the shock wave is constant. This particularly means that entropy jump across the shock wave is also constant t
Document 3:::
Processing is a free graphics library and integrated development environment (IDE) built for the electronic arts, new media art, and visual design communities with the purpose of teaching non-programmers the fundamentals of computer programming in a visual context.
Processing uses the Java programming language, with additional simplifications such as additional classes and aliased mathematical functions and operations. It also provides a graphical user interface for simplifying the compilation and execution stage.
The Processing language and IDE have been the precursor to other projects including Arduino and Wiring.
History
The project was initiated in 2001 by Casey Reas and Ben Fry, both formerly of the Aesthetics and Computation Group at the MIT Media Lab. In 2012, they started the Processing Foundation along with Daniel Shiffman, who joined as a third project lead. Johanna Hedva joined the Foundation in 2014 as Director of Advocacy.
Originally, Processing had used the domain proce55ing.net, because the processing domain was taken; Reas and Fry eventually acquired the domain processing.org and moved the project to it in 2004. While the original name had a combination of letters and numbers, it was always officially referred to as processing, but the abbreviated term p5 is still occasionally used (e.g. in "p5.js") in reference to the old domain name.
In 2012 the Processing Foundation was established and received 501(c)(3) nonprofit status, supporting the community around the tools and ideas that started with the Processing Project. The foundation encourages people around the world to meet annually in local events called Processing Community Day.
Features
Processing includes a sketchbook, a minimal alternative to an integrated development environment (IDE) for organizing projects.
Every Processing sketch is actually a subclass of the PApplet Java class (formerly a subclass of Java's built-in Applet) which implements most of the Processing language's features
Document 4:::
A spica splint is a type of orthopedic splint used to immobilize the thumb and/or wrist while allowing the other digits freedom to move. It is used to provide support for thumb injuries (ligament instability, sprain or muscle strain), gamekeeper's thumb, osteoarthritis, de Quervain's syndrome or fractures of the scaphoid, lunate, or first metacarpal. It is also suitable for post-operative use or after removal of a hand/thumb cast.
References
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary function of a spica splint?
A. To immobilize the entire hand
B. To provide support for thumb injuries and allow movement of other digits
C. To enhance flexibility in the wrist
D. To assist in strengthening the hand muscles
Answer:
|
B. To provide support for thumb injuries and allow movement of other digits
|
Relevant Documents:
Document 0:::
The NSLU2 (Network Storage Link for USB 2.0 Disk Drives) is a network-attached storage (NAS) device made by Linksys introduced in 2004 and discontinued in 2008. It makes USB flash memory and hard disks accessible over a network using the SMB protocol (also known as Windows file sharing or CIFS). It was superseded mainly by the NAS200 (enclosure type storage link) and in another sense by the WRT600N and WRT300N/350N which both combine a Wi-Fi router with a storage link.
The device runs a modified version of Linux and by default, formats hard disks with the ext3 filesystem, but a firmware upgrade from Linksys adds the ability to use NTFS and FAT32 formatted drives with the device for better Windows compatibility. The device has a web interface from which the various advanced features can be configured, including user and group permissions and networking options.
Hardware
The device has two USB 2.0 ports for connecting hard disks and uses an ARM-compatible Intel XScale IXP420 CPU. In models manufactured prior to around April 2006, Linksys had underclocked the processor to 133 MHz, though a simple hardware modification to remove this restriction is possible. Later models (circa. May 2006) are clocked at the rated speed of 266 MHz. The device includes 32 MB of SDRAM, and 8 MB of flash memory. It also has a 100 Mbit/s Ethernet network connection. The NSLU2 is fanless, making it completely silent.
User community
As shipped, the device runs a customised version of Linux. Linksys was required to release their source code as per the terms of the GNU General Public License. Due to the availability of source code, the NSLU2's use of well-documented commodity components, and its relatively low price, there are several community projects centered around it, including hardware modifications, alternative firmware images, and alternative operating systems with varying degrees of reconfiguration.
Hardware modifications
Unofficial hardware modifications include:
Doubling the clock freq
Document 1:::
The Porto Alegre Botanical Garden is a foundation of the state of Rio Grande do Sul located on Salvador França street in Porto Alegre, Brazil.
History
The project for a botanical garden in Porto Alegre dates back to the beginning of the 19th century, when Dom João VI, after creating the Rio de Janeiro Botanical Garden, sent seedlings to Porto Alegre to establish a similar park in the city. Unfortunately these seedlings never reached the capital, remaining in Rio Grande, where they were planted. The agriculturist Paul Schoenwald subsequently donated a plot of land to the state government to establish a green area, but the project was unsuccessful.
A third attempt would be made in 1882, when councilman Francisco Pinto de Souza presented a proposal for scientific exploitation of the area then known as the Várzea de Petrópolis, providing a garden and a promenade. Considered utopian, the plan was terminated and lay dormant for decades, only returning to consideration in the mid-20th century.
In 1953, Law 2136 authorized the sale of an area of 81.57 hectares, of which 50 hectares were to be used to create a park or botanical garden. A committee, which included the prominent teacher and religious figure Teodoro Luís, was formed to develop the project, which began in 1957 with the first planting of selected species: a collection of palm trees, conifers and succulents. When opened to the public on September 10, 1958, it already featured nearly 600 species.
Soon after, in 1962, the cactus greenhouse was inaugurated, and in the 1970s the botanical garden was integrated into the Zoobotanical Foundation (Fundação Zoobotânica), along with the Park Zoo and the Museum of Natural Sciences. In this period the collection of trees began, with emphasis on families of ecological importance (Myrtaceae, Rutaceae, Myrsinaceae, Bignoniaceae, Fabales, Zingiberales, among others), thematic groups (condiments and scented plants) and forest formations typical of the state, and a program of expeditions was launched to c
Document 2:::
The Agricultural Technology Research Program (ATRP) is part of the Aerospace, Transportation and Advanced Systems Laboratory of the Georgia Tech Research Institute. It was founded in 1973 to work with Georgia agribusiness, especially the poultry industry, to develop new technologies and adapt existing ones for specialized industrial needs. The program's goal is to improve productivity, reduce costs, and enhance safety and health through technological innovations.
ATRP conducts state-sponsored and contract research for industry and government agencies. Researchers focus efforts on both immediate and long-term industrial needs, including advanced robotic systems for materials handling, machine-vision developments for grading and control, improved wastewater treatment technologies, and biosensors for rapid microbial detection. With guidance from the Georgia Poultry Federation, ATRP also conducts a variety of outreach activities to provide the industry with timely information and technical assistance. Researchers have complementary backgrounds in mechanical, electrical, computer, environmental, and safety engineering; physics; and microbiology.
ATRP is one of the oldest and largest agricultural technology research and development programs in the nation, and is conducted in cooperation with the Georgia Poultry Federation with funding from the Georgia General Assembly.
Research and development areas
Robotics and automation systems
Robotics/automation research studies focus heavily on integrated, "intelligent" automation systems. These systems offer major opportunities to further enhance productivity in the poultry and food processing industries. They incorporate advanced sensors, robotics, and computer simulation and control technologies in an integrated package and tackle a number of unique challenges in trying to address specific industrial needs.
Advanced imaging and sensor concepts
Advanced imaging and sensor concepts research studies focus on the design of syste
Document 3:::
A loader is a heavy equipment machine used in construction to move or load materials such as soil, rock, sand, demolition debris, etc. into or onto another type of machinery (such as a dump truck, conveyor belt, feed-hopper, or railroad car).
There are many types of loader, which, depending on design and application, are variously called a bucket loader, end loader, front loader, front-end loader, payloader, high lift, scoop, shovel dozer, skid-steer, skip loader, tractor loader or wheel loader.
Description
A loader is a type of tractor, usually wheeled, sometimes on tracks, that has a front-mounted wide bucket connected to the end of two booms (arms) to scoop up loose material from the ground, such as dirt, sand or gravel, and move it from one place to another without pushing the material across the ground. A loader is commonly used to move a stockpiled material from ground level and deposit it into an awaiting dump truck or into an open trench excavation.
The loader assembly may be a removable attachment or permanently mounted. Often the bucket can be replaced with other devices or tools—for example, many can mount forks to lift heavy pallets or shipping containers, and a hydraulically opening "clamshell" bucket allows a loader to act as a light dozer or scraper. The bucket can also be augmented with devices like a bale grappler for handling large bales of hay or straw.
Large loaders, such as the Kawasaki 95ZV-2, John Deere 844K, ACR 700K Compact Wheel Loader, Caterpillar 950H, Volvo L120E, Case 921E, or Hitachi ZW310 usually have only a front bucket and are called front loaders, whereas small loader tractors are often also equipped with a small backhoe and are called backhoe loaders or loader backhoes or JCBs, after the company that first claimed to have invented them. Other companies like CASE in America and Whitlock in the UK had been manufacturing excavator loaders well before JCB.
The largest loader in the world is LeTourneau L-2350. Currently these la
Document 4:::
Intel Shell was an unfinished Intel building located in Austin, Texas. It was imploded on February 25, 2007, and the Austin United States Courthouse now stands in its place.
Halting the building
Construction was halted in March 2001, when Intel abruptly stopped the project, and the unfinished building stood abandoned from March 2001 to 2004.
Aftermath
A few days after the implosion, David Weaver tried to sell bottles labeled "Intel Shell Inside", filled with leftover debris from the demolition, for $20.
In 2009, construction of a courthouse began on the site and was completed in 2012; the Austin United States Courthouse stands there today.
References
External links
Intel shell implosion, picture gallery
Intel Shell Demolition - 02/25/07, 2007 video
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What phenomenon describes the behavior of wavepackets launched through disordered media that eventually return to their starting points?
A. Quantum tunneling effect
B. Quantum boomerang effect
C. Quantum entanglement effect
D. Quantum displacement effect
Answer:
|
B. Quantum boomerang effect
|
Relavent Documents:
Document 0:::
Lift-induced drag, induced drag, vortex drag, or sometimes drag due to lift, in aerodynamics, is an aerodynamic drag force that occurs whenever a moving object redirects the airflow coming at it. This drag force occurs in airplanes due to wings or a lifting body redirecting air to cause lift and also in cars with airfoil wings that redirect air to cause a downforce. It is symbolized as , and the lift-induced drag coefficient as .
For a constant amount of lift, induced drag can be reduced by increasing airspeed. A counter-intuitive effect of this is that, up to the speed for minimum drag, aircraft need less power to fly faster. Induced drag is also reduced when the wingspan is greater, or for wings with wingtip devices.
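Both effects follow from the standard induced-drag relation for a finite wing (a textbook approximation quoted here for illustration, assuming a roughly elliptical lift distribution; the symbols are introduced only for this example):

\[
C_{D,i} = \frac{C_L^2}{\pi e\,AR}, \qquad
D_i = \frac{L^2}{\tfrac{1}{2}\rho V^2\,\pi e\,b^2}
\]

where \(AR\) is the aspect ratio, \(e\) the span efficiency factor, \(b\) the wingspan, \(\rho\) the air density and \(V\) the airspeed. At constant lift \(L\), the induced drag falls with the square of the airspeed and with the square of the span.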
Explanation
The total aerodynamic force acting on a body is usually thought of as having two components, lift and drag. By definition, the component of force parallel to the oncoming flow is called drag; and the component perpendicular to the oncoming flow is called lift. At practical angles of attack the lift greatly exceeds the drag.
Lift is produced by the changing direction of the flow around a wing. The change of direction results in a change of velocity (even if there is no speed change), which is an acceleration. To change the direction of the flow therefore requires that a force be applied to the fluid; the total aerodynamic force is simply the reaction force of the fluid acting on the wing.
An aircraft in slow flight at a high angle of attack will generate an aerodynamic reaction force with a high drag component. By increasing the speed and reducing the angle of attack, the lift generated can be held constant while the drag component is reduced. At the optimum angle of attack, total drag is minimised. If speed is increased beyond this, total drag will increase again due to increased profile drag.
Vortices
When producing lift, air below the wing is at a higher pressure than the air pressure above the wing. On a wing of finite span, this p
Document 1:::
Confusion of the inverse, also called the conditional probability fallacy or the inverse fallacy, is a logical fallacy whereupon a conditional probability is equated with its inverse; that is, given two events A and B, the probability of A happening given that B has happened is assumed to be about the same as the probability of B given A, when there is actually no evidence for this assumption. More formally, P(A|B) is assumed to be approximately equal to P(B|A).
Examples
Example 1
In one study, physicians were asked to give the chances of malignancy with a 1% prior probability of occurring. A test can detect 80% of malignancies and has a 10% false positive rate. What is the probability of malignancy given a positive test result? Approximately 95 out of 100 physicians responded the probability of malignancy would be about 75%, apparently because the physicians believed that the chances of malignancy given a positive test result were approximately the same as the chances of a positive test result given malignancy.
The correct probability of malignancy given a positive test result as stated above is 7.5%, derived via Bayes' theorem:
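With the numbers given above (1% prior, 80% sensitivity, 10% false-positive rate), the computation referred to is:

\[
P(\text{malignant}\mid +) = \frac{0.80\times 0.01}{0.80\times 0.01 + 0.10\times 0.99} = \frac{0.008}{0.107} \approx 7.5\%
\]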
Other examples of confusion include:
Hard drug users tend to use marijuana; therefore, marijuana users tend to use hard drugs (the first probability is marijuana use given hard drug use, the second is hard drug use given marijuana use).
Most accidents occur within 25 miles from home; therefore, you are safest when you are far from home.
Terrorists tend to have an engineering background; so, engineers have a tendency towards terrorism.
For other errors in conditional probability, see the Monty Hall problem and the base rate fallacy. Compare to illicit conversion.
Example 2
In order to identify individuals having a serious disease in an early curable form, one may consider screening a large group of people. While the benefits are obvious, an argument against such screenings is the disturbance caused by false positive screening r
Document 2:::
A guard byte is a part of a computer program's memory that helps software developers find buffer overflows while developing the program.
Principle
When a program is compiled for debugging, all memory allocations are prefixed and postfixed by guard bytes. Special memory allocation routines may then perform additional tasks to determine unwanted read and write attempts outside the allocated memory. These extra bytes help to detect that the program is writing into (or even reading from) inappropriate memory areas, potentially causing buffer overflows. In case of accessing these bytes by the program's algorithm, the programmer is warned with information assisting them to locate the problem.
Checking for the inappropriate access to the guard bytes may be done in two ways:
by setting a memory breakpoint on a condition of write and/or read to those bytes, or
by pre-initializing the guard bytes with specific values and checking the values upon deallocation.
The first way is possible only with a debugger that handles such breakpoints, but significantly increases the chance of locating the problem. The second way does not require any debuggers or special environments and can be done even on other computers, but the programmer is alerted about the overflow only upon the deallocation, which is sometimes quite late.
Because guard bytes require additional code to be executed and additional memory to be allocated, they are used only when the program is compiled for debugging. When compiled as a release build, neither the guard bytes nor the routines that work with them are used.
Example
A programmer wants to allocate a buffer of 100 bytes of memory while debugging. The system memory allocating routine will allocate 108 bytes instead, adding 4 leading and 4 trailing guard bytes, and return a pointer shifted by the 4 leading guard bytes to the right, hiding them from the programmer. The programmer should then work with the received pointer without the knowledge of the presence of
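The mechanism can be sketched in a few lines of Python (a toy illustration only; real guard-byte support lives inside the C runtime's debug heap, and the names `debug_alloc`/`debug_free` are invented here):

```python
GUARD = b"\xFD\xFD\xFD\xFD"   # 4-byte sentinel pattern used for both guard regions
_heap = {}                    # handle -> full buffer, including the hidden guard bytes

def debug_alloc(size):
    """Allocate `size` usable bytes, silently surrounded by guard bytes."""
    buf = bytearray(GUARD) + bytearray(size) + bytearray(GUARD)
    view = memoryview(buf)[len(GUARD):len(GUARD) + size]   # what the caller sees
    _heap[id(view)] = (buf, size)
    return view

def debug_free(view):
    """On deallocation, verify that neither guard region was touched."""
    buf, size = _heap.pop(id(view))
    if bytes(buf[:len(GUARD)]) != GUARD:
        raise RuntimeError("underflow: leading guard bytes were modified")
    if bytes(buf[len(GUARD) + size:]) != GUARD:
        raise RuntimeError("overflow: trailing guard bytes were modified")

# The caller asks for 100 bytes and works only with the returned view.
p = debug_alloc(100)
p[0:3] = b"abc"               # in-bounds write: fine
buf, _ = _heap[id(p)]
buf[104] = 0x41               # simulate a one-byte overflow past the end of the buffer
try:
    debug_free(p)
except RuntimeError as err:
    print(err)                # overflow: trailing guard bytes were modified
```

This mirrors the second checking strategy described above: the guard bytes are pre-initialized to a known pattern and only verified when the buffer is deallocated.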
Document 3:::
A basal cell is a general cell type that is present in many forms of epithelial tissue throughout the body. Basal cells are located between the basement membrane and the remainder of the epithelium, effectively functioning as an anchor for the epithelial layer and an important mechanism in the maintenance of intraorgan homeostasis.
Basal cells can interact with surrounding cells including neurons, the basement membrane, columnar epithelium, and underlying mesenchymal cells. They also engage in interactions with dendritic, lymphocytic, and inflammatory cells, with the majority of these interactions occurring in the lateral intercellular gap between basal cells.
Basal cells have important health implications since the most common types of skin cancer are basal cell and squamous cell carcinomas. More than 1 million instances of these cancers, referred to as non-melanoma skin cancers (NMSC) are expected to be diagnosed in the United States each year, and the incidence is rapidly increasing. Basal and squamous cell malignancies, while seldom metastatic, can cause significant local damage and disfigurement, affecting large sections of soft tissue, cartilage, and bone.
Location
Basal cells are located in various tissues throughout the body. They are located at the bottom of epithelial tissues, generally situated directly on top of the basal lamina, above the basement membrane and below the remainder of the epithelium. Examples include:
Epidermal cells in the stratum basale
Airway basal cells, which are respiratory cells located in the respiratory epithelium (found in decreasing concentrations as airway diameter decreases)
Basal cells of prostate glands
Basal cells of the Gastrointestinal tract mucosal layer
Structure
Regardless of their specific location, basal cells generally share a similar basic structure. They are all usually either cuboidal, polyhedral or pyramidal shaped cells with enlarged nuclei and minimal cytoplasm. Basal cells are bound to each
Document 4:::
An adaptive algorithm is an algorithm that changes its behavior at the time it is run, based on the information available and on an a priori defined reward mechanism (or criterion). Such information could be the history of recently received data, information on the available computational resources, or other run-time acquired (or a priori known) information related to the environment in which it operates.
Among the most used adaptive algorithms is the Widrow-Hoff’s least mean squares (LMS), which represents a class of stochastic gradient-descent algorithms used in adaptive filtering and machine learning. In adaptive filtering the LMS is used to mimic a desired filter by finding the filter coefficients that relate to producing the least mean square of the error signal (difference between the desired and the actual signal).
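A minimal LMS sketch (NumPy; the step size `mu`, the filter length, and the system-identification setup below are illustrative choices, not part of the original description):

```python
import numpy as np

def lms_filter(x, d, num_taps, mu):
    """Adapt FIR weights w so that the filter output tracks the desired signal d."""
    w = np.zeros(num_taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        x_n = x[n - num_taps + 1:n + 1][::-1]   # most recent sample first
        y[n] = w @ x_n                          # filter output
        e[n] = d[n] - y[n]                      # error against the desired signal
        w += 2 * mu * e[n] * x_n                # Widrow-Hoff stochastic gradient step
    return w, y, e

# Example: identify an unknown 3-tap FIR system from its noisy output.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
unknown = np.array([0.5, -0.3, 0.1])
d = np.convolve(x, unknown)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, _, _ = lms_filter(x, d, num_taps=3, mu=0.005)
print(np.round(w, 2))                           # converges to approximately [0.5, -0.3, 0.1]
```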
For example, stable partition using no additional memory runs in O(n lg n) time, but given O(n) memory it can run in O(n) time. As implemented by the C++ Standard Library, stable_partition is adaptive: it acquires as much memory as it can get (up to what it would need at most) and applies the algorithm using that available memory. Another example is adaptive sort, whose behavior changes with the presortedness of its input.
An example of an adaptive algorithm in radar systems is the constant false alarm rate (CFAR) detector.
In machine learning and optimization, many algorithms are adaptive or have adaptive variants, which usually means that the algorithm parameters such as learning rate are automatically adjusted according to statistics about the optimisation thus far (e.g. the rate of convergence). Examples include adaptive simulated annealing, adaptive coordinate descent, adaptive quadrature, AdaBoost, Adagrad, Adadelta, RMSprop, and Adam.
In data compression, adaptive coding algorithms such as Adaptive Huffman coding or Prediction by partial matching can take a stream of data as input, and adapt their compression technique based on the symbols that th
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the temporary equilibrium method primarily used for in economic analysis?
A. To analyze interdependent variables of different speeds
B. To forecast long-term market trends
C. To determine fixed prices in the market
D. To calculate the total supply of goods
Answer:
|
A. To analyze interdependent variables of different speeds
|
Relavent Documents:
Document 0:::
In chemical biology, bioorthogonal chemical reporter is a non-native chemical functionality that is introduced into the naturally occurring biomolecules of a living system, generally through metabolic or protein engineering. These functional groups are subsequently utilized for tagging and visualizing biomolecules. Jennifer Prescher and Carolyn R. Bertozzi, the developers of bioorthogonal chemistry, defined bioorthogonal chemical reporters as "non-native, non-perturbing chemical handles that can be modified in living systems through highly selective reactions with exogenously delivered probes." It has been used to enrich proteins and to conduct proteomic analysis.
In the early development of the technique, chemical motifs have to fulfill criteria of biocompatibility and selective reactivity in order to qualify as bioorthogonal chemical reporters. Some combinations of proteinogenic amino acid side chains meet the criteria, as do ketone and aldehyde tags. Azides and alkynes are other examples of chemical reporters.
A bioorthogonal chemical reporter must be incorporated into a biomolecule. This occurs via metabolism. The chemical reporter is linked to a substrate, which a cell can metabolize.
Document 1:::
Sperry Corporation was a major American equipment and electronics company whose existence spanned more than seven decades of the 20th century. Sperry ceased to exist in 1986 following a prolonged hostile takeover bid engineered by Burroughs Corporation, which merged the combined operation under the new name Unisys. Some of Sperry's former divisions became part of Honeywell, Lockheed Martin, Raytheon Technologies, and Northrop Grumman.
The company is best known as the developer of the artificial horizon and a wide variety of other gyroscope-based aviation instruments like autopilots, bombsights, analog ballistics computers and gyro gunsights. In the post-WWII era the company branched out into electronics, both aviation-related, and later, computers.
The company was founded by Elmer Ambrose Sperry.
History
Early history
The company was incorporated on April 14 1910 by Elmer Ambrose Sperry as the Sperry Gyroscope Company, to manufacture navigation equipment—chiefly his own inventions: the marine gyrostabilizer and the gyrocompass—at 40 Flatbush Avenue Extension in Downtown Brooklyn. During World War I the company diversified into aircraft components including bomb sights and fire control systems. In their early decades, Sperry Gyroscope and related companies were concentrated on Long Island, New York, especially in Nassau County. Over the years, it diversified to other locations.
In 1918, Lawrence Sperry split from his father to compete over aero-instruments with the Lawrence Sperry Aircraft Company, including the new automatic pilot. After the death of Lawrence on December 13, 1923, the two firms were brought together in 1924. Then in January 1929 it was acquired by North American Aviation, who reincorporated it in New York as the Sperry Gyroscope Company, Inc. The company once again became independent in 1933 when it was spun-off as a subsidiary of the newly formed Sperry Corporation. The new corporation was a holding company for a number of smaller entities su
Document 2:::
A deformity, dysmorphism, or dysmorphic feature is a major abnormality of an organism that makes a part of the body appear or function differently than how it is supposed to.
Causes
Deformity can be caused by a variety of factors:
Arthritis and other rheumatoid disorders
Chronic application of external forces, e.g. artificial cranial deformation
Chronic paresis, paralysis or muscle imbalance, especially in children, e.g. due to poliomyelitis or cerebral palsy
Complications at birth
Damage to the fetus or uterus
Fractured bones left to heal without being properly set (malunion)
Genetic mutation
Growth or hormone disorders
Skin disorders
Reconstructive surgery following a severe injury, e.g. burn injury
Deformity can occur in all organisms:
Frogs can be mutated due to Ribeiroia (Trematoda) infection.
Plants can undergo irreversible cell deformation
Insects, such as honeybees, can be affected by deformed wing virus
Fish can be found with scoliosis due to environmental factors
Mortality
In many cases where a major deformity is present at birth, it is the result of an underlying condition severe enough that the baby does not survive very long. The mortality of severely deformed births may be due to a range of complications including missing or non-functioning vital organs, structural defects that prevent necessary function, high susceptibility to injuries, abnormal facial appearance, or infections that eventually lead to death.
In some cases, such as that of twins, one fetus is brought to term healthy, while the other faces major, even life-threatening defects. An example of this is seen in cattle, referred to as amorphous globosus.
In mythology
There are many instances of mythological characters showing signs of a deformity.
Descriptions of mermaids may be related to the symptoms of sirenomelia.
The Irish mythology includes the Fomorians, who are almost without exception described as being deformed, possessing only one of what most have two (eyes, arms, le
Document 3:::
In design for additive manufacturing (DFAM), there are both broad themes (which apply to many additive manufacturing processes) and optimizations specific to a particular AM process. Described here is DFM analysis for stereolithography, in which design for manufacturability (DFM) considerations are applied in designing a part (or assembly) to be manufactured by the stereolithography (SLA) process. In SLA, parts are built from a photocurable liquid resin that cures when exposed to a laser beam that scans across the surface of the resin (photopolymerization). Resins containing acrylate, epoxy, and urethane are typically used. Complex parts and assemblies can be directly made in one go, to a greater extent than in earlier forms of manufacturing such as casting, forming, metal fabrication, and machining. Realization of such a seamless process requires the designer to take in considerations of manufacturability of the part (or assembly) by the process. In any product design process, DFM considerations are important to reduce iterations, time and material wastage.
Challenges in stereolithography
Material
Excessive setup-specific material cost and lack of support for third-party resins are major challenges with the SLA process. The choice of material (a design decision) is restricted to the supported resins, so the mechanical properties are also fixed. When scaling up dimensions selectively to deal with expected stresses, post-curing is done by further treatment with UV light and heat. Although advantageous to mechanical properties, the additional polymerization and cross-linking can result in shrinkage, warping and residual thermal stresses. Hence, the part should be designed in its 'green', i.e. pre-treatment, stage.
Setup and process
SLA process is an additive manufacturing process. Hence, design considerations such as orientation, process latitude, support structures etc. have to be considered.
Orientation affects the support structures, manufacturing time, part q
Document 4:::
The Teletype Model 37 is an electromechanical teleprinter manufactured by the Teletype Corporation in 1968. Electromechanical user interfaces would soon be superseded: a year later, in 1969, the Computer Terminal Corporation introduced an electronic terminal with a screen.
Features
The Model 37 came with many features including upper- and lowercase letters, reverse page feed for printing charts, red and black ink, and optional tape and punch reader. It handled speeds up to 150 Baud (15 characters/second). This made it 50% faster than its predecessor, the Model 33.
The Model 37 terminal uses a serial input/output 10-unit code signal consisting of a start bit, seven information bits, an even parity bit and a stop bit. It was produced in three versions: ASR (Automatic Send and Receive), also known as the Model 37/300; KSR (Keyboard Send and Receive), also known as the Model 37/200; and RO (Receive Only), also known as the Model 37/100.
The Model 37 handles all 128 ASCII code combinations. It uses a six-row removable typebox with provisions for 96 type pallet positions. When the Shift-Out feature is included, the six-row typebox is replaced with a seven-row typebox allowing 112 pallet positions, or it can be replaced with an eight-row typebox allowing 128 type pallet positions.
Technical specifications
The Model 37 RO and KSR are 36.25 inches high, 22.5 inches wide and 27.5 inches deep. The ASR is 36.25 inches high, 44.5 inches wide and 27.5 inches deep.
The ASR weighs approximately 340 pounds and KSR and RO weighs approximately 185 pounds.
The Model 37 interface meets the requirements of EIA RS-232-B and has a recommended maintenance interval of every six months or every 1500 hours.
Power Requirements - 115VAC ± 10%, 60 Hz ± .45 Hz, RO is approximately 200 Watts, KSR is approximately 300 Watts, ASR is approximately 550 Watts.
Fun Facts
Most Model 37s were re-purchased from customers and sold to the Soviet Union with antiquated mainframe computer systems.
Further reading
Dolotta,
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the significance of the relativistic Breit–Wigner distribution in high-energy physics?
A. It models stable particles.
B. It describes resonances of unstable particles.
C. It calculates the speed of light.
D. It establishes the mass-energy equivalence.
Answer:
|
B. It describes resonances of unstable particles.
|
Relavent Documents:
Document 0:::
Concatenation theory, also called string theory, character-string theory, or theoretical syntax, studies character strings over finite alphabets of characters, signs, symbols, or marks. String theory is foundational for formal linguistics, computer science, logic, and metamathematics especially proof theory. A generative grammar can be seen as a recursive definition in string theory.
The most basic operation on strings is concatenation: connecting two strings forms a longer string whose length is the sum of the lengths of the two. ABCDE is the concatenation of AB with CDE, in symbols ABCDE = AB ^ CDE. Strings and concatenation of strings can be treated as an algebraic system with some properties resembling those of the addition of integers; in modern mathematics, this system is called a free monoid.
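A quick illustration of the free-monoid structure, using Python strings purely for convenience:

```python
a, b, c = "AB", "CDE", "F"

assert a + b == "ABCDE"                   # ABCDE = AB ^ CDE from the text
assert (a + b) + c == a + (b + c)         # concatenation is associative
assert "" + a == a + "" == a              # the empty string is the identity element
assert len(a + b) == len(a) + len(b)      # lengths add, as stated above
```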
In 1956 Alonzo Church wrote: "Like any branch of mathematics, theoretical syntax may, and ultimately must, be studied by the axiomatic method". Church was evidently unaware that string theory already had two axiomatizations from the 1930s: one by Hans Hermes and one by Alfred Tarski. Coincidentally, the first English presentation of Tarski's 1933 axiomatic foundations of string theory appeared in 1956 – the same year that Church called for such axiomatizations. As Tarski himself noted using other terminology, serious difficulties arise if strings are construed as tokens rather than types in the sense of Peirce's type-token distinction.
References
Document 1:::
The finite volume method (FVM) is a numerical method. In computational fluid dynamics, FVM is used to solve the partial differential equations that arise from physical conservation laws by discretisation. Convection is always accompanied by diffusion, so wherever convection is considered the combined effect of convection and diffusion must be accounted for. In places where fluid flow plays a negligible role, however, the convective effect can be neglected, leaving the simpler case of pure diffusion. The governing equation for this steady case is easily derived from the general transport equation for a property φ by deleting the transient term.
The general transport equation is defined as

\[
\frac{\partial(\rho\varphi)}{\partial t} + \nabla\cdot(\rho\varphi\mathbf{u}) = \nabla\cdot(\Gamma\,\nabla\varphi) + S_\varphi \qquad (1)
\]

where
\(\varphi\) is the conserved property of the fluid flow,
\(\rho\) is the density,
\(\nabla\cdot(\rho\varphi\mathbf{u})\) is the net rate of flow of \(\varphi\) out of the fluid element (the convective term),
\(\partial(\rho\varphi)/\partial t\) is the transient term,
\(\nabla\cdot(\Gamma\,\nabla\varphi)\) is the rate of change of \(\varphi\) due to diffusion,
\(S_\varphi\) is the rate of increase of \(\varphi\) due to sources.

Under steady-state conditions the transient term vanishes, and in the absence of convection the convective term vanishes as well, so the steady three-dimensional diffusion equation becomes

\[
\nabla\cdot(\Gamma\,\nabla\varphi) + S_\varphi = 0 \qquad (2)
\]

or, written out in Cartesian components,

\[
\frac{\partial}{\partial x}\!\left(\Gamma\frac{\partial\varphi}{\partial x}\right) + \frac{\partial}{\partial y}\!\left(\Gamma\frac{\partial\varphi}{\partial y}\right) + \frac{\partial}{\partial z}\!\left(\Gamma\frac{\partial\varphi}{\partial z}\right) + S_\varphi = 0 \qquad (3)
\]

The flow must also satisfy the continuity equation:

\[
\nabla\cdot(\rho\mathbf{u}) = 0 \qquad (4)
\]
To solve the problem, we follow these general steps (a one-dimensional sketch in code follows the list):
Grid formation:
1. Divide the domain into discrete control volume.
2. Place the nodal point between end points defining the physical boundaries. Boundaries/ faces of the control volume are created midway between adjacent nodes.
3. Set up the control volume near the edge of domain such that physical as well as control volume boundaries will coincide with each other.
4. Consider a general nodal point P accompanied by six neighbouring nodal points: ‘E’ (which represents east), ‘W’ (which represents west), ‘N’ (which represents north), ‘S’ (which re
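A minimal one-dimensional version of this procedure (a sketch under simplifying assumptions: constant Γ, a uniform grid, fixed boundary values and no source term; the numbers are illustrative only):

```python
import numpy as np

def fvm_1d_diffusion(n=5, length=0.5, gamma=1000.0, phi_a=100.0, phi_b=500.0):
    """Solve d/dx(Gamma dphi/dx) = 0 on n control volumes with fixed end values."""
    dx = length / n
    aW = aE = gamma / dx                 # diffusion conductance to the west/east faces
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        if i > 0:
            A[i, i - 1] = -aW            # link to the west neighbour
        else:
            b[i] += 2 * aW * phi_a       # west boundary face, half-cell distance
        if i < n - 1:
            A[i, i + 1] = -aE            # link to the east neighbour
        else:
            b[i] += 2 * aE * phi_b       # east boundary face, half-cell distance
        A[i, i] = (aW if i > 0 else 2 * aW) + (aE if i < n - 1 else 2 * aE)
    return np.linalg.solve(A, b)         # nodal values of phi

print(fvm_1d_diffusion())                # increases linearly from phi_a toward phi_b
```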
Document 2:::
In astronomy, planetary transits and occultations occur when a planet passes in front of another object, as seen by an observer. The occulted object may be a distant star, but in rare cases it may be another planet, in which case the event is called a mutual planetary occultation or mutual planetary transit, depending on the relative apparent diameters of the objects.
The word "transit" refers to cases where the nearer object appears smaller than the more distant object. Cases where the nearer object appears larger and completely hides the more distant object are known as occultations.
Mutual planetary occultations and transits
Mutual occultations or transits of planets are extremely rare. The most recent event occurred on 3 January 1818, and the next will occur on 22 November 2065. Both involve the same two planets: Venus and Jupiter.
Historical observations
An occultation of Mars by Venus on 13 October 1590 was observed by the German astronomer Michael Maestlin at Heidelberg. The 1737 event (see list below) was observed by John Bevis at Greenwich Observatory – it is the only detailed account of a mutual planetary occultation. A transit of Mars across Jupiter on 12 September 1170 was observed by the monk Gervase at Canterbury, and by Chinese astronomers.
Future events
The next time a mutual planetary transit or occultation will happen (as seen from Earth) will be on 22 November 2065 at about 12:43 UTC, when Venus near superior conjunction (with an angular diameter of 10.6") will transit in front of Jupiter (with an angular diameter of 30.9"); however, this will take place only 8° west of the Sun, and will therefore not be visible to the unaided/unprotected eye. Before transiting Jupiter, Venus will occult Jupiter's moon Ganymede at around 11:24 UTC as seen from some southernmost parts of Earth. Parallax will cause actual observed times to vary by a few minutes, depending on the precise location of the observer.
List of mutual planetary occultations and transit
Document 3:::
Diffbot is a developer of machine learning and computer vision algorithms and public APIs for extracting data from web pages / web scraping to create a knowledge base.
The company has gained interest from its application of computer vision technology to web pages, wherein it visually parses a web page for important elements and returns them in a structured format. In 2015 Diffbot announced it was working on its version of an automated "Knowledge Graph" by crawling the web and using its automatic web page extraction to build a large database of structured web data. In 2019 Diffbot released their Knowledge Graph which has since grown to include over two billion entities (corporations, people, articles, products, discussions, and more), and ten trillion "facts."
The company's products allow software developers to analyze web home pages and article pages, and extract the "important information" while ignoring elements deemed not core to the primary content.
In August 2012 the company released its Page Classifier API, which automatically categorizes web pages into specific "page types". As part of this, Diffbot analyzed 750,000 web pages shared on the social media service Twitter and revealed that photos, followed by articles and videos, are the predominant web media shared on the social network.
In September 2020 the company released a Natural Language Processing API for automatically building Knowledge Graphs from text.
The company raised $2 million in funding in May 2012 from investors including Andy Bechtolsheim and Sky Dayton.
Diffbot's customers include Adobe, AOL, Cisco, DuckDuckGo, eBay, Instapaper, Microsoft, Onswipe and Springpad.
See also
GPT-3
References
External links
Knowledge Graph
Document 4:::
Random self-reducibility (RSR) is the rule that a good algorithm for the average case implies a good algorithm for the worst case. RSR is the ability to solve all instances of a problem by solving a large fraction of the instances.
Definition
If for a function f evaluating any instance x can be reduced in polynomial time to the evaluation of f on one or more random instances yi, then it is self-reducible (this is also known as a non-adaptive uniform self-reduction). In a random self-reduction, an arbitrary worst-case instance x in the domain of f is mapped to a random set of instances y1, ..., yk. This is done so that f(x) can be computed in polynomial time, given the coin-toss sequence from the mapping, x, and f(y1), ..., f(yk). Therefore, taking the average with respect to the induced distribution on yi, the average-case complexity of f is the same (within polynomial factors) as the worst-case randomized complexity of f.
One special case of note is when each random instance yi is distributed uniformly over the entire set of elements in the domain of f that have a length of |x|. In this case f is as hard on average as it is in the worst case. This approach contains two key restrictions. First the generation of y1, ..., yk is performed non-adaptively. This means that y2 is picked before f(y1) is known. Second, it is not necessary that the points y1, ..., yk be uniformly distributed.
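A classic concrete case is the discrete logarithm, where any worst-case instance can be blinded into a uniformly random one. A toy Python sketch (the modulus is tiny, and `avg_case_dlog` merely stands in for an algorithm that is only assumed to work on random instances):

```python
import random

p, g = 1019, 2          # small prime modulus and a generator (illustration only)
q = p - 1               # order of the multiplicative group mod p

def avg_case_dlog(y):
    """Stand-in for a solver that is only assumed to work on uniformly random y."""
    return next(e for e in range(q) if pow(g, e, p) == y)

def worst_case_dlog(y):
    """Random self-reduction: map an arbitrary instance to a uniformly random one."""
    r = random.randrange(q)             # random blinding exponent
    y_blinded = (y * pow(g, r, p)) % p  # uniformly distributed over the group
    return (avg_case_dlog(y_blinded) - r) % q

x = 123
assert worst_case_dlog(pow(g, x, p)) == x
```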
Application in cryptographic protocols
Problems that require some privacy in the data (typically cryptographic problems) can use randomization to ensure that privacy. In fact, the only provably secure cryptographic system (the one-time pad) has its security relying totally on the randomness of the key data supplied to the system.
The field of cryptography utilizes the fact that certain number-theoretic functions are randomly self-reducible. This includes probabilistic encryption and cryptographically strong pseudorandom number generation. Also, instance-hiding schemes (whe
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary role of cyclic adenosine monophosphate (cAMP) in the process of meiotic arrest in oocytes?
A. It promotes meiotic resumption by activating MPF.
B. It inhibits meiotic resumption by regulating MPF activity.
C. It facilitates the breakdown of the nuclear envelope.
D. It increases the production of cGMP in granulosa cells.
Answer:
|
B. It inhibits meiotic resumption by regulating MPF activity.
|
Relavent Documents:
Document 0:::
In physics and mathematics, the phase (symbol φ or ϕ) of a wave or other periodic function F of some real variable t (such as time) is an angle-like quantity representing the fraction of the cycle covered up to t. It is expressed in such a scale that it varies by one full turn as the variable t goes through each period (and F(t) goes through each complete cycle). It may be measured in any angular unit such as degrees or radians, thus increasing by 360° or 2π as the variable t completes a full period.
This convention is especially appropriate for a sinusoidal function, since its value at any argument t can then be expressed as the sine of the phase, sin(φ(t)), multiplied by some factor (the amplitude of the sinusoid). (The cosine may be used instead of the sine, depending on where one considers each period to start.)
Usually, whole turns are ignored when expressing the phase, so that φ(t) is also a periodic function, with the same period as F, that repeatedly scans the same range of angles as t goes through each period. Then F is said to be "at the same phase" at two argument values t₁ and t₂ (that is, φ(t₁) = φ(t₂)) if the difference between them is a whole number of periods.
The numeric value of the phase depends on the arbitrary choice of the start of each period, and on the interval of angles that each period is to be mapped to.
The term "phase" is also used when comparing a periodic function F with a shifted version G of it. If the shift in t is expressed as a fraction of the period, and then scaled to an angle spanning a whole turn, one gets the phase shift, phase offset, or phase difference of G relative to F. If F is a "canonical" function for a class of signals, like sin(t) is for all sinusoidal signals, then φ is called the initial phase of G.
Mathematical definition
Let F be a signal that is a periodic function of one real variable t, and T be its period (that is, the smallest positive real number such that F(t + T) = F(t) for all t). Then the phase of F at any argument t is

φ(t) = 2π [[ (t − t₀)/T ]]

Here [[·]] denotes the fractional part of a real number, di
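In code, the definition above amounts to taking the fractional part of the elapsed fraction of a period and scaling it to a full turn (a small sketch with illustrative values for the period and origin):

```python
import math

def phase(t, T, t0=0.0):
    """Phase of a T-periodic signal at time t, in radians in [0, 2*pi)."""
    frac = (t - t0) / T
    return 2 * math.pi * (frac - math.floor(frac))   # fractional part, scaled to one turn

T = 0.5                                              # period
assert math.isclose(phase(0.25, T), math.pi)         # half a period in -> half a turn
assert math.isclose(phase(0.25 + 3 * T, T), math.pi) # whole turns are ignored
```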
Document 1:::
Photon polarization is the quantum mechanical description of the classical polarized sinusoidal plane electromagnetic wave. An individual photon can be described as having right or left circular polarization, or a superposition of the two. Equivalently, a photon can be described as having horizontal or vertical linear polarization, or a superposition of the two.
The description of photon polarization contains many of the physical concepts and much of the mathematical machinery of more involved quantum descriptions, such as the quantum mechanics of an electron in a potential well. Polarization is an example of a qubit degree of freedom, which forms a fundamental basis for an understanding of more complicated quantum phenomena. Much of the mathematical machinery of quantum mechanics, such as state vectors, probability amplitudes, unitary operators, and Hermitian operators, emerge naturally from the classical Maxwell's equations in the description. The quantum polarization state vector for the photon, for instance, is identical with the Jones vector, usually used to describe the polarization of a classical wave. Unitary operators emerge from the classical requirement of the conservation of energy of a classical wave propagating through lossless media that alter the polarization state of the wave. Hermitian operators then follow for infinitesimal transformations of a classical polarization state.
Many of the implications of the mathematical machinery are easily verified experimentally. In fact, many of the experiments can be performed with polaroid sunglass lenses.
The connection with quantum mechanics is made through the identification of a minimum packet size, called a photon, for energy in the electromagnetic field. The identification is based on the theories of Planck and the interpretation of those theories by Einstein. The correspondence principle then allows the identification of momentum and angular momentum (called spin), as well as energy, with the photon.
Polarization of classical electromagnetic waves
Polarization states
Linear polarization
The wave is linearly polarized (or plane polarized) when the phase angles are equal,
This represents a wave with phase polarized at an angle with respect to the x axis.
In this case the Jones vector
can be written with a single phase:
The state vectors for linear polarization in x or y are special cases of this state vector.
If unit vectors are defined such that
and
then the linearly polarized polarization state can be written in the "x–y basis" as
Circular polarization
If the phase angles and differ by exactly and the x amplitude equals the y amplitude the wave is circularly polarized. The Jones vector then becomes
where the plus sign indicates left circular polarization and the minus sign indicates right circular polarization. In the case of circular polarization, the electric field vector of constant magnitude rotates in the x–y plane.
If unit vectors are defined such that
and
then an arbitrary polarization state can be written in the "R–L basis" as
where
and
We can see that
Elliptical polarization
The general case in which the electric field rotates in the x–y plane and has variable magnitude is called elliptical polarization. The state vector is given by
Geometric visualization of an arbitrary polarization state
To get an understanding of what a polarization state looks like, one can observe the orbit that is made if the polarization state is multiplied by a phase factor of and then having the real parts of its components interpreted as x and y coordinates respectively. That is:
If only the traced out shape and the direction of the rotation of is considered when interpreting the polarization state, i.e. only
(where and are defined as above) and whether it is overall more right circularly or left circularly polarized (i.e. whether or vice versa), it can be seen that the physical interpretation will be the same even if the state is multiplied by an arbitrary phase factor, since
and the direction of rotation will remain the same. In other words, there is no physical difference between two polarization states and , between which only a phase factor differs.
It can be seen that for a linearly polarized state, M will be a line in the xy plane, with length 2 and its middle in the origin, and whose slope equals to . For a circularly polarized state, M will be a circle with radius and with the middle in the origin.
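The linear-to-circular basis change described above is easy to check numerically. A short sketch (using the convention |R⟩ = (|x⟩ − i|y⟩)/√2 and |L⟩ = (|x⟩ + i|y⟩)/√2, which is one common sign choice and not necessarily the one intended in the omitted formulas):

```python
import numpy as np

x = np.array([1, 0], dtype=complex)          # |x>, linear polarization along x
y = np.array([0, 1], dtype=complex)          # |y>, linear polarization along y
R = (x - 1j * y) / np.sqrt(2)                # right circular (one common convention)
L = (x + 1j * y) / np.sqrt(2)                # left circular

def linear(theta):
    """Jones vector of light linearly polarized at angle theta to the x axis."""
    return np.cos(theta) * x + np.sin(theta) * y

psi = linear(np.deg2rad(30))
cR, cL = np.vdot(R, psi), np.vdot(L, psi)    # amplitudes in the R-L basis
print(abs(cR)**2, abs(cL)**2)                # 0.5 and 0.5: equal mix of R and L
assert np.isclose(abs(cR)**2 + abs(cL)**2, 1.0)
```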
Energy, momentum, and angular momentum of a classical electromagnetic wave
Energy density of classical electromagnetic waves
Energy in a plane wave
The energy per unit volume in classical electromagnetic fields is (in cgs units, and also in Planck units) u_c = (E² + B²)/(8π).
For a plane wave, this becomes:
where the energy has been averaged over a wavelength of the wave.
Fraction of energy in each component
The fraction of energy in the x component of the plane wave is
with a similar expression for the y component resulting in .
The fraction in both components is
Momentum density of classical electromagnetic waves
The momentum density is given by the Poynting vector
For a sinusoidal plane wave traveling in the z direction, the momentum is in the z direction and is related to the energy density:
The momentum density has been averaged over a wavelength.
Angular momentum density of classical electromagnetic waves
Electromagnetic waves can have both orbital and spin angular momentum. The total angular momentum density is
For a sinusoidal plane wave propagating along axis the orbital angular momentum density vanishes. The spin angular momentum density is in the direction and is given by
where again the density is averaged over a wavelength.
Optical filters and crystals
Passage of a classical wave through a polaroid filter
A linear filter transmits one component of a plane wave and absorbs the perpendicular component. In that case, if the filter is polarized in the x direction, the fraction of energy passing through the filter is
Example of energy conservation: Passage of a classical wave through a birefringent crystal
An ideal birefringent crystal transforms the polarization state of an electromagnetic wave without loss of wave energy. Birefringent crystals therefore provide an ideal test bed for examining the conservative transformation of polarization states. Even though this treatment is still purely classical, standard quantum tools such as unitary and Hermitian operators that evolve the state in time naturally emerge.
Initial and final states
A birefringent crystal is a material that has an optic axis with the property that light has a different index of refraction when polarized parallel to the axis than when polarized perpendicular to it. Light polarized parallel to the axis is called "extraordinary rays" or "extraordinary photons", while light polarized perpendicular to the axis is called "ordinary rays" or "ordinary photons". If a linearly polarized wave impinges on the crystal, the extraordinary component of the wave will emerge from the crystal with a different phase than the ordinary component. In mathematical language, if the incident wave is linearly polarized at an angle θ with respect to the optic axis, the incident state vector can be written
and the state vector for the emerging wave can be written
While the initial state was linearly polarized, the final state is elliptically polarized. The birefringent crystal alters the character of the polarization.
Dual of the final state
The initial polarization state is transformed into the final state with the operator U. The dual of the final state is given by
where is the adjoint of U, the complex conjugate transpose of the matrix.
Unitary operators and energy conservation
The fraction of energy that emerges from the crystal is
In this ideal case, all the energy impinging on the crystal emerges from the crystal. An operator U with the property that
where I is the identity operator and U is called a unitary operator. The unitary property is necessary to ensure energy conservation in state transformations.
Hermitian operators and energy conservation
If the crystal is very thin, the final state will be only slightly different from the initial state. The unitary operator will be close to the identity operator. We can define the operator H by
and the adjoint by
Energy conservation then requires
This requires that
Operators like this that are equal to their adjoints are called Hermitian or self-adjoint.
The infinitesimal transition of the polarization state is
Thus, energy conservation requires that infinitesimal transformations of a polarization state occur through the action of a Hermitian operator.
Photons: connection to quantum mechanics
Energy, momentum, and angular momentum of photons
Energy
The treatment to this point has been classical. It is a testament, however, to the generality of Maxwell's equations for electrodynamics that the treatment can be made quantum mechanical with only a reinterpretation of classical quantities. The reinterpretation is based on the theories of Max Planck and the interpretation by Albert Einstein of those theories and of other experiments.
Einstein's conclusion from early experiments on the photoelectric effect is that electromagnetic radiation is composed of irreducible packets of energy, known as photons. The energy of each packet is related to the angular frequency ω of the wave by the relation ε = ħω, where ħ is an experimentally determined quantity known as the reduced Planck constant. If there are N photons in a box of volume V, the energy in the electromagnetic field is Nħω and the energy density is Nħω/V.
The photon energy can be related to classical fields through the correspondence principle that states that for a large number of photons, the quantum and classical treatments must agree. Thus, for very large N, the quantum energy density must be the same as the classical energy density
The number of photons in the box is then
Momentum
The correspondence principle also determines the momentum and angular momentum of the photon. For momentum, the correspondence gives a momentum density of Nħk/V, where k is the wave number. This implies that the momentum of a photon is p = ħk.
Angular momentum and spin
Similarly, for the spin angular momentum the correspondence gives a spin angular momentum density proportional to the field strength. This implies that the spin angular momentum of the photon is l_z = ħ(|⟨R|ψ⟩|² − |⟨L|ψ⟩|²); the quantum interpretation of this expression is that the photon has a probability of |⟨R|ψ⟩|² of having a spin angular momentum of +ħ and a probability of |⟨L|ψ⟩|² of having a spin angular momentum of −ħ. We can therefore think of the spin angular momentum of the photon being quantized as well as the energy. The angular momentum of classical light has been verified. A photon that is linearly polarized (plane polarized) is in a superposition of equal amounts of the left-handed and right-handed states. Upon absorption by an electronic state, the angular momentum is "measured" and this superposition collapses into either right-hand or left-hand, corresponding to a raising or lowering of the angular momentum of the absorbing electronic state, respectively.
Spin operator
The spin of the photon is defined as the coefficient of ħ in the spin angular momentum calculation. A photon has spin 1 if it is in the state |R⟩ and −1 if it is in the state |L⟩. The spin operator is defined as the outer product

Ŝ = |R⟩⟨R| − |L⟩⟨L|

The eigenvectors of the spin operator are |R⟩ and |L⟩, with eigenvalues 1 and −1, respectively.
The expected value of a spin measurement on a photon is then
An operator S has been associated with an observable quantity, the spin angular momentum. The eigenvalues of the operator are the allowed observable values. This has been demonstrated for spin angular momentum, but it is in general true for any observable quantity.
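This construction is easy to verify numerically. A minimal sketch (here |R⟩ and |L⟩ are simply taken as the basis vectors of a two-dimensional complex space, and spin values are reported in units of ħ):

```python
import numpy as np

R = np.array([1, 0], dtype=complex)                 # |R>, written in the R-L basis
L = np.array([0, 1], dtype=complex)                 # |L>
S = np.outer(R, R.conj()) - np.outer(L, L.conj())   # spin operator |R><R| - |L><L|

print(sorted(np.linalg.eigvals(S).real))            # eigenvalues -1 and +1

psi = (R + L) / np.sqrt(2)                          # a linearly polarized state
print(np.vdot(psi, S @ psi).real)                   # expectation value 0
```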
Spin states
We can write the circularly polarized states as |s⟩, where s = 1 for |R⟩ and s = −1 for |L⟩. An arbitrary state can be written as a superposition of |R⟩ and |L⟩ whose coefficients involve two phase angles and the angle θ by which the frame of reference is rotated.
Spin and angular momentum operators in differential form
When the state is written in spin notation, the spin operator can be written
The eigenvectors of the differential spin operator are
To see this, note
The spin angular momentum operator is
Nature of probability in quantum mechanics
Probability for a single photon
There are two ways in which probability can be applied to the behavior of photons; probability can be used to calculate the probable number of photons in a particular state, or probability can be used to calculate the likelihood of a single photon to be in a particular state. The former interpretation violates energy conservation. The latter interpretation is the viable, if nonintuitive, option. Dirac explains this in the context of the double-slit experiment:
Some time before the discovery of quantum mechanics people realized that the connection between light waves and photons must be of a statistical character. What they did not clearly realize, however, was that the wave function gives information about the probability of one photon being in a particular place and not the probable number of photons in that place. The importance of the distinction can be made clear in the following way. Suppose we have a beam of light consisting of a large number of photons split up into two components of equal intensity. On the assumption that the beam is connected with the probable number of photons in it, we should have half the total number going into each component. If the two components are now made to interfere, we should require a photon in one component to be able to interfere with one in the other. Sometimes these two photons would have to annihilate one another and other times they would have to produce four photons. This would contradict the conservation of energy. The new theory, which connects the wave function with probabilities for one photon gets over the difficulty by making each photon go partly into each of the two components. Each photon then interferes only with itself. Interference between two different photons never occurs.—Paul Dirac, The Principles of Quantum Mechanics, 1930, Chapter 1
Probability amplitudes
The probability for a photon to be in a particular polarization state depends on the fields as calculated by the classical Maxwell's equations. The polarization state of the photon is proportional to the field. The probability itself is quadratic in the fields and consequently is also quadratic in the quantum state of polarization. In quantum mechanics, therefore, the state or probability amplitude contains the basic probability information. In general, the rules for combining probability amplitudes look very much like the classical rules for composition of probabilities: [The following quote is from Baym, Chapter 1]
The probability amplitude for two successive probabilities is the product of amplitudes for the individual possibilities. For example, the amplitude for the x polarized photon to be right circularly polarized and for the right circularly polarized photon to pass through the y-polaroid is the product of the individual amplitudes.
The amplitude for a process that can take place in one of several indistinguishable ways is the sum of amplitudes for each of the individual ways. For example, the total amplitude for the x polarized photon to pass through the y-polaroid is the sum of the amplitudes for it to pass as a right circularly polarized photon, plus the amplitude for it to pass as a left circularly polarized photon,
The total probability for the process to occur is the absolute value squared of the total amplitude calculated by 1 and 2.
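The rules just stated can be checked directly for the example in the text: an x-polarized photon reaching a y-polaroid via the two indistinguishable circular-polarization alternatives (a sketch using one common sign convention for |R⟩ and |L⟩):

```python
import numpy as np

x = np.array([1, 0], dtype=complex)      # |x>
y = np.array([0, 1], dtype=complex)      # |y>
R = (x - 1j * y) / np.sqrt(2)            # right circular (one common convention)
L = (x + 1j * y) / np.sqrt(2)            # left circular

def amp(a, b):
    """Probability amplitude <a|b>."""
    return np.vdot(a, b)

# Rule 1: multiply amplitudes along each path (x -> R -> y and x -> L -> y).
# Rule 2: add the amplitudes of the two indistinguishable paths.
total = amp(y, R) * amp(R, x) + amp(y, L) * amp(L, x)
# Rule 3: the probability is the absolute square of the total amplitude.
print(abs(total)**2)                     # 0: an x-polarized photon never passes a y-polaroid
```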
Uncertainty principle
Mathematical preparation
For any legal operators the following inequality, a consequence of the Cauchy–Schwarz inequality, is true.
If B A ψ and A B ψ are defined, then by subtracting the means and re-inserting in the above formula, we deduce
where
is the operator mean of observable X in the system state ψ and
Here
is called the commutator of A and B.
This is a purely mathematical result. No reference has been made to any physical quantity or principle. It simply states that the uncertainty of one operator times the uncertainty of another operator has a lower bound.
Application to angular momentum
The connection to physics can be made if we identify the operators with physical operators such as the angular momentum and the polarization angle. We have then
which means that angular momentum and the polarization angle cannot be measured simultaneously with infinite accuracy. (The polarization angle can be measured by checking whether the photon can pass through a polarizing filter oriented at a particular angle, or a polarizing beam splitter. This results in a yes/no answer that, if the photon was plane-polarized at some other angle, depends on the difference between the two angles.)
States, probability amplitudes, unitary and Hermitian operators, and eigenvectors
Much of the mathematical apparatus of quantum mechanics appears in the classical description of a polarized sinusoidal electromagnetic wave. The Jones vector for a classical wave, for instance, is identical with the quantum polarization state vector for a photon. The right and left circular components of the Jones vector can be interpreted as probability amplitudes of spin states of the photon. Energy conservation requires that the states be transformed with a unitary operation. This implies that infinitesimal transformations are transformed with a Hermitian operator. These conclusions are a natural consequence of the structure of Maxwell's equations for classical waves.
Quantum mechanics enters the picture when observed quantities are measured and found to be discrete rather than continuous. The allowed observable values are determined by the eigenvalues of the operators associated with the observable. In the case angular momentum, for instance, the allowed observable values are the eigenvalues of the spin operator.
These concepts have emerged naturally from Maxwell's equations and Planck's and Einstein's theories. They have been found to be true for many other physical systems. In fact, the typical program is to assume the concepts of this section and then to infer the unknown dynamics of a physical system. This was done, for instance, with the dynamics of electrons. In that case, working back from the principles in this section, the quantum dynamics of particles were inferred, leading to Schrödinger's equation, a departure from Newtonian mechanics. The solution of this equation for atoms led to the explanation of the Balmer series for atomic spectra and consequently formed a basis for all of atomic physics and chemistry.
This is not the only occasion in which Maxwell's equations have forced a restructuring of Newtonian mechanics. Maxwell's equations are relativistically consistent. Special relativity resulted from attempts to make classical mechanics consistent with Maxwell's equations (see, for example, Moving magnet and conductor problem).
Document 2:::
Thymosin beta-4 is a protein that in humans is encoded by the TMSB4X gene. Recommended INN (International Nonproprietary Name) for thymosin beta-4 is 'timbetasin', as published by the World Health Organization (WHO).
The protein consists (in humans) of 43 amino acids (sequence: SDKPDMAEI EKFDKSKLKK TETQEKNPLP SKETIEQEKQ AGES) and has a molecular weight of 4921 g/mol.
Thymosin-β4 is a major cellular constituent in many tissues. Its intracellular concentration may reach as high as 0.5 mM. Following Thymosin α1, β4 was the second of the biologically active peptides from Thymosin Fraction 5 to be completely sequenced and synthesized.
Function
This gene encodes an actin sequestering protein which plays a role in regulation of actin polymerization. The protein is also involved in cell proliferation, migration, and differentiation. This gene escapes X inactivation and has a homolog on chromosome Y (TMSB4Y).
Biological activities of thymosin β4
Any concepts of the biological role of thymosin β4 must inevitably be coloured by the demonstration that total ablation of the thymosin β4 gene in the mouse allows apparently normal embryonic development of mice which are fertile as adults.
Actin binding
Thymosin β4 was initially perceived as a thymic hormone. However this changed when it was discovered that it forms a 1:1 complex with G (globular) actin, and is present at high concentration in a wide range of mammalian cell types. When appropriate, G-actin monomers polymerize to form F (filamentous) actin, which, together with other proteins that bind to actin, comprise cellular microfilaments. Formation by G-actin of the complex with β-thymosin (= "sequestration") opposes this.
Due to its profusion in the cytosol and its ability to bind G-actin but not F-actin, thymosin β4 is regarded as the principal actin-sequestering protein in many cell types. Thymosin β4 functions like a buffer for monomeric actin as represented in the following reaction:
F-actin ↔ G-actin + Thymo
Document 3:::
M2 is a line of the Copenhagen Metro, colored yellow on the map. It runs from Vanløse to Lufthavnen through the center of Copenhagen, sharing track with the M1 from Vanløse to Christianshavn. The line was built along with M1 as part of the redevelopment of Ørestad. The principle of the line was passed in 1992, and construction commenced in 1998. The line opened in several stages between 2002 and 2007. It is owned by Metroselskabet and operated by Metro Service, and operates with a headway between four and twenty minutes.
The line is long, and runs in a tunnel through the city center between Lindevang and Amager Strand. It connects the western borough of Vanløse and the municipality of Frederiksberg to the city center of Copenhagen, as well as the eastern parts of Amager and Copenhagen Airport. It provides transfer to the S-train at three stations and to DSB trains at two stations.
Its southern end, in the district of Amager Øst, largely follows the same route as a disused railway line, Amagerbanen, along the coast of Øresund.
History
The background for the metro was the urban development of the Ørestad area of Copenhagen. The principle of building a rail transit system was passed by the Parliament of Denmark on 24 June 1992, with the Ørestad Act. The responsibility for developing the area, as well as building and operating the metro, was given to the Ørestad Development Corporation, a joint venture between Copenhagen Municipality (45%) and the Ministry of Finance (55%). Initially, three modes of transport were considered: a tramway, a light rail system that would have seen trams running underground through the city center, and a rapid transit (metro) system. In October 1994, the Development Corporation chose a light rapid transit system, combining elements of a light rail system and rapid transit to form a low-capacity metro network.
The decision to build stage 2, from Nørreport to Vanløse, and stage 3 to the airport, was made by parliament on 21 December 1994
Document 4:::
Lanchester's laws are mathematical formulas for calculating the relative strengths of military forces. The Lanchester equations are differential equations describing the time dependence of two armies' strengths A and B as a function of time, with the function depending only on A and B.
In 1915 and 1916 during World War I, M. Osipov and Frederick Lanchester independently devised a series of differential equations to demonstrate the power relationships between opposing forces. Among these are what is known as Lanchester's linear law (for ancient combat) and Lanchester's square law (for modern combat with long-range weapons such as firearms).
As of 2017, modified variations of the Lanchester equations continue to form the basis of analysis in many of the US Army's combat simulations, and in 2016 a RAND Corporation report used these laws to examine the probable outcome of a Russian invasion of the Baltic nations of Estonia, Latvia, and Lithuania.
Lanchester's linear law
For ancient combat, between phalanxes of soldiers with spears for example, one soldier could only ever fight exactly one other soldier at a time. If each soldier kills, and is killed by, exactly one other, then the number of soldiers remaining at the end of the battle is simply the difference between the larger army and the smaller, assuming identical weapons.
The linear law also applies to unaimed fire into an enemy-occupied area. The rate of attrition depends on the density of the available targets in the target area as well as the number of weapons shooting. If two forces, occupying the same land area and using the same weapons, shoot randomly into the same target area, they will both suffer the same rate and number of casualties, until the smaller force is eventually eliminated: the greater probability of any one shot hitting the larger force is balanced by the greater number of shots directed at the smaller force.
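To make the attrition arithmetic concrete, here is a minimal Python sketch (not from Lanchester's or Osipov's papers) that integrates the unaimed-fire form of the linear law, dA/dt = −β·A·B and dB/dt = −α·A·B, and checks the quantity α·A − β·B that these equations conserve; the force sizes and effectiveness coefficients are made up for illustration:

def linear_law_battle(a0, b0, alpha, beta, dt=1e-3, steps=200_000):
    """Forward-Euler integration of the unaimed-fire (linear law) equations."""
    a, b = float(a0), float(b0)
    for _ in range(steps):
        if a <= 0 or b <= 0:
            break
        # attrition of each side is proportional to the product of the two force sizes
        a, b = max(a - beta * a * b * dt, 0.0), max(b - alpha * a * b * dt, 0.0)
    return a, b

a0, b0, alpha, beta = 1000, 800, 0.01, 0.01
a, b = linear_law_battle(a0, b0, alpha, beta)
print(f"survivors: A={a:.1f}, B={b:.1f}")   # B is wiped out; A keeps roughly the difference
print(f"alpha*A - beta*B before: {alpha*a0 - beta*b0:.3f}, after: {alpha*a - beta*b:.3f}")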
Lanchester's square law
Lanchester's square law is also known as the N-square law.
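Since the text above is cut off, the following sketch simply assumes the standard aimed-fire form of the square law, dA/dt = −β·B and dB/dt = −α·A, which conserves α·A² − β·B²; the numbers are again illustrative only:

def square_law_battle(a0, b0, alpha, beta, dt=1e-3, steps=200_000):
    """Forward-Euler integration of the assumed aimed-fire (square law) equations."""
    a, b = float(a0), float(b0)
    for _ in range(steps):
        if a <= 0 or b <= 0:
            break
        # each side's losses depend only on the number of enemy shooters
        a, b = max(a - beta * b * dt, 0.0), max(b - alpha * a * dt, 0.0)
    return a, b

a0, b0, alpha, beta = 1000, 800, 0.01, 0.01
a, b = square_law_battle(a0, b0, alpha, beta)
# expected winner strength is sqrt(a0**2 - (beta/alpha)*b0**2) = 600 here,
# illustrating why concentration of force matters so much under aimed fire
print(f"survivors: A={a:.1f}, B={b:.1f}")
print(f"alpha*A^2 - beta*B^2 before: {alpha*a0**2 - beta*b0**2:.1f}, after: {alpha*a**2 - beta*b**2:.1f}")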
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What describes the relationship between the polarization state of a photon and classical electromagnetic waves?
A. The polarization state is unrelated to classical electromagnetic waves.
B. The polarization state is a simple linear function of classical electromagnetic waves.
C. The polarization state is identical to the Jones vector used for classical wave polarization.
D. The polarization state can only be described using quantum mechanics, with no classical counterpart.
Answer:
|
C. The polarization state is identical to the Jones vector used for classical wave polarization.
|
Relevant Documents:
Document 0:::
Luche reduction is the selective organic reduction of α,β-unsaturated ketones to allylic alcohols. The active reductant is described as "cerium borohydride", which is generated in situ from NaBH4 and CeCl3(H2O)7.
The Luche reduction can be conducted chemoselectively toward ketone in the presence of aldehydes or towards α,β-unsaturated ketones in the presence of a non-conjugated ketone.
An enone forms an allylic alcohol in a 1,2-addition, and the competing conjugate 1,4-addition is suppressed.
The selectivity can be explained in terms of the HSAB theory: carbonyl groups require hard nucleophiles for 1,2-addition. The hardness of the borohydride is increased by replacing hydride groups with alkoxide groups, a reaction catalyzed by the cerium salt by increasing the electrophilicity of the carbonyl group. This is selective for ketones because they are more Lewis basic.
In one application, a ketone is selectively reduced in the presence of an aldehyde: with methanol as the solvent, the aldehyde forms a methoxy acetal that is unreactive under the reducing conditions.
References
Document 1:::
Cisgenesis is a product designation for a category of genetically engineered plants. A variety of classification schemes have been proposed that order genetically modified organisms based on the nature of introduced genotypical changes, rather than the process of genetic engineering.
Cisgenesis (etymology: cis = same side; and genesis = origin) is one term for organisms that have been engineered using a process in which genes are artificially transferred between organisms that could otherwise be conventionally bred. Genes are only transferred between closely related organisms. Nucleic acid sequences must be isolated and introduced using the same technologies that are used to produce transgenic organisms, making cisgenesis similar in nature to transgenesis. The term was first introduced in 2000 by Henk J. Schouten and Henk Jochemsen, and again in a 2004 PhD thesis by Jan Schaart of Wageningen University, which discussed making strawberries less susceptible to Botrytis cinerea.
In Europe, currently, this process is governed by the same laws as transgenesis. While researchers at Wageningen University in the Netherlands feel that this should be changed and regulated in the same way as conventionally bred plants, other scientists, writing in Nature Biotechnology, have disagreed. In 2012 the European Food Safety Authority (EFSA) issued a report with their risk assessment of cisgenic and intragenic plants. They compared the hazards associated with plants produced by cisgenesis and intragenesis with those obtained either by conventional plant breeding techniques or transgenesis. The EFSA concluded that "similar hazards can be associated with cisgenic and conventionally bred plants, while novel hazards can be associated with intragenic and transgenic plants."
Cisgenesis has been applied to transfer of natural resistance genes to the devastating disease Phytophthora infestans in potato and scab (Venturia inaequalis) in apple.
Cisgenesis and transgenesis use artificial gene transfer
Document 2:::
The Mobile Clinic is a medical emergency facility created by Dr. Claudio Costa to rescue riders injured during motorcycle races. In 1976, on Costa's initiative and with funding from Gino Amisano, founder and owner of AGV, the first vehicle specifically designed to provide rapid medical intervention to injured riders still on the track was built.
The mobile clinic made its debut at the Motorcycle Grand Prix of Austria, a round of the World Championship, held in Salzburg on 1 May 1977. During the race for the 350 cc class, Patrick Fernandez, Franco Uncini, Hans Stadelmann, Dieter Braun and Johnny Cecotto were involved in a terrible accident at the fast curve at Fahrerlager. The clinic's staff intervened but were attacked by police dogs. The doctors persevered and their actions saved Uncini's life. Stadelmann died on the spot and Braun ended his career because of a serious eye injury.
The mobile clinic became an institution on the motorcycle racing circuit, helping thousands of riders.
References
External links
Document 3:::
Langya virus (LayV), scientific name Parahenipavirus langyaense, is a species of paramyxovirus first detected in the Chinese provinces of Shandong and Henan.
It was identified in 35 patients from 2018 to August 2022. All but 9 of the 35 cases in China were infected with LayV only, with symptoms including fever, fatigue, and cough. No deaths due to LayV have been reported. Langya virus affects species including humans, dogs, goats, and its presumed original host, shrews. The 35 cases were not in contact with each other, and it is not known whether the virus is capable of human-to-human transmission.
Etymology
The name of the virus in Simplified Chinese (, ) refers to Langya Commandery, a historical commandery in present-day Shandong, China.
Classification
Langya virus is classified in the family Paramyxoviridae. It is also closely related to the Nipah virus and the Hendra virus.
Symptoms
Of the 35 people infected with the virus, 26 were identified as not showing signs of another infection. They all experienced fever, with fatigue being the second most common symptom. Coughing, muscle pains, nausea, headaches and vomiting were also reported as symptoms of infection.
More than half of the infectees had leukopenia, and more than a third had thrombocytopenia, with a smaller number being reported to have impaired liver or kidney function.
Transmission
The researchers who identified the virus found LayV antibodies in a few goats and dogs, and identified LayV viral RNA in 27% of the 262 shrews they sampled. They found no strong evidence of the virus spreading between people. One researcher commented in the NEJM that henipaviruses do not typically spread between people, and thus LayV would be unlikely to become a pandemic, stating: "The only henipavirus that has showed some sign of human-to-human transmission is the Nipah virus and that requires very close contact. I don't think this has much pandemic potential." Another researcher noted that LayV is most likely
Document 4:::
Quanta Magazine is an editorially independent online publication of the Simons Foundation covering developments in physics, mathematics, biology and computer science.
History
Quanta Magazine was initially launched as Simons Science News in October 2012, but it was renamed to its current title in July 2013.
It was founded by the former New York Times journalist Thomas Lin, who was the magazine's editor-in-chief until 2024. The two deputy editors are John Rennie and Michael Moyer, formerly of Scientific American, and the art director is Samuel Velasco. In 2024, Samir Patel became the magazine's second editor in chief.
Content
The articles in the magazine are freely available to read online. Scientific American, Wired, The Atlantic, and The Washington Post, as well as international science publications like Spektrum der Wissenschaft, have reprinted articles from the magazine.
In November 2018, MIT Press published two collections of articles from Quanta Magazine, Alice and Bob Meet the Wall of Fire and The Prime Number Conspiracy.
The magazine also has three podcasts, two of which are hosted by Steven Strogatz. Janna Levin joined as cohost of The Joy of Why for the third season in 2024.
Reception
Undark Magazine described Quanta Magazine as "highly regarded for its masterful coverage of complex topics in science and math." The science news aggregator RealClearScience ranked Quanta Magazine first on its list of "The Top 10 Websites for Science in 2018."
In 2020, the magazine received a National Magazine Award for General Excellence from the American Society of Magazine Editors for its "willingness to tackle some of the toughest and most difficult topics in science and math in a language that is accessible to the lay reader without condescension or oversimplification."
In May 2022 the magazine's staff, notably Natalie Wolchover, were awarded the Pulitzer Prize for Explanatory Reporting.
References
External links
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is transrepression in molecular biology?
A. The process by which one protein enhances the activity of another protein
B. A mechanism where one protein inhibits the activity of another protein through interaction
C. The interaction of multiple proteins leading to cell division
D. The activation of a transcription factor to increase gene expression
Answer:
|
B. A mechanism where one protein inhibits the activity of another protein through interaction
|
Relevant Documents:
Document 0:::
In a broad sense, the term graphic statics is used to describe the technique of solving particular practical problems of statics using graphical means. Actively used in architecture during the 19th century, the methods of graphic statics were largely abandoned in the second half of the 20th century, primarily due to widespread use of frame structures of steel and reinforced concrete that facilitated analysis based on linear algebra. The beginning of the 21st century was marked by a "renaissance" of the technique, driven by its addition to computer-aided design tools that enable engineers to instantly visualize form and forces.
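As a purely illustrative aside (not drawn from the works cited below), the computation underlying a force polygon is just vector addition: a set of concurrent forces is in equilibrium exactly when the polygon formed by placing the force vectors tip to tail closes, i.e. their sum is zero. A minimal Python sketch of that closure check:

import math

def polygon_closure(forces):
    """forces: list of (magnitude, angle_in_degrees) pairs.
    Returns the magnitude and direction of the force needed to close the
    polygon; zero magnitude means the system is in equilibrium."""
    rx = sum(m * math.cos(math.radians(a)) for m, a in forces)
    ry = sum(m * math.sin(math.radians(a)) for m, a in forces)
    return math.hypot(rx, ry), math.degrees(math.atan2(-ry, -rx))

# three forces that balance: 10 N at 0 deg, 120 deg and 240 deg
residual, _ = polygon_closure([(10, 0), (10, 120), (10, 240)])
print(f"residual force: {residual:.6f} N")  # ~0, so the polygon closes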
History
Markou and Ruan trace the origins of graphic statics to da Vinci and Galileo, who used graphical means to calculate the sum of forces, Simon Stevin's parallelogram of forces, and the 1725 introduction of the force polygon and funicular polygon by Pierre Varignon. Giovanni Poleni used graphical calculations (and Robert Hooke's analogy between the hanging chain and standing structure) while studying the dome of Saint Peter's Basilica in Rome (1748). Gabriel Lamé and Émile Clapeyron studied the dome of Saint Isaac's Cathedral with the help of the force and funicular polygons (1823).
Finally, Carl Culmann had established the new discipline (and gave it a name) in his 1864 work Die Graphische Statik. Culmann was inspired by preceding work by Jean-Victor Poncelet on earth pressure and Lehrbuch der Statik by August Möbius. The next twenty years saw rapid development of methods that involved, among others, major physicists like James Clerk Maxwell and William Rankine. In 1872 Luigi Cremona introduced the Cremona diagram to calculate trusses, in 1873 Robert H. Bow established the "Bow's notation" that is still in use. It fell out of use, especially since construction methods, such as concrete post and beam, allowed for familiar numerical calculations. Access to powerful computation gave structural engineers new to
Document 1:::
Addiction is a state characterized by compulsive engagement in rewarding stimuli, despite adverse consequences. The process of developing an addiction occurs through instrumental learning, which is otherwise known as operant conditioning.
Neuroscientists believe that drug addicts' behavior directly reflects physiological changes in their brain caused by using drugs. On this view, a bodily process in the brain drives the addiction, brought on by brain damage or adaptation from chronic drug use.
In humans, addiction is diagnosed according to diagnostic models such as the Diagnostic and Statistical Manual of Mental Disorders, through observed behaviors. There has been significant advancement in understanding the structural changes that occur in parts of the brain involved in the reward pathway (mesolimbic system) that underlies addiction. Most research has focused on two portions of the brain: the ventral tegmental area, (VTA) and the nucleus accumbens (NAc).
The VTA is the portion of the mesolimbic system responsible for spreading dopamine to the whole system. The VTA is stimulated by ″rewarding experiences″. The release of dopamine by the VTA induces pleasure, thus reinforcing behaviors that lead to the reward. Drugs of abuse increase the VTA's ability to project dopamine to the rest of the reward circuit. These structural changes only last 7–10 days, however, indicating that the VTA cannot be the only part of the brain that is affected by drug use, and changed during the development of addiction.
The nucleus accumbens (NAc) plays an essential part in the formation of addiction. Almost every drug with addictive potential induces the release of dopamine into the NAc. In contrast to the VTA, the NAc shows long-term structural changes. Drugs of abuse weaken the connections within the NAc after habitual use, as well as after use then withdrawal.
Structural changes of learning
Learning by experience occurs through modifications of the structural circuits of the brain. These circuits are composed of many neurons and their connections, called synapses, which occur between the axon of one neuron and the dendrite of another. A single neuron generally has many dendrites which are called dendritic branches, each of which can be synapsed by many axons.
Along dendritic branches there can be hundreds or even thousands of dendritic spines, structural protrusions that are sites of excitatory synapses. These spines increase the number of axons from which the dendrite can receive information. Dendritic spines are very plastic, meaning they can be formed and eliminated very quickly, in the order of a few hours. More spines grow on a dendrite when it is repetitively activated. Dendritic spine changes have been correlated with long-term potentiation (LTP) and long-term depression (LTD).
LTP is the way that connections between neurons and synapses are strengthened. LTD is the process by which synapses are weakened. For LTP to occur, NMDA receptors on the dendritic spine send intracellular signals to increase the number of AMPA receptors on the post synaptic neuron. If a spine is stabilized by repeated activation, the spine becomes mushroom shaped and acquires many more AMPA receptors. This structural change, which is the basis of LTP, persists for months and may be an explanation for some of the long-term behavioral changes that are associated with learned behaviors including addiction to drugs.
Research methodologies
Animal models
Animal models, especially rats and mice, are used for many types of biological research. The animal models of addiction are particularly useful because animals that are addicted to a substance show behaviors similar to human addicts. This implies that the structural changes that can be observed after the animal ingests a drug can be correlated with an animal's behavioral changes, as well as with similar changes occurring in humans.
Administration protocols
Administration of drugs that are often abused can be done either by the experimenter (non-contingent), or by a self-administration (contingent) method. The latter usually involves the animal pressing a lever to receive a drug. Non-contingent models are generally used for convenience, being useful for examining the pharmacological and structural effects of the drugs. Contingent methods are more realistic because the animal controls when and how much of the drug it receives. This is generally considered a better method for studying the behaviors associated with addiction. Contingent administration of drugs has been shown to produce larger structural changes in certain parts of the brain, in comparison to non-contingent administration.
Types of drugs
All abused drugs directly or indirectly promote dopamine signaling in the mesolimbic dopamine neurons which project from the ventral tegmental area to the nucleus accumbens (NAc). The types of drugs used in experimentation increase this dopamine release through different mechanisms.
Opiates
Opiates are a class of sedative with the capacity for pain relief. Morphine is an opiate that is commonly used in animal testing of addiction. Opiates stimulate dopamine neurons in the brain indirectly by inhibiting GABA release from modulatory interneurons that synapse onto the dopamine neurons. GABA is an inhibitory neurotransmitter that decreases the probability that the target neuron will send a subsequent signal.
Stimulants
Stimulants used regularly in neuroscience experimentation are cocaine and amphetamine. These drugs induce an increase in synaptic dopamine by inhibiting the reuptake of dopamine from the synaptic cleft, effectively increasing the amount of dopamine that reaches the target neuron.
The reward pathway
The reward pathway, also called the mesolimbic system of the brain, is the part of the brain that registers reward and pleasure. This circuit reinforces the behavior that leads to a positive and pleasurable outcome. In drug addiction, the drug-seeking behaviors become reinforced by the rush of dopamine that follows the administration of a drug of abuse. The effects of drugs of abuse on the ventral tegmental area (VTA) and the nucleus accumbens (NAc) have been studied extensively.
Drugs of abuse change the complexity of dendritic branching as well as the number and size of the branches in both the VTA and the NAc. [7] By correlation, these structural changes have been linked to addictive behaviors. The effect of these structural changes on behavior is uncertain and studies have produced conflicting results. Two studies have shown that an increase in dendritic spine density due to cocaine exposure facilitates behavioral sensitization, while two other studies produce contradicting evidence.
In response to drugs of abuse, structural changes can be observed in the size of neurons and the shape and number of the synapses between them. The nature of the structural changes is specific to the type of drug used in the experiment. Opiates and stimulants produce opposite effects in structural plasticity in the reward pathway. It is not expected that these drugs would induce opposing structural changes in the brain because these two classes of drugs, opiates and stimulants, both cause similar behavioral phenotypes.
Both of these drugs induce increased locomotor activity acutely, escalated self-administration chronically, and dysphoria when the drug is taken away. Although their effects on structural plasticity are opposite, there are two possible explanations as to why these drugs still produce the same indicators of addiction: Either these changes produce the same behavioral phenotype when any change from baseline is produced, or the critical changes that cause the addictive behavior cannot be quantified by measuring dendritic spine density.
Opiates decrease spine density and dendrite complexity in the nucleus accumbens (NAc). Morphine decreases spine density regardless of the treatment paradigm (with one exception: "chronic morphine increases spine number on orbitofrontal cortex (oPFC) pyramidal neurons"). Either chronic or intermittent administration of morphine will produce the same effect. The only case where opiates increase dendritic density is with chronic morphine exposure, which increases spine density on pyramidal neurons in the orbitofrontal cortex. Stimulants increase spine density and dendritic complexity in the nucleus accumbens (NAc), ventral tegmental area (VTA), and other structures in the reward circuit.
Ventral tegmental area
There are neurons with cell bodies in the VTA that release dopamine onto specific parts of the brain, including many of the limbic regions such as the NAc, the medial prefrontal cortex (mPFC), dorsal striatum, amygdala, and the hippocampus. The VTA has both dopaminergic and GABAergic neurons that both project to the NAc and mPFC. GABAergic neurons in the VTA also synapse on local dopamine cells. In non-drug models, the VTA dopamine neurons are stimulated by rewarding experiences. A release of dopamine from the VTA neurons seems to be the driving action behind drug-induced pleasure and reward.
Exposure to drugs of abuse elicits LTP at excitatory synapses on VTA dopamine neurons. Excitatory synapses in brain slices from the VTA taken 24 hours after a single cocaine exposure showed an increase in AMPA receptors in comparison to a saline control. Additional LTP could not be induced in these synapses. This is thought to be because the maximal amount of LTP had already been induced by the administration of cocaine. LTP is only seen on the dopamine neurons, not on neighboring GABAergic neurons. This is of interest because the administration of drugs of abuse increases the excitation of VTA dopamine neurons, but does not increase inhibition. Excitatory inputs into the VTA will activate the dopamine neurons 200%, but do not increase activation of GABA neurons which are important in local inhibition.
This effect of inducing LTP in VTA slices 24 hours after drug exposure has been shown using morphine, nicotine, ethanol, cocaine, and amphetamines. These drugs have very little in common except that they are all potentially addictive. This is evidence supporting a link between structural changes in the VTA and the development of addiction.
Changes other than LTP have been observed in the VTA after treatment with drugs of abuse. For example, neuronal body size decreased in response to opiates.
Although the structural changes in the VTA invoked by exposure to an addictive drug generally disappear after a week or two, the target regions of the VTA, including the NAc, may be where the longer-term changes associated with addiction occur during the development of the addiction.
Nucleus accumbens
The nucleus accumbens plays an integral role in addiction. Almost every addictive drug of abuse induces the release of dopamine into the nucleus accumbens. The NAc is particularly important for instrumental learning, including cue-induced reinstatement of drug-seeking behavior. It is also involved in mediating the initial reinforcing effects of addictive drugs. The most common cell type in the NAc is the GABAergic medium spiny neuron. These neurons project inhibitory connections to the VTA and receive excitatory input from various other structures in the limbic system. Changes in the excitatory synaptic inputs into these neurons have been shown to be important in mediating addiction-related behaviors. It has been shown that LTP and LTD occurs at NAc excitatory synapses.
Unlike the VTA, a single dose of cocaine induces no change in potentiation in the excitatory synapses of the NAc. LTD was observed in the medium spiny neurons in the NAc following two different protocols: a daily cocaine administration for five days or a single dose followed by 10–14 days of withdrawal. This suggests that the structural changes in the NAc are associated with long-term behaviors (rather than acute responses) associated with addiction such as drug seeking.
Human relevance
Relapse
Neuroscientists studying addiction define relapse as the reinstatement of drug-seeking behavior after a period of abstinence. The structural changes in the VTA are hypothesized to contribute to relapse. As the molecular mechanisms of relapse are better understood, pharmacological treatments to prevent relapse are further refined.
Risk of relapse is a serious and long-term problem for recovering addicts. An addict can be forced to abstain from using drugs while they are admitted in a treatment clinic, but once they leave the clinic they are at risk of relapse. Relapse can be triggered by stress, cues associated with past drug use, or re-exposure to the substance. Animal models of relapse can be triggered in the same way.
Search for a cure for addiction
The goal of addiction research is to find ways to prevent and reverse the effects of addiction on the brain. Theoretically, if the structural changes in the brain associated with addiction can be blocked, then the negative behaviors associated with the disease should never develop.
Structural changes associated with addiction can be inhibited by NMDA receptor antagonists which block the activity of NMDA receptors. NMDA receptors are essential in the process of LTP and LTD. Drugs of this class are unlikely candidates for pharmacological prevention of addiction because these drugs themselves are used recreationally. Examples of NMDAR antagonists are ketamine, dextromethorphan (DXM), phencyclidine (PCP).
Document 2:::
LifeSigns: Surgical Unit, released in Europe as LifeSigns: Hospital Affairs, is an adventure game for the Nintendo DS set in a hospital. LifeSigns is the follow-up to Kenshūi Tendō Dokuta, a game released at the end of 2004; that game has not been released outside Japan, although the localized LifeSigns still makes reference to it.
The game has been described as "like Phoenix Wright crossed with Trauma Center".
Characters
Seimei Medical University Hospital
(Age 25, Male)
A second-year intern at the prestigious Seimei Medical University Hospital. He is studying Emergency medicine. He can be a little naive but he is very talented. He devoted his life to helping people after seeing his mother die of cancer. In the first game, he misdiagnosed a patient, which nearly ruined his career. His given name is derived from the English word "doctor", transliterated with kanji.
(Age 22, Female)
A nurse who has been working at the hospital 1 year before Tendo. She seems to be careless at times, but nonetheless, she is a talented and devoted nurse. She seems to have a crush on Tendo. She is one of the girls Tendo can date later in the game.
(Age 36, Female)
An extremely gifted surgeon, Asou is Tendo's supervisor and mentor. She recently got over her alcoholism after a disastrous break-up with Prof. Sawai (first game). She is the head of the 3rd Surgery Department, which specializes in heart diseases. Tendo seems to have a crush on her. She wears a bell choker around her neck.
Kyousuke Sawai (Age 52, Male)
A world authority in the field of immunology, and head of the 1st Surgical Department of Seimei Medical University Hospital. A cold and callous man who only seems to be interested in results. In the first game, it was revealed that he was Tendo's biological father and arranged his transfer to the Seimei. He is involved in the cancer treatment research and recently developed a miracle drug that might cure cancer called SPX (Sawai Power Plex). He also dated Suzu Asou for
Document 3:::
In mathematics, a superintegrable Hamiltonian system is a Hamiltonian system on a 2n-dimensional symplectic manifold Z for which the following conditions hold:
(i) There exist k (with n ≤ k < 2n) independent integrals F_i of motion. Their level surfaces (invariant submanifolds) form a fibered manifold F : Z → N = F(Z) over a connected open subset N ⊂ ℝ^k.
(ii) There exist smooth real functions s_ij on N such that the Poisson bracket of integrals of motion reads
{F_i, F_j} = s_ij ∘ F.
(iii) The matrix function s_ij is of constant corank m = 2n − k on N.
If k = n, this is the case of a completely integrable Hamiltonian system. The Mishchenko-Fomenko theorem for superintegrable Hamiltonian systems generalizes the Liouville-Arnold theorem on action-angle coordinates of completely integrable Hamiltonian systems as follows.
Let invariant submanifolds of a superintegrable Hamiltonian system be connected compact and mutually diffeomorphic. Then the fibered manifold is a fiber bundle
in tori . There exists an open neighbourhood of which is a trivial fiber bundle provided with the bundle (generalized action-angle) coordinates ,
, such that are coordinates on . These coordinates are the Darboux coordinates on a symplectic manifold . A Hamiltonian of a superintegrable system depends only on the action variables which are the Casimir functions of the coinduced Poisson structure on .
The Liouville-Arnold theorem for completely integrable systems and the Mishchenko-Fomenko theorem for the superintegrable ones are generalized to the case of non-compact invariant submanifolds. They are diffeomorphic to a toroidal cylinder .
Document 4:::
A fuzzy associative matrix expresses fuzzy logic rules in tabular form. These rules usually take two variables as input, mapping cleanly to a two-dimensional matrix, although theoretically a matrix of any number of dimensions is possible. From the perspective of neuro-fuzzy systems, the mathematical matrix is called a "Fuzzy associative memory" because it stores the weights of the perceptron.
Applications
In the context of game AI programming, a fuzzy associative matrix helps to develop the rules for non-player characters. Suppose a professional is tasked with writing fuzzy logic rules for a video game monster. In the game being built, entities have two variables: hit points (HP) and firepower (FP):
This translates to:
IF MonsterHP IS VeryLowHP AND MonsterFP IS VeryWeakFP THEN Retreat
IF MonsterHP IS LowHP AND MonsterFP IS VeryWeakFP THEN Retreat
IF MonsterHP IS MediumHP AND MonsterFP is VeryWeakFP THEN Defend
Multiple rules can fire at once, and often will, because the distinction between "very low" and "low" is fuzzy. If it is more "very low" than it is low, then the "very low" rule will generate a stronger response. The program will evaluate all the rules that fire and use an appropriate defuzzification method to generate its actual response.
An implementation of this system might use either the matrix or the explicit IF/THEN form. The matrix makes it easy to visualize the system, but it also makes it impossible to add a third variable just for one rule, so it is less flexible.
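As a rough illustration of how such a rule set might be evaluated in code, the following Python sketch uses made-up triangular membership functions and value ranges (0-100 for both HP and FP), min() as the fuzzy AND, and a crude defuzzification that simply picks the most strongly activated action; none of these specifics come from the article above:

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# hypothetical fuzzy sets for monster HP and FP (assumed 0-100 ranges)
HP_SETS = {"VeryLowHP": (-1, 0, 25), "LowHP": (0, 25, 50), "MediumHP": (25, 50, 75)}
FP_SETS = {"VeryWeakFP": (-1, 0, 30), "WeakFP": (0, 30, 60)}

# a fragment of the fuzzy associative matrix: (HP set, FP set) -> action
RULES = {
    ("VeryLowHP", "VeryWeakFP"): "Retreat",
    ("LowHP", "VeryWeakFP"): "Retreat",
    ("MediumHP", "VeryWeakFP"): "Defend",
}

def decide(hp, fp):
    activation = {}
    for (hp_set, fp_set), action in RULES.items():
        # min() is the usual fuzzy AND; several rules may fire at once
        strength = min(tri(hp, *HP_SETS[hp_set]), tri(fp, *FP_SETS[fp_set]))
        activation[action] = activation.get(action, 0.0) + strength
    # crude "defuzzification": choose the most strongly activated action
    return max(activation, key=activation.get), activation

print(decide(hp=20, fp=10))  # mostly "very low"/"low" HP -> Retreat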
Identify a rule set
There is no inherent pattern in the matrix. It appears as if the rules were just made up, and indeed they were. This is both a strength and a weakness of fuzzy logic in general. It is often impractical or impossible to find an exact set of rules or formulae for dealing with a specific situation. For a sufficiently complex game, a mathematician would not be able to study the system and figure out a mathematically accurate set of rules. However, this weakness is
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What role does the nucleus accumbens (NAc) play in addiction according to the text?
A. It is responsible for the initial pleasure derived from drug use.
B. It regulates dopamine release from the ventral tegmental area (VTA).
C. It is involved in mediating the initial reinforcing effects of addictive drugs.
D. It exclusively processes the withdrawal symptoms associated with addiction.
Answer:
|
C. It is involved in mediating the initial reinforcing effects of addictive drugs.
|
Relevant Documents:
Document 0:::
In general relativity, specifically in the Einstein field equations, a spacetime is said to be stationary if it admits a Killing vector that is asymptotically timelike.
Description and analysis
In a stationary spacetime, the metric tensor components, g_μν, may be chosen so that they are all independent of the time coordinate. The line element of a stationary spacetime has the form (i, j = 1, 2, 3)
ds² = λ(dt − ω_i dy^i)² − λ⁻¹ h_ij dy^i dy^j,
where t is the time coordinate, y^i are the three spatial coordinates and h_ij is the metric tensor of 3-dimensional space. In this coordinate system the Killing vector field ξ^μ has the components ξ^μ = (1, 0, 0, 0). Here λ is a positive scalar representing the norm of the Killing vector, i.e., λ = g_μν ξ^μ ξ^ν, and ω_i is a 3-vector, called the twist vector, which vanishes when the Killing vector is hypersurface orthogonal. The latter arises as the spatial components of the twist 4-vector ω_μ (see, for example, p. 163), which is orthogonal to the Killing vector ξ^μ, i.e., satisfies ω_μ ξ^μ = 0. The twist vector measures the extent to which the Killing vector fails to be orthogonal to a family of 3-surfaces. A non-zero twist indicates the presence of rotation in the spacetime geometry.
The coordinate representation described above has an interesting geometrical interpretation. The time translation Killing vector ξ generates a one-parameter group of motion G in the spacetime M. By identifying the spacetime points that lie on a particular trajectory (also called orbit) one gets a 3-dimensional space (the manifold of Killing trajectories) V = M/G, the quotient space. Each point of V represents a trajectory in the spacetime M. This identification, called a canonical projection, is a mapping that sends each trajectory in M onto a point in V and induces a metric on V via pullback. The quantities λ, ω_i and h_ij are all fields on V and are consequently independent of time. Thus, the geometry of a stationary spacetime does not change in time. In the special case ω_i = 0 the spacetime is said to be static. By definition, every static spacetime is stationary, but the converse is not generally true.
Document 1:::
ESO 137-001, also known as the Jellyfish Galaxy, is a barred spiral galaxy located in the constellation Triangulum Australe and in the cluster Abell 3627. As the galaxy moves toward the center of the galaxy cluster at 1900 km/s, it is stripped by hot gas, creating a 260,000 light-year long tail. This is called ram pressure stripping (RPS). The intergalactic gas in Abell 3627 is at 100 million kelvin, which causes star formation in the tails. The galaxy has a low number of H II regions, which combine to a total mass of 3.5 × 10^8 solar masses, only 10% of which is located in the main disk of ESO 137-001.
History
The galaxy was discovered by Ming Sun in 2005.
Galaxy's fate
The stripping of gas is thought to have a significant effect on the galaxy's development, removing cold gas from the galaxy, shutting down the formation of new stars in the galaxy, and changing the appearance of inner spiral arms and bulges because of the effects of star formation.
Gallery
See also
Abell 3627
List of galaxies
Jellyfish galaxy
References
External links
NASA gallery: ESO 137-001
Cornell University: The Flying Spaghetti Monster: Impact of magnetic fields on ram pressure stripping in disk galaxies
Document 2:::
Nesting algorithms are used to make the most efficient use of material or space. This could for instance be done by evaluating many different possible combinations via recursion.
Linear (1-dimensional): The simplest of the algorithms illustrated here. For an existing set there is only one position where a new cut can be placed – at the end of the last cut. Validation of a combination involves a simple Stock - Yield - Kerf = Scrap calculation (a small code sketch of this check appears after this list).
Plate (2-dimensional): These algorithms are significantly more complex. For an existing set, there may be as many as eight positions where a new cut may be introduced next to each existing cut, and if the new cut is not perfectly square then different rotations may need to be checked. Validation of a potential combination involves checking for intersections between two-dimensional objects.
Packing (3-dimensional): These algorithms are the most complex illustrated here due to the larger number of possible combinations. Validation of a potential combination involves checking for intersections between three-dimensional objects.
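The sketch below illustrates the 1-dimensional validation mentioned in the linear case above; the stock length, cut list, kerf value and the assumption of one kerf-width loss per piece are all made up for illustration:

def validate_1d(stock, cuts, kerf):
    """Return (fits, scrap) for placing `cuts` end-to-end on one stock piece."""
    yield_length = sum(cuts)
    kerf_loss = kerf * len(cuts)   # assumes one saw cut (kerf width) per piece
    scrap = stock - yield_length - kerf_loss
    return scrap >= 0, scrap

fits, scrap = validate_1d(stock=6000, cuts=[1500, 1500, 2400], kerf=3)
print(fits, scrap)  # True, 591  (millimetres, in this made-up example)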
References
Document 3:::
NGC 6738 is an astronomical feature that is catalogued as an NGC object. Although listed as an open cluster in some astronomical databases, it may be merely an asterism; a 2003 paper in the journal Astronomy and Astrophysics describes it as being an "apparent concentration of a few bright stars on patchy background absorption".
References
External links
Simbad
Image NGC 6738
NGC 6738
Document 4:::
A lark, early bird, morning person, or (in Scandinavian countries) an A-person, is a person who usually gets up early in the morning and goes to bed early in the evening. The term relates to the birds known as larks, which are known to sing before dawn. Human "larks" tend to feel most energetic just after they get up in the morning. They are thus well-suited for working the day shift.
The opposite of the lark is the owl, often awake at night. A person called a night owl is someone who usually stays up late and may feel most awake in the evening and at night. Researchers have traditionally used the terms morningness and eveningness to describe these two chronotypes.
Charting chronotypes
Till Roenneberg, a chronobiologist in Munich, has mapped the circadian rhythms of more than 200,000 people. Biological processes, including sleep-wake patterns, that display an oscillation of about 24 hours are called circadian rhythms. According to Roenneberg, the distribution of circadian rhythms spans from the very early to the very late chronotypes, similarly to how height varies from short to tall.
As circadian rhythm is independent of the number of hours of sleep a person needs, Roenneberg calculates the rhythm based on the midpoint of the sleep period. A person who goes to bed at midnight and rises at 8 thus has the same chronotype as a person who goes to bed at 1 a.m. and rises at 7; the midpoint of sleep is 4 a.m. for both of these individuals.
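As a small illustration of the midpoint calculation described above (handling sleep periods that cross midnight), a minimal Python sketch:

def mid_sleep(bedtime_h, wake_h):
    """bedtime_h and wake_h are hours on a 24-hour clock (floats allowed)."""
    duration = (wake_h - bedtime_h) % 24      # sleep length in hours
    return (bedtime_h + duration / 2) % 24    # midpoint on the 24-hour clock

print(mid_sleep(0, 8))  # bed at midnight, up at 8  -> 4.0 (4 a.m.)
print(mid_sleep(1, 7))  # bed at 1 a.m., up at 7    -> 4.0 (same chronotype)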
People with early chronotypes will usually not be able to "sleep in", even if they have stayed up later than usual. While fit for a "lark-like" societal framework, they find it hard to adapt to a context where "sleeping in" is common: despite feeling refreshed in the morning, they may feel hampered socially when confronted with some kinds of social gatherings (such as soirées) that are often scheduled for the evening, even if most kinds of social events are not.
People with late chronotypes go to bed late and rise late. Forced to arise earlier than their circadian rhythm dictates, they have a low body temperature and may require a few hours to feel really awake. They are unable to fall asleep as early as "larks" can.
Prevalence
A 2007 survey of over 55,000 people found that chronotypes tend to follow a normal distribution, with extreme morning and evening types on the far ends. There are studies suggesting that genes help determine whether a person is a lark or an evening person, much as they are implicated in people's attitude toward authority, unconventional behavior, and reading and television viewing habits. For instance, there is the case of the Per2 gene on chromosome 2, which was discovered in the early 1990s by Urs Albrecht and colleagues at the University of Fribourg in Switzerland. This gene regulates the circadian clock, and a variant of it was found in families that demonstrated advanced sleep-phase syndrome. According to the researchers, carrying this variant skews a person's sleep pattern even if the sleep period still covers eight hours.
Age is also implicated in whether one is a morning or a night person. Developmentally, people are generally night owls in their teens and become larks later in life. Infants also tend to be early risers.
Career options
Morning larks tend to thrive in careers that start early in the morning. Industries that tend to be favorable to morning larks include farming, construction, and working for public utilities. Many employees in these industries start working at or before 7:00 a.m. Some professions are well-known for their early morning hours, including bakers, school teachers, dairy farmers, and surgeons.
Morning larks tend to be less represented among the employees of restaurants, hotels, entertainment venues, and retail stores, which tend to be open until later in the evening. However, morning larks may be perfectly suited to the opening shift of a coffee shop, handling the morning rush at a hotel, or working on the morning news shows for radio or television. Morning larks may also work the early shift in round-the-clock industries, such as emergency services, transportation, healthcare, and manufacturing.
Many large businesses that operate in the evening or at night need employees at all levels, from entry-level employees to managers to skilled staff, whenever they are open. For example, most hospitals employ many types of workers around the clock:
non-medical staff such as security guards, IT specialists, cleaning and maintenance workers, cooks and food service staff, and admissions clerks;
medical staff such as nurses, paramedics, radiology technicians, pharmacists, and phlebotomists;
managers for each of the main hospital wards or activities, including janitorial supervisors and head nurses.
See also
Advanced sleep phase syndrome
Circadian rhythm sleep disorder
Diurnality
FASPS
Morningness–eveningness questionnaire (MEQ)
Munich ChronoType Questionnaire (MCTQ)
Night owl
Nocturnality
Waking up early
References
External links
The Munich ChronoType Questionnaire (MCTQ)
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary characteristic of a person classified as a "morning lark" according to the text?
A. They usually stay up late at night.
B. They feel most energetic just after waking up in the morning.
C. They prefer to work in evening shifts.
D. They have a low body temperature in the morning.
Answer:
|
B. They feel most energetic just after waking up in the morning.
|
Relevant Documents:
Document 0:::
OH 471 (OHIO H 471) is a distant, powerful quasar located in the northern constellation of Auriga. First identified in 1974 through photoelectric spectrophotometry, the object has a redshift of (z) 3.40. This high redshift makes it one of the most distant objects observed, earning it the nickname "the blaze marking the edge of the universe." It is found to be significantly variable, which classifies it as a blazar.
Description
OH 471 is a low-polarization quasar but also a high-frequency peaker (HFP). It is a radio-loud gamma-ray blazar with a central supermassive black hole mass of 10^9.1 M☉ and a luminosity of 6.8 × 10^28 W Hz⁻¹. Its spectrum is inverted and steep, reaching a peak at 18.6 GHz.
In addition, OH 471 displayed two major flares, visible at the higher frequencies of 15 and 8 GHz, in March 2003 and October 2008. Reduced activity was observed afterwards, with its flux density decreasing following 2009. Between 1985 and 1996, the object's radio flux increased slightly, by a factor of about 1.6.
Observations by Very Long Baseline Interferometry found the object has a core-jet morphology. Based on radio images, the source is compact. Its non-linear structure described as a jet, is found to be extended by 8 milliarcseconds to the east direction. The jet also appears as twisted with a bending angle of 50°. Superluminal motion was also implied as the inner jet component displayed an estimated core separation of 0.76 ± 0.11c. A nuclear region was detected, containing most of the flux density. There is a resolved radio core extending along a position angle of 81°, which is further broken up into two individual circular nuclear components with a separation of 0.76 mas. A fainter component can be seen west from the core.
Digicon and image-tube spectroscopy of the spectrum of OH 471 found 89 absorption lines. Four absorption-line redshift systems are identified. Based on results, they are located at redshifts (z) 3.122, 3.191, 3.246 and 3.343.
Document 1:::
A thermal wheel, also known as a rotary heat exchanger, or rotary air-to-air enthalpy wheel, energy recovery wheel, or heat recovery wheel, is a type of energy recovery heat exchanger positioned within the supply and exhaust air streams of air-handling units or rooftop units or in the exhaust gases of an industrial process, in order to recover the heat energy. Other variants include enthalpy wheels and desiccant wheels. A cooling-specific thermal wheel is sometimes referred to as a Kyoto wheel.
Rotary thermal wheels are a mechanical means of heat recovery. A rotating porous metallic wheel transfers thermal energy from one air stream to another by passing through each fluid alternately. The system operates by working as a thermal storage mass whereby the heat from the air is temporarily stored within the wheel matrix until it is transferred to the cooler air stream.
Two types of rotary thermal wheels exist: heat wheels and enthalpy (desiccant) wheels. Though there is a geometrical similarity between heat and enthalpy wheels, there are differences that affect the operation of each design. In a system using a desiccant wheel, the moisture in the air stream with the highest relative humidity is transferred to the opposite air stream after flowing through the wheel. This can work in both directions: from incoming air to exhaust air and from exhaust air to incoming air. The supply air can then be used directly or employed to further cool the air. This is an energy-intensive process.
The rotary air-to-air enthalpy wheel heat exchanger is a rotating cylinder filled with an air permeable material, typically polymer, aluminum, or synthetic fiber, providing the large surface area required for the sensible enthalpy transfer (enthalpy is a measure of heat). As the wheel rotates between the supply and exhaust air streams, it picks up heat energy and releases it into the colder air stream. The driving force behind the exchange is the difference in temperatures between the opposing air streams.
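As a rough, illustrative calculation (the effectiveness value and operating conditions below are assumptions, not figures from this article), the sensible heat recovered by such a wheel can be estimated from a simple effectiveness model:

def sensible_recovery(t_supply_in, t_exhaust_in, mass_flow_kg_s,
                      effectiveness=0.75, cp_air=1005.0):
    """Return (supply outlet temperature in deg C, recovered heat in W),
    assuming equal supply and exhaust mass flows and dry (sensible-only) transfer."""
    t_supply_out = t_supply_in + effectiveness * (t_exhaust_in - t_supply_in)
    q_recovered = mass_flow_kg_s * cp_air * (t_supply_out - t_supply_in)
    return t_supply_out, q_recovered

print(sensible_recovery(t_supply_in=-5.0, t_exhaust_in=22.0, mass_flow_kg_s=1.2))
# -> roughly (15.25 deg C, ~24.4 kW) for this made-up winter case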
Document 2:::
In probability theory and statistics, the trapezoidal distribution is a continuous probability distribution whose probability density function graph resembles a trapezoid. Likewise, trapezoidal distributions also roughly resemble mesas or plateaus.
Each trapezoidal distribution has a lower bound a and an upper bound d, where a < d, beyond which no values or events on the distribution can occur (i.e. beyond which the probability is always zero). In addition, there are two sharp bending points (non-differentiable discontinuities) within the probability distribution, which we will call b and c, which occur between a and d, such that a ≤ b ≤ c ≤ d.
The image to the right shows a perfectly linear trapezoidal distribution. However, not all trapezoidal distributions are so precisely shaped. In the standard case, where the middle part of the trapezoid is completely flat, and the side ramps are perfectly linear, all of the values between and will occur with equal frequency, and therefore all such points will be modes (local frequency maxima) of the distribution. On the other hand, though, if the middle part of the trapezoid is not completely flat, or if one or both of the side ramps are not perfectly linear, then the trapezoidal distribution in question is a generalized trapezoidal distribution, and more complicated and context-dependent rules may apply. The side ramps of a trapezoidal distribution are not required to be symmetric in the general case, just as the sides of trapezoids in geometry are not required to be symmetric.
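For concreteness, here is a minimal Python sketch of the density of the linear-ramp trapezoidal distribution with parameters a ≤ b ≤ c ≤ d (the generalized forms described above differ), together with a quick numerical check that it integrates to one:

def trapezoidal_pdf(x, a, b, c, d):
    h = 2.0 / (d + c - a - b)          # height of the flat top
    if x < a or x > d:
        return 0.0
    if x < b:                          # rising ramp
        return h * (x - a) / (b - a)
    if x <= c:                         # flat plateau: the modes of the distribution
        return h
    return h * (d - x) / (d - c)       # falling ramp

# midpoint-rule check that the density integrates to ~1
a, b, c, d = 0.0, 1.0, 3.0, 5.0
n = 100_000
area = sum(trapezoidal_pdf(a + (d - a) * (i + 0.5) / n, a, b, c, d)
           for i in range(n)) * (d - a) / n
print(round(area, 4))  # ~1.0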
The non-central moments of the trapezoidal distribution are
Special cases of the trapezoidal distribution include the uniform distribution (with a = b and c = d) and the triangular distribution (with b = c). Trapezoidal probability distributions seem not to be discussed very often in the literature. The uniform, triangular, Irwin-Hall, Bates, Poisson, normal, bimodal, and multimodal distributions are all more frequently discussed in the literature. This may be because these other (
Document 3:::
USA-440, also known as GPS-III SV07, NAVSTAR 83, RRT-1 or Sally Ride, is a United States navigation satellite which forms part of the Global Positioning System.
The satellite is named after Sally Ride.
The RRT-1 name refers to the Rapid Response Trailblazer program in which the satellite was launched on an accelerated timeline.
Satellite
SV07 is the seventh GPS Block III satellite to launch.
Launch
USA-440 was launched by SpaceX on 16 December 2024 at 7:52pm Eastern, atop a Falcon 9 rocket.
The launch took place from SLC-40 at Cape Canaveral Space Force Station.
Document 4:::
The Extended Evolutionary Synthesis (EES) consists of a set of theoretical concepts argued to be more comprehensive than the earlier modern synthesis of evolutionary biology that took place between 1918 and 1942. The extended evolutionary synthesis was called for in the 1950s by C. H. Waddington, argued for on the basis of punctuated equilibrium by Stephen Jay Gould and Niles Eldredge in the 1980s, and was reconceptualized in 2007 by Massimo Pigliucci and Gerd B. Müller.
The extended evolutionary synthesis revisits the relative importance of different factors at play, examining several assumptions of the earlier synthesis, and augmenting it with additional causative factors. It includes multilevel selection, transgenerational epigenetic inheritance, niche construction, evolvability, and several concepts from evolutionary developmental biology.
Not all biologists have agreed on the need for, or the scope of, an extended synthesis. Many have collaborated on another synthesis in evolutionary developmental biology, which concentrates on developmental molecular genetics and evolution to understand how natural selection operated on developmental processes and deep homologies between organisms at the level of highly conserved genes.
The preceding "modern synthesis"
The modern synthesis was the widely accepted early-20th-century synthesis reconciling Charles Darwin's theory of evolution by natural selection and Gregor Mendel's theory of genetics in a joint mathematical framework. It established evolution as biology's central paradigm. The 19th-century ideas of natural selection by Darwin and Mendelian genetics were united by researchers who included Ronald Fisher, J. B. S. Haldane and Sewall Wright, the three founders of population genetics, between 1918 and 1932. Julian Huxley introduced the phrase "modern synthesis" in his 1942 book, Evolution: The Modern Synthesis.
Early history
During the 1950s, English biologist C. H. Waddington called for an extended synthesis b
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the redshift value of OH 471, making it one of the most distant objects observed?
A. 2.50
B. 3.40
C. 4.00
D. 1.80
Answer:
|
B. 3.40
|
Relevant Documents:
Document 0:::
Artificially Expanded Genetic Information System (AEGIS) is a synthetic DNA analog experiment that uses some unnatural base pairs from the laboratories of the Foundation for Applied Molecular Evolution in Gainesville, Florida. AEGIS is a NASA-funded project to try to understand how extraterrestrial life may have developed.
The system uses twelve different nucleobases in its genetic code. These include the four canonical nucleobases found in DNA (adenine, cytosine, guanine and thymine) plus eight synthetic nucleobases (S, B, Z, P, V, J, K, and X). AEGIS includes S:B, Z:P, V:J and K:X base pairs.
See also
Abiogenesis
Astrobiology
Hachimoji DNA
Hypothetical types of biochemistry
xDNA
Xeno nucleic acid
References
Document 1:::
Demosaicing (or de-mosaicing, demosaicking), also known as color reconstruction, is a digital image processing algorithm used to reconstruct a full color image from the incomplete color samples output from an image sensor overlaid with a color filter array (CFA) such as a Bayer filter. It is also known as CFA interpolation or debayering.
Most modern digital cameras acquire images using a single image sensor overlaid with a CFA, so demosaicing is part of the processing pipeline required to render these images into a viewable format.
Many modern digital cameras can save images in a raw format allowing the user to demosaic them using software, rather than using the camera's built-in firmware.
Goal
The aim of a demosaicing algorithm is to reconstruct a full color image (i.e. a full set of color triples) from the spatially undersampled color channels output from the CFA. The algorithm should have the following traits:
Avoidance of the introduction of false color artifacts, such as chromatic aliases, zippering (abrupt unnatural changes of intensity over a number of neighboring pixels) and purple fringing
Maximum preservation of the image resolution
Low computational complexity for fast processing or efficient in-camera hardware implementation
Amenability to analysis for accurate noise reduction
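As an illustrative sketch only (real camera pipelines use more sophisticated methods, along with the noise and false-color handling discussed here), bilinear demosaicing of an RGGB Bayer mosaic can be written as a few sparse convolutions; the kernel weights below implement plain neighbour averaging:

import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear_rggb(mosaic):
    """mosaic: 2-D float array whose top-left pixel is red (RGGB layout).
    Returns an (H, W, 3) RGB array from bilinear interpolation."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w))
    r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((h, w))
    b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask

    # red/blue samples sit on a sparser grid, so diagonal neighbours also contribute
    k_rb = np.array([[0.25, 0.5, 0.25],
                     [0.5,  1.0, 0.5 ],
                     [0.25, 0.5, 0.25]])
    # green samples form a checkerboard, so only the 4-neighbours are needed
    k_g = np.array([[0.0,  0.25, 0.0 ],
                    [0.25, 1.0,  0.25],
                    [0.0,  0.25, 0.0 ]])

    r = convolve(mosaic * r_mask, k_rb, mode="mirror")
    g = convolve(mosaic * g_mask, k_g,  mode="mirror")
    b = convolve(mosaic * b_mask, k_rb, mode="mirror")
    return np.stack([r, g, b], axis=-1)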
Background: color filter array
A color filter array is a mosaic of color filters in front of the image sensor. Commercially, the most commonly used CFA configuration is the Bayer filter illustrated here. This has alternating red (R) and green (G) filters for odd rows and alternating green (G) and blue (B) filters for even rows. There are twice as many green filters as red or blue ones, catering to the human eye's higher sensitivity to green light.
Since the color subsampling of a CFA by its nature results in aliasing, an optical anti-aliasing filter is typically placed in the optical path between the image sensor and the lens to reduce the false color artifacts (chromati
Document 2:::
Gilbert Wakefield (1756–1801) was an English scholar and controversialist. He moved from being a cleric and academic, into tutoring at dissenting academies, and finally became a professional writer and publicist. In a celebrated state trial, he was imprisoned for a pamphlet critical of government policy of the French Revolutionary Wars; and died shortly after his release.
Early life and background
He was born 22 February 1756 in Nottingham, the third son of the Rev. George Wakefield, then rector of St Nicholas' Church, Nottingham but afterwards at Kingston-upon-Thames, and his wife Elizabeth. He was one of five brothers, who included George, a merchant in Manchester.
His father was from Rolleston, Staffordshire, and came to Cambridge in 1739 as a sizar. He had support in his education from the Hardinge family, of Melbourne, Derbyshire, his patrons being Nicholas Hardinge and his physician brother. In his early career he was chaplain to Margaret Newton, in her own right 2nd Countess Coningsby. George Hardinge, son of Nicholas, after Gilbert's death pointed out that the living of Kingston passed to George Wakefield in 1769, under an Act of Parliament specifying presentations to chapels of the parish, only because he had used his personal influence with his uncle Charles Pratt, 1st Baron Camden the Lord Chancellor, and Jeremiah Dyson.
Education and Fellowship
Wakefield had some schooling in the Nottingham area, under Samuel Berdmore and then at Wilford under Isaac Pickthall. He then made good progress at Kingston Free School under Richard Wooddeson the elder (died 1774), father of Richard Wooddeson the jurist.
Wakefield was sent to university young, because Wooddeson was retiring from teaching. An offer came of a place at Christ Church, Oxford, from the Rev. John Jeffreys (1718–1798); but his father turned it down. He went to Jesus College, Cambridge on a scholarship founded by Robert Marsden: the Master Lynford Caryl was from Nottinghamshire, and a friend of his f
Document 3:::
Naziba was a small 'city', or 'city-state', south of Dimašqu (Damascus), in the 1350–1335 BC Amarna letters correspondence. The town of Naziba was located near Amarna letters Qanu, now named Qanawat, and biblical Kenath.
Naziba is part of a 6-letter series of letters, written by the same scribe, all entitled: "Ready for marching orders (1-6)". The letters are mostly identical, with only the city's ruler and the location changing; they are letters EA 201-EA 206 (EA for 'el Amarna').
EA 206, for the "ruler of Naziba"
Say to the king, my lord: Message of the ruler of Naziba, your servant. I fall at the feet of the king, my lord, 7 times plus 7 times. You hav[e wr]it[ten] to make preparations before the arrival of the archers, and I am herewith, along with my troops and my chariots, at the disposition of the archers. —EA 206, lines 1-17 (complete)
List of the 6-letter series
EA 201-marching orders (1)—for Artamanya of Siribašani
EA 202-marching orders (2)—for Amawaše of Bašan(?)
EA 203-marching orders (3)—for 'Abdi-Milki of Šashimi
EA 204-marching orders (4)—for the ruler of Qanu, called Qanawat
EA 205-marching orders (5)—for the ruler of Tubu (town)
EA 206-marching orders (6)—for the ruler of Naziba
Additional by scribe: EA 195—title: "Waiting for the Pharaoh's words" – by Biryawaza of Dimašqu (Damascus)
See also
Tubu (town), for the "ruler of Tubu"
Qanawat-(Qanu)
Amarna letters–localities and their rulers
References
Moran, William L. The Amarna Letters. Johns Hopkins University Press, 1987, 1992.
Document 4:::
In molecular genetics, a regulon is a group of genes that are regulated as a unit, generally controlled by the same regulatory gene that expresses a protein acting as a repressor or activator. This terminology is generally, although not exclusively, used in reference to prokaryotes, whose genomes are often organized into operons; the genes contained within a regulon are usually organized into more than one operon at disparate locations on the chromosome. Applied to eukaryotes, the term refers to any group of non-contiguous genes controlled by the same regulatory gene.
A modulon is a set of regulons or operons that are collectively regulated in response to changes in overall conditions or stresses, but may be under the control of different or overlapping regulatory molecules. The term stimulon is sometimes used to refer to the set of genes whose expression responds to specific environmental stimuli.
Examples
Commonly studied regulons in bacteria are those involved in response to stress such as heat shock. The heat shock response in E. coli is regulated by the sigma factor σ32 (RpoH), whose regulon has been characterized as containing at least 89 open reading frames.
Regulons involving virulence factors in pathogenic bacteria are of particular research interest; an often-studied example is the phosphate regulon in E. coli, which couples phosphate homeostasis to pathogenicity through a two-component system. Regulons can sometimes be pathogenicity islands.
The Ada regulon in E. coli is a well-characterized example of a group of genes involved in the adaptive response form of DNA repair.
Quorum sensing behavior in bacteria is a commonly cited example of a modulon or stimulon, though some sources describe this type of intercellular auto-induction as a separate form of regulation.
Evolution
Changes in the regulation of gene networks are a common mechanism for prokaryotic evolution. An example of the effects of different regulatory environments for homologous protei
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary purpose of the Artificially Expanded Genetic Information System (AEGIS) as described in the text?
A. To create a new type of DNA for human use
B. To understand the development of extraterrestrial life
C. To replace natural DNA in all organisms
D. To study the effects of synthetic nucleobases on human health
Answer:
|
B. To understand the development of extraterrestrial life
|
Relevant Documents:
Document 0:::
A twig is a thin, often short, branch of a tree or bush.
The buds on the twig are an important diagnostic characteristic, as are the abscission scars where the leaves have fallen away. The color, texture, and patterning of the twig bark are also important, in addition to the thickness and nature of any pith of the twig.
There are two types of twigs: vegetative twigs and fruiting spurs. Fruiting spurs are specialized twigs that generally branch off the sides of branches and are stubby and slow-growing, with many annular ring markings from seasons past. The twig's age and rate of growth can be determined by counting the winter terminal bud scale scars, or annular ring marking, across the diameter of the twig.
Uses
Twigs can be useful in starting a fire. They can be used as kindling wood, bridging the gap between highly flammable tinder (dry grass and leaves) and firewood, because their small diameter lets them dry out and ignite more readily than larger pieces of wood.
Twigs are a feature of tool use by non-humans. For example, chimpanzees have been observed using twigs to go "fishing" for termites, and elephants have been reported using twigs to scratch parts of their ears and mouths which could not be reached by rubbing against a tree.
References
Document 1:::
The precision rectifier, sometimes called a super diode, is an operational amplifier (opamp) circuit configuration that behaves like an ideal diode and rectifier.
The op-amp-based precision rectifier should not be confused with the power MOSFET-based active rectification ideal diode.
Basic circuit
The basic circuit implementing such a feature is shown on the right, where the load can be any impedance. When the input voltage is negative, the opamp puts its most negative voltage on the diode's anode, so the diode is reverse biased and works like an open circuit. Since almost no current will flow through the diode, the output voltage will be pulled down to ground through the load. When the input becomes positive, it is amplified by the opamp, which switches the diode on. Because of the negative feedback, just enough current will flow through the load so that the output voltage equals the input voltage.
The effective threshold of the circuit is very close to zero, but not exactly zero: it equals the forward threshold of the diode divided by the open-loop gain of the opamp.
This basic configuration has a problem, so it is not commonly used. When the input becomes (even slightly) negative, the opamp runs open-loop, as there is no feedback signal through the diode. For a typical opamp with high open-loop gain, the output saturates. If the input then becomes positive again, the op-amp has to get out of the saturated state before positive amplification can take place again. This change generates some ringing and takes some time, greatly reducing the frequency response of the circuit.
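As a back-of-the-envelope illustration of the threshold argument above (not from the original text; the diode drop and open-loop gain values are assumed), the following Python sketch models the basic super diode:

```python
import numpy as np

V_DIODE = 0.6   # assumed diode forward drop, volts
A_OL = 1e5      # assumed op-amp open-loop gain

def super_diode(v_in):
    """Idealized basic precision rectifier: conducts once v_in exceeds V_DIODE / A_OL."""
    threshold = V_DIODE / A_OL   # ~6 microvolts: very close to zero, but not zero
    v_in = np.asarray(v_in, dtype=float)
    return np.where(v_in > threshold, v_in, 0.0)

print(super_diode([-0.5, 0.0, 1e-5, 0.25]))
```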
Improved circuit
An alternative version is given on the right. In this case, when the input is greater than zero, D1 is off and D2 is on, so the output is zero because the other end of the output resistor is connected to the virtual ground and no current flows through it. When the input is less than zero, D1 is on and D2 is off, so the output follows the input with an amplification set by the ratio of the two resistors. Its input–output relationship is therefore zero for positive inputs and a scaled, inverted copy of the input for negative inputs.
This circuit has the benefit that the op-amp ne
Document 2:::
Luidia, Inc. produces portable interactive whiteboard technology for classrooms and conference rooms. Its eBeam hardware and software products work with computers and digital projectors to use existing whiteboard or writing surface as interactive whiteboards.
The company’s eBeam products allow text, images and video to be projected onto display surfaces, where an interactive stylus or marker can be used to add notes, access menus, manipulate images and create diagrams and drawings.
Technology
Luidia’s eBeam technology uses infrared and ultrasound receivers to track the location of a transmitter-equipped pen, called a stylus, or a standard dry-erase marker in a transmitter-equipped sleeve.
Company history
Luidia’s eBeam technology was originally developed and patented by engineers at Electronics for Imaging Inc. (Nasdaq: EFII), a Foster City, California developer of digital print server technology. Luidia was spun off from EFI in July 2003 with venture funding from Globespan Capital Partners and Silicom Ventures.
In 2007, Luidia was selected by the Mexican government to install eBeam-enabled interactive boards in public seventh-grade classrooms in Mexico as part of the government’s Enciclomedia program.
In 2007 and 2008, Luidia was recognized by Deloitte LLP in the accounting firm's Silicon Valley "Technology Fast 50" program, which ranks fast-growing companies in the San Francisco Bay Area.
In January 2021, the company's main sites and web documentation began returning 404 errors, although its shop remained online. In 2022, Luidia sent out notice that it would be shutting down in July of that year.
References
Document 3:::
In Euclidean geometry, linear separability is a property of two sets of points. This is most easily visualized in two dimensions (the Euclidean plane) by thinking of one set of points as being colored blue and the other set of points as being colored red. These two sets are linearly separable if there exists at least one line in the plane with all of the blue points on one side of the line and all the red points on the other side. This idea immediately generalizes to higher-dimensional Euclidean spaces if the line is replaced by a hyperplane.
The problem of determining if a pair of sets is linearly separable and finding a separating hyperplane if they are, arises in several areas. In statistics and machine learning, classifying certain types of data is a problem for which good algorithms exist that are based on this concept.
Mathematical definition
Let $X_0$ and $X_1$ be two sets of points in an n-dimensional Euclidean space. Then $X_0$ and $X_1$ are linearly separable if there exist $n + 1$ real numbers $w_1, w_2, \ldots, w_n, k$ such that every point $x \in X_0$ satisfies $\sum_{i=1}^{n} w_i x_i > k$ and every point $x \in X_1$ satisfies $\sum_{i=1}^{n} w_i x_i < k$, where $x_i$ is the $i$-th component of $x$.
Equivalently, two sets are linearly separable precisely when their respective convex hulls are disjoint (colloquially, do not overlap).
In the simple 2D case, this can be pictured by projecting the points onto a line under a linear map: there is then a value $k$ on that line such that one set of points falls above $k$ and the other set falls below it.
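As an illustration (not part of the article; the data points below are hypothetical), a simple perceptron can find such a separating hyperplane whenever one exists:

```python
import numpy as np

def perceptron(points, labels, epochs=100):
    """Return (w, k) with w.x > k for label +1 and w.x < k for label -1, if found."""
    w = np.zeros(points.shape[1])
    k = 0.0
    for _ in range(epochs):
        updated = False
        for x, y in zip(points, labels):
            if y * (np.dot(w, x) - k) <= 0:   # misclassified or on the boundary
                w += y * x
                k -= y
                updated = True
        if not updated:
            return w, k          # converged: the two classes are linearly separable
    return None                   # no separator found within the epoch budget

X = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 2.5], [3.0, 3.0]])
y = np.array([-1, -1, 1, 1])
print(perceptron(X, y))
```

By the perceptron convergence theorem, the loop terminates for any linearly separable data set; for non-separable data it simply exhausts the epoch budget.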
Examples
Three non-collinear points in two classes ('+' and '-') are always linearly separable in two dimensions. This is illustrated by the three examples in the following figure (the all '+' case is not shown, but is similar to the all '-' case):
However, not all sets of four points, no three collinear, are linearly separable in two dimensions. The following example would need two straight lines and thus is not linearly separable:
Notice that three points which are collinear and of the form "+ ⋅⋅⋅
Document 4:::
Serang virus (SERV) is a single-stranded, negative-sense, enveloped, novel RNA orthohantavirus.
Natural reservoir
SERV was first isolated from the Asian house rat (Rattus tanezumi) in Serang, Indonesia in 2000.
Virology
Phylogenetic analysis based on partial L, M and S segment nucleotide sequences show SERV is novel and distinct among the hantaviruses. It is most closely related to Thailand virus (THAIV) which is carried by the great bandicoot rat (Bandicota indica). Nucleotide sequence comparison suggests that SERV is the result of cross-species transmission from bandicoots to Asian rats.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What characteristic of twigs is important for diagnosing their age and growth rate?
A. Color of the bark
B. Thickness of the pith
C. Number of annular ring markings
D. Length of the twig
Answer:
|
C. Number of annular ring markings
|
Relevant Documents:
Document 0:::
The planar process is a manufacturing process used in the semiconductor industry to build individual components of a transistor, and in turn, connect those transistors together. It is the primary process by which silicon integrated circuit chips are built, and it is the most commonly used method of producing junctions during the manufacture of semiconductor devices. The process utilizes the surface passivation and thermal oxidation methods.
The planar process was developed at Fairchild Semiconductor in 1959, and the process proved to be one of the most important single advances in semiconductor technology.
Overview
The key concept is to view a circuit in its two-dimensional projection (a plane), thus allowing the use of photographic processing concepts such as film negatives to mask the projection of light exposed chemicals. This allows the use of a series of exposures on a substrate (silicon) to create silicon oxide (insulators) or doped regions (conductors). Together with the use of metallization, and the concepts of p–n junction isolation and surface passivation, it is possible to create circuits on a single silicon crystal slice (a wafer) from a monocrystalline silicon boule.
The process involves the basic procedures of silicon dioxide (SiO2) oxidation, SiO2 etching and heat diffusion. The final steps involves oxidizing the entire wafer with an SiO2 layer, etching contact vias to the transistors, and depositing a covering metal layer over the oxide, thus connecting the transistors without manually wiring them together.
History
Development
In 1955 at Bell Labs, Carl Frosch and Lincoln Derick accidentally grew a layer of silicon dioxide over a silicon wafer, for which they observed surface passivation properties. In 1957, Frosch and Derick were able to manufacture the first silicon dioxide field effect transistors, the first transistors in which drain and source were adjacent at the surface, showing that silicon dioxide surface passivation protected and insulated
Document 1:::
The Living Museum of Bujumbura () is a zoo and museum in Burundi. The museum is located in Bujumbura, the country's largest city and former capital, and is one of the country's two public museums. It is dedicated to the wildlife and art of Burundi.
The museum was founded in 1977 and occupies a park on the rue du 13 Octobre in downtown Bujumbura. In December 2016, the zoo's collection included six crocodiles, one monkey, one leopard, two chimpanzees, three guinea fowls, a tortoise, an antelope, and a number of snakes and fish. A number of Burundian craftsmen also have workshops on the museum's premises. Several different types of trees stand in the park, alongside a reconstruction of a traditional Burundian house (rugo).
The number of visitors to the museum fell sharply in the aftermath of the 2015 Burundian unrest, following a wider decline in the number of tourists to the country.
References
External links
Le Musée Vivant (Bujumbura) at Petit Futé
Document 2:::
Experimental factor ontology, also known as EFO, is an open-access ontology of experimental variables particularly those used in molecular biology. The ontology covers variables which include aspects of disease, anatomy, cell type, cell lines, chemical compounds and assay information. EFO is developed and maintained at the EMBL-EBI as a cross-cutting resource for the purposes of curation, querying and data integration in resources such as Ensembl, ChEMBL and Expression Atlas.
Scope and access
The original aim of EFO was to describe experimental variables in the EBI's Expression Atlas resource. This consisted primarily of disease, anatomical regions and cell types. By December 2013 the scope had grown to include several other EMBL-EBI resources and several external projects including CellFinder, cell lines from the ENCODE project and phenotype to SNP information in the NHGRI's Catalog of Published Genome-Wide Association Studies.
EFO makes use of existing biomedical ontologies from the Open Biomedical Ontologies collection in order to improve interoperability with other resources which may also use these same ontologies, such as ChEBI and the Ontology for Biomedical Investigations.
All data in the database is non-proprietary or is derived from a non-proprietary source. It is thus freely accessible and available to anyone. In addition, each data item is fully traceable and explicitly referenced to the original source.
The EFO data is available through a public web interface, BioPortal Web Service hosted at the National Centre for Biomedical Ontology (NCBO) and downloads.
See also
European Molecular Biology Laboratory
Gene ontology
Open Biomedical Ontologies
References
External links
http://www.ebi.ac.uk/efo/
Document 3:::
HD 23127 b is a jovian extrasolar planet orbiting the star HD 23127 at a distance of 2.29 AU, taking 3.32 years to complete one orbit. The orbit is very eccentric, a so-called "eccentric Jupiter". At periastron, the distance is 1.28 AU, and at apastron, the distance is 3.30 AU. The mass is at least 1.37 times that of Jupiter. Only the minimum mass is known because the inclination is not known.
References
Document 4:::
Glaze or glaze ice, also called glazed frost or verglas, is a smooth, transparent and homogeneous ice coating occurring when freezing rain or drizzle hits a surface. It is similar in appearance to clear ice, which forms from supercooled water droplets. It is a relatively common occurrence in temperate climates in the winter when precipitation forms in warm air aloft and falls into below-freezing temperature at the surface.
Effects
When the freezing rain or drizzle is light and not prolonged, the ice formed is thin. It usually causes only minor damage, relieving trees of their dead branches, etc. When large quantities accumulate, however, it is one of the most dangerous types of winter hazard. When the ice layer exceeds , tree limbs with branches heavily coated in ice can break off under the enormous weight and fall onto power lines. Windy conditions, when present, will exacerbate the damage. Power lines coated with ice become extremely heavy, causing support poles, insulators, and lines to break. The ice that forms on roadways makes vehicle travel dangerous. Unlike snow, wet ice provides almost no traction, and vehicles will slide even on gentle slopes. Because it conforms to the shape of the ground or object (such as a tree branch or car) it forms on, it is often difficult to notice until it is too late to react.
Glaze from freezing rain on a large scale causes effects on plants that can be severe, as they cannot support the weight of the ice. Trees may snap as they are dormant and fragile during winter weather. Pine trees are also victims of ice storms as their needles will catch the ice, but not be able to support the weight. Orchardists spray water onto budding fruit to simulate glaze as the ice insulates the buds from even lower temperatures. This saves the crop from severe frost damage.
Glaze from freezing rain is also an extreme hazard to aircraft, as it causes very rapid structural icing. Most helicopters and small airplanes lack the necessary deicin
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the expected completion year for the first stage of the Dasu Dam project?
A. 2024
B. 2028
C. 2029
D. 2030
Answer:
|
C. 2029
|
Relevant Documents:
Document 0:::
In mathematics, a continuum structure function (CSF) is defined by Laurence Baxter as a nondecreasing mapping from the unit hypercube to the unit interval. It is used by Baxter to help in the mathematical modelling of the level of performance of a system in terms of the performance levels of its components.
References
Further reading
Document 1:::
V553 Centauri is a variable star in the southern constellation of Centaurus, abbreviated V553 Cen. It ranges in brightness from an apparent visual magnitude of 8.22 down to 8.80 with a period of 2.06 days. At that magnitude, it is too dim to be visible to the naked eye. Based on parallax measurements, it is located at a distance of approximately 1,890 light years from the Sun.
Observations
The variability of this star was announced in 1936 by C. Hoffmeister. In 1957, he determined it to be a Delta Cepheid variable with a magnitude range of and a periodicity of . The observers M. W. Feast and G. H. Herbig noted a peculiar spectrum with strong absorption lines of the molecules CH and CN, while neutral iron lines are unusually weak. They found a stellar classification of G5p I–III.
In 1972, T. Lloyd-Evans and associates found the star's prominent bands of C2, CH, and CN varied with the Cepheid phase, being strongest at minimum. They suggested a large overabundance of carbon in the star's atmosphere. Chemical analysis of the atmosphere in 1979 showed a metallicity close to solar, with an enhancement of carbon and nitrogen. It was proposed that V553 Cen is an evolved RR Lyrae variable and is now positioned above the horizontal branch on the HR diagram. V553 Cen is classified as a BL Herculis variable, being a low–mass type II Cepheid with a period between . As with other variables of this type, it displays a secondary bump on its light curve. It is a member of a small group of carbon Cepheids, and is one of the brightest stars of that type.
V553 Cen does not appear to have a companion. From the luminosity and shape of the light curve, stellar models from 1981 suggest a mass equal to 49% of the Sun's with 9.9 times the radius of the Sun. Further analysis of the spectrum showed that oxygen is not enhanced, but sodium may be moderately enhanced. There is no evidence of s-process enhancement of elements. Instead, the abundance peculiarities are the result of nuclear reaction sequences followed by dredge-up. In particular, these are the product of triple-α, CN, ON, and perhaps some Ne–Na reactions.
See also
Carbon star
RT Trianguli Australis
Further reading
Document 2:::
MicroStation is a CAD software platform for two- and three-dimensional design and drafting, developed and sold by Bentley Systems and used in the architectural and engineering industries. It generates 2D/3D vector graphics objects and elements and includes building information modeling (BIM) features. The current version is MicroStation CONNECT Edition.
History
MicroStation was initially developed by three individual developers and sold and supported by Intergraph in the 1980s. The latest versions of the software are released solely for Microsoft Windows operating systems, but historically MicroStation was available for Macintosh platforms and a number of Unix-like operating systems.
From its inception MicroStation was designed as an IGDS (Interactive Graphics Design System) file editor for the PC. Its initial development was a result of the developers' experience developing PseudoStation, released in 1984, a program designed to replace the use of proprietary Intergraph graphic workstations for editing DGN files by substituting the much less expensive Tektronix-compatible graphics terminals. PseudoStation, as well as Intergraph's IGDS program, ran on a modified version of Digital Equipment Corporation's VAX super-mini computer.
In 1985, MicroStation 1.0 was released as a DGN file read-only and plot program designed to run exclusively on the IBM PC-AT personal computer.
In 1987, MicroStation 2.0 was released, and was the first version of MicroStation to read and write DGN files.
Almost two years later, MicroStation 3.0 was released, which took advantage of the increasing processing power of the PC, particularly with respect to dynamics.
Intergraph MicroStation 4.0 was released in late 1990 and added many features: reference file clipping and masking, a DWG translator, fence modes, the ability to name levels, as well as GUI enhancements. The 1992 release of version 4 introduced the ability to write applications using the MicroStation Development Language (MDL).
In 1993, Mic
Document 3:::
References
Zoltewicz, J. A. & Deady, L. W. Quaternization of heteroaromatic compounds. Quantitative aspects. Adv. Heterocycl. Chem. 22, 71-121 (1978).
Document 4:::
A vacuum-tube computer, now termed a first-generation computer, is a computer that uses vacuum tubes for logic circuitry. While the history of mechanical aids to computation goes back centuries, if not millennia, the history of vacuum tube computers is confined to the middle of the 20th century. Lee De Forest invented the triode in 1906. The first example of using vacuum tubes for computation, the Atanasoff–Berry computer, was demonstrated in 1939. Vacuum-tube computers were initially one-of-a-kind designs, but commercial models were introduced in the 1950s and sold in volumes ranging from single digits to thousands of units. By the early 1960s vacuum tube computers were obsolete, superseded by second-generation transistorized computers.
Much of what we now consider part of digital computing evolved during the vacuum tube era. Initially, vacuum tube computers performed the same operations as earlier mechanical computers, only at much higher speeds. Gears and mechanical relays operate in milliseconds, whereas vacuum tubes can switch in microseconds. The first departure from what was possible prior to vacuum tubes was the incorporation of large memories that could store thousands of bits of data and randomly access them at high speeds. That, in turn, allowed the storage of machine instructions in the same memory as data—the stored program concept, a breakthrough which today is a hallmark of digital computers.
Other innovations included the use of magnetic tape to store large volumes of data in compact form (UNIVAC I) and the introduction of random access secondary storage (IBM RAMAC 305), the direct ancestor of all the hard disk drives we use today. Even computer graphics began during the vacuum tube era with the IBM 740 CRT Data Recorder and the Whirlwind light pen. Programming languages originated in the vacuum tube era, including some still used today such as Fortran & Lisp (IBM 704), Algol (Z22) and COBOL. Operating systems, such as the GM-NAA I/O, also were b
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the apparent visual magnitude range of V553 Centauri as mentioned in the text?
A. 8.22 to 8.80
B. 8.00 to 8.50
C. 7.50 to 8.00
D. 8.10 to 8.60
Answer:
|
A. 8.22 to 8.80
|
Relevant Documents:
Document 0:::
Filariasis is a filarial infection caused by parasitic nematodes (roundworms) spread by different vectors. They are included in the list of neglected tropical diseases.
The most common type is lymphatic filariasis caused by three species of Filaria that are spread by mosquitoes. Other types of filariasis are onchocerciasis also known as river blindness caused by Onchocerca volvulus; Loa loa filariasis (Loiasis) caused by Loa loa; Mansonelliasis caused by three species of Mansonella, and Dirofilariasis caused by two types of Dirofilaria.
Epidemiology
In the year 2000, 199 million cases of lymphatic filariasis were estimated, with 3.1 million cases in the Americas and around 107 million in South East Asia; up to 52% of global cases came from Bangladesh, India, Indonesia, and Myanmar combined. While the African nations that comprised around 21% of the cases showed a decreasing trend over the 19-year period from 2000 to 2018, studies still showed the global burden of infection to be concentrated in Southeast Asia.
Cause
Eight known filarial worms have humans as a definitive host. These are divided into three groups according to the part of the body they affect:
Lymphatic filariasis is caused by the worms Wuchereria bancrofti, Brugia malayi, and Brugia timori. These worms occupy the lymphatic system, including the lymph nodes; in chronic cases, these worms can lead to the syndrome of elephantiasis.
Subcutaneous filariasis, which includes loiasis, is caused by Loa loa (the eye worm), Mansonella streptocerca, and Onchocerca volvulus. These worms occupy the layer just under the skin. O. volvulus causes river blindness.
Serous cavity filariasis is caused by the worms Mansonella perstans and Mansonella ozzardi, which occupy the serous cavity of the abdomen. Dirofilaria immitis, the dog heartworm, rarely infects humans.
These worms are transmitted by infected mosquitoes of the genera Aedes, Culex, Anopheles and Mansonia. Recent evidence suggests that climat
Document 1:::
The Legendre pseudospectral method for optimal control problems is based on Legendre polynomials. It is part of the larger theory of pseudospectral optimal control, a term coined by Ross. A basic version of the Legendre pseudospectral was originally proposed by Elnagar and his coworkers in 1995. Since then, Ross, Fahroo and their coworkers have extended, generalized and applied the method for a large range of problems. An application that has received wide publicity is the use of their method for generating real time trajectories for the International Space Station.
Fundamentals
There are three basic types of Legendre pseudospectral methods:
One based on Gauss-Lobatto points
First proposed by Elnagar et al and subsequently extended by Fahroo and Ross to incorporate the covector mapping theorem.
Forms the basis for solving general nonlinear finite-horizon optimal control problems.
Incorporated in several software products
DIDO, OTIS, PSOPT
One based on Gauss-Radau points
First proposed by Fahroo and Ross and subsequently extended (by Fahroo and Ross) to incorporate a covector mapping theorem.
Forms the basis for solving general nonlinear infinite-horizon optimal control problems.
Forms the basis for solving general nonlinear finite-horizon problems with one free endpoint.
One based on Gauss points
First proposed by Reddien
Forms the basis for solving finite-horizon problems with free endpoints
Incorporated in several software products
GPOPS, PROPT
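All three variants collocate the dynamics at Legendre-type quadrature points. As a small illustration (a sketch under stated assumptions, not taken from any of the software packages named above), Legendre–Gauss–Lobatto points can be computed in Python as the endpoints ±1 plus the roots of the derivative of the Legendre polynomial:

```python
import numpy as np
from numpy.polynomial import legendre as L

def lgl_points(n):
    """Legendre-Gauss-Lobatto points of order n: +-1 plus the roots of P_n'(x)."""
    c = np.zeros(n + 1)
    c[-1] = 1.0                      # coefficients of P_n in the Legendre basis
    dP = L.legder(c)                 # derivative P_n'
    interior = L.legroots(dP)        # n - 1 interior collocation points
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

print(lgl_points(4))   # 5 collocation points on [-1, 1]
```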
Software
The first software to implement the Legendre pseudospectral method was DIDO in 2001. Subsequently, the method was incorporated in the NASA code OTIS. Years later, many other software products emerged at an increasing pace, such as PSOPT, PROPT and GPOPS.
Flight implementations
The Legendre pseudospectral method (based on Gauss-Lobatto points) has been implemented in flight by NASA on several spacecraft through the use of the software, DIDO. The first flight implementation was on November 5, 200
Document 2:::
In immunology, epitope mapping is the process of experimentally identifying the binding site, or epitope, of an antibody on its target antigen (usually, on a protein). Identification and characterization of antibody binding sites aid in the discovery and development of new therapeutics, vaccines, and diagnostics. Epitope characterization can also help elucidate the binding mechanism of an antibody and can strengthen intellectual property (patent) protection. Experimental epitope mapping data can be incorporated into robust algorithms to facilitate in silico prediction of B-cell epitopes based on sequence and/or structural data.
Epitopes are generally divided into two classes: linear and conformational/discontinuous. Linear epitopes are formed by a continuous sequence of amino acids in a protein. Conformational epitopes are formed by amino acids that are nearby in the folded 3D structure but distant in the protein sequence. Note that conformational epitopes can include some linear segments. B-cell epitope mapping studies suggest that most interactions between antigens and antibodies, particularly autoantibodies and protective antibodies (e.g., in vaccines), rely on binding to discontinuous epitopes.
Importance for antibody characterization
By providing information on mechanism of action, epitope mapping is a critical component in therapeutic monoclonal antibody (mAb) development. Epitope mapping can reveal how a mAb exerts its functional effects - for instance, by blocking the binding of a ligand or by trapping a protein in a non-functional state. Many therapeutic mAbs target conformational epitopes that are only present when the protein is in its native (properly folded) state, which can make epitope mapping challenging. Epitope mapping has been crucial to the development of vaccines against prevalent or deadly viral pathogens, such as chikungunya, dengue, Ebola, and Zika viruses, by determining the antigenic elements (epitopes) that confer long-lasting
Document 3:::
Algaenan is the resistant biopolymer in the cell walls of unrelated groups of green algae, and facilitates their preservation in the fossil record.
References
Document 4:::
The National Institute of Public Health – National Institute of Hygiene or NIPH–NIH, ( or NIZP–PZH) is a research institute in Poland focussed on public health. It was founded on 21 November 1918.
References
Other references
External links
homepage
Profile of the PZH from Eurohealthnet.eu (accessed July 18, 2012)
Przegląd Epidemiologiczny (journal)
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary function of the Intel 82288 bus controller?
A. To act as a memory controller
B. To control the bus for Intel 80286 processors
C. To serve as a graphics processor
D. To replace the CPU in older systems
Answer:
|
B. To control the bus for Intel 80286 processors
|
Relevant Documents:
Document 0:::
The concept of the stochastic discount factor (SDF) is used in financial economics and mathematical finance. The name derives from the price of an asset being computable by "discounting" the future cash flow by the stochastic factor $\tilde{m}$, and then taking the expectation. This definition is of fundamental importance in asset pricing.
If there are n assets with initial prices $p_1, \ldots, p_n$ at the beginning of a period and payoffs $\tilde{x}_1, \ldots, \tilde{x}_n$ at the end of the period (all $\tilde{x}_i$ are random (stochastic) variables), then the SDF is any random variable $\tilde{m}$ satisfying $E(\tilde{m}\,\tilde{x}_i) = p_i$ for $i = 1, \ldots, n$.
The stochastic discount factor is sometimes referred to as the pricing kernel as, if the expectation $E(\tilde{m}\,\tilde{x}_i)$ is written as an integral, then $\tilde{m}$ can be interpreted as the kernel function in an integral transform. Other names sometimes used for the SDF are the "marginal rate of substitution" (the ratio of utility of states, when utility is separable and additive, though discounted by the risk-neutral rate), a "change of measure", "state-price deflator" or a "state-price density".
Properties
The existence of an SDF is equivalent to the law of one price; similarly, the existence of a strictly positive SDF is equivalent to the absence of arbitrage opportunities (see Fundamental theorem of asset pricing). This being the case, then if $p_i$ is positive, by using $\tilde{R}_i = \tilde{x}_i / p_i$ to denote the return, we can rewrite the definition as
$$E(\tilde{m}\,\tilde{R}_i) = 1 \quad \text{for all } i,$$
and this implies
$$E\big(\tilde{m}\,(\tilde{R}_i - \tilde{R}_j)\big) = 0 \quad \text{for all } i, j.$$
Also, if there is a portfolio made up of the assets with return $\tilde{R}$, then the SDF satisfies
$$E(\tilde{m}\,\tilde{R}) = 1.$$
By a simple standard identity on covariances, we have
$$1 = \operatorname{cov}(\tilde{m}, \tilde{R}) + E(\tilde{m})\,E(\tilde{R}).$$
Suppose there is a risk-free asset with return $R_f$. Then $\tilde{R} = R_f$ implies $E(\tilde{m}) = 1/R_f$. Substituting this into the last expression and rearranging gives the following formula for the risk premium of any asset or portfolio with return $\tilde{R}$:
$$E(\tilde{R}) - R_f = -R_f \operatorname{cov}(\tilde{m}, \tilde{R}).$$
This shows that risk premiums are determined by covariances with any SDF.
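As a small numerical illustration of the pricing definition (the two-state economy, payoffs, and candidate SDF below are invented for this sketch, not taken from the article):

```python
import numpy as np

# Hypothetical two-state economy: rows are states, columns are assets.
probs = np.array([0.5, 0.5])           # physical state probabilities
payoffs = np.array([[1.0, 2.0],         # state 1 payoffs of assets A and B
                    [1.0, 0.5]])        # state 2 payoffs
m = np.array([0.9, 1.0])               # candidate stochastic discount factor per state

# Price of each asset: p_i = E[m * x_i] = sum_s probs[s] * m[s] * payoffs[s, i]
prices = probs @ (m[:, None] * payoffs)
print("prices:", prices)

# Gross returns R_i = x_i / p_i and the restated condition E[m * R_i] = 1
returns = payoffs / prices
print("E[m * R]:", probs @ (m[:, None] * returns))   # should print [1. 1.]
```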
See also
Hansen–Jagannathan bound
Document 1:::
The Desagüe was the hydraulic engineering project to drain Mexico's central lake system in order to protect the capital from persistent and destructive flooding. Begun in the sixteenth century and completed in the late nineteenth century, it has been deemed "the greatest engineering project of colonial Spanish America." Historian Charles Gibson goes further and considers it "one of the largest engineering enterprises of pre-industrial society anywhere in the world." There had been periodic flooding of the prehispanic Aztec capital of Tenochtitlan, the site which became the Spanish capital of Mexico City. Flooding continued to be a threat to the viceregal capital, so at the start of the seventeenth century, the crown ordered a solution to the problem that entailed the employment of massive numbers of indigenous laborers who were compelled to work on the drainage project. The crown also devoted significant funding. A tunnel and later a surface drainage system diverted flood waters outside the closed basin of Mexico. Not until the late nineteenth century under Porfirio Díaz (1876-1911) was the project completed by British entrepreneur and engineer, Weetman Pearson, using machinery imported from Great Britain and other technology at a cost of 16 million pesos, a vast sum at the time. The ecological impact was long lasting, with desiccation permanently changing the ecology of the Basin of Mexico.
Early history
In the period before the Spanish conquest, the Aztec capital of Tenochtitlan had been subject to flooding during prolonged rains. There was no natural drainage of the lake system outside the closed basin. In the late 1440s, the ruler of Tenochtitlan, Moctezuma I, and the ruler of the allied kingdom of Texcoco, Nezahualcoyotl, ordered a dike (albarradón) to be constructed, which was expanded under the rule of Ahuitzotl. The dike was in place when the Spanish conquered Tenochtitlan in 1521, but major flooding in 1555-56 prompted the construction of a second dik
Document 2:::
In mathematics, the exponential function can be characterized in many ways.
This article presents some common characterizations, discusses why each makes sense, and proves that they are all equivalent.
The exponential function occurs naturally in many branches of mathematics. Walter Rudin called it "the most important function in mathematics".
It is therefore useful to have multiple ways to define (or characterize) it.
Each of the characterizations below may be more or less useful depending on context.
The "product limit" characterization of the exponential function was discovered by Leonhard Euler.
Characterizations
The six most common definitions of the exponential function for real values are as follows.
Product limit. Define $e^x$ by the limit: $e^x = \lim_{n\to\infty}\left(1 + \frac{x}{n}\right)^{n}$.
Power series. Define $e^x$ as the value of the infinite series $e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}$. (Here $n!$ denotes the factorial of $n$. One proof that $e$ is irrational uses a special case of this formula.)
Inverse of logarithm integral. Define $e^x$ to be the unique number $y > 0$ such that $\int_{1}^{y} \frac{dt}{t} = x$. That is, $e^x$ is the inverse of the natural logarithm function $\ln$, which is defined by this integral.
Differential equation. Define $y(x) = e^x$ to be the unique solution to the differential equation $y' = y$ with initial value $y(0) = 1$, where $y'$ denotes the derivative of $y$.
Functional equation. The exponential function is the unique function $f$ with the multiplicative property $f(x + y) = f(x)\,f(y)$ for all $x$ and $y$, subject to a suitable normalization. For the uniqueness, one must impose some regularity condition, since other functions satisfying $f(x + y) = f(x)\,f(y)$ can be constructed using a basis for the real numbers over the rationals, as described by Hewitt and Stromberg.
Elementary definition by powers. Define the exponential function with base $b > 0$ to be the continuous function $b^x$ whose value on integers is given by repeated multiplication or division of $b$, and whose value on rational numbers $p/q$ is given by $b^{p/q} = \sqrt[q]{b^p}$. Then define $e^x$ to be the exponential function whose base is the unique positive real number $e$ satisfying
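As a quick numerical check (not part of the article) that two of these characterizations agree, the product limit and the power series can be compared against the library exponential; the step counts below are arbitrary choices:

```python
import math

def exp_product_limit(x, n=1_000_000):
    """(1 + x/n)**n for large n approximates e**x."""
    return (1 + x / n) ** n

def exp_power_series(x, terms=30):
    """Partial sum of sum_{k>=0} x**k / k!."""
    return sum(x ** k / math.factorial(k) for k in range(terms))

for x in (0.0, 1.0, 2.5):
    print(x, exp_product_limit(x), exp_power_series(x), math.exp(x))
```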
Document 3:::
Materials informatics is a field of study that applies the principles of informatics and data science to materials science and engineering to improve the understanding, use, selection, development, and discovery of materials. The term "materials informatics" is frequently used interchangeably with "data science", "machine learning", and "artificial intelligence" by the community. This is an emerging field, with a goal to achieve high-speed and robust acquisition, management, analysis, and dissemination of diverse materials data with the goal of greatly reducing the time and risk required to develop, produce, and deploy new materials, which generally takes longer than 20 years.
This field of endeavor is not limited to some traditional understandings of the relationship between materials and information. Some more narrow interpretations include combinatorial chemistry, process modeling, materials databases, materials data management, and product life cycle management. Materials informatics is at the convergence of these concepts, but also transcends them and has the potential to achieve greater insights and deeper understanding by applying lessons learned from data gathered on one type of material to others. By gathering appropriate meta data, the value of each individual data point can be greatly expanded.
Databases
Databases are essential for any informatics research and applications. In materials informatics many databases exist containing both empirical data obtained experimentally, and theoretical data obtained computationally. Big data that can be used for machine learning is particularly difficult to obtain for experimental data due to the lack of a standard for reporting data and the variability in the experimental environment. This lack of big data has led to a growing effort in developing machine learning techniques that can utilize extremely small data sets. On the other hand, large uniform databases of theoretical density functional theory (DFT) calculations ex
Document 4:::
In telecommunications, the term multiplex baseband has the following meanings:
In frequency-division multiplexing, the frequency band occupied by the aggregate of the signals in the line interconnecting the multiplexing and radio or line equipment.
In frequency division multiplexed carrier systems, at the input to any stage of frequency translation, the frequency band occupied.
For example, the output of a group multiplexer consists of a band of frequencies from 60 kHz to 108 kHz. This is the group-level baseband that results from combining 12 voice-frequency input channels, having a bandwidth of 4 kHz each, including guard bands. In turn, 5 groups are multiplexed into a super group having a baseband of 312 kHz to 552 kHz. This baseband, however, does not represent a group-level baseband. Ten super groups are in turn multiplexed into one master group, the output of which is a baseband that may be used to modulate a microwave-frequency carrier.
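The group and supergroup figures in this example can be checked with simple arithmetic; the following sketch (not from the source) recomputes the band edges:

```python
# Frequency-division multiplex hierarchy arithmetic from the example above.
CHANNEL_BW_KHZ = 4            # one voice channel, including guard bands
GROUP_START_KHZ = 60          # group baseband starts at 60 kHz
SUPERGROUP_START_KHZ = 312    # supergroup baseband starts at 312 kHz

group_bw = 12 * CHANNEL_BW_KHZ      # 12 channels -> 48 kHz
print("group:", GROUP_START_KHZ, "to", GROUP_START_KHZ + group_bw, "kHz")            # 60 to 108 kHz

supergroup_bw = 5 * group_bw        # 5 groups -> 240 kHz
print("supergroup:", SUPERGROUP_START_KHZ, "to", SUPERGROUP_START_KHZ + supergroup_bw, "kHz")  # 312 to 552 kHz
```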
References
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What notable event occurred in 1883 related to cryptography?
A. The invention of the Playfair cipher
B. The publication of Auguste Kerckhoffs' La Cryptographie militare
C. The first use of the one-time pad
D. The establishment of the first long-distance semaphore telegraph line
Answer:
|
B. The publication of Auguste Kerckhoffs' La Cryptographie militare
|
Relevant Documents:
Document 0:::
Geomagnetic latitude, or magnetic latitude (MLAT), is a parameter analogous to geographic latitude, except that, instead of being defined relative to the geographic poles, it is defined by the axis of the geomagnetic dipole, which can be accurately extracted from the International Geomagnetic Reference Field (IGRF). Further, Magnetic Local Time (MLT) is the geomagnetic dipole equivalent to geographic longitude.
See also
Earth's magnetic field
Geomagnetic equator
Ionosphere
L-shell
Magnetosphere
World Magnetic Model (WMM)
References
External links
Tips on Viewing the Aurora (SWPC)
Magnetic Field Calculator (NCEI)
Ionospheric Electrodynamics Using Magnetic Apex Coordinates (Journal of Geomagnetism and Geoelectricity)
Document 1:::
Mond gas is a cheap coal gas that was used for industrial heating purposes. Coal gases are made by decomposing coal through heating it to a high temperature. Coal gases were the primary source of gas fuel during the 1940s and 1950s until the adoption of natural gas. They were used for lighting, heating, and cooking, typically being supplied to households through pipe distribution systems. The gas was named after its discoverer, Ludwig Mond.
Discovery
In 1889, Ludwig Mond discovered that the combustion of coal with air and steam produced ammonia along with an extra gas, which was named the Mond gas. He discovered this while looking for a process to form ammonium sulfate, which was useful in agriculture. The process involved reacting low-quality coal with superheated steam, which produced the Mond gas. The gas was then passed through dilute sulfuric acid spray, which ultimately removed the ammonia, forming ammonium sulfate.
Mond modified the gasification process by restricting the air supply and filling the air with steam, providing a low working temperature. This temperature was below ammonia's point of dissociation, maximizing the amount of ammonia that could be produced from the nitrogen, a product from superheating coal.
Gas production
The Mond gas process was designed to convert cheap coal into flammable gas, which was made up of mainly hydrogen, while recovering ammonium sulfate. The gas produced was rich in hydrogen and poor in carbon monoxide. Although it could be used for some industrial purposes and power generation, the gas was limited for heating or lighting.
In 1897, the first Mond gas plant began at the Brunner Mond & Company in Northwich, Cheshire. Mond plants which recovered ammonia needed to be large in order to be profitable, using at least 182 tons of coal per week.
Reaction
Predominant reaction in the Mond gas process: C + 2H2O = CO2 + 2H2
The Mond gas was composed of roughly:
12% CO (Carbon monoxide)
28% H2 (Hydrogen)
2.2% CH4 (Methane)
1
Document 2:::
In probability theory and in particular in information theory, total correlation (Watanabe 1960) is one of several generalizations of the mutual information. It is also known as the multivariate constraint (Garner 1962) or multiinformation (Studený & Vejnarová 1999). It quantifies the redundancy or dependency among a set of n random variables.
Definition
For a given set of n random variables $\{X_1, X_2, \ldots, X_n\}$, the total correlation $C(X_1, X_2, \ldots, X_n)$ is defined as the Kullback–Leibler divergence from the joint distribution $p(X_1, \ldots, X_n)$ to the independent distribution of $p(X_1)p(X_2)\cdots p(X_n)$,
$$C(X_1, X_2, \ldots, X_n) = D_{\mathrm{KL}}\big[p(X_1, \ldots, X_n) \,\big\|\, p(X_1)p(X_2)\cdots p(X_n)\big].$$
This divergence reduces to the simpler difference of entropies,
$$C(X_1, X_2, \ldots, X_n) = \sum_{i=1}^{n} H(X_i) - H(X_1, X_2, \ldots, X_n),$$
where $H(X_i)$ is the information entropy of variable $X_i$, and $H(X_1, X_2, \ldots, X_n)$ is the joint entropy of the variable set $\{X_1, X_2, \ldots, X_n\}$. In terms of the discrete probability distributions on variables $\{X_1, X_2, \ldots, X_n\}$, the total correlation is given by
$$C(X_1, X_2, \ldots, X_n) = \sum_{x_1} \cdots \sum_{x_n} p(x_1, \ldots, x_n)\, \log \frac{p(x_1, \ldots, x_n)}{p(x_1)p(x_2)\cdots p(x_n)}.$$
The total correlation is the amount of information shared among the variables in the set. The sum $\sum_{i=1}^{n} H(X_i)$ represents the amount of information in bits (assuming base-2 logs) that the variables would possess if they were totally independent of one another (non-redundant), or, equivalently, the average code length to transmit the values of all variables if each variable was (optimally) coded independently. The term $H(X_1, X_2, \ldots, X_n)$ is the actual amount of information that the variable set contains, or equivalently, the average code length to transmit the values of all variables if the set of variables was (optimally) coded together. The difference between
these terms therefore represents the absolute redundancy (in bits) present in the given
set of variables, and thus provides a general quantitative measure of the
structure or organization embodied in the set of variables
(Rothstein 1952). The total correlation is also the Kullback–Leibler divergence between the actual distribution and its maximum entropy product approximation .
Total correlation quantifies the amount of dependence among a group of variables. A near-zero total correlation indicates that the variables in the group are essentially statistically independent; they are completely unrelated, in the sense that knowing the value of one variable does not provide any clue as to the values of the other variables. On the other hand, the maximum total correlation (for a fixed set of individual entropies $H(X_1), \ldots, H(X_n)$) is given by
$$C_{\max} = \sum_{i=1}^{n} H(X_i) - \max_{i} H(X_i)$$
and occurs when one of the variables determines all of the other variables. The variables are then maximally related in the sense that knowing the value of one variable provides complete information about the values of all the other variables, and the variables can be figuratively regarded as cogs, in which the position of one cog determines the positions of all the others (Rothstein 1952).
It is important to note that the total correlation counts up all the redundancies among a set of variables, but that these redundancies may be distributed throughout the variable set in a variety of complicated ways (Garner 1962). For example, some variables in the set may be totally inter-redundant while others in the set are completely independent. Perhaps more significantly, redundancy may be carried in interactions of various degrees: A group of variables may not possess any pairwise redundancies, but may possess higher-order interaction redundancies of the kind exemplified by the parity function. The decomposition of total correlation into its constituent redundancies is explored in a number of sources (McGill 1954, Watanabe 1960, Garner 1962, Studeny & Vejnarova 1999, Jakulin & Bratko 2003a, Jakulin & Bratko 2003b, Nemenman 2004, Margolin et al. 2008, Han 1978, Han 1980).
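As a concrete illustration (the joint distribution below is a made-up example, not from the article), total correlation can be computed directly from a discrete joint distribution as the difference between the sum of marginal entropies and the joint entropy:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability array (zero entries are ignored)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def total_correlation(joint):
    """C(X1,...,Xn) = sum_i H(Xi) - H(X1,...,Xn) for an n-dimensional joint pmf."""
    marginal_entropies = sum(
        entropy(joint.sum(axis=tuple(a for a in range(joint.ndim) if a != i)))
        for i in range(joint.ndim)
    )
    return marginal_entropies - entropy(joint.ravel())

# Two perfectly correlated binary variables share 1 bit of information.
joint = np.array([[0.5, 0.0],
                  [0.0, 0.5]])
print(total_correlation(joint))   # 1.0
```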
Conditional total correlation
Conditional total correlation is defined analogously to the total correlation, but adding a condition to each term. Conditional total correlation is similarly defined as a Kullback–Leibler divergence between two conditional probability distributions,
$$C(X_1, \ldots, X_n \mid Y = y) = D_{\mathrm{KL}}\big[p(X_1, \ldots, X_n \mid y) \,\big\|\, p(X_1 \mid y)\,p(X_2 \mid y)\cdots p(X_n \mid y)\big].$$
Analogous to the above, conditional total correlation reduces to a difference of conditional entropies,
$$C(X_1, \ldots, X_n \mid Y) = \sum_{i=1}^{n} H(X_i \mid Y) - H(X_1, \ldots, X_n \mid Y).$$
Uses of total correlation
Clustering and feature selection algorithms based on total correlation have been explored by Watanabe. Alfonso et al. (2010) applied the concept of total correlation to the optimisation of water monitoring networks.
See also
Mutual information
Dual total correlation
Interaction information
References
Alfonso, L., Lobbrecht, A., and Price, R. (2010). Optimization of Water Level Monitoring Network in Polder Systems Using Information Theory, Water Resources Research, 46, W12553, 13 PP., 2010, .
Garner W R (1962). Uncertainty and Structure as Psychological Concepts, JohnWiley & Sons, New York.
Han T S (1978). Nonnegative entropy measures of multivariate symmetric correlations, Information and Control 36, 133–156.
Han T S (1980). Multiple mutual information and multiple interactions in frequency data, Information and Control 46, 26–45.
Jakulin A & Bratko I (2003a). Analyzing Attribute Dependencies, in N Lavrač, D Gamberger, L Todorovski & H Blockeel, eds, Proceedings of the 7th European Conference on Principles and Practice of Knowledge Discovery in Databases, Springer, Cavtat-Dubrovnik, Croatia, pp. 229–240.
Jakulin A & Bratko I (2003b). Quantifying and visualizing attribute interactions .
Margolin A, Wang K, Califano A, & Nemenman I (2010). Multivariate dependence and genetic networks inference. IET Syst Biol 4, 428.
McGill W J (1954). Multivariate information transmission, Psychometrika 19, 97–116.
Nemenman I (2004). Information theory, multivariate dependence, and genetic network inference .
Rothstein J (1952). Organization and entropy, Journal of Applied Physics 23, 1281–1282.
Studený M & Vejnarová J (1999). The multiinformation function as a tool for measuring stochastic dependence, in M I Jordan, ed., Learning in Graphical Models, MIT Press, Cambridge, MA, pp. 261–296.
Watanabe S (1960). Information theoretical analysis of multivariate correlation, IBM Journal of Research and Development 4, 66–82.
Document 3:::
Pentamidine is an antimicrobial medication used to treat African trypanosomiasis, leishmaniasis, Balamuthia infections, babesiosis, and to prevent and treat pneumocystis pneumonia (PCP) in people with poor immune function. In African trypanosomiasis it is used for early disease before central nervous system involvement, as a second line option to suramin. It is an option for both visceral leishmaniasis and cutaneous leishmaniasis. Pentamidine can be given by injection into a vein or muscle or by inhalation.
Common side effects of the injectable form include low blood sugar, pain at the site of injection, nausea, vomiting, low blood pressure, and kidney problems. Common side effects of the inhaled form include wheezing, cough, and nausea. It is unclear if doses should be changed in those with kidney or liver problems. Pentamidine is not recommended in early pregnancy but may be used in later pregnancy. Its safety during breastfeeding is unclear. Pentamidine is in the aromatic diamidine family of medications. While the way the medication works is not entirely clear, it is believed to involve decreasing the production of DNA, RNA, and protein.
Pentamidine came into medical use in 1937. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In regions of the world where trypanosomiasis is common pentamidine is provided for free by the World Health Organization (WHO).
Medical uses
Treatment of PCP caused by Pneumocystis jirovecii
Prevention of PCP in adults with HIV who have one or both of the following:
History of PCP
CD4+ count ≤ 200 cells/mm³
Treatment of leishmaniasis
Treatment of African trypanosomiasis caused by Trypanosoma brucei gambiense
Balamuthia infections
Pentamidine is classified as an orphan drug by the U.S. Food and Drug Administration
Other uses
Use as an antitumor drug has also been proposed.
Pentamidine is also identified as a potential small molecule antagonist that disrupts this interacti
Document 4:::
Lagrange's four-square theorem, also known as Bachet's conjecture, states that every nonnegative integer can be represented as a sum of four non-negative integer squares. That is, the squares form an additive basis of order four:
$$p = a_0^2 + a_1^2 + a_2^2 + a_3^2,$$
where the four numbers $a_0, a_1, a_2, a_3$ are integers. For illustration, 3, 31, and 310 can be represented as the sum of four squares as follows:
$$3 = 1^2 + 1^2 + 1^2 + 0^2,$$
$$31 = 5^2 + 2^2 + 1^2 + 1^2,$$
$$310 = 17^2 + 4^2 + 2^2 + 1^2.$$
This theorem was proven by Joseph Louis Lagrange in 1770. It is a special case of the Fermat polygonal number theorem.
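A brute-force search makes the theorem easy to check numerically for small inputs; this Python sketch is illustrative only and is not an efficient algorithm:

```python
from itertools import product
from math import isqrt

def four_squares(n):
    """Return a tuple (a, b, c, d) with a*a + b*b + c*c + d*d == n."""
    bound = isqrt(n)
    for a, b, c in product(range(bound + 1), repeat=3):
        rest = n - (a * a + b * b + c * c)
        if rest >= 0:
            d = isqrt(rest)
            if d * d == rest:
                return a, b, c, d
    return None   # never reached for n >= 0, by Lagrange's theorem

for n in (3, 31, 310):
    print(n, four_squares(n))
```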
Historical development
From examples given in the Arithmetica, it is clear that Diophantus was aware of the theorem. This book was translated in 1621 into Latin by Bachet (Claude Gaspard Bachet de Méziriac), who stated the theorem in the notes of his translation. But the theorem was not proved until 1770 by Lagrange.
Adrien-Marie Legendre extended the theorem in 1797–8 with his three-square theorem, by proving that a positive integer can be expressed as the sum of three squares if and only if it is not of the form for integers and . Later, in 1834, Carl Gustav Jakob Jacobi discovered a simple formula for the number of representations of an integer as the sum of four squares with his own four-square theorem.
The formula is also linked to Descartes' theorem of four "kissing circles", which involves the sum of the squares of the curvatures of four circles. This is also linked to Apollonian gaskets, which were more recently related to the Ramanujan–Petersson conjecture.
Proofs
The classical proof
Several very similar modern versions of Lagrange's proof exist. The proof below is a slightly simplified version, in which the cases for which m is even or odd do not require separate arguments.
Proof using the Hurwitz integers
Another way to prove the theorem relies on Hurwitz quaternions, which are the analog of integers for quaternions.
Generalizations
Lagrange's four-square theorem is a special case of the Fermat polygonal number theorem and Waring's problem. Another possible generalization
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does total correlation quantify in a set of random variables?
A. The total number of variables in the set
B. The dependencies or redundancies among the variables
C. The individual entropies of each variable
D. The maximum possible entropy of the variable set
Answer:
|
B. The dependencies or redundancies among the variables
|
Relevant Documents:
Document 0:::
A planetary phase is a certain portion of a planet's area that reflects sunlight as viewed from a given vantage point, as well as the period of time during which it occurs. The phase is determined by the phase angle, which is the angle between the planet, the Sun and the Earth.
Inferior planets
The two inferior planets, Mercury and Venus, which have orbits that are smaller than the Earth's, exhibit the full range of phases as does the Moon, when seen through a telescope. Their phases are "full" when they are at superior conjunction, on the far side of the Sun as seen from the Earth. It is possible to see them at these times, since their orbits are not exactly in the plane of Earth's orbit, so they usually appear to pass slightly above or below the Sun in the sky. Seeing them from the Earth's surface is difficult, because of sunlight scattered in Earth's atmosphere, but observers in space can see them easily if direct sunlight is blocked from reaching the observer's eyes. The planets' phases are "new" when they are at inferior conjunction, passing more or less between the Sun and the Earth. Sometimes they appear to cross the solar disk, which is called a transit of the planet. At intermediate points on their orbits, these planets exhibit the full range of crescent and gibbous phases.
Superior planets
The superior planets, orbiting outside the Earth's orbit, do not exhibit a full range of phases since their maximum phase angles are smaller than 90°. Mars often appears significantly gibbous; it has a maximum phase angle of 45°. Jupiter has a maximum phase angle of 11.1° and Saturn of 6°, so their phases are almost always full.
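The phase angles quoted above translate into illuminated fractions via the standard relation k = (1 + cos α) / 2; the short sketch below uses that relation (our illustration, not taken from the text) to show why the superior planets never look far from full.

import math

def illuminated_fraction(phase_angle_deg):
    # Fraction of the visible disk that is sunlit for a given phase angle.
    return (1 + math.cos(math.radians(phase_angle_deg))) / 2

# Maximum phase angles quoted above for the superior planets.
for planet, alpha in [("Mars", 45.0), ("Jupiter", 11.1), ("Saturn", 6.0)]:
    k = illuminated_fraction(alpha)
    print(f"{planet}: at least {k:.1%} of the disk remains illuminated")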
See also
Earth phase
Lunar phase
Phases of Venus
References
Further reading
Schaaf, Fred. The 50 Best Sights in Astronomy and How to See Them: Observing Eclipses, Bright Comets, Meteor Showers, and Other Celestial Wonders. Hoboken, New Jersey: John Wiley, 2007. Print.
Ganguly, J. Thermodynamics in Earth and Planetary Sciences. B
Document 1:::
In mathematics, the Lagrangian theory on fiber bundles is globally formulated in algebraic terms of the variational bicomplex, without appealing to the calculus of variations. For instance, this is the case of classical field theory on fiber bundles (covariant classical field theory).
The variational bicomplex is a cochain complex of the differential graded algebra of exterior forms on jet manifolds of sections of a fiber bundle. Lagrangians and Euler–Lagrange operators on a fiber bundle are defined as elements of this bicomplex. Cohomology of the variational bicomplex leads to the global first variational formula and first Noether's theorem.
Extended to Lagrangian theory of even and odd fields on graded manifolds, the variational bicomplex provides strict mathematical formulation of classical field theory in a general case of reducible degenerate Lagrangians and the Lagrangian BRST theory.
Document 2:::
Economic base analysis is a theory that posits that activities in an area divide into two categories: basic and nonbasic. Basic industries are those exporting from the region and bringing wealth from outside, while nonbasic (or service) industries support basic industries. Because export-import flows are usually not tracked at sub-national (regional) levels, it is not practical to study industry output and trade flows to and from a region. As an alternative, the concepts of basic and nonbasic are operationalized using employment data. The theory was developed by Robert Murray Haig in his work on the Regional Plan of New York in 1928.
Application of the analysis
The basic industries of a region are identified by comparing employment in the region to national norms. If the national norm for employment in, for example, Egyptian woodwind manufacturing is 5 percent and the region's employment is 8 percent, then 3 percent of the region's woodwind employment is basic. Once basic employment is identified, the outlook for basic employment is investigated sector by sector and projections made sector by sector. In turn, this permits the projection of total employment in the region. Typically the basic/nonbasic employment ratio is about 1:1. Extending by manipulation of data and comparisons, conjectures may be made about population and income. This is a rough, serviceable procedure, and it remains in use today. It has the advantage of being readily operationalized, fiddled with, and understandable.
Formula
The formula for computing location quotients can be written as:
LQ_i = (e_i / e) / (E_i / E)
Where:
e_i = Local employment in industry i
e = Total local employment
E_i = Reference area employment in industry i
E = Total reference area employment
It is assumed that the base year is identical in all of the above variables.
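A minimal sketch of the calculation (variable and function names are ours) is shown below; with illustrative figures it reproduces the woodwind example, where a 5 percent national share against an 8 percent regional share leaves 3 percentage points of local employment classified as basic.

def location_quotient(local_i, local_total, ref_i, ref_total):
    # LQ_i = (e_i / e) / (E_i / E)
    return (local_i / local_total) / (ref_i / ref_total)

def basic_employment(local_i, local_total, ref_i, ref_total):
    # Employment in industry i above the reference-area norm is treated as basic.
    expected = (ref_i / ref_total) * local_total
    return max(0.0, local_i - expected)

# Illustrative figures: 8% of 100,000 local jobs vs. a 5% national norm.
local_total, local_i = 100_000, 8_000
ref_total, ref_i = 150_000_000, 7_500_000
print(location_quotient(local_i, local_total, ref_i, ref_total))  # 1.6
print(basic_employment(local_i, local_total, ref_i, ref_total))   # 3000.0 basic jobs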
Example and methodology
Economic base ideas can be easy to understand, as are measures made of employment. For instance, it is well known that the economy of Seattle, Washington is tied to airc
Document 3:::
This page provides supplementary chemical data on aluminium oxide.
Material Safety Data Sheet
SIRI
Science Stuff
Structure and properties
Thermodynamic properties
Spectral data
Document 4:::
In finance, risk factors are the building blocks of investing that help explain the systematic returns in the equity market, and the possibility of losing money in investments or business ventures. A risk factor is a concept in finance theory such as the capital asset pricing model, arbitrage pricing theory and other theories that use pricing kernels. In these models, the rate of return of an asset (and hence, conversely, its price) is a random variable whose realization in any time period is a linear combination of other random variables plus a disturbance term or white noise. In practice, a linear combination of observed factors included in a linear asset pricing model (for example, the Fama–French three-factor model) proxies for a linear combination of unobserved risk factors if financial market efficiency is assumed. In the Intertemporal CAPM, non-market factors proxy for changes in the investment opportunity set.
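The linear structure described above is easy to simulate. The sketch below is a toy illustration (the exposures, intercept and noise level are invented, and it is not tied to any particular published factor model): returns are generated as a linear combination of factor realizations plus a disturbance term, and ordinary least squares recovers the loadings.

import numpy as np

rng = np.random.default_rng(0)

# Three observed risk factors (say, market, size and value) over 1,000 periods.
factors = rng.normal(0.0, 0.02, size=(1000, 3))
betas = np.array([1.1, 0.4, -0.2])        # the asset's factor exposures (loadings)
alpha = 0.0005                            # intercept
noise = rng.normal(0.0, 0.01, size=1000)  # idiosyncratic disturbance term

returns = alpha + factors @ betas + noise

# Least-squares regression on the factors recovers loadings close to [1.1, 0.4, -0.2].
X = np.column_stack([np.ones(len(returns)), factors])
estimated = np.linalg.lstsq(X, returns, rcond=None)[0]
print(estimated[1:])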
Risk factors occur whenever any sort of asset is involved, and there are many forms of risks from credit, liquidity risks to investment and currency risks.
Different participants face different risk factors; for example, there are financial risks for the individual, financial risks for the market, and financial risks for the government.
Financial risks for the individual
Financial risks for individuals occur when they make sub-optimal decisions. There are several types of Individual risk factors; pure risk, liquidity risk, speculative risk, and currency risk. Pure Risk is a type of risk where the outcome cannot be controlled, and only has two outcomes which are complete loss or no loss at all. An example of pure risk for an individual would be owning an equipment, there is risk of it being stolen and there would be a loss to the individual, however, if it weren't stolen, there is no gain but only no loss for the individual. Liquidity Risk is when securities cannot be purchased or sold fast enough to cut losses in a volatile
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the main purpose of a work system according to the text?
A. To create software for internal use
B. To produce products and services for customers
C. To manage human resources effectively
D. To analyze financial performance
Answer:
|
B. To produce products and services for customers
|
Relavent Documents:
Document 0:::
Serruria stellata, the star spiderhead, is a flower-bearing shrub that belongs to the genus Serruria and forms part of the fynbos. The plant is native to the Western Cape, South Africa.
Description
The shrub is small with creeping stems and grows only tall and flowers from September to November. Fire destroys the plant but the seeds survive. Two months after flowering, the fruit falls off and ants disperse the seeds. They store the seeds in their nests. The plant is unisexual and pollinated by insects.
In Afrikaans, it is known as .
Distribution and habitat
The plant occurs from the Stettyns Mountains to the western Riviersonderend Mountains. It grows in sandstone sand at elevations of .
References
Document 1:::
The double-skin façade is a system of building consisting of two skins, or façades, placed in such a way that air flows in the intermediate cavity. The ventilation of the cavity can be natural, fan supported or mechanical. Apart from the type of the ventilation inside the cavity, the origin and destination of the air can differ depending mostly on climatic conditions, the use, the location, the occupational hours of the building and the HVAC strategy.
The glass skins can be single or double glazing units with a distance from 20 cm up to 2 metres. Often, for protection and heat extraction reasons during the cooling period, solar shading devices are placed inside the cavity.
History
The essential concept of the double-skin facade was first explored and tested by the Swiss-French architect Le Corbusier in the early 20th century. His idea, which he called mur neutralisant (neutralizing wall), involved the insertion of heating/cooling pipes between large
layers of glass. Such a system was employed in his Villa Schwob (La Chaux-de-Fonds, Switzerland, 1916), and proposed for several other projects, including the League of Nations competition (1927), Centrosoyuz building (Moscow, 1928–33), and Cité du Refuge (Paris, 1930). American engineers studying the system in 1930 informed Le Corbusier that it would use much more energy than a conventional air system, but Harvey Bryan later concluded Le Corbusier's idea had merit if it included solar heating.
Another early experiment was the 1937 Alfred Loomis house by architect William Lescaze in Tuxedo Park, NY. This house included "an elaborate double envelope" with a 2-foot-deep air space conditioned by a separate system from the house itself. The object was to maintain high humidity levels inside.
One of the first modern examples to be constructed was the Occidental Chemical Building (Niagara Falls, New York, 1980) by Cannon Design. This building, essentially a glass cube, included a 4-feet-deep cavity between glass layers to
Document 2:::
The Miniature X-ray Solar Spectrometer (MinXSS) CubeSat was the first launched National Aeronautics and Space Administration Science Mission Directorate CubeSat with a science mission. It was designed, built, and operated primarily by students at the University of Colorado Boulder with professional mentorship and involvement from professors, scientists, and engineers in the Aerospace Engineering Sciences department and the Laboratory for Atmospheric and Space Physics, as well as Southwest Research Institute, NASA Goddard Space Flight Center, and the National Center for Atmospheric Research's High Altitude Observatory. The mission principal investigator is Dr. Thomas N. Woods and co-investigators are Dr. Amir Caspi, Dr. Phil Chamberlin, Dr. Andrew Jones, Rick Kohnert, Professor Xinlin Li, Professor Scott Palo, and Dr. Stanley Solomon. The student lead (project manager, systems engineer) was Dr. James Paul Mason, who has since become a Co-I for the second flight model of MinXSS.
MinXSS launched on 2015 December 6 to the International Space Station as part of the Orbital ATK Cygnus CRS OA-4 cargo resupply mission. The launch vehicle was a United Launch Alliance Atlas V rocket in the 401 configuration. CubeSat ridesharing was organized as part of NASA ELaNa-IX. Deployment from the International Space Station was achieved with a NanoRacks CubeSat Deployer on 2016 May 16. Spacecraft beacons were picked up soon after by amateur radio operators around the world. Commissioning of the spacecraft was completed on 2016 June 14 and observations of solar flares captured nearly continuously since then. The altitude rapidly decayed in the last week of the mission as atmospheric drag increased exponentially with altitude. The last contact from MinXSS came on 2017-05-06 at 02:37:26 UTC from a HAM operator in Australia. At that time, some temperatures on the spacecraft were already in excess of 100 °C. (One temperature of >300 °C indicated that the solar panel had disconnected, sugge
Document 3:::
Archive for Mathematical Logic is a peer-reviewed mathematics journal published by Springer Science+Business Media. It was established in 1950 and publishes articles on mathematical logic.
Abstracting and indexing
The journal is abstracted and indexed in:
Mathematical Reviews
Zentralblatt MATH
Scopus
SCImago
According to the Journal Citation Reports, the journal has a 2020 impact factor of 0.287.
References
External links
Document 4:::
X-ray absorption spectroscopy (XAS) is a set of advanced techniques used for probing the local environment of matter at atomic level and its electronic structure. The experiments require access to synchrotron radiation facilities for their intense and tunable X-ray beams. Samples can be in the gas phase, solutions, or solids.
Background
XAS data are obtained by tuning the photon energy, using a crystalline monochromator, to a range where core electrons can be excited (0.1-100 keV). The edges are, in part, named by which core electron is excited: the principal quantum numbers n = 1, 2, and 3, correspond to the K-, L-, and M-edges, respectively. For instance, excitation of a 1s electron occurs at the K-edge, while excitation of a 2s or 2p electron occurs at an L-edge (Figure 1).
There are three main regions found on a spectrum generated by XAS data, which are then thought of as separate spectroscopic techniques (Figure 2):
The absorption threshold determined by the transition to the lowest unoccupied states:
The X-ray absorption near-edge structure (XANES), introduced in 1980 and later in 1983 and also called NEXAFS (near-edge X-ray absorption fine structure), which are dominated by core transitions to quasi bound states (multiple scattering resonances) for photoelectrons with kinetic energy in the range from 10 to 150 eV above the chemical potential, called "shape resonances" in molecular spectra since they are due to final states of short life-time degenerate with the continuum with the Fano line-shape. In this range, multi-electron excitations and many-body final states in strongly correlated systems are relevant;
In the high kinetic energy range of the photoelectron, the scattering cross-section with neighbor atoms is weak, and the absorption spectra are dominated by EXAFS (extended X-ray absorption fine structure), where the scattering of the ejected photoelectron of neighboring atoms can be approximated by single scattering events. In 1985, it was shown
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary purpose of a stress test in hardware testing?
A. To measure the maximum performance of a system
B. To determine the breaking points and safe usage limits
C. To evaluate the aesthetic design of the hardware
D. To compile a list of software applications used
Answer:
|
B. To determine the breaking points and safe usage limits
|
Relavent Documents:
Document 0:::
Messier 12 or M 12 (also designated NGC 6218) is a globular cluster in the constellation of Ophiuchus. It was discovered by the French astronomer Charles Messier on May 30, 1764, who described it as a "nebula without stars". In dark conditions this cluster can be faintly seen with a pair of binoculars. Resolving the stellar components requires a telescope with an aperture of or greater. In a scope, the granular core shows a diameter of 3′ (arcminutes) surrounded by a 10′ halo of stars.
M12 is roughly 3° northwest from the cluster M10 and 5.6° east southeast from star Lambda Ophiuchi. It is also located near the 6th magnitude 12 Ophiuchi. The cluster is about from Earth and has a spatial diameter of about 75 light-years. The brightest stars of M12 are of 12th magnitude. M10 and M12 are only a few thousand light-years away from each other and each cluster would appear at about magnitude 4.5 from the other. With a Shapley-Sawyer rating of IX, it is rather loosely packed for a globular and was once thought to be a tightly concentrated open cluster. Thirteen variable stars have been recorded in this cluster. M12 is approaching us at a velocity of 16 km/s.
A study published in 2006 concluded that this cluster has an unusually low number of low-mass stars. The authors surmise that they were stripped from the cluster by passage through the relatively matter-rich plane of the Milky Way.
See also
List of Messier objects
Notes
References
External links
Messier 12, SEDS Messier pages
Messier 12, Galactic Globular Clusters Database page
'Stolen' stars article at Universe Today
Messier 12, Amateur astrophotographer (hgg) with a Celestron 9.25"
Document 1:::
River bank filtration is a type of filtration that works by passing water to be purified for use as drinking water through the banks of a river or lake. It is then drawn off by extraction from a well that is some distance away from the water body. The process may directly yield drinkable water, or be a relatively uncomplicated way of pre-treating water for further purification.
Usage
The process has been in use in Europe, especially in Germany along the Rhine and later in Berlin, since the 1870s. Major facilities also exist in many other countries, including the United States, where Nebraska leads the use of such facilities.
Procedure
Three filtration mechanisms are possible. Physical filtration or straining takes place when suspended particulates are too large to pass through interstitial spaces between alluvial soil particles. Biological filtration occurs when soil microorganisms remove and digest dissolved or suspended organic material and chemical nutrients.
Chemical filtration or ion exchange may take place when aquifer soils react with soluble chemicals in the water. Most 'normal' contaminants (microbial organisms and inorganic or organic pollutants) will be removed by bank filtration, either because they get filtered out by the sand/earth of the bank, or because the passage of time (which may be days or potentially weeks) is sufficient to render them inactive. Research has also shown that the efficiency of removal depends not only on the contaminant, but also on the "hydraulic and chemical characteristics of the bottom sediment and the aquifer, the local recharge-discharge conditions, and biochemical processes".
Limitations
There have been indications that some pharmaceutical compounds (medical drug traces from human use) may not always be sufficiently removed by bank filtration, and that in areas with substantial contamination of this type, additional treatment may be needed.
Wastewater applications
Alluvial soils may also be used to purify waste
Document 2:::
Ile aux Aigrettes is an islet off the south-east coast of Mauritius. It functions as a nature reserve and a scientific research station. It is also a popular visitors attraction—both for tourists and for Mauritians.
Geography
It has an area of and is the largest islet in the Grand Port bay, off the south-east coast of Mauritius and roughly a kilometer from the coastal town of Mahebourg. It is low-lying and is formed from coral-limestone (unlike the majority of Mauritius which is from volcanic rock).
Nature reserve and conservation
Ile aux Aigrettes conserves the world's only remaining piece of Mauritius Dry Coastal Forest—a once plentiful vegetation type. It is therefore home to a large number of extremely rare or endangered species of plants and animals.
Over several hundred years, indigenous flora and fauna was devastated by logging and invasive species. In this sense, the islet shared the same fate as the rest of Mauritius. The Dodo and the indigenous species of giant tortoise became extinct, as did many plant species.
Relicts of some species survived though, and in 1965 the island was declared a nature reserve. There followed intense work to restore the vegetation and the few remaining indigenous animal species. In addition, several other species which had disappeared from the island—but survived elsewhere in Mauritius—were reintroduced.
Reptile species include the large, slow Telfair's skink, several species of ornately coloured day gecko, and a population of non-indigenous Aldabra giant tortoise, brought to Île aux Aigrettes to take over the important ecological role of the extinct Mauritian tortoises. The large tortoises eat and spread the plant seeds and thereby help the forest to rejuvenate naturally.
The rare, endemic ebony tree species Diospyros egrettarum is named after this island, on which it is plentiful.
Endemic Mauritius animals on the island
Other flora and fauna
References
Document 3:::
Amplidata is a privately-held cloud storage technology provider based in Lochristi, Belgium. In November 2010, Amplidata opened its U.S. headquarters in Redwood City, California.
The research and development department has locations in Belgium and Egypt, while the sales and support departments are represented in a number of countries in Europe and North America.
Amplidata has developed a Distributed Storage System (DSS) technology designed to solve the scalability and reliability problems traditional storage systems face, by taking advantage of the introduction of large capacity SATA drives and solid state disk. By storing data across a selection of disks that are widely distributed across storage nodes, racks and sites, the DSS architecture spreads the risk of hardware failure to ensure that a failure of a component such as a disk, storage node or even a full rack has no impact on data availability and a minimal impact on data redundancy. The entire system is permanently monitored and data integrity is constantly checked. As a result, bit errors on disks are proactively healed before they become an issue to the user.
Amplidata's AmpliStor distributes and stores data redundantly across a large number of disks. The algorithm that AmpliStor uses first puts the data into an object and then stores the data across multiple disks in the AmpliStor system. By storing the data as an object, Amplidata can reconstruct the original data from any of the disks on which the data within the object resides.
Amplidata was acquired in March 2015 by HGST, a Western Digital subsidiary.
History
Amplidata was founded in 2008 by Wim De Wispelaere (CEO) and Wouter Van Eetvelde (COO). The company was initially funded by Kristof De Spiegeleer. The two founders started the company to address some of the inherent weaknesses of high capacity disk systems, such as performance, bit error rates, mean time between failures and RAID rebuild times which have not improved sufficiently to keep up wi
Document 4:::
The American Concrete Institute (ACI, formerly National Association of Cement Users or NACU) is a non-profit technical society and standards developing organization. ACI was founded in January 1905 during a convention in Indianapolis. The Institute's headquarters are currently located in Farmington Hills, Michigan, USA. ACI's mission is "ACI develops and disseminates consensus-based knowledge on concrete and its uses."
ACI History
A lack of standards for making concrete blocks resulted in a negative perception of concrete for construction. An editorial by Charles C. Brown in the September 1904 issue of Municipal Engineering discussed the idea of forming an organization to bring order and standard practices to the industry. In 1905 the National Association of Cement Users was formally organized and adopted a constitution and bylaws. Richard Humphrey was elected its first President. The first committees were appointed at the 1905 convention in Indianapolis and offered preliminary reports on a number of subject areas. The first complete committee reports were offered at the 1907 convention. The association's first official headquarters was established in 1908 at Richard Humphrey's office in Philadelphia, Pennsylvania. Clerical and editorial help was brought on to more effectively organize conventions and publish proceedings of the institute. The "Standard Building Regulations for the Use of Reinforced Concrete" was adopted at the 1910 convention and became the association's first reinforced concrete building code. By 1912 the association had adopted 14 standards. At the December 1912 convention the association approved publication of a monthly journal of proceedings. In July 1913 the Board of Direction of NACU decided to change its name to the American Concrete Institute. The new name was deemed to be more descriptive of the work being conducted within the institute.
ACI 318
ACI 318 Building Code Requirements for Structural Concrete provides minimum requirements nec
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary function of Ile aux Aigrettes as mentioned in the text?
A. A commercial fishing area
B. A nature reserve and scientific research station
C. A historical monument
D. A luxury resort
Answer:
|
B. A nature reserve and scientific research station
|
Relavent Documents:
Document 0:::
Acoustic emission (AE) is the phenomenon of radiation of acoustic (elastic) waves in solids that occurs when a material undergoes irreversible changes in its internal structure, for example as a result of crack formation or plastic deformation due to aging, temperature gradients, or external mechanical forces.
In particular, AE occurs during the processes of mechanical loading of materials and structures accompanied by structural changes that generate local sources of elastic waves. This results in small surface displacements of a material produced by elastic or stress waves generated when the accumulated elastic energy in a material or on its surface is released rapidly.
The mechanism of emission of the primary elastic pulse AE (act or event AE) may have a different physical nature. The figure shows the mechanism of the AE act (event) during the nucleation of a microcrack due to the breakthrough of the dislocations pile-up (dislocation is a linear defect in the crystal lattice of a material) across the boundary in metals with a body-centered cubic (bcc) lattice under mechanical loading, as well as time diagrams of the stream of AE acts (events) (1) and the stream of recorded AE signals (2).
The AE method makes it possible to study the kinetics of processes at the earliest stages of microdeformation, dislocation nucleation and accumulation of microcracks. Roughly speaking, each crack seems to "scream" about its growth. This makes it possible to diagnose the moment of crack origin itself by the accompanying AE. In addition, for each crack that has already arisen, there is a certain critical size, depending on the properties of the material. Up to this size, the crack grows very slowly (sometimes for decades) through a huge number of small discrete jumps accompanied by AE radiation. After the crack reaches a critical size, catastrophic destruction occurs, because its further growth is already at a speed close to half the speed of sound in the material of the struct
Document 1:::
The Global Enabling Trade Report was first published in 2008 by the World Economic Forum.
The 2008 report covers 118 major and emerging economies. At the core of the report is the Enabling Trade Index which ranks the countries using data from different sources (e.g., World Economic Forum's Executive Opinion Survey, International Trade Centre, World Bank, the United Nations Conference on Trade and Development (UNCTAD), IATA, ITU, Global Express Association).
The Enabling Trade Index measures the factors, policies and services that facilitate the trade in goods across borders and to destination. It is made up of four sub-indexes:
Market access
Border administration
Transport and communications infrastructure
Business environment
Each of these sub-indexes contains two to three pillars that assess different aspects of a country's trade environment.
2016 rankings
Global Enabling Trade Report 2016
2014 rankings
Global Enabling Trade Report 2014
2012 rankings
Global Enabling Trade Report 2012
2010 rankings
Global Enabling Trade Report 2010
2009 rankings
Global Enabling Trade Report 2009
2008 rankings
Global Enabling Trade Report 2008
References
External links
The Global Enabling Trade Report 2010
Document 2:::
A mountain chain is a row of high mountain summits, a linear sequence of interconnected or related mountains, or a contiguous ridge of mountains within a larger mountain range. The term is also used for elongated fold mountains with several parallel chains ("chain mountains").
While in mountain ranges, the term mountain chain is common, in hill ranges a sequence of hills tends to be referred to as a ridge or hill chain.
Elongated mountain chains occur most frequently in the orogeny of fold mountains (which are folded by lateral pressure) and nappe belts (where a sheetlike body of rock has been pushed over another rock mass). Other types of range, such as horst ranges, fault block mountains or truncated uplands, rarely form parallel mountain chains. However, if a truncated upland is eroded into a high table land, the incision of valleys can lead to the formation of mountain or hill chains.
Formation of parallel mountain chains
The chain-like arrangement of summits and the formation of long, jagged mountain crests – known in Spanish as sierras ("saws") – is a consequence of their collective formation by mountain building forces. The often linear structure is linked to the direction of these thrust forces and the resulting mountain folding, which in turn relates to the fault lines in the upper part of the Earth's crust that run between the individual mountain chains. In these fault zones, the rock, which has sometimes been pulverised, is easily eroded, so that large river valleys are carved out. These so-called longitudinal valleys reinforce the trend, during the early mountain building phase, towards the formation of parallel chains of mountains.
The tendency, especially of fold mountains (e. g. the Cordilleras) to produce roughly parallel chains is due to their rock structure and the propulsive forces of plate tectonics. The uplifted rock masses are either magmatic plutonic rocks, easily shaped because of their higher temperature, or sediments or metamorphic rocks
Document 3:::
Clitoral vibrators are vibrators designed to externally stimulate the clitoris for sexual pleasure and orgasm. They are sex toys created for massaging the clitoris, and are not penetrating sex toys, although the shape of some vibrators allows for penetration and the stimulation of inner erogenous zones for extra sexual pleasure.
Use
Regardless of the design, the main function of the clitoral vibrator is to vibrate at varying speeds and intensities. Vibrators are normally driven by batteries and some of them can be used underwater. Discretion is often a useful feature for a sex toy, so clitoral vibrators that look like ordinary objects at first glance are also available. Clitoral vibrators have been designed to resemble lipsticks, mobile telephones, sponges, and many other everyday items.
Types
Most vibrators can be used for clitoral stimulation, but there are a few distinct types of vibrator available:
Manual clitoral vibrators come in a wide variety of designs. Some wand vibrators (such as the Hitachi Magic Wand and the Doxy) are powered by a long cable to a wall socket, making them somewhat less convenient, and unsafe in a wet environment. However, they are generally powerful, offering more intense stimulation and better durability. There are also battery operated vibrators, as well as small ones that can be worn on a finger.
Pressure wave vibrators: The concept of pressure wave vibrators represents a recent innovation in the field of clitoral stimulation devices. Rather than the conventional method of direct contact, these vibrators utilise air waves to stimulate the clitoris, thereby offering a gentler yet more intense form of stimulation that is distinct from other vibrators. The technology was developed by Michael Lenke and his wife in Bavaria, following years of research and development. In 2014, they established the brand Womanizer. The technology has since been adopted by numerous manufacturers for use in a wide variety of sex toys.
Hands-free clito
Document 4:::
NEFERT (Neck Flexion Rotation Test) is a medical examination procedure developed in 1999 by German neurootologist Claus-Frenz Claussen.
Use
The procedure is used to investigate intracorporal movement differences between the head and body, especially at the atlanto-axial joint and the lower cervical spine. The method can help diagnose sprains of the neck, stiff necks, and whiplash. According to the National Center for Biotechnology Information within the National Institutes of Health, the procedure is "commonly used in clinical practice to evaluate patients with cervicogenic headache."
Method
The test consists of six movements, which can also be distinguished into four phases. The movements are performed by the patient in a standing position.
(Phase I) The patient turns his head several times maximally within a time period of 20 seconds.
(Phase II) The patient bows his head forward maximally.
In a bowed position, the patient turns his head maximally from the left to the right and from the right to the left within a time period of 20 seconds.
(Phase III) The patient bows his head backwards maximally.
In a position bowed backwards, the patient turns his head for several times maximally from the left to the right and from the right to the left within a time period of 20 seconds.
(Phase IV) After a total time period of 60 seconds, the patient returns into a straight position.
If the test results are affected by unconscious shoulder movements of the patient, a second test course is performed, during which the examiner holds the patient's shoulders with their hands. The test results are recorded and graphically evaluated by a computer, for example with the help of Cranio-corpography.
Literature
Claus-Frenz Claussen, Burkard Franz: Contemporary and Practical Neurootology, Neurootologisches Forschungsinstitut der 4-G-Forschung e. V., Bad Kissingen 2006,
References
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What distinguishes allosteric modulators from orthosteric modulators in terms of their binding sites and mechanisms of action?
A. Allosteric modulators bind to the active site and directly block enzyme activity, while orthosteric modulators bind to a regulatory site.
B. Allosteric modulators bind to a site distinct from the active site, causing conformational changes, while orthosteric modulators bind directly to the active site and block substrate binding.
C. Allosteric modulators enhance substrate binding at the active site, while orthosteric modulators decrease enzyme activity by binding to a different site.
D. Allosteric modulators are only inhibitors, while orthosteric modulators can be either inhibitors or activators.
Answer:
|
B. Allosteric modulators bind to a site distinct from the active site, causing conformational changes, while orthosteric modulators bind directly to the active site and block substrate binding.
|
Relavent Documents:
Document 0:::
Binimetinib, sold under the brand name Mektovi, is an anti-cancer medication used to treat various cancers. Binimetinib is a selective inhibitor of MEK, a central kinase in the tumor-promoting MAPK pathway. Inappropriate activation of the pathway has been shown to occur in many cancers. In June 2018 it was approved by the FDA in combination with encorafenib for the treatment of patients with unresectable or metastatic BRAF V600E or V600K mutation-positive melanoma. In October 2023, it was approved by the FDA for treatment of NSCLC with a BRAF V600E mutation in combination with encorafenib. It was developed by Array Biopharma.
Mechanism of action
Binimetinib is an orally available inhibitor of mitogen-activated protein kinase kinase (MEK), or more specifically, a MAP2K inhibitor. MEK is part of the RAS pathway, which is involved in cell proliferation and survival. MEK is upregulated in many forms of cancer. Binimetinib, uncompetitive with ATP, binds to and inhibits the activity of MEK1/2 kinase, which has been shown to regulate several key cellular activities including proliferation, survival, and angiogenesis. MEK1/2 are dual-specificity threonine/tyrosine kinases that play key roles in the activation of the RAS/RAF/MEK/ERK pathway and are often upregulated in a variety of tumor cell types. Inhibition of MEK1/2 prevents the activation of MEK1/2 dependent effector proteins and transcription factors, which may result in the inhibition of growth factor-mediated cell signaling. As demonstrated in preclinical studies, this may eventually lead to an inhibition of tumor cell proliferation and an inhibition in production of various inflammatory cytokines including interleukin-1, -6 and tumor necrosis factor.
Development
In 2015, it was in phase III clinical trials for ovarian cancer, BRAF mutant melanoma, and NRAS Q61 mutant melanoma.
In December 2015, the company announced that the mutant-NRAS melanoma trial was successful. In the trial, those receiving binimetinib h
Document 1:::
A forced circulation boiler is a boiler where a pump is used to circulate water inside the boiler. This differs from a natural circulation boiler, which relies on density differences (natural convection) to circulate water inside the boiler. In some forced circulation boilers, the water is circulated at twenty times the rate of evaporation.
In water tube boilers, the way the water is recirculated inside the boiler before becoming steam can be described as either natural circulation or forced circulation. In a water tube boiler, the water is recirculated inside until the vapor pressure of the water overcomes the vapor pressure inside the steam drum and becomes saturated steam. The forced circulation boiler begins the same as a natural circulation boiler, at the feed water pump. Water is introduced into the steam drum and is circulated around the boiler, leaving only as steam. What makes the forced circulation boiler different is its use of a secondary pump that circulates water through the boiler. The secondary pump takes the feed water going to the boiler and raises the pressure of the water going in. In natural circulation boilers, the circulation of water is dependent on the differential pressures caused by the change of density in the water as it is heated. That is to say that as the water is heated and starts turning to steam, the density decreases, sending the hottest water and steam to the top of the furnace tubes.
In contrast to the natural circulation boiler, the forced circulation boiler uses a water circulation pump to force that flow instead of waiting for the differential to form. Because of this, the generation tubes of a forced circulation boiler are able to be oriented in whatever way is required by space constraints. Water is taken from the drum and forced through the steel tubes. In this way it is able to produce steam much faster than that of a natural circulation boiler.
Types
LaMont
One example of a forced circulation boiler is the LaMont boiler. Such boilers are
Document 2:::
In computer graphics, view synthesis, or novel view synthesis, is a task which consists of generating images of a specific subject or scene from a specific point of view, when the only available information is pictures taken from different points of view.
This task was only recently (late 2010s – early 2020s) tackled with significant success, mostly as a result of advances in machine learning. Notable successful methods are Neural Radiance Fields and 3D Gaussian Splatting.
Applications of view synthesis are numerous, one of them being Free viewpoint television.
Document 3:::
The chameleon is a hypothetical scalar particle that couples to matter more weakly than gravity, postulated as a dark energy candidate. Due to a non-linear self-interaction, it has a variable effective mass which is an increasing function of the ambient energy density—as a result, the range of the force mediated by the particle is predicted to be very small in regions of high density (for example on Earth, where it is less than 1 mm) but much larger in low-density intergalactic regions: out in the cosmos chameleon models permit a range of up to several thousand parsecs. As a result of this variable mass, the hypothetical fifth force mediated by the chameleon is able to evade current constraints on equivalence principle violation derived from terrestrial experiments even if it couples to matter with a strength equal or greater than that of gravity. Although this property would allow the chameleon to drive the currently observed acceleration of the universe's expansion, it also makes it very difficult to test for experimentally.
In 2021, physicists suggested that an excess reported at the dark matter detector experiment XENON1T, rather than being a dark matter candidate, could be a dark energy candidate, in particular chameleon particles; however, in July 2022 a new analysis by XENONnT discarded the excess.
Hypothetical properties
Chameleon particles were proposed in 2003 by Khoury and Weltman.
In most theories, chameleons have an effective mass that scales as some power of the local energy density (schematically, m_eff ∝ ρ^α for some positive exponent α).
Chameleons also couple to photons, allowing photons and chameleons to oscillate between each other in the presence of an external magnetic field.
Chameleons can be confined in hollow containers because their mass increases rapidly as they penetrate the container wall, causing them to reflect. One strategy to search experimentally for chameleons is to direct photons into a cavity, confining the chameleons produced, and then to switch off the light source. Chameleons would be ind
Document 4:::
Thread Level Speculation (TLS), also known as Speculative Multi-threading, or Speculative Parallelization, is a technique to speculatively execute a section of computer code that is anticipated to be executed later in parallel with the normal execution on a separate independent thread. Such a speculative thread may need to make assumptions about the values of input variables. If these prove to be invalid, then the portions of the speculative thread that rely on these input variables will need to be discarded and squashed. If the assumptions are correct the program can complete in a shorter time provided the thread was able to be scheduled efficiently.
Description
TLS extracts threads from serial code and executes them speculatively in parallel with a safe thread. The speculative thread will need to be discarded or re-run if its presumptions on the input state prove to be invalid. It is a dynamic (runtime) parallelization technique that can uncover parallelism that static (compile-time) parallelization techniques may fail to exploit, because at compile time thread independence cannot be guaranteed. For the technique to achieve the goal of reducing overall execution time, there must be spare CPU resources on which the speculative thread can execute efficiently in parallel with the main safe thread.
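The speculate/validate/squash cycle can be caricatured in a few lines of Python. The sketch below is purely conceptual (real TLS lives in hardware or in the compiler/runtime, not in application code, and all names here are ours): a worker runs ahead on a predicted input value, and its result is kept only if the prediction matches the value the safe computation actually produces.

from concurrent.futures import ThreadPoolExecutor

def expensive_stage(x):
    # Work the speculative thread runs ahead on.
    return sum(i * x for i in range(100_000))

def safe_stage():
    # The non-speculative computation that actually produces the input value.
    return 7

with ThreadPoolExecutor(max_workers=2) as pool:
    predicted_input = 7                            # assumption made for the speculative thread
    speculative = pool.submit(expensive_stage, predicted_input)
    actual_input = safe_stage()                    # proceeds in parallel with the speculation

    if actual_input == predicted_input:
        result = speculative.result()              # speculation valid: keep its result
    else:
        speculative.cancel()                       # squash the speculative work
        result = expensive_stage(actual_input)     # re-run with the correct input

print(result)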
TLS assumes optimistically that a given portion of code (generally loops) can be safely executed in parallel. To do so, it divides the iteration space into chunks that are executed in parallel by different threads. A hardware or software monitor ensures that sequential semantics are kept (in other words, that the execution progresses as if the loop were executing sequentially). If a dependence violation appears, the speculative framework may choose to stop the entire parallel execution and restart it; to stop and restart the offending threads and all their successors, in order to be fed with correct data; or to stop exclusively the offending thread and its successors that have consumed
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What significant event does the legend associated with North Wind's Weir describe involving Storm Wind and his grandmother?
A. Storm Wind helped his grandmother defeat North Wind by creating a flood.
B. Storm Wind was warned by North Wind to stay away from the mountain.
C. Storm Wind built a stone weir to block the river.
D. Storm Wind and North Wind made peace after a long battle.
Answer:
|
A. Storm Wind helped his grandmother defeat North Wind by creating a flood.
|
Relavent Documents:
Document 0:::
XLA (Accelerated Linear Algebra) is an open-source compiler for machine learning developed by the OpenXLA project. XLA is designed to improve the performance of machine learning models by optimizing the computation graphs at a lower level, making it particularly useful for large-scale computations and high-performance machine learning models. Key features of XLA include:
Compilation of Computation Graphs: Compiles computation graphs into efficient machine code.
Optimization Techniques: Applies operation fusion, memory optimization, and other techniques.
Hardware Support: Optimizes models for various hardware, including CPUs, GPUs, and NPUs.
Improved Model Execution Time: Aims to reduce machine learning models' execution time for both training and inference.
Seamless Integration: Can be used with existing machine learning code with minimal changes.
XLA represents a significant step in optimizing machine learning models, providing developers with tools to enhance computational efficiency and performance.
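XLA is normally reached through a front end such as TensorFlow or JAX rather than called directly. The minimal JAX sketch below (our illustration, not from the text) JIT-compiles a small computation graph with XLA, which is where optimizations such as fusing the element-wise operations into a single kernel happen.

import jax
import jax.numpy as jnp

@jax.jit  # trace the Python function and compile the resulting graph with XLA
def scaled_softplus(x, alpha):
    # Several element-wise operations that XLA can fuse.
    return alpha * jnp.log1p(jnp.exp(x))

x = jnp.linspace(-3.0, 3.0, 8)
print(scaled_softplus(x, 0.5))  # first call compiles; later calls reuse the executable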
Supported target devices
x86-64
ARM64
NVIDIA GPU
AMD GPU
Intel GPU
Apple GPU
Google TPU
AWS Trainium, Inferentia
Cerebras
Graphcore IPU
Document 1:::
The European environmental research and innovation policy is a set of strategies, actions and programmes to promote more and better research and innovation for building a resource-efficient and climate resilient society and economy in sync with the natural environment. It is based on the Europe 2020 strategy for a smart, sustainable and inclusive economy and it realises the European Research Area (ERA) and Innovation Union in the field of environment.
The aim of the European environmental research and innovation policy is to contribute to a transformative agenda for Europe in the coming years, where the quality of life of the citizens and the environment are steadily improved, in sync with the competitiveness of businesses, the societal inclusion and the management of resources.
Main features
The European environmental research and innovation policy has a multidisciplinary character and involve efforts across many different sectors to provide safe, economically feasible, environmentally sound and socially acceptable solutions along the entire value chain of human activities. To reduce resource use and environmental impacts whilst increasing competitiveness requires a decisive societal and technological transition to an economy based on a sustainable relationship between nature and human well-being. The availability of sufficient raw materials is addressed as well as the creation of opportunities for growth and new jobs. Innovative options are developed in policies ranging across science, technology, economy, regulations, society and citizens’ behavior, and governance. Research and innovation activities improve the understanding and forecasting of climate and environmental change in a systemic and cross-sectoral perspective, reduce uncertainties, identify and assess vulnerabilities, risks, costs, mitigation measures and opportunities, as well as expand the range and improve the effectiveness of societal and policy responses and solutions.
International context
The European environmental research and innovation policy was placed in the context of the process at the United Nations to develop a set of Sustainable Development Goals (SDGs) that were agreed at the Rio+20 Conference on Sustainable development in 2012 and are now integrated into the United Nations development agenda beyond 2015. These goals have succeeded the Millennium Development Goals and are universally applicable to all Nations, hence also to the European Union and its Member States.
Implementation through Framework Programmes
The implementation of the European environmental research and innovation policy relies on a systemic approach to innovation for a system-wide transformation. To a large extent, it is carried out through the Framework Programmes for Research and Technological Development.
The current Framework Programme is called Horizon 2020 and environmental research and innovation is envisaged across the entire programme with an interdisciplinary approach. Current price estimates suggest that more than €6.5 billion per year could be made available for activities related to sustainable development during the duration of Horizon 2020, addressing both research and innovation differently from previous FPs. Horizon 2020 is open to cooperation with researchers and innovators world-wide in order to foster co-design and co-creation of solutions that may have a global impact.
New calls for research and innovation proposals have been opened on 14 October 2015. Information is contained in the Horizon 2020 participant portal.
Document 2:::
A user profile is a collection of settings and information associated with a user. It contains critical information that is used to identify an individual, such as their name, age, portrait photograph and individual characteristics such as knowledge or expertise. User profiles are most commonly present on social media websites such as Facebook, Instagram, and LinkedIn; and serve as voluntary digital identity of an individual, highlighting their key features and traits. In personal computing and operating systems, user profiles serve to categorise files, settings, and documents by individual user environments, known as ‘accounts’, allowing the operating system to be more friendly and catered to the user. Physical user profiles serve as identity documents such as passports, driving licenses and legal documents that are used to identify an individual under the legal system.
A user profile can also be considered as the computer representation of a user model. A user model is a (data) structure that is used to capture certain characteristics about an individual user, and the process of obtaining the user profile is called user modeling or profiling.
Origin
The origin of user profiles can be traced to the origin of the passport, an identity document (ID) made mandatory in 1920, after World War I following negotiations at the League of Nations. The passport served as an official government record of an individual. Consequently, the Immigration Act of 1924 was established to identify an individual's country of origin. In the 21st century, passports have become a highly sought-after commodity as they are widely accepted as a source of verifying an individual's identity under the legal system.
With the advent of digital revolution and social media websites, user profiles have transitioned to an organised group of data describing the interaction between a user and a system. Social media sites like Instagram allow individuals to create profiles that are representative of their
Document 3:::
Thulium is a chemical element; it has symbol Tm and atomic number 69. It is the thirteenth element in the lanthanide series of metals. It is the second-least abundant lanthanide in the Earth's crust, after radioactively unstable promethium. It is an easily workable metal with a bright silvery-gray luster. It is fairly soft and slowly tarnishes in air. Despite its high price and rarity, thulium is used as a dopant in solid-state lasers, and as the radiation source in some portable X-ray devices. It has no significant biological role and is not particularly toxic.
In 1879, the Swedish chemist Per Teodor Cleve separated two previously unknown components, which he called holmia and thulia, from the rare-earth mineral erbia; these were the oxides of holmium and thulium, respectively. His sample of thulium oxide contained impurities of ytterbium oxide. A relatively pure sample of thulium oxide was first obtained in 1911. The metal itself was first obtained in 1936 by Wilhelm Klemm and Heinrich Bommer.
Like the other lanthanides, its most common oxidation state is +3, seen in its oxide, halides and other compounds. In aqueous solution, like compounds of other late lanthanides, soluble thulium compounds form coordination complexes with nine water molecules.
Properties
Physical properties
Pure thulium metal has a bright, silvery luster, which tarnishes on exposure to air. The metal can be cut with a knife, as it has a Mohs hardness of 2 to 3; it is malleable and ductile. Thulium is ferromagnetic below 32 K, antiferromagnetic between 32 and 56 K, and paramagnetic above 56 K.
Thulium has two major allotropes: the tetragonal α-Tm and the more stable hexagonal β-Tm.
Chemical properties
Thulium tarnishes slowly in air and burns readily at 150 °C to form thulium(III) oxide:
4 Tm + 3 O2 → 2 Tm2O3
Thulium is quite electropositive and reacts slowly with cold water and quite quickly with hot water to form thulium hydroxide:
2 Tm + 6 H2O → 2 Tm(OH)3 + 3 H2
Thulium reacts with all the halogens. Reactions are slow at room temperature,
Document 4:::
In numerical analysis, Gauss–Legendre quadrature is a form of Gaussian quadrature for approximating the definite integral of a function. For integrating over the interval [−1, 1], the rule takes the form:
∫_{−1}^{1} f(x) dx ≈ Σ_{i=1}^{n} w_i f(x_i)
where
n is the number of sample points used,
wi are quadrature weights, and
xi are the roots of the nth Legendre polynomial.
This choice of quadrature weights wi and quadrature nodes xi is the unique choice that allows the quadrature rule to integrate degree 2n − 1 polynomials exactly.
Many algorithms have been developed for computing Gauss–Legendre quadrature rules. The Golub–Welsch algorithm presented in 1969 reduces the computation of the nodes and weights to an eigenvalue problem which is solved by the QR algorithm. This algorithm was popular, but significantly more efficient algorithms exist. Algorithms based on the Newton–Raphson method are able to compute quadrature rules for significantly larger problem sizes. In 2014, Ignace Bogaert presented explicit asymptotic formulas for the Gauss–Legendre quadrature weights and nodes, which are accurate to within double-precision machine epsilon for any choice of n ≥ 21. This allows for computation of nodes and weights for values of n exceeding one billion.
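The rule is simple to try with NumPy's built-in node and weight generator (numpy.polynomial.legendre.leggauss); the snippet below is a small check, not part of the article, showing that with n = 5 a degree-9 polynomial is integrated exactly while a non-polynomial integrand is merely well approximated.

import numpy as np

n = 5
nodes, weights = np.polynomial.legendre.leggauss(n)  # roots of P_5 and their weights

def gauss_legendre(f):
    # Approximate the integral of f over [-1, 1].
    return float(np.sum(weights * f(nodes)))

# Degree 9 = 2n - 1 is still exact: the integral of x^9 + x^8 over [-1, 1] is 2/9.
print(gauss_legendre(lambda x: x**9 + x**8), 2.0 / 9.0)

# cos(x) is only approximated (exact value 2*sin(1)), but the error is tiny.
print(gauss_legendre(np.cos), 2.0 * np.sin(1.0))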
History
Carl Friedrich Gauss was the first to derive the Gauss–Legendre quadrature rule, doing so by a calculation with continued fractions in 1814. He calculated the nodes and weights to 16 digits up to order n=7 by hand. Carl Gustav Jacob Jacobi discovered the connection between the quadrature rule and the orthogonal family of Legendre polynomials. As there is no closed-form formula for the quadrature weights and nodes, for many decades people were only able to hand-compute them for small n, and computing the quadrature had to be done by referencing tables containing the weight and node values. By 1942 these values were only known for up to n=16.
Definition
For integrating f over with Gauss–Legendre quadrature, the associated orthogonal polynomials are Legendre p
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary aim of the European environmental research and innovation policy?
A. To promote military advancements in Europe
B. To contribute to a transformative agenda for improving citizens' quality of life and the environment
C. To increase the production of fossil fuels
D. To reduce the number of researchers in the European Union
Answer:
|
B. To contribute to a transformative agenda for improving citizens' quality of life and the environment
|
Relavent Documents:
Document 0:::
Reed mats are handmade mats of plaited reed or other plant material.
East Asia
In Japan, a traditional reed mat is the tatami (畳). Tatami are covered with a weft-faced weave of (common rush), on a warp of hemp or weaker cotton. There are four warps per weft shed, two at each end (or sometimes two per shed, one at each end, to cut costs). The (core) is traditionally made from sewn-together rice straw, but contemporary tatami sometimes have compressed wood chip boards or extruded polystyrene foam in their cores, instead or as well. The long sides are usually edged with brocade or plain cloth, although some tatami have no edging.
Southeast Asia
In the Philippines, woven reed mats are called banig. They are used as sleeping mats or floor mats, and were also historically used as sails. They come in many different weaving styles and typically have colorful geometric patterns unique to the ethnic group that created them. They are made from buri palm leaves, pandan leaves, rattan, or various kinds of native reeds known by local names like tikog, sesed (Fimbristylis miliacea), rono, or bamban.
In Thailand and Cambodia, the mats are produced by plaiting reeds, strips of palm leaf, or some other easily available local plant. The supple mats made by this process of weaving without a loom are widely used in Thai homes. These mats are also now being made into shopping bags, place mats, and decorative wall hangings.
One popular kind of Thai mat is made from a kind of reed known as Kachud, which grows in the southern marshes. After the reeds are harvested, they are steeped in mud, which toughens them and prevents them from becoming brittle. They are then dried in the sun for a time and pounded flat, after which they are ready to be dyed and woven into mats of various sizes and patterns.
Other mats are produced in different parts of Thailand, most notably in the eastern province of Chanthaburi. Durable as well as attractive, they are plaited entirely by hand with an intricacy th
Document 1:::
In probability theory, the central limit theorem (CLT) states that, in many situations, when independent and identically distributed random variables are added, their properly normalized sum tends toward a normal distribution. This article gives two illustrations of this theorem. Both involve the sum of independent and identically-distributed random variables and show how the probability distribution of the sum approaches the normal distribution as the number of terms in the sum increases.
The first illustration involves a continuous probability distribution, for which the random variables have a probability density function. The second illustration, for which most of the computation can be done by hand, involves a discrete probability distribution, which is characterized by a probability mass function.
Illustration of the continuous case
The density of the sum of two independent real-valued random variables equals the convolution of the density functions of the original variables.
Thus, the density of the sum of m+n terms of a sequence of independent identically distributed variables equals the convolution of the densities of the sums of m terms and of n terms. In particular, the density of the sum of n+1 terms equals the convolution of the density of the sum of n terms with the original density (the "sum" of 1 term).
A probability density function is shown in the first figure below. Then the densities of the sums of two, three, and four independent identically distributed variables, each having the original density, are shown in the following figures.
If the original density is a piecewise polynomial, as it is in the example, then so are the sum densities, of increasingly higher degree. Although the original density is far from normal, the density of the sum of just a few variables with that density is much smoother and has some of the qualitative features of the normal density.
The convolutions were computed via the discrete Fourier transform. A list of value
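A rough numerical sketch of this procedure (an illustration added here, not taken from the article): a density discretized on a grid is convolved with itself via the discrete Fourier transform, so the density of the sum of n terms is obtained by repeated convolution. The uniform starting density, the grid spacing, and the number of terms are arbitrary choices for the example.

```python
# Sketch: density of the sum of n i.i.d. variables by repeated convolution,
# with each convolution computed via the (fast) discrete Fourier transform.
import numpy as np

def fft_convolve(a, b, dx):
    """Linear convolution of two sampled densities, scaled by the grid step."""
    n = len(a) + len(b) - 1
    return np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n) * dx

dx = 0.01
grid = np.arange(0.0, 1.0, dx)
density = np.ones_like(grid)            # uniform density on [0, 1)

p = density
for _ in range(3):                      # density of the sum of 4 terms
    p = fft_convolve(p, density, dx)

print(len(p), p.max())                  # already visibly bell-shaped
```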
Document 2:::
Probabilistic risk assessment (PRA) is a systematic and comprehensive methodology to evaluate risks associated with a complex engineered technological entity (such as an airliner or a nuclear power plant) or the effects of stressors on the environment (probabilistic environmental risk assessment, or PERA).
Risk in a PRA is defined as a feasible detrimental outcome of an activity or action. In a PRA, risk is characterized by two quantities:
the magnitude (severity) of the possible adverse consequence(s), and
the likelihood (probability) of occurrence of each consequence.
Consequences are expressed numerically (e.g., the number of people potentially hurt or killed) and their likelihoods of occurrence are expressed as probabilities or frequencies (i.e., the number of occurrences or the probability of occurrence per unit time). The total risk is the expected loss: the sum of the products of the consequences multiplied by their probabilities.
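As a toy illustration of that definition (added here; the scenario values are invented), the total risk can be computed as the probability-weighted sum of consequences:

```python
# Total risk as expected loss: sum over scenarios of consequence x probability.
# All numbers below are made up purely for illustration.
scenarios = [
    ("minor fault", 1.0, 1e-2),   # (name, consequence, probability per year)
    ("major fault", 1e2, 1e-4),
    ("rare severe", 1e5, 1e-7),
]

total_risk = sum(consequence * probability
                 for _, consequence, probability in scenarios)
print(total_risk)   # expected loss per year
```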
The spectrum of risks across classes of events are also of concern, and are usually controlled in licensing processes – it would be of concern if rare but high consequence events were found to dominate the overall risk, particularly as these risk assessments are very sensitive to assumptions (how rare is a high consequence event?).
Probabilistic risk assessment usually answers three basic questions:
What can go wrong with the studied technological entity or stressor, or what are the initiators or initiating events (undesirable starting events) that lead to adverse consequence(s)?
What and how severe are the potential detriments, or the adverse consequences that the technological entity (or the ecological system in the case of a PERA) may be eventually subjected to as a result of the occurrence of the initiator?
How likely to occur are these undesirable consequences, or what are their probabilities or frequencies?
Two common methods of answering this last question are event tree analysis and fault tree analysis – for explanat
Document 3:::
In mathematical optimization, a trust region is the subset of the region of the objective function that is approximated using a model function (often a quadratic). If an adequate model of the objective function is found within the trust region, then the region is expanded; conversely, if the approximation is poor, then the region is contracted.
The fit is evaluated by comparing the ratio of expected improvement from the model approximation with the actual improvement observed in the objective function. Simple thresholding of the ratio is used as the criterion for expansion and contraction—a model function is "trusted" only in the region where it provides a reasonable approximation.
Trust-region methods are in some sense dual to line-search methods: trust-region methods first choose a step size (the size of the trust region) and then a step direction, while line-search methods first choose a step direction and then a step size.
The general idea behind trust region methods is known by many names; the earliest use of the term seems to be by Sorensen (1982). A popular textbook by Fletcher (1980) calls these algorithms restricted-step methods. Additionally, in an early foundational work on the method, Goldfeld, Quandt, and Trotter (1966) refer to it as quadratic hill-climbing.
Example
Conceptually, in the Levenberg–Marquardt algorithm, the objective function is iteratively approximated by a quadratic surface, then using a linear solver, the estimate is updated. This alone may not converge nicely if the initial guess is too far from the optimum. For this reason, the algorithm instead restricts each step, preventing it from stepping "too far". It operationalizes "too far" as follows. Rather than solving A Δx = b for the step Δx, it solves (A + λ diag(A)) Δx = b, where diag(A) is the diagonal matrix with the same diagonal as A, and λ is a parameter that controls the trust-region size. Geometrically, this adds a paraboloid centered at Δx = 0 to the quadratic form, resulting in a smaller step.
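A minimal sketch of that damped step (an added illustration; the matrix A, right-hand side b, and damping values are placeholders) is:

```python
# Damped, trust-region-like step: solve (A + lam * diag(A)) dx = b instead
# of A dx = b; a larger lam yields a smaller, more conservative step.
import numpy as np

def damped_step(A, b, lam):
    D = np.diag(np.diag(A))              # diagonal matrix with A's diagonal
    return np.linalg.solve(A + lam * D, b)

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(damped_step(A, b, 0.0))            # undamped (Gauss-Newton-style) step
print(damped_step(A, b, 10.0))           # heavily damped, much smaller step
```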
The trick is to change the t
Document 4:::
Interactive Connectivity Establishment (ICE) is a technique used in computer networking to find ways for two computers to talk to each other as directly as possible in peer-to-peer networking. This is most commonly used for interactive media such as Voice over Internet Protocol (VoIP), peer-to-peer communications, video, and instant messaging. In such applications, communicating through a central server would be slow and expensive, but direct communication between client applications on the Internet is very tricky due to network address translators (NATs), firewalls, and other network barriers.
ICE is developed by the Internet Engineering Task Force MMUSIC working group and is published as RFC 8445, as of August 2018, and has obsolesced both RFC 5245 and RFC 4091.
Overview
Network address translation (NAT) became an effective technique in delaying the exhaustion of the available address pool of Internet Protocol version 4, which is inherently limited to around four billion unique addresses. NAT gateways track outbound requests from a private network and maintain the state of each established connection to later direct responses from the peer on the public network to the peer in the private network, which would otherwise not be directly addressable.
VoIP, peer-to-peer, and many other applications require address information of communicating peers within the data streams of the connection, rather than only in the Internet Protocol packet headers. For example, the Session Initiation Protocol (SIP) communicates the IP address of network clients for registration with a location service, so that telephone calls may be routed to registered clients. ICE provides a framework with which a communicating peer may discover and communicate its public IP address so that it can be reached by other peers.
Session Traversal Utilities for NAT (STUN) is a standardized protocol for such address discovery including NAT classification. Traversal Using Relays around NAT (TURN) places
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a primary function of a Storage Area Network (SAN)?
A. To provide access to consolidated, block level data storage
B. To manage user access to cloud applications
C. To connect multiple office locations via a VPN
D. To enhance the performance of personal computers
Answer:
|
A. To provide access to consolidated, block level data storage
|
Relevant Documents:
Document 0:::
The Olympia Eisschnellaufbahn is a speed skating venue located in Innsbruck, Austria. The outdoor venue hosted the speed skating events for the 1964 and 1976 Winter Olympics and for the 2012 Winter Youth Olympics.
It is part of the OlympiaWorld Innsbruck consortium that is responsible for maintaining venues that hosted both the 1964 and the 1976 Winter Olympics.
References
1964 Winter Olympics official report. p. 142.
1976 Winter Olympics official report. pp. 205–7.
Document 1:::
Domain engineering is the entire process of reusing domain knowledge in the production of new software systems. It is a key concept in systematic software reuse and product line engineering. A key idea in systematic software reuse is the domain. Most organizations work in only a few domains. They repeatedly build similar systems within a given domain with variations to meet different customer needs. Rather than building each new system variant from scratch, significant savings may be achieved by reusing portions of previous systems in the domain to build new ones.
The process of identifying domains, bounding them, and discovering commonalities and variabilities among the systems in the domain is called domain analysis. This information is captured in models that are used in the domain implementation phase to create artifacts such as reusable components, a domain-specific language, or application generators that can be used to build new systems in the domain.
In product line engineering, as defined by ISO26550:2015, Domain Engineering is complemented by Application Engineering which takes care of the life cycle of the individual products derived from the product line.
Purpose
Domain engineering is designed to improve the quality of developed software products through reuse of software artifacts. Domain engineering shows that most developed software systems are not new systems but rather variants of other systems within the same field. As a result, through the use of domain engineering, businesses can maximize profits and reduce time-to-market by using the concepts and implementations from prior software systems and applying them to the target system. The reduction in cost is evident even during the implementation phase. One study showed that the use of domain-specific languages allowed code size, in both number of methods and number of symbols, to be reduced by over 50%, and the total number of lines of code to be reduced by nearly 75%.
Domain engineering focuses
Document 2:::
Unimolecular ion decomposition is the fragmentation of a gas phase ion in a reaction with a molecularity of one. Ions with sufficient internal energy may fragment in a mass spectrometer, which in some cases may degrade the mass spectrometer performance, but in other cases, such as tandem mass spectrometry, the fragmentation can reveal information about the structure of the ion.
Wahrhaftig diagram
A Wahrhaftig diagram (named after Austin L. Wahrhaftig) illustrates the relative contributions in unimolecular ion decomposition of direct fragmentation and fragmentation following rearrangement. The x-axis of the diagram represents the internal energy of the ion. The lower part of the diagram shows the logarithm of the rate constant k for unimolecular dissociation whereas the upper portion of the diagram indicates the probability of forming a particular product ion. The green trace in the lower part of the diagram indicates the rate of the rearrangement reaction given by
ABCD+ → AD+ + BC
and the blue trace indicates the direct cleavage reaction
ABCD+ → AB+ + CD
A rate constant of 10⁶ s⁻¹ is sufficiently fast for ion decomposition within the ion source of a typical mass spectrometer. Ions with rate constants less than 10⁶ s⁻¹ and greater than approximately 10⁵ s⁻¹ (lifetimes between 10⁻⁵ and 10⁻⁶ s) have a high probability of decomposing in the mass spectrometer between the ion source and the detector. These rate constants are indicated in the Wahrhaftig diagram by the log k = 5 and log k = 6 dashed lines.
Indicated on the rate constant plot are the reaction critical energy (also called the activation energy) for the formation of AD+, E0(AD+) and AB+, E0(AB+). These represent the minimum internal energy of ABCD+ required to form the respective product ions: the difference in the zero point energy of ABCD+ and that of the activated complex.
When the internal energy of ABCD+ is greater than Em(AD+), the ions are metastable (indicated by m*); this occurs near lo
Document 3:::
In geometry, a vertex (: vertices or vertexes), also called a corner, is a point where two or more curves, lines, or line segments meet or intersect. For example, the point where two lines meet to form an angle and the point where edges of polygons and polyhedra meet are vertices.
Definition
Of an angle
The vertex of an angle is the point where two rays begin or meet, where two line segments join or meet, where two lines intersect (cross), or any appropriate combination of rays, segments, and lines that result in two straight "sides" meeting at one place.
Of a polytope
A vertex is a corner point of a polygon, polyhedron, or other higher-dimensional polytope, formed by the intersection of edges, faces or facets of the object.
In a polygon, a vertex is called "convex" if the internal angle of the polygon (i.e., the angle formed by the two edges at the vertex with the polygon inside the angle) is less than π radians (180°, two right angles); otherwise, it is called "concave" or "reflex". More generally, a vertex of a polyhedron or polytope is convex, if the intersection of the polyhedron or polytope with a sufficiently small sphere centered at the vertex is convex, and is concave otherwise.
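For illustration (not from the article), the convexity of a polygon vertex can be tested numerically with the sign of the 2D cross product of its two incident edges; the sketch below assumes the vertices are listed in counter-clockwise order.

```python
# Convexity test for a polygon vertex via the cross product of the two
# incident edges. Assumes counter-clockwise vertex order.
def is_convex_vertex(prev_pt, vertex, next_pt):
    ax, ay = vertex[0] - prev_pt[0], vertex[1] - prev_pt[1]
    bx, by = next_pt[0] - vertex[0], next_pt[1] - vertex[1]
    return ax * by - ay * bx > 0     # internal angle < 180 degrees

# Square listed counter-clockwise: every vertex is convex.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(all(is_convex_vertex(square[i - 1], square[i], square[(i + 1) % 4])
          for i in range(4)))        # True
```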
Polytope vertices are related to vertices of graphs, in that the 1-skeleton of a polytope is a graph, the vertices of which correspond to the vertices of the polytope, and in that a graph can be viewed as a 1-dimensional simplicial complex the vertices of which are the graph's vertices.
However, in graph theory, vertices may have fewer than two incident edges, which is usually not allowed for geometric vertices. There is also a connection between geometric vertices and the vertices of a curve, its points of extreme curvature: in some sense the vertices of a polygon are points of infinite curvature, and if a polygon is approximated by a smooth curve, there will be a point of extreme curvature near each polygon vertex.
Of a plane tiling
A vertex of a plane tiling or tessellatio
Document 4:::
A snake-arm robot is a slender hyper-redundant manipulator. The high number of degrees of freedom allows the arm to “snake” along a path or around an obstacle – hence the name “snake-arm”.
Definition
Snake-arm robots are also described as continuum robots and elephant's trunk robots although these descriptions are restrictive in their definitions and cannot be applied to all snake-arm robots.
A continuum robot is a continuously curving manipulator, much like the arm of an octopus.
An elephant's trunk robot is a good descriptor of a continuum robot. This has generally been associated with whole arm manipulation – where the entire arm is used to grasp and manipulate objects, in the same way that an elephant would pick up a ball.
This is an emerging field and as such there is no agreement on the best term for this class of robot.
Snake-arm robots are often used in association with another device. The function of the other device is to introduce the snake-arm into the confined space. Examples of possible introduction axes include mounting a snake-arm on a remote controlled vehicle or an industrial robot or designing a bespoke linear actuator. In this case the shape of the arm is coordinated with the linear movement of the introduction axis enabling the arm to follow a path into confined spaces.
Other features which are usually (but not always) associated with snake-arm robots:
Continuous diameter along the length of the arm
Self-supporting
Either tendon-driven or pneumatically controlled in most cases.
A snake-arm robot is not to be confused with a snakebot which mimics the biomorphic motion of a snake in order to slither along the ground.
Applications
The ability to reach into confined spaces lends itself to many applications involving access problems. The list below is not intended to be an exhaustive list of possibilities but merely an indication of where these robots are being used or developed for use.
Industry
Nuclear
Decommissioning
Repair
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What notable AI systems did Tuomas Sandholm develop that defeated top human players in poker?
A. AlphaGo and DeepMind
B. Libratus and Pluribus
C. Watson and Siri
D. DeepBlue and ChatGPT
Answer:
|
B. Libratus and Pluribus
|
Relevant Documents:
Document 0:::
A virtually safe dose (VSD) may be determined for those carcinogens not assumed to have a threshold. Virtually safe doses are calculated by regulatory agencies to represent the level of exposure to such carcinogenic agents at which an excess of cancers greater than that level accepted by society is not expected.
Document 1:::
Boletellus dicymbophilus is a species of fungus in the family Boletaceae. Found in Guyana, it was described as new to science in 2008.
References
Document 2:::
A graphing calculator (also graphics calculator or graphic display calculator) is a handheld computer that is capable of plotting graphs, solving simultaneous equations, and performing other tasks with variables. Most popular graphing calculators are programmable calculators, allowing the user to create customized programs, typically for scientific, engineering or education applications. They have large screens that display several lines of text and calculations.
History
An early graphing calculator was designed in 1921 by electrical engineer Edith Clarke. The calculator was used to solve problems with electrical power line transmission.
Casio produced the first commercially available graphing calculator in 1985. Sharp produced its first graphing calculator in 1986, with Hewlett Packard following in 1988, and Texas Instruments in 1990.
Features
Computer algebra systems
Some graphing calculators have a computer algebra system (CAS), which means that they are capable of producing symbolic results. These calculators can manipulate algebraic expressions, performing operations such as factor, expand, and simplify. In addition, they can give answers in exact form without numerical approximations. Calculators that have a computer algebra system are called symbolic or CAS calculators.
Laboratory usage
Many graphing calculators can be attached to devices like electronic thermometers, pH gauges, weather instruments, decibel and light meters, accelerometers, and other sensors and therefore function as data loggers, as well as WiFi or other communication modules for monitoring, polling and interaction with the teacher. Student laboratory exercises with data from such devices enhances learning of math, especially statistics and mechanics.
Games and utilities
Since graphing calculators are typically user-programmable, they are also widely used for utilities and calculator gaming, with a sizable body of user-created game software on most popular platforms. The ability
Document 3:::
The sweet pea, Lathyrus odoratus, is a flowering plant in the genus Lathyrus in the family Fabaceae (legumes), native to Sicily, southern Italy and the Aegean Islands.
It is an annual climbing plant, growing to a height of , where suitable support is available. The leaves are pinnate with two leaflets and a terminal tendril, which twines around supporting plants and structures, helping the sweet pea to climb. In the wild plant the flowers are purple, broad; they are larger and highly variable in color in the many cultivars. Flowers are usually strongly scented.
The annual species, L. odoratus, may be confused with the everlasting pea, L. latifolius, a perennial.
Horticultural development
Sweet peas, native to Sicily and Sardinia, were first mentioned by the Franciscan monk and botanist Francesco Cupani in the Hortus Catholicus (1696). Cupani first studied medicine, before entering the Franciscan order in 1681 at the age of 24, where he continued to cultivate his interest in natural sciences and botany, particularly to the study of the endemic flora of Sicily. In 1692, Cupani became the first Director of the botanic garden at Misilmeri, where he is believed to have cultivated sweet peas.
Cupani's scrambling sweet peas had small, short-stalked, bicolored flowers arranged in pairs and were sweetly scented, but went largely unnoticed by gardeners. Cupani is believed to have sent seed to a number of botanists, including the English botanists Robert Uvedale in Enfield, and Jacob Bobart in Oxford, and the Dutch botanist Jan Commelin who published a description and illustration of sweet peas growing in Amsterdam.
Despite the general lack of early interest amongst gardeners, some nurserymen including the British horticulturist Robert Furber began to offer sweet peas for sale as early as 1730. Still, by the mid 19th century only 5 cultivars or variants were available: Cupani's wild type sweet pea and types with white, black (or very dark purple), red, or mixed pink and whi
Document 4:::
Dr. Wolfgang Benjamin Klemperer (January 18, 1893 – March 25, 1965) was born in Dresden, Germany, the son of the Austrian nationals Leon and Charlotte Klemperer. He was in his time a prominent aviation and aerospace scientist and engineer, who ranks among the pioneers of early aviation.
He is probably best known for his work on Properties of Rosette Configurations of Gravitating Bodies in Homographic Equilibrium, which have been named after him as Klemperer rosettes.
Klemperer was engaged in the development of airships or zeppelins, both in Germany and the US, high-altitude balloons, specialized optics, a high-speed wide-angle cine-camera, analogue computers, equipment for data processing, and flight simulators. He became the preeminent missile scientist of the Douglas Aircraft Company and in 1958 the director of the guided missile research section, staff assistant to the vice-president, and director of product development.
Throughout his career he published regularly, often with his first and middle names abbreviated to W. B. Klemperer, in scientific magazines on aerodynamics, space flight and navigation, as well as sailplanes.
In 1962 he published his work on the Klemperer rosette.
European years
Klemperer grew up in the city of Dresden, where he also attended school. After graduating from grammar school, he enrolled at Dresden Institute of Technology. At the outbreak of World War I in 1914, being an Austrian national, he performed his military service in the Austrian Air Force.
After the war in 1918 he continued his studies at Dresden University of Technology (TU Dresden) and graduated there in 1920 as Dipl.-Ingenieur.
In 1920 he joined the Aachen Aerodynamics Institute as an assistant to Professor Theodore von Kármán, who later became the founder of the Jet Propulsion Laboratory (JPL). Alongside his academic career he also worked from 1922 to 1924 for Dr. Hugo Junkers at his glider plane manufacturing plant Aachener Segelflugzeugbau in Aachen, Germany.
From
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the significance of Boletellus dicymbophilus in the context of scientific research?
A. It is a well-known edible mushroom.
B. It was first discovered in the United States.
C. It is a newly described species of fungus.
D. It belongs to the family Agaricaceae.
Answer:
|
C. It is a newly described species of fungus.
|
Relevant Documents:
Document 0:::
The I. I. Rabi Award, founded in 1983, is awarded annually by IEEE.
"The Rabi Award is to recognize outstanding contributions related to the fields of atomic and molecular frequency standards, and time transfer and dissemination."
The award is named after Isidor Isaac Rabi, Nobel Prize winner in 1944. He was the first recipient of the award, for his experimental and theoretical work on atomic beam resonance spectroscopy.
Recipients
1983 - I. I. Rabi
1984 - David W. Allan
1985 - Norman Ramsey, Nobel Prize in 1989
1986 - Jerrold R. Zacharias
1987 - Louis Essen
1988 - Gernot M. R. Winkler
1989 - Leonard S. Cutler
1990 - Claude Audoin
1991 - Andrea De Marchi
1992 - James A. Barnes
1993 - Robert F. C. Vessot
1994 - Jacques Vanier
1995 - Fred L. Walls
1996 - Andre Clairon and Robert E. Drullinger
1997 - Harry E. Peters and Nikolai A. Demidov
1998 - David J. Wineland, Nobel Prize in 2012
1999 - Bernard Guinot
2000 - William J. Riley Jr.
2001 - Lute Maleki
2002 - Jon H. Shirley
2003 - Andreas Bauch
2004 - John L. Hall, Nobel Prize in 2005
2005 - Theodor W. Hänsch, Nobel Prize in 2005
2006 - James C. Bergquist
2007 - Patrick Gill and Leo Hollberg
2008 - Hidetoshi Katori
2009 - John D. Prestage
2010 - Long Sheng Ma
2011 - Fritz Riehle
2012 - James Camparo
2013 - Judah Levine
2014 - Harald R. Telle
2015 - Ulrich L. Rohde
2016 - John Kitching
2017 - Scott Diddams
2018 - Jun Ye
2019 - Steven Jefferts
2020 - Robert Lutwak
2021 - Ekkehard Peik
See also
List of physics awards
References
Document 1:::
In finance, the Treynor reward-to-volatility model (sometimes called the reward-to-volatility ratio or Treynor measure), named after American economist Jack L. Treynor, is a measurement of the returns earned in excess of that which could have been earned on an investment that has no risk that can be diversified (e.g., Treasury bills or a completely diversified portfolio), per unit of market risk assumed.
The Treynor ratio relates excess return over the risk-free rate to the additional risk taken; however, systematic risk is used instead of total risk. The higher the Treynor ratio, the better the performance of the portfolio under analysis.
Formula

The Treynor ratio is defined as

T = (r_i − r_f) / β_i

where:
T is the Treynor ratio,
r_i is portfolio i's return,
r_f is the risk-free rate,
β_i is portfolio i's beta (its systematic risk).
Example
Taking the equation detailed above, let us assume that the expected portfolio return is 20%, the risk free rate is 5%, and the beta of the portfolio is 1.5. Substituting these values, we get T = (0.20 − 0.05) / 1.5 = 0.10.
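The same computation, written as a small illustrative function (the function name is an arbitrary choice, not from the text):

```python
# Treynor ratio: excess return over the risk-free rate per unit of beta.
def treynor_ratio(portfolio_return, risk_free_rate, beta):
    return (portfolio_return - risk_free_rate) / beta

# Reproduces the example above: (0.20 - 0.05) / 1.5 = 0.10
print(treynor_ratio(0.20, 0.05, 1.5))
```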
Limitations
Like the Sharpe ratio, the Treynor ratio (T) does not quantify the value added, if any, of active portfolio management. It is a ranking criterion only. A ranking of portfolios based on the Treynor ratio is only useful if the portfolios under consideration are sub-portfolios of a broader, fully diversified portfolio. If this is not the case, portfolios with identical systematic risk, but different total risk, will be rated the same. But the portfolio with a higher total risk is less diversified and therefore has a higher unsystematic risk which is not priced in the market.
An alternative method of ranking portfolio management is Jensen's alpha, which quantifies the added return as the excess return above the security market line in the capital asset pricing model. As these two methods both determine rankings based on systematic risk alone, they will rank portfolios identically.
See also
Bias ratio (finance)
Hansen-Jagannathan bound
Jensen's alpha
Modern portfolio theory
Modigliani r
Document 2:::
The Center for the Blue Economy (CBE) is a research center managed by the Middlebury Institute of International Studies (MIIS) in Monterey, California. The CBE research focuses on the Blue Economy. The CBE was founded in 2011. It received the initial fund of $1 million from Robin and Deborah Hicks, the parents of the Middlebury College students, in their capacities as trustees of the Loker Foundation.
Professor Jason Scorse, who is also the Head/Chair of International Environmental Policy (IEP) program at MIIS, is the Director for the Center for the Blue Economy. The CBE was created to address the issues related to "Blue Economy" in the ocean and coastal areas.
Research focus
The research at the CBE mainly focuses on determining the factors that ensure sustainability and economics of oceans and coastal regions. The research at the center provides open-access data to different stakeholders, including businesses, governments, nonprofits that could help them to make decisions for managing ocean and coastal resources. The research also focuses on climate change adaptation in coastal areas, governing the environmental issues and providing possible solutions considering the ocean and coastal issues. The center collaborates with various local and national organizations and they worked on a wide range of topics related to ocean and coastal areas. The center also offers specialization course of Ocean and Coastal Resource Management(OCRM) for the IEP program.
CBE Advisory Council
The CBE Advisory Council has experts from different backgrounds and experiences, including marine science, policy, and business. The team aims to shape the future of the blue economy.
Grants and funding
In 2018, the CBE received grants from three major sources: 71% from the federal government, 28% from state and local agencies, and 1% from other sources.
Speaker series
The CBE hosts the Speakers Series which are events where professionals fr
Document 3:::
Recurrence period density entropy (RPDE) is a method, in the fields of dynamical systems, stochastic processes, and time series analysis, for determining the periodicity, or repetitiveness of a signal.
Overview
Recurrence period density entropy is useful for characterising the extent to which a time series repeats the same sequence, and is therefore similar to linear autocorrelation and time delayed mutual information, except that it measures repetitiveness in the phase space of the system, and is thus a more reliable measure based upon the dynamics of the underlying system that generated the signal. It has the advantage that it does not require the assumptions of linearity, Gaussianity or dynamical determinism. It has been successfully used to detect abnormalities in biomedical contexts such as speech signal.
The RPDE value is a scalar in the range zero to one. For purely periodic signals the RPDE value is zero, whereas for purely i.i.d., uniform white noise it is approximately one.
Method description
The RPDE method first requires the embedding of a time series in phase space, which, according to stochastic extensions to Takens' embedding theorems, can be carried out by forming time-delayed vectors
X_n = [x_n, x_{n+τ}, ..., x_{n+(M−1)τ}]
for each value x_n in the time series, where M is the embedding dimension, and τ is the embedding delay. These parameters are obtained by systematic search for the optimal set (due to lack of practical embedding parameter techniques for stochastic systems) (Stark et al. 2003). Next, around each point in the phase space, an ε-neighbourhood (an M-dimensional ball with this radius) is formed, and every time the time series returns to this ball, after having left it, the time difference T between successive returns is recorded in a histogram. This histogram is normalised to sum to unity, to form an estimate of the recurrence period density function P(T). The normalised entropy of this density:
H_norm = −(1 / ln T_max) Σ_{T=1}^{T_max} P(T) ln P(T)
is the RPDE value, where T_max is the largest recurrence value (typically on the order of 1000 samples). Note that RPDE i
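A rough sketch of this procedure (added for illustration; the embedding dimension, delay, ball radius, and the simplified return-time bookkeeping are assumptions, not part of the original method description):

```python
# Sketch of RPDE: delay-embed the series, histogram the times between
# successive returns to an eps-ball around each embedded point, and take
# the normalised entropy of that histogram.
import numpy as np

def rpde(x, M=4, tau=1, eps=0.1, t_max=1000):
    N = len(x) - (M - 1) * tau
    # Delay embedding: rows are [x_n, x_{n+tau}, ..., x_{n+(M-1)tau}].
    emb = np.column_stack([x[i * tau:i * tau + N] for i in range(M)])

    return_times = []
    for i in range(N):
        inside = np.linalg.norm(emb - emb[i], axis=1) < eps
        left, last = False, i
        for j in range(i + 1, N):
            if not inside[j]:
                left = True
            elif left:                      # re-entered the ball after leaving
                return_times.append(j - last)
                last, left = j, False

    counts = np.bincount(return_times, minlength=t_max + 1)[1:t_max + 1]
    P = counts / counts.sum()
    P = P[P > 0]
    return float(-(P * np.log(P)).sum() / np.log(t_max))

# A strongly periodic signal should give a small RPDE value.
print(rpde(np.sin(np.linspace(0, 30 * np.pi, 2000))))
```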
Document 4:::
Ralph Philip Boas Jr. (August 8, 1912 – July 25, 1992) was a mathematician, teacher, and journal editor. He wrote over 200 papers, mainly in the fields of real and complex analysis.
Biography
He was born in Walla Walla, Washington, the son of an English professor at Whitman College, but moved frequently as a child; his younger sister, Marie Boas Hall, later to become a historian of science, was born in Springfield, Massachusetts, where his father had become a high school teacher. He was home-schooled until the age of eight, began his formal schooling in the sixth grade, and graduated from high school while still only 15. After a gap year auditing classes at Mount Holyoke College (where his father had become a professor) he entered Harvard, intending to major in chemistry and go into medicine, but ended up studying mathematics instead. His first mathematics publication was written as an undergraduate, after he discovered an incorrect proof in another paper. He got his A.B. degree in 1933, received a Sheldon Fellowship for a year of travel, and returned to Harvard for his doctoral studies in 1934. He earned his doctorate there in 1937, under the supervision of David Widder.
After postdoctoral studies at Princeton University with Salomon Bochner, and then the University of Cambridge in England, he began a two-year instructorship at Duke University, where he met his future wife, Mary Layne, also a mathematics instructor at Duke. They were married in 1941, and when the United States entered World War II later that year, Boas moved to the Navy Pre-flight School in Chapel Hill, North Carolina. In 1942, he interviewed for a position in the Manhattan Project, at the Los Alamos National Laboratory, but ended up returning to Harvard to teach in a Navy instruction program there, while his wife taught at Tufts University.
Beginning when he was an instructor at Duke University, Boas had become a prolific reviewer for Mathematical Reviews, and at the end of the war he took a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is one significant benefit of virtual machining according to the text?
A. It guarantees the lowest cost for part manufacturing.
B. It allows for testing of new alloys without real machining.
C. It eliminates the need for skilled machine tool operators.
D. It helps reveal errors in machining processes without wasting materials.
Answer:
|
D. It helps reveal errors in machining processes without wasting materials.
|
Relevant Documents:
Document 0:::
Jewelry wire is wire, usually copper, brass, nickel, aluminium, silver, or gold, used in jewelry making.
Wire is defined today as a single, usually cylindrical, elongated strand of drawn metal. However, when wire was first invented over 2,000 years BC, it was made from gold nuggets pounded into flat sheets, which were then cut into strips. The strips were twisted and then rolled into the round shape we call wire. This early wire, which was used in making jewelry, can be distinguished from modern wire by the spiral line along the wire created by the edges of the sheet.
Modern wire is manufactured in a different process that was discovered in Ancient Rome. In this process, a solid metal cylinder is pulled through a draw plate with holes of a defined size. Thinner sizes of wire are made by pulling wire through successively smaller holes in the draw plate until the desired size is reached.
When wire was first invented, its use was limited to making jewelry. Today, wire is used extensively in many applications including fencing, the electronics industry, electrical distribution, and the making of wire wrapped jewelry.
Wire hardness
All metals have a property called hardness, which is the property of the metal that resists bending. Soft metals are pliable and easy to bend while hard metals are stiff and hard to bend. The hardness of metals can be changed by annealing with heat treatment, or by work hardening a wire by bending it.
Most modern manufacturers of jewelry wire make the wire with a defined hardness, generally a hardness of 0, 1, 2, 3, or 4. Historically, these numbers were associated with the number of times that the wire was pulled through a draw plate, becoming harder or stiffer each time it was drawn through the drawplate. A hardness of 0 meant that the wire had been drawn through only once and was as soft and as pliable as possible. A hardness of 4 meant that the wire had been drawn through five or more times and the wire was as stiff and as hard as possible. Most jewelry wire that is sold now is designated dead soft, half-hard, or hard, where dead soft is wire that is manufactured with a hardness of 0, half-hard is wire manufactured with a hardness of 2, and fully hardened wire is wire with a hardness of 4.
Dead soft wire is extremely soft and pliable. It can be easily bent and is excellent for making rounded shapes such as spirals. It is also excellent for wrapping wire around beads to make them look as though they are encased. The disadvantage of using soft wire is that the finished piece can be bent out of shape if not properly handled.
Half-hard wire is slightly stiffer than dead soft wire. Half-hard wire is excellent for making tight, angular bends, for making loops in wire, and for wrapping wire around itself. However, it is not very useful for making spirals. Finished pieces made with half-hard wire are usually more permanent than pieces made with soft wire.
Hard wire is very stiff and tends to spring back after being bent, making it harder to work with when using a jig; it cannot be used to make a spiral. Pieces made with hard wire have the advantage that they are not easily accidentally deformed.
As in many things, no single wire is perfect for all applications. Soft wire is easy to bend and shape, but the finished product may be bent out of shape if squeezed. Hard wire is difficult to bend but makes permanent shapes. Half-hard wire is a compromise between the two. Wire-wrapped jewelry can be made by wire which is initially soft, simplifying fabrication, but later hardened by hammering or by work hardening.
Wire shape
Historically, all wire was round. Advances in technology now allow the manufacture of jewelry wire with different cross-sectional shapes, including circular, square, and half-round. Half round wire is often wrapped around other pieces of wire to connect them. Square wire is used for its appearance: the corners of the square add interest to the finished jewelry. Square wire can be twisted to create interesting visual effects.
Wire size
For jewelry applications, gauges 12–28 are most common. The size of wire is defined by one of two measuring systems. The American wire gauge (AWG) and the Standard wire gauge (SWG) systems. AWG is usually, but not always, the standard for defining the sizes of wire used in the United States, and SWG is usually, but not always, the standard wire sizing system used in the United Kingdom. With both the AWG and SWG systems, the larger the number, the smaller the gauge. For example: 2-gauge wire is large (like a pencil) and 30-gauge wire is fine, like thread. In much of the world wire diameter is often expressed in millimeters.
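As a hedged illustration of how the AWG system maps gauge numbers to physical size (the formula below is the standard AWG definition, not something stated in this text), the diameters it gives agree with the rough millimetre sizes quoted in the next paragraph:

```python
# Standard AWG formula: diameter of gauge n is 0.005 in * 92**((36 - n) / 39).
# Converted to millimetres, e.g. 20-gauge is about 0.8 mm, 26-gauge about 0.4 mm.
def awg_diameter_mm(n):
    return 0.005 * 92 ** ((36 - n) / 39) * 25.4

for gauge in (12, 16, 18, 20, 26):
    print(gauge, round(awg_diameter_mm(gauge), 2))
```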
For making jump rings, 10- to 18-gauge wire (2.5 to 1.3 mm) is used. Bracelet and necklace wire components are generally made out of wire that is 16-, 18- or 20-gauge (1.3 to 0.8 mm). Earring wires are usually made out of 18- or 20-gauge wire (1.0 to 0.8 mm). When making wire wrapped jewelry, these components are connected to one another with wire that is generally 20- to 26-gauge (0.8 to 0.4 mm). Frequently the connections between wire components will include a bead on the wire connector in a technique called a wire-wrapped loop. Most glass beads (but not all) are manufactured with a hole that is 1 mm in size. This will accommodate 20-gauge wire, but will probably not accommodate 18-gauge wire. Some glass beads, almost all freshwater pearls and some gemstone beads will have smaller holes and will require the use of wire thinner than 20-gauge. (The largest wire that can go through the beads is generally chosen. Beads and gemstones are much harder than the wire, and will over time saw into the wire; so thicker wire will last longer.)
Thick wire, of 16-gauge and heavier, is harder to bend and requires more expert handling. Hammering wire with a plastic or rawhide mallet will harden wire without changing its shape. Hammering wire with a metal jeweler's hammer (chasing hammer) will harden and flatten wire.
For thickness of body jewelry sizes, gauges of all sizes can be found, notably with stretching.
See also
Body jewelry sizes
:Category: Jewellery components
French wire
Handmade jewelry
Jig (jewellery)
Wire gauge
Wire sculpture
Wire wrapped jewelry
References
Ogden, Jack, 1992, Ancient Jewelry (in the Interpreting the Past series), University of California Press,
Document 1:::
The fungi imperfecti or imperfect fungi are fungi which do not fit into the commonly established taxonomic classifications of fungi that are based on biological species concepts or morphological characteristics of sexual structures because their sexual form of reproduction has never been observed. They are known as imperfect fungi because only their asexual and vegetative phases are known. They have asexual form of reproduction, meaning that these fungi produce their spores asexually, in the process called sporogenesis.
There are about 25,000 species that have been classified in the phylum Deuteromycota and many are Basidiomycota or Ascomycota anamorphs. Fungi producing the antibiotic penicillin and those that cause athlete's foot and yeast infections are imperfect fungi. In addition, there are a number of edible imperfect fungi, including the ones that provide the distinctive characteristics of Roquefort and Camembert cheese.
Other, more informal names besides phylum Deuteromycota (or class "Deuteromycetes") and fungi imperfecti are anamorphic fungi, or mitosporic fungi, but these are terms without taxonomic rank. Examples are Alternaria, Colletotrichum, Trichoderma etc. The class Phycomycetes ("algal fungi") has also been used.
Problems in taxonomic classification
Although Fungi imperfecti/Deuteromycota is no longer formally accepted as a taxon, many of the fungi it included have yet to find a place in modern fungal classification. This is because most fungi are classified based on characteristics of the fruiting bodies and spores produced during sexual reproduction, and members of the Deuteromycota have been observed to reproduce only asexually or produce no spores.
Mycologists formerly used a unique dual system of nomenclature in classifying fungi, which was permitted by Article 59 of the International Code of Botanical Nomenclature (the rules governing the naming of plants and fungi). However, the system of dual nomenclature for fungi was abolished in the 2011
Document 2:::
Spinnbarkeit (), also known as fibrosity, is a biomedical rheology term which refers to the stringy or stretchy property found to varying degrees in mucus, saliva, albumen and similar viscoelastic fluids. The term is used especially with reference to cervical mucus at the time just prior to or during ovulation.
Under the influence of estrogens, cervical mucus becomes abundant, clear, and stretchable, and somewhat like egg white. The stretchability of the mucus is described by its spinnbarkeit, from the German word for the ability to be spun. Only such mucus appears to be able to be penetrated by sperm. After ovulation, the character of cervical mucus changes, and under the influence of progesterone it becomes thick, scant, and tacky. Sperm typically cannot penetrate it.
Saliva does not always exhibit spinnbarkeit, but it can under certain circumstances. The thickness and spinnbarkeit of nasal mucus are factors in whether or not the nose seems to be blocked.
Mucociliary transport depends on the interaction of fibrous mucus with beating cilia.
See also
Mucorrhea
References
Document 3:::
Ankyrin 1, also known as ANK-1, and erythrocyte ankyrin, is a protein that in humans is encoded by the ANK1 gene.
Tissue distribution
The protein encoded by this gene, Ankyrin 1, is the prototype of the ankyrin family, was first discovered in erythrocytes, but since has also been found in brain and muscles.
Genetics
Complex patterns of alternative splicing in the regulatory domain, giving rise to different isoforms of ankyrin 1 have been described, however, the precise functions of the various isoforms are not known. Alternative polyadenylation accounting for the different sized erythrocytic ankyrin 1 mRNAs, has also been reported. Truncated muscle-specific isoforms of ankyrin 1 resulting from usage of an alternate promoter have also been identified.
Disease linkage
Mutations in erythrocytic ankyrin 1 have been associated in approximately half of all patients with hereditary spherocytosis.
ANK1 shows altered methylation and expression in Alzheimer's disease. A gene expression study of postmortem brains has suggested ANK1 interacts with interferon-γ signalling.
Function
The ANK1 protein belongs to the ankyrin family that are believed to link the integral membrane proteins to the underlying spectrin-actin cytoskeleton and play key roles in activities such as cell motility, activation, proliferation, contact, and maintenance of specialized membrane domains. Multiple isoforms of ankyrin with different affinities for various target proteins are expressed in a tissue-specific, developmentally regulated manner. Most ankyrins are typically composed of three structural domains: an amino-terminal domain containing multiple ankyrin repeats; a central region with a highly conserved spectrin-binding domain; and a carboxy-terminal regulatory domain, which is the least conserved and subject to variation.
The small ANK1 (sAnk1) protein splice variants makes contacts with obscurin, a giant protein surrounding the contractile apparatus in striated muscle.
Interactions
A
Document 4:::
Sexual communication is a conversation between partners about sex, which is necessary to obtain sexual consent, to learn about likes and dislikes, and to obtain sexual satisfaction.
Sexual communication is a transitional stage from the romantic period of a relationship to a closer intimate and sexual relationship between partners.
Sexual communication in different countries is based on the partners' chosen religion and marriage customs, so it can start at different stages of the partners' relationship. Sexual communication is not primary in the relationship of partners, and in harmonious relationships it occurs after the spiritual perception of the partner.
Meaning for sex life
Sexual communication is necessary for partners to share their sexual experiences with each other based on trust and respect, in a safe manner.
In order to initiate sexual relations, consent to sex is essential, meaning that consent is free, informed, revocable, enthusiastic and specific. When "NO" means - NO.
Sexual relations that began without consent to sex, as well as consent to sex that was obtained through a false promise to marry, are considered rape.
Sexual communication includes being aware of the STD (sexually transmitted disease) status of partners, discussing the purpose of the relationship, getting tested for STDs together, and safe sex and contraceptives.
Positive sexual communication is associated with sexual satisfaction in relationships and well-being.
Sexual communication helps partners to better understand each other to build intimate relationships, to understand the differences in perceptions and feelings of partners in sexual activity.
Talking about sex opens up self-awareness for self-reflection for conscious sex with a partner.
Forms of sexual communication
Sexual communication occurs in various forms:
Physical communication is physical actions that speak louder than words: a look, a gently placed hand can speak better than verbally.
Sexual cues are v
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary metal used in the production of jewelry wire mentioned in the text?
A. Copper
B. Silver
C. Gold
D. Brass
Answer:
|
A. Copper
|
Relevant Documents:
Document 0:::
A trackball is a pointing device consisting of a ball held by a socket containing sensors to detect a rotation of the ball about two axes—like an upside-down ball mouse with an exposed protruding ball. Users roll the ball to position the on-screen pointer, using their thumb, fingers, or the palm of the hand, while using the fingertips to press the buttons.
With most trackballs, operators have to lift their finger, thumb or hand and reposition it on the ball to continue rolling, whereas a mouse itself would have to be lifted and repositioned. Some trackballs have notably low friction, as well as being made of a dense material such as phenolic resin, so they can be spun to make them coast. The trackball's buttons may be in similar positions to those of a mouse, or configured to suit the user.
Large trackballs are common on CAD workstations for easy precision. Before the advent of the touchpad, small trackballs were common on portable computers (such as the BlackBerry Tour) where there may be no desk space on which to run a mouse. Some small "thumballs" are designed to clip onto the side of the keyboard and have integral buttons with the same function as mouse buttons.
History
The trackball was invented as part of a post-World War II-era radar plotting system named Comprehensive Display System (CDS) by Ralph Benjamin when working for the British Royal Navy Scientific Service. Benjamin's project used analog computers to calculate the future position of target aircraft based on several initial input points provided by a user with a joystick. Benjamin felt that a more elegant input device was needed and invented a ball tracker system called the roller ball for this purpose in 1946. The device was patented in 1947, but only a prototype using a metal ball rolling on two rubber-coated wheels was ever built and the device was kept as a military secret. Production versions of the CDS used joysticks.
The CDS system had also been viewed by a number of engineers from Ferra
Document 1:::
This is a list of lighthouses in Antarctica.
Lighthouses
See also
Lists of lighthouses and lightvessels
References
External links
Document 2:::
In the field of bioengineering, biofunctionalisation (or biofunctionalization) is the modification of a material to have biological function and/or stimulus, whether permanent or temporary, while at the same time being biologically compatible.
Various types of medical implants are designed to biofunctionalize so that they are accepted by the host organism to replace or repair a defective biological function.
References
Document 3:::
Vomeronasal type-1 receptor 4 is a protein that in humans is encoded by the VN1R4 gene.
References
Further reading
Document 4:::
A star war was a decisive conflict between rival polities of the Maya civilization during the first millennium AD. The term comes from a specific type of glyph used in the Maya script, which depicts a star showering the earth with liquid droplets, or a star over a shell. It represents a verb but its phonemic value and specific meaning have not yet been deciphered. The name "star war" was coined by the epigrapher Linda Schele to refer to the glyph, and by extension to the type of conflict that it indicates.
Examples
Maya inscriptions assign episodes of Maya warfare to four distinct categories, each represented by its own glyph. Those accorded the greatest significance by the Maya were described with the "star war" glyph, representing a major war resulting in the defeat of one polity by another. This represents the installation of a new dynastic line of rulers, complete dominion of one polity over another, or a successful war of independence by a formerly dominated polity.
Losing a star war could be disastrous for the defeated party. The first recorded star war in 562, between Caracol and Tikal, resulted in a 120-year hiatus for the latter city. It saw a decline in Tikal's population, a cessation of monument erection, and the destruction of certain monuments in the Great Plaza. When Calakmul defeated Naranjo in a star war on December 25, 631, it resulted in Naranjo's ruler being tortured to death and possibly eaten. Another star war in February 744 resulted in Tikal sacking Caracol and capturing a personal god effigy of its ruler. An inscription from a monument found at Tortuguero (dating from 669) describes the aftermath of a star war: "the blood was pooled, the skulls were piled".
Astronomical connections
Mayanists have noted that the dates of recorded star wars often coincide with astronomical events involving the planet Venus, either when it was first visible in the morning or night sky or during its absence at inferior conjunction. Venus was known to Mesoame
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What was the first successful Japanese-designed and constructed airplane called, and who designed it?
A. Yoshitoshi
B. Tokorozawa
C. Captain Yoshitoshi Tokugawa
D. Aviation Museum
Answer:
|
C. Captain Yoshitoshi Tokugawa
|
Relevant Documents:
Document 0:::
The Paraná Delta is the delta of the Paraná River in Argentina and it consists of several islands known as the Islas del Paraná. The Paraná flows north–south and becomes an alluvial basin (a flood plain) between the Argentine provinces of Entre Ríos, Santa Fe and Buenos Aires before emptying into the Río de la Plata.
It covers about and starts to form between the cities of Santa Fe and Rosario, where the river splits into several arms, creating a network of islands and wetlands. Most of it is in the jurisdiction of Entre Ríos Province, and parts in the north of Buenos Aires Province.
The Paraná Delta is conventionally divided into three parts:
the Upper Delta, from the Diamante – Puerto Gaboto line to Villa Constitución;
the Middle Delta, from Villa Constitución to the Ibicuy Islands;
the Lower Delta, from the Ibicuy Islands to the mouth of the river.
The total length of the delta is about , and its width varies between . It carries 160 million tonnes of suspended sediment (about half of it coming from the Bermejo River through the Paraguay River) and advances from (depending on the source) per year over the Río de la Plata. It is the world's only river delta that is in contact not with the sea but with another river.
The Lower Delta was the site of the first modern settlements in the Paraná-Plata basin and is today densely populated, being the agricultural and industrial core of Argentina and host to several major ports. The main course of the Paraná lies on the west of the delta, and is navigable downstream from Puerto General San Martín by ships up to Panamax kind.
Rivers of the delta
Among the many arms of the river are the Paraná Pavón, the Paraná Ibicuy, the Paraná de las Palmas, the Paraná Guazú and the smaller Paraná Miní and Paraná Bravo.
The Paraná Pavón is the first major branch. It has a meandering course that starts on the eastern side, opposite Villa Constitución. Between the main Paraná and the Paraná Pavón lie the Lechiguanas Islands. The Paraná Pavón
Document 1:::
The Shore durometer is a device for measuring the hardness of a material, typically of polymers.
Higher numbers on the scale indicate a greater resistance to indentation and thus harder materials. Lower numbers indicate less resistance and softer materials.
The term is also used to describe a material's rating on the scale, as in an object having a "'Shore durometer' of 90."
The scale was defined by Albert Ferdinand Shore, who developed a suitable device to measure hardness in the 1920s. It was neither the first hardness tester nor the first to be called a durometer (ISV duro- and -meter; attested since the 19th century), but today that name usually refers to Shore hardness; other devices use other measures, which return corresponding results, such as for Rockwell hardness.
Durometer scales
There are several scales of durometer, used for materials with different properties. The two most common scales, using slightly different measurement systems, are the ASTM D2240 type A and type D scales.
The A scale is for softer materials, while the D scale is for harder ones.
However, the ASTM D2240-00 testing standard calls for a total of 12 scales, depending on the intended use: types A, B, C, D, DO, E, M, O, OO, OOO, OOO-S, and R. Each scale results in a value between 0 and 100, with higher values indicating a harder material.
Method of measurement
Durometer, like many other hardness tests, measures the depth of an indentation in the material created by a given force on a standardized presser foot. This depth is dependent on the hardness of the material, its viscoelastic properties, the shape of the presser foot, and the duration of the test. ASTM D2240 durometers allow for a measurement of the initial hardness, or the indentation hardness after a given period of time. The basic test requires applying the force in a consistent manner, without shock, and measuring the hardness (depth of the indentation). If
Document 2:::
Spiruchostatins are a group of chemical compounds isolated from Pseudomonas sp. as gene expression-enhancing substances. They possess novel bicyclic depsipeptides involving 4-amino-3-hydroxy-5-methylhexanoic acid and 4-amino-3-hydroxy-5-methylheptanoic acid residues. The two main forms are spiruchostatin A and spiruchostatin B.
Uses
Spiruchostatin A is a natural product that is showing positive signs of development as a prodrug. Very often, prodrugs are made by the simple addition of groups, such as acetyl or alkyl chains, to the drug through amide or ester formation. Similarly, in nature there are examples in which protecting groups are added to a natural product (NP) as a strategy for self-resistance. This evolutionary trait allows the organism to carry the toxic compound harmlessly until it encounters the target that activates the NP.
Spiruchostatin A is a member of a class of cyclic, cysteine-containing, depsipeptidic natural products. Due to their histone deacetylase (HDAC) inhibitory activity, tremendous effort has been thrust into the synthesis of these compounds and their analogs as potential chemotherapeutic targets. In human cancers, HDACs are often over-expressed and are recruited to reduce the transcription of tumor suppressor genes and other cell growth regulators. HDAC inhibitors are therefore thought to increase the expression of tumor suppressor genes, and they are an emerging compound class in the treatment of cancers, with several agents either approved or in development.
Document 3:::
Histology, also known as microscopic anatomy or microanatomy, is the branch of biology that studies the microscopic anatomy of biological tissues. Histology is the microscopic counterpart to gross anatomy, which looks at larger structures visible without a microscope. Although one may divide microscopic anatomy into organology, the study of organs, histology, the study of tissues, and cytology, the study of cells, modern usage places all of these topics under the field of histology. In medicine, histopathology is the branch of histology that includes the microscopic identification and study of diseased tissue. In the field of paleontology, the term paleohistology refers to the histology of fossil organisms.
Biological tissues
Animal tissue classification
There are four basic types of animal tissues: muscle tissue, nervous tissue, connective tissue, and epithelial tissue. All animal tissues are considered to be subtypes of these four principal tissue types (for example, blood is classified as connective tissue, since the blood cells are suspended in an extracellular matrix, the plasma).
Plant tissue classification
For plants, the study of their tissues falls under the field of plant anatomy, with the following four main types:
Dermal tissue
Vascular tissue
Ground tissue
Meristematic tissue
Medical histology
Histopathology is the branch of histology that includes the microscopic identification and study of diseased tissue. It is an important part of anatomical pathology and surgical pathology, as accurate diagnosis of cancer and other diseases often requires histopathological examination of tissue samples. Trained physicians, frequently licensed pathologists, perform histopathological examination and provide diagnostic information based on their observations.
Occupations
The field of histology that includes the preparation of tissues for microscopic examination is known as histotechnology. Job titles for the trained personnel who prepare histological spec
Document 4:::
Coronavirus HKU15, sometimes called Porcine coronavirus HKU15 (PorCoV HKU15), is a virus first discovered in a surveillance study in Hong Kong, China, and first reported to be associated with porcine diarrhea in February 2014. In February 2014, PorCoV HKU15 was identified in pigs with clinical diarrheal disease in the U.S. state of Ohio. The complete genome of one US strain has been published. Since then, it has been identified in pig farms in Canada. The virus has been referred to as Porcine coronavirus HKU15, Swine deltacoronavirus and Porcine deltacoronavirus.
See also
Porcine epidemic diarrhea virus
References
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is another name for Coronavirus HKU15 as mentioned in the text?
A. Porcine epidemic diarrhea virus
B. Swine deltacoronavirus
C. Porcine deltacoronavirus
D. Bovine coronavirus
Answer:
|
B. Swine deltacoronavirus
|
Relevant Documents:
Document 0:::
In formal language theory and computer science, a substring is a contiguous sequence of characters within a string. For instance, "the best of" is a substring of "It was the best of times". In contrast, "Itwastimes" is a subsequence of "It was the best of times", but not a substring.
Prefixes and suffixes are special cases of substrings. A prefix of a string t is a substring of t that occurs at the beginning of t; likewise, a suffix of a string t is a substring that occurs at the end of t.
The substrings of the string "apple" would be:
"a", "ap", "app", "appl", "apple",
"p", "pp", "ppl", "pple",
"pl", "ple",
"l", "le"
"e", ""
(note the empty string at the end).
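A minimal Python sketch (added for illustration, not part of the original text) that enumerates exactly this set of substrings, including the empty string:

def substrings(s: str) -> set[str]:
    """Return every contiguous substring of s, including the empty string."""
    result = {""}
    for i in range(len(s)):
        for j in range(i + 1, len(s) + 1):
            result.add(s[i:j])
    return result

print(sorted(substrings("apple")))
# ['', 'a', 'ap', 'app', 'appl', 'apple', 'e', 'l', 'le', 'p', 'pl', 'ple', 'pp', 'ppl', 'pple']

Note that the set contains 15 elements: duplicates such as the second "p" in "apple" collapse to a single entry.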
Substring
A string u is a substring (or factor) of a string t if there exist two strings p and s such that t = pus. In particular, the empty string is a substring of every string.
Example: The string ana is equal to substrings (and subsequences) of banana at two different offsets:
banana
 |||||
 ana||
   |||
   ana
The first occurrence is obtained with p = b and s = na, while the second occurrence is obtained with p = ban and s being the empty string.
A substring of a string is a prefix of a suffix of the string, and equivalently a suffix of a prefix; for example, nan is a prefix of nana, which is in turn a suffix of banana. If u is a substring of t, it is also a subsequence, which is a more general concept. The occurrences of a given pattern in a given string can be found with a string searching algorithm. Finding the longest string which is equal to a substring of two or more strings is known as the longest common substring problem.
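As an added, illustrative sketch (not from the original article), the longest common substring of two strings can be computed with the textbook quadratic dynamic program over common suffix lengths; suffix automata and generalized suffix trees are asymptotically faster, but this version shows the idea:

def longest_common_substring(a: str, b: str) -> str:
    """Return one longest string that is a substring of both a and b."""
    best_len, best_end = 0, 0
    prev = [0] * (len(b) + 1)  # prev[j]: common-suffix length of a[:i-1] and b[:j]
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1] + 1
                if curr[j] > best_len:
                    best_len, best_end = curr[j], i
        prev = curr
    return a[best_end - best_len:best_end]

print(longest_common_substring("banana", "atana"))  # "ana"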
In the mathematical literature, substrings are also called subwords (in America) or factors (in Europe).
Prefix
A string p is a prefix of a string t if there exists a string s such that t = ps. A proper prefix of a string is not equal to the string itself; some sources in addition restrict a proper prefix to be non-empty. A prefix can be seen as a special case of a substring.
Example: The string b
Document 1:::
OCEANUS (Origins and Composition of the Exoplanet Analog Uranus System) is a mission concept conceived in 2016 and presented in 2017 as a potential future contestant as a New Frontiers program mission to the planet Uranus. The concept was developed in a different form by the astronautical engineering students of Purdue University during the 2017 NASA/JPL Planetary Science Summer School. OCEANUS is an orbiter, which would enable a detailed study of the structure of the planet's magnetosphere and interior structure that would not be possible with a flyby mission.
Because of the required technology development and planetary orbital dynamics, the concept suggests a launch in August 2030 on an Atlas V 511 rocket and entering Uranus' orbit in 2041.
Overview
Ice giant sized planets are the most common type of planet according to Kepler data. The little data available on Uranus, an ice giant planet, come from ground-based observations and the single flyby of the Voyager 2 spacecraft, so its exact composition and structure are essentially unknown, as are its internal heat flux, and the causes of its unique magnetic fields and extreme axial tilt or obliquity, making it a compelling target for exploration according to the Planetary Science Decadal Survey. The primary science objectives of OCEANUS are to study Uranus' interior structure, magnetosphere, and the Uranian atmosphere.
The required mission budget is estimated at $1.2 billion. The mission concept has not been formally proposed to NASA's New Frontiers program for assessment and funding. The mission is named after Oceanus, the Greek god of the ocean; he was the son of the Greek god Uranus.
Power and propulsion
Since Uranus is extremely distant from the Sun (20 AU), and relying on solar power is not possible past Jupiter, the orbiter is proposed to be powered by three multi-mission radioisotope thermoelectric generators (MMRTG), a type of radioisotope thermoelectric generator. At the time, there was enough plutonium available
Document 2:::
The propagation of grapevines is an important consideration in commercial viticulture and winemaking. Grapevines, most of which belong to the Vitis vinifera family, produce one crop of fruit each growing season with a limited life span for individual vines. While some centenarian old vine examples of grape varieties exist, most grapevines are between the ages of 10 and 30 years. As vineyard owners seek to replant their vines, a number of techniques are available which may include planting a new cutting that has been selected by either clonal or mass (massal) selection. Vines can also be propagated by grafting a new plant vine upon existing rootstock or by layering one of the canes of an existing vine into the ground next to the vine and severing the connection when the new vine develops its own root system.
In commercial viticulture, grapevines are rarely propagated from seedlings as each seed contains unique genetic information from its two parent varieties (the flowering parent and the parent that provided the pollen that fertilized the flower) and would, theoretically, be a different variety than either parent. This would be true even if two hermaphroditic vine varieties, such as Chardonnay, cross pollinated each other. While the grape clusters that would arise from the pollination would be considered Chardonnay any vines that sprang from one of the seeds of the grape berries would be considered a distinct variety other than Chardonnay. It is for this reason that grapevines are usually propagated from cuttings while grape breeders will utilize seedlings to come up with new grape varieties including crossings that include parents of two varieties within the same species (such as Cabernet Sauvignon which is a crossing of the Vitis vinifera varieties Cabernet Franc and Sauvignon blanc) or hybrid grape varieties which include parents from two different Vitis species such as the Armagnac grape Baco blanc, which was propagated from the vinifera grape Folle blanche and
Document 3:::
The Niementowski quinoline synthesis is the chemical reaction of anthranilic acids and ketones (or aldehydes) to form γ-hydroxyquinoline derivatives.
Overview
In 1894, Niementowski reported that 2-phenyl-4-hydroxyquinoline was formed when anthranilic acid and acetophenone were heated to 120–130 °C. He later found that at higher heat, 200 °C, anthranilic acid and heptaldehyde formed minimal yields of 4-hydroxy-3-pentaquinoline.
Several reviews have been published.
Variations
The temperatures required for this reaction make it less popular than other quinoline synthetic procedures. However, variations have been proposed to make this a more pragmatic and useful reaction. Adding phosphorus oxychloride to the reaction mixture mediates a condensation that gives both isomers of a key precursor to an important α1-adrenoreceptor antagonist. When the 3 position of an aryl ketone is substituted, it has been shown that a Niementowski-type reaction with propionic acid can produce a 4-hydroxyquinoline bearing a 2-thiomethyl substituent. The method has also been altered to occur with a catalytic amount of base, or in the presence of polyphosphoric acid.
Mechanism
Because of the similarity of these reagents to those of the Friedländer quinoline synthesis (a 2-aminobenzaldehyde with an aldehyde or ketone), the mechanism of the Niementowski quinoline synthesis is only minimally different from that of the Friedländer synthesis. While studied in depth, two reaction pathways are possible and both have significant support. The reaction is thought to begin with the formation of a Schiff base, and then proceed via an intra-molecular condensation to make an imine intermediate (see below). There is then a loss of water that leads to a ring closing and formation of the quinoline derivative. Most evidence supports this as the mechanism under normal conditions of 120–130 °C. Alternatively, the reaction begins with an intermolecular condensation and subsequent formation of the imine intermediate. The latter has been
Document 4:::
The Lawson Boom was the macroeconomic conditions prevailing in the United Kingdom at the end of the 1980s, which became associated with the policies of Margaret Thatcher's Chancellor of the Exchequer, Nigel Lawson. The economic boom saw strong economic growth during the second half of the 1980s, sparking a sharp fall in unemployment, which was still in excess of 3 million at the end of 1986, but had fallen to 1.6 million (the lowest for some 10 years) by the end of 1989.
The term Lawson Boom was used by analogy with the phrase "The Barber Boom", an earlier period of rapid expansion under the tenure as chancellor of Anthony Barber in the Conservative government of Edward Heath. In his 1987 and 1988 budgets, Lawson cut standard rate income tax from 29p to 25p and cut the top rate to 40p. He did this because he believed that the economy was slowing down to a more sustainable rate, and "projected a huge surplus that justified his income tax cuts". However, before long inflation began to rise substantially, and as a result of his interest rate cuts, house prices rose by 20%. Just a few months later Lawson had to double interest rates and the UK was running its largest ever balance-of-payments deficit.
Critics of Lawson assert that a combination of the abandonment of monetarism, the adoption of a de facto exchange-rate target of 2.95 deutschmarks to the pound, and excessive fiscal laxity (in particular the 1988 budget) unleashed an inflationary spiral.
In 1990, Shadow Chancellor of the Exchequer John Smith referred to the period as the "Lawson boom" in a House of Commons debate, in which he said
The "dash for growth" had failed to save the Conservative government from electoral defeat at the hands of Labour in the 1964 general election, and was followed by a difficult period for the economy (during which unemployment nearly doubled) in the second half of the 1960s. The "Barber Boom" a decade later had contributed to a recession and was a factor in the Conservatives
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the two main phases of gameplay in Unnatural Selection?
A. Exploration and Combat
B. Breeding and Battle
C. Strategy and Defense
D. Training and Competition
Answer:
|
B. Breeding and Battle
|
Relevant Documents:
Document 0:::
A waste-to-energy plant is a waste management facility that combusts wastes to produce electricity. This type of power plant is sometimes called a trash-to-energy, municipal waste incineration, energy recovery, or resource recovery plant.
Modern waste-to-energy plants are very different from the trash incinerators that were commonly used until a few decades ago. Unlike modern ones, those plants usually did not remove hazardous or recyclable materials before burning. These incinerators endangered the health of the plant workers and the nearby residents, and most of them did not generate electricity.
Waste-to-energy generation is being increasingly looked at as a potential energy diversification strategy, especially by Sweden, which has been a leader in waste-to-energy production over the past 20 years. The typical range of net electrical energy that can be produced is about 500 to 600 kWh of electricity per ton of waste incinerated. Thus, the incineration of about 2,200 tons per day of waste will produce about 1,200 MWh of electrical energy.
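A quick back-of-the-envelope check of the figures quoted above (the 550 kWh/ton midpoint is an assumption chosen for illustration):

# Rough check: ~500-600 kWh of electricity per ton of waste incinerated.
kwh_per_ton = 550      # assumed midpoint of the quoted 500-600 kWh/ton range
tons_per_day = 2200    # throughput figure from the text

mwh_per_day = tons_per_day * kwh_per_ton / 1000
print(f"{mwh_per_day:.0f} MWh per day")  # ~1210 MWh, consistent with "about 1,200 MWh"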
Operation
Most waste-to-energy plants burn municipal solid waste, but some burn industrial waste or hazardous waste. A modern, properly run waste-to-energy plant sorts material before burning it and can co-exist with recycling. The only items that are burned are not recyclable, by design or economically, and are not hazardous.
Waste-to-energy plants are similar in their design and equipment with other steam-electric power plants, particularly biomass plants. First, the waste is brought to the facility. Then, the waste is sorted to remove recyclable and hazardous materials. The waste is then stored until it is time for burning. A few plants use gasification, but most combust the waste directly because it is a mature, efficient technology. The waste can be added to the boiler continuously or in batches, depending on the design of the plant.
In terms of volume, waste-to-energy plants incinerate 80 to 90 percent of waste. Somet
Document 1:::
The Fifth Giant is a hypothetical ice giant proposed as part of the Five-planet Nice model, an extension of the Nice model of solar system evolution. This hypothesis suggests that the early Solar System once contained a fifth giant planet in addition to the four currently known giant planets: Jupiter, Saturn, Uranus, and Neptune. The Fifth Giant is theorized to have been ejected from the Solar System due to gravitational interactions during the chaotic phase of planetary migration, approximately 4 billion years ago.
Background
The Nice model, developed in the early 2000s, describes the dynamical evolution of the Solar System following the dissipation of the protoplanetary disk. It posits that the giant planets initially formed in a more compact configuration and subsequently migrated to their current orbits due to interactions with a massive disk of planetesimals. These interactions are believed to have triggered a period of orbital instability, resulting in the dispersal of the planetesimal disk and the capture of irregular moons.
The addition of a fifth giant planet to this model arose as researchers attempted to resolve discrepancies between the Nice Model's predictions and observational data, particularly regarding the current orbital distribution of the outer planets and the Kuiper belt.
Characteristics
The Fifth Giant is hypothesized to have been an ice giant, similar in composition to Uranus and Neptune. It likely had a mass between 10 and 20 Earth masses and an orbit initially located between those of Saturn and Uranus. Computer simulations indicate that such a planet could have influenced the dynamical evolution of the Solar System, shaping the orbits of the outer planets and accounting for the observed gaps in the Kuiper belt.
Ejection mechanism
The ejection of the Fifth Giant is believed to have occurred during the early Solar System's period of instability, when gravitational interactions between the giant planets became chaotic. The planet likely encountered a series of close gravitational encounters with Jupiter or Saturn, resulting in its eventual expulsion from the Solar System. Such an event would have minimized the disruption to the orbits of the remaining planets while aligning with constraints derived from their current orbital architecture.
The ejection process may have also played a role in scattering planetesimals to form the Oort cloud or altering the trajectories of comets and asteroids.
Observational evidence
Direct evidence for the Fifth Giant's existence is lacking, as the planet would have been ejected into interstellar space and is no longer gravitationally bound to the Sun. However, indirect evidence has been cited to support the hypothesis:
Orbital Resonances: The current orbital spacing and resonances among the giant planets are better explained in simulations that include an additional giant planet.
Kuiper Belt Structure: The sculpting of the Kuiper belt and the distribution of trans-Neptunian objects are more consistent with models involving a fifth giant planet.
Irregular Moons: The capture of irregular moons around Jupiter, Saturn, Uranus, and Neptune aligns with the chaotic conditions predicted during the Fifth Giant's ejection.
Related hypotheses
The concept of an additional giant planet is distinct from the search for Planet Nine, a hypothetical planet proposed to explain the clustering of certain trans-Neptunian objects. While both hypotheses suggest the presence of a missing planet, the Fifth Giant would have been ejected billions of years ago, whereas Planet Nine is theorized to remain within the Solar System. However, it is possible that, if Planet Nine exists, it could very well be the Fifth Giant, as Michael E. Brown noted in response to a Twitter inquiry.
Document 2:::
Software analytics is the analytics specific to the domain of software systems taking into account source code, static and dynamic characteristics (e.g., software metrics) as well as related processes of their development and evolution. It aims at describing, monitoring, predicting, and improving the efficiency and effectiveness of software engineering throughout the software lifecycle, in particular during software development and software maintenance. The data collection is typically done by mining software repositories, but can also be achieved by collecting user actions or production data.
Definitions
"Software analytics aims to obtain insightful and actionable information from software artifacts that help practitioners accomplish tasks related to software development, systems, and users." --- centers on analytics applied to artifacts a software system is composed of.
"Software analytics is analytics on software data for managers and software engineers with the aim of empowering software development individuals and teams to gain and share insight form their data to make better decisions." --- strengthens the core objectives for methods and techniques of software analytics, focusing on both software artifacts and activities of involved developers and teams.
"Software analytics (SA) represents a branch of big data analytics. SA is concerned with the analysis of all software artifacts, not only source code. [...] These tiers vary from the higher level of the management board and setting the enterprise vision and portfolio management, going through project management planning and implementation by software developers." --- reflects the broad scope including various stakeholders.
Aims
Software analytics aims at supporting decisions and generating insights, i.e., findings, conclusions, and evaluations about software systems and their implementation, composition, behavior, quality, evolution as well as about the activities of various stakeholders of these proce
Document 3:::
A canary trap is a method for exposing an information leak by giving different versions of a sensitive document to each of several suspects and seeing which version gets leaked. The trap may consist of a single false statement, planted to see whether the information reaches other people as well. Special attention is paid to the quality of the prose of the unique language, in the hopes that the suspect will repeat it verbatim in the leak, thereby identifying the version of the document.
The term was coined by Tom Clancy in his novel Patriot Games, although Clancy did not invent the technique. The actual method (usually referred to as a barium meal test in espionage circles) has been used by intelligence agencies for many years. The fictional character Jack Ryan describes the technique he devised for identifying the sources of leaked classified documents:
Each summary paragraph has six different versions, and the mixture of those paragraphs is unique to each numbered copy of the paper. There are over a thousand possible permutations, but only ninety-six numbered copies of the actual document. The reason the summary paragraphs are so lurid is to entice a reporter to quote them verbatim in the public media. If he quotes something from two or three of those paragraphs, we know which copy he saw and, therefore, who leaked it.
A refinement of this technique uses a thesaurus program to shuffle through synonyms, thus making every copy of the document unique.
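A toy Python sketch of the same idea (the paragraph variants below are invented for illustration and have nothing to do with any real document): if each paragraph exists in a few interchangeable versions, the particular combination handed to a recipient acts as a fingerprint identifying that copy when it is quoted verbatim.

from itertools import product

# Each "paragraph" has a few interchangeable variants (purely illustrative text).
paragraph_variants = [
    ["The asset was moved on Tuesday.", "The asset was relocated on Tuesday."],
    ["Funding came from an offshore account.", "Money arrived via an offshore account."],
    ["The meeting took place in Vienna.", "The meeting was held in Vienna."],
]

# Every combination of variants is a distinct, traceable copy of the document.
copies = {i: "\n".join(combo) for i, combo in enumerate(product(*paragraph_variants))}
print(len(copies), "distinguishable copies")  # 2 * 2 * 2 = 8

def identify_copy(leaked_text: str) -> list[int]:
    """Return the copy numbers whose exact wording appears in the leaked text."""
    return [i for i, text in copies.items()
            if all(line in leaked_text for line in text.splitlines())]

print(identify_copy(copies[5]))  # [5]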
Barium meal test
According to the book Spycatcher by Peter Wright (published in 1987), the technique is standard practice that has been used by MI5 (and other intelligence agencies) for many years, under the name "barium meal test", named for the medical procedure. A barium meal test is more sophisticated than a canary trap because it is flexible and may take many different forms. However, the basic premise is to reveal a supposed secret to a suspected enemy (but nobody else) then monitor whether there is evidence of the fake infor
Document 4:::
Malic acid is an organic compound with the molecular formula C4H6O5. It is a dicarboxylic acid that is made by all living organisms, contributes to the sour taste of fruits, and is used as a food additive. Malic acid has two stereoisomeric forms (L- and D-enantiomers), though only the L-isomer exists naturally. The salts and esters of malic acid are known as malates. The malate anion is a metabolic intermediate in the citric acid cycle.
Etymology
The word 'malic' is derived from Latin mālum, meaning 'apple'. The related Latin word mālus, meaning 'apple tree', is used as the name of the genus Malus, which includes all apples and crabapples, and is the origin of other taxonomic classifications such as Maloideae, Malinae, and Maleae.
Biochemistry
L-Malic acid is the naturally occurring form, whereas a mixture of L- and D-malic acid is produced synthetically.
Malate plays an important role in biochemistry. In the C4 carbon fixation process, malate is a source of CO2 in the Calvin cycle. In the citric acid cycle, (S)-malate is an intermediate, formed by the addition of an -OH group on the si face of fumarate. It can also be formed from pyruvate via anaplerotic reactions.
Malate is also synthesized by the carboxylation of phosphoenolpyruvate in the guard cells of plant leaves. Malate, as a double anion, often accompanies potassium cations during the uptake of solutes into the guard cells in order to maintain electrical balance in the cell. The accumulation of these solutes within the guard cell decreases the solute potential, allowing water to enter the cell and promote aperture of the stomata.
In food
Malic acid was first isolated from apple juice by Carl Wilhelm Scheele in 1785. Antoine Lavoisier in 1787 proposed the name acide malique, which is derived from the Latin word for apple, mālum—as is its genus name Malus.
In German it is named Äpfelsäure (or Apfelsäure) after plural or singular of a sour thing from the apple fruit, but the salt(s) are called Malat(e).
Malic acid is the
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the main reason researchers proposed the existence of the Fifth Giant in the Solar System?
A. To explain the formation of the Sun
B. To resolve discrepancies in the Nice Model's predictions
C. To account for the presence of Earth-like planets
D. To identify the origin of asteroids
Answer:
|
B. To resolve discrepancies in the Nice Model's predictions
|
Relevant Documents:
Document 0:::
Ring is a dynamically typed, general-purpose programming language. It can be embedded in C/C++ projects, extended using C/C++ code or used as a standalone language. The supported programming paradigms are imperative, procedural, object-oriented, functional, meta, declarative using nested structures, and natural programming. The language is portable (Windows, Linux, macOS, Android, WebAssembly, etc.) and can be used to create console, GUI, web, game and mobile applications.
History
In 2009, Mahmoud Samir Fayed created a minor domain-specific language called Supernova that focuses on User interface (UI) creation and uses some ideas related to Natural Language Programming, then he realized the need for a new language that is general-purpose and can increase the productivity of natural language creation. Ring aims to offer a language focused on helping the developer with building natural interfaces and declarative DSLs.
Goals
The general goals behind Ring:
Applications programming language.
Productivity and developing high quality solutions that can scale.
Small and flexible language that can be embedded in C/C++ projects.
Simple language that can be used in education and introducing Compiler/VM concepts.
General-Purpose language that can be used for creating domain-specific libraries, frameworks and tools.
Practical language designed for creating the next version of the Programming Without Coding Technology software.
Examples
Hello World program
The same program can be written using different styles. Here is an example of the standard "Hello, World!" program using four different styles.
The first style:
see "Hello, World!"
The second style:
put "Hello, World!"
The third style:
print("Hello, World!")
Another style: similar to xBase languages like Clipper and Visual FoxPro
? "Hello, World!"
Change the keywords and operators
Ring supports changing the language keywords and operators.
This could be done many times in the same source file, and is us
Document 1:::
d5SICS is an artificial nucleoside containing 6-methylisoquinoline-1-thione-2-yl group instead of a base.
It pairs up with dNaM in a hydrophobic interaction. It was not able to be removed by the error-correcting machinery of the E. coli into which it was inserted. The pairing of d5SICS–dNaM is mediated by packing and hydrophobic forces instead of hydrogen bonding, which occurs in natural base pairs. Therefore, in free DNA, rings of d5SICS and dNaM are placed in parallel planes instead of the same plane. The d5SICS–dNaM pair was itself later superseded by the dNaM–dTPT3 pair.
Document 2:::
Plants are constantly exposed to different stresses that result in wounding. Plants have adapted to defend themselves against wounding events, like herbivore attacks or environmental stresses. There are many defense mechanisms that plants rely on to help fight off pathogens and subsequent infections. Wounding responses can be local, like the deposition of callose, and others are systemic, which involve a variety of hormones like jasmonic acid and abscisic acid.
Overview
There are many forms of defense that plants use to respond to wounding events. Some plants rely on physical defense mechanisms built from structural components such as lignin and the cuticle. The structure of the plant cell wall is also important for wound responses, as these barriers protect the plant from pathogenic infection by preventing various molecules from entering the cell.
Plants are capable of activating innate immunity by responding to wounding events with damage-associated molecular patterns (DAMPs). Additionally, plants rely on microbe-associated molecular patterns (MAMPs) to defend themselves upon sensing a wounding event. There are examples of both rapid and delayed wound responses, depending on where the damage took place.
MAMPs/ DAMPS & Signaling Pathways
Plants have pattern recognition receptors (PRRs) that recognize MAMPs, or microbe-associated molecular patterns. Upon entry of a pathogen, plants are vulnerable to infection and lose a fair amount of nutrients to that pathogen. The constitutive defenses are the physical barriers of the plant, including the cuticle and toxic metabolites that deter herbivores. Plants maintain an ability to sense when they have an injured area and to induce a defensive response. Within wounded tissues, endogenous molecules are released and act as damage-associated molecular patterns (DAMPs), inducing a defensive response. DAMPs are typically generated by insects that feed off the plant. Such responses to wounds occur at the site of the wound and also systemically, and they are mediated by hormones.[1]
As a plant senses a wound, it immediately sends a signal for innate immunity. These signals are controlled by hormones such as jasmonic acid, ethylene and abscisic acid. Jasmonic acid induces the prosystemin gene along with other defense-related genes, and together with abscisic acid and ethylene it contributes to a rapid induction of defense responses. Physical factors, including hydraulic pressure and electrical pulses, also play a vital role in wound signaling. Most of the components involved in wound signaling also function in other defense-signaling pathways, and cross-talk among them regulates which responses are activated.
Callose, Damaged Sieve Tube Elements, and P-Proteins
Sieve elements are very rich in sugars and various organic molecules. Plants don't want to lose these sugars when the sieve elements get damaged, as the molecules are a very large energy investment. The plants have both short-term and long-term mechanisms to prevent sieve element sap loss. The short-term mechanism involves sap proteins, and the long-term mechanism involves callose, which helps to close the open channels in broken sieve plates.
The main mechanism for closing damaged sieve elements involves P-proteins, which act as a plug in the sieve element pores. P-proteins essentially plug the pores that form in sieve elements. They act as a stopper in the damaged sieve elements by blocking the open channels so that no additional sap or sugar can be lost.
A longer-term solution to wounded sieve tube elements involves the production of callose at the sieve pores. Callose is a β-1,3 glucan synthesized by callose synthase, which is an enzyme that's localized within the plasma membrane. Callose gets synthesized after the sieve tube elements undergo damage and/or stress. The use of wound callose occurs when callose gets deposited following sieve element damage. Wound callose is proven to first be deposited at the sieve plate pores, or the intracellular connections, where it then spreads to different regions. Essentially, wound callose seals off the parts that were damaged, and separates them from the parts that are still healthy and not broken. Once the sieve elements get fixed, the callose is always dissipated by callose-hydrolyzing enzyme. Callose is also synthesized during normal plant growth and development, and it typically responds to things like high temperatures, or allows the plant to prepare for more dormant seasons.
When the sieve elements get damaged, the sap, sugar, and other molecules inside rush to the damaged end. If there were no mechanism to stop the sugars from leaking out, the plant would lose an incredibly large amount of invested energy.
Jasmonic Acid
Jasmonic acid (JA) is a plant hormone that increases in concentration in response to insect herbivore damage. The rise in JA induces the production of proteins functioning in plant defenses. JA also induces the transcription of multiple genes coding for key enzymes of the major pathways for secondary metabolites. Its structure and synthesis show parallels to oxylipins, which function in inflammatory responses. JA is synthesized by the octadecanoid pathway, which is activated in response to wound-induced signals. It is a derivative of the most abundant fatty acid in the lipids of leaf membranes, alpha-linolenic acid. When plants experience mechanical wounding or herbivory, JA is synthesized de novo and induces genome-wide changes in gene expression. JA travels through plants via the phloem, and accumulates in vascular tissue. JA acts as an intracellular signal in order to promote responses in distal tissues. The perception of jasmonate in distal responding leaves is necessary for recognition of the transmissible signal that coordinates responses to wounding stress. JA-deficient mutants, which cannot synthesize jasmonic acid, are killed by insect herbivore damage that would otherwise not harm wild-type plants. Upon the application of JA to the same mutants, resistance is restored. Signaling agents such as ethylene, methyl salicylate, and salicylic acid can pair with JA and enhance JA responses.
Protections Against Abiotic Stress
Morphological Changes
Plants can protect themselves from abiotic stress in many different ways, and most include a physical change in the plant’s morphology. Phenotypic plasticity is a plant’s ability to alter and adapt its morphology in response to the external environments to protect themselves against stress. One way that plants alter their morphology is by reducing the area of their leaves. Though large and flat leaves are favorable for photosynthesis because there is a larger surface area for the leaf to absorb sunlight, bigger leaves are more vulnerable to environmental stresses. For example, it is easier for water to evaporate off of large surface areas which can rapidly deplete the soil of its water and cause drought stress. Plants will reduce leaf cell division and expansion and alter the shape to reduce leaf area.
Another way that plants alter their morphology to protect against stress is by changing the leaf orientation. Plants can suffer from heat stress if the sun’s rays are too strong. Changing the orientation of their leaves in different directions (parallel or perpendicular) allows plants to reduce damage from intense light. Leaves also wilt in response to stress, because it changes the angle at which the sun hits the leaf. Leaf rolling also minimizes how much of the leaf area is exposed to the sun.
Constitutive structures
Trichomes are small, hair-like growths on plant leaves and stems which help the plant protect itself. Although not all trichomes are alive (some undergo apoptosis, but their cell walls are still present) they protect the leaf by keeping its surface cool and reducing evaporation. In order for trichomes to successfully protect the plant, they must be dense. Oftentimes, trichomes will appear white on a plant, meaning that they are densely packed and are able to reflect a large amount of light off of the plant to prevent heat and light stress. Although trichomes are used for protection, they can be disadvantageous for plants at times because trichomes may reflect light away from the plant that can be used to photosynthesize.
The cuticle is a layered structure of waxes and hydrocarbons located on the outer layer of the epidermis which also helps protect the plant from stress. Cuticles can also reflect light, like trichomes, which reduces light intensity and heat. Plant cuticles can also limit the diffusion of water and gases from the leaves which helps maintain them under stress conditions. Thicker cuticles have been found to decrease evaporation, so some plants will increase the thickness of their cuticles in response to drought stress.
Symbiotic Relationships
Plants are also further protected from both abiotic and biotic stresses when plant growth promoting Rhizobacteria (PGPRs) are present. Rhizobacteria are root-colonizing and non-pathogenic, and they form symbiotic relationships with plants that can elicit stress responsive pathways. PGPRs also improve key physiological processes in plants such as water and nutrient uptake, photosynthesis, and source-sink relationships. Bacteria will respond to substances secreted by plant roots and optimize nutrient acquisition for the plant with their own metabolic processes. Though dependent on the strain, most Rhizobacteria will produce major phytohormones such as auxins, gibberellins, cytokinins, abscisic acid (ABA) and ethylene, which stimulate plant growth and increase the plant’s resistance to pathogens. Other substances are also released by Rhizobacteria, including nitric oxide, enzymes, organic acids, and osmolytes.
See also
Embryo rescue
Somatic embryogenesis
Wound healing
Jasmonic acid
Herbivore
Trichome
Cuticle
Pathogen
Document 3:::
Qatar Sustainability Assessment System (QSAS) is a green building certification system developed for the State of Qatar. The primary objective of Qatar Sustainability Assessment System [QSAS] is to create a sustainable built environment that minimizes ecological impact while addressing the specific regional needs and environment of Qatar.
Overview
QSAS was developed by the Gulf Organisation for Research and Development (GORD) in collaboration with the T.C. Chan Center for Building Simulation and Energy Studies at the University of Pennsylvania . Since its deployment in 2009, over 128 buildings in Qatar have been certified through QSAS. In December 2010, QSAS was adopted into the curriculum of the environmental design faculty at King Fahd University and Qatar University. In March, 2011 the State of Qatar integrated QSAS into the Qatar Construction Specifications [QCS] making the implementation of certain criteria mandatory for buildings developed in Qatar.
The development of the rating system took advantage of a comprehensive review of combined best practices employed by a mix of established international and regional rating systems. This review has been performed while taking into consideration the needs that are specific to Qatar’s local environment, culture, and policies. This has led to adaptations and additions to sustainability criteria. Measurements for the rating system are designed to be performance-based and quantifiable. The result is a performance-based sustainable building rating system customized to the unique conditions and requirements of the State of Qatar.
Rating System
QSAS consists of a series of sustainable categories and criteria, each with a direct impact on environmental stress mitigation. Each category measures a different aspect of the project’s environmental impact. The categories define these broad impacts and address ways in which a project can mitigate the negative environmental effects. These categories are then broken down into spec
Document 4:::
An ornamental animal is an animal kept for display or curiosity, often in a park. A wide range of mammals, birds and fish have been kept as ornamental animals. Ornamental animals have often formed the basis of introduced populations, sometimes with negative ecological effects, but a history of being kept as ornamental animals has also preserved breeds, types and even species which have become rare or extinct elsewhere.
This article does not cover animals kept in zoos, wildfowl collections or aquaria.
History
Ornamental animals have been kept for many centuries in several cultures.
Introduced species
Some ornamental animals have escaped from captivity and have formed feral populations.
Conservation
A number of animals have been protected from local or worldwide extinction by being kept as ornamental animals.
List of ornamental animals
The following are breeds or species whose history has included a significant period as ornamental animals, either globally or in particular regions (animals kept primarily in modern zoos, aquaria or waterfowl collections are not included):
Deer
Père David's deer (Elaphurus davidianus): Became extinct in the wild before 1900, but survived in the park of the Chinese Emperor near Peking. Died out there too, but survived in Woburn Park in England. Now reintroduced to the wild in China.
Reeves's muntjac (Muntiacus reevesi): Escaped from Woburn Park in England, and now established as a feral animal throughout much of lowland England, causing considerable damage to native woodland.
Sika deer (Cervus nippon): Established in the UK and other places from park escapes. Poses a hybridisation threat to other deer species such as the red deer (Cervus elaphus).
Water deer (Hydropotes inermis): Another species which has escaped from Woburn Park, having a feral population in the surrounding area of England.
Chital (Axis axis): Endangered deer species in Bangladesh. It is a medium-sized deer species. It is an ornamental animal in Bangladesh and I
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What role does jasmonic acid (JA) play in plant responses to wounding events?
A. It inhibits the production of defense proteins.
B. It decreases leaf area to reduce evaporation.
C. It induces the production of proteins functioning in plant defenses.
D. It prevents the synthesis of callose in damaged sieve elements.
Answer:
|
C. It induces the production of proteins functioning in plant defenses.
|
Relevant Documents:
Document 0:::
Cost–benefit analysis (CBA), sometimes also called benefit–cost analysis, is a systematic approach to estimating the strengths and weaknesses of alternatives. It is used to determine options which provide the best approach to achieving benefits while preserving savings in, for example, transactions, activities, and functional business requirements. A CBA may be used to compare completed or potential courses of action, and to estimate or evaluate the value against the cost of a decision, project, or policy. It is commonly used to evaluate business or policy decisions (particularly public policy), commercial transactions, and project investments. For example, the U.S. Securities and Exchange Commission must conduct cost–benefit analyses before instituting regulations or deregulations.
CBA has two main applications:
To determine if an investment (or decision) is sound, ascertaining if – and by how much – its benefits outweigh its costs.
To provide a basis for comparing investments (or decisions), comparing the total expected cost of each option with its total expected benefits.
CBA is related to cost-effectiveness analysis. Benefits and costs in CBA are expressed in monetary terms and are adjusted for the time value of money; all flows of benefits and costs over time are expressed on a common basis in terms of their net present value, regardless of whether they are incurred at different times. Other related techniques include cost–utility analysis, risk–benefit analysis, economic impact analysis, fiscal impact analysis, and social return on investment (SROI) analysis.
Cost–benefit analysis is often used by organizations to appraise the desirability of a given policy. It is an analysis of the expected balance of benefits and costs, including an account of any alternatives and the status quo. CBA helps predict whether the benefits of a policy outweigh its costs (and by how much), relative to other alternatives. This allows the ranking of alternative policies in terms of a cost–benefit ratio. Generally, accurate cost–benefit analysis identifies choices which increase welfare from a utilitarian perspective. Assuming an accurate CBA, changing the status quo by implementing the alternative with the lowest cost–benefit ratio can improve Pareto efficiency. Although CBA can offer an informed estimate of the best alternative, a perfect appraisal of all present and future costs and benefits is difficult; perfection, in economic efficiency and social welfare, is not guaranteed.
The value of a cost–benefit analysis depends on the accuracy of the individual cost and benefit estimates. Comparative studies indicate that such estimates are often flawed, preventing improvements in Pareto and Kaldor–Hicks efficiency. Interest groups may attempt to include (or exclude) significant costs in an analysis to influence its outcome.
History
The concept of CBA dates back to an 1848 article by Jules Dupuit, and was formalized in subsequent works by Alfred Marshall. Jules Dupuit pioneered this approach by first calculating "the social profitability of a project like the construction of a road or bridge". In an attempt to answer this, Dupuit began to look at the utility users would gain from the project. He determined that the best method of measuring utility is by learning one's willingness to pay for something. By taking the sum of each user's willingness to pay, Dupuit illustrated that the social benefit of the thing (bridge or road or canal) could be measured. Some users may be willing to pay nearly nothing, others much more, but the sum of these would shed light on the benefit of it. It should be reiterated that Dupuit was not suggesting that the government perfectly price-discriminate and charge each user exactly what they would pay. Rather, their willingness to pay provided a theoretical foundation for the societal worth or benefit of a project. The cost of the project proved much simpler to calculate. Simply taking the sum of the materials and labor, in addition to the maintenance afterward, would give one the cost. Now, the costs and benefits of the project could be accurately analyzed, and an informed decision could be made.
The Corps of Engineers initiated the use of CBA in the US, after the Federal Navigation Act of 1936 mandated cost–benefit analysis for proposed federal-waterway infrastructure. The Flood Control Act of 1939 was instrumental in establishing CBA as federal policy, requiring that "the benefits to whomever they accrue [be] in excess of the estimated costs."
More recently, cost–benefit analysis has been applied to decisions regarding investments in cybersecurity-related activities (e.g., see the Gordon–Loeb model for decisions concerning cybersecurity investments).
Public policy
CBA's application to broader public policy began with the work of Otto Eckstein, who laid out a welfare economics foundation for CBA and its application to water-resource development in 1958. It was applied in the US to water quality, recreational travel, and land conservation during the 1960s, and the concept of option value was developed to represent the non-tangible value of resources such as national parks.
CBA was expanded to address the intangible and tangible benefits of public policies relating to mental illness, substance abuse, college education, and chemical waste. In the US, the National Environmental Policy Act of 1969 required CBA for regulatory programs; since then, other governments have enacted similar rules. Government guidebooks for the application of CBA to public policies include the Canadian guide for regulatory analysis, the Australian guide for regulation and finance, and the US guides for health-care and emergency-management programs.
Transportation investment
CBA for transport investment began in the UK with the M1 motorway project and was later used for many projects, including the London Underground's Victoria line. The New Approach to Appraisal (NATA) was later introduced by the Department for Transport, Environment and the Regions. This presented balanced cost–benefit results and detailed environmental impact assessments. NATA was first applied to national road schemes in the 1998 Roads Review, and was subsequently rolled out to all transport modes. Maintained and developed by the Department for Transport, it was a cornerstone of UK transport appraisal in 2011.
The European Union's Developing Harmonised European Approaches for Transport Costing and Project Assessment (HEATCO) project, part of the EU's Sixth Framework Programme, reviewed transport appraisal guidance of EU member states and found significant national differences. HEATCO aimed to develop guidelines to harmonise transport appraisal practice across the EU.
Transport Canada promoted CBA for major transport investments with the 1994 publication of its guidebook. US federal and state transport departments commonly apply CBA with a variety of software tools, including HERS, BCA.Net, StatBenCost, Cal-BC, and TREDIS. Guides are available from the Federal Highway Administration, Federal Aviation Administration, Minnesota Department of Transportation, California Department of Transportation (Caltrans), and the Transportation Research Board's Transportation Economics Committee.
Accuracy
In health economics, CBA may be an inadequate measure because willingness-to-pay methods of determining the value of human life can be influenced by income level. Variants, such as cost–utility analysis, QALY and DALY to analyze the effects of health policies, may be more suitable.
For some environmental effects, cost–benefit analysis can be replaced by cost-effectiveness analysis. This is especially true when one type of physical outcome is sought, such as a reduction in energy use by an increase in energy efficiency. Using cost-effectiveness analysis is less laborious and time-consuming, since it does not involve the monetization of outcomes (which can be difficult in some cases).
It has been argued that if modern cost–benefit analyses had been applied to decisions such as whether to mandate the removal of lead from gasoline, block the construction of two proposed dams just above and below the Grand Canyon on the Colorado River, and regulate workers' exposure to vinyl chloride, the measures would not have been implemented (although all are considered highly successful). The US Clean Air Act has been cited in retrospective studies as a case in which benefits exceeded costs, but knowledge of the benefits (attributable largely to the benefits of reducing particulate pollution) was not available until many years later.
Process
A generic cost–benefit analysis has the following steps:
Define the goals and objectives of the action.
List alternative actions.
List stakeholders.
Select measurement(s) and measure all cost and benefit elements.
Predict outcome of costs and benefits over the relevant time period.
Convert all costs and benefits into a common currency.
Apply discount rate.
Calculate the net present value of actions under consideration.
Perform sensitivity analysis.
Adopt the recommended course of action.
In United States regulatory policy, cost–benefit analysis is governed by OMB Circular A-4.
Evaluation
CBA attempts to measure the positive or negative consequences of a project. A similar approach is used in the environmental analysis of total economic value. Both costs and benefits can be diverse. Costs tend to be most thoroughly represented in cost–benefit analyses due to relatively-abundant market data. The net benefits of a project may incorporate cost savings, public willingness to pay (implying that the public has no legal right to the benefits of the policy), or willingness to accept compensation (implying that the public has a right to the benefits of the policy) for the policy's welfare change. The guiding principle of evaluating benefits is to list all parties affected by an intervention and add the positive or negative value (usually monetary) that they ascribe to its effect on their welfare.
The actual compensation an individual would require to have their welfare unchanged by a policy is inexact at best. Surveys (stated preferences) or market behavior (revealed preferences) are often used to estimate compensation associated with a policy. Stated preferences are a direct way of assessing willingness to pay for an environmental feature, for example. Survey respondents often misreport their true preferences, however, and market behavior does not provide information about important non-market welfare impacts. Revealed preference is an indirect approach to individual willingness to pay. People make market choices of items with different environmental characteristics, for example, revealing the value placed on environmental factors.
The value of human life is controversial when assessing road-safety measures or life-saving medicines. Controversy can sometimes be avoided by using the related technique of cost–utility analysis, in which benefits are expressed in non-monetary units such as quality-adjusted life years. Road safety can be measured in cost per life saved, without assigning a financial value to the life. However, non-monetary metrics have limited usefulness for evaluating policies with substantially different outcomes. Other benefits may also accrue from a policy, and metrics such as cost per life saved may lead to a substantially different ranking of alternatives than CBA. In some cases, in addition to changing the benefit indicator, the cost–benefit analysis strategy is directly abandoned as a measure. In the 1980s, to ensure workers' safety, the US Supreme Court made an important decision to abandon the consideration of return on investment and instead seek the lowest cost–benefit to meet specific standards.
Another metric is valuing the environment, which in the 21st century is typically assessed by valuing ecosystem services to humans (such as air and water quality and pollution). Monetary values may also be assigned to other intangible effects such as business reputation, market penetration, or long-term enterprise strategy alignment.
Time and discounting
CBA generally attempts to put all relevant costs and benefits on a common temporal footing, using time value of money calculations. This is often done by converting the future expected streams of costs (C_t) and benefits (B_t) into a present value amount using a discount rate (r), with the net present value defined as
NPV = Σ_{t=0}^{T} (B_t - C_t) / (1 + r)^t
The selection of a discount rate for this calculation is subjective. A smaller rate values the current generation and future generations equally. Larger rates (a market rate of return, for example) reflect human present bias or hyperbolic discounting: people value money they will receive in the near future more than money they will receive in the distant future. Empirical studies suggest that people discount future benefits in a way similar to these calculations. The choice makes a large difference in assessing interventions with long-term effects. An example is the equity premium puzzle, which suggests that long-term returns on equities may be higher than they should be after controlling for risk and uncertainty. If so, market rates of return should not be used to determine the discount rate because they would undervalue the distant future.
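A minimal sketch of this discounting calculation (an added illustration; the cash flows are made up):

def net_present_value(benefits, costs, rate):
    """NPV = sum over t of (B_t - C_t) / (1 + rate)**t, with t starting at 0."""
    return sum((b - c) / (1 + rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

# Hypothetical 4-year project: an up-front cost, with benefits arriving later.
benefits = [0, 40, 60, 60]
costs = [100, 10, 10, 10]
print(round(net_present_value(benefits, costs, 0.03), 2))  # larger NPV at a 3% rate
print(round(net_present_value(benefits, costs, 0.10), 2))  # smaller NPV at a 10% rate

As the two calls show, the choice of discount rate alone can change how attractive the same project appears.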
Methods for choosing a discount rate
For publicly traded companies, it is possible to find a project's discount rate by using an equilibrium asset pricing model to find the required return on equity for the company and then assuming that the risk profile of a given project is similar to the one the company faces. Commonly used models include the capital asset pricing model (CAPM)

$$E[R_i] = R_f + \beta_i \left( E[R_m] - R_f \right)$$

and the Fama–French three-factor model

$$E[R_i] - R_f = \beta_i \left( E[R_m] - R_f \right) + s_i \cdot \mathrm{SMB} + h_i \cdot \mathrm{HML}$$

where $\beta_i$, $s_i$ and $h_i$ correspond to the factor loadings. A generalization of these methods can be found in arbitrage pricing theory, which allows for an arbitrary number of risk premiums in the calculation of the required return.
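As a sketch, the CAPM relation above can be turned into a candidate project discount rate; the risk-free rate, expected market return and beta below are illustrative values, not estimates for any real company:

# Required return on equity under the CAPM, used as a project discount rate.
def capm_required_return(risk_free, beta, market_return):
    return risk_free + beta * (market_return - risk_free)

r_f, r_m, beta = 0.02, 0.08, 1.3
print(f"CAPM discount rate: {capm_required_return(r_f, beta, r_m):.2%}")  # 0.02 + 1.3 * 0.06 = 9.80%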
Risk and uncertainty
Risk associated with project outcomes is usually handled with probability theory. Although it can be factored into the discount rate (to have uncertainty increasing over time), it is usually considered separately. Particular consideration is often given to agent risk aversion: preferring a situation with less uncertainty to one with greater uncertainty, even if the latter has a higher expected return.
Uncertainty in CBA parameters can be evaluated with a sensitivity analysis, which indicates how results respond to parameter changes. A more formal risk analysis may also be undertaken with the Monte Carlo method. However, even low parameter uncertainty does not guarantee that a project will succeed.
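A minimal Monte Carlo sketch of this kind of risk analysis: uncertain annual benefits and an uncertain discount rate are sampled, and the distribution of resulting net present values summarizes project risk. The project, the distributions and all parameters are assumptions chosen for illustration.

import numpy as np

rng = np.random.default_rng(42)
n_draws, horizon, upfront_cost = 10_000, 20, 500.0

# Sampled sources of uncertainty (illustrative distributions)
annual_benefit = rng.normal(loc=60.0, scale=15.0, size=(n_draws, horizon))
rate = rng.uniform(0.02, 0.06, size=(n_draws, 1))
years = np.arange(1, horizon + 1)

# NPV for each draw: discounted benefits minus the up-front cost
npv = (annual_benefit / (1 + rate) ** years).sum(axis=1) - upfront_cost
print(f"mean NPV: {npv.mean():.1f}")
print(f"5th-95th percentile: {np.percentile(npv, [5, 95])}")
print(f"probability NPV < 0: {(npv < 0).mean():.1%}")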
Principle of maximum entropy
Suppose that we have sources of uncertainty in a CBA that are best treated with the Monte Carlo method, and that the distributions describing this uncertainty are all continuous. How do we choose the appropriate distribution to represent each source of uncertainty? One popular method is the principle of maximum entropy, which states that the distribution best representing current knowledge is the one with the largest entropy, defined for continuous distributions as

$$H(f) = -\int_{S} f(x) \ln f(x)\, dx$$

where $S$ is the support set of a probability density function $f(x)$. Suppose that we impose a series of constraints that must be satisfied:

$f(x) \geq 0$, with equality outside of $S$

$\int_{S} f(x)\, dx = 1$

$\int_{S} g_j(x) f(x)\, dx = a_j$ for $j = 1, \ldots, m$

where the last equality is a series of moment conditions. Maximizing the entropy subject to these constraints leads to the functional

$$J(f) = -\int_{S} f(x) \ln f(x)\, dx + \lambda_0 \left( \int_{S} f(x)\, dx - 1 \right) + \sum_{j=1}^{m} \lambda_j \left( \int_{S} g_j(x) f(x)\, dx - a_j \right)$$

where the $\lambda_j$ are Lagrange multipliers. Maximizing this functional gives the general form of a maximum entropy distribution,

$$f(x) = \exp\!\left( \lambda_0 - 1 + \sum_{j=1}^{m} \lambda_j g_j(x) \right)$$

and there is a direct correspondence between this form and the exponential family. Examples of commonly used continuous maximum entropy distributions in simulations include the following (a brief sampling sketch follows the list):
Uniform distribution
No constraints are imposed over the support set
It is assumed that we have maximum ignorance about the uncertainty
Exponential distribution
Specified mean over the support set
Gamma distribution
Specified mean and log mean over the support set
The exponential distribution is a special case
Normal distribution
Specified mean and variance over the support set
If we have a specified mean and variance on the log scale, then the lognormal distribution is the maximum entropy distribution
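A brief sampling sketch, matching each uncertain CBA input to the maximum entropy distribution implied by the state of knowledge about it; the bounds, means and variances are invented for illustration:

import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Only bounds known -> uniform distribution
cost_factor = rng.uniform(0.8, 1.4, n)
# Only a (positive) mean known -> exponential distribution
delay_years = rng.exponential(scale=2.0, size=n)
# Mean and variance known on an unbounded support -> normal distribution
demand = rng.normal(loc=1_000.0, scale=150.0, size=n)
# Mean and variance known on the log scale -> lognormal distribution
unit_benefit = rng.lognormal(mean=3.0, sigma=0.25, size=n)

for name, x in [("cost factor", cost_factor), ("delay (years)", delay_years),
                ("demand", demand), ("unit benefit", unit_benefit)]:
    print(f"{name:14s} mean={x.mean():9.2f}  sd={x.std():8.2f}")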
CBA under US administrations
The increased use of CBA in the US regulatory process is often associated with President Ronald Reagan's administration. Although CBA in US policy-making dates back several decades, Reagan's Executive Order 12291 mandated its use in the regulatory process. After campaigning on a deregulation platform, he issued the 1981 EO authorizing the Office of Information and Regulatory Affairs (OIRA) to review agency regulations and requiring federal agencies to produce regulatory impact analyses when the estimated annual impact exceeded $100 million. During the 1980s, academic and institutional critiques of CBA emerged. The three main criticisms were:
That CBA could be used for political goals. Debates on the merits of cost and benefit comparisons can be used to sidestep political or philosophical goals, rules and regulations.
That CBA is inherently anti-regulatory, and therefore a biased tool. The monetization of policy impacts is an inappropriate tool for assessing mortality risks and distributional impacts.
That the length of time necessary to complete CBA can create significant delays, which can impede policy regulation.
These criticisms continued under the Clinton administration during the 1990s. Clinton furthered the anti-regulatory environment with his Executive Order 12866. The order changed some of Reagan's language, requiring benefits to justify (rather than exceed) costs and adding "reduction of discrimination or bias" as a benefit to be analyzed. Criticisms of CBA (including uncertainty valuations, discounting future values, and the calculation of risk) were used to argue that it should play no part in the regulatory process. The use of CBA in the regulatory process continued under the Obama administration, along with the debate about its practical and objective value. Some analysts oppose the use of CBA in policy-making, while those in favor of it support improvements in analysis and calculations.
Criticisms
As a concept in economics, cost–benefit analysis has provided a valuable reference for many public construction projects and governmental decisions, but its application has gradually revealed a number of drawbacks and limitations, and a number of critical arguments have been put forward in response. These include concerns about measuring the distribution of costs and benefits, discounting the costs and benefits to future generations, and accounting for the diminishing marginal utility of income. In addition, relying solely on cost–benefit analysis may lead to neglecting the multifaceted value factors of a project.
Distribution
CBA has been criticized in some disciplines because it relies on the Kaldor–Hicks criterion, which does not take distributional issues into account. This means that positive net benefits are decisive, independent of who benefits and who loses when a certain policy or project is put into place. As a result, CBA can overlook concerns of equity and fairness, potentially favoring policies that disproportionately benefit certain groups while imposing burdens on others. Phaneuf and Requate phrased it as follows: "CBA today relies on the Kaldor-Hicks criteria to make statements about efficiency without addressing issues of income distribution. This has allowed economists to stay silent on issues of equity, while focusing on the more familiar task of measuring costs and benefits". The challenge raised is that the benefits of successive policies may consistently accrue to the same group of individuals, and CBA is ambivalent between providing benefits to those that have received them in the past and those that have been consistently excluded. Policy solutions, such as progressive taxation, can address some of these concerns.
Discounting and future generations
Others have critiqued the practice of discounting future costs and benefits for a variety of reasons, including the potential undervaluing of the temporally distant costs of climate change and other environmental damage, and the concern that such a practice effectively ignores the preferences of future generations. Some scholars argue that the use of discounting makes CBA biased against future generations and understates the potential harmful impacts of climate change. The growing relevance of climate change has led to a re-examination of the practice of discounting in CBA, since such biases can distort resource allocation.
Marginal utility
The main criticism stems from the diminishing marginal utility of income. According to this critique, without using weights in the CBA it is not the case that everyone "matters" the same; rather, people with greater ability to pay receive a higher weight. One reason is that a monetary unit is worth less to a high-income person than to a low-income person, so the former is more willing to give up one unit in order to make a change that is favourable to them. This means that there is no symmetry between agents: some people benefit more from the same absolute monetary benefit. Any welfare change, whether positive or negative, affects people with a lower income more strongly than people with a higher income, even if the monetary impacts are identical. This is more than a challenge to the distribution of benefits in CBA; it is a critique of the ability of CBA to accurately measure benefits at all, since, according to this critique, using unweighted absolute willingness to pay overstates the costs and benefits to the wealthy and understates those to the poor. Sometimes this is framed as an argument about democracy: each person's preferences should be given the same weight in an analysis (one person, one vote), while under a standard CBA model the preferences of the wealthy are given greater weight.
Taken together, according to this objection, not using weights is itself a decision: richer people receive de facto a bigger weight. To compensate for this difference in valuation, it is possible to use different methods. One is to use explicit welfare weights, and there are a number of different approaches for calculating them; often a Bergson–Samuelson social welfare function is used and weights are calculated according to people's willingness to pay. Another method is to use percentage willingness to pay, where willingness to pay is measured as a percentage of total income or wealth in order to control for income. These methods would also help to address the distributional concerns raised by the Kaldor–Hicks criterion.
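A minimal sketch of the explicit-weighting approach, using welfare weights of the form w = (mean income / income)^eta; the incomes, per-person impacts and the inequality-aversion parameter eta are illustrative assumptions:

# Distributional weighting: scale each group's monetary net benefit by a
# welfare weight that falls with income.
incomes      = [20_000, 45_000, 120_000]   # three affected groups (illustrative)
net_benefits = [-50.0, 10.0, 200.0]        # unweighted monetary impact per person
eta = 1.0                                  # inequality aversion (0 = no weighting)

mean_income = sum(incomes) / len(incomes)
weights = [(mean_income / y) ** eta for y in incomes]

unweighted = sum(net_benefits)
weighted = sum(w * b for w, b in zip(weights, net_benefits))
print(f"unweighted net benefit: {unweighted:.1f}")
print(f"weighted net benefit:   {weighted:.1f}")

With these invented numbers the unweighted sum is positive while the weighted sum is negative, illustrating how weighting can reverse the verdict of an unweighted CBA.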
Limitations in the scope of assessment
Economic cost–benefit analysis tends to limit the assessment of benefits to economic values, ignoring other value factors such as the wishes of minority groups, inclusiveness, and respect for the rights of others. These value factors are difficult to rank and weight, and cost–benefit analysis cannot consider them comprehensively, so it falls short of a complete social welfare judgement. For projects held to a higher standard of evaluation, other evaluation methods therefore need to be used alongside CBA to compensate for these shortcomings and to assess a project's impact on society in a more comprehensive and integrated manner.
See also
Balance sheet
Return on time invested
References
Further reading
David, R., Ngulube, P. & Dube, A., 2013, "A cost–benefit analysis of document management strategies used at a financial institution in Zimbabwe: A case study", SA Journal of Information Management 15(2), Art. #540, 10 pages.
Huesemann, Michael H., and Joyce A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment, Chapter 8, “The Positive Biases of Technology Assessments and Cost Benefit Analyses”, New Society Publishers, Gabriola Island, British Columbia, Canada, 464 pp.
External links
Benefit–Cost Analysis Center at the University of Washington's Daniel J. Evans School of Public Affairs
Benefit–Cost Analysis site maintained by the Transportation Economics Committee of the Transportation Research Board (TRB).
Intro to Cost–Benefit Analysis
Engineering Risk-Benefit Analysis (MIT OpenCourseWare)
Document 1:::
The Middle East Youth Initiative is a program at the Wolfensohn Center for Development, housed in the Global Economy and Development program at the Brookings Institution. It was launched in July 2006 as a joint effort between the Wolfensohn Center and the Dubai School of Government.
The Initiative performs rigorous research on issues pertaining to regional youth (ages 15–24) on the topics of Youth Exclusion, education, employment, marriage, housing, and credit, and on the ways in which all of these elements are linked during young people’s experience of waithood. In addition to research and policy recommendations, the Initiative serves as a hub for networking between policymakers, regional actors in development, government officials, representatives from the private sector, and youth.
Current fellows with the Initiative include Djavad Salehi-Isfahani.
References
Document 2:::
Various copyright alternatives in an alternative compensation systems (ACS) have been proposed as ways to allow the widespread reproduction of digital copyrighted works while still paying the authors and copyright owners of those works. This article only discusses those proposals that involve some form of government intervention. Other models, such as the street performer protocol or voluntary collective licenses, could arguably be called "alternative compensation systems," although they are very different and generally less effective at solving the free rider problem.
The impetus for these proposals has come from the widespread use of peer-to-peer file-sharing networks. A few authors argue that an ACS is simply the only practical response to the situation. But most ACS advocates go further, holding that P2P file sharing is in fact greatly beneficial and that tax- or levy-funded systems are actually more desirable tools for paying artists than sales coupled with digital rights management copy prevention technologies.
Artistic freedom voucher
The artistic freedom voucher (AFV) proposal argues that the current copyright system providing a state enforced monopoly leads to "enormous inefficiencies and creates substantial enforcement problems". Under the AFV proposed system, individuals would be allowed to contribute a refundable tax credit of approximately $100 to a "creative worker", this contribution would act as a voucher that can only be used to support artistic or creative work.
Recipients of the AFV contribution would in turn be required to register with the government, in a similar fashion to the way religious or charitable institutions register for tax-exempt status. The sole purpose of the registration would be to prevent fraud; there would be no evaluation of the quality of the work being produced. Alongside registration, artists would also now be ineligible for copyright protection for a set period of time (5 years, for example) as the work is contributed to the publi
Document 3:::
The word substrate comes from the Latin sub - stratum meaning 'the level below' and refers to any material existing or extracted from beneath the topsoil, including sand, chalk and clay.
The term is also used for materials used in building foundations or else incorporated into plaster, brick, ceramic and concrete components, which are sometimes called 'filler' products.
See also
Firestop
Sealant
Caulking
Paint
References
Document 4:::
The knower paradox is a paradox belonging to the family of the paradoxes of self-reference (like the liar paradox). Informally, it consists in considering a sentence saying of itself that it is not known, and apparently deriving the contradiction that such sentence is both not known and known.
History
A version of the paradox occurs already in chapter 9 of Thomas Bradwardine’s Insolubilia. In the wake of the modern discussion of the paradoxes of self-reference, the paradox has been rediscovered (and dubbed with its current name) by the US logicians and philosophers David Kaplan and Richard Montague, and is now considered an important paradox in the area. The paradox bears connections with other epistemic paradoxes such as the hangman paradox and the paradox of knowability.
Formulation
The notion of knowledge seems to be governed by the principle that knowledge is factive:
(KF): If the sentence ' P ' is known, then P
(where we use single quotes to refer to the linguistic expression inside the quotes and where 'is known' is short for 'is known by someone at some time'). It also seems to be governed by the principle that proof yields knowledge:
(PK): If the sentence ' P ' has been proved, then ' P ' is known
Consider however the sentence:
(K): (K) is not known
Assume for reductio ad absurdum that (K) is known. Then, by (KF), what (K) says is true; that is, (K) is not known, contradicting the assumption. So, by reductio ad absurdum, we can conclude that (K) is not known. Now, this conclusion, which is the sentence (K) itself, depends on no undischarged assumptions, and so has just been proved. Therefore, by (PK), we can further conclude that (K) is known. Putting the two conclusions together, we have the contradiction that (K) is both not known and known.
Solutions
Since, given the diagonal lemma, every sufficiently strong theory will have to accept something like (K), absurdity can only be avoided either by rejecting one of the two principles of knowledge (KF) and (PK) or by rejecting classical logic (which valida
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What are the two main applications of cost–benefit analysis (CBA) as described in the text?
A. To determine if an investment is sound and to estimate the overall environmental impact
B. To determine if an investment is sound and to provide a basis for comparing investments
C. To provide a basis for comparing investments and to estimate future economic growth
D. To evaluate business decisions and to assess worker satisfaction
Answer:
|
B. To determine if an investment is sound and to provide a basis for comparing investments
|
Relevant Documents:
Document 0:::
Habeas data is a writ and constitutional remedy available in certain nations. The literal translation from Latin of habeas data is "[we command] you have the data," or "you [the data subject] have the data." The remedy varies from country to country, but in general, it is designed to protect, by means of an individual complaint presented to a constitutional court, the data, image, privacy, honour, information self-determination and freedom of information of a person.
Habeas data can be sought by any citizen against any manual or automated data register to find out what information is held about his or her person. That person can request the rectification, update or the destruction of the personal data held. The legal nature of the individual complaint of habeas data is that of voluntary jurisdiction, which means that the person whose privacy is being compromised can be the only one to present it. The courts do not have any power to initiate the process by themselves.
History
Habeas data is an individual complaint filed before a constitutional court and related to the privacy of personal data. The first such complaint is the habeas corpus (which is roughly translated as "[we command] you have the body"). Other individual complaints include the writ of mandamus (USA), amparo (Spain, Mexico and Argentina), and respondeat superior (Taiwan).
The habeas data writ itself has a very short history, but its origins can be traced to certain European legal mechanisms that protected individual privacy. In particular, certain German constitutional rights can be identified as the direct progenitors of the habeas data right. In particular, the right to information self-determination was created by the German constitutional tribunal by interpretation of the existing rights of human dignity and personality. This is a right to know what type of data are stored in manual and automatic databases about an individual, and it implies that there must be transparency on the gathering and
Document 1:::
Linnaean taxonomy can mean either of two related concepts:
The particular form of biological classification (taxonomy) set up by Carl Linnaeus, as set forth in his Systema Naturae (1735) and subsequent works. In the taxonomy of Linnaeus there are three kingdoms, divided into classes, and the classes divided into lower ranks in a hierarchical order.
A term for rank-based classification of organisms, in general. That is, taxonomy in the traditional sense of the word: rank-based scientific classification. This term is especially used as opposed to cladistic systematics, which groups organisms into clades. It is attributed to Linnaeus, although he neither invented the concept of ranked classification (it goes back to Plato and Aristotle) nor gave it its present form. In fact, it does not have an exact present form, as "Linnaean taxonomy" as such does not really exist: it is a collective (abstracting) term for what actually are several separate fields, which use similar approaches.
Linnaean name also has two meanings, depending on the context: it may either refer to a formal name given by Linnaeus (personally), such as Giraffa camelopardalis Linnaeus, 1758; or a formal name in the accepted nomenclature (as opposed to a modernistic clade name).
The taxonomy of Linnaeus
In his Imperium Naturae, Linnaeus established three kingdoms, namely Regnum Animale, Regnum Vegetabile and Regnum Lapideum. This approach, the Animal, Vegetable and Mineral Kingdoms, survives today in the popular mind, notably in the form of the parlour game question: "Is it animal, vegetable or mineral?". The work of Linnaeus had a huge impact on science; it was indispensable as a foundation for biological nomenclature, now regulated by the nomenclature codes. Two of his works, the first edition of the Species Plantarum (1753) for plants and the tenth edition of the Systema Naturae (1758), are accepted as part of the starting points of nomenclature; his binomials (names for species) and generic names
Document 2:::
This article provides a list of sites in the United Kingdom which are recognised for their importance to biodiversity conservation. The list is divided geographically by region and county.
Inclusion criteria
Sites are included in this list if they are given any of the following designations:
Sites of importance in a global context
Biosphere Reserves (BR)
World Heritage Sites (WHS) (where biological interest forms part of the reason for designation)
all Ramsar Sites
Sites of importance in a European context
all Special Protection Areas (SPA)
all Special Area of Conservation (SAC)
all Important Bird Areas (IBA)
Sites of importance in a national context
all sites which were included in the Nature Conservation Review (NCR site)
all national nature reserves (NNR)
Sites of Special Scientific Interest (SSSI), where biological interest forms part of the justification for notification (SSSIs which are designated purely for their geological interest are not included unless they meet other criteria)
England
Southwest
Cornwall
Devon
Dorset
Somerset
Avon
Wiltshire
Gloucestershire
Southeast
Bedfordshire
Berkshire
Buckinghamshire
Essex
Greater London
Hampshire
Hertfordshire
Kent
Oxfordshire
Surrey
Sussex
Rye Harbour Nature Reserve
Midlands
Derbyshire
Herefordshire
Leicestershire
Northamptonshire
Shropshire
Staffordshire
Nottinghamshire
Warwickshire
Worcestershire
East Anglia
Northwest
Cheshire
Northeast
Lincolnshire
Yorkshire
County Durham
Wales
Anglesey
Scotland
Northeast Scotland
Shetland
Unst
Orkney
Outer Hebrides
Lewis and Harris
North Uist, South Uist and Benbecula
Other islands
See also
Conservation in the United Kingdom
National Nature Reserves in the United Kingdom
Sites of Special Scientific Interest
References
Document 3:::
Glovadalen (developmental code name UCB-0022) is a dopamine D1 receptor positive allosteric modulator which is under development for the treatment of Parkinson's disease. It has been found to potentiate the capacity of dopamine to activate the D1 receptor by 10-fold in vitro with no actions on other dopamine receptors. As of May 2024, glovadalen is in phase 2 clinical trials for this indication. The drug is under development by UCB Biopharma. It is described as an orally active, centrally penetrant small molecule.
Document 4:::
The death clock calculator is a conceptual idea of a predictive algorithm that uses personal socioeconomic, demographic, or health data (such as gender, age, or BMI) to estimate a person's lifespan and provide an estimated time of death.
Recent research
In December 2023, Nature Computational Science published a paper introducing the life2vec algorithm, developed as part of a scientific research project. Life2vec is a transformer-based model, similar to those used in natural language processing (e.g., ChatGPT or Llama), trained to analyze life trajectories. The project leverages rich registry data from Denmark, covering six million individuals, with event data related to health, demographics, and labor, recorded at a day-to-day resolution. While life2vec aims to provide insights into early mortality risks and life trends, it does not predict specific death dates, and it is not publicly available as of 2024.
Some media outlets and websites misrepresented the intent of life2vec by calling it a death clock calculator, leading to confusion and speculation about the capabilities of the algorithm. This misinterpretation has also led to fraudulent calculators pretending to use AI-based predictions, often promoted by scammers to deceive users.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary therapeutic target of glovadalen in the treatment of Parkinson's disease?
A. Dopamine D1 receptor
B. Dopamine D2 receptor
C. Serotonin receptor
D. Norepinephrine receptor
Answer:
|
A. Dopamine D1 receptor
|
Relevant Documents:
Document 0:::
Medicalization is the process by which human conditions and problems come to be defined and treated as medical conditions, and thus become the subject of medical study, diagnosis, prevention, or treatment. Medicalization can be driven by new evidence or hypotheses about conditions; by changing social attitudes or economic considerations; or by the development of new medications or treatments.
Medicalization is studied from a sociologic perspective in terms of the role and power of professionals, patients, and corporations, and also for its implications for ordinary people whose self-identity and life decisions may depend on the prevailing concepts of health and illness. Once a condition is classified as medical, a medical model of disability tends to be used in place of a social model. Medicalization may also be termed pathologization or (pejoratively) "disease mongering". Since medicalization is the social process through which a condition becomes seen as a medical disease in need of treatment, appropriate medicalization may be viewed as a benefit to human society. The identification of a condition as a disease can lead to the treatment of certain symptoms and conditions, which will improve overall quality of life.
History
The concept of medicalization was devised by sociologists to explain how medical knowledge is applied to behaviors which are not self-evidently medical or biological. The term medicalization entered the sociology literature in the 1970s in the works of Irving Zola, Peter Conrad and Thomas Szasz, among others. According to Eric Cassell's book, The Nature of Suffering and the Goals of Medicine (2004), the expansion of medical social control is being justified as a means of explaining deviance. These sociologists viewed medicalization as a form of social control in which medical authority expanded into domains of everyday existence, and they rejected medicalization in the name of liberation. This critique was embodied in works such as Conrad's art
Document 1:::
The Clarktor 6 Towing Tractor was an aircraft tug designed and produced by the Clark Tructractor Plant (founded 1919) in Battle Creek, Michigan; The plant is a division of the Clark Equipment Company founded in 1916.
Development
The first Clarktor was manufactured in 1926 and could tow a load over one ton on trolleys. This debut vehicle led to more powerful equipment.
The Clarktor 6 was developed in 1942 as an aircraft tug. Its power came from a six-cylinder flathead engine.
Production
By the end of World War II, thousands of different cargo-handling vehicles had been deployed to different theaters of operations. The British Royal Air Force received 1,500 Clarktor 6 tugs. These were 'British Heavy' (BH) versions and were supplied to the Ministry of War Transport under the Lend-Lease Act.
Operational History
World War II - This small tug weighed 2 tons, with a drawbar capacity of over 2 tons and a towing capacity of 90 tons. It served with the United States Army and the British Royal Air Force. It had no problem towing Britain's Lancaster and Halifax bombers, and the American B-24 and B-17, at roughly 16 tons empty and 29 to 32 tons fully loaded, were also within the vehicle's capacity.
Korean War - The Clarktor 6 continued operations by providing support not only in moving Air Force jets for service and maintenance, but also to move bombs, fuel, supplies, rations to different areas on an airfield. Soon the Army Bases and Navy yards used the tug for similar operations.
Variants
Light Duty - T105 or T125 Chrysler Engine.
Heavy Duty - T112 or T116 Chrysler Engine. Rated as Mill 44 and Mill 50
Display
Australia
The Australian War Memorial - Treloar Crescent, Campbell ACT 2612, Australia
United States
National Museum of the United States Air Force - Dayton, Ohio
USS Alabama Battleship Memorial Park - Mobile, Alabama
Museum of Aviation - Museum in Robins Air Force Base, Georgia
Texas Air Museum - Stinson Chapter - San Antonio, Texas
Commemorative Air Force - Dallas, Tex
Document 2:::
Boreotropical flora were plants that may have formed a belt of vegetation around the Northern Hemisphere during the Eocene epoch. These included forests composed of large, fast-growing trees (such as dawn redwoods) as far north as 80°N.
See also
Paleocene-Eocene Thermal Maximum
References
Fossil Forests and Palaeoecology of the Cretaceous and Tertiary
Document 3:::
In computer graphics, depth peeling is a method of order-independent transparency. Depth peeling has the advantage of being able to generate correct results even for complex images containing intersecting transparent objects.
Method
Depth peeling works by rendering the scene multiple times. It uses two Z-buffers: one that works conventionally for the current pass, and a second, read-only buffer holding the depths from the previous pass, which sets the minimum depth a fragment must exceed to avoid being discarded. For each pass, the previous pass's conventional Z-buffer is used as this minimum-depth buffer, so each pass draws what was "behind" the previous pass. The resulting layers can then be composited into a single image.
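As a rough CPU-side illustration (real implementations run on the GPU using two Z-buffers), the following sketch peels the translucent fragments of a single pixel layer by layer and then composites the layers front to back; the fragment data are invented:

# Depth peeling for one pixel: each pass keeps the nearest fragment strictly
# behind the depth peeled in the previous pass.
def peel_layers(fragments, max_layers=8):
    """fragments: list of (depth, (r, g, b), alpha); returns layers front-to-back."""
    layers, prev_depth = [], float("-inf")
    for _ in range(max_layers):
        behind = [f for f in fragments if f[0] > prev_depth]   # reject already-peeled depths
        if not behind:
            break
        nearest = min(behind, key=lambda f: f[0])              # the conventional depth test
        layers.append(nearest)
        prev_depth = nearest[0]                                # becomes the "minimum depth"
    return layers

def composite(layers, background=(1.0, 1.0, 1.0)):
    """Front-to-back 'over' compositing of peeled layers onto a background colour."""
    color, transmittance = [0.0, 0.0, 0.0], 1.0
    for _, rgb, alpha in layers:
        for i in range(3):
            color[i] += transmittance * alpha * rgb[i]
        transmittance *= (1.0 - alpha)
    return tuple(c + transmittance * b for c, b in zip(color, background))

# One pixel covered by three intersecting translucent surfaces, listed out of order
pixel_fragments = [(0.7, (0.0, 0.0, 1.0), 0.5),
                   (0.3, (1.0, 0.0, 0.0), 0.5),
                   (0.5, (0.0, 1.0, 0.0), 0.5)]
print(composite(peel_layers(pixel_fragments)))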
References
Document 4:::
In linguistics, realization is the process by which some kind of surface representation is derived from its underlying representation; that is, the way in which some abstract object of linguistic analysis comes to be produced in actual language. Phonemes are often said to be realized by speech sounds. The different sounds that can realize a particular phoneme are called its allophones.
Realization is also a subtask of natural language generation, which involves creating an actual text in a human language (English, French, etc.) from a syntactic representation. There are a number of software packages available for realization, most of which have been developed by academic research groups in NLG. The remainder of this article concerns realization of this kind.
Example
For example, the following Java code causes the simplenlg system to print out the text The women do not smoke.:
// Build the subject noun phrase "the woman" and mark it as plural ("the women")
NPPhraseSpec subject = nlgFactory.createNounPhrase("the", "woman");
subject.setPlural(true);
// Combine the subject with the verb "smoke" into a clause and negate it ("do not smoke")
SPhraseSpec sentence = nlgFactory.createClause(subject, "smoke");
sentence.setFeature(Feature.NEGATED, true);
// Realise the clause (syntax, morphology, orthography) and print "The women do not smoke."
System.out.println(realiser.realiseSentence(sentence));
In this example, the computer program has specified the linguistic constituents of the sentence (verb, subject), and also linguistic features (plural subject, negated), and from this information the realiser has constructed the actual sentence.
Processing
Realisation involves three kinds of processing:
Syntactic realisation: Using grammatical knowledge to choose inflections, add function words and also to decide the order of components. For example, in English the subject usually precedes the verb, and the negated form of smoke is do not smoke.
Morphological realisation: Computing inflected forms, for example the plural form of woman is women (not womans).
Orthographic realisation: Dealing with casing, punctuation, and formatting. For example, capitalising The because it is the first word of the sentence.
The above examples
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the classification of Vendomyces according to the text?
A. Chytridiomycetes
B. Ascomycetes
C. Basidiomycetes
D. Zygomycetes
Answer:
|
A. Chytridiomycetes
|
Relevant Documents:
Document 0:::
In organic chemistry, umpolung () or polarity inversion is the chemical modification of a functional group with the aim of the reversal of polarity of that group. This modification allows secondary reactions of this functional group that would otherwise not be possible. The concept was introduced by D. Seebach (hence the German word for reversed polarity) and E.J. Corey. Polarity analysis during retrosynthetic analysis tells a chemist when umpolung tactics are required to synthesize a target molecule.
Introduction
The vast majority of important organic molecules contain heteroatoms, which polarize carbon skeletons by virtue of their electronegativity. Therefore, in standard organic reactions, the majority of new bonds are formed between atoms of opposite polarity. This can be considered to be the "normal" mode of reactivity.
One consequence of this natural polarization of molecules is that 1,3- and 1,5- heteroatom substituted carbon skeletons are extremely easy to synthesize (Aldol reaction, Claisen condensation, Michael reaction, Claisen rearrangement, Diels-Alder reaction), whereas 1,2-, 1,4-, and 1,6- heteroatom substitution patterns are more difficult to access via "normal" reactivity. It is therefore important to understand and develop methods to induce umpolung in organic reactions.
Examples
The simplest method of obtaining 1,2-, 1,4-, and 1,6- heteroatom substitution patterns is to start with them. Biochemical and industrial processes can provide inexpensive sources of chemicals that have normally inaccessible substitution patterns. For example, amino acids, oxalic acid, succinic acid, adipic acid, tartaric acid, and glucose are abundant and provide nonroutine substitution patterns.
Cyanide-type umpolung
The canonical umpolung reagent is the cyanide ion. The cyanide ion is unusual in that a carbon triply bonded to a nitrogen would be expected to have a (+) polarity due to the higher electronegativity of the nitrogen atom. Yet, the negative charge of the
Document 1:::
Augmented reality-based testing (ARBT) is a test method that combines augmented reality and software testing to enhance testing by inserting an additional dimension into the testers field of view. For example, a tester wearing a head-mounted display (HMD) or Augmented reality contact lenses that places images of both the physical world and registered virtual graphical objects over the user's view of the world can detect virtual labels on areas of a system to clarify test operating instructions for a tester who is performing tests on a complex system.
In 2009 as a spin-off to augmented reality for maintenance and repair (ARMAR), Alexander Andelkovic coined the idea 'augmented reality-based testing', introducing the idea of using augmented reality together with software testing.
Overview
The technology test environment is becoming more complex, which puts higher demands on test engineers to have greater knowledge and testing skills and to work effectively. A powerful unexplored dimension that can be utilized is the virtual environment: a lot of information and data that is available today but impractical to use, because of the time overhead needed to gather and present it, can be used instantly with ARBT.
Application
ARBT can be of help in following test environments:
Support
Assembling and disassembling a test object can be learned out and practice scenarios can be run through to learn how to fix fault scenarios that may occur.
Guidance
Minimizing risk of misunderstanding complex test procedures can be done by virtually describing test steps in front of the tester on the actual test object.
Educational
Background information about test scenario with earlier bugs found pointed out on the test object and reminders to avoid repeating previous mistakes made during testing of selected test area.
Training
Junior testers can learn complex test scenarios with less supervision. Test steps will be pointed out and information about pass criteria need to be confirmed the junior tester c
Document 2:::
The Christensen failure criterion is a material failure theory for isotropic materials that attempts to span the range from ductile to brittle materials. It has a two-property form calibrated by the uniaxial tensile and compressive strengths T and C .
The theory was developed by Stanford professor Richard. M. Christensen and first published in 1997.
Description
The Christensen failure criterion is composed of two separate subcriteria representing competitive failure mechanisms. When expressed in principal stress components, it is given by:
Polynomial invariants failure criterion
For 0 ≤ T/C ≤ 1
Coordinated Fracture Criterion
For 0 ≤ T/C ≤ 1/2
The geometric form of () is that of a paraboloid in principal stress space. The fracture criterion () (applicable only over the partial range 0 ≤ T/C ≤ 1/2 ) cuts slices off the paraboloid, leaving three flattened elliptical surfaces on it. The fracture cutoff is vanishingly small at T/C=1/2 but it grows progressively larger as T/C diminishes.
The organizing principle underlying the theory is that all isotropic materials admit a distinct classification system based upon their T/C ratio. The comprehensive failure criterion () and () reduces to the Mises criterion at the ductile limit, T/C = 1. At the brittle limit, T/C = 0, it reduces to a form that cannot sustain any tensile components of stress.
Many cases of verification have been examined over the complete range of materials from extremely ductile to extremely brittle types. Also, examples of applications have been given. Related criteria distinguishing ductile from brittle failure behaviors have been derived and interpreted.
Applications have been given by Ha to the failure of the isotropic, polymeric matrix phase in fiber composite materials.
Document 3:::
IvanAnywhere is a simple, remote-controlled telepresence robot created by Sybase iAnywhere programmers to enable their co-worker, Ivan Bowman, to efficiently remote work. The robot enables Bowman to be virtually present at conferences and presentations, and to discuss product development with other developers face-to-face. IvanAnywhere is powered by SAP's mobile database product, SQL Anywhere.
IvanAnywhere evolution
Ivan Bowman has been a software developer at Sybase/iAnywhere/SAP since 1993, and now is an Engineering Director at SAP Canada. In 2002 his wife received a job in Halifax approximately from his place of work in Waterloo, Ontario, Canada, North America. His employers allowed him to remote work initially via email, instant messenger, and phone.
Using speakerphone during meetings was less than ideal because Ivan could not see his co-workers' visual communication clues, or what they wrote on the white board. The first solution was a stationary webcam with a speaker, which was kept in the corner of the office. The problem with this method was that the webcam was just that – stationary. Ivan could not see people if they were not standing near the webcam. More frustrating, perhaps, was that Ivan could hear distant conversations through the webcam's microphone, but was unable to contribute to the conversation if the impromptu meeting did not take place in his visual range.
Proof of concept
In November 2006, iAnywhere programmer Ian McHardy and Director of Engineering Glenn Paulley (Ivan’s immediate manager) conceived the idea of IvanAnywhere after Glenn saw a television commercial for a remote controlled toy blimp. In January 2007, after considering different possible designs and getting through a number of deadlines related to iAnywhere releases, Ian started working on a proof-of-concept: a tablet computer and webcam mounted on a radio-controlled toy truck.
In February 2007, even though the truck was challenging to drive and the webcam was only a few inch
Document 4:::
Haxi (stylized as HAXI) is a vehicle for hire company that enables users to share transport over short and mid range distances. The name is a portmanteau of "hack" and "taxi". Registered users can be drivers, passengers, or both. Drivers active for more than three days per month need an access pass or a subscription plan. Unregistered users cannot get contact details on other users. No registration is needed to logon. The firm's mobile application facilitates transportation by enabling passengers who need a ride to request one from available "community drivers."
History
Haxi was incorporated by Aleksander Soender, Joonas Kirsebom and Robert Daniel Nagy in February 2014. The service was launched as a web app in Stavanger, Norway, in December 2013. Applications for Android and iPhone were released in March 2014. Haxi is available in English, Spanish, Norwegian, Swedish and Danish. The company is based in London, Great Britain. Angel investor funding for Haxi was secured in June 2014.
Since December 2013, Haxi has grown into the biggest ridesharing network in Norway. In June 2014, it was estimated that 11,000 Norwegians were using Haxi. By August 2014, that number had risen to 31,000 users, with over 2,000 registered drivers in Norway alone.
In September 2014, Haxi surpassed 3,000 registered drivers and had 42,000 users, with 72% using the app more than once. At this growth rate, Haxi was expected to become bigger than the entire Norwegian taxi force combined by December 2014. Haxi has been mentioned as one of the most interesting companies in the ridesharing market worldwide. In June 2014, Norwegian Taxi Association CEO Lars Hjelmeng estimated that ridesharing via Haxi and social media was generating up to one billion NOK (Norwegian krone) annually. In March 2017, the total number of active drivers on Haxi passed the 10,000 mark in Scandinavia.
Products
HaxiStar
The HaxiStar - Explorer Version is an electronic roof top sign and light designed by Lars Holme Larsen from Kilo De
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary purpose of the Environmental Design Research Association (EDRA)?
A. To promote architectural design in commercial spaces
B. To advance and disseminate environmental design research
C. To provide funding for design projects worldwide
D. To organize international competitions for architects
Answer:
|
B. To advance and disseminate environmental design research
|
Relevant Documents:
Document 0:::
Angiotensin is a peptide hormone that causes vasoconstriction and an increase in blood pressure. It is part of the renin–angiotensin system, which regulates blood pressure. Angiotensin also stimulates the release of aldosterone from the adrenal cortex to promote sodium retention by the kidneys.
An oligopeptide, angiotensin is a hormone and a dipsogen. It is derived from the precursor molecule angiotensinogen, a serum globulin produced in the liver. Angiotensin was isolated in the late 1930s (first named 'angiotonin' or 'hypertensin', later renamed 'angiotensin' as a consensus by the 2 groups that independently discovered it) and subsequently characterized and synthesized by groups at the Cleveland Clinic and Ciba laboratories.
Precursor and types
Angiotensinogen
Angiotensinogen is an α-2-globulin synthesized in the liver and is a precursor for angiotensin, but has also been indicated as having many other roles not related to angiotensin peptides. It is a member of the serpin family of proteins, leading to another name: Serpin A8, although it is not known to inhibit other enzymes like most serpins. In addition, a generalized crystal structure can be estimated by examining other proteins of the serpin family, but angiotensinogen has an elongated N-terminus compared to other serpin family proteins. Obtaining actual crystals for X-ray diffractometric analysis is difficult in part due to the variability of glycosylation that angiotensinogen exhibits. The non-glycosylated and fully glycosylated states of angiotensinogen also vary in molecular weight, the former weighing 53 kDa and the latter weighing 75 kDa, with a plethora of partially glycosylated states weighing in between these two values.
Angiotensinogen is also known as renin substrate. It is cleaved at the N-terminus by renin to result in angiotensin I, which will later be modified to become angiotensin II. This peptide is 485 amino acids long, and 10 N-terminus amino acids are cleaved when renin acts on it.
Document 1:::
Reinnervation is the restoration, either by spontaneous cellular regeneration or by surgical grafting, of nerve supply to a body part from which it has been lost or damaged.
See also
Denervation
Neuroregeneration
Targeted reinnervation
References
Document 2:::
Predicate transformer semantics were introduced by Edsger Dijkstra in his seminal paper "Guarded commands, nondeterminacy and formal derivation of programs". They define the semantics of an imperative programming paradigm by assigning to each statement in this language a corresponding predicate transformer: a total function between two predicates on the state space of the statement. In this sense, predicate transformer semantics are a kind of denotational semantics. Actually, in guarded commands, Dijkstra uses only one kind of predicate transformer: the well-known weakest preconditions (see below).
Moreover, predicate transformer semantics are a reformulation of Floyd–Hoare logic. Whereas Hoare logic is presented as a deductive system, predicate transformer semantics (either by weakest-preconditions or by strongest-postconditions; see below) are complete strategies to build valid deductions of Hoare logic. In other words, they provide an effective algorithm to reduce the problem of verifying a Hoare triple to the problem of proving a first-order formula. Technically, predicate transformer semantics perform a kind of symbolic execution of statements into predicates: execution runs backward in the case of weakest-preconditions, or runs forward in the case of strongest-postconditions.
Weakest preconditions
Definition
For a statement S and a postcondition R, a weakest precondition is a predicate Q such that, for any precondition P, the Hoare triple {P} S {R} holds if and only if P ⇒ Q. In other words, it is the "loosest" or least restrictive requirement needed to guarantee that R holds after S. Uniqueness follows easily from the definition: if both Q and Q' are weakest preconditions, then by the definition {Q'} S {R} holds, so Q' ⇒ Q, and {Q} S {R} holds, so Q ⇒ Q'; thus Q and Q' are equivalent. We often use wp(S, R) to denote the weakest precondition for statement S with respect to a postcondition R.
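As a small worked illustration (not part of the original article), the weakest precondition of an assignment x := E with respect to a postcondition R is R with every free occurrence of x replaced by E, and sequencing composes backwards: wp(S1; S2, R) = wp(S1, wp(S2, R)). The Python sketch below uses sympy substitution on invented statements:

import sympy as sp

x, y = sp.symbols("x y")

def wp_assign(var, expr, post):
    """Weakest precondition of the assignment `var := expr` with respect to `post`."""
    return sp.simplify(post.subs(var, expr))

# wp(x := x + 1, x > 5) is equivalent to x > 4
print(wp_assign(x, x + 1, sp.Gt(x, 5)))

# wp(y := 2*x; x := y + 1, x > 9) = wp(y := 2*x, wp(x := y + 1, x > 9)),
# which is equivalent to x > 4
inner = wp_assign(x, y + 1, sp.Gt(x, 9))
print(wp_assign(y, 2 * x, inner))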
Conventions
We use T to denote the predicate that is everywhere true and F to denote the one that is everywhere false. We shouldn't at least conceptually confuse ourselv
Document 3:::
Incremental operating margin is the increase or decrease of income from continuing operations before stock-based compensation, interest expense and income-tax expense between two periods, divided by the increase or decrease in revenue between the same two periods.
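A minimal sketch of the calculation just defined, with invented period figures:

# Incremental operating margin: change in adjusted operating income divided by
# the change in revenue between two periods (illustrative numbers).
def incremental_operating_margin(income_prev, income_curr, revenue_prev, revenue_curr):
    return (income_curr - income_prev) / (revenue_curr - revenue_prev)

# Income grows from 120 to 150 while revenue grows from 1,000 to 1,100 -> 30%
print(f"{incremental_operating_margin(120.0, 150.0, 1_000.0, 1_100.0):.0%}")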
Document 4:::
In computer networks, a network element is a manageable logical entity uniting one or more physical devices. This allows distributed devices to be managed in a unified way using one management system.
According to the Telecommunications Act of 1996, the term 'network element' refers to a facility or to equipment used in the provision of a telecommunications service. This term also refers to features, functions, and capabilities that are provided by means of such facility or equipment. This includes items such as subscriber numbers, databases, signaling systems, and information that is sufficient for billing and collection. Alternatively, it's also included if it's used in the transmission, routing, or other provision of a telecommunications service.
Background
With development of distributed networks, network management had become an annoyance for administration staff. It was hard to manage each device separately even if they were of the same vendor. Configuration overhead as well as misconfiguration possibility were quite high. A provisioning process for a basic service required complex configurations of numerous devices.
It was also hard to store all network devices and connections in a plain list. Network structuring approach was a natural solution.
Examples
With structuring and grouping, it is very well seen that in any distributed network there are devices performing one complex function. At that, those devices can be placed in different location.
A telephone exchange is the most typical example of such a distributed group of devices. It typically contains subscriber line units, line trunk units, switching matrix, CPU and remote hubs.
A basic telephone service leans on all those units, so it is convenient for an engineer to manage a telephone exchange as one complex entity encompassing all those units inside.
Another good example of a network element is a computer cluster. A cluster can occupy a lot of space and may not fit one datacenter. For enterpr
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary purpose of the Loewy decomposition in the context of differential equations?
A. To simplify the process of solving reducible linear ordinary differential equations
B. To provide a numerical approximation for differential equations
C. To transform partial differential equations into ordinary differential equations
D. To find the Galois group of a differential equation
Answer:
|
A. To simplify the process of solving reducible linear ordinary differential equations
|
Relevant Documents:
Document 0:::
Phylomedicine is an emerging discipline at the intersection of medicine, genomics, and evolution. It focuses on the use of evolutionary knowledge to predict functional consequences of mutations found in personal genomes and populations.
History
Modern technologies have made genome sequencing accessible, and biomedical scientists have profiled genomic variation in apparently healthy individuals and individuals diagnosed with a variety of diseases. This work has led to the discovery of thousands of disease-associated genes and genetic variants, elucidating a more robust picture of the amount and types of variations found within and between humans.
Proteins are encoded in genomic DNA by exons, and these comprise only ~1% of the human genomic sequence (aka the exome). The exome of an individual carries about 6,000–10,000 amino-acid-altering nSNVs, and many of these variants are already known to be associated with more than 1000 diseases. Although only a small fraction of these personal variants are likely to impact health, the sheer volume of known genomic and exomic variants is too large to apply traditional laboratory or experimental techniques to explore their functional consequences. Translating a personal genome into useful phenotypic information (e.g. relating to predisposition to disease, differential drug response, or other health concerns), is therefore a grand challenge in the field of genomic medicine.
Fortunately, results from the natural experiment of molecular evolution are recorded in the genomes of humans and other living species. All genomic variation is subjected to the process of natural selection which generally reduces mutations with negative effects on phenotype over time. With the availability of a large number of genomes from the tree of life, evolutionary conservation of individual genomic positions and the sets of mutations permitted among species informs the functional and health consequences of these mutations.
Consequently, phylomedicin
Document 1:::
The Helix fast-response system (HFRS) is a deep-sea oil spill response plan licensed by HWCG LLC, a consortium of 16 independent oil companies, to respond to subsea well incidents. Helix Energy Solutions Group designed the Helix fast-response system based on techniques used to contain the 2010 Gulf of Mexico oil spill.
On February 28, 2011 the drilling moratorium imposed as a result of the spill ended when the United States Department of the Interior approved the first drilling permit based on the availability of the HFRS to offshore oil companies.
The HFRS relies on the deployment of Helix ESG's Q4000 multipurpose semisubmersible platform and the Helix Producer 1 floating production unit. Both vessels are based in the Gulf of Mexico and played significant roles in the 2010 Deepwater Horizon spill response. At full production capacity, the HFRS can handle up to 55,000 barrels of oil per day (70,000 barrels of liquid or 95 million standard cubic feet per day) at 10,000 psi in water depths to 10,000 feet.
Overview
When deployed, the HFRS is assembled in stages following an ROV inspection of the damaged subsea well head. ROVs then lower a custom-designed well cap onto the blowout preventer (BOP) stack; if the flow of escaping hydrocarbons is not too extreme, the vents inside the well cap can be manually closed one by one to shut in the well. If the pressure is too extreme to shut in the well, ROVs will lower an intervention riser system (IRS) onto the top of the well cap. The hydrocarbons will then be transferred through a marine riser to the Q4000, which will use its gas flare to burn off much of the oil and gas while transferring the rest through a flexible riser to the Helix Producer 1.
The Helix Producer 1 will activate its flare to burn off the gas while it processes the oil and transfers it to a nearby oil tanker through another flexible riser. As the oil and gas is successfully captured, processed or burned and then transferred onto a tanker, drillships will be required to drill a relief well to permanently kill the damaged subsea well.
Developments
In January 2011 Helix ESG signed an agreement with Clean Gulf Associates, a non-profit industry group, to make the HFRS available for a two-year period to CGA member companies in the event of a future subsea well incident in the Gulf of Mexico.
In 2012, Helix ESG said it was working to develop an additional consortium for the Caribbean and would likely base the system out of Port of Spain, Trinidad.
Document 2:::
Kepler-1513 is a main-sequence star about away in the constellation Lyra. It has a late-G or early-K spectral type, and it hosts at least one, and likely two, exoplanets.
Planetary system
Kepler-1513b (KOI-3678.01) was confirmed in 2016 as part of a study statistically validating hundreds of Kepler planets. In November 2022, an exomoon candidate was reported around Kepler-1513b based on transit-timing variations (TTVs). Unlike previous giant exomoon candidates in the Kepler-1625 and Kepler-1708 systems, this exomoon would have been terrestrial-mass, ranging from 0.76 Lunar masses to 0.34 Earth masses depending on the planet's mass and the moon's orbital period.
In October 2023, a follow-up study by the same team of astronomers using additional observations found that the observed TTVs cannot be explained by an exomoon, but can be explained by a second, outer planet, Kepler-1513c, with a mass comparable to Saturn.
See also
Kepler-90g
References
Document 3:::
This is a list of dams and reservoirs in Romania.
References
Document 4:::
Phyz (Dax Phyz) is a public domain, 2.5D physics engine with built-in editor and DirectX graphics and sound. In contrast to most other real-time physics engines, it is vertex based and stochastic. Its integrator is based on a SIMD-enabled assembly version of the Mersenne Twister random number generator, instead of traditional LCP or iterative methods, allowing simulation of large numbers of micro objects with Brownian motion and macro effects such as object resonance and deformation.
Description
Purpose
Dax Phyz is used to model and simulate physical phenomena, to animate static graphics, and to create videos, GUI front-ends and games. There is no specified correlation between Phyz and reality.
Features
Source:
Deformable and breakable objects (soft body dynamics).
N-body particle simulation.
Rod, stick, pin, slot, rocket, charge, magnet, heat, actuator and custom constraints.
Turing complete, real-time logic components (Phyz Logics).
Explosives.
Collision and break sound effects.
Message-based application programming interface.
Real-time, constraint-aware editing.
Metaballics effects.
Bitmap import.
OpenMP 2.0 support.
Platform availability
Phyz requires Windows with DirectX 9.0c or later, a display adapter with hardware support for DirectX 9, a CPU with full SSE2 support, and 1 GB of free RAM.
The metaballics effects require a GPGPU-capable display adapter.
PhyzLizp
PhyzLizp, included with Phyz, is an external application based on the Lisp programming language (Lizp 4). It can be used to measure and control events in Phyz, and to create Phyz extensions such as graphical interfaces, network gateways, non-linear constraints or games.
Screenshots
Hammer scene (upper left; deformable objects): The hammer's centre of mass is displaced from its rotational axis, creating a torque which keeps the ruler from rotating.
Wedge scene (upper right; breakable objects): How to make an impression.
Yoda scene (lower left; bitmap import, metaballics): 3.446 vert
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the main function of the Helix fast-response system (HFRS) as described in the text?
A. To drill new oil wells
B. To respond to subsea well incidents
C. To transport oil to tankers
D. To monitor underwater ecosystems
Answer:
|
B. To respond to subsea well incidents
|
Relevant Documents:
Document 0:::
Cascade Falls, also called "The Cascades", is a waterfall located at E. B. Jeffress Park on the Blue Ridge Parkway in the Blue Ridge Mountains of North Carolina, United States.
Geology
The waterfall is located on Falls Creek. It begins with a small free-fall before cascading and sliding down the rock face. The water here eventually ends up in the W. Kerr Scott Reservoir and the Yadkin River. Like many other falls in the northern North Carolina Mountains, the falls are high, but limited in water.
Visiting the falls
The falls is located in E. B. Jeffress Park on the Blue Ridge Parkway at milepost 271.9, 4.4 miles north of where U. S. Highway 421 crosses the parkway at Deep Gap. The easy-to-moderate loop trail begins at the north end of the parking area for a total loop of approximately 0.8 mile. The trail, which includes educational material concerning local wildflowers and other plants, such as mountain laurel, descends 50 feet to an upper overlook and another 200 feet to a lower overlook. The upper overlook allows the visitors to see approximately 20 feet of cascades above the overlook and a long slide below the overlook. The lower overlook allows a different view of the slide.
The lower section of the falls is no longer accessible from the trail. A scramble path used to be available, but damage to the plant life and the banks of the stream resulted in the construction of a fence to restrict further access and potential injury to both the falls and to visitors.
Nearby falls
Betsey's Rock Falls
References
Document 1:::
Gastropexy is a surgical operation in which the stomach is sutured to the abdominal wall or the diaphragm. Gastropexies in which the stomach is sutured to the diaphragm are sometimes performed as a treatment of GERD to prevent the stomach from moving up into the chest.
Document 2:::
Relieving tackle is tackle employing one or more lines attached to a vessel's steering mechanism, to assist or substitute for the whipstaff or ship's wheel in steering the craft. This enabled the helmsman to maintain control in heavy weather, when the rudder is under more stress and requires greater effort to handle, and also to steer the vessel were the helm damaged or destroyed.
In vessels with whipstaffs (long vertical poles extending above deck, acting as a lever to move the tiller below deck), relieving lines were attached to the tiller or directly to the whipstaff. When wheels were introduced, their greater mechanical advantage lessened the need for such assistance, but relieving tackle could still be used on the tiller, located on a deck underneath the wheel. Relieving tackle was also rigged on vessels going into battle, to assist in steering in case the helm was damaged or shot away. When a storm threatened, or battle impended, the tackle would be affixed to the tiller, and hands assigned to man them. Additional tackle was available to attach directly to the rudder as surety against loss of the tiller.
The term can also refer to lines or cables attached to a vessel that has been careened (laid over to one side for maintenance). The lines passed under the hull and were secured to the opposite side, to keep the vessel from overturning further, and to aid in righting the ship when the work was finished.
References
Document 3:::
Gamma Piscis Austrini, Latinized from γ Piscis Austrini, is a three-star system in the southern constellation of Piscis Austrinus. It is visible to the naked eye with a combined apparent visual magnitude of +4.448. Based upon an annual parallax shift of as seen from the Earth, the system is located about 216 light years from the Sun.
The A and B components, as of 2010, are separated by 4 arc seconds in the sky along a position angle of 255°. The "A" component is itself a binary, made up of two stars orbiting each other with an orbital period of 15 years and a separation of nine astronomical units, with a combined apparent magnitude of 4.59. The component Aa has 2.65 times more mass than the Sun and 2.9 times its radius, being a chemically peculiar star with a spectral type . The Ab component is smaller, at 0.94 times the Sun's mass and 0.84 times its radius. The fainter magnitude 8.20 companion, component B, is an F-type main sequence star with a class of F5 V. It has 20% more mass than the Sun and a radius 15% larger.
Gamma Piscis Austrini is moving through the Galaxy at a speed of 24.1 km/s relative to the Sun. Its projected Galactic orbit carries it between and from the center of the Galaxy. It came closest to the Sun 1.8 million years ago at a distance of .
The current age of the system is 350 million years. It will become a triple white dwarf system within 14 billion years.
Naming
In Chinese, (), meaning Decayed Mortar, refers to an asterism consisting of γ Piscis Austrini, γ Gruis, λ Gruis and 19 Piscis Austrini. Consequently, the Chinese name for γ Piscis Austrini itself is (, .)
References
External links
Document 4:::
Jantar Mantar in New Delhi is an observatory, designed to be used with the naked eye. It is one of five Jantar Mantar in India. "Jantar Mantar" means "instruments for measuring the harmony of the heavens". It consists of 13 architectural astronomy instruments.
The site is one of five built by Maharaja Jai Singh II of Jaipur, from 1723 onwards, revising the calendar and astronomical tables. Jai Singh, born in 1688 into a royal Rajput family that ruled the regional kingdom, grew up in an era of education that maintained a keen interest in astronomy. There is a plaque fixed on one of the structures in the Jantar Mantar observatory in New Delhi, placed there in 1910, that mistakenly dates the construction of the complex to the year 1710. Later research, though, suggests 1724 as the actual year of construction. Its height is .
The primary purpose of the observatory was to compile astronomical tables, and to predict the times and movements of the Sun, Moon and planets. Some of these purposes nowadays would be classified as astronomy.
Completed in 1724, the Delhi Jantar Mantar had decayed considerably by the 1857 uprising. The Ram Yantra, the Samrat Yantra, the Jai Prakash Yantra and the Misra Yantra are the distinct instruments of Jantar Mantar. The most famous of these structures, the Jaipur, had also deteriorated by the end of the nineteenth century, until Maharaja Ram Singh set out to restore the instrument in 1901.
History
Jantar Mantar is located in New Delhi and was built by Maharaja Jai Singh II of Jaipur in the year 1724. The maharaja built five observatories during his rule in the 18th century. Among these five, the one in Delhi was the first to be built. The other four observatories are located in Ujjain, Mathura, Varanasi, and Jaipur.
The objective behind the construction of these observatories was to assemble astronomical data and to accurately predict the movement of the planets, Moon, Sun, etc. in the Solar System. It was a one of a kind in its tim
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary focus of Matthew Fuchter's research as mentioned in the text?
A. Development of renewable energy sources
B. Novel functional molecular systems in materials and medicine
C. Study of historical chemical processes
D. Agricultural chemistry advancements
Answer:
|
B. Novel functional molecular systems in materials and medicine
|
Relevant Documents:
Document 0:::
Worm bagging (also referred to as facultative vivipary or endotokia matricida) is a form of vivipary observed in nematodes, namely Caenorhabditis elegans. The process is characterized by eggs hatching within the parent and the larvae proceeding to consume and emerge from the parent.
History
While the phenomenon was mentioned as a result of fluorodeoxyuridine treatment as early as 1979 and egg-laying mutants were identified in 1984, the natural circumstances and mechanisms resulting in this behavior were not fully explored until 2003. From this point, modest explorations of the mechanisms underlying this behavior have been observed.
Proximate causes
Bagging will occur in vulvaless or egg-laying mutants of C. elegans but can also be induced in wild-type strains. Identified stressors that can induce bagging are starvation, high salt concentration, and antagonistic bacteria.
It has been observed in larval development, that the WRT-5 protein is secreted into the pharyngeal lumen and the pharyngeal expression changes in a cycle that is connected to the molting cycle. Deletion mutations in wrt-5 cause embryonic lethality, which are temperature sensitive and more severe at 15 degrees C than at 25 degrees C. Additionally, animals that hatch exhibit variable abnormal morphology, for example, bagging worms, blistering, molting defects, or Roller phenotypes.
Internal hatching is initiated by genes and is not restricted to the widely used laboratory strain N2. Internal hatching is rare when worms are maintained under standard laboratory conditions. However, axenic conditions (a transfer from solid to liquid medium), along with adverse environmental conditions such as starvation, exposure to harsh compounds, and bacteria, can increase the frequency of worm bags.
In one study, C. elegans were starved and kept in stressful conditions such as a high-salt environment. As a result, a connection was drawn between the pathway leading to the dauer stage and the pathway leading to
Document 1:::
Analyst is a biweekly peer-reviewed scientific journal covering all aspects of analytical chemistry, bioanalysis, and detection science. It is published by the Royal Society of Chemistry and the editor-in-chief is Melanie Bailey (University of Surrey). The journal was established in 1877 by the Society for Analytical Chemistry.
Abstracting and indexing
The journal is abstracted and indexed in MEDLINE and Analytical Abstracts. According to the Journal Citation Reports, the journal has a 2022 impact factor of 4.2.
Analytical Communications
In 1999, the Royal Society of Chemistry closed the journal Analytical Communications because it felt that the material submitted to that journal would be best included in a new communications section of Analyst. Predecessor journals of Analytical Communications were Proceedings of the Society for Analytical Chemistry, 1964–1974; Proceedings of the Analytical Division of the Chemical Society, 1975–1979; Analytical Proceedings, 1980–1993; Analytical Proceedings including Analytical Communications, 1994–1995.
References
External links
Document 2:::
Metvuw Weather and Climate service is run by James McGregor. The site was formerly associated with Victoria University of Wellington (hence the VUW acronym), but is now run independently. Much of the weather content and forecast material is available directly from the website, free. A range of different weather information is available, as different pages, under the following headings.
Metvuw home
Featuring updates and advice about the service, and the 'photo of the day' as well as links to previous photos of the day. Many images of quirky and unusual weather-related phenomena are to be found, especially clouds of the Mammatus variety.
Satellite imagery
9 panels of monochrome satellite derived cloud pictures are shown, in 3 hour intervals, ending in current time slot. Images can be enlarged.
Data provided by MetService, Image enhancement by Metvuw, Himawari satellite data courtesy of Japanese Meteorological Agency.
Weather radar
3 panels each showing 9 locations throughout New Zealand, at 3 hour intervals, ending in current time slot. Colour coding is derived from 'reflectivity' (DBz). Images can be enlarged.
Data provided by MetService, Image enhancement by Metvuw.
Radiosondes (weather balloons) measuring upper air temperatures and winds are routinely launched from five stations around New Zealand. At Whenuapai, Paraparaumu and Invercargill they are launched at 1100 and 2300 NZT. At Raoul Island they are launched daily at 1100 NZT. The upper air data are displayed on tephigrams.
Forecast charts
A large range between 1 and 10 day forecasts are available, delivering 4 and 40 images, respectively, spaced at +6 hour intervals, beginning with the forecast time.
These weather forecast charts are generated by software written and maintained by James McGregor. The data used is obtained from the United States National Weather Service. These charts are updated approximately every 6 hours and provide forecasts up to 240h ahead of the time they were issued.
Current NZ weath
Document 3:::
A sweep generator is a piece of electronic test equipment similar to, and sometimes included on, a function generator which creates an electrical waveform with a linearly varying frequency and a constant amplitude. Sweep generators are commonly used to test the frequency response of electronic filter circuits. These circuits are mostly transistor circuits with inductors and capacitors to create linear characteristics.
Sweeps are a popular method in the field of audio measurement to describe the change in a measured output value over a progressing input parameter. The most commonly-used progressive input parameter is frequency varied over the standard audio bandwidth of 20 Hz to 20 kHz.
Glide Sweep
A glide sweep (or chirp) is a continuous signal in which the frequency increases or decreases logarithmically with time. This provides the complete range of testing frequencies between the start and stop frequency. An advantage over the stepped sweep is that the signal duration can be reduced by the user without any loss of frequency resolution in the results. This allows for rapid testing. Although the theory behind the glide sweep has been known for several decades, its use in audio measuring devices has only evolved over the past several years. The reason for this lies with the high computing power required.
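As a concrete illustration, the short Python sketch below synthesizes such a logarithmic (exponential) glide sweep directly from its definition; the sample rate, duration and frequency bounds are arbitrary example values, not parameters of any particular instrument.

import numpy as np

def glide_sweep(f_start=20.0, f_stop=20000.0, duration=1.0, sample_rate=48000):
    # Instantaneous frequency rises exponentially from f_start to f_stop over 'duration':
    #   f(t) = f_start * (f_stop / f_start) ** (t / duration)
    # The phase is the integral of 2*pi*f(t) dt, evaluated in closed form below.
    t = np.arange(int(duration * sample_rate)) / sample_rate
    k = np.log(f_stop / f_start)
    phase = 2.0 * np.pi * f_start * duration / k * (np.exp(t * k / duration) - 1.0)
    return np.sin(phase)

sweep = glide_sweep()  # a one-second sweep covering the standard 20 Hz - 20 kHz audio band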
Stepped Sweep
In a stepped sweep, one variable input parameter (frequency or amplitude) is incremented or decremented in discrete steps. After each change, the analyzer waits until a stable reading is detected before switching to the next step. The scaling of the steps is linear or logarithmic. Since the settling time of different test objects cannot be predicted, the duration of a stepped sweep cannot be determined exactly in advance. For the determination of amplitude or frequency response, the stepped sweep has been largely replaced by the glide sweep. The main application for the stepped sweep is to measure the linearity of systems. Here, the frequency of t
Document 4:::
The Fenambosy Chevron is one of four chevron-shaped land features on the southwest coast of Madagascar, near the tip of Madagascar, high and inland. Chevrons such as Fenambosy have been hypothesized as providing evidence of "megatsunamis" caused by comets or asteroids crashing into Earth. However, the megatsunami origin of the Fenambosy and other chevrons has been challenged by other geologists and oceanographers.
A feature called the Burckle crater lies about east-southeast of the Madagascar chevrons and in deep ocean. It is hypothesized to be an impact crater by the Holocene Impact Working Group. Although its sediments have not been directly sampled, cores from the area contain high levels of nickel and magnetic components that are argued to be associated with impact ejecta. Geologist Dallas Abbott estimates the age of this feature to be about 4,500 to 5,000 years old.
References
External links
Blakeslee, S., 2006, Ancient Crash, Epic Wave, The New York Times, November 14, 2006, last accessed September 28, 2014.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What year was the journal Analyst established?
A. 1867
B. 1877
C. 1887
D. 1897
Answer:
|
B. 1877
|
Relevant Documents:
Document 0:::
A MET (Materials, Energy, and Toxicity) Matrix is an analysis tool used to evaluate various environmental impacts of a product over its life cycle. The tool takes the form of a 3x3 matrix with descriptive text in each of its cells. One dimension of the matrix is composed of a qualitative input-output model that examines environmental concerns related to the product's materials use, energy use, and toxicity. The other dimension looks at the life cycle of the product through its production, use, and disposal phase. The text in each cell corresponds to the intersection of two particular aspects. For example, this means that by looking at certain cells, one can examine aspects such as energy use during the production phase, or levels of toxicity that may be a concern during the disposal phase.
References
Document 1:::
In probability theory and directional statistics, a wrapped normal distribution is a wrapped probability distribution that results from the "wrapping" of the normal distribution around the unit circle. It finds application in the theory of Brownian motion and is a solution to the heat equation for periodic boundary conditions. It is closely approximated by the von Mises distribution, which, due to its mathematical simplicity and tractability, is the most commonly used distribution in directional statistics.
Definition
The probability density function of the wrapped normal distribution is

f_{WN}(\theta;\mu,\sigma) = \frac{1}{\sigma\sqrt{2\pi}} \sum_{k=-\infty}^{\infty} \exp\!\left[\frac{-(\theta-\mu+2\pi k)^{2}}{2\sigma^{2}}\right]
where μ and σ are the mean and standard deviation of the unwrapped distribution, respectively. Expressing the above density function in terms of the characteristic function of the normal distribution yields:
where is the Jacobi theta function, given by
and
The wrapped normal distribution may also be expressed in terms of the Jacobi triple product:
where and
Moments
In terms of the circular variable the circular moments of the wrapped normal distribution are the characteristic function of the normal distribution evaluated at integer arguments:
where is some interval of length . The first moment is then the average value of z, also known as the mean resultant, or mean resultant vector:
The mean angle is
and the length of the mean resultant is
The circular standard deviation, which is a useful measure of dispersion for the wrapped normal distribution and its close relative, the von Mises distribution is given by:
Estimation of parameters
A series of N measurements zn = e iθn drawn from a wrapped normal distribution may be used to estimate certain parameters of the distribution. The average of the series is defined as
and its expectation value will be just the first moment:
In other words, is an unbiased estimator of the first moment. If we assume that the mean μ lies in the interval [−π, π), then Arg will be a (biased) estimator of the mean μ.
Viewing the zn as a set of vectors in the complex plane, the \overline{R}^{2} statistic is the square of the length of the averaged vector:
and its expected value is:
In other words, the statistic

R_{e}^{2} = \frac{N}{N-1}\left(\overline{R}^{2} - \frac{1}{N}\right)

will be an unbiased estimator of e^{-\sigma^{2}}, and \ln(1/R_{e}^{2}) will be a (biased) estimator of \sigma^{2}.
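These estimators translate directly into a few lines of code. The Python sketch below is a minimal illustration assuming a NumPy array of angles in radians; the function and variable names are invented for the example.

import numpy as np

def wrapped_normal_estimates(theta):
    # theta: array of angular samples in radians, assumed drawn from a wrapped normal.
    n = len(theta)
    z = np.exp(1j * theta)                 # map samples onto the unit circle
    z_bar = z.mean()                       # mean resultant vector (estimator of the first moment)
    mu_hat = np.angle(z_bar)               # (biased) estimator of the mean angle in [-pi, pi)
    r_e2 = n / (n - 1) * (np.abs(z_bar) ** 2 - 1.0 / n)  # unbiased estimator of exp(-sigma^2)
    sigma2_hat = np.log(1.0 / r_e2)        # (biased) estimator of sigma^2
    return mu_hat, sigma2_hat

# Usage example: draw normal samples and wrap them onto [-pi, pi) before estimating.
rng = np.random.default_rng(0)
samples = np.mod(rng.normal(0.5, 0.3, size=1000) + np.pi, 2 * np.pi) - np.pi
print(wrapped_normal_estimates(samples))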
Entropy
The information entropy of the wrapped normal distribution is defined as:
where is any interval of length . Defining and , the Jacobi triple product representation for the wrapped normal is:
where is the Euler function. The logarithm of the density of the wrapped normal distribution may be written:
Using the series expansion for the logarithm:
the logarithmic sums may be written as:
so that the logarithm of density of the wrapped normal distribution may be written as:
which is essentially a Fourier series in . Using the characteristic function representation for the wrapped normal distribution in the left side of the integral:
the entropy may be written:
which may be integrated to yield:
External links
Circular Values Math and Statistics with C++11, A C++11 infrastructure for circular values (angles, time-of-day, etc.) mathematics and statistics
Document 2:::
Multiscale geometric analysis or geometric multiscale analysis is an emerging area of high-dimensional signal processing and data analysis.
See also
Wavelet
Scale space
Multi-scale approaches
Multiresolution analysis
Singular value decomposition
Compressed sensing
Further reading
Document 3:::
OpenAI Five is a computer program by OpenAI that plays the five-on-five video game Dota 2. Its first public appearance occurred in 2017, where it was demonstrated in a live one-on-one game against the professional player Dendi, who lost to it. The following year, the system had advanced to the point of performing as a full team of five, and began playing against and showing the capability to defeat professional teams.
By choosing a game as complex as Dota 2 to study machine learning, OpenAI thought they could more accurately capture the unpredictability and continuity seen in the real world, thus constructing more general problem-solving systems. The algorithms and code used by OpenAI Five were eventually borrowed by another neural network in development by the company, one which controlled a physical robotic hand. OpenAI Five has been compared to other similar cases of artificial intelligence (AI) playing against and defeating humans, such as AlphaStar in the video game StarCraft II, AlphaGo in the board game Go, Deep Blue in chess, and Watson on the television game show Jeopardy!.
History
Development on the algorithms used for the bots began in November 2016. OpenAI decided to use Dota 2, a competitive five-on-five video game, as a base due to it being popular on the live streaming platform Twitch, having native support for Linux, and having an application programming interface (API) available. Before becoming a team of five, the first public demonstration occurred at The International 2017 in August, the annual premiere championship tournament for the game, where Dendi, a Ukrainian professional player, lost against an OpenAI bot in a live one-on-one matchup. After the match, CTO Greg Brockman explained that the bot had learned by playing against itself for two weeks of real time, and that the learning software was a step in the direction of creating software that can handle complex tasks "like being a surgeon". OpenAI used a methodology called reinforcement learni
Document 4:::
In computing, data validation or input validation is the process of ensuring data has undergone data cleansing to confirm it has data quality, that is, that it is both correct and useful. It uses routines, often called "validation rules", "validation constraints", or "check routines", that check for correctness, meaningfulness, and security of data that are input to the system. The rules may be implemented through the automated facilities of a data dictionary, or by the inclusion of explicit application program validation logic of the computer and its application.
This is distinct from formal verification, which attempts to prove or disprove the correctness of algorithms for implementing a specification or property.
Overview
Data validation is intended to provide certain well-defined guarantees for fitness and consistency of data in an application or automated system. Data validation rules can be defined and designed using various methodologies, and be deployed in various contexts. Their implementation can use declarative data integrity rules, or procedure-based business rules.
The guarantees of data validation do not necessarily include accuracy, and it is possible for data entry errors such as misspellings to be accepted as valid. Other clerical and/or computer controls may be applied to reduce inaccuracy within a system.
Different kinds
In evaluating the basics of data validation, generalizations can be made regarding the different kinds of validation according to their scope, complexity, and purpose.
For example:
Data type validation;
Range and constraint validation;
Code and cross-reference validation;
Structured validation; and
Consistency validation
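As a minimal illustration of the first two kinds above, the hypothetical Python function below applies a data-type check followed by a range check to a single field; the field name and bounds are invented for the example.

def validate_age(raw_value):
    # Data-type check: the field must parse as an integer.
    try:
        age = int(raw_value)
    except (TypeError, ValueError):
        return False, "age must be an integer"
    # Range and constraint check: the parsed value must fall in a plausible interval.
    if not 0 <= age <= 150:
        return False, "age must be between 0 and 150"
    return True, age

print(validate_age("42"))      # (True, 42)
print(validate_age("forty"))   # (False, 'age must be an integer')
print(validate_age("200"))     # (False, 'age must be between 0 and 150')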
Data-type check
Data type validation is customarily carried out on one or more simple data fields.
The simplest kind of data type validation verifies that the individual characters provided through user input are consistent with the expected characters of one or more known primitive data types as defi
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary application of the wrapped normal distribution as mentioned in the text?
A. Estimation of parameters
B. Theory of Brownian motion
C. Information entropy calculation
D. Circular standard deviation measurement
Answer:
|
B. Theory of Brownian motion
|
Relevant Documents:
Document 0:::
Dynamic Site Acceleration (DSA) is a group of technologies which make the delivery of dynamic websites more efficient. Manufacturers of application delivery controllers and content delivery networks (CDNs) use a host of techniques to accelerate dynamic sites, including:
Improved connection management, by multiplexing client connections and HTTP keep-alive
Prefetching of uncachable web responses
Dynamic cache control
On-the-fly compression
Full page caching
Off-loading SSL termination
Response based TTL-assignment (bending)
TCP optimization
Route optimization
Techniques
TCP multiplexing
An edge device, either an ADC or a CDN, is capable of TCP multiplexing which can be placed between web servers and clients to offload origin servers and accelerate content delivery.
Usually, each connection between client and server requires a dedicated process that lives on the origin for the duration of the connection. When clients have a slow connection, this occupies part of the origin server because the process has to stay alive while the server is waiting for a complete request. With TCP multiplexing, the situation is different. The device obtains a complete and valid request from the client before sending this to the origin when the request has fully arrived. This offloads application and database servers, which are slower and more expensive to use compared to ADCs or CDNs.
Dynamic cache control
HTTP has a built-in system for cache control, using headers such as ETag, "expires" and "last modified". Many CDNs and ADCs that claim to have DSA, have replaced this with their system, calling it dynamic caching or dynamic cache control. It gives them more options to invalidate and bypass the cache over the standard HTTP cache control.
The purpose of dynamic cache control is to increase the cache-hit ratio of a website, which is the rate between requests served by the cache and those served by the normal server.
Due to the dynamic nature of web 2.0 websites, it is difficult to
Document 1:::
In computer security, a shadow stack is a mechanism for protecting a procedure's stored return address, such as from a stack buffer overflow. The shadow stack itself is a second, separate stack that "shadows" the program call stack. In the function prologue, a function stores its return address to both the call stack and the shadow stack. In the function epilogue, a function loads the return address from both the call stack and the shadow stack, and then compares them. If the two records of the return address differ, then an attack is detected; the typical course of action is simply to terminate the program or alert system administrators about a possible intrusion attempt. A shadow stack is similar to stack canaries in that both mechanisms aim to maintain the control-flow integrity of the protected program by detecting an attacker's attempt to tamper with the stored return address during an exploitation attempt.
Shadow stacks can be implemented by recompiling programs with modified prologues and epilogues, by dynamic binary rewriting techniques to achieve the same effect, or with hardware support. Unlike the call stack, which also stores local program variables, passed arguments, spilled registers and other data, the shadow stack typically just stores a second copy of a function's return address.
Shadow stacks provide more protection for return addresses than stack canaries, which rely on the secrecy of the canary value and are vulnerable to non-contiguous write attacks. Shadow stacks themselves can be protected with guard pages or with information hiding, such that an attacker would also need to locate the shadow stack to overwrite a return address stored there.
Like stack canaries, shadow stacks do not protect stack data other than return addresses, and so offer incomplete protection against security vulnerabilities that result from memory safety errors.
In 2016, Intel announced upcoming hardware support for shadow stacks with their Control-flow Enforcement Tech
Document 2:::
Short-range endemic (SRE) invertebrates are animals that display restricted geographic distributions, nominally less than 10,000 km2, that may also be disjunct and highly localised. The most appropriate analogy is that of an island, where the movement of fauna is restricted by the surrounding marine waters, therefore isolating the fauna from other terrestrial populations. Isolating mechanisms and features such as roads, urban infrastructure, large creek lines and ridges can act to prevent the dispersal and gene flow of the less mobile invertebrate species. Subterranean fauna, which include stygofauna and troglofauna, typically comprise short-range endemics.
Representative examples
Several animal groups studied in Australia consist largely of short-range endemics, including freshwater and terrestrial gastropods (snails and slugs), earthworms, velvet worms, mygalomorph spiders, schizomids, millipedes, phreatoicidean crustaceans, and freshwater crayfish.
Categories of short-range endemism
Currently, there is no accepted system to define the varying probabilities of a species to be an SRE. The uncertainty in categorising a specimen as SRE originates in a number of factors including:
Poor regional survey density (sometimes taxon-specific): A regional fauna is simply not known well enough to assess the distribution of species. This factor also considers that simply because a species has not been found regionally, does not mean it is really absent; this confirmation (‘negative proof’) is almost impossible to obtain (“absence of proof is not proof of absence”).
Lack of taxonomic resolution: many potential SRE taxa (based on preferences for typical SRE habitats, SRE status of closely related species, or morphological peculiarities such as troglomorphism) have never been taxonomically treated and identification to species level is very difficult or impossible as species-specific character systems have not been defined. Good taxonomic resolution does not necessarily requ
Document 3:::
A CondoSat is a satellite that supports separate operator payloads on the same spacecraft bus.
A condosat is designed to carry multiple payloads from customers that do not want the direct responsibility of satellite operations. The name derives from real estate condominium. They are a variation of the hosted payload concept whereby an operator enables a satellite to host payloads from multiple parties while meeting their on-orbit operations requirements centrally.
References
Document 4:::
In telecommunications, 8b/10b is a line code that maps 8-bit words to 10-bit symbols to achieve DC balance and bounded disparity, and at the same time provide enough state changes to allow reasonable clock recovery. This means that the difference between the counts of ones and zeros in a string of at least 20 bits is no more than two, and that there are not more than five ones or zeros in a row. This helps to reduce the demand for the lower bandwidth limit of the channel necessary to transfer the signal.
An 8b/10b code can be implemented in various ways with focus on different performance parameters. One implementation was designed by K. Odaka for the DAT digital audio recorder. Kees Schouhamer Immink designed an 8b/10b code for the DCC audio recorder. The IBM implementation was described in 1983 by Al Widmer and Peter Franaszek.
IBM implementation
As the scheme name suggests, eight bits of data are transmitted as a 10-bit entity called a symbol, or character. The low five bits of data are encoded into a 6-bit group (the 5b/6b portion) and the top three bits are encoded into a 4-bit group (the 3b/4b portion). These code groups are concatenated together to form the 10-bit symbol that is transmitted on the wire. The data symbols are often referred to as D.x.y where x ranges over 0–31 and y over 0–7. Standards using the 8b/10b encoding also define up to 12 special symbols (or control characters) that can be sent in place of a data symbol. They are often used to indicate start-of-frame, end-of-frame, link idle, skip and similar link-level conditions. At least one of them (i.e. a "comma" symbol) needs to be used to define the alignment of the 10-bit symbols. They are referred to as K.x.y and have different encodings from any of the D.x.y symbols.
Because 8b/10b encoding uses 10-bit symbols to encode 8-bit words, some of the possible 1024 (10 bit, 210) symbols can be excluded to grant a run-length limit of 5 consecutive equal bits and to ensure the difference between the count of zeros and ones to be no more than two. Some of the 256 possible 8-bit words can be encoded in two different ways. Using these alternative encodings, the scheme is able to achieve long-term DC-balance in the serial data stream. This permits the data stream to be transmitted through a channel with a high-pass characteristic, for example Ethernet's transformer-coupled unshielded twisted pair or optical receivers using automatic gain control.
Encoding tables and byte encoding
Note that in the following tables, for each input byte (represented as ), A denotes the least significant bit (LSB), and H the most significant (MSB). The output gains two extra bits, i and j. The bits are sent from LSB to MSB: a, b, c, d, e, i, f, g, h, and j; i.e., the 5b/6b code followed by the 3b/4b code. This ensures the uniqueness of the special bit sequence in the comma symbols.
The residual effect on the stream to the number of zero and one bits transmitted is maintained as the running disparity (RD) and the effect of slew is balanced by the choice of encoding for following symbols.
The 5b/6b code is a paired disparity code, and so is the 3b/4b code. Each 6- or 4-bit code word has either equal numbers of zeros and ones (a disparity of zero), or comes in a pair of forms, one with two more zeros than ones (four zeros and two ones, or three zeros and one one, respectively) and one with two less. When a 6- or 4-bit code is used that has a non-zero disparity (count of ones minus count of zeros; i.e., −2 or +2), the choice of positive or negative disparity encodings must be the one that toggles the running disparity. In other words, the non zero disparity codes alternate.
Running disparity
8b/10b coding is DC-free, meaning that the long-term ratio of ones and zeros transmitted is exactly 50%. To achieve this, the difference between the number of ones transmitted and the number of zeros transmitted is always limited to ±2, and at the end of each symbol, it is either +1 or −1. This difference is known as the running disparity (RD).
This scheme needs only two states for the running disparity of +1 and −1. It starts at −1.
For each 5b/6b and 3b/4b code with an unequal number of ones and zeros, there are two bit patterns that can be used to transmit it: one with two more "1" bits, and one with all bits inverted and thus two more zeros. Depending on the current running disparity of the signal, the encoding engine selects which of the two possible six- or four-bit sequences to send for the given data. Obviously, if the six-bit or four-bit code has equal numbers of ones and zeros, there is no choice to make, as the disparity would be unchanged, with the exceptions of sub-blocks D.07 (00111) and D.x.3 (011). In either case the disparity is still unchanged, but if RD is positive when D.07 is encountered 000111 is used, and if it is negative 111000 is used. Likewise, if RD is positive when D.x.3 is encountered 0011 is used, and if it is negative 1100 is used. This is accurately reflected in the charts below, but is worth making additional mention of as these are the only two sub-blocks with equal numbers of 1s and 0s that each have two possible encodings.
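A minimal Python sketch of this disparity bookkeeping follows; it assumes the caller already knows the two candidate patterns of a sub-block (the actual 5b/6b and 3b/4b tables are not reproduced), so it only illustrates how the running disparity selects between them and is then updated.

def send_subblock(rd, form_when_rd_minus, form_when_rd_plus):
    # rd is the current running disparity, -1 or +1.
    # form_when_rd_minus / form_when_rd_plus are the two candidate bit patterns of a
    # 5b/6b or 3b/4b sub-block (they may be identical when the sub-block is balanced).
    pattern = form_when_rd_minus if rd == -1 else form_when_rd_plus
    disparity = pattern.count("1") - pattern.count("0")   # -2, 0 or +2 for valid patterns
    new_rd = -rd if disparity != 0 else rd                # a non-zero disparity toggles RD
    return pattern, new_rd

# Example with the balanced D.07 5b/6b sub-block mentioned above:
# 000111 is sent when RD is +1, 111000 when RD is -1, and RD is left unchanged.
print(send_subblock(+1, "111000", "000111"))   # ('000111', 1)
print(send_subblock(-1, "111000", "000111"))   # ('111000', -1)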
5b/6b code (abcdei)
† also used for the 5b/6b code of K.x.7
‡ exclusively used for the 5b/6b code of K.28.y
3b/4b code (fghj)
† For D.x.7, either the Primary (D.x.P7), or the Alternate (D.x.A7) encoding must be selected in order to avoid a run of five consecutive 0s or 1s when combined with the preceding 5b/6b code. Sequences of exactly five identical bits are used in comma symbols for synchronization issues. D.x.A7 is used only
when RD = −1: for x = 17, 18 and 20 and
when RD = +1: for x = 11, 13 and 14.
With x = 23, x = 27, x = 29, and x = 30, the 3b/4b code portion used for control symbols K.x.7 is the same as that for D.x.A7. Any other D.x.A7 code can't be used as it would result in chances for misaligned comma sequences.
‡ Only K.28.1, K.28.5, and K.28.7 generate comma symbols, which contain a bit sequence of five 0s or 1s. The symbol has the format 110000 01xx or 001111 10xx.
Control symbols
The control symbols within 8b/10b are 10b symbols that are valid sequences of bits (no more than six 1s or 0s) but do not have a corresponding 8b data byte. They are used for low-level control functions. For instance, in Fibre Channel, K28.5 is used at the beginning of four-byte sequences (called "Ordered Sets") that perform functions such as Loop Arbitration, Fill Words, Link Resets, etc.
Resulting from the 5b/6b and 3b/4b tables the following 12 control symbols are allowed to be sent:
† Within the control symbols, K.28.1, K.28.5, and K.28.7 are "comma symbols". Comma symbols are used for synchronization (finding the alignment of the 8b/10b codes within a bit-stream). If K.28.7 is not used, the unique comma sequences 00111110 or 11000001 cannot inadvertently appear at any bit position within any combination of normal codes.
‡ If K.28.7 is allowed in the actual coding, a more complex definition of the synchronization pattern than suggested by † needs to be used, as a combination of K.28.7 with several other codes forms a false misaligned comma symbol overlapping the two codes. A sequence of multiple K.28.7 codes is not allowable in any case, as this would result in undetectable misaligned comma symbols.
K.28.7 is the only comma symbol that cannot be the result of a single bit error in the data stream.
Example encoding of D31.1
Technologies that use 8b/10b
After the above-mentioned IBM patent expired, the scheme became even more popular and was chosen as a DC-free line code for several communication technologies.
Among the areas in which 8b/10b encoding finds application are the following:
Aurora
Camera Serial Interface
CoaXPress
Common Public Radio Interface (CPRI)
DVB Asynchronous serial interface (ASI)
DVI and HDMI Video Island (transition-minimized differential signaling)
DisplayPort 1.x
ESCON (Enterprise Systems Connection)
Fibre Channel
Gigabit Ethernet (except for the twisted pair–based 1000BASE-T)
IEEE 1394b (FireWire and others)
InfiniBand
JESD204B
OBSAI RP3 interface
PCI Express 1.x and 2.x
Serial RapidIO
SD UHS-II
Serial ATA
SAS 1.x, 2.x and 3.x
SSA
ServerNet (starting with ServerNet2)
SGMII
UniPro M-PHY
USB 3.0
Thunderbolt 1.x and 2.x
XAUI
SLVS-EC
Fibre Channel (4GFC and 8GFC variants only)
The FC-0 standard defines what encoding scheme is to be used (8b/10b or 64b/66b) in a Fibre Channel system; higher speed variants typically use 64b/66b to optimize bandwidth efficiency (since bandwidth overhead is 20% in 8b/10b versus approximately 3% (~ 2/66) in 64b/66b systems). Thus, 8b/10b encoding is used for 4GFC and 8GFC variants; for 10GFC and 16GFC variants, it is 64b/66b. The Fibre Channel FC1 data link layer is then responsible for implementing the 8b/10b encoding and decoding of signals.
The Fibre Channel 8b/10b coding scheme is also used in other telecommunications systems. Data is expanded using an algorithm that creates one of two possible 10-bit output values for each input 8-bit value. Each 8-bit input value can map either to a 10-bit output value with odd disparity, or to one with even disparity. This mapping is usually done at the time when parallel input data is converted into a serial output stream for transmission over a fibre channel link. The odd/even selection is done in such a way that a long-term zero disparity between ones and zeroes is maintained. This is often called "DC balancing".
The 8-bit to 10-bit conversion scheme uses only 512 of the possible 1024 output values. Of the remaining 512 unused output values, most contain either too many ones (or too many zeroes) and therefore are not allowed. This still leaves enough spare 10-bit odd+even coding pairs to allow for at least 12 special non-data characters.
The codes that represent the 256 data values are called the data (D) codes. The codes that represent the 12 special non-data characters are called the control (K) codes.
All of the codes can be described by stating 3 octal values. This is done with a naming convention of "Dxx.x" or "Kxx.x". (Note that the tables in earlier sections are using decimal, rather than octal, values for Dxx.x or Kxx.x)
Example:
Input Data Bits: ABCDEFGH
Data is split: ABC DEFGH
Data is shuffled: DEFGH ABC
Now these bits are converted to decimal in the way they are paired.
Input data
C3 (HEX) = 11000011
= 110 00011
= 00011 110
= 3 6
⇒ 8B/10B = D03.6
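The splitting and naming step of this worked example can be expressed as a short Python helper; this is only a sketch of the D.x.y naming convention and does not perform the actual 10-bit encoding.

def dxy_name(byte):
    # x is the low five bits (EDCBA, input to the 5b/6b encoder),
    # y is the top three bits (HGF, input to the 3b/4b encoder).
    x = byte & 0x1F
    y = (byte >> 5) & 0x07
    return f"D{x:02d}.{y}"

print(dxy_name(0xC3))   # D03.6, matching the worked example above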
Digital audio
Encoding schemes 8b/10b have found a heavy use in digital audio storage applications, namely
Digital Audio Tape, US Patent 4,456,905, June 1984 by K. Odaka.
Digital Compact Cassette (DCC), US Patent 4,620,311, October 1986 by Kees Schouhamer Immink.
A differing but related scheme is used for audio CDs and CD-ROMs:
Compact disc Eight-to-fourteen modulation
Alternatives
Note that 8b/10b is the encoding scheme, not a specific code. While many applications do use the same code, there exist some incompatible implementations; for example, Transition Minimized Differential Signaling, which also expands 8 bits to 10 bits, but it uses a completely different method to do so.
64b/66b encoding, introduced for 10 Gigabit Ethernet's 10GBASE-R Physical Medium Dependent (PMD) interfaces, is a lower-overhead alternative to 8b/10b encoding, having a two-bit overhead per 64 bits (instead of eight bits) of encoded data. This scheme is considerably different in design from 8b/10b encoding, and does not explicitly guarantee DC balance, short run length, and transition density (these features are achieved statistically via scrambling). 64b/66b encoding has been extended to the 128b/130b and 128b/132b encoding variants for PCI Express 3.0 and USB 3.1, respectively, replacing the 8b/10b encoding in earlier revisions of each standard.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the main purpose of the 8b/10b encoding scheme in telecommunications?
A. To ensure data is transmitted at higher speeds
B. To achieve DC balance and bounded disparity in data transmission
C. To compress the size of the data being transmitted
D. To eliminate the need for error detection
Answer:
|
B. To achieve DC balance and bounded disparity in data transmission
|
Relevant Documents:
Document 0:::
A falling-block action (also known as a sliding-block or dropping-block action) is a single-shot firearm action in which a solid metal breechblock slides vertically in grooves cut into the breech of the weapon and is actuated by a lever.
Description
When the breechblock is in the closed (top) position, it seals the chamber from the high pressures created when the cartridge fires and safely transfers the recoil to the action and stock. When the breechblock is in the opened (bottom) position, the rear (breech) end of the chamber is exposed to allow ejection or extraction of the fired case and reloading of an unfired cartridge. It is a very strong action; when the breech is closed, the receiver essentially becomes a single piece of steel (as opposed to other actions that rely on lugs to lock the breech). This type of action is used on artillery pieces, as well as small arms. An additional advantage is the unobstructed loading path, which imposes no limit on the overall length of a cartridge; this was very significant in the mid to late 19th century period with the use of very long "buffalo" and "express" big-game cartridges.
Rifles using this action include the M1870 Belgian Comblain, M1872 Mylonas, Sharps rifle, Farquharson rifle, 1890 Stevens, Sharps-Borchardt Model 1878, Winchester Model 1885, Browning model 1885, Browning M78, the Ruger No. 1, and the Ruger No. 3. Falling-block action military rifles were common in the 19th century. They were replaced for military use by the faster bolt-action rifles, which were typically reloaded from a magazine holding several cartridges. A falling-block breech-loading rifle was patented in Belgium by J. F. Jobard in 1835 using a unique self-contained cartridge. A falling block pistol was also produced in 1847 by the French gunsmith Gastinne Renette who would file another patent in 1853 in France and through the patent agent Auguste Edouard Loradoux Bellford in Britain for the same system only using self-contained metallic ce
Document 1:::
Raymond Corbett Shannon (October 4, 1894 – March 7, 1945) was an American entomologist who specialised in Diptera and medical entomology.
Life and career
Shannon was born in Washington, D.C. He was orphaned as a child. His studies at Cornell University were interrupted by World War I, but he received his B.S. there in 1923. He was employed by the U.S. Bureau of Entomology from 1912 to 1916 and 1923 to 1925. In 1926, he began graduate studies at George Washington University, and from 1927, he was employed by the International Health Division of the Rockefeller Foundation.
He published over 100 articles on the characteristics, environment, and behavior of insects and on their aspects as disease vectors. One of his discoveries, in 1930, was the arrival of Anopheles gambiae, the mosquito that carries malaria, into the New World.
On his death at the age of 50, he left his library and insect collection to the Smithsonian Institution.
His wife was Elnora Pettit (Sutherlin) Hundley. His son was DePaul University accounting professor Donald Sutherlin Shannon, and his grandson is Academy Award-nominated actor Michael Shannon.
References
Further reading
Document 2:::
Railworthiness is the property or ability of a locomotive, passenger car, freight car, train or any kind of railway vehicle to be in proper operating condition or to meet acceptable safety standards of project, manufacturing, maintenance and railway use for transportation of persons, luggage or cargo.
Railworthiness is the condition of the rail system and its suitability for rail operations in that it has been designed, constructed, maintained and operated to approved standards and limitations by competent and authorised individuals, who are acting as members of an approved organisation and whose work is both certified as correct and accepted on behalf of the rail system owner.
See also
Airworthiness
Anticlimber
Buff strength
Crashworthiness
Cyberworthiness
Roadworthiness
Seaworthiness
Spaceworthiness
References
Document 3:::
Isopeptag is a 16-amino acid peptide tag (TDKDMTITFTNKKDAE) that can be genetically linked to proteins without interfering with protein folding. What makes the isopeptag different from other peptide tags is that it can bind its binding protein through a permanent and irreversible covalent bond. Other peptide tags generally bind their targets through weak non-covalent interactions, thus limiting their use in applications where molecules experience extreme forces. The isopeptag's covalent binding to its target overcomes these barriers and allows target proteins to be studied in harsher molecular environments.
Development
The isopeptag was developed by dissecting the pilin protein (Spy0128) from Streptococcus pyogenes. Spy0128 contains two intramolecular isopeptide bonds, and to generate the isopeptag one of these bonds was split by removing the last β-strand in the protein.
Mode of action
When the isopeptag is bound to a target protein, it spontaneously binds its binding partner through an isopeptide bond, an amide bond formed autocatalytically. The reaction is robust and occurs at various temperatures from 4-37 °C, a pH range of 5–8, and in the presence of commonly used detergents. Also, the reaction is independent of the redox state of the environment and can occur equally well in both reducing and oxidizing conditions.
Applications
The covalent binding of the isopeptag to its binding partner can be used to permanently link proteins together in the complex environment of a bacterial cell, to target proteins of interest for cellular imaging, and to develop new protein structures.
Document 4:::
Inner core super-rotation is a hypothesized eastward rotation of the inner core of Earth relative to its mantle, for a net rotation rate that is usually faster than Earth as a whole. A 1995 model of Earth's dynamo proposed super-rotations of up to 3 degrees per year; the following year, a seismic study claimed that the proposal was supported by observed discrepancies in the time that p-waves take to travel through the inner and outer core. However, the hypothesis of super-rotation is disputed in the later seismic studies.
The seismic support of inner core super-rotations was based on changes in seismic waves that traversed the inner core and on free oscillations of Earth. The results are inconsistent between the studies. A localized temporal change of the inner core surface was discovered in 2006, and this temporal change of the inner core surface also provided an explanation for the seismic evidence that had been attributed to the hypothesis of inner core super-rotations.
Recent studies indicated that a super-rotation of the inner core is inconsistent with the seismic data. Some studies proposed both the inner core super-rotation and localized temporal changes of inner core surface co-exist to consistently explain the seismic data, but other studies indicated that localized temporal changes of the inner core surface alone are enough to explain the seismic data.
Background
At the center of Earth is the core, a ball with a mean radius of 3480 kilometres that is composed mostly of iron. The outer core is liquid while the inner core, with a radius of 1220 km, is solid. Because the outer core has a low viscosity, it could be rotating at a different rate from the mantle and crust. This possibility was first proposed in 1975 to explain a phenomenon of Earth's magnetic field called westward drift: some parts of the field rotate about 0.2 degrees per year westward relative to Earth's surface. In 1981, David Gubbins of Leeds University predicted that a differential rotati
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does the term "railworthiness" refer to in the context of railway vehicles?
A. The aesthetic appeal of railway vehicles
B. The ability of railway vehicles to meet safety and operating standards
C. The efficiency of fuel consumption in locomotives
D. The speed capabilities of passenger trains
Answer:
|
B. The ability of railway vehicles to meet safety and operating standards
|
Relevant Documents:
Document 0:::
Ant Wars is a 1982 board game published by Jason McAllister Games.
Gameplay
Ant Wars is a game in which tribes of ants battle each other for control of the back yard.
Reception
Steve Jackson reviewed Ant Wars in Space Gamer No. 66. Jackson commented that "On the whole [...] it's fun. I can't help it . . . I have to say this: I like it despite the bugs in it. Sorry . . . couldn't resist."
Ken Rolston reviewed Ant Wars in White Wolf #43 (May, 1994) and stated that "Another distinctive virtue of Ant Wars is that is permits gamers who feel vaguely uneasy about playing wargames ('No more war toys!') to enjoy the thrill of military campaigns. They can experience the moral burden of visiting bloodshed and hardships on human populations without the 'human' loss."
References
Document 1:::
Brush Island is a continental island, contained within the Brush Island Nature Reserve, a protected nature reserve. It is known as Mit Island in the Dhurga language of the Murramarang people of the Yuin nation (see http://press-files.anu.edu.au/downloads/press/p326831/html/ch01.xhtml). The island is located off the south coast of New South Wales, Australia. The island and reserve are situated within the Tasman Sea, approximately south-east of the coastal village of Bawley Point.
The island was gazetted as a nature reserve in July 1963 and is important for breeding seabirds. The reserve is listed on Australia’s Register of the National Estate, and has an unmanned lighthouse.
Description
The island lies from the tip of Murramarang Point. It is long, with a maximum width of , and rises to about above sea level. Its shorelines are steep, rocky cliff faces with erosion gullies on the northern side. The gullies are both caused and used by the little penguins whose tracks and burrows cover most of the island.
History
The island was sighted by Captain James Cook on 22 April 1770 during his first voyage to the South Pacific Ocean. Cook had planned to shelter HMS Endeavour between the unnamed island and mainland but was prevented by high seas. Instead Endeavour continued its northward path along the coast, making her first Australian landfall a week later at Botany Bay.
Flora and fauna
The island supports a coastal vegetation cover of herbs, low shrubs and stunted trees, including Carpobrotus glaucescens, Lomandra longifolia, Einadia hastata, Myoporum insulare, Enchylaena tomentosa, Acacia longifolia, Westringia fruticosa, Banksia integrifolia and Casuarina glauca.
Seabird species nesting on the island include the wedge-tailed shearwater, short-tailed shearwater, little penguin and sooty oystercatcher. White-faced storm petrels and sooty shearwaters were found there for the first time in 2008.
Rat eradication
The island became infested with black rats in 1932 after a steamer
Document 2:::
HD 221420 (HR 8935; Gliese 4340) is a likely binary star system in the southern circumpolar constellation Octans. It has an apparent magnitude of 5.81, allowing it to be faintly seen with the naked eye. The object is relatively close at a distance of 102 light years but is receding with a heliocentric radial velocity of .
HD 221420 has a stellar classification of G2 IV-V, indicating a solar analogue with a luminosity class intermediate between a subgiant and a main sequence star. The object is also extremely chromospherically inactive. It has a comparable mass to the Sun and a diameter of . It shines with a luminosity of from its photosphere at an effective temperature of , giving a yellow glow. HD 221420 is younger than the Sun at 3.65 billion years. Despite this, the star is already beginning to evolve off the main sequence. Like most planetary hosts, HD 221420 has a metallicity over twice that of the Sun and spins modestly with a projected rotational velocity .
There is a mid-M-dwarf star with a similar proper motion and parallax to HD 221420, which is likely gravitationally bound to it. The two stars are separated by 698 arcseconds, corresponding to a distance of .
Planetary system
In a 2019 doppler spectroscopy survey, an exoplanet was discovered orbiting the star. The planet was originally thought to be a super Jupiter, having a minimum mass of . However, later observations using Hipparcos and Gaia astrometry found it to be a brown dwarf with a high-inclination orbit, revealing a true mass of .
References
Document 3:::
The convective overturn model of supernovae was proposed by Bethe and Wilson in 1985, and received a dramatic test with SN 1987A, and the detection of neutrinos from the explosion. The model is for type II supernovae, which take place in stars more massive than 8 solar masses.
When the iron core of a massive star becomes heavier than electron degeneracy pressure can support, the core of the star collapses, and the iron core is compressed by gravity until nuclear densities are reached, when a strong rebound sends a shock wave throughout the rest of the star and tears it apart in a large supernova explosion. The remains of this core will eventually become a neutron star. The collapse produces two reactions: one breaks apart iron nuclei into 13 helium nuclei and 4 neutrons, absorbing energy; the second produces a wave of neutrinos that form a shock wave. While all models agree there is a convective shock, there is disagreement as to how important that shock is to the supernova explosion.
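In symbols, the photodisintegration step alluded to above is conventionally written as follows (a standard textbook form, not a quotation from this article's sources):

```latex
% Photodisintegration of iron-56 in the collapsing core: an endothermic reaction
% that drains energy from the core (nucleon count is conserved: 13*4 + 4 = 56).
^{56}\mathrm{Fe} + \gamma \;\longrightarrow\; 13\,^{4}\mathrm{He} + 4\,n
```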
In the convective overturn model, the core collapses faster and faster, exceeding the speed of sound inside the star, and producing a supersonic shock wave. This shock wave explodes outward until it stalls when it reaches the neutrinosphere, where the pressure of the star collapsing inward exceeds the pressure of the neutrinos radiating outwards. This point produces heavier elements as the neutrinos are absorbed.
The stalling of the shock wave represents the supernova problem, because once stalled, the shock wave should not be "reenergized". The prompt convection model states that the shock wave will increase the luminosity of the neutrinos produced by the core collapse, and this increase in energy will start the shock wave again. The neutron fingers model has instability near the core expel another wave of energized neutrinos which reenergizes the shock wave. The entropy convection model has matter falling inward from above the shock layer down to the gain radius, which wou
Document 4:::
OSSEC (Open Source HIDS SECurity) is a free, open-source host-based intrusion detection system (HIDS). It performs log analysis, integrity checking, Windows registry monitoring, rootkit detection, time-based alerting, and active response. It provides intrusion detection for most operating systems, including Linux, OpenBSD, FreeBSD, OS X, Solaris and Windows. OSSEC has a centralized, cross-platform architecture allowing multiple systems to be easily monitored and managed. OSSEC has a log analysis engine that is able to correlate and analyze logs from multiple devices and formats.
History
In June 2008, the OSSEC project and all the copyrights owned by Daniel B. Cid, the project leader, were acquired by Third Brigade, Inc. They promised to continue to contribute to the open source community and to extend commercial support and training to the OSSEC open source community.
In May 2009, Trend Micro acquired Third Brigade and the OSSEC project, with promises to keep it open source and free.
In 2018, Trend released the domain name and source code to the OSSEC Foundation.
The OSSEC project is currently maintained by Atomicorp, which stewards the free and open source version and also offers a commercial version.
Characteristics
OSSEC consists of a main application, an agent, and a web interface.
Manager (or server), which is required for distributed network or stand-alone installations.
Agent, a small program installed on the systems to be monitored.
Agentless mode, which can be used to monitor firewalls, routers, and even Unix systems.
Features
Log based Intrusion Detection (LID): Actively monitors and analyzes data from multiple log data points in real-time.
Rootkit and Malware Detection: Process and file level analysis to detect malicious applications and rootkits.
Active Response: Respond to attacks and changes on the system in real time through multiple mechanisms including firewall policies, integration with third parties such as CDNs and support portals, as well
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the apparent magnitude of the star HD 221420, which allows it to be faintly seen with the naked eye?
A. 4.50
B. 5.81
C. 6.30
D. 7.25
Answer:
|
B. 5.81
|
Relevant Documents:
Document 0:::
The Doctrine of Chances was the first textbook on probability theory, written by 18th-century French mathematician Abraham de Moivre and first published in 1718. De Moivre wrote in English because he resided in England at the time, having fled France to escape the persecution of Huguenots. The book's title came to be synonymous with probability theory, and accordingly the phrase was used in Thomas Bayes' famous posthumous paper An Essay Towards Solving a Problem in the Doctrine of Chances, wherein a version of Bayes' theorem was first introduced.
Editions
The full title of the first edition was The Doctrine of Chances: or, A Method of Calculating the Probability of Events in Play. It was published in 1718, by W. Pearson, and ran for 175 pages.
Published in 1738 by Woodfall and running for 258 pages, the second edition of de Moivre's book introduced the concept of normal distributions as approximations to binomial distributions. In effect de Moivre proved a special case of the central limit theorem. Sometimes his result is called the theorem of de Moivre–Laplace.
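In modern notation (not de Moivre's own; he treated the symmetric case p = 1/2, with the general statement now known as the de Moivre–Laplace theorem), the approximation reads:

```latex
% Normal approximation to the binomial distribution (de Moivre–Laplace theorem):
% for large n, the binomial probability mass is close to a Gaussian density.
\binom{n}{k} p^k (1-p)^{n-k}
  \;\approx\;
  \frac{1}{\sqrt{2\pi n p (1-p)}}
  \exp\!\left( -\frac{(k - np)^2}{2 n p (1-p)} \right)
```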
A third edition was published posthumously in 1756 by A. Millar, and ran for 348 pages; additional material in this edition included an application of probability theory to actuarial science in the calculation of annuities.
References
Further reading
.
External links
The third edition of The Doctrine of Chances.
Full text of “The Doctrine of Chances”, 1st edition; from books.google.com
The Doctrine of Chance at MathPages
Document 1:::
The Atrato River () is a river of northwestern Colombia. It rises in the slopes of the Western Cordillera and flows almost due north to the Gulf of Urabá (or Gulf of Darién), where it forms a large, swampy delta. Its course crosses the Chocó Department, forming that department's border with neighboring Antioquia in two places. Its total length is about , and it is navigable as far as Quibdó (400 km / 250 mi), the capital of the department.
In 2016, the Constitutional Court of Colombia granted the river legal rights of personhood after years of degradation of the river basin from large-scale mining and illegal logging practices, which severely impacted the traditional ways of life for Afro-Colombians and Indigenous people.
Drainage area
The basin occupies an area of and has an average annual precipitation of >5,000 mm/year that reaches up to 12,000 mm/year in the upper basin.
Flowing through a narrow valley between the Cordillera and coastal range, it has only short tributaries, the principal ones being the Truandó, the Sucio, and the Murrí rivers.
The gold and platinum mines of Chocó lie along some of its tributaries, and the river sands are auriferous. Mining and its toxic leavings have adversely affected river and environmental quality, damaging habitat for many species and affecting the ethnic groups, the predominantly Afro-Colombian and Native American indigenous peoples who live along the river. The river is one of the few ways to move around in the Chocó region.
Wildlife
Northwestern Colombia encompasses an area of great diversity in wildlife. During the Pleistocene era at the height of the Atrato river, where it intersected the Cauca-Magdalena, the area was covered by a sea. It is proposed that this created a geographic barrier that may have caused many species to diverge through the process of allopatric speciation. For example, Philip
Document 2:::
A Time/Utility Function (TUF), née Time/Value Function, specifies the application-specific utility that an action (e.g., computational task, mechanical movement) yields depending on its completion time. TUFs and their utility interpretations (semantics), scales, and values are derived from application domain-specific subject matter knowledge. An example (but not the only) interpretation of utility is an action's relative importance, which otherwise is independent of its timeliness. The traditional deadline represented as a TUF is a special case—a downward step of utility from 1 to 0 at the deadline time—e.g., timeliness without importance. A TUF is more general—it has a critical time, with application-specific shapes and utility values on each side, after which it does not increase. The various researcher and practitioner definitions of firm and soft real-time can also be represented as special cases of the TUF model.
The optimality criterion for scheduling multiple TUF-constrained actions has historically in the literature been only maximal utility accrual (UA)—e.g., a (perhaps expected) weighted sum of the individual actions' completion utilities. This thus takes into account timeliness with respect to critical times. Additional criteria (e.g., energy, predictability), constraints (e.g., dependencies), system models, scheduling algorithms, and assurances have been added as the TUF/UA paradigm and its use cases have evolved. More expressively, TUF/UA allows accrued utility, timeliness, predictability, and other scheduling criteria and constraints to be traded off against one another for the schedule to yield situational application QoS—as opposed to only timeliness per se. Instances of the TUF/UA paradigm have been employed in a wide variety of application domains, most frequently in military systems.
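As a rough illustration of these ideas (an invented sketch, not code from the TUF literature; all names and utility shapes below are hypothetical), a classical deadline and a "soft" critical time can both be expressed as TUFs, and accrued utility is then just a sum over completion times:

```python
# Illustrative sketch only: a classical deadline expressed as a time/utility
# function (TUF), plus a simple accrued-utility (UA) sum over completed actions.
from typing import Callable

TUF = Callable[[float], float]  # maps a completion time to a utility value

def deadline_tuf(deadline: float, utility: float = 1.0) -> TUF:
    """Downward-step TUF: full utility before the deadline, zero after it."""
    return lambda t: utility if t <= deadline else 0.0

def soft_tuf(critical_time: float, decay: float) -> TUF:
    """A 'soft' TUF: utility decays linearly after the critical time, never below zero."""
    return lambda t: 1.0 if t <= critical_time else max(0.0, 1.0 - decay * (t - critical_time))

def accrued_utility(completions: list[tuple[TUF, float]]) -> float:
    """Utility accrual: sum each action's TUF evaluated at its completion time."""
    return sum(tuf(t) for tuf, t in completions)

# Two actions: a hard deadline at t=10 met at t=8, and a soft critical time at t=5 missed by 4.
print(accrued_utility([(deadline_tuf(10.0), 8.0), (soft_tuf(5.0, 0.1), 9.0)]))  # 1.6
```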
Time/Utility Functions
The TUF/UA paradigm was originally created to address certain action timeliness, predictability of timeliness, and application QoS-based sc
Document 3:::
Calcium-activated potassium channel subunit beta-4 is a protein that in humans is encoded by the KCNMB4 gene.
MaxiK channels are large conductance, voltage and calcium-sensitive potassium channels which are fundamental to the control of smooth muscle tone and neuronal excitability. MaxiK channels can be formed by 2 subunits: the pore-forming alpha subunit and the modulatory beta subunit. The protein encoded by this gene is an auxiliary beta subunit which slows activation kinetics, leads to steeper calcium sensitivity, and shifts the voltage range of current activation to more negative potentials than does the beta 1 subunit.
See also
BK channel
Voltage-gated potassium channel
References
Further reading
Document 4:::
User-Managed Access (UMA) is an OAuth-based access management protocol standard for party-to-party authorization. Version 1.0 of the standard was approved by the Kantara Initiative on March 23, 2015.
As described by the charter of the group that developed UMA, the purpose of the protocol specifications is to “enable a resource owner to control the authorization of data sharing and other protected-resource access made between online services on the owner’s behalf or with the owner’s authorization by an autonomous requesting party”. This purpose has privacy and consent implications for web applications and the Internet of Things (IoT), as explored by the collection of case studies contributed by participants in the standards group.
Comparison to OAuth 2.0
UMA's architecture diagram highlights key additions that UMA makes to OAuth 2.0.
In a typical OAuth flow:
A resource owner (RO), a human who uses a client application, is redirected to an authorization server (AS) to log in and consent to the issuance of an access token. This access token allows the client application to gain API access to the resource server (RS) on the resource owner's behalf in the future, likely in a scoped (limited) fashion. The resource server and authorization server most likely operate within the same security domain, and communication between them is not necessarily standardized by the main OAuth specification.
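A minimal sketch of the token step in that flow is shown below (the endpoints, client credentials, and authorization code are placeholders, not a real deployment; the parameters follow the OAuth 2.0 authorization-code grant of RFC 6749):

```python
# Illustrative sketch of the OAuth 2.0 authorization-code exchange (RFC 6749, sec. 4.1.3).
# Every URL, identifier, secret, and code value below is a hypothetical placeholder.
import requests

token_response = requests.post(
    "https://as.example.com/oauth/token",        # authorization server's token endpoint
    data={
        "grant_type": "authorization_code",
        "code": "SplxlOBeZQQYbYS6WxSbIA",        # code issued after the resource owner consented
        "redirect_uri": "https://client.example.com/cb",
        "client_id": "s6BhdRkqt3",
    },
    auth=("s6BhdRkqt3", "client-secret"),        # client authentication
    timeout=10,
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# The client may now call the resource server's API on the resource owner's behalf.
api_response = requests.get(
    "https://rs.example.com/api/resource",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
print(api_response.status_code)
```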
User-Managed Access adds three main concepts and corresponding structures and flows:
Protection API: UMA defines a standardized Protection API for the authorization servers with which resource servers communicate about data security. This API enables multiple resource servers to communicate with one authorization server and vice versa. Because the Protection API is itself secured with OAuth, it allows for formal trust establishment between each pair. This also allows an authorization server to present a centralized user interface for resource owners.
Requesting Party (RqP) UM
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is a key feature of von Neumann algebras with a faithful normal tracial state?
A. They are always commutative.
B. They do not support conditional expectations.
C. They are particularly useful for defining conditional expectations.
D. They can only be defined in classical probability.
Answer:
|
C. They are particularly useful for defining conditional expectations.
|
Relevant Documents:
Document 0:::
A poison ring or pillbox ring is a type of ring with a container under the bezel, or inside the bezel itself, which could be used to hold poison or another substance; such rings became popular in Western Europe during the Middle Ages. The poison ring could be used to slip poison into an enemy's food or drink, with a powder or liquid poison stored in the compartment. In other cases, the poison ring was used to facilitate the suicide of the wearer in order to preclude capture or torture; deaths associated with poison rings were more commonly suicides than murders. The compartment was not limited to poison: rings with such compartments were used for other purposes before, during, and after the peak of the poison ring's popularity.
Other names and uses
There were many uses for such rings. A very popular use was to store perfume, special items, talismans, keepsakes or small portraits. People would even store the teeth, hair, and bones of the dead, especially of saints or martyrs, because these relics were believed to protect the wearer and ward off misfortune, and even to create a bond with God. Carrying holy relics was believed to bring happiness and good health, and to keep the wearer in the good graces of God. These rings were also known as locket, socket, compartment, box, and funeral rings.
The difficulties of using the poison ring
The compartments in poison rings are relatively small, so a highly potent poison was needed to end a person's life. Carrying things in rings was common, but making a poison deadly enough to kill from just a drop was beyond most people. Because producing such a lethal dose in so small a quantity was no easy task, it is speculated that there were probably few deaths from poison rings.
Known victims
Carthaginian General Hannibal committed suicide in order to avoid capture by Roman soldiers. Another victim of the poison ring was Marquis de Condorcet. H
Document 1:::
In astrodynamics, the orbital eccentricity of an astronomical object is a dimensionless parameter that determines the amount by which its orbit around another body deviates from a perfect circle. A value of 0 is a circular orbit, values between 0 and 1 form an elliptic orbit, 1 is a parabolic escape orbit (or capture orbit), and greater than 1 is a hyperbola. The term derives its name from the parameters of conic sections, as every Kepler orbit is a conic section. It is normally used for the isolated two-body problem, but extensions exist for objects following a rosette orbit through the Galaxy.
Definition
In a two-body problem with inverse-square-law force, every orbit is a Kepler orbit. The eccentricity of this Kepler orbit is a non-negative number that defines its shape.
The eccentricity may take the following values:
Circular orbit: $e = 0$
Elliptic orbit: $0 < e < 1$
Parabolic trajectory: $e = 1$
Hyperbolic trajectory: $e > 1$
The eccentricity is given by
$$e = \sqrt{1 + \frac{2 E L^2}{m_\mathrm{red}\,\alpha^2}}$$
where $E$ is the total orbital energy, $L$ is the angular momentum, $m_\mathrm{red}$ is the reduced mass, and $\alpha$ the coefficient of the inverse-square law central force such as in the theory of gravity or electrostatics in classical physics:
$$F = \frac{\alpha}{r^2}$$
($\alpha$ is negative for an attractive force, positive for a repulsive one; related to the Kepler problem)
or in the case of a gravitational force:
$$e = \sqrt{1 + \frac{2 \varepsilon h^2}{\mu^2}}$$
where $\varepsilon$ is the specific orbital energy (total energy divided by the reduced mass), $\mu$ the standard gravitational parameter based on the total mass, and $h$ the specific relative angular momentum (angular momentum divided by the reduced mass).
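As a quick numerical sketch of the gravitational form (the satellite state below is invented for illustration):

```python
# Sketch: eccentricity from specific orbital energy (epsilon), specific relative
# angular momentum (h), and the standard gravitational parameter (mu).
# The orbit is a made-up Earth satellite at perigee with purely tangential velocity.
import math

mu = 3.986004418e14    # m^3/s^2, standard gravitational parameter of Earth
r = 7.0e6              # m, current distance from Earth's centre
v = 9.0e3              # m/s, current speed (perpendicular to the radius vector here)

epsilon = v**2 / 2 - mu / r                    # specific orbital energy
h = r * v                                      # specific relative angular momentum
e = math.sqrt(1 + 2 * epsilon * h**2 / mu**2)
print(f"e = {e:.3f}")                          # about 0.42: an elliptic orbit
```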
For values of $e$ from 0 to just under 1 the orbit's shape is an increasingly elongated (or flatter) ellipse; for values of $e$ just over 1 to infinity the orbit is a hyperbola branch making a total turn of $2\operatorname{arccsc}(e)$, decreasing from 180 to 0 degrees. Here, the total turn is analogous to turning number, but for open curves (an angle covered by velocity vector). The limit case between an ellipse and a hyperbola, when $e$ equals 1, is a parabola.
Radial trajectories are
Document 2:::
Leslie B. Lamport (born February 7, 1941) is an American computer scientist and mathematician. Lamport is best known for his seminal work in distributed systems, and as the initial developer of the document preparation system LaTeX and the author of its first manual.
Lamport was the winner of the 2013 Turing Award for imposing clear, well-defined coherence on the seemingly chaotic behavior of distributed computing systems, in which several autonomous computers communicate with each other by passing messages. He devised important algorithms and developed formal modeling and verification protocols that improve the quality of real distributed systems. These contributions have resulted in improved correctness, performance, and reliability of computer systems.
Early life and education
Lamport was born into a Jewish family in Brooklyn, New York, the son of Benjamin and Hannah Lamport (née Lasser). His father was an immigrant from Volkovisk in the Russian Empire (now Vawkavysk, Belarus) and his mother was an immigrant from the Austro-Hungarian Empire, now southeastern Poland.
A graduate of Bronx High School of Science, Lamport received a B.S. in mathematics from the Massachusetts Institute of Technology in 1960, followed by M.A. (1963) and Ph.D. (1972) degrees in mathematics from Brandeis University. His dissertation, The analytic Cauchy problem with singular data, is about singularities in analytic partial differential equations.
Career and research
Lamport worked as a computer scientist at Massachusetts Computer Associates from 1970 to 1977, Stanford Research Institute (SRI International) from 1977 to 1985, and Digital Equipment Corporation and Compaq from 1985 to 2001. In 2001 he joined Microsoft Research in California, and he retired in January 2025.
Distributed systems
Lamport's research contributions have laid the foundations of the theory of distributed systems. Among his most notable papers are
"Time, Clocks, and the Ordering of Events in a Distributed System
Document 3:::
"Why Most Published Research Findings Are False" is a 2005 essay written by John Ioannidis, a professor at the Stanford School of Medicine, and published in PLOS Medicine. It is considered foundational to the field of metascience.
In the paper, Ioannidis argued that a large number, if not the majority, of published medical research papers contain results that cannot be replicated. In simple terms, the essay states that scientists use hypothesis testing to determine whether scientific discoveries are significant. Statistical significance is formalized in terms of probability, with its p-value measure being reported in the scientific literature as a screening mechanism. Ioannidis posited assumptions about the way people perform and report these tests; then he constructed a statistical model which indicates that most published findings are likely false positive results.
While the general arguments in the paper recommending reforms in scientific research methodology were well-received, Ioannidis received criticism for the validity of his model and his claim that the majority of scientific findings are false. Responses to the paper suggest lower false positive and false negative rates than what Ioannidis puts forth.
Argument
Suppose that in a given scientific field there is a known baseline probability that a result is true, denoted by $P(\text{True})$. When a study is conducted, the probability that a positive result is obtained is $P(+)$. Given these two factors, we want to compute the conditional probability $P(\text{True}\mid+)$, which is known as the positive predictive value (PPV). Bayes' theorem allows us to compute the PPV as:
$$P(\text{True}\mid+) = \frac{(1-\beta)\,P(\text{True})}{(1-\beta)\,P(\text{True}) + \alpha\,[1 - P(\text{True})]}$$
where $\alpha$ is the type I error rate (false positives) and $\beta$ is the type II error rate (false negatives); the statistical power is $1-\beta$. It is customary in most scientific research to desire $\alpha = 0.05$ and $\beta = 0.2$. If we assume a value of $P(\text{True})$ for a given scientific field, then we may compute the PPV for different values of $\alpha$ and $\beta$:
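A small sketch of that computation (the prior probabilities below are arbitrary illustrative values, not Ioannidis's figures):

```python
# Sketch: positive predictive value (PPV) of a "significant" result for a few assumed
# prior probabilities that the tested relationship is true.
ALPHA, BETA = 0.05, 0.20   # conventional type I error rate and type II error rate (power 0.8)

def ppv(prior: float, alpha: float = ALPHA, beta: float = BETA) -> float:
    power = 1 - beta
    return power * prior / (power * prior + alpha * (1 - prior))

for prior in (0.5, 0.1, 0.01):
    print(f"P(True) = {prior:>4}: PPV = {ppv(prior):.2f}")
# roughly 0.94, 0.64 and 0.14 respectively: the rarer true effects are in a field,
# the less a single "positive" finding is worth.
```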
However, the simple formula for PPV derived from Bayes' theorem does not account for bias in
Document 4:::
Automated medical scribes (also called artificial intelligence scribes, AI scribes, digital scribes, virtual scribes, ambient AI scribes, AI documentation assistants, and digital/virtual/smart clinical assistants) are tools for transcribing medical speech, such as patient consultations and dictated medical notes. Many also produce summaries of consultations. Automated medical scribes based on Large Language Models (LLMs, commonly called "AI", short for "artificial intelligence") increased drastically in popularity in 2024. There are privacy and antitrust concerns. Accuracy concerns also exist, and intensify in situations in which tools try to go beyond transcribing and summarizing, and are asked to format information by its meaning, since LLMs do not deal well with meaning (see weak artificial intelligence). Medics using these scribes are generally expected to understand the ethical and legal considerations, and supervise the outputs.
The privacy protections of automated medical scribes vary widely. While it is possible to do all the transcription and summarizing locally, with no connection to the internet, most closed-source providers require that data be sent to their own servers over the internet, processed there, and the results sent back (as with digital voice assistants). Some retailers say their tools use zero-knowledge encryption (meaning that the service provider can't access the data). Others explicitly say that they use patient data to train their AIs, or rent or resell it to third parties; the nature of privacy protections used in such situations is unclear, and they are likely not to be fully effective.
Most providers have not published any safety or utility data in academic journals, and are not responsive to requests from medical researchers studying their products.
Privacy
Some providers are unclear about what happens to user data. Some may sell data to third parties. Some explicitly send user data to for-profit tech companies for secondary purposes,
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary habitat where Amanita silvicola is typically found?
A. Desert regions
B. Coniferous woods
C. Grasslands
D. Urban areas
Answer:
|
B. Coniferous woods
|
Relevant Documents:
Document 0:::
Protected computers is a term used in Title 18, Section 1030 of the United States Code, (the Computer Fraud and Abuse Act) which prohibits a number of different kinds of conduct, generally involving unauthorized access to, or damage to the data stored on, "protected computers". The statute, as amended by the National Information Infrastructure Protection Act of 1996, defines "protected computers" (formerly known as "federal interest computers") as:
a computer—
(A) exclusively for the use of a financial institution or the United States Government, or, in the case of a computer not exclusively for such use, used by or for a financial institution or the United States Government and the conduct constituting the offense affects that use by or for the financial institution or the Government; or
(B) which is used in interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States.
The law prohibits unauthorized obtaining of "information from any protected computer if the conduct involved an interstate or foreign communication," and makes it a felony to intentionally transmit malware to a protected computer if more than $5000 in damage (such as to the integrity of data) were to result.
Scope
The US Justice Department explains:
In the 1994 amendments (of the National Information Infrastructure Act), the reach of this subsection (E. Subsection 1030(a)(5)) was broadened by replacing the term "federal interest computer" with the term "computer used in interstate commerce or communications." The latter term is broader because the old definition of "federal interest computer" in 18 U.S.C. § 1030(e)(2)(B) covered a computer "which is one of two or more computers used in committing the offense, not all of which are located in the same State." This meant that a hacker who attacked other computers in the same state was not subject to federal jurisdiction, even when these actions may have severely affected interstate or foreign commerce. For example, individuals who attack telephone switches may disrupt interstate and foreign calls. The 1994 change remedied that defect.
However, the definition of federal interest computer actually covered more than simply interstate activity. More specifically, 18 U.S.C. § 1030(e)(2)(A) covered, generically, computers belonging to the United States Government or financial institutions, or those used by such entities on a non-exclusive basis if the conduct constituting the offense affected the Government's operation or the financial institution's operation of such computer. By changing § 1030(a)(5) from "federal interest computer" to "computer used in interstate commerce or communications," Congress may have inadvertently eliminated federal protection for those government and financial institution computers not used in interstate communications. For example, the integrity and availability of classified information contained in an intrastate local area network may not have been protected under the 1994 version of 18 U.S.C. § 1030(a)(5), although its confidentiality continued to be protected under 18 U.S.C. § 1030(a)(1). To remedy this situation in the 1996 Act, 18 U.S.C. § 1030(a)(5) was redrafted to cover any "protected computer," a new term defined in § 1030(e)(2) and used throughout the new statute--in § 1030(a)(5), as well as in §§ 1030(a)(2), (a)(4), and the new (a)(7). The definition of "protected computer" includes government computers, financial institution computers, and any computer "which is used in interstate or foreign commerce or communications."
This broad definition addresses the original concerns regarding intrastate "phone phreakers" (i.e., hackers who penetrate telecommunications computers). It also specifically includes those computers used in "foreign" communications. With the continually expanding global information infrastructure, with numerous instances of international hacking, and with the growing possibility of increased global industrial espionage, it is important that the United States have jurisdiction over international computer crime cases. Arguably, the old definition of "federal interest computer" contained in 18 U.S.C. § 1030(e)(2) conferred such jurisdiction because the requirement that the computers used in committing the offense not all be located in the same state might be satisfied if one computer were located overseas. As a general rule, however, Congress's laws have been presumed to be domestic in scope only, absent a specific grant of extraterritorial jurisdiction. E.E.O.C. v. Arabian American Oil Co., 499 U.S. 244 (1991). To ensure clarity, the statute was amended to reference international communications explicitly.
See also
Computer crime
Computer trespass
Defense Intelligence Agency (DIA)
FBI
Immigration and Customs Enforcement
Interpol
National Information Infrastructure Protection Act
United States v. Swartz
United States Secret Service
References
External links
Cornell Law posting of US Code 18 § 1030
U. S. Department of Justice Computer Crime Policy & Programs
Document 1:::
The DARPA Agent Markup Language (DAML) was the name of a US funding program at the US Defense Advanced Research Projects Agency (DARPA) started in 1999 by then-Program Manager James Hendler, and later run by Murray Burke, Mark Greaves and Michael Pagels. The program focused on the creation of machine-readable representations for the Web.
One of the Investigators working on the program was Tim Berners-Lee. Working with the program managers and other participants, Tim helped shape the effort to create technologies and demonstrations for what is now called the Semantic Web, leading in turn to the growth of knowledge graph technology.
A primary outcome of the DAML program was the DAML language, an agent markup language based on RDF. This language was followed by an extension entitled DAML+OIL which included researchers outside of the DARPA program in the design. The 2002 submission of the DAML+OIL language to the World Wide Web Consortium (W3C) captures the work done by DAML contractors and the EU/U.S. ad hoc Joint Committee on Agent Markup Languages. This submission was the starting point for the language (later called OWL) to be developed by W3C's web ontology working group, WebOnt.
DAML+OIL was a syntax, layered on RDF and XML, that could be used to describe sets of facts making up an ontology.
DAML+OIL had its roots in three main languages - DAML, as described above, OIL (Ontology Inference Layer) and SHOE, an earlier US research project.
A major innovation of the languages was to use RDF and XML for a basis, and to use RDF namespaces to organize and assist with the integration of arbitrarily many different and incompatible ontologies.
Articulation ontologies can link these competing ontologies through codification of analogous subsets from a neutral point of view, as is done in Wikipedia.
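As a loose, modern illustration of the namespace idea (using the OWL vocabulary that succeeded DAML+OIL and the Python rdflib library; the namespace URIs and class names below are hypothetical):

```python
# Loose illustration: two tiny vocabularies in separate RDF namespaces combined in
# one graph, with an "articulation" statement linking them.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

ZOO = Namespace("http://example.org/zoo#")
PETS = Namespace("http://example.org/pets#")

g = Graph()
g.bind("zoo", ZOO)
g.bind("pets", PETS)

g.add((ZOO.Animal, RDF.type, OWL.Class))
g.add((PETS.Dog, RDF.type, OWL.Class))
g.add((PETS.Dog, RDFS.subClassOf, ZOO.Animal))  # links the two independent vocabularies

print(g.serialize(format="xml"))  # RDF/XML, the syntax layer DAML+OIL also built on
```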
Current ontology research derived in part from DAML is leading toward the expression of ontologies and rules for reasoning and action.
Much of the work in DAML
Document 2:::
λ Lyrae, Latinized as Lambda Lyrae, is a suspected binary star system in the northern constellation of Lyra. It is an orange-hued point of light that is dimly visible to the naked eye with an apparent visual magnitude of 4.94. The system is located approximately 1,300 light years distant from the Sun based on parallax, but is drifting closer with a radial velocity of −17.7 km/s.
The visible component is an aging giant star with a stellar classification of K2.5III:Ba0.5 and an estimated age of around 58 million years. The suffix notation indicates this is a mild barium star and hence it may have a white dwarf companion, while the colon indicates there is some uncertainty about the class. Having exhausted the supply of hydrogen at its core, it has cooled and expanded off the main sequence, and now has 105 times the radius of the Sun. The star has six times the mass of the Sun and is radiating over 4,300 times the Sun's luminosity from its swollen photosphere at an effective temperature of 4,361 K.
References
Document 3:::
AnyBand () is a K-pop EP released by the South Korean project group AnyBand. It featured three songs: "Talk Play Love," "Promise U," and "Daydream."
AnyBand is a promotional band formed by Samsung's cell phone brand Anycall to promote its new "AnyBand" cell phones. The group consists of BoA, Xiah Junsu, Tablo, and jazz pianist Jin Bora.
Background
Anyband is the fourth Anycall music drama from Samsung, a series of music videos and commercials that promotes their cellphones with popular celebrities. This is the fourth drama/commercial following Anymotion, Anyclub, and Anystar which have featured Lee Hyori and many famous male celebrities. This is the first time since the start of this series that Samsung is working with another main model, going from Lee Hyori to BoA.
The motto for this campaign is "Talk Play Love." While most people believe that technology isolates people, Samsung opted to promote the concept that technology unites people.
Concept
The nine-minute commercial was shot in Brazil, depicting a city controlled by a Big Brother-type figurehead that constantly watches over its oppressed citizens. The law is for "no talk, no play, no love," and the citizens are strictly controlled and monitored by their government. However, four individuals form a secret alliance to challenge the government's power. Their motto is "Talk, Play, Love," and they keep in contact with each other through the use of their Anycall cellphones, a possession that is closely guarded as it is illegal to possess cellphones. Escaping the police and security, they infiltrate the government's computer system and broadcast their message through their song TPL (Talk, Play, Love), also recorded using their Anycall cellphones. They later go out in the open on top of the city's high rises to send a live message through their music, "Promise U." The people follow and rise, and the oppressive government in the end is defeated.
Performances
Anyband Con
Document 4:::
The Coma Cluster (Abell 1656) is a large cluster of galaxies that contains over 1,000 identified galaxies. Along with the Leo Cluster (Abell 1367), it is one of the two major clusters comprising the Coma Supercluster. It is located in and takes its name from the constellation Coma Berenices.
The cluster's mean distance from Earth is 99 Mpc (321 million light years). Its ten brightest spiral galaxies have apparent magnitudes of 12–14 that are observable with amateur telescopes larger than 20 cm. The central region is dominated by two supergiant elliptical galaxies: NGC 4874 and NGC 4889. The cluster is within a few degrees of the north galactic pole on the sky. Most of the galaxies that inhabit the central portion of the Coma Cluster are ellipticals. Both dwarf and giant ellipticals are found in abundance in the Coma Cluster.
Cluster members
As is usual for clusters of this richness, the galaxies are overwhelmingly elliptical and S0 galaxies, with only a few spirals of younger age, and many of them probably near the outskirts of the cluster.
The full extent of the cluster was not understood until it was more thoroughly studied in the 1950s by astronomers at Mount Palomar Observatory, although many of the individual galaxies in the cluster had been identified previously.
Dark matter
The Coma Cluster is one of the first places where observed gravitational anomalies were considered to be indicative of unobserved mass. In 1933 Fritz Zwicky showed that the galaxies of the Coma Cluster were moving too fast for the cluster to be bound together by the visible matter of its galaxies. Though the idea of dark matter would not be accepted for another fifty years, Zwicky wrote that the galaxies must be held together by "dunkle Materie" (dark matter).
About 90% of the mass of the Coma cluster is believed to be in the form of dark matter. The distribution of dark matter throughout the cluster, however, is poorly constrained.
X-ray source
An extended X-ray source centered a
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the term used in Title 18, Section 1030 of the United States Code to refer to computers that are protected under the Computer Fraud and Abuse Act?
A. Federal interest computers
B. Protected computers
C. Government computers
D. Financial institution computers
Answer:
|
B. Protected computers
|
Relevant Documents:
Document 0:::
In fluid dynamics, the entrance length is the distance a flow travels after entering a pipe before the flow becomes fully developed. Entrance length refers to the length of the entry region, the area following the pipe entrance where effects originating from the interior wall of the pipe propagate into the flow as an expanding boundary layer. When the boundary layer expands to fill the entire pipe, the developing flow becomes a fully developed flow, where flow characteristics no longer change with increased distance along the pipe. Many different entrance lengths exist to describe a variety of flow conditions. Hydrodynamic entrance length describes the formation of a velocity profile caused by viscous forces propagating from the pipe wall. Thermal entrance length describes the formation of a temperature profile. Awareness of entrance length may be necessary for the effective placement of instrumentation, such as fluid flow meters.
Hydrodynamic entrance length
The hydrodynamic entrance region refers to the area of a pipe where fluid entering a pipe develops a velocity profile due to viscous forces propagating from the interior wall of a pipe. This region is characterized by a non-uniform flow. The fluid enters a pipe at a uniform velocity, then fluid particles in the layer in contact with the surface of the pipe come to a complete stop due to the no-slip condition. Due to viscous forces within the fluid, the layer in contact with the pipe surface resists the motion of adjacent layers and slows adjacent layers of fluid down gradually, forming a velocity profile. For the conservation of mass to hold true, the velocity of layers of the fluid in the center of the pipe increases to compensate for the reduced velocities of the layers of fluid near the pipe surface. This develops a velocity gradient across the cross-section of the pipe.
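As a rough numerical sketch (using the commonly quoted laminar correlation L_h ≈ 0.05·Re·D; the pipe and fluid values below are illustrative, not taken from this article):

```python
# Sketch: hydrodynamic entrance length from the commonly quoted laminar-flow
# correlation L_h ~ 0.05 * Re * D (only valid while the flow stays laminar).
def reynolds_number(velocity: float, diameter: float, kinematic_viscosity: float) -> float:
    return velocity * diameter / kinematic_viscosity

def entrance_length_laminar(velocity: float, diameter: float, kinematic_viscosity: float) -> float:
    re = reynolds_number(velocity, diameter, kinematic_viscosity)
    if re > 2300:
        raise ValueError("correlation assumes laminar flow (Re <= ~2300)")
    return 0.05 * re * diameter

# Water at about 20 C (nu ~ 1.0e-6 m^2/s) moving at 0.02 m/s in a 25 mm pipe:
print(entrance_length_laminar(0.02, 0.025, 1.0e-6))  # Re = 500, so L_h = 0.625 m
```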
Boundary layer
The layer in which the shearing viscous forces are significant, is called the boundary layer. This boundary layer is a
Document 1:::
HD 128198 is a giant star in the northern constellation of Boötes, about 549 light years away.
References
External links
HR 5448
Image HD 128198
Document 2:::
In linear elasticity, the equations describing the deformation of an elastic body subject only to surface forces (or body forces that could be expressed as potentials) on the boundary are (using index notation) the equilibrium equation:
$$\sigma_{ij,j} = 0$$
where $\sigma_{ij}$ is the stress tensor, and the Beltrami-Michell compatibility equations:
$$\sigma_{ij,kk} + \frac{1}{1+\nu}\,\sigma_{kk,ij} = 0$$
A general solution of these equations may be expressed in terms of the Beltrami stress tensor. Stress functions are derived as special cases of this Beltrami stress tensor which, although less general, sometimes will yield a more tractable method of solution for the elastic equations.
Beltrami stress functions
It can be shown that a complete solution to the equilibrium equations may be written as
$$\boldsymbol{\sigma} = \nabla \times \Phi \times \nabla$$
Using index notation:
$$\sigma_{ij} = \varepsilon_{ikm}\,\varepsilon_{jln}\,\Phi_{mn,kl}$$
where $\Phi_{mn}$ is a symmetric but otherwise arbitrary second-rank tensor field that is at least twice differentiable, and is known as the Beltrami stress tensor. Its components are known as Beltrami stress functions. $\varepsilon_{ikm}$ is the Levi-Civita pseudotensor, with all values equal to zero except those in which the indices are not repeated. For a set of non-repeating indices the component value will be +1 for even permutations of the indices, and -1 for odd permutations. And $\nabla$ is the Nabla operator. For the Beltrami stress tensor to satisfy the Beltrami-Michell compatibility equations in addition to the equilibrium equations, it is further required that $\Phi_{mn}$ is at least four times continuously differentiable.
Maxwell stress functions
The Maxwell stress functions are defined by assuming that the Beltrami stress tensor $\Phi_{mn}$ is restricted to be of the form
$$\Phi_{mn} = \begin{bmatrix} A & 0 & 0 \\ 0 & B & 0 \\ 0 & 0 & C \end{bmatrix}$$
The stress tensor which automatically obeys the equilibrium equation may now be written as:
$$\sigma_{xx} = \frac{\partial^2 B}{\partial z^2} + \frac{\partial^2 C}{\partial y^2} \qquad \sigma_{yz} = -\frac{\partial^2 A}{\partial y\,\partial z}$$
$$\sigma_{yy} = \frac{\partial^2 C}{\partial x^2} + \frac{\partial^2 A}{\partial z^2} \qquad \sigma_{zx} = -\frac{\partial^2 B}{\partial z\,\partial x}$$
$$\sigma_{zz} = \frac{\partial^2 A}{\partial y^2} + \frac{\partial^2 B}{\partial x^2} \qquad \sigma_{xy} = -\frac{\partial^2 C}{\partial x\,\partial y}$$
The solution to the elastostatic problem now consists of finding the t
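As a sanity check of the Maxwell form given above (a sketch using SymPy, not part of the original article), one can verify symbolically that these stresses satisfy the equilibrium equation identically:

```python
# Sketch: verify with SymPy that stresses built from Maxwell stress functions
# A, B, C satisfy the equilibrium equation sigma_ij,j = 0 (no body forces).
import sympy as sp

x, y, z = sp.symbols('x y z')
A, B, C = (sp.Function(n)(x, y, z) for n in ('A', 'B', 'C'))

sxx = sp.diff(B, z, 2) + sp.diff(C, y, 2)
syy = sp.diff(C, x, 2) + sp.diff(A, z, 2)
szz = sp.diff(A, y, 2) + sp.diff(B, x, 2)
sxy = -sp.diff(C, x, y)
syz = -sp.diff(A, y, z)
szx = -sp.diff(B, z, x)

sigma = sp.Matrix([[sxx, sxy, szx],
                   [sxy, syy, syz],
                   [szx, syz, szz]])

coords = (x, y, z)
for i in range(3):
    div_i = sum(sp.diff(sigma[i, j], coords[j]) for j in range(3))
    print(sp.simplify(div_i))   # prints 0 three times
```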
Document 3:::
In computer science, pseudocode is a description of the steps in an algorithm using a mix of conventions of programming languages (like assignment operator, conditional operator, loop) with informal, usually self-explanatory, notation of actions and conditions. Although pseudocode shares features with regular programming languages, it is intended for human reading rather than machine control. Pseudocode typically omits details that are essential for machine implementation of the algorithm, meaning that pseudocode can only be verified by hand. The programming language is augmented with natural language description details, where convenient, or with compact mathematical notation. The reasons for using pseudocode are that it is easier for people to understand than conventional programming language code and that it is an efficient and environment-independent description of the key principles of an algorithm. It is commonly used in textbooks and scientific publications to document algorithms and in planning of software and other algorithms.
No broad standard for pseudocode syntax exists, as a program in pseudocode is not an executable program; however, certain limited standards exist (such as for academic assessment). Pseudocode resembles skeleton programs, which can be compiled without errors. Flowcharts, drakon-charts and Unified Modelling Language (UML) charts can be thought of as a graphical alternative to pseudocode, but need more space on paper. Languages such as HAGGIS bridge the gap between pseudocode and code written in programming languages.
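For instance (an invented toy example, not drawn from any particular textbook), a linear search might first be sketched in pseudocode and then written in a concrete language:

```python
# Pseudocode (as it might appear in a textbook):
#   for each item x in list L:
#       if x equals target:
#           return its position
#   return "not found"
#
# The same algorithm as executable Python -- note the machine-level details
# (indexing, a sentinel return value) that the pseudocode leaves out.
def linear_search(items, target):
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1  # sentinel: target not present

print(linear_search([3, 1, 4, 1, 5], 4))  # 2
```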
Application
Pseudocode is commonly used in textbooks and scientific publications related to computer science and numerical computation to describe algorithms in a way that is accessible to programmers regardless of their familiarity with specific programming languages. Textbooks often include an introduction explaining the conventions in use, and the detail of pseudocode may sometimes approach that of formal programming l
Document 4:::
Racetrack memory or domain-wall memory (DWM) is an experimental non-volatile memory device under development at IBM's Almaden Research Center by a team led by physicist Stuart Parkin. It is a current topic of active research at the Max Planck Institute of Microstructure Physics in Dr. Parkin's group. In early 2008, a 3-bit version was successfully demonstrated. If it were to be developed successfully, racetrack memory would offer storage density higher than comparable solid-state memory devices like flash memory.
Description
Racetrack memory uses a spin-coherent electric current to move magnetic domains along a nanoscopic permalloy wire about 200 nm across and 100 nm thick. As current is passed through the wire, the domains pass by magnetic read/write heads positioned near the wire, which alter the domains to record patterns of bits. A racetrack memory device is made up of many such wires and read/write elements. In general operational concept, racetrack memory is similar to the earlier bubble memory of the 1960s and 1970s. Delay-line memory, such as mercury delay lines of the 1940s and 1950s, are a still-earlier form of similar technology, as used in the UNIVAC and EDSAC computers. Like bubble memory, racetrack memory uses electrical currents to "push" a sequence of magnetic domains through a substrate and past read/write elements. Improvements in magnetic detection capabilities, based on the development of spintronic magnetoresistive sensors, allow the use of much smaller magnetic domains to provide far higher bit densities.
In production, it was expected that the wires could be scaled down to around 50 nm. There were two arrangements considered for racetrack memory. The simplest was a series of flat wires arranged in a grid with read and write heads arranged nearby. A more widely studied arrangement used U-shaped wires arranged vertically over a grid of read/write heads on an underlying substrate. This would allow the wires to be much longer without increasing
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What significant achievement did Pars Rocketry Group accomplish in June 2016 during the Intercollegiate Rocket Engineering Competition?
A. They won first place among 44 teams.
B. They won the 6th position among 44 teams.
C. They developed a new type of rocket engine.
D. They launched the most powerful rocket engine in Turkey.
Answer:
|
B. They won the 6th position among 44 teams.
|
Relevant Documents:
Document 0:::
Gaisberg Transmitter is a facility for FM and TV transmission on the Gaisberg mountain near Salzburg, Austria. It was the first large transmitter completed in Austria after the war and began operation on 22 August 1956 (a provisional transmitter had already been broadcasting a VHF radio signal at 1 kW since 1953). It used a lattice tower and broadcast Austria's first radio station on 99.0 MHz and its third radio station on 94.8 MHz, each with 50 kW, as well as a TV station on channel 8 with 60/12 kW (picture/sound). During the 1980s a UHF antenna was put on top of the tower, bringing its height to 100 meters.
ALDIS (the Austrian Lightning Detection & Information System) maintains the Austrian Lightning Research Station Gaisberg next to the transmitter.
Document 1:::
Eta Boötis (η Boötis, abbreviated Eta Boo, η Boo) is a binary star in the constellation of Boötes. Based on parallax measurements obtained during the Hipparcos mission, it is approximately 37 light-years (11 parsecs) distant from the Sun. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. It forms a double star with the star BD+19 2726.
As a constituent of a double pair, Eta Boötis is also designated WDS J13547+1824A, with its two components being designated Aa (formally named Muphrid , the traditional name for the entire system) and Ab. (As part of a binary pair, they are also designated Eta Boötis A and B, respectively.) BD +19 2726 is also designated WDS J13547+1824B.
Nomenclature
η Boötis (Latinised to Eta Boötis) is the binary pair's Bayer designation; η Boötis A and B those of its two components. The designations of the two constituents of the double pair as WDS J13547+1824A and B and those of A's components - Aa and Ab - derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
Eta Boötis bore the traditional names Muphrid and Saak. Muphrid is from the Arabic مفرد الرامح mufrid ar-rāmiħ "the (single) one of the lancer". In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Muphrid for the component WDS J13547+1824Aa (Eta Boötis A) on 12 September 2016 and it is now so included in the List of IAU-approved Star Names.
In the catalogue of stars in the Calendarium of Al Achsasi al Mouakket, this star was designated Ramih al Ramih (رمح الرامح rumḥ al rāmiḥ), which was translated into Latin as Lancea Lanceator, possibly meaning the lance of the lancer.
In Chinese, (), meaning "the Right Conductor", refers t
Document 2:::
LNCaP cells are a cell line of human cells commonly used in the field of oncology. LNCaP cells are androgen-sensitive human prostate adenocarcinoma cells derived from the left supraclavicular lymph node metastasis from a 50-year-old caucasian male in 1977. They are adherent epithelial cells growing in aggregates and as single cells.
One major obstacle to conducting the most clinically relevant prostate cancer (PCa) research has been the lack of cell lines that closely mimic human disease progression. Two hallmarks of metastatic human prostate cancer include the shift of aggressive PCa from androgen-sensitivity to an Androgen Insensitive (AI) state, and the propensity of PCa to metastasize to bone. Although the generation of AI cell lines has been quite successful as demonstrated in the “classic” cell lines DU145 and PC3, the behavior of these cells in bone does not fully mimic clinical human disease. It is well established that human PCa bone metastasis form osteoblastic lesions rather than osteolytic lesions seen in other cancers like breast cancer. Similarly, PC-3 and DU145 cells form osteolytic tumors. To develop an AI-PCa cell model that more closely mimics clinical disease, LNCaP sublines have been generated to provide the most clinically relevant tissue culture tools to date.
History
The LNCaP (Lymph Node Carcinoma of the Prostate) cell line was established from a metastatic lesion of human prostatic adenocarcinoma. The LNCaP cells grow readily in vitro (up to 8 × 10^5 cells/sq cm; doubling time, 60 hr), form clones and are highly resistant to human fibroblast interferon. LNCaP cells have a modal chromosome number of 76 to 91, indicative of a human male karyotype with several marker chromosomes. The malignant properties of LNCaP cells are maintained in athymic nude mice which develop tumors at the injection site and show a similar doubling time in vivo.
High-affinity specific androgen and estrogen receptors are present in the cytosol and nuclear fraction
Document 3:::
Matricity is the interaction of a matrix with its environment. This word is used particularly of protein interactions, where a polymerised protein (a matrix) interacts with a membrane or another polymer.
Protein interactions can normally be described by their affinities for each other. The interactions of clustered proteins for multivalent ligands are described by avidities (avidity), while matricity describes a semisolid state interaction of a matrix with its environment. As an example matricity has been used to describe the interaction of polymerised clathrin with adaptor complexes bound to the membrane.
References
Document 4:::
A price override is a feature of a retail management system which allows an authorised person to change the automated price of a product or service, in order to apply a discount.
Price overrides occur for a variety of reasons. One common reason is to discount damaged goods. Another is employee discount and discounts given to other groups. An override may also be necessary due to the displayed price of an item not matching the price shown at the register. This can occur when staff have not caught up with changes to the display caused by recent price changes on the retail management system, or it may simply be a mistake. The customer is then given the old price for goodwill, or for legal reasons. Other reasons include coupon redemption and items on sale.
Price overrides can present an opportunity for employee fraud. Unauthorised large discounts can be given to associates of the employee. Alternatively, an employee can make the sale at the full price, then, after serving the customer, void the sale and enter it again, but this time with a discount which the employee pockets. For these reasons the ability to override prices is often limited to senior employees only, or alternatively, junior employees have a limit placed on the amount they can override, typically 10%. This can be further refined by the retail management system distinguishing between the various types of transactions and setting appropriate limits and controls for each individually.
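A toy sketch of such a control (purely illustrative; real retail management systems implement far more granular, per-transaction-type rules):

```python
# Sketch: a simple role-based price-override check with per-role discount limits.
OVERRIDE_LIMITS = {"junior": 0.10, "senior": 0.30, "manager": 1.00}  # max fractional discount

def can_override(role: str, listed_price: float, new_price: float) -> bool:
    if new_price < 0 or new_price > listed_price:
        return False
    discount = (listed_price - new_price) / listed_price
    return discount <= OVERRIDE_LIMITS.get(role, 0.0)

print(can_override("junior", 50.00, 44.00))   # 12% discount -> False
print(can_override("senior", 50.00, 44.00))   # 12% discount -> True
```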
References
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does matricity specifically refer to in the context of protein interactions?
A. The interaction of a matrix with its environment
B. The affinity of proteins for each other
C. The clustering of proteins for multivalent ligands
D. The solid state of a matrix without interaction
Answer:
|
A. The interaction of a matrix with its environment
|
Relevant Documents:
Document 0:::
Amenamevir (trade name Amenalief) is an antiviral drug used for the treatment of shingles (herpes zoster).
It acts as an inhibitor of the zoster virus's helicase–primase complex. Amenamevir was approved in Japan for the treatment of shingles in 2017.
Document 1:::
In mathematics and mathematical physics, potential theory is the study of harmonic functions.
The term "potential theory" was coined in 19th-century physics when it was realized that the two fundamental forces of nature known at the time, namely gravity and the electrostatic force, could be modeled using functions called the gravitational potential and electrostatic potential, both of which satisfy Poisson's equation—or in the vacuum, Laplace's equation.
There is considerable overlap between potential theory and the theory of Poisson's equation to the extent that it is impossible to draw a distinction between these two fields. The difference is more one of emphasis than subject matter and rests on the following distinction: potential theory focuses on the properties of the functions as opposed to the properties of the equation. For example, a result about the singularities of harmonic functions would be said to belong to potential theory whilst a result on how the solution depends on the boundary data would be said to belong to the theory of Poisson's equation. This is not a hard and fast distinction, and in practice there is considerable overlap between the two fields, with methods and results from one being used in the other.
Modern potential theory is also intimately connected with probability and the theory of Markov chains. In the continuous case, this is closely related to analytic theory. In the finite state space case, this connection can be introduced by introducing an electrical network on the state space, with resistance between points inversely proportional to transition probabilities and densities proportional to potentials. Even in the finite case, the analogue I-K of the Laplacian in potential theory has its own maximum principle, uniqueness principle, balance principle, and others.
Symmetry
A useful starting point and organizing principle in the study of harmonic functions is a consideration of the symmetries of the Laplace equation. Although it
Document 2:::
NGC 4815 is an open cluster in the constellation Musca. It was discovered by John Herschel in 1834. It is located approximately 10,000 light years away from Earth.
Characteristics
NGC 4815 is an intermediate age cluster. Carraro and Ortolani determined the age of the cluster to be 500 million years, based on BV photometry, which is nearly the same as the Hyades, while Sagar et al. determined its age at 400±50 million years and Kharchenko et al. at 400 million years. Friel at al. estimated its age to be between 500 and 630 million years.
There are 69 probable member stars within the angular radius of the cluster and 39 within the central part of the cluster. Five blue stragglers have been detected in the cluster, and 15 red giants are probable members of the cluster. The earliest main sequence stars are of type A0. One of the stars in the cluster region shows extremely red colour along with blue ultraviolet colour and it could be an interacting binary star. The turnoff point is at 2.6 ±0.1 .
The members show small dispersions in abundance, with the exception of Mg, which is common for clusters of rather low mass, as NGC 4815. The mean metallicity of the cluster is [Fe/H] = +0.03 ± 0.05 dex (as estimated by Frieal et al.) or −0.01 ± 0.04 (as estimated by Tautvaišienė et al.). Alpha-elements [Ca/Fe] and [Si/Fe] show solar ratios, [Mg/Fe] is moderately enhanced, [Ti/Fe] is slightly subsolar, [Al/Fe] is enhanced, and [Na/Fe] is significantly enhanced. THE CNO abundances in the cluster are [C/H] = −0.17 ± 0.08, [N/H] = 0.53 ± 0.07, [O/H] = 0.12 ± 0.09, and [C/N] = 0.79 ± 0.08.
Document 3:::
Lymphatic disease is a class of disorders which directly affect the components of the lymphatic system.
Examples include Castleman's disease and lymphedema.
Types
Diseases and disorder
Hodgkin's Disease/Hodgkin's Lymphoma
Hodgkin lymphoma is a type of cancer of the lymphatic system. It can start almost anywhere in the body. Risk factors are believed to include HIV infection, the Epstein–Barr virus, age, and family history. Symptoms include weight loss, fever, swollen lymph nodes, night sweats, itchy skin, fatigue, chest pain, coughing, or trouble swallowing.
Non-Hodgkin's Lymphoma
Non-Hodgkin lymphoma is a usually malignant cancer caused by the body producing too many abnormal white blood cells. It is not the same as Hodgkin's disease. Symptoms usually include a painless, enlarged lymph node or nodes in the neck, weakness, fever, weight loss, and anemia.
Lymphadenitis
Lymphadenitis is an infection of the lymph nodes usually caused by a virus, bacteria or fungi. Symptoms include redness or swelling around the lymph node.
Lymphangitis
Lymphangitis is an inflammation of the lymph vessels. Symptoms usually include swelling, redness, warmth, pain or red streaking around the affected area.
Lymphedema
Lymphedema is the chronic pooling of lymph fluid in the tissue. Lymphedema can start anywhere in the lymphatic system of the body. It's also a side-effect of some surgical procedures. Kathy Bates is an advocate and supporter for further research for lymphedema.
Lymphocytosis
Lymphocytosis is a high lymphocyte count. It can be caused by an infection, blood cancer, lymphoma, or autoimmune disorders that are accompanied by chronic swelling.
Document 4:::
A histiocyte is a vertebrate cell that is part of the mononuclear phagocyte system (also known as the reticuloendothelial system or lymphoreticular system). The mononuclear phagocytic system is part of the organism's immune system. The histiocyte is a tissue macrophage or a dendritic cell (histio, diminutive of histo, meaning tissue, and cyte, meaning cell). Part of their job is to clear out neutrophils once they've reached the end of their lifespan.
Development
Histiocytes are derived from the bone marrow by multiplication from a stem cell. The derived cells migrate from the bone marrow to the blood as monocytes. They circulate through the body and enter various organs, where they undergo differentiation into histiocytes, which are part of the mononuclear phagocytic system (MPS).
However, the term histiocyte has been used for multiple purposes in the past, and some cells called "histiocytes" do not appear to derive from monocytic-macrophage lines. The term histiocyte can also simply refer to a cell of monocyte origin outside the blood system, such as in a tissue (as in rheumatoid arthritis, where palisading histiocytes surround the fibrinoid necrosis of rheumatoid nodules).
Some sources consider Langerhans cell derivatives to be histiocytes. The Langerhans cell histiocytosis embeds this interpretation into its name.
Structure
Histiocytes have common histological and immunophenotypical characteristics (demonstrated by immunostains). Their cytoplasm is eosinophilic and contains variable amounts of lysosomes. They bear membrane receptors for opsonins, such as IgG and the fragment C3b of complement. They express the leucocyte common antigen (LCA, CD45), as well as CD14, CD33, and CD4 (the latter also expressed by T helper cells).
Macrophages and dendritic cells
These histiocytes are part of the immune system by way of two distinct functions: phagocytosis and antigen presentation. Phagocytosis is the main process of macrophages and antigen presentation is the main property of dendritic cells.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What challenges do economists face when trying to measure economic worth over time according to the text?
A. The lack of consumer demand for economic measurements
B. The difficulty of relating past and present indicators
C. The availability of an abundance of historical data
D. The simplicity of measuring changes in market conditions
Answer:
|
B. The difficulty of relating past and present indicators
|
Relevant Documents:
Document 0:::
Gigya, Inc. was a technology company founded in Tel Aviv, Israel and headquartered in Mountain View, California, with additional offices in New York, Tel Aviv, London, Paris, Hamburg, and Sydney.
Gigya was purchased by SAP in 2017.
Products
Gigya offered a customer identity management platform for managing profiles, preference, opt-in and consent settings, and an identity management platform for businesses that included products for customized registration, social login, user profile and preference management, user engagement and loyalty, and integrations with third-party marketing and services platforms. Its technology was used by corporations including Fox, Forbes, Repsol, Toyota, RTL Netherlands, Campbells, Fairfax Media, Wacom, ASOS, and Turner, according to the company's website.
History
Gigya was founded in Tel Aviv, Israel in 2006. Patrick Salyer became CEO in March 2011. Gigya has worked with SAP Hybris since 2013. As of November 2014, Gigya had raised $104M from Intel Capital, Benchmark Capital, Mayfield Fund, First Round Capital, Advance Publications (parent company of Condé Nast), DAG Ventures, Common Fund Capital, Vintage Investment Partners, and Greenspring Associates. Software maker Adobe Systems is also an investor.
On 27 November 2014, the Syrian Electronic Army hijacked the gigya.com domain by changing its DNS configuration at the domain registrar directly, outside of Gigya's system and control. Shortly after the incident, the CEO of Gigya, Patrick Salyer, confirmed the news officially on Gigya's blog, stating that no data was compromised and that the issue had been resolved within an hour of Gigya identifying it. The next day, on 28 November 2014, the Syrian Electronic Army issued a statement taking responsibility for the attack.
In September 2017, the company was acquired by SAP for $350 million. In October 2017, Gigya introduced an enterprise preference manager to address new privacy regulations.
See also
Economy of Israel
Science an
Document 1:::
Rapid casting is an integration of investment casting with rapid prototyping/3D printing. In this technique disposable patterns that are used for forming molds are created by any 3D printing technique, including fused deposition modeling and stereolithography.
Advantages
Cheap for batch production
Reduced turnaround time
Representative prototypes
Easier to make patterns
Possibility to make the part lighter by removing unwanted material and stiffer by adding rib features.
Advantages of pressure die casting
Cheap at scale
Large parts
Good surface finish
High dimensional accuracy
High tensile strength
Procedure
A disposable pattern is 3D printed (it can be made of wax or any plastic used in 3D printing, such as PLA or PETG).
The pattern, if made of wax, undergoes wax infiltration and other procedures to increase its strength and dewaxing properties.
A mold is made by coating the printed pattern using a ceramic slurry.
The pattern is melted out of the ceramic mold.
Molten metal is poured into the mold.
Document 2:::
Concept creep is the process by which harm-related topics experience semantic expansion to include topics which would not have originally been envisaged to be included under that label. It was first described in a Psychological Inquiry article by Nick Haslam in 2016, who identified its effects on the concepts of abuse, bullying, trauma, mental disorder, addiction, and prejudice. Others have identified its effects on terms like "gaslight" and "emotional labour". The phenomenon can be related to the concept of hyperbole.
It has been criticised for making people more sensitive to harms and for blurring people's thinking and understanding of such terms, by categorising together too many things that should not be grouped and by losing the clarity and specificity of a term.
Although the initial research on concept creep has focused on concepts central to the political left's ideology, psychologists have also found evidence that people identifying with the political right have more expansive interpretations of concepts central to their own ideology (e.g. sexual deviance, personal responsibility, and terrorism).
Document 3:::
Luminex Corporation is a biotechnology company which develops, manufactures and markets proprietary biological testing technologies with applications in life-sciences.
Background
Luminex's Multi-Analyte Profiling (xMAP) technology allows simultaneous analysis of up to 500 bioassays from a small sample volume, typically a single drop of fluid, by reading biological tests on the surface of microscopic polystyrene beads called microspheres.
The xMAP technology combines this miniaturized liquid array bioassay capability with small lasers, light emitting diodes (LEDs), digital signal processors, photo detectors, charge-coupled device imaging and proprietary software to create a system offering advantages in speed, precision, flexibility and cost. The technology is currently being used within various segments of the life sciences industry, which includes the fields of drug discovery and development, and for clinical diagnostics, genetic analysis, bio-defense, food safety and biomedical research.
The Luminex MultiCode technology is used for real-time polymerase chain reaction (PCR) and multiplexed PCR assays. Luminex Corporation owns 315 issued patents worldwide, including over 124 issued patents in the United States based on its multiplexing xMAP platform.
Document 4:::
In mathematics and theoretical physics, and especially gauge theory, the deformed Hermitian Yang–Mills (dHYM) equation is a differential equation describing the equations of motion for a D-brane in the B-model (commonly called a B-brane) of string theory. The equation was derived by Mariño–Minasian–Moore–Strominger in the case of Abelian gauge group (the unitary group U(1)), and by Leung–Yau–Zaslow using mirror symmetry from the corresponding equations of motion for D-branes in the A-model of string theory.
Definition
In this section we present the dHYM equation as explained in the mathematical literature by Collins–Xie–Yau. The deformed Hermitian–Yang–Mills equation is a fully non-linear partial differential equation for a Hermitian metric on a line bundle over a compact Kähler manifold, or more generally for a real $(1,1)$-form. Namely, suppose $(X,\omega)$ is a Kähler manifold and $[\alpha] \in H^{1,1}(X,\mathbb{R})$ is a class. The case of a line bundle consists of setting $[\alpha] = c_1(L)$ where $c_1(L)$ is the first Chern class of a holomorphic line bundle $L \to X$. Suppose that $\dim_{\mathbb{C}} X = n$ and consider the topological constant
$$\hat z([\alpha]) = \int_X (\omega + i\alpha)^n.$$
Notice that $\hat z([\alpha])$ depends only on the class of $\alpha$ and on $\omega$. Suppose that $\hat z([\alpha]) \neq 0$. Then this is a complex number
$$\hat z([\alpha]) = r e^{i\hat\theta}$$
for some real $r > 0$ and angle $\hat\theta$ which is uniquely determined.
Fix a smooth representative differential form $\alpha$ in the class $[\alpha]$. For a smooth function $\phi\colon X \to \mathbb{R}$ write $\alpha_\phi = \alpha + i\partial\bar\partial\phi$, and notice that $[\alpha_\phi] = [\alpha]$. The deformed Hermitian Yang–Mills equation for $\alpha_\phi$ with respect to $\omega$ is
$$\operatorname{Im}\!\left(e^{-i\hat\theta}(\omega + i\alpha_\phi)^n\right) = 0, \qquad \operatorname{Re}\!\left(e^{-i\hat\theta}(\omega + i\alpha_\phi)^n\right) > 0.$$
The second condition should be seen as a positivity condition on solutions to the first equation. That is, one looks for solutions to the equation $\operatorname{Im}(e^{-i\hat\theta}(\omega + i\alpha_\phi)^n) = 0$ such that $\operatorname{Re}(e^{-i\hat\theta}(\omega + i\alpha_\phi)^n) > 0$. This is in analogy to the related problem of finding Kähler–Einstein metrics by looking for metrics $\omega + i\partial\bar\partial\phi$ solving the Einstein equation, subject to the condition that $\phi$ is a Kähler potential (which is a positivity condition on the form $\omega + i\partial\bar\partial\phi$).
Discussion
Relation to Hermitian Yang–Mills equation
The dHYM equations can be transformed in several ways to illuminate several key properties of the equations. First, simple algebraic manipulation shows that the dHYM
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the main criticism of concept creep as mentioned in the text?
A. It enhances clarity and specificity of terms.
B. It makes people less sensitive to harms.
C. It blurs understanding by categorizing too many unrelated topics.
D. It is only relevant to the political left.
Answer:
|
C. It blurs understanding by categorizing too many unrelated topics.
|
Relevant Documents:
Document 0:::
Denitrifying bacteria are a diverse group of bacteria that encompass many different phyla. This group of bacteria, together with denitrifying fungi and archaea, is capable of performing denitrification as part of the nitrogen cycle. Denitrification is performed by a variety of denitrifying bacteria that are widely distributed in soils and sediments and that use oxidized nitrogen compounds such as nitrate and nitrite in the absence of oxygen as a terminal electron acceptor. They metabolize nitrogenous compounds using various enzymes, including nitrate reductase (NAR), nitrite reductase (NIR), nitric oxide reductase (NOR) and nitrous oxide reductase (NOS), turning nitrogen oxides back to nitrogen gas () or nitrous oxide ().
The reducing power can be supplied by organic carbon compounds (termed "heterotrophic denitrification") or inorganic substances such as hydrogen, reduced iron, or sulfur species (termed "autotrophic denitrification"). Some microbes can use either organic or inorganic sources of reducing power (termed "mixotrophs").
Diversity of denitrifying bacteria
There is a great diversity in biological traits. Denitrifying bacteria have been identified in over 50 genera with over 125 different species and are estimated to represent 10-15% of bacteria population in water, soil and sediment.
Denitrifying bacteria include, for example, several species of Pseudomonas, Alcaligenes, Bacillus, and others.
The majority of denitrifying bacteria are facultative aerobic heterotrophs that switch from aerobic respiration to denitrification when oxygen as an available terminal electron acceptor (TEA) runs out. This forces the organism to use nitrate as a TEA. Because the diversity of denitrifying bacteria is so large, this group can thrive in a wide range of habitats including some extreme environments such as environments that are highly saline and high in temperature. Aerobic denitrifiers can conduct an aerobic respiratory process in which nitrate is converted gradually
Document 1:::
A neutral network is a set of genes all related by point mutations that have equivalent function or fitness. Each node represents a gene sequence and each line represents the mutation connecting two sequences. Neutral networks can be thought of as high, flat plateaus in a fitness landscape. During neutral evolution, genes can randomly move through neutral networks and traverse regions of sequence space which may have consequences for robustness and evolvability.
Genetic and molecular causes
Neutral networks exist in fitness landscapes since proteins are robust to mutations. This leads to extended networks of genes of equivalent function, linked by neutral mutations. Proteins are resistant to mutations because many sequences can fold into highly similar structural folds. A protein adopts a limited ensemble of native conformations because those conformers have lower energy than unfolded and mis-folded states (ΔΔG of folding). This is achieved by a distributed, internal network of cooperative interactions (hydrophobic, polar and covalent). Protein structural robustness results from few single mutations being sufficiently disruptive to compromise function. Proteins have also evolved to avoid aggregation as partially folded proteins can combine to form large, repeating, insoluble protein fibrils and masses. There is evidence that proteins show negative design features to reduce the exposure of aggregation-prone beta-sheet motifs in their structures.
Additionally, there is some evidence that the genetic code itself may be optimised such that most point mutations lead to similar amino acids (conservative). Together these factors create a distribution of fitness effects of mutations that contains a high proportion of neutral and nearly-neutral mutations.
Evolution
Neutral networks are a subset of the sequences in sequence space that have equivalent function, and so form a wide, flat plateau in a fitness landscape. Neutral evolution can therefore be visualised as a popula
Document 2:::
The P. I. Baranov Central Institute of Aviation Motor Development (also known as the "Central Institute for Aviation Motor Development named after P. I. Baranov" or simply "Central Institute of Aviation Motors", CIAM or TsIAM, Tsentralniy Institut Aviatsionnogo Motorostroeniya, ) is the only specialized Russian research and engineering facility dealing with advanced aerospace propulsion research, aircraft engine certification and other gas dynamics-related issues. It was founded on 3 December 1930 upon merging of the propeller engine department under the umbrella organisation Central Institute of Aerohydrodynamics and the department of experimental engine building of the M. V. Frunze Aviation Plant.
CIAM operates the largest aerospace engine testing facility in Europe, surpassed only by the United States's Arnold Engineering Development Center and Glenn Research Center. It is based in Lefortovo (the southeast okrug of Moscow) with an address of 2 Aviamotornaya street, Moscow, Postcode 111116. CIAM also operates a scientific testing center in Lytkarino, Moscow Oblast.
History
The bases of the institute were formed by such academics as Keldysh, Klimov and Chelomey. Since its foundation in 1930, CIAM designed nearly all Russian aviation motors and gas turbines. In 1933 CIAM was named after the late Soviet Vice-Narkom of Heavy Industry Petr Ionovich Baranov, who was one of the leading theorists of the Soviet aviation industry. Before World War II, all engine-design work was transferred to mass-production motor-building plants and their own design bureaus. CIAM focused on theoretical and experimental research and modernization of prototypes up to the production stage.
After the war, CIAM was engaged with reactive (jet) engines for airplanes, successors to the first-generation turbojets. In the early 1950s, the largest test base in Europe was built in Lytkarino. In the 1970s, the institute began work on a ramjet engine using the special hypersonic "flying laboratory"
Document 3:::
Tensilica Inc. was a company based in Silicon Valley that developed semiconductor intellectual property (SIP) cores. Tensilica was founded in 1997 by Chris Rowen. In April 2013, the company was acquired by Cadence Design Systems for approximately $326 million.
Products
Cadence Tensilica develops SIP blocks to be included in the chip (IC) designs of their licensees' products, such as systems on a chip for embedded systems. Tensilica processors are delivered as synthesizable RTL to aid integration with other chips.
Xtensa configurable cores
Xtensa processors range from small, low-power cache-less microcontroller to more performance-oriented SIMD processors, multiple-issue VLIW DSP cores, and neural network processors. Cadence standard DSPs are based on the Xtensa architecture. The architecture offers a user-customizable instruction set through automated customization tools that can extend the base instruction set, including and not limited to, addition of new SIMD instructions and register files.
Xtensa instruction set
The Xtensa instruction set is a 32-bit architecture with a compact 16- and 24-bit instruction set. The base instruction set has 82 RISC instructions and includes a 32-bit ALU, 16 general-purpose 32-bit registers, and one special-purpose register.
Audio and voice DSP IP
HiFi Mini Audio DSP — A small low power DSP core for voice triggering and voice recognition
HiFi 2 Audio DSP — DSP core for low power MP3 audio processing
HiFi EP Audio DSP — A superset of HiFi 2 with optimizations for DTS Master Audio, voice pre- and post-processing, and cache management
HiFi 3 Audio DSP — 32-bit DSP for audio enhancement algorithms, wideband voice codecs, and multi-channel audio
HiFi 3z Audio DSP — For lower-powered audio, wideband voice codecs, and neural-network-based speech recognition.
HiFi 4 DSP - Higher performance DSP for applications such as multi-channel object-based audio standards.
HiFi 5 DSP - For digital assistants, infotainment, and voice
Document 4:::
Blueberry leaf mottle virus (BLMV) is a Nepovirus that was first discovered in Michigan in 1977. It has also appeared in New York, eastern Canada, Bulgaria, Hungary, and Portugal.
Structure
The BLMV genome is bipartite containing two segmented regions of linear, positive-sense, single stranded RNA. The entire genome of the virus is 14600 nucleotides long, and the RNA-1 has a partially sequenced region that is 7600 nucleotides long. The virus consists of a naked, icosahedral capsid that is 28 nm in diameter. The genome of the virus codes for both structural and non-structural proteins, and the lipids of this virus are unknown.
Strains of the virus
There are currently three known strains of Blueberry leaf mottle virus. The original strain (BLMV) infects highbush and lowbush varieties in Michigan and eastern Canada. A second strain (BLMV-NY) has only been reported in one case in New York and infected an American grapevine. The third strain was discovered in Bulgarian grapevines and is known as the grapevine Bulgarian latent nepovirus. Grapevines with this strain are asymptomatic.
Hosts
The primary host of BLMV is the highbush blueberry (Vaccinium corymbosum). Other hosts include lowbush blueberry types (V. angustifolium and V. myrtilloides), hybrids of highbush and lowbush blueberry types (V. corymbosum x V.angutifolium), American grapevines (Vitis labrusca), and Bulgarian grapevines (Vitis vinifera).
Symptoms
Symptoms of BLMV include malformed, mottled leaves that are pale green in color and smaller than healthy leaves. The leaves may also have transparent spots that are visible when they are held up to the light. Bushes may be stunted and, if badly infected, may show dieback of stems with little regrowth. Symptoms do not appear until three to four years after infection.
Transmission
Unlike many nepoviruses, BLMV does not appear to have a nematode vector and is instead spread by honeybees during pollination via infected pollen. The viru
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary concern of space weather as described in the text?
A. The effects of solar wind on Earth's atmosphere
B. The correlation between sunspots and Earth climate
C. The impact of space weather on spacecraft and human activities
D. The historical observations of auroras at high latitudes
Answer:
|
C. The impact of space weather on spacecraft and human activities
|
Relevant Documents:
Document 0:::
A needle valve is a type of valve with a small port and a threaded, needle-shaped plunger. It allows precise regulation of flow, although it is generally only capable of relatively low flow rates.
Construction and operation
An instrument needle valve uses a tapered pin to gradually open a space for fine control of flow. The flow can be controlled and regulated with the use of a spindle. A needle valve has a relatively small orifice with a long, tapered seat, and a needle-shaped plunger on the end of a screw, which exactly fits the seat.
As the screw is turned and the plunger retracted, flow between the seat and the plunger is possible; however, until the plunger is completely retracted, the fluid flow is significantly impeded. Since it takes many turns of the fine-threaded screw to retract the plunger, precise regulation of the flow rate is easily possible.
The virtue of the needle valve is from the vernier effect of the ratio between the needle's length and its diameter, or the difference in diameter between needle and seat. A long travel axially (the control input) makes for a very small and precise change radially (affecting the resultant flow).
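As a rough illustration of that vernier effect, the short sketch below (not from the source; the thread pitch, taper half-angle, and seat diameter are made-up values) estimates how little open flow area each turn of a fine-pitch spindle exposes around a shallow-tapered needle.

```python
# Illustrative sketch: a full turn of a fine-pitch screw advances the needle only
# a fraction of a millimetre axially, and the shallow taper converts that into an
# even smaller radial change in the flow annulus.
import math

pitch_mm = 0.5          # axial travel per full turn of the spindle (assumed fine thread)
half_angle_deg = 2.0    # assumed needle taper half-angle
seat_diameter_mm = 3.0  # assumed orifice (seat) diameter

def annulus_area_mm2(turns_open: float) -> float:
    """Approximate open flow area between the tapered needle and the seat."""
    axial_lift = turns_open * pitch_mm
    radial_gap = axial_lift * math.tan(math.radians(half_angle_deg))
    # Thin annular gap around the seat circumference.
    return math.pi * seat_diameter_mm * radial_gap

for turns in (0.5, 1, 2, 4):
    print(f"{turns:>4} turns -> {annulus_area_mm2(turns):.4f} mm^2 open area")
```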
Needle valves may also be used in vacuum systems, when a precise control of gas flow is required, at low pressure, such as when filling gas-filled vacuum tubes, gas lasers and similar devices.
Uses
Needle valves are usually used in flow-metering applications, especially when a constant, calibrated, low flow rate must be maintained for some time, such as the idle fuel flow in a carburettor.
Note that the float valve of a carburettor (controlling the fuel level within the carburettor) is not a needle valve, although it is commonly described as one. It uses a bluntly conical needle, but it seats against a square-edged seat rather than a matching cone. The intention here is to obtain a well-defined seat between two narrow mating surfaces, giving firm shutoff of the flow from only a light float pressure.
Needle valv
Document 1:::
Craven Gap (el. ) is a mountain pass between Peach Knob and Rice Knob, part of the Elk Mountains and Great Craggy Mountains. NC 694 (Town Mountain Road) connects with the Blue Ridge Parkway at the gap, where it provides direct access to downtown Asheville. The gap also has trails for hikers and is a popular bicycle rest area.
Document 2:::
Natural resources are resources that are drawn from nature and used with few modifications. This includes the sources of valued characteristics such as commercial and industrial use, aesthetic value, scientific interest, and cultural value. On Earth, it includes sunlight, atmosphere, water, land, all minerals along with all vegetation, and wildlife.
Natural resources are part of humanity's natural heritage or protected in nature reserves. Particular areas (such as the rainforest in Fatu-Hiva) often feature biodiversity and geodiversity in their ecosystems. Natural resources may be classified in different ways. Natural resources are materials and components (something that can be used) found within the environment. Every man-made product is composed of natural resources (at its fundamental level).
A natural resource may exist as a separate entity such as freshwater, air, or any living organism such as a fish, or it may be transformed by extractivist industries into an economically useful form that must be processed to obtain the resource such as metal ores, rare-earth elements, petroleum, timber and most forms of energy. Some resources are renewable, which means that they can be used at a certain rate and natural processes will restore them. In contrast, many extractive industries rely heavily on non-renewable resources that can only be extracted once.
Natural resource allocations can be at the centre of many economic and political confrontations both within and between countries. This is particularly true during periods of increasing scarcity and shortages (depletion and overconsumption of resources). Resource extraction is also a major source of human rights violations and environmental damage. The Sustainable Development Goals and other international development agendas frequently focus on creating more sustainable resource extraction, with some scholars and researchers focused on creating economic models, such as circular economy, that rely less on resource ex
Document 3:::
In electrochemistry, CO stripping is a special process of voltammetry where a monolayer of carbon monoxide already adsorbed on the surface of an electrocatalyst is electrochemically oxidized and thus removed from the surface. A well-known process of this type is CO stripping on Pt/C electrocatalysts in which the electrooxidation peak occurs somewhere between 0.5 and 0.9 V depending on the characteristics and structural properties of the specimen.
Principle
Some metals, such as platinum, readily adsorb carbon monoxide, which is usually undesirable as it results in catalyst poisoning.
However, the strong affinity of CO to such catalysts also presents an opportunity: since carbon monoxide is a small molecule with a strong affinity to the catalyst, a large enough amount of CO will adsorb to the entire available surface area of the catalyst. That, in turn, means that by evaluating the amount of CO adsorbed, the catalyst's available surface area can be indirectly measured.
That surface area - also known as "real surface area" or "electrochemically active surface area" - can be measured by electrochemically oxidizing the adsorbed carbon monoxide, as the charge expended in oxidizing CO is directly proportional to the amount of CO adsorbed on the surface and therefore, the surface area of the catalyst.
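A minimal sketch of that bookkeeping follows (not from the source): the charge under the CO oxidation peak, after baseline subtraction, is converted to an area using a specific charge per unit surface area. The 420 µC/cm² value commonly assumed for a CO monolayer on polycrystalline platinum, the function name, and the synthetic voltammogram are all illustrative assumptions.

```python
# Sketch: estimate electrochemically active surface area (ECSA) from a
# CO-stripping voltammogram by integrating the baseline-corrected peak.
import numpy as np

def ecsa_from_co_stripping(potential_V, current_A, baseline_A, scan_rate_V_s,
                           q_monolayer_C_cm2=420e-6, catalyst_mass_g=None):
    """Integrate (i - i_baseline) dE / scan_rate over the stripping peak."""
    net_current = current_A - baseline_A
    charge_C = np.trapz(net_current, potential_V) / scan_rate_V_s
    area_cm2 = charge_C / q_monolayer_C_cm2
    if catalyst_mass_g:                       # optionally report specific ECSA
        return area_cm2, area_cm2 / 1e4 / catalyst_mass_g  # m^2 per g
    return area_cm2

# Synthetic example: a Gaussian-shaped stripping peak on top of a flat baseline.
E = np.linspace(0.4, 1.0, 601)                       # V
baseline = np.full_like(E, 2e-5)                     # A
peak = 8e-4 * np.exp(-((E - 0.75) / 0.03) ** 2)      # A
area = ecsa_from_co_stripping(E, baseline + peak, baseline, scan_rate_V_s=0.02)
print(f"estimated ECSA: {area:.1f} cm^2")
```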
Usage
CO stripping is one of the methods used to determine the electrochemically active surface area of electrodes and catalysts that irreversibly adsorb carbon monoxide, most notably ones containing platinum and other transition metals.
Document 4:::
In continuum mechanics, the infinitesimal strain theory is a mathematical approach to the description of the deformation of a solid body in which the displacements of the material particles are assumed to be much smaller (indeed, infinitesimally smaller) than any relevant dimension of the body; so that its geometry and the constitutive properties of the material (such as density and stiffness) at each point of space can be assumed to be unchanged by the deformation.
With this assumption, the equations of continuum mechanics are considerably simplified. This approach may also be called small deformation theory, small displacement theory, or small displacement-gradient theory. It is contrasted with the finite strain theory where the opposite assumption is made.
The infinitesimal strain theory is commonly adopted in civil and mechanical engineering for the stress analysis of structures built from relatively stiff elastic materials like concrete and steel, since a common goal in the design of such structures is to minimize their deformation under typical loads. However, this approximation demands caution in the case of thin flexible bodies, such as rods, plates, and shells which are susceptible to significant rotations, thus making the results unreliable.
Infinitesimal strain tensor
For infinitesimal deformations of a continuum body, in which the displacement gradient tensor (2nd order tensor) is small compared to unity, i.e. $\|\nabla \mathbf{u}\| \ll 1$,
it is possible to perform a geometric linearization of any one of the finite strain tensors used in finite strain theory, e.g. the Lagrangian finite strain tensor $\mathbf{E}$, and the Eulerian finite strain tensor $\mathbf{e}$. In such a linearization, the non-linear or second-order terms of the finite strain tensor are neglected. Thus we have
$$\mathbf{E} \approx \boldsymbol{\varepsilon} = \tfrac{1}{2}\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{T}\right)$$
or
$$E_{KL} \approx \varepsilon_{KL} = \tfrac{1}{2}\left(\frac{\partial U_K}{\partial X_L} + \frac{\partial U_L}{\partial X_K}\right)$$
and
$$\mathbf{e} \approx \boldsymbol{\varepsilon} = \tfrac{1}{2}\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{T}\right)$$
or
$$e_{rs} \approx \varepsilon_{rs} = \tfrac{1}{2}\left(\frac{\partial u_r}{\partial x_s} + \frac{\partial u_s}{\partial x_r}\right).$$
This linearization implies that the Lagrangian description and the Eulerian description are approximately the same as there is little difference in the material and spatial coordinates of a given material point.
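The following numerical sketch (not from the source; the displacement gradient values are arbitrary assumptions) illustrates the linearization: for a small displacement gradient H, the Green–Lagrange strain differs from the infinitesimal strain only by the second-order term Hᵀ H / 2.

```python
# Compare the infinitesimal strain tensor with the Green-Lagrange strain tensor
# for an assumed small displacement gradient H = grad(u).
import numpy as np

H = np.array([[ 1e-3, 2e-4, 0.0],
              [-1e-4, 5e-4, 3e-4],
              [ 0.0,  1e-4, -2e-4]])   # assumed small displacement gradient

eps = 0.5 * (H + H.T)                    # infinitesimal (small) strain tensor
E   = 0.5 * (H + H.T + H.T @ H)          # Green-Lagrange (finite) strain tensor

print("max |E - eps| =", np.abs(E - eps).max())   # ~5e-7, i.e. second order in H
```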
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the chemical formula of dysprosium phosphide?
A. DyP
B. Dy3P
C. Dy2P3
D. DyP2
Answer:
|
A. DyP
|
Relevant Documents:
Document 0:::
Skin grafting, a type of graft surgery, involves the transplantation of skin without a defined circulation. The transplanted tissue is called a skin graft.
Surgeons may use skin grafting to treat:
extensive wounding or trauma
burns
areas of extensive skin loss due to infection such as necrotizing fasciitis or purpura fulminans
specific surgeries that may require skin grafts for healing to occur – most commonly removal of skin cancers
Skin grafting often takes place after serious injuries when some of the body's skin is damaged. Surgical removal (excision or debridement) of the damaged skin is followed by skin grafting. The grafting serves two purposes: reducing the course of treatment needed (and time in the hospital), and improving the function and appearance of the area of the body which receives the skin graft.
There are two types of skin grafts:
Partial-thickness: The more common type involves removing a thin layer of skin from a healthy part of the body (the donor section).
Full-thickness: Involves excising a defined area of skin, with a depth of excision down to the fat. The full thickness portion of skin is then placed at the recipient site.
A full-thickness skin graft is more risky, in terms of the body accepting the skin, yet it leaves only a scar line on the donor section, similar to a Cesarean-section scar. In the case of full-thickness skin grafts, the donor section will often heal much more quickly than the injury and causes less pain than a partial-thickness skin graft. A partial thickness donor site must heal by re-epithelialization which can be painful and take an extensive length of time.
Medical uses
Two layers of skin created from animal sources has been found to be useful in venous leg ulcers.
Classification
Grafts can be classified by their thickness, the source, and the purpose. By source:
Autologous: The donor skin is taken from a different site on the same individual's body (also known as an autograft).
Isogeneic: The donor an
Document 1:::
Mechanography (also referred to as jumping mechanography or Muscle Mechanography) is a medical diagnostic measurement method for motion analysis and assessment of muscle function and muscle power by means of physical parameters. The method is based on measuring the variation of the ground reaction forces over the time for motion patterns close to typical every day movements (e.g. chair rise or jumps). From these ground reaction forces centre of gravity related physical parameters like relative maximum forces, velocity, power output, kinetic energy, potential energy, height of jump or whole body stiffness are calculated. If the ground reaction forces are measured separately for left and right leg in addition body imbalances during the motions can be analysed. This enables for example to document the results of therapy. The same methodology can also be used for gait analysis or for analysis of stair climbing, grip strength and Posturography. Due to the utilization of every-day movements reproducibility is high over a wide age range
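As an illustration of that force-plate arithmetic, the sketch below (not from the source; the body mass, sampling rate, and synthetic force trace are assumptions) derives centre-of-mass acceleration, velocity, and instantaneous power from a vertical ground reaction force record of a single jump.

```python
# Sketch: derive jump parameters from a vertical ground reaction force (GRF) trace.
# Acceleration follows from Newton's second law, velocity from integrating the
# acceleration, and instantaneous power is force times velocity.
import numpy as np

g = 9.81
body_mass = 70.0                                  # kg (assumed)
fs = 1000.0                                       # sampling rate in Hz (assumed)
t = np.arange(0.0, 0.6, 1.0 / fs)

# Synthetic GRF: quiet stance, a push-off peak, then take-off (no ground contact).
grf = body_mass * g * np.ones_like(t)
push = (t > 0.2) & (t < 0.45)
grf[push] += 800.0 * np.sin(np.pi * (t[push] - 0.2) / 0.25)
grf[t >= 0.45] = 0.0

acc = grf / body_mass - g                         # centre-of-mass acceleration
vel = np.cumsum(acc) / fs                         # integrate: vertical velocity
power = grf * vel                                 # instantaneous mechanical power

print(f"peak power: {power.max():.0f} W "
      f"({power.max() / body_mass:.1f} W/kg relative to body mass)")
print(f"take-off velocity: {vel[t >= 0.45][0]:.2f} m/s")
```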
Fields of application
Typical fields of application of mechanography are geriatrics, especially the assessment of sarcopenia, and also the study of master athletes.
Mechanography is also used frequently in pediatrics for basic research in muscle function and growth and for reference data, as well as in specific diseases like Prader–Willi syndrome, obesity, osteogenesis imperfecta, and cerebral palsy.
In contrast to many other established measurement methods, such as the Chair Rising Test, the Timed Up and Go test, and others, the maximum power output relative to body weight during a jump of maximum height measured by mechanography is much more reproducible and does not show a training effect even when repeated frequently.
Based on this test (maximum relative power output of a jump as high as possible) Runge et al. and Schönau et al. defined reference values of a fit population in order to match the individual power output in relation to body
Document 2:::
The Blue Button is a system for patients to view online and download their own personal health records. Several Federal agencies, including the Departments of Defense, Health and Human Services, and Veterans Affairs, implemented this capability for their beneficiaries. In addition, Blue Button has pledges of support from numerous health plans and some vendors of personal health record vendors across the United States. Data from Blue Button-enabled sites can be used to create portable medical histories that facilitate dialog among health care providers, caregivers, and other trusted individuals or entities.
Initially, widespread Blue Button usage supported downloading human-readable data in ASCII. In January 2013, the Office of the National Coordinator for Health IT announced an implementation guide for data holders and developers to enable automated data exchange among Blue Button+ compliant applications using structured data formats. Blue Button+ is designed to enhance the ways consumers get and share their health information in human-readable and machine-readable formats, and to enable the use of this information in third-party applications.
History
Origins
The Blue Button initiative began during the Markle Foundation Work Group on Consumer Engagement meeting in New York City on January 27, 2010. At the time, Meaningful Use was newly authorized and dominated health care IT dialogue in the United States. Members of the Markle Connecting for Health Community felt that while Meaningful Use criteria embraced patient engagement, the rules should also "[c]onsider individuals as information participants—not as mere recipients, but as information contributors, knowledge creators, and shared decision makers and care planners." Markle convened the Work Group on Consumer Engagement to focus on using health information technology (IT) to achieve this kind of patient engagement.
The January 2010 meeting included representatives from private industry, not-for-profit foundations
Document 3:::
Biomimetic materials are materials developed using inspiration from nature. This may be useful in the design of composite materials. Natural structures have inspired and innovated human creations. Notable examples of these natural structures include: honeycomb structure of the beehive, strength of spider silks, bird flight mechanics, and shark skin water repellency.
The etymological roots of the neologism "biomimetic" derive from Greek, since bios means "life" and mimetikos means "imitative".
Tissue engineering
Biomimetic materials in tissue engineering are materials that have been designed such that they elicit specified cellular responses mediated by interactions with scaffold-tethered peptides from extracellular matrix (ECM) proteins; essentially, the incorporation of cell-binding peptides into biomaterials via chemical or physical modification. Amino acids located within the peptides are used as building blocks by other biological structures. These peptides are often referred to as "self-assembling peptides", since they can be modified to contain biologically active motifs. This allows them to replicate information derived from tissue and to reproduce the same information independently. Thus, these peptides act as building blocks capable of conducting multiple biochemical activities, including tissue engineering. Tissue engineering research currently being performed on both short chain and long chain peptides is still in early stages.
Such peptides include both native long chains of ECM proteins as well as short peptide sequences derived from intact ECM proteins. The idea is that the biomimetic material will mimic some of the roles that an ECM plays in neural tissue. In addition to promoting cellular growth and mobilization, the incorporated peptides could also mediate by specific protease enzymes or initiate cellular responses not present in a local native tissue.
In the beginning, long chains of ECM proteins including fibronectin (FN), vitronectin (VN), and laminin (LN
Document 4:::
Wardriving is the act of searching for Wi-Fi wireless networks as well as cell towers, usually from a moving vehicle, using a laptop or smartphone. Software for wardriving is freely available on the internet.
Warbiking, warcycling, warwalking and similar use the same approach but with other modes of transportation.
Etymology
War driving originated from wardialing, a method popularized by a character played by Matthew Broderick in the film WarGames, and named after that film. War dialing consists of dialing every phone number in a specific sequence in search of modems.
Variants
Warbiking or warcycling is similar to wardriving, but is done from a moving bicycle or motorcycle. This practice is sometimes facilitated by mounting a Wi-Fi enabled device on the vehicle.
Warwalking, or warjogging, is similar to wardriving, but is done on foot rather than from a moving vehicle. The disadvantages of this method are a slower speed of travel (leading to the discovery of more infrequently discovered networks) and the absence of a convenient computing environment. Consequently, handheld devices such as pocket computers, which can perform such tasks while users are walking or standing, have dominated this practice. Technology advances and developments in the early 2000s expanded the extent of this practice. Advances include computers with integrated Wi-Fi, rather than CompactFlash (CF) or PC Card (PCMCIA) add-in cards in computers such as Dell Axim, Compaq iPAQ and Toshiba pocket computers starting in 2002. Later, the active Nintendo DS and Sony PSP enthusiast communities gained Wi-Fi abilities on these devices. Further, nearly all modern smartphones integrate Wi-Fi and Global Positioning System (GPS).
Warrailing, or Wartraining, is similar to wardriving, but is done on a train or tram rather than from a slower more controllable vehicle. The disadvantages of this method are higher speed of travel (resulting in less discovery of more infrequently discovered networks) and often
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the premise of the television series The 100?
A. A group of grounders tries to reclaim Earth from the Ark survivors.
B. Juvenile delinquents are sent back to Earth to determine if it is habitable after a nuclear apocalypse.
C. A family struggles to survive in a post-apocalyptic world dominated by AI.
D. A group of space travelers seeks a new planet after Earth is destroyed.
Answer:
|
B. Juvenile delinquents are sent back to Earth to determine if it is habitable after a nuclear apocalypse.
|
Relevant Documents:
Document 0:::
In computing, a plug and play (PnP) device or computer bus is one with a specification that facilitates the recognition of a hardware component in a system without the need for physical device configuration or user intervention in resolving resource conflicts. The term "plug and play" has since been expanded to a wide variety of applications to which the same lack of user setup applies.
Expansion devices are controlled and exchange data with the host system through defined memory or I/O space port addresses, direct memory access channels, interrupt request lines and other mechanisms, which must be uniquely associated with a particular device to operate. Some computers provided unique combinations of these resources to each slot of a motherboard or backplane. Other designs provided all resources to all slots, and each peripheral device had its own address decoding for the registers or memory blocks it needed to communicate with the host system. Since fixed assignments made expansion of a system difficult, devices used several manual methods for assigning addresses and other resources, such as hard-wired jumpers, pins that could be connected with wire or removable straps, or switches that could be set for particular addresses. As microprocessors made mass-market computers affordable, software configuration of I/O devices was advantageous to allow installation by non-specialist users. Early systems for software configuration of devices included the MSX standard, NuBus, Amiga Autoconfig, and IBM Microchannel. Initially all expansion cards for the IBM PC required physical selection of I/O configuration on the board with jumper straps or DIP switches, but increasingly ISA bus devices were arranged for software configuration. By 1995, Microsoft Windows included a comprehensive method of enumerating hardware at boot time and allocating resources, which was called the "Plug and Play" standard.
Plug and play devices can have resources allocated at boot-time only, or may be
Document 1:::
In meteorology, prevailing wind in a region of the Earth's surface is a surface wind that blows predominantly from a particular direction. The dominant winds are the trends in direction of wind with the highest speed over a particular point on the Earth's surface at any given time. A region's prevailing and dominant winds are the result of global patterns of movement in the Earth's atmosphere. In general, winds are predominantly easterly at low latitudes globally. In the mid-latitudes, westerly winds are dominant, and their strength is largely determined by the polar cyclone. In areas where winds tend to be light, the sea breeze-land breeze cycle (powered by differential solar heating and night cooling of sea and land) is the most important cause of the prevailing wind. In areas which have variable terrain, mountain and valley breezes dominate the wind pattern. Highly elevated surfaces can induce a thermal low, which then augments the environmental wind flow. Wind direction at any given time is influenced by synoptic-scale and mesoscale weather like pressure systems and fronts. Local wind direction can also be influenced by microscale features like buildings.
Wind roses are tools used to display the history of wind direction and intensity. Knowledge of the prevailing wind allows the development of prevention strategies for wind erosion of agricultural land, such as across the Great Plains. Sand dunes can orient themselves perpendicular to the prevailing wind direction in coastal and desert locations. Insects drift along with the prevailing wind, but the flight of birds is less dependent on it. Prevailing winds in mountain locations can lead to significant rainfall gradients, ranging from wet across windward-facing slopes to desert-like conditions along their lee slopes.
Wind rose
A wind rose is a graphic tool used by meteorologists to give a succinct view of how wind speed and direction are typically distributed at a particular location. Presented in a polar co
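A minimal sketch of the bookkeeping behind a wind rose follows (not from the source; the random observations, sector scheme, and speed classes are illustrative assumptions): observations are binned into direction sectors and tallied by speed class, and plotting the tallies on a polar axis yields the familiar rose diagram.

```python
# Sketch: bin wind observations into 8 compass sectors and 4 speed classes.
import numpy as np

rng = np.random.default_rng(0)
directions_deg = rng.uniform(0, 360, size=1000)    # direction the wind blows from
speeds_ms = rng.gamma(shape=2.0, scale=3.0, size=1000)

sector_labels = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
sector_width = 360 / len(sector_labels)
# Offset by half a sector so that e.g. 350-10 degrees all count as "N".
sector_idx = ((directions_deg + sector_width / 2) // sector_width).astype(int) % 8

speed_bins = [0, 3, 6, 10, np.inf]                 # m/s classes
speed_idx = np.digitize(speeds_ms, speed_bins) - 1

rose = np.zeros((len(sector_labels), len(speed_bins) - 1), dtype=int)
for d, s in zip(sector_idx, speed_idx):
    rose[d, s] += 1

for label, row in zip(sector_labels, rose):
    print(f"{label:>2}: {row}  ({row.sum() / len(directions_deg):.1%} of observations)")
```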
Document 2:::
The take-grant protection model is a formal model used in the field of computer security to establish or disprove the safety of a given computer system that follows specific rules. It shows that even though the question of safety is in general undecidable, for specific systems it is decidable in linear time.
The model represents a system as a directed graph, where vertices are either subjects or objects. The edges between them are labeled, and the label indicates the rights that the source of the edge has over the destination. Two rights occur in every instance of the model: take and grant. They play a special role in the graph rewriting rules describing admissible changes of the graph.
There are a total of four such rules:
take rule allows a subject to take rights of another object (add an edge originating at the subject, or a right to an existing edge)
grant rule allows a subject to grant own rights to another object (add an edge terminating at the subject, or a right to an existing edge)
create rule allows a subject to create new objects (add a vertex and an edge from the subject to the new vertex)
remove rule allows a subject to remove rights it has over another object (remove a right from an edge originating at the subject, removing the edge if all rights are removed from it)
There are also and , which can be used to take and grant where the above rules would not allow it.
Preconditions for take:
subject s has the right Take for o.
object o has the right r on p.
Preconditions for grant:
subject s has the right Grant for o.
s has the right r on p.
Using the rules of the take-grant protection model, one can reproduce the states into which a system can change, with respect to the distribution of rights. Therefore, one can show whether rights can leak with respect to a given safety model.
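A small sketch of the model follows (not from the source; vertex and right names are illustrative assumptions): edges carry sets of right labels, and the take/grant/create/remove operations check the preconditions described above before rewriting the graph.

```python
# Sketch of the take-grant graph and its four rewriting rules.
from collections import defaultdict

class TakeGrantGraph:
    def __init__(self):
        # rights[(source, dest)] is the set of right labels on that edge
        self.rights = defaultdict(set)

    def add_edge(self, src, dst, *labels):
        self.rights[(src, dst)].update(labels)

    def take(self, s, o, p, r):
        """s takes right r over p, using its Take right over o."""
        if "take" in self.rights[(s, o)] and r in self.rights[(o, p)]:
            self.rights[(s, p)].add(r)

    def grant(self, s, o, p, r):
        """s grants its own right r over p to o, using its Grant right over o."""
        if "grant" in self.rights[(s, o)] and r in self.rights[(s, p)]:
            self.rights[(o, p)].add(r)

    def create(self, s, new, *labels):
        """s creates a new vertex and receives the given rights over it."""
        self.rights[(s, new)].update(labels)

    def remove(self, s, p, r):
        """s removes its own right r over p."""
        self.rights[(s, p)].discard(r)

g = TakeGrantGraph()
g.add_edge("alice", "bob", "take")       # alice may take bob's rights
g.add_edge("bob", "file", "read")        # bob can read the file
g.take("alice", "bob", "file", "read")   # alice acquires read on the file
print(g.rights[("alice", "file")])       # {'read'}
```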
Document 3:::
Ferhan Çeçen (born June 15, 1961) is a Turkish environmental engineer and chemist, researching wastewater treatment, environmental biotechnology, and adsorption processes. She is currently a professor at the Boğaziçi University Institute of Environmental Sciences.
Early life and education
Ferhan Çeçen was born on June 15, 1961, in Istanbul, Turkey. She graduated from the Deutsche Schule Istanbul in 1980 and speaks fluent English and German in addition to her native Turkish. Çeçen completed a B.S. in chemical engineering at Boğaziçi University in 1984.
In 1986, she completed a M.S. in environmental engineering at the Istanbul Technical University (ITU). Her Master's thesis was titled Metal Complexation and its Implications on related Technologies. Her graduate advisor was .
Çeçen earned a Ph.D. in environmental engineering at ITU in 1990. Her dissertation was titled Nitrogen Removal from High-strength Wastewaters by Upflow Submerged Nitrification and Denitrification Filters. Her doctoral advisor was .
Career and research
In November 1990, Çeçen joined the faculty at the Boğaziçi University Institute of Environmental Sciences as an instructor. She was promoted to assistant professor in March 1993, associate professor in October 1993, and full professor in June 1999.
Çeçen researches water and wastewater treatment, environmental biotechnology, adsorption processes, and the impacts of hazardous substances on biological treatment.
She has over 100 publications including "Activated Carbon", "Review of experimental biodegradation data on pharmaceuticals and comparison with predictive BIOWIN models", and "Inhibitory Effect of Silver and Nanosilver on Activated Sludge Fed with Proteinaceous Feed".
Awards
In November 2000, Ferhan Çeçen won two awards: the 2000 TUBITAK (Turkish Scientific and Technical Research Institution) Encouragement Award in Engineering Sciences and the 2000 Boğaziçi University Research Award for Young Scientists.
Document 4:::
NS Puppis (NS Pup) is an irregular variable star in the constellation Puppis. Its apparent magnitude varies between 4.4 and 4.5.
NS Puppis is a naked eye star, given the designation h1 and the Bright Star Catalogue number 3225. It was considered to be a stable star until 1966. It was given the variable star designation NS Puppis in 1975.
h2 Puppis is another luminous K-type star with almost the same visual magnitude about a degree to the southeast.
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What major health crisis affected Antwerp every ten to fifteen years between 1832 and 1892, resulting in thousands of deaths?
A. Typhoid fever
B. Cholera
C. Influenza
D. Tuberculosis
Answer:
|
B. Cholera
|
Relevant Documents:
Document 0:::
Dr. Amanda Bradford is a marine mammal biologist who is currently researching cetacean population dynamics for the National Marine Fisheries Service of the National Oceanic and Atmospheric Administration. Bradford is currently a Research Ecologist with the Pacific Islands Fisheries Science Center's Cetacean Research Program. Her research primarily focuses on assessing populations of cetaceans, including evaluating population size, health, and impacts of human-caused threats, such as fisheries interactions. Bradford is a cofounder and organizer of the Women in Marine Mammal Science (WIMMS) Initiative.
Education
Undergraduate education
Bradford received her Bachelor of Science in Marine Biology from Texas A&M University in Galveston, Texas in 1998. She worked in the lab of Bernd Würsig.
While Bradford was an undergraduate, she was a volunteer at the Texas Marine Mammal Stranding Network from 1994 to 1998. Bradford, monitored live stranded delphinids and performed basic husbandry and life-support for bottlenose dolphins and false killer whales. Bradford also participated in marine mammal necropsies.
During her senior year, Bradford began analyzing photo-identification data from the western North Pacific population of gray whales. Shortly after graduation, Bradford traveled to northeastern Sakhalin Island in the Russian Far East to join a collaborative Russia-U.S. field study of these whales on their primary feeding ground. Once Bradford returned from the field, she spent a year as a research assistant for this project based at the Southwest Fisheries Science Center in La Jolla, California.
Graduate education
Bradford attended the University of Washington, School of Aquatic and Fishery Sciences (SAFS) in Seattle, Washington, receiving her Master of Science in 2003 and then her Doctorate of Philosophy (PhD) in 2011. Bradford studied under the late Glenn VanBlaricom for both degrees.
During her time at SAFS, Bradford spent 10 summers in the Russian Far East studying the endangered western population of gray whales. Bradford's graduate research focused on estimating survival, abundance, anthropogenic impacts, and body condition of these whales. Her results showed that calf survival in the population was notably low, the population numbered only around 100 whales in the early 2000s, whales were vulnerable to fishing gear entanglement and vessel collisions, and that body condition varied by season and year. Lactating females were found to have the poorest body condition and did not always appear to recover by the end of a feeding season. Bradford also studied the age at sexual maturity and the birth-interval of the western gray whales, both important parameters for understanding the dynamics of this endangered population.
Bradford spent a lot of time as a graduate student working on photo-identification of the western gray whale population and published a paper on how to identify calves based on their barnacle scars and pigmentation patterns.
Academic awards and honors
Bradford received the National Marine Fisheries Service - Sea Grant Joint Fellowship Program in Population and Ecosystem Dynamics and Marine Resource Economics. This fellowship is designed to support and train highly qualified PhD students to pursue careers in these fields.
Career and research
Graduate research and early career
The majority of Bradford's work while completing her PhD focused on the western gray whale population. While the population is currently listed as endangered on the Red List of the International Union for Conservation of Nature (IUCN) and considered to be increasing, when Bradford was researching them they were listed as critically endangered. Much of what is known about the western gray whales is a result of the work of Bradford and her international colleagues.
Western Gray Whale Advisory Panel - International Union for Conservation of Nature
Bradford was responsible for synthesizing data and assisting with population analyses for the Western Gray Whale Advisory Panel between 2007 and 2011. Bradford also participated in two ship-based western gray whale satellite tagging surveys off Sakhalin Island, Russia.
Western Gray Whale Project, Russia-U.S. Collaboration
Bradford participated and eventually lead western gray whale boat-based photo-identification and genetic-monitoring surveys between 1998 and 2010, which included her putting in over 1,500 hours of small boat work. Further, Bradford collected gray whale behavioral data and theodolite-tracked movement data. In addition to the gray whale work, Bradford collected information on spotted seals in the early years of the collaboration.
Pacific Islands Fisheries Science Center
Shortly before graduating with her PhD, Bradford took a position at the Pacific Islands Fisheries Science Center, a part of NOAA Fisheries. Bradford is in the Cetacean Research Program of the Protected Species Division, where she studies population dynamics and demography, line-transect abundance estimation, mark-recapture parameter estimation, and health and injury assessment.
Bradford's work has been relevant to estimating the bycatch of false killer whales in the Hawaii-based deep-set longline fishery. False killer whales are known for depredating catch and bait in this fishery and, due to this behavior, they are one of the most often accidentally caught marine mammals. Bradford was involved in a study of false killer whale behavior and interactions with the fisheries in an effort to try and reduce the bycatch of this species and achieve conservation goals.
Bradford has also been working on a population study of Megaptera novaeangliae, the humpback whale, and coauthored a paper in 2020 on a newfound breeding ground for the endangered western North Pacific humpback whale population off the Mariana Archipelago. In order to promote the recovery of this population, it is vital to know the full extent of their breeding grounds to be able to assess and eliminate threats.
Bradford regularly participates in ship-based and small boat surveys for cetaceans in the Pacific Islands region. She also plays a leading role in efforts to incorporate unmanned aircraft systems, automated photo-identification using machine learning, and open data science practices into the data collection and analysis workflows of the Cetacean Research program. She regularly gives presentations, contributes to web stories, and otherwise communicates to stakeholders and members of the public.
Outreach and service
Women in Marine Mammal Science
Bradford is a cofounder and organizer of Women in Marine Mammal Science (WIMMS), an initiative aimed at amplifying women and helping them advance their careers in the field of marine mammal science. The initiative was formed following a workshop in 2017 at the Society for Marine Mammalogy Biennial Conference on the Biology of Marine Mammals. The workshop focused on identifying barriers that women face in the marine mammal science field and provided strategies to overcome these barriers. As a part of WIMMS, Bradford conducted a survey and analyzed results on gender-specific experiences in marine mammal science.
In 2020, Bradford signed a petition to the Society of Marine Mammalogy asking for them to help eliminate unpaid research positions within the field as the prevalence of these positions decreases the accessibility of the field and limits the diversity and inclusion.
Society for Marine Mammalogy
Bradford served as the Student-Member-at-Large for the Society for Marine Mammalogy's Board of Governors from 2006 to 2008. Bradford served as the student representative, facilitated student participation in the Society, and promoted the growth of the student chapters.
Select publications
Bradford A. et al. (2021). Line-transect abundance estimates of cetaceans in U.S. waters around the Hawaiian Islands in 2002, 2010 and 2017. U.S. Department of Commerce, NOAA Tech. Memo. NMFS-PIFSC-115. 52 pp.
Bradford A. et al. (2020). Abundance estimates of false killer whales in Hawaiian waters and the broader central Pacific. U.S. Department of Commerce, NOAA Tech. Memo. NMFS-PIFSC-104. 78 pp.
Hill M. and Bradford A. et al. (2020). Found: a missing breeding ground for endangered western North Pacific humpback whales in the Mariana Archipelago. Endangered Species Research 91–103. doi:10.3354/esr01010.
Bradford A. et al. (2018). Abundance estimates for management of endangered false killer whales in the main Hawaiian Islands. Endangered Species Research 36:297–313.
Weller D. and Bradford A. et al. (2018). Prevalence of Killer Whale Tooth Rake Marks on Gray Whales off Sakhalin Island, Russia. Aquatic Mammals 44:643–652. doi:10.1578/AM.44.6.2018.643.
Bradford A., Forney K., Oleson E., Barlow J. (2017). Abundance estimates of cetaceans from a line-transect survey within the U.S. Hawaiian Islands Exclusive Economic Zone. Fishery Bulletin 115:129–142.
Bradford A., Forney K., Oleson E., Barlow J. (2014). Accounting for subgroup structure in line-transect abundance estimates of false killer whales (Pseudorca crassidens) in Hawaiian waters. PLoS ONE 9:e90464.
Bradford A. et al. (2012). Leaner leviathans: Body condition variation in a critically endangered whale population. Journal of Mammalogy 93:251–266. doi:10.1644/11-MAMM-A-091.1.
Bradford A., Weller D., Burdin A., Brownell R. (2011). Using barnacle and pigmentation characteristics to identify gray whale calves on their feeding grounds. Marine Mammal Science 27. doi:10.1111/j.1748-7692.2010.00413.x.
Bradford A. et al. (2009). Anthropogenic scarring of western gray whales (Eschrichtius robustus). Marine Mammal Science 25:161–175.
Document 1:::
In philosophy and sociology, a biofact is a being that is both an artifact and living being, or both natural and artificial. This being has been created by purposive human action but exists by processes of growth. The word is a neologism coined from the combination of the words bios and artifact.
There are sources who cite some creations of genetic engineering as examples of biofacts.
History
Biofact was introduced as early as 2001 by the German philosopher Nicole C. Karafyllis, although her book Biofakte, published in 2003, is commonly used as the reference for the introduction of the term. According to Karafyllis, the word biofact first appeared in a German article (entitled 'Biofakt und Artefakt') in 1943, written by the Austrian protozoologist Bruno M. Klein. Addressing both microscopy and philosophy, Klein used biofact to name a visible, non-living product that emerges from a living being while that being is still alive (e.g. a shell). However, Klein's distinction rested on the oppositions biotic/abiotic and dead/alive, not on nature/technology and growth/man-made. For her part, Karafyllis described the biofact as a hermeneutic concept that allows the comparison of nature and technology in the domain of the living.
Philosophy
With the term biofact, Karafyllis wants to emphasize that living entities can be highly artificial due to methods deriving from agriculture, gardening (e.g. breeding) or biotechnology (e.g. genetic engineering, cloning). Biofacts show signatures of culture and technique.
Primarily, the concept aims to argue against the common philosophical tradition of summarizing all kinds of living beings under the category of nature. The concept of the biofact questions whether the phenomenon of growth is, and ever was, a secure criterion for differentiating between nature and technology.
For the philosophy of technology the questions arise if a) biotechnology and agriculture should not be an integral part of reflexion, thereby adding new insights to the common focus
Document 2:::
In computer science, counting sort is an algorithm for sorting a collection of objects according to keys that are small positive integers; that is, it is an integer sorting algorithm. It operates by counting the number of objects that possess distinct key values, and applying prefix sum on those counts to determine the positions of each key value in the output sequence. Its running time is linear in the number of items and the difference between the maximum key value and the minimum key value, so it is only suitable for direct use in situations where the variation in keys is not significantly greater than the number of items. It is often used as a subroutine in radix sort, another sorting algorithm, which can handle larger keys more efficiently.
Counting sort is not a comparison sort; it uses key values as indexes into an array and the lower bound for comparison sorting will not apply. Bucket sort may be used in lieu of counting sort, and entails a similar time analysis. However, compared to counting sort, bucket sort requires linked lists, dynamic arrays, or a large amount of pre-allocated memory to hold the sets of items within each bucket, whereas counting sort stores a single number (the count of items) per bucket.
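As a rough Python sketch of the procedure just described (the function name counting_sort and its key parameter are illustrative choices, not taken from the text), counting occurrences, taking prefix sums, and then placing items stably looks like this:

```python
def counting_sort(items, key=lambda x: x):
    """Sort items whose keys are small non-negative integers.

    Runs in O(n + k) time, where n is the number of items and k is the
    maximum key value.
    """
    if not items:
        return []
    k = max(key(item) for item in items)
    # Count the number of items with each key value.
    counts = [0] * (k + 1)
    for item in items:
        counts[key(item)] += 1
    # Prefix sums give the starting output position for each key value.
    total = 0
    for value in range(k + 1):
        counts[value], total = total, total + counts[value]
    # Place each item at its position; items with equal keys keep their order.
    output = [None] * len(items)
    for item in items:
        output[counts[key(item)]] = item
        counts[key(item)] += 1
    return output

print(counting_sort([("b", 3), ("a", 1), ("c", 3), ("d", 0)], key=lambda p: p[1]))
# [('d', 0), ('a', 1), ('b', 3), ('c', 3)]
```

Because the items themselves are carried into the output array, the sketch also illustrates why the method can serve as a stable subroutine in radix sort rather than returning only sorted keys.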
Input and output assumptions
In the most general case, the input to counting sort consists of a collection of items, each of which has a non-negative integer key whose maximum value is at most .
In some descriptions of counting sort, the input to be sorted is assumed to be more simply a sequence of integers itself, but this simplification does not accommodate many applications of counting sort. For instance, when used as a subroutine in radix sort, the keys for each call to counting sort are individual digits of larger item keys; it would not suffice to return only a sorted list of the key digits, separated from the items.
In applications such as in radix sort, a bound on the maximum key value will be known in advance, and can be assumed to be p
Document 3:::
Before data.europa.eu, the EU Open Data Portal was the point of access to public data published by the EU institutions, agencies and other bodies. On April 21, 2021 it was consolidated to the data.europa.eu portal, together with the European Data Portal: a similar initiative aimed at the EU Member States.
Public data can be used and reused for commercial or non-commercial purposes. The portal was a key instrument of the EU open data strategy. By ensuring easy and free access to data, the portal aimed to enhance their innovative use and economic potential. The goal of the portal was also to make the institutions and other EU bodies more transparent and accountable.
Legal basis and launch of the portal
Launched in December 2012, the portal was formally established by Commission Decision of 12 December 2011 (2011/833/EU) on the reuse of Commission documents to promote accessibility and reuse.
Based on this decision, all the EU institutions were invited - and are still today - to publish information such as open data and to make it accessible to the public whenever possible.
The operational management of the portal was the task of the Publications Office of the European Union. Implementation of EU open data policy was the responsibility of the Directorate General for Communications Networks, Content and Technology (DG CONNECT) of the European Commission. This is still true today with data.europa.eu.
Features
The portal enabled users to search, explore, link, download and easily re-use data for commercial or non-commercial purposes, through a common metadata catalogue. From the portal, users could access data published on the websites of the various institutions, agencies and other bodies of the EU.
Semantic technologies offered additional functionalities. The metadata catalogue could be searched via an interactive search engine and through SPARQL queries.
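As a hedged illustration of how such a SPARQL-searchable metadata catalogue can be queried programmatically, the Python sketch below uses the SPARQLWrapper library; the endpoint URL is a placeholder and the assumption that datasets are described with DCAT/Dublin Core terms is illustrative, not taken from the text.

```python
# A minimal sketch: querying a DCAT-style metadata catalogue over SPARQL.
# The endpoint URL below is hypothetical.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://example.org/sparql")  # hypothetical endpoint
endpoint.setQuery("""
    PREFIX dcat: <http://www.w3.org/ns/dcat#>
    PREFIX dct:  <http://purl.org/dc/terms/>
    SELECT ?dataset ?title WHERE {
        ?dataset a dcat:Dataset ;
                 dct:title ?title .
    }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["dataset"]["value"], "-", row["title"]["value"])
```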
Users could suggest data they thought was missing from the portal and give feedback on the quality of the data available.
The
Document 4:::
Barnes interpolation, named after Stanley L. Barnes, is the interpolation of unevenly spread data points from a set of measurements of an unknown function in two dimensions into an analytic function of two variables. An example of a situation where the Barnes scheme is important is in weather forecasting where measurements are made wherever monitoring stations may be located, the positions of which are constrained by topography. Such interpolation is essential in data visualisation, e.g. in the construction of contour plots or other representations of analytic surfaces.
Introduction
Barnes proposed an objective multi-pass scheme for the interpolation of two-dimensional data. This provided a method for interpolating sea-level pressures across the entire United States of America, producing a synoptic chart across the country using dispersed monitoring stations. Researchers have subsequently improved the Barnes method to reduce the number of parameters required for calculation of the interpolated result, increasing the objectivity of the method.
The method constructs a grid whose size is determined by the distribution of the two-dimensional data points. Using this grid, the function values are calculated at each grid point. To do this, the method utilises a series of Gaussian functions with distance weighting to determine the relative importance of any given measurement for the function values. Correction passes are then made to optimise the function values by accounting for the spectral response of the interpolated points.
Method
Here we describe the method of interpolation used in a multi-pass Barnes interpolation.
First pass
For a given grid point (i, j), the interpolated function g(x_i, y_i) is first approximated by an inverse weighting of the data points. To do this, a weighting value is assigned to each data point k for each grid point, such that w_k = exp(-d_k^2 / κ), where d_k is the distance between the grid point and data point k and κ is a falloff parameter that controls the width of the Gaussian function
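To make the first pass concrete, here is a small Python sketch of the Gaussian-weighted averaging onto a grid; the function name and parameters are illustrative choices, and the correction passes described above are omitted.

```python
import numpy as np

def barnes_first_pass(grid_x, grid_y, obs_x, obs_y, obs_val, kappa):
    """First-pass Barnes estimate: Gaussian-weighted average of observations.

    Weights are exp(-d^2 / kappa), where d is the distance from a grid point
    to an observation and kappa is the falloff parameter.
    """
    gx, gy = np.meshgrid(grid_x, grid_y)
    field = np.zeros_like(gx, dtype=float)
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d2 = (obs_x - gx[i, j]) ** 2 + (obs_y - gy[i, j]) ** 2
            w = np.exp(-d2 / kappa)
            field[i, j] = np.sum(w * obs_val) / np.sum(w)
    return field

# Example: scattered pressure-like observations interpolated onto a 5x5 grid.
rng = np.random.default_rng(0)
ox, oy = rng.uniform(0, 1, 20), rng.uniform(0, 1, 20)
vals = np.sin(ox) + np.cos(oy)
print(barnes_first_pass(np.linspace(0, 1, 5), np.linspace(0, 1, 5),
                        ox, oy, vals, kappa=0.1).shape)  # (5, 5)
```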
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is Dr. Amanda Bradford's primary research focus as a marine mammal biologist?
A. Studying the effects of climate change on marine ecosystems
B. Evaluating population dynamics and health of cetaceans
C. Investigating deep-sea fish species
D. Analyzing coral reef restoration techniques
Answer:
|
B. Evaluating population dynamics and health of cetaceans
|
Relevant Documents:
Document 0:::
A kernel debugger is a debugger present in some operating system kernels to ease debugging and kernel development by the kernel developers. A kernel debugger might be a stub implementing low-level operations, with a full-blown debugger such as GNU Debugger (gdb), running on another machine, sending commands to the stub over a serial line or a network connection, or it might provide a command line that can be used directly on the machine being debugged.
Operating systems and operating system kernels that contain a kernel debugger:
The Windows NT family includes a kernel debugger named KD, which can act as a local debugger with limited capabilities (reading and writing kernel memory, and setting breakpoints) and can attach to a remote machine over a serial line, IEEE 1394 connection, USB 2.0 or USB 3.0 connection. The WinDbg GUI debugger can also be used to debug kernels on local and remote machines.
BeOS and Haiku include a kernel debugger usable with either an on-screen console or over a serial line. It features various commands to inspect memory, threads, and other kernel structures. In Haiku, the debugger is called "Kernel Debugging Land" (KDL).
DragonFly BSD
Linux kernel: no kernel debugger was included in the mainline Linux tree prior to version 2.6.26-rc1 because Linus Torvalds didn't want a kernel debugger in the kernel.
KDB (local)
KGDB (remote)
MDB (local/remote)
NetBSD has DDB for local and KGDB for remote.
macOS has ddb for local and kdp for remote.
OpenBSD includes ddb, whose syntax is similar to that of the GNU Debugger.
References
Document 1:::
Stone Flower is a monument to the victims of the Ustasha genocide of the Serbs during World War II in Yugoslavia, located in Jasenovac, Croatia. Designed by Bogdan Bogdanović and unveiled in 1966, it serves as a reminder of the atrocities perpetrated in the Jasenovac concentration camp.
External links
Official Site of the Jasenovac Memorial – includes history and profile of architect
Official Site of the Jasenovac Memorial – Gallery
Spomenik Database - Jasenovac
Document 2:::
The Comptroller and Auditor General (C&AG) () is the constitutional officer responsible for public audit in Ireland. The Office of the Comptroller and Auditor General is the public audit body for the Republic of Ireland and is headed by the C&AG.
The Comptroller and Auditor General is ex-officio member of the Standards in Public Office Commission and was an ex-officio member of the Referendum Commission, before it was superseded by the Electoral Commission.
The C&AG is a member of the International Organization of Supreme Audit Institutions.
The current Comptroller and Auditor-General is Seamus McCarthy (since 28 May 2012).
Constitutional officer and independence
The office of Comptroller and Auditor General was established under Article 33 of the Constitution of Ireland. The C&AG is appointed by the President on the nomination of Dáil Éireann. The office has its origins in Article 62 of the Constitution of the Irish Free State of 1922 as implemented by the Comptroller and Auditor-General Act 1923, which provided that the Comptroller and Auditor-General () (as the office was then known) was to be appointed by Dáil Éireann. The 1923 Act and the Comptroller and Auditor General (Amendment) Act 1993 are the two principal legislative instruments governing the C&AG. Both the Constitutional and legislative provisions ensure the absolute independence of the C&AG and the Office of the C&AG to operate independently of government. The C&AG makes reports to Dáil Éireann on the accounts of bodies falling within its remit, essentially the State or State bodies and certain specified agencies or bodies receiving State funds. The reports are examined in detail by the Dáil Committee of Public Accounts ().
Function
The Comptroller and Auditor General audits and reports on the use of public funds, establishing whether they have been applied legally and used appropriately. It also examines the internal audit systems of public bodies and applies tests for value for money and effectiveness.
Th
Document 3:::
The New River Company, formally The Governor and Company of the New River brought from Chadwell and Amwell to London, was a privately-owned water supply company in London, England, originally formed around 1609 and incorporated in 1619 by royal charter. Founded by Hugh Myddelton with the involvement of King James I, it was one of the first joint-stock utility companies, and paved the way for large-scale private investment in London's water infrastructure in the centuries which followed.
The New River Company was formed to manage the New River, an artificial aqueduct which had been completed a few years earlier by Myddelton, with the backing of the King and the City, to supply fresh water to London. During its history, the company maintained a large network of pipes to distribute water around much of North London, collecting rates from water users.
The company's headquarters were at New River Head in Clerkenwell, Islington, and the company became a significant landowner in the surrounding area, laying out streets which take their name from people and places associated with the company, including Amwell Street, River Street, Mylne Street, Chadwell Street and Myddelton Square.
The company was finally dissolved in 1904 when London's water supply was taken into municipal ownership, and its assets were acquired by the newly-formed Metropolitan Water Board.
Background
Although London's water supply infrastructure dates back at least to the construction of the Great Conduit in 1247, by the early 17th century water was still scarce, and most Londoners relied on pumps or water carriers which supplied increasingly polluted water. The City of London had funded several early water works in the 16th century, which supplemented the conduit system by drawing water from the tidal Thames, but these were small operations, and London's population continued to grow.
Construction of the New River
Edmund Colthurst originally conceived a scheme to build an artificial waterway from
Document 4:::
A data architect is a practitioner of data architecture, a data management discipline concerned with designing, creating, deploying and managing an organization's data architecture. Data architects define how the data will be stored, consumed, integrated and managed by different data entities and IT systems, as well as any applications using or processing that data in some way. It is closely allied with business architecture and is considered to be one of the four domains of enterprise architecture.
Role
According to the Data Management Body of Knowledge, the data architect “provides a standard common business vocabulary, expresses strategic data requirements, outlines high level integrated designs to meet these requirements, and aligns with enterprise strategy and related business architecture.”
According to the Open Group Architecture Framework (TOGAF), a data architect is expected to set data architecture principles, create models of data that enable the implementation of the intended business architecture, create diagrams showing key data entities, and create an inventory of the data needed to implement the architecture vision.
Responsibilities
Organizes data at the macro level.
Organizes data at the micro level, data models, for a new application.
Provides a logical data model as a standard for the golden source and for consuming applications to inherit.
Provides a logical data model with elements and business rules needed for the creation of data quality (DQ) rules.
Skills
Bob Lambert describes the necessary skills of a data architect as follows:
Foundation in systems development: the data architect should understand the system development life cycle; software project management approaches; and requirements, design, and test techniques. The data architect is asked to conceptualize and influence application and interface projects, and therefore must understand what advice to give and where to plug in to steer toward desirable outcomes.
Depth in d
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the main function of a kernel debugger in operating systems?
A. To provide a graphical user interface for applications
B. To ease debugging and kernel development by developers
C. To manage system resources and processes
D. To enhance the performance of the operating system
Answer:
|
B. To ease debugging and kernel development by developers
|
Relevant Documents:
Document 0:::
James Stirling (1835–1917) was a Scottish mechanical engineer. He was Locomotive Superintendent of the Glasgow and South Western Railway and later the South Eastern Railway. Stirling was born on 2 October 1835, a son of Robert Stirling, rector of Galston, East Ayrshire.
Career
Glasgow and South Western Railway
After working for a village millwright he joined the Glasgow and South Western Railway (GSWR) where he was apprenticed to his brother Patrick, who had been Locomotive Superintendent of that railway since 1853. On completion of his apprenticeship, he spent a year as a fitter at Sharp Stewart in Manchester, before returning to the GSWR drawing office at Kilmarnock; he later became works manager.
On 1 March 1866, his brother Patrick left the GSWR for the Great Northern Railway (GNR), where he became Works Manager at Doncaster, and James was appointed Locomotive Superintendent of the GSWR in his place (Patrick became the Locomotive Superintendent of the GNR from 1 October 1866).
South Eastern Railway
At the end of June 1878 he left the GSWR for the South Eastern Railway. He retired in 1898 and died in Ashford, Kent in 1917.
Locomotives
Like his brother, James Stirling favoured the domeless boiler, known as the "straightback", and cabs for the enginemen. Although not the first British locomotive engineer to use the 4-4-0 type, he was the first to produce a 4-4-0 which could be regarded as successful, with his G&SWR 6 Class of 1873. Stirling also invented a steam reverser, using it on most of his designs from 1874.
On the South Eastern Railway, Stirling designed just six classes of locomotive in his twenty years – three of these were of the 4-4-0 type for express passenger work, each more capable than the last; his other three classes were an 0-6-0 for goods, an 0-4-4T for suburban passenger, and an 0-6-0T for shunting. At his retirement at the end of 1898, the SER had 459 engines, of which 384 were to Stirling's design, and seven others had been purchased to
Document 1:::
The word chemistry derives from the word alchemy, which is found in various forms in European languages.
The word 'alchemy' itself derives from the Arabic word al-kīmiyāʾ, wherein al- is the definite article 'the'. The ultimate origin of the word is uncertain, but the Arabic term kīmiyāʾ is likely derived from either the Ancient Greek word khēmeia or the similar khēmia.
The Greek term khēmeia, meaning "cast together" may refer to the art of alloying metals, from root words χύμα (khúma, "fluid"), from χέω (khéō, "I pour"). Alternatively, khēmia may be derived from the ancient Egyptian name of Egypt, khem or khm, khame, or khmi, meaning "blackness", likely in reference to the rich dark soil of the Nile river valley.
Overview
There are two main views on the derivation of the Greek word. According to one, the word comes from the Greek χημεία (chimeía), pouring, infusion, used in connexion with the study of the juices of plants, and thence extended to chemical manipulations in general; this derivation accounts for the old-fashioned spellings "chymist" and "chymistry". The other view traces it to khem or khame, hieroglyph khmi, which denotes black earth as opposed to barren sand, and occurs in Plutarch as χημία (chimía); on this derivation alchemy is explained as meaning the "Egyptian art". The first occurrence of the word is said to be in a treatise of Julius Firmicus, an astrological writer of the 4th century, but the prefix al there must be the addition of a later Arabic copyist. In English, Piers Plowman (1362) contains the phrase "experimentis of alconomye", with variants "alkenemye" and "alknamye". The prefix al began to be dropped about the middle of the 16th century (further details of which are given below).
Egyptian origin
According to the Egyptologist Wallis Budge, the Arabic word al-kīmiyaʾ actually means "the Egyptian [science]", borrowing from the Coptic word for "Egypt", kēme (or its equivalent in the Mediaeval Bohairic dialect of Coptic, khēme). This Coptic word derives from Demotic kmỉ, itself from ancient Egyptian kmt. The ancient Egyptian word referred to both the country and the colour "black" (Egypt was the "Black Land", by contrast with the "Red Land", the surrounding desert); so this etymology could also explain the nickname "Egyptian black arts". However, according to Mahn, this theory may be an example of folk etymology. Assuming an Egyptian origin, chemistry is defined as follows:
Chemistry, from the ancient Egyptian word "khēmia" meaning transmutation of earth, is the science of matter at the atomic to molecular scale, dealing primarily with collections of atoms, such as molecules, crystals, and metals.
Thus, according to Budge and others, chemistry derives from an Egyptian word khemein or khēmia, "preparation of black powder", ultimately derived from the name khem, Egypt. A decree of Diocletian, written about 300 AD in Greek, speaks against "the ancient writings of the Egyptians, which treat of the khēmia transmutation of gold and silver".
Greek origin
Arabic al-kīmiyaʾ or al-khīmiyaʾ, according to some, is thought to derive from the Koine Greek word khymeia meaning "the art of alloying metals, alchemy"; in the manuscripts, this word is also written khēmeia or kheimeia, which is the probable basis of the Arabic form. According to Mahn, the Greek word χυμεία khumeia originally meant "cast together", "casting together", "weld", "alloy", etc. (cf. Gk. kheein "to pour"; khuma, "that which is poured out, an ingot"). Assuming a Greek origin, chemistry is defined as follows:
Chemistry, from the Greek word (khēmeia) meaning "cast together" or "pour together", is the science of matter at the atomic to molecular scale, dealing primarily with collections of atoms, such as molecules, crystals, and metals.
From alchemy to chemistry
Later medieval Latin had alchimia / alchymia "alchemy", alchimicus "alchemical", and alchimista "alchemist". The mineralogist and humanist Georg Agricola (died 1555) was the first to drop the Arabic definite article al-. In his Latin works from 1530 on he exclusively wrote chymia and chymista in describing activity that we today would characterize as chemical or alchemical. As a humanist, Agricola was intent on purifying words and returning them to their classical roots. He had no intent to make a semantic distinction between chymia and alchymia.
During the later sixteenth century Agricola's new coinage slowly propagated. It seems to have been adopted in most of the vernacular European languages following Conrad Gessner's adoption of it in his extremely popular pseudonymous work, Thesaurus Euonymi Philiatri De remediis secretis: Liber physicus, medicus, et partim etiam chymicus (Zurich 1552). Gessner's work was frequently re-published in the second half of the 16th century in Latin and was also published in a number of vernacular European languages, with the word spelled without the al-.
In the 16th and 17th centuries in Europe the forms alchimia and chimia (and chymia) were synonymous and interchangeable. The semantic distinction between a rational and practical science of chimia and an occult alchimia arose only in the early eighteenth century.
In 16th, 17th and early 18th century English the spellings — both with and without the "al" — were usually with an i or y as in chimic / chymic / alchimic / alchymic. During the later 18th century the spelling was re-fashioned to use a letter e, as in chemic in English. In English after the spelling shifted from chimical to chemical, there was corresponding shift from alchimical to alchemical, which occurred in the early 19th century. In French, Italian, Spanish and Russian today it continues to be spelled with an i as in for example Italian chimica.
See also
History of chemistry
History of science
History of thermodynamics
List of Arabic loanwords in English
List of chemical element name etymologies
References
Document 2:::
Nicholas (Nick) Andersen is a cybersecurity professional with extensive experience in both government and private sectors. He served as the Principal Deputy Assistant Secretary and performed the duties of the Assistant Secretary for the Office of Cybersecurity, Energy Security, and Emergency Response (CESER) at the U.S. Department of Energy where he led national efforts to secure U.S. energy infrastructure against various hazards.
Education
Andersen earned a Bachelor of Science degree from American Military University, a Master of Science degree from Western Governors University, a Master of Science degree from Brown University, and an Executive Certificate in Public Policy from the Harvard Kennedy School.
Career
Prior to his role at the Department of Energy, Andersen was the Federal Cybersecurity Lead and Senior Cybersecurity Advisor to the Federal Chief Information Officer at the White House Office of Management and Budget (OMB). In this capacity, he led the OMB Cyber Team and was responsible for government-wide cybersecurity policy development and compliance of shared federal security services.
Andersen also served as the Chief Information Security Officer (CISO) for the State of Vermont, overseeing the state's data security and compliance initiatives. His earlier career includes significant roles in military and intelligence sectors, such as Chief Information Officer for Navy Intelligence and Head of the Office of Intelligence, Surveillance, and Reconnaissance Systems and Technologies at the U.S. Coast Guard. He served on active duty with the U.S. Marine Corps, managing intelligence mission systems in Iraq, Europe, and Africa. He is a veteran of Operation Iraqi Freedom.
In the private sector, Andersen has held leadership positions including Chief Information Security Officer for Public Sector at Lumen Technologies and President/Chief Operating Officer at Invictus International Consulting, LLC. He has also been associated with the Atlantic Council as a nonr
Document 3:::
In meteorology, the different types of precipitation often include the character, formation, or phase of the precipitation which is falling to ground level. There are three distinct ways that precipitation can occur. Convective precipitation is generally more intense, and of shorter duration, than stratiform precipitation. Orographic precipitation occurs when moist air is forced upwards over rising terrain and condenses on the slope, such as a mountain.
Precipitation can fall in either liquid or solid phases, be a mixture of both, or transition between them at the freezing level. Liquid forms of precipitation include rain, drizzle, and dew. Rain or drizzle that freezes on contact with a surface within a subfreezing air mass gains the adjective "freezing", becoming what is known as freezing rain or freezing drizzle. Slush is a mixture of both liquid and solid precipitation. Frozen forms of precipitation include snow, ice crystals, ice pellets (sleet), hail, and graupel. Their respective intensities are classified either by rate of precipitation or by visibility restriction.
Phases
Precipitation falls in many forms, or phases. They can be subdivided into:
Liquid precipitation:
Drizzle (DZ)
Rain (RA)
Cloudburst (CB)
Freezing/Mixed precipitation:
Freezing drizzle (FZDZ)
Freezing rain (FZRA)
Rain and snow mixed / Slush (RASN)
Drizzle and snow mixed / Slush (DZSN)
Frozen precipitation:
Snow (SN)
Snow grains (SG)
Ice crystals (IC)
Ice pellets / Sleet (PL)
Snow pellets / Graupel (GS)
Hail (GR)
Megacryometeors (MC)
The parenthesized letters are the shortened METAR codes for each phenomenon.
Mechanisms
Precipitation occurs when evapotranspiration takes place and local air becomes saturated with water vapor, and so can no longer maintain the level of water vapor in gaseous form, which creates clouds. This occurs when less dense moist air cools, usually when an air mass rises through the atmosphere to higher and cooler altitudes. However, an air ma
Document 4:::
In computer science, pseudocode is a description of the steps in an algorithm using a mix of conventions of programming languages (like assignment operator, conditional operator, loop) with informal, usually self-explanatory, notation of actions and conditions. Although pseudocode shares features with regular programming languages, it is intended for human reading rather than machine control. Pseudocode typically omits details that are essential for machine implementation of the algorithm, meaning that pseudocode can only be verified by hand. The programming language is augmented with natural language description details, where convenient, or with compact mathematical notation. The reasons for using pseudocode are that it is easier for people to understand than conventional programming language code and that it is an efficient and environment-independent description of the key principles of an algorithm. It is commonly used in textbooks and scientific publications to document algorithms and in planning of software and other algorithms.
No broad standard for pseudocode syntax exists, as a program in pseudocode is not an executable program; however, certain limited standards exist (such as for academic assessment). Pseudocode resembles skeleton programs, which can be compiled without errors. Flowcharts, drakon-charts and Unified Modelling Language (UML) charts can be thought of as a graphical alternative to pseudocode, but need more space on paper. Languages such as HAGGIS bridge the gap between pseudocode and code written in programming languages.
Application
Pseudocode is commonly used in textbooks and scientific publications related to computer science and numerical computation to describe algorithms in a way that is accessible to programmers regardless of their familiarity with specific programming languages. Textbooks often include an introduction explaining the conventions in use, and the detail of pseudocode may sometimes approach that of formal programming l
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary meaning of the Greek word "khēmeia" as it relates to the origin of chemistry?
A. Preparation of black powder
B. The art of alloying metals
C. Transmutation of gold and silver
D. Study of plant juices
Answer:
|
B. The art of alloying metals
|
Relevant Documents:
Document 0:::
The Database of Macromolecular Motions is a bioinformatics database and software-as-a-service tool that attempts to
categorize macromolecular motions, sometimes also known as conformational change. It was originally developed by Mark B. Gerstein, Werner Krebs, and Nat Echols in the Molecular Biophysics & Biochemistry Department at Yale University.
Discussion
Since its introduction in the late 1990s, peer-reviewed papers on the database have received thousands of citations. The database has been mentioned in news articles in major scientific journals, book chapters, and elsewhere.
Users can search the database for a particular motion by either protein name or Protein Data Bank ID number. Typically, however, users will enter the database via the Protein Data Bank, which often provides a hyperlink to the molmovdb entry for proteins found in both databases.
The database includes a web-based tool (the Morph server) which allows non-experts to animate and visualize certain types of protein conformational change through the generation of short movies. This system uses molecular modelling techniques to interpolate the structural changes between two different protein conformers and to generate a set of intermediate structures. A hyperlink pointing to the morph results is then emailed to the user.
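For intuition only, the toy Python sketch below generates intermediate frames by straight linear interpolation of Cartesian coordinates between two conformers; the real Morph server relies on molecular modelling techniques rather than this naive scheme, and the function and variable names are purely illustrative.

```python
import numpy as np

def linear_morph(coords_a, coords_b, n_frames=10):
    """Toy morph: linearly interpolate between two (n_atoms, 3) coordinate sets.

    Assumes the two conformers contain the same atoms in the same order and
    have already been superposed; returns a list of intermediate frames.
    """
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        frames.append((1.0 - t) * coords_a + t * coords_b)
    return frames

# Example with two made-up three-atom "conformers".
a = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.5], [0.5, 1.5, 1.0]])
frames = linear_morph(a, b, n_frames=6)
print(len(frames), frames[3].shape)  # 6 (3, 3)
```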
The Morph Server was originally primarily a research tool rather than a general molecular animation tool, and thus offered only limited user control over rendering, animation parameters, color, and point of view, and the original methods sometimes required a fair amount of CPU time to complete. Since their initial introduction in 1996, the database and associated morph server have undergone development to try to address some of these shortcomings as well as add new features, such as Normal Mode Analysis. Other research groups have subsequently developed alternative systems, such as MovieMaker from the University of Alberta.
Commercialization
Bioinformatics vendor DNASTAR has incorporated morphs from the database into its commercial Protean3D product. The connection between DNASTAR and the authors of the database, if any, is not immediately clear.
See also
Database of protein conformational diversity
Notes
References
External links
The Database of Macromolecular Motions (molmovdb)
MovieMaker from the University of Alberta
Document 1:::
BERMAD CS Ltd. is a developer and manufacturer of residential irrigation and water management systems for mining and construction.
The company was founded in 1965 as a producer of irrigation systems, mainly those found in agriculture. Its product offering serves a variety of industries and includes filtration systems, reservoir management, air valves, water meters and fire protection solutions. Bermad has products that are marketed in 70 countries, through a number of subsidiaries and distributors around the world, including subsidiaries in the United States, the United Kingdom, Australia, Brazil, Mexico, China, France, Singapore, Italy, Spain and India.
Divisions
Bermad is divided into four different business units or divisions, each aimed at a different industry or market.
Irrigation
Irrigation solutions have been the primary products developed since the founding of Bermad in 1965. The range of irrigation products includes hydraulic control valves and flow management products for a range of irrigation system types such as drip irrigation and sprinklers.
Buildings & construction
The company's building & construction business unit includes a number of flow and distribution management products aimed at the construction industry, such as pressure and level control, pump and flow, dry pipes, and deluge valves.
Waterworks
The waterworks unit offers pumping stations, water distribution grid networks and sewage management. This includes various water meters, pressure valves, and flow, pump, surge, level and burst control valves.
Fire Protection
The fire protection unit offers sprinkler and fire fighting systems, with a number of products for fail-safe and flow management of water and other fire protection fluids or foams.
References
External links
Document 2:::
BlackBerry Tablet OS is an operating system from BlackBerry Ltd based on the QNX Neutrino real-time operating system designed to run Adobe AIR and BlackBerry WebWorks applications, currently available for the BlackBerry PlayBook tablet computer.
The BlackBerry Tablet OS is the first tablet operating system from QNX (now a subsidiary of RIM).
BlackBerry Tablet OS supports standard BlackBerry Java applications. Support for Android apps has also been announced, through sandbox "app players" which can be ported by developers or installed through sideloading by users. A BlackBerry Tablet OS Native Development Kit, for developing native applications with the GNU toolchain, is currently in closed beta testing. The first device to run BlackBerry Tablet OS was the BlackBerry PlayBook tablet computer.
A similar QNX-based operating system, known as BlackBerry 10, replaced the long-standing BlackBerry OS on handsets after version 7.
See also
BlackBerry OS
BlackBerry 10
References
Document 3:::
3K3A-Activated Protein C (3K3A-APC) is an experimental drug for the treatment of stroke developed by ZZ Biotech. It is also being assessed for the treatment of Alzheimer's disease. It is the subject of RHAPSODY, a phase II trial to determine safety and tolerability.
A phase III trial, RHAPSODY-2, to determine safety and efficacy for treatment of ischemic stroke, is planned. However, on November 16, 2023, the National Institutes of Health (NIH) paused the start of the human trial of 3K3A-APC after an investigation by Science Magazine.
The drug was designed to have more activity at protease-activated receptor 1 (PAR-1) and less anticoagulative effect than activated protein C.
Document 4:::
The Gliwice Radio Tower is a transmission tower in the Szobiszowice district of Gliwice, Upper Silesia, Poland. Nazi Germany staged a false flag attack on the tower in 1939, which was used as a pretext for invading Poland, beginning World War II.
Structure
The Gliwice Radio Tower is tall, with a wooden framework of impregnated siberian larch linked by brass connectors. It was nicknamed "the Silesian Eiffel Tower" by the local population. The tower has four platforms, at , , and above ground. The top platform measures square. A ladder with 365 steps provides access to the top.
The tower is the tallest wooden structure in Europe. The tower was originally designed to carry aerials for medium wave broadcasting, but that transmitter is no longer in service because the final stage is missing. Today, the Gliwice Radio Tower carries multiple transceiver antennas for mobile phone services and a low-power FM transmitter broadcasting on 93.4 MHz.
History
The tower was erected from 1 August 1934 as Sendeturm Gleiwitz (Gleiwitz Radio Tower), when the territory was part of Germany. It was operated by the Reichssender Breslau (former Schlesische Funkstunde broadcasting corporation) of the Reichs-Rundfunk-Gesellschaft radio network. The tower was modeled on the Mühlacker radio transmitter; it replaced a smaller transmitter in Gleiwitz situated nearby on Raudener Straße and went into service on 23 December 1935.
On 31 August 1939, the German SS staged a 'Polish' attack on Gleiwitz radio station, which next morning was used as justification (Seit 5 Uhr 45 wird jetzt zurückgeschossen! / We are now, since 5.45, returning fire!) for the invasion of Poland. The transmission facility was not demolished in World War II. From 4 October 1945 until the inauguration of the new transmitter in Ruda Śląska in 1955 the Gliwice transmitter was used for medium-wave transmissions by the Polish state broadcaster Polskie Radio. After 1955, it was used to jam medium-wave stations (such as Radio Fr
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary function of the Database of Macromolecular Motions?
A. To provide a platform for protein sequencing
B. To categorize macromolecular motions and visualize protein conformational changes
C. To develop new bioinformatics software
D. To store genetic information
Answer:
|
B. To categorize macromolecular motions and visualize protein conformational changes
|
Relevant Documents:
Document 0:::
The laminar flamelet model is a mathematical method for modelling turbulent combustion. The laminar flamelet model is formulated specifically as a model for non-premixed combustion
The concept of ensemble of laminar flamelets was first introduced by Forman A. Williams in 1975, while the theoretical foundation was developed by Norbert Peters in the early 80s.
Theory
The flamelet concept considers the turbulent flame as an aggregate of thin, laminar (Re < 2000), locally one-dimensional flamelet structures present within the turbulent flow field. Counterflow diffusion flame is a common laminar flame which is used to represent a flamelet in a turbulent flow. Its geometry consists of opposed and axi-symmetric fuel and oxidizer jets. As the distance between the jets is decreased and/or the velocity of the jets is increased, the flame is strained and departs from its chemical equilibrium until it eventually extinguishes. The mass fraction of species and temperature fields can be measured or calculated in laminar counterflow diffusion flame experiments. When calculated, a self-similar solution exists, and the governing equations can be simplified to only one dimension i.e. along the axis of the fuel and oxidizer jets. It is in this direction where complex chemistry calculations can be performed affordably.
Logic and formulae
To model non-premixed combustion, governing equations for fluid elements are required. The conservation equation for the species mass fraction involves the Lewis number Le_k of the k-th species and is derived assuming constant heat capacity; a corresponding energy equation can be written for variable heat capacity. From these equations, the mass fraction and temperature depend on
1. Mixture fraction Z
2. Scalar dissipation χ
3. Time
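For orientation, a commonly quoted form of the unsteady flamelet equation for the species mass fraction, written here as a sketch under the simplifying assumption of unity Lewis numbers and adiabatic conditions (and therefore not necessarily the exact form alluded to above), is

\[
\rho \frac{\partial Y_k}{\partial t} = \rho \frac{\chi}{2} \frac{\partial^{2} Y_k}{\partial Z^{2}} + \dot{\omega}_k,
\qquad
\chi = 2 D \left| \nabla Z \right|^{2},
\]

where \(Y_k\) is the mass fraction of species k, \(Z\) the mixture fraction, \(\chi\) the scalar dissipation rate, \(D\) a diffusivity, and \(\dot{\omega}_k\) the chemical source term.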
Many times we neglect the unsteady terms in above equation and assume the local flame structure having a balance between steady chemical equations and steady diffusion equation which r
Document 1:::
A heath () is a shrubland habitat found mainly on free-draining infertile, acidic soils and is characterised by open, low-growing woody vegetation. Moorland is generally related to high-ground heaths with—especially in Great Britain—a cooler and damper climate.
Heaths are widespread worldwide but are rapidly disappearing and considered a rare habitat in Europe. They form extensive and highly diverse communities across Australia in humid and sub-humid areas where fire regimes with recurring burning are required for the maintenance of the heathlands. Even more diverse though less widespread heath communities occur in Southern Africa. Extensive heath communities can also be found in the Texas chaparral, New Caledonia, central Chile, and along the shores of the Mediterranean Sea. In addition to these extensive heath areas, the vegetation type is also found in scattered locations across all continents, except Antarctica.
Characteristics
Heathland is favored where climatic conditions are typically hard and dry, particularly in summer, and soils acidic, of low fertility, and often sandy and very free-draining; a mire may occur where drainage is poor, but usually is only small in extent. Heaths are dominated by low shrubs, to tall.
Heath vegetation can be extremely plant-species rich, and heathlands of Australia are home to some 3,700 endemic or typical species in addition to numerous less restricted species. The fynbos heathlands of South Africa are second only to tropical rainforests in plant biodiversity with over 7,000 species. In marked contrast, the tiny pockets of heathland in Europe are extremely depauperate with a flora consisting primarily of heather (Calluna vulgaris), heath (Erica species) and gorse (Ulex species).
The bird fauna of heathlands are usually cosmopolitan species of the region. In the depauperate heathlands of Europe, bird species tend to be more characteristic of the community, and include Montagu's harrier and the tree pipit. In Australia t
Document 2:::
Eucalyptus merrickiae, commonly known as goblet mallee, is a species of mallee that is endemic to a small area on the south coast of Western Australia. It has rough, flaky bark on part or all of the trunk, sometimes on the base of the larger branches, linear adult leaves, flower buds in groups of three, creamy white flowers and cylindrical fruit.
Description
Eucalyptus merrickiae is a mallee that typically grows to a height of , sometimes a bushy shrub, and forms a lignotuber. It has rough, flaky greyish bark on part or all of the trunk, sometimes also on the base of the larger branches, and smooth grey bark above. The adult leaves are the same shade of green on both sides, linear, long and wide on a petiole long. The flower buds are arranged in groups of three on an unbranched peduncle long, the individual buds on pedicels up to long. The individual buds are cylindrical, long and wide with a conical or rounded operculum. Flowering occurs between August and November and the flowers are creamy white to pinkish. The fruit is a woody cylindrical capsule long and wide with the valves close to rim level.
Taxonomy and naming
Eucalyptus merrickiae was first formally described in 1925 by Joseph Maiden and William Blakely and the description was published in the Journal and Proceedings of the Royal Society of New South Wales. The specific epithet honours Mary Merrick, who was employed at the Royal Botanic Gardens, Sydney from 1921 to about 1927.
Distribution and habitat
Goblet mallee is mostly found near the northeastern side of salt lakes to the east of the highway between Scadden and Salmon Gums where it grows in sand and sandy clay.
Conservation status
This eucalypt is classed as "vulnerable" under the Australian Government Environment Protection and Biodiversity Conservation Act 1999 and as "Threatened Flora (Declared Rare Flora — Extant)" by the Department of Environment and Conservation (Western Australia). The main threats to the species are the small num
Document 3:::
BAY 2413555 is an experimental drug that acts as a potent and selective positive allosteric modulator of the Muscarinic acetylcholine receptor M2. It has been investigated as an agent to stabilise heart rhythm.
Document 4:::
Najas guadalupensis is a species of aquatic plant known by the common names southern waternymph, guppy grass, najas grass, and common water nymph. It is native to the Americas, where it is widespread. It is considered native to Canada (from Alberta to Quebec), and most of the contiguous United States, Mexico, Central America, the West Indies and South America. It has been introduced in Japan, and Palestine and Israel.
Najas guadalupensis is an annual, growing submerged in aquatic habitat types such as ponds, ditches, and streams. It produces a slender, branching stem up to 60 to 90 centimeters in maximum length. The thin, somewhat transparent, flexible leaves are up to 3 centimeters long and just 1 or 2 millimeters wide. They are edged with minute, unicellular teeth. Tiny flowers occur in the leaf axils; staminate flowers grow toward the end of the plant and pistillate closer to the base. They are also a popular aquarium plant for beginners due to their hardiness as well as growth rate, which helps provide shelter for aquarium fish.
Subspecies
Numerous varietal and subspecific names have been proposed. Only four are currently recognized:
Najas guadalupensis subsp. floridana (R.R.Haynes & Wentz) R.R.Haynes & Hellq – Alabama, Florida, and Georgia (U.S. state); naturalized in Japan
Najas guadalupensis subsp. guadalupensis
Najas guadalupensis subsp. muenscheri (R.T.Clausen) R.R.Haynes & Hellq. – New York State
Najas guadalupensis subsp. olivacea (Rosend. & Butters) R.R.Haynes & Hellq. – Canada (Manitoba, Ontario, Quebec) and northern United States (Iowa, Minnesota, Indiana, Michigan, New York state)
References
External links
Jepson Manual Treatment
Photo gallery
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is Elsinoë randii classified as in terms of its biological family?
A. Bacterium
B. Fungus
C. Virus
D. Alga
Answer:
|
B. Fungus
|
Relevant Documents:
Document 0:::
The GF method, sometimes referred to as FG method, is a classical mechanical method introduced by Edgar Bright Wilson to obtain certain internal coordinates for a vibrating semi-rigid molecule, the so-called normal coordinates Qk. Normal coordinates decouple the classical vibrational motions of the molecule and thus give an easy route to obtaining vibrational amplitudes of the atoms as a function of time. In Wilson's GF method it is assumed that the molecular kinetic energy consists only of harmonic vibrations of the atoms, i.e., overall rotational and translational energy is ignored. Normal coordinates appear also in a quantum mechanical description of the vibrational motions of the molecule and the Coriolis coupling between rotations and vibrations.
It follows from application of the Eckart conditions that the matrix G⁻¹ gives the kinetic energy in terms of arbitrary linear internal coordinates, while F represents the (harmonic) potential energy in terms of these coordinates. The GF method gives the linear transformation from general internal coordinates to the special set of normal coordinates.
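As a compact statement of this transformation (a standard form of Wilson's result, given here as a sketch rather than quoted from the excerpt), the normal coordinates and harmonic vibrational frequencies follow from the eigenvalue problem of the matrix product GF:

\[
\mathbf{G}\,\mathbf{F}\,\mathbf{L} \;=\; \mathbf{L}\,\boldsymbol{\Lambda},
\qquad
\Lambda_{kk} = \lambda_k = 4\pi^{2}\nu_k^{2},
\]

where the columns of \(\mathbf{L}\) define the normal coordinates \(Q_k\) in terms of the internal coordinates and \(\nu_k\) are the harmonic frequencies.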
The GF method
A non-linear molecule consisting of N atoms has 3N − 6 internal degrees of freedom, because positioning a molecule in three-dimensional space requires three degrees of freedom, and the description of its orientation in space requires another three degrees of freedom. These degrees of freedom must be subtracted from the 3N degrees of freedom of a system of N particles.
The interaction among atoms in a molecule is described by a potential energy surface (PES), which is a function of 3N − 6 coordinates. The internal degrees of freedom s1, ..., s3N−6 describing the PES in an optimal way are often non-linear; they are for instance valence coordinates, such as bending and torsion angles and bond stretches. It is possible to write the quantum mechanical kinetic energy operator for such curvilinear coordinates, but it is hard to formulate a general theory appl
Document 1:::
Spectral regularization is any of a class of regularization techniques used in machine learning to control the impact of noise and prevent overfitting. Spectral regularization can be used in a broad range of applications, from deblurring images to classifying emails into a spam folder and a non-spam folder. For instance, in the email classification example, spectral regularization can be used to reduce the impact of noise and prevent overfitting when a machine learning system is being trained on a labeled set of emails to learn how to tell a spam and a non-spam email apart.
Spectral regularization algorithms rely on methods that were originally defined and studied in the theory of ill-posed inverse problems, focusing on the inversion of a linear operator (or a matrix) that possibly has a bad condition number or an unbounded inverse. In this context, regularization amounts to substituting the original operator by a bounded operator called the "regularization operator" that has a condition number controlled by a regularization parameter, a classical example being Tikhonov regularization. To ensure stability, this regularization parameter is tuned based on the level of noise. The main idea behind spectral regularization is that each regularization operator can be described using spectral calculus as an appropriate filter on the eigenvalues of the operator that defines the problem, and the role of the filter is to "suppress the oscillatory behavior corresponding to small eigenvalues". Therefore, each algorithm in the class of spectral regularization algorithms is defined by a suitable filter function (which needs to be derived for that particular algorithm). Three of the most commonly used regularization algorithms for which spectral filtering is well-studied are Tikhonov regularization, Landweber iteration, and truncated singular value decomposition (TSVD). As for choosing the regularization parameter, examples of candidate methods to compute this p
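As one concrete instance of such a spectral filter, the Python sketch below applies the Tikhonov filter factors s/(s^2 + alpha) to the singular values of the operator; the function name and the synthetic test problem are illustrative assumptions rather than anything prescribed by the text.

```python
import numpy as np

def tikhonov_filter_solve(A, y, alpha):
    """Solve A x ~ y via Tikhonov regularization expressed as a spectral filter.

    The SVD of A gives singular values s; the filter s / (s^2 + alpha) damps
    components associated with small singular values, where noise dominates.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filter_factors = s / (s ** 2 + alpha)        # Tikhonov spectral filter
    return Vt.T @ (filter_factors * (U.T @ y))   # x = V diag(filter) U^T y

# Example: a mildly ill-conditioned least-squares problem with noisy data.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10)) @ np.diag(np.logspace(0, -4, 10))
x_true = rng.normal(size=10)
y = A @ x_true + 0.01 * rng.normal(size=50)
print(np.linalg.norm(tikhonov_filter_solve(A, y, alpha=1e-3) - x_true))
```

Larger values of alpha filter more aggressively, trading bias for stability, which is the usual way the regularization parameter is tuned against the noise level.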
Document 2:::
The Parallel Element Processing Ensemble (PEPE) was one of the very early parallel computing systems. Bell began researching the concept in the mid-1960s as a way to provide high-performance computing support for the needs of anti-ballistic missile (ABM) systems. The goal was to build a computer system that could simultaneously track hundreds of incoming ballistic missile warheads. A single PEPE system was built by Burroughs Corporation in the 1970s, by which time the US Army's ABM efforts were winding down. The design later evolved into the Burroughs Scientific Processor for commercial sales, but a lack of sales prospects led to it being withdrawn from the market.
History
PEPE came about as a result of predictions of the sorts of ICBM forces that would be expected in the event of an all-out Soviet attack during the 1970s. Missile fleets of both the US and USSR were growing through the 1960s, but a bigger issue was the rapid increase in the number of warheads as a result of the move to multiple independently targetable reentry vehicles (MIRV). Computers designed for the Nike-X system were largely similar to systems like the IBM 7030, and would have been able to handle attacks of perhaps a dozen warheads arriving simultaneously. With MIRV, hundreds of targets, both warheads and decoys, would arrive at the same time, and the CPUs being used simply did not have the performance needed to analyze their trajectories quickly enough to leave time to attack them.
Bell Labs, which had been the primary industry partner in previous ABM systems, proposed development of a new system able to track 200 to 300 missiles at a time. The program officially started in 1969. Development was led by System Development Corporation (SDC), which had formed in 1955 to develop the software for the SAGE air defense computer system. PEPE was designed by a team led by George Mueller, formerly of NASA. He described the ultimate goal to produce 300 million instructions per second, far in advance of
Document 3:::
In mathematics, a topological abelian group, or TAG, is a topological group that is also an abelian group.
That is, a TAG is both a group and a topological space, the group operations are continuous, and the group's binary operation is commutative.
The theory of topological groups applies also to TAGs, but more can be done with TAGs. Locally compact TAGs, in particular, are used heavily in harmonic analysis.
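A standard concrete example (chosen here for illustration rather than taken from the excerpt) is the circle group

\[
\mathbb{T} \;=\; \mathbb{R}/\mathbb{Z},
\]

which, with the quotient topology, is a compact, connected topological abelian group; its characters \(x \mapsto e^{2\pi i n x}\) underlie classical Fourier series, the prototypical case of harmonic analysis on a locally compact TAG.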
See also
, a topological abelian group that is compact and connected
References
Fourier analysis on Groups, by Walter Rudin.
Document 4:::
A gelding (/ˈɡɛldɪŋ/) is a castrated male horse or other equine, such as a pony, donkey or a mule. The term is also used with certain other animals and livestock, such as domesticated camels. By comparison, the equivalent term for castrated male cattle would be steer (or bullock), and wether for sheep and goats.
Castration allows a male animal to be more calm, better-behaved, less sexually aggressive, and more responsive to training efforts. This makes the animal generally more suitable as an everyday working animal, or as a pet in the case of companion animals. The gerund and participle "gelding" and the infinitive "to geld" refer to the castration procedure itself.
Etymology
The verb "to geld" comes from the Old Norse , from the adjective . The noun "gelding" is from the Old Norse .
History
The Scythians are thought to have been among the first to geld their horses, as they valued war horses that were quiet and less defensive, as well as easier to keep in groups and less likely to be territorial, without the temptation of reproductive/mating urges. Aristotle is said to have mentioned gelding as early as 350 BC.
Reasons for gelding
A male horse is often gelded to make him better-behaved and easier to control. Gelding can also remove lower-quality animals from the gene pool.
To allow only the finest animals to breed on, while preserving adequate genetic diversity, only a small percentage of all male horses should remain stallions. Mainstream sources place the percentage of stallions that should be kept as breeding stock at about 10%, while an extreme view states that only 0.5% of all males should be bred. In wild herds, the 10% ratio is largely maintained naturally, as a single dominant stallion usually protects and breeds with a herd which is seldom larger than 10 or 12 mares, though he may permit a less dominant junior stallion to live at the fringes of the herd. There are more males than just herd stallions, so unattached male horses group together for protection in small all-male "bachelor herds", where, in the absence of mares, they tend to behave much like geldings.
Geldings are preferred over stallions for working purposes because they are calmer, easier to handle, and more tractable. Geldings are therefore a favorite for many equestrians. In some horse shows, due to the dangers inherent in handling stallions, which require experienced handlers, youth exhibitors are not permitted to show stallions in classes limited to just those riders.
Geldings are often preferred over mares, because some mares become temperamental when in heat and the use of mares may be limited during the later months of pregnancy and while caring for a young foal.
In horse racing, castrating a stallion may be considered worthwhile if the animal is easily distracted by other horses, difficult to handle, or otherwise not running to his full potential due to behavioral issues. While this means the horse loses any breeding value, a successful track career can often be a boost to the value of the stallion that sired the gelding.
Sometimes a stallion used for breeding is castrated later in life, possibly due to sterility, because the offspring of the stallion are not up to expectations, or simply because the horse is not used much for breeding. Castration may allow a stallion to live peacefully with other horses, allowing a more social and comfortable existence.
Under British National Hunt racing (i.e. Steeplechase) rules, to minimize health and safety risks, nearly all participating horses are gelded. On the other hand, in other parts of Europe, geldings are excluded from many of the most prestigious flat races including the Classics and the Prix de l'Arc de Triomphe (with an exception being the French classic Prix Royal-Oak, open to geldings since 1986). In North American Thoroughbred racing, geldings, if otherwise qualified by age, winnings, or experience, are allowed in races open to intact males. The same applies in Australia.
Concerns about gelding
Some cultures historically did not geld male horses and still seldom do, most notably the Arabs, who usually used mares for everyday work and for war. In these cultures, most stallions are still not used for breeding, only those of the best quality. When used as ordinary riding animals, they are kept only with or near other male horses in a "bachelor" setting, which tends to produce calmer, less stallion-like behavior. Sometimes religious reasons for these practices exist; for example, castration of both animals and humans was categorically forbidden in the Hebrew Bible and is prohibited in Jewish law.
Although castrations generally have few complications, there are risks. Castration can have complications, such as swelling, hemorrhage or post-operative bleeding, infections, and eventration. It can take up to six weeks for residual testosterone to clear from the new gelding's system and he may continue to exhibit stallion-like behaviors in that period. For reasons not always clear, about 30% of all geldings may still display a stallion-like manner, some because of a cryptorchid testicle retained in the horse, some due to previously learned behavior, but some for no clear reason. Training to eliminate these behaviors is generally effective. If a standing castration is performed, it is possible for the horse to injure the veterinarian during the procedure. If complications arise, the horse must be immediately anesthetized. Castration does not automatically change bad habits and poor manners. This must be accomplished by proper training.
Time of gelding
A horse may be gelded at any age; however, if an owner intends to geld a particular foal, it is now considered best to geld the horse prior to becoming a yearling, and definitely before he reaches sexual maturity. While it was once recommended to wait until a young horse was well over a year old, even two, this was a holdover from the days when castration was performed without anesthesia and was thus far more stressful on the animal. Modern veterinary techniques can now accomplish castration with relatively little stress and minimal discomfort, so long as appropriate analgesics are employed. A few horse owners delay gelding a horse on the grounds that the testosterone gained from being allowed to reach sexual maturity will make him larger. However, recent studies have shown that this is not so: any apparent muscle mass gained solely from the presence of hormones will be lost over time after the horse is gelded, and in the meantime, the energy spent developing muscle mass may actually take away from the energy a young horse might otherwise put into skeletal growth; the net effect is that castration has no effect on rate of growth (although it may increase the amount of fat the horse carries).
Many older stallions, no longer used at stud due to age or sterility, can benefit from being gelded. Modern veterinary techniques make gelding an even somewhat elderly stallion a fairly low-risk procedure, and the horse then has the benefit of being able to be turned out safely with other horses and allowed to live a less restricted and isolated life than was allowed for a stallion.
Specialized maintenance of geldings
Owners of male horses, both geldings and stallions, need to occasionally check the horse's sheath, the pocket of skin that protects the penis of the horse when it is not in use for urination (or, in the case of stallions, breeding). Geldings tend to accumulate smegma and other debris at a higher rate than stallions, probably because geldings rarely fully extrude the penis, and thus dirt and smegma build up in the folds of skin.
Castration techniques
There are two major techniques commonly used in castrating a horse, one requiring only local anaesthesia and the other requiring general anaesthesia. Each technique has advantages and disadvantages.
Standing castration
Standing castration is a technique where a horse is sedated and local anaesthesia is administered, without throwing the horse to the ground or putting him completely "under". It has the benefit that general anaesthesia (GA) is not required. This method is advocated for simple procedures because the estimated mortality for GA in horses at a modern clinic is low, approximately one or two in 1000. Mortality in the field (where most horse castrations are performed) is probably higher, due to poorer facilities.
For standing castration, the colt or stallion is sedated, typically with detomidine with or without butorphanol, and often physically restrained. Local anaesthetic is injected into the parenchyma of both testes. An incision is made through the scrotum and the testes are removed, then the spermatic cord is crushed, most commonly with either ligatures or emasculators, or both. The emasculators are applied for two to three minutes, then removed, and a careful check is made for signs of haemorrhage. Assuming that bleeding is at a minimum, the other side is castrated in the same manner. Most veterinarians remove the testis held most "tightly" (or close to the body) by the cremaster muscle first, so as to minimize the risk of the horse withdrawing it to the point where it is inaccessible. The horse, now a gelding, is allowed to recover.
Standing castration can be performed in more complicated cases. Some authorities have described a technique for the removal of abdominally retained testes from cryptorchid animals, but most surgeons still advocate a recumbent technique, as described below. The primary drawback to standing castration is the risk that, even with sedation and restraint, the horse may object to the procedure and kick or otherwise injure the individual performing the operation.
Recumbent castration
Putting a horse under general anaesthesia for castration is preferred by some veterinarians because "surgical exposure is improved and it carries less (overall) risk for surgeon and patient". For simple castration of normal animals, the advantages to recumbent castration are that the horse is prone, better asepsis (sterile environment) can be maintained, and better haemostasis (control of bleeding) is possible. In addition, there is significantly less risk of the surgeon or assistants being kicked. In a more complex situation such as castration of cryptorchid animals, the inguinal canal is more easily accessed. There are several different techniques (such as "open", "closed", and "semi-closed") that may be employed, but the basic surgery is similar. However, general anaesthesia is not without risks, including post-anaesthetic myopathy (muscle damage) and neuropathy (nerve damage), respiratory dysfunction (V/Q mismatch), and cardiac depression. These complications occur with sufficient frequency that castration has a relatively high overall mortality rate. To minimize these concerns, the British Equine Veterinary Association guidelines recommend two veterinary surgeons should be present when an equine general anaesthesia is being performed.
Aftercare
With both castration techniques, the wound should be kept clean and allowed to drain freely to reduce the risk of hematoma formation or the development of an abscess. The use of tetanus antitoxin and analgesics (painkillers) is necessary, and antibiotics are also commonly administered. The horse is commonly walked in hand for some days to reduce the development of edema.
Possible complications
Minor complications following castration are relatively common, while serious complications are rare. According to one in-depth study, for standing castration the complication rate is 22%, while for recumbent castration it is 6% (although with a 1% mortality).
The more common complications are:
Post-operative swelling (edema) – minor and very common
Scrotal/incisional infection – local seroma/abscess formation is relatively common, when the skin seals over before the deeper pocket has time to seal. This requires reopening the skin incision, to establish adequate drainage. To prevent the wounds from closing too quickly the horse needs to be exercised at least once daily after the procedure. It is common to treat the horse with a nonsteroidal anti-inflammatory drug to reduce the swelling and sometimes it is necessary to give antibiotics.
Chronic infection leading to a scirrhous cord – the formation of a granuloma at the incision site that may not be obvious for months or even years
Evisceration, a condition where the abdominal organs "fall out" of the surgical incision, is uncommon, and while the survival rate is 85–100% if treated promptly, the mortality rate is high for those not dealt with immediately.
See also
Spaying and neutering
References
External links
Update on sheath cleaning, with how-to video link
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the main purpose of gelding a male horse?
A. To increase its breeding potential
B. To make it calmer and easier to control
C. To enhance its physical strength
D. To improve its reproductive health
Answer:
|
B. To make it calmer and easier to control
|
Relavent Documents:
Document 0:::
DeepSeek is a chatbot created by the Chinese artificial intelligence company DeepSeek.
Released on 10 January 2025, DeepSeek-R1 surpassed ChatGPT as the most downloaded freeware app on the iOS App Store in the United States by 27 January. DeepSeek's success against larger and more established rivals has been described as "upending AI" and initiating "a global AI space race". DeepSeek's compliance with Chinese government censorship policies and its data collection practices have also raised concerns over privacy and information control in the model, prompting regulatory scrutiny in multiple countries. However, it has also been praised for its open weights and infrastructure code, energy efficiency and contributions to the open-source community.
History
On 10 January 2025, DeepSeek released the chatbot, based on the DeepSeek-R1 model, for iOS and Android. By 27 January, DeepSeek-R1 surpassed ChatGPT as the most-downloaded freeware app on the iOS App Store in the United States, which resulted in an 18% drop in Nvidia’s share price. After a "large-scale" cyberattack on the same day disrupted the proper functioning of its servers, DeepSeek limited its new user registration to phone numbers from mainland China, email addresses, or Google account logins.
On 3 April 2025, in collaboration with researchers at Tsinghua University, DeepSeek published a paper unveiling a new model that combines the techniques of generative reward modeling (GRM) and self-principled critique tuning (SPCT). The resulting model is referred to as DeepSeek-GRM. The goal of using these techniques is to foster more effective inference-time scaling within their LLM and chatbot services. Notably, DeepSeek has said that these new models will be released and made open source.
On 30 April 2025, DeepSeek released its math-focused artificial intelligence model named "DeepSeek-Prover-V2-671B". This model is intended for formal theorem proving and mathematical reasoning.
Usage
DeepSeek can answer questions, solv
Document 1:::
Municipal territory (in Spanish: término municipal), ejido or municipal radius is the territory over which the administrative action of a local government (city council or municipality) extends.
Spain
A municipal territory (in Spanish: término municipal, T.M.), in Spain, is the territory, perfectly delimited, of a municipality; the territory to which the administrative action of a city council extends. Law 7/1985, of April 2, 1985, Regulating the Bases of the Local Regime, defines it in Article 12.1 as follows: "The municipal district is the territory in which the municipality exercises its competences." Each Spanish province is defined as the territorial grouping of its municipalities. Practically the entire national territory is divided into municipalities. There are currently 8131 municipalities in Spain.
The extension of a municipality, according to the National Statistics Institute, is the extension of its municipal area.
Within the municipal area there may be one or several singular population entities. One of these, where the town hall is located, is the capital of the municipality.
The singular entities can be grouped into collective population entities, which receive different names depending on the area: parishes, pedanías, elizates, etc.
Argentina
The municipal territory in Argentina is called ejido or municipal radius. In origin, dating back to the viceroyalty, ejidos were public lands at the edge of the urban layout. They were first administered by the cabildo, then by the provincial treasury and, from 1857, by the newly formed municipality. In the last decades of the 19th century, they were privatized.
The general guidelines for the division of the territory of the provinces and the establishment of the limits of the municipalities are established in the provincial constitutions and the definitive establishment of limits is delegated to the Legislative Power.
There are various systems for the territorial determination of the municip
Document 2:::
Ladinin-1 is a protein that in humans is encoded by the LAD1 gene.
The protein encoded by this gene may be an anchoring filament that is a component of basement membranes. It may contribute to the stability of the association of the epithelial layers with the underlying mesenchyme.
References
Further reading
Document 3:::
The Mīzāb al-Raḥma (, 'gutter of mercy'), also known as the Mīzāb al-Kaʿba ('gutter of the Kaʿba'), is a rain gutter projecting from the roof of the Kaʿba enabling rainwater to pour to the ground below.
Architecture
The roof of the Kaʿba is flat, but slopes gently down to the north-west corner. From this corner, the mīzāb juts out, conducting rainwater from the roof. The lip of the mīzāb has an appendage known as the "beard of the mīzāb". The ground below is paved with marble slabs and decorated with inlaid mosaic designs. The design of the mīzāb has changed over the years; the current form is golden. Its length is , which is included in the wall of the Kaaba, its cavity width is , the height of each side is , and its entry into the roof wall is .
A detailed description of the mīzāb around 1183–85 CE is offered by Ibn Jubayr:
The Mizab is on the top of the wall which overlooks the Hijr. It is of gilded copper and projects four cubits over the Hijr, its breadth being a span. This place under the waterspout is also considered as being a place where, by the favour of God Most High, prayers are answered. The Yemen corner is the same. The wall connecting this place with the Syrian corner is called al-Mustajar [The Place of Refuge]. Underneath the water-spout, and in the court of the Hijr near to the wall of the blessed House, is the tomb of Isma'il [Ishmael] - may God bless and preserve him. Its mark is a slab of green marble, almost oblong and in the form of a mihrab. Beside it is a round green slab of marble, and both [they are verde antico] are remarkable to look upon.
Role in worship
In his Kitāb Akhbār Makka, the ninth-century scholar al-Azraqī wrote with reference to the mīzāb that "anyone who performs the ṣalāt under the mat̲h̲ʿab becomes as pure as on the day when his mother bore him".
Ibn Jubayr offers a vivid account of worship at the mīzāb in 1183 CE:
One of the things that deserve to be confirmed and recorded for the blessings and favour of seeing and o
Document 4:::
An apron is a raised section of ornamental stonework below a window ledge, stone tablet, or monument.
Aprons were used by Roman engineers in building Roman bridges. The main function of the apron was to surround the feet of the piers.
Notes
References
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the primary factor that affects propulsive efficiency in aerospace vehicles according to the text?
A. Exhaust expulsion velocity
B. Weight of the vehicle
C. Altitude of operation
D. Fuel type
Answer:
|
A. Exhaust expulsion velocity
|
Relavent Documents:
Document 0:::
The history of numerical control (NC) began when the automation of machine tools first incorporated concepts of abstractly programmable logic, and it continues today with the ongoing evolution of computer numerical control (CNC) technology.
The first NC machines were built in the 1940s and 1950s, based on existing tools that were modified with motors that moved the controls to follow points fed into the system on punched tape. These early servomechanisms were rapidly augmented with analog and digital computers, creating the modern CNC machine tools that have revolutionized the machining processes.
Earlier forms of automation
Cams
The automation of machine tool control began in the 19th century with cams that "played" a machine tool in the way that cams had long been playing musical boxes or operating elaborate cuckoo clocks. Thomas Blanchard built his gun-copying lathes (1820s–30s), and the work of people such as Christopher Miner Spencer developed the turret lathe into the screw machine (1870s). Cam-based automation had already reached a highly advanced state by World War I (1910s).
However, automation via cams is fundamentally different from numerical control because it cannot be abstractly programmed. Cams can encode information, but getting the information from the abstract level (engineering drawing, CAD model, or other design intent) into the cam is a manual process that requires machining or filing. In contrast, numerical control allows information to be transferred from design intent to machine control using abstractions such as numbers and programming languages.
Various forms of abstractly programmable control had existed during the 19th century: those of the Jacquard loom, player pianos, and mechanical computers pioneered by Charles Babbage and others. These developments had the potential for convergence with the automation of machine tool control starting in that century, but the convergence did not happen until many decades later.
Tracer control
The application of hydraulics to cam-based automation resulted in tracing machines that used a stylus to trace a template, such as the enormous Pratt & Whitney "Keller Machine", which could copy templates several feet across. Another approach was "record and playback", pioneered at General Motors (GM) in the 1950s, which used a storage system to record the movements of a human machinist, and then play them back on demand. Analogous systems are common even today, notably the "teaching lathe" which gives new machinists a hands-on feel for the process. None of these were numerically programmable, however, and required an experienced machinist at some point in the process, because the "programming" was physical rather than numerical.
Servos and synchros
One barrier to complete automation was the required tolerances of the machining process, which are routinely on the order of thousandths of an inch. Although connecting some sort of control to a storage device like punched cards was easy, ensuring that the controls were moved to the correct position with the required accuracy was another issue. The movement of the tool resulted in varying forces on the controls that would mean a linear input would not result in linear tool motion. In other words, a control such as that of the Jacquard loom could not work on machine tools because its movements were not strong enough; the metal being cut "fought back" against it with more force than the control could properly counteract.
The key development in this area was the introduction of the servomechanism, which produced powerful, controlled movement, with highly accurate measurement information. Attaching two servos together produced a synchro, where a remote servo's motions were accurately matched by another. Using a variety of mechanical or electrical systems, the output of the synchros could be read to ensure proper movement had occurred (in other words, forming a closed-loop control system).
The first serious suggestion that synchros could be used for machining control was made by Ernst F. W. Alexanderson, a Swedish immigrant to the U.S. working at General Electric (GE). Alexanderson had worked on the problem of torque amplification that allowed the small output of a mechanical computer to drive very large motors, which GE used as part of a larger gun laying system for US Navy ships. Like machining, gun laying requires very high accuracy – fractions of a degree – and the forces during the motion of the gun turrets was non-linear, especially as the ships pitched in waves.
In November 1931 Alexanderson suggested to the Industrial Engineering Department that the same systems could be used to drive the inputs of machine tools, allowing it to follow the outline of a template without the strong physical contact needed by existing tools like the Keller Machine. He stated that it was a "matter of straight engineering development". However, the concept was ahead of its time from a business development perspective, and GE did not take the matter seriously until years later, when others had pioneered the field.
Parsons Corp. and Sikorsky
The birth of NC is generally credited to John T. Parsons and Frank L. Stulen, working out of Parsons Corp. of Traverse City, Michigan. For this contribution, they were jointly awarded the National Medal of Technology in 1985 for "Revolutioniz[ing] Production Of Cars And Airplanes With Numerical Controls For Machines".
In 1942, Parsons was told that helicopters were going to be the "next big thing" by the former head of Ford Trimotor production, Bill Stout. He called Sikorsky Aircraft to inquire about possible work, and soon got a contract to build the wooden stringers in the rotor blades. At the time, rotor blades (rotary wings) were built in the same fashion that fixed wings were, consisting of a long tubular steel spar with stringers (or more accurately ribs) set on them to provide the aerodynamic shape that was then covered with a stressed skin. The stringers for the rotors were built from a design provided by Sikorsky, which was sent to Parsons as a series of 17 points defining the outline. Parsons then had to "fill in" the dots with a French curve to generate an outline. A wooden jig was built up to form the outside of the outline, and the pieces of wood forming the stringer were placed under pressure against the inside of the jig so they formed the proper curve. A series of trusswork members were then assembled inside this outline to provide strength.
After setting up production at a disused furniture factory and ramping up production, one of the blades failed and it was traced to a problem in the spar. At least some of the problem appeared to stem from spot welding a metal collar on the stringer to the metal spar. The collar was built into the stringer during construction, then slid onto the spar and welded in the proper position. Parsons suggested a new method of attaching the stringers directly to the spar using adhesives, never before tried on an aircraft design.
That development led Parsons to consider the possibility of using stamped metal stringers instead of wood. These would not only be much stronger, but far easier to make as well, as they would eliminate the complex layup and glue and screw fastening on the wood. Duplicating this in a metal punch would require the wooden jig to be replaced by a metal cutting tool made of tool steel. Such a device would not be easy to produce given the complex outline. Looking for ideas, Parsons visited Wright Field to see Frank L. Stulen, the head of the Propeller Lab Rotary Wing Branch. During their conversation, Stulen concluded that Parsons didn't really know what he was talking about. Parsons realized Stulen had reached this conclusion, and hired him on the spot. Stulen started work on 1 April 1946 and hired three new engineers to join him.
Stulen's brother worked at Curtis Wright Propeller, and mentioned that they were using punched card calculators for engineering calculations. Stulen decided to adopt the idea to run stress calculations on the rotors, the first detailed automated calculations on helicopter rotors. When Parsons saw what Stulen was doing with the punched card machines, he asked Stulen if they could be used to generate an outline with 200 points instead of the 17 they were given, and offset each point by the radius of a mill cutting tool. If you cut at each of those points, it would produce a relatively accurate cutout of the stringer. This could cut the tool steel and then easily be filed down to a smooth template for stamping metal stringers.
Stulen had no problem making such a program, and used it to produce large tables of numbers that would be taken onto the machine floor. Here, one operator read the numbers off the charts to two other operators, one on each of the X- and Y-axes. For each pair of numbers the operators would move the cutting head to the indicated spot and then lower the tool to make the cut. This was called the "by-the-numbers method", or more technically, "plunge-cutting positioning". It was a labor-intensive prototype of today's 2.5-axis machining (two-and-a-half-axis machining).
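The offsetting step described above, shifting each outline point by the radius of the mill cutter, can be sketched in a few lines of code. The example below is a modern, hypothetical reconstruction using a simple outward-normal offset, not Parsons and Stulen's actual punched-card procedure; the function name, point count, and tool radius are illustrative.

```python
import numpy as np

def offset_outline(points, tool_radius):
    """Offset a closed 2-D outline outward by the cutter radius.

    points: (N, 2) array of outline coordinates, ordered counter-clockwise.
    Each point is moved along the local outward normal, so a mill of the
    given radius cutting at the offset points reproduces the outline.
    """
    pts = np.asarray(points, dtype=float)
    # Tangent estimated from neighbouring points (central difference, wrapped).
    tangents = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    # Outward normal for a counter-clockwise outline: rotate the tangent by -90 degrees.
    normals = np.column_stack([tangents[:, 1], -tangents[:, 0]])
    return pts + tool_radius * normals

# Example: densify a design into 200 points and offset them by the tool radius.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
outline = np.column_stack([2 * np.cos(theta), np.sin(theta)])  # stand-in shape
cutter_path = offset_outline(outline, tool_radius=0.25)
```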
Punch cards and first tries at NC
At that point Parsons conceived of a fully automated machine tool. With enough points on the outline, no manual working would be needed to clean it up. However, with manual operation the time saved by having the part more closely match the outline was offset by the time needed to move the controls. If the machine's inputs were attached directly to the card reader, this delay, and any associated manual errors, would be removed and the number of points could be dramatically increased. Such a machine could repeatedly punch out perfectly accurate templates on command. But at the time Parsons had no funds to develop his ideas.
When one of Parsons's salesmen was on a visit to Wright Field, he was told of the problems the newly formed U.S. Air Force was having with new jet-powered designs. He asked if Parsons had anything to help them. Parsons showed Lockheed their idea of an automated mill, but they were uninterested. They decided to use 5-axis template copiers to produce the stringers, cutting from a metal template, and had already ordered the expensive cutting machine. But as Parsons noted:
Now just picture the situation for a minute. Lockheed had contracted to design a machine to make these wings. This machine had five axes of cutter movement, and each of these was tracer controlled using a template. Nobody was using my method of making templates, so just imagine what chance they were going to have of making an accurate airfoil shape with inaccurate templates.
Parsons's worries soon came true, and Lockheed's protests that they could fix the problem eventually rang hollow. In 1949 the Air Force arranged funding for Parsons to build his machines on his own. Early work with Snyder Machine & Tool Corp proved that the system of directly driving the controls from motors failed to give the accuracy needed to set the machine for a perfectly smooth cut. Since the mechanical controls did not respond in a linear fashion, one could not simply drive them with a given amount of power, because the differing forces meant the same amount of power would not always produce the same amount of motion in the controls. No matter how many points were included, the outline would still be rough. Parsons was confronted by the same problem that had prevented convergence of Jacquard-type controls with machining.
First commercial numerically controlled machine
In 1952, Arma Corporation, which had done much defense work on rangefinders during the war, announced the first commercial numerically controlled lathe, developed by Dr. F. W. Cunningham. Arma's first automated lathe was made in 1948 and announced in 1950.
Parsons Corp. and MIT
This was not an impossible problem to solve, but would require some sort of feedback system, like a selsyn, to directly measure how far the controls had actually turned. Faced with the daunting task of building such a system, in the spring of 1949 Parsons turned to Gordon S. Brown's Servomechanisms Laboratory at MIT, which was a world leader in mechanical computing and feedback systems. During the war the Lab had built a number of complex motor-driven devices like the motorized gun turret systems for the Boeing B-29 Superfortress and the automatic tracking system for the SCR-584 radar. They were naturally suited to technological transfer into a prototype of Parsons's automated "by-the-numbers" machine.
The MIT team was led by William Pease assisted by James McDonough. They quickly concluded that Parsons's design could be greatly improved; if the machine did not simply cut at points A and B, but instead moved smoothly between the points, then not only would it make a perfectly smooth cut, but could do so with many fewer points – the mill could cut lines directly instead of having to define a large number of cutting points to "simulate" a line. A three-way agreement was arranged between Parsons, MIT, and the Air Force, and the project officially ran from July 1949 to June 1950. The contract called for the construction of two "Card-a-matic Milling Machines", a prototype and a production system. Both to be handed to Parsons for attachment to one of their mills in order to develop a deliverable system for cutting stringers.
Instead, in 1950 MIT bought a surplus Cincinnati Milling Machine Company "Hydro-Tel" mill of their own and arranged a new contract directly with the Air Force that froze Parsons out of further development. Parsons would later comment that he "never dreamed that anybody as reputable as MIT would deliberately go ahead and take over my project." In spite of the development being handed to MIT, Parsons filed for a patent on "Motor Controlled Apparatus for Positioning Machine Tool" on 5 May 1952, sparking a filing by MIT for a "Numerical Control Servo-System" on 14 August 1952. Parsons received US Patent 2,820,187 on 14 January 1958, and the company sold an exclusive license to Bendix. IBM, Fujitsu and General Electric all took sub-licenses after having already started development of their own devices.
MIT's machine
MIT fitted gears to the various handwheel inputs and drove them with roller chains connected to motors, one for each of the machine's three axes (X, Y, and Z). The associated controller consisted of five refrigerator-sized cabinets that, together, were almost as large as the mill they were connected to. Three of the cabinets contained the motor controllers, one controller for each motor, the other two the digital reading system.
Unlike Parsons's original punched card design, the MIT design used standard 7-track punch tape for input. Three of the tracks were used to control the different axes of the machine, while the other four encoded various control information. The tape was read in a cabinet that also housed six relay-based hardware registers, two for each axis. With every read operation the previously read point was copied into the "starting point" register, and the newly read one into the "ending point" register. The tape was read continually and the number in the registers incremented with each hole encountered in their control track until a "stop" instruction was encountered, four holes in a line.
The final cabinet held a clock that sent pulses through the registers, compared them, and generated output pulses that interpolated between the points. For instance, if the points were far apart the output would have pulses with every clock cycle, whereas closely spaced points would only generate pulses after multiple clock cycles. The pulses were sent into a summing register in the motor controllers, counting up by the number of pulses every time they were received. The summing registers were connected to a digital-to-analog converter that increased power to the motors as the count in the registers increased, making the controls move faster.
The registers were decremented by encoders attached to the motors and the mill itself, which would reduce the count by one for every one degree of rotation. Once the second point was reached the counter would hold a zero, the pulses from the clock would stop, and the motors would stop turning. Each 1 degree rotation of the controls produced a 0.0005 inch movement of the cutting head. The programmer could control the speed of the cut by selecting points that were closer together for slow movements, or further apart for rapid ones.
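A short software analogue can illustrate the pulse-interpolation scheme described above. The sketch below is not the MIT hardware logic, only a hypothetical digital-differential-analyzer-style simulation; it assumes each axis needs at most one step pulse per clock tick, and the 2000 steps-per-unit figure simply mirrors the 0.0005-inch movement per pulse mentioned in the text.

```python
def interpolate_segment(start, end, steps_per_unit=2000, clock_ticks=10000):
    """Rough software analogue of pulse interpolation between two points.

    Each axis receives a share of the clock pulses proportional to the distance
    it must travel, so all axes arrive at the end point together. Assumes no
    axis needs more than one pulse per tick (|delta| <= clock_ticks).
    """
    deltas = [round((e - s) * steps_per_unit) for s, e in zip(start, end)]
    steps = [0] * len(deltas)
    accumulators = [0] * len(deltas)
    for _ in range(clock_ticks):
        for axis, delta in enumerate(deltas):
            accumulators[axis] += abs(delta)
            if accumulators[axis] >= clock_ticks:      # "summing register" overflows
                accumulators[axis] -= clock_ticks
                steps[axis] += 1 if delta > 0 else -1  # emit one step pulse
    return [s0 + st / steps_per_unit for s0, st in zip(start, steps)]

# X must travel four times as far as Y, so it receives pulses four times as often.
print(interpolate_segment((0.0, 0.0), (1.0, 0.25)))    # -> [1.0, 0.25]
```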
The system was publicly demonstrated in September 1952, appearing in that month's Scientific American. MIT's system was an outstanding success by any technical measure, quickly making any complex cut with extremely high accuracy that could not easily be duplicated by hand. However, the system was terribly complex, including 250 vacuum tubes, 175 relays and numerous moving parts, reducing its reliability in a production environment. It was also expensive; the total bill presented to the Air Force was $360,000.14 ($2,641,727.63 in 2005 dollars). Between 1952 and 1956 the system was used to mill a number of one-off designs for various aviation firms, in order to study their potential economic impact.
Proliferation of NC
The Air Force Numeric Control and Milling Machine projects formally concluded in 1953, but development continued at the Giddings and Lewis Machine Tool Co. and other locations. In 1955 many of the MIT team left to form Concord Controls, a commercial NC company with Giddings' backing, producing the Numericord controller. Numericord was similar to the MIT design, but replaced the punch tape with a magnetic tape reader that General Electric was working on. The tape contained a number of signals of different phases, which directly encoded the angle of the various controls. The tape was played at a constant speed in the controller, which set its half of the selsyn to the encoded angles while the remote side was attached to the machine controls. Designs were still encoded on paper tape, but the tapes were transferred to a reader/writer that converted them into magnetic form. The magtapes could then be used on any of the machines on the floor, where the controllers were greatly reduced in complexity. Developed to produce highly accurate dies for an aircraft skinning press, the Numericord "NC5" went into operation at G&L's plant at Fond du Lac, Wisconsin in 1955.
Monarch Machine Tool also developed a numerically controlled lathe, starting in 1952. They demonstrated their machine at the 1955 Chicago Machine Tool Show (predecessor of today's IMTS), along with a number of other vendors whose punched card or paper tape machines were either fully developed or in prototype form. These included Kearney and Trecker's Milwaukee-Matic II, which could change its cutting tool under numerical control, a common feature on modern machines.
A Boeing report noted that "numerical control has proved it can reduce costs, reduce lead times, improve quality, reduce tooling and increase productivity." In spite of these developments, and glowing reviews from the few users, uptake of NC was relatively slow. As Parsons later noted:
The NC concept was so strange to manufacturers, and so slow to catch on, that the US Army itself finally had to build 120 NC machines and lease them to various manufacturers to begin popularizing its use.
In 1958 MIT published its report on the economics of NC. They concluded that the tools were competitive with human operators, but simply moved the time from the machining to the creation of the tapes. In Forces of Production, Noble claims that this was the whole point as far as the Air Force was concerned; moving the process off of the highly unionized factory floor and into the non-unionized white collar design office. The cultural context of the early 1950s, a second Red Scare with a widespread fear of a bomber gap and of domestic subversion, sheds light on this interpretation. It was strongly feared that the West would lose the defense production race to the Communists, and that syndicalist power was a path toward losing, either by "getting too soft" (less output, greater unit expense) or even by Communist sympathy and subversion within unions (arising from their common theme of empowering the working class).
Aside from whatever economic inefficiencies the first attempts at NC displayed, the time and effort required to create the tapes also introduced possibilities for production errors. This was a motivation for ongoing Air Force contracts in 1958, such as the Automatically Programmed Tool project, and for Douglas (Doug) T. Ross's 1960 report, and later project, Computer-Aided Design: A Statement of Objectives.
CNC arrives
Many of the commands for the experimental parts were programmed "by hand" to produce the punch tapes that were used as input. During the development of Whirlwind, MIT's real-time computer, John Runyon coded a number of subroutines to produce these tapes under computer control. Users could enter a list of points and speeds, and the program would calculate the points needed and automatically generate the punch tape. In one instance, this process reduced the time required to produce the instruction list and mill the part from 8 hours to 15 minutes. This led to a proposal to the Air Force to produce a generalized "programming" language for numerical control, which was accepted in June 1956. Doug Ross was given leadership of the project and was made head of another newly created MIT research department. He chose to name the unit the Computer Applications Group feeling the word "application" fit with the vision that general purpose machines could be "programmed" to fill many roles.
Starting in September, Ross and Pople outlined a language for machine control that was based on points and lines, developing this over several years into the APT programming language. In 1957 the Aircraft Industries Association (AIA) and Air Materiel Command at Wright-Patterson Air Force Base joined with MIT to standardize this work and produce a fully computer-controlled NC system. On 25 February 1959 the combined team held a press conference showing the results, including a 3D machined aluminum ash tray that was handed out in the press kit. In 1959 they also described the use of APT on a 60-foot mill at Boeing since 1957.
Meanwhile, Patrick Hanratty was making similar developments at GE as part of their partnership with G&L on the Numericord. His language, PRONTO, beat APT into commercial use when it was released in 1958. Hanratty then went on to develop MICR magnetic ink characters that were used in cheque processing, before moving to General Motors to work on the groundbreaking DAC-1 CAD system.
APT was soon extended to include "real" curves in 2D-APT-II. With its release into the Public Domain, MIT reduced its focus on NC as it moved into CAD experiments. APT development was picked up with the AIA in San Diego, and in 1962, by Illinois Institute of Technology Research. Work on making APT an international standard started in 1963 under USASI X3.4.7, but any manufacturers of NC machines were free to add their own one-off additions (like PRONTO), so standardization was not completed until 1968, when there were 25 optional add-ins to the basic system.
Just as APT was being released in the early 1960s, a second generation of lower-cost transistorized computers was hitting the market that were able to process much larger volumes of information in production settings. This reduced the cost of programming for NC machines and by the mid-1960s, APT runs accounted for a third of all computer time at large aviation firms.
CADCAM meets CNC
While the Servomechanisms Lab was in the process of developing their first mill, in 1953, MIT's Mechanical Engineering Department dropped the requirement that undergraduates take courses in drawing. The instructors formerly teaching these programs were merged into the Design Division, where an informal discussion of computerized design started. Meanwhile, the Electronic Systems Laboratory, the newly rechristened Servomechanisms Laboratory, had been discussing whether or not design would ever start with paper diagrams in the future.
In January 1959, an informal meeting was held involving individuals from both the Electronic Systems Laboratory and the Mechanical Engineering Department's Design Division. Formal meetings followed in April and May, which resulted in the "Computer-Aided Design Project". In December 1959, the Air Force issued a one-year contract to ESL for $223,000 to fund the project, including $20,800 earmarked for 104 hours of computer time at $200 per hour. This proved to be far too little for the ambitious program they had in mind, although in 1959 it was a lot of money: newly graduated engineers were making perhaps $500 to $600 per month at the time. To augment the Air Force's commitment, Ross replayed the success of the APT development model. The AED Cooperative Program, which ultimately ran for a five-year period, brought in outside corporate staff, deeply experienced design manpower on loan from companies, some relocating to MIT for half a year to 14 or 18 months at a time. Ross later estimated this value at almost six million dollars in support of AED development work, systems research, and compilers. AED was a machine-independent software engineering effort and an extension of ALGOL 60, the standard for the publication of algorithms by research computer scientists. Development started out in parallel on the IBM 709 and the TX-0, which later enabled projects to run at various sites. The engineering calculation and systems development system, AED, was released to the public domain in March 1965.
In 1959, General Motors started an experimental project to digitize, store and print the many design sketches being generated in the various GM design departments. When the basic concept demonstrated that it could work, they started the DAC-1 – Design Augmented by Computer – project with IBM to develop a production version. One part of the DAC project was the direct conversion of paper diagrams into 3D models, which were then converted into APT commands and cut on milling machines. In November 1963 a design for the lid of a trunk moved from 2D paper sketch to 3D clay prototype for the first time. With the exception of the initial sketch, the design-to-production loop had been closed.
Meanwhile, MIT's offsite Lincoln Labs was building computers to test new transistorized designs. The ultimate goal was essentially a transistorized Whirlwind known as TX-2, but in order to test various circuit designs a smaller version known as TX-0 was built first. When construction of TX-2 started, time in TX-0 freed up and this led to a number of experiments involving interactive input and use of the machine's CRT display for graphics. Further development of these concepts led to Ivan Sutherland's groundbreaking Sketchpad program on the TX-2.
Sutherland moved to the University of Utah after his Sketchpad work, but it inspired other MIT graduates to attempt the first true CAD system. The result was the Electronic Drafting Machine (EDM), sold to Control Data and known as "Digigraphics", which Lockheed used to build production parts for the C-5 Galaxy, the first example of an end-to-end CAD/CNC production system.
By 1970 there were a wide variety of CAD firms including Intergraph, Applicon, Computervision, Auto-trol Technology, UGS Corp. and others, as well as large vendors like CDC and IBM.
Proliferation of CNC
The price of computer cycles fell drastically during the 1960s with the widespread introduction of useful minicomputers. Eventually it became less expensive to handle the motor control and feedback with a computer program than it was with dedicated servo systems. Small computers were dedicated to a single mill, placing the entire process in a small box. PDP-8's and Data General Nova computers were common in these roles. The introduction of the microprocessor in the 1970s further reduced the cost of implementation, and today almost all CNC machines use some form of microprocessor to handle all operations.
The introduction of lower-cost CNC machines radically changed the manufacturing industry. Curves are as easy to cut as straight lines, complex 3-D structures are relatively easy to produce, and the number of machining steps that require human action has been dramatically reduced. With the increased automation that CNC machining brings to manufacturing processes, considerable improvements in consistency and quality have been achieved without straining the operator. CNC automation reduced the frequency of errors and gave CNC operators time to perform additional tasks. CNC automation also allows for more flexibility in the way parts are held in the manufacturing process and reduces the time required to change the machine over to produce different components. Additionally, as CNC operators become more in demand, automation becomes a more viable choice than labor.
During the early 1970s the Western economies were mired in slow economic growth and rising employment costs, and NC machines started to become more attractive. The major U.S. vendors were slow to respond to the demand for machines suitable for lower-cost NC systems, and into this void stepped the Germans. In 1979, sales of German machines (e.g. Siemens Sinumerik) surpassed the U.S. designs for the first time. This cycle quickly repeated itself, and by 1980 Japan had taken a leadership position, with U.S. sales dropping all the time. Once sitting in the #1 position in terms of sales on a top-ten chart consisting entirely of U.S. companies in 1971, by 1987 Cincinnati Milacron was in 8th place on a chart heavily dominated by Japanese firms.
Many researchers have commented that the U.S. focus on high-end applications left it in an uncompetitive position when the economic downturn in the early 1970s led to greatly increased demand for low-cost NC systems. Unlike the U.S. companies, which had focused on the highly profitable aerospace market, German and Japanese manufacturers targeted lower-profit segments from the start and were able to enter the low-cost markets much more easily. Additionally, large Japanese companies established their own subsidiaries or strengthened their machine divisions to produce the machines they needed. This was seen as a national effort and was largely encouraged by MITI, the Japanese Ministry of International Trade and Industry. In the early years of the development, MITI provided focused resources for the transfer of technological know-how. National efforts in the US remained focused on integrated manufacturing, reflecting the historical perspective the defense sector maintained. This evolved in the later 1980s, as the so-called machine tool crisis was recognized, into a number of programs that sought to broaden the transfer of know-how to domestic tool makers; the Air Force-sponsored Next Generation Controller Program of 1989 is one example. This process continued through the 1990s to the present day, from DARPA incubators to myriad research grants.
As computing and networking evolved, so did direct numerical control (DNC). Its long-term coexistence with less networked variants of NC and CNC is explained by the fact that individual firms tend to stick with whatever is profitable, and their time and money for trying out alternatives is limited. This explains why machine tool models and tape storage media persist in grandfathered fashion even as the state of the art advances.
DIY, hobby, and personal CNC
Recent developments in small scale CNC have been enabled, in large part, by the Enhanced Machine Controller project in 1989 from the National Institute of Standards and Technology (NIST), an agency of the US Government's Department of Commerce. EMC [LinuxCNC] is a public domain program operating under the Linux operating system and working on PC based hardware. After the NIST project ended, development continued, leading to LinuxCNC which is licensed under the GNU General Public License and GNU Lesser General Public License (GPL and LGPL). Derivations of the original EMC software have also led to several proprietary low cost PC based programs notably TurboCNC, and Mach3, as well as embedded systems based on proprietary hardware. The availability of these PC based control programs has led to the development of DIY CNC, allowing hobbyists to build their own using open source hardware designs. The same basic architecture has allowed manufacturers, such as Sherline and Taig, to produce turnkey lightweight desktop milling machines for hobbyists.
The easy availability of PC based software for Mach3, written by Art Fenerty, along with its support information, lets anyone with some time and technical expertise make complex parts for home and prototype use. Fenerty is considered a principal founder of Windows-based PC CNC machining.
Eventually, the homebrew architecture was fully commercialized and used to create larger machinery suitable for commercial and industrial applications. This class of equipment has been referred to as Personal CNC. Parallel to the evolution of personal computers, Personal CNC has its roots in EMC and PC based control, but has evolved to the point where it can replace larger conventional equipment in many instances. As with the Personal Computer, Personal CNC is characterized by equipment whose size, capabilities, and original sales price make it useful for individuals, and which is intended to be operated directly by an end user, often without professional training in CNC technology.
Today
Tape readers may still be found on current CNC facilities, since machine tools have a long operating life. Other methods of transferring CNC programs to machine tools, such as diskettes or direct connection of a portable computer, are also used.
Punched mylar tapes are more robust than paper tapes. Floppy disks, USB flash drives and local area networking have replaced the tapes to some degree, especially in larger environments that are highly integrated.
The proliferation of CNC led to the need for new CNC standards that were not encumbered by licensing or particular design concepts, like proprietary extensions to APT. A number of different "standards" proliferated for a time, often based around vector graphics markup languages supported by plotters. One such standard has since become very common, the "G-code" that was originally used on Gerber Scientific plotters and then adapted for CNC use. The file format became so widely used that it has been embodied in an EIA standard. In turn, while G-code is the predominant language used by CNC machines today, there is a push to supplant it with STEP-NC, a system that was deliberately designed for CNC, rather than grown from an existing plotter standard.
While G-code is the most common method of programming, some machine-tool/control manufacturers also have invented their own proprietary "conversational" methods of programming, trying to make it easier to program simple parts and make set-up and modifications at the machine easier (such as Mazak's Mazatrol, Okuma's IGF, and Hurco). These have met with varying success.
A more recent advancement in CNC interpreters is support of logical commands, known as parametric programming (also known as macro programming). Parametric programs include both device commands as well as a control language similar to BASIC. The programmer can make if/then/else statements, loops, subprogram calls, perform various arithmetic, and manipulate variables to create a large degree of freedom within one program. An entire product line of different sizes can be programmed using logic and simple math to create and scale an entire range of parts, or create a stock part that can be scaled to any size a customer demands.
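Concrete macro syntax differs between controllers (Fanuc Macro B, Mazatrol, and so on), so the sketch below shows the same idea host-side: a small, hypothetical Python generator that uses variables, arithmetic, and a loop to emit a scaled facing routine as plain G-code. The function name, dimensions, and feed values are illustrative, not taken from any particular controller.

```python
def facing_program(width, length, depth, step_over, feed=800.0):
    """Emit a serpentine facing program scaled to the requested part size (mm)."""
    lines = ["G21 G90",                                  # metric units, absolute positioning
             "G0 Z5.0",                                  # rapid to safe height
             "G0 X0 Y0",                                 # rapid to the start corner
             f"G1 Z{-depth:.3f} F{feed / 2:.0f}"]        # plunge at reduced feed
    y, direction = 0.0, 1
    while y <= width:
        x_target = length if direction > 0 else 0.0
        lines.append(f"G1 X{x_target:.3f} F{feed:.0f}")  # cutting pass across the part
        y += step_over
        if y <= width:
            lines.append(f"G1 Y{y:.3f}")                 # step over to the next pass
        direction = -direction
    lines += ["G0 Z5.0", "M30"]                          # retract and end of program
    return "\n".join(lines)

# One parametric routine covers a whole family of part sizes, as the text describes.
print(facing_program(width=50.0, length=120.0, depth=1.0, step_over=8.0))
```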
Since about 2006, the idea has been suggested and pursued of bringing to CNC and DNC several trends from elsewhere in the world of information technology that have not yet much affected them. One of these trends is the combination of greater data collection (more sensors), greater and more automated data exchange (via building new, open industry-standard XML schemas), and data mining to yield a new level of business intelligence and workflow automation in manufacturing. Another is the emergence of widely published APIs together with the aforementioned open data standards to encourage an ecosystem of user-generated apps and mashups, which can be both open and commercial – in other words, taking the new IT culture of app marketplaces that began in web development and smartphone app development and spreading it to CNC, DNC, and the other factory automation systems that are networked with the CNC/DNC. MTConnect is a leading effort to bring these ideas into successful implementation.
Cited sources
Further reading
Herrin, Golden E. "Industry Honors The Inventor Of NC", Modern Machine Shop, 12 January 1998.
Siegel, Arnold. "Automatic Programming of Numerically Controlled Machine Tools", Control Engineering, Volume 3 Issue 10 (October 1956), pp. 65–70.
Vasilash, Gary. "Man of Our Age", Automotive Design & Production.
Document 1:::
Fostering, in falconry and in the reintroduction of birds, is a method of breeding birds for release into the wild that consists of placing chicks in the nest of a breeding pair that already has chicks of similar age and size. It can also be used when the chicks have already left the nest but are still being fed by their parents.
This method can be used only in species that do not show siblicidal behavior and whose adults will accept the added chicks without rejecting them. In addition, the foster parents must first be assessed to determine whether they are capable of feeding additional chicks.
References
Document 2:::
QRIO ("Quest for cuRIOsity", originally named Sony Dream Robot or SDR) was a bipedal humanoid entertainment robot developed and marketed (but never sold) by Sony to follow up on the success of its AIBO entertainment robot. QRIO stood approximately 0.6 m (2 feet) tall and weighed 7.3 kg (16 pounds). QRIO's slogan was "Makes life fun, makes you happy!"
On January 26, 2006, on the same day as it announced its discontinuation of AIBO and other products, Sony announced that it would stop development of QRIO.
Development
The QRIO prototypes were developed and manufactured by Sony Intelligence Dynamics Laboratory, Inc. The number of these prototypes in existence is unknown. Up to ten QRIO have been seen performing a dance routine together; this was confirmed by a Sony representative at the Museum of Science in Boston, MA on January 22, 2006. Numerous videos of this can be found on the web.
Four fourth-generation QRIO prototype robots were featured dancing in the Hell Yes music video by recording artist Beck. These prototypes lacked a third camera in the center of the forehead and the improved hands and wrists which were added to later prototypes. It took programmers three weeks to program their choreography.
QRIO is capable of voice and face recognition, making it able to remember people as well as their likes and dislikes. A video on QRIO's website shows it speaking with several children. QRIO can run at 23 cm/s, and is credited in Guinness World Records (2005 edition) as being the first bipedal robot capable of running (which it defines as moving while both legs are off the ground at the same time). The 4th generation QRIO's internal battery lasts about 1 hour.
In popular culture
In 2005, four QRIO robots appeared in the music video for "Hell Yes" by Beck. The robots dance along to the music.
In the 2009 series finale of the reimagined television series Battlestar Galactica, the virtual Number Six and virtual Baltar appear in a coda set on modern-day Earth. They co
Document 3:::
In biochemistry, avidity refers to the accumulated strength of multiple affinities of individual non-covalent binding interactions, such as between a protein receptor and its ligand, and is commonly referred to as functional affinity. Avidity differs from affinity, which describes the strength of a single interaction. However, because individual binding events increase the likelihood of occurrence of other interactions (i.e., increase the local concentration of each binding partner in proximity to the binding site), avidity should not be thought of as the mere sum of its constituent affinities but as the combined effect of all affinities participating in the biomolecular interaction. A particularly important aspect relates to the phenomenon of 'avidity entropy'. Biomolecules often form heterogeneous complexes or homogeneous oligomers and multimers or polymers. If clustered proteins form an organized matrix, such as the clathrin coat, the interaction is described as matricity.
Antibody-antigen interaction
Avidity is commonly applied to antibody interactions in which multiple antigen-binding sites simultaneously interact with the target antigenic epitopes, often in multimerized structures. Individually, each binding interaction may be readily broken; however, when many binding interactions are present at the same time, transient unbinding of a single site does not allow the molecule to diffuse away, and binding of that weak interaction is likely to be restored.
Each antibody has at least two antigen-binding sites, therefore antibodies are bivalent to multivalent. Avidity (functional affinity) is the accumulated strength of multiple affinities. For example, IgM is said to have low affinity but high avidity because it has 10 weak binding sites for antigen as opposed to the 2 stronger binding sites of IgG, IgE and IgD with higher single binding affinities.
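A toy calculation conveys why many individually weak sites can hold on tightly. It deliberately oversimplifies by treating the sites as independent (which, as noted above, real avidity is not), but it shows how quickly the chance that every site is unbound at the same moment falls as the number of sites grows.

def prob_fully_detached(p_single_unbound, n_sites):
    # Probability that all sites are unbound simultaneously, assuming
    # (unrealistically) that the sites behave independently.
    return p_single_unbound ** n_sites

# A weak site that is unbound 30% of the time at any instant:
for n in (1, 2, 10):   # monovalent, IgG-like (2 sites), IgM-like (10 sites)
    print(n, "sites ->", prob_fully_detached(0.3, n))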
Affinity
Binding affinity is a measure of dynamic equilibrium of the ratio of on-rate (kon) and off-rate (koff) u
Document 4:::
The Irish logarithm was a system of number manipulation invented by Percy Ludgate for machine multiplication. The system used a combination of mechanical cams as lookup tables and mechanical addition to sum pseudo-logarithmic indices to produce partial products, which were then added to produce results.
The technique is similar to Zech logarithms (also known as Jacobi logarithms), but uses a system of indices original to Ludgate.
Concept
Ludgate's algorithm compresses the multiplication of two single decimal numbers into two table lookups (to convert the digits into indices), the addition of the two indices to create a new index which is input to a second lookup table that generates the output product. Because both lookup tables are one-dimensional, and the addition of linear movements is simple to implement mechanically, this allows a less complex mechanism than would be needed to implement a two-dimensional 10×10 multiplication lookup table.
Ludgate stated that he deliberately chose the values in his tables to be as small as he could make them; given this, Ludgate's tables can be simply constructed from first principles, either via pen-and-paper methods, or a systematic search using only a few tens of lines of program code. They do not correspond to either Zech logarithms, Remak indexes or Korn indexes.
Pseudocode
The following is an implementation of Ludgate's Irish logarithm algorithm in the Python programming language:
table1 = [50, 0, 1, 7, 2, 23, 8, 33, 3, 14]
table2 = [ 1, 2, 4, 8, 16, 32, 64, 3, 6, 12,
24, 48, 0, 0, 9, 18, 36, 72, 0, 0,
0, 27, 54, 5, 10, 20, 40, 0, 81, 0,
15, 30, 0, 7, 14, 28, 56, 45, 0, 0,
21, 42, 0, 0, 0, 0, 25, 63, 0, 0,
0, 0, 0, 0, 0, 0, 35, 0, 0, 0,
0, 0, 0, 0, 0, 0, 49, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0]
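With the tables complete, the product of two decimal digits is obtained exactly as the description above states: convert each digit to its index, add the two indices, and use the sum to index the second table. A minimal way to express this (the function name and the self-check are illustrative additions, not Ludgate's notation):

def multiply(a, b):
    # Two one-dimensional lookups and one addition replace a 10x10 product table.
    return table2[table1[a] + table1[b]]

# Self-check: the table-driven product agrees with ordinary multiplication
# for every pair of decimal digits.
assert all(multiply(a, b) == a * b for a in range(10) for b in range(10))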
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What is the significance of the development of the servomechanism in the context of numerical control (NC) technology?
A. It allowed for the automation of template tracing.
B. It enabled powerful and controlled movement with accurate measurement.
C. It was the first machine tool developed for numerical control.
D. It simplified the process of creating punched tape.
Answer:
|
B. It enabled powerful and controlled movement with accurate measurement.
|
Relevant Documents:
Document 0:::
In mathematics, a smooth compact manifold M is called almost flat if for any ε > 0 there is a Riemannian metric g_ε on M such that diam(M, g_ε) ≤ 1 and g_ε is ε-flat, i.e. its sectional curvature satisfies |K(g_ε)| < ε.
Given n, there is a positive number ε(n) > 0 such that if an n-dimensional manifold admits an ε(n)-flat metric with diameter at most 1 then it is almost flat. On the other hand, one can fix the bound on sectional curvature and let the diameter go to zero, so an almost-flat manifold is a special case of a collapsing manifold, which is collapsing along all directions.
According to the Gromov–Ruh theorem, M is almost flat if and only if it is infranil. In particular, it is a finite factor of a nilmanifold, which is the total space of a principal torus bundle over a principal torus bundle over a torus.
References
Hermann Karcher. Report on M. Gromov's almost flat manifolds. Séminaire Bourbaki (1978/79), Exp. No. 526, pp. 21–35, Lecture Notes in Math., 770, Springer, Berlin, 1980.
Peter Buser and Hermann Karcher. Gromov's almost flat manifolds. Astérisque, 81. Société Mathématique de France, Paris, 1981. 148 pp.
Peter Buser and Hermann Karcher. The Bieberbach case in Gromov's almost flat manifold theorem. Global differential geometry and global analysis (Berlin, 1979), pp. 82–93, Lecture Notes in Math., 838, Springer, Berlin-New York, 1981.
Document 1:::
Material flow cost accounting (MFCA) is a management tool that helps organizations better understand the potential environmental and financial consequences of their material and energy practices and seeks to improve them through changes in those practices. It does so by assessing the physical material flows in a company or a supply chain and assigning the associated costs to these flows.
History
The method was developed in Germany in the 1980s and is related to approaches such as eco-balances, flow cost accounting and Reststoffkostenrechnung (waste-material cost accounting).
The method was widely adopted in Japan in the 2000s; by 2010 up to 300 companies had applied the MFCA approach, with strong support from the Japanese government.
In 2011 the International Organization for Standardization (ISO) published a standard on MFCA (EN ISO 14051:2011). In 2020, an analysis of 73 case studies examined companies' experiences with applying MFCA, including success factors and obstacles.
Objective and principles
The aim of MFCA is to enhance both environmental and economic performance through improved material and energy use. Since 2011 a general framework for MFCA has been provided by the ISO 14051 standard. In order to improve material and energy efficiency, MFCA aims to:
Increase transparency regarding material and energy flows and their associated costs
Support organizational decisions in areas such as process engineering, production planning, quality control, product design and supply chain management
Improve coordination and communication on material and energy use within organizations.
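The core bookkeeping step can be sketched numerically. This is a simplified example with invented figures, not the full ISO 14051 procedure: the costs entering a process step are split between the product and the material losses in proportion to the mass that ends up in each.

# Sketch: allocate one process step's costs between product and material loss
# in proportion to output mass. All figures are invented for illustration.
input_mass_kg = 100.0
product_mass_kg = 80.0
loss_mass_kg = input_mass_kg - product_mass_kg

costs = {"material": 500.0, "energy": 120.0, "system": 80.0}  # currency units
total_cost = sum(costs.values())

product_share = product_mass_kg / input_mass_kg
cost_of_product = total_cost * product_share
cost_of_losses = total_cost * (1 - product_share)

print(f"product: {cost_of_product:.2f}, material losses: {cost_of_losses:.2f}")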
See also
Business process reengineering
Sustainable design
References
Document 2:::
YAFFS (Yet Another Flash File System) is a file system designed and written by Charles Manning for the company Aleph One.
YAFFS1 was the first version of this file system and was designed for the then-current NAND chips with 512 byte page size (+ 16 byte spare (OOB; Out-Of-Band) area). Work started in 2002, and it was first released later that year. The initial work was sponsored by Toby Churchill Ltd, and Brightstar Engineering.
These older chips also generally allow 2 or 3 write cycles per page. YAFFS takes advantage of this: dirty pages are marked by writing to a specific spare area byte. Newer NAND flash chips have larger pages, first 2K pages (+ 64 bytes OOB), later 4K, with stricter write requirements. Each page within an erase block (128 kilobytes) must be written to in sequential order, and each page must be written only once.
Designing a storage system that enforces a "write once rule" ("write once property") has several advantages.
YAFFS2 was designed to accommodate these newer chips. It was based on the YAFFS1 source code, with the major difference being that internal structures are not fixed to assume 512 byte sizing, and a block sequence number is placed on each written page. In this way older pages can be logically overwritten without violating the "write once" rule. It was released in late 2003.
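The sequence-number mechanism can be illustrated with a toy model (a simplified sketch, not YAFFS's actual on-flash layout or code): every chunk of data is written to a fresh page together with a monotonically increasing sequence number, and the copy with the highest sequence number is taken as current, so a chunk can be logically overwritten while each physical page is still written only once.

# Toy log-structured store: pages are append-only; the newest copy of a chunk
# (highest sequence number) is the valid one. Illustrative only.
log = []          # list of (sequence, chunk_id, data) tuples, never rewritten
sequence = 0

def write_chunk(chunk_id, data):
    global sequence
    sequence += 1
    log.append((sequence, chunk_id, data))   # append; never modify an existing page

def read_chunk(chunk_id):
    # The entry with the highest sequence number for this chunk is current.
    newest = max((e for e in log if e[1] == chunk_id), key=lambda e: e[0])
    return newest[2]

write_chunk("file0:chunk0", b"old contents")
write_chunk("file0:chunk0", b"new contents")
assert read_chunk("file0:chunk0") == b"new contents"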
YAFFS is a robust log-structured file system that holds data integrity as a high priority. A secondary YAFFS goal is high performance. YAFFS will typically outperform most alternatives. It is also designed to be portable and has been used on Linux, WinCE, pSOS, RTEMS, eCos, ThreadX, and various special-purpose OSes. A variant 'YAFFS/Direct' is used in situations where there is no OS, embedded OSes or bootloaders: it has the same core filesystem but simpler interfacing to both the higher and lower level code and the NAND flash hardware.
The YAFFS codebase is licensed both under the GPL and under per-product licenses available from Aleph One.
YAFFS is loc
Document 3:::
The Feighner Criteria are a set of influential psychiatric diagnostic criteria developed at Washington University in St. Louis from the late 1950s to the early 1970s.
Overview
The criteria are named after a psychiatric paper published in 1972 of which John Feighner was the first listed author. It became the most cited article in psychiatry for some time. The development of the criteria had been led by a trio of psychiatrists working together on the project for a medical model of psychiatric diagnosis since the late 1950s: Eli Robins, Samuel Guze and George Winokur.
Fourteen conditions were defined, including primary affective disorders (such as depression), schizophrenia, anxiety neurosis and antisocial personality disorder.
The criteria were expanded in the publication of the Research Diagnostic Criteria on which many of the criteria of the American Psychiatric Association's DSM III (1980) were based, which in turn shaped the World Health Organization's ICD manual. "The historical record shows that the small group of individuals who created the Feighner criteria instigated a paradigm shift that has had profound effects on the course of American and, ultimately, world psychiatry."
External links
Commentary on the criteria by John Feighner (written in 1989)
Document 4:::
The Underhanded C Contest was a programming contest to turn out code that is malicious, but passes a rigorous inspection, and looks like an honest mistake even if discovered. The contest rules define a task, and a malicious component. Entries must perform the task in a malicious manner as defined by the contest, and hide the malice. Contestants are allowed to use C-like compiled languages to make their programs.
The contest was organized by Dr. Scott Craver of the Department of Electrical Engineering at Binghamton University. The contest was initially inspired by Daniel Horn's Obfuscated V contest in the fall of 2004. For the 2005 to 2008 contests, the prize was a $100 gift certificate to ThinkGeek. The 2009 contest had its prize increased to $200 due to the very late announcement of winners, and the prize for the 2013 contest was also a $200 gift certificate.
Contests
2005
The 2005 contest had the task of basic image processing, such as resampling or smoothing, but covertly inserting unique and useful "fingerprinting" data into the image. Winning entries from 2005 used uninitialized data structures, reuse of pointers, and an embedding of machine code in constants.
2006
The 2006 contest required entries to count word occurrences, but to have vastly different runtimes on different platforms. To accomplish the task, entries used fork implementation errors, optimization problems, endian differences and various API implementation differences. The winner called strlen() inside a loop, giving quadratic complexity that was optimized out by the compiler used on Linux but not by the one used on Windows.
2007
The 2007 contest required entries to encrypt and decrypt files with a strong, readily available encryption algorithm such that a low percentage (1% - 0.01%) of the encrypted files may be cracked in a reasonably short time. The contest commenced on April 16 and ended on July 4. Entries used misimplementations of RC4, misused API calls, and incorrect function prototypes.
2008
The 2008 contest
The following are multiple choice questions (with answers) about knowledge and skills in advanced master-level STEM courses.
What does the parameter space define in mathematical models?
A. The space of all possible parameter values
B. The physical dimensions of an object
C. The limits of statistical distribution
D. The randomness of data points
Answer:
|
A. The space of all possible parameter values
|