text | source |
|---|---|
Martin Roček is a professor of theoretical physics at the State University of New York at Stony Brook and a member of the C. N. Yang Institute for Theoretical Physics. He received A.B. and Ph.D. degrees from Harvard University in 1975 and 1979, respectively. He did post-doctoral research at the University of Cambridge and Caltech before becoming a professor at Stony Brook University. He was one of the co-inventors of the hyperkähler quotient, a hyperkähler analogue of Marsden–Weinstein reduction, and of the structure of bihermitian manifolds. His research interests include supersymmetry, string theory, and applications of generalized complex geometry. With S. J. Gates, M. T. Grisaru, and W. Siegel, Roček coauthored "Superspace, or One thousand and one lessons in supersymmetry" (1984), the first comprehensive book on supersymmetry. He is the local coordinator of the annual Simons Workshop in Mathematics and Physics, jointly hosted by the Yang Institute for Theoretical Physics and the Department of Mathematics of Stony Brook University. | https://en.wikipedia.org/wiki?curid=5980095 |
Markarian 421 (Mrk 421, Mkn 421) is a blazar located in the constellation Ursa Major. The object is an active galaxy and a BL Lacertae object, and is a strong source of gamma rays. It is about 397 million light-years (redshift z = 0.0308, equivalent to 122 Mpc) to 434 million light-years (133 Mpc) from the Earth. It is one of the closest blazars to Earth, making it one of the brightest quasars in the night sky. It is suspected to have a supermassive black hole (SMBH) at its center due to its active nature. An early-type, high-inclination spiral galaxy (Markarian 421-5) is located 14 arc-seconds northeast of Markarian 421. It was first determined to be a very-high-energy gamma-ray emitter in 1992 by M. Punch at the Whipple Observatory, and an extremely rapid outburst in very-high-energy gamma rays (15-minute rise time) was measured in 1996 by J. Gaidos at the Whipple Observatory. The object had another outburst in 2001 and is monitored by the Whole Earth Blazar Telescope project. Due to its brightness (around magnitude 13.3; maximum 11.6, minimum 16), the object can also be viewed by amateurs in smaller telescopes. | https://en.wikipedia.org/wiki?curid=6003864 |
Generic drug A generic drug is a pharmaceutical drug that contains the same chemical substance as a drug that was originally protected by chemical patents. Generic drugs are allowed for sale after the patents on the original drugs expire. Because the active chemical substance is the same, the medical profile of generics is believed to be equivalent in performance. A generic drug has the same active pharmaceutical ingredient (API) as the original, but it may differ in some characteristics such as the manufacturing process, formulation, excipients, color, taste, and packaging. Although they may not be associated with a particular company, generic drugs are usually subject to government regulations in the countries in which they are dispensed. They are labeled with the name of the manufacturer and a generic non-proprietary name such as the United States Adopted Name (USAN) or International Non-proprietary Name (INN) of the drug. A generic drug must contain the same active ingredients as the original brand-name formulation. The U.S. Food and Drug Administration (FDA) requires generics to be identical to or within an acceptable bioequivalent range of their brand-name counterparts, with respect to pharmacokinetic and pharmacodynamic properties. (The FDA's use of the word "identical" is a legal interpretation, not literal.) Biopharmaceuticals, such as monoclonal antibodies, differ biologically from small molecule drugs | https://en.wikipedia.org/wiki?curid=293885 |
Generic drug Biosimilars have active pharmaceutical ingredients that are almost identical to the original product and are typically regulated under an extended set of rules, but they are not the same as generic drugs as the active ingredients are not the same as those of their reference products. In most cases, generic products become available after the patent protections, afforded to a drug's original developer, expire. Once generic drugs enter the market, competition often leads to substantially lower prices for both the original brand-name product and its generic equivalents. In most countries, patents give 20 years of protection. However, many countries and regions, such as the European Union and the United States, may grant up to five years of additional protection ("patent term restoration") if manufacturers meet specific goals, such as conducting clinical trials for pediatric patients. Manufacturers, wholesalers, insurers, and drugstores can all increase prices at various stages of production and distribution. In 2014, according to an analysis by the Generic Pharmaceutical Association, generic drugs accounted for 88% of the 4.3 billion prescriptions filled in the United States. "Branded generics" on the other hand are defined by the FDA and NHS as "products that are (a) either novel dosage forms of off-patent products produced by a manufacturer that is not the originator of the molecule, or (b) a molecule copy of an off-patent product with a trade name | https://en.wikipedia.org/wiki?curid=293885 |
Generic drug Since the company making branded generics can spend little on research and development, it is able to spend on marketing alone, thus earning higher profits and driving costs down. For example, the largest revenues of Ranbaxy, now owned by Sun Pharma, came from branded generics. Generic names are constructed using standardized affixes that distinguish drugs between and within classes and suggest their action. When a pharmaceutical company first markets a drug, it is usually under a patent that, until it expires, the company can use to exclude competitors by suing them for patent infringement. Pharmaceutical companies that develop new drugs generally invest only in drug candidates with strong patent protection, as a strategy to recoup their costs of drug development (including the costs of the drug candidates that fail) and to make a profit. The average cost to a brand-name company of discovering, testing, and obtaining regulatory approval for a new drug, with a new chemical entity, was estimated to be as much as US$800 million in 2003 and US$2.6 billion in 2014. Drug companies that bring new products to market have several product-line extension strategies they use to extend their exclusivity, some of which are seen as gaming the system and referred to by critics as "evergreening", but at some point there is no patent protection available | https://en.wikipedia.org/wiki?curid=293885 |
Generic drug For as long as a drug patent lasts, a brand-name company enjoys a period of market exclusivity, or monopoly, in which the company is able to set the price of the drug at a level that maximizes profit. This profit often greatly exceeds the development and production costs of the drug, allowing the company to offset the cost of research and development of other drugs that are not profitable or do not pass clinical trials. Large pharmaceutical companies often spend millions protecting their patents from generic competition. Apart from litigation, they may reformulate a drug or license a subsidiary (or another company) to sell generics under the original patent. Generics sold under license from the patent holder are known as authorized generics. Generic drugs are usually sold for significantly lower prices than their branded equivalents and at lower profit margins. One reason for this is that competition increases among producers when a drug is no longer protected by patents. Generic companies incur fewer costs in creating generic drugs—only the cost of manufacturing, without the costs of drug discovery and drug development—and are therefore able to maintain profitability at a lower price. The prices are often low enough for users in less-prosperous countries to afford them. For example, Thailand has imported millions of doses of a generic version of the blood-thinning drug Plavix (used to help prevent heart attacks) from India, the leading manufacturer of generic drugs, at a cost of US$0.03 per dose | https://en.wikipedia.org/wiki?curid=293885 |
Generic drug companies may also receive the benefit of the previous marketing efforts of the brand-name company, including advertising, presentations by drug representatives, and distribution of free samples. Many drugs introduced by generic manufacturers have already been on the market for a decade or more and may already be well known to patients and providers, although often under their branded name. India is a leading country in the world's generic drugs market, exporting US$20.0 billion worth of drugs in the 2019–20 (April–March) year. India exports generic drugs to the United States and the European Union. In the United Kingdom, generic drug pricing is controlled by the government's reimbursement rate. The price paid by pharmacists and doctors is determined mainly by the number of license holders, the sales value of the original brand, and the ease of manufacture. A typical price decay graph will show a "scalloped" curve, which usually starts at the brand-name price on the day of generic launch and then falls as competition intensifies. After some years, the graph typically flattens out at approximately 20% of the original brand price. In about 20% of cases, the price "bounces": Some license holders withdraw from the market when the selling price dips below their cost of goods, and the price then rises for a while until the license holders re-enter the market with new stock. The NHS spent about £4.3 billion on generic medicines in 2016–17 | https://en.wikipedia.org/wiki?curid=293885 |
Generic drug In 2012, 84% of prescriptions in the US were filled with generic drugs, and in 2014, the use of generic drugs in the United States led to US$254 billion in health care savings. In the mid-2010s the generics industry began transitioning to the end of an era of giant patent cliffs in the pharmaceutical industry; patented drugs with sales of around US$28 billion were set to come off patent in 2018, but in 2019 only about US$10 billion in revenue was set to open up for competition, and even less the following year. Companies in the industry have responded by consolidating or by trying to develop new drugs of their own. Most nations require generic drug manufacturers to prove that their formulations are bioequivalent to their brand-name counterparts. Bioequivalence does not mean generic drugs must be exactly the same as the brand-name product ("pharmaceutical equivalent"). Chemical differences may exist; a different salt or ester may be used, for instance. Different inactive ingredients mean that the generic may look different from the originator brand. However, the therapeutic effect of the drug must be the same ("pharmaceutical alternative"). Most small-molecule drugs are accepted as bioequivalent if the 90% confidence intervals for their pharmacokinetic parameters, the area under the curve (AUC) and maximum concentration (Cmax), fall within limits of 80–125% of the reference values; most approved generics are well within this limit | https://en.wikipedia.org/wiki?curid=293885 |
Generic drug For more complex products—such as inhalers, patch delivery systems, liposomal preparations, or biosimilar drugs—demonstrating pharmacodynamic or clinical equivalence is more challenging. Enacted in 1984, the Drug Price Competition and Patent Term Restoration Act, informally known as the Hatch–Waxman Act, standardized procedures for recognition of generic drugs. In 2007, the FDA launched the Generic Initiative for Value and Efficiency (GIVE): an effort to modernize and streamline the generic drug approval process, and to increase the number and variety of generic products available. Before a company can market a generic drug, it needs to file an Abbreviated New Drug Application (ANDA) with the Food and Drug Administration, seeking to demonstrate therapeutic equivalence to a previously approved "reference-listed drug" and proving that it can manufacture the drug safely and consistently. For an ANDA to be approved, the FDA requires that the 90% confidence interval of the geometric mean test/reference ratios for the total drug exposure (represented by the area under the curve or AUC) and the maximum plasma concentration (Cmax) should fall within limits of 80–125%. (This range is part of a statistical calculation, and does not mean that generic drugs are allowed to differ from their brand-name counterparts by up to 25 percent.) The FDA evaluated 2,070 studies conducted between 1996 and 2007 that compared the absorption of brand-name and generic drugs into a person's body | https://en.wikipedia.org/wiki?curid=293885 |
Generic drug The average difference in absorption between the generic and the brand-name drug was 3.5 percent, comparable to the difference between two batches of a brand-name drug. Non-innovator versions of biologic drugs, or biosimilars, require clinical trials for immunogenicity in addition to tests establishing bioequivalence. These products cannot be entirely identical because of batch-to-batch variability and their biological nature, and they are subject to extra rules. When an application is approved, the FDA adds the generic drug to its Approved Drug Products with Therapeutic Equivalence Evaluations list and annotates the list to show the equivalence between the reference-listed drug and the generic. The FDA also recognizes drugs that use the same ingredients with different bioavailability and divides them into therapeutic equivalence groups. For example, as of 2006, diltiazem hydrochloride had four equivalence groups, all using the same active ingredient, but considered equivalent only within each group. In order to start selling a drug promptly after the patent on the innovator drug expires, a generic company has to file its ANDA well before the patent expires. This puts the generic company at risk of being sued for patent infringement, since the act of filing the ANDA is considered "constructive infringement" of the patent | https://en.wikipedia.org/wiki?curid=293885 |
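The 80–125% acceptance test described in these paragraphs can be sketched as a small calculation: log-transform the paired test/reference measurements so ratios become differences, build a 90% confidence interval for the mean difference, and back-transform to a geometric mean ratio. This is a minimal sketch, not the FDA's full statistical procedure; the subject data and function name below are hypothetical, and the t critical value is assumed to be looked up from a table (or `scipy.stats.t.ppf(0.95, n-1)`).

```python
import math
import statistics

def bioequivalence_90ci(test, reference, t_crit):
    """Sketch of a 90% CI on the geometric mean test/reference ratio
    of a pharmacokinetic metric such as AUC or Cmax (paired design).

    t_crit: 0.95 quantile of Student's t for n-1 degrees of freedom,
    taken from a table (hypothetical helper; assumptions noted above).
    """
    # Log scale: each pair contributes log(test/reference).
    diffs = [math.log(t) - math.log(r) for t, r in zip(test, reference)]
    n = len(diffs)
    mean = statistics.fmean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)
    lo, hi = mean - t_crit * se, mean + t_crit * se
    # Back-transform to a ratio and compare with the 80-125% limits.
    ci = (math.exp(lo), math.exp(hi))
    return ci, (0.80 <= ci[0] and ci[1] <= 1.25)

# Hypothetical AUC values for six subjects (illustrative numbers only).
auc_test = [102.0, 98.5, 110.2, 95.3, 101.7, 99.9]
auc_ref = [100.0, 97.0, 108.0, 97.1, 100.2, 101.5]
ci, ok = bioequivalence_90ci(auc_test, auc_ref, t_crit=2.015)  # t for df=5
print(ci, ok)
```

Note that the test is on the confidence interval, not the point estimate: a generic whose mean ratio is 1.00 can still fail if the data are too variable for the whole interval to fit inside 80–125%.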
Generic drug In order to incentivize generic companies to take that risk, the Hatch–Waxman Act granted a 180-day administrative exclusivity period to the generic drug manufacturer that is the first to file an ANDA. When faced with patent litigation from the drug innovator or patent holder, generic companies will often counter-sue, challenging the validity of the patent. Like any litigation between private parties, the innovator and generic companies may choose to settle the litigation. Some of these settlement agreements have been struck down by courts when they took the form of reverse payment patent settlement agreements, in which the generic company essentially accepts a payment to drop the litigation, delaying the introduction of the generic product and frustrating the purpose of the Hatch–Waxman Act. Innovator companies sometimes try to maintain some of the revenue from their drug after patents expire by allowing another company to sell an authorized generic; a 2011 FTC report found that consumers benefited from lower costs when an authorized generic was introduced during the 180-day exclusivity period, as it created competition. Innovator companies may also present arguments to the FDA that the ANDA should not be accepted by filing an FDA citizen petition. The right of individuals or organizations to petition the federal government is guaranteed by the First Amendment to the United States Constitution | https://en.wikipedia.org/wiki?curid=293885 |
Generic drug For this reason, the FDA has promulgated regulations that provide, among other things, that at any time, any "interested person" can request that the FDA "issue, amend, or revoke a regulation or order," and set forth a procedure for doing so. Some generic drugs are viewed with suspicion by doctors. For example, warfarin (Coumadin) has a narrow therapeutic window and requires frequent blood tests to make sure patients do not have a subtherapeutic or a toxic level. A study performed in Ontario showed that replacing Coumadin with generic warfarin was safe, but many physicians are not comfortable with their patients taking branded generic equivalents. In some countries (for example, Australia) where a drug is prescribed under more than one brand name, doctors may choose not to allow pharmacists to substitute a brand different from the one prescribed unless the consumer requests it. A series of scandals around the approval of generic drugs in the late 1980s shook public confidence in generic drugs; there were several instances in which companies obtained bioequivalence data fraudulently, by using the branded drug in their tests instead of their own product, and a congressional investigation found corruption at the FDA, where employees were accepting bribes to approve some generic companies' applications and delaying or denying others | https://en.wikipedia.org/wiki?curid=293885 |
Generic drug In 2007, North Carolina Public Radio's "The People's Pharmacy" began reporting on consumers' complaints that generic versions of bupropion (Wellbutrin) were yielding unexpected effects. Subsequently, Impax Laboratories's 300 mg extended-release tablets, marketed by Teva Pharmaceutical Industries, were withdrawn from the US market after the FDA determined in 2012 that they were not bioequivalent. Problems with the quality of generic drugs – especially those produced outside the United States – are widespread as of 2019. The FDA does infrequent – less than annual – inspections of production sites outside the United States. The FDA normally gives advance notice of inspections, which can lead to cover-ups of problems before inspectors arrive; inspections performed with little or no advance notice have produced evidence of serious problems at a majority of generic drug manufacturing sites in India and China. Two women, each claiming to have suffered severe medical complications from a generic version of metoclopramide, lost their Supreme Court appeal on June 23, 2011. In a 5–4 ruling in "PLIVA, Inc. v. Mensing", the court held that generic companies cannot be held liable for information, or the lack of information, on the originator's label. The Indian government began encouraging more drug manufacturing by Indian companies in the early 1960s, and with the Patents Act in 1970 | https://en.wikipedia.org/wiki?curid=293885 |
Generic drug The Patents Act removed composition patents for foods and drugs, and though it kept process patents, these were shortened to a period of five to seven years. The resulting lack of patent protection created a niche in both the Indian and global markets that Indian companies filled by reverse-engineering new processes for manufacturing low-cost drugs. The code of ethics issued by the Medical Council of India in 2002 calls for physicians to prescribe drugs by their generic names only. India is a leading country in the world's generic drugs market, with Sun Pharmaceuticals being the largest pharmaceutical company in India. Indian generics companies exported US$17.3 billion worth of drugs in the 2017–18 (April–March) year. Generic drug production is a large part of the pharmaceutical industry in China. Western observers have said that China lacks administrative protection for patents. However, entry to the World Trade Organization has brought a stronger patent system. As of 2019, several major companies traditionally dominate the generic drugs market, including Teva, Mylan, Novartis' Sandoz, Amneal and Endo International. Prices of traditional generic drugs have declined, and newer companies such as India-based Sun Pharma, Aurobindo Pharma, and Dr. Reddy's Laboratories, as well as Canada-based Apotex, have taken market share, which has led to a focus on biosimilars. | https://en.wikipedia.org/wiki?curid=293885 |
Superdeformation In nuclear physics a superdeformed nucleus is a nucleus that is very far from spherical, forming an ellipsoid with axes in ratios of approximately 2:1:1. Normal deformation is approximately 1.3:1:1. Only some nuclei can exist in superdeformed states. The first superdeformed states to be observed were the fission isomers, low-spin states of elements in the actinide and lanthanide series. The strong force falls off much faster with distance than the Coulomb force, so when nucleons are more than about 2.5 femtometers apart, the Coulomb repulsion dominates. For this reason, these elements undergo spontaneous fission. In the late 1980s, high-spin superdeformed rotational bands were observed in other regions of the periodic table. Specific elements include ruthenium, rhodium, palladium, silver, osmium, iridium, platinum, gold, and mercury. Superdeformed states exist because of a combination of macroscopic and microscopic factors, which together lower their energies and make them stable minima of the energy as a function of deformation. Macroscopically, the nucleus can be described by the liquid drop model. The liquid drop's energy as a function of deformation is at a minimum for zero deformation, due to the surface tension term. However, the curve may become soft with respect to high deformations because of the Coulomb repulsion (especially for the fission isomers, which have high Z) and also, in the case of high-spin states, because of the increased moment of inertia | https://en.wikipedia.org/wiki?curid=294514 |
Superdeformation Modulating this macroscopic behavior, the microscopic shell correction creates certain superdeformed magic numbers that are analogous to the spherical magic numbers. For nuclei near these magic numbers, the shell correction creates a second minimum in the energy as a function of deformation. Even more deformed states (3:1) are called hyperdeformed. | https://en.wikipedia.org/wiki?curid=294514 |
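The competition between surface tension and Coulomb repulsion described in the liquid-drop discussion above can be sketched with the textbook leading-order expansion: for a small quadrupole deformation α, the surface energy grows like (1 + 2α²/5) while the Coulomb energy falls like (1 − α²/5), so the net deformation energy is (2/5)·E_surf·(1 − x)·α², where x is the Bohr–Wheeler fissility parameter. This is a rough sketch, not a full deformation-energy calculation; the coefficients are the usual semi-empirical mass formula values and the function name is my own.

```python
A_S = 17.8  # MeV, surface coefficient (semi-empirical mass formula)
A_C = 0.71  # MeV, Coulomb coefficient

def fissility(Z, A):
    """Bohr-Wheeler fissility x = E_Coulomb / (2 * E_surface).

    In the leading-order liquid-drop expansion the deformation energy is
    dE = (2/5) * E_surf * (1 - x) * alpha**2, so the drop is stable
    against small quadrupole deformation for x < 1, and the energy curve
    softens (flattens) as x approaches 1.
    """
    e_surf = A_S * A ** (2 / 3)          # undeformed surface energy
    e_coul = A_C * Z ** 2 / A ** (1 / 3)  # undeformed Coulomb energy
    return e_coul / (2 * e_surf)

print(round(fissility(26, 56), 2))   # iron-56: far from instability
print(round(fissility(92, 238), 2))  # uranium-238: much closer to x = 1
```

The high-Z fission isomers mentioned above sit near x ≈ 0.7, which is why their macroscopic energy curve is soft enough for the microscopic shell correction to carve out a superdeformed second minimum.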
Sea of clouds A sea of clouds is an overcast layer of clouds, viewed from above, with a relatively uniform top which shows undulations of very different lengths resembling waves. A sea of fog is formed from stratus clouds or fog and does not show undulations. In both cases, the phenomenon looks very similar to the open ocean. The comparison is even more complete if some mountain peaks rise above the clouds, thus resembling islands. A sea of clouds forms generally in valleys or over seas in very stable air mass conditions such as in a temperature inversion. Humidity can then reach saturation and condensation leads to a very uniform stratocumulus cloud, stratus cloud or fog. Above this layer, the air must be dry. This is a common situation in a high-pressure area with cooling at the surface by radiative cooling at night in summer, or advection of cold air in winter or in a marine layer. | https://en.wikipedia.org/wiki?curid=294629 |
Deposition (chemistry) In chemistry, deposition occurs when molecules settle out of a solution. Deposition can be viewed as a reverse process to dissolution or particle re-entrainment. | https://en.wikipedia.org/wiki?curid=297039 |
Meteorological disasters are caused by extreme weather, e.g. rain, drought, snow, extreme heat or cold, ice, or wind. They are violent, sudden and destructive events related to, produced by, or affecting the Earth's atmosphere, especially its weather-forming processes. Examples of weather disasters include blizzards, droughts, hailstorms, heat waves, hurricanes, floods (caused by rain), and tornadoes. | https://en.wikipedia.org/wiki?curid=298837 |
Kleiner Perkins Kleiner Perkins, formerly Kleiner Perkins Caufield & Byers (KPCB), is an American venture capital firm which specializes in investing in incubation, early-stage and growth companies. Since its founding in 1972, the firm has backed entrepreneurs in over 900 ventures, including America Online, Amazon.com, Tandem Computers, Compaq, Electronic Arts, JD.com, Square, Genentech, Google, Netscape, Sun Microsystems, Nest, Synack, Snap, AppDynamics, and Twitter. By 2019 it had raised around $9 billion in 19 venture capital funds and four growth funds. The firm is headquartered in Menlo Park in Silicon Valley, with offices in San Francisco and Shanghai, China. The "New York Times" described it as "perhaps Silicon Valley's most famous venture firm." "The Wall Street Journal" called it one of the "largest and most established" venture capital firms, and "Dealbook" named it "one of Silicon Valley's top venture capital providers." The firm was formed in 1972 as Kleiner, Perkins, Caufield & Byers (KPCB) in Menlo Park, California, with a focus on seed, early-stage, and growth companies. The firm is named after its four founding partners: Eugene Kleiner, Tom Perkins, Frank J. Caufield, and Brook Byers. Kleiner was a founder of Fairchild Semiconductor, and Perkins was an early Hewlett-Packard executive. Byers joined in 1977. Located in Menlo Park, California, the firm had access to the growing technology industries in the area | https://en.wikipedia.org/wiki?curid=299967 |
Kleiner Perkins By the early 1970s, there were many semiconductor companies based in the Santa Clara Valley, as well as early computer firms using their devices, and programming and service companies. Venture capital firms suffered a temporary downturn in 1974, when the stock market crashed and investors were naturally wary of this new kind of investment fund. Nevertheless, the firm was still active in this period. By 1996, it had funded around 260 companies with a total of $880 million. Beyond the original founders, notable members of the firm have included John Doerr, Vinod Khosla, and Bill Joy. Colin Powell joined as a "strategic" partner in 2005, while Al Gore joined as a partner in 2007 as part of a collaboration between the firm and Generation Investment Management. Mary Meeker joined the firm in 2010, and that year the firm expanded its practice to invest in growth-stage companies. Meeker departed in 2019 to found Bond Capital. Mamoon Hamid from Social Capital and Ilya Fushman from Index Ventures joined in 2017 and 2018 respectively, both as investing partners. The "New York Times" has described the firm as "perhaps Silicon Valley's most famous venture firm." The firm was described by "Dealbook" in 2009 as "one of Silicon Valley's top venture capital providers," and "The Wall Street Journal" in 2010 called it one of the "largest and most established" venture capital firms. By 2019 it had raised around $9 billion in 19 venture capital funds and four growth funds | https://en.wikipedia.org/wiki?curid=299967 |
Kleiner Perkins In May 2012, Ellen Pao, an employee, sued the firm for gender discrimination in "Pao v. Kleiner Perkins", charges which the firm vigorously denied. On March 27, 2015, after a month-long trial, the jury found against Pao on all claims. In June 2015, Pao filed an appeal. In September 2015, Pao announced she would no longer appeal the jury verdict. In September 2018, the firm announced it was spinning out its digital growth team into a new independent firm. The firm announced its 19th fund on January 31, 2019, after raising $600 million. The fund is focused on early-stage investments in the "consumer, enterprise, hard tech and fintech" sectors. The firm raised US$600 million for its 18th fund, KP XVIII, in January 2019. In March 2008, the firm announced the iFund, a $100 million venture capital investment initiative that funds concepts related to the iPhone, and doubled that investment a year later. It was reported in April 2008 that the firm was raising funds for a $500 million growth-stage clean-technology fund. In October 2010, the firm launched a $250 million fund called sFund to focus on social startups, with co-investors such as Facebook, Zynga and Amazon.com. In early 2016, the firm raised $1.4 billion in KP XVII and DGF III. The firm has been an early investor in more than 900 technology and life sciences firms since its founding; it paid $5 million in 1994 for around 25% of Netscape and profited from Netscape's IPO | https://en.wikipedia.org/wiki?curid=299967 |
Kleiner Perkins Its investment of $8 million in Cerent was worth around $2 billion when the optical equipment maker was sold to Cisco Systems for $6.9 billion in August 1999. In 1999, the firm paid $12 million for a stake in Google. As of 2019, the market cap of Google's parent company was estimated at around $831 billion. As an initial investor in Amazon.com, the firm scored returns in excess of $1 billion on an $8 million investment. The firm currently has five partners managing investments: | https://en.wikipedia.org/wiki?curid=299967 |
Weather-related fatalities in the United States may be caused by extreme temperatures, such as abnormal heat or cold, flooding, lightning, tornadoes, hurricanes, wind, rip currents, and other hazards. The National Weather Service compiles statistics on weather-related fatalities and publishes reports every year. In 2016, flooding was the number-one cause of weather-related fatalities, but over a 30-year period, on average, extreme heat is the deadliest form of weather. This table represents a 6-year period and only a select set of recorded weather events. The data was tabulated by running searches on the specified weather events recorded with at least one fatality. The yearly timeframes were selected to cover January 1 to December 31 of each year for each event type. This table does not provide a comprehensive total of all weather-related events. Also, this is not necessarily a comparison of weather event severity, but rather an indicator of the frequency of fatal occurrences for some events. However, the "unexpected" leader in fatalities comes from the temperature-extremes category, which includes heat waves as well as cold extremes. Between 1979 and 2014, the death rate as a direct result of exposure to heat (underlying cause of death) generally hovered around 0.5 to 1 deaths per million people, with spikes in certain years. Overall, a total of more than 9,000 Americans have died from heat-related causes since 1979, according to death certificates. Cold weather is deadly too | https://en.wikipedia.org/wiki?curid=301998 |
Weather-related fatalities in the United States For example, in the US, 21 people died in a cold wave in January 2014, which also caused property damage valued at US$2.5 billion | https://en.wikipedia.org/wiki?curid=301998 |
Compact dimension In string theory, a model used in theoretical physics, a compact dimension is curled up in itself and very small (usually Planck length). Anything moving along this dimension's direction would return to its starting point almost instantaneously, and the fact that the dimension is smaller than the smallest particle means that it cannot be observed by conventional means. Extra dimensions in a theory which are made compact are said to have undergone compactification. | https://en.wikipedia.org/wiki?curid=303712 |
NAMD Nanoscale Molecular Dynamics (NAMD, formerly Not Another Molecular Dynamics Program) is computer software for molecular dynamics simulation, written using the Charm++ parallel programming model. It is noted for its parallel efficiency and is often used to simulate large systems (millions of atoms). It has been developed through the collaboration of the Theoretical and Computational Biophysics Group (TCB) and the Parallel Programming Laboratory (PPL) at the University of Illinois at Urbana–Champaign. It was introduced in 1995 by Nelson "et al." as a parallel molecular dynamics code enabling interactive simulation by linking to the visualization code VMD. NAMD has since matured, adding many features and scaling beyond 500,000 processor cores. NAMD has an interface to the quantum chemistry packages ORCA and MOPAC, as well as a scripted interface to many other quantum packages. Together with Visual Molecular Dynamics (VMD) and QwikMD, NAMD's interface provides access to hybrid QM/MM simulations in an integrated, comprehensive, customizable, and easy-to-use suite. NAMD is available as freeware for non-commercial use by individuals, academic institutions, and corporations for in-house business uses. | https://en.wikipedia.org/wiki?curid=306767 |
Disdrometer A disdrometer is an instrument used to measure the drop size distribution and velocity of falling hydrometeors. Some disdrometers can distinguish between rain, graupel, and hail. The uses for disdrometers are numerous. They can be used for traffic control, scientific examination, airport observation systems, and hydrology. The latest disdrometers employ microwave or laser technologies. 2D video disdrometers can be used to analyze individual raindrops and snowflakes. | https://en.wikipedia.org/wiki?curid=306882 |
Emergent coastline An emergent coastline is a stretch of coast that has been exposed by a relative fall in sea level, caused by either isostasy or eustasy. Emergent coastlines are the opposite of submergent coastlines, which have experienced a relative rise in sea levels. An emergent coastline may exhibit several specific landforms: the Scottish Gaelic word "machair" or "machar", for example, refers to a fertile low-lying raised beach found on some of the coastlines of Ireland and Scotland (especially the Outer Hebrides). Hudson Bay, in Canada's north, is an example of an emergent coastline; it is still emerging by as much as 1 cm per year. Another example is the Eastern Coastal Plains of the Indian subcontinent. | https://en.wikipedia.org/wiki?curid=307197 |
Submergent coastline Submergent coastlines are stretches of coast that have been inundated by the sea through a relative rise in sea level, caused by either isostasy or eustasy. Submergent coastlines are the opposite of emergent coastlines, which have experienced a relative fall in sea levels. Features of a submergent coastline are drowned river valleys (rias) and drowned glaciated valleys (fjords). Estuaries are often the drowned mouths of rivers. The Western Coastal Plains of the Indian subcontinent are an example of a submergent coastline. The ancient city of Dvārakā, which is mentioned in the great epic Mahabharata, is now under water. The coastline also forms the estuaries of the Narmada and the Tapti Rivers. | https://en.wikipedia.org/wiki?curid=307200 |
Adamantine Spar Adamantine spar is a silky brown form of the mineral corundum, with a Mohs hardness of 9. "Adamantine" is also used as an adjective for brilliant, non-metallic light-reflecting and light-transmitting properties, known as "adamantine luster". Diamond is the best-known material described as having adamantine luster, although anglesite, cerussite, and corundum in some of its forms are also described in this way. | https://en.wikipedia.org/wiki?curid=309702 |
Huntingdon Life Sciences (HLS) is a contract research organisation (CRO) founded in 1951 in Cambridgeshire, England. It has two laboratories in the United Kingdom and one in the United States. With over 1,600 staff, it is the largest non-clinical CRO in Europe. In September 2015, Huntingdon Life Sciences, Harlan Laboratories, GFA, NDA Analytics and LSR Associates merged into Envigo. In 2009, HLS was bought outright and is now in private ownership. Prior to this, the latest annual report (2008) showed that the company had revenues of US$242.4m and an operating profit of 14.8%. Although HLS is the third-largest non-clinical CRO in the world, it is probably better known to the general public as the target of a high-profile animal rights campaign, orchestrated mainly by the animal rights group Stop Huntingdon Animal Cruelty (SHAC). HLS has two facilities in the UK (Huntingdon, Cambridgeshire and Eye, Suffolk), one in the USA (East Millstone, New Jersey) and an office in Tokyo, Japan. HLS was founded in the UK in 1951 as Nutrition Research Co. Ltd., a commercial organisation that initially focused on nutrition, veterinary, and biochemical research. The original facilities were split over two locations: the main offices were within Cromwell House in the town of Huntingdon, Cambridgeshire, and the main laboratories were at the Hartford Field Station, just over a mile away. The company then became involved with pharmaceuticals, food additives, and industrial and consumer chemicals. | https://en.wikipedia.org/wiki?curid=312268 |
In 1959 it changed its name to Nutritional Research Unit Ltd. The company benefited in the early 1960s from increased government regulatory testing requirements, especially in the pharmaceutical industry. In 1964 it was acquired by the U.S. medical supply firm of Becton Dickinson. In April 1983, Becton Dickinson created Huntingdon Research Centre PLC. It then offered four million American depositary receipts (ADRs) for sale at $15 each, representing the company's entire interest in Huntingdon. In 1985, as it began to expand its operations, the company changed its name to Huntingdon International Holdings plc. In that year it established Huntingdon Analytical Services Inc. to conduct business in the United States. To augment its CRO business, Huntingdon acquired Minnesota's Twin City Testing Laboratory Inc. and affiliated companies in 1985, followed by the acquisition of Nebraska Testing Corporation in 1986; Travis Laboratories and Kansas City Test Laboratory Inc. in 1989; and Southwestern Laboratories, Inc. in 1990. Huntingdon also diversified its operations, primarily in the United States, becoming involved in engineering and environmental services. In 1987, HLS purchased Northern Engineering and Testing, Inc., and then in 1988 bought Empire Soils Investigations Inc., Chen Associates Inc., and Asteco Inc. In 1988 HLS was floated on the London Stock Exchange and in 1989 obtained a listing on the New York Stock Exchange. In 1990 Huntingdon acquired the St. Louis branch of Envirodyne Engineers Inc | https://en.wikipedia.org/wiki?curid=312268 |
and Whiteley Holdings Ltd. And in 1991 it acquired Austin Research Engineers, Inc., followed by Travers Morgan Ltd. By the early 1990s, Huntingdon was organised into three business groups: the Life Sciences Group, the Engineering/Environmental Group, and the Travers Morgan Group, which offered engineering and environmental consulting services outside of the United States. However, only the Life Sciences Group showed long-term promise. Travers Morgan was allowed to lapse into insolvency, control passed into other hands, and Huntingdon wrote off the investment. In 1995 the engineering and environmental businesses were sold to Maxim Engineers Inc. of Dallas, Texas. To bolster its CRO business and reinforce its U.S. presence, Huntingdon in 1995 acquired the toxicology business of Applied Biosciences International for $32.5 million in cash, plus the Leicester Clinical Research Centre. The deal not only included a U.S. laboratory located near Princeton, New Jersey, it brought with it two British facilities as well. In 1997 Huntingdon International Holdings changed its name to Huntingdon Life Sciences Group. The U.K. subsidiary, Huntingdon Research Centre, changed its name to Huntingdon Life Sciences Ltd., while the U.S. business operated as Huntingdon Life Sciences Inc. In 2002, HLS moved its financial centre to the United States and incorporated in Maryland as Life Sciences Research. In 2009, HLS was bought outright and once again is in private ownership. HLS provides contract research organization services in pre-clinical and non-clinical biological safety evaluation research | https://en.wikipedia.org/wiki?curid=312268 |
As with other major CROs operating in this business area, its major business is serving the pharmaceutical industry. However, more than a third of its business comes from non-pharmaceutical sources, the most important of which is the crop protection industry, which accounts for around 60% of its non-pharmaceutical business. The latest available public figures, from 2008, show that HLS employs more than 1,600 staff across all of its facilities. HLS uses animals in the biomedical research it conducts for its customers; the most recent numbers released state that around 60,000 animals are used annually in the UK. Huntingdon is criticised by animal rights and animal welfare groups for using animals in research, for instances of animal abuse and for the wide range of substances it tests on animals, particularly non-medical products. SHAC claims that 500 animals died every day at HLS (182,500 a year), a figure at odds with HLS's published numbers. Huntingdon's labs were infiltrated by undercover animal rights activists in 1997 in the UK and in 1998 in the US. In 1997, film secretly recorded inside HLS in the UK by the BUAV, and subsequently broadcast on Channel 4 television as "It's a Dog's Life", showed serious breaches of animal-protection laws, including a beagle puppy being held up by the scruff of the neck and repeatedly punched in the face, and animals being taunted | https://en.wikipedia.org/wiki?curid=312268 |
The laboratory technicians responsible were suspended from HLS the day after the broadcast. All three were later dismissed. Two of the men seen hitting and shaking dogs were found guilty under the Protection of Animals Act 1911 of "cruelly terrifying dogs." It was the first time laboratory technicians had been prosecuted for animal cruelty in the UK. HLS admitted that the technicians' behaviour was deplorable, and a new management team was introduced the following year which, according to "The Daily Telegraph", "introduced greater openness and new training methods." In 1998, an undercover investigator for People for the Ethical Treatment of Animals (PETA) used a camera hidden in her glasses to make 50 hours of videotape of the HLS laboratories in Princeton, New Jersey. She also made four 90-minute audiotapes, photocopied 8,000 company documents, and copied the company's client list. According to PETA, some of the film she shot showed a monkey being dissected while still alive and conscious. The president of HLS in New Jersey, Alan Staple, said the monkey was alive but sedated during the dissection. HLS obtained a "gagging order" in the US that prevented PETA from publicising or talking about any of the information that it had discovered. The order also prevented PETA from communicating with the U.S. Department of Agriculture, which had been preparing to investigate the evidence. The Stop Huntingdon Animal Cruelty (SHAC) campaign is based in the UK and US, and has aimed to close the company down since 1999 | https://en.wikipedia.org/wiki?curid=312268 |
According to its website, the campaign's methods are restricted to non-violent direct action, as well as lobbying and demonstrations. It targets not only HLS itself, but any company, institution, or person allegedly doing business with the laboratory, whether as clients, suppliers, or even disposal and cleaning services, and the employees of those companies. Despite its stated non-violent position, SHAC members have been convicted of crimes of violence against HLS employees. On 25 October 2010 five SHAC members received prison sentences for threatening HLS staff. SHAC has also been accused of encouraging arson and violent assault. An HLS director was assaulted in front of his child. HLS managing director Brian Cass was sent a mousetrap primed with razor blades, and in February 2001 was attacked by three men armed with pickaxe handles and CS gas. Another businessman with links to HLS was attacked and knocked unconscious adjacent to a barn his assailants had set alight. Both SHAC and Animal Liberation Front activists have engaged in harassment and intimidation, including issuing hoax bomb threats and death threats. "The Daily Mail" cites as an example the sending of 500 letters to the neighbours of a company manager who did business with HLS; the letters contained an unsupported allegation that the man was a paedophile, and police had to inform all 500 households that the allegations were false | https://en.wikipedia.org/wiki?curid=312268 |
In 2008 seven of SHAC's senior members were described by prosecutors as "some of the key figures in the Animal Liberation Front" and found guilty of conspiracy to blackmail HLS. The campaign against HLS led to its share price crashing, the Royal Bank of Scotland closing its bank account, and the British government arranging for the Bank of England to give it an account. In 2000, HLS was dropped from the New York Stock Exchange because its market capitalization had fallen below NYSE limits. From 2006, "The Daily Telegraph" reports, the British Government took the decision to tackle "the problem of animal rights extremism." On 1 May 2007, a police campaign called "Operation Achilles" was launched against SHAC, a series of raids involving 700 police officers in England, Amsterdam, and Belgium. In total, 32 people linked to the group were arrested, and seven leading members of SHAC, including Greg Avery, were found guilty of blackmail. Police estimated in 2007 that, as a consequence of the operation, "up to three quarters of the most violent activists" were jailed. "Der Spiegel" writes that the number of attacks on HLS and its business partners declined drastically but "the movement is by no means dead." | https://en.wikipedia.org/wiki?curid=312268 |
Aspergillus niger is a fungus and one of the most common species of the genus "Aspergillus". It causes a disease called "black mold" on certain fruits and vegetables such as grapes, apricots, onions, and peanuts, and is a common contaminant of food. It is ubiquitous in soil and is commonly reported from indoor environments, where its black colonies can be confused with those of "Stachybotrys" (species of which have also been called "black mold"). Some strains of "A. niger" have been reported to produce potent mycotoxins called ochratoxins; other sources disagree, claiming this report is based upon misidentification of the fungal species. Recent evidence suggests some true "A. niger" strains do produce ochratoxin A. It also produces the isoflavone orobol. "Aspergillus niger" is included in "Aspergillus" subgenus "Circumdati", section "Nigri". The section "Nigri" includes 15 related black-spored species that may be confused with "A. niger", including "A. tubingensis", "A. foetidus", "A. carbonarius", and "A. awamori". A number of morphologically similar species were described in 2004 by Samson "et al." In 2007 the "Aspergillus niger" strain ATCC 16404 was reclassified as "Aspergillus brasiliensis" (see the publication by Varga et al.). This has required an update to the U.S. Pharmacopoeia and the European Pharmacopoeia, which commonly use this strain throughout the pharmaceutical industry. "Aspergillus niger" causes black mold of onions and ornamental plants. | https://en.wikipedia.org/wiki?curid=312439 |
"A. niger" infection of onion seedlings can become systemic, manifesting only when conditions are conducive. "A. niger" causes a common postharvest disease of onions, in which the black conidia can be observed between the scales of the bulb. The fungus also causes disease in peanuts and in grapes. "Aspergillus niger" is less likely to cause human disease than some other "Aspergillus" species. In extremely rare instances, humans may become seriously ill with aspergillosis, a lung disease. Aspergillosis is, in particular, frequent among horticultural workers who inhale peat dust, which can be rich in "Aspergillus" spores. The fungus has also been found in the mummies of ancient Egyptian tombs and can be inhaled when they are disturbed. "A. niger" is one of the most common causes of otomycosis (fungal ear infections), which can cause pain, temporary hearing loss, and, in severe cases, damage to the ear canal and tympanic membrane. "A. niger" has been cultivated both on Czapek medium plates and on malt extract agar oxoid (MEAOX) plates. "Aspergillus niger" is cultured for the industrial production of many substances. Various strains of "A. niger" are used in the industrial preparation of citric acid (E330) and gluconic acid (E574), and have been assessed as acceptable for daily intake by the World Health Organization. "A. niger" fermentation is "generally recognized as safe" (GRAS) by the United States Food and Drug Administration under the Federal Food, Drug, and Cosmetic Act | https://en.wikipedia.org/wiki?curid=312439 |
Many useful enzymes are produced using industrial fermentation of "A. niger". For example, "A. niger" glucoamylase is used in the production of high-fructose corn syrup, and pectinases (GH28) are used in cider and wine clarification. Alpha-galactosidase (GH27), an enzyme that breaks down certain complex sugars, is a component of Beano and other products that decrease flatulence. Another use for "A. niger" within the biotechnology industry is in the production of magnetic isotope-containing variants of biological macromolecules for NMR analysis. "Aspergillus niger" is also cultured for the extraction of the enzyme glucose oxidase, used in the design of glucose biosensors due to its high affinity for β-D-glucose. "Aspergillus niger" grown from gold-mining solution was found to contain cyano-metal complexes of gold, silver, copper, iron, and zinc. The fungus also plays a role in the solubilization of heavy-metal sulfides. Alkali-treated "A. niger" binds silver up to 10% of its dry weight. Silver biosorption occurs by stoichiometric exchange with the Ca(II) and Mg(II) of the sorbent. The "A. niger" ATCC 1015 genome was sequenced by the Joint Genome Institute in a collaboration with other institutions. The genomes of two "A. niger" strains have been fully sequenced. | https://en.wikipedia.org/wiki?curid=312439 |
Raoul Pictet Raoul-Pierre Pictet (4 April 1846 – 27 July 1929) was a Swiss physicist. He was the first person to liquefy nitrogen. Pictet was born in Geneva and served as a professor at the university of that city. He devoted himself largely to problems involving the production of low temperatures and the liquefaction and solidification of gases. On 22 December 1877, the Academy of Sciences in Paris received a telegram from Pictet in Geneva reading as follows: "Oxygen liquefied to-day under 320 atmospheres and 140 degrees of cold by combined use of sulfurous and carbonic acid." This announcement was almost simultaneous with that of Cailletet, who had liquefied oxygen by a completely different process. Pictet died in Paris in 1929. | https://en.wikipedia.org/wiki?curid=314421 |
Monoclonal antibody Monoclonal antibodies (mAb or moAb) are antibodies made by identical immune cells that are all clones of a unique parent cell. Monoclonal antibodies can have monovalent affinity, in that they bind to the same epitope (the part of an antigen that is recognized by the antibody). In contrast, polyclonal antibodies bind to multiple epitopes and are usually made by several different plasma cell (antibody-secreting immune cell) lineages. Bispecific monoclonal antibodies can also be engineered, so that a single monoclonal antibody targets two epitopes. Given almost any substance, it is possible to produce monoclonal antibodies that specifically bind to that substance; they can then serve to detect or purify that substance. This has become an important tool in biochemistry, molecular biology, and medicine. When used as medications, non-proprietary drug names end in -mab (see "Nomenclature of monoclonal antibodies"), and many immunotherapy specialists use the word "mab" generically. The idea of "magic bullets" was first proposed by Paul Ehrlich, who, at the beginning of the 20th century, postulated that if a compound could be made that selectively targeted a disease-causing organism, then a toxin for that organism could be delivered along with the agent of selectivity. He and Élie Metchnikoff received the 1908 Nobel Prize for Physiology or Medicine for this work. In the 1970s, the B-cell cancer multiple myeloma was well known | https://en.wikipedia.org/wiki?curid=315043 |
It was understood that these cancerous B-cells all produce a single type of antibody (a paraprotein). This was used to study the structure of antibodies, but it was not yet possible to produce identical antibodies specific to a given antigen. Production of monoclonal antibodies involving human–mouse hybrid cells was first described by Jerrold Schwaber in 1973 and remains widely cited among those using human-derived hybridomas. In 1975, Georges Köhler and César Milstein succeeded in making fusions of myeloma cell lines with B cells to create hybridomas that could produce antibodies specific to known antigens, and that were immortalized. They and Niels Kaj Jerne shared the Nobel Prize in Physiology or Medicine in 1984 for the discovery. In 1988, Greg Winter and his team pioneered the techniques to humanize monoclonal antibodies, eliminating the reactions that many monoclonal antibodies caused in some patients. In 2018, James P. Allison and Tasuku Honjo received the Nobel Prize in Physiology or Medicine for their discovery of cancer therapy by inhibition of negative immune regulation, using monoclonal antibodies that prevent inhibitory linkages. Much of the work behind the production of monoclonal antibodies is rooted in the production of hybridomas, which involves identifying antigen-specific plasma/plasmablast cells (ASPCs) that produce antibodies specific to an antigen of interest and fusing these cells with myeloma cells. Rabbit B-cells can be used to form a rabbit hybridoma | https://en.wikipedia.org/wiki?curid=315043 |
Polyethylene glycol is used to fuse adjacent plasma membranes, but the success rate is low, so a selective medium in which only fused cells can grow is used. This is possible because myeloma cells have lost the ability to synthesize hypoxanthine-guanine phosphoribosyltransferase (HGPRT), an enzyme necessary for the salvage synthesis of nucleic acids. The absence of HGPRT is not a problem for these cells unless the de novo purine synthesis pathway is also disrupted. Exposing cells to aminopterin (a folic acid analogue, which inhibits dihydrofolate reductase, DHFR) makes them unable to use the de novo pathway, so they become fully auxotrophic for nucleic acids and require supplementation to survive. The selective culture medium is called HAT medium because it contains hypoxanthine, aminopterin and thymidine. This medium is selective for fused (hybridoma) cells. Unfused myeloma cells cannot grow because they lack HGPRT and thus cannot replicate their DNA. Unfused spleen cells cannot grow indefinitely because of their limited life span. Only fused hybrid cells, referred to as hybridomas, are able to grow indefinitely in the medium, because the spleen cell partner supplies HGPRT and the myeloma partner has traits that make it immortal (similar to a cancer cell). This mixture of cells is then diluted and clones are grown from single parent cells in microtitre wells | https://en.wikipedia.org/wiki?curid=315043 |
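The HAT selection logic described above can be sketched as a tiny schematic model (an illustration of the reasoning only, not laboratory software; the function name is invented for this sketch):

```python
def survives_in_hat(has_hgprt: bool, is_immortal: bool) -> bool:
    """Schematic HAT-medium selection: aminopterin blocks de novo
    nucleotide synthesis, so a cell needs HGPRT (the salvage pathway)
    to replicate its DNA, and it must be immortal to keep growing."""
    return has_hgprt and is_immortal

# Unfused myeloma: immortal but HGPRT-deficient -> dies in HAT medium
assert not survives_in_hat(has_hgprt=False, is_immortal=True)
# Unfused spleen (B) cell: has HGPRT but a limited life span -> dies out
assert not survives_in_hat(has_hgprt=True, is_immortal=False)
# Hybridoma: HGPRT from the spleen cell, immortality from the myeloma
assert survives_in_hat(has_hgprt=True, is_immortal=True)
```

The point of the sketch is that only the fused cell satisfies both conditions, which is exactly why HAT medium selects hybridomas.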
The antibodies secreted by the different clones are then assayed for their ability to bind to the antigen (with a test such as ELISA or an antigen microarray assay) or by immuno-dot blot. The most productive and stable clone is then selected for future use. The hybridomas can be grown indefinitely in a suitable cell culture medium. They can also be injected into mice (into the peritoneal cavity, surrounding the gut). There, they produce tumors secreting an antibody-rich fluid called ascites fluid. The medium must be enriched during "in vitro" selection to further favour hybridoma growth. This can be achieved by the use of a layer of feeder fibrocyte cells or a supplement medium such as briclone. Culture media conditioned by macrophages can also be used. Production in cell culture is usually preferred, as the ascites technique is painful to the animal; where alternate techniques exist, ascites is considered unethical. Several monoclonal antibody technologies have been developed recently, such as phage display, single B cell culture, single-cell amplification from various B cell populations and single plasma cell interrogation technologies. Unlike traditional hybridoma technology, the newer technologies use molecular biology techniques to amplify the heavy and light chains of the antibody genes by PCR and produce the antibodies in either bacterial or mammalian systems with recombinant technology | https://en.wikipedia.org/wiki?curid=315043 |
One advantage of the new technologies is that they are applicable to multiple animals, such as rabbit, llama, chicken and other common experimental animals in the laboratory. After obtaining either a media sample of cultured hybridomas or a sample of ascites fluid, the desired antibodies must be extracted. Cell culture sample contaminants consist primarily of media components such as growth factors, hormones and transferrins. In contrast, the "in vivo" sample is likely to contain host antibodies, proteases, nucleases, nucleic acids and viruses. In both cases, other secretions by the hybridomas, such as cytokines, may be present. There may also be bacterial contamination and, as a result, endotoxins secreted by the bacteria. Depending on the complexity of the media required in cell culture, and thus the contaminants, one or the other method ("in vivo" or "in vitro") may be preferable. The sample is first conditioned, or prepared for purification. Cells, cell debris, lipids, and clotted material are removed, typically by centrifugation followed by filtration with a 0.45 µm filter. These large particles can cause a phenomenon called membrane fouling in later purification steps. In addition, the concentration of product in the sample may not be sufficient, especially in cases where the desired antibody is produced by a low-secreting cell line. The sample is therefore concentrated by ultrafiltration or dialysis. Most of the charged impurities are usually anions such as nucleic acids and endotoxins | https://en.wikipedia.org/wiki?curid=315043 |
These can be separated by ion exchange chromatography. Either cation exchange chromatography is used at a low enough pH that the desired antibody binds to the column while anions flow through, or anion exchange chromatography is used at a high enough pH that the desired antibody flows through the column while anions bind to it. Various proteins can also be separated along with the anions based on their isoelectric point (pI). In proteins, the isoelectric point (pI) is defined as the pH at which a protein has no net charge. When the pH > pI, a protein has a net negative charge, and when the pH < pI, a protein has a net positive charge. For example, albumin has a pI of 4.8, which is significantly lower than that of most monoclonal antibodies, which have a pI of 6.1. Thus, at a pH between 4.8 and 6.1, the average charge of albumin molecules is likely to be more negative, while mAb molecules are positively charged, and hence it is possible to separate them. Transferrin, on the other hand, has a pI of 5.9, so it cannot be easily separated by this method. A difference in pI of at least 1 is necessary for a good separation. Transferrin can instead be removed by size exclusion chromatography. This method is one of the more reliable chromatography techniques. Since we are dealing with proteins, properties such as charge and affinity are not consistent and vary with pH as molecules are protonated and deprotonated, while size stays relatively constant | https://en.wikipedia.org/wiki?curid=315043 |
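The charge-versus-pH rule and the pI-difference rule of thumb above can be captured in a few lines (an illustrative sketch using the pI values quoted in the text; the function names are invented here):

```python
def net_charge_sign(pH: float, pI: float) -> int:
    """Sign of a protein's net charge at a given pH: -1 (negative)
    when pH > pI, +1 (positive) when pH < pI, 0 at the pI itself."""
    if pH > pI:
        return -1
    if pH < pI:
        return 1
    return 0

def separable_by_ion_exchange(pI_a: float, pI_b: float,
                              min_delta: float = 1.0) -> bool:
    """Rule of thumb from the text: a pI difference of at least
    about 1 unit is needed for a good ion-exchange separation."""
    return abs(pI_a - pI_b) >= min_delta

# pI values quoted in the text
PI = {"albumin": 4.8, "mAb": 6.1, "transferrin": 5.9}

# At pH 5.5, between albumin's pI and the mAb's pI:
assert net_charge_sign(5.5, PI["albumin"]) == -1   # albumin: negative
assert net_charge_sign(5.5, PI["mAb"]) == 1        # mAb: positive

assert separable_by_ion_exchange(PI["albumin"], PI["mAb"])         # ΔpI = 1.3
assert not separable_by_ion_exchange(PI["transferrin"], PI["mAb"]) # ΔpI = 0.2
```

The opposite charge signs at an intermediate pH are what let the column retain one species while the other flows through, and the small transferrin–mAb pI gap is why the text falls back on size exclusion for transferrin.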
Nonetheless, size exclusion has drawbacks, such as low resolution, low capacity and long elution times. A much quicker, single-step method of separation is protein A/G affinity chromatography. The antibody selectively binds to protein A/G, so a high level of purity (generally >80%) is obtained. However, this method may be problematic for antibodies that are easily damaged, as harsh conditions are generally used: a low pH is applied to break the bonds and remove the antibody from the column. In addition to possibly affecting the product, low pH can cause protein A/G itself to leak off the column and appear in the eluted sample. Gentle elution buffer systems that employ high salt concentrations are available to avoid exposing sensitive antibodies to low pH. Cost is also an important consideration with this method, because immobilized protein A/G is an expensive resin. To achieve maximum purity in a single step, affinity purification can be performed, using the antigen to provide specificity for the antibody. In this method, the antigen used to generate the antibody is covalently attached to an agarose support. If the antigen is a peptide, it is commonly synthesized with a terminal cysteine, which allows selective attachment to a carrier protein, such as KLH, during development and supports purification. The antibody-containing medium is then incubated with the immobilized antigen, either in batch or as the antibody is passed through a column, where it selectively binds and can be retained while impurities are washed away | https://en.wikipedia.org/wiki?curid=315043 |
An elution with a low pH buffer, or with a gentler high-salt elution buffer, is then used to recover purified antibody from the support. Product heterogeneity is common in monoclonal antibodies and other recombinant biological products, and is typically introduced either upstream during expression or downstream during manufacturing. These variants are typically aggregates, deamidation products, glycosylation variants, oxidized amino acid side chains, and amino- and carboxyl-terminal amino acid additions. These seemingly minute structural changes can affect preclinical stability and process optimization, as well as therapeutic product potency, bioavailability and immunogenicity. The generally accepted purification method for monoclonal antibody process streams includes capture of the product target with protein A, elution, and acidification to inactivate potential mammalian viruses, followed by ion chromatography, first with anion beads and then with cation beads. Displacement chromatography has been used to identify and characterize these often unseen variants in quantities that are suitable for subsequent preclinical evaluation regimens such as animal pharmacokinetic studies. Knowledge gained during the preclinical development phase is critical for enhanced product quality understanding and provides a basis for risk management and increased regulatory flexibility | https://en.wikipedia.org/wiki?curid=315043 |
The Food and Drug Administration's recent Quality by Design initiative attempts to provide guidance on development and to facilitate the design of products and processes that maximize efficacy and safety while enhancing product manufacturability. The production of recombinant monoclonal antibodies involves repertoire cloning, CRISPR/Cas9, or phage display/yeast display technologies. Recombinant antibody engineering involves antibody production by the use of viruses or yeast, rather than mice. These techniques rely on rapid cloning of immunoglobulin gene segments to create libraries of antibodies with slightly different amino acid sequences, from which antibodies with desired specificities can be selected. The phage antibody libraries are a variant of phage antigen libraries. These techniques can be used to enhance the specificity with which antibodies recognize antigens, their stability in various environmental conditions, their therapeutic efficacy and their detectability in diagnostic applications. Fermentation chambers have been used for large-scale antibody production. While mouse and human antibodies are structurally similar, the differences between them were sufficient to invoke an immune response when murine monoclonal antibodies were injected into humans, resulting in their rapid removal from the blood, as well as systemic inflammatory effects and the production of human anti-mouse antibodies (HAMA). Recombinant DNA approaches have been explored since the late 1980s to increase residence times | https://en.wikipedia.org/wiki?curid=315043 |
Monoclonal antibody In one approach, mouse DNA encoding the binding portion of a monoclonal antibody was merged with human antibody-producing DNA in living cells. The expression of this "chimeric" or "humanised" DNA through cell culture yielded part-mouse, part-human antibodies. Ever since the discovery that monoclonal antibodies could be generated, scientists have targeted the creation of "fully" human products to reduce the side effects of humanised or chimeric antibodies. Two successful approaches have been identified: transgenic mice and phage display. As of November 2016, thirteen of the nineteen "fully" human monoclonal antibody therapeutics on the market were derived from transgenic mice technology. Adopting organizations who market transgenic technology include: Phage display can be used to express variable antibody domains on filamentous phage coat proteins (Phage major coat protein). These phage display antibodies can be used for various research applications. ProAb was announced in December 1997 and involved high throughput screening of antibody libraries against diseased and non-diseased tissue, whilst Proximol used a free radical enzymatic reaction to label molecules in proximity to a given protein. Monoclonal antibodies have been approved to treat cancer, cardiovascular disease, inflammatory diseases, macular degeneration, transplant rejection, multiple sclerosis and viral infection. In August 2006, the Pharmaceutical Research and Manufacturers of America reported that U.S | https://en.wikipedia.org/wiki?curid=315043 |
Monoclonal antibody companies had 160 different monoclonal antibodies in clinical trials or awaiting approval by the Food and Drug Administration. Once monoclonal antibodies for a given substance have been produced, they can be used to detect the presence of this substance. Proteins can be detected using the Western blot and immuno dot blot tests. In immunohistochemistry, monoclonal antibodies can be used to detect antigens in fixed tissue sections, and similarly, immunofluorescence can be used to detect a substance in either frozen tissue section or live cells. Antibodies can also be used to purify their target compounds from mixtures, using the method of immunoprecipitation. Therapeutic monoclonal antibodies act through multiple mechanisms, such as blocking of targeted molecule functions, inducing apoptosis in cells which express the target, or by modulating signalling pathways. One possible treatment for cancer involves monoclonal antibodies that bind only to cancer cell-specific antigens and induce an immune response against the target cancer cell. Such mAbs can be modified for delivery of a toxin, radioisotope, cytokine or other active conjugate or to design bispecific antibodies that can bind with their Fab regions both to target antigen and to a conjugate or effector cell. Every intact antibody can bind to cell receptors or other proteins with its Fc region | https://en.wikipedia.org/wiki?curid=315043 |
Monoclonal antibody Monoclonal antibodies used for autoimmune diseases include infliximab and adalimumab, which are effective in rheumatoid arthritis, Crohn's disease, ulcerative colitis and ankylosing spondylitis by their ability to bind to and inhibit TNF-α. Basiliximab and daclizumab inhibit IL-2 on activated T cells and thereby help prevent acute rejection of kidney transplants. Omalizumab inhibits human immunoglobulin E (IgE) and is useful in treating moderate-to-severe allergic asthma. Monoclonal antibodies for research applications can be found directly from antibody suppliers, or through use of a specialist search engine like CiteAb. Below are examples of clinically important monoclonal antibodies. Several monoclonal antibodies, such as Bevacizumab and Cetuximab, can cause different kinds of side effects. These side effects can be categorized into common and serious side effects. Some common side effects include: Among the possible serious side effects are: | https://en.wikipedia.org/wiki?curid=315043 |
Reactive center A reactive center, also called a propagating center, in chemistry is a particular location, usually an atom, within a chemical compound that is the likely center of a reaction in which the chemical is involved. In chain-growth polymer chemistry this is also the point of propagation for a growing chain. The reactive center is commonly radical, anionic, or cationic in nature, but can also take other forms. | https://en.wikipedia.org/wiki?curid=317212 |
Starve-fed In emulsion polymerization, starve-fed refers to a method of monomer addition where the monomer is introduced gradually into the reaction vessel at a rate that allows the majority of monomer to be consumed by the reaction before more is added. The purpose of this method is generally to control the distribution of different monomers in a copolymer. Many monomers have different reaction rates and so, if all the monomers are added to the system at the same time, they tend to react in blocks. This blockiness in the polymer leads to significantly different properties in the final polymer from one with a more statistically random distribution of monomers. This method is utilized in synthesizing core-shell latex particles by emulsion polymerization, in order to carefully prepare the final structure. | https://en.wikipedia.org/wiki?curid=317218 |
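The contrast between batch and starve-fed addition can be illustrated with a small Monte Carlo sketch. The rate constants, monomer amounts, and the assumption that every starve-fed increment is consumed before the next arrives are all hypothetical simplifications chosen only to exaggerate the effect, not a model of any real system:

```python
import random

def junctions(chain):
    """Count A/B switch points; a blocky chain has few, a random one many."""
    return sum(1 for x, y in zip(chain, chain[1:]) if x != y)

def batch_chain(n_a, n_b, k_a, k_b, seed=0):
    """Batch addition: all monomer is present from the start, so the
    faster-reacting monomer (here A, with hypothetical rate constant
    k_a > k_b) is consumed first, giving a block-like A...B gradient."""
    rng = random.Random(seed)
    a, b, chain = n_a, n_b, []
    while a + b > 0:
        p_a = k_a * a / (k_a * a + k_b * b)
        if a and (b == 0 or rng.random() < p_a):
            chain.append("A"); a -= 1
        else:
            chain.append("B"); b -= 1
    return "".join(chain)

def starve_fed_chain(n_a, n_b, seed=0):
    """Starve-fed addition (idealized): each tiny monomer increment is
    fully consumed before the next arrives, so incorporation simply
    tracks the feed ratio regardless of relative reactivity."""
    rng = random.Random(seed)
    feed = ["A"] * n_a + ["B"] * n_b
    rng.shuffle(feed)  # feed streams mixed at the target ratio
    return "".join(feed)

blocky = batch_chain(200, 200, k_a=20.0, k_b=1.0)
random_like = starve_fed_chain(200, 200)
# The batch chain switches between A and B far less often.
print(junctions(blocky), junctions(random_like))
```

Counting A/B junctions gives a crude blockiness metric: the batch chain has far fewer switch points than the statistically random starve-fed chain of the same composition.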
Biological life cycle In biology, a biological life cycle (or just life cycle or lifecycle when the biological context is clear) is a series of changes in form that an organism undergoes, returning to the starting state. "The concept is closely related to those of the life history, development and ontogeny, but differs from them in stressing renewal." Transitions of form may involve growth, asexual reproduction, or sexual reproduction. In some organisms, different "generations" of the species succeed each other during the life cycle. For plants and many algae, there are two multicellular stages, and the life cycle is referred to as alternation of generations. The term life history is often used, particularly for organisms such as the red algae which have three multicellular stages (or more), rather than two. Life cycles that include sexual reproduction involve alternating haploid ("n") and diploid (2"n") stages, i.e., a change of ploidy is involved. To return from a diploid stage to a haploid stage, meiosis must occur. In regard to changes of ploidy, there are 3 types of cycles: The cycles differ in when mitosis (growth) occurs. Zygotic meiosis and gametic meiosis have one mitotic stage: mitosis occurs during the "n" phase in zygotic meiosis and during the 2"n" phase in gametic meiosis. Therefore, zygotic and gametic meiosis are collectively termed haplobiontic (single mitotic phase, not to be confused with haplontic) | https://en.wikipedia.org/wiki?curid=319610 |
Biological life cycle Sporic meiosis, on the other hand, has mitosis in two stages, both the diploid and haploid stages, termed diplobiontic (not to be confused with diplontic). The study of reproduction and development in organisms was carried out by many botanists and zoologists. Wilhelm Hofmeister demonstrated that alternation of generations is a feature that unites plants, and published this result in 1851 (see plant sexuality). Some terms (haplobiont and diplobiont) used for the description of life cycles were proposed initially for algae by Nils Svedelius, and then became used for other organisms. Other terms (autogamy and gamontogamy) used in protist life cycles were introduced by Karl Gottlieb Grell. The description of the complex life cycles of various organisms contributed to the disproof of the ideas of spontaneous generation in the 1840s and 1850s. A zygotic meiosis is a meiosis of a zygote immediately after karyogamy, which is the fusion of two cell nuclei. This way, the organism ends its diploid phase and produces several haploid cells. These cells divide mitotically to form either larger, multicellular individuals, or more haploid cells. Two opposite types of gametes (e.g., male and female) from these individuals or cells fuse to become a zygote. In the whole cycle, zygotes are the only diploid cell; mitosis occurs only in the haploid phase. The individuals or cells as a result of mitosis are haplonts, hence this life cycle is also called haplontic life cycle | https://en.wikipedia.org/wiki?curid=319610 |
Biological life cycle Haplonts are: In gametic meiosis, instead of immediately dividing "meiotically" to produce haploid cells, the zygote divides "mitotically" to produce a multicellular diploid individual or a group of more unicellular diploid cells. Cells from the diploid individuals then undergo meiosis to produce haploid cells or gametes. Haploid cells may divide again (by mitosis) to form more haploid cells, as in many yeasts, but the haploid phase is not the predominant life cycle phase. In most diplonts, mitosis occurs only in the diploid phase, i.e. gametes usually form quickly and fuse to produce diploid zygotes. In the whole cycle, gametes are usually the only haploid cells, and mitosis usually occurs only in the diploid phase. The diploid multicellular individual is a diplont, hence a gametic meiosis is also called a diplontic life cycle. Diplonts are: In sporic meiosis (also commonly known as intermediary meiosis), the zygote divides mitotically to produce a multicellular diploid sporophyte. The sporophyte creates spores via meiosis which "also" then divide mitotically producing haploid individuals called gametophytes. The gametophytes produce gametes via mitosis. In some plants the gametophyte is not only small-sized but also short-lived; in other plants and many algae, the gametophyte is the "dominant" stage of the life cycle. Haplodiplonts are: Some animals have a sex-determination system called haplodiploid, but this is not related to the haplodiplontic life cycle | https://en.wikipedia.org/wiki?curid=319610 |
Biological life cycle Some red algae (such as "Bonnemaisonia" and "Lemanea") and green algae (such as "Prasiola") have vegetative meiosis, also called somatic meiosis, which is a rare phenomenon. Vegetative meiosis can occur in haplodiplontic and also in diplontic life cycles. The gametophytes remain attached to and part of the sporophyte. Vegetative (non-reproductive) diploid cells undergo meiosis, generating vegetative haploid cells. These undergo many rounds of mitosis and produce gametes. A different phenomenon, called vegetative diploidization, a type of apomixis, occurs in some brown algae (e.g., "Elachista stellaris"). Cells in a haploid part of the plant spontaneously duplicate their chromosomes to produce diploid tissue. Parasites depend on the exploitation of one or more hosts. Those that must infect more than one host species to complete their life cycles are said to have complex or indirect life cycles, while those that infect a single species have direct life cycles. If a parasite has to infect a given host in order to complete its life cycle, then it is said to be an obligate parasite of that host; sometimes, infection is facultative—the parasite can survive and complete its life cycle without infecting that particular host species. Parasites sometimes infect hosts in which they cannot complete their life cycles; these are accidental hosts. A host in which parasites reproduce sexually is known as the definitive, final or primary host | https://en.wikipedia.org/wiki?curid=319610 |
Biological life cycle In intermediate hosts, parasites either do not reproduce or do so asexually, but the parasite always develops to a new stage in this type of host. In some cases a parasite will infect a host but not undergo any development; these hosts are known as paratenic or transport hosts. The paratenic host can be useful in raising the chance that the parasite will be transmitted to the definitive host. For example, the cat lungworm ("Aelurostrongylus abstrusus") uses a slug or snail as an intermediate host; the first stage larva enters the mollusk and develops to the third stage larva, which is infectious to the definitive host—the cat. If a mouse eats the slug, the third stage larva will enter the mouse's tissues, but will not undergo any development. The primitive type of life cycle probably had haploid individuals with asexual reproduction. Bacteria and archaea exhibit a life cycle like this, and some eukaryotes apparently do too (e.g., Cryptophyta, Choanoflagellata, many Euglenozoa, many Amoebozoa, some red algae, some green algae, the imperfect fungi, some rotifers and many other groups, not necessarily haploid). However, these eukaryotes probably are not primitively asexual, but have lost their sexual reproduction, or it simply has not yet been observed. Many eukaryotes (including animals and plants) exhibit asexual reproduction, which may be facultative or obligate in the life cycle, with sexual reproduction occurring more or less frequently. | https://en.wikipedia.org/wiki?curid=319610 |
Antarctic Circumpolar Wave The Antarctic Circumpolar Wave (ACW) is a coupled ocean/atmosphere wave that circles the Southern Ocean in approximately eight years. Since it is a wave-2 phenomenon (there are two ridges and two troughs in a latitude circle), at each fixed point in space a signal with a period of four years is seen. The wave moves eastward with the prevailing currents. Although the "wave" is seen in temperature, atmospheric pressure, sea ice and ocean height, the variations are hard to see in the raw data and need to be filtered to become apparent. Because the reliable record for the Southern Ocean is short (since the early 1980s) and signal processing is needed to reveal its existence, some climatologists doubt the existence of the wave. Others accept its existence but say that it varies in strength over decades. The wave was discovered simultaneously by two groups of researchers. Since then, ideas about the wave structure and maintenance mechanisms have changed and grown: by some accounts it is now to be considered as part of a global ENSO wave. | https://en.wikipedia.org/wiki?curid=325020 |
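The wave-2 arithmetic in the passage (an eight-year circuit seen locally as a four-year signal) can be checked with a toy model. The cosine form below is an idealization for illustration, not the observed field:

```python
import math

def acw_anomaly(lon_deg, t_years, circuit_years=8.0, wavenumber=2):
    """Idealized ACW: a wavenumber-2 anomaly pattern drifting eastward,
    completing one full circuit of the Southern Ocean in circuit_years.
    At a fixed longitude, `wavenumber` crests pass per circuit, so the
    local period is circuit_years / wavenumber = 4 years."""
    cycles = wavenumber * (lon_deg / 360.0 - t_years / circuit_years)
    return math.cos(2 * math.pi * cycles)

# At a fixed longitude the anomaly repeats after 4 years and is exactly
# reversed after 2 years; the two ridges also mean longitudes 180
# degrees apart see identical signals.
print(acw_anomaly(0, 0), acw_anomaly(0, 2), acw_anomaly(0, 4))
```

This makes concrete why an eight-year circumpolar drift of a two-crest pattern produces the four-year period quoted for fixed observation points.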
Richard D. James (scientist) Richard D. James (born 1952) is a mechanician and materials scientist. He is currently the Russell J. Penrose Professor and Distinguished McKnight University Professor at the Department of Aerospace Engineering and Mechanics at the University of Minnesota. He was educated at Brown University and at Johns Hopkins University under the direction of J.L. Ericksen. James is known for his research in phase transitions. He has received several awards, including the Alexander von Humboldt Senior Research Award and the William Prager Medal. More recently, the ASME awarded him the Koiter Medal for "..pioneering the modern vision of phase transformations and materials instabilities in solids, explaining how microstructures form and evolve, and demonstrating how to take advantage of this to design new active materials". With John M. Ball, he currently serves as one of the chief editors of the Archive for Rational Mechanics and Analysis. | https://en.wikipedia.org/wiki?curid=327344 |
Palynology is literally the "study of dust" (from , "strew, sprinkle" and "-logy") or of "particles that are strewn". A classic palynologist analyses particulate samples collected from the air, from water, or from deposits including sediments of any age. The condition and identification of those particles, organic and inorganic, give the palynologist clues to the life, environment, and energetic conditions that produced them. The term is commonly used to refer to a subset of the discipline, which is defined as "the study of microscopic objects of macromolecular organic composition (i.e., compounds of carbon, hydrogen, nitrogen and oxygen), not capable of dissolution in hydrochloric or hydrofluoric acids". It is the science that studies contemporary and fossil palynomorphs, including pollen, spores, orbicules, dinocysts, acritarchs, chitinozoans and scolecodonts, together with particulate organic matter (POM) and kerogen found in sedimentary rocks and sediments. Palynology does not include diatoms, foraminiferans or other organisms with siliceous or calcareous exoskeletons. | https://en.wikipedia.org/wiki?curid=328274 |
Palynology as an interdisciplinary science stands at the intersection of earth science (geology or geological science) and biological science (biology), particularly plant science (botany). Stratigraphical palynology, a branch of micropalaeontology and paleobotany, studies fossil palynomorphs from the Precambrian to the Holocene. Palynomorphs are broadly defined as organic-walled microfossils between 5 and 500 micrometres in size. They are extracted from sedimentary rocks and sediment cores both physically, by ultrasonic treatment and wet sieving, and chemically, by chemical digestion to remove the non-organic fraction. Palynomorphs may be composed of organic material such as chitin, pseudochitin and sporopollenin. Palynomorphs that have a taxonomy description are sometimes referred to as palynotaxa | https://en.wikipedia.org/wiki?curid=328274 |
Palynology Palynomorphs form a geological record of importance in determining the type of prehistoric life that existed at the time the sedimentary formation was laid down. As a result, these microfossils give important clues to the prevailing climatic conditions of the time. Their paleontological utility derives from an abundance numbering in millions of cells per gram in organic marine deposits, even when such deposits are generally not fossiliferous. Palynomorphs, however, generally have been destroyed in metamorphic or recrystallized rocks. Typically, palynomorphs are dinoflagellate cysts, acritarchs, spores, pollen, fungi, scolecodonts (scleroprotein teeth, jaws and associated features of polychaete annelid worms), arthropod organs (such as insect mouthparts), chitinozoans and microforams. Palynomorph microscopic structures that are abundant in most sediments are resistant to routine pollen extraction including strong acids and bases, and acetolysis, or density separation. A palynofacies is the complete assemblage of organic matter and palynomorphs in a fossil deposit. The term was introduced by the French geologist André Combaz in 1964. Palynofacies studies are often linked to investigations of the organic geochemistry of sedimentary rocks. The study of the palynofacies of a sedimentary depositional environment can be used to learn about the depositional palaeoenvironments of sedimentary rocks in exploration geology, often in conjunction with palynological analysis and vitrinite reflectance | https://en.wikipedia.org/wiki?curid=328274 |
Palynology Palynofacies can be used in two ways: The earliest reported observations of pollen under a microscope are likely to have been in the 1640s by the English botanist Nehemiah Grew, who described pollen and the stamen, and concluded that pollen is required for sexual reproduction in flowering plants. By the late 1870s, as optical microscopes improved and the principles of stratigraphy were worked out, Robert Kidston and P. Reinsch were able to examine the presence of fossil spores in the Devonian and Carboniferous coal seams and make comparisons between the living spores and the ancient fossil spores. Early investigators include Christian Gottfried Ehrenberg (radiolarians, diatoms and dinoflagellate cysts), Gideon Mantell (desmids) and Henry Hopley White (dinoflagellate cysts). Quantitative analysis of pollen began with Lennart von Post's published work. Although he published in the Swedish language, his methodology gained a wide audience through his lectures. In particular, his Kristiania lecture of 1916 was important in gaining a wider audience. Because the early investigations were published in the Nordic languages (Scandinavian languages), the field of pollen analysis was confined to those countries. The isolation ended with the German publication of Gunnar Erdtman's 1921 thesis. The methodology of pollen analysis became widespread throughout Europe and North America and revolutionized Quaternary vegetation and climate change research | https://en.wikipedia.org/wiki?curid=328274 |
Palynology Earlier pollen researchers include Früh (1885), who enumerated many common tree pollen types, and a considerable number of spores and herb pollen grains. There is a study of pollen samples taken from sediments of Swedish lakes by Trybom (1888); pine and spruce pollen was found in such profusion that he considered them to be serviceable as "index fossils". Georg F. L. Sarauw studied fossil pollen of middle Pleistocene age (Cromerian) from the harbour of Copenhagen. Lagerheim (in Witte 1905) and C. A. Weber (in H. A. Weber 1918) appear to be among the first to undertake 'percentage frequency' calculations. The term "palynology" was introduced by Hyde and Williams in 1944, following correspondence with the Swedish geologist Ernst Antevs, in the pages of the "Pollen Analysis Circular" (one of the first journals devoted to pollen analysis, produced by Paul Sears in North America). Hyde and Williams chose "palynology" on the basis of the Greek words "paluno" meaning 'to sprinkle' and "pale" meaning 'dust' (and thus similar to the Latin word "pollen"). Pollen analysis in North America stemmed from Phyllis Draper, an MS student under Sears at the University of Oklahoma. During her time as a student, she developed the first pollen diagram from a sample that depicted the percentage of several species at different depths at Curtis Bog. This was the introduction of pollen analysis in North America; pollen diagrams today still often remain in the same format with depth on the y-axis and abundances of species on the x-axis | https://en.wikipedia.org/wiki?curid=328274 |
Palynology Pollen analysis advanced rapidly in this period due to advances in optics and computers. Much of the science was revised by Johannes Iversen and Knut Fægri in their textbook on the subject. Chemical digestion follows a number of steps. Initially the only chemical treatment used by researchers was treatment with potassium hydroxide (KOH) to remove humic substances; deflocculation was accomplished through surface treatment or ultrasonic treatment, although sonication may cause the pollen exine to rupture. In 1924, the use of hydrofluoric acid (HF) to digest silicate minerals was introduced by Assarson and Granlund, greatly reducing the amount of time required to scan slides for palynomorphs. Palynological studies using peats presented a particular challenge because of the presence of well-preserved organic material, including fine rootlets, moss leaflets and organic litter. This was the last major challenge in the chemical preparation of materials for palynological study. Acetolysis was developed by Gunnar Erdtman and his brother to remove these fine cellulose materials by dissolving them. In acetolysis the specimen is treated with acetic anhydride and sulfuric acid, dissolving cellulosic materials and thus providing better visibility for palynomorphs. Some steps of the chemical treatments require special care for safety reasons, in particular the use of HF, which diffuses very quickly through the skin, causes severe chemical burns, and can be fatal | https://en.wikipedia.org/wiki?curid=328274 |
Palynology Another treatment includes kerosene flotation for chitinous materials. Once samples have been prepared chemically, they are mounted on microscope slides using silicone oil, glycerol or glycerol-jelly and examined using light microscopy or mounted on a stub for scanning electron microscopy. Researchers will often study either modern samples from a number of unique sites within a given area, or samples from a single site with a record through time, such as samples obtained from peat or lake sediments. More recent studies have used the modern analog technique in which paleo-samples are compared to modern samples for which the parent vegetation is known. When the slides are observed under a microscope, the researcher counts the number of grains of each pollen taxon. This record is next used to produce a pollen diagram. These data can be used to detect anthropogenic effects such as logging or traditional patterns of land use, as well as long-term changes in regional climate, and can be applied to problems in many scientific disciplines including geology, botany, paleontology, archaeology, pedology (soil study), and physical geography. Because the distribution of acritarchs, chitinozoans, dinoflagellate cysts, pollen and spores provides evidence of stratigraphical correlation through biostratigraphy and palaeoenvironmental reconstruction, one common and lucrative application of palynology is in oil and gas exploration. | https://en.wikipedia.org/wiki?curid=328274 |
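The counting-to-diagram step described above amounts to converting raw grain counts into percentage frequencies per depth. A minimal sketch, where the taxon names, depths, and counts are made up for illustration:

```python
def pollen_percentages(counts_by_depth):
    """Turn raw pollen-grain counts per taxon at each depth into the
    percentage frequencies plotted in a pollen diagram (depth on the
    y-axis, per-taxon abundance on the x-axis)."""
    result = {}
    for depth, counts in counts_by_depth.items():
        total = sum(counts.values())
        result[depth] = {taxon: 100.0 * n / total
                         for taxon, n in counts.items()}
    return result

# Hypothetical counts from two depths (in cm) of a sediment core.
core = {
    50:  {"pine": 60, "spruce": 30, "herbs": 10},
    100: {"pine": 20, "spruce": 50, "herbs": 30},
}
percentages = pollen_percentages(core)
print(percentages[50]["pine"])  # 60.0
```

Plotting these percentages against depth reproduces the classic diagram layout attributed to Draper's work above.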
Restricted use pesticide Restricted use pesticides or "RUP" are pesticides not available to the general public in the United States. The "Restricted Use" classification restricts a product, or its uses, to use by a certified pesticide applicator or under the direct supervision of a certified applicator. This means that a license is required to purchase and apply the product. Certification programs are administered by the federal government, individual states, and by company policies that vary from state to state. This is managed by the United States Environmental Protection Agency (EPA) under the Worker Protection Standard. Pesticides are classified as "restricted use" for a variety of reasons, such as potential for or history of groundwater contamination. Atrazine is the most widely used restricted-use herbicide; however, there are over 700 such "restricted use" pesticides as of 2017. Many insecticides and fungicides used in fruit production are restricted use. The list is part of Title 40 of the Code of Federal Regulations (40 CFR 152.175). The Worker Protection Standard (WPS) identifies the type of requirements that must be satisfied to obtain the proper license needed to purchase and apply restricted use pesticides. The Hazard Communication Standard requires all employers to disclose all hazards to employees separately from the WPS. Licensed pest control supervisors must maintain application records for 3 years or more, as determined by state and federal laws | https://en.wikipedia.org/wiki?curid=329553 |
Restricted use pesticide These records must identify the date, location, and type of pesticide that has been applied. Additionally, the licensed pest control supervisors must notify the local government agency that is responsible for air quality to satisfy laws governing the Right to know regarding public health and safety risks when restricted use pesticides are applied outside buildings. | https://en.wikipedia.org/wiki?curid=329553 |
Digital microfluidics (DMF) is another platform for lab-on-a-chip systems that is based upon the manipulation of microdroplets. Droplets are dispensed, moved, stored, mixed, reacted, or analyzed on a platform with a set of insulated electrodes. DMF can be used together with analytical procedures such as mass spectrometry, colorimetry, electrochemistry, and electrochemiluminescence. In analogy to digital microelectronics, digital microfluidic operations can be combined and reused within hierarchical design structures so that complex procedures (e.g. chemical synthesis or biological assays) can be built up step-by-step. And in contrast to continuous-flow microfluidics, digital microfluidics works much the same way as traditional bench-top protocols, only with much smaller volumes and much higher automation. Thus a wide range of established chemical procedures and protocols can be seamlessly transferred to a nanoliter droplet format. Electrowetting, dielectrophoresis, and immiscible-fluid flows are the three most commonly used principles, which have been used to generate and manipulate microdroplets in a digital microfluidic device. A digital microfluidic (DMF) device set-up depends on the substrates used, the electrodes, the configuration of those electrodes, the use of a dielectric material, the thickness of that dielectric material, the hydrophobic layers, and the applied voltage. A common substrate used in this type of system is glass | https://en.wikipedia.org/wiki?curid=331579 |
Digital microfluidics Depending on whether the system is open or closed, there will be either one or two layers of glass. The bottom layer of the device contains a patterned array of individually controllable electrodes. In a closed system, there is usually a continuous ground electrode in the top layer, usually made of indium tin oxide (ITO). The dielectric layer is found around the electrodes in the bottom layer of the device and is important for building up charges and electrical field gradients on the device. A hydrophobic layer is applied to the top layer of the system to decrease the surface energy of the surface that the droplet will actually be in contact with. The applied voltage activates the electrodes and allows changes in the wettability of the droplet on the device's surface. In order to move a droplet, a control voltage is applied to an electrode adjacent to the droplet, and at the same time, the electrode just under the droplet is deactivated. By varying the electric potential along a linear array of electrodes, electrowetting can be used to move droplets along this line of electrodes. Modifications to this foundation can also be fabricated into the basic design structure. One example of this is the addition of electrochemiluminescence detectors within the indium tin oxide layer (the ground electrode in a closed system) which aid in the detection of luminophores in droplets | https://en.wikipedia.org/wiki?curid=331579 |
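The activate-neighbor/deactivate-current sequencing described above can be sketched as a simple controller routine. The integer electrode indexing and the (activate, deactivate) pair representation are invented here for illustration:

```python
def transport_steps(start, end):
    """Electrowetting transport along a linear electrode array: at each
    step, energize the electrode adjacent to the droplet and de-energize
    the one currently under it. Returns a list of (activate, deactivate)
    electrode-index pairs to execute in order."""
    step = 1 if end >= start else -1
    moves = []
    pos = start
    while pos != end:
        moves.append((pos + step, pos))  # turn next electrode on, current off
        pos += step
    return moves

print(transport_steps(0, 3))  # [(1, 0), (2, 1), (3, 2)]
```

Walking a droplet from electrode 0 to electrode 3 thus takes three activation/deactivation pairs, one per electrode pitch; a real controller would additionally time each step against the droplet's response.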
Digital microfluidics In general, different materials may also be used to replace basic components of a DMF system, such as the use of PDMS instead of glass for the substrate. Liquid materials, such as oil or another substance, can be added to a closed system to prevent evaporation of materials and decrease surface contamination. Also, DMF systems can be compatible with ionic liquid droplets with the use of an oil in a closed device or with the use of a catena (a suspended wire) over an open DMF device. DMF devices can also be light-activated. Optoelectrowetting can be used to transport sessile droplets around a surface containing patterned photoconductors. The photoelectrowetting effect can also be used to achieve droplet transport on a silicon wafer without the necessity of patterned electrodes. Droplets are formed using the surface tension properties of a liquid. For example, water placed on a hydrophobic surface such as wax paper will form spherical droplets to minimize its contact with the surface. Differences in surface hydrophobicity affect a liquid's ability to spread and 'wet' a surface by changing the contact angle. As the hydrophobicity of a surface increases, the contact angle increases, and the ability of the droplet to wet the surface decreases. The change in contact angle, and therefore wetting, is regulated by the Young-Lippmann equation. | https://en.wikipedia.org/wiki?curid=331579 |
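A numeric sketch of the Young-Lippmann relation, cos θ(V) = cos θ0 + ε0 εr V² / (2 d γ). The dielectric constant, layer thickness, surface tension, and zero-voltage angle below are placeholder values chosen only to show the trend, not parameters from the text:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def lippmann_angle(theta0_deg, volts, eps_r, d_m, gamma):
    """Voltage-dependent contact angle from the Young-Lippmann equation,
    clamped where the formula would predict more than full wetting."""
    cos_v = (math.cos(math.radians(theta0_deg))
             + EPS0 * eps_r * volts**2 / (2 * d_m * gamma))
    return math.degrees(math.acos(min(1.0, cos_v)))

# Placeholder stack: ~1 um dielectric (eps_r ~ 2.7), water/air interface.
theta_off = lippmann_angle(120, 0, eps_r=2.7, d_m=1e-6, gamma=0.072)
theta_on = lippmann_angle(120, 50, eps_r=2.7, d_m=1e-6, gamma=0.072)
print(theta_off, theta_on)  # the droplet wets more when energized
```

With no applied voltage the contact angle is just θ0; as the voltage rises the cosine term grows, the angle falls, and the droplet spreads, which is exactly the wettability change exploited for droplet actuation above.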
Discordant coastline A discordant coastline occurs where bands of different rock type run perpendicular to the coast. The differing resistance to erosion leads to the formation of headlands and bays. A hard rock type such as granite is resistant to erosion and creates a promontory whilst a softer rock type such as the clays of Bagshot Beds is easily eroded creating a bay. Part of the Dorset coastline running north from the Portland limestone of Durlston Head is a clear example of a discordant coastline. The Portland limestone is resistant to erosion; then to the north there is a bay at Swanage where the rock type is a softer greensand. North of Swanage, the chalk outcrop creates the headland which includes Old Harry Rocks. The converse of a discordant coastline is a concordant coastline. | https://en.wikipedia.org/wiki?curid=332714 |
Therapeutic index The therapeutic index (TI; also referred to as therapeutic ratio) is a quantitative measurement of the relative safety of a drug. It is a comparison of the amount of a therapeutic agent that causes the therapeutic effect to the amount that causes toxicity. The related terms therapeutic window or safety window refer to a range of doses which optimize between efficacy and toxicity, achieving the greatest therapeutic benefit without resulting in unacceptable side-effects or toxicity. Classically, in an established clinical indication setting of an approved drug, TI refers to the ratio of the dose of drug that causes adverse effects at an incidence/severity not compatible with the targeted indication (e.g. toxic dose in 50% of subjects, TD50) to the dose that leads to the desired pharmacological effect (e.g. efficacious dose in 50% of subjects, ED50). In contrast, in a drug development setting TI is calculated based on plasma exposure levels. In the early days of pharmaceutical toxicology, TI was frequently determined in animals as the lethal dose of a drug for 50% of the population (LD50) divided by the minimum effective dose for 50% of the population (ED50). Today, more sophisticated toxicity endpoints are used. For many drugs, there are severe toxicities that occur at sublethal doses in humans, and these toxicities often limit the maximum dose of a drug. | https://en.wikipedia.org/wiki?curid=334955 |
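The classical ratio described above (TD50/ED50 or LD50/ED50) can be illustrated with a small calculation. A minimal sketch; the dose-response table is entirely hypothetical:

```python
import numpy as np

def dose50(doses, responses):
    """Dose at which 50% of subjects respond, by linear interpolation
    on a monotone dose-response table (responses as fractions in [0, 1])."""
    return float(np.interp(0.5, responses, doses))

# Hypothetical dose-response data (dose in mg/kg vs fraction responding)
doses    = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
efficacy = np.array([0.05, 0.20, 0.55, 0.80, 0.95, 1.00, 1.00])
toxicity = np.array([0.00, 0.00, 0.02, 0.10, 0.30, 0.60, 0.90])

ed50 = dose50(doses, efficacy)   # dose effective in 50% of subjects
td50 = dose50(doses, toxicity)   # dose toxic in 50% of subjects
print(f"ED50 = {ed50:.2f} mg/kg, TD50 = {td50:.1f} mg/kg, TI = {td50 / ed50:.1f}")
```

A larger gap between the two curves yields a larger TI and a wider dosing margin.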
Therapeutic index A higher therapeutic index is preferable to a lower one: a patient would have to take a much higher dose of such a drug to reach the toxic threshold than the dose taken to elicit the therapeutic effect. Generally, a drug or other therapeutic agent with a narrow therapeutic range (i.e. having little difference between toxic and therapeutic doses) may have its dosage adjusted according to measurements of the actual blood levels achieved in the person taking it. This may be achieved through therapeutic drug monitoring (TDM) protocols. TDM is recommended for use in the treatment of psychiatric disorders with lithium due to its narrow therapeutic range. A high therapeutic index (TI) is preferable for a drug to have a favorable safety and efficacy profile. At the early discovery/development stage, the clinical TI of a drug candidate is not known. However, understanding the preliminary TI of a drug candidate is of utmost importance as early as possible, since TI is an important indicator of the probability of successful development of a drug. Recognizing drug candidates with potentially suboptimal TI at the earliest possible stage helps to initiate mitigation or to re-deploy resources. In a drug development setting, TI is the quantitative relationship between efficacy (pharmacology) and safety (toxicology), without considering the nature of the pharmacological or toxicological endpoints themselves. | https://en.wikipedia.org/wiki?curid=334955 |
Therapeutic index However, to convert a calculated TI to something that is more than just a number, the nature and limitations of the pharmacological and/or toxicological endpoints must be considered. Depending on the intended clinical indication, the associated unmet medical need and/or the competitive situation, more or less weight can be given to either the safety or the efficacy of a drug candidate, with the aim of creating a well-balanced, indication-specific safety-versus-efficacy profile. In general, it is the exposure of a given tissue to drug (i.e. drug concentration over time), rather than dose, that drives the pharmacological and toxicological effects. For example, at the same dose there may be marked inter-individual variability in exposure due to polymorphisms in metabolism, drug-drug interactions (DDIs) or differences in body weight or environmental factors. These considerations emphasize the importance of using exposure rather than dose for calculating TI. To account for delays between exposure and toxicity, the TI for toxicities that occur after multiple dose administrations should be calculated using the exposure to drug at steady state rather than after administration of a single dose. A review published by Muller and Milton in Nature Reviews Drug Discovery critically discusses the various aspects of TI determination and interpretation in a translational drug development setting for both small molecules and biotherapeutics. The therapeutic index varies widely among substances, even within a related group. | https://en.wikipedia.org/wiki?curid=334955 |
Therapeutic index For instance, the opioid painkiller remifentanil is very forgiving, offering a therapeutic index of 33,000:1, while Diazepam, a benzodiazepine sedative-hypnotic and skeletal muscle relaxant, has a less forgiving therapeutic index of 100:1. Morphine is even less so, with a therapeutic index of 70. Less safe are cocaine, a stimulant and local anaesthetic, and ethanol (colloquially, the "alcohol" in alcoholic beverages), a widely available sedative consumed worldwide; the therapeutic indices for these substances are 15:1 and 10:1, respectively. Even less safe are drugs such as digoxin, a cardiac glycoside; its therapeutic index is approximately 2:1. Other examples of drugs with a narrow therapeutic range, which may require drug monitoring both to achieve therapeutic levels and to minimize toxicity, include: paracetamol (acetaminophen), dimercaprol, theophylline, warfarin and lithium carbonate. Some antibiotics and antifungals require monitoring to balance efficacy with minimizing adverse effects, including: gentamicin, vancomycin, amphotericin B (nicknamed 'amphoterrible' for this very reason), and polymyxin B. Radiotherapy aims to minimize the size of tumors and kill cancer cells with high energy. The source of high energy arises from x-rays, gamma rays, charged particles and heavy particles. | https://en.wikipedia.org/wiki?curid=334955 |
Therapeutic index The therapeutic ratio in radiotherapy for cancer treatment relates the maximum radiation dose at which death of cancer cells is locally controlled to the minimum radiation dose at which cells in normal tissues suffer low acute and late morbidity. Both parameters have sigmoidal dose-response curves. A favorable outcome is one in which the response of tumor tissue is greater than that of normal tissue at the same dose, meaning that the treatment is effective against the tumor and does not cause serious morbidity to normal tissue. Conversely, overlapping responses of the two tissues are highly likely to cause serious morbidity to normal tissue and ineffective treatment of the tumor. The mechanism of radiation therapy is categorized into direct and indirect radiation. Both direct and indirect radiation can induce DNA mutations or chromosomal rearrangements during the repair process. Direct radiation creates a free DNA radical from radiation energy deposition that damages DNA. Indirect radiation occurs from radiolysis of water, creating a free hydroxyl radical, hydronium and an electron; the hydroxyl radical then transfers its radical to DNA, or, together with hydronium and the electron, can damage the base region of DNA. Cancer cells have an imbalance of signals in the cell cycle. G1 and G2/M arrest have been found to be the major checkpoints induced by irradiation in human cells. | https://en.wikipedia.org/wiki?curid=334955 |
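The two sigmoidal dose-response curves described above can be sketched with a simple logistic model. The midpoints and slope below are illustrative assumptions, not clinical values; the point is only that a tumor-control curve shifted left of the normal-tissue complication curve opens a therapeutic window:

```python
import math

def response(dose, d50, slope):
    """Sigmoidal (logistic) dose-response: fraction responding at a given dose."""
    return 1.0 / (1.0 + math.exp(-slope * (dose - d50)))

# Hypothetical curves (dose in Gy): tumor control sits left of complications
def tumor_control(d):
    return response(d, d50=50.0, slope=0.25)

def normal_complication(d):
    return response(d, d50=65.0, slope=0.25)

# Scan doses: a favorable dose gives high tumor control, low complications
for d in (40, 50, 55, 60, 70):
    print(f"{d} Gy: tumor control {tumor_control(d):.2f}, "
          f"normal-tissue complication {normal_complication(d):.2f}")
```

With these parameters, an intermediate dose controls most tumor cells while keeping the modeled complication rate low; if the two curves overlapped, no such dose would exist.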
Therapeutic index G1 arrest delays the repair mechanism before DNA synthesis in S phase and mitosis in M phase, suggesting it is a key checkpoint for cell survival. G2/M arrest occurs when cells need to repair after S phase, before mitotic entry. S phase is known to be the most resistant to radiation and M phase the most sensitive. p53, a tumor suppressor protein that plays a role in G1 and G2/M arrest, has enabled understanding of the cell cycle's response to radiation. For example, irradiation of myeloid leukemia cells leads to an increase in p53 and a decrease in the level of DNA synthesis. Patients with ataxia telangiectasia have hypersensitivity to radiation due to delayed accumulation of p53; in this case, cells are able to replicate without repair of their DNA, making them prone to cancer. Most cells are in G1 and S phase, and irradiation at G2 phase shows increased radiosensitivity; thus G1 arrest has been a focus for therapeutic treatment. Irradiation of a tissue elicits a response in both irradiated and non-irradiated cells: even cells up to 50–75 cell diameters away from irradiated cells show phenotypes of enhanced genetic instability such as micronucleation. This suggests an effect of cell-to-cell communication such as paracrine and juxtacrine signaling. Normal cells do not lose their DNA repair mechanisms, whereas cancer cells often do during radiotherapy. | https://en.wikipedia.org/wiki?curid=334955 |
Therapeutic index However, the nature of high-energy radiation can override the ability of a damaged normal cell to repair itself, creating another risk of carcinogenesis. This represents a significant risk associated with radiation therapy, and it is therefore desirable to improve the therapeutic ratio during radiotherapy. Employing IG-IMRT, protons and heavy ions is likely to minimize the dose to normal tissues through altered fractionation. Molecular targeting of the DNA repair pathway can lead to radiosensitization or radioprotection. Examples are direct and indirect inhibitors of DNA double-strand break repair. Direct inhibitors target proteins (the PARP family) and kinases (ATM, DNA-PKcs) that are involved in DNA repair. Indirect inhibitors target tumor cell signaling proteins such as EGFR and insulin growth factor. The effective therapeutic index can be affected by targeting, in which the therapeutic agent is concentrated in its area of effect. For example, in radiation therapy for cancerous tumors, shaping the radiation beam precisely to the profile of a tumor in the "beam's eye view" can increase the delivered dose without increasing toxic effects, though such shaping might not change the therapeutic index. Similarly, chemotherapy or radiotherapy with infused or injected agents can be made more efficacious by attaching the agent to an oncophilic substance, as is done in peptide receptor radionuclide therapy for neuroendocrine tumors and in chemoembolization or radioactive microsphere therapy for liver tumors and metastases. | https://en.wikipedia.org/wiki?curid=334955 |
Therapeutic index This concentrates the agent in the targeted tissues and lowers its concentration in others, increasing efficacy and lowering toxicity. Sometimes the term safety ratio is used instead, particularly when referring to psychoactive drugs used for non-therapeutic purposes, e.g. recreational use. In such cases, the "effective" dose is the amount and frequency that produces the "desired" effect, which can vary, and can be greater or less than the therapeutically effective dose. The "Certain Safety Factor", also referred to as the "Margin of Safety" (MOS), is the ratio of the dose that is lethal to 1% of the population to the dose that is effective in 99% of the population (LD1/ED99). This is a better safety index than the LD50 for materials that have both desirable and undesirable effects, because it factors in the ends of the spectrum, where a dose necessary to produce a response in one person may, at the same dose, be lethal in another. A therapeutic index does not consider drug interactions or synergistic effects. For example, the risk associated with benzodiazepines increases significantly when they are taken with alcohol, opiates, or stimulants compared with being taken alone. It also does not take into account the ease or difficulty of reaching a toxic or lethal dose. This is more of a consideration for recreational drug users, as purity can be highly variable. The protective index is a similar concept, except that it uses TD50 (median "toxic" dose) in place of LD50. | https://en.wikipedia.org/wiki?curid=334955 |
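The margin of safety LD1/ED99 can be computed by inverting an assumed logistic dose-response model for each curve. A minimal sketch; the midpoints and slopes below are hypothetical, chosen only to show why MOS can be far smaller than LD50/ED50 when the curves are shallow:

```python
import math

def quantile_dose(p, d50, slope):
    """Dose at which fraction p of the population responds,
    by inverting the logistic model p = 1 / (1 + exp(-slope*(d - d50)))."""
    return d50 + math.log(p / (1.0 - p)) / slope

# Hypothetical lethality and efficacy curves (dose in mg/kg)
ld1  = quantile_dose(0.01, d50=500.0, slope=0.02)  # dose lethal to 1%
ed99 = quantile_dose(0.99, d50=50.0,  slope=0.10)  # dose effective in 99%

print(f"LD1 = {ld1:.1f}, ED99 = {ed99:.1f}, MOS = {ld1 / ed99:.2f}")
print(f"LD50/ED50 = {500.0 / 50.0:.1f}")  # the naive ratio of midpoints
```

Here the midpoint ratio is 10, but the MOS is only about 3, because the shallow lethality curve pulls LD1 well below LD50 while ED99 sits well above ED50.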
Therapeutic index For many substances, toxic effects can occur at levels far below those needed to cause death, and thus the protective index (if toxicity is properly specified) is often more informative about a substance's relative safety. Nevertheless, the therapeutic index is still useful as it can be considered an upper bound for the protective index, and the former also has the advantages of objectivity and easier comprehension. The "therapeutic window" (or pharmaceutical window) of a drug is the range of drug dosages which can treat disease effectively without having toxic effects. Medication with a small therapeutic window must be administered with care and control, frequently measuring blood concentration of the drug, to avoid harm. Medications with narrow therapeutic windows include theophylline, digoxin, lithium, and warfarin. Optimal biological dose (OBD) is the quantity of a drug that will most effectively produce the desired effect while remaining in the range of acceptable toxicity. The maximum tolerated dose (MTD) refers to the highest dose of a radiological or pharmacological treatment that will produce the desired effect without unacceptable toxicity. The purpose of administering MTD is to determine whether long-term exposure to a chemical might lead to unacceptable adverse health effects in a population, when the level of exposure is not sufficient to cause premature mortality due to short-term toxic effects | https://en.wikipedia.org/wiki?curid=334955 |
Therapeutic index The maximum dose is used, rather than a lower dose, to reduce the number of test subjects (and, among other things, the cost of testing), to detect an effect that might occur only rarely. This type of analysis is also used in establishing chemical residue tolerances in foods. Maximum tolerated dose studies are also done in clinical trials. MTD is an essential aspect of a drug's profile. All modern healthcare systems dictate a maximum safe dose for each drug, and generally have numerous safeguards (e.g. insurance quantity limits and government-enforced maximum quantity/time-frame limits) to prevent the prescription and dispensing of quantities exceeding the highest dosage which has been demonstrated to be safe for members of the general patient population. Patients are often unable to tolerate the theoretical MTD of a drug due to the occurrence of side-effects which are not innately a manifestation of toxicity (not considered to severely threaten a patient's health) but cause the patient sufficient distress and/or discomfort to result in non-compliance with treatment. Such examples include emotional "blunting" with antidepressants, pruritus with opiates, and blurred vision with anticholinergics. | https://en.wikipedia.org/wiki?curid=334955 |
Crag and tail A crag (sometimes spelled cragg, or in Scotland craig) is a rocky hill or mountain, generally isolated from other high ground. Crags are formed when a glacier or ice sheet passes over an area that contains a particularly resistant rock formation (often granite, a volcanic plug or some other volcanic structure). The force of the glacier erodes the surrounding softer material, leaving the rocky block protruding from the surrounding terrain. Frequently the crag serves as a partial shelter to softer material in the wake of the glacier, which remains as a gradual fan or ridge forming a tapered ramp (called the tail) up the leeward side of the crag. In older examples, or those latterly surrounded by the sea, the tail is often missing, having been removed by post-glacial erosion. Examples of such crag and tail formations include: | https://en.wikipedia.org/wiki?curid=342237 |
Bilayer A bilayer is a double layer of closely packed atoms or molecules. The properties of bilayers are often studied in condensed matter physics, particularly in the context of semiconductor devices, where two distinct materials are united to form junctions (such as p-n junctions, Schottky junctions, etc.). Layered materials, such as graphene, boron nitride, or transition metal dichalcogenides, have unique electronic properties as a bilayer system and are an active area of current research. In biology a common example is the lipid bilayer, which describes the structure of multiple organic structures, such as the membrane of a cell. | https://en.wikipedia.org/wiki?curid=342851 |
Culm (botany) A culm is the aerial (above-ground) stem of a grass or sedge. It is derived from the Latin culmus, 'stalk', and it originally referred to the stem of any type of plant. In the production of malted grains, the culms refer to the rootlets of the germinated grains. The culms are normally removed in a process known as "deculming" after kilning when producing barley malt, but form an important part of the product when making sorghum or millet malt. These culms are very nutritious and are sold off as animal feed. | https://en.wikipedia.org/wiki?curid=343179 |
Lev Dyomin Lev Stepanovich Dyomin (; 11 January 1926 – 18 December 1998) was a Soviet cosmonaut who flew on the Soyuz 15 spaceflight in 1974. This spaceflight was intended to dock with the space station Salyut 3, but the docking failed. Dyomin was born in Moscow. He gained a doctoral degree in engineering from the Soviet Air Force Engineering Academy and the rank of Colonel in the Soviet Air Force. Aged 48 at the time of his flight on Soyuz 15, he was the oldest cosmonaut up to that point as well as the first grandfather to go into space. He remained in the program until leaving in 1982 to pursue deep-sea research. Dyomin died of cancer, in Zvyozdny Gorodok, in 1998. He was awarded: | https://en.wikipedia.org/wiki?curid=351394 |
Gennady Sarafanov Gennadi Vasiliyevich Sarafanov (; b. January 1, 1942 in Sinenkiye, Saratov Oblast, Russia – d. September 29, 2005, Moscow) was a Soviet cosmonaut who flew on the Soyuz 15 spaceflight in 1974. This mission was intended to dock with the space station Salyut 3, but failed to do so after the docking system malfunctioned. Sarafanov graduated from the Soviet Air Force academy and held the rank of Colonel. He only made a single spaceflight before resigning from the space program in 1986 and lecturing in technology. He was awarded: | https://en.wikipedia.org/wiki?curid=351409 |
Aleksei Gubarev Aleksei Aleksandrovich Gubarev (; 29 March 1931 – 21 February 2015) was a Soviet cosmonaut who flew on two space flights: Soyuz 17 and Soyuz 28. Gubarev graduated from the Soviet Naval Aviation School in 1952 and went on to serve with the Soviet Air Force. He undertook further studies at the Gagarin Air Force Academy before being accepted into the space programme. He was originally trained for the Soviet lunar programme and for military Soyuz flights before training for Salyut missions. His next mission, in 1978, was Soyuz 28, the first Interkosmos flight, where he was accompanied by Vladimír Remek from Czechoslovakia. In 1971, he became backup commander for the ill-fated Soyuz 11 mission, which killed the three-man crew when the craft depressurized in space. He resigned as a cosmonaut in 1981 and took up an administrative position at the Gagarin Cosmonaut Training Centre. In the 1980s he worked at the 30th Central Scientific Research Institute, Ministry of Defence (Russia). His awards include the Gagarin Gold Medal, which was bestowed upon him twice. He was an honorary citizen of Kaluga, Arkalyk, Tselinograd, and Prague. Gubarev published a book, "The Attraction of Weightlessness", in 1982. Gubarev died at the age of 83 on 21 February 2015. Foreign awards: | https://en.wikipedia.org/wiki?curid=351889 |
Charged particle In physics, a charged particle is a particle with an electric charge. It may be an ion, such as a molecule or atom with a surplus or deficit of electrons relative to protons. It can also be an electron or a proton, or another elementary particle, all of which are believed to carry a charge of the same magnitude (the elementary charge), with antimatter carrying the opposite sign. Another charged particle may be an atomic nucleus devoid of electrons, such as an alpha particle. A plasma is a collection of charged particles, atomic nuclei and separated electrons, but can also be a gas containing a significant proportion of charged particles. | https://en.wikipedia.org/wiki?curid=352541 |
Genentech Genentech, Inc., is a biotechnology corporation which became a subsidiary of Roche in 2009. Genentech Research and Early Development operates as an independent center within Roche. As of February 2020, Genentech employed 13,638 people. The company was founded in 1976 by venture capitalist Robert A. Swanson and biochemist Herbert Boyer. Boyer is considered to be a pioneer in the field of recombinant DNA technology. In 1973, Boyer and his colleague Stanley Norman Cohen demonstrated that restriction enzymes could be used as "scissors" to cut DNA fragments of interest from one source, to be ligated into a similarly cut plasmid vector. While Cohen returned to the laboratory in academia, Swanson contacted Boyer to found the company. Boyer worked with Arthur Riggs and Keiichi Itakura from the Beckman Research Institute, and the group became the first to successfully express a human gene in bacteria when they produced the hormone somatostatin in 1977. David Goeddel and Dennis Kleid were then added to the group, and contributed to its success with synthetic human insulin in 1978. In 1990, F. Hoffmann-La Roche AG acquired a majority stake in Genentech. In 2006, Genentech acquired Tanox in its first acquisition deal. Tanox had started developing Xolair, and development was completed in collaboration with Novartis and Genentech; the acquisition allowed Genentech to keep more of the revenue. In March 2009, Roche acquired Genentech by buying the shares it didn't already control for approximately $46.8 billion. | https://en.wikipedia.org/wiki?curid=362464 |
Genentech In July 2014, Genentech/Roche acquired Seragon for its pipeline of small-molecule cancer drug candidates for $725 million cash upfront, with an additional $1 billion of payments dependent on successful development of products in Seragon's pipeline. Genentech has been a pioneering research-driven biotechnology company that has continued to conduct R&D internally as well as through collaborations. Genentech's corporate headquarters are in South San Francisco, California, with additional manufacturing facilities in Vacaville, California; Oceanside, California; and Hillsboro, Oregon. In December 2006, Genentech sold its Porriño, Spain, facility to Lonza and acquired an exclusive right to purchase Lonza's mammalian cell culture manufacturing facility under construction in Singapore. In June 2007, Genentech began the construction and development of an "E. coli" manufacturing facility, also in Singapore, for the worldwide production of Lucentis (ranibizumab injection) bulk drug substance. The Genentech Inc. Political Action Committee is a U.S. Federal Political Action Committee (PAC), created to "aggregate contributions from members or employees and their families to donate to candidates for federal office." In November 1999, Genentech agreed to pay the University of California, San Francisco $200 million to settle a nine-year-old patent dispute. In 1990, UCSF had sued Genentech for $400 million in compensation for alleged theft of technology developed at the university and covered by a 1982 patent. | https://en.wikipedia.org/wiki?curid=362464 |
Genentech Genentech claimed that it had developed Protropin (recombinant somatotropin/human growth hormone) independently of UCSF. A jury ruled that the university's patent was valid in July 1999, but wasn't able to decide whether Protropin was based upon UCSF research or not. Protropin, a drug used to treat dwarfism, was Genentech's first marketed drug, and its $2 billion in sales contributed greatly to the company's position as an industry leader. The settlement was to be divided as follows: $30 million to the University of California General Fund, $85 million to the three inventors and two collaborating scientists, $50 million towards a new teaching and research campus for UCSF, and $35 million to support university-wide research. In 2009, "The New York Times" reported that Genentech's talking points on health care reform appeared verbatim in the official statements of several Members of Congress during the national health care reform debate. Two U.S. Representatives, Joe Wilson and Blaine Luetkemeyer, both issued the same written statements: "One of the reasons I have long supported the U.S. biotechnology industry is that it is a homegrown success story that has been an engine of job creation in this country. Unfortunately, many of the largest companies that would seek to enter the biosimilar market have made their money by outsourcing their research to foreign countries like India." The statement was originally drafted by lobbyists for Genentech. | https://en.wikipedia.org/wiki?curid=362464 |
Crivitz (crater) Crivitz is an impact crater on Mars. It is named after the small town of Crivitz in western Mecklenburg-Vorpommern, Germany. The crater has a diameter of 6.1 kilometers and is located at 14.5 deg south and 174.7 deg east, within the larger crater Gusev. The name was proposed by Stephan Gehrke in 2002, at the time a research associate of the TU Berlin working on cartographic software and a visiting researcher at the United States Geological Survey. | https://en.wikipedia.org/wiki?curid=362546 |
Myr The abbreviation myr, "million years", is a unit of a quantity of 1,000,000 (i.e. 1×10^6) years, or 31.536 teraseconds. Myr is in common use in fields where the term is often written, such as in Earth science and cosmology. Myr is seen alongside "mya", "million years ago". Together they make a reference system, one to a quantity, the other to a particular place in a year-numbering system that is "time before the present". "myr" is deprecated in geology, but in astronomy "myr" is standard. Where "myr" "is" seen in geology it is usually "Myr" (a unit of mega-years). In astronomy it is usually "Myr" (million years). In geology the debate remains open concerning "the use of "Myr" plus "Mya"" versus "using "Mya" only". In either case the term "Ma" is used in geology literature, conforming to ISO 31-1 (now ISO 80000-3) and NIST 811 recommended practices. In traditional-style geology literature the "ago" is implied, so that any such year number "X Ma" between 66 and 145 is "Cretaceous", for good reason. But the counter-argument is that having "myr" for a duration and "Mya" for an age mixes unit systems and invites capitalization errors: "million" need not be capitalized, but "mega" must be; "ma" would technically imply a "milliyear" (a thousandth of a year, or about 8 hours). On this side of the debate, one avoids "myr" and simply adds "ago" explicitly (or adds "BP"); in that case, "79 Ma" means only a quantity of 79 million years, without the meaning of "79 million years ago". | https://en.wikipedia.org/wiki?curid=363092 |
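The unit arithmetic above (1 Myr = 31.536 Ts, and the duration-versus-age distinction) can be checked with a few lines. A minimal sketch using the 365-day civil year implied by the article's teraseconds figure:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600           # 31,536,000 s (365-day year)
MYR_IN_SECONDS = 1_000_000 * SECONDS_PER_YEAR

# 1 Myr expressed in teraseconds (1 Ts = 1e12 s)
print(MYR_IN_SECONDS / 1e12, "Ts per Myr")

# Duration vs. age: the Cretaceous runs from 145 Ma to 66 Ma (ages),
# so it lasted 79 Myr (a duration)
cretaceous_duration_myr = 145 - 66
print(cretaceous_duration_myr, "Myr")

# The capitalization pitfall: a "milliyear" (ma) would be 1/1000 year
milliyear_hours = SECONDS_PER_YEAR / 1000 / 3600
print(round(milliyear_hours, 2), "hours per milliyear")
```

This makes the debate concrete: "Ma" values subtract to give "Myr" durations, while a lowercase "ma" would denote roughly an 8.8-hour interval.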
Photochemistry is the branch of chemistry concerned with the chemical effects of light. Generally, this term is used to describe a chemical reaction caused by absorption of ultraviolet (wavelength from 100 to 400 nm), visible light (400–750 nm) or infrared radiation (750–2500 nm). In nature, photochemistry is of immense importance as it is the basis of photosynthesis, vision, and the formation of vitamin D with sunlight. Photochemical reactions proceed differently than temperature-driven reactions. Photochemical paths access high-energy intermediates that cannot be generated thermally, thereby overcoming large activation barriers in a short period of time, and allowing reactions otherwise inaccessible by thermal processes. Photochemistry is also destructive, as illustrated by the photodegradation of plastics. Photoexcitation is the first step in a photochemical process, in which the reactant is elevated to a state of higher energy, an excited state. The first law of photochemistry, known as the Grotthuss–Draper law (for chemists Theodor Grotthuss and John W. Draper), states that light must be absorbed by a chemical substance in order for a photochemical reaction to take place. According to the second law of photochemistry, known as the Stark–Einstein law (for physicists Johannes Stark and Albert Einstein), for each photon of light absorbed by a chemical system, no more than one molecule is activated for a photochemical reaction, as defined by the quantum yield. | https://en.wikipedia.org/wiki?curid=363430 |
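The wavelength ranges quoted above map directly onto per-photon energies via E = hc/λ, which is why UV photons can drive reactions that visible or IR photons cannot. A minimal sketch; the band boundaries follow the ranges in the text:

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon (eV) from its wavelength, E = h*c/lambda."""
    return H * C / (wavelength_nm * 1e-9) / EV

def band(wavelength_nm):
    """Rough spectral band, matching the ranges quoted in the text."""
    if wavelength_nm < 400:
        return "ultraviolet"
    if wavelength_nm <= 750:
        return "visible"
    return "infrared"

for wl in (250, 550, 1500):
    print(f"{wl} nm ({band(wl)}): {photon_energy_ev(wl):.2f} eV per photon")
```

A 250 nm UV photon carries roughly 5 eV, comparable to covalent bond energies, while a 1500 nm IR photon carries under 1 eV, illustrating why photoexcitation chemistry is dominated by UV and visible light.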
Photochemistry When a molecule or atom in the ground state (S0) absorbs light, one electron is excited to a higher orbital level. This electron maintains its spin according to the spin selection rule; other transitions would violate the law of conservation of angular momentum. The excitation to a higher singlet state can be from HOMO to LUMO or to a higher orbital, so that singlet excitation states S1, S2, S3… at different energies are possible. Kasha's rule stipulates that higher singlet states would quickly relax by radiationless decay or internal conversion (IC) to S1. Thus, S1 is usually, but not always, the only relevant singlet excited state. This excited state S1 can further relax to S0 by IC, but also by an allowed radiative transition from S1 to S0 that emits a photon; this process is called fluorescence. Alternatively, it is possible for the excited state S1 to undergo spin inversion and to generate a triplet excited state T1 having two unpaired electrons with the same spin. This violation of the spin selection rule is possible by intersystem crossing (ISC) of the vibrational and electronic levels of S1 and T1. According to Hund's rule of maximum multiplicity, this T1 state would be somewhat more stable than S1. This triplet state can relax to the ground state S0 by radiationless IC or by a radiation pathway called phosphorescence. This process implies a change of electronic spin, which is forbidden by spin selection rules, making phosphorescence (from T1 to S0) much slower than fluorescence (from S1 to S0) | https://en.wikipedia.org/wiki?curid=363430 |